WO2022244256A1 - Adversarial attack generation device and risk evaluation device - Google Patents

Adversarial attack generation device and risk evaluation device

Info

Publication number
WO2022244256A1
WO2022244256A1 (PCT/JP2021/019409)
Authority
WO
WIPO (PCT)
Prior art keywords
adversarial
attack
learning
brightness
hostile
Prior art date
Application number
PCT/JP2021/019409
Other languages
French (fr)
Japanese (ja)
Inventor
インダージート シング (Inderjeet Singh)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2023522176A (JPWO2022244256A1)
Priority to PCT/JP2021/019409 (WO2022244256A1)
Publication of WO2022244256A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • The present invention relates to an adversarial attack generation device, an adversarial attack generation method, and an adversarial attack generation program that generate adversarial examples, and to a risk evaluation device and risk evaluation method that perform risk evaluation regarding attacks based on adversarial examples.
  • Recent advances in Adversarial Machine Learning (AML) have shown that state-of-the-art deep learning models are vulnerable to well-crafted input samples called Adversarial Examples.
  • Face authentication is the process of verifying a claimed identity based on images of faces. Face recognition systems, by current definition, include one-to-one and many-to-one facial image matching.
  • In machine learning-based risk assessment, a common approach has been to focus on attacks using strong adversarial examples. An adversary with complete information can launch powerful adversarial attacks against the system. Complete information includes the architecture of the machine learning model, all model parameters, the loss function used for training, the distribution of the training data, and the entire preprocessing pipeline of the target system. This type of attack is called a white-box attack.
  • Adversaries can conveniently attack practical deep learning-based applications, such as facial recognition systems, from the physical world in scenarios such as person re-recognition and automatic ID (IDentification) matching systems.
  • Adversaries use physical adversarial instances to attack systems in the physical world.
  • Adversarial examples generated in the digital world are transferred to the physical world, such as by printing or painting, and used to attack targeted systems such as facial recognition systems and surveillance systems.
  • Adversarial cases after being captured by a camera are called physical adversarial cases.
  • Adversarial instances in the form of eyeglass frames, hats, stickers, etc., or adversarial instances representing cross-sections of predefined physical objects are some examples of printed physical adversarial attacks.
  • "Physical transferability," the ability of a well-crafted digital adversarial example to succeed in the physical domain, is the most important parameter for achieving physical adversarial attacks.
  • General research mainly concentrates on attacking deep learning-based systems from the digital world. Attacks in the physical world are less powerful, but require fewer privileges on the victim system, and most practical facial recognition systems are affected by them.
  • The original image is subject to various digital and physical environment parameters that cause perturbations.
  • Perturbations can arise from color correction, contrast changes, hue changes, and brightness changes. These perturbations in the input image change the performance of the machine learning model.
  • Since adversarial examples are ordinary images carrying a small number of adversarial features that are highly correlated with the predictions of the targeted machine learning model, these perturbations significantly reduce the strength of adversarial attacks. A slight perturbation of these adversarial features can cause the adversarial example to fail completely. Brightness variation is one of the most important of these parameters and causes large variations in adversarial example performance.
  • A practical risk assessment process for face recognition systems scans the machine learning models used in those systems for possible vulnerabilities using various types of adversarial examples.
  • Brightness variations of adversarial images due to digital and physical parameters can cause adversarial examples to fail, which makes such examples unsuitable for practical risk assessment of the targeted system. Therefore, for practical risk assessment of face recognition systems under various lighting conditions, it is necessary to use strong adversarial examples that are robust to changes in brightness.
  • An adversarial example that succeeds even in environments where the brightness changes is called a Brightness Agnostic Adversarial Example (brightness-agnostic adversarial example).
  • The overall brightness of an adversarial image can vary linearly.
  • In the real world, however, brightness changes are non-linear. Changes in image brightness in the digital world are primarily due to the use of image correction techniques in the preprocessing pipeline of the targeted system. Practical face recognition and verification systems use image enhancement techniques to improve performance.
  • Most of the well-known image correction techniques perform non-linear brightness adjustments in digital images. Non-linear brightness adjustment causes different brightness changes in different areas of the image.
  • Changes in the brightness of physical adversarial examples are caused not only by physical factors but also by digital factors.
  • The digital factor is the use of image correction techniques.
  • Physical factors include the lighting conditions of the environment, the capabilities of the printer used to print the adversarial examples, the types of printing paper and surfaces used, and, when painting is used to transfer the adversarial examples to the physical world, the quality of the paint, and so on.
  • The angle of the light source with respect to the face results in different amounts of light illuminating different areas within the face.
  • Depending on the capabilities of the printer, the digital image may not be reproduced accurately, so the brightness of the adversarial patch and its surroundings may vary non-linearly.
  • Particular care must be taken when only patches need to be printed for adversarial attacks, for example when an adversary wears adversarial glasses to fool a face recognition system.
  • High-performance cameras can shoot well even in extreme lighting conditions, resulting in better representation of dark and bright areas and less perturbation in the image.
  • Print quality is also affected by the characteristics of printing paper. Surface reflectivity can also affect how bright a patch appears.
  • Non-Patent Document 1 proposes a method of generating adversarial examples based on random transformation of image brightness.
  • The goal of the technique described in Non-Patent Document 1 is to eliminate overfitting from attacks generated by well-known gradient-based approaches.
  • The technique described in Non-Patent Document 1 randomly changes the brightness of the image within a predefined range in each learning iteration. Note that in the context of adversarial machine learning, learning means the attack generation process. Randomly transforming the brightness at each learning step smooths the optimization landscape. Eliminating overfitting improved the empirical transferability on ImageNet data by 23.5%.
  • the technique described in Non-Patent Document 1 uses an ImageNet classifier for empirical evaluation.
  • Non-Patent Document 2 describes Projected Gradient Descent (PGD).
  • However, the method described in Non-Patent Document 1 only considers linear brightness changes in adversarial examples.
  • In the real world, brightness changes non-linearly. The technique described in Non-Patent Document 1 cannot apply non-linear (including piecewise linear) brightness changes to an image during the attack generation process. Therefore, the adversarial examples generated by the technique described in Non-Patent Document 1 are not robust to non-linear brightness changes.
  • An object of the present invention is to enable the generation of adversarial examples that are robust to non-linear changes in brightness.
  • An adversarial attack generation device according to the present invention is an adversarial attack generation device for generating adversarial examples, characterized by comprising nonlinear brightness conversion means that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples.
  • An adversarial attack generation device according to the present invention is an adversarial attack generation device for generating adversarial examples, characterized by comprising difficulty control means that controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
  • A risk evaluation device according to the present invention is a risk evaluation device that performs risk evaluation against attacks by adversarial examples, comprising nonlinear brightness conversion means that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples, and difficulty control means that controls the difficulty of learning on a curriculum learning basis during the attack generation process.
  • An adversarial attack generation method according to the present invention is characterized by nonlinearly updating the brightness of training images during the attack generation process that generates adversarial examples.
  • An adversarial attack generation method according to the present invention is characterized by controlling the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
  • A computer-readable recording medium according to the present invention records an adversarial attack generation program for causing a computer to execute a nonlinear brightness conversion process that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples.
  • A computer-readable recording medium according to the present invention records an adversarial attack generation program for causing a computer to execute a difficulty control process that controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
  • According to these configurations, attack optimization can be enabled during the attack generation process.
  • FIG. 6 is a schematic block diagram showing a configuration example of a computer relating to the adversarial attack generation device and the risk evaluation device according to an embodiment of the present invention.
  • FIG. 7 is a block diagram showing an example of the outline of the adversarial attack generation device of the present invention.
  • FIG. 8 is a block diagram showing another example of the outline of the adversarial attack generation device of the present invention.
  • the apparatus of the present invention operates with an attack generation algorithm that allows non-linear, including piecewise linear, variations in brightness during the attack generation process.
  • the algorithm of the apparatus of the present invention changes the brightness of specific regions by a different amount than the change in brightness of the entire image. This nonlinear brightness change during learning makes the generated adversarial cases robust to nonlinear brightness changes.
  • Curriculum learning is an approach that classifies training examples according to their learning difficulty, gives easy training examples in the early phase of learning, and, after the machine learning model has been sufficiently fine-tuned on the easy examples, gradually gives training examples of increasing difficulty so that the machine learning model learns appropriately.
  • Curriculum-learning-based parameter updating in the present invention classifies training images with the same brightness as before the start of the training and attack generation process as low difficulty. Training images whose brightness changes linearly are classified as medium difficulty. Learning images whose brightness changes non-linearly are classified as high difficulty.
  • the above algorithm starts the learning process for attack generation without changing the brightness of the learning image. As the loss decreases, the algorithm automatically begins gradually increasing the difficulty of the training images. If the difficulty increases too much, the attack generation process automatically lowers the difficulty of the training images.
  • FIG. 1 is a block diagram showing an example of a risk assessment device according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing an algorithm adopted by the risk assessment device.
  • FIG. 3 is a flow chart showing an example of the process progress of the risk assessment device of this embodiment.
  • FIGS. 4 and 5 are flow charts showing an example of a more specific process progress of the risk assessment device of this embodiment.
  • The risk evaluation device 1 includes an adversarial attack generation device 2 that generates adversarial examples, and an evaluation unit 6.
  • The adversarial attack generation device 2 includes a nonlinear brightness conversion unit 3, a difficulty control unit 4, and a determination unit 5.
  • The nonlinear brightness conversion unit 3 nonlinearly updates the brightness of the training images during the attack generation process that generates adversarial examples.
  • The difficulty control unit 4 controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
  • The determination unit 5 determines the end of the learning process.
  • The evaluation unit 6 evaluates the risk of the targeted face recognition system against attacks by the adversarial examples generated by the adversarial attack generation device 2.
  • The adversarial attack generation device 2 of the present invention includes a curriculum-learning-based algorithm for generating brightness-agnostic adversarial examples.
  • The generated brightness-agnostic adversarial examples are used to attack targeted face recognition systems from the digital and physical worlds and to assess their vulnerabilities.
  • FIG. 2 shows an example of the brightness-agnostic adversarial example generation method.
  • This generation method can generate adversarial examples that are robust to real-world non-linear brightness variations.
  • In the attack generation process, the nonlinear brightness conversion unit 3 applies a non-linear brightness transformation to the training images. This is shown in process 206 of FIG. 2.
  • The definition of the mask M_p shown in process 206 of FIG. 2 can be updated at each training iteration, allowing the brightness to vary in any number of regions within the training image.
  • At each training iteration, the updated mask designates an arbitrary number of pixels in the training image whose brightness is changed differently from the remaining pixels. The combination of these masks results in non-linear brightness variations.
  • The mask M_p^i can be defined at each learning iteration as follows.
  • "(selected pixels)_i" represents the coordinates of the pixels selected in each learning iteration to have their brightness changed by an amount different from the remaining pixels. Equation (1), which expresses the non-linear change in brightness, can also be expressed by Equations (2) and (3) shown below.
  • The purpose of the mask M_p is to select the desired pixel values for the brightness variation.
  • An ensemble of brightness transformations can be taken by setting N_b > 1, which smooths the gradient update and helps ensure that gradient descent proceeds toward the global minimum.
  • the non-linear brightness update at each learning iteration greatly increases the optimization difficulty.
  • To resolve this optimization difficulty, the difficulty control unit 4 performs curriculum-learning-based parameter updating in the non-linear brightness transformation function during the learning process. This is illustrated in processes 206, 208, 209, 211, and 212 of FIG. 2.
  • The random brightness transformation function RT depends on a uniform random variable and a probability parameter p.
  • the difficulty control unit 4 controls the difficulty of optimizing the learning images by controlling the range of uniform random variables and changing the value of the parameter p based on the curriculum learning.
  • a uniform random variable controls the degree of brightness variation applied to subregions in the training image during the training process.
  • a uniform random variable range is gradually increased using a predefined function (g) to increase the difficulty of the task.
  • the parameter p determines the non-linear brightness transform on the training images. The higher the frequency of brightness conversion, the higher the learning difficulty.
  • The parameter p is updated based on the learning loss after a certain period. This is illustrated by processes 210, 211, and 212 in FIG. 2. When the learning loss is large, the difficulty control unit 4 decreases the value of the parameter p to lower the learning difficulty. This is shown in process 211 of FIG. 2.
  • Non-Patent Document 1 cannot apply nonlinear brightness transformation in the real world.
  • Embodiments of the present invention perform non-linear brightness transformation and use the concept of curriculum learning to resolve the optimization difficulty, making it possible to generate brightness-agnostic adversarial examples that are robust to both linear and non-linear brightness changes.
  • The risk evaluation device 1 generates brightness-agnostic adversarial examples and can use them to perform effective risk evaluation of practical face recognition systems.
  • Brightness-agnostic adversarial examples are robust to linear and non-linear brightness changes caused by various digital and physical factors.
  • The risk evaluation device 1 of the embodiment of the present invention evaluates the risk posed to the targeted face recognition system by digital and physical brightness-agnostic adversarial examples.
  • Targeted face recognition systems can use feature extractor-based machine learning models and machine learning classifiers.
  • Feature extractor-based face recognition systems use similarity-based or distance-based functions for verification and classification of input images.
  • the adversary selects a facial image called a source image and generates an attack (step S301 in FIG. 3).
  • Adversarial noise is added to this facial image in the form of patches of arbitrary shape (step S402 in FIG. 4).
  • the type of patch noise initialization depends on the type of gradient-based optimization method used.
  • A PGD (Projected Gradient Descent) attack takes the same range of pixel values as the face image and initializes the patch noise using a Gaussian distribution. This initialized noise is added to the source image.
  • The hyperparameters of the algorithm shown in FIG. 2 are initialized as in process 203 of FIG. 2.
  • A learning loop is then started, and at each learning iteration a non-linear brightness transformation is applied to the training images (step S303 in FIG. 3). This is shown in process 206 of FIG. 2.
  • The parameter loss_cum^(t+1) automatically updates the parameter p, which causes brightness changes in only some regions of the training image, resulting in non-uniform brightness changes in the training data.
  • The larger the value of the parameter p, the higher the learning difficulty of the image.
  • When the learning loss decreases, the parameter loss_cum^(t+1), averaged after a certain number of iterations, increases the value of the parameter p, which in turn increases the learning difficulty (see processes 208, 210, 211, and 212 shown in FIG. 2).
  • The step function g also controls the learning difficulty by controlling the range of the uniform random variable X_u (process 209 shown in FIG. 2).
  • The step function g raises the learning difficulty during the learning process by gradually widening the range of X_u.
  • the learning process continues up to a predetermined number of learning steps T (process 204 shown in FIG. 2, step S305 in FIG. 3), or other stopping criteria are used to stop the learning process.
  • a determination unit 5 determines the timing for ending the learning process.
  • The Gaussian random variable Y_i is responsible for linear brightness changes.
  • The parameter p, the uniform random variable X_u, the step function g, and the margin hyperparameter K are responsible for the non-linear brightness changes of the image during the attack generation process.
  • All parameters that enable step-by-step control of the difficulty of the training images are based on curriculum learning, and automating them during learning realizes curriculum-learning-based automatic processing.
  • For risk evaluation of the face recognition system targeted by digital attacks, the generated adversarial examples are supplied from the digital world through the preprocessing pipeline of the targeted face recognition system, and the evaluation unit 6 checks whether the adversarial examples fool the system or degrade its performance (step S307 in FIG. 3).
  • the generated attacks are forwarded to the physical world (step S308 in FIG. 3) and presented to the camera of the targeted facial recognition system (step S413 in FIG. 5).
  • the captured physical adversarial instances are passed through the pre-processing pipeline of the target face recognition system, and then finally predicted by the machine learning model of that face recognition system.
  • the evaluation unit 6 checks the performance of the face authentication system.
  • The vulnerability of the face recognition system targeted by digital and physical attacks is evaluated by the evaluation unit 6 based on its performance against brightness-agnostic adversarial examples.
  • a robust face recognition system does not mispredict or degrade performance due to brightness-independent adversarial cases.
  • The nonlinear brightness conversion unit 3, the difficulty control unit 4, the determination unit 5, and the evaluation unit 6 are realized, for example, by a CPU (Central Processing Unit) of a computer that operates according to a risk evaluation program.
  • The CPU may read the risk evaluation program from a program recording medium such as a program storage device of the computer, and operate as the nonlinear brightness conversion unit 3, the difficulty control unit 4, the determination unit 5, and the evaluation unit 6 according to the program.
  • The portion of the risk evaluation program that causes the CPU to operate as the nonlinear brightness conversion unit 3, the difficulty control unit 4, and the determination unit 5 corresponds to the adversarial attack generation program.
  • The first baseline is a simple method in which a plain PGD patch attack is generated.
  • The second baseline is the method proposed in Non-Patent Document 1. The method described in Non-Patent Document 1 was implemented in the setting of PGD adversarial patch generation for face recognition systems.
  • The purpose of the adversary in this experiment was to impersonate the target identity with the face data of the given source image.
  • The loss function used to generate the spoofing attack is represented by Equation (4) below.
  • The function SIM computes the similarity between the features predicted by the face matcher f for the training adversarial image X_t^adv and for the target image X_t.
  • A cosine similarity function was used as the function SIM.
  • The function CLT is a linear function in the simple method, a random linear brightness transformation function in the method described in Non-Patent Document 1, and a curriculum-learning-based transformation function in the method of the present invention.
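As a hedged illustration of this kind of impersonation loss, the following Python sketch assumes a feature extractor f and uses cosine similarity; since Equation (4) is not reproduced in this text, the sign convention and the placement of the transformation function CLT are assumptions.

```python
import torch
import torch.nn.functional as F

def spoofing_loss(f, clt, x_adv, x_target):
    """Impersonation-style loss: the attack should make the features of the
    (brightness-transformed) adversarial image similar to the target's features.
    f        : face matcher / feature extractor (assumed interface)
    clt      : brightness transformation function (linear, random linear, or
               curriculum-learning-based, depending on the method)
    x_adv    : training adversarial image X_t^adv
    x_target : target identity image X_t
    """
    sim = F.cosine_similarity(f(clt(x_adv)), f(x_target), dim=-1)
    return -sim.mean()  # minimizing this loss maximizes the similarity
```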
  • The patch noise was initialized with a mean of 0.5 and a variance of 0.1.
  • The mask M_p has the same dimensions as the input image, taking values of 1 at the positions of the spectacle frames and 0 in the rest of the image.
  • the maximum number of iterations T was set to 10000 in all methods.
  • The step function g of the method of the invention is defined as follows (the defining equation is not reproduced in this excerpt).
  • the batch constant N was set to 50.
  • the loss at each training iteration was normalized to the range [0,1]. However, the loss can take any range of values depending on how the loss function is defined.
  • the similarity constant K and constant h were set to one. The value of K is chosen such that the value of p does not exceed one.
  • The number of brightness ensembles N_b in the algorithm of the present invention was set to five. For all methods, the learning rate for PGD updates was set to 0.01.
  • X_G is a Gaussian random variable with X_G ~ N(0.8, 0.2).
  • X_U is a uniform random variable with X_U ~ U(0.7, 1). All the images generated for all adversarial examples are fed to the targeted face matcher to check the attack success rate.
  • MTCNN (Multi-Task Cascaded Convolutional Neural Networks)
  • The method of the present invention outperforms the two baselines described above in generating brightness-agnostic adversarial examples.
  • the method of the present invention resulted in 26.78% and 24.69% higher average spoofing success rates than the method described in Non-Patent Document 1 in the digital domain and the physical domain, respectively.
  • the apparatus of embodiments of the present invention produces adversarial cases that are robust to real-world lighting changes.
  • FIG. 6 is a schematic block diagram showing a configuration example of a computer relating to the adversarial attack generation device 2 and the risk evaluation device 1 of the embodiment of the present invention.
  • The computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, and an interface 1004.
  • The adversarial attack generation device 2 and the risk evaluation device 1 of the embodiment of the present invention are realized by the computer 1000, for example. The operations of the adversarial attack generation device 2 and the risk evaluation device 1 are stored in the auxiliary storage device 1003 in the form of programs.
  • the CPU 1001 reads the program, develops the program in the main storage device 1002, and executes the processing described in the above embodiment according to the program.
  • the auxiliary storage device 1003 is an example of a non-temporary tangible medium.
  • Other examples of non-transitory tangible media include magnetic disks, magneto-optical disks, CD-ROMs (Compact Disc Read Only Memory), DVD-ROMs (Digital Versatile Disc Read Only Memory), and semiconductor memories connected via the interface 1004.
  • When the program is distributed to the computer 1000, the computer 1000 receiving the distribution may load the program into the main storage device 1002 and execute the processing described in the above embodiment according to the program.
  • each component may be realized by a general-purpose or dedicated circuit, processor, etc., or a combination thereof. These may be composed of a single chip, or may be composed of multiple chips connected via a bus. A part or all of each component may be implemented by a combination of the above-described circuit or the like and a program.
  • the plurality of information processing devices, circuits, etc. may be centrally arranged or distributed.
  • the information processing device, circuits, and the like may be implemented as a client-and-server system, a cloud computing system, or the like, each of which is connected via a communication network.
  • FIG. 7 is a block diagram showing an example of the outline of the adversarial attack generation device of the present invention.
  • The adversarial attack generation device 71 comprises nonlinear brightness conversion means 73.
  • The nonlinear brightness conversion means 73 (for example, the nonlinear brightness conversion unit 3) nonlinearly updates the brightness of the training images during the attack generation process that generates adversarial examples.
  • FIG. 8 is a block diagram showing another example of the outline of the adversarial attack generation device of the present invention.
  • The adversarial attack generation device 71 includes difficulty control means 74.
  • The difficulty control means 74 (for example, the difficulty control unit 4) controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
  • Such a configuration can enable attack optimization during the attack generation process.
  • The present invention is suitably applied to an adversarial attack generation device that generates adversarial examples and to a risk evaluation device that performs risk evaluation regarding attacks by adversarial examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of the present invention is to provide an adversarial attack generation device capable of generating an adversarial example robust to a non-linear change in brightness. An adversarial attack generation device 71 comprises a non-linear brightness conversion means 73. The non-linear brightness conversion means 73 non-linearly updates brightness of a training image during execution of an attack generation process that generates an adversarial example.

Description

Adversarial attack generation device and risk evaluation device
 The present invention relates to an adversarial attack generation device, an adversarial attack generation method, and an adversarial attack generation program that generate adversarial examples, and to a risk evaluation device and risk evaluation method that perform risk evaluation regarding attacks based on adversarial examples.
 Recent advances in Adversarial Machine Learning (AML) have shown that state-of-the-art deep learning models are vulnerable to well-crafted input samples called Adversarial Examples.
 Vulnerability to adversarial examples poses a serious risk when deep neural networks are applied in safety-critical settings such as Face Verification Systems. Face verification is the process of verifying a claimed identity based on images of faces. Face recognition systems, as currently defined, include one-to-one and many-to-one facial image matching. In machine learning-based risk assessment, a common approach has been to focus on attacks using strong adversarial examples. An adversary with complete information can launch powerful adversarial attacks against the system. Complete information includes the architecture of the machine learning model of the target system, all model parameters, the loss function used for training, the distribution of the training data, and the entire preprocessing pipeline. This type of attack is called a white-box attack.
 Adversaries can conveniently attack practical deep learning-based applications, such as face recognition systems, from the physical world in scenarios such as person re-identification and automatic ID (IDentification) matching systems. Adversaries use physical adversarial examples to attack systems in the physical world. Adversarial examples generated in the digital world are transferred to the physical world, for example by printing or painting, and are used to attack targeted systems such as face recognition systems and surveillance systems. An adversarial example after it has been captured by a camera is called a physical adversarial example. Adversarial examples in the form of eyeglass frames, hats, stickers, and the like, or adversarial examples representing cross-sections of predefined physical objects, are some examples of printed physical adversarial attacks. "Physical transferability," the ability of a well-crafted digital adversarial example to succeed in the physical domain, is the most important parameter for achieving physical adversarial attacks. General research mainly concentrates on attacking deep learning-based systems from the digital world. Attacks from the physical world are less powerful but require far fewer privileges on the victim system, and most practical face recognition systems are affected by them.
 The original image is subject to various digital and physical environment parameters that cause perturbations. Perturbations can arise from color correction, contrast changes, hue changes, and brightness changes. These perturbations in the input image change the performance of the machine learning model. Moreover, since adversarial examples are ordinary images carrying a small number of adversarial features that are highly correlated with the predictions of the targeted machine learning model, these perturbations significantly reduce the strength of adversarial attacks. A slight perturbation of these adversarial features can cause the adversarial example to fail completely. Brightness variation is one of the most important of these parameters and causes large variations in adversarial example performance.
 A practical risk assessment process for face recognition systems scans the machine learning models used in those systems for possible vulnerabilities using various types of adversarial examples. Changes in the brightness of an adversarial image caused by digital and physical parameters can make the adversarial example fail, and such examples are therefore not suitable for practical risk assessment of the targeted system. Consequently, for practical risk assessment of face recognition systems under various lighting conditions, it is necessary to use strong adversarial examples that are robust to changes in brightness.
 An adversarial example that succeeds even in environments where the brightness changes is called a Brightness Agnostic Adversarial Example. The overall brightness of an adversarial image can vary linearly. In the real world, however, brightness changes are non-linear. Changes in image brightness in the digital world are primarily due to the use of image correction techniques in the preprocessing pipeline of the targeted system. Practical face recognition and verification systems use image enhancement techniques to improve performance. Most well-known image correction techniques perform non-linear brightness adjustments on digital images. Non-linear brightness adjustment causes different brightness changes in different areas of the image.
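To make the notion of non-linear brightness adjustment concrete, the following minimal Python sketch uses gamma correction, one common correction technique; gamma correction is chosen here purely as an illustrative assumption and is not a technique specified in this publication.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Non-linear brightness adjustment: with gamma < 1, dark pixels are
    brightened proportionally more than bright pixels, so different image
    regions change by different amounts."""
    img = np.clip(image.astype(np.float32) / 255.0, 0.0, 1.0)
    return (np.power(img, gamma) * 255.0).astype(np.uint8)

# A pixel at value 50 and a pixel at value 200 are scaled by different
# factors, which is exactly the region-dependent behavior described above.
```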
 Changes in the brightness of physical adversarial examples are caused not only by physical factors but also by digital factors. The digital factor is the use of image correction techniques. Physical factors include the lighting conditions of the environment, the capabilities of the printer used to print the adversarial example, the type of printing paper and surface used, and, when painting is used to transfer the adversarial example to the physical world, the quality of the paint, and so on. The angle of the light source with respect to the face results in different amounts of light illuminating different areas within the face. Depending on the capabilities of the printer, the digital image may not be reproduced accurately, so the brightness of the adversarial patch and its surroundings may vary non-linearly. Particular care is needed when only a patch must be printed for an adversarial attack, for example when an adversary wears adversarial glasses to fool a face recognition system. High-performance cameras can capture images well even under extreme lighting conditions, resulting in better representation of dark and bright areas and less perturbation in the image. Print quality is also affected by the characteristics of the printing paper. Surface reflectivity can also affect how bright a patch appears.
 Non-Patent Document 1 proposes a method of generating adversarial examples based on random transformation of image brightness. The goal of the technique described in Non-Patent Document 1 is to eliminate overfitting from attacks generated by well-known gradient-based approaches. The technique randomly changes the brightness of the image within a predefined range in each learning iteration. Note that in the context of adversarial machine learning, learning means the attack generation process. Randomly transforming the brightness at each learning step smooths the optimization landscape. Eliminating overfitting improved the empirical transferability on ImageNet data by 23.5%. The technique described in Non-Patent Document 1 uses ImageNet classifiers for its empirical evaluation.
 Non-Patent Document 2 describes Projected Gradient Descent (PGD).
 However, the method described in Non-Patent Document 1 only considers linear brightness changes in adversarial examples. In the real world, brightness changes non-linearly. The technique described in Non-Patent Document 1 cannot apply non-linear (including piecewise linear) brightness changes to an image during the attack generation process. Therefore, the adversarial examples generated by the technique described in Non-Patent Document 1 are not robust to non-linear brightness changes.
 It is also preferable to be able to enable attack optimization during the attack generation process.
 Therefore, an object of the present invention is to enable the generation of adversarial examples that are robust to non-linear changes in brightness.
 A further object is to enable attack optimization during the attack generation process.
 An adversarial attack generation device according to the present invention is an adversarial attack generation device for generating adversarial examples, characterized by comprising nonlinear brightness conversion means that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples.
 An adversarial attack generation device according to the present invention is an adversarial attack generation device for generating adversarial examples, characterized by comprising difficulty control means that controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
 A risk evaluation device according to the present invention is a risk evaluation device that performs risk evaluation against attacks by adversarial examples, characterized by comprising nonlinear brightness conversion means that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples, and difficulty control means that controls the difficulty of learning on a curriculum learning basis during the attack generation process.
 An adversarial attack generation method according to the present invention is characterized by nonlinearly updating the brightness of training images during the attack generation process that generates adversarial examples.
 An adversarial attack generation method according to the present invention is characterized by controlling the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
 A computer-readable recording medium according to the present invention records an adversarial attack generation program for causing a computer to execute a nonlinear brightness conversion process that nonlinearly updates the brightness of training images during the attack generation process that generates adversarial examples.
 A computer-readable recording medium according to the present invention records an adversarial attack generation program for causing a computer to execute a difficulty control process that controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
 According to the present invention, adversarial examples that are robust to non-linear changes in brightness can be generated.
 Furthermore, according to the present invention, attack optimization can be enabled during the attack generation process.
FIG. 1 is a block diagram showing an example of a risk evaluation device according to an embodiment of the present invention.
FIG. 2 is a schematic diagram showing an algorithm adopted by the risk evaluation device.
FIG. 3 is a flowchart showing an example of the process progress of the risk evaluation device.
FIG. 4 is a flowchart showing an example of a more specific process progress of the risk evaluation device.
FIG. 5 is a flowchart showing an example of a more specific process progress of the risk evaluation device.
FIG. 6 is a schematic block diagram showing a configuration example of a computer relating to the adversarial attack generation device and the risk evaluation device of the embodiment of the present invention.
FIG. 7 is a block diagram showing an example of the outline of the adversarial attack generation device of the present invention.
FIG. 8 is a block diagram showing another example of the outline of the adversarial attack generation device of the present invention.
 The device of the present invention operates with an attack generation algorithm that allows non-linear (including piecewise linear) variations in brightness during the attack generation process. During the attack generation process, the algorithm of the device of the present invention changes the brightness of specific regions by an amount different from the change in brightness of the entire image. This non-linear brightness change during learning makes the generated adversarial examples robust to non-linear brightness changes.
 However, non-linear brightness changes make optimization during the learning process difficult. To resolve this optimization difficulty, the above algorithm performs curriculum-learning-based automatic updating of parameters.
 In the context of machine learning, curriculum learning is an approach that classifies training examples according to their learning difficulty, gives easy training examples in the early phase of learning, and, after the machine learning model has been sufficiently fine-tuned on the easy examples, gradually gives training examples of increasing difficulty so that the model learns appropriately. The curriculum-learning-based parameter updating in the present invention classifies training images whose brightness is the same as before the start of the learning or attack generation process as low difficulty. Training images whose brightness changes linearly are classified as medium difficulty. Training images whose brightness changes non-linearly are classified as high difficulty.
 The above algorithm starts the learning process for attack generation without changing the brightness of the training images. As the loss decreases, the algorithm automatically begins to gradually increase the difficulty of the training images. If the difficulty rises too much, the attack generation process automatically lowers the difficulty of the training images.
 Embodiments of the present invention are described below with reference to the drawings.
 FIG. 1 is a block diagram showing an example of a risk evaluation device according to an embodiment of the present invention. FIG. 2 is a schematic diagram showing the algorithm adopted by the risk evaluation device. FIG. 3 is a flowchart showing an example of the process progress of the risk evaluation device of this embodiment. FIGS. 4 and 5 are flowcharts showing an example of a more specific process progress of the risk evaluation device of this embodiment.
 The risk evaluation device 1 includes an adversarial attack generation device 2 that generates adversarial examples, and an evaluation unit 6. The adversarial attack generation device 2 includes a nonlinear brightness conversion unit 3, a difficulty control unit 4, and a determination unit 5.
 The nonlinear brightness conversion unit 3 nonlinearly updates the brightness of the training images during the attack generation process that generates adversarial examples.
 The difficulty control unit 4 controls the difficulty of learning on a curriculum learning basis during the attack generation process that generates adversarial examples.
 The determination unit 5 determines the end of the learning process.
 The evaluation unit 6 evaluates the risk of the targeted face recognition system against attacks by the adversarial examples generated by the adversarial attack generation device 2.
 The adversarial attack generation device 2 of the present invention includes a curriculum-learning-based algorithm for generating brightness-agnostic adversarial examples. The generated brightness-agnostic adversarial examples are used to attack targeted face recognition systems from the digital and physical worlds and to assess their vulnerabilities. FIG. 2 shows an example of the brightness-agnostic adversarial example generation method. This generation method can generate adversarial examples that are robust to real-world non-linear brightness variations.
 With this algorithm, the nonlinear brightness conversion unit 3 is applied to the training images in the attack generation process. This is shown in process 206 of FIG. 2. The definition of the mask M_p shown in process 206 of FIG. 2 can be updated at each learning iteration, allowing the brightness to vary in any number of regions within the training image. As can be seen from Equation (1), at each learning iteration the updated mask designates an arbitrary number of pixels in the training image whose brightness is changed differently from the remaining pixels. The combination of these masks results in non-linear brightness variations.
[Equation (1): equation image not reproduced in this text]
 The mask M_p^i can be defined at each learning iteration as follows.
[Definition of the mask M_p^i: equation image not reproduced in this text]
 Here, "(selected pixels)_i" represents the coordinates of the pixels selected in each learning iteration to have their brightness changed by an amount different from the remaining pixels. Equation (1), which expresses the non-linear change in brightness, can also be expressed by Equations (2) and (3) shown below. The purpose of the mask M_p is to select the desired pixel values for the brightness variation.
[Equations (2) and (3): equation images not reproduced in this text]
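Because the equation images are not reproduced in this text, the following Python sketch only illustrates the general idea described above: a mask selects a subset of pixels whose brightness is scaled differently from the rest of the image. The functional form, helper names, and random distributions are assumptions, not the exact Equations (1) to (3).

```python
import numpy as np

def random_mask(shape, frac=0.3, rng=None):
    """Select a random subset of pixel coordinates; those pixels will have
    their brightness changed by a different amount than the rest."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(shape[:2]) < frac).astype(np.float32)
    return mask[..., None]  # broadcast over colour channels

def nonlinear_brightness(image, global_gain, local_gain, mask):
    """Scale the whole image by global_gain and the masked region by an
    additional local_gain, yielding a region-dependent (non-linear) change."""
    out = global_gain * image * (1.0 - mask) + global_gain * local_gain * image * mask
    return np.clip(out, 0.0, 1.0)

# Example: brighten the whole training image slightly, darken a random region.
rng = np.random.default_rng(0)
x = rng.random((112, 112, 3)).astype(np.float32)   # stand-in training image
m = random_mask(x.shape, frac=0.3, rng=rng)
x_transformed = nonlinear_brightness(x, global_gain=1.1, local_gain=0.7, mask=m)
```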
 To smooth the gradient update and help ensure that gradient descent proceeds toward the global minimum, an ensemble of brightness transformations can be taken by setting N_b > 1. However, the non-linear brightness update at each learning iteration greatly increases the optimization difficulty. To resolve this optimization difficulty, the difficulty control unit 4 performs curriculum-learning-based parameter updating in the non-linear brightness transformation function during the learning process. This is shown in processes 206, 208, 209, 211, and 212 of FIG. 2. The random brightness transformation function RT depends on a uniform random variable and a probability parameter p.
 The difficulty control unit 4 controls the difficulty of optimizing the training images on a curriculum learning basis by controlling the range of the uniform random variable and changing the value of the parameter p. The uniform random variable controls the degree of brightness change applied to subregions of the training image during the learning process. The range of the uniform random variable is gradually increased using a predefined function g to increase the difficulty of the task. The parameter p determines the non-linear brightness transformation applied to the training images. The more frequently the brightness is transformed, the higher the learning difficulty. The parameter p is updated based on the learning loss after a certain period. This is shown in processes 210, 211, and 212 of FIG. 2. When the learning loss is large, the difficulty control unit 4 decreases the value of the parameter p to lower the learning difficulty. This is shown in process 211 of FIG. 2.
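The following Python sketch illustrates one plausible way to realize this curriculum-style difficulty control; the update rule, the step schedule g, and the threshold values are illustrative assumptions chosen only to reproduce the behavior described above (raise p and widen the uniform range as the loss falls, lower p when the loss grows).

```python
class DifficultyController:
    """Curriculum-style control of the non-linear brightness transform.

    p       : probability that a non-linear (masked) transform is applied
    x_u_max : upper bound of the uniform range controlling local brightness change
    """
    def __init__(self, p=0.0, x_u_max=0.0, p_step=0.05, margin=0.1):
        self.p, self.x_u_max = p, x_u_max
        self.p_step, self.margin = p_step, margin

    def g(self, iteration, total_iters):
        """Step schedule that gradually widens the uniform-variable range."""
        return min(1.0, iteration / max(1, total_iters))

    def update(self, avg_loss, prev_avg_loss, iteration, total_iters):
        """Raise difficulty when the averaged loss is falling; back off when
        the loss grows too large (the thresholds are illustrative)."""
        if avg_loss > prev_avg_loss + self.margin:
            self.p = max(0.0, self.p - self.p_step)   # too hard: lower difficulty
        elif avg_loss < prev_avg_loss:
            self.p = min(1.0, self.p + self.p_step)   # learning well: raise difficulty
        self.x_u_max = self.g(iteration, total_iters)  # widen the uniform range over time
        return self.p, self.x_u_max
```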
 The method proposed in Non-Patent Document 1 cannot apply the non-linear brightness transformations found in the real world. Embodiments of the present invention perform non-linear brightness transformation and use the concept of curriculum learning to resolve the optimization difficulty, making it possible to generate brightness-agnostic adversarial examples that are robust to both linear and non-linear brightness changes.
 The risk evaluation device 1 generates brightness-agnostic adversarial examples and can use them to perform effective risk evaluation of practical face recognition systems. Brightness-agnostic adversarial examples are robust to linear and non-linear brightness changes caused by various digital and physical factors.
 The risk evaluation device 1 of the embodiment of the present invention evaluates the risk posed to the targeted face recognition system by digital and physical brightness-agnostic adversarial examples. The targeted face recognition system can use a feature-extractor-based machine learning model or a machine learning classifier. Feature-extractor-based face recognition systems use similarity-based or distance-based functions for verification and classification of input images.
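As a hedged illustration of how such a feature-extractor-based system might verify an input, the sketch below applies a cosine-similarity threshold to two face embeddings; the threshold value and function names are assumptions for illustration only.

```python
import numpy as np

def verify(feature_probe: np.ndarray, feature_enrolled: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Similarity-based one-to-one verification: accept the claimed identity
    when the cosine similarity between the probe embedding and the enrolled
    template exceeds a threshold (the threshold here is illustrative)."""
    cos = float(np.dot(feature_probe, feature_enrolled) /
                (np.linalg.norm(feature_probe) * np.linalg.norm(feature_enrolled)))
    return cos >= threshold
```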
 敵対者は、ソース画像と呼ばれる顔画像を選び、攻撃を生成する(図3のステップS301)。この顔画像に、任意の形状のパッチの形で、敵対的なノイズが付加される(図4のステップS402)。パッチノイズの初期化のタイプは、使用する勾配ベースの最適化方法のタイプに依存する。PGD(Projected Gradient Descent)攻撃では、顔画像と同じ範囲のピクセル値をとって、ガウス分布を用いて、パッチノイズを初期化する。この初期化されたノイズをソース画像に追加する。そして、図2に示すアルゴリズムのハイパーパラメータを、図2の処理203のように、初期化する。 The adversary selects a facial image called a source image and generates an attack (step S301 in FIG. 3). Adversarial noise is added to this facial image in the form of patches of arbitrary shape (step S402 in FIG. 4). The type of patch noise initialization depends on the type of gradient-based optimization method used. A PGD (Projected Gradient Descent) attack takes the same range of pixel values as the face image and initializes patch noise using a Gaussian distribution. Add this initialized noise to the source image. Then, the hyperparameters of the algorithm shown in FIG. 2 are initialized as in process 203 of FIG.
 A learning loop is then started, and in each learning iteration a nonlinear brightness transformation is applied to the learning image (step S303 in FIG. 3). This is shown in process 206 of FIG. 2.
 If the parameter N_b is greater than 1, the nonlinear brightness transformation is applied to the learning image N_b times (processes 205 and 206 in FIG. 2), and the difficulty control unit 4 uses the sum of the gradients over all of the transformed images to update the patch noise in each learning iteration (process 207 in FIG. 2, step S304 in FIG. 3).
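 One learning iteration with such a brightness ensemble might look like the following sketch. The callables brightness_transform and loss_fn stand in for processes 206 and 208 of FIG. 2 and are assumptions rather than the patented formulas; a signed-gradient descent update restricted to the patch region is shown, matching the PGD setting used in the experiment below.

```python
import torch

def pgd_step(x_adv, patch_mask, loss_fn, brightness_transform, n_b=5, lr=0.01):
    """One PGD update using gradients summed over n_b brightness-transformed copies."""
    total_grad = torch.zeros_like(x_adv)
    for _ in range(n_b):
        x_t = brightness_transform(x_adv)   # nonlinear brightness change (process 206)
        loss = loss_fn(x_t)                 # attack loss on the transformed image
        grad, = torch.autograd.grad(loss, x_adv)
        total_grad = total_grad + grad
    with torch.no_grad():
        # Update only the patch region, then clip back to the valid pixel range.
        x_adv = x_adv - lr * total_grad.sign() * patch_mask
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.requires_grad_(True)
```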
 The parameter loss_(t+1)^cum automatically updates the parameter p, which causes the brightness to change in only some regions of the learning image, so that the brightness change over the learning data is non-uniform. The larger the value of the parameter p, the higher the learning difficulty of the image. When the learning loss decreases, the value of the parameter p is increased after a fixed number of iterations via the averaged parameter loss_(t+1)^cum, which raises the learning difficulty (see processes 208, 210, 211, and 212 in FIG. 2).
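 The region-wise, non-uniform brightness change can be illustrated as below; partitioning the image into a fixed grid and scaling each cell with probability p is only one plausible realization, not the specific partitioning of FIG. 2.

```python
import torch

def nonuniform_brightness(img, p, x_u_range, grid=4):
    """Apply a brightness change to randomly chosen sub-regions of the image.

    Each grid cell is scaled with probability p; the scaling factor is drawn from
    a uniform variable whose range x_u_range widens as training progresses.
    """
    c, h, w = img.shape
    gh, gw = h // grid, w // grid
    factor_map = torch.ones(1, h, w)
    for i in range(grid):
        for j in range(grid):
            if torch.rand(1).item() < p:
                # Sample a brightness factor from U(1 - x_u_range, 1 + x_u_range).
                factor = 1.0 + x_u_range * (2 * torch.rand(1).item() - 1)
                factor_map[:, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw] = factor
    return torch.clamp(img * factor_map, 0.0, 1.0)
```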
 The step function g also controls the learning difficulty by controlling the range of the uniform random variable X_u (process 209 in FIG. 2). The step function g raises the learning difficulty during the learning process by gradually widening the range of X_u.
 The learning process either continues until a predetermined number of learning steps T is reached (process 204 in FIG. 2, step S305 in FIG. 3) or is stopped according to some other stopping criterion. The determination unit 5 determines when to end the learning process. By the end of the learning process, the adversarial example has been learned well enough to achieve the adversary's goal.
 All of the input parameters required by the algorithm are shown as input 201 in FIG. 2. In the example shown in FIG. 2, the Gaussian random variables Y_i are responsible for the linear brightness changes. The parameter p, the uniform random variable X_u, the step function g, and the margin hyperparameter K are responsible for the nonlinear brightness changes of the image during the attack generation process. All of the parameters in this algorithm that allow the difficulty of the learning images to be controlled in stages are based on curriculum learning, and automating them during learning realizes curriculum-learning-based automatic processing.
 For risk evaluation of a face recognition system targeted by a digital attack, the generated adversarial example is supplied from the digital world by passing it through the preprocessing pipeline of the target face recognition system, and the evaluation unit 6 checks whether the adversarial example fools the system or degrades its performance (step S307 in FIG. 3).
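 For a verification-type matcher, this check could be sketched as follows; preprocess, face_matcher, and the threshold value of 0.5 (the value used in the experiment below) are placeholders for the target system's actual pipeline.

```python
import torch
import torch.nn.functional as F

def digital_attack_succeeds(x_adv, target_img, preprocess, face_matcher, tau=0.5):
    """Return True if the adversarial image is matched to the target identity."""
    with torch.no_grad():
        f_adv = face_matcher(preprocess(x_adv).unsqueeze(0))
        f_tgt = face_matcher(preprocess(target_img).unsqueeze(0))
        sim = F.cosine_similarity(f_adv, f_tgt).item()
    return sim >= tau
```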
 For risk evaluation against attacks from the physical world, the generated attack is transferred to the physical world (step S308 in FIG. 3) and presented to the camera of the target face recognition system (step S413 in FIG. 5). The captured physical adversarial example is passed through the preprocessing pipeline of the target face recognition system and is finally predicted on by the machine learning model of that face recognition system. The evaluation unit 6 then checks the performance of the face recognition system.
 The vulnerability of a face recognition system targeted by digital or physical attacks is evaluated by the evaluation unit 6 on the basis of its performance against brightness-agnostic adversarial examples. A robust face recognition system neither makes incorrect predictions nor suffers a performance drop when presented with brightness-agnostic adversarial examples.
 The nonlinear brightness conversion unit 3, the difficulty control unit 4, the determination unit 5, and the evaluation unit 6 are realized, for example, by a CPU (Central Processing Unit) of a computer that operates according to a risk evaluation program. For example, the CPU reads the risk evaluation program from a program recording medium such as a program storage device of the computer and operates, according to that program, as the nonlinear brightness conversion unit 3, the difficulty control unit 4, the determination unit 5, and the evaluation unit 6. The part of the risk evaluation program that causes the CPU to operate as the nonlinear brightness conversion unit 3, the difficulty control unit 4, and the determination unit 5 corresponds to the adversarial attack generation program.
[Example]
 For risk evaluation of a practical face recognition system against adversarial examples that are robust to nonlinearly varying illumination conditions, the risk evaluation device 1 shown in FIG. 1, configured for a white-box, adversarial-patch, PGD attack setting, is used.
 Two baselines were implemented to demonstrate the effectiveness of the device of the present invention.
 The first baseline is a naive method in which a plain PGD patch attack is generated. The second baseline is the method proposed in Non-Patent Document 1, implemented in a setting of PGD adversarial patch generation against face recognition systems.
 This experiment assumed a face recognition system whose face matcher is a ResNet50 feature extractor pre-trained on face data. The attacks were generated in a white-box setting, assuming that the adversary has full access to the target face recognition system: the model architecture, the learned parameters, the distribution of the training data, and the loss function of the target system's machine learning model.
 The adversary's goal in this experiment was to impersonate a target identity using the face data of a given source image. The loss function used to generate the impersonation attack is expressed by Equation (4) below.
Figure JPOXMLDOC01-appb-M000004
 The function SIM computes the similarity between the features predicted by the face matcher f for the adversarial training image X_t^adv and for the target image X_t. In this experiment, the cosine similarity function was used as SIM.
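 As an illustration, an impersonation loss built from this similarity could be written as below. Since Equation (4) is rendered only as an image in this text, treating the loss as the negative cosine similarity to the target features is an assumption about its intent, not a reproduction of the equation.

```python
import torch.nn.functional as F

def impersonation_loss(f, x_adv, x_target):
    """Negative cosine similarity between matcher features of the adversarial and target images.

    Minimizing this loss pushes the adversarial image's features toward the target identity.
    """
    feat_adv = f(x_adv)      # features of the adversarial training image X_t^adv
    feat_tgt = f(x_target)   # features of the target image X_t
    return -F.cosine_similarity(feat_adv, feat_tgt, dim=-1).mean()
```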
 The function CLT is a linear function in the naive method, a random linear brightness transformation function in the method described in Non-Patent Document 1, and a curriculum-learning-based transformation function in the method of the present invention.
 In the experiment, five pairs of source and target images were first selected from a face dataset. For a fair evaluation, the same images were used for all of the baselines as well as for the method of the present invention. A ResNet50 feature extractor trained on the face dataset was used as the face matcher of the target face recognition system, and PGD attacks against this feature extractor were generated by all of the methods under the assumption of white-box knowledge.
 The patch noise ε was initialized with mean 0.5 and variance 0.1. The cosine similarity threshold τ was kept at 0.5 in order to classify the input samples and check whether the generated adversarial examples impersonate their respective target identities. The mask M_p has the same dimensions as the input image and takes the value 1 at the position of the eyeglass frame and 0 in the rest of the image. In this experiment, the maximum number of iterations T was set to 10000 for all methods. The step function g of the method of the present invention is defined as follows.
Figure JPOXMLDOC01-appb-M000005
 The batch constant N was set to 50. The loss in each learning iteration was normalized to the range [0, 1]; the loss can, however, take values in an arbitrary range depending on how the loss function is defined. The similarity constant K and the constant h were set to 1. The value of K is chosen so that the value of p does not exceed 1. In this experiment, the number of brightness ensembles N_b in the algorithm of the present invention was set to 5. The learning rate for the PGD updates was set to 0.01 for all methods.
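 For reference, the experimental hyperparameters listed above can be collected into a single configuration such as the following; this is only a restatement of the values in the text, and the key names are illustrative.

```python
# Hyperparameters reported for the experiment (key names are illustrative).
ATTACK_CONFIG = {
    "patch_init_mean": 0.5,      # mean of the Gaussian patch-noise initialization
    "patch_init_var": 0.1,       # variance of the Gaussian patch-noise initialization
    "cosine_threshold": 0.5,     # tau: decision threshold of the face matcher
    "max_iterations": 10000,     # T: maximum number of PGD iterations
    "batch_constant": 50,        # N: batch constant
    "similarity_constant": 1.0,  # K
    "constant_h": 1.0,           # h
    "brightness_ensembles": 5,   # N_b: transformed copies per iteration
    "pgd_learning_rate": 0.01,
}
```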
 After the parameters of each method in this experiment were initialized, impersonation adversarial examples were generated for the five selected source and target image pairs. All of the generated adversarial examples were evaluated in the digital and physical domains to examine the attack success rate.
 For the digital evaluation of the generated adversarial examples against brightness changes, 99 transformed images with nonlinear brightness changes were generated for each adversarial example. The equation used for the random nonlinear brightness transformation in the digital evaluation is given below.
Figure JPOXMLDOC01-appb-M000006
 X_G is a Gaussian random variable with X_G ~ N(0.8, 0.2). X_U is a uniform random variable with X_U ~ U(0.7, 1). All of the images generated for all of the adversarial examples are given to the target face matcher to check the attack success rate.
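 The digital evaluation loop can be sketched as follows. Because Equation (6) appears only as an image here, the transformation itself is left as a callable nonlinear_transform; only the sampling of X_G ~ N(0.8, 0.2) and X_U ~ U(0.7, 1) and the threshold check follow directly from the text.

```python
import torch
import torch.nn.functional as F

def digital_success_rate(x_adv, target_feat, face_matcher, nonlinear_transform,
                         n_eval=99, tau=0.5):
    """Evaluate one adversarial example under random nonlinear brightness changes.

    nonlinear_transform stands in for Equation (6); its exact form is not reproduced here.
    """
    hits = 0
    with torch.no_grad():
        for _ in range(n_eval):
            x_g = torch.normal(mean=torch.tensor(0.8), std=torch.tensor(0.2)).item()
            x_u = (0.7 + 0.3 * torch.rand(1)).item()
            x_eval = nonlinear_transform(x_adv, x_g, x_u)   # Eq. (6), assumed callable
            feat = face_matcher(x_eval.unsqueeze(0))
            sim = F.cosine_similarity(feat, target_feat)
            hits += int(sim.item() >= tau)
    return hits / n_eval
```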
 For the evaluation in the physical world, where brightness changes nonlinearly, three adversarial examples were selected for evaluation in view of physical constraints. For each adversarial example, nine transformed images with nonlinear brightness changes, including the original image, were printed. The nine combinations were generated by selecting the values of X_G and X_U from the set {0.5, 1, 1.5}. Every transformed image of each adversarial example was captured with a camera in order to transfer it to the digital domain for evaluation. To capture the effect of the reflectivity of the attack surface, a short video was recorded while moving the camera along a horizontal arc of approximately 15 cm radius, at an angle of approximately 45° relative to the center of the captured image, and about 20 frames were extracted from each recording. The captured data were then cleaned up and cropped using face detection and alignment based on MTCNN (Multi-Task Cascaded Convolutional Neural Networks). The preprocessed images are given to the face matcher of the target system to evaluate the attack success rate of the brightness-agnostic adversarial examples from the physical world, and success or failure is determined by the threshold based on cosine similarity.
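 A sketch of the physical-domain preprocessing and check is shown below. It assumes the facenet-pytorch implementation of MTCNN, which is one common choice rather than necessarily the implementation used in the experiment; extraction of frames from the recorded video is omitted.

```python
import torch
import torch.nn.functional as F
from facenet_pytorch import MTCNN   # assumed MTCNN implementation

mtcnn = MTCNN(image_size=160)       # face detection and alignment

def physical_frame_matches(frame, target_feat, face_matcher, tau=0.5):
    """Detect and align the face in a captured frame, then test it against the target identity."""
    face = mtcnn(frame)             # aligned face crop as a tensor, or None if no face is found
    if face is None:
        return False
    with torch.no_grad():
        feat = face_matcher(face.unsqueeze(0))
        sim = F.cosine_similarity(feat, target_feat)
    return sim.item() >= tau
```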
 This experimental analysis showed that, for brightness-agnostic adversarial examples, the method of the present invention outperforms the two baselines described above. The average impersonation success rate of the method of the present invention in the experiment was 26.78% higher in the digital domain and 24.69% higher in the physical domain than that of the method described in Non-Patent Document 1. The device of the embodiment of the present invention generates adversarial examples that are robust to brightness changes in the real world.
 FIG. 6 is a schematic block diagram showing a configuration example of a computer relating to the adversarial attack generation device 2 and the risk evaluation device 1 of the embodiment of the present invention. A computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, and an interface 1004.
 The adversarial attack generation device 2 and the risk evaluation device 1 of the embodiment of the present invention are realized, for example, by the computer 1000. The operations of the adversarial attack generation device 2 and the risk evaluation device 1 are stored in the auxiliary storage device 1003 in the form of a program. The CPU 1001 reads the program, loads it into the main storage device 1002, and executes the processing described in the above embodiment according to the program.
 The auxiliary storage device 1003 is an example of a non-transitory tangible medium. Other examples of non-transitory tangible media include magnetic disks, magneto-optical disks, CD-ROMs (Compact Disk Read Only Memory), DVD-ROMs (Digital Versatile Disk Read Only Memory), and semiconductor memories connected via the interface 1004. When the program is distributed to the computer 1000 over a communication line, the computer 1000 receiving the distribution may load the program into the main storage device 1002 and execute the processing described in the above embodiment according to the program.
 Part or all of each component may also be realized by general-purpose or dedicated circuitry, processors, or a combination thereof. These may be configured as a single chip or as a plurality of chips connected via a bus. Part or all of each component may be realized by a combination of the above-described circuitry and a program.
 When part or all of each component is realized by a plurality of information processing devices, circuits, or the like, the plurality of information processing devices, circuits, or the like may be arranged in a centralized or distributed manner. For example, the information processing devices, circuits, and the like may be realized in a form in which they are connected via a communication network, such as a client-and-server system or a cloud computing system.
 Next, an overview of the present invention will be described. FIG. 7 is a block diagram showing an example of the overview of the adversarial attack generation device of the present invention. The adversarial attack generation device 71 includes nonlinear brightness conversion means 73. The nonlinear brightness conversion means 73 (for example, the nonlinear brightness conversion unit 3) nonlinearly updates the brightness of a learning image during the attack generation process that generates adversarial examples.
 With such a configuration, adversarial examples that are robust to nonlinear changes in brightness can be generated.
 FIG. 8 is a block diagram showing another example of the overview of the adversarial attack generation device of the present invention. The adversarial attack generation device 71 includes difficulty control means 74. The difficulty control means 74 (for example, the difficulty control unit 4) controls the learning difficulty, on a curriculum learning basis, during the attack generation process that generates adversarial examples.
 With such a configuration, attack optimization during the attack generation process can be made effective.
 Although the present invention has been described above with reference to the embodiment, the present invention is not limited to the above embodiment. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
Industrial Applicability
 The present invention is suitably applied to an adversarial attack generation device that generates adversarial examples and to a risk evaluation device that performs risk evaluation regarding attacks based on adversarial examples.
Reference Signs List
 1 Risk evaluation device
 2 Adversarial attack generation device
 3 Nonlinear brightness conversion unit
 4 Difficulty control unit
 5 Determination unit
 6 Evaluation unit

Claims (9)

  1.  An adversarial attack generation device that generates adversarial examples, the device comprising:
      nonlinear brightness conversion means for nonlinearly updating brightness of a learning image during an attack generation process that generates the adversarial examples.
  2.  An adversarial attack generation device that generates adversarial examples, the device comprising:
      difficulty control means for controlling learning difficulty, on a curriculum learning basis, during an attack generation process that generates the adversarial examples.
  3.  A risk evaluation device comprising:
      the adversarial attack generation device according to claim 1 or claim 2; and
      evaluation means for performing risk evaluation against an attack by an adversarial example generated by the adversarial attack generation device.
  4.  A risk evaluation device that performs risk evaluation against attacks by adversarial examples, the device comprising:
      nonlinear brightness conversion means for nonlinearly updating brightness of a learning image during an attack generation process that generates the adversarial examples; and
      difficulty control means for controlling learning difficulty, on a curriculum learning basis, during the attack generation process.
  5.  An adversarial attack generation method comprising nonlinearly updating brightness of a learning image during an attack generation process that generates adversarial examples.
  6.  An adversarial attack generation method comprising controlling learning difficulty, on a curriculum learning basis, during an attack generation process that generates adversarial examples.
  7.  A risk evaluation method including the adversarial attack generation method according to claim 5 or claim 6, the method comprising:
      performing risk evaluation against an attack by an adversarial example generated by the adversarial attack generation method.
  8.  A computer-readable recording medium recording an adversarial attack generation program for causing a computer to execute:
      nonlinear brightness conversion processing of nonlinearly updating brightness of a learning image during an attack generation process that generates adversarial examples.
  9.  A computer-readable recording medium recording an adversarial attack generation program for causing a computer to execute:
      difficulty control processing of controlling learning difficulty, on a curriculum learning basis, during an attack generation process that generates adversarial examples.
PCT/JP2021/019409 2021-05-21 2021-05-21 Adversarial attack generation device and risk evaluation device WO2022244256A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023522176A JPWO2022244256A1 (en) 2021-05-21 2021-05-21
PCT/JP2021/019409 WO2022244256A1 (en) 2021-05-21 2021-05-21 Adversarial attack generation device and risk evaluation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/019409 WO2022244256A1 (en) 2021-05-21 2021-05-21 Adversarial attack generation device and risk evaluation device

Publications (1)

Publication Number Publication Date
WO2022244256A1 true WO2022244256A1 (en) 2022-11-24

Family

ID=84140366

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019409 WO2022244256A1 (en) 2021-05-21 2021-05-21 Adversarial attack generation device and risk evaluation device

Country Status (2)

Country Link
JP (1) JPWO2022244256A1 (en)
WO (1) WO2022244256A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020170495A (en) * 2019-04-04 2020-10-15 ▲広▼州大学 Single pixel attack sample generating method, device, facility, and storage medium
WO2020230699A1 (en) * 2019-05-10 2020-11-19 日本電気株式会社 Robustness setting device, robustness setting method, storage medium storing robustness setting program, robustness evaluation device, robustness evaluation method, storage medium storing robustness evaluation program, computation device, and storage medium storing program


Also Published As

Publication number Publication date
JPWO2022244256A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
Dong et al. Efficient decision-based black-box adversarial attacks on face recognition
Liao et al. Backdoor embedding in convolutional neural network models via invisible perturbation
Qin et al. Learning meta model for zero-and few-shot face anti-spoofing
Zhong et al. Backdoor embedding in convolutional neural network models via invisible perturbation
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
Pautov et al. On adversarial patches: real-world attack on arcface-100 face recognition system
KR101185525B1 (en) Automatic biometric identification based on face recognition and support vector machines
Rozsa et al. LOTS about attacking deep features
US12008471B2 (en) Robustness assessment for face recognition
Rozsa et al. Adversarial robustness: Softmax versus openmax
Li et al. Unseen face presentation attack detection with hypersphere loss
Li et al. Black-box attack against handwritten signature verification with region-restricted adversarial perturbations
Ding et al. Beyond universal person re-identification attack
Singh et al. On brightness agnostic adversarial examples against face recognition systems
CN113435264A (en) Face recognition attack resisting method and device based on black box substitution model searching
Kumagai et al. Zero-shot domain adaptation without domain semantic descriptors
Singh et al. Powerful physical adversarial examples against practical face recognition systems
Jami et al. Biometric template protection through adversarial learning
Zanddizari et al. Generating black-box adversarial examples in sparse domain
Garofalo et al. Fishy faces: Crafting adversarial images to poison face authentication
Juuti et al. Making targeted black-box evasion attacks effective and efficient
WO2022244256A1 (en) Adversarial attack generation device and risk evaluation device
Wang et al. Improved Activation Clipping for Universal Backdoor Mitigation and Test-Time Detection
Arpit et al. An analysis of random projections in cancelable biometrics
CN115187449A (en) Method for improving anti-sample mobility based on perspective transformation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21940867

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023522176

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21940867

Country of ref document: EP

Kind code of ref document: A1