WO2024025152A1 - Adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and computer-readable recording medium on which a program for executing the method is recorded

Adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and computer-readable recording medium on which a program for executing the method is recorded

Info

Publication number
WO2024025152A1
WO2024025152A1 · PCT/KR2023/008459 · KR2023008459W
Authority
WO
WIPO (PCT)
Prior art keywords
adversarial
deep neural
neural network
reconstructed
network
Prior art date
Application number
PCT/KR2023/008459
Other languages
English (en)
Korean (ko)
Inventor
최대선
류권상
Original Assignee
숭실대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 숭실대학교 산학협력단 filed Critical 숭실대학교 산학협력단
Publication of WO2024025152A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/088: Non-supervised learning, e.g. competitive learning
    • G06N 20/00: Machine learning
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks

Definitions

  • The present invention relates to adversarial learning technology. More specifically, it relates to an adversarial learning apparatus and method that simultaneously train a denoising network and a deep neural network so that a deep neural network that classifies images maintains classification accuracy for normal images reconstructed through the denoising network at a level similar to the prior art while also correctly classifying adversarial examples reconstructed through the denoising network, and to a computer-readable recording medium on which a program for executing the method is recorded.
  • Deep neural networks (DNNs) have achieved high performance in various application fields such as image classification, object detection, and natural language processing.
  • However, DNNs are vulnerable to adversarial attacks, which generate malicious inputs that cause an original normal image to be misclassified. A malicious input generated through an adversarial attack on an original normal image is called an adversarial example, and various attack methods for generating adversarial examples have been proposed.
  • Representative defense technologies to defend against adversarial attacks include adversarial learning and denoising technologies.
  • Adversarial learning technology trains a deep neural network using adversarial examples generated by an adversarial attack on the training dataset, thereby increasing the robustness of the deep neural network against adversarial examples.
  • However, because adversarial learning technology uses adversarial examples to train the deep neural network, it has the problem of low classification accuracy for normal images.
  • Denoising technology, by contrast, uses a denoising network to relax the adversarial transformation contained in an adversarial example, allowing the deep neural network to correctly classify adversarial examples whose adversarial transformation has been relaxed.
  • Because the defender cannot know whether the input data is a normal image or an adversarial example, normal images must also pass through the denoising network before being input to the deep neural network. The denoising network can attenuate important information in normal images, so the deep neural network cannot correctly classify normal images that have passed through it, lowering classification accuracy for normal images.
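  • Because every input is routed through the denoiser before classification, the defense pipeline described above can be sketched as follows. The `denoise` and `classify` functions below are toy stand-ins assumed only for illustration, not the patent's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, strength=0.5):
    # Toy stand-in for a trained denoising network: shrink the input toward
    # its mean. Note that this also blurs useful detail in normal images,
    # which is exactly the accuracy-loss problem described above.
    return (1.0 - strength) * x + strength * x.mean()

def classify(x, w):
    # Toy stand-in for the deep neural network: linear scores + argmax.
    return int(np.argmax(w @ x))

w = rng.normal(size=(10, 64))                            # 10-class toy model
x_normal = rng.normal(size=64)                           # normal image (flattened)
x_adv = x_normal + 0.1 * np.sign(rng.normal(size=64))    # perturbed input

# The defender cannot tell the two inputs apart, so BOTH are denoised first:
pred_normal = classify(denoise(x_normal), w)
pred_adv = classify(denoise(x_adv), w)
```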
  • Patent Document 1 Korean Patent No. 10-2395244 (Registration Date: May 2, 2022)
  • The present invention is proposed to solve the above-mentioned problems. One object of the present invention is to provide an adversarial learning apparatus and method that simultaneously train a denoising network and a deep neural network so that a deep neural network that classifies images maintains classification accuracy for normal images reconstructed through the denoising network at a level similar to the prior art, and a computer-readable recording medium on which a program for executing the method is recorded.
  • Another object of the present invention is to provide an adversarial learning apparatus and method that simultaneously train a denoising network and a deep neural network so that the deep neural network that classifies images correctly classifies adversarial examples reconstructed through the denoising network, and a program for executing the method.
  • To this end, an adversarial learning device that simultaneously trains a denoising network and a deep neural network according to an aspect of the present invention includes: a denoising network that receives an image to be classified and outputs a reconstructed image; a deep neural network that receives the reconstructed image from the denoising network, classifies it, and outputs a classification result; and an adversarial example generation unit that receives normal images to be learned and generates adversarial examples.
  • The denoising network receives adversarial examples from the adversarial example generation unit, outputs reconstructed adversarial examples, and performs machine learning so as to minimize the result of its loss function computed from the reconstructed adversarial examples and the normal images to be learned.
  • The deep neural network receives the normal images and adversarial examples to be learned together with the reconstructed normal images and reconstructed adversarial examples output from the denoising network, calculates a classification result for each, and performs machine learning using those results so as to minimize the result of its loss function.
  • In one embodiment, the deep neural network receives the normal image and adversarial example to be learned and the reconstructed normal image and reconstructed adversarial example output from the denoising network, and, using the classification result for each, performs machine learning so as to minimize the result of a loss function that is the sum of first to fourth loss terms.
  • The denoising network calculates the distance between the reconstructed adversarial example and the normal image to be learned, and then performs machine learning so as to minimize the result of a loss function defined as the sum of that distance and the first loss term input from the deep neural network.
  • The first to fourth loss terms follow the equation below, where φ is the parameter of the deep neural network, F_φ(x) is the output of the deep neural network for a normal image x, F_φ(x′) is the output of the deep neural network for an adversarial example x′, θ is the parameter of the denoising network, D_θ(x) is the output of the denoising network for the normal image, D_θ(x′) is the output of the denoising network for the adversarial example, F_φ(D_θ(x′)) is the output of the deep neural network for the reconstructed adversarial example, and F_φ(D_θ(x)) is the output of the deep neural network for the reconstructed normal image.
  • An adversarial learning method for simultaneously training a denoising network and a deep neural network according to another aspect of the present invention includes: generating adversarial examples for normal images to be learned;
  • inputting the normal image and adversarial example to be learned, together with the reconstructed normal image and reconstructed adversarial example output from the denoising network, into a deep neural network, calculating a classification result for each, and performing machine learning using each classification result so as to minimize the result of the deep neural network's loss function; and
  • having the denoising network receive the adversarial example, output a reconstructed adversarial example, and perform machine learning using the reconstructed adversarial example and the normal image so as to minimize the result of the denoising network's loss function.
  • In one embodiment, the step of performing machine learning to minimize the result of the deep neural network's loss function uses the classification results calculated for the normal image and adversarial example to be learned and for the reconstructed normal image and reconstructed adversarial example output from the denoising network.
  • The step of calculating the distance between the reconstructed adversarial example and the normal image includes performing machine learning on the loss function of the denoising network, which is the sum of the calculated distance and the first loss term calculated by the deep neural network.
  • That is, the denoising network receives an adversarial example, outputs a reconstructed adversarial example, and performs machine learning by minimizing its loss function, the sum of the distance between the reconstructed adversarial example and the normal image and the first loss term input from the deep neural network.
  • The deep neural network calculates classification results for the normal image, the adversarial example, the reconstructed normal image, and the reconstructed adversarial example, and from these results calculates a first loss term for classifying the reconstructed adversarial example, a second loss term for classifying the adversarial example, a third loss term for classifying the reconstructed normal image, and a fourth loss term for classifying the normal image.
  • According to the present invention, a deep neural network that classifies images can correctly classify adversarial examples reconstructed through the denoising network while maintaining classification accuracy for normal images reconstructed through the denoising network at a level similar to the prior art.
  • Accordingly, even when an attacker inputs an adversarial example, the deep neural network recognizes it correctly, preventing casualties caused by misrecognition of signs and surrounding objects in self-driving cars and financial damage caused by misrecognition of faces.
  • Figure 1 is an example diagram illustrating the configuration of an adversarial learning device that simultaneously trains a denoising network and a deep neural network according to the present invention.
  • Figure 2 is an example diagram for explaining the loss function of the denoising network and deep neural network according to the present invention.
  • Figure 3 is a flowchart illustrating an adversarial learning method for simultaneously training a denoising network and a deep neural network according to the present invention.
  • hardware may be a data processing device that includes a CPU or other processor.
  • software driven by hardware may refer to a running process, object, executable, thread of execution, program, etc.
  • The adversarial learning device may take the form of a server or engine, mobile or fixed, and may also be referred to by other terms such as device, apparatus, terminal, user equipment (UE), mobile station (MS), wireless device, or handheld device.
  • The adversarial learning device can execute or produce various software based on an operating system (OS).
  • The operating system is a system program that allows software to use the device's hardware, and may include any computer operating system, including mobile operating systems such as Android OS, iOS, Windows Mobile OS, Bada OS, Symbian OS, and Blackberry OS, and desktop or server operating systems such as the Windows, Linux, and Unix families, macOS, AIX, and HP-UX.
  • Figure 1 is an example diagram for explaining the configuration of an adversarial learning device that simultaneously trains a denoising network and a deep neural network according to the present invention, and Figure 2 is an example diagram for explaining the loss functions of the denoising network and the deep neural network according to the present invention.
  • The adversarial learning device may be implemented including an image input unit 110, a denoising network 120, a deep neural network 130, and an adversarial example generation unit 140.
  • the image input unit 110, the denoising network 120, and the deep neural network 130 may operate in an execution mode for image classification or object detection, or may operate in an adversarial learning mode using the adversarial example generation unit 140.
  • the image input unit 110 receives images to be classified.
  • the image to be classified can be a normal image or an adversarial example. Images to be classified can be input through, for example, self-driving cars, door surveillance systems, laptops, or smartphones.
  • the denoising network 120 receives the image to be classified from the image input unit 110 and outputs a reconstructed image. If the input image is a normal image, the denoising network 120 outputs a reconstructed normal image. If the input image is an adversarial example, the denoising network 120 outputs a reconstructed adversarial example.
  • the denoising network 120 may perform machine learning to relax adversarial transformations on adversarial examples so that the deep neural network 130 can classify the reconstructed adversarial examples.
  • the deep neural network 130 receives the reconstructed image from the denoising network 120, classifies the image, and outputs a classification result.
  • The deep neural network 130 of the present invention performs machine learning so that it can correctly classify an image reconstructed through the denoising network 120 whether that image is a normal image or an adversarial example.
  • the present inventor used the MNIST, CIFAR-10, and CIFAR-100 datasets as normal images to learn during adversarial learning, which simultaneously trains a denoising network and a deep neural network.
  • the MNIST dataset included 60,000 training images and 10,000 test images with an input size of 1 ⁇ 28 ⁇ 28 and 10 classes.
  • the CIFAR-10 and CIFAR-100 datasets included 50,000 training images and 10,000 test images with an input size of 3 ⁇ 32 ⁇ 32.
  • The adversarial example generator 140 receives a normal image from the image input unit 110 and generates an adversarial example.
  • the adversarial example generator 140 can be activated only in a learning mode that simultaneously trains the denoising network 120 and the deep neural network 130.
  • The adversarial example generator 140 may be implemented with the Fast Gradient Sign Method (FGSM), DeepFool, the Momentum-based Iterative Method (MIM), Projected Gradient Descent (PGD), or the adversarial attack algorithm proposed by Carlini and Wagner.
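  • As a concrete illustration of the simplest of these attacks, a one-step FGSM perturbation can be sketched on a toy binary logistic model. The model and its analytic gradient below are assumptions for illustration only, not the patent's deep neural network:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    # FGSM: x' = x + eps * sign(d loss / d x), here using the analytic
    # cross-entropy gradient of a binary logistic model p = sigmoid(w.x + b).
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w          # d CE / d x for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.normal(size=8)
x = rng.normal(size=8)
x_adv = fgsm(x, y=1.0, w=w, b=0.0, eps=0.1)

# Each component moves by eps (up to floating-point rounding), i.e. the
# perturbation stays inside a fixed L-infinity budget of 0.1:
max_shift = float(np.max(np.abs(x_adv - x)))
```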
  • The denoising network 120 may be implemented with an encoder including four convolution layers, each followed by batch normalization, and a decoder including four deconvolution layers.
  • Each encoder node and the corresponding decoder node may be connected by a skip connection.
  • Connecting the encoder nodes and decoder nodes in this way can increase the speed of machine learning.
  • The denoising network 120 is not limited to an autoencoder base and may be implemented with another denoising network, such as a CAPGN.
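  • The encoder-decoder-with-skip-connections structure described above can be sketched as follows. Dense layers stand in for the patent's four convolution and four deconvolution layers, and batch normalization is omitted, so this is a structural sketch under those assumptions rather than the actual network:

```python
import numpy as np

rng = np.random.default_rng(2)

def layer(n_in, n_out):
    return rng.normal(scale=0.1, size=(n_out, n_in))

# Four encoder stages and four decoder stages (dense stand-ins for the
# conv / deconv layers described in the text).
enc = [layer(64, 48), layer(48, 32), layer(32, 24), layer(24, 16)]
dec = [layer(16, 24), layer(24, 32), layer(32, 48), layer(48, 64)]

def relu(v):
    return np.maximum(v, 0.0)

def denoising_net(x):
    # Encoder: keep each stage's input for the skip connections.
    skips, h = [], x
    for w in enc:
        skips.append(h)
        h = relu(w @ h)
    # Decoder: each stage adds the matching encoder activation
    # (skip connection), which shortens gradient paths during training.
    for w, s in zip(dec, reversed(skips)):
        h = relu(w @ h) + s
    return h

x_in = rng.normal(size=64)
x_rec = denoising_net(x_in)   # reconstruction has the input's shape
```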
  • In the learning mode, the denoising network 120 receives the adversarial example (3) generated by the adversarial example generator 140 and outputs a reconstructed adversarial example (4).
  • As in Equation 1, the denoising network 120 can calculate the distance (11) between the reconstructed adversarial example (4) and the normal image (1):

    d(θ) = ||D_θ(x′) − x||

  • Here, θ is the parameter of the denoising network, x is the normal image, and x′ is the adversarial example generated by adding the adversarial transformation to the normal image, so that D_θ(x′) is the reconstructed adversarial example.
  • In the learning mode, the denoising network 120 performs machine learning so as to minimize the result of a loss function such as Equation 2, the sum of the distance (11) between the reconstructed adversarial example (4) and the normal image (1) and the weighted first loss term (12) input from the deep neural network 130:

    L_denoise(θ) = ||D_θ(x′) − x|| + λ·L1

  • Here, ||D_θ(x′) − x|| is the distance between the reconstructed adversarial example and the normal image, L1 is the first loss term, and λ is the weight of the first loss term L1.
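  • Numerically, Equations 1 and 2 combine a reconstruction distance with the weighted first loss term. A minimal sketch, assuming an L2 distance and scalar placeholders (the exact distance measure and weight value are illustrative assumptions):

```python
import numpy as np

def reconstruction_distance(x_rec_adv, x_normal):
    # Equation 1 analogue: distance between the reconstructed adversarial
    # example D_theta(x') and the normal image x (L2 is an assumption).
    return float(np.linalg.norm(x_rec_adv - x_normal))

def denoiser_loss(x_rec_adv, x_normal, first_loss_term, lam):
    # Equation 2 analogue: the distance plus the weighted first loss term
    # fed back from the deep neural network.
    return reconstruction_distance(x_rec_adv, x_normal) + lam * first_loss_term

# Toy values: distance = 1.0, first loss term = 0.5 with weight 2.0.
loss = denoiser_loss(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                     first_loss_term=0.5, lam=2.0)
```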
  • In the learning mode, the deep neural network 130 calculates classification results for the normal image and adversarial example generated by the adversarial example generator 140 and for the reconstructed normal image and reconstructed adversarial example output from the denoising network 120.
  • Using each classification result, the deep neural network 130 calculates a first loss term (12) for classifying the reconstructed adversarial example, a second loss term (13) for classifying the adversarial example, a third loss term (14) for classifying the reconstructed normal image, and a fourth loss term (15) for classifying the normal image.
  • The deep neural network 130 then performs machine learning so as to minimize the value of a loss function such as Equation 3, expressed as the weighted sum of the first to fourth loss terms:

    L_dnn(φ) = L4 + α·L3 + β·L2 + γ·L1

  • Here, L1 is the first loss term, L2 is the second loss term, L3 is the third loss term, L4 is the fourth loss term, α is the weight of the third loss term L3, β is the weight of the second loss term L2, and γ is the weight of the first loss term L1.
  • In one embodiment, the deep neural network 130 can calculate the first to fourth loss terms according to Equation 4 below, where φ is the parameter of the deep neural network, F_φ(x) is the output of the deep neural network for a normal image x, F_φ(x′) is the output of the deep neural network for an adversarial example x′, θ is the parameter of the denoising network, D_θ(x) is the output of the denoising network for the normal image, D_θ(x′) is the output of the denoising network for the adversarial example, F_φ(D_θ(x′)) is the output of the deep neural network for the reconstructed adversarial example, and F_φ(D_θ(x)) is the output of the deep neural network for the reconstructed normal image.
  • In another embodiment, the deep neural network 130 may calculate the first to fourth loss terms according to Equation 5 below, where φ is the parameter of the deep neural network, F_φ(x) is the output of the deep neural network for a normal image x, F_φ(x′) is the output of the deep neural network for an adversarial example x′, θ is the parameter of the denoising network, D_θ(x) is the output of the denoising network for the normal image, D_θ(x′) is the output of the denoising network for the adversarial example, F_φ(D_θ(x′)) is the output of the deep neural network for the reconstructed adversarial example, and F_φ(D_θ(x)) is the output of the deep neural network for the reconstructed normal image.
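  • A sketch of how the four loss terms and their weighted sum could be computed, assuming softmax cross-entropy as the per-term classification loss. The cross-entropy choice, the toy linear network, and the toy denoiser are illustrative assumptions, not the patent's concrete Equations 4 and 5:

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy, used here as an illustrative per-term loss.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def classifier_loss(f, denoiser, x, x_adv, y, alpha=1.0, beta=1.0, gamma=1.0):
    l1 = cross_entropy(f(denoiser(x_adv)), y)  # reconstructed adversarial example
    l2 = cross_entropy(f(x_adv), y)            # adversarial example
    l3 = cross_entropy(f(denoiser(x)), y)      # reconstructed normal image
    l4 = cross_entropy(f(x), y)                # normal image
    # Equation 3 analogue: L4 + alpha*L3 + beta*L2 + gamma*L1.
    return l4 + alpha * l3 + beta * l2 + gamma * l1

rng = np.random.default_rng(4)
W = rng.normal(size=(3, 8))
f = lambda v: W @ v              # toy stand-in for the deep neural network
denoiser = lambda v: 0.9 * v     # toy stand-in for the denoising network
x = rng.normal(size=8)
x_adv = x + 0.1 * np.sign(rng.normal(size=8))
total = classifier_loss(f, denoiser, x, x_adv, y=1)
```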
  • Figure 3 is a flowchart illustrating an adversarial learning method for simultaneously training a denoising network and a deep neural network according to the present invention.
  • The adversarial learning method for simultaneously training a denoising network and a deep neural network will be described assuming that it is executed in an image processing device including a processor, a memory storing one or more instructions executed by the processor, an adversarial example generator, a denoising network, and a deep neural network.
  • the processor receives a normal image to be learned from a communication device such as a laptop or a smartphone (S311). Afterwards, the processor inputs the normal image into the adversarial example generator.
  • the adversarial example generator generates an adversarial example for the input normal image (S312).
  • the processor inputs normal images or adversarial examples into the denoising network.
  • the denoising network generates a reconstructed normal image or a reconstructed adversarial example (S313).
  • the processor inputs the normal image, the adversarial example, and the reconstructed normal image and the reconstructed adversarial example generated by the denoising network into the deep neural network.
  • the deep neural network calculates classification results for the normal image, the adversarial example, and the reconstructed normal image and the reconstructed adversarial example, respectively (S314).
  • The deep neural network uses each classification result to calculate a first loss term for classifying the reconstructed adversarial example, a second loss term for classifying the adversarial example, a third loss term for classifying the reconstructed normal image, and a fourth loss term for classifying the normal image (S315).
  • the deep neural network can calculate the first to fourth loss terms according to Equation 4 or Equation 5 described above.
  • the deep neural network performs machine learning on the loss function of the deep neural network by minimizing the sum of the first to fourth loss terms (S316).
  • the denoising network calculates the distance between the reconstructed adversarial example and the normal image (S317).
  • The denoising network performs machine learning on its loss function by minimizing the sum of the distance between the reconstructed adversarial example and the normal image and the first loss term calculated by the deep neural network (S318).
  • This is because, without the first loss term, the denoising network reconstructs the input image with values intermediate between the normal image and the adversarial example, and as a result cannot adequately relax the adversarial transformation of adversarial examples.
  • In this way, the adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network not only minimize the deep neural network's classification errors on normal images and adversarial examples, but also minimize its classification errors on the normal images and adversarial examples reconstructed by the denoising network. This yields two advantages.
  • First, the deep neural network can correctly classify reconstructed normal images.
  • Second, the adversarial transformation of adversarial examples is relaxed by the denoising network before the reconstructed adversarial examples are input to the deep neural network, so the deep neural network can correctly classify reconstructed adversarial examples.
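  • Steps S311 to S318 can be wired together in a minimal numerical sketch. The linear "deep neural network", per-pixel-scaling "denoising network", placeholder attack, and finite-difference gradients below are all illustrative assumptions standing in for the trained models and backpropagation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-ins: a linear scorer for the deep neural network and a
# per-pixel scaling for the denoising network.
W = rng.normal(scale=0.1, size=(3, 8))
theta = np.full(8, 0.8)

def ce(logits, y):
    z = logits - logits.max()
    return float(-(z - np.log(np.exp(z).sum()))[y])

def losses(W, theta, x, x_adv, y, lam=1.0):
    d = lambda v: theta * v
    l1 = ce(W @ d(x_adv), y)                   # reconstructed adversarial example
    l2 = ce(W @ x_adv, y)                      # adversarial example
    l3 = ce(W @ d(x), y)                       # reconstructed normal image
    l4 = ce(W @ x, y)                          # normal image
    dnn_loss = l4 + l3 + l2 + l1               # S314-S316: sum of four terms
    den_loss = float(np.linalg.norm(d(x_adv) - x)) + lam * l1  # S317-S318
    return dnn_loss, den_loss

x = rng.normal(size=8); y = 1                  # S311: normal image to be learned
x_adv = x + 0.1 * np.sign(rng.normal(size=8))  # S312: placeholder attack

# One simultaneous update of both networks, using finite-difference gradients
# in place of backpropagation:
dnn0, den0 = losses(W, theta, x, x_adv, y)
h, lr = 1e-5, 0.05
gW = np.zeros_like(W)
for idx in np.ndindex(*W.shape):
    Wp = W.copy(); Wp[idx] += h
    gW[idx] = (losses(Wp, theta, x, x_adv, y)[0] - dnn0) / h
gt = np.zeros_like(theta)
for i in range(theta.size):
    tp = theta.copy(); tp[i] += h
    gt[i] = (losses(W, tp, x, x_adv, y)[1] - den0) / h
W_new, theta_new = W - lr * gW, theta - lr * gt
dnn1, den1 = losses(W_new, theta_new, x, x_adv, y)
```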
  • Such an adversarial learning method that simultaneously trains a denoising network and a deep neural network can be implemented in the form of program instructions that can be executed through computer components and recorded on a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc., singly or in combination.
  • the program instructions recorded on the computer-readable recording medium may be those specifically designed and configured for the present invention, or may be known and usable by those skilled in the computer software field.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language code such as that created by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform processing according to the invention and vice versa.


Abstract

The present invention relates to adversarial learning technology: an adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and a computer-readable recording medium on which a program for executing the method is recorded, whereby the deep neural network that classifies images can maintain classification accuracy for normal images reconstructed through the denoising network at a level similar to the related art and can also correctly classify adversarial examples reconstructed through the denoising network.
PCT/KR2023/008459 2022-07-27 2023-06-19 Adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and computer-readable recording medium on which a program for executing the method is recorded WO2024025152A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0093394 2022-07-27
KR1020220093394A KR20240015472A (ko) 2022-07-27 2022-07-27 Adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and computer-readable recording medium recording a program for executing the same

Publications (1)

Publication Number Publication Date
WO2024025152A1 true WO2024025152A1 (fr) 2024-02-01

Family

ID=89706710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/008459 WO2024025152A1 (fr) 2023-06-19 2022-07-27 Adversarial learning apparatus and method for simultaneously training a denoising network and a deep neural network, and computer-readable recording medium on which a program for executing the method is recorded

Country Status (2)

Country Link
KR (1) KR20240015472A (fr)
WO (1) WO2024025152A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717522A (zh) * 2019-09-18 2020-01-21 平安科技(深圳)有限公司 Adversarial defense method for an image classification network and related apparatus
CN113988293A (zh) * 2021-10-29 2022-01-28 北京邮电大学 Method for a generative adversarial network combining functions at different levels
KR20220030635A (ko) * 2020-09-03 2022-03-11 부산대학교 산학협력단 Input apparatus and method for a deep neural network model robust to adversarial examples
KR102395244B1 (ko) * 2020-07-01 2022-05-09 인하대학교 산학협력단 Apparatus and method for preventing misrecognition of vehicle license plates subjected to adversarial attacks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RYU, GWONSANG ET AL.: "HAT: Hybrid Adversarial Training for Simultaneous Training of Deep Learning Model and Denoising Network", WISA 2021, August 11-13, 2021 *

Also Published As

Publication number Publication date
KR20240015472A (ko) 2024-02-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23846811

Country of ref document: EP

Kind code of ref document: A1