WO2022070342A1 - Learning device, learning method, and learning program - Google Patents

Learning device, learning method, and learning program

Info

Publication number
WO2022070342A1
WO2022070342A1 (PCT/JP2020/037256)
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
frequency component
error
generator
Prior art date
Application number
PCT/JP2020/037256
Other languages
English (en)
Japanese (ja)
Inventor
真弥 山口
関利 金井
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2020/037256 priority Critical patent/WO2022070342A1/fr
Priority to JP2022553336A priority patent/JPWO2022070342A1/ja
Publication of WO2022070342A1 publication Critical patent/WO2022070342A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Definitions

  • the present invention relates to a learning device, a learning method and a learning program.
  • GAN (Generative Adversarial Networks) is known as a deep learning model (see, for example, Non-Patent Document 1).
  • the conventional technology has the problem that overfitting may occur and the accuracy of the model may not improve.
  • samples generated by a trained GAN generator contain high frequency components that are not included in the actual training data.
  • as a result, the discriminator comes to rely on these high frequency components when judging authenticity, and overfitting may occur.
  • the learning device is characterized by having: a conversion unit that converts first data into a first frequency component and converts second data, generated by the generator constituting the adversarial learning model, into a second frequency component; a calculation unit that calculates an error between the first frequency component and the second frequency component; and an update unit that updates the parameters of the generator so that the error calculated by the calculation unit becomes smaller.
  • FIG. 1 is a diagram illustrating a deep learning model according to the first embodiment.
  • FIG. 2 is a diagram illustrating the influence of high frequency components.
  • FIG. 3 is a diagram showing a configuration example of the learning device according to the first embodiment.
  • FIG. 4 is a flowchart showing a processing flow of the learning device according to the first embodiment.
  • FIG. 5 is a diagram showing the results of the experiment.
  • FIG. 6 is a diagram showing the results of the experiment.
  • FIG. 7 is a diagram showing the results of the experiment.
  • FIG. 8 is a diagram showing an example of a computer that executes a learning program.
  • GAN is a technique for learning the data distribution p_data(x) with two deep learning models, a generator G and a discriminator D. G learns to deceive D, and D learns to distinguish the data generated by G from the training data.
  • a model in which a plurality of models are placed in such an adversarial relationship may be called an adversarial learning model.
  • Adversarial learning models such as GAN are used for generating images, text, audio, and the like; a minimal sketch of the two models follows.
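  • To make these roles concrete, the following is a minimal sketch in Python (PyTorch) of a generator G and a discriminator D of the kind described here; the network sizes and shapes are illustrative assumptions, not taken from this publication, and the later sketches reuse these names.

```python
import torch
import torch.nn as nn

z_dim = 128  # dimension of the random number z (assumed value)

# Generator G: maps a random number z to a generated sample
# (here a 1x32x32 "image" for illustration).
G = nn.Sequential(
    nn.Linear(z_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 1 * 32 * 32), nn.Tanh(),
    nn.Unflatten(1, (1, 32, 32)),
)

# Discriminator D: outputs the probability that its input is Real.
D = nn.Sequential(
    nn.Flatten(),
    nn.Linear(1 * 32 * 32, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)
```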
  • Reference 1: Karras, Tero, et al. "Analyzing and improving the image quality of StyleGAN." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. (CVPR 2020)
  • Reference 2: Donahue, Chris, Julian McAuley, and Miller Puckette. "Adversarial audio synthesis." International Conference on Learning Representations. 2019. (ICLR 2019)
  • Reference 3: Yu, Lantao, et al. "SeqGAN: Sequence generative adversarial nets with policy gradient." Thirty-First AAAI Conference on Artificial Intelligence. 2017. (AAAI 2017)
  • GAN has the problem that D overfits the training samples as learning progresses.
  • when overfitting occurs, neither model can be meaningfully updated for data generation, and the quality of generation by the generator deteriorates. This is shown, for example, in Figure 1 of Reference 4.
  • Reference 4: Karras, Tero, et al. "Training generative adversarial networks with limited data." arXiv preprint arXiv:2006.06676 (2020).
  • Reference 5 reports that the output of a trained CNN depends on the high frequency components of the input.
  • Reference 5: Wang, Haohan, et al. "High-frequency component helps explain the generalization of convolutional neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. (CVPR 2020)
  • Reference 6 describes that the neural networks constituting the GAN generator G and discriminator D tend to learn low frequency components before high frequency components.
  • Reference 6: Rahaman, Nasim, et al. "On the spectral bias of neural networks." International Conference on Machine Learning. 2019. (ICML 2019)
  • FIG. 1 is a diagram illustrating a deep learning model according to the first embodiment.
  • FIG. 2 is a diagram illustrating the influence of the high frequency component.
  • the two-dimensional power spectrum of CIFAR-10 differs between the actual data (Real) and the data generated by the generator (GAN).
  • Reference 7 shows that data generated by various GANs have an increased power spectrum at high frequencies compared with the actual data.
  • Reference 7: Durall, Ricard, Margret Keuper, and Janis Keuper. "Watch your up-convolution: CNN based generative deep neural networks are failing to reproduce spectral distributions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. (CVPR 2020)
  • the discriminator D identifies whether given data is Real or Fake, where Real is data included in the actual data set X and Fake is data generated by the generator G from a random number z.
  • the discriminator D is optimized so that its discrimination accuracy improves, that is, so that the probability that it identifies Real as Real increases.
  • the generator G is optimized so that its ability to deceive the discriminator D improves, that is, so that the probability that the discriminator D identifies Fake as Real increases.
  • the generator G is optimized so that the frequency components of Real and Fake match.
  • the details of the learning process of the deep learning model will be described together with the configuration of the learning device of the present embodiment.
  • FIG. 3 is a diagram showing a configuration example of the learning device according to the first embodiment.
  • the learning device 10 accepts input of data for learning and updates the parameters of the deep learning model. Further, the learning device 10 may output the updated parameters. As shown in FIG. 3, the learning device 10 has an input/output unit 11, a storage unit 12, and a control unit 13.
  • the input/output unit 11 is an interface for inputting and outputting data.
  • the input/output unit 11 may be a communication interface such as a NIC (Network Interface Card) for performing data communication with another device via a network.
  • the input/output unit 11 may also be an interface for connecting input devices such as a mouse and a keyboard, and an output device such as a display.
  • the storage unit 12 is a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or an optical disk.
  • the storage unit 12 may also be a rewritable semiconductor memory such as a RAM (Random Access Memory), a flash memory, or an NVSRAM (Non Volatile Static Random Access Memory).
  • the storage unit 12 stores an OS (Operating System) and various programs executed by the learning device 10. Further, the storage unit 12 stores the model information 121.
  • the model information 121 is information such as parameters for constructing a deep learning model, and is appropriately updated in the learning process. Further, the updated model information 121 may be output to another device or the like via the input / output unit 11.
  • the control unit 13 controls the entire learning device 10.
  • the control unit 13 is realized by, for example, an electronic circuit such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a GPU (Graphics Processing Unit), or by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the control unit 13 has an internal memory for storing programs and control data that specify various processing procedures, and executes each process using the internal memory. Further, the control unit 13 functions as various processing units by operating various programs.
  • the control unit 13 has a generation unit 131, a conversion unit 132, a calculation unit 133, and an update unit 134.
  • the generation unit 131 inputs the random number z to the generator G and generates the second data.
  • the conversion unit 132 converts the first data into the first frequency component, and converts the second data generated by the generator G constituting the adversarial learning model into the second frequency component.
  • the conversion unit 132 converts the first data and the second data into frequency components using a differentiable function. This is to enable parameter updates by error backpropagation.
  • for example, the conversion unit 132 converts the first data and the second data into frequency components by a discrete Fourier transform (DFT) or a discrete cosine transform (DCT); a sketch of such a conversion is shown below.
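  • A minimal sketch of such a differentiable conversion, continuing the Python example above; the function name to_frequency and the choice of the amplitude spectrum are assumptions for illustration:

```python
def to_frequency(x: torch.Tensor) -> torch.Tensor:
    """Convert spatial data of shape (N, C, H, W) into frequency components.

    torch.fft.fft2 is differentiable, so gradients can propagate back
    through this conversion to the generator. A DCT could be used instead;
    it is likewise a differentiable linear transform.
    """
    spectrum = torch.fft.fft2(x, norm="ortho")  # complex-valued 2D DFT
    return torch.abs(spectrum)                  # amplitude spectrum
```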
  • the calculation unit 133 calculates the error between the first frequency component and the second frequency component.
  • the calculation unit 133 can calculate the error by any method, for example the MSE (Mean Square Error), the RMSE (Root Mean Square Error), or the L1 norm.
  • X_real and X_fake are batches of Real and Fake data, respectively, and |X_real| and |X_fake| are the respective batch sizes. Real is actual data; Fake is data generated by the generator G.
  • F(·) is a function that converts data in the spatial domain into frequency components.
  • x_real^i and x_fake^j are the i-th data of X_real and the j-th data of X_fake, respectively, and are examples of the first data and the second data. F(x_real^i) corresponds to the first frequency component, and F(x_fake^j) corresponds to the second frequency component.
  • with these definitions, the frequency component matching loss of Eq. (1) is the error between the batch-averaged frequency components: $L_{freq} = d\left(\frac{1}{|X_{real}|}\sum_{i} F(x_{real}^{i}),\ \frac{1}{|X_{fake}|}\sum_{j} F(x_{fake}^{j})\right)$ ... (1), where d(·,·) is an error measure such as the MSE.
  • the calculation unit 133 calculates the error between the batch average of the plurality of first frequency components, obtained by converting each of the plurality of first data, and the batch average of the plurality of second frequency components, obtained by converting each of the plurality of second data. That is, the error here is an error between batch averages, not an error between single data samples (see the sketch below).
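  • Continuing the sketch, the batch-averaged error of Eq. (1) might be computed as follows (the MSE form of the error measure d is assumed here):

```python
def frequency_matching_loss(x_real: torch.Tensor,
                            x_fake: torch.Tensor) -> torch.Tensor:
    """Eq. (1): error between batch averages of frequency components."""
    f_real = to_frequency(x_real).mean(dim=0)  # batch average over X_real
    f_fake = to_frequency(x_fake).mean(dim=0)  # batch average over X_fake
    return torch.mean((f_real - f_fake) ** 2)  # MSE between the two averages
```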
  • the calculation unit 133 calculates, as in Eq. (2), the loss function $L_G$, which increases as the error between the first frequency component and the second frequency component increases, and also increases as the accuracy with which the discriminator constituting the adversarial learning model distinguishes the first data from the second data increases: $L_G = L_{GAN} + \lambda L_{freq}$ ... (2).
  • here the first term $L_{GAN}$ is the GAN loss of the generator G (for example, the standard form $\mathbb{E}_{z}[\log(1 - D(G(z)))]$), and $\lambda$ is a hyperparameter that functions as a weight.
  • G(·) is a function that outputs the data (Fake) generated by the generator G from its argument.
  • D(·) is a function that outputs the probability that the discriminator D identifies the data given as an argument as Real.
  • the update unit 134 updates the parameters of the generator G so that the error calculated by the calculation unit 133 becomes smaller. Specifically, the update unit 134 updates the parameters of the generator G so that the loss function $L_G$ is optimized; a sketch of this update follows.
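  • A sketch of this generator update, continuing the example above; the optimizer, learning rate, weight value, and the stand-in batch of real data are assumptions:

```python
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)  # assumed optimizer
lam = 1.0                                          # weight lambda (assumed)

x_real = torch.randn(64, 1, 32, 32)     # stand-in for a batch of real data
z = torch.randn(x_real.size(0), z_dim)  # random number z
x_fake = G(z)

# Eq. (2): GAN loss of G (minimax form assumed) + weighted matching loss.
gan_loss = torch.log(1.0 - D(x_fake) + 1e-8).mean()
loss_G = gan_loss + lam * frequency_matching_loss(x_real, x_fake)

opt_G.zero_grad()
loss_G.backward()  # error backpropagation through D, G and F(.)
opt_G.step()
```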
  • the update unit 134 also updates the parameters of the discriminator D so that the loss function of Eq. (3) is optimized. Here x is real data; in the standard GAN form, Eq. (3) is $L_D = \mathbb{E}_{x}[\log D(x)] + \mathbb{E}_{z}[\log(1 - D(G(z)))]$, which D maximizes.
  • FIG. 4 is a flowchart showing a processing flow of the learning device according to the first embodiment.
  • the learning device 10 reads the learning data (step S101).
  • the learning device 10 reads existing data (Real) as learning data.
  • the learning device 10 samples a random number z from the normal distribution and generates a sample (Fake) by G (z) (step S102). Further, the learning device 10 converts Real and Fake into frequency components by DCT or DFT, and calculates the batch average of the frequency components (step S103).
  • the learning device 10 calculates the GAN loss function of the generator G (step S104).
  • the GAN loss of the generator G corresponds to the first term on the right side of Eq. (2).
  • the learning device 10 calculates the frequency component matching loss from the batch averages of the Real and Fake frequency components (step S105).
  • the frequency component matching loss corresponds to $L_{freq}$ in Eq. (1).
  • the learning device 10 calculates the sum of the GAN loss function and the frequency component matching loss with respect to G as the total loss (step S106).
  • the total loss corresponds to $L_G$ in Eq. (2).
  • the learning device 10 may multiply the frequency component matching loss by the weight λ.
  • the learning device 10 updates the parameters of the generator G by backpropagation of the total loss (step S107).
  • the learning device 10 trains the discriminator D (step S108). Specifically, the learning device 10 updates the parameters of the discriminator D by backpropagation of the loss function of Eq. (3).
  • the learning device 10 then judges whether to end the learning (step S109); if True, the processing ends, and if False, the above steps are repeated. A sketch of the whole training loop follows.
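  • Putting steps S101 to S109 together, one possible shape of the training loop is sketched below, reusing the names defined in the earlier sketches; the optimizer for D, the stand-in data, and the fixed iteration budget in place of the termination test of step S109 are all assumptions:

```python
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)  # assumed optimizer
max_iterations = 100_000                           # assumed budget (S109)

for iteration in range(max_iterations):
    x_real = torch.randn(64, 1, 32, 32)     # S101: stand-in for Real data
    z = torch.randn(x_real.size(0), z_dim)  # S102: sample z from N(0, I)
    x_fake = G(z)                           # S102: generate Fake = G(z)

    # S103-S106: total loss of G = GAN loss + matching loss (Eq. (2))
    loss_G = (torch.log(1.0 - D(x_fake) + 1e-8).mean()
              + lam * frequency_matching_loss(x_real, x_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()  # S107: update G

    # S108: update D with the standard GAN objective (Eq. (3), assumed form)
    x_fake_d = G(z).detach()  # block gradients into G
    loss_D = (-torch.log(D(x_real) + 1e-8).mean()
              - torch.log(1.0 - D(x_fake_d) + 1e-8).mean())
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
```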
  • the conversion unit 132 converts the first data into the first frequency component, and converts the second data, generated by the generator constituting the adversarial learning model, into the second frequency component.
  • the calculation unit 133 calculates the error between the first frequency component and the second frequency component.
  • the update unit 134 updates the parameters of the generator so that the error calculated by the calculation unit 133 becomes smaller. In this way, the learning device 10 can reflect the influence of the frequency components on the learning. Thereby, according to the present embodiment, it is possible to suppress the occurrence of overfitting and improve the accuracy of the model.
  • the conversion unit 132 converts the first data and the second data into frequency components using a differentiable function. For example, the conversion unit 132 converts the first data and the second data into frequency components by a discrete Fourier transform or a discrete cosine transform. This makes it possible in the present embodiment to update the parameters by error backpropagation.
  • the calculation unit 133 calculates the error between the batch average of the plurality of first frequency components, obtained by converting each of the plurality of first data, and the batch average of the plurality of second frequency components, obtained by converting each of the plurality of second data.
  • the calculation unit 133 calculates the loss function that increases as the error between the first frequency component and the second frequency component increases, and also increases as the accuracy with which the discriminator constituting the adversarial learning model distinguishes the first data from the second data increases.
  • the update unit 134 updates the parameters of the generator so that the loss function is optimized. Thereby, in the present embodiment, the learning of the entire model can be performed efficiently.
  • SSD2GAN is another method for improving the accuracy of the model in consideration of the influence of the frequency components, using an approach different from that of the first embodiment.
  • FIGS. 5, 6 and 7 are diagrams showing the results of the experiment. As shown in FIG. 5, with FreqMSE and with SSD2GAN + Tradeoff + SSCR, the FID of the generator G is small, and it can be said that the generation quality is improved.
  • overfitting is suppressed by every method except SNGAN.
  • with SNGAN, overfitting occurs after 40,000 iterations, and the FID continues to deteriorate.
  • FreqMSE and SSD2GAN have the effect of suppressing high frequency components in the generated samples that are not present in the real data.
  • each component of each of the illustrated devices is a functional concept and does not necessarily have to be physically configured as shown in the figures. That is, the specific forms of distribution and integration of the devices are not limited to those shown in the figures, and all or part of them may be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions. Further, each processing function performed by each device may be realized, in whole or in part, by a CPU (Central Processing Unit) and a program analyzed and executed by the CPU, or as hardware based on wired logic. The program may be executed not only by a CPU but also by another processor such as a GPU.
  • the learning device 10 can be implemented by installing a learning program that executes the above learning process as package software or online software on a desired computer. For example, by causing the information processing device to execute the above learning program, the information processing device can be made to function as the learning device 10.
  • the information processing device referred to here includes a desktop type or notebook type personal computer.
  • the information processing device includes smartphones, mobile phones, mobile communication terminals such as PHS (Personal Handyphone System), and slate terminals such as PDAs (Personal Digital Assistants).
  • the learning device 10 can also be implemented as a learning server device that treats the terminal device used by the user as a client and provides the client with services related to the above learning process.
  • the learning server device is implemented as a server device that provides a learning service that inputs learning data and outputs learning model information.
  • the learning server device may be implemented as a Web server, or may be implemented as a cloud that provides the service related to the learning process by outsourcing.
  • FIG. 8 is a diagram showing an example of a computer that executes a learning program.
  • the computer 1000 has, for example, a memory 1010 and a CPU 1020.
  • the computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. Each of these parts is connected by a bus 1080.
  • the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012.
  • the ROM 1011 stores, for example, a boot program such as a BIOS (Basic Input Output System).
  • the hard disk drive interface 1030 is connected to the hard disk drive 1090.
  • the disk drive interface 1040 is connected to the disk drive 1100.
  • a removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1100.
  • the serial port interface 1050 is connected to, for example, a mouse 1110 and a keyboard 1120.
  • the video adapter 1060 is connected to, for example, the display 1130.
  • the hard disk drive 1090 stores, for example, the OS 1091, the application program 1092, the program module 1093, and the program data 1094. That is, the program that defines each process of the learning device 10 is implemented as a program module 1093 in which a code that can be executed by a computer is described.
  • the program module 1093 is stored in, for example, the hard disk drive 1090.
  • the program module 1093 for executing the same processing as the functional configuration in the learning device 10 is stored in the hard disk drive 1090.
  • the hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
  • the setting data used in the processing of the above-described embodiment is stored as program data 1094 in, for example, a memory 1010 or a hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 and the program data 1094 stored in the memory 1010 and the hard disk drive 1090 into the RAM 1012 as needed, and executes the process of the above-described embodiment.
  • the program module 1093 and the program data 1094 are not limited to those stored in the hard disk drive 1090, but may be stored in, for example, a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and the program data 1094 may be stored in another computer connected via a network (LAN (Local Area Network), WAN (Wide Area Network), etc.). Then, the program module 1093 and the program data 1094 may be read from another computer by the CPU 1020 via the network interface 1070.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

This learning device includes a conversion unit (132) that converts first data into a first frequency component and converts second data, generated by a generator constituting an adversarial learning model, into a second frequency component. A calculation unit (133) calculates the error between the first frequency component and the second frequency component. An update unit (134) updates a parameter of the generator so as to reduce the error calculated by the calculation unit (133).
PCT/JP2020/037256 2020-09-30 2020-09-30 Learning device, learning method, and learning program WO2022070342A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/037256 WO2022070342A1 (fr) 2020-09-30 2020-09-30 Learning device, learning method, and learning program
JP2022553336A JPWO2022070342A1 (fr) 2020-09-30 2020-09-30

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/037256 WO2022070342A1 (fr) 2020-09-30 2020-09-30 Learning device, learning method, and learning program

Publications (1)

Publication Number Publication Date
WO2022070342A1 true WO2022070342A1 (fr) 2022-04-07

Family

ID=80950008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037256 WO2022070342A1 (fr) 2020-09-30 2020-09-30 Learning device, learning method, and learning program

Country Status (2)

Country Link
JP (1) JPWO2022070342A1 (fr)
WO (1) WO2022070342A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020087103A (ja) * 2018-11-28 2020-06-04 株式会社ツバサファクトリー Learning method, computer program, classifier, and generator

Also Published As

Publication number Publication date
JPWO2022070342A1 (fr) 2022-04-07

Similar Documents

Publication Publication Date Title
JP6870508B2 (ja) Learning program, learning method, and learning device
US20230196202A1 (en) System and method for automatic building of learning machines using learning machines
JP6992709B2 (ja) Mask estimation device, mask estimation method, and mask estimation program
US11645441B1 (en) Machine-learning based clustering for clock tree synthesis
Sun et al. Sparse deep learning: A new framework immune to local traps and miscalibration
JP2024051136A (ja) Learning device, learning method, learning program, estimation device, estimation method, and estimation program
WO2022070342A1 (fr) Learning device, learning method, and learning program
WO2020170803A1 (fr) Augmentation device, augmentation method, and augmentation program
JP7112348B2 (ja) Signal processing device, signal processing method, and signal processing program
CN110489435B (zh) Artificial-intelligence-based data processing method, apparatus, and electronic device
WO2022070343A1 (fr) Learning device, learning method, and learning program
US20220414490A1 (en) Storage medium, machine learning method, and machine learning device
CN110955789A (zh) Multimedia data processing method and device
EP4339832A1 (fr) Method for building an AI integrated model, and inference method and apparatus for an AI integrated model
WO2022249418A1 (fr) Learning device, learning method, and learning program
Sun et al. Generalizing expectation propagation with mixtures of exponential family distributions and an application to Bayesian logistic regression
WO2019208248A1 (fr) Learning device, learning method, and learning program
JP7047664B2 (ja) Learning device, learning method, and prediction system
JP7077746B2 (ja) Learning device, learning method, and learning program
WO2023067666A1 (fr) Calculation device, calculation method, and calculation program
CN114970431B (zh) Training method and device for a MOSFET parameter estimation model
Jiang et al. Renewable Huber estimation method for streaming datasets
WO2023195138A1 (fr) Learning method, learning device, and learning program
WO2023238258A1 (fr) Information provision device, information provision method, and information provision program
WO2021081809A1 (fr) Network architecture search method and apparatus, recording medium, and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956270

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022553336

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20956270

Country of ref document: EP

Kind code of ref document: A1