WO2024095420A1 - Learning device, estimating device, learning method, estimating method, and program - Google Patents

Learning device, estimating device, learning method, estimating method, and program

Info

Publication number
WO2024095420A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
transformation
estimation
unit
Prior art date
Application number
PCT/JP2022/041070
Other languages
French (fr)
Japanese (ja)
Inventor
友貴 山中
浩義 瀧口
正紀 篠原
拓也 南
泰典 和田
楊 鐘本
啓仁 野村
Original Assignee
日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Priority to PCT/JP2022/041070
Publication of WO2024095420A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • the present invention relates to a learning device, an estimation device, a learning method, an estimation method, and a program.
  • As machine learning models become more widespread, attacks such as Model Inversion Attack, Membership Inference, and CopyCat CNN have been reported that extract information (e.g., training data) from trained models in operation, as well as methods that replicate the models themselves. These attacks pose a threat from the perspective of privacy protection and intellectual property protection.
  • Measures proposed to counter these attacks include improving the robustness of the model itself, limiting the number of queries to the machine learning model, and adding noise to the information output by the model, but none of these methods provide a complete defense.
  • the present invention has been made in consideration of the above, and aims to provide a learning device, an estimation device, a learning method, an estimation method, and a program that can reduce the risk of information leakage from a trained machine learning model and the risk of cloning the model.
  • the learning device is characterized by having an acquisition unit that acquires learning data, a conversion unit that performs a predetermined conversion based on secret information on the learning data, and a learning unit that performs learning of an estimation model based on the learning data converted by the conversion unit.
  • the estimation device is characterized by having an acquisition unit that acquires data to be processed, a conversion unit that performs a predetermined conversion based on secret information on the data to be processed, and an estimation unit that performs estimation processing based on the data to be processed that has been converted by the conversion unit using an estimation model that has been trained based on training data that has been subjected to the predetermined conversion based on the secret information.
  • the present invention can reduce the risk of information leakage from trained machine learning models and the risk of cloning models.
  • FIG. 1 is a diagram for explaining an outline of the process according to the embodiment.
  • FIG. 2 is a diagram illustrating an example of a configuration of a processing device according to an embodiment.
  • FIG. 3 is a diagram illustrating an example of the conversion process executed by the conversion unit.
  • FIG. 4 is a diagram illustrating an example of a processing procedure of the learning process according to the embodiment.
  • FIG. 5 is a diagram illustrating an example of a processing procedure of the estimation process according to the embodiment.
  • FIG. 6 is a diagram illustrating an example of a computer that realizes a learning device by executing a program.
  • [Embodiment] [Overview of Processing of the Embodiment] FIG. 1 is a diagram for explaining an outline of the process according to the embodiment. In the learning phase, first, the learning data (e.g., original data Dp) is transformed in accordance with a specific rule based on secret information Kc ((1) in FIG. 1).
  • Then, in the learning phase, the estimation model 131 (a machine learning model) is trained using only the learning data that has been transformed according to this rule ((2) in FIG. 1). In this way, in the embodiment, a trained model is generated that can correctly recognize only data that has been transformed according to the specific rule based on the secret information ((3) in FIG. 1).
  • In the operation phase, the data to be processed is converted according to the same specific rule based on the secret information Kc as in the learning phase, and is then input to the estimation model 131 for estimation processing.
  • In this case, only users or devices that have the secret information and are capable of converting data according to the specific rule can use the estimation model 131 with high accuracy.
  • In other words, for data that has not been converted based on the secret information, or that has been converted according to a rule other than the specific rule, the estimation model 131 has insufficient detection and classification accuracy, and it is a model of no value to other users or devices.
  • [Processing device] FIG. 2 is a diagram showing an example of the configuration of the processing device according to the embodiment.
  • the processing device 10 is realized by, for example, loading a predetermined program into a computer or the like including a ROM (Read Only Memory), a RAM (Random Access Memory), a CPU (Central Processing Unit), etc., and having the CPU execute the predetermined program.
  • the processing device 10 also has a communication interface for transmitting and receiving various information to and from other devices connected via a network, etc.
  • the processing device 10 has a data acquisition unit 11 (acquisition unit), a conversion unit 12, an estimation unit 13, and a learning unit 14.
  • The data acquisition unit 11 acquires data to be processed by the processing device 10. In the learning phase, it acquires learning data, for example a labeled image data set or a labeled table data set. In the operation phase, it acquires the data to be processed in the estimation process. The data acquisition unit 11 acquires these various types of data, for example, via a communication interface.
  • the conversion unit 12 performs a predetermined conversion based on secret information on the learning data acquired by the data acquisition unit 11 or on the data to be processed.
  • the conversion unit 12 performs conversion based on the same secret information and according to the same rules in the learning phase and the operation phase.
  • the secret information is, for example, a secret key stored in a module of the processing device 10.
  • the conversion unit 12 uses, as the secret information, for example, a secret key stored in a TPM (Trusted Platform Module) or the like. This makes it possible, in the embodiment, to generate an estimation model 131 that operates accurately only on a specific device.
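The source does not say how a TPM-held secret key becomes a concrete transformation rule. One possible sketch, not described in the patent, is to derive a deterministic seed by hashing the secret key together with a context label (the function name and label below are illustrative):

```python
import hashlib

def seed_from_secret(secret_key: bytes, context: str = "transform-v1") -> int:
    """Derive a deterministic 64-bit seed for the conversion rule from a secret key.

    In practice the key would be held in a TPM and never leave the device;
    here it is a plain bytes value purely for illustration.
    """
    digest = hashlib.sha256(secret_key + context.encode("utf-8")).digest()
    # Use the first 8 bytes of the digest as a 64-bit seed.
    return int.from_bytes(digest[:8], "big")

seed = seed_from_secret(b"device-secret-key")
assert seed == seed_from_secret(b"device-secret-key")   # deterministic
assert seed != seed_from_secret(b"another-secret-key")  # key-dependent
```

The same seed can then fix the transformation pattern (e.g., the Jigsaw permutation) in both the learning phase and the operation phase.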
  • the predetermined transformation is a transformation that follows a specific rule.
  • FIG. 3 is a diagram for explaining an example of the transformation process executed by the transformation unit 12. As shown in FIG. 3, when the data acquired by the data acquisition unit 11 is a labeled image data set (e.g., Cifar10), the transformation unit 12 performs, for example, a Jigsaw transformation (Reference 1) whose transformation pattern is fixed by a seed value (the secret information).
  • the Jigsaw transformation is a transformation method in which an image is cut into patches and then the positions of the patches are rearranged.
  • Reference 1: “aleju/imgaug”, [online], [retrieved October 28, 2022], Internet <URL: https://github.com/aleju/imgaug>
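The patch-shuffling idea can be sketched without the imgaug dependency of Reference 1. The following minimal numpy version (illustrative, not the patent's implementation) cuts the image into a grid of patches and rearranges them with a permutation fixed by the seed:

```python
import numpy as np

def jigsaw_transform(img: np.ndarray, grid: int, seed: int) -> np.ndarray:
    """Cut img (H, W[, C]) into grid x grid patches and rearrange them
    with a permutation fixed by seed (the secret information)."""
    h, w = img.shape[0], img.shape[1]
    assert h % grid == 0 and w % grid == 0, "image must divide evenly into patches"
    ph, pw = h // grid, w // grid
    patches = [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
               for r in range(grid) for c in range(grid)]
    perm = np.random.default_rng(seed).permutation(len(patches))
    out = np.empty_like(img)
    for i, src in enumerate(perm):
        r, c = divmod(i, grid)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[src]
    return out

img = np.arange(32 * 32 * 3).reshape(32, 32, 3)
t1 = jigsaw_transform(img, grid=4, seed=42)
assert np.array_equal(t1, jigsaw_transform(img, grid=4, seed=42))  # same seed, same pattern
```

Because the permutation is reconstructed from the seed on every call, the same secret yields the same pattern in the learning phase and the operation phase.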
  • When the data is image data, the transformation unit 12 may perform, in addition to the Jigsaw transformation, a color tone transformation, an affine transformation, or a combination of two or more of the Jigsaw, color tone, and affine transformations, using transformation rules set from the secret information.
  • For example, a color tone transformation is a grayscale transformation of a color image. An affine transformation combines linear transformations, such as rotation, scaling (enlargement or reduction), and skew, with translation.
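The source names the color tone transformation only as an option; how the secret information would parameterize it is not specified. One hedged possibility is to draw the channel-mixing weights from a seed-fixed RNG, so the exact tone mapping depends on the key:

```python
import numpy as np

def keyed_grayscale(img: np.ndarray, seed: int) -> np.ndarray:
    """Secret-keyed color tone transformation (a sketch, not from the source):
    mix the RGB channels with convex weights drawn from a seed-fixed RNG
    instead of the usual fixed luminance weights."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(np.ones(3))        # random weights summing to 1
    gray = img.astype(np.float64) @ weights    # (H, W, 3) @ (3,) -> (H, W)
    return np.clip(gray, 0, 255).astype(np.uint8)

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
g = keyed_grayscale(img, seed=3)
```

A standard grayscale conversion would use fixed weights; drawing them from the seed makes the mapping specific to the key holder.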
  • When the data is table data, the conversion unit 12 performs, for example, a shuffle of the features, application of noise, or a combination of both, using conversion rules set from the secret information. These conversion methods are merely examples; the conversion unit 12 can apply various conversion methods according to the type of data.
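For table data, the feature shuffle and noise application described above can be sketched as follows (the noise scale is an illustrative parameter; the source does not specify one):

```python
import numpy as np

def keyed_table_transform(X: np.ndarray, seed: int, noise_scale: float = 0.01) -> np.ndarray:
    """Shuffle the feature columns and apply small noise, both fixed by the
    seed (the secret information), so the same seed always yields the same
    conversion in the learning phase and the operation phase."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[1])                 # secret column order
    noise = rng.normal(0.0, noise_scale, size=X.shape)
    return X[:, perm] + noise

X = np.arange(12.0).reshape(3, 4)
Xt = keyed_table_transform(X, seed=7)
assert np.array_equal(Xt, keyed_table_transform(X, seed=7))  # deterministic
```

Reconstructing the RNG from the seed on each call keeps the column order and the noise draw identical across both phases.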
  • the estimation unit 13 uses the estimation model 131 to perform estimation processing based on the data converted by the conversion unit 12.
  • the estimation model 131 is a machine learning model that, when data converted by the conversion unit 12 is input, performs predetermined estimation processing such as detection processing, classification processing, and matching processing, and outputs the estimation result.
  • In the learning phase, the estimation unit 13 inputs the learning data converted by the conversion unit 12 to the estimation model 131, and outputs the estimation result output from the estimation model 131 to the learning unit 14.
  • In the operation phase, the estimation unit 13 inputs the data to be processed that has been converted by the conversion unit 12 to the estimation model 131, and outputs the estimation result output from the estimation model 131 to an output device, an external device, or the like. The estimation model 131 used in the operation phase has been trained based on learning data subjected to the predetermined conversion based on the secret information.
  • the learning unit 14 performs learning of the estimation model 131 based on the learning data converted by the conversion unit 12.
  • the learning unit 14 performs learning of the estimation model 131 using machine learning such as supervised learning, unsupervised learning, reinforcement learning, and deep learning.
  • FIG. 4 shows an example of the processing procedure of the learning process according to the embodiment.
  • the data acquisition unit 11 acquires learning data (step S11), and the conversion unit 12 performs a predetermined conversion based on the secret information on the learning data acquired in step S11 (step S12).
  • the estimation unit 13 uses the estimation model 131 to perform estimation processing based on the learning data converted by the conversion unit 12 (step S13), and outputs the estimation result to the learning unit 14.
  • the learning unit 14 executes a learning process for the estimation model 131 based on the learning data converted by the conversion unit 12 (step S14). For example, when labeled learning data is used, the learning unit 14 optimizes the parameters of the estimation model 131 so that the estimation result output from the estimation unit 13 in step S13 approaches the label of the learning data.
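Steps S11 to S14 above can be sketched end to end. The example below (synthetic data and all names are illustrative, not from the patent) applies a seed-fixed feature permutation as the secret conversion, then optimizes a small logistic-regression "estimation model" so that its output approaches the labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# (S11) Acquire labeled learning data: two well-separated synthetic classes.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 8)), rng.normal(2.0, 0.5, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

def secret_convert(data: np.ndarray, seed: int) -> np.ndarray:
    """(S12) The predetermined conversion: a seed-fixed feature permutation."""
    return data[:, np.random.default_rng(seed).permutation(data.shape[1])]

Xc = secret_convert(X, seed=1234)

# (S13)-(S14) Estimation and learning: optimize the model parameters so that
# the estimation result approaches the label (logistic regression, full batch).
w, b = np.zeros(Xc.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Xc @ w + b)))   # estimation result
    w -= 0.5 * (Xc.T @ (p - y)) / len(y)      # gradient step on the log loss
    b -= 0.5 * float(np.mean(p - y))

train_acc = float(np.mean(((1.0 / (1.0 + np.exp(-(Xc @ w + b)))) > 0.5) == y))
```

With the same seed at operation time, inputs land in the same permuted feature space the model was trained in; without it, the learned weights are applied to the wrong columns.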
  • FIG. 5 shows an example of the processing procedure of the estimation process according to the embodiment.
  • the data acquisition unit 11 acquires the data to be processed (step S21), and the conversion unit 12 performs a predetermined conversion based on the secret information on the data to be processed acquired in step S21 (step S22).
  • the estimation unit 13 uses the estimation model 131 to perform estimation processing based on the data to be processed that has been converted by the conversion unit 12 (step S23).
  • the estimation model 131 is a model that has been trained based on training data that has been subjected to a predetermined conversion based on secret information.
  • the estimation unit 13 outputs the estimation result (step S24) and ends the estimation processing.
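Steps S21 to S24 can likewise be sketched. The example below uses a nearest-centroid "estimation model" on synthetic data (all names and data are illustrative, not from the patent); the query is converted with the same seed used at training time before estimation:

```python
import numpy as np

SECRET_SEED = 99  # stands in for a seed derived from the secret information

def secret_convert(x: np.ndarray, seed: int = SECRET_SEED) -> np.ndarray:
    """The predetermined conversion shared by the learning and operation
    phases: a seed-fixed permutation of the last (feature) axis."""
    return x[..., np.random.default_rng(seed).permutation(x.shape[-1])]

# Learning phase, compressed: class centroids computed on converted data.
rng = np.random.default_rng(0)
m0 = np.array([0.0, 0.0, 0.0, 3.0, 3.0, 3.0])
m1 = np.array([3.0, 3.0, 3.0, 0.0, 0.0, 0.0])
X = np.vstack([rng.normal(m0, 0.3, (40, 6)), rng.normal(m1, 0.3, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
Xc = secret_convert(X)
centroids = np.stack([Xc[y == k].mean(axis=0) for k in (0, 1)])

def estimate(sample: np.ndarray) -> int:
    # (S21) acquire -> (S22) convert with the same secret -> (S23) estimate
    converted = secret_convert(sample)
    return int(np.argmin(np.linalg.norm(centroids - converted, axis=1)))

pred = estimate(rng.normal(m0, 0.3, 6))  # (S24) output: a fresh class-0 query
```

Only a caller that converts with the matching seed queries the model in the feature space it was trained in; a missing or differently seeded conversion feeds the model features in an order it was never trained on.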
  • In an evaluation, a first CNN (Convolutional Neural Network) model was trained using training data obtained by applying, to the labeled image data set Cifar10, a first Jigsaw transformation whose transformation pattern was fixed by a seed value (the secret information).
  • the classification accuracy of this first CNN model was 71.15% for the first evaluation data, which underwent the same first Jigsaw transformation as used during training.
  • the classification accuracy of the first CNN model was 42.36% for the second evaluation data, which did not undergo Jigsaw transformation, and 41.00% for the third evaluation data, which underwent a Jigsaw transformation with a different pattern from the first Jigsaw transformation.
  • Thus, the classification accuracy for the second and third evaluation data is significantly degraded compared with that for the first evaluation data.
  • the processing device 10 trains the estimation model 131 using only the training data that has been subjected to a predetermined transformation based on the secret information.
  • the predetermined transformation is a transformation that uses the secret information and follows a specific rule.
  • the estimation model 131 is merely a model with insufficient detection accuracy and classification accuracy for data that has not been converted based on secret information or data that has been converted according to rules other than the specific rules.
  • the estimation model 131 is a worthless model for users or devices that do not have the secret information or the rules for data conversion.
  • By generating an estimation model 131 that is valuable to a user or device that has the secret information and can convert data according to the specific rule, but worthless to a user or device that lacks the secret information or the data conversion rule, it is possible to reduce the risk of information leakage from the estimation model 131 and the risk of duplication of the estimation model 131.
  • Each component of the processing device 10 shown above is a functional concept, and does not necessarily have to be physically configured as shown in the figure.
  • the specific form of distribution and integration of the functions of the processing device 10 is not limited to that shown in the figure, and all or part of it can be functionally or physically distributed or integrated in any unit depending on various loads, usage conditions, etc.
  • each process performed by the processing device 10 may be realized, in whole or in part, by a CPU and a program analyzed and executed by the CPU. Furthermore, each process performed by the processing device 10 may be realized as hardware using wired logic.
  • [Program] FIG. 6 is a diagram showing an example of a computer that realizes the processing device 10 by executing a program.
  • the computer 1000 has, for example, a memory 1010 and a CPU 1020.
  • the computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
  • the memory 1010 includes a ROM 1011 and a RAM 1012.
  • the ROM 1011 stores a boot program such as a BIOS (Basic Input Output System).
  • the hard disk drive interface 1030 is connected to a hard disk drive 1090.
  • the disk drive interface 1040 is connected to a disk drive 1100.
  • a removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1100.
  • the serial port interface 1050 is connected to a mouse 1110 and a keyboard 1120, for example.
  • the video adapter 1060 is connected to a display 1130, for example.
  • the hard disk drive 1090 stores, for example, an OS (Operating System) 1091, an application program 1092, a program module 1093, and program data 1094. That is, the programs that define each process of the processing device 10 are implemented as program modules 1093 in which code executable by the computer 1000 is written.
  • the program modules 1093 are stored, for example, in the hard disk drive 1090.
  • a program module 1093 for executing processes similar to the functional configuration of the processing device 10 is stored in the hard disk drive 1090.
  • the hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
  • the setting data used in the processing of the above-mentioned embodiment is stored as program data 1094, for example, in memory 1010 or hard disk drive 1090.
  • the CPU 1020 reads the program module 1093 or program data 1094 stored in memory 1010 or hard disk drive 1090 into RAM 1012 as necessary and executes it.
  • the program module 1093 and program data 1094 may not necessarily be stored in the hard disk drive 1090, but may be stored in a removable storage medium, for example, and read by the CPU 1020 via the disk drive 1100 or the like.
  • the program module 1093 and program data 1094 may be stored in another computer connected via a network (such as a LAN (Local Area Network), WAN (Wide Area Network)).
  • the program module 1093 and program data 1094 may then be read by the CPU 1020 from the other computer via the network interface 1070.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

A processing device (10) has: a data acquiring unit (11) that acquires learning data; a converting unit (12) that executes a predetermined conversion based on confidential information, on the learning data; and a training unit (14) that trains an estimation model (131) on the basis of the learning data subjected to the conversion by the converting unit (12).

Description

Learning device, estimation device, learning method, estimation method, and program
 The present invention relates to a learning device, an estimation device, a learning method, an estimation method, and a program.
 As machine learning models become more widespread, several attacks have been reported, such as Model Inversion Attack, Membership Inference, and CopyCat CNN, which extract information (e.g., training data) from trained models in operation, as well as methods to replicate the models themselves. These attacks pose a threat from the perspective of privacy protection and intellectual property protection.
 Measures proposed to counter these attacks include improving the robustness of the model itself, limiting the number of queries to the machine learning model, and adding noise to the information output by the model, but none of these methods provide a complete defense.
 The present invention has been made in consideration of the above, and aims to provide a learning device, an estimation device, a learning method, an estimation method, and a program that can reduce the risk of information leakage from a trained machine learning model and the risk of cloning the model.
 In order to solve the above-mentioned problems and achieve the objective, the learning device according to the present invention is characterized by having an acquisition unit that acquires learning data, a conversion unit that performs a predetermined conversion based on secret information on the learning data, and a learning unit that performs learning of an estimation model based on the learning data converted by the conversion unit.
 The estimation device according to the present invention is characterized by having an acquisition unit that acquires data to be processed, a conversion unit that performs a predetermined conversion based on secret information on the data to be processed, and an estimation unit that performs estimation processing, based on the data to be processed that has been converted by the conversion unit, using an estimation model that has been trained based on learning data that has been subjected to the predetermined conversion based on the secret information.
 According to the present invention, it is possible to reduce the risk of information leakage from a trained machine learning model and the risk of cloning the model.
 FIG. 1 is a diagram for explaining an outline of the process according to the embodiment.
 FIG. 2 is a diagram illustrating an example of a configuration of a processing device according to the embodiment.
 FIG. 3 is a diagram illustrating an example of the conversion process executed by the conversion unit.
 FIG. 4 is a diagram illustrating an example of a processing procedure of the learning process according to the embodiment.
 FIG. 5 is a diagram illustrating an example of a processing procedure of the estimation process according to the embodiment.
 FIG. 6 is a diagram illustrating an example of a computer that realizes a learning device by executing a program.
 Below, embodiments of the learning device, estimation device, learning method, estimation method, and program according to the present application will be described in detail with reference to the drawings. The present invention is not limited to the embodiments described below.
[Embodiment]
[Overview of Processing of the Embodiment]
 FIG. 1 is a diagram for explaining an outline of the process according to the embodiment. In the learning phase, first, the learning data (e.g., original data Dp) is transformed in accordance with a specific rule based on secret information Kc ((1) in FIG. 1).
 Then, in the learning phase, the estimation model 131 (a machine learning model) is trained using only the learning data that has been transformed according to this rule ((2) in FIG. 1). In this way, in the embodiment, a trained model is generated that can correctly recognize only data that has been transformed according to the specific rule based on the secret information ((3) in FIG. 1).
 In the operation phase, the data to be processed is converted according to the same specific rule based on the secret information Kc as in the learning phase, and is then input to the estimation model 131 for estimation processing. In this case, only users or devices that have the secret information and are capable of converting data according to the specific rule can use the estimation model 131 with high accuracy.
 In other words, for data that has not been converted based on the secret information, or that has been converted according to a rule other than the specific rule, the estimation model 131 has insufficient detection and classification accuracy, and it is a model of no value to other users or devices.
[Processing device]
 FIG. 2 is a diagram showing an example of the configuration of the processing device according to the embodiment. The processing device 10 is realized by, for example, loading a predetermined program into a computer including a ROM (Read Only Memory), a RAM (Random Access Memory), a CPU (Central Processing Unit), and the like, and having the CPU execute the program. The processing device 10 also has a communication interface for transmitting and receiving various information to and from other devices connected via a network or the like.
 The processing device 10 has a data acquisition unit 11 (acquisition unit), a conversion unit 12, an estimation unit 13, and a learning unit 14.
 The data acquisition unit 11 acquires data to be processed by the processing device 10. In the learning phase, it acquires learning data, for example a labeled image data set or a labeled table data set. In the operation phase, it acquires the data to be processed in the estimation process. The data acquisition unit 11 acquires these various types of data, for example, via a communication interface.
 The conversion unit 12 performs a predetermined conversion based on secret information on the learning data acquired by the data acquisition unit 11 or on the data to be processed. The conversion unit 12 performs the conversion based on the same secret information and according to the same rule in the learning phase and the operation phase.
 The secret information is, for example, a secret key stored in a module of the processing device 10. Specifically, the conversion unit 12 uses, as the secret information, a secret key stored in, for example, a TPM (Trusted Platform Module). This makes it possible, in the embodiment, to generate an estimation model 131 that operates accurately only on a specific device.
 The predetermined transformation is a transformation that follows a specific rule. FIG. 3 is a diagram for explaining an example of the transformation process executed by the transformation unit 12. As shown in FIG. 3, when the data acquired by the data acquisition unit 11 is a labeled image data set (e.g., Cifar10), the transformation unit 12 performs, for example, a Jigsaw transformation (Reference 1) whose transformation pattern is fixed by a seed value (the secret information). The Jigsaw transformation is a transformation method in which an image is cut into patches and the positions of the patches are rearranged.
Reference 1: “aleju/imgaug”, [online], [retrieved October 28, 2022], Internet <URL: https://github.com/aleju/imgaug>
 When the data is image data, the transformation unit 12 may perform, in addition to the Jigsaw transformation, a color tone transformation, an affine transformation, or a combination of two or more of the Jigsaw, color tone, and affine transformations, using transformation rules set from the secret information. For example, a color tone transformation is a grayscale transformation of a color image. An affine transformation combines linear transformations, such as rotation, scaling (enlargement or reduction), and skew, with translation.
 When the data is table data, the conversion unit 12 performs, for example, a shuffle of the features, application of noise, or a combination of both, using conversion rules set from the secret information. These conversion methods are merely examples; the conversion unit 12 can apply various conversion methods according to the type of data.
 The estimation unit 13 uses the estimation model 131 to perform estimation processing based on the data converted by the conversion unit 12. The estimation model 131 is a machine learning model that, when data converted by the conversion unit 12 is input, performs predetermined estimation processing such as detection processing, classification processing, or matching processing, and outputs the estimation result.
 In the learning phase, the estimation unit 13 inputs the learning data converted by the conversion unit 12 to the estimation model 131, and outputs the estimation result output from the estimation model 131 to the learning unit 14.
 In the operation phase, the estimation unit 13 inputs the data to be processed that has been converted by the conversion unit 12 to the estimation model 131, and outputs the estimation result output from the estimation model 131 to an output device, an external device, or the like. The estimation model 131 used in the operation phase has been trained based on learning data subjected to the predetermined conversion based on the secret information.
 The learning unit 14 performs learning of the estimation model 131 based on the learning data converted by the conversion unit 12. The learning unit 14 performs the learning of the estimation model 131 using machine learning such as supervised learning, unsupervised learning, reinforcement learning, or deep learning.
[Learning process]
 Next, the learning process executed by the processing device 10 will be described. FIG. 4 shows an example of the processing procedure of the learning process according to the embodiment.
 As shown in Fig. 4, in the processing device 10, the data acquisition unit 11 acquires learning data (step S11), and the conversion unit 12 performs the predetermined conversion based on the secret information on the learning data acquired in step S11 (step S12).
 The estimation unit 13 uses the estimation model 131 to perform an estimation process based on the learning data converted by the conversion unit 12 (step S13), and outputs the estimation result to the learning unit 14.
 The learning unit 14 executes a learning process for the estimation model 131 based on the learning data converted by the conversion unit 12 (step S14). For example, when labeled learning data is used, the learning unit 14 optimizes the parameters of the estimation model 131 so that the estimation result output from the estimation unit 13 in step S13 approaches the label of the learning data.
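The optimization in step S14 can be pictured with a minimal sketch: a one-parameter linear model whose prediction is driven toward the label by gradient descent on a squared error. The model, learning rate, and loss below are illustrative stand-ins; the actual estimation model 131 and its optimizer are not specified by the embodiment.

```python
def train(xs, ys, lr=0.1, epochs=200):
    # single model parameter, optimized so that pred approaches the label
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x
            w -= lr * (pred - y) * x  # gradient step on (pred - y)**2 / 2
    return w

# the inputs here play the role of already-converted learning data (step S12);
# the labels follow y = 2x, so w should converge to 2
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The same loop structure applies regardless of what conversion produced the inputs: the loss compares the model output only against the label, so the model never needs to know the secret information itself.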
[Estimation process]
 Next, the estimation process executed by the processing device 10 will be described. Fig. 5 shows an example of the processing procedure of the estimation process according to the embodiment.
 In the processing device 10, the data acquisition unit 11 acquires the data to be processed (step S21), and the conversion unit 12 performs the predetermined conversion based on the secret information on the data to be processed acquired in step S21 (step S22).
 The estimation unit 13 uses the estimation model 131 to perform an estimation process based on the data to be processed converted by the conversion unit 12 (step S23). As shown in Fig. 4, the estimation model 131 is a model trained based on learning data that has undergone the predetermined conversion based on the secret information. In the processing device 10, the estimation unit 13 outputs the estimation result (step S24) and ends the estimation process.
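One way to picture steps S21 to S24 is a thin wrapper that applies the secret-keyed conversion before every call to the trained model. The seed value, the `keyed_transform` shuffle, and the stand-in model below are hypothetical names introduced only for this sketch.

```python
import random

def keyed_transform(features, seed):
    # same conversion as in the learning phase: a permutation fixed by the seed
    order = list(range(len(features)))
    random.Random(seed).shuffle(order)
    return [features[i] for i in order]

class EstimationService:
    # operation-phase wrapper: converts incoming data with the secret seed
    # before passing it to the trained model (here, any callable)
    def __init__(self, model, secret_seed):
        self.model = model
        self.secret_seed = secret_seed

    def estimate(self, data):
        return self.model(keyed_transform(data, self.secret_seed))

# toy "trained" model: reports the position holding the largest value
service = EstimationService(model=lambda x: x.index(max(x)), secret_seed=99)
result = service.estimate([10, 0, 0, 0])
```

Because the seed lives inside the service, callers submit raw data and never see the conversion rule; a copy of the model without the seed receives inputs in the wrong arrangement.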
[Evaluation experiment]
 An evaluation experiment was conducted on the processing device 10. In the evaluation experiment, a first CNN (Convolutional Neural Network) model was generated by applying the learning method according to the embodiment. Specifically, the first CNN model was trained using training data obtained by applying, to the labeled image dataset Cifar10, a first Jigsaw transformation whose transformation pattern was fixed by a seed value (the secret information).
 The classification accuracy of this first CNN model was 71.15% on the first evaluation data, to which the same first Jigsaw transformation as used during training was applied.
 In contrast, the classification accuracy of the first CNN model was 42.36% on the second evaluation data, to which no Jigsaw transformation was applied at all, and 41.00% on the third evaluation data, to which a Jigsaw transformation with a pattern different from the first Jigsaw transformation was applied.
 Therefore, in the first CNN model, the classification accuracy on the second and third evaluation data is significantly degraded compared with the classification accuracy on the first evaluation data.
 As a result, it was experimentally confirmed that by training a machine learning model using only data that has undergone a predetermined transformation (first transformation) based on secret information, the resulting model has insufficient accuracy on data that has not undergone the first transformation, or on data that has undergone a second transformation following rules different from the first transformation.
 Note that when a second CNN model with the same configuration as the first CNN model used in this evaluation experiment was trained and evaluated normally, that is, with no transformation applied to the training data and none applied at evaluation, the measured classification accuracy was 74.63%.
 In this evaluation experiment, a small CNN model with few parameters was used, so its accuracy is lower than that of common models such as ResNet. Even so, the accuracy of the first CNN model (71.15%) is close to that of the second CNN model. The accuracy degradation of the first CNN model relative to the second CNN model is therefore slight, and the first CNN model is fully suitable for practical operation.
[Effects of the embodiment]
 In this manner, the processing device 10 according to the present embodiment trains the estimation model 131 using only learning data that has undergone a predetermined transformation based on the secret information. The predetermined transformation follows a specific rule that uses the secret information.
 As the evaluation experiment also shows, with the processing device 10, only a user or device that has the secret information and can transform data according to the specific rule can use the estimation model 131 at its correct accuracy. In other words, for data that has not been transformed based on the secret information, or that has been transformed according to a rule different from the specific rule, the estimation model 131 is merely a model with insufficient detection and classification accuracy.
 Therefore, the estimation model 131 is a worthless model for a user or device that does not have the secret information or the data transformation rule.
 Even if an attacker were to mount an attack that extracts training data from this estimation model 131 (such as a Model Inversion Attack or Membership Inference), it would be difficult to accurately recover the original data from the transformed data, so the attack is not worth mounting. Similarly, even if an attacker were to extract the parameters of the estimation model 131, or to generate a duplicate of the model in the attacker's own environment, the model would only be a low-accuracy model to a user without the secret information, so there would be no value in stealing it in the first place.
 In the present embodiment, the estimation model 131 is generated so that it is valuable to a user or device that has the secret information and can transform data according to the specific rule, but worthless to a user or device that lacks the secret information or the data transformation rule. This reduces the risk of information leakage from the estimation model 131 and the risk of its duplication.
[System configuration of the embodiment]
 Each component of the processing device 10 described above is functionally conceptual and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of the functions of the processing device 10 is not limited to the illustrated one; all or part of the functions can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
 Each process performed by the processing device 10 may be realized, in whole or in arbitrary part, by a CPU and a program analyzed and executed by the CPU. Each process performed by the processing device 10 may also be realized as hardware using wired logic.
 Of the processes described in the embodiment, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can be performed automatically using known methods. In addition, the processing procedures, control procedures, specific names, and information including various data and parameters described above and shown in the drawings can be changed as appropriate unless otherwise specified.
[Program]
 Fig. 6 is a diagram showing an example of a computer that realizes the processing device 10 by executing a program. The computer 1000 has, for example, a memory 1010 and a CPU 1020. The computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
 The memory 1010 includes a ROM 1011 and a RAM 1012. The ROM 1011 stores a boot program such as a BIOS (Basic Input Output System). The hard disk drive interface 1030 is connected to a hard disk drive 1090. The disk drive interface 1040 is connected to a disk drive 1100. A removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1100. The serial port interface 1050 is connected to, for example, a mouse 1110 and a keyboard 1120. The video adapter 1060 is connected to, for example, a display 1130.
 The hard disk drive 1090 stores, for example, an OS (Operating System) 1091, an application program 1092, a program module 1093, and program data 1094. That is, the program that defines each process of the processing device 10 is implemented as a program module 1093 in which code executable by the computer 1000 is written. The program module 1093 is stored, for example, in the hard disk drive 1090. For example, a program module 1093 for executing processes similar to those of the functional configuration of the processing device 10 is stored in the hard disk drive 1090. The hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
 The setting data used in the processing of the above-described embodiment is stored as program data 1094, for example, in the memory 1010 or the hard disk drive 1090. The CPU 1020 reads the program module 1093 and the program data 1094 stored in the memory 1010 or the hard disk drive 1090 into the RAM 1012 as necessary and executes them.
 The program module 1093 and the program data 1094 are not necessarily stored in the hard disk drive 1090; they may be stored, for example, in a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and the program data 1094 may be stored in another computer connected via a network (a LAN (Local Area Network), a WAN (Wide Area Network), or the like), and read from the other computer by the CPU 1020 via the network interface 1070.
 Although an embodiment of the invention made by the present inventors has been described above, the present invention is not limited by the description and drawings forming part of this disclosure. That is, other embodiments, examples, operational techniques, and the like made by those skilled in the art based on this embodiment are all included within the scope of the present invention.
REFERENCE SIGNS LIST
10 Processing device
11 Data acquisition unit
12 Conversion unit
13 Estimation unit
14 Learning unit
131 Estimation model

Claims (8)

  1.  A learning device comprising:
     an acquisition unit that acquires learning data;
     a conversion unit that performs a predetermined conversion based on secret information on the learning data; and
     a learning unit that performs learning of an estimation model based on the learning data converted by the conversion unit.
  2.  The learning device according to claim 1, wherein the secret information is a secret key stored in a module of the learning device.
  3.  The learning device according to claim 1, wherein the predetermined conversion is a conversion that follows a specific rule.
  4.  The learning device according to claim 3, wherein the conversion unit performs, when the learning data is image data, a Jigsaw transformation, a color tone transformation, an affine transformation, or a transformation combining two or more of the Jigsaw transformation, the color tone transformation, and the affine transformation, and performs, when the learning data is table data, a shuffle of features, an application of noise, or a transformation combining the shuffle of features and the application of noise.
  5.  An estimation device comprising:
     an acquisition unit that acquires data to be processed;
     a conversion unit that performs a predetermined conversion based on secret information on the data to be processed; and
     an estimation unit that performs an estimation process, using an estimation model trained based on learning data subjected to the predetermined conversion based on the secret information, based on the data to be processed converted by the conversion unit.
  6.  A learning method executed by a learning device, the learning method comprising:
     a step of acquiring learning data;
     a step of performing a predetermined conversion based on secret information on the learning data; and
     a step of performing learning of an estimation model based on the learning data converted in the step of performing the conversion.
  7.  An estimation method executed by an estimation device, the estimation method comprising:
     a step of acquiring data to be processed;
     a step of performing a predetermined conversion based on secret information on the data to be processed; and
     a step of performing an estimation process, using an estimation model trained based on learning data subjected to the predetermined conversion based on the secret information, based on the data to be processed converted in the step of performing the conversion.
  8.  A program for causing a computer to function as the learning device according to any one of claims 1 to 3, or as the estimation device according to claim 5.
PCT/JP2022/041070 2022-11-02 2022-11-02 Learning device, estimating device, learning method, estimating method, and program WO2024095420A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/041070 WO2024095420A1 (en) 2022-11-02 2022-11-02 Learning device, estimating device, learning method, estimating method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/041070 WO2024095420A1 (en) 2022-11-02 2022-11-02 Learning device, estimating device, learning method, estimating method, and program

Publications (1)

Publication Number Publication Date
WO2024095420A1 true WO2024095420A1 (en) 2024-05-10

Family

ID=90929996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/041070 WO2024095420A1 (en) 2022-11-02 2022-11-02 Learning device, estimating device, learning method, estimating method, and program

Country Status (1)

Country Link
WO (1) WO2024095420A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010503252A * 2006-08-31 2010-01-28 International Business Machines Corporation Computing platform proof

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010503252A * 2006-08-31 2010-01-28 International Business Machines Corporation Computing platform proof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HITOSHI KIYA; APRILPYONE MAUNGMAUNG; YUMA KINOSHITA; SHOKO IMAIZUMI; SAYAKA SHIOTA: "An Overview of Compressible and Learnable Image Transformation with Secret Key and Its Applications", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 April 2022 (2022-04-16), 201 Olin Library Cornell University Ithaca, NY 14853, XP091196806 *
KITAYAMA MASAKI, KIYA HITOSHI: "A privacy preserving calculation method of eigenface using EtC images", IEICE TECHNICAL REPORT, IEICE, JP, vol. 118, no. 494 (EMM2018-91), 6 March 2019 (2019-03-06), JP, pages 1 - 6, XP009554706, ISSN: 0913-5685 *

Similar Documents

Publication Publication Date Title
JP7272363B2 (en) Precision privacy-preserving real-valued function evaluation
Harer et al. Learning to repair software vulnerabilities with generative adversarial networks
Ananth et al. Secure software leasing
WO2015035827A1 (en) Method and apparatus for providing string encryption and decryption in program files
Aschieri et al. On natural deduction in classical first-order logic: Curry–Howard correspondence, strong normalization and Herbrand's theorem
Nithyanand et al. A theoretical analysis: Physical unclonable functions and the software protection problem
Gupta et al. Sigma: Secure gpt inference with function secret sharing
WO2023096571A2 (en) Data processing for release while protecting individual privacy
WO2024095420A1 (en) Learning device, estimating device, learning method, estimating method, and program
Pérez et al. Universal steganography detector based on an artificial immune system for JPEG images
Steffen et al. Breaking and protecting the crystal: Side-channel analysis of dilithium in hardware
Dubiński et al. Towards more realistic membership inference attacks on large diffusion models
WO2018008547A1 (en) Secret computation system, secret computation device, secret computation method, and program
Mossel et al. Shuffling by semi-random transpositions
Cristiani et al. Fit the joint moments: how to attack any masking scheme
Beneduci Positive operator valued measures and feller Markov kernels
CN109559269A (en) A kind of method and terminal of image encryption
Blazy et al. Towards a formally verified obfuscating compiler
CN107667368B (en) System, method and storage medium for obfuscating a computer program
Rajpal et al. Fast digital watermarking of uncompressed colored images using bidirectional extreme learning machine
EP4127981A1 (en) Systems, methods, and storage media for creating secured transformed code from input code using a neural network to obscure a function
Gong et al. Gradient leakage attacks in federated learning
Harraz et al. High-fidelity quantum teleportation through noisy channels via weak measurement and environment-assisted measurement
WO2021038827A1 (en) Information processing method, information processing program, and information processing device
Ji et al. Improving Adversarial Robustness with Data-Centric Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964438

Country of ref document: EP

Kind code of ref document: A1