WO2024079802A1 - Evaluation device, evaluation method, and evaluation program - Google Patents

Evaluation device, evaluation method, and evaluation program Download PDF

Info

Publication number
WO2024079802A1
WO2024079802A1 (PCT/JP2022/037938)
Authority
WO
WIPO (PCT)
Prior art keywords
data
unfairness
user
machine learning
learning model
Prior art date
Application number
PCT/JP2022/037938
Other languages
French (fr)
Japanese (ja)
Inventor
Toshiki Shibahara (俊樹 芝原)
Takayuki Miura (尭之 三浦)
Masanobu Kii (真昇 紀伊)
Atsunori Ichikawa (敦謙 市川)
Original Assignee
Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority to PCT/JP2022/037938
Publication of WO2024079802A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Definitions

  • the present invention relates to an evaluation device, an evaluation method, and an evaluation program for evaluating fairness regarding privacy risks.
  • Machine learning technologies such as Deep Neural Networks (DNNs) have been pointed out as posing privacy risks due to their tendency to memorize training data. Specifically, it has been shown that it is possible to infer from the output of a trained model whether or not specific data was included in the training data. Therefore, consideration must be given to privacy risks when handling data that users do not want others to know, such as medical data or web browsing history.
  • in the field of machine learning, there are conventional techniques for evaluating fairness between individuals in classification problems (see Non-Patent Documents 1 and 2), but no technique exists for evaluating fairness with respect to privacy risks as described above. The objective of the present invention is therefore to evaluate fairness with respect to privacy risks.
  • the present invention is characterized by comprising a privacy risk calculation unit that calculates the privacy risk of each piece of data included in a dataset used to train a machine learning model, a gain calculation unit that calculates the gain that a user who provides data to the dataset will gain from providing the data, an unfairness calculation unit that calculates the difference between the privacy risk of the data and the gain that the user will gain from providing the data as the unfairness of the user, and evaluates the unfairness of the machine learning model based on the calculated unfairness of each of the users, and an output processing unit that outputs the evaluation result of the unfairness of the machine learning model.
  • the present invention makes it possible to evaluate fairness regarding privacy risks.
  • FIG. 1 is a diagram for explaining an overview of the evaluation device.
  • FIG. 2 is a diagram illustrating an example of the configuration of the evaluation device.
  • FIG. 3 is a flowchart illustrating an example of a processing procedure executed by the evaluation device.
  • FIG. 4 is a flowchart for explaining an application example of the evaluation device.
  • FIG. 5 is a diagram illustrating a computer that executes an evaluation program.
  • the evaluation device of this embodiment evaluates whether users who provide data to a dataset used to build a machine learning model have obtained benefits that are commensurate with the privacy risk (fairness).
  • the evaluation device extracts data from a dataset used to build a machine learning model and calculates the privacy risk of the data.
  • the evaluation device also calculates the gain that a user can obtain by providing the data to the dataset.
  • the evaluation device calculates the unfairness of each user based on the difference between the gain expected from the user's privacy risk and the gain that the user actually obtains by providing data.
  • the greater the difference between the gain expected from the user's privacy risk and the gain that the user actually obtains by providing data, the higher the unfairness the evaluation device calculates.
  • in this way, the evaluation device quantifies whether each user who provided data to the dataset has obtained a gain commensurate with the privacy risk (unfairness).
  • the evaluation device evaluates the unfairness of the machine learning model based on the calculated unfairness of each user.
  • the evaluation device 10 includes, for example, an input/output unit 11, a storage unit 12, and a control unit 13.
  • the input/output unit 11 is an interface that handles the input and output of various data.
  • the input/output unit 11 accepts input of a dataset used to build a machine learning model.
  • the input dataset is stored in the storage unit 12.
  • the storage unit 12 stores data, programs, etc. that are referenced when the control unit 13 executes various processes.
  • the storage unit 12 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk.
  • the storage unit 12 stores a data set, etc., received by the input/output unit 11.
  • the storage unit 12 may store information indicating which user provided each piece of data in the dataset.
  • the control unit 13 is responsible for controlling the entire evaluation device 10.
  • the functions of the control unit 13 are realized, for example, by a CPU (Central Processing Unit) executing a program stored in the storage unit 12.
  • the control unit 13 includes, for example, a privacy risk calculation unit 131, a gain calculation unit 132, an unfairness calculation unit 133, and an output processing unit 134.
  • the privacy risk calculation unit 131 calculates the privacy risk of each piece of data included in the dataset. For example, the privacy risk calculation unit 131 calculates the privacy risk by calculating the lower bound (LB) of the differential privacy parameter ε based on formula (1).
  • the upper bound of the false positive rate (FPR_UB) and the upper bound of the false negative rate (FNR_UB) in formula (1) are calculated using the false positive rate (FPR) and the false negative rate (FNR) observed when a game of guessing whether the data to be evaluated was used for model training is repeated many times (for example, about 1000 times) (see formula (2)).
  • the upper bounds of the false positive rate (FPR) and the false negative rate (FNR) can be calculated using the Clopper-Pearson method.
  • the privacy risk calculation unit 131 may also calculate the privacy risk based on, for example, the success rate of membership inference performed many times.
  • the gain calculation unit 132 calculates the gain that a user obtains by providing data to the dataset.
  • the gain that a user obtains by providing data to the dataset is, for example, the degree to which the accuracy of a machine learning model improves when the data is used to train the machine learning model.
  • the gain calculation unit 132 calculates the degree to which the accuracy of the machine learning model improves when the data provided by the user is used for learning as follows. First, the gain calculation unit 132 constructs n shadow models (first shadow models) that use the data provided by the user for learning, and n shadow models (second shadow models) that do not use the data for learning.
  • the gain calculation unit 132 takes the number of first shadow models, among the n first shadow models, that output the correct answer as c_in. The value obtained by dividing c_in by n (c_in/n) is the accuracy of the first shadow models.
  • the gain calculation unit 132 likewise takes the number of second shadow models, among the n second shadow models, that output the correct answer as c_out. The value obtained by dividing c_out by n (c_out/n) is the accuracy of the second shadow models.
  • the gain calculation unit 132 calculates the difference between the accuracy of the first shadow models (c_in/n) and the accuracy of the second shadow models (c_out/n) as the gain (g) of the user's data (see formula (3)).
  • the gain calculation unit 132 may calculate the gain based on the extent to which other data held by the user improves the accuracy of the shadow model.
  • the user's gain may also be a service or monetary reward provided in exchange for the data provided by the user.
  • the unfairness calculation unit 133 calculates the unfairness of each user who provided data to the dataset, and evaluates the unfairness of the machine learning model based on the calculated unfairness of each user. For example, the unfairness calculation unit 133 calculates, as the unfairness of a user, the difference between the gain expected from the privacy risk of the data calculated by the privacy risk calculation unit 131 and the gain obtained by providing the data as calculated by the gain calculation unit 132. The unfairness calculation unit 133 then takes the maximum of the calculated unfairness values of the users as the unfairness of the machine learning model.
  • the unfairness calculation unit 133 calculates the difference between the normalized risk r' and the normalized gain g' for each user, and takes the maximum as the unfairness (δ̂) of the machine learning model (see formula (4)).
  • if the unfairness values of the users include outliers, the unfairness calculation unit 133 may take the maximum of the unfairness values excluding the outliers as the unfairness of the machine learning model.
  • the output processing unit 134 outputs the processing result by the control unit 13. For example, the output processing unit 134 outputs the evaluation result of the unfairness of the machine learning model by the unfairness calculation unit 133.
  • the privacy risk calculation unit 131 of the evaluation device 10 calculates a privacy risk of each piece of data included in the data set (S1).
  • the gain calculation unit 132 calculates the gain obtained by the user by providing data to the dataset (S2). For example, the gain calculation unit 132 calculates the extent to which the accuracy of the shadow model will improve if the data provided by the user is used for learning.
  • the unfairness calculation unit 133 evaluates the unfairness of the machine learning model (S3). For example, the unfairness calculation unit 133 calculates, as the unfairness of a user, the difference between the gain expected from the privacy risk of the data calculated in S1 and the actual gain of the user obtained by providing the data calculated in S2. The unfairness calculation unit 133 then takes the maximum of the calculated unfairness values of the users as the unfairness of the machine learning model. The output processing unit 134 then outputs the evaluation result of the unfairness of the machine learning model obtained in S3 (S4).
  • the evaluation device 10 can evaluate the unfairness of the machine learning model.
  • the administrator of the evaluation device 10 designs the NN (neural network) whose privacy risk is to be calculated (S11). For example, an NN satisfying differential privacy is trained by DP-SGD (Differentially Private Stochastic Gradient Descent).
  • the administrator selects the dataset to be used in evaluating the unfairness of users and the users to be evaluated for unfairness (S12). For example, the administrator selects about 100 users, taking into consideration the diversity of users.
  • the evaluation device 10 uses the dataset selected in S12 to calculate the unfairness of each user selected in S12, and evaluates the unfairness of the machine learning model (NN) based on the calculated unfairness of each user (S13). For example, the evaluation device 10 calculates the privacy risk of each piece of data in the dataset with respect to the NN designed in S11. The evaluation device 10 also calculates the gain of each user selected in S12 (each user who provided data to the dataset). The evaluation device 10 then calculates the unfairness of each user from the calculated privacy risk of each piece of data and the gain of each user who provided data, and takes the maximum of the calculated unfairness values as the unfairness of the machine learning model (NN).
  • each component of each part shown in the figure is a functional concept, and does not necessarily have to be physically configured as shown in the figure.
  • the specific form of distribution and integration of each device is not limited to that shown in the figure, and all or a part of it can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc.
  • each processing function performed by each device can be realized in whole or in any part by a CPU and a program executed by the CPU, or can be realized as hardware using wired logic.
  • the evaluation device 10 can be implemented by installing a program (evaluation program) as package software or online software on a desired computer.
  • the program can be executed by an information processing device, causing the information processing device to function as the evaluation device 10.
  • the information processing device referred to here includes mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone Systems), as well as terminals such as PDAs (Personal Digital Assistants).
  • FIG. 5 is a diagram showing an example of a computer that executes an evaluation program.
  • the computer 1000 has, for example, a memory 1010 and a CPU 1020.
  • the computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. Each of these components is connected by a bus 1080.
  • the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012.
  • the ROM 1011 stores a boot program such as a BIOS (Basic Input Output System).
  • the hard disk drive interface 1030 is connected to a hard disk drive 1090.
  • the disk drive interface 1040 is connected to a disk drive 1100.
  • a removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1100.
  • the serial port interface 1050 is connected to a mouse 1110 and a keyboard 1120, for example.
  • the video adapter 1060 is connected to a display 1130, for example.
  • the hard disk drive 1090 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. That is, the programs that define each process executed by the evaluation device 10 described above are implemented as program modules 1093 in which computer-executable code is written.
  • the program modules 1093 are stored, for example, in the hard disk drive 1090.
  • a program module 1093 for executing processes similar to the functional configuration of the evaluation device 10 is stored in the hard disk drive 1090.
  • the hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
  • the data used in the processing of the above-described embodiment is stored as program data 1094, for example, in memory 1010 or hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 or program data 1094 stored in memory 1010 or hard disk drive 1090 into RAM 1012 as necessary and executes it.
  • the program module 1093 and program data 1094 are not limited to being stored in the hard disk drive 1090, but may be stored in, for example, a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and program data 1094 may be stored in another computer connected via a network (such as a LAN (Local Area Network), WAN (Wide Area Network)). The program module 1093 and program data 1094 may then be read by the CPU 1020 from the other computer via the network interface 1070.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

This evaluation device calculates the privacy risk of each piece of data included in a dataset used to train a machine-learning model. The evaluation device also calculates the gain that a user has obtained by providing data to the dataset; for example, it calculates how much the accuracy of the machine-learning model improves when the data is used for training. The evaluation device then calculates, as the unfairness of each user, the difference between the gain expected from the privacy risk of the data and the gain the user obtained by providing the data, and evaluates the unfairness of the machine-learning model using the calculated unfairness of each user.

Description

EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM
 The present invention relates to an evaluation device, an evaluation method, and an evaluation program for evaluating fairness regarding privacy risks.
 It has been pointed out that machine learning technologies such as Deep Neural Networks (DNNs) pose privacy risks because they tend to memorize training data. Specifically, it has been shown that whether or not specific data was included in the training data can be inferred from the output of a trained model. Therefore, consideration must be given to privacy risks when handling data that users do not want others to know, such as medical data or web browsing history.
 In addition, for data used as training data in machine learning, it is necessary to consider the balance between the user's privacy risk and the gain the user receives from providing the data (for example, how much the provided data contributes to improving the accuracy of the trained model). If a user's privacy risk is balanced against the gain the user receives from providing the data, the situation can be considered fair.
 In the field of machine learning, there are conventional techniques for evaluating fairness between individuals in classification problems (see Non-Patent Documents 1 and 2), but no technique exists for evaluating fairness with respect to privacy risks as described above. The objective of the present invention is therefore to evaluate fairness with respect to privacy risks.
 In order to solve the above-mentioned problems, the present invention is characterized by comprising: a privacy risk calculation unit that calculates the privacy risk of each piece of data included in a dataset used to train a machine learning model; a gain calculation unit that calculates the gain that a user who provided data to the dataset obtains by providing the data; an unfairness calculation unit that calculates, as the unfairness of the user, the difference between the privacy risk of the data and the gain the user obtains by providing the data, and evaluates the unfairness of the machine learning model based on the calculated unfairness of each of the users; and an output processing unit that outputs the evaluation result of the unfairness of the machine learning model.
 According to the present invention, it is possible to evaluate fairness regarding privacy risks.
FIG. 1 is a diagram for explaining an overview of the evaluation device.
FIG. 2 is a diagram illustrating an example of the configuration of the evaluation device.
FIG. 3 is a flowchart illustrating an example of a processing procedure executed by the evaluation device.
FIG. 4 is a flowchart for explaining an application example of the evaluation device.
FIG. 5 is a diagram illustrating a computer that executes an evaluation program.
 Hereinafter, a mode (embodiment) for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to this embodiment.
[Overview]
 The evaluation device of this embodiment evaluates whether users who provided data to a dataset used to build a machine learning model have obtained a gain commensurate with their privacy risk (fairness).
 For example, as shown in FIG. 1, the evaluation device extracts data from the dataset used to build the machine learning model and calculates the privacy risk of that data. The evaluation device also calculates the gain the user obtains by providing the data to the dataset.
 The evaluation device then calculates the unfairness of each user based on the difference between the gain expected from the user's privacy risk and the gain the user actually obtained by providing the data. The larger this difference, the higher the unfairness the evaluation device calculates. In this way, the evaluation device quantifies whether each user who provided data to the dataset has obtained a gain commensurate with the privacy risk (unfairness). The evaluation device then evaluates the unfairness of the machine learning model based on the calculated unfairness of each user.
[Configuration example]
 Next, a configuration example of the evaluation device 10 will be described with reference to FIG. 2. The evaluation device 10 includes, for example, an input/output unit 11, a storage unit 12, and a control unit 13.
 The input/output unit 11 is an interface that handles the input and output of various data. For example, the input/output unit 11 accepts input of a dataset used to build a machine learning model. The input dataset is stored in the storage unit 12.
 The storage unit 12 stores data, programs, and the like that are referenced when the control unit 13 executes various processes. The storage unit 12 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or by a storage device such as a hard disk or an optical disk. For example, the storage unit 12 stores the dataset received by the input/output unit 11. The storage unit 12 may also store information indicating which user provided each piece of data in the dataset.
 The control unit 13 controls the entire evaluation device 10. The functions of the control unit 13 are realized, for example, by a CPU (Central Processing Unit) executing a program stored in the storage unit 12. The control unit 13 includes, for example, a privacy risk calculation unit 131, a gain calculation unit 132, an unfairness calculation unit 133, and an output processing unit 134.
 The privacy risk calculation unit 131 calculates the privacy risk of each piece of data included in the dataset. For example, the privacy risk calculation unit 131 calculates the privacy risk by calculating the lower bound (LB) of the differential privacy parameter ε based on the following formula (1).
[Formula (1): statistical lower bound of the differential privacy parameter ε, computed from FPR_UB and FNR_UB (shown as an image in the original publication)]
 The upper bound of the false positive rate (FPR_UB) and the upper bound of the false negative rate (FNR_UB) in formula (1) are calculated from the false positive rate (FPR) and the false negative rate (FNR) observed when a game of guessing whether the data under evaluation was used for model training is repeated many times (for example, about 1000 times) (see formula (2)). The upper bounds of the FPR and FNR can be calculated with the Clopper-Pearson method.
[Formula (2): upper bounds FPR_UB and FNR_UB obtained from the FPR and FNR observed over the repeated guessing game (shown as an image in the original publication)]
 The privacy risk calculation unit 131 may also calculate the privacy risk based on, for example, the success rate of membership inference performed many times.
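 Formulas (1) and (2) appear only as images in the original publication. The following Python sketch therefore shows one plausible way to compute such a lower bound, assuming the common auditing form ε_LB = max(log((1 − FPR_UB)/FNR_UB), log((1 − FNR_UB)/FPR_UB)) with Clopper-Pearson upper bounds on the FPR and FNR observed over the repeated membership-guessing game; the function names, confidence level, and example counts are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.stats import beta


def clopper_pearson_upper(failures: int, trials: int, alpha: float = 0.05) -> float:
    """One-sided Clopper-Pearson upper confidence bound on a binomial proportion."""
    if failures >= trials:
        return 1.0
    return float(beta.ppf(1.0 - alpha, failures + 1, trials - failures))


def epsilon_lower_bound(false_positives: int, negatives: int,
                        false_negatives: int, positives: int,
                        alpha: float = 0.05) -> float:
    """Statistical lower bound on the differential privacy parameter epsilon,
    derived from the outcomes of repeated membership-guessing trials (assumed form)."""
    fpr_ub = clopper_pearson_upper(false_positives, negatives, alpha)
    fnr_ub = clopper_pearson_upper(false_negatives, positives, alpha)
    candidates = [np.log((1.0 - fpr_ub) / fnr_ub),
                  np.log((1.0 - fnr_ub) / fpr_ub),
                  0.0]
    return float(max(candidates))


# Example: 1000 trials without the target data and 1000 trials with it.
print(epsilon_lower_bound(false_positives=80, negatives=1000,
                          false_negatives=120, positives=1000))
```

 The larger this lower bound, the stronger the evidence that the trained model leaks information about the data point, which is why it can serve as the per-data privacy risk used by the privacy risk calculation unit 131.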
 The gain calculation unit 132 calculates the gain that a user obtains by providing data to the dataset. The gain obtained by providing data to the dataset is, for example, the degree to which the accuracy of a machine learning model improves when the data is used to train the machine learning model.
 For example, the gain calculation unit 132 calculates the degree to which the accuracy of the machine learning model improves when the data provided by the user is used for training as follows. First, the gain calculation unit 132 constructs n shadow models trained with the data provided by the user (first shadow models) and n shadow models trained without that data (second shadow models).
 Next, the gain calculation unit 132 takes the number of first shadow models, among the n first shadow models, that output the correct answer as c_in, and takes c_in/n as the accuracy of the first shadow models. Likewise, the gain calculation unit 132 takes the number of second shadow models, among the n second shadow models, that output the correct answer as c_out, and takes c_out/n as the accuracy of the second shadow models.
 The gain calculation unit 132 then calculates the difference between the accuracy of the first shadow models (c_in/n) and the accuracy of the second shadow models (c_out/n) as the gain (g) of the user's data (see formula (3)).
[Formula (3): g = c_in/n − c_out/n (shown as an image in the original publication)]
 Instead of the above method, the gain calculation unit 132 may calculate the gain based on the extent to which other data held by the user improves the accuracy of the shadow models. The user's gain may also be a service or monetary reward provided in exchange for the data provided by the user.
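 As a concrete illustration of the shadow-model comparison described above, the following sketch trains n pairs of shadow models with and without the user's data point and returns the accuracy difference g = c_in/n − c_out/n. The use of scikit-learn logistic regression, the random subsampling, the single evaluation point, and the value of n are assumptions made for illustration; the patent does not prescribe a specific model class or training procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def shadow_model_gain(X, y, user_x, user_y, eval_x, eval_y, n=32, seed=0):
    """Gain g = c_in/n - c_out/n: accuracy difference between shadow models
    trained with the user's data point and shadow models trained without it."""
    rng = np.random.default_rng(seed)
    c_in = c_out = 0
    for _ in range(n):
        # Each shadow model is trained on a random half of the remaining data.
        idx = rng.choice(len(X), size=len(X) // 2, replace=False)
        X_base, y_base = X[idx], y[idx]

        # First shadow model: the user's data is included in the training set.
        model_in = LogisticRegression(max_iter=1000).fit(
            np.vstack([X_base, user_x[None, :]]), np.append(y_base, user_y))
        c_in += int(model_in.predict(eval_x[None, :])[0] == eval_y)

        # Second shadow model: the user's data is excluded from the training set.
        model_out = LogisticRegression(max_iter=1000).fit(X_base, y_base)
        c_out += int(model_out.predict(eval_x[None, :])[0] == eval_y)

    return c_in / n - c_out / n
```

 Here the correctness of each shadow model is checked on a single evaluation point; checking it on a held-out test set instead would follow the same structure.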
 The unfairness calculation unit 133 calculates the unfairness of each user who provided data to the dataset, and evaluates the unfairness of the machine learning model based on the calculated unfairness of each user. For example, the unfairness calculation unit 133 calculates, as the unfairness of a user, the difference between the gain expected from the privacy risk of the data calculated by the privacy risk calculation unit 131 and the gain obtained by providing the data as calculated by the gain calculation unit 132. The unfairness calculation unit 133 then takes the maximum of the calculated unfairness values of the users as the unfairness of the machine learning model.
 For example, let the n users be u_1, ..., u_n, their risks r_1, ..., r_n, and their gains g_1, ..., g_n. Let r'_i and g'_i denote the risk and gain of user u_i after the risks and gains have been normalized to mean 0 and variance 1. The unfairness calculation unit 133 then calculates, for each user, the difference between the normalized risk r'_i and the normalized gain g'_i, and takes the maximum of these differences as the unfairness (δ̂) of the machine learning model (see formula (4)).
[Formula (4): δ̂ = max_i (r'_i − g'_i) (shown as an image in the original publication)]
 If the unfairness values of the users include outliers, the unfairness calculation unit 133 may take the maximum of the unfairness values excluding the outliers as the unfairness of the machine learning model.
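 The normalization and maximum-difference step can be sketched as follows, assuming formula (4) takes the maximum of r'_i − g'_i over users as stated above; the z-score rule for discarding outliers is an illustrative choice, since the patent does not specify how outliers are detected.

```python
import numpy as np


def model_unfairness(risks, gains, drop_outliers=False, z_thresh=3.0):
    """Unfairness of the model: max over users of (normalized risk - normalized gain)."""
    r = np.asarray(risks, dtype=float)
    g = np.asarray(gains, dtype=float)
    # Normalize each quantity to mean 0 and variance 1.
    r_norm = (r - r.mean()) / r.std()
    g_norm = (g - g.mean()) / g.std()
    per_user = r_norm - g_norm  # unfairness of each user
    if drop_outliers:
        # Illustrative outlier rule: drop per-user values more than z_thresh
        # standard deviations away from their mean.
        z = (per_user - per_user.mean()) / per_user.std()
        per_user = per_user[np.abs(z) <= z_thresh]
    return float(per_user.max())


# Example: three users whose risks and gains are not commensurate.
print(model_unfairness([0.5, 1.2, 2.0], [0.3, 0.9, 0.4]))
```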
 The output processing unit 134 outputs the results of processing by the control unit 13. For example, the output processing unit 134 outputs the evaluation result of the unfairness of the machine learning model produced by the unfairness calculation unit 133.
 With such an evaluation device 10, the unfairness of a machine learning model can be evaluated.
[Example of processing procedure]
 Next, an example of a processing procedure executed by the evaluation device 10 will be described with reference to FIG. 3. First, the privacy risk calculation unit 131 of the evaluation device 10 calculates the privacy risk of each piece of data included in the dataset (S1).
 Next, the gain calculation unit 132 calculates the gain the user obtained by providing data to the dataset (S2). For example, the gain calculation unit 132 calculates the extent to which the accuracy of the shadow models improves when the data provided by the user is used for training.
 Next, the unfairness calculation unit 133 evaluates the unfairness of the machine learning model (S3). For example, the unfairness calculation unit 133 calculates, as the unfairness of a user, the difference between the gain expected from the privacy risk of the data calculated in S1 and the actual gain of the user obtained by providing the data calculated in S2. The unfairness calculation unit 133 then takes the maximum of the calculated unfairness values of the users as the unfairness of the machine learning model. Finally, the output processing unit 134 outputs the evaluation result of the unfairness of the machine learning model obtained in S3 (S4).
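 A hypothetical end-to-end driver for S1 to S4, built on the illustrative helpers sketched earlier (epsilon_lower_bound, shadow_model_gain, and model_unfairness); it assumes one data point and one membership-guessing result per user, which is a simplification not stated in the patent.

```python
def evaluate_dataset(membership_games, shadow_inputs):
    """Hypothetical S1-S4 pipeline using the sketches above."""
    risks = [epsilon_lower_bound(**g) for g in membership_games]  # S1: per-data privacy risk
    gains = [shadow_model_gain(**s) for s in shadow_inputs]       # S2: per-user gain
    unfairness = model_unfairness(risks, gains)                   # S3: unfairness of the model
    print(f"unfairness of the machine learning model: {unfairness:.3f}")  # S4: output
    return unfairness
```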
 By performing the above processing, the evaluation device 10 can evaluate the unfairness of a machine learning model.
[Application example]
 Next, an application example of the evaluation device 10 will be described with reference to FIG. 4. For example, the administrator of the evaluation device 10 designs the NN (neural network) whose privacy risk is to be calculated (S11). For example, an NN satisfying differential privacy is trained by DP-SGD (Differentially Private Stochastic Gradient Descent).
 Next, the administrator selects the dataset to be used for evaluating user unfairness and the users whose unfairness is to be evaluated (S12). For example, the administrator selects about 100 users, taking their diversity into consideration.
 The evaluation device 10 then uses the dataset selected in S12 to calculate the unfairness of each user selected in S12, and evaluates the unfairness of the machine learning model (NN) based on the calculated unfairness of each user (S13). For example, the evaluation device 10 calculates the privacy risk of each piece of data in the dataset with respect to the NN designed in S11. The evaluation device 10 also calculates the gain of each user selected in S12 (each user who provided data to the dataset). The evaluation device 10 then calculates the unfairness of each user from the calculated privacy risk of each piece of data and the gain of each user who provided data, and takes the maximum of the calculated unfairness values as the unfairness of the machine learning model (NN).
[System configuration, etc.]
 The components of the devices shown in the figures are functional concepts and do not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of the devices is not limited to that shown in the figures; all or part of them may be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like. Furthermore, all or any part of the processing functions performed by each device may be realized by a CPU and a program executed by that CPU, or as hardware based on wired logic.
 Of the processes described in the above embodiment, all or part of the processes described as being performed automatically may be performed manually, and all or part of the processes described as being performed manually may be performed automatically by known methods. In addition, the processing procedures, control procedures, specific names, and information including various data and parameters shown in the above description and drawings may be changed arbitrarily unless otherwise specified.
[Program]
 The evaluation device 10 described above can be implemented by installing a program (evaluation program) on a desired computer as package software or online software. For example, by causing an information processing device to execute the above program, the information processing device can be made to function as the evaluation device 10. The information processing device referred to here includes mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone System) terminals, as well as terminals such as PDAs (Personal Digital Assistants).
 FIG. 5 is a diagram showing an example of a computer that executes the evaluation program. The computer 1000 has, for example, a memory 1010 and a CPU 1020. The computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
 The memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012. The ROM 1011 stores a boot program such as a BIOS (Basic Input Output System). The hard disk drive interface 1030 is connected to a hard disk drive 1090. The disk drive interface 1040 is connected to a disk drive 1100. A removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1100. The serial port interface 1050 is connected to, for example, a mouse 1110 and a keyboard 1120. The video adapter 1060 is connected to, for example, a display 1130.
 The hard disk drive 1090 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. That is, the program that defines each process executed by the evaluation device 10 described above is implemented as a program module 1093 in which computer-executable code is written. The program module 1093 is stored, for example, in the hard disk drive 1090. For example, a program module 1093 for executing processes similar to those of the functional configuration of the evaluation device 10 is stored in the hard disk drive 1090. The hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
 The data used in the processing of the above-described embodiment is stored as program data 1094, for example, in the memory 1010 or the hard disk drive 1090. The CPU 1020 reads the program module 1093 and the program data 1094 stored in the memory 1010 or the hard disk drive 1090 into the RAM 1012 as necessary and executes them.
 The program module 1093 and the program data 1094 are not limited to being stored in the hard disk drive 1090; they may be stored, for example, in a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and the program data 1094 may be stored in another computer connected via a network (such as a LAN (Local Area Network) or a WAN (Wide Area Network)) and read by the CPU 1020 from that other computer via the network interface 1070.
REFERENCE SIGNS LIST
 10 Evaluation device
 11 Input/output unit
 12 Storage unit
 13 Control unit
 131 Privacy risk calculation unit
 132 Gain calculation unit
 133 Unfairness calculation unit
 134 Output processing unit

Claims (8)

  1.  An evaluation device comprising:
      a privacy risk calculation unit that calculates the privacy risk of each piece of data included in a dataset used to train a machine learning model;
      a gain calculation unit that calculates the gain that a user who provided data to the dataset obtains by providing the data;
      an unfairness calculation unit that calculates, as the unfairness of the user, the difference between the privacy risk of the data and the gain the user obtains by providing the data, and evaluates the unfairness of the machine learning model based on the calculated unfairness of each of the users; and
      an output processing unit that outputs an evaluation result of the unfairness of the machine learning model.
  2.  The evaluation device according to claim 1, wherein the unfairness calculation unit normalizes the privacy risk of the data and the gain of the user, and takes the difference between the normalized privacy risk of the data and the normalized gain of the user as the unfairness of the user.
  3.  The evaluation device according to claim 1, wherein the unfairness calculation unit takes the maximum of the calculated unfairness values of the users as the evaluation result of the machine learning model.
  4.  The evaluation device according to claim 1, wherein the gain obtained by providing the data is the degree to which the accuracy of the machine learning model improves when the data is used for training the machine learning model.
  5.  The evaluation device according to claim 4, wherein the gain calculation unit constructs a shadow model trained with the data and a shadow model trained without the data, and calculates the degree to which the accuracy of the machine learning model improves when the data is used for training by calculating the difference between the accuracy of the shadow model trained with the data and the accuracy of the shadow model trained without the data.
  6.  The evaluation device according to claim 1, wherein the gain obtained by providing the data is a service or a monetary reward provided to the user in exchange for providing the data.
  7.  An evaluation method executed by an evaluation device, comprising:
      a step of calculating the privacy risk of each piece of data included in a dataset used to train a machine learning model;
      a step of calculating the gain that a user who provided data to the dataset obtains by providing the data;
      a step of calculating, as the unfairness of the user, the difference between the privacy risk of the data and the gain the user obtains by providing the data, and evaluating the unfairness of the machine learning model based on the calculated unfairness of each of the users; and
      a step of outputting an evaluation result of the unfairness of the machine learning model.
  8.  An evaluation program for causing a computer to execute:
      a step of calculating the privacy risk of each piece of data included in a dataset used to train a machine learning model;
      a step of calculating the gain that a user who provided data to the dataset obtains by providing the data;
      a step of calculating, as the unfairness of the user, the difference between the privacy risk of the data and the gain the user obtains by providing the data, and evaluating the unfairness of the machine learning model based on the calculated unfairness of each of the users; and
      a step of outputting an evaluation result of the unfairness of the machine learning model.
PCT/JP2022/037938 2022-10-11 2022-10-11 Evaluation device, evaluation method, and evaluation program WO2024079802A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/037938 WO2024079802A1 (en) 2022-10-11 2022-10-11 Evaluation device, evaluation method, and evaluation program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/037938 WO2024079802A1 (en) 2022-10-11 2022-10-11 Evaluation device, evaluation method, and evaluation program

Publications (1)

Publication Number Publication Date
WO2024079802A1 true WO2024079802A1 (en) 2024-04-18

Family

ID=90668980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/037938 WO2024079802A1 (en) 2022-10-11 2022-10-11 Evaluation device, evaluation method, and evaluation program

Country Status (1)

Country Link
WO (1) WO2024079802A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015130022A (en) * 2014-01-07 2015-07-16 Kddi株式会社 Anonymization parameter selection device, method and program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015130022A (en) * 2014-01-07 2015-07-16 Kddi株式会社 Anonymization parameter selection device, method and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIBAHARA, TOSHIKI; MIURA, TAKAYUKI; KII, MASANOBU; ICHIKAWA, ATSUNORI: "Privacy Risk of Differentially Private Bayesian Neural Network", IPSJ COMPUTER SECURITY SYMPOSIUM (CSS 2021); OCTOBER 26-29, 2021, INFORMATION PROCESSING SOCIETY OF JAPAN, 1 October 2021 (2021-10-01), pages 245 - 252, XP093158735 *

Similar Documents

Publication Publication Date Title
Johari et al. Experimental design in two-sided platforms: An analysis of bias
Bhat et al. Near-optimal ab testing
Chandrasekhar et al. A network formation model based on subgraphs
Chandrasekhar et al. Tractable and consistent random graph models
US10846620B2 (en) Machine learning-based patent quality metric
US10474827B2 (en) Application recommendation method and application recommendation apparatus
Bazerman et al. Arbitrator Decision Making: when are final offers important?
Stahl Evidence based rules and learning in symmetric normal-form games
US11468521B2 (en) Social media account filtering method and apparatus
CN112669084B (en) Policy determination method, device and computer readable storage medium
CN108628967A (en) A kind of e-learning group partition method generating network similarity based on study
Cortez-Rodriguez et al. Exploiting neighborhood interference with low-order interactions under unit randomized design
CN113592590A (en) User portrait generation method and device
Tembine Mean field stochastic games: Convergence, Q/H-learning and optimality
Gao et al. Belief and opinion evolution in social networks: A high-dimensional mean field game approach
WO2024079802A1 (en) Evaluation device, evaluation method, and evaluation program
Marsman Plausible values in statistical inference
Marsman et al. Composition algorithms for conditional distributions
d'Adamo Orthogonal policy learning under ambiguity
CN111598390B (en) Method, device, equipment and readable storage medium for evaluating high availability of server
Zhang et al. Sequential sampling for Bayesian robust ranking and selection
Kapelner et al. Optimal rerandomization via a criterion that provides insurance against failed experiments
Antognini et al. Covariate adjusted designs for combining efficiency, ethics and randomness in normal response trials
WO2023067666A1 (en) Calculation device, calculation method, and calculation program
CN111625817A (en) Abnormal user identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22962022

Country of ref document: EP

Kind code of ref document: A1