WO2022059208A1 - Learning device, learning method, and learning program - Google Patents
Learning device, learning method, and learning program
- Publication number
- WO2022059208A1 (PCT/JP2020/035623)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- learning
- model
- anomaly score
- unlearned
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
Definitions
- the present invention relates to a learning device, a learning method and a learning program.
- IDS: intrusion detection system
- VAE: Variational Autoencoder
- An anomaly detection system using a probability density estimator generates high-dimensional learning data called traffic features from actual communication and learns the characteristics of normal traffic from the features of normal communication. This makes it possible to estimate the occurrence probability of a traffic pattern.
- the probability density estimator may be simply referred to as a model.
- the anomaly detection system calculates the occurrence probability of each communication using the trained model and detects communications with a low occurrence probability as anomalies. An anomaly detection system using a probability density estimator can therefore detect anomalies without enumerating all malicious states, and has the further advantage of handling unknown cyber attacks. Such a system may use an anomaly score that increases as the occurrence probability decreases.
- learning of a probability density estimator such as a VAE often fails when the number of samples is biased among the normal data to be learned.
- traffic session data often has such a biased sample count.
- if a probability density estimator such as a VAE is trained in such a situation, learning of NTP communication, which has few samples, fails; its occurrence probability is underestimated, which may cause false positives.
- as a method of solving the problem caused by such a bias in the number of data items, a method of training the probability density estimator in two stages is known (see, for example, Patent Document 1).
- this conventional technology has the problem that processing time may increase.
- since the probability density estimator is trained in two stages, the training time is roughly twice that of single-stage training.
- the learning device is characterized by having a generation unit that learns the data selected as unlearned data among the training data and generates a model for calculating an anomaly score, and a selection unit that selects, as the unlearned data, at least a part of the training data whose anomaly score calculated by the model generated by the generation unit is equal to or higher than a threshold value.
- FIG. 1 is a diagram illustrating a flow of learning processing.
- FIG. 2 is a diagram showing a configuration example of the learning device according to the first embodiment.
- FIG. 3 is a diagram illustrating selection of unlearned data.
- FIG. 4 is a flowchart showing a processing flow of the learning device according to the first embodiment.
- FIG. 5 is a diagram showing the distribution of anomaly scores.
- FIG. 6 is a diagram showing the distribution of anomaly scores.
- FIG. 7 is a diagram showing the distribution of anomaly scores.
- FIG. 8 is a diagram showing a ROC curve.
- FIG. 9 is a diagram showing a configuration example of an abnormality detection system.
- FIG. 10 is a diagram showing an example of a computer that executes a learning program.
- FIG. 1 is a diagram illustrating a flow of learning processing.
- the learning device of the present embodiment repeats STEP1 and STEP2 until the end condition is satisfied.
- the learning device generates a plurality of models.
- the generated model shall be added to the list.
- the learning device randomly samples a predetermined number of data from the unlearned data. Then, the learning device generates a model from the sampled data.
- the model is a probability density estimator such as VAE.
- the learning device calculates the anomaly score of all unlearned data using the generated model. It then marks data whose anomaly score is below the threshold value as learned, and keeps data whose anomaly score is equal to or higher than the threshold value as unlearned. If the end condition is not yet satisfied, the learning device returns to STEP1.
- data whose anomaly score is equal to or higher than the threshold value in STEP2 is regarded as unlearned data.
- sampling and evaluation are repeated, so the dominant type of data among the unlearned data is learned first, then the next-dominant type, and so on.
- because sampling and the narrowing of unlearned data reduce the amount of data to be learned, the time required for training can be shortened.
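The STEP1/STEP2 loop above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `train_model` and `score` are hypothetical stand-ins for the VAE training and anomaly-scoring routines, and the end condition is simplified to a minimum unlearned-data count.

```python
import random

def iterative_training(data, train_model, score, threshold, sample_size, min_unlearned):
    """Sketch of the loop: sample from unlearned data, train a model,
    re-score all unlearned data, keep only high-score data as unlearned."""
    unlearned = list(data)
    models = []
    while len(unlearned) >= min_unlearned:  # end condition
        # STEP1: randomly sample a predetermined number of unlearned data
        samples = random.sample(unlearned, min(sample_size, len(unlearned)))
        model = train_model(samples)
        models.append(model)  # each generated model is added to the list
        # STEP2: data scoring at or above the threshold stays unlearned
        unlearned = [x for x in unlearned if score(model, x) >= threshold]
    return models
```

With a toy `train_model` that returns the sample mean and a distance-based `score`, a majority class is absorbed in the first round and the minority class in the second, yielding one model per round.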
- FIG. 2 is a diagram showing a configuration example of the learning device according to the first embodiment.
- the learning device 10 has an IF (interface) unit 11, a storage unit 12, and a control unit 13.
- the IF unit 11 is an interface for inputting and outputting data.
- the IF unit 11 is, for example, a NIC (Network Interface Card).
- the IF unit 11 may be connected to an input device such as a mouse or keyboard, and an output device such as a display.
- the storage unit 12 is a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or an optical disc.
- the storage unit 12 may also be rewritable semiconductor memory such as RAM (Random Access Memory), flash memory, or NVSRAM (Non-Volatile Static Random Access Memory).
- the storage unit 12 stores an OS (Operating System) and various programs executed by the learning device 10.
- the control unit 13 controls the entire learning device 10.
- the control unit 13 is realized by, for example, an electronic circuit such as a CPU (Central Processing Unit), MPU (Micro Processing Unit), or GPU (Graphics Processing Unit), or by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
- the control unit 13 has an internal memory for storing programs and control data that specify various processing procedures, and executes each process using the internal memory. Further, the control unit 13 functions as various processing units by operating various programs.
- the control unit 13 has a generation unit 131, a calculation unit 132, and a selection unit 133.
- the generation unit 131 learns the data selected as the unlearned data among the data for learning, and generates a model for calculating the anomaly score.
- the generation unit 131 adds the generated model to the list.
- the generation unit 131 can adopt an existing VAE generation method. Further, the generation unit 131 may generate a model based on the data obtained by sampling a part of the unlearned data.
- the calculation unit 132 calculates the anomaly score of the unlearned data by the model generated by the generation unit 131.
- the calculation unit 132 may calculate the anomaly score of the entire unlearned data, or may calculate the anomaly score of a part of the unlearned data.
- the selection unit 133 selects at least a part of the data for learning whose anomaly score calculated by the model generated by the generation unit 131 is equal to or higher than the threshold value as unlearned data.
- FIG. 3 is a diagram illustrating selection of unlearned data.
- the model is a VAE, and the anomaly score of communication data is calculated in order to detect abnormal communication.
- the horizontal axis is the anomaly score, an approximation of the negative log-likelihood (-log p(x)) of the probability density
- the vertical axis is a histogram of the number of data points.
- the negative log-likelihood of the probability density takes a higher value as the density (occurrence frequency) of a data point is lower, so it can be regarded as an anomaly score, that is, a degree of anomaly.
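As a toy illustration of this relationship (not the patent's VAE, which only approximates the likelihood), the score can be written directly as the negative log of an occurrence probability:

```python
import math

def anomaly_score(probability):
    """Negative log-likelihood as an anomaly score: the lower the
    occurrence probability, the higher the score."""
    return -math.log(probability)
```

A communication with occurrence probability 1e-6 thus scores far higher than one with probability 0.5, matching the intuition that rare patterns are more anomalous.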
- the anomaly score of MQTT communication, which has many samples, is low, while the anomaly score of camera streaming communication, which has few samples, is high. The camera communication data, having few samples, is therefore considered a likely cause of false detection.
- the selection unit 133 selects unlearned data from the data whose anomaly score is equal to or higher than the threshold value. A model with suppressed false positives is then generated using some or all of the selected unlearned data. In other words, the selection unit 133 serves to exclude data that requires no further learning.
- the threshold value may be determined based on the Loss values obtained at model-generation time. In that case, the selection unit 133 selects, as unlearned data, at least a part of the training data whose anomaly score calculated by the model generated by the generation unit 131 is equal to or higher than a threshold value computed from the Loss value of each data point obtained at model generation. For example, the threshold value may be calculated from the mean and the variance, such as the mean of the Loss values + 0.3σ.
- the selection unit 133 mainly selects the DNS communication data and the camera communication data based on the anomaly score calculated in the first round. Conversely, the selection unit 133 hardly selects the MQTT communication data, which has many samples.
- the learning device 10 can repeat the processes of the generation unit 131, the calculation unit 132, and the selection unit 133 for a third round and beyond. That is, each time data is selected as unlearned data by the selection unit 133, the generation unit 131 learns the selected data and generates a model for calculating the anomaly score. Each time a model is generated by the generation unit 131, the selection unit 133 selects, as unlearned data, at least a part of the data whose anomaly score calculated by the generated model is equal to or higher than the threshold value.
- the learning device 10 may end the repetition when the number of data points whose anomaly score is equal to or higher than the threshold value falls below a predetermined value.
- in other words, when the number of data points whose anomaly score is equal to or higher than the threshold value satisfies a predetermined condition, the selection unit 133 selects at least a part of that data as unlearned data.
- the learning device 10 may repeat the process until the number of data points whose anomaly score is equal to or higher than the threshold value falls below 1% of the initially collected training data. Since a model is generated and added to the list on each iteration, the learning device 10 can output a plurality of models.
- a plurality of models generated by the learning device 10 are used for abnormality detection in a detection device or the like.
- Anomaly detection using a plurality of models may be performed by the method described in Patent Document 1. That is, the detection device can detect an anomaly based on a merged value, for example the minimum, of the anomaly scores calculated by the plurality of models.
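The minimum-score merge rule mentioned above can be sketched as follows; `score_fn` is a hypothetical per-model scoring function, since the patent defers the details of multi-model detection to Patent Document 1:

```python
def detect_anomaly(x, models, score_fn, threshold):
    """Flag x as anomalous only when even the most favorable model
    (the one giving the minimum anomaly score) still scores it at
    or above the threshold."""
    return min(score_fn(m, x) for m in models) >= threshold
```

Taking the minimum means a sample is treated as normal if any model in the list explains it well, which suits a model list where each model covers a different dominant data type.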
- FIG. 4 is a flowchart showing a processing flow of the learning device according to the first embodiment.
- the learning device 10 samples a part of the unlearned data (step S101).
- the learning device 10 generates a model based on the sampled data (step S102).
- if the end condition is satisfied (step S103, Yes), the learning device 10 ends the process. Otherwise (step S103, No), the learning device 10 calculates the anomaly score of all unlearned data with the generated model (step S104).
- the learning device 10 selects data having an anomaly score equal to or higher than the threshold value as unlearned data (step S105), returns to step S101, and repeats the process. Immediately before step S105 is executed, the previous selection of unlearned data is cleared; that is, in step S105 the learning device 10 selects new unlearned data by referring to the anomaly scores with no data yet marked as unlearned.
- the generation unit 131 learns the data selected as the unlearned data among the training data and generates a model for calculating the anomaly score.
- the selection unit 133 selects at least a part of the data for learning whose anomaly score calculated by the model generated by the generation unit 131 is equal to or higher than the threshold value as unlearned data. In this way, the learning device 10 can select data that is likely to cause false positives after generating the model, and generate the model again.
- according to the present embodiment, even if the number of samples is biased among the normal data, learning can be performed accurately and in a short time.
- the generation unit 131 learns the selected data each time the data is selected as unlearned data by the selection unit 133, and generates a model for calculating the anomaly score. Each time a model is generated by the generation unit 131, the selection unit 133 selects at least a part of the data whose anomaly score calculated by the generated model is equal to or higher than the threshold value as unlearned data. In the present embodiment, by repeating the process in this way, a plurality of models can be generated and the accuracy of abnormality detection can be improved.
- the selection unit 133 selects, as unlearned data, at least a part of the training data whose anomaly score calculated by the model generated by the generation unit 131 is equal to or higher than a threshold value computed from the Loss value of each data point obtained at model generation. This makes it possible to set a threshold value according to the degree of bias of the anomaly scores.
- when the number of training data points whose anomaly score calculated by the model generated by the generation unit 131 is equal to or higher than the threshold value satisfies a predetermined condition, the selection unit 133 selects at least a part of that data as unlearned data. Setting the end condition of the iterative process in this way makes it possible to balance the accuracy of anomaly detection against the processing time required for training.
- FIG. 5 shows the result of learning by the conventional VAE (one-step VAE).
- the time required for learning was 268 sec.
- the anomaly score of the camera communication, which has few samples, is calculated somewhat high.
- FIG. 6 shows the results of learning by the two-step VAE described in Patent Document 1.
- the time required for learning was 572 sec.
- the anomaly score of the camera communication, which has few samples, is lower than in the one-step example of FIG. 5.
- Figure 7 shows the results of learning according to this embodiment.
- the time required for learning was 192 sec.
- the anomaly score of the camera communication is reduced to the same level as with the two-step VAE of FIG. 6, while the time required for learning is further shortened.
- FIG. 8 is a diagram showing a ROC curve. As shown in FIG. 8, the present embodiment shows an ideal ROC curve as compared with the one-step VAE and the two-step VAE.
- the detection accuracy according to this embodiment was 0.9949.
- the detection accuracy by the two-step VAE was 0.9652.
- the detection accuracy by the one-step VAE was 0.9216. From this, it can be said that the detection accuracy is improved according to the present embodiment.
- FIG. 9 is a diagram showing a configuration example of an abnormality detection system.
- the server collects the traffic session information sent and received by the IoT device, learns the probability density of the normal traffic session, and detects the abnormal traffic session.
- by applying the method of the embodiment, the server can generate an anomaly detection model accurately and at high speed even if the number of session data points is biased.
- each component of each illustrated device is a functional concept and need not be physically configured as shown. That is, the specific form of distribution and integration of each device is not limited to the illustrated one; all or part of it may be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions. Further, each processing function performed by each device may be realized, in whole or in arbitrary part, by a CPU (Central Processing Unit) and a program analyzed and executed by that CPU, or as hardware by wired logic. The program may be executed not only by a CPU but also by another processor such as a GPU.
- the learning device 10 can be implemented by installing a learning program that executes the above learning process as package software or online software on a desired computer. For example, by causing the information processing device to execute the above learning program, the information processing device can be made to function as the learning device 10.
- the information processing device referred to here includes a desktop type or notebook type personal computer.
- the information processing device includes smartphones, mobile phones, mobile communication terminals such as PHS (Personal Handyphone System), and slate terminals such as PDAs (Personal Digital Assistants).
- the learning device 10 can be implemented as a learning server device in which the terminal device used by the user is a client and the service related to the above learning process is provided to the client.
- the learning server device is implemented as a server device that provides a learning service that inputs learning data and outputs information of a plurality of generated models.
- the learning server device may be implemented as a Web server, or may be implemented as a cloud that provides the service related to the learning process by outsourcing.
- FIG. 10 is a diagram showing an example of a computer that executes a learning program.
- the computer 1000 has, for example, a memory 1010 and a CPU 1020.
- the computer 1000 also has a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. Each of these parts is connected by a bus 1080.
- the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012.
- the ROM 1011 stores, for example, a boot program such as a BIOS (Basic Input Output System).
- the hard disk drive interface 1030 is connected to the hard disk drive 1090.
- the disk drive interface 1040 is connected to the disk drive 1100.
- a removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1100.
- the serial port interface 1050 is connected to, for example, a mouse 1110 and a keyboard 1120.
- the video adapter 1060 is connected to, for example, the display 1130.
- the hard disk drive 1090 stores, for example, the OS 1091, the application program 1092, the program module 1093, and the program data 1094. That is, the program that defines each process of the learning device 10 is implemented as a program module 1093 in which a code that can be executed by a computer is described.
- the program module 1093 is stored in, for example, the hard disk drive 1090.
- the program module 1093 for executing the same processing as the functional configuration in the learning device 10 is stored in the hard disk drive 1090.
- the hard disk drive 1090 may be replaced by an SSD (Solid State Drive).
- the setting data used in the processing of the above-described embodiment is stored as program data 1094 in, for example, a memory 1010 or a hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 and the program data 1094 stored in the memory 1010 and the hard disk drive 1090 into the RAM 1012 as needed, and executes the process of the above-described embodiment.
- the program module 1093 and the program data 1094 are not limited to those stored in the hard disk drive 1090, but may be stored in, for example, a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like. Alternatively, the program module 1093 and the program data 1094 may be stored in another computer connected via a network (LAN (Local Area Network), WAN (Wide Area Network), etc.). Then, the program module 1093 and the program data 1094 may be read from another computer by the CPU 1020 via the network interface 1070.
Description
Results of an experiment conducted using this embodiment are shown below. In the experiment, training was performed on data mixing the following communications.
MQTT communication: port 1883, 20,951 sessions (majority data)
Camera communication: port 1935, 204 sessions (minority data)
In the experiment, models were generated by training, and the anomaly score of each data point was calculated with the generated models. FIGS. 5, 6, and 7 show the distributions of the anomaly scores.
As shown in FIG. 9, a server on the network to which the IoT devices are connected may be provided with the same model generation function as the learning device 10 of the above embodiment, and with an anomaly detection function using the models generated by the learning device 10. FIG. 9 is a diagram showing a configuration example of the anomaly detection system.
11 IF unit
12 Storage unit
13 Control unit
131 Generation unit
132 Calculation unit
133 Selection unit
Claims (6)
- A learning device comprising: a generation unit that learns data selected as unlearned data among training data and generates a model for calculating an anomaly score; and a selection unit that selects, as the unlearned data, at least a part of the training data whose anomaly score calculated by the model generated by the generation unit is equal to or higher than a threshold value.
- The learning device according to claim 1, wherein each time data is selected as the unlearned data by the selection unit, the generation unit learns the selected data and generates a model for calculating an anomaly score, and each time a model is generated by the generation unit, the selection unit selects, as the unlearned data, at least a part of the data whose anomaly score calculated by the generated model is equal to or higher than the threshold value.
- The learning device according to claim 1 or 2, wherein the selection unit selects, as the unlearned data, at least a part of the training data whose anomaly score calculated by the model generated by the generation unit is equal to or higher than a threshold value computed from the Loss value of each data point obtained at model generation.
- The learning device according to any one of claims 1 to 3, wherein, when the number of training data points whose anomaly score calculated by the model generated by the generation unit is equal to or higher than the threshold value satisfies a predetermined condition, the selection unit selects at least a part of that data as the unlearned data.
- A learning method executed by a learning device, comprising: a generation step of learning data selected as unlearned data among training data and generating a model for calculating an anomaly score; and a selection step of selecting, as the unlearned data, at least a part of the training data whose anomaly score calculated by the model generated in the generation step is equal to or higher than a threshold value.
- A learning program for causing a computer to function as the learning device according to any one of claims 1 to 4.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080105213.4A CN116113960A (zh) | 2020-09-18 | 2020-09-18 | 学习装置、学习方法以及学习程序 |
PCT/JP2020/035623 WO2022059208A1 (ja) | 2020-09-18 | 2020-09-18 | 学習装置、学習方法及び学習プログラム |
AU2020468806A AU2020468806B2 (en) | 2020-09-18 | 2020-09-18 | Learning device, learning method, and learning program |
EP20954193.7A EP4202800A4 (en) | 2020-09-18 | 2020-09-18 | LEARNING DEVICE, LEARNING METHOD AND LEARNING PROGRAM |
US18/026,605 US20230334361A1 (en) | 2020-09-18 | 2020-09-18 | Training device, training method, and training program |
JP2022550324A JP7444271B2 (ja) | 2020-09-18 | 2020-09-18 | 学習装置、学習方法及び学習プログラム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/035623 WO2022059208A1 (ja) | 2020-09-18 | 2020-09-18 | 学習装置、学習方法及び学習プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022059208A1 true WO2022059208A1 (ja) | 2022-03-24 |
Family
ID=80776763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/035623 WO2022059208A1 (ja) | 2020-09-18 | 2020-09-18 | 学習装置、学習方法及び学習プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230334361A1 (ja) |
EP (1) | EP4202800A4 (ja) |
JP (1) | JP7444271B2 (ja) |
CN (1) | CN116113960A (ja) |
AU (1) | AU2020468806B2 (ja) |
WO (1) | WO2022059208A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001051969A (ja) * | 1999-08-13 | 2001-02-23 | Kdd Corp | 正誤答判定機能を有するニューラルネットワーク手段 |
JP2018190127A (ja) * | 2017-05-01 | 2018-11-29 | 日本電信電話株式会社 | 判定装置、分析システム、判定方法および判定プログラム |
JP2019028565A (ja) * | 2017-07-26 | 2019-02-21 | 安川情報システム株式会社 | 故障予知方法、故障予知装置および故障予知プログラム |
JP2019102011A (ja) * | 2017-12-08 | 2019-06-24 | 日本電信電話株式会社 | 学習装置、学習方法及び学習プログラム |
JP2019101982A (ja) | 2017-12-07 | 2019-06-24 | 日本電信電話株式会社 | 学習装置、検知システム、学習方法及び学習プログラム |
JP2019153893A (ja) * | 2018-03-01 | 2019-09-12 | 日本電信電話株式会社 | 検知装置、検知方法及び検知プログラム |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110019770A (zh) | 2017-07-24 | 2019-07-16 | 华为技术有限公司 | 训练分类模型的方法与装置 |
JP6431231B1 (ja) | 2017-12-24 | 2018-11-28 | オリンパス株式会社 | 撮像システム、学習装置、および撮像装置 |
WO2020159439A1 (en) | 2019-01-29 | 2020-08-06 | Singapore Telecommunications Limited | System and method for network anomaly detection and analysis |
-
2020
- 2020-09-18 WO PCT/JP2020/035623 patent/WO2022059208A1/ja active Application Filing
- 2020-09-18 JP JP2022550324A patent/JP7444271B2/ja active Active
- 2020-09-18 EP EP20954193.7A patent/EP4202800A4/en active Pending
- 2020-09-18 US US18/026,605 patent/US20230334361A1/en active Pending
- 2020-09-18 AU AU2020468806A patent/AU2020468806B2/en active Active
- 2020-09-18 CN CN202080105213.4A patent/CN116113960A/zh active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001051969A (ja) * | 1999-08-13 | 2001-02-23 | Kdd Corp | 正誤答判定機能を有するニューラルネットワーク手段 |
JP2018190127A (ja) * | 2017-05-01 | 2018-11-29 | 日本電信電話株式会社 | 判定装置、分析システム、判定方法および判定プログラム |
JP2019028565A (ja) * | 2017-07-26 | 2019-02-21 | 安川情報システム株式会社 | 故障予知方法、故障予知装置および故障予知プログラム |
JP2019101982A (ja) | 2017-12-07 | 2019-06-24 | 日本電信電話株式会社 | 学習装置、検知システム、学習方法及び学習プログラム |
JP2019102011A (ja) * | 2017-12-08 | 2019-06-24 | 日本電信電話株式会社 | 学習装置、学習方法及び学習プログラム |
JP2019153893A (ja) * | 2018-03-01 | 2019-09-12 | 日本電信電話株式会社 | 検知装置、検知方法及び検知プログラム |
Non-Patent Citations (2)
Title |
---|
See also references of EP4202800A4 |
YUMA KOIZUMI; SHOICHIRO SAITO; MASATAKA YAMAGUCHI; SHIN MURATA; NOBORU HARADA: "Batch Uniformization for Minimizing Maximum Anomaly Score of DNN-based Anomaly Detection in Sounds", arXiv.org, Cornell University Library, 19 July 2019, XP081444462 * |
Also Published As
Publication number | Publication date |
---|---|
AU2020468806A9 (en) | 2024-06-13 |
AU2020468806B2 (en) | 2024-02-29 |
US20230334361A1 (en) | 2023-10-19 |
AU2020468806A1 (en) | 2023-04-27 |
EP4202800A1 (en) | 2023-06-28 |
JP7444271B2 (ja) | 2024-03-06 |
CN116113960A (zh) | 2023-05-12 |
EP4202800A4 (en) | 2024-05-01 |
JPWO2022059208A1 (ja) | 2022-03-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20954193; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022550324; Country of ref document: JP; Kind code of ref document: A |
| WWE | WIPO information: entry into national phase | Ref document number: AU2020468806; Country of ref document: AU |
| ENP | Entry into the national phase | Ref document number: 2020954193; Country of ref document: EP; Effective date: 20230323 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2020468806; Country of ref document: AU; Date of ref document: 20200918; Kind code of ref document: A |