WO2023209899A1 - Security determination system, security determination device, method, and program - Google Patents
Security determination system, security determination device, method, and program
- Publication number
- WO2023209899A1 (PCT/JP2022/019173; JP2022019173W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- security
- cryptographic protocol
- messages
- message
- safety
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Definitions
- the present disclosure relates to a safety determination system, a safety determination device, a method, and a program.
- a machine learning model is constructed by converting a cryptographic protocol into data with a sequence structure.
- the present disclosure has been made in view of the above points, and provides a technology that verifies the security of cryptographic protocols with high accuracy using machine learning.
- a security determination system includes: a conversion unit configured to convert a cryptographic protocol composed of one or more messages into series data in which each message is expressed as a tree structure; a security verification unit configured to take the series data as input and output a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol, using a machine learning model having learned model parameters; and a determination unit configured to determine, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
- a technology for verifying the security of cryptographic protocols with high accuracy using machine learning is provided.
- FIG. 3 is a diagram illustrating an example of a method of configuring a message.
- FIG. 3 is a diagram showing an example of party behavior.
- FIG. 2 is a diagram showing an example of a tree structure representing a cryptographic protocol.
- FIG. 3 is a diagram showing an example of a safety verification model.
- A diagram illustrating an example of a hardware configuration of a safety determination device according to an embodiment.
- A diagram illustrating an example of a functional configuration of a safety determination device at the time of inference.
- FIG. 3 is a diagram illustrating an example of a functional configuration of a safety determination device during learning.
- A flowchart illustrating an example of the safety determination processing according to the present embodiment.
- A flowchart illustrating an example of the learning processing according to the present embodiment.
- a key exchange protocol or an authentication protocol is assumed as the cryptographic protocol, and in the case of the key exchange protocol, confidentiality is assumed as its security requirement, and in the case of the authentication protocol, authentication is assumed as its security requirement.
- the cryptographic protocols and their security requirements are not limited to these, and the present embodiment is similarly applicable to other cryptographic protocols and their safety requirements.
- ≪Target cryptographic protocols≫ This embodiment targets cryptographic protocols that satisfy (1) to (3) below. Note that an entity that executes the cryptographic protocol is called a party. Further, one execution of a cryptographic protocol by the parties (that is, from the start of message exchange until authentication, session key exchange, or the like is performed) is called a session.
- Each party is allocated an ID, a secret key used over the long term (long-term secret key), a public key corresponding to the long-term secret key, and a secret key used for a short period only within a session (ephemeral secret key).
- FIG. 1 shows an example of the message constituent elements and their notation method.
- P represents any party; that is, P ∈ {P1, ..., Ps}.
- the elements constituting the AN are defined according to the target cryptographic protocol. For example, when the target cryptographic protocol is an authentication protocol, an AN that does not include the session key SK P shared by the party P may be defined.
- Figure 2 shows an example of the party's operations when composing a message and its notation method.
- →m represents an arbitrary sequence of messages (constructed from the message elements according to the composition method).
- the operations constituting FN are defined according to the target cryptographic protocol. For example, if the target cryptographic protocol is a key exchange protocol, an FN that does not include sign(→m, lsk) may be defined.
- Party behavior: Figure 3 shows an example of party behaviors within the protocol and their notation.
- →m represents an arbitrary message sequence.
- the behaviors constituting BN are defined according to the target cryptographic protocol. For example, if the target cryptographic protocol is an authentication protocol, a BN that does not include acceptI(→m) or acceptR(→m) may be defined.
- a cryptographic protocol is represented by a sequence of messages exchanged between parties.
- Each message is configured by applying element a included in AN and operation f included in FN to a message or a sequence of messages after determining the behavior of a party regarding the message.
- both (a) and (b) below are messages.
- (a) sendItoR(eskI)
- (b) sendRtoI(IDI, eskI, eskR, aencI(IDR, SK), signR(IDI, eskI, eskR, aencI(IDR, SK)))
- aencI(·) represents aenc(·; pkI) (where pkI is the public key of the initiator I).
- signR(·) represents sign(·; lskR) (where lskR is the long-term secret key of the responder R).
- each message of a cryptographic protocol can be expressed as a tree structure (syntax tree) in which the behavior of the party is the root node, the operation is the internal node, and the message elements are the leaf nodes.
- the message shown in (a) above can be expressed in a tree structure as shown in the left diagram of FIG. 4
- the message shown in (b) above can be expressed in a tree structure as shown in the right diagram of FIG. 4.
- a cryptographic protocol is thus expressed as a sequence of messages, each represented as a tree. That is, if each tree-structured message is mt and the number of message exchanges in the cryptographic protocol is T, the cryptographic protocol can be expressed as data {mt | t = 1, ..., T} having both a series structure and a tree structure.
- by expressing each message in a tree structure, it is possible to take into account the grammatical structure between messages (in other words, the structural information of the cryptographic protocol), making it possible to construct a more accurate machine learning model. For example, consider the following two messages (c) and (d).
- a machine learning model takes as input a cryptographic protocol expressed as data {mt | t = 1, ..., T} having a series structure and a tree structure, and outputs a safety verification value representing the verification result of the security requirements of that cryptographic protocol.
- this machine learning model will be referred to as a "safety verification model.”
- the safety verification model 1000 includes a TreeLSTM 1100, an LSTM 1200, and a linear classifier 1300.
- the TreeLSTM 1100 is a neural network model configured based on the Child-Sum Tree-LSTM (Reference 2), which is a model that extends LSTM (Long Short-Term Memory) to a tree structure.
- the latent vector of each node in the tree structure is calculated based on the latent vectors of its child nodes. Specifically, if C(j) is the set of child nodes of node j of message mt, the latent vector hj ∈ Rn of node j and the memory cell cj ∈ Rn (where n is the number of dimensions of hj and cj) are calculated and updated using the following formula.
- Wc ∈ Rn×d and Uc ∈ Rn×n are weight matrices, and b ∈ Rn is a bias.
- xj ∈ Rd is a vectorization of the label of node j (that is, a party behavior, message element, or operation).
- ij, fjk, oj ∈ [0,1]n are the input, forget, and output gates, respectively, calculated from the (vectorized) label and the latent vectors of the child nodes, and they control the propagation of information.
- for details of the Child-Sum Tree-LSTM, including these gates, see Reference 2, for example.
- the sequence hm_1, ..., hm_T of these latent vectors is input to the LSTM 1200. Note that "m_1", ..., "m_T" represent "m1", ..., "mT", respectively.
- the LSTM 1200 is an LSTM (Reference 3), one of the improved models of the RNN (Recurrent Neural Network).
- h'j denotes a latent vector of the LSTM. Note that the LSTM requires past latent vectors when computing h'j; if a past latent vector has not yet been computed, it may, for example, be defined as the zero vector. Since the LSTM 1200 is the same as existing LSTMs, a detailed explanation is omitted; see Reference 3 as necessary.
- the linear classifier 1300 receives the latent vector h' obtained by the LSTM 1200 as input and outputs a safety verification value. That is, the linear classifier 1300 linearly transforms the latent vector h' into a vector with dimensions equal to the number of labels related to safety requirements, and then sets the softmax function value for the corresponding element of the vector as the safety verification value. Therefore, the safety verification value is expressed as a probability.
- for example, when the labels related to the security requirement are "satisfies confidentiality" and "does not satisfy confidentiality," the latent vector h' is linearly transformed into a two-dimensional vector by the linear classifier 1300, and the softmax function value for the element corresponding to the label "satisfies confidentiality" may be used as the security verification value.
- if the security verification value exceeds a predetermined threshold, the corresponding cryptographic protocol is determined to satisfy confidentiality; otherwise, it is determined not to satisfy confidentiality.
- the parameters to be learned in the safety verification model 1000 described above are the weight matrices Wc, Uc and bias b of the TreeLSTM 1100, the weight matrices and biases of the LSTM 1200, and the weight matrix and bias used for the linear transformation in the linear classifier 1300. Hereinafter, these parameters are referred to as model parameters.
- a learning data set represented by a pair of a cryptographic protocol and a security evaluation label representing the result of evaluating its security requirements is given.
- the learning data set is written D = {(Prti, yi) | i = 1, ..., |D|}.
- Prti is the i-th cryptographic protocol.
- yi is its security evaluation label.
- the security evaluation label yi takes, for example, the value 1 when the corresponding security requirement is satisfied, and 0 otherwise.
- the model parameters are learned to minimize the loss function described above.
- a known method such as a gradient method may be used to minimize the loss function.
- the case of learning the model parameters is referred to as "learning time."
- the case of obtaining a security verification value for a desired cryptographic protocol with the security verification model using the learned model parameters is referred to as "inference time."
- FIG. 6 shows an example of the hardware configuration of the safety determination device 10 according to this embodiment.
- the safety determination device 10 includes an input device 101, a display device 102, an external I/F 103, a communication I/F 104, a RAM (Random Access Memory) 105, a ROM (Read Only Memory) 106, an auxiliary storage device 107, and a processor 108. These pieces of hardware are communicably connected to one another via a bus 109.
- the input device 101 is, for example, a keyboard, a mouse, a touch panel, a physical button, or the like.
- the display device 102 is, for example, a display, a display panel, or the like. Note that the safety determination device 10 may not include at least one of the input device 101 and the display device 102, for example.
- the external I/F 103 is an interface with an external device such as the recording medium 103a.
- the safety determination device 10 can read, write, etc. to the recording medium 103a via the external I/F 103.
- Examples of the recording medium 103a include a flexible disk, a CD (Compact Disc), a DVD (Digital Versatile Disk), an SD memory card (Secure Digital memory card), and a USB (Universal Serial Bus) memory card.
- the communication I/F 104 is an interface for connecting the safety determination device 10 to a communication network.
- the RAM 105 is a volatile semiconductor memory (storage device) that temporarily holds programs and data.
- the ROM 106 is a nonvolatile semiconductor memory (storage device) that can retain programs and data even when the power is turned off.
- the auxiliary storage device 107 is, for example, a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or a flash memory.
- the processor 108 is, for example, an arithmetic device such as a CPU (Central Processing Unit).
- the safety determination device 10 has the hardware configuration shown in FIG. 6, so that it can implement various processes described below.
- the hardware configuration shown in FIG. 6 is an example, and the hardware configuration of the safety determination device 10 is not limited to this.
- the safety determination device 10 may include a plurality of auxiliary storage devices 107 and a plurality of processors 108, may omit some of the illustrated hardware, or may include various hardware other than that illustrated.
- FIG. 7 shows an example of the functional configuration of the safety determination device 10 at the time of inference.
- the safety determination device 10 at the time of inference includes a structure conversion section 201, a safety verification processing section 202, and a safety determination section 203. Each of these units is realized, for example, by one or more programs installed in the safety determination device 10 causing the processor 108 to execute the process.
- the structure conversion unit 201 converts the given cryptographic protocol Prt into data {mt | t = 1, ..., T} having a series structure and a tree structure.
- each mt represents the t-th message constituting the cryptographic protocol Prt in a tree structure.
- hereinafter, the cryptographic protocol expressed as such data is written Prt = {mt | t = 1, ..., T}.
- the safety verification processing unit 202 is realized by the safety verification model 1000 having the learned model parameters; it takes Prt = {mt | t = 1, ..., T} as input and outputs the safety verification value of the cryptographic protocol.
- the safety verification processing section 202 includes a tree structure feature extraction section 211, a series structure feature extraction section 212, and a classification section 213.
- the tree structure feature extraction unit 211 is realized by the TreeLSTM 1100 having the learned model parameters; it takes Prt = {mt | t = 1, ..., T} as input and outputs the sequence of latent vectors hm_1, ..., hm_T.
- the classification unit 213 is realized by a linear classifier 1300 having learned model parameters, and receives the latent vector h' as an input and outputs a safety verification value.
- the security determination unit 203 compares the security verification value with a predetermined threshold; for example, if the security verification value exceeds the predetermined threshold, it determines that the cryptographic protocol Prt satisfies the security requirements, and otherwise it determines that the cryptographic protocol Prt does not satisfy the security requirements.
- This determination result (safety determination result) is output to a predetermined output destination (for example, the auxiliary storage device 107, another device connected via the communication network, etc.).
- FIG. 8 shows an example of the functional configuration of the safety determination device 10 during learning.
- the safety determination device 10 during learning includes a structure conversion section 201, a safety verification processing section 202, and a learning section 204.
- Each of these units is realized, for example, by one or more programs installed in the safety determination device 10 causing the processor 108 to execute the process.
- the structure conversion unit 201 converts each cryptographic protocol Prti included in the given learning data set D into data {mt(i) | t = 1, ..., T(i)} having a series structure and a tree structure.
- each m t (i) represents the t-th message constituting the cryptographic protocol Prt i in a tree structure.
- hereinafter, the converted protocol is written Prti = {mt(i) | t = 1, ..., T(i)}.
- the safety verification processing section 202 includes a tree structure feature extraction section 211, a series structure feature extraction section 212, and a classification section 213.
- the tree structure feature extraction unit 211 takes {mt(i) | t = 1, ..., T(i)} as input and outputs the sequence of latent vectors hm_1, ..., hm_T.
- the structure conversion unit 201 converts the given cryptographic protocol Prt into data {mt | t = 1, ..., T} having a series structure and a tree structure (step S101).
- the security verification processing unit 202 inputs {mt | t = 1, ..., T} to the security verification model 1000 having the learned model parameters, and outputs the security verification value of the cryptographic protocol (step S102).
- the security determination unit 203 determines whether the cryptographic protocol Prt satisfies the security requirements using the security verification value output in step S102 and a predetermined threshold (step S103).
- the learning unit 204 initializes model parameters (step S201). Note that the learning unit 204 may initialize the model parameters using a known method.
- the structure conversion unit 201 converts each cryptographic protocol Prti included in the given learning data set D into data {mt(i) | t = 1, ..., T(i)} having a series structure and a tree structure (step S202).
- the model parameters are learned so as to minimize the loss function (step S204).
- as described above, the security determination device 10 according to the present embodiment can verify the security requirements of a cryptographic protocol using a machine learning model. In doing so, the security determination device 10 extracts not only the features of the series structure of the messages constituting the cryptographic protocol but also the features of the tree structure of each message, and then verifies the security requirements of the cryptographic protocol from these features. This makes it possible to verify the security requirements of cryptographic protocols with higher accuracy than conventional methods.
- in the above embodiment, the safety determination device 10 is realized by one device, but the invention is not limited to this; for example, it may be realized by a plurality of devices connected to one another via a communication network.
- in that case, the safety determination device 10 may be called a safety determination system or the like.
- in the above embodiment, the same device is used as the safety determination device 10 at the time of inference and at the time of learning, but the invention is not limited to this; the safety determination device 10 at the time of inference and the safety determination device 10 at the time of learning may be different devices.
- Reference 1: International Standard ISO: Information technology - Security techniques - Key management - Part 3: Mechanisms using asymmetric techniques, ISO/IEC 11770-3, 2015, 3rd edition.
- Reference 2: Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1556-1566, Beijing, China, July 2015. Association for Computational Linguistics.
- Reference 3: Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, Vol. 9, No. 8, pp. 1735-1780, November 1997.
- 10 Safety determination device, 101 Input device, 102 Display device, 103 External I/F, 103a Recording medium, 104 Communication I/F, 105 RAM, 106 ROM, 107 Auxiliary storage device, 108 Processor, 109 Bus, 201 Structure conversion unit, 202 Safety verification processing unit, 203 Safety determination unit, 204 Learning unit, 211 Tree structure feature extraction unit, 212 Series structure feature extraction unit, 213 Classification unit
Abstract
A security determination system according to one aspect of the present disclosure comprises: a conversion unit configured so as to convert an encryption protocol configured by one or more messages into series data of data in which the messages are expressed in a tree structure; a security verification unit configured so as to use the series data as input, and output a security verification value representing a verification result regarding a prescribed security requirement of the encryption protocol by using a machine learning model having trained model parameters; and a determination unit configured so as to determine, on the basis of the security verification value, whether the encryption protocol satisfies the security requirement.
Description
The present disclosure relates to a safety determination system, a safety determination device, a method, and a program.
In recent years, as cryptographic protocols have become more complex, automatic verification tools have been used to verify their security. Major automatic verification tools use an automatic verification technique called formal verification, but formal verification is not guaranteed to terminate, and even when it does terminate it can require an enormous amount of verification time. In response, research has also been conducted on verifying the security of cryptographic protocols using machine learning (for example, Non-Patent Documents 1 and 2). Methods using machine learning have shorter verification times than formal verification and are guaranteed to terminate, but they have the problem that their accuracy is not always 100%.
In the existing methods described in Non-Patent Documents 1 and 2 above, a cryptographic protocol is converted into data with a series structure and a machine learning model is built on that data.
However, because the existing methods apply an independent conversion process to each message element of the cryptographic protocol, the structure of the messages is lost, and as a result the accuracy of the security verification has been low.
The present disclosure has been made in view of the above points and provides a technique for verifying the security of cryptographic protocols with high accuracy using machine learning.
A security determination system according to one aspect of the present disclosure includes: a conversion unit configured to convert a cryptographic protocol composed of one or more messages into series data in which each message is expressed as a tree structure; a security verification unit configured to take the series data as input and output a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol, using a machine learning model having learned model parameters; and a determination unit configured to determine, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
A technique for verifying the security of cryptographic protocols with high accuracy using machine learning is thereby provided.
An embodiment of the present invention is described below. The following describes a safety determination device 10 that builds a machine learning model capable of verifying the security requirements of cryptographic protocols with high accuracy and then uses that machine learning model to determine whether a desired cryptographic protocol satisfies the security requirements.
In the following, a key exchange protocol or an authentication protocol is assumed as the cryptographic protocol; for a key exchange protocol, confidentiality is assumed as its security requirement, and for an authentication protocol, authentication is assumed as its security requirement. However, the cryptographic protocols and their security requirements are not limited to these, and the present embodiment is similarly applicable to other cryptographic protocols and their security requirements.
<Preparation>
Terms, concepts, and definitions needed to explain this embodiment are prepared below.
≪Target cryptographic protocols≫
This embodiment targets cryptographic protocols that satisfy (1) to (3) below. Note that an entity that executes the cryptographic protocol is called a party, and one execution of a cryptographic protocol by the parties (that is, from the start of the message exchange until authentication, session key exchange, or the like is performed) is called a session.
(1) The protocol is executed among s parties P1, ..., Ps (where s is an integer of 2 or more).
(2) Each party is allocated an ID, a secret key used over the long term (long-term secret key), a public key corresponding to the long-term secret key, and a secret key used for a short period only within a session (ephemeral secret key).
(3) The parties exchange messages and provide functions such as authentication and key exchange.
≪How the cryptographic protocol is described≫
Messages and party behaviors are defined as follows, and a cryptographic protocol is defined as a sequence of party behaviors. Note that a message is sometimes called a protocol message.
- Message elements
Figure 1 shows an example of the elements that constitute a message and their notation. In FIG. 1, P represents any party; that is, P ∈ {P1, ..., Ps}. In the following, the set of elements constituting a message is written AN = {a1, ..., aN} (where N is the total number of elements constituting a message). The elements constituting AN (the elements constituting messages) are defined according to the target cryptographic protocol. For example, when the target cryptographic protocol is an authentication protocol, an AN that does not include the session key SKP shared by party P may be defined.
- Message composition
Figure 2 shows an example of the party operations used when composing a message and their notation. In FIG. 2, →m denotes an arbitrary sequence of messages (constructed from the message elements according to the composition method). The operations constituting FN (the operations used to compose messages) are defined according to the target cryptographic protocol; for example, when the target cryptographic protocol is a key exchange protocol, an FN that does not include sign(→m, lsk) may be defined.
- Party behavior
Figure 3 shows an example of party behaviors within the protocol and their notation. In FIG. 3, →m denotes an arbitrary sequence of messages. In the following, the set of party behaviors in the protocol is written BN = {b1, ..., bL} (where L is the total number of party behaviors in the protocol). The behaviors constituting BN are defined according to the target cryptographic protocol. For example, when the target cryptographic protocol is an authentication protocol, a BN that does not include acceptI(→m) or acceptR(→m) may be defined.
<Modeling cryptographic protocols>
With the above preparation, a cryptographic protocol is represented by a sequence of messages exchanged between the parties. Each message is composed by first determining the party behavior for that message and then applying elements a in AN and operations f in FN to a message or a sequence of messages. For example, (a) and (b) below are both messages.
(a) sendItoR(eskI)
(b) sendRtoI(IDI, eskI, eskR, aencI(IDR, SK), signR(IDI, eskI, eskR, aencI(IDR, SK)))
Here, aencI(·) denotes aenc(·; pkI) (where pkI is the public key of the initiator I), and signR(·) denotes sign(·; lskR) (where lskR is the long-term secret key of the responder R).
Note that (a) and (b) above are messages of the authentication protocol (ISO/IEC 11770-3 key distribution mechanism 4) described in Reference 1.
Each message of a cryptographic protocol can therefore be expressed as a tree structure (syntax tree) in which the party behavior is the root node, the operations are internal nodes, and the message elements are leaf nodes. For example, the message in (a) above can be expressed as the tree structure in the left diagram of FIG. 4, and the message in (b) above as the tree structure in the right diagram of FIG. 4.
Accordingly, a cryptographic protocol is expressed as a sequence of messages, each represented as a tree. That is, if each tree-structured message is mt and the number of message exchanges in the cryptographic protocol is T, the cryptographic protocol can be expressed as data {mt | t = 1, ..., T} having both a series structure and a tree structure.
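As a concrete illustration of this representation, the following is a minimal Python sketch. The Node class, its field names, and the label strings are assumptions made for illustration only; the patent does not specify a concrete data structure.

```python
# A minimal sketch (illustrative only) of representing protocol messages as
# syntax trees and a protocol as the sequence of those trees.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str                      # party behavior (root), operation, or message element
    children: List["Node"] = field(default_factory=list)


# Message (a): sendItoR(esk_I)
m1 = Node("sendItoR", [Node("esk_I")])

# Message (b): sendRtoI(ID_I, esk_I, esk_R, aenc_I(ID_R, SK), sign_R(...))
aenc = Node("aenc_I", [Node("ID_R"), Node("SK")])
sig = Node("sign_R", [Node("ID_I"), Node("esk_I"), Node("esk_R"), aenc])
m2 = Node("sendRtoI", [Node("ID_I"), Node("esk_I"), Node("esk_R"), aenc, sig])

# The protocol as a sequence of tree-structured messages
protocol = [m1, m2]
```

Here protocol plays the role of {mt | t = 1, ..., T} with T = 2.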
By expressing each message as a tree structure, the grammatical structure within and between the messages (in other words, the structural information of the cryptographic protocol) can be taken into account, which makes it possible to build a more accurate machine learning model. For example, consider the following two messages (c) and (d).
(c) sendItoR(signI(aencR(IDI, SK)))
(d) sendItoR(signR(aencI(IDI, SK)))
In this case, the existing methods (for example, Non-Patent Documents 1 and 2) convert both messages into the same vector, so the structural information of the cryptographic protocol is lost. By contrast, with the tree-structure representation the two messages (c) and (d) are distinguished, and the structural information of the cryptographic protocol can be taken into account. As a result, a machine learning model that reflects the structural information of the cryptographic protocol is built, enabling more accurate security verification.
<Safety verification model>
The following describes the configuration of a machine learning model that takes as input a cryptographic protocol expressed as data {mt | t = 1, ..., T} having a series structure and a tree structure, and outputs a safety verification value representing the verification result of the security requirement of that cryptographic protocol. Hereinafter, this machine learning model is called the "safety verification model."
FIG. 5 shows a configuration example of the safety verification model 1000. As shown in FIG. 5, the safety verification model 1000 is composed of a TreeLSTM 1100, an LSTM 1200, and a linear classifier 1300.
The TreeLSTM 1100 is a neural network model based on the Child-Sum Tree-LSTM (Reference 2), a model that extends the LSTM (Long Short-Term Memory) to tree structures. In the TreeLSTM 1100, the latent vector of each node in the tree is computed from the latent vectors of its child nodes. Specifically, if C(j) is the set of child nodes of node j of message mt, the latent vector hj ∈ Rn and memory cell cj ∈ Rn of node j (where n is the dimensionality of hj and cj) are computed and updated by the following formula.
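The formula itself is an equation image that is not reproduced in this text. For reference, the standard Child-Sum Tree-LSTM update from Reference 2, on which the TreeLSTM 1100 is stated to be based, is sketched below; the per-gate weight matrices W(·), U(·) and biases b(·) are the learned parameters (the text explicitly names Wc, Uc, and b), σ is the sigmoid function, and ⊙ denotes element-wise multiplication.

```latex
\begin{aligned}
\tilde{h}_j &= \textstyle\sum_{k \in C(j)} h_k, \\
i_j &= \sigma\bigl(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)}\bigr), \\
f_{jk} &= \sigma\bigl(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\bigr), \\
o_j &= \sigma\bigl(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)}\bigr), \\
u_j &= \tanh\bigl(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)}\bigr), \\
c_j &= i_j \odot u_j + \textstyle\sum_{k \in C(j)} f_{jk} \odot c_k, \\
h_j &= o_j \odot \tanh(c_j).
\end{aligned}
```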
By feeding the cryptographic protocol expressed as data {mt | t = 1, ..., T} with a series structure and a tree structure into the TreeLSTM 1100, the features of the tree structure of each message mt (t = 1, ..., T) are obtained as the sequence hm_1, ..., hm_T of latent vectors of the root (behavior) nodes. This sequence of latent vectors hm_1, ..., hm_T is input to the LSTM 1200. Here, "m_1", ..., "m_T" denote "m1", ..., "mT", respectively.
The LSTM 1200 is an LSTM (Reference 3), one of the improved models of the RNN (Recurrent Neural Network). The LSTM 1200 takes the sequence of latent vectors hm_1, ..., hm_T obtained by the TreeLSTM 1100 as input and outputs the latent vector h' = h'm_T ∈ Rn, where h'j denotes a latent vector of the LSTM. Note that the LSTM requires past latent vectors when computing h'j; if a past latent vector has not yet been computed, it may, for example, be defined as the zero vector. Since the LSTM 1200 is the same as existing LSTMs, a detailed description is omitted; see Reference 3 as necessary.
The linear classifier 1300 takes the latent vector h' obtained by the LSTM 1200 as input and outputs the safety verification value. That is, the linear classifier 1300 linearly transforms the latent vector h' into a vector whose dimensionality equals the number of labels related to the security requirement, and then takes the softmax function value for the corresponding element of that vector as the safety verification value. The safety verification value is therefore expressed as a probability.
For example, when there are two labels related to the security requirement, "satisfies confidentiality" and "does not satisfy confidentiality," the latent vector h' is linearly transformed into a two-dimensional vector by the linear classifier 1300. In that case, the softmax function value for the element corresponding to the label "satisfies confidentiality" may be used as the safety verification value. Then, for example, if the safety verification value exceeds a predetermined threshold, the cryptographic protocol is determined to satisfy confidentiality, and otherwise it is determined not to satisfy confidentiality. The following description assumes that there are two labels related to the security requirement, "satisfies the security requirement" and "does not satisfy the security requirement." However, this is only an example; for instance, "satisfies confidentiality" and "satisfies authenticity" may be used as the labels, or "satisfies confidentiality," "does not satisfy confidentiality," "satisfies authenticity," and "does not satisfy authenticity" may be used.
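To make the flow from message trees to the safety verification value concrete, the following is a minimal PyTorch sketch of a model with this TreeLSTM → LSTM → linear classifier structure. It is an illustration under assumptions (the vocabulary of labels, the dimensions d and n, and the Node representation are not specified by the patent), not the patent's reference implementation.

```python
# A minimal sketch of the safety verification model: a Child-Sum Tree-LSTM over
# each message tree, an LSTM over the resulting message vectors, and a linear
# classifier whose softmax output is the safety verification value.
import torch
import torch.nn as nn


class Node:
    """Syntax-tree node: party behavior (root), operation, or message element."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)


class ChildSumTreeLSTM(nn.Module):
    def __init__(self, d, n):
        super().__init__()
        self.n = n
        self.W_iou = nn.Linear(d, 3 * n)              # input/output/candidate gates
        self.U_iou = nn.Linear(n, 3 * n, bias=False)
        self.W_f = nn.Linear(d, n)                    # forget gate (one per child)
        self.U_f = nn.Linear(n, n, bias=False)

    def node_forward(self, x_j, child_h, child_c):
        h_tilde = torch.stack(child_h).sum(0) if child_h else x_j.new_zeros(self.n)
        i, o, u = (self.W_iou(x_j) + self.U_iou(h_tilde)).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        c_j = i * u
        for h_k, c_k in zip(child_h, child_c):
            f_jk = torch.sigmoid(self.W_f(x_j) + self.U_f(h_k))
            c_j = c_j + f_jk * c_k
        h_j = o * torch.tanh(c_j)
        return h_j, c_j

    def forward(self, node, embed):
        # Compute latent vectors bottom-up over the syntax tree.
        states = [self.forward(ch, embed) for ch in node.children]
        return self.node_forward(embed(node.label),
                                 [h for h, _ in states], [c for _, c in states])


class SafetyVerificationModel(nn.Module):
    def __init__(self, vocab, d=32, n=64, num_labels=2):
        super().__init__()
        self.index = {lab: i for i, lab in enumerate(vocab)}
        self.embed = nn.Embedding(len(vocab), d)      # vectorization x_j of node labels
        self.tree = ChildSumTreeLSTM(d, n)            # corresponds to the TreeLSTM 1100
        self.seq = nn.LSTM(n, n, batch_first=True)    # corresponds to the LSTM 1200
        self.cls = nn.Linear(n, num_labels)           # corresponds to the linear classifier 1300

    def forward(self, protocol):                      # protocol: [m_1, ..., m_T] root Nodes
        embed = lambda lab: self.embed(torch.tensor(self.index[lab]))
        h_msgs = [self.tree(m, embed)[0] for m in protocol]   # h_{m_1}, ..., h_{m_T}
        out, _ = self.seq(torch.stack(h_msgs).unsqueeze(0))
        h_prime = out[0, -1]                          # h' = h'_{m_T}
        return torch.softmax(self.cls(h_prime), dim=-1)  # safety verification values
```

Under these assumptions, SafetyVerificationModel(vocab)(protocol) returns the vector of softmax values for a protocol built from Node trees such as those sketched earlier.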
<Learning the safety verification model>
The parameters of the safety verification model 1000 to be learned are the weight matrices Wc, Uc and bias b of the TreeLSTM 1100, the weight matrices and biases of the LSTM 1200, and the weight matrix and bias used for the linear transformation in the linear classifier 1300. Hereinafter, these parameters are referred to as model parameters.
When learning the safety verification model 1000, a learning data set is given, consisting of pairs of a cryptographic protocol and a security evaluation label representing the result of evaluating its security requirement. Hereinafter, this learning data set is written D = {(Prti, yi) | i = 1, ..., |D|}, where Prti is the i-th cryptographic protocol and yi is its security evaluation label. The security evaluation label yi takes, for example, the value 1 when the corresponding security requirement is satisfied and 0 otherwise.
Let p = p(y | Prti) be the probability distribution of the safety verification value obtained when Prti is converted into data with a series structure and a tree structure and input to the safety verification model 1000. Then the cross-entropy function for the security evaluation label yi can be defined as l(p, yi) = -log p(yi | Prti). Since 0 < p(yi | Prti) < 1, we have l(p, yi) > 0. Thus, the smaller p(yi | Prti) is, the larger l(p, yi) becomes, and l(p, yi) expresses the magnitude of the error between the prediction of the safety verification model 1000 and the true label. The average error over the learning data set D, that is, (l(p, y1) + ... + l(p, y|D|)) / |D|, can therefore be used as the loss function.
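Written compactly, the loss used for learning is the average cross-entropy over D:

```latex
\ell(p, y_i) = -\log p(y_i \mid \mathrm{Prt}_i), \qquad
\mathcal{L}(D) = \frac{1}{|D|} \sum_{i=1}^{|D|} \ell(p, y_i).
```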
The model parameters are learned so as to minimize the above loss function. A known method such as a gradient method may be used to minimize the loss function. In the following, the case of learning the model parameters is referred to as "learning time," and the case of obtaining a safety verification value for a desired cryptographic protocol with the safety verification model using the learned model parameters is referred to as "inference time."
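As an illustration only, a minimal training loop under the assumptions of the earlier model sketch could look like the following; the optimizer choice, learning rate, and epoch count are not specified by the patent (any gradient method may be used).

```python
# A minimal sketch of learning the model parameters by minimizing the average
# cross-entropy over the learning data set D with a gradient method.
import torch


def train(model, dataset, epochs=100, lr=1e-2):
    # dataset: list of (protocol, y) pairs, where protocol = [m_1, ..., m_T] is a
    # list of tree-structured messages and y is the security evaluation label
    # (1 if the security requirement is satisfied, 0 otherwise).
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # any gradient method works
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.zeros(())
        for protocol, y in dataset:
            p = model(protocol)                # softmax probabilities over the labels
            loss = loss - torch.log(p[y])      # cross-entropy l(p, y_i)
        loss = loss / len(dataset)             # average over D
        loss.backward()
        opt.step()
    return model
```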
<Hardware configuration example of the safety determination device 10>
FIG. 6 shows an example of the hardware configuration of the safety determination device 10 according to this embodiment. As shown in FIG. 6, the safety determination device 10 according to this embodiment includes an input device 101, a display device 102, an external I/F 103, a communication I/F 104, a RAM (Random Access Memory) 105, a ROM (Read Only Memory) 106, an auxiliary storage device 107, and a processor 108. These pieces of hardware are communicably connected to one another via a bus 109.
The input device 101 is, for example, a keyboard, a mouse, a touch panel, or physical buttons. The display device 102 is, for example, a display or a display panel. Note that the safety determination device 10 need not include at least one of the input device 101 and the display device 102.
The external I/F 103 is an interface to an external device such as a recording medium 103a. The safety determination device 10 can read from and write to the recording medium 103a via the external I/F 103. Examples of the recording medium 103a include a flexible disk, a CD (Compact Disc), a DVD (Digital Versatile Disk), an SD memory card (Secure Digital memory card), and a USB (Universal Serial Bus) memory card.
The communication I/F 104 is an interface for connecting the safety determination device 10 to a communication network. The RAM 105 is a volatile semiconductor memory (storage device) that temporarily holds programs and data. The ROM 106 is a nonvolatile semiconductor memory (storage device) that retains programs and data even when the power is turned off. The auxiliary storage device 107 is, for example, a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or a flash memory. The processor 108 is, for example, an arithmetic device such as a CPU (Central Processing Unit).
Because the safety determination device 10 according to this embodiment has the hardware configuration shown in FIG. 6, it can realize the various processes described later. Note that the hardware configuration shown in FIG. 6 is an example, and the hardware configuration of the safety determination device 10 is not limited to it. For example, the safety determination device 10 may include a plurality of auxiliary storage devices 107 and a plurality of processors 108, may omit some of the illustrated hardware, or may include various hardware other than that illustrated.
<安全性判定装置10の機能構成例>
以下、本実施形態に係る安全性判定装置10の機能構成例について説明する。なお、推論時には安全性要件を満たすか否かを判定したい暗号プロトコルPrtが安全性判定装置10に与えられ、学習時には学習データ集合Dが安全性判定装置10に与えられるものとする。 <Example of functional configuration ofsafety determination device 10>
An example of the functional configuration of thesafety determination device 10 according to the present embodiment will be described below. It is assumed that the cryptographic protocol Prt to be determined whether or not it satisfies the security requirements is given to the security determination device 10 at the time of inference, and the learning data set D is provided to the security determination device 10 at the time of learning.
以下、本実施形態に係る安全性判定装置10の機能構成例について説明する。なお、推論時には安全性要件を満たすか否かを判定したい暗号プロトコルPrtが安全性判定装置10に与えられ、学習時には学習データ集合Dが安全性判定装置10に与えられるものとする。 <Example of functional configuration of
An example of the functional configuration of the
≪推論時≫
推論時における安全性判定装置10の機能構成例を図7に示す。図7に示すように、推論時における安全性判定装置10は、構造変換部201と、安全性検証処理部202と、安全性判定部203とを有する。これら各部は、例えば、安全性判定装置10にインストールされた1以上のプログラムが、プロセッサ108に実行させる処理により実現される。 ≪At the time of inference≫
FIG. 7 shows an example of the functional configuration of thesafety determination device 10 at the time of inference. As shown in FIG. 7, the safety determination device 10 at the time of inference includes a structure conversion section 201, a safety verification processing section 202, and a safety determination section 203. Each of these units is realized, for example, by one or more programs installed in the safety determination device 10 causing the processor 108 to execute the process.
推論時における安全性判定装置10の機能構成例を図7に示す。図7に示すように、推論時における安全性判定装置10は、構造変換部201と、安全性検証処理部202と、安全性判定部203とを有する。これら各部は、例えば、安全性判定装置10にインストールされた1以上のプログラムが、プロセッサ108に実行させる処理により実現される。 ≪At the time of inference≫
FIG. 7 shows an example of the functional configuration of the
The structure conversion unit 201 converts the given cryptographic protocol Prt into data {m_t | t = 1, ..., T} having a sequence structure and a tree structure. Here, each m_t is a tree-structured representation of the t-th message constituting the cryptographic protocol Prt. Hereinafter, a cryptographic protocol expressed as data having a sequence structure and a tree structure is written as Prt = {m_t | t = 1, ..., T}.
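As a purely illustrative sketch (the `Node` class and the toy two-message exchange below are assumptions, not part of the embodiment), the converted form can be pictured as a Python list of message trees in transmission order, with the sending behavior at the root, operations at internal nodes, and message elements at the leaves:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node of a message tree: a behavior (root), an operation (internal
    node), or a message element (leaf)."""
    label: str
    children: List["Node"] = field(default_factory=list)

def send(direction: str, payload: "Node") -> "Node":
    # Root node: the party's behavior for this message (here, a transmission).
    return Node(f"send:{direction}", [payload])

# Toy protocol Prt = {m_t | t = 1, ..., T} with T = 2 messages.
m1 = send("A->B", Node("pk_encrypt", [                 # operation (internal node)
    Node("pk_B"),                                      # element (leaf)
    Node("concat", [Node("nonce_A"), Node("id_A")]),   # operation over two elements
]))
m2 = send("B->A", Node("pk_encrypt", [
    Node("pk_A"),
    Node("concat", [Node("nonce_A"), Node("nonce_B")]),
]))

protocol = [m1, m2]   # sequence structure over tree-structured messages
```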
The security verification processing unit 202 is realized by the security verification model 1000 having learned model parameters; it takes the cryptographic protocol Prt = {m_t | t = 1, ..., T} as input and outputs the security verification value of that cryptographic protocol. The security verification processing unit 202 includes a tree structure feature extraction unit 211, a sequence structure feature extraction unit 212, and a classification unit 213.
The tree structure feature extraction unit 211 is realized by the TreeLSTM 1100 having learned model parameters; it takes Prt = {m_t | t = 1, ..., T} as input and outputs a sequence of latent vectors h_{m_1}, ..., h_{m_T}.
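For concreteness, the following is a minimal NumPy sketch of the Child-Sum Tree-LSTM update of Reference 2, applied recursively to one message tree to obtain its latent vector h_{m_t}. The node type, toy embedding function, dimension D = 16, and random parameters are illustrative assumptions; the embodiment only specifies that the TreeLSTM 1100 has learned model parameters.

```python
import numpy as np
from collections import namedtuple

Node = namedtuple("Node", ["label", "children"])  # minimal message-tree node
D = 16                                            # hidden size (illustrative)
rng = np.random.default_rng(0)
W = {g: rng.normal(scale=0.1, size=(D, D)) for g in "ifou"}   # input weights
U = {g: rng.normal(scale=0.1, size=(D, D)) for g in "ifou"}   # recurrent weights
b = {g: np.zeros(D) for g in "ifou"}                          # biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def embed(label):
    # Toy deterministic embedding of a node label (stand-in for a learned embedding).
    return np.random.default_rng(sum(label.encode())).normal(size=D)

def tree_lstm(node):
    """Child-Sum Tree-LSTM: return (h, c) for one message-tree node."""
    x = embed(node.label)
    child_states = [tree_lstm(ch) for ch in node.children]
    h_tilde = sum((h for h, _ in child_states), np.zeros(D))   # sum of child hidden states
    i = sigmoid(W["i"] @ x + U["i"] @ h_tilde + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_tilde + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_tilde + b["u"])
    f = [sigmoid(W["f"] @ x + U["f"] @ h_k + b["f"]) for h_k, _ in child_states]
    c = i * u + sum((f_k * c_k for f_k, (_, c_k) in zip(f, child_states)), np.zeros(D))
    h = o * np.tanh(c)
    return h, c

leaf = lambda s: Node(s, [])
m1 = Node("send:A->B", [Node("pk_encrypt",
        [leaf("pk_B"), Node("concat", [leaf("nonce_A"), leaf("id_A")])])])
h_m1, _ = tree_lstm(m1)   # latent vector h_{m_1} for the first message
```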
The sequence structure feature extraction unit 212 is realized by the LSTM 1200 having learned model parameters; it takes the sequence of latent vectors h_{m_1}, ..., h_{m_T} as input and outputs a latent vector h' = h'_{m_T}.
The classification unit 213 is realized by the linear classifier 1300 having learned model parameters; it takes the latent vector h' as input and outputs the security verification value.
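A hedged PyTorch sketch of this sequence-level half of the model follows. It assumes the per-message tree latents h_{m_1}, ..., h_{m_T} have already been stacked into a (batch, T, D) tensor; the sizes D = 16 and H = 32 and the two-class output (whose "satisfies" probability serves as the security verification value) are illustrative choices, not specified by the embodiment.

```python
import torch
import torch.nn as nn

D, H, NUM_CLASSES = 16, 32, 2   # illustrative sizes

class SequenceClassifier(nn.Module):
    """LSTM over the per-message latent vectors, then a linear classifier
    with Softmax (a sketch of the sequence and classification parts)."""
    def __init__(self, d_in: int, d_hidden: int, n_classes: int):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d_in, hidden_size=d_hidden, batch_first=True)
        self.linear = nn.Linear(d_hidden, n_classes)

    def forward(self, h_messages: torch.Tensor) -> torch.Tensor:
        # h_messages: (batch, T, d_in) -- one tree-level latent per message.
        out, _ = self.lstm(h_messages)
        h_last = out[:, -1, :]                          # h' = h'_{m_T}
        return torch.softmax(self.linear(h_last), dim=-1)

# Example: a protocol of T = 3 messages, already encoded by the tree encoder.
h_seq = torch.randn(1, 3, D)
probs = SequenceClassifier(D, H, NUM_CLASSES)(h_seq)
verification_value = probs[0, 1].item()   # e.g. probability of the "satisfies" class
```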
The security determination unit 203 compares the security verification value with a predetermined threshold; for example, if the security verification value exceeds the predetermined threshold, it determines that the cryptographic protocol Prt satisfies the security requirement, and otherwise it determines that the cryptographic protocol Prt does not satisfy the security requirement. This determination result (security determination result) is output to a predetermined output destination (for example, the auxiliary storage device 107 or another device connected via a communication network).
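The determination itself reduces to a comparison with the threshold; a trivial sketch (the value 0.5 is an arbitrary assumption, as the embodiment only speaks of a predetermined threshold):

```python
THRESHOLD = 0.5  # assumed value; the embodiment only requires "a predetermined threshold"

def determine(verification_value: float, threshold: float = THRESHOLD) -> bool:
    """True if the cryptographic protocol is judged to satisfy the security requirement."""
    return verification_value > threshold
```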
≪At the time of learning≫
FIG. 8 shows an example of the functional configuration of the safety determination device 10 at the time of learning. As shown in FIG. 8, the safety determination device 10 at the time of learning includes a structure conversion unit 201, a security verification processing unit 202, and a learning unit 204. Each of these units is realized, for example, by processing that one or more programs installed in the safety determination device 10 cause the processor 108 to execute.
The structure conversion unit 201 converts each cryptographic protocol Prt_i included in the given learning data set D into data {m_t^(i) | t = 1, ..., T^(i)} having a sequence structure and a tree structure. Here, each m_t^(i) is a tree-structured representation of the t-th message constituting the cryptographic protocol Prt_i. Hereinafter, a cryptographic protocol expressed as data having a sequence structure and a tree structure is written as Prt_i = {m_t^(i) | t = 1, ..., T^(i)}.
The security verification processing unit 202 is realized by the security verification model 1000 whose model parameters have not yet been learned; it takes the cryptographic protocol Prt_i = {m_t^(i) | t = 1, ..., T^(i)} as input and outputs a probability distribution p = p(y | Prt_i) of security evaluation values. The security verification processing unit 202 includes a tree structure feature extraction unit 211, a sequence structure feature extraction unit 212, and a classification unit 213.
The tree structure feature extraction unit 211 is realized by the TreeLSTM 1100; it takes Prt_i = {m_t^(i) | t = 1, ..., T^(i)} as input and outputs a sequence of latent vectors h_{m_1}, ..., h_{m_T}.
The sequence structure feature extraction unit 212 is realized by the LSTM 1200; it takes the sequence of latent vectors h_{m_1}, ..., h_{m_T} as input and outputs a latent vector h' = h'_{m_T}.
The classification unit 213 is realized by the linear classifier 1300; it takes the latent vector h' as input and outputs the probability distribution p = p(y | Prt_i) of security verification values.
The learning unit 204 learns the model parameters so as to minimize a loss function, using the probability distribution p = p(y | Prt_i) output from the security verification processing unit 202 and the security evaluation label y_i included in the given learning data set D.
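The embodiment does not name the loss function; a common choice for a predicted distribution p(y | Prt_i) and a label y_i is the cross-entropy. Below is a hedged PyTorch sketch of a single update step, with a stand-in linear model in place of the full TreeLSTM/LSTM/classifier stack (sizes, optimizer, and data are illustrative assumptions):

```python
import torch
import torch.nn as nn

D, NUM_CLASSES = 16, 2
model = nn.Linear(D, NUM_CLASSES)          # stand-in for the full verification model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()            # cross-entropy over p(y | Prt_i)

features = torch.randn(4, D)               # stand-in encodings of 4 protocols Prt_i
labels = torch.tensor([1, 0, 1, 1])        # security evaluation labels y_i

logits = model(features)                   # CrossEntropyLoss expects raw logits
loss = loss_fn(logits, labels)             # (it applies log-softmax internally)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```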
<Processing details>
The details of the processes executed by the safety determination device 10 according to this embodiment will be described below. The security determination process is executed by the safety determination device 10 at the time of inference, and the learning process is executed by the safety determination device 10 at the time of learning.
≪Security determination process≫
The security determination process according to this embodiment will be described with reference to FIG. 9.
The structure conversion unit 201 converts the given cryptographic protocol Prt into data {m_t | t = 1, ..., T} having a sequence structure and a tree structure (step S101).
Next, the security verification processing unit 202 takes the cryptographic protocol Prt = {m_t | t = 1, ..., T} as input and outputs its security verification value by the security verification model 1000 having learned model parameters (step S102).
Then, the security determination unit 203 determines whether the cryptographic protocol Prt satisfies the security requirement, using the security verification value output in step S102 and a predetermined threshold (step S103).
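Steps S101 to S103 chain together as a simple pipeline. The sketch below is only an orchestration outline with stand-in helpers (`convert_to_message_trees`, `_StubModel`, and the 0.5 threshold are illustrative, not APIs of the embodiment):

```python
class _StubModel:
    def verify(self, trees) -> float:
        return 0.9                          # stand-in for the learned verification model

def convert_to_message_trees(prt):
    return list(prt)                        # stand-in for the structure conversion of S101

def determine_security(prt, model, threshold: float) -> bool:
    """S101-S103 in order: conversion, verification, threshold decision."""
    trees = convert_to_message_trees(prt)   # S101
    value = model.verify(trees)             # S102: security verification value
    return value > threshold                # S103: security determination

print(determine_security(["m1", "m2"], _StubModel(), 0.5))  # -> True
```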
≪Learning process≫
The learning process according to this embodiment will be described with reference to FIG. 10.
The learning unit 204 initializes the model parameters (step S201). The learning unit 204 may initialize the model parameters by a known method.
The following steps S202 and S203 are repeatedly executed for each i = 1, ..., |D|.
The structure conversion unit 201 converts the cryptographic protocol Prt_i included in the given learning data set D into data {m_t^(i) | t = 1, ..., T^(i)} having a sequence structure and a tree structure (step S202).
Next, the security verification processing unit 202 takes the cryptographic protocol Prt_i = {m_t^(i) | t = 1, ..., T^(i)} as input and outputs a probability distribution p = p(y | Prt_i) of security evaluation values by the security verification model 1000 whose model parameters have not yet been learned (step S203).
Then, the learning unit 204 learns the model parameters so as to minimize a loss function, using the probability distribution p = p(y | Prt_i) obtained in step S203 and the security evaluation label y_i corresponding to the cryptographic protocol Prt_i (step S204).
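Putting steps S201 to S204 together, learning is an iteration over the data set D. A compact, hedged PyTorch sketch with stand-in data and model (epoch count, optimizer, and sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

D_FEAT, NUM_CLASSES, EPOCHS = 16, 2, 5      # illustrative sizes
model = nn.Linear(D_FEAT, NUM_CLASSES)      # stand-in for the verification model (S201: init)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stand-in learning data set D: (encoded protocol Prt_i, label y_i) pairs.
dataset = [(torch.randn(D_FEAT), torch.tensor(1)),
           (torch.randn(D_FEAT), torch.tensor(0))]

for epoch in range(EPOCHS):
    for x_i, y_i in dataset:                # steps S202-S203 for each i = 1, ..., |D|
        logits = model(x_i.unsqueeze(0))    # forward pass -> p(y | Prt_i) (as logits)
        loss = loss_fn(logits, y_i.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()                     # S204: update parameters to minimize the loss
        optimizer.step()
```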
<Summary>
As described above, the safety determination device 10 according to this embodiment can verify the security requirement of a cryptographic protocol with a machine learning model. In doing so, the safety determination device 10 according to this embodiment extracts not only the features of the sequence structure of the messages constituting the cryptographic protocol but also the features of the tree structure of each message, and verifies the security requirement of the cryptographic protocol from these features. This makes it possible to verify the security requirements of cryptographic protocols with higher accuracy than conventional methods.
In the above embodiment, the case in which the safety determination device 10 is realized by a single device has been described, but the embodiment is not limited to this; for example, the safety determination device 10 may be realized by a plurality of devices connected to one another via a communication network. In this case, the safety determination device 10 may also be called a safety determination system or the like.
In the above embodiment, the same device serves as the safety determination device 10 at the time of inference and at the time of learning, but the embodiment is not limited to this; for example, the safety determination device 10 at the time of inference and the safety determination device 10 at the time of learning may be different devices.
The present invention is not limited to the specifically disclosed embodiments above, and various modifications, changes, combinations with known techniques, and the like are possible without departing from the scope of the claims.
[References]
Reference 1: International Standard ISO/IEC 11770-3: Information technology - Security techniques - Key management - Part 3: Mechanisms using asymmetric techniques, 3rd edition, 2015.
Reference 2: Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1556-1566, Beijing, China, July 2015. Association for Computational Linguistics.
Reference 3: Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, Vol. 9, No. 8, pp. 1735-1780, 1997.
[Explanation of Reference Signs]
10 Safety determination device
101 Input device
102 Display device
103 External I/F
103a Recording medium
104 Communication I/F
105 RAM
106 ROM
107 Auxiliary storage device
108 Processor
109 Bus
201 Structure conversion unit
202 Security verification processing unit
203 Security determination unit
204 Learning unit
211 Tree structure feature extraction unit
212 Sequence structure feature extraction unit
213 Classification unit
Claims (7)
- A security determination system comprising:
a conversion unit configured to convert a cryptographic protocol composed of one or more messages into sequence data of data expressing the messages in a tree structure;
a security verification unit configured to take the sequence data as input and output, by a machine learning model having learned model parameters, a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol; and
a determination unit configured to determine, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
- The security determination system according to claim 1, wherein, when the cryptographic protocol is composed of T messages, the conversion unit converts, for each t (t ∈ {1, ..., T}), the t-th message into tree-structured data in which the behavior of a party with respect to the message is expressed by a root node, the elements of the message are expressed by leaf nodes, and an operation on either an element or the result of an operation on an element is expressed by an internal node, and uses the sequence of the converted data as the sequence data.
- The security determination system according to claim 2, wherein
the behavior includes at least one of message transmission from an initiator to a responder and message transmission from the responder to the initiator,
the elements include at least one of an ID of the initiator or the responder, an ephemeral secret key of the initiator or the responder, a long-term secret key of the initiator or the responder, a public key of the initiator or the responder, a timestamp of the initiator or the responder, a secret key shared in advance between the initiator and the responder, and a session key shared between the initiator and the responder, and
the operation includes at least one of an operation of creating a symmetric-key ciphertext of at least one sequence of the elements using the secret key, an operation of creating a public-key ciphertext of at least one sequence of the elements using the public key, an operation of creating a digital signature on at least one sequence of the elements using the long-term secret key, an operation of computing a hash value of at least one sequence of the elements, an operation of concatenating two of the elements, an operation of raising one of two of the elements to the power of the other, an operation of adding two of the elements, and an operation of multiplying two of the elements.
- The security determination system according to any one of claims 1 to 3, wherein the machine learning model
takes the sequence data as input and outputs, by a model based on Child-Sum Tree-LSTM, a sequence of first latent vectors representing features of the tree structures expressing the messages,
takes the sequence of first latent vectors as input and outputs, by an LSTM, a second latent vector representing features of the sequence structure of the sequence data, and
takes the second latent vector as input and outputs the security verification value by a linear classifier composed of a linear transformation and a Softmax function.
- A security determination device comprising:
a conversion unit configured to convert a cryptographic protocol composed of one or more messages into sequence data of data expressing the messages in a tree structure;
a security verification unit configured to take the sequence data as input and output, by a machine learning model having learned model parameters, a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol; and
a determination unit configured to determine, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
- A method executed by a computer, comprising:
a conversion procedure of converting a cryptographic protocol composed of one or more messages into sequence data of data expressing the messages in a tree structure;
a security verification procedure of taking the sequence data as input and outputting, by a machine learning model having learned model parameters, a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol; and
a determination procedure of determining, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
- A program that causes a computer to execute:
a conversion procedure of converting a cryptographic protocol composed of one or more messages into sequence data of data expressing the messages in a tree structure;
a security verification procedure of taking the sequence data as input and outputting, by a machine learning model having learned model parameters, a security verification value representing a verification result regarding a predetermined security requirement of the cryptographic protocol; and
a determination procedure of determining, based on the security verification value, whether the cryptographic protocol satisfies the security requirement.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2024517723A JPWO2023209899A1 (en) | 2022-04-27 | 2022-04-27 | |
PCT/JP2022/019173 WO2023209899A1 (en) | 2022-04-27 | 2022-04-27 | Security determination system, security determination device, method, and program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/019173 WO2023209899A1 (en) | 2022-04-27 | 2022-04-27 | Security determination system, security determination device, method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023209899A1 true WO2023209899A1 (en) | 2023-11-02 |
Family
ID=88518398
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/019173 WO2023209899A1 (en) | 2022-04-27 | 2022-04-27 | Security determination system, security determination device, method, and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2023209899A1 (en) |
WO (1) | WO2023209899A1 (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007028447A (en) * | 2005-07-20 | 2007-02-01 | Toshiba Corp | Encryption protocol safety verification device, encryption protocol design device, encryption protocol safety verification method, encryption protocol design method, encryption protocol safety verification program and encryption protocol design program |
Non-Patent Citations (4)
Title |
---|
"IT Roadmap 2018", 22 March 2018, TOYO KEIZAI INC., JP, ISBN: 978-4-492-58113-1, article KOMAHASHI, KENICHI: "Chapter 4 Security technology that accelerates business", pages: 299 - 307, XP009550391 * |
HUSSAIN FATIMA; HUSSAIN RASHEED; HASSAN SYED ALI; HOSSAIN EKRAM: "Machine Learning in IoT Security: Current Solutions and Future Challenges", IEEE COMMUNICATIONS SURVEYS & TUTORIALS, IEEE, USA, vol. 22, no. 3, 7 April 2020 (2020-04-07), USA , pages 1686 - 1721, XP011807019, DOI: 10.1109/COMST.2020.2986444 * |
KENICHI ARAI ET AL.: "Formalization of CT using ProVerif", 2018 CRYPTOGRAPHY AND INFORMATION SECURITY SYMPOSIUM SUMMARY COLLECTION (CSIS2018); 23-26/01/2018, IEICE, JP, 23 January 2018 (2018-01-23) - 26 January 2018 (2018-01-26), JP, pages 1 - 6, XP009550392 * |
MORI TAKUMI ,, FUJITA MASAHIRO, YAMANAKA TADAKAZU: "Security Rating Method by Regression Analysis of Web Data", COMPUTER SECURITY SYMPOSIUM 2020 26 - 29 OCTOBER 2020, 1 October 2020 (2020-10-01), pages 801 - 807, XP093103209 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2023209899A1 (en) | 2023-11-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22940176; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2024517723; Country of ref document: JP; Kind code of ref document: A |