WO2010070959A1 - Information processing device, program development device, program verification method, and program - Google Patents
Information processing device, program development device, program verification method, and program
- Publication number
- WO2010070959A1 (PCT/JP2009/066380; application JP2009066380W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- function
- protection
- argument
- input
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/54—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1416—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
- G06F12/1425—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being physical, e.g. cell, word, block
- G06F12/1441—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being physical, e.g. cell, word, block for a range
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1416—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/53—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
Definitions
- the present invention relates to an information processing apparatus, a program development apparatus, a program verification method, and a program for developing a program according to a security protocol and verifying a program list of the program.
- the memory is protected not by mode switching; rather, data and instructions in the secure memory space cannot be read out to the outside, so that misuse of the protected data is avoided.
- the memory space accessed by the processor includes a secure space and a non-secure space; information read from the secure space into a register is given a kind of security tag, and a mechanism is provided that prevents such data from being written to the non-secure space. With this mechanism, information in the secure space is prevented from leaking into the non-secure space.
- as methods for realizing a memory area called a secure space (hereinafter, a protected memory area) as in Patent Document 1, some are realized not only by access control but also by a combination of on-chip memory access control and the encryption functions described in Non-Patent Document 2. When protection is realized in combination with cryptographic functions, the concept of protection is separated into two types: confidentiality protection, which keeps secrets, and integrity protection, which prevents tampering. Only one of the two may be provided, or both confidentiality and integrity protection may be provided. Integrity here means that the information is accurate (has not been altered).
- Patent Document 1 realizes separation of protected data and non-protected data.
- when a security protocol is implemented, however, protected data and non-protected data are inevitably mixed in calculations performed by functions.
- for example, a message to be exchanged secretly is encrypted with a secret key into a ciphertext that can be output to the outside.
- enabling output to the outside means turning the data into non-protected data, that is, ensuring that neither the protected data nor the secret key information leaks even if the result is output.
- the method disclosed in Patent Document 1 is therefore premised on a mechanism that exceptionally allows the security tag to be released so that protected data and non-protected data can be mixed.
- misuse of this mechanism, however, cannot be prevented; that is, protected data may be erroneously output to the outside if the security tag is released by mistake.
- in Non-Patent Document 1, when data and program parts with different security levels (protection attributes) exist in a program, a method called Information Flow Analysis allows data to be exchanged between program parts having different security levels while preventing information of high-level data from leaking into low-level parts. Specifically, type checking is applied to data movement between security levels, and when high-level data is dropped to a low level the programmer is explicitly forced to perform a special operation or an encryption operation. Conversion from a low level to a high level is possible unconditionally. Because this method uses conversion of security levels by encryption operations, protected data and non-protected data can be mixed in the encryption process.
- in Non-Patent Document 2, the encryption key used for outputting protected data is limited to values that the program holds statically and values derived from them. Therefore, using a value received from the outside or a value shared with the outside as an encryption key is prohibited.
- Such a method of using an encryption key is often used in a security protocol.
- the present invention has been made in view of the above, and an object thereof is to provide an information processing apparatus, a program development apparatus, a program verification method, and a program capable of appropriately determining a protection attribute for each variable of a program according to a security protocol.
- the present invention is an information processing apparatus comprising: first storage means for storing a plurality of types of security functions, each defining, for every argument relating to the input of data to be protected and every argument relating to output, a protection attribute requesting that the value of the argument be stored in a protected memory area or in a non-protected memory area; second storage means for storing a program list in which the protection attributes given to variables representing the memory areas in which the data are stored are described and in which an execution procedure of processing using any or all of the security functions stored in the first storage means is described; third storage means for storing, for each security function used in the processing, a function argument protection attribute defined based on whether computational one-wayness and an integrity guarantee exist between the input arguments and the output arguments, and a dependency relation defined by a decision term and a dependent term satisfying the condition that, when the values of the arguments included in the decision term among the arguments used in the security function are fixed, it is difficult to find two or more possible values of the arguments included in the dependent term; function generating means for generating a third security function without changing the function of the processing; and updating means for updating the function argument protection attribute and the dependency relation in accordance with the addition of the third security function.
- the present invention is also a program development apparatus that generates, from a program list described by a user, an executable program that can be executed by a computer. The information processing apparatus that executes the executable program can access both a first memory area, to which writing and from which reading by other than the executable program are permitted, and a second, protected memory area, to which writing and from which reading by other than the executable program are prohibited. The program development apparatus comprises: first storage means for storing a plurality of types of security functions defining protection attributes that request storage in the first memory area or the second memory area; input receiving means for receiving a program list in which the protection attributes given to variables representing the memory areas in which data are stored are described and in which an execution procedure of processing using any or all of the security functions stored in the first storage means is described; second input receiving means for receiving an input of the dependency relations defined by decision terms and dependent terms; determining means for determining the protection attributes of the actual arguments relating to the inputs and the actual arguments relating to the outputs of each security function; a determination unit that determines whether all of those protection attributes match the protection attributes of the corresponding type of the security function stored in the first storage means; rewriting means for rewriting the program list, when a match is confirmed as a result of the determination by the determination unit and based on the updated data flow, so that each variable is stored in the first memory area or the second memory area according to the protection attribute assigned to it and so that the processing is described in accordance with the memory area in which the variable is stored; and compiling means for generating the executable program by compiling the rewritten program list.
- the present invention is also a program verification method executed by an information processing apparatus including a first storage unit, a second storage unit, a third storage unit, a detection unit, a function generation unit, and an update unit. The first storage unit stores a plurality of types of security functions defining protection attributes that request that the value of each argument relating to the input of data to be protected and each argument relating to output be stored in a protected memory area or in a non-protected memory area. The second storage unit stores a program list in which the protection attributes given to variables representing the memory areas in which the data are stored are described and in which an execution procedure of processing using any or all of the security functions stored in the first storage unit is described. The third storage unit stores, for each security function used in the processing, a function argument protection attribute defined based on whether computational one-wayness and an integrity guarantee exist between the input arguments and the output arguments in the calculation, and a dependency relation defined by a decision term and a dependent term among the arguments used in the function. The method includes a detection step in which the detection unit detects a combination of a first security function and a second security function in which the integrity of a variable included in a decision term of one of the dependency relations of the first security function is verified by the second security function, and a function generation step in which the function generation unit generates a third security function that outputs, with a protection attribute guaranteeing integrity, the variable included in the dependent term of that dependency relation of the first security function only when the verification by the second security function succeeds.
- the present invention is a program for causing a computer to execute the above program verification method.
- a diagram illustrating the program list of a pseudo program describing the procedure of the processing performed by system B 2201 in the security protocol shown in FIG. 9; a flowchart showing the procedure of the protection attribute determination processing performed by the information processing apparatus 1 according to the embodiment.
- a diagram showing the process of automatic determination for the partial data flows shown in FIG. 15; a flowchart showing the detailed procedure of the protection attribute determination processing with function extension according to the embodiment.
- a diagram showing the sequence of a security protocol according to a third embodiment; a flowchart showing the procedure of the protection attribute determination processing performed by the information processing apparatus 1 according to the embodiment.
- FIG. 1 is a diagram illustrating a software configuration of the target system 101.
- the target system 101 is connected to the external apparatus 103 via the network 102, and includes a program module 201 that is a collection of programs and an OS 301.
- the program module 201 includes an instruction execution unit 212 and a data storage memory area 211.
- the data storage memory area 211 has an unprotected memory area 214 and a protected memory area 215.
- the non-protected memory area 214 is a non-secure storage area that has no restrictions on access.
- the protected memory area 215 is a secure storage area in which an access restriction is provided for access from a specific program or process.
- the non-protected memory area 214 and the protected memory area 215 may be formed on the same storage medium, or may be formed on separate storage media. In addition, it is possible to use a known technique for realizing such exclusive access control for a specific program or process.
- the instruction execution unit 212 is a control unit such as a CPU (Central Processing Unit) that is an execution subject of the program, and has a register 213 for temporarily storing data when the program is executed.
- the instruction execution unit 212 performs data input / output through data reading / writing to the unprotected memory area 214 in the data storage memory area 211.
- access to the non-protected memory area 214 is not necessarily limited to the OS 301, and includes access by DMA (Direct Memory Access) transfer by a peripheral device managed by the OS 301.
- the OS 301 is a control unit such as a CPU that is the execution subject of the basic software of the target system 101, accesses the unprotected memory area 214, and inputs data (for example, data input from the external device 103 through the network 102). And output data (for example, data output to the external device 103 via the network 102) are read.
- the OS 301 can read / write data from / to the unprotected memory area 214 of the data storage memory area 211 at any timing, but access to the protected memory area 215 is restricted and reading / writing is prohibited.
- Such an exclusive access mechanism for restricting access to a specific program or process is known from Patent Document 1 and Non-Patent Document 2, for example.
- the OS 301 may perform wiretapping or falsification (hereinafter collectively referred to as an attack) on input data, output data, or data stored in the unprotected memory area 214.
- from the viewpoint of such attacks, the OS 301 and the unprotected memory area 214 can be regarded as an extension of the network 102 in that wiretapping and message tampering can occur there.
- it is therefore important to appropriately allocate data to the two types of memory (the unprotected memory area 214 and the protected memory area 215).
- the range that wiretapping and tampering by the OS 301 cannot reach is the range H1 including the instruction execution unit 212 and the protected memory area 215.
- it is assumed that the program module 201 is safe with respect to the execution of instructions. This is because such safety can be realized by protecting the memory area (not shown) in which instructions, function calls, return addresses, and the like are stored from attacks by the OS 301, using a technique described in documents such as Patent Document 1 and Non-Patent Document 1. However, the memory area may be protected by other means, and the configuration of the target system 101 is not limited to the form shown in FIG. 1.
- the instruction execution unit 212 of the program module 201 and the OS 301 may be the same control unit, or may be individual control units specialized for each application. Further, the non-protected memory area 214 and the protected memory area 215 may be provided in the same storage device, or each may be provided in an individual storage device.
- the information processing apparatus 1 has the hardware configuration of an ordinary computer, including a control device such as a CPU (Central Processing Unit) that controls the entire apparatus, storage devices such as a ROM (Read Only Memory) and a RAM (Random Access Memory) that store various data and programs, an external storage device such as an HDD (Hard Disk Drive) or a CD (Compact Disc) drive that stores various data and programs, and a bus connecting them.
- a display device that displays information, input devices such as a keyboard and a mouse that accept user instructions, and a communication I/F (interface) that controls communication with external devices are connected to the information processing apparatus 1.
- FIG. 2 is a diagram illustrating a functional configuration of the information processing apparatus 1.
- the information processing apparatus 1 includes a conversion unit 50, an update replacement unit 51, and a protection attribute determination unit 52.
- the update replacement unit 51 and the protection attribute determination unit 52 are generated on a storage device such as the RAM when the CPU executes a program.
- the protection attribute definition 14 is stored in an external storage device such as an HDD of the information processing apparatus 1.
- the target program 11 is input to the information processing apparatus 1 as a target program for automatically determining the protection attribute for the function.
- the target program 11 is a program written in a high-level human-readable language such as C language.
- a function argument protection attribute and a dependency relationship are defined.
- the function argument protection attribute is a protection attribute determined based on the presence or absence of computational one-wayness between input and output and of an integrity guarantee in the calculation performed by a function.
- the dependency relation is defined by a pair of a decision term and a dependent term satisfying the following condition: when the values of the function arguments included in the decision term are fixed, it is difficult to find two or more possible values of the function arguments included in the dependent term.
- the conversion unit 50 converts the target program 11 into the data flow 21 by parsing the program list of the target program 11 to generate a data flow.
- the data flow 21 represents a connection between each function used in the program, an argument or variable input to the function, and an argument or variable output as a result of calculation by the function.
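As one possible concrete representation (an assumption for illustration, not the patent's data structure), the data flow 21 could be recorded as a list of edges, each connecting a producing function argument to a consuming function argument or terminal variable:

```c
/* Sketch (assumption) of one way the data flow 21 could be represented.
 * Field names are illustrative only. */
typedef struct {
    const char *var;        /* variable or argument carried by this edge          */
    const char *src_func;   /* producing function, or "" for an initial value     */
    int         src_arg;    /* argument number on the producing side              */
    const char *dst_func;   /* consuming function, or "" for a terminal variable  */
    int         dst_arg;    /* argument number on the consuming side              */
} dataflow_edge;
```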
- the update replacement unit 51 receives the data flow 21 converted from the target program 11 by the conversion unit 50, a function argument protection attribute set 12 that is a set of function argument protection attributes, and a dependency set 13 that is a set of dependency relations. Among the functions represented in the data flow 21, the update replacement unit 51 detects a pair (A, B) of functions in which the integrity of a variable included in a decision term of a certain dependency relation of a certain function (function A) is verified by another function (function B).
- it then generates a function (function C) that outputs, with a protection attribute guaranteeing integrity, the variable included in the dependent term of that dependency relation of function A only when the verification by function B succeeds, and updates the data flow 21 by adding function C without changing the function of the target program 11.
- here, the function of the target program 11 means the action of the processing whose execution procedure is described in the program list of the target program 11.
- the update replacement unit 51 updates the function argument protection attribute set 12 and the dependency relationship set 13 with the addition of the function C.
- the protection attribute determination unit 52 uses the data flow 21, the function argument protection attribute set 12, and the dependency set 13 updated by the update replacement unit 51 to uniquely determine a protection attribute for each variable, and outputs the result as a variable protection attribute table and a function overload table. The variable protection attribute table and the function overload definition table are described in detail in the description of the operation below.
- the protection attribute definition 14 is a table in which protection attributes assigned to variables and function arguments (hereinafter collectively referred to as variables) are defined, and stored in an external storage device such as an HDD.
- FIG. 3 is a diagram showing a list of protection attributes according to the present embodiment. As shown in the figure, six types of protection attributes are given to the variables. Each protection attribute has two orthogonal properties regarding the security requirements of integrity and confidentiality: presence / absence of protection means in the memory area and presence / absence of semantic confirmation.
- as for the protection means of the memory area, the items achieved differ depending on the means used, such as access control or encryption and tampering verification.
- the protected memory area 215 having an access control mechanism that completely prohibits access from programs other than the execution target program is protected in both integrity and confidentiality (indicated by + on the table).
- on the other hand, the input / output services provided by the OS 301 cannot be used there at all.
- for input and output, the unprotected memory area 214, in which no integrity or confidentiality protection mechanism is realized, is provided.
- if confidentiality protection is performed by a hardware encryption mechanism at the time of memory access, there may be a memory area in which only confidentiality is protected.
- similarly, if integrity protection is performed by a hardware MAC generation / verification mechanism, there may be a memory area in which only integrity is protected.
- semantic confirmation indicates whether the value stored in the variable has been confirmed for integrity, confidentiality, or both. For example, even if the integrity or confidentiality of the memory area that stores a variable is protected, if a value obtained from the unprotected memory area 214 is assigned to the variable, the value itself may have been tampered with or eavesdropped at some point during program execution. Thus, even for a variable allocated to the protected memory area 215, it cannot be assumed that its value is guaranteed to be integral and confidential. Conversely, even if the value of the assignment source itself is guaranteed, if there is no integrity protection mechanism in the memory area where the assignment destination variable X is stored, the value may have been tampered with by the time the variable X is referenced again.
- “Exposed” is provided for input variables and output variables.
- in order for the OS to give an input to the program, the OS needs to write the input value into a memory area that the program can refer to; likewise, in order for the program to produce output, the program needs to write the output value into a memory area that the OS can refer to. Since a variable with the attribute “exposed” is held in a memory area where neither integrity nor confidentiality is protected, the OS can freely read and write variables to which this protection attribute is assigned. Because there is no protection mechanism in the memory area holding the value, the confidentiality and integrity of the value are not semantically guaranteed at all.
- the protection attribute “exposed” is specified to clarify the difference from the variable protected by security (referred to as a protection variable). In order to avoid complication, the protection attribute “exposed” may be omitted.
- “Fixed” is a protection attribute that means a variable holding a value stored in a memory area protected against tampering, but whose semantic integrity and confidentiality have not been confirmed.
- a variable to which the protection attribute “fixed” is assigned can exceptionally be converted to a variable with the protection attribute “exposed”.
- “Verified” is a protection attribute that means a variable stored in a tamper-protected memory area whose value has a semantic integrity guarantee and no confidentiality. Assignment to a variable with the protection attribute “verified” is limited to the calculation results of variables with the same attribute, but the value of a variable with the protection attribute “verified” may always be assigned to a variable with the protection attribute “exposed” or “fixed”. When no output processing is performed through the variable, the memory area may additionally be protected for confidentiality.
- “Hidden” is a protection attribute that means a variable stored in a confidentiality-protected area but whose semantic confidentiality and integrity are not confirmed.
- the difference from the protection attribute “fixed” is that a variable with the protection attribute “hidden” is prohibited from being assigned to variables with the protection attributes “exposed” and “fixed”, and that no input or output is performed through it. Since a variable with the protection attribute “hidden” is not used for input from the outside, there is no problem even if a tamper-prevention mechanism is added to the memory area.
- “Concealed” is a protection attribute that means a variable stored in a protected area whose value has semantic confidentiality but no integrity guarantee. Like the protection attribute “hidden”, a variable with the protection attribute “concealed” is not used for external input, so there is no problem even if a tamper-prevention mechanism is added to the memory area.
- “Confidential” is a protection attribute that means a variable stored in an area protected for both confidentiality and tampering, and whose semantic confidentiality and integrity are both confirmed.
- the protection attribute can be said to be almost the same as the variable type in the language, but in a general language, the type often means a data type such as the number of fields and the length.
- strict matching is required for the type of the protection attribute, but no strict restriction is provided for the data type in order to simplify the explanation.
- since consistency of data types is a secondary issue here, the term “protection attribute” is used for the security-related typing.
- the protection attributes may be implemented in the form of data types.
- FIG. 4 is a diagram showing, in tabular form, whether assignment between variables to which protection attributes are assigned is possible. Variables to which the confidentiality-related protection attributes “hidden”, “concealed”, and “confidential” are assigned cannot be converted or assigned to one another except between the same protection attribute. Variables with the protection attribute “verified”, which have both memory protection and semantic confirmation for integrity, are allowed to have their values assigned to variables with the protection attributes “fixed” and “exposed”; however, assignment from those variables to variables with the protection attribute “verified” is not allowed. Assignment between the protection attributes “fixed” and “exposed” can be performed freely.
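The assignment rules of FIG. 4 can be summarized in code. The following is a minimal sketch assuming a C enum for the six protection attributes of FIG. 3; the enum and the function may_assign() are illustrative and not part of the patent's implementation.

```c
/* Minimal sketch (assumption) of the assignment-permissibility rules of FIG. 4. */
typedef enum {
    EXPOSED,      /* no memory protection, no semantic confirmation              */
    FIXED,        /* integrity-protected memory, value unconfirmed               */
    VERIFIED,     /* integrity-protected memory, integrity confirmed             */
    HIDDEN,       /* confidentiality-protected memory, value unconfirmed         */
    CONCEALED,    /* confidentiality-protected memory, confidentiality confirmed */
    CONFIDENTIAL  /* fully protected memory, both properties confirmed           */
} protection_attr;

/* Returns 1 if a value with attribute `src` may be assigned to a variable
 * with attribute `dst`, 0 otherwise. */
int may_assign(protection_attr dst, protection_attr src)
{
    if (dst == src)
        return 1;                       /* same attribute: always allowed        */
    switch (src) {
    case VERIFIED:                      /* verified may flow down to ...         */
        return dst == FIXED || dst == EXPOSED;
    case FIXED:                         /* fixed and exposed interchange freely  */
        return dst == EXPOSED;
    case EXPOSED:
        return dst == FIXED;
    default:                            /* hidden / concealed / confidential:    */
        return 0;                       /* no cross-attribute assignment         */
    }
}
```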
- the data type and protection attribute resolution will be described.
- the assignment rules between variables described above are applied; they must be strictly observed independently of data type conversion. When an embodiment in which data types and protection attributes are combined is adopted, the rules regarding protection attributes must always be observed. This point requires particular care when function arguments and return values are overloaded.
- in this embodiment, data types and protection attributes are described independently: overload resolution for data types and overload resolution for protection attributes are performed independently, and an example of resolving overloaded protection attribute definitions without depending on data type overloading is described. When data types and protection attributes are applied together, attention should be paid to the rules regarding conversion of protection attributes. Details of overloading are described later.
- the function argument protection attribute defines the restrictions on the protection attribute of the input argument and output argument defined for each function.
- functions are divided into security functions and non-security functions (general functions), and the function argument protection attribute will be described for each.
- the security function is a function that realizes a so-called security primitive having a computational unidirectionality such as encryption, decryption, and signature verification. Computational unidirectionality means that forward calculation can be performed easily, but reverse calculation is difficult (not possible in a realistic time).
- FIGS. 5A and 5B are diagrams showing a list of definition of the function argument protection attribute of the security function.
- KDF: Key Derivation Function
- the protection attribute “verified” is always required for the input of the public key. This is because a ciphertext encrypted with a public key whose integrity cannot be confirmed may be decrypted by an attacker.
- for the input plaintext, the protection attribute “confidential” or “concealed” is required so that it cannot be read from the outside, while for the output ciphertext the protection attributes “verified” and “exposed”, which have no confidentiality on the memory, are respectively given by overloading so that the ciphertext can be read from the outside. Overloading means that a function argument protection attribute is defined by a plurality of entries.
- security functions in which different protection attributes are assigned to different arguments are limited to security primitives such as encryption, decryption, signature generation, and signature verification, whose calculation always has computational one-wayness. Mixing different protection attributes means that the protection attributes of the function arguments are not unified; for example, a message with the protection attribute “confidential” and a public key with the protection attribute “verified” are given as inputs, and the output is given the protection attribute “verified”. If functions whose argument protection attributes are not unified could be defined freely, then, for example, the calculation result of a protected variable could be assigned to an unprotected variable, and information about the original protected variable would leak through that unprotected variable.
- in the present embodiment, the purpose of this mechanism is not to allow such calculations without limitation but to limit them to processing such as encryption.
- as for the hash function and the KDF, for example, the auxiliary classifications Hash C and KDF C in FIGS. 5-1 and 5-2 have the same protection attribute for all arguments, while entries such as Hash D and KDF D, in which the arguments have different protection attributes, are overloaded at the same time. Therefore, these functions are security functions.
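As a concrete illustration of such overloading, the entries for a hash function might be tabulated as in the following sketch, which reuses the protection_attr enum from the assignment-rule sketch above. The entry contents are assumptions for illustration, not the actual definitions of FIGS. 5-1 and 5-2.

```c
/* Sketch of how overloaded function argument protection attributes could be
 * tabulated.  n_in / n_out give the number of meaningful entries per row. */
typedef struct {
    const char      *func;         /* security function name                   */
    protection_attr  in_attr[2];   /* required attributes of the inputs        */
    protection_attr  out_attr[2];  /* required attributes of the outputs       */
    int              n_in, n_out;
} func_attr_entry;

static const func_attr_entry hash_overloads[] = {
    /* "Hash C"-style entries: all arguments share one protection attribute    */
    { "Hash", { VERIFIED     }, { VERIFIED     }, 1, 1 },
    { "Hash", { CONFIDENTIAL }, { CONFIDENTIAL }, 1, 1 },
    /* "Hash D"-style entry: confidential input, non-confidential output,
     * permitted only because the hash is computationally one-way              */
    { "Hash", { CONFIDENTIAL }, { VERIFIED     }, 1, 1 },
};
```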
- FIG. 6 is a diagram showing a work procedure for developing a security function and an application program.
- security functions are developed by specialized programmers who are security experts familiar with both cryptographic primitives and program memory management, and sufficient attention must be paid not only to the correctness of their operation but also to whether their protection attributes are properly defined. A security protocol using the resulting set of security functions is then implemented by a general programmer. Depending on the application program, the encryption algorithm used may differ; in such a case, the specialized programmer needs to add security functions, or delete them to prevent misuse.
- the definition of the function argument protection attribute of the security function varies depending on the algorithm used for the primitive.
- the definition of functions as shown in FIGS. 5A and 5B is performed based on a standard that satisfies the safety requirements.
- for example, when a signature algorithm from which the message can be recovered is used, entries such as Signature Verification C of the auxiliary classification in FIGS. 5-1 and 5-2 should be deleted. This is because, even if the message is held in a memory area protected with the protection attribute “concealed”, the message could be recovered from the signature if the signature is given the protection attribute “exposed”.
- FIG. 7 is a diagram conceptually showing a signature verification procedure in the present embodiment.
- in ordinary signature verification, only information indicating whether or not the verification succeeded is output.
- in the present embodiment, by contrast, a memory area for a variable corresponding to the output protection attribute is secured, the value is copied there from the variable to be verified, and the newly secured variable is output only when verification of the copied value succeeds. This is because, if the variable before verification were kept in the unprotected memory area 214, an attacker could rewrite it during or immediately after verification; the value to be verified is therefore copied into the protected memory area 215 in advance to prevent such rewriting.
- since the output after verification has a protection attribute different from that of the input before verification, erroneous use of the pre-verification variable after verification can be avoided. Because the pre-verification variable and the post-verification variable hold the same value, a program that uses the wrong one still operates functionally; however, such a program is vulnerable to the attacks described above.
- the verification method described above has the effect of eliminating such errors.
- the MAC verification function outputs a verified variable in the same way as the signature verification function.
- the encryption function, decryption function, signature generation function, and MAC generation function each have input and output values that differ from each other, so there is no need to consider a pre-verification variable being used incorrectly after verification.
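The verify-then-reissue procedure described above might look as follows in C. This is a minimal sketch under stated assumptions: the signature of PKVerify and the helpers copy_to_protected() and verify_signature() are hypothetical and stand in for whatever primitive and protected-memory allocator an actual implementation would use.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helpers (assumptions): allocation in, and verification
 * against, the protected memory area 215. */
unsigned char *copy_to_protected(const unsigned char *src, size_t len);
int verify_signature(const unsigned char *pk, const unsigned char *msg,
                     size_t len, const unsigned char *sig);

int PKVerify(const unsigned char *pk_verified,   /* key: attribute "verified"   */
             const unsigned char *msg_exposed,   /* message before verification */
             size_t msg_len,
             const unsigned char *sig_exposed,   /* signature before verification */
             unsigned char *msg_verified)        /* output: "verified" copy     */
{
    /* 1. Copy the candidate message into the protected memory area so an
     *    attacker cannot rewrite it during or right after verification.     */
    unsigned char *tmp = copy_to_protected(msg_exposed, msg_len);

    /* 2. Verify the signature against the protected copy only.              */
    if (!verify_signature(pk_verified, tmp, msg_len, sig_exposed))
        return 0;                                /* verification failed       */

    /* 3. Only on success, hand the protected copy out as a new variable
     *    carrying the protection attribute "verified".                      */
    memcpy(msg_verified, tmp, msg_len);
    return 1;
}
```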
- a non-security function is a function that does not have computational unidirectionality like a security function, and the protection attributes of the input and the output all match.
- FIG. 8 is a diagram showing function argument protection attributes of an input function and an output function that are non-security functions in the present embodiment.
- the input function “Input” that handles input via the OS and the output function “Output” that handles output handle only unprotected variables.
- all of the so-called service functions for exchanging data with the OS handle only unprotected variables. Since the variable (input variable) passed to the function “Input” is written by the OS, the input variable is limited to variables with the protection attribute “exposed”, and the output is likewise limited to variables with the protection attribute “exposed”.
- for the output function “Output”, each of the protection attributes “exposed” and “verified” is overloaded.
- the output protection attribute is fixed to the same protection attribute as the input.
- a function that performs data type conversion is also a non-security function. These do not input and output data via the OS, and can take protected variables as arguments under the condition that the protection attributes of all input variables and output variables are the same.
- such functions may be defined with a different name for each protection attribute, for example one for the protection attribute “exposed” and one for the protection attribute “verified”, or a function for each protection attribute may be provided by function overloading.
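In C, which has no built-in overloading, the per-attribute naming approach might look like the following sketch. The helper names are assumptions; every argument of one variant keeps a single protection attribute, so no information crosses protection levels.

```c
/* Sketch (assumption): a non-security conversion helper duplicated per
 * protection attribute.  Each variant's arguments all share one attribute. */
void bytes_to_int_exposed (const unsigned char *in_exposed,  int *out_exposed);
void bytes_to_int_verified(const unsigned char *in_verified, int *out_verified);
```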
- the restrictions on function argument protection attributes described above prevent the information contamination described earlier, and protection attributes can be converted safely only through security functions.
- an example is shown (the security protocol of FIG. 9) in which the system A 2101 operating on the external device 103 shown in FIG. 1 and the system B 2201 realized by the program module 201 shown in FIG. 1 transfer a message using hybrid encryption.
- the system A 2101 represents an entire system including its OS, whereas the system B 2201, which is subjected to the protection attribute determination processing described later according to the present embodiment, is limited to the program module (201 in FIG. 1). In the following, the description focuses on the safety of the implementation of the system B 2201, and the internal operation of the system A 2101 is not described.
- the system B 2201 obtains the public key (pkA) 2215 of the system A 2101 in advance.
- the system B 2201 separately verifies the integrity of the public key (pkA) 2215 based on the certificate issued by the CA, but this is omitted in the figure.
- a secret key (skB) 2211 and a message (msg) 2218 assigned to itself are statically embedded in the program module 201 that realizes the system B 2201.
- the system A 2101 generates a common key (temporary key) (K) 2113 for transferring the message (msg) 2218, generates a signature 2112 by signing the hash value of the common key (K) 2113 with the secret key assigned to itself, and transmits the signature to the system B 2201 (2402). Further, the system A 2101 transmits to the system B 2201 data 2111 obtained by encrypting the common key (K) 2113 with the public key (pkB) of the system B 2201 (2401). The system A 2101 separately verifies the integrity of the public key (pkB) based on a certificate issued by a CA, but this is omitted in the figure.
- when the system B 2201 receives the data 2111 transmitted from the system A 2101 in 2401, it obtains the common key (K) with the public key ciphertext decryption function 2212 using the secret key (skB) 2211 assigned to itself. Next, the system B 2201 calculates a hash value of the obtained common key (K) 2213 with the hash function 2214, and verifies the signature with the signature verification function 2216 using the hash value, the public key (pkA) 2215 whose integrity has been confirmed in advance, and the signature 2112 received in 2402.
- the confidentiality and integrity of the common key (K) are ensured by the decryption with the secret key (skB) 2211 assigned to the system B 2201 itself and by the signature verification with the public key (pkA) 2215 whose integrity has been confirmed; this point is described in detail below. The common key (K) 2213 can therefore be used as an encryption key for encrypting the message (msg) 2218 with the common key encryption method.
- the system B 2201 then encrypts the message (msg) 2218 with the common key encryption function 2217 using the common key (K) 2213, and transmits the generated ciphertext to the system A 2101 (2403).
- when the system A 2101 receives the ciphertext transmitted from the system B 2201 in 2403, it decrypts the ciphertext with the common key decryption function 2114 using the common key (K) to obtain the message (msg).
- Collision difficulty is a property that it is difficult to find a plurality of input values having the same output value.
- this collision difficulty is a basic property that a hash function used as a cryptographic primitive should have. In this case, it is difficult to find an input value other than the common key (K) 2113 that has the same hash value as that of the common key (K) 2113, so the danger that only the input common key (K) 2113 is replaced while the hash value output remains unchanged is eliminated. Therefore, when the integrity of the hash value is verified, the integrity of the common key (K) 2113, which is the input to the hash function 2214, is confirmed at the same time. In this way, this security protocol can be interpreted as indirectly confirming the integrity of the common key (K) 2113 without verifying the common key (K) 2113 directly.
- the dependency relation in the present embodiment is now defined in detail. As described above, a dependency relation ensures that verifying the integrity of some of the input and output variables of a function confirms the integrity of the others.
- a dependency relation is defined by a decision term and a dependent term, each of which consists of a subset of the input and output variables of the function.
- the decision term and the dependent term of a dependency relation have the property that it is difficult to obtain two or more values of the variables included in the dependent term for which the values of the variables included in the decision term are exactly the same. More specifically, the decision term and the dependent term have the following properties (a) and (b): (a) when the values of the variables of the decision term are determined, the values of the corresponding variables of the dependent term are uniquely determined; (b) even when the values of the variables of the decision term are determined, a plurality of corresponding values of the variables of the dependent term exist, but it is difficult to obtain more than one of them.
- the terms “decision term” and “dependent term” are borrowed from the terminology of functional dependency used in relational databases.
- functional dependency corresponds to the property (a) described above: when the variable values of the decision term are determined, the variable values of the dependent term are uniquely determined.
- the dependency in the present embodiment has the property (b) in addition to the property (a) described above. This can be regarded as a one-to-one relationship in terms of calculation amount, and corresponds to collision difficulty in a hash function or the like.
- the hash function described above has a dependency relationship in which the output is a decision term and the input is a dependency term.
- the function “bitwise_negation (X, Y)” that inverts the input bit string X and outputs it to Y is a function in which the input X and the output Y correspond one-to-one. Therefore, it is possible to define a dependency relationship in which the dependent term is output Y when the decision term is input X, or the dependent term is input X when the decision term is output Y.
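One possible way to record such dependency relations as data is sketched below; the struct layout and the argument numbering in the example rows (which follow FIG. 10 and the bitwise_negation example above) are assumptions for illustration.

```c
/* Sketch (assumption) of a dependency-relation record: decision and
 * dependent terms are given as argument numbers of the security function. */
typedef struct {
    const char *func;             /* security function the relation belongs to   */
    int det[4]; int n_det;        /* argument numbers forming the decision term  */
    int dep[4]; int n_dep;        /* argument numbers forming the dependent term */
} dependency;

static const dependency dependency_set[] = {
    /* SKDecrypt(ciphertext, key, plaintext): {ciphertext, key} -> {plaintext}   */
    { "SKDecrypt",        { 1, 2 }, 2, { 3 }, 1 },
    /* Hash(input, digest): {digest} -> {input}, from collision difficulty       */
    { "Hash",             { 2 },    1, { 1 }, 1 },
    /* bitwise_negation(X, Y): one-to-one, so both directions can be defined     */
    { "bitwise_negation", { 1 },    1, { 2 }, 1 },
    { "bitwise_negation", { 2 },    1, { 1 }, 1 },
};
```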
- FIG. 10 is a diagram showing the definitions of the dependency relations generated for the public key ciphertext decryption function, the common key decryption function, the KDF, and the MAC verification function, which are some of the security functions defined as shown in FIGS. 5-1 and 5-2.
- for the public key ciphertext decryption function and the common key decryption function, if the values of the ciphertext and the key (secret key or common key) are determined, the decryption result is uniquely determined. Therefore, the decision term consists of the two variables ciphertext and key, and the dependent term is the decryption result. This dependency relation means that when the integrity of both the ciphertext and the key is confirmed, the integrity of the decryption result can also be confirmed.
- This relationship is considered when, for example, the integrity of the key is confirmed in advance, and the integrity of the ciphertext is confirmed later. At this time, the integrity of the decryption result is guaranteed only after the integrity of the ciphertext is confirmed.
- in many cases the integrity of the key is confirmed in advance, but for a key transmitted from the communication partner, as in a hybrid cipher, the integrity of the key may not yet be confirmed at the time the decryption is calculated. In such a case, the integrity of the ciphertext and the key is confirmed after decryption, and only at that point is the integrity of the decryption result guaranteed via the dependency relation.
- the KDF has two outputs for one input, but it is like the hash function in that the output values are uniquely determined once the input value is determined. Furthermore, even if only part of the output, that is, one of the two outputs, is fixed, it is difficult to find multiple input values for which that part of the output takes the fixed value. Therefore, a dependency relation can be defined in which part of the output is the decision term and the input is the dependent term. Moreover, since the output values are uniquely determined when the input value is determined, a dependency relation can also be defined in which one of the outputs is the decision term and the input and the other output are the dependent terms.
- dependency relations have been defined here for some of the security functions; for security functions not listed here as well, dependency relations can be defined according to the properties (a) and (b) that a dependency relation should have.
- in some cases a dependency relation is not useful. For example, for a function that only ever accepts inputs and outputs whose integrity is guaranteed, the integrity of the input and output variables never needs to be verified indirectly later, because integrity is already guaranteed. Therefore, even when a dependency relation can be defined, a relation that is not useful may deliberately be left undefined in order to improve the efficiency of the algorithm, described later, that automatically determines protection attributes.
- the second to fourth lines represent variable declarations given initial values.
- parameters for output are given.
- Lines 8 to 16 represent variable declarations of work variables.
- the processing procedure is described in lines 19 to 33. Specifically, on line 19 the function “Input” reads, via the OS, the data “Cipher_From_A” (data 2111 in FIG. 9) obtained by encrypting the common key (K) into the area secured by “buf”, and its data size is stored in “size”. Then, after several processing steps, data is finally output via the OS by the function “Output”, and the processing is completed.
- the protection attribute is given to the variables in the second to fourth lines, but the protection attribute is not given to the other parameters and variables.
- the protection attribute is automatically determined for a variable to which the protection attribute is not given while rewriting the program as necessary.
- data types such as the number and length of fields are omitted for many variables.
- the protection attribute and the data type can be considered independently. For this reason, only the protection attribute is described here.
- the protection attributes may be combined into data types.
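To make the structure of such a listing concrete, the following is a hedged, abbreviated sketch of the kind of pseudo program described here; it does not reproduce the actual 33-line listing, and the function prototypes, buffer sizes, and comment-style protection attribute annotations are assumptions for illustration only.

```c
#include <stddef.h>

#define KEY_LEN  32
#define SIG_LEN  64
#define HASH_LEN 32
#define MSG_LEN  64
#define BUF_LEN  256

/* Hypothetical service and security functions (assumptions). */
void Input(unsigned char *buf, size_t *size);                        /* via the OS */
void Output(const unsigned char *buf, size_t size);                  /* via the OS */
void SKDecrypt(const unsigned char *cipher, const unsigned char *sk,
               unsigned char *plain);
void Hash(const unsigned char *in, size_t len, unsigned char *out);
int  PKVerify(const unsigned char *pk, const unsigned char *msg, size_t msg_len,
              const unsigned char *sig, unsigned char *msg_verified);
void CKEncrypt(const unsigned char *key, const unsigned char *msg,
               size_t len, unsigned char *cipher);

/* Initial values whose protection attributes are given in the declarations. */
static const unsigned char skB[KEY_LEN] = { 0 };  /* confidential: secret key of B */
static const unsigned char msg[MSG_LEN] = { 0 };  /* confidential: message to send */
static const unsigned char pkA[KEY_LEN] = { 0 };  /* verified: public key of A     */

void systemB(void)
{
    unsigned char Cipher_From_A[BUF_LEN], Sign_From_A[SIG_LEN];  /* undetermined */
    unsigned char K[KEY_LEN], h[HASH_LEN], h_v[HASH_LEN], c[MSG_LEN];
    size_t size;

    Input(Cipher_From_A, &size);             /* 2401: E_pkB(K) from system A      */
    SKDecrypt(Cipher_From_A, skB, K);        /* recover the temporary key K       */
    Input(Sign_From_A, &size);               /* 2402: signature over Hash(K)      */
    Hash(K, KEY_LEN, h);                     /* hash of the recovered key         */
    if (!PKVerify(pkA, h, HASH_LEN, Sign_From_A, h_v))
        return;                              /* verification failed: stop         */
    CKEncrypt(K, msg, MSG_LEN, c);           /* encrypt msg under K               */
    Output(c, MSG_LEN);                      /* 2403: ciphertext to system A      */
}
```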
- the information processing apparatus 1 appropriately assigns protection attributes to each variable of a given program so that the variables satisfy the protection attribute calculation rules, the variable assignment rules, and the function argument protection attributes of the security functions and non-security functions described above, while taking the dependency relations into account. First, when a target program to be processed is input, the information processing apparatus 1 acquires the function argument protection attribute set related to the program (step S1). This function argument protection attribute set is the set of function argument protection attributes for every function used in the program. Next, the information processing apparatus 1 acquires the initial protection attributes given in the declarations of the program's variables, security constants, and parameters (step S2). Then, the information processing apparatus 1 parses the program list of the program to be processed and generates a data flow (step S3).
- FIG. 13 is a diagram illustrating the data flow of the pseudo program described above. Since it is substantially the same as FIG. 9, the same reference numerals are used.
- inputs (2401 to 2402 in FIG. 9) from the system A 2101 are represented as input functions “Input” 2401 and 2402, respectively.
- the public key ciphertext decryption function 2212 is represented as a function “SKDecrypt” 2212
- the signature verification function 2216 is represented as a function “PKVerify” 2216
- the common key encryption function 2217 is represented as a function “CKEncrypt” 2217.
- the output from the system B 2201 (2403 in FIG. 9) is expressed as an output function “Output” 2403.
- variables (input variables) are input from the external device 103 shown in FIG. 1 to the functions “Input” 2401 and 2402, which are non-security functions, and variables (output variables) are output from the function “Output” 2403, also a non-security function, to the external device 103.
- the protected information related to security consists of the secret key (skB) 2211, the message (msg) 2218, and the public key (pkA) 2215 assigned to the system B 2201 realized by the pseudo program, each of which is represented by a square symbol.
- the signature verification function 2216 is represented by a mountain-shaped symbol, the public key ciphertext decryption function 2212 and the common key encryption function 2217 are represented by pentagonal symbols, and the hash function 2214 is represented by a triangular symbol.
- a variable that is a terminal of the data flow and is not used thereafter is represented by a rounded rectangle.
- the information processing apparatus 1 first performs basic protection attribute determination processing, which automatically determines protection attributes without considering the dependency relations, using the data flow generated in step S3 (step S4).
- if the protection attributes are determined successfully by the basic processing (step S5: YES), the information processing apparatus 1 ends the processing normally. If the determination of the protection attributes fails (step S5: NO), the protection attribute determination processing with function extension, which takes the dependency relations into account according to the present embodiment, is performed (step S6). If that processing determines the protection attributes successfully (step S7: YES), the information processing apparatus 1 ends the processing normally; if the determination of the protection attributes fails again (step S7: NO), the processing is terminated as abnormal. Details of the protection attribute determination processing with function extension are described later.
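The overall flow of steps S1 to S7 might be driven by a routine like the following sketch; every type and helper function here is a hypothetical placeholder, not an API defined by the patent.

```c
/* Sketch (assumption) of the overall protection attribute determination flow. */
typedef struct func_attr_table func_attr_table;
typedef struct var_table       var_table;
typedef struct dataflow        dataflow;

func_attr_table *load_function_argument_attrs(void);               /* S1 */
var_table       *load_initial_attrs(const char *program_list);     /* S2 */
dataflow        *parse_to_dataflow(const char *program_list);      /* S3 */
int basic_determination(dataflow *df, func_attr_table *fa, var_table *v);
int extended_determination(dataflow *df, func_attr_table *fa, var_table *v);

int verify_program(const char *program_list)
{
    func_attr_table *fattrs = load_function_argument_attrs();      /* S1 */
    var_table       *vars   = load_initial_attrs(program_list);    /* S2 */
    dataflow        *df     = parse_to_dataflow(program_list);     /* S3 */

    if (basic_determination(df, fattrs, vars))       /* S4; success at S5       */
        return 0;
    if (extended_determination(df, fattrs, vars))    /* S6; success at S7       */
        return 0;
    return -1;                                       /* abnormal termination    */
}
```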
- the information processing apparatus 1 generates partial data flows by dividing, at each security function, the data flow generated in step S3 of FIG. 12 (step S10).
- the information processing apparatus 1 divides the continuous data flow shown in FIG. 13 into 12 partial data flows, as shown in the accompanying table.
- FIG. 15 is a diagram schematically showing a divided partial data flow. In the figure, a partial data flow number is assigned to each partial data flow in order to uniquely identify them.
- the information processing apparatus 1 may store information such as that shown in FIG. 15 regarding the partial data flow in a table format in a storage device such as a RAM or an external storage device such as an HDD.
- each end point of a partial data flow is one of: an input, an output, an initial value (non-protected or protected), a security function argument, or a terminal variable.
- the terminal variable here is a variable that is not used in any function after being output from a certain function.
- Each partial data flow includes at least two end points as components.
- the identification numbers of the variables and functions (security functions and non-security functions) shown in FIG. 13 are expressed in the format “#<reference numeral>_<variable name>”.
- an argument is expressed in the format “#<reference numeral>_<function name>(#<argument number>)”.
- Non-security functions are also expressed using argument numbers to specify which variables are explicit inputs to the program and which variables are outputs.
- when program input and output are viewed at the variable level, even an operation that is merely an input cannot be performed without implicit outputs such as the size of the input buffer and parameters passed to the OS.
- the argument specification is for making this distinction.
- the divided partial data flow will be specifically described.
- the value of “buf”, which is the first argument of the function “#2401_Input”, is assigned to “Cipher_From_A” and is input as the first argument of the function “#2212_SKDecrypt”; the partial data flow with partial data flow number “1” extends exactly that far.
- the initial value “#2211_skB” is directly substituted as the second argument of the function “#2212_SKDecrypt”, so this partial data flow has two components.
- the protection attribute “confidential” is given as an initial value to the secret key “skB” which is confidential information.
- the third argument of the function “# 2212_SKDecrypt” is input as the second argument of the function “# 2217_CKEncrypt” and the first argument of the function “# 2214_Hash”.
- the partial data flow with the partial data flow number “7” includes the fourth argument of the function “# 2216_PKVerify” and “Cert_SignMsg_A”. “Cert_SignMsg_A” is a terminal variable that is output and is not used thereafter, and the subsequent data flow is not constructed.
- the undetermined partial data flow table is a table for storing partial data flows whose protection attributes are not uniquely determined, and is stored in a storage device such as a RAM or an external storage device such as an HDD.
- the information processing apparatus 1 writes all the partial data flows divided in step S10 into the undetermined partial data flow table.
- The variable protection attribute table is a table that stores, for each variable and parameter, a complete/incomplete state indicating whether the unique determination of its protection attribute has been completed, together with the uniquely determined protection attribute; it is stored in a storage device such as a RAM or an external storage device such as an HDD.
- The function overload table is a table that stores, for each call of an overloaded function in the processing target program input in step S1 of FIG. 12, a complete/incomplete state indicating whether the unique determination of the protection attribute of each argument has been completed, together with the determined protection attribute of the argument; it is stored in a storage device such as a RAM or an external storage device such as an HDD. Note that an entry in the function overload table is generated for each call of an overloaded function; for example, if there are two public key encryption function calls in the program, two entries are generated and each entry is handled independently.
- The information processing apparatus 1 stores each security function, and each non-security function other than the input and output functions, in the function overload table.
- This is because the arguments of a security function include both arguments whose protection attribute is fixed, such as a key value, and arguments that are unprotected.
- the protection attribute of the input function is always “exposed”, and the protection attribute of the output function is limited to “exposed”, “fixed”, or “verified”.
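- The three work tables described above could be held in memory roughly as in the following hedged sketch; the dictionary layout and the entry values shown are illustrative assumptions (only the "confidential" initial value of "skB" and the "exposed" attribute of the input function's output are taken from the description).

```python
# Undetermined partial data flow table: numbers of partial data flows whose
# protection attributes are not yet uniquely determined (initially, all 12).
undetermined_pdf_table = set(range(1, 13))

# Variable protection attribute table: for each variable/parameter, whether its
# protection attribute is already uniquely determined, and if so which one.
variable_protection_table = {
    # variable          (determined?, protection attribute)
    "skB":            (True,  "confidential"),   # explicitly given initial value
    "buf":            (True,  "exposed"),        # output of an input function is always "exposed"
    "Cipher_From_A":  (False, None),
}

# Function overload table: one entry per *call* of an overloaded function;
# each argument field records the attribute once it is uniquely determined.
function_overload_table = {
    "#2212_SKDecrypt": {"#1": None, "#2": "confidential", "#3": None},
    "#2217_CKEncrypt": {"#1": None, "#2": None, "#3": None},
}
```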
- the information processing apparatus 1 selects one partial data flow to be examined next from the undetermined partial data flow table (step S12). Subsequently, the information processing apparatus 1 determines whether or not a match determination based on the protection attribute of the component included in the partial data flow is possible (step S13). Whether or not a match can be determined is whether or not there is at least one variable, argument, or parameter whose protection attribute has already been uniquely determined in the partial data flow. For example, in the partial data flow with the partial data flow number “1” shown in FIG. 15, the dummy argument “# 2212_SKDecrypt (# 1)” of the security function “SKDecrypt” has an overload definition, so the protection attribute is not uniquely determined.
- On the other hand, because the protection attribute of the variable output by the input function is uniquely determined as "exposed", it is possible to determine whether the protection attributes in this partial data flow match by checking agreement with the protection attribute "exposed" of that variable.
- In contrast, the match determination described above cannot be performed for a partial data flow composed only of arguments of overloaded functions, such as the partial data flow with partial data flow number "10" shown in FIG. 15.
- If the match determination is possible (step S13: YES), the information processing apparatus 1 performs the match determination (step S14). Specifically, the information processing apparatus 1 first determines whether the protection attributes of all the uniquely determined components of the partial data flow match (step S14).
- A uniquely determined component means a variable to which a protection attribute has been explicitly given, or a dummy argument whose protection attribute has been determined by resolving a function overload during the analysis of another partial data flow. Furthermore, when all the protection attributes of the uniquely determined components match and the partial data flow contains an argument of an overloaded function whose protection attribute is not yet determined, the information processing apparatus 1 uniquely determines the protection attribute of that argument (step S15).
- The protection attribute that is uniquely determined and matched for all the components of the partial data flow in this way is referred to as the confirmed protection attribute of the partial data flow.
- If any of the uniquely determined components has a mismatching protection attribute (step S14: NO), the inspection fails; in that case, the determination result of step S5 in FIG. 12 becomes negative, and the information processing apparatus 1 performs the protection attribute determination process with function expansion described later. Otherwise, in step S15 the information processing apparatus 1 deletes the partial data flow from the undetermined partial data flow table, and registers the confirmed protection attribute in the corresponding fields of the function overload table for the arguments of the undetermined overloaded functions included in the partial data flow. Furthermore, when the protection attribute of one argument is uniquely determined, the protection attribute of another argument may also be uniquely determined from the overload definition of the function.
- In such a case, too, the information processing apparatus 1 registers the uniquely determined protection attribute in the corresponding field of that argument. For example, in the case of the partial data flow with partial data flow number "1" shown in FIG. 15, registering the confirmed protection attribute corresponds to writing the protection attribute "exposed" into the field of the function overload table corresponding to the first argument "#2212_SKDecrypt(#1)" of the function "SKDecrypt". In addition, from the overload definition of the function "SKDecrypt", the protection attribute of the third argument is uniquely determined as "concealed", and the information processing apparatus 1 writes this into the field corresponding to "#2212_SKDecrypt(#3)"; the protection attribute of this argument is thereby uniquely determined.
- Next, the information processing apparatus 1 determines whether there is a partial data flow whose protection attribute is undetermined in the undetermined partial data flow table (step S16). If there is none (step S16: NO), the determination of the protection attribute has succeeded, and the information processing apparatus 1 outputs the determined variable protection attribute table and function overload table (step S17) and ends the process. In this case, the determination result of step S5 in FIG. 12 is affirmative, and the information processing apparatus 1 ends the process normally. On the other hand, if there is a partial data flow whose protection attribute has not been determined (step S16: YES), the process returns to step S12 and the information processing apparatus 1 repeats the above processing. As this processing is repeated, the number of argument fields in the function overload table whose protection attributes are uniquely determined increases, and by the time the inspection of all partial data flows is completed, all fields are uniquely determined.
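- The overall loop of steps S12 to S17 can be pictured with the following simplified Python sketch; it is not the literal algorithm of the embodiment, and the helper overload_table.register() used for resolving overload definitions is a hypothetical placeholder.

```python
def determine_protection_attributes(undetermined, variable_table, overload_table):
    """Sketch of the basic protection attribute determination loop (steps S12-S17)."""
    while undetermined:                                   # step S16: anything left?
        progressed = False
        for pdf in list(undetermined):                    # step S12: pick a partial data flow
            known = [e for e in pdf.endpoints if e.protection_attribute is not None]
            if not known:                                 # step S13: no anchor attribute yet
                continue                                  # try another partial data flow first
            attrs = {e.protection_attribute for e in known}
            if len(attrs) > 1:                            # step S14: mismatch -> inspection fails
                return None                               # fall back to function expansion (step S6)
            confirmed = attrs.pop()                       # confirmed protection attribute of this flow
            for e in pdf.endpoints:                       # step S15: propagate to undetermined arguments
                if e.protection_attribute is None:
                    e.protection_attribute = confirmed
                    overload_table.register(e.name, confirmed)  # may fix other args via overload
            undetermined.remove(pdf)
            progressed = True
        if not progressed:
            return None     # only overloaded-function-only flows remain; cannot decide
    return variable_table, overload_table                 # step S17: output the filled tables
```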
- FIG. 16 is a diagram showing a process of automatic determination for the partial data flow shown in FIG.
- a shaded portion represents a partial data flow selected as a processing target, and a bold character represents a determined protection attribute (including a derivatively determined one).
- the partial data flow with the partial data flow number “1” is first selected, and the protection attribute is determined to be “exposed”.
- Then, from the overload definition, the protection attribute of "#2212_SKDecrypt(#3)", the third argument of the function "SKDecrypt" (the public key ciphertext decryption function) appearing in the partial data flow with partial data flow number "3", is determined to be "concealed".
- However, since the second argument "#2217_CKEncrypt(#2)" of the function "CKEncrypt", an encryption function, must have the protection attribute "confidential" according to the definition of the function argument protection attributes of the encryption function, the protection attributes cannot be matched in the partial data flow with partial data flow number "3".
- Conversely, when the partial data flow with partial data flow number "3" is selected first, the third argument "#2212_SKDecrypt(#3)" of the function "SKDecrypt" is determined to be "confidential". In that case, the protection attributes cannot be matched in the partial data flow with partial data flow number "1"; in either case, the determination of the protection attribute fails.
- the protection attribute cannot be determined even though this protocol is secure.
- one solution is given to such a case by considering the dependency relationship. As shown in FIG. 12, when the protection attribute determination process fails in the basic protection attribute determination process in step S4, the information processing apparatus 1 performs a protection attribute determination process with function expansion in consideration of the dependency relationship in step S6.
- The purpose of the protection attribute determination process with function extension is to detect, in a data flow in which protected variables and unprotected variables cannot be properly separated, indirect verification of integrity that takes dependency relationships into account (referred to as indirect integrity verification), to reconstruct the data flow by extending the function at the part where the indirect integrity verification occurs, and thereby to generate a data flow in which protected and unprotected variables can be separated, together with a function argument protection attribute set and a dependency relationship set corresponding to that data flow.
- this function extension is realized by function synthesis.
- First, the information processing apparatus 1 acquires a dependency relationship set according to the functions used in the data flow constructed from the processing target program and the dependency relationships defined as shown in FIG. 10 (step S20).
- the information processing apparatus 1 initializes a function pair table and a function argument protection attribute table of the composite function V ′ constructed by the protection attribute determination process with function extension (step S21).
- The function pair table is a table in which pairs of a function that generates a dependency relationship and a verification function that give rise to indirect integrity verification (function pairs) are registered, and it is stored in a storage device such as a RAM or an external storage device such as an HDD.
- In the initialization of the function pair table, the information processing apparatus 1 registers all pairs of a function that generates a dependency relationship shown in FIG. 10 (referred to as a dependency relationship generation function) and a verification function appearing in the data flow.
- the function argument protection attribute table of the synthesis function V ′ is a table that stores the function argument protection attribute of the synthesis function V ′, and is stored in a storage device such as a RAM or an external storage device such as an HDD. In the initialization of the composite function V ′, the information processing apparatus 1 resets all the function argument protection attributes in the function argument protection attribute table so that no entry is registered.
- In step S23, the information processing apparatus 1 determines whether at least one of the input variables and output variables of the dependency relationship generation function F is input to the verification function V. That is, it determines whether the output of the dependency generation function F is input to the verification function V, or whether an input of the dependency generation function F is also input to the verification function V (in other words, whether the dependency generation function F and the verification function V share a variable). If the determination result is negative (step S23: NO), indirect integrity verification does not occur due to the dependency relationship, and the processing moves to the next function pair (F, V).
- If the determination result of step S23 is affirmative (step S23: YES), the process proceeds to step S24.
- In the example of the pseudo program shown in FIG. 11, among the function pairs (SKDecrypt, PKVerify), (Hash, PKVerify), and (CKEncrypt, PKVerify), only the function pair (Hash, PKVerify) is connected in the data flow: the output of "Hash" is input to the function "PKVerify". For this reason, the determination result of step S23 becomes positive only for this function pair (Hash, PKVerify).
- The processing in steps S24 to S27 is a loop: for each function pair (F, V) for which the determination result in step S23 is positive, the information processing apparatus 1 selects one entry from the dependency relationship table of the dependency relationship generation function F, one of the multiple definitions of the dependency generation function F, and one of the multiple definitions of the verification function V, and performs the processing from step S25 onward for all such combinations.
- In step S25, the information processing apparatus 1 determines whether indirect integrity verification based on the dependency relationship occurs.
- Indirect integrity verification due to a dependency relationship occurs when some of the variables included in the decision term of the dependency do not have integrity before verification, and all of them have integrity after verification. In other words, it occurs when the following two conditions are satisfied.
- Condition 1: At least one of the variables of the decision term of the dependency is input to the verification function and is verified so as to have integrity at the time of output.
- Condition 2: Every variable of the decision term that is not input to the verification function already has integrity.
- Specifically, in step S25 the information processing apparatus 1 checks the following (c) and (d).
- (c) is a check as to whether condition 1 is satisfied.
- (d) is a check of whether or not condition 2 is satisfied.
- (c) At least one of the variables in the dependency entry is input to the verification function V, the protection attribute of that variable is identical between the input/output of the dependency generation function F and the input of the verification function V, and the variable has integrity in the output of the verification function V.
- (d) All of the variables in the dependency entry that are not input to the verification function V have integrity in both the input definition and the output definition of the dependency generation function F.
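- The checks (c) and (d) can be pictured with the following hedged sketch for one combination of (dependency entry f, overload definition of F, overload definition of V); the field names and the set of integrity-bearing attributes are assumptions made here for illustration only.

```python
INTEGRITY_ATTRS = {"verified", "fixed", "confidential"}   # assumed integrity-bearing attributes

def has_integrity(attr):
    return attr in INTEGRITY_ATTRS

def indirect_verification_occurs(dep_entry, f_def, v_def):
    """Return True when checks (c) and (d) both pass for this combination."""
    verified_some = False
    for var in dep_entry.decision_term_vars:
        if var in v_def.inputs:                                     # check (c): var goes into V
            same_attr = (f_def.attr_of(var) == v_def.input_attr_of(var))
            if same_attr and has_integrity(v_def.output_attr_of(var)):
                verified_some = True                                # condition 1 holds for this var
            else:
                return False
        else:                                                       # check (d): var not input to V
            if not has_integrity(f_def.attr_of(var)):               # condition 2: must already have integrity
                return False
    return verified_some
```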
- FIG. 18 is a diagram illustrating the result of the check performed by the information processing apparatus 1 on the pseudo program shown in FIG. 11, selecting one by one the dependency relationship of the function "Hash", an overload definition of the function "Hash", and an overload definition of the function "PKVerify".
- In this example, the decision term has only one variable, so condition 2 is vacuously satisfied and the check of (d) passes unconditionally.
- In general, however, condition 2 may not be satisfied.
- If the check in step S25 passes (step S25: YES), it is confirmed that indirect integrity verification by the dependency relationship occurs. Then, in the next step S26, for the function pair that passed the check in step S25, the information processing apparatus 1 synthesizes the dependency generation function F and the verification function V, and defines the resulting composite function V′ as a new security verification function. In this way, the security function can be extended safely without a security expert performing the data flow analysis. The information processing apparatus 1 then defines the function by registering entries in the function argument protection attribute table for the new security verification function V′.
- In the example of the pseudo program shown in FIG. 11, the checks in step S25 pass for the combinations (b, A, B), (b, D, B), (b, C, A), and (b, C, C) of (Hash dependency entry, Hash function definition, PKVerify function definition).
- As a result, a signature verification function "Hash_and_PKVerify" is defined as the new security verification function V′.
- FIG. 19 is a diagram illustrating the definition of the signature verification function “Hash_and_PKVerify”.
- Since the same overload entry is generated from both (b, D, B) and (b, C, C), the table shown in FIG. 19 contains three entries; that is, the function is overloaded with three definitions.
- the definition of the function performed by the information processing apparatus 1 in step S26 will be specifically described.
- First, the information processing apparatus 1 defines the input of the new security verification function V′ as "input variables of the dependency generation function F + (input variables of the verification function V − output variables of the dependency generation function F)". Similarly, it defines the output of the security verification function V′ as "(output variables of the dependency generation function F − input variables of the verification function V) + output variables of the verification function V". The output variables of the dependency generation function F are removed from the input, and the input variables of the verification function V are removed from the output, in order to eliminate the intermediate variables between the dependency generation function F and the verification function V; such intermediate variables are enclosed within the composite function and need not be defined.
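- The input/output rule quoted above amounts to simple set arithmetic, as in the following sketch; the argument lists used in the example for "Hash" and "PKVerify" are simplified assumptions based on the variable names appearing in this description, not the exact signatures of the embodiment.

```python
def compose_signature(f_inputs, f_outputs, v_inputs, v_outputs):
    """Compute the input/output variables of the composite function V'."""
    # input(V')  = input(F) + (input(V) - output(F))
    v_prime_inputs = set(f_inputs) | (set(v_inputs) - set(f_outputs))
    # output(V') = (output(F) - input(V)) + output(V)
    v_prime_outputs = (set(f_outputs) - set(v_inputs)) | set(v_outputs)
    return v_prime_inputs, v_prime_outputs

# Example with the pair (Hash, PKVerify) of the pseudo program:
#   Hash:     ComKey -> SigMsg_A
#   PKVerify: pkA, Sign_A, SigMsg_A -> Cert_Sign_A, Cert_SigMsg_A   (simplified)
ins, outs = compose_signature({"ComKey"}, {"SigMsg_A"},
                              {"pkA", "Sign_A", "SigMsg_A"},
                              {"Cert_Sign_A", "Cert_SigMsg_A"})
# ins == {"ComKey", "pkA", "Sign_A"}: the intermediate variable SigMsg_A is enclosed in V'.
# The renamed dependent-term variable (Cert_ComKey) is added to the output in a later step.
```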
- Next, regarding the variable of the dependent term, the process differs depending on whether or not the output variables include that variable.
- When the output variables include the variable of the dependent term, the information processing apparatus 1 replaces the protection attribute of that variable, inherited from the dependency entry f, with a protection attribute that has integrity while maintaining the same confidentiality.
- When the output variables do not include the variable of the dependent term, the information processing apparatus 1 changes its name so as to differ from the input and adds it to the output of the security verification function V′.
- The protection attribute of the renamed variable is obtained from the protection attribute of the variable in the input of the dependency entry f, changed to one that has integrity while maintaining the same confidentiality.
- In the example of the pseudo program shown in FIG. 11, since the output variables do not include the dependent-term variable, the information processing apparatus 1 changes the name of the dependent-term variable "ComKey" to "Cert_ComKey" and outputs it.
- Its protection attribute is obtained by changing the protection attribute of the variable "ComKey" in the input definition of the dependency generation function F ("exposed" or "concealed") to a protection attribute having integrity (from "exposed" to "verified", or from "concealed" to "confidential").
- More precisely, the information processing apparatus 1 redefines the protection attribute as "confidential" if any input has confidentiality, and redefines it as "verified" if no input has confidentiality. In the example of the pseudo program shown in FIG. 11, the protection attribute is "verified" for the entry whose inputs and outputs involve no confidentiality (extended public key signature verification 1-A in FIG. 19), and "confidential" otherwise. In this way, the information processing apparatus 1 defines the signature verification function "Hash_and_PKVerify" as a new security verification function V′ as illustrated in FIG. 19.
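- The attribute rewriting for the renamed dependent-term variable can be summarized as the following small mapping; the dictionary below is only a restatement of the rule in the text ("exposed" to "verified", "concealed" to "confidential"), expressed as a Python sketch.

```python
# Upgrade to an integrity-bearing attribute while keeping the same confidentiality.
ADD_INTEGRITY = {
    "exposed":   "verified",      # no confidentiality -> integrity only
    "concealed": "confidential",  # confidentiality kept, integrity added
}

def output_attr_for_renamed_dependent_var(attr_in_f_input):
    """Attribute of e.g. Cert_ComKey, derived from ComKey's attribute in F's input definition."""
    return ADD_INTEGRITY.get(attr_in_f_input, attr_in_f_input)

# e.g. in the entry without confidentiality (extended public key signature verification 1-A),
# ComKey is "exposed" on input, so Cert_ComKey becomes "verified"; otherwise "confidential".
```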
- When a security verification function V′ has been defined in the loop of steps S24 to S27 (step S28: YES), the information processing apparatus 1 expands the data flow accordingly (step S29). The information processing apparatus 1 first deletes the dependency generation function F and the verification function V, and adds the new security verification function V′ in their place. The security verification function V′ takes over the dependency relationship of the verification function V as it is.
- FIG. 20 is a diagram showing the data flow expanded in step S29 for the pseudo program shown in FIG.
- a function “Hash_and_PKVerify” 2501 is added as a new security verification function V ′.
- the dependency of the function “Hash_and_PKVerify” is not defined.
- Next, for each function that receives the variable of the dependent term, the information processing apparatus 1 considers the order of computation and changes the data flow so that a function computed after the integrity verification receives, as input, the integrity-bearing variable output from the new security verification function V′ instead of the original dependent-term variable.
- In the example of the pseudo program shown in FIG. 11, the variable "ComKey" corresponds to this. Since it can be determined from the program list that the function "PKVerify" is computed before the function "CKEncrypt", the variable "Cert_ComKey" is input to the function "CKEncrypt" instead of the variable "ComKey".
- the information processing apparatus 1 prunes unnecessary variables of the security verification function V ′ (step S30).
- This pruning means that a variable that is not input or output to any function other than the security verification function V ′ is deleted from the inputs and outputs of the security verification function V ′. Thereby, the number of variables can be reduced, and the efficiency and safety of determining the protection attribute and executing the program can be improved.
- In the example of the pseudo program shown in FIG. 11, the two variables "Cert_Sign_A" and "Cert_SigMsg_A" represented by the rounded square symbols are output from the security verification function V′ ("Hash_and_PKVerify") but are not input to anything thereafter, so they are candidates for pruning.
- Here, the variable "Cert_Sign_A" is output after the integrity of the variable "Sign_A" input to the security verification function V′ has been verified. For a variable that is input to the security verification function V′ and output with its integrity verified in this way, the information processing apparatus 1 determines whether pruning is possible by also checking that the input-side variable, "Sign_A" in this case, is neither input to nor output from a dependency generation function F. For the variable "Cert_Sign_A", the input function "Input" to which "Sign_A" is connected is not a dependency generation function, and therefore the variable can be pruned.
- Accordingly, the information processing apparatus 1 prunes the two variables "Cert_SigMsg_A" and "Cert_Sign_A". In response, the information processing apparatus 1 also deletes the two variables from the output definition of the function "Hash_and_PKVerify", the security verification function V′ illustrated in FIG. 19.
- FIG. 21 is a diagram illustrating the data flow obtained by the information processing apparatus 1 performing pruning on the data flow illustrated in FIG. 20. Since it is detected that the integrity of the input ("ComKey") to the function "CKEncrypt" is guaranteed by the function "PKVerify", FIG. 21 shows that the function "Hash_and_PKVerify", which combines the function "Hash" and the function "PKVerify", outputs a value whose integrity has been verified and that this value is input to the function "CKEncrypt". It also shows that the two variables "Cert_SigMsg_A" and "Cert_Sign_A" have been deleted from the output of the function "Hash_and_PKVerify" by pruning. This pruning is not essential and need not necessarily be performed.
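- The pruning rule of step S30 might be expressed as in the following sketch; the data flow query methods (consumers_of, verified_input_of, functions_touching) are hypothetical helpers assumed here for illustration.

```python
def can_prune(output_var, dataflow, v_prime, dependency_generation_functions):
    """Sketch: may output_var of the security verification function V' be pruned?"""
    # must not feed any function other than V' itself
    if any(f is not v_prime for f in dataflow.consumers_of(output_var)):
        return False
    # for a verified copy of an input (e.g. Cert_Sign_A -> Sign_A), also check the input side
    src = dataflow.verified_input_of(output_var)   # None if output_var is not such a copy
    if src is not None:
        # the input-side variable must not be connected to a dependency generation function
        # (in the example, Sign_A is connected only to "Input", which is not one)
        if any(f in dependency_generation_functions for f in dataflow.functions_touching(src)):
            return False
    return True
```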
- the information processing apparatus 1 performs a basic protection attribute determination process similar to that in step S4 of FIG. 12 on the data flow that has been expanded in step S29 and pruned in step S30 (step S31).
- If the protection attribute is successfully determined in this basic protection attribute determination process (step S32: YES), the information processing apparatus 1 outputs the data flow and the determined protection attribute of each argument (step S33).
- On the other hand, if the determination of the protection attribute fails in the basic protection attribute determination process of step S31 (step S32: NO), this means that the protection attribute cannot be determined automatically even when the security verification function V′ is newly defined by synthesizing the function pair (F, V) being processed.
- In that case, the information processing apparatus 1 undoes all the expansion performed in step S29 and the pruning performed in step S30, that is, returns to the state immediately before step S29, and initializes the function argument protection attributes of the security verification function V′ (step S35).
- The information processing apparatus 1 tries the function extension and the determination of the protection attribute in consideration of the dependency relationship for all the function pairs (F, V) (step S36). If the protection attribute cannot be determined automatically for any function pair, the determination of the protection attribute fails.
- Failure here means that the data flow is not safe from the viewpoint of security even when any single synthesis of a function pair is considered as an extension.
- In this case, the determination result of step S7 in FIG. 12 is negative, and the information processing apparatus 1 terminates the process as abnormal.
- FIG. 22 is a diagram schematically showing a divided partial data flow.
- Here, the partial data flow with partial data flow number "3" contains the variable "ComKey", which is the key before verification, while the partial data flow with partial data flow number "6" contains the variable "Cert_ComKey", which is the key after verification; it can thus be seen that the key is separated before and after the verification.
- FIG. 23 is a diagram showing the process of performing the basic protection attribute determination process of step S4 in FIG. 12 on these partial data flows. As shown in FIG. 23, in the data flow expanded in step S29, the variable "ComKey", which is the decryption result of the function "SKDecrypt" (the public key ciphertext decryption function), and the integrity-bearing variable "Cert_ComKey", which is input to the function "CKEncrypt" (the common key encryption function), are separated; the protection attributes are therefore determined appropriately, and the assignment of protection attributes to all the variables succeeds.
- Although the case with pruning has been described here, the determination of the protection attribute also succeeds when pruning is not performed. For example, when pruning is not performed in the data flow illustrated in FIG. 22, the protection attribute of the variable "Cert_SigMsg_A" and the protection attribute of the variable "Cert_Sign_A" are both uniquely determined as "verified".
- As described above, in the present embodiment, the protection attribute to be assigned to each variable is determined in consideration of the relationship between a plurality of functions, expressed as a dependency relationship. For example, in the case of the pseudo program shown in FIG. 11, by treating the hash function and the signature verification function together, the integrity of the input of the hash function can be confirmed after signature verification. More specifically, the input of the hash function can be treated as lacking integrity before signature verification, and treated separately as having integrity only when the signature verification succeeds. In this way, the separation of protected and unprotected variables can be achieved in the data flow, and as a result an appropriate protection attribute can be assigned to each variable.
- a protection attribute related to integrity and confidentiality is given to each variable used in the description of the security protocol, and the calculation by the function is limited to variables of the same protection attribute. Thereby, protected data and unprotected data are separated. Further, in order to support conversion of protection attributes by encryption processing, calculation between different protection attributes is performed by a function having a computational unidirectionality (for example, an encryption function, a decryption function, a signature generation function, a signature verification function). And the protection attribute of input and output is limited for each type of calculation. For example, in the public key ciphertext decryption function, the secret key is required to have both confidentiality and integrity. The input ciphertext is unprotected data, whereas the decrypted output is protected data having confidentiality.
- In this way, the conversion of protection attributes is limited to safe computations only.
- As a result, the possibility that protected data is erroneously output to the outside, as in the technique of Patent Document 1, is eliminated, and conversion errors of protection attributes can be reduced.
- Indirect integrity verification occurs, for example, in processing using a combination of a hash function and a signature verification function, as described above.
- the message m to be verified is input to the hash function H, and the output variable H (m) is input to the signature verification function.
- the signature verification is successful here, the integrity of the variable H (m) input to the signature verification function is guaranteed from the nature of the signature verification function. Then, due to the collision difficulty of the hash function H, not only the variable H (m) that is the output of the hash function, but also the integrity of the message m that is the input of the hash function corresponding thereto is guaranteed.
- The integrity of the message m can be confirmed based on the above idea; however, with only the two rules (R1) and (R2) described above, it cannot be detected that the message m has integrity, and an appropriate protection attribute cannot be assigned to it.
- In the present embodiment, by paying attention to the relationship (dependency relationship) in which the integrity of the corresponding input is indirectly guaranteed, a value whose integrity can be confirmed indirectly can be detected.
- Therefore, a protection attribute can be assigned appropriately even in a protocol that performs processing combining a hash function and a signature verification function, for example. Examples of such protocols include PGP and S/MIME, which are standard protocols for mail, and SSL, which is a standard protocol for the Internet; the configuration of the present embodiment is useful when applied to these protocols.
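- The inference behind this dependency rule can be pictured with the following toy sketch (not the embodiment's algorithm): a successful signature verification gives integrity to H(m), and collision resistance of H extends that integrity to m. The variable names and data layout are assumptions for illustration.

```python
def integrity_after_verification(verified_ok, vars_with_integrity, hash_pairs):
    """vars_with_integrity: set of variables already known to have integrity.
    hash_pairs: {output_var: input_var} recording h = H(m) relations (the dependency)."""
    if not verified_ok:
        return vars_with_integrity
    result = set(vars_with_integrity)
    for h_out, h_in in hash_pairs.items():
        if h_out in result:          # H(m) verified to have integrity by the signature check
            result.add(h_in)         # then m also has integrity (collision resistance of H)
    return result

# Example: after PKVerify succeeds on H(m), both "H(m)" and "m" gain integrity.
print(integrity_after_verification(True, {"H(m)"}, {"H(m)": "m"}))   # {'H(m)', 'm'}
```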
- As described above, according to the present embodiment, in a programming model that can access two types of memory areas, a protected memory area and a non-protected memory area, the safe implementation of various security protocols using the protected memory area can be supported.
- This is because the protection attribute of each variable is determined appropriately, taking into account not only the data flow but also each process performed along the way, such as encryption processing, so that protected data to be stored in the protected memory area and unprotected data to be stored in the non-protected memory area are separated appropriately in a general security protocol.
- As a result, these variables can be stored appropriately in the protected memory area or the non-protected memory area according to their protection attributes, and it is possible to achieve both input/output with the outside using the non-protected memory area and protection of internally held variables in the protected memory area.
- In the present embodiment, in step S6 the information processing apparatus 1 does not synthesize the function pair of the dependency relationship generation function and the verification function, but instead expands the verification function itself of the function pair.
- The outline of the procedure of the protection attribute determination process with function extension itself is substantially the same as that shown in FIG. 17.
- the details of the processes in steps S26 and S29 shown in FIG. 17 are different from those in the first embodiment.
- Steps S20 to S25 are the same as those in the first embodiment described above.
- In step S26, as in the first embodiment, for each function pair that has passed the check in step S25, the information processing apparatus 1 performs the function definition for the triplet of (dependency relationship of the dependency generation function F, function definition of F, function definition of the verification function V).
- However, in the present embodiment, the new security verification function V′ is generated by extending only the verification function V while leaving the dependency generation function F as it is. The information processing apparatus 1 therefore adds the variable of the dependent term to the input variables and to the output variables of the verification function V; the added input variable and output variable are given different names.
- the information processing apparatus 1 takes over the input protection attribute and the output protection attribute defined in the function argument protection attribute entry f of the dependency generation function F as the protection attributes of these variables.
- Specifically, in the example of the pseudo program shown in FIG. 11, the information processing apparatus 1 defines the input and output of "Modified_PKVerify", the new security verification function that extends the function "PKVerify", by taking over the input and output of the function "PKVerify", adding the dependent-term variable "ComKey" to the input, and adding the renamed variable "Cert_ComKey" to the output.
- FIG. 24 is a diagram illustrating the definition of a new security verification function “Modified_PKVerify” that is a function that is an extension of the function “PKVerify” that is the verification function V.
- As shown in FIG. 24, the input variable "ComKey" is added as input 3, and the output variable "Cert_ComKey" is added as output 3.
- Steps S27 to S28 are the same as those in the first embodiment described above.
- In step S29, the information processing apparatus 1 expands the data flow based on the security verification function V′ obtained in step S26.
- the information processing apparatus 1 does not delete the dependency relationship generation function F and replaces only the verification function V with the security verification function V ′. Further, as in the first embodiment, the information processing apparatus 1 rewrites the data flow by taking over the dependency relationship to the security verification function V ′ and changing the argument. Furthermore, in the present embodiment, the information processing apparatus 1 gives a special attribute to the intermediate variable of the function pair (F, V ′).
- The intermediate variable is a variable that is output from the dependency generation function F, is input to the new security verification function V′, and is not referenced by anything other than the function pair (F, V′).
- This intermediate variable must be handled so that it cannot be read from the outside and cannot be tampered with, regardless of its semantic confidentiality and integrity.
- In the first embodiment, such intermediate variables were removed from the interface by the function synthesis so that they could not be read from the outside.
- In the present embodiment, instead, the special attribute "capsulated" is defined in the definition related to the output of the dependency generation function F and in the definition related to the input of the verification function V. The special attribute "capsulated" is an attribute that does not permit calculation with any variable other than the intermediate variable of the function pair (F, V′).
- Since a variable to which the special attribute "capsulated" is given is not permitted to be calculated with any variable other than the intermediate variable of its function pair (F, V′), the special attribute "capsulated" must be distinguished and defined per function pair (F, V′); for example, a distinct attribute such as "capsulated_1" or "capsulated_2" is assigned for each function pair (F, V′).
- the variable “SigMsg_A” used only between the function “Hash” and the function “Modified_PKVerify” corresponds to this intermediate variable.
- Accordingly, the information processing apparatus 1 rewrites the protection attribute of the variable "SigMsg_A" to the special attribute "capsulated_1" in the overload tables of the function "Hash" and the function "Modified_PKVerify". At this time, the information processing apparatus 1 rewrites the protection attribute of the output variable of the function "Hash" to "capsulated_1"; even if another hash function other than this "Hash" is used in the program, the protection attribute of the output variable of that hash function is not rewritten. Only the function "Hash" that is the dependency generation function targeted here is subject to the rewriting.
- By such processing, it is guaranteed that the value of the variable "ComKey" input to the function "Hash" and the value of the variable "ComKey" input to the function "Modified_PKVerify" are the same. If the verification succeeds, the value of the output variable "Cert_ComKey" is also the same as those values.
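- Assigning a per-pair "capsulated" attribute could look like the following sketch; the data structures and helper methods (intermediate_vars, set_output_attr, set_input_attr, set_attr) are assumptions introduced here for illustration.

```python
def capsulate_intermediates(function_pairs, dataflow, overload_tables):
    """function_pairs: list of (F, V_prime) pairs after extension, e.g. [("Hash", "Modified_PKVerify")]."""
    for i, (f, v_prime) in enumerate(function_pairs, start=1):
        attr = f"capsulated_{i}"                      # distinguished per function pair
        for var in dataflow.intermediate_vars(f, v_prime):
            # e.g. SigMsg_A between Hash and Modified_PKVerify gets "capsulated_1"
            overload_tables[f].set_output_attr(var, attr)
            overload_tables[v_prime].set_input_attr(var, attr)
            dataflow.set_attr(var, attr)              # no calculation allowed with other variables
```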
- In step S30, the information processing apparatus 1 performs pruning of unnecessary variables output by the security verification function V′, as in the first embodiment.
- the two variables “Cert_SigMsg_A” and “Cert_Sign_A” are candidates for pruning, and both of these are verified and output by the security verification function V ′.
- the variable “Sign_A” corresponding to the variable “Cert_Sign_A” is connected only to the function “Input” that does not generate the dependency, and thus can be pruned.
- FIG. 25 is a diagram showing a data flow obtained by extending the data flow of the pseudo program shown in FIG. 11 and pruning the two variables. In the figure, it is shown that a value whose integrity has been confirmed is output from the function “Modified_PKVerify” in which the function “PKVerify” itself is extended, and is input to the function “CKEncrypt”.
- FIG. 26 is a diagram illustrating definitions of the function “Hash” and the security verification function “Modified_PKVerify”.
- the subsequent processing procedure is the same as in the first embodiment described above, and the information processing apparatus 1 performs the basic protection attribute determination process on the data flow expanded in step S29 and pruned in step S30.
- FIG. 27 is a diagram illustrating a process of performing basic protection attribute determination processing for a partial data flow divided in the same manner as in the first embodiment described above. As shown in the figure, the protection attribute is uniquely determined for each variable. Note that, as described above, in order to maintain the dependency relationship, the protection attribute “concealed” is assigned to the variable “ComKey”, but it should be noted that tampering prevention processing is performed.
- the protection attribute for each variable can be appropriately determined by extending the verification function itself instead of synthesizing the dependency generation function and the verification function.
- As a result, these variables can be stored appropriately in the protected memory area or the non-protected memory area according to their protection attributes, and it is possible to achieve both input/output with the outside using the non-protected memory area and protection of internally held variables in the protected memory area.
- FIG. 28 is a diagram showing a security protocol sequence used in the present embodiment.
- the security protocol is substantially the same as the security protocol of the hybrid encryption shown in FIG. 9 used in the first and second embodiments, except that KDF is used.
- a system A 7101 operates on the external device 103 shown in FIG. 2
- a system B 7201 is realized by the program module 212 shown in FIG.
- the system B 7201 acquires the public key (pkA) 7215 of the system A 7101 in advance.
- the system B 7201 separately verifies the integrity of the public key (pkA) 7215 using a certificate issued by the CA, but this is omitted in the figure.
- a secret key (skB) 7211 and a message (msg) 7218 assigned to itself are statically embedded in advance.
- The system A 7101 generates a common key (temporary key) (K) 7113 for transferring the message (msg) 7218, and passes the common key (K) 7113 through a KDF, thereby obtaining a first element (K1) 7113A and a second element (K2) 7113B as the first and second elements of KDF(K). The system A 7101 then generates a signature 7112 with its own private key over the hash value of the first element (K1) 7113A and transmits it to the system B 7201 (7402). The system A 7101 also transmits data 7111, obtained by encrypting the common key (K) 7113 with the public key (pkB) of the system B 7201, to the system B 7201 (7401). The system A 7101 separately verifies the integrity of the public key (pkB) based on a certificate issued by a CA, which is omitted in the figure.
- When the system B 7201 receives the data 7111 transmitted from the system A 7101 in 7401, it obtains the common key (K) 7213 by the public key ciphertext decryption function 7212 using the secret key (skB) 7211 assigned to it. Next, the system B 7201 passes the common key (K) 7213 through the KDF 7231 to obtain a first element (K1) 7232 and a second element (K2) 7233 as the first and second elements of KDF(K).
- The system B 7201 then calculates a hash value of the first element (K1) 7232 with the hash function 7214, and performs signature verification with the signature verification function 7216 using the public key (pkA) 7215, whose integrity has been confirmed in advance, and the signature 7112 received in 7402.
- If the verification succeeds, the integrity of the first element (K1) 7232 is confirmed from the property of the hash function described with reference to FIG. 10, and further, from the property of the KDF described in the definition of the dependency relationships, the integrity of the common key (K) 7213 and of the second element (K2) 7233 is confirmed. The second element (K2) 7233 can then be used as the encryption key for encrypting the message (msg) 7218 with a common key.
- The system B 7201 then encrypts the message (msg) 7218 with the common key encryption function 7217 using the second element (K2) 7233 as the key to generate a ciphertext, and transmits the ciphertext to the system A 7101 (7403).
- When the system A 7101 receives the ciphertext transmitted from the system B 7201 in 7403, it passes the common key (K) 7113 that it holds through the KDF to obtain the second element (K2), and decrypts the ciphertext with the common key decryption function 7114 using it to obtain the message (msg) 7115.
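- The receiver-side processing of the system B 7201 described above can be summarized by the following sketch; the injected callables (sk_decrypt, pk_verify, ck_encrypt, kdf) and the use of SHA-256 as the hash are placeholders and assumptions, not the concrete primitives of the embodiment.

```python
import hashlib

def system_b_process(cipher_from_a, sign_a, skB, pkA, msg,
                     sk_decrypt, pk_verify, ck_encrypt, kdf):
    """Sketch of the system B 7201 side of the protocol; crypto primitives are injected."""
    K = sk_decrypt(skB, cipher_from_a)        # public key ciphertext decryption (7212)
    K1, K2 = kdf(K)                           # first and second elements of KDF(K) (7231)
    digest = hashlib.sha256(K1).digest()      # hash of K1 (7214); hash choice is an assumption
    if not pk_verify(pkA, sign_a, digest):    # signature verification (7216)
        raise ValueError("signature verification failed")
    # Integrity of K1 (and, via the KDF dependency, of K and K2) is now confirmed,
    # so K2 may be used as the common encryption key for msg.
    return ck_encrypt(K2, msg)                # ciphertext sent back to system A (7403)
```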
- Since the pseudo program from which this data flow is generated is different from the one shown in FIG. 11, its illustration is omitted here.
- the update replacement unit 51 can update the data flow 21, the function argument protection attribute set 12, and the dependency relationship set 13 a plurality of times.
- The protection attribute determination unit 52 can receive the data flow 21, the function argument protection attribute set 12, and the dependency relationship set 13 updated by the update replacement unit 51 a plurality of times. Using them in the same manner as in the first embodiment described above, the protection attribute determination unit 52 then outputs a variable protection attribute table and a function overload table that store the protection attribute uniquely determined for each variable and argument.
- FIG. 30 is a flowchart illustrating the procedure of the protection attribute determination process with function extension performed multiple times. This procedure differs from the protection attribute determination process with function extension shown in FIG. 17 in the following respect.
- That is, in step S33A, the protection attribute determination process with function extension performed multiple times, which is performed in step S6′, is performed recursively.
- If the protection attribute is successfully determined (step S41: YES), the information processing apparatus 1 ends the process normally. If the determination fails (step S41: NO), the process proceeds to step S35, and all the expansion performed in step S29 and the pruning performed in step S30 are undone. By performing such processing, function extension at a plurality of locations can be handled; the extension points may be completely different or nested.
- FIG. 31 is a diagram illustrating a data flow according to the present embodiment. Since it is substantially the same as FIG. 28, the same reference numerals as in FIG. 28 are used for the same functions and variables.
- Inputs from the system A 7101 (7401 to 7402 in FIG. 28) are respectively represented as input functions “Input” 7401 and 7402.
- the public key ciphertext decryption function 7212 is represented as a function “SKDecrypt” 7212
- the signature verification function 7216 is represented as a function “PKVerify” 7216
- the common key encryption function 7217 is represented as a function “CKEncrypt” 7217.
- The output from the system B 7201 (7403 in FIG. 28) is represented as an output function "Output".
- FIG. 32 is a diagram showing a data flow obtained as a result of the data flow extension performed in step S29 of FIG.
- In this figure, it is shown that a function "Hash_and_PKVerify" is newly configured as a new security verification function, and that "Cert_K1" 7252, which is a variable of the dependent term, is an output variable thereof.
- In step S40, the protection attribute determination process with function extension is performed again.
- At the start of this second protection attribute determination process with function extension, the information processing apparatus 1 initializes the function pair table in step S21: the function pair (F, V) that has already been processed is deleted, and the security verification function V′ generated by the first protection attribute determination process with function extension is added to the list instead.
- Specifically, while the function pair table in the first protection attribute determination process with function extension is initialized to (SKDecrypt, PKVerify), (KDF, PKVerify), (Hash, PKVerify), and (CKEncrypt, PKVerify), the function pair table in the second protection attribute determination process with function extension is initialized to (SKDecrypt, Hash_and_PKVerify), (KDF, Hash_and_PKVerify), and (CKEncrypt, Hash_and_PKVerify).
- FIG. 33 is a diagram illustrating a data flow obtained as a result of the expansion. In this figure, it is shown that a function “KDF_and_Hash_and_PKVerify” 7261 is newly configured as a new security verification function.
- "Cert_K" 7262 and "Cert_K2", which are variables of the dependent term, are added to the output variables, and the variable "Cert_K2" is input to the function "CKEncrypt" 7217 in place of "K2", which was the output variable from the KDF.
- the variable “Cert_K1” has been deleted by pruning because the corresponding K1 has become an internal variable by function expansion and is no longer an input to the function “KDF_and_Hash_and_PKVerify”.
- FIG. 34 is a diagram illustrating a process of performing basic protection attribute determination processing on the partial data flow divided in the same manner as in the first embodiment described above. As shown in the figure, the protection attribute is uniquely determined for each variable.
- As described above, according to the present embodiment, the protection attribute can be determined appropriately by extending functions at a plurality of locations.
- In the present embodiment, the functions at a plurality of locations are expanded recursively; however, the present invention is not limited to this, and the protection attribute can also be determined appropriately by repeatedly executing the process while managing the necessary information.
- Also in the present embodiment, it is possible to extend functions at a plurality of locations or nested functions by applying the configuration of the second embodiment. In this case, an expansion that has already been performed must not be performed again. This means that after the function pair (F, V) has been extended and the verification function V has been replaced with a new security verification function V′, the function pair (F, V′) is not expanded again. If this extension were attempted, the security verification function V′ would not change, and the protection attribute determination process with function extension would be repeated without making progress. To avoid this, for example, in addition to performing the protection attribute determination process with function extension multiple times recursively, a table storing the already-extended function pairs is prepared, and whether a pair has already been extended is checked against it. Other methods that prevent the same processing from being repeated may also be applied.
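- One way to record which pairs have already been extended, as suggested above, is sketched below; extend_fn and the table layout are illustrative assumptions, not part of the embodiment.

```python
extended_pairs = set()   # table of already-extended function pairs, e.g. {("Hash", "PKVerify")}

def try_extend(f_name, v_name, extend_fn):
    """extend_fn performs the actual extension and returns the new verification
    function name (e.g. 'Hash_and_PKVerify'); it is a placeholder here."""
    if (f_name, v_name) in extended_pairs:
        return None                      # already extended once; do not repeat
    v_prime = extend_fn(f_name, v_name)
    extended_pairs.add((f_name, v_name))
    return v_prime
```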
- In the protection attribute determination processes of the first to third embodiments described above, the program list of the target program is analyzed, and an appropriate protection attribute is determined and assigned to each variable.
- These processes are based on the premise that the data flow related to the security protocol is correctly described in the program list of the target program.
- They also presuppose that the security protocol specification itself has no defect that could cause confidential information to be leaked inadvertently; there are already several methods for verifying the safety of a security protocol specification itself.
- It is therefore assumed that the safety of the security protocol to be implemented has been verified in advance.
- the following reference 1 discloses an example of protocol verification using a formal method.
- The verification of the specification of the security protocol to be implemented is not limited to the method of Reference 1; it is only assumed that some safety verification has been performed in advance.
- Reference 1: Li Gong, Roger Needham, Raphael Yahalom, "Reasoning About Belief in Cryptographic Protocols," Proceedings of the 1990 IEEE Symposium on Security and Privacy, 1990.
- Under these assumptions, the protection attribute of each variable can be determined automatically.
- Thus, in program development, the programmer first eliminates errors in the data flow at the first stage, and then the information processing apparatus 1 according to the present embodiment automatically determines the protection attributes at the second stage.
- FIG. 35 is a program development flow to which the protection attribute determination process according to this embodiment is applied.
- The development flow is roughly divided into two phases.
- One is a phase 1100 in which the programmer eliminates errors in the data flow and performs functional verification.
- The other is a phase 1200 in which the information processing apparatus 1 automatically determines a protection attribute for each variable of the data flow and automatically generates an executable program.
- the programmer starts development of the program related to the security protocol, and describes the program list of the program while paying attention to the correctness of the data flow (step S1101).
- the programmer describes the program list of the program without assigning protection attributes to the variables.
- Note, however, that it is considered common for the programmer to already know at this point the appropriate protection attributes of the basic variables of the data flow implementation, for example, variables that form the root of trust (Root of Trust), such as the programmer's own secret key.
- Next, the programmer generates an executable form of the program with the compiler (step S1102), executes it, and performs a functional test to check whether the desired output data is obtained for the functional test input data (step S1103). If the functional test fails (step S1104: NO), the programmer corrects the error in the data flow (step S1151) and repeats steps S1151 and S1102 to S1103 until the result of the functional test is correct (step S1104: YES). If appropriate functional test input data is provided, errors in the data flow should have been eliminated by the time the functional test results are correct.
- the information processing apparatus 1 is made to perform protection attribute determination processing (step S1201).
- At this time, the protection attribute of the above-described Root of Trust, which the programmer grasps in step S1101, may be given as an initial value. If the determination of the protection attribute fails as a result of the protection attribute determination process (step S1202: NO), then even if the data flow is described correctly according to the specification of the security protocol, it is considered that the program has been implemented with an extra variable or data flow such that confidential information is output or leaked, or such that a variable to be protected is contaminated by untrustworthy input.
- On the other hand, when the protection attribute has been determined successfully (step S1202: YES), the information processing apparatus 1 rewrites the program list of the target program. When function expansion as described in the first to third embodiments has been performed, the information processing apparatus 1 rewrites the program list in accordance with the expansion, and adds the automatically determined protection attribute of each variable to the program list (step S1203).
- FIG. 36 is a diagram showing an example of rewriting the program list of the pseudo program shown in FIG. 11. In the figure, protection attributes are added to the parameter on the 6th line and the variables on the 8th to 13th lines, and the 15th to 23rd lines show that a security verification function is defined by function expansion.
- In step S1204, the information processing apparatus 1 compiles the program whose program list was rewritten in step S1203. As a result, an executable program in which variables are appropriately stored in the protected memory area or the non-protected memory area according to their protection attributes is generated.
- As described above, the information processing apparatus 1 can automatically assign an appropriate protection attribute to each variable; therefore, the program development environment for security protocols can be improved.
- the information processing apparatus 1 may perform steps S1201 to S1202, and the programmer may perform the processing after step S1203.
- various programs executed by the information processing apparatus 1 may be stored on a computer connected to a network such as the Internet and provided by being downloaded via the network.
- The various programs may also be recorded on a computer-readable recording medium such as a CD-ROM, a flexible disk (FD), a CD-R, or a DVD (Digital Versatile Disk) in an installable or executable file format, and provided in that form.
- In the basic protection attribute determination process described above, the protection attributes are determined sequentially for the divided partial data flows; however, the embodiment is not limited to this. For example, a method of obtaining the protection attributes by converting the data flow into a logical expression and applying a logical-expression solution derivation algorithm to it is also conceivable. The conversion into a logical expression can easily be inferred from the conversion into logical expressions used for NP problems, and many solution derivation algorithms for logical expressions have been proposed.
- Specifically, for all the variables and arguments appearing in the partial data flows, a logical variable corresponding to each protection attribute is prepared.
- For example, for the variable "buf", the prepared logical variables may be "exposed_buf", "fixed_buf", ..., "confidential_buf".
- Next, a logical expression is generated.
- The logical expression may be, for example, one in which all of the following (e) to (h) are connected by ∧ (the logical symbol meaning AND).
- (h) A logical expression expressing the constraint that each variable has exactly one protection attribute. For example, for "buf" it suffices to generate a logical expression in which exactly one of "exposed_buf", "fixed_buf", ..., "confidential_buf" is true and the others are false; such an expression can be generated easily.
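- A toy illustration of this logical-expression formulation is sketched below; the attribute list is an assumed subset of the protection attributes, and a brute-force search stands in for a real solution derivation algorithm.

```python
from itertools import product

ATTRS = ["exposed", "fixed", "verified", "concealed", "confidential"]  # assumed attribute set
VARS = ["buf", "Cipher_From_A"]

def exactly_one(assignment, var):
    # constraint (h): each variable has exactly one protection attribute
    return sum(assignment[(var, a)] for a in ATTRS) == 1

def same_attr(assignment, v1, v2):
    # example flow constraint: two end points of a partial data flow must match
    return all(assignment[(v1, a)] == assignment[(v2, a)] for a in ATTRS)

solutions = []
for bits in product([False, True], repeat=len(VARS) * len(ATTRS)):
    assignment = dict(zip(((v, a) for v in VARS for a in ATTRS), bits))
    if all(exactly_one(assignment, v) for v in VARS) and same_attr(assignment, "buf", "Cipher_From_A"):
        solutions.append({v: next(a for a in ATTRS if assignment[(v, a)]) for v in VARS})
# Each entry of `solutions` is a candidate protection attribute assignment.
```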
- In the above embodiments, the public key ciphertext decryption function, the signature verification function, the common key encryption function, the common key decryption function, and the hash function are handled as security functions; however, at least one of a public key encryption function, a secret key encryption function, a common key ciphertext decryption function, a signature generation function, a MAC generation function, a MAC verification function, and a key derivation function may also be handled.
- the function argument protection attribute of each function is defined as follows, for example.
- For encryption with a public key: the public key is defined as integrity-protected, the argument related to the input to be encrypted as confidentiality-protected, and the argument related to the encrypted output as non-protected; or the public key is defined as integrity-protected, the argument related to the input to be encrypted as confidentiality- and integrity-protected, and the argument related to the encrypted output as integrity-protected.
- For encryption with a secret key: the secret key is defined as confidentiality- and integrity-protected, the argument related to the input to be encrypted as confidentiality-protected, and the argument related to the encrypted output as non-protected; or the secret key is defined as confidentiality- and integrity-protected, the argument related to the input to be encrypted as confidentiality- and integrity-protected, and the argument related to the encrypted output as integrity-protected.
- For decryption: the decryption key is defined as confidentiality- and integrity-protected, the argument related to the input to be decrypted as integrity-protected, and the argument related to the decrypted output as confidentiality- and integrity-protected; or the decryption key is defined as confidentiality- and integrity-protected, the argument related to the input to be decrypted as non-protected, and the argument related to the decrypted output as confidentiality-protected.
- For MAC generation: the MAC generation key is defined as confidentiality- and integrity-protected, the argument related to the input over which the MAC is generated as integrity-protected, and the generated MAC as integrity-protected; alternatively, the MAC generation is defined with confidentiality and integrity protection.
- For MAC verification: the MAC verification key is defined as confidentiality- and integrity-protected, the message as confidentiality-protected, and the argument related to the MAC input as confidentiality-protected; when verification succeeds, the arguments related to the outputs that copy the two are both defined as confidentiality- and integrity-protected. Alternatively, the MAC verification key is defined as confidentiality- and integrity-protected, the message as non-protected, and the argument related to the MAC input as non-protected; when verification succeeds, the arguments related to the outputs that copy the message and the MAC are both defined as integrity-protected.
- The protection attributes of the argument related to the input and the argument related to the output may all be defined identically; or the argument related to the input is defined as confidentiality-protected and the argument related to the output as non-protected; or the input argument is defined as confidentiality- and integrity-protected and the output argument as integrity-protected.
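- As an illustrative sketch only (the qualifier macros, function names, and signatures below are hypothetical assumptions and not the patent's actual annotation syntax), per-argument protection attributes such as those defined above could be expressed as annotated C prototypes:

```c
#include <stddef.h>

/* Hypothetical, purely documentary qualifiers (they expand to nothing);
 * the annotation syntax actually used by the development system may differ. */
#define NONPROTECTED        /* neither confidentiality nor integrity protection */
#define INTEGRITY           /* integrity protection only                        */
#define CONFIDENTIALITY     /* confidentiality protection only                  */
#define CONF_INTEGRITY      /* confidentiality and integrity protection         */

/* Encryption with a public key (first variant above): integrity-protected key,
 * confidentiality-protected plaintext input, non-protected ciphertext output.  */
int pk_encrypt(INTEGRITY       const unsigned char *public_key,
               CONFIDENTIALITY const unsigned char *plaintext,  size_t pt_len,
               NONPROTECTED    unsigned char       *ciphertext, size_t *ct_len);

/* Decryption (first variant above): confidentiality- and integrity-protected
 * key, integrity-protected ciphertext input, confidentiality- and
 * integrity-protected plaintext output.                                        */
int decrypt(CONF_INTEGRITY const unsigned char *key,
            INTEGRITY      const unsigned char *ciphertext, size_t ct_len,
            CONF_INTEGRITY unsigned char       *plaintext,  size_t *pt_len);

/* MAC verification (second variant above): confidentiality- and
 * integrity-protected key, non-protected message and MAC inputs; on successful
 * verification the copied message and MAC outputs are integrity-protected.     */
int mac_verify(CONF_INTEGRITY const unsigned char *verification_key,
               NONPROTECTED   const unsigned char *message, size_t msg_len,
               NONPROTECTED   const unsigned char *mac,     size_t mac_len,
               INTEGRITY      unsigned char       *message_out,
               INTEGRITY      unsigned char       *mac_out);
```

With annotations of this kind, a verification tool can check that each argument is placed in the protected or non-protected memory area consistent with its protection attribute.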
- Dependencies are defined as follows (a sketch of how these rules might be tabulated is given after the list).
- When the decision terms are the encrypted output and the decryption key, the dependent term is defined as the input to be encrypted.
- The dependent terms are defined as the two outputs when the decision term is the input; or the dependent terms are defined as the remaining ones of the input and the two outputs when the decision term is one of the two outputs.
- The dependent term is defined as the verification key when the decision terms are the input and the MAC; or the dependent terms are defined as the input and the MAC when the decision term is the verification key.
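- A minimal sketch (the data structure and the argument labels are assumptions made for illustration, not the patent's own definitions) of how the decision-term/dependent-term rules above might be tabulated for protection attribute propagation:

```c
/* Illustrative only: one possible tabulation of the dependency rules listed
 * above.  The enum labels and rule descriptions are hypothetical.            */
enum arg_id { ARG_NONE = 0, ARG_KEY, ARG_INPUT, ARG_MAC,
              ARG_ENC_OUTPUT, ARG_OUTPUT_1, ARG_OUTPUT_2 };

struct dependency_rule {
    const char *description;   /* which rule from the list above it encodes      */
    enum arg_id decision[3];   /* decision terms (unused slots remain ARG_NONE)  */
    enum arg_id dependent[3];  /* dependent terms (unused slots remain ARG_NONE) */
};

static const struct dependency_rule rules[] = {
    /* encrypted output and decryption key decide the input to be encrypted      */
    { "enc. output + key -> input",    { ARG_ENC_OUTPUT, ARG_KEY }, { ARG_INPUT } },
    /* the input decides the two outputs (the reverse rule swaps the columns)    */
    { "input -> two outputs",          { ARG_INPUT },               { ARG_OUTPUT_1, ARG_OUTPUT_2 } },
    /* the verification key decides the input and the MAC (and vice versa)       */
    { "verification key -> input+MAC", { ARG_KEY },                 { ARG_INPUT, ARG_MAC } },
};
```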
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- Bioethics (AREA)
- General Health & Medical Sciences (AREA)
- Storage Device Security (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/162,955 US8683208B2 (en) | 2008-12-18 | 2011-06-17 | Information processing device, program developing device, program verifying method, and program product |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2008-322907 | 2008-12-18 | ||
| JP2008322907A JP5322620B2 (ja) | 2008-12-18 | 2008-12-18 | 情報処理装置、プログラム開発システム、プログラム検証方法及びプログラム |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/162,955 Continuation US8683208B2 (en) | 2008-12-18 | 2011-06-17 | Information processing device, program developing device, program verifying method, and program product |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010070959A1 (ja) | 2010-06-24 |
Family
ID=42268635
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2009/066380 Ceased WO2010070959A1 (ja) | 2008-12-18 | 2009-09-18 | 情報処理装置、プログラム開発装置、プログラム検証方法及びプログラム |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US8683208B2 (en) |
| JP (1) | JP5322620B2 (en) |
| WO (1) | WO2010070959A1 (en) |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012011564A1 (ja) * | 2010-07-23 | 2012-01-26 | 日本電信電話株式会社 | 暗号化装置、復号装置、暗号化方法、復号方法、プログラム、及び記録媒体 |
| KR101859646B1 (ko) | 2011-12-16 | 2018-05-18 | 삼성전자주식회사 | 보안 데이터를 보호하는 메모리 장치 및 보안 데이터를 이용한 데이터 보호 방법 |
| KR20130093800A (ko) * | 2012-01-04 | 2013-08-23 | 삼성전자주식회사 | 통신 시스템에서 패킷을 이용하여 어플리케이션을 식별하기 위한 장치 및 방법 |
| US9906360B2 (en) | 2012-03-30 | 2018-02-27 | Irdeto B.V. | Securing accessible systems using variable dependent coding |
| US20140096270A1 (en) * | 2012-09-28 | 2014-04-03 | Richard T. Beckwith | Secure data containers and data access control |
| US9720716B2 (en) * | 2013-03-12 | 2017-08-01 | Intel Corporation | Layered virtual machine integrity monitoring |
| CN105074712B (zh) | 2013-03-19 | 2018-05-08 | 株式会社东芝 | 代码处理装置和程序 |
| US10515231B2 (en) * | 2013-11-08 | 2019-12-24 | Symcor Inc. | Method of obfuscating relationships between data in database tables |
| JP6579735B2 (ja) * | 2014-08-05 | 2019-09-25 | キヤノン株式会社 | 情報処理システム、情報処理装置、情報処理システムの制御方法、情報処理装置の制御方法、及びプログラム |
| US20160077151A1 (en) * | 2014-09-12 | 2016-03-17 | Qualcomm Incorporated | Method and apparatus to test secure blocks using a non-standard interface |
| WO2016067565A1 (ja) * | 2014-10-29 | 2016-05-06 | 日本電気株式会社 | 情報処理システム、情報処理装置、情報処理方法、及び、記録媒体 |
| US10079845B2 (en) * | 2016-03-31 | 2018-09-18 | Mcafee, Llc | IoT and PoS anti-malware strategy |
| EP3879750B1 (en) * | 2016-07-19 | 2022-09-07 | Nippon Telegraph And Telephone Corporation | Communication terminals and programs |
| JP6852337B2 (ja) * | 2016-09-29 | 2021-03-31 | 富士通株式会社 | 情報処理装置、情報処理プログラム、情報処理システム及び情報処理方法 |
| JP6370503B1 (ja) * | 2017-04-17 | 2018-08-08 | 三菱電機株式会社 | プログラム作成装置 |
| GB2564878B (en) * | 2017-07-25 | 2020-02-26 | Advanced Risc Mach Ltd | Parallel processing of fetch blocks of data |
| FR3092923B1 (fr) * | 2019-02-19 | 2021-05-21 | Sangle Ferriere Bruno | Méthode cryptographique de vérification des données |
| WO2021048063A1 (en) * | 2019-09-09 | 2021-03-18 | Pactum Ai Oü | Method and system for generating and using value functions for users |
| JP7563281B2 (ja) * | 2021-04-12 | 2024-10-08 | オムロン株式会社 | 制御装置、制御システム、管理方法およびプログラム |
| WO2024079897A1 (ja) * | 2022-10-14 | 2024-04-18 | 日本電信電話株式会社 | 証明装置、通信システム、証明方法、及びプログラム |
Family Cites Families (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6470450B1 (en) * | 1998-12-23 | 2002-10-22 | Entrust Technologies Limited | Method and apparatus for controlling application access to limited access based data |
| US7043553B2 (en) * | 1999-10-07 | 2006-05-09 | Cisco Technology, Inc. | Method and apparatus for securing information access |
| US6983374B2 (en) | 2000-02-14 | 2006-01-03 | Kabushiki Kaisha Toshiba | Tamper resistant microprocessor |
| JP4067757B2 (ja) | 2000-10-31 | 2008-03-26 | 株式会社東芝 | プログラム配布システム |
| JP4153653B2 (ja) | 2000-10-31 | 2008-09-24 | 株式会社東芝 | マイクロプロセッサおよびデータ保護方法 |
| JP4074057B2 (ja) | 2000-12-28 | 2008-04-09 | 株式会社東芝 | 耐タンパプロセッサにおける暗号化データ領域のプロセス間共有方法 |
| JP4098478B2 (ja) | 2001-01-31 | 2008-06-11 | 株式会社東芝 | マイクロプロセッサ |
| JP2003051819A (ja) | 2001-08-08 | 2003-02-21 | Toshiba Corp | マイクロプロセッサ |
| JP2003101533A (ja) | 2001-09-25 | 2003-04-04 | Toshiba Corp | 機器認証管理システム及び機器認証管理方法 |
| JP4226816B2 (ja) | 2001-09-28 | 2009-02-18 | 株式会社東芝 | マイクロプロセッサ |
| JP3866597B2 (ja) | 2002-03-20 | 2007-01-10 | 株式会社東芝 | 内部メモリ型耐タンパプロセッサおよび秘密保護方法 |
| US6785820B1 (en) * | 2002-04-02 | 2004-08-31 | Networks Associates Technology, Inc. | System, method and computer program product for conditionally updating a security program |
| JP2003330365A (ja) | 2002-05-09 | 2003-11-19 | Toshiba Corp | コンテンツ配布/受信方法 |
| JP4115759B2 (ja) | 2002-07-01 | 2008-07-09 | 株式会社東芝 | 耐タンパプロセッサにおける共有ライブラリの使用方法およびそのプログラム |
| JP3880933B2 (ja) | 2003-01-21 | 2007-02-14 | 株式会社東芝 | 耐タンパマイクロプロセッサ及びキャッシュメモリ搭載プロセッサによるデータアクセス制御方法 |
| JP4347582B2 (ja) | 2003-02-04 | 2009-10-21 | パナソニック株式会社 | 情報処理装置 |
| JP4263976B2 (ja) | 2003-09-24 | 2009-05-13 | 株式会社東芝 | オンチップマルチコア型耐タンパプロセッサ |
| JP4282472B2 (ja) | 2003-12-26 | 2009-06-24 | 株式会社東芝 | マイクロプロセッサ |
| JP4612461B2 (ja) | 2004-06-24 | 2011-01-12 | 株式会社東芝 | マイクロプロセッサ |
| JP4559794B2 (ja) | 2004-06-24 | 2010-10-13 | 株式会社東芝 | マイクロプロセッサ |
| JP4204522B2 (ja) | 2004-07-07 | 2009-01-07 | 株式会社東芝 | マイクロプロセッサ |
| JP2007058588A (ja) | 2005-08-24 | 2007-03-08 | Toshiba Corp | プログラム保護機能を持つプロセッサ |
| US7752223B2 (en) * | 2006-08-07 | 2010-07-06 | International Business Machines Corporation | Methods and apparatus for views of input specialized references |
- 2008-12-18: JP JP2008322907A patent/JP5322620B2/ja not_active Expired - Fee Related
- 2009-09-18: WO PCT/JP2009/066380 patent/WO2010070959A1/ja not_active Ceased
- 2011-06-17: US US13/162,955 patent/US8683208B2/en not_active Expired - Fee Related
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000267844A (ja) * | 1999-03-16 | 2000-09-29 | Nippon Telegr & Teleph Corp <Ntt> | ソフトウェア開発システム |
| JP2004118494A (ja) * | 2002-09-26 | 2004-04-15 | Hitachi Software Eng Co Ltd | 異種言語プログラム間インターフェイスのチェックプログラム及びチェック方法 |
| JP2005004301A (ja) * | 2003-06-10 | 2005-01-06 | Fujitsu Ltd | プログラムチェック装置 |
| JP2009129206A (ja) * | 2007-11-22 | 2009-06-11 | Toshiba Corp | 情報処理装置、プログラム検証方法及びプログラム |
Non-Patent Citations (1)
| Title |
|---|
| RYOTARO HAYASHI: "Secure Software Development Environment DFITS (Data Flow Isolation Technology for Security)", INFORMATION PROCESSING SOCIETY OF JAPAN KENKYU HOKOKU, vol. 2009, no. 20, 26 February 2009 (2009-02-26), pages 247 - 252 * |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2012059221A (ja) * | 2010-09-13 | 2012-03-22 | Toshiba Corp | 情報処理装置、情報処理プログラム |
| US8650655B2 (en) | 2010-09-13 | 2014-02-11 | Kabushiki Kaisha Toshiba | Information processing apparatus and information processing program |
| JP2016009882A (ja) * | 2014-06-20 | 2016-01-18 | 株式会社東芝 | メモリ管理装置、プログラム、及び方法 |
| US9753868B2 (en) | 2014-06-20 | 2017-09-05 | Kabushiki Kaisha Toshiba | Memory management device and non-transitory computer readable storage medium |
| US9753867B2 (en) | 2014-06-20 | 2017-09-05 | Kabushiki Kaisha Toshiba | Memory management device and non-transitory computer readable storage medium |
| US9779033B2 (en) | 2014-06-20 | 2017-10-03 | Kabushiki Kaisha Toshiba | Memory management device and non-transitory computer readable storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| JP5322620B2 (ja) | 2013-10-23 |
| JP2010146299A (ja) | 2010-07-01 |
| US8683208B2 (en) | 2014-03-25 |
| US20110296192A1 (en) | 2011-12-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5322620B2 (ja) | 情報処理装置、プログラム開発システム、プログラム検証方法及びプログラム | |
| JP4976991B2 (ja) | 情報処理装置、プログラム検証方法及びプログラム | |
| JP5171907B2 (ja) | 情報処理装置、情報処理プログラム | |
| Krüger et al. | Crysl: An extensible approach to validating the correct usage of cryptographic apis | |
| Sinha et al. | Moat: Verifying confidentiality of enclave programs | |
| KR102396071B1 (ko) | 소프트웨어 시스템의 자동화된 검증 기법 | |
| KR101763084B1 (ko) | 신뢰의 하드웨어 루트를 사용하는 미디어 클라이언트 장치 인증 | |
| JP4689945B2 (ja) | リソースアクセス方法 | |
| US7577852B2 (en) | Microprocessor, a node terminal, a computer system and a program execution proving method | |
| CN110199286A (zh) | 利用密封包围区的数据密封 | |
| CN106796641A (zh) | 针对运行已验证软件的硬件的端到端安全性 | |
| JP2008524726A (ja) | Risc形式アセンブリコードの情報フローの強制 | |
| WO2023029447A1 (zh) | 模型保护方法、装置、设备、系统以及存储介质 | |
| CN114139117A (zh) | 应用程序加固方法、装置、电子设备及存储介质 | |
| JP2007148962A (ja) | サブプログラム、そのサブプログラムを実行する情報処理装置、及びそのサブプログラムを実行する情報処理装置におけるプログラム制御方法 | |
| Lee et al. | Classification and analysis of security techniques for the user terminal area in the Internet banking service | |
| Liskov et al. | The cryptographic protocol shapes analyzer: A manual | |
| CN118445020A (zh) | 基于浏览器bs架构的数据存储方法及装置 | |
| Centenaro et al. | Type-based analysis of PKCS# 11 key management | |
| CN116964575A (zh) | 代码部署 | |
| Chen et al. | STELLA: sparse taint analysis for enclave leakage detection | |
| Chaki et al. | Verification across intellectual property boundaries | |
| Getreu | Embedded system security with Rust | |
| Singleton | Automated Tool Support for Finding and Repairing Security Bugs in Mobile Applications | |
| Sluys et al. | Partial Key Overwrite Attacks in Microcontrollers: A Survey |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09833264; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 09833264; Country of ref document: EP; Kind code of ref document: A1 |