AU2004218702B8 - Method for verifying integrity on an apparatus - Google Patents

Method for verifying integrity on an apparatus

Info

Publication number
AU2004218702B8
Authority
AU
Australia
Prior art keywords
integrity verification
tamper resistant
functions
program
integrity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU2004218702A
Other versions
AU2004218702A8 (en)
AU2004218702A1 (en)
AU2004218702B2 (en)
Inventor
David Aucsmith
Gary Graunke
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to AU2004218702A priority Critical patent/AU2004218702B8/en
Publication of AU2004218702A1 publication Critical patent/AU2004218702A1/en
Application granted granted Critical
Publication of AU2004218702B2 publication Critical patent/AU2004218702B2/en
Publication of AU2004218702B8 publication Critical patent/AU2004218702B8/en
Publication of AU2004218702A8 publication Critical patent/AU2004218702A8/en
Anticipated expiration legal-status Critical
Expired legal-status Critical Current

Landscapes

  • Storage Device Security (AREA)

Description

AUSTRALIA PATENTS ACT 1990 DIVISIONAL APPLICATION

NAME OF APPLICANT: Intel Corporation

ADDRESS FOR SERVICE: DAVIES COLLISON CAVE, Patent Attorneys, 1 Nicholson Street, Melbourne, 3000.

INVENTION TITLE: "Method for verifying integrity on an apparatus"

The following statement is a full description of this invention, including the best method of performing it known to us:

Method For Verifying Integrity On An Apparatus

BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to the field of system security. More specifically, the present invention relates to a computer-implemented method for verifying integrity on a computing device, a computing device for verifying integrity, and a related machine-readable medium and system.

Background Information

Many applications, e.g., financial transactions, unattended authorizations and content management, require the basic integrity of their operations to be assumed, or at least verified. While a number of security approaches such as encryption and decryption techniques are known in the art, these security approaches can unfortunately be readily compromised, because the applications and the security approaches are implemented on systems with an open, accessible architecture that renders both hardware and software, including the security approaches, observable and modifiable by a malevolent user or a malicious program.

Thus, a system based on an open and accessible architecture is a fundamentally insecure platform, notwithstanding the employment of security measures. However, openness and accessibility offer a number of advantages, contributing to these systems' successes. Therefore, what is required are techniques that will render software execution virtually unobservable or unmodifiable on these fundamentally insecure platforms, notwithstanding their openness and accessibility.
SUMMARY OF THE INVENTION

In accordance with the invention, there is provided a computer-implemented method for verifying integrity on a computing device, the method comprising:
a) individually requesting, via a first and a second tamper resistant integrity verification functions of a first and a second applications of the computing device, a third tamper resistant integrity verification function of a system integrity verification program to jointly perform integrity verification with the first and second tamper resistant integrity verification functions, respectively, wherein the system integrity verification program includes tamper resistant integrity verification kernels to jointly deploy an interlocking trust with tamper resistant security sensitive functions including the first and second tamper resistant integrity verification functions;
b) in response, calling, via the third tamper resistant integrity verification function, a fourth tamper resistant integrity verification function of the system integrity verification program to jointly perform the requested integrity verifications;
c) providing, via the fourth tamper resistant integrity verification function, the first and the second tamper resistant integrity verification functions with respective results of the requested integrity verifications.

Preferably, the system integrity verification program comprises a decryption program that operates with a secret private key, and wherein secrets relating to the security sensitive functions are isolated and distributed in time and space.

In another aspect, there is provided a computing device for verifying integrity, the computing device comprising:
an execution unit for executing programming instructions; and
a storage medium having stored thereon the programming instructions to be executed by the execution unit, wherein the execution unit is further to:
a) individually request, via a first and a second tamper resistant integrity verification functions of a first and a second applications of the computing device, a third tamper resistant integrity verification function of a system integrity verification program to jointly perform integrity verification with the first and second tamper resistant integrity verification functions, respectively, wherein the system integrity verification program includes tamper resistant integrity verification kernels to jointly deploy an interlocking trust with tamper resistant security sensitive functions including the first and second tamper resistant integrity verification functions;
b) in response, call, via the third tamper resistant integrity verification function, a fourth tamper resistant integrity verification function of the system integrity verification program to jointly perform the requested integrity verifications; and
c) provide, via the fourth tamper resistant integrity verification function, the first and the second tamper resistant integrity verification functions with respective results of the requested integrity verifications.

Preferably, the system integrity verification program comprises a decryption program that operates with a secret private key, and wherein secrets relating to the security sensitive functions are isolated and distributed in time and space.
In another aspect, there is provided at least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as described above.

In another aspect, there is provided a system comprising a mechanism to implement or perform a method as described above.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be described by way of examples, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which:

Figure 1 is a block diagram illustrating an example of making a security sensitive program tamper resistant by distributing the program's secret(s) in time and in space;
Figure 2 is a block diagram showing a subprogram generator for generating the subprograms that operate with corresponding subparts of the distributed secret(s);
Figure 3 is a flow diagram illustrating the operational flow of the subprogram generator of Figure 2;
Figure 4 is a block diagram illustrating an example of making a security sensitive program tamper resistant by obfuscating the various subparts of the security sensitive program;
Figure 5 is a block diagram showing a subpart of the obfuscated program;
Figure 6 is a block diagram showing an obfuscation processor for generating the obfuscated program;
Figure 7 is a graphical diagram illustrating distribution of key period;
Figures 8a - 8b are flow diagrams showing operational flow of the obfuscation processor of Figure 6;
Figure 9 is a flow diagram showing the operational logic of an obfuscated subprogram of the obfuscated program;
Figures 10 - 14 are diagrams illustrating a sample application;
Figure 15 is a block diagram illustrating another example of making a security sensitive application tamper resistant;
Figure 16 is a block diagram illustrating a further example of making a security sensitive system tamper resistant;
Figure 17 is a block diagram illustrating yet another example of making a security sensitive industry tamper resistant; and
Figures 18 - 19 are block diagrams illustrating an example computer system and an embedded controller.

DETAILED DESCRIPTION OF THE INVENTION

In the following description, various aspects of the present invention will be described. However, it will be apparent to those skilled in the art that the present invention may be practised with only some or all aspects of the present invention. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practised without the specific details. In other instances, well known features are omitted or simplified in order not to obscure the present invention.
Parts of the description will be presented in terms of operations performed by a computer system, using terms such as data, flags, bits, values, characters, strings, numbers and the like, consistent with the manner commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. As well understood by those skilled in the art, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through mechanical and electrical components of the computer system; and the term computer system includes general purpose as well as special purpose data processing machines, systems, and the like, that are standalone, adjunct or embedded. Various operations will be described as multiple discrete steps in turn in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed as to imply that these operations are necessarily order dependent, in particular, the order of presentation.

Referring now to Figure 1, a block diagram is shown illustrating a security sensitive program 100 which is made tamper resistant by distributing its secret in space as well as in time. The secret (not shown in totality) is "partitioned" into subparts 101, and program 100 is unrolled into a number of subprograms 102 that operate with subparts 101; for the illustrated example, one subpart 101 per subprogram 102. Subprograms 102 are then executed over a period of time. As a result, the complete secret cannot be observed or modified at any single point in space nor at any single point in time.

For example, consider the artificially simple "security sensitive" program for computing the result of X multiplied by S, where S is the secret. Assuming S equals 8, S can be divided into 4 subparts, with each subpart equal to 2, and the "security sensitive" program can be unrolled into 4 subprograms, with each subprogram computing A = A + (X multiplied by 2). Thus, the complete secret 8 can never be observed or modified at any point in space or time.
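As a minimal sketch of this idea (the names make_subprograms and run_distributed are illustrative only, not taken from the specification), the unrolled multiplication example can be written as:

```python
# Secret S = 8 is never materialised; only its subparts (four 2s) exist, each used
# by a separate subprogram at a separate point in time.
SUBPARTS = [2, 2, 2, 2]

def make_subprograms(subparts):
    """Unroll "X multiplied by S" into one small subprogram per subpart of the secret."""
    return [lambda a, x, s=s: a + x * s for s in subparts]

def run_distributed(x):
    a = 0
    for step in make_subprograms(SUBPARTS):
        a = step(a, x)          # each step sees only one subpart of the secret
    return a

assert run_distributed(5) == 5 * 8   # same result as the original program
```

An observer of any single subprogram, or of any single instant of the computation, sees only the value 2, never the complete secret 8.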
As a further example, consider the "security sensitive" program for computing the result of (X to the power of S) modulo Y, where S again is the secret. If S equals 16, S can be divided into 8 subparts, with each subpart equal to 2, and the "security sensitive" program can be unrolled into 8 subprograms, with each subprogram computing A = (A multiplied by ((X to the power of 2) modulo Y)) modulo Y. Thus, the complete secret 16 can never be observed or modified at any point in space or time.

As will be appreciated by those skilled in the art, the function (X to the power of S) modulo Y is the basis function employed in many asymmetric key (private/public key) schemes for encryption and decryption. Thus, an encryption/decryption function can be made tamper resistant. In one example, the subprograms are further interleaved with unrelated tasks to further obscure the true nature of the tasks being performed by the unrolled subprograms. The tasks may even have no purpose at all.

Figure 2 shows a subprogram generator for generating the subprograms. For the illustrated example, subprogram generator 104 is provided with the secret as input. Furthermore, subprogram generator 104 is provided with access to library 105 having entry, basis and prologue subprograms 106, 108, and 109 for use in generating subprograms 102 of a particular security sensitive program in view of the secret provided. In other words, entry and basis subprograms 106 and 108 are employed differently for different security sensitive programs. For the above illustrated examples, in the first case, entry and basis subprograms 106 and 108 will initialise and compute A = A + (X multiplied by a subpart of S), whereas in the second case, entry and basis subprograms 106 and 108 will initialise and compute A = (A multiplied by ((X to the power of a subpart of S) modulo Y)) modulo Y. Prologue subprogram 109 is used to perform post processing, e.g., outputting the computed results as decrypted content.

For the illustrated example, entry subprogram 106 is used in particular to initialise an appropriate runtime table 110 for looking up basis values by basis subprogram 108, and basis subprogram 108 is used to perform the basis computation using runtime table 110. For the modulo function example discussed above, runtime table 110 is used to return basis values for (X to the power of a subpart of the secret) modulo Y for various subpart values, and basis subprogram 108 is used to perform the basis computation of A = (A multiplied by (basis value of a subpart of the secret)) modulo Y, where A equals the accumulated intermediate results. A's initial value is 1.

For example, entry subprogram 106 may initialise a runtime table 110 of size three for storing the basis values bv1, bv2 and bv3, where bv1, bv2 and bv3 equal (X to the power of 1) modulo Y, (X to the power of 2) modulo Y, and (X to the power of 3) modulo Y respectively. For the modulo function (X to the power of 5) modulo Y, subprogram generator 104 may partition the secret 5 into two subparts with subpart values 3 and 2, and generate two basis subprograms 108 computing A = (A * Lkup(3)) modulo Y and A = (A * Lkup(2)) modulo Y respectively.

Figure 3 illustrates the operational flow of subprogram generator 104 of Figure 2.
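Before walking through that flow, the runtime-table approach just described can be sketched in a few lines (the helper names entry, partition and generate_basis_subprograms are hypothetical, chosen here for illustration):

```python
TABLE_SIZE = 3                                   # basis values bv1..bv3 kept in the runtime table

def entry(x, y):
    """Entry subprogram: build the runtime lookup table and initialise the accumulator A."""
    lkup = {k: pow(x, k, y) for k in range(1, TABLE_SIZE + 1)}
    return lkup, 1                               # A's initial value is 1

def partition(secret):
    """Split the secret exponent into subparts no larger than TABLE_SIZE (e.g. 5 -> [3, 2])."""
    parts = []
    while secret:
        parts.append(min(secret, TABLE_SIZE))
        secret -= parts[-1]
    return parts

def generate_basis_subprograms(secret):
    """One basis subprogram per subpart: A = (A * Lkup(subpart)) modulo Y."""
    return [lambda lkup, a, y, k=k: (a * lkup[k]) % y for k in partition(secret)]

def prologue(a):
    return a                                     # post processing, e.g. emit decrypted content

x, y, secret = 7, 11, 5
lkup, a = entry(x, y)
for basis in generate_basis_subprograms(secret):
    a = basis(lkup, a, y)
assert prologue(a) == pow(x, secret, y)          # (X to the power of S) modulo Y
```

In the scheme of Figures 2 and 3 the generation step runs ahead of time, so that only the emitted basis subprograms, each closing over a single subpart, are present on the target system.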
For the illustrated example, upon invocation, subprogram generator 104 first generates an instance of entry subprogram 106 for initialising at least an appropriate runtime lookup table 110 (Lkup) for returning the basis values of a modulo function for various subparts of a secret, and an accumulation variable (A) to an appropriate initial state, step 112. Subprogram generator 104 then partitions the secret into subparts, step 114. In one example, the partition is performed to require the least number of basis subprograms, within the constraint of the basis values stored in runtime table 110.

Next, subprogram generator 104 sets a subpart of the secret as the lookup index (LIDX), step 116. Then, subprogram generator 104 generates the current basis subprogram to compute A = [A multiplied by Lkup(LIDX)] modulo Y, step 118. Subprogram generator 104 repeats steps 116 - 118 for all subparts, until a basis subprogram has been generated for each subpart of the secret, step 120. Finally, subprogram generator 104 generates an instance of prologue subprogram 109 for performing post processing, as described earlier, step 122.

Figure 4 illustrates another example of making a security sensitive program 203 tamper resistant by obfuscating the program. Security sensitive program 203 is divided and processed into a number of obfuscated subprograms 204. A plaintext (i.e., unmutated) appearance location schedule (i.e., where in memory) is selected for obfuscated subprograms 204. For the illustrated example, the plaintext appearance location schedule is formulated in terms of the memory cells 202 of two memory segments, memory segment 201a and memory segment 201b. Initially, except for the obfuscated subprogram 204 where the program's entry point is located, all other obfuscated subprograms 204 are stored in mutated states. Obfuscated subprograms 204 are recovered or made to appear in plaintext form at the desired memory cells 202, one or more at a time, when they are needed for execution, and mutated again, once executions are completed. As will be described in more detail below, the initial mutated states, and the process of recovery, are determined or performed in accordance with one or more pseudo-randomly selected patterns of mutations. The pseudo-randomly selected pattern(s) of mutations is (are) determined using a predetermined mutation partnership function in conjunction with one or more ordered sets of pseudo-random keys. As a result, obfuscated subprograms 204 cyclically mutate back to their respective initial states after each execution pass. In fact, obfuscated subprograms 204 implementing the same loop also cyclically mutate back to the loop entry states after each pass through the loop.

For the illustrated example, each obfuscated subprogram 204 and each cell 202 are of the same size, and first memory segment 201a is located in high memory, whereas second memory segment 201b is located in low memory. Furthermore, there are an even number of obfuscated subprograms 204, employing a dummy subprogram if necessary.

Figure 5 illustrates one example of subprogram 204. In addition to original subprogram 102, obfuscated subprogram 204 is provided with mutation partner identification function 206, mutation function 207, partner key 208 and jump block 209. Original subprogram 102 performs a portion of the functions performed by program 100.
Original subprogram 102 may be an entry/basis/prologue subprogram 106/108/109 in accordance with the above described example of Figures 1 to 3. Mutation partner identification function 206 is used to identify the partner memory cells 202 for all memory cells 202 at each mutation round. In one instance, the partner identification function 206 is the function: Partner Cell ID = Cell ID XOR Pseudo-Random Key. For a pseudo-random key, mutation partner identification function 206 will identify a memory cell 202 in the second memory segment 201b as the partner memory cell of a memory cell 202 in the first memory segment 201a, and vice versa. Only ordered sets of pseudo-random keys that will provide the required periods for the program and its loops will be employed. The length of a period is a function of the pseudo-random keys' set size (also referred to as key length). Mutation function 207 is used to mutate the content of the various memory cells 202. In one instance, mutation function 207 XORs the content of each memory cell 202 in first memory segment 201a into the partner memory cell 202 in second memory segment 201b in an odd mutation round, and XORs the content of each memory cell 202 in second memory segment 201b into the partner memory cell 202 in first memory segment 201a in an even mutation round. Partner key 208 is the pseudo-random key to be used by mutation partner identification function 206 to identify mutation partners of the various memory cells 202 for a mutation round. Jump block 209 transfers execution control to the next obfuscated subprogram 204, which at the time of transfer, has been recovered into plaintext through the pseudo-random pattern of mutations. An obfuscated subprogram 204 may also include other functions being performed for other purposes, or simply unrelated functions being performed to further obscure the subpart functions being performed.

Figure 6 shows an obfuscation processor for processing and transforming subprograms into obfuscated subprograms. For the illustrated example, obfuscation processor 214 is provided with program 200 as an input. Furthermore, obfuscation processor 214 is provided with access to pseudo-random keys' key length lookup table 212, mutation partner identification function 206, and mutation function 207. For the illustrated example, obfuscation processor 214 also uses two working matrices 213 during generation of obfuscated program 203.

Key length lookup table 212 provides obfuscation processor 214 with key lengths that provide the periods required by the program and its loops. Key lengths that will provide the required periods are a function of the mutation technique and the partnership function. Figure 7 illustrates various key lengths that will provide various periods for the first and second memory segment mutation technique and the partnership function described above.

Referring back to Figure 6, mutation partner identification function 206 identifies a mutation partner memory cell 202 for each memory cell 202. Mutation partner identification function 206 identifies mutation partner memory cells in accordance with the "XOR" mutation partner identification function described earlier. Mutation function 207 mutates all memory cells 202. In one embodiment, mutation function 207 mutates memory cells 202 in accordance with the two memory segments, odd and even round technique described earlier.

Working matrices 213 include two matrices M1 and M2.
Working matrix M1 stores the Boolean functions of the current state of the various memory cells 202 in terms of the initial values of memory cells 202. Working matrix M2 stores the Boolean functions for recovering the plaintext of the various obfuscated subprograms 204 in terms of the initial values of memory cells 202.

Referring now to Figures 8a - 8b, two flow diagrams illustrating obfuscation processor 214 are shown. As shown in Fig. 8a, in response to a program input (in object form), obfuscation processor 214 analyses the program, step 216. In particular, obfuscation processor 214 analyses the branch flow of the program, identifying loops within the program, using conventional compiler optimisation techniques known in the art. For the purpose of this application, any execution control transfer, such as a call and subsequent return, is also considered a "loop".

Next, obfuscation processor 214 may perform an optional step of peephole randomization, step 218. During this step, a peephole randomization pass over the program replaces code patterns with random equivalent patterns chosen from an optional dictionary of such patterns. Whether it is performed depends on whether the machine architecture of the instructions provides alternate ways of accomplishing the same task.

Then, obfuscation processor 214 restructures and partitions the program 200 into a number of equal size subprograms 204 organized by their loop levels, padding the subprograms 204 if necessary, based on the analysis results, step 220. Except for very simple programs with a single execution path, virtually all programs 200 will require some amount of restructuring. Restructuring includes, e.g., removing as well as adding branches, and replicating instructions in different loop levels. Restructuring is also performed using conventional compiler optimisation techniques. Finally, obfuscation processor 214 determines the subprograms' plaintext appearance location schedule, and the initial state values for the various memory cells 202, step 221.

Fig. 8b illustrates step 221 in further detail. As shown, obfuscation processor 214 first initialises first working matrix M1, step 222. Then, obfuscation processor 214 selects a memory cell for the program's entry subprogram to appear in plaintext, step 223. In one example, the memory cell 202 is arbitrarily selected (within the proper memory segment 201a or 201b). Once selected, obfuscation processor 214 updates the second working matrix M2, step 224.

Next, obfuscation processor 214 selects an appropriate key length based on the procedure's period requirement, by accessing key length table 212, step 226. Obfuscation processor 214 then generates an ordered set of pseudo-random keys based on the selected key length, step 228. For example, if a key length of 5 is selected among the key lengths that will provide a required period of 30, obfuscation processor 214 may randomly select 17, 18, 20, 24 and 16 as the ordered pseudo-random keys.

Next, obfuscation processor 214 determines the partner memory cells 202 for all memory cells 202 using the predetermined mutation partner identification function 206 and the next key in the selected set of ordered pseudo-random keys, step 230. Upon making the determination, obfuscation processor 214 simulates a mutation, and updates M1 to reflect the results of the mutation, step 232.
Once mutated, obfuscation processor 214 selects a memory cell for the next subprogram 204 to appear in plaintext, step 234. Having done so, obfuscation processor 214 updates M2, and incrementally inverts M2 using the Gaussian Method, step 235. In one example, instead of incremental inversion, obfuscation processor 214 may just verify that M2 remains invertible. If M2 is not invertible, obfuscation processor 214 cancels the memory cell selection, and restores M2 to its prior state, step 237. Obfuscation processor 214 repeats steps 234 - 236 to select another memory cell 202. Eventually, obfuscation processor 214 becomes successful.

Once successful, obfuscation processor 214 determines if there was a loop level change, step 238. If there was a loop level change, obfuscation processor 214 further determines if the loop level change is a down level or up level change, i.e., whether the subprogram is an entry subprogram of a new loop level or a return point of a higher loop level, step 239. If the loop level change is "down", obfuscation processor 214 selects another appropriate key length based on the new loop's period requirement, accessing key length table 212, step 241. Obfuscation processor 214 then generates a new ordered set of pseudo-random keys based on the newly selected key length, step 242. The newly generated ordered set of pseudo-random keys becomes the "top" set of pseudo-random keys. On the other hand, if the loop level change is "up", obfuscation processor 214 restores an immediately "lower" set of pseudo-random keys to be the "top" set of pseudo-random keys, step 240.

Upon properly organising the "top" set of pseudo-random keys, or upon determining there is no loop level change, obfuscation processor 214 again determines the partner memory cells 202 for all memory cells 202 using the predetermined mutation partner identification function 206 and the next key in the "top" set of ordered pseudo-random keys, step 243. Upon making the determination, obfuscation processor 214 simulates a mutation, and updates M1 to reflect the results of the mutation, step 244.

Once mutated, obfuscation processor 214 determines if there are more subprograms 204 to process, step 245. If there are more subprograms 204 to process, obfuscation processor 214 returns to step 234 and proceeds as described earlier. Otherwise, obfuscation processor 214 inserts the mutation partner identification function 206, the partner key to be used to identify mutation partner memory cells, the mutation function, the jump block, and the address of the next subprogram 204 into each of the obfuscated subprograms 204, step 246. Finally, obfuscation processor 214 computes the initial values of the various obfuscated subprograms 204, and outputs them, steps 247 - 248.

Figure 9 illustrates the operational flow of an obfuscated subprogram 204. For the illustrated example, obfuscated subprogram 204 first executes the functions of the original subprogram, step 250. For examples that include additional and/or unrelated functions, those may be executed as well. Then obfuscated subprogram 204 executes mutation partner identification function 206 to identify the mutation memory cell partners for all memory cells 202 using the stored partner key, step 252. Having identified the mutation partners, obfuscated subprogram 204 executes mutation function 207 to mutate the memory cells based on the identified partnership.
Next, depending on whether obfuscated subprogram 204 is the last subprogram in an execution pass, obfuscated subprogram 204 either jumps to the next obfuscated subprogram (which should be in plaintext) or returns to the "caller", steps 256 and 258.
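A minimal sketch of these runtime steps follows; the cell count, segment layout and key value are assumptions made purely for illustration, and the function names are not from the specification:

```python
SEG1 = range(0, 8)                    # first memory segment, cells 0..7
SEG2 = range(8, 16)                   # second memory segment, cells 8..15

def identify_partner(cell_id, partner_key):
    # Partner Cell ID = Cell ID XOR Pseudo-Random Key; any key in 8..15 maps a SEG1
    # cell to a SEG2 cell and vice versa for this toy layout.
    return cell_id ^ partner_key

def mutation_round(cells, partner_key, round_no):
    # Odd rounds XOR each SEG1 cell into its partner; even rounds do the reverse.
    sources = SEG1 if round_no % 2 == 1 else SEG2
    for cell_id in sources:
        cells[identify_partner(cell_id, partner_key)] ^= cells[cell_id]

def obfuscated_subprogram(cells, partner_key, round_no, original_work, jump):
    original_work()                   # the slice of the real program carried in this cell
    mutation_round(cells, partner_key, round_no)
    jump()                            # the next subprogram is now in plaintext at its scheduled cell

# With a single repeated key, the cells cycle back to their initial values after six
# rounds, matching the cyclic behaviour described above.
cells = list(range(16))
for rnd in range(1, 7):
    mutation_round(cells, partner_key=9, round_no=rnd)
assert cells == list(range(16))
```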
Note that if obfuscated subprogram 204 returns to the "caller", all other obfuscated subprograms 204 are in their respective initial states.

Figures 10 - 14 illustrate a sample application of the above. Figure 10 illustrates a sample security sensitive program 200 having six subprograms SPGM0 - SPGM5 implementing a simple single level logic, for ease of explanation, with contrived plaintext values of "000", "001", "010", "011", "100" and "111". Thus, the required period is 6. For ease of explanation, a key length of one will be used, and the pseudo-random key selected is 3. Furthermore, the mutation partnership identification function is simply Partner Cell ID = Cell ID + 3, i.e., cell 0 always pairs with cell 3, cell 1 pairs with cell 4, and cell 2 pairs with cell 5.

Figure 10 further illustrates that at invocation (mutation 0), memory cells (c0 - c5) contain the initial values (iv0 - iv5), as reflected by M1. Assuming cell c0 is chosen for SPGM0, M2 is updated to reflect that the Boolean function for recovering the plaintext of SPGM0 is simply iv0. Figure 10 further illustrates the values stored in memory cells (c0 - c5) after the first mutation. Note that for the illustrated mutation technique, only the content of memory cells (c3 - c5) has changed. M1 is updated to reflect the current state. Assuming cell c3 is chosen for SPGM1, M2 is updated to reflect that the Boolean function for recovering the plaintext of SPGM1 is simply iv0 XOR iv3. Note that for convenience of manipulation, the columns of M2 have been swapped.

Figure 11 illustrates the values stored in memory cells (c0 - c5) after the second, third and fourth mutations. As shown, the contents of half of the memory cells (c0 - c5) are changed alternately after each mutation. In each case, M1 is updated to reflect the current state. Assuming cells c1, c4 and c2 are chosen for SPGM2, SPGM3 and SPGM4 after the second, third and fourth mutations respectively, in each case M2 is updated to reflect the Boolean functions for recovering the plaintexts of SPGM2, SPGM3 and SPGM4, i.e., iv4, iv1, and iv2 XOR iv5, respectively.
Figure 12 illustrates the values stored in memory cells (c0 - c5) after the fifth mutation. As shown, the content of memory cells (c3 - c5) is changed as in the previous odd rounds of mutation. M1 is updated to reflect the current state. Assuming cell c5 is chosen for SPGM5, M2 is updated to reflect that the Boolean function for recovering the plaintext of SPGM5 is iv5.

Figure 13 illustrates how the initial values iv0 - iv5 are calculated from the inverse of M2: since M2 x ivs = SPGMs, ivs = M2^-1 x SPGMs. Note that a "1" in M2^-1 denotes that the corresponding SPGM is selected, whereas a "0" in M2^-1 denotes that the corresponding SPGM is not selected, for computing the initial values (iv0 - iv5).

Figure 14 illustrates the content of the memory cells of the above example during execution. Note that at any point in time, at most two of the subprograms are observable in their plaintext forms. Note that the pairing of mutation partners is fixed only because of the single pseudo-random key and the simple mutation partner function employed, for ease of explanation. Note also that with another mutation, the contents of the memory cells are back to their initial states. In other words, after each execution pass, the subprograms are in their initial states, ready for another invocation.

As will be appreciated by those skilled in the art, the above example is unrealistically simple for the purpose of explanation. The plaintext of a subprogram contains many more "0" and "1" bits, making it virtually impossible to distinguish a memory cell storing an obfuscated subprogram in a mutated state from a memory cell storing an obfuscated subprogram in plaintext form. Thus, it is virtually impossible to infer the plaintext appearance location schedule from observing the mutations during execution.

Figure 15 illustrates a security sensitive application 300 which is made tamper resistant by isolating its security sensitive functions 302 and making them tamper proof by incorporating the first and/or second examples described above with reference to Figures 1 to 3 and 4 to 14, respectively.
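Returning briefly to the sample of Figures 10 to 14, the walk-through can be replayed end to end; the helper names below, and the use of (Cell ID + 3) modulo 6 to make the example's pairing symmetric, are choices made here for illustration:

```python
NCELLS = 6
SPGM = [0b000, 0b001, 0b010, 0b011, 0b100, 0b111]   # contrived plaintexts of SPGM0 - SPGM5
SCHEDULE = [0, 3, 1, 4, 2, 5]                        # cell chosen for SPGMk after k mutations

def partner(cell):
    return (cell + 3) % NCELLS                       # cell 0 pairs with 3, 1 with 4, 2 with 5

def mutate(cells, rnd):
    """Odd rounds XOR cells c0 - c2 into c3 - c5; even rounds the reverse."""
    sources = range(0, 3) if rnd % 2 == 1 else range(3, 6)
    for i in sources:
        cells[partner(i)] ^= cells[i]

# Build M2 symbolically: each cell state is a bitmask over iv0..iv5, and row k of M2 is
# the combination of initial values found in the scheduled cell after k mutations.
state = [1 << i for i in range(NCELLS)]
M2 = []
for k in range(NCELLS):
    if k:
        mutate(state, k)
    M2.append(state[SCHEDULE[k]])

def solve_gf2(rows, rhs):
    """Gauss-Jordan over GF(2): recover iv such that M2 x iv = SPGM (M2 is invertible here)."""
    rows, rhs, n = rows[:], rhs[:], len(rows)
    for col in range(n):
        piv = next(r for r in range(col, n) if rows[r] >> col & 1)
        rows[col], rows[piv] = rows[piv], rows[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col and rows[r] >> col & 1:
                rows[r] ^= rows[col]
                rhs[r] ^= rhs[col]
    return rhs                                       # rhs[i] is now iv_i

iv = solve_gf2(M2, SPGM)

# Replay concretely: each SPGM appears in plaintext at its scheduled cell, and one more
# mutation after the last pass returns every cell to its initial value, as in Figure 14.
cells = iv[:]
for k in range(NCELLS):
    if k:
        mutate(cells, k)
    assert cells[SCHEDULE[k]] == SPGM[k]
mutate(cells, NCELLS)
assert cells == iv
```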
In employing the above described second example, different sets of pseudo-random keys will produce a different pattern of mutations, even with the same mutation partner identification function. Thus, copies of the security sensitive application installed on different systems may be made unique by employing a different pattern of mutations through different sets of pseudo-random keys. Thus, the security sensitive applications installed in different systems are further resistant to a class attack, even if the obfuscation scheme is understood from observation on one system.

Figure 16 illustrates a further example of making a security sensitive system 400 tamper resistant by making its security sensitive applications 400a and 400b tamper resistant in accordance with the first, second and/or third examples described above with reference to Figures 1 to 3, 4 to 14 and 15, respectively. Furthermore, security of system 400 may be further strengthened by providing system integrity verification program (SIVP) 404 having a number of integrity verification kernels (IVKs), for the illustrated example, a first and a second level IVK 406a and 406b. First level IVK 406a has a published external interface for other tamper resistant security sensitive functions (SSFs) 402a - 402b of the security sensitive applications 400a - 400b to call. Both IVKs are made tamper resistant in accordance with the first and second examples described earlier. Together, the tamper resistant SSFs 402a - 402b and IVKs 406a - 406b implement an interlocking trust mechanism.

In accordance with the interlocking trust mechanism, for the illustrated example, tamper resistant SSF1 and SSF2 402a - 402b are responsible for the integrity of security sensitive applications 400a - 400b respectively. IVK1 and IVK2 406a - 406b are responsible for the integrity of SIVP 404. Upon verifying the integrity of the security sensitive application 400a or 400b for which it is responsible, SSF1/SSF2 402a - 402b will call IVK1 406a. In response, IVK1 406a will verify the integrity of SIVP 404. Upon successfully doing so, IVK1 406a calls IVK2 406b, which in response, will also verify the integrity of SIVP 404.
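A schematic sketch of that call chain is given below; the SHA-256 comparison is only a stand-in for whatever tamper resistant verification the SSFs and IVKs actually perform, and the class and attribute names are illustrative rather than taken from the specification:

```python
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

class SIVP:
    """System integrity verification program with two integrity verification kernels."""
    def __init__(self, image: bytes):
        self.image = image
        self.expected = digest(image)      # reference value captured while still trusted
    def ivk1(self) -> bool:
        # First level kernel: the published interface that every SSF calls.
        return digest(self.image) == self.expected and self.ivk2()
    def ivk2(self) -> bool:
        # Second level kernel: independently re-verifies the SIVP.
        return digest(self.image) == self.expected

class SSF:
    """Tamper resistant security sensitive function guarding one application."""
    def __init__(self, app_image: bytes, sivp: SIVP):
        self.app_image = app_image
        self.expected = digest(app_image)
        self.sivp = sivp
    def verify(self) -> bool:
        # Verify the application this SSF is responsible for, then pull in IVK1 and IVK2.
        return digest(self.app_image) == self.expected and self.sivp.ivk1()

sivp = SIVP(b"system integrity verification program image")
assert SSF(b"security sensitive application 400a image", sivp).verify()
```

The point of the sketch is the call structure rather than the hash check: every SSF funnels through the same pair of kernels.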
Thus, in order to tamper with security sensitive application 400a, SSF1 402a, IVK1 406a and IVK2 406b must be tampered with at the same time. However, because IVK1 and IVK2 406a - 406b are also used by SSF2 and any other SSFs on the system, all other SSFs must be tampered with at the same time.

Figure 17 illustrates a content industry association 500, content manufacturers 502, content reader manufacturers 510 and content player manufacturers 506 which may jointly implement a coordinated encryption/decryption scheme, with content players 508 manufactured by content player manufacturers 506 employing playing software that includes content decryption functions made tamper resistant in accordance with the various examples described above.

Content industry association 500 chooses and holds a secret private encryption key Kciapri. Content industry association 500 encrypts the content manufacturer's secret content encryption key Kc and the content player manufacturer's public encryption key Kppub for the respective manufacturers 502 and 506 using Kciapri, i.e., Kciapri[Kc] and Kciapri[Kppub].

Content manufacturer 502 encrypts its content product, Kc[ctnt], and includes Kciapri[Kc] with the content product. Content reader manufacturer 510 includes with its content reader product 512 the public key of the content industry association, Kciapub, whereas content player manufacturer 506 includes with its content player product 508 the content player manufacturer's secret private play key Kppri, the content industry association's public key Kciapub, and the encrypted content player public key Kciapri[Kppub].

During operation, content reader product 512 reads encrypted content Kc[ctnt] and the encrypted content encryption key Kciapri[Kc]. Content reader product 512 decrypts Kc using Kciapub. Concurrently, content player product 508 recovers its public key Kppub by decrypting Kciapri[Kppub] using the content industry association's public key Kciapub. Content reader product 512 and content player product 508 are also in communication with each other. Upon recovering its own public key, content player product 508 provides it to content reader product 512.
Content reader product 512 uses the provided player public key Kppub to encrypt the recovered content encryption key Kc, generating Kppub[Kc], which is returned to content player product 508. In response, content player product 508 recovers content encryption key Kc by decrypting Kppub[Kc] using its own private key Kppri. Thus, as content reader product 512 reads encrypted content Kc[ctnt] and forwards it to content player product 508, content player product 508 decrypts it with the recovered Kc, generating the unencrypted content (ctnt).

In accordance with the above, the decryption functions for recovering the content player manufacturer's public key, and for recovering the content encryption key Kc, are made tamper resistant (a symbolic sketch of this key flow appears below).

As will be appreciated by those skilled in the art, in addition to being made tamper resistant, by virtue of the interlocking trust, tampering with the content player product's decryption functions will require tampering with the content industry association's, content manufacturer's and content reader manufacturer's encryption/decryption functions, thus making it virtually impossible to compromise the various encryption/decryption functions' integrity.

As will also be appreciated by those skilled in the art, a manufacturer may play more than one role in the above described tamper resistant industry security scheme, e.g., manufacturing both the content reader and the content player products, as separate or combined products.

Figure 18 illustrates a sample computer system suitable to be programmed with security sensitive programs/applications, with or without SIVP, including an industry wide security mechanism, made tamper resistant in accordance with the first, second, third, fourth and/or fifth examples described above. Sample computer system 600 includes CPU 602 and cache memory 604 coupled to each other through processor bus 605. Sample computer system 600 also includes high performance I/O bus 608 and standard I/O bus 618. Processor bus 605 and high performance I/O bus 608 are bridged by host bridge 606, whereas high performance I/O bus 608 and standard I/O bus 618 are bridged by bus bridge 610. Coupled to high performance I/O bus 608 are main memory 612 and video memory 614. Coupled to video memory 614 is video display 616. Coupled to standard I/O bus 618 are mass storage 620, and keyboard and pointing devices 622.

These elements perform their conventional functions. In particular, mass storage 620 is used to provide permanent storage for the executable instructions of the various tamper resistant programs/applications, whereas main memory 612 is used to temporarily store the executable instructions of the tamper resistant programs/applications during execution by CPU 602.

Figure 19 illustrates a sample embedded controller suitable to be programmed with security sensitive programs for a security sensitive apparatus, made tamper resistant in accordance with the first, second, third, fourth and/or fifth examples described above. Sample embedded system 700 includes CPU 702, main memory 704, ROM 706 and I/O controller 708 coupled to each other through system bus 710. These elements also perform their conventional functions.
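Returning to the Figure 17 exchange, the overall key flow can be traced with a purely symbolic model; encrypt and decrypt below merely record which key sealed a payload and which key may open it, standing in for the real asymmetric and symmetric ciphers, and all names are illustrative:

```python
def keypair(owner):
    return (owner, "pri"), (owner, "pub")          # matching private / public halves

def encrypt(key, payload):
    return ("sealed", key, payload)

def decrypt(key, box):
    tag, sealing_key, payload = box
    # Opens with the same key (symmetric) or the other half of the same pair (asymmetric).
    matches = sealing_key == key or (sealing_key[0] == key[0] and sealing_key[1] != key[1])
    assert tag == "sealed" and matches
    return payload

kcia_pri, kcia_pub = keypair("cia")                # content industry association 500
kp_pri, kp_pub = keypair("player")                 # content player manufacturer 506
kc = ("kc", "sym")                                 # content manufacturer's content key
ctnt = "plaintext content"

# Distribution: the association seals Kc and Kppub under its private key Kciapri.
kciapri_kc = encrypt(kcia_pri, kc)                 # Kciapri[Kc], shipped with the content
kciapri_kppub = encrypt(kcia_pri, kp_pub)          # Kciapri[Kppub], shipped in the player
kc_ctnt = encrypt(kc, ctnt)                        # Kc[ctnt], the protected content

reader_kc = decrypt(kcia_pub, kciapri_kc)          # reader 512 recovers Kc with Kciapub
player_kppub = decrypt(kcia_pub, kciapri_kppub)    # player 508 recovers Kppub, sends it to the reader
kppub_kc = encrypt(player_kppub, reader_kc)        # reader re-seals Kc for this player: Kppub[Kc]
player_kc = decrypt(kp_pri, kppub_kc)              # player opens it with Kppri
assert decrypt(player_kc, kc_ctnt) == ctnt         # player decrypts the forwarded content with Kc
```

The player-side decryptions here, of Kciapri[Kppub] and of Kppub[Kc], are the functions the description above identifies as being made tamper resistant.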
In particular, for the embedded controller of Figure 19, ROM 706 may be used to provide permanent and execute-in-place storage for the executable instructions of the various tamper resistant programs, whereas main memory 704 may be used to provide temporary storage for various working data during execution of the executable instructions of the tamper resistant programs by CPU 702.

Thus, various tamper resistant methods and apparatus have been described. While the methods and apparatus of the present invention have been described in terms of the above illustrated examples, those skilled in the art will recognise that the invention is not limited to the embodiments described. The present invention can be practised with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of restrictive on the present invention.

Throughout this specification and the claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.

The reference to any prior art in this specification is not, and should not be taken as, an acknowledgment or any form of suggestion that that prior art forms part of the common general knowledge in Australia.

Claims (6)

1. A computer-implemented method for verifying integrity on a computing device, the method comprising:
a) individually requesting, via a first and a second tamper resistant integrity verification functions of a first and a second applications of the computing device, a third tamper resistant integrity verification function of a system integrity verification program to jointly perform integrity verification with the first and second tamper resistant integrity verification functions, respectively, wherein the system integrity verification program includes tamper resistant integrity verification kernels to jointly deploy an interlocking trust with tamper resistant security sensitive functions including the first and second tamper resistant integrity verification functions;
b) in response, calling, via the third tamper resistant integrity verification function, a fourth tamper resistant integrity verification function of the system integrity verification program to jointly perform the requested integrity verifications;
c) providing, via the fourth tamper resistant integrity verification function, the first and the second tamper resistant integrity verification functions with respective results of the requested integrity verifications.

2. The computer-implemented method of claim 1, wherein the system integrity verification program comprises a decryption program that operates with a secret private key, and wherein secrets relating to the security sensitive functions are isolated and distributed in time and space.

3. A computing device for verifying integrity, the computing device comprising:
an execution unit for executing programming instructions; and
a storage medium having stored thereon the programming instructions to be executed by the execution unit, wherein the execution unit is further to:
a) individually request, via a first and a second tamper resistant integrity verification functions of a first and a second applications of the computing device, a third tamper resistant integrity verification function of a system integrity verification program to jointly perform integrity verification with the first and second tamper resistant integrity verification functions, respectively, wherein the system integrity verification program includes tamper resistant integrity verification kernels to jointly deploy an interlocking trust with tamper resistant security sensitive functions including the first and second tamper resistant integrity verification functions;
b) in response, call, via the third tamper resistant integrity verification function, a fourth tamper resistant integrity verification function of the system integrity verification program to jointly perform the requested integrity verifications; and
c) provide, via the fourth tamper resistant integrity verification function, the first and the second tamper resistant integrity verification functions with respective results of the requested integrity verifications.

4. The computing device of claim 3, wherein the system integrity verification program comprises a decryption program that operates with a secret private key, and wherein secrets relating to the security sensitive functions are isolated and distributed in time and space.

5. At least one machine-readable medium comprising a plurality of instructions, when executed on a computing device, to implement or perform a method as claimed in any of claims 1-2.

6. A system comprising a mechanism to implement or perform a method as claimed in any of claims 1-2.
AU2004218702A 1996-06-13 2004-10-08 Method for verifying integrity on an apparatus Expired AU2004218702B8 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2004218702A AU2004218702B8 (en) 1996-06-13 2004-10-08 Method for verifying integrity on an apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/662679 1996-06-13
AU53405/00A AU5340500A (en) 1996-06-13 2000-08-16 Method for verifying integrity on an apparatus
AU2004218702A AU2004218702B8 (en) 1996-06-13 2004-10-08 Method for verifying integrity on an apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU53405/00A Division AU5340500A (en) 1996-06-13 2000-08-16 Method for verifying integrity on an apparatus

Publications (4)

Publication Number Publication Date
AU2004218702A1 AU2004218702A1 (en) 2004-11-04
AU2004218702B2 AU2004218702B2 (en) 2014-12-11
AU2004218702B8 true AU2004218702B8 (en) 2015-01-15
AU2004218702A8 AU2004218702A8 (en) 2015-01-15

Family

ID=3739411

Family Applications (2)

Application Number Title Priority Date Filing Date
AU53405/00A Abandoned AU5340500A (en) 1996-06-13 2000-08-16 Method for verifying integrity on an apparatus
AU2004218702A Expired AU2004218702B8 (en) 1996-06-13 2004-10-08 Method for verifying integrity on an apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
AU53405/00A Abandoned AU5340500A (en) 1996-06-13 2000-08-16 Method for verifying integrity on an apparatus

Country Status (1)

Country Link
AU (2) AU5340500A (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK279089D0 (en) * 1989-06-07 1989-06-07 Kommunedata I S PROCEDURE FOR TRANSFER OF DATA, AN ELECTRONIC DOCUMENT OR SIMILAR, SYSTEM FOR EXERCISING THE PROCEDURE AND A CARD FOR USE IN EXERCISING THE PROCEDURE
ES2158081T3 (en) * 1994-01-13 2001-09-01 Certco Inc CRYPTOGRAPHIC SYSTEM AND METHOD WITH KEY DEPOSIT CHARACTERISTICS.

Also Published As

Publication number Publication date
AU2004218702A1 (en) 2004-11-04
AU2004218702B2 (en) 2014-12-11
AU5340500A (en) 2000-11-02

Similar Documents

Publication Publication Date Title
US5892899A (en) Tamper resistant methods and apparatus
US6205550B1 (en) Tamper resistant methods and apparatus
US6175925B1 (en) Tamper resistant player for scrambled contents
US6178509B1 (en) Tamper resistant methods and apparatus
Aucsmith Tamper resistant software: An implementation
WO2004055653A2 (en) Method of defending software from debugger attacks
US10331896B2 (en) Method of protecting secret data when used in a cryptographic algorithm
AU2004218702B8 (en) Method for verifying integrity on an apparatus
AU750845B2 (en) Tamper resistant methods and apparatus
AU2004200094B2 (en) Tamper resistant methods and apparatus
AU723556C (en) Tamper resistant methods and apparatus
AU774198B2 (en) Apparatus for tamper resistance
EP1000482A1 (en) Cell array providing non-persistent secret storage through a mutation cycle
KR20000054834A (en) Modification prevention system of program cooperated with operating system and compiler and method thereof

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application
NA Applications received for extensions of time, section 223

Free format text: AN APPLICATION TO EXTEND THE TIME FROM 12 JUN 2005 TO 12 JUN 2013 IN WHICH TO PAY A CONTINUATION FEE HAS BEEN FILED.

NB Applications allowed - extensions of time section 223(2)

Free format text: THE TIME IN WHICH TO PAY A CONTINUATION FEE HAS BEEN EXTENDED TO 12 JUN 2013.

TH Corrigenda

Free format text: IN VOL 28 , NO 49 , PAGE(S) 6691 UNDER THE HEADING APPLICATIONS ACCEPTED - NAME INDEX UNDER THE NAME INTEL CORPORATION, APPLICATION NO. 2004218702, UNDER INID (54) CORRECT THE TITLE TO READ METHOD FOR VERIFYING INTEGRITY ON AN APPARATUS

Free format text: IN VOL 18 , NO 43 , PAGE(S) 9650 UNDER THE HEADING APPLICATIONS OPI - NAME INDEX UNDER THE NAME INTEL CORPORATION, APPLICATION NO. 2004218702, UNDER INID (54) CORRECT THE TITLE TO READ METHOD FOR VERIFYING INTEGRITY ON AN APPARATUS

Free format text: IN VOL 18 , NO 42 , PAGE(S) 9567 UNDER THE HEADING COMPLETE APPLICATIONS FILED - NAME INDEX UNDER THE NAME INTEL CORPORATION, APPLICATION NO. 2004218702, UNDER INID (54) CORRECT THE TITLE TO READ METHOD FOR VERIFYING INTEGRITY ON AN APPARATUS

FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired