AU720583B2 - A method for protecting data - Google Patents

A method for protecting data

Info

Publication number
AU720583B2
AU720583B2, AU35253/97A, AU3525397A
Authority
AU
Australia
Prior art keywords
data
hash
further including
aggregate
record
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU35253/97A
Other versions
AU3525397A (en)
Inventor
Addison M. Fischer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/154,520 external-priority patent/US5475826A/en
Application filed by Individual filed Critical Individual
Priority to AU35253/97A priority Critical patent/AU720583B2/en
Publication of AU3525397A publication Critical patent/AU3525397A/en
Application granted granted Critical
Publication of AU720583B2 publication Critical patent/AU720583B2/en
Anticipated expiration legal-status Critical
Expired legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/562Static detection
    • G06F21/565Static detection by checking file integrity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3236Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2209/00Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/60Digital content management, e.g. content distribution

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Virology (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioethics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Storage Device Security (AREA)

Description

Regulation 3.2
AUSTRALIA
Patents Act 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
(ORIGINAL)
Name of Applicant / Actual Inventor: ADDISON M. FISCHER, of 4073 Merchantile Avenue, Naples, Florida 33942, United States of America
Address for Service: DAVIES COLLISON CAVE, Patent Attorneys, of 1 Little Collins Street, Melbourne, Victoria 3000, Australia
Invention Title: "A METHOD FOR PROTECTING DATA"
The following statement is a full description of this invention, including the best method of performing it known to me:

FIELD OF THE INVENTION

This invention generally relates to protecting data.
BACKGROUND AND SUMMARY OF THE INVENTION

Particularly with the advent of electronic business transactions, ensuring the privacy and integrity of workstation data (whether it is generated by a laptop computer, a mainframe terminal, a stand-alone PC, or any type of computer network workstation) is critically important. For example, many users of laptop computers encrypt all hard drive data to ensure data privacy. The encryption hides the data from unintended disclosure.
In and of itself, the encryption does not ensure data integrity. For example, encryption does not prevent an opponent that can gain surreptitious access to the computer from running a special sabotage program which, although unable to make sense of a particular piece of encrypted data, may attempt to randomly over-write the encrypted data with other, possibly random, information, thereby causing an erroneous analysis when the data is eventually decrypted for input to other processes.
Depending on the encryption protocol, the type of file that was damaged, and how it was damaged, it is possible that this alteration may go undetected and lead to fallacious results when the data is processed by the proper owner. It is especially easy for this to occur, for example, if the damaged data contains binary numerical data. The owner may be led to erroneous action by incorrect results.
It is well known that file integrity may be protected by taking a one-way hash (e.g., using MD5 or the secure hash algorithm, SHA) over the contents of the file. By comparing a currently computed hash value with a previously stored hash value, the threat of malicious tampering (or even accidental external modification) can be detected, thereby improving the reliability and security of the ultimate results.
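By way of illustration only (this sketch is not part of the patent specification), such a whole-file hash check might look as follows in Python; the file name "ledger.db", the stored_digest variable and the choice of SHA-256 are assumptions made purely for the example.

```python
import hashlib

def file_hash(path: str) -> bytes:
    """Compute a one-way hash over the entire contents of a file."""
    h = hashlib.sha256()  # SHA-256 stands in for the MD5/SHA algorithms mentioned above
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)  # the whole file must be read to (re)compute the hash
    return h.digest()

# Store the digest somewhere tamper-resistant; later, to verify integrity:
# if file_hash("ledger.db") != stored_digest:
#     raise ValueError("file damaged or tampered with")
```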
Assuming it is stored in a way that preserves its own integrity, the file hash can be used to ensure that the entire file has not been damaged or deliberately tampered with.
Such a hash can be computed when the file is processed sequentially.
The hash can be computed when (or as) the file is sequentially built; and then checked again whenever the file is used. Provided that the hash value is protected from alteration such as by being encrypted by a key known only to the user, or by being digitally signed in a way trusted by the user, or by being stored in a trusted token device, the user can be certain that the file has not been altered, since modification of any part of the file will result in the recomputation of a different hash value.
Existing techniques require that the entire file be processed sequentially in order to compute, or re-compute the hash value. These techniques become cumbersome, if not impractical, for files which are frequently updated or which are processed "randomly".
The conventional validation process consists of verifying the hash when the file is first accessed, modifying the file, then re-computing the hash of the revised file after all changes have been applied. This conventional process is not well suited to certain applications, such as those which are long-running, those in which the file is frequently modified or is in use constantly, or those in which there is a danger that the particular program or computer system updating the file may be interrupted (e.g., the computer may be turned off) anytime before the program comes to its final conclusion where the updated file is saved and the new hash is re-computed and stored. This is because it is generally impractical to recompute the hash for the entire file whenever an update occurs. Without such a computation, the file exists in an apparently tampered state from the moment the first update is done until the final hash is recomputed.
Such practical problems exist when applying conventional hashing techniques to certain types of files. Some files, such as indexed databases, are updated "randomly" (i.e., only a subset of records are updated, in some non-sequential order) and over a long period of time. The file may be constantly updated over a period of minutes, hours, or (in the case of mainframes or "servers") even days.
If the hash is computed over the entire file and the file is frequently updated, then computing a revised hash over the entire file each time it is modified results in unacceptable overhead. On the other hand, if the hash is computed over the entire file and the file is frequently updated, then delaying the computation of the revised file hash until the file is closed (or the program is completed) results in the file being left in an apparently "incorrect" state between the moment of the first update and the final hash recomputation. If the system or other program is terminated prematurely, then the file is left in this apparent state.
If a hash is maintained for each record, then additional record space is required, which may impact the layout of the file or its records. Typically, each record's hash might be stored in space set aside at the end of each record. Such a file layout revision may be acceptable in some applications; however, this approach suffers various drawbacks, including that it requires additional storage for each record.
Another drawback to keeping a hash only on a record-by-record basis is that an adversary who has a stale copy of the database (even if the database was encrypted) may be able to isolate such stale records. A database which is designed to be updated "randomly" must be encrypted in record units (cipher chaining across record boundaries makes "random" updating impossible). The adversary could then blindly substitute these anachronistic records for corresponding records in the current active copy of the database (this could be done even if the adversary is unsure of the actual content of the records and only wishes to cause confusion), thereby damaging the integrity of the database in a way that is impossible to automatically detect.
In accordance with the present invention there is provided a method for protecting a collection of a plurality of discrete data units, which are modified from time to time by an associated data processing system, from tampering to provide security for the collection, comprising the steps of: obtaining an individual hash value for each of the discrete data units by performing a hash operation using at least the data value portion of the data unit which is to be protected; aggregating said individual hash values to obtain an aggregate hash value for said collection using a function that provides for the independent inclusion and deletion of each individual hash value from the aggregate hash value; and revising said aggregate hash value for said collection using said function without executing said aggregating step and by operating on revised ones of said discrete data units.
The present invention also provides a method for protecting a collection of individual data groups, including a first data group and a second data group which are modified from time to time, from tampering to provide security for the collection, comprising the steps of: performing a predetermined hash operation using both the first data group and indicia in addition to the first data group which specifically identifies said first group; performing a predetermined hash operation on the second data group and indicia identifying said second group; combining the hashes to determine an aggregate hash for said collection using a function; and revising said aggregate hash for said collection without executing said combining step and by extracting said hash of said first group or said hash of said second group using the inverse of said function.
The present invention further provides a method for maintaining a validity indicator of an updatable data file including a plurality of data records and having an associated aggregate file hash, comprising the steps of: accessing said aggregate file hash; updating one of said plurality of data records to generate an updated record; and computing an updated aggregate hash using the updated record and the aggregate file hash, without aggregating hash values, by applying a function having both associative and commutative properties with respect to the aggregate hash.
The present invention also provides a method of protecting a plurality of digital data records from tampering to provide security for the records, each data record including both information content and a record identifier, comprising the steps of: combining the informational content of a data record with the record identifier of said data record to determine an aggregate data string; performing a hashing operation on said aggregate data string to determine a hash value; applying a function having both associative and commutative properties to said hash value to generate an aggregate hash value for said data records; and revising said aggregate hash value for said plurality of data records using said function without aggregating hash values and by operating on revised ones of said data records.
The present invention enables the contents of a file to be hashed so that an ongoing hash may be maintained, and constantly updated, in an efficient fashion. Data base integrity can be maintained without introducing the undue and excessive additional overhead of repeatedly re-processing the entire file, and without leaving the file in an apparently-tampered state for long durations of time (such as while a long-duration real-time program is running).
Only a limited amount of additional storage for each file is required, which could easily be maintained in the system directory, or in a special ancillary (and possibly encrypted) file, with other information about each file. Each underlying file format and structure can remain unchanged, and this provides integrity "transparently" as part of file processing, possibly at or near the "system" level, without requiring changes to existing programs. This overcomes compatibility difficulties in systems which attempt to provide this additional integrity service as a transparent service in addition to normal operation (independently of any particular application).
As will be explained in detail herein, the present invention permits the hash of a file to be taken on an incremental basis. It permits any part of the file to be changed while allowing a new aggregate hash to be computed based on the revised file portion and the prior total hash. The aggregate hash is readily updatable with each record revision without having to recompute the hash of the entire file in accordance with conventional techniques.
The illustrative embodiment accomplishes these objectives using two functions.
The first function is an effective one-way hash function for which it is computationally impossible to find two data values that hash to the same result. Examples of such functions include the well-known MD5 and SHA algorithms. The second function is a commutative and associative function (with an associated inverse "Finv") and provides a mechanism for combining the aggregate hash and the hash of updated records. Examples of these latter functions include exclusive OR and arithmetic addition.
The methodology involves combining the hash of each file record and the hash of an identification of the record (e.g., a record number or key). These hashes are combined using a function whereby individual records may be extracted using the inverse of that function (Finv). In this fashion, an individual record may be extracted from the aggregate hash and updated. With each update, the file hash as computed according to this invention is preferably also written after being encrypted under a key known only to the valid user, or digitally signed by the valid user, or held in tamper resistant storage. Each record is represented by its identification hashed together with its data content. All such record hashes are added together to provide a highly secure integrity check. This aggregate hash reflects the entire database, such that the tampering with (or rearranging of) any data record is revealed by the use of the record identifier (e.g., the record number) in the hash calculation, due to its impact on the aggregate hash (e.g., the sum).
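As an illustrative aside (again, not text from the specification), the property being exploited is that a commutative, associative function with an inverse — XOR being the simplest case, since it is its own inverse — lets one record's hash be removed from the aggregate and a replacement inserted, without touching any other record. A minimal sketch with made-up keys and record values:

```python
import hashlib

def h(key: bytes, data: bytes) -> int:
    # Hash the record identifier together with its data content (simplified combination).
    return int.from_bytes(hashlib.sha256(key + data).digest(), "big")

# Aggregate hash of three hypothetical records (F = XOR, Finv = XOR).
agg = h(b"K1", b"alpha") ^ h(b"K2", b"beta") ^ h(b"K3", b"gamma")

# Update record K2 without re-hashing K1 or K3:
agg ^= h(b"K2", b"beta")          # Finv: remove the old record's hash
agg ^= h(b"K2", b"beta-revised")  # F: insert the new record's hash

assert agg == h(b"K1", b"alpha") ^ h(b"K2", b"beta-revised") ^ h(b"K3", b"gamma")
```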
Using this methodology a user cannot be tricked into operating with fallacious data.
The invention advantageously overcomes at least the prior art drawbacks of massive re-computation for each file alteration, long periods in which the file is in jeopardy of being considered "invalid" if the application or system is abruptly terminated, additional storage space for a hash (or MAC) for each record, and the ability of an adversary to substitute stale records, because the integrity of the entire file, and the inter-relationship of all records, is maintained encapsulated in a single file hash value which changes as each file update is performed.
1" BRIEF DESCRIPTION OF THE DRAWINGS go S; A preferred embodiment of the present invention is hereinafter described, by way of example only, with reference to the accompanying drawings, wherein: FIGURE 1 is a block diagram of a communications system within which the present invention may be utilized; FIGURE 2 generally shows an exemplary record format in accordance with the illustrative embodiment of the present invention; FIGURE 3 is an exemplary representation of a scratchData data structure; FIGURE 4 is a flowchart which delineates the sequence of operations performed in accordance with an exemplary embodiment when the system opens a file to be updated or used in anyway; FIGURE 5 is a flowchart which delineates the sequence of operations performed in accordance with an exemplary embodiment when executing an add, update or delete operation; and FIGURE 6 is a flowchart which delineates the sequence of operations performed in accordance with an exemplary embodiment for a closure operation.
DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENT

FIGURE 1 shows in block diagram form an exemplary computing system within which the present invention may be utilized as part of an electronic commerce computing network. While the present invention may be used in such a communications network environment, the invention may likewise be advantageously utilized in conjunction with a laptop computer, a stand-alone PC, a mainframe computer, or any other computer system where data security is significant.
The system shown in FIGURE 1 includes an exemplary computing network having an unsecured communications channel 12 over which communications between terminals A, B through N may take place.
Communications channel 12 may, for example, be a telephone line. Terminals A, B through N may, by way of example only, be IBM PCs having a processor (with main memory) 2 which is coupled to a conventional keyboard/CRT 4. Each terminal A, B through N also includes a conventional IBM PC communications board (not shown) which, when coupled to a conventional modem 6, 8, 10, respectively, permits the terminals to transmit and receive messages. Each terminal includes a conventional IBM PC disk storage device which permits the computer to read, write and store data base information. Each terminal is capable of generating a plain text or unenciphered message, performing whatever signature operation may be required, and transmitting the message to any of the other terminals connected to communications channel 12 (or to a communications network (not shown) which may be connected to communications channel 12). Additionally, each of the terminals A, B through N is capable of performing signature verification on each message.
Each of the terminal users has a public encrypting key and an associated private secret decrypting key. In the public key cryptosystem shown in FIG. 1 each terminal user is aware of the general method by which the other terminal users encrypt a message. Additionally, each terminal user is aware of the encryption key utilized by the terminal's encryption procedure to generate the enciphered message.
Each terminal user, however, by revealing his encryption procedure and encryption key does not reveal his private decryption key which is necessary to decrypt the ciphered message and to create signatures. In this regard it is computationally unfeasible to compute the decryption key from knowledge of the encryption key.
Besides the capability of transmitting a private message, each terminal user likewise has the capability of digitally signing a transmitted message. A message may be digitally signed by a terminal user decrypting a message with his private decrypting key before transmitting the message. Upon receiving the message, the recipient can read the message by using the sender's public encryption key. In this fashion, the recipient can verify that only the holder of the secret decryption key should have created the message.
Thus, the recipient of the signed message has proof that the message originated from the sender.
Further details of the exemplary digital signature methodology which may be used in conjunction with the present invention are disclosed in U.S. Patent No. 4,405,829, and in the applicant's digital signature methodology disclosed in U.S. Patent Nos. 4,868,877 and 5,005,200, which patents are hereby expressly incorporated.
Each of the computer terminals A to N is preferably designed to be a secure workstation for electronic commerce. In accordance with the present invention, the entire memory space of a computer terminal may be protected in a manner described in detail below, or, if desired, only a portion of the memory space may be protected. Each of the programs resident in the terminal memory space is preferably protected in accordance with program authorization information (PAI) as described in the specification of Australian Patent No. 672786 entitled "Computer System Security Method and Apparatus having Program Authorization Information Data Structures", which is expressly incorporated herein by reference. In accordance with the exemplary implementation of the present invention, a program may not be run on terminals A to N unless it is authorized in accordance with associated PAI. The PAI information is utilized to protect programs from being tampered with. Data encryption in accordance with conventional techniques is used to protect the confidentiality of the data operated on by the program. The present invention is used to prevent tampering with the data files.
Particularly, when used in concert, a highly secure workstation results which may be reliably utilized in electronic commerce.
If terminal A is a laptop computer protected in accordance with this methodology, the terminal may be carried anywhere and even utilized to operate programs which may contain a virus. The protected portion of the memory space, in accordance with the methodology described below, will be immune to such a tampered program, and the user can have a very high degree of assurance in the data processed in, stored on, and transmitted from the computer system.
FIGURE 2 shows, in simplified fashion, an exemplary record format in accordance with the illustrative embodiment of the present invention. As shown in FIGURE 2, the file contains n discrete records Ri (where i varies from 1 to n). Each record Ri has an associated record identifier "Ki", which may be a record number. Ki may be any indexing value such as, for example, an employee number. Thus, the file may be organized as a sequential file (beginning with a record 1, followed by a record 2, through record n). The record identifiers may be sector numbers on a disk. The record identifiers may be organized in any associative manner (e.g., by employee number, etc.) as long as each record is uniformly and consistently identified.
In addition to a record identifier Ki, each record is associated with a data content Ri, together with conventional media control signals, as will be appreciated by those skilled in the art.
As used herein, the interpretation of "record" can vary depending upon the application or the computer system being used. It might be appropriate to treat each byte as a record. By operating on each byte it is possible to always keep a perfect hash of the entire file. On disks which are so organized, each sector could be considered a record. In some systems, such as S/360 architecture systems, there are discrete records that can be defined in assorted ways, including sequential numbering. In some data base systems, records are best distinguished by an index, i.e., a data key the value of which is used to identify a particular record.
Of course, this exemplary embodiment should be taken as only one possible way to implement the invention. Other techniques could include using only a partial amount of the data, using a hash of aspects of some particular data instead of the data itself, or construing data records as combinations of raw data and/or the hash values of yet other data.
In accordance with an exemplary embodiment of this invention, the data structures described below insure that the file is always recoverable in light of normal interruptions (no matter how the system may have been interrupted when the file was being updated). Any other types of data damage, accidental or intentional, will always be detectable. If complete recoverability from all interruptions is not desired, then portions of the following logic may be removed.
The illustrative embodiment uses a data base file as well as "scratchData" and "fileHash" data structures to provide complete recoverability (from interruptions) together with absolute tamper resistance.
The database File data structure contains the user's data. The scratchData data structure contains record processing information in case the system is interrupted while updates are underway.
The manner in which the scratchData file is associated with the database File depends on the implementation. For example, scratchData could be associated with the database file by file name, with the presence of scratchData when the database File is opened indicating that its previous usage had been interrupted and recovery was necessary. The scratchData is created afresh if it did not exist during the database file opening and is erased whenever the database File is successfully closed. This approach is illustrated by the exemplary implementation described below. Alternatively, a pool of scratchData elements could be resident in permanent storage maintained by the operating environment, and associated with active database files whenever they are opened; returned to the pool when the files are closed; and checked after a system interruption to clean up any updates underway.
As shown in Figure 3, an exemplary scratchData data structure 50 has five fields.
The operation field 52 indicates either "add", "update", "delete", or "null" operations. A record identifier field 54 identifies Ki. Additionally, the scratchData data structure 20 includes fields indicating value of the revised record Ri The revised version of the overall or future database Hash and the hash of the fields 52, 54, 56, and 58 Fields 54 and 56 are ignored for a "null" operation and field 58 is ignored for a "delete" operation.
25 If any part of the scratchData itself is incorrect or damaged, either through blind vandalism, normal hardware (such as media) failure, or as the result of interruption while being written, then such is detected through a mismatch with the scratch's hash kept in field P:\OPER\DBW\57783-94.DIV 25/8/97 13 of the scratchData data structure. The scratchData data structure should be protected in a manner similar to the fileHash lest an opponent modify it to effect a deliberate change to the database File. In the preferred implementation, all fileHash, scratchData, and databaseFile data structures are stored in a form encrypted with a key known only to the valid user(s). The records are encrypted before being written, and decrypted as they are read.
If only integrity, and not confidentiality, of the databaseFile is required, then the only data that needs to be encrypted is field 60 in the scratchData record and the fileHash.
This insures that neither fileHash nor scratchData can be manipulated by clever opponents.
There is one further attack that needs to be considered, namely that an opponent could substitute stale data for all of the database File, scratchData and fileHash. In this case, the database is consistent and exactly reflects a former state of the database 15 thereby conforming to all validation checks but reflects non-current and thereby possibly misleading data.
This threat could be addressed in several ways such as by keeping the date/time of the last update as an additional field and storing it with the fileHash data. This date/time 20 data could then be announced to the user as part of opening the file each time. This also allows the user to deliberately restore and use an older version of the database File. It o would also be possible to store the date/time of the last "open" as part of the fileHash data, and also store this in a user token (such as a SmartCard), which is invulnerable to surreptitious modification (the token might also be used to store the encryption key). If the .i 25 date/time found with the fileHash disagrees with that in the user token, the user is alerted P:\OPER\DBW57783-94.DIV 25/8/97 14 to the fact that an obsolete version of the database File is being used. If the date/time agrees, as is normally expected, then the user is allowed to proceed without being required to make a decision.
The threat also may be addressed by changing the encryption key used to hide the fileHash and scratchData as part of the start of each open request. This ensures that each fileHash and scratchData can never be duplicated from session to session. The latest key could then be stored in an unalterable token maintained by the user.
The fileHash is stored where it can be associated with the databaseFile. It must be designed such that it cannot be surreptitiously modified by anyone other than the authorized user. This could be done several ways including being encrypted under a symmetric cipher key known only to the valid user(s); being encrypted under a public key, corresponding to a private key known by the valid user(s); or being digitally signed so that 15 it can be verified by the valid user(s) as being trusted.
Any other technique may be employed so that the valid user(s) can trust that the fileHash value cannot be altered by an opponent. The trusted fileHash could be stored, for example, in the directory entry corresponding to the file, as an appendage to the file itself, 20 or in a special database that allows it to be related to the file.
In accordance with the exemplary embodiment, a hash of the database File is computed as follows. The file hash is initially set equal to an initial value (such as 0).
Thereafter, the hash routine indexes over all records in the database File using all the 25 record identifiers Ki of the records Ril to N in the database File, computing this value: fileHash F(fileHash, H(Ki Ri)) p:\OPER\DBW\57783-94.DIV 25/8/97 The notation Ki R indicates an operation that unambiguously combines the value of "Ki and the value of the associated record Ri. One simple way to do this, if Ki has a uniform length for all the keys (perhaps a binary integer padded to four bytes), is to concatenate the two values. If the field "Ki can vary in length, then the operation should be elaborated to effectively prefix the value "Ki with its length indication in order to unambiguously distinguish the "Ki and "Ri values and then concatenate the three values such as: length (Ki) I K I I Ri.
After K i and the content of the record R, are unambiguously combined by concatenation), the hash of the aggregate data string is taken using the hash function H and the result is combined with the proper aggregate fileHash value using the specialized function F. The hash function H is a one-way hash function for which it is computationally impossible to find two data values that hash to the same result. Examples of such functions include the MD5 hashing algorithm developed by MIT Professor Dr.
15 Rivest or the secure hash algorithm (SHA).
The function F is a commutative and associative function which has an associated inverse function "Finv" and which provides a mechanism for combining the aggregate hash and the hash of updated records. Examples of such commutative and associative 20 functions include exclusive OR (XOR) and arithmetic addition. After the application of the function F, the aggregate hash becomes the hash for all the old records including the new record. This processing is done for all the records in the database File. In the case where records are best distinguished by an index data key, that is the value K, which is used to identify a particular record, the processing loop described above is taken over all active .25 indexing entries.
P:\OPER\DBWW783-94.DIV -25/8/97 16 When a record identified by K i is updated (where Ri is the old record and R2, is the new record value), then the new revised database File hash is recomputed as: fileHash F(Finv (fileHash, H(Ki H(Ki R2,)) In other words, the hash of the former record is removed, and the newly computed hash value is inserted. If a record identified by "Ki" is removed from the data base, database File hash is revised to: fileHash Finv (fileHash, H(K, Ri)).
If a new record identified by Ki is introduced, then the revised hash becomes: fileHash F(fileHash, H(Kii Ri)).
With this protocol, the revised hash can be computed as modified and stored.
15 FIGURE 4 is a flowchart which delineates the sequence of operations when the system opens a database file (to be updated or used in any way) to establish the validity of the file. File processing begins by opening the database file (1010). A check is initially made at block 1012 to determine if the database file is being initially created or is being reinitialized overwritten).
If the database file is a new file or is being reinitialized, the variable "fileHash" is set to 0 and the routine branches to block 1230. By initializing fileHash to 0, the stage is set for scanning through the file to insure that all records are present and :contemporaneous, that none of the records have been tampered with or have been 25 rearranged, and to ensure that the entire file in context agrees with appropriate checks.
Thus, whenever a database is first utilized, the file is scanned to check the stored hash.
P:\OPER\DBW\5773-94.DIV 25/8/97 17 If the check at block 1012 indicates that the database File being processed is an old file, then the routine branches to block 1020 where the associated aggregate "fileHash" previously computed is accessed. The fileHash may be stored in a secure directory and encrypted with a key known only to the user. This value reflects the state of the file when it was last used. Exactly where the fileHash is stored depends upon the implementation. In a preferred embodiment of the present invention, the file hash may be stored in a separate data base, distinct from the file, or in an adjunct to the file's directory entry.
A check is made at block 1025 to determine whether scratchData corresponding to the database File exists. If scratchData, as shown above, in FIGURE 3, does not exist for the database File, then the routine branches to block 1230.
If the check at block 1025 indicates that a scratchData data structure exists, then a process is initiated at block 1030 for handling the recovery for updates to the database files 15 which may have been interrupted during previous processing. Initially, the scratchData data structure is opened and read. In the preferred embodiment, the scratchData file is encrypted so it must be decrypted to read its contents. A check is then made in block 1030 to insure the scratchData is itself valid by computing the hash of fields 52, 54, 56, and 58 and comparing such computed hash with the stored hash in field 60 of FIGURE 3. In this 20 fashion, it can be insured that interruptions did not occur when scratchData information was being processed.
If the computed scratchData hash does not match the stored hash in field (1050), then the routine branches to block 1220 where processing for creating a new 25 scratchData data structure begins. The mismatch of hashes implies that the scratchData data structure itself was interrupted while being written. If this is the case, the database file and fileHash P:\OPER\DBW\57783-94.DIV 25/8/9) 18 should be correct and consistent, which is expected to be verified by continuing processing in block 1220.
If the scratchData hashes match, processing continues at block 1060 where database File updates which where involved in the prior processing are reapplied. Thus, the last operations that were performed on the database File are repeated using data from the scratchData data structure based on the operation designated in field 52, the record identifier of field 54, the value of the revised record from field 56 and the revised version of the overall database hash (which is the new or future hash) in field 58 of the data structure shown in FIGURE 3.
A check is then made at block 1065 to determine whether the operation indicated at field 52 of the scratchData data structure 50 is an add operation. If so, then the routine branches to block 1070 which initiates the repetition of an add operation. The value Rn 15 (indicating the content of the revised record from field 56 of the scratchData data structure 50) is placed into the record identified by the identifier If the record exists, then it is replaced with the value R n Otherwise, if record Kn does not exist, then the new is inserted. Thereafter, the routine branches to block 1200 where the fileHash value is updated.
If the check at block 1065 indicates that the operation is not an add operation, then a check is made to determine whether the operation is an update operation (1080). If the check at block 1080 indicates an update operation, then at block 1090, the update operation is performed by replacing the value in record K, with the value (the revised 25 record data content) and the routine branches to block 1200.
P:\OPER\DBW\7783-94.DIV 25/8/97 19 If the check at block 1080 indicates that the operation is not an update operation, then a check is made at block 1100 to determine whether the operation is a delete operation. If the operation is a delete operation, the routine branches to block 1110 where a check is made to insure that the record identified by identifier is absent. If the record identified by is present, then the record is deleted and the routine branches to block 1200.
If the check at block 1100 indicates that the operation is not a delete operation, then a check is made at block 1120 to determine whether the operation is a "null" operation. A null operation is performed at the beginning and at the end of database file processing to prepare the scratchData data structure when a database file is opened and when database is closed. If the operation is a null operation, then the routine branches to block 1220 with expectation of confirming that the database File and the fileHash are both accurate and consistent. If the check at block 1120 indicates that the operation is not a null 15 operation, then an error condition exists, and the operation is suppressed at block 1130.
By reaching block 1130 it has been determined that the scratchData data structure was built incorrectly.
If the checks at blocks 1065, 1080, or 1100 indicate an add, update, or delete 20 operation, then processing branches to block 1200 where the fileHash value is updated to reflect the latest known value after the performance of the respective operation. Thus, the fileHash value is set to what the new file hash should be after the performance of the desired operation based on the protected scratchData and the protected file data. This new fileHash value is encrypted as desired and is written into the fileHash data structure.
Processing continues at block 1220 where the scratchData field 52 is reset to the null operation. If desired, the reset scratchData is encrypted and written into scratchData P:\OPER\DBW\57783-94. DIV 25/8197 Thereafter, the scratchData file in its entirety is deleted.
When the routine begins processing at block 1230, the scratchData data structure has been deleted, and the associated database file has been updated. At this point in the processing, the associated database file data should be correct. In block 1230, a new scratchData database is created which is initialized to the null operation and encrypted as necessary.
After the new scratchData is created, processing is initiated at block 1400 to insure that the database File is consistent with the fileHash. To begin this process, the computed Hash is set equal to zero. In block 1410, a loop is entered which steps through all the records in the database file. After all records are processed in the loop (block 1420), the routine branches to block 1430. If the database file is just being created, then there are zero records, this loop is not executed at all, and the routine branches immediately to 15 block 1430.
In block 1420, the value of the record identified by is read and decrypted as necessary. The computed Hash is then augmented with the new record by computing: computedHash F(computedHash, Hash, ((length of I R)) In the preferred implementation, where F and F-inverse are exclusive OR, this becomes: computedHash computedHash XOR hash ((length of KJ) I II R).
25 The routine branches back to block 1410 until all records are processed exactly once.
P:\OPER\DBW\57783-94.DIV 25/8/97 21 After all records are processed, a check is made at block 1430 to determine whether the new "fileHash" is equal to the "computedHash". If the hashes match, the database file is valid, consistent and untampered, and the routine returns to the main routine where the trustworthy data is then processed. Alternatively, if the hashes do not match, at least one of the database file, fileHash or scratchData has been damaged or tampered with in some way. This error condition is indicated to the user or the application program (1440) whereupon the application is terminated, or depending on the embodiment, the user may be allowed to determine whether to terminate processing or continue processing at his or her own risk.
FIGURE 5 is a flowchart which delineates the sequence of operations in performing an add, update, or delete operation. The routine shown in FIGURE 5 is thus executed if the indicated operation is either an add, update or delete operation. The nomenclature described below assumes that the operation is on record and that for add 15 and update operations, NewR, is the value of the record K, to be inserted. The nomenclature below assumes that for update and delete operations OldR,, is the current value of record when the operations starts. Thus, the two variables identified below are S for add and update operations: "NewR,," which represents the new value of the record K,, to be inserted, and for update and delete operations, "OldR,," which is the current value of 20 record when the operation starts.
At block 2010, the processing begins by preparing to compose a new scratchData data structure by inserting the relevant operation (add, update or delete) in field 52 and inserting the record identifier (including its length) in field 54 of FIGURE 3. A check 25 is made at block 2020 to determine whether the operation is update or delete. If the operation is update or delete (so that an old record is being replaced), then at block 2030, the hash of the aggregate file 22 is computed subtracting out the old record. To remove the old record from the overall hash, the function Finv is utilized as follows: fileHash Finv (fileHash, Hash (length of KJ IKJ joldRJ).
In the preferred implementation where F and Finv are exclusive OR (XOR) this becomes: fileHash fileHash XOR Hash (length of K, K, I oldR) If a delete operation is indicated, then dummy fields 54 and 56 are inserted for the value of the revised record since there is no such value.
If the operation is not update or delete (or after the re-computation of the hash of the existing record in block 2030), control reaches block 2040 where a check is made to determine if the operation is add or update. If the operation is add or update, then the routine branches to block 2050 wherein a new record is processed such that the new record is inserted in the scratchData data structure field 56 and the proposed revised record NewR, is incorporated S, into the overall fileHash using: fileHash F(fileHash, hash (length of K.I IKJ InewRj) In the preferred implementation where F. and Finv are exclusive OR, this becomes: fileHash fileHash XOR Hash (length of KJ KJI newR).
If the operation is not add or update (or after the processing at block 2050), the revised fileHash of the overall data base is inserted into field 58 of P:\OPER\DBW\57783-94.DIV 25/9/97 23 the scratchData data structure 50 (2060). The hash of the concatenation of the newly proposed scratchData fields 52, 54, 56, and 58 are calculated and inserted into field 60 of scratchData data structure 50. The new scratchData is written into the scratchData data structure after encrypting such data if necessary. At least field 60 in the scratchData should be encrypted for protection.
A check is made at 2070 to determine whether the write of such information onto, for example, disk memory was successful. If the write fails, the routine branches to block 2080. Otherwise continue with block 2100.
Even in the case of error, at this point in the processing of the database file, the fileHash should be intact and consistent and future recovery will yield a correct data because if the scratchData was not actually written at all, then recovery will see the last update and re-store the database File to its current state. If the scratch data was partially or i: 15 faultily written, then the final Hash in field 60 will demonstrate the fault to subsequent recovery. No action will be taken and the database File will be allowed to remain in its current (correct) condition. If the scratchData was actually written as desired, then the recovery will see a correct record and apply the latest changes as intended. The update is *then terminated.
If the check at block 2070 indicates that the write was successful, then the intended update operation add, update, delete) is performed at block 2100. A check is then made at block 2110 to determine whether the operation was successfully performed. The operation may fail for a variety of reasons (including, for example, termination of power 25 by the user). In any case, during the recovery process the operation will be reattempted.
In the exemplary embodiment, the routine exits the current add, update or delete routine (2115) and presents the error to the caller.
p:\OPER\DBW\57783-94.DIV 25/8/97 24 If the operation was successful, then updated fileHash value is written and encrypted (2120). Once the fileHash value has been successfully written, as determined by the check at block 2130, then the routine returns with an indication that the database File modification was successful (2145). If the write was not successful, as indicated by the check at block 2130, then the routine branches to block 2140 where the routine exits with an error. Any future recovery attempts will repeat the work performed in an attempt to correctly set the fileHash. While the exemplary embodiment described in conjunction with FIGURES 4 and 5 relate to handling only a single record and a single operation at a time, as will be appreciated by those skilled in the art, it could be extended to handled multiple records.
FIGURE 6 is a flowchart which delineates the sequence of operations involved in closure operations. As indicated at block 3010, a new final scratchData data structure 50 is composed, where a null operation indication is inserted into field 52 and dummy fields are 15 inserted into fields 54 and 56. The final fileHash is inserted in field 58, and the *o scratchData hash is computed of the concatenation of the newly composed scratchData o fields 52, 54, 56, and 58. This hash is then inserted into scratchData structure field The final scratchData data structure is encrypted as appropriate and written to memory.
i Processing steps are then performed to insure that the final fileHash value with the file is saved such as by moving it into the file directory or other secure area (3020). Thereafter, the database file is closed (3030) and the scratchData data structure is erased or otherwise disassociated from the current database file (3040).
The illustrative embodiment provides full file integrity while avoiding the prior problems mentioned above. Full integrity is achieved with the modest extra overhead of storing one hash value associated with each file in a system file directory (in some environments, the entire hard drive could be considered as a file), or in a special security database; additional processing 7- 4-00;14:53 ;Davies Collison Cave ;61 3 92542808 P:\OPER\DW\57781.94.09 8 M 74/2oo when each file is first accessed ("opened") to scan the entire file and re-compute the hash in order to verify it with the stored hash; additional working memory to store the hash for files which are in use at any particular moment; additional processing when any record is added, updated, or removed to compute the revised hash, and re-write this hash for the file.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and 10 scope of the appended claims.
S
Throughout this specification and the claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group 15 of integers or steps but not the exclusion of any other integer or step or group of integers or *t steps.
o* *o 6/ 19 07/04 '00 FRI 15:59 [TX/RX NO 8270]

Claims (42)

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1. A method for protecting a collection of a plurality of discrete data units, which are modified from time to time by an associated data processing system, from tampering to provide security for the collection, comprising the steps of: obtaining an individual hash value for each of the discrete data units by performing a hash operation using at least the data value portion of the data unit which is to be protected; aggregating said individual hash values to obtain an aggregate hash value for said collection using a function that provides for the independent inclusion and deletion of each individual hash value from the aggregate hash value; and revising said aggregate hash value for said collection using said function without executing said aggregating step and by operating on revised ones of said discrete data units.

2. A method according to claim 1, further including the step of deleting an individual hash value using the inverse of said function.

3. A method according to claim 1, further including the step of including as part of the data to be hashed for each data unit, indicia which distinguishes the relative order of each data unit from the other data units.

4. A method according to claim 1, further including the steps of updating one of said discrete data units, and operating on said aggregate hash value using the inverse of said function.

5. A method according to claim 3, wherein said indicia is one of a plurality of sequentially ordered data unit numbers.

6. A method according to claim 3, wherein said indicia is a key value used to associatively index said discrete data units.

7. A method according to claim 1, wherein said discrete data units are records.
8. A method according to claim 1, further including the step of associating a data structure with at least one of said discrete data units containing information regarding the data unit for use if the data processing system is interrupted while updates are underway.
9. A method according to claim 1, wherein said discrete data units are bytes.

10. A method according to claim 1, wherein said discrete data units are sectors.
11. A method according to claim 8, further including the step of providing said data structure with a field for identifying an updating operation to be performed.
12. A method according to claim 8, further including the step of providing said data structure with a field for identifying a revised version of an aggregate hash.
13. A method according to claim 8, further including the step of providing said data structure with a field for storing the hash of fields in said data structure.
14. A method according to claim 8, further including the step of encrypting at least part of said data structure.

15. A method according to claim 1, further including the step of storing said aggregate hash.
16. A method according to claim 1, further including the step of storing said aggregate hash such that it cannot be modified by anyone other than an authorized user.
17. A method according to claim 1, wherein said function is an exclusive OR operation.

18. A method according to claim 1, wherein said function is an addition operation.
19. A method for protecting a collection of individual data groups, including a first data group and a second data group which are modified from time to time, from tampering to provide security for the collection comprising the steps of: performing a predetermined hash operation using both the first data group and indicia in addition to the first data group which specifically identifies said first group; performing a predetermined hash operation on the second data group and indicia identifying said second group; combining the hashes to determine an aggregate hash for said collection using a function; and revising said aggregate hash for said collection without executing said combining step and by extracting said hash of said first group or said hash of said second group using the inverse of said function.

20. A method according to claim 19, further comprising a step of performing an update operation by operating on said aggregate hash using the inverse of said function.
21. A method according to claim 19, wherein said step of combining the hash uses indicia identifying said first data group and indicia identifying said second data group.

22. A method according to claim 19, wherein said first data group and said second data group are records.

23. A method according to claim 19, further including the step of associating a data structure with at least one of said first data group and said second data group, said data structure containing information regarding the respective group for use if the data processing system is interrupted.

24. A method according to claim 23, further including the step of providing said data structure with a field for identifying an updating operation to be performed.

25. A method according to claim 23, further including the step of providing said data structure with a field for identifying a revised version of an aggregate hash.

26. A method according to claim 23, further including the step of providing said data structure with a field for storing the hash of fields in said data structure.

27. A method according to claim 23, further including the step of encrypting at least part of said data structure.

28. A method according to claim 19, further including the step of storing said hash such that it cannot be modified by anyone other than an authorized user.
29. A method according to claim 19, wherein said function is an exclusive OR operation.

30. A method according to claim 29, wherein said function is an addition operation.

31. A method for maintaining a validity indicator of an updatable data file including a plurality of data records and having an associated aggregate file hash comprising the steps of: accessing said aggregate file hash; updating one of said plurality of data records to generate an updated record; and computing an updated aggregate hash using the updated record and the aggregate file hash without aggregating hash values by applying a function having both associative and commutative properties with respect to the aggregate hash.
32. A method according to claim 31, further including the step of associating a data structure with at least one of said plurality of data records containing information regarding said one of said plurality of records for use if the data processing system is interrupted while updating said one of said plurality of data records.
33. A method according to claim 32, further including the step of providing said data structure with a field for identifying an updating operation to be performed on said one of said plurality of data records.
34. A method according to claim 32, further including the step of providing said data structure with a field for identifying a revised version of an aggregate hash.

35. A method according to claim 34, further including the step of providing said data structure with a field for storing the hash of fields in said data structure.
36. A method according to claim 32, further including the step of encrypting at least part of said data structure.

37. A method according to claim 31, further including the step of storing said aggregate hash.

38. A method according to claim 31, further including the step of storing said aggregate hash such that it cannot be modified by anyone other than an authorized user.

39. A method according to claim 31, wherein said function is an exclusive OR operation.

40. A method according to claim 31, wherein said function is an addition operation.
41. A method of protecting a plurality of digital data records from tampering to provide security for the records, each data record including both information content and a record identifier comprising the steps of: combining the informational content of a data record with the record identifier of said data record to determine an aggregate data string; performing a hashing operation on said aggregate data string to determine a hash value; applying a function having both associative and commutative properties to said hash value to generate an aggregate hash value for said data records; and revising said aggregate hash value for said plurality of data records using said function without aggregating hash values and by operating on revised ones of said data records.
42. A method according to claim 41, further including the step of associating a data structure with at least one data record containing information regarding the data record for use if the data processing system is interrupted while an update for said data record is underway.

43. A method according to claim 42, further including the step of providing said data structure with a field for identifying an updating operation to be performed on said data record.
44. A method according to claim 42, further including the step of providing said data structure with a field for identifying a revised version of an aggregate hash for said plurality of data records.

45. A method according to claim 44, further including the step of providing said data structure with a field for storing the hash of fields in said data structure.
46. A method according to claim 42, further including the step of encrypting at least part of said data structure.

47. A method according to claim 44, further including the step of storing said aggregate hash.
48. A method according to claim 44, further including the step of storing said aggregate hash such that it cannot be modified by anyone other than an authorized user.
49. A method according to claim 41, wherein said function is an exclusive OR operation.

50. A method according to claim 41, wherein said function is an addition operation.

51. A method according to claim 1, wherein the obtaining step includes obtaining a substantially non-reversible hash value.

52. A method according to claim 1, wherein the obtaining step includes obtaining a substantially cryptographically secure hash value.

53. A method according to claim 52, wherein the cryptographically secure hash value is obtained using a message digest 5 (MD5) or a secure hash algorithm (SHA) hashing technique.

54. A method according to claim 19, wherein the predetermined hash operation is cryptographically secure.

55. A method according to claim 41, wherein the hashing operation produces a cryptographically secure hash value.
56. A method according to claim 1, wherein the collection of a plurality of discrete data units corresponds to a file in a database data unit, each discrete data unit being one of plural records in the file having a record identifier and a record value with each record having one of said individual hash values and the file having the aggregate hash value, and wherein the existing aggregate hash is incrementally modified by inclusion or deletion of individual hash values.
57. A method for protecting a collection of a plurality of discrete data units substantially as hereinbefore described with reference to the accompanying drawings.

58. A method for protecting a collection of individual data groups substantially as hereinbefore described with reference to the accompanying drawings.

59. A method for maintaining a validity indicator of an updatable data file substantially as hereinbefore described with reference to the accompanying drawings.
60. A method of protecting a plurality of digital data records substantially as hereinbefore described with reference to the accompanying drawings.

DATED this 25th day of August, 1997
ADDISON M. FISCHER
By his Patent Attorneys
DAVIES COLLISON CAVE
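The aggregate hash of claims 1-7, 17 and 18 can be illustrated with a small sketch. The following Python fragment is not part of the patent specification and reflects only one possible reading: it assumes SHA-256 as the individual hash (the claims require only a hash operation; claim 53 names MD5 or SHA), an integer unit identifier as the indicia of claim 3, and exclusive OR as the aggregating function of claim 17.

```python
import hashlib

def unit_hash(unit_id: int, value: bytes) -> int:
    """Hash the data value together with indicia identifying the unit (claims 1 and 3)."""
    digest = hashlib.sha256(unit_id.to_bytes(8, "big") + value).digest()
    return int.from_bytes(digest, "big")

def aggregate_hash(units: dict[int, bytes]) -> int:
    """Aggregate the individual hashes with exclusive OR (claims 1 and 17)."""
    agg = 0
    for unit_id, value in units.items():
        agg ^= unit_hash(unit_id, value)
    return agg

def include(agg: int, unit_id: int, value: bytes) -> int:
    """Independently include one individual hash in the aggregate (claim 1)."""
    return agg ^ unit_hash(unit_id, value)

def delete(agg: int, unit_id: int, value: bytes) -> int:
    """Delete one individual hash using the inverse of the function; XOR is its own inverse (claim 2)."""
    return agg ^ unit_hash(unit_id, value)

def update(agg: int, unit_id: int, old: bytes, new: bytes) -> int:
    """Revise the aggregate for one changed unit without re-aggregating every hash (claims 4 and 31)."""
    return include(delete(agg, unit_id, old), unit_id, new)

# On a small collection the incrementally revised aggregate matches a full recomputation.
units = {1: b"alpha", 2: b"beta", 3: b"gamma"}
agg = aggregate_hash(units)
agg = update(agg, 2, b"beta", b"BETA")
units[2] = b"BETA"
assert agg == aggregate_hash(units)
```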
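Claims 18, 30, 40 and 50 name addition as an alternative aggregating function. One hedged reading, not spelled out in the claims, is addition modulo a fixed width, with subtraction serving as the inverse used for deletion; the width chosen below is an assumption for illustration only.

```python
MODULUS = 1 << 256  # assumed width; the claims do not fix one

def include_add(agg: int, individual_hash: int) -> int:
    """Include an individual hash by modular addition (claims 18, 30, 40, 50)."""
    return (agg + individual_hash) % MODULUS

def delete_add(agg: int, individual_hash: int) -> int:
    """Delete an individual hash using the inverse of addition, i.e. subtraction (claim 2)."""
    return (agg - individual_hash) % MODULUS
```

Because both XOR and modular addition are associative and commutative, the order in which individual hashes are included does not affect the aggregate, which is what allows a single record update to be folded in without revisiting the rest of the collection.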
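Claims 8 and 11-14 (and the parallel claims 23-27, 32-36 and 42-46) associate a data structure with a data unit so that an interrupted update can be recognised and completed. The sketch below is illustrative only; the field names are assumptions rather than terms from the specification, and the optional encryption of claim 14 is omitted. It shows a field identifying the pending operation, a field holding the revised aggregate hash, and a hash over the structure's own fields.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class UpdateRecord:
    operation: str           # updating operation to be performed (claim 11)
    unit_id: int             # which data unit the operation targets (claim 8)
    revised_aggregate: int   # revised version of the aggregate hash (claim 12)
    field_hash: bytes = b""  # hash of the other fields in this structure (claim 13)

    def _payload(self) -> bytes:
        return f"{self.operation}|{self.unit_id}|{self.revised_aggregate}".encode()

    def seal(self) -> None:
        """Record the hash of the structure's fields before the update begins."""
        self.field_hash = hashlib.sha256(self._payload()).digest()

    def intact(self) -> bool:
        """After an interruption, check whether the structure itself is trustworthy."""
        return hashlib.sha256(self._payload()).digest() == self.field_hash
```

On restart, a system following this sketch could check intact() and, if the structure verifies, re-apply the recorded operation and install the recorded revised aggregate hash.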
AU35253/97A 1993-11-19 1997-08-25 A method for protecting data Expired AU720583B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU35253/97A AU720583B2 (en) 1993-11-19 1997-08-25 A method for protecting data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US08/154,520 US5475826A (en) 1993-11-19 1993-11-19 Method for protecting a volatile file using a single hash
US154520 1993-11-19
AU57783/94A AU5778394A (en) 1993-11-19 1994-03-15 A method for protecting a volatile file using a single hash
AU35253/97A AU720583B2 (en) 1993-11-19 1997-08-25 A method for protecting data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU57783/94A Division AU5778394A (en) 1993-11-19 1994-03-15 A method for protecting a volatile file using a single hash

Publications (2)

Publication Number Publication Date
AU3525397A AU3525397A (en) 1997-12-11
AU720583B2 true AU720583B2 (en) 2000-06-08

Family

ID=25631766

Family Applications (1)

Application Number Title Priority Date Filing Date
AU35253/97A Expired AU720583B2 (en) 1993-11-19 1997-08-25 A method for protecting data

Country Status (1)

Country Link
AU (1) AU720583B2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5121495A (en) * 1988-02-02 1992-06-09 Bell Communications Research, Inc. Methods and apparatus for information storage and retrieval utilizing hashing techniques

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014155124A1 (en) * 2013-03-28 2014-10-02 Thunderhead Limited Document tamper detection

Also Published As

Publication number Publication date
AU3525397A (en) 1997-12-11

Similar Documents

Publication Publication Date Title
US5694569A (en) Method for protecting a volatile file using a single hash
US7266689B2 (en) Encryption systems and methods for identifying and coalescing identical objects encrypted with different keys
EP1451664B1 (en) Systems, methods and devices for secure computing
Bellare et al. Incremental cryptography and application to virus protection
US7000118B1 (en) Asymmetric system and method for tamper-proof storage of an audit trial for a database
US8639947B2 (en) Structure preserving database encryption method and system
KR100829977B1 (en) Method for ensuring the integrity of a data record set
EP0849658A2 (en) Secure data processing method and system
CN110837634B (en) Electronic signature method based on hardware encryption machine
US20240078323A1 (en) Counter tree
AU720583B2 (en) A method for protecting data
Kühn Analysis of a database and index encryption scheme–problems and fixes
CN114567503A (en) Encryption method for centrally controlled and trusted data acquisition
CN115221539A (en) Knowledge graph secret state storage method based on secret calculation and searchable encryption
Vacca Encryption keys: Randomness is key to their undoing
Yang et al. An Accountability Scheme for Oblivious RAMs
Wells Achieving data base protection through the use of subkey encryption

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)