WO2019113495A1 - Systems and methods for cryptographic provision of synchronized clocks in distributed systems - Google Patents

Systems and methods for cryptographic provision of synchronized clocks in distributed systems

Info

Publication number
WO2019113495A1
Authority
WO
WIPO (PCT)
Prior art keywords
data set
cryptographic hash
sequence
data
hash values
Prior art date
Application number
PCT/US2018/064547
Other languages
French (fr)
Inventor
Anatoly Yakovenko
Original Assignee
Solana Labs, Inc.
Priority date
Filing date
Publication date
Application filed by Solana Labs, Inc.
Publication of WO2019113495A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/382Payment protocols; Details thereof insuring higher security of transaction
    • G06Q20/3823Payment protocols; Details thereof insuring higher security of transaction combining multiple encryption tools for a transaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/04Payment circuits
    • G06Q20/06Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
    • G06Q20/065Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash
    • G06Q20/0655Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash e-cash managed centrally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3236Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions
    • H04L9/3239Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions involving non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/50Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0442Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption

Definitions

  • a block chain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically can contain a timestamp and transaction data. Every block in the block chain may also contain a reference to its previous block(s).
  • The process of creating a block and appending it to the block chain is called mining.
  • mining can be a computationally-intensive process that requires solving a unique and difficult math problem so that the number of blocks mined each day remains steady.
  • the math problem to be solved may be used as a proof-of-work to check whether a solution is valid, but it may be difficult to find a solution, as this requires a lot of trial and error.
  • Every block in the block chain may contain a reference to its previous block(s), thus creating a chain from the first block (genesis block) to the current one.
  • the block-reference is a cryptographic hash of the previous block. This may ensure the integrity of the chain, as any modification to a block may result in a different hash for the block and thus the reference in the next block may change, resulting in a different hash for every block after.
  • block chains may be inherently resistant to modification of their data.
  • a block chain can serve as an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.
  • clocks and timestamps may be used in distributed systems to indicate the time that a block chain transaction occurred.
  • Current block chain protocols may compare a local clock time to a signed timestamp of the message/transaction data, but it may not be known whether the receiver of the message will accept or reject that timestamp.
  • Atomic clocks for block chain may have their limitations. Block chain cannot rely on a trusted atomic clock. Block chain may not trust a shared network clock. Any externally supplied timestamp value may not necessarily be trusted. Existing systems that rely on externally supplied passage of time may need to trust that the timestamp is not fabricated. Alternatively, an externally supplied timestamp may have to compare against a local time, and reject the timestamp if it is out of bounds. With such existing systems, there may be no guarantee that a node in a given system will reject or accept the timestamp based on the comparison to the local clock. Thus, there is an urgent and unmet need to provide clocks and timestamps that can be trusted by the users of the block chain.
  • a cryptographically secure hash function may be used herein whose output cannot be predicted from the input.
  • Such a cryptographically secure function may be completely executed to generate an output, and such function may be run iteratively in a sequence so that its output from a previous execution may be used as the input in the current execution.
  • the current output and/or how many times it has been executed or called can be periodically recorded.
  • the output can then be recomputed and verified by external computers in parallel by checking one or more iterations in parallel on separate computer(s).
  • Data can be timestamped into the sequence by recording the data and the index when the data is mixed into the sequence. Such timestamp then may guarantee that the data was created sometime before a next output of the secure function is generated in the sequence.
  • Multiple clocks can synchronize amongst each other by mixing their state into each other’s sequences.
  • Systems and methods of the present disclosure may advantageously enable proof of time passage in between entries in a digital record (e.g. sequence) without trusting the entity that is generating the digital record.
  • the present disclosure may advantageously enable creation of a temporal order of events that does not have to be trusted by any of the external clients.
  • the present disclosure may allow generation of a timestamp for a digital event which is a relative timestamp as indicated by entry in the historical record without trusting the creator of the record.
  • the present disclosure also can enable verification of the record with a multi-core computer in a fraction of the time it took to be generated.
  • the methods herein may further enable a combination of different records, optionally generated by different computers such that the different records are continuous and time passage between events recorded on separate machines can be proven based on the records without trusting the machines.
  • the method further comprises verifying the sequence of cryptographic hash values using a preselected number of computer processors by: selecting a preselected number of computer processors; splitting the sequence of cryptographic hash values into the preselected number of sub-sequences, each of the sub-sequences comprising a portion of the sequence of cryptographic hash values; and executing, by the pre-selected number of processors, the cryptographic hash function based on an input of each of the preselected number of sub-sequences to generate as output the preselected number of new sub-sequences; and verifying whether the preselected number of new sub-sequences match the preselected number of sub-sequences.
  • the method further comprises encrypting, by the one or more computer processors, the sequence of cryptographic hash values comprising: inputting at least the first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating the second data set, a counter in computer memory is incremented by a preselected value; and recording the first data set, the first encrypted data set, and the counter in an encrypted sequence of cryptographic hash values in the computer memory.
  • the method further comprises decrypting, by the one or more computer processors, the encrypted sequence of cryptographic hash values using a public key.
  • the first data set comprises a set of cryptographic hash values from a previous data set.
  • the third data set comprises a cryptographic hash value of an event.
  • the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix.
  • the cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3.
  • the preselected value is an integer.
  • Another aspect of the present disclosure provides a computer-implemented method for consensus voting in a block chain, comprising (a) receiving a given consensus vote for a block from a node in the block chain; applying a lockout period to the given consensus vote, wherein the lockout period comprises a time period during which changing the given consensus vote violates a lockout policy; (b) removing, from a vote stack comprising preceding consensus votes made at least in part by the node prior to the given consensus vote, any of the preceding consensus votes having a lockout period that has expired; (c) increasing the lockout period of each of the preceding consensus votes still remaining in the vote stack by a factor greater than 1; (d) adding the given consensus vote to the vote stack; and (e) slashing a stake associated with the node if the node violates the lockout policy for the given consensus vote by voting for an additional block other than the block of (a) before the lockout period expires.
  • the factor is greater than or equal to 1.5. In some embodiments, the factor is greater than or equal to 2.
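  • As an illustration only (not the claimed method itself), the vote-stack and lockout mechanics above can be sketched in Python; the names Vote, apply_vote, and violates_lockout, the base lockout of 1 counter tick, and the doubling factor are assumptions for demonstration.

      # Hypothetical sketch of the lockout-based vote stack described above.
      BASE_LOCKOUT = 1   # initial lockout, in PoH counter ticks (assumed unit)
      FACTOR = 2         # factor > 1 applied to surviving lockouts

      class Vote:
          def __init__(self, block, cast_at):
              self.block = block
              self.lockout = BASE_LOCKOUT
              self.cast_at = cast_at

          def expired(self, now):
              return now > self.cast_at + self.lockout

      def apply_vote(stack, vote, now):
          # (b) remove preceding votes whose lockout period has expired
          stack[:] = [v for v in stack if not v.expired(now)]
          # (c) increase the lockout of each remaining vote by the factor
          for v in stack:
              v.lockout *= FACTOR
          # (d) add the given vote to the stack
          stack.append(vote)

      def violates_lockout(stack, block, now):
          # (e) voting for a different block while an unexpired vote remains is slashable
          return any(v.block != block and not v.expired(now) for v in stack)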
  • Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.
  • Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto.
  • the computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
  • FIG. 1 schematically illustrates a sequence of cryptographic hash values, the hash values generated by iterative execution of a cryptographic hash function
  • FIG. 2 schematically illustrates a sequence of cryptographic hash values, the hash values generated using data of a digital event and iterative execution of a cryptographic hash function
  • FIG. 3 schematically illustrates verification of a recorded sequence of cryptographic hash values by splitting the sequence of cryptographic hash values into subsequences
  • FIG. 4 schematically illustrates synchronization among different clocks by mixing their states into each other's sequences
  • FIG. 5 schematically illustrates a sequence of cryptographic hash values with user event data
  • FIG. 6 schematically illustrates a digital processing device comprising one or more central processing units (CPUs), a memory, a communication interface, and a display;
  • FIG. 7 schematically illustrates a web/mobile application provision system providing browser-based and/or native mobile user interfaces
  • FIG. 8 schematically illustrates a cloud-based web/mobile application provision system comprising an elastically load balanced, auto-scaling web server and application server resources as well synchronously replicated databases;
  • Fig. 9 shows cipher block chaining encryption in which the previous encrypted block(s) data is necessary for the next block to be created.
  • Fig. 10 shows how a data sample from each block is combined with the last valid hash to create a merkle hash of the data
  • Fig. 11 shows the system architecture showing the currently active Proof of History generator node and downstream nodes
  • Fig. 12 shows the Proof of History generator node of Fig. 11 with incoming and outgoing traffic
  • Fig. 13 shows an example of high performance memory management for smart contracts.
  • the term "about" refers to an amount that is near the stated amount by about 10%, 5%, or 1%, including increments therein.
  • a cryptographically secure hash function may be used herein whose output cannot be predicted from the input.
  • Such a cryptographically secure function may be completely executed to generate an output, and such function may be run iteratively in a sequence so that its output from a previous execution may be used as the input in the current execution.
  • the current output and/or how many times it has been executed or called can be periodically recorded.
  • the output can then be recomputed and verified by external computers in parallel by checking one or more iterations in parallel on separate computer(s).
  • Data can be timestamped into the sequence by recording the data and the index when the data is mixed into the sequence. Such timestamp then may guarantee that the data was created sometime before a next output of the secure function is generated in the sequence.
  • Multiple clocks can synchronize amongst each other by mixing their state into each other’s sequences.
  • the present disclosure may advantageously enable proof of time passage in between entries in a digital record (e.g. sequence) without trusting the entity that is generating the digital record.
  • the present disclosure may advantageously enable creation of a temporal order of events that does not have to be trusted by any of the external clients.
  • the present disclosure may allow generation of a timestamp for a digital event which is a relative timestamp as indicated by entry in the historical record without trusting the creator of the record.
  • the present disclosure also can enable verification of the record with a multi-core computer at a fraction of the time it took to be generated.
  • the methods herein may further enable combination of different records, optionally generated by different computers such that the different records are continuous and time passage between events recorded on separate machines can be proven based on the records without trusting the machines.
  • the present disclosure provides a computer-implemented method for cryptographically generating a local timestamp.
  • the method may comprise, in a first operation, activating one or more computer processors that are individually or collectively programmed to execute a cryptographic hash function.
  • In a second operation, at least a first data set from computer memory may be inputted to the cryptographic hash function to generate as output a second data set comprising a first set of cryptographic hash values from the first data set.
  • a counter in computer memory is incremented by a preselected value.
  • In a third operation, at least the first data set, the second data set, and the counter may be recorded to a sequence of cryptographic hash values in the computer memory.
  • In a fourth operation, at least the second data set and a third data set may be inputted to the cryptographic hash function to generate as output a fourth data set comprising a second set of cryptographic hash values from the third data set.
  • the counter in computer memory is incremented by the preselected value.
  • In a fifth operation, at least the second data set, the third data set, the fourth data set, and the counter may be recorded to the sequence of cryptographic hash values in the computer memory.
  • In a sixth operation, using at least the fourth data set as input, the second and third operations may be repeated for a first number of repetitions, the fourth and fifth operations may be repeated for a second number of repetitions, or a combination thereof, to yield a fifth data set.
  • the counter in computer memory may be incremented by the first number of repetitions, the second number of repetitions, or a combination thereof.
  • In a seventh operation, at least the fourth data set, the fifth data set, and the counter may be recorded to the sequence of cryptographic hash values in the computer memory.
  • the counter may then be used to generate the local timestamp for the third data set.
  • the method comprises verifying the sequence of cryptographic hash values using a preselected number of computer processors by: selecting a preselected number of computer processors; splitting the sequence of cryptographic hash values into the preselected number of sub-sequences, each of the sub-sequences comprising a portion of the sequence of cryptographic hash values; and executing, by the pre-selected number of processors, the cryptographic hash function based on an input of each of the preselected number of sub- sequences to generate as output the preselected number of new sub-sequences; and verifying whether the preselected number of new sub-sequences match the preselected number of sub- sequences.
  • the method comprises encrypting, by the one or more computer processors, the sequence of cryptographic hash values comprising: inputting at least the first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating the second data set, a counter in computer memory is incremented by a preselected value; and recording the first data set, the first encrypted data set, and the counter in an encrypted sequence of cryptographic hash values in the computer memory.
  • the method further comprises decrypting, by the one or more computer processors, the encrypted sequence of cryptographic hash values using a public key.
  • the first data set comprises a set of cryptographic hash values from a previous data set.
  • the third data set comprises a cryptographic hash value of an event.
  • the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix.
  • the cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3.
  • the preselected value is an integer.
  • a cryptographic hash function such as, for example, sha256, md5, or sha-1, may be run from a starting value (e.g., first input) to generate an output.
  • the output cannot be predicted without running the function.
  • the output then may be passed as the input into the same function again in the next iteration, thus generating a sequence of calling or executing the same cryptographic hash function multiple times.
  • the number of times the function has been executed/called and the output at each call/execution may be recorded.
  • Methods and systems of the present disclosure may be used with various hash functions, such as, for example, sha-256, sha-224, md5, sha-0, sha-1, sha-2, sha-3, ripemd-160, ripemd-320, and blake.
  • the hash function sha256 may take as input "hash1," which may be a random starting value or a generated output from a previous execution of the same or a different hash function, and generate an output "hash2, 2."
  • the output may include a counter, a number of iterations, an index, or the like, which increases, e.g., by 1 or by any other predetermined value, for each execution of the hash function. In this case, the counter is 2, indicating the hash function has been executed two times.
  • the output may also include one or more hash values, a set of hash values, or any other form of data as its output, which can be input to next execution of the hash function.
  • the input, output, hash function, counter, or other information associated with execution of the hash function may be recorded into a sequence of hash functions/values as disclosed here.
  • the iteration using the cryptographic hash function can only be computed in sequence by a single computer thread, since there is no way to predict what the hash value at a certain index, e.g., 300, is going to be without actually running the algorithm from the starting value 300 times.
  • sha256("any random starting value") -> hash1, 1.
  • the function sha256 may generate the output as hashl and the index as 1.
  • the function sha256 may be repeated as follows:
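  • For instance, a minimal sketch of this repetition (illustrative Python, not the patent's reference implementation; the starting value and the three iterations are arbitrary):

      import hashlib

      state = b"any random starting value"
      counter = 0
      sequence = []                      # recorded (counter, hash) pairs

      for _ in range(3):                 # e.g., three iterations, as in Fig. 1
          state = hashlib.sha256(state).digest()   # previous output is the next input
          counter += 1
          sequence.append((counter, state.hex()))
      # sequence[0] plays the role of "hash1, 1", sequence[1] of "hash2, 2", and so on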
  • Fig. 1 shows an example of a sequence of cryptographic hash values.
  • the sequence 100 may include any number of iterations that is no less than 2; for example, 3 iterations are shown here.
  • the cryptographic hash function 101, e.g., sha256, may be executed iteratively to generate the sequence.
  • a starting value may be randomly selected as the first input.
  • the hash function generates a hash value as its output 102 which may be a string of numbers and letters.
  • the length of the string may be 128 bits, 256 bits, or any other lengths.
  • the output also includes an index/count for the number of iterations.
  • the input, output, index, and the hash function are recorded in the sequence of cryptographic hash values.
  • After the sequence has been generated, it can be verified in less time than it took to generate it, for example, on a multi-core computer. For instance, the computation may be split between core1 and core2, each re-running its own portion of the sequence.
  • Given some number of cores, such as a GPU with 4000 cores, the verifier can split up the sequence of cryptographic functions, e.g., hashes, and their indexes into 4000 slices, and in parallel make sure that each slice is correct from the starting iteration of the function to the last iteration of the function in the slice.
  • the expected time to produce the sequence can be calculated as the number of hashes divided by the hashes per second of 1 core; the expected time to verify the sequence may be calculated as the number of hashes divided by (hashes per second per core x number of cores available).
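  • As a worked example with assumed, purely illustrative numbers:

      hashes = 4_000_000_000            # length of the recorded sequence (assumed)
      rate_per_core = 10_000_000        # hashes per second per core (assumed)
      cores = 4000                      # e.g., a GPU with 4000 cores

      time_to_generate = hashes / rate_per_core            # single thread: 400 seconds
      time_to_verify = hashes / (rate_per_core * cores)    # in parallel: 0.1 seconds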
  • the cryptographic functions herein are functions that are configured to generate an unpredictable change to the output when there is any change to any of the input. Thus, when there is any change in the input, the cryptographic function may have to run to create the output.
  • Combine functions may be similar to cryptographic functions in that a single bit change to any of the inputs to a combine function can result in an unpredictable change to the output of the combine function. Additionally, combine functions can be configured to not introduce a collision attack vector. For example, simple addition or multiplication may introduce an attack vector because the input that is added to the state can be precomputed such that the result of the addition is a previously known value that can generate a predicted output once it is hashed. Two or more functions that satisfy the requirement of the cryptographic hash function or combine function may be used to create a hybrid cryptographic hash function or a hybrid combine function, such as sha256(prepend(hash335, photograph_sha256)).
  • the cryptographic hash function herein may include one or more secure hash algorithms.
  • Non-limiting examples of the cryptographic hash function herein include: sha-0, sha-1, sha-2, sha-3, sha-224, sha-256, sha-384, sha-512, sha-512/224, sha-512/256, sha3-224, sha3-256, sha3-384, sha3-512, shake128, shake256, or their combinations.
  • the sequence of cryptographic hash functions can also be used to record that some piece of data was created before a particular cryptographic function index, e.g., hash index, was generated.
  • a piece of data may be combined with the current hash at the current index using a combine function.
  • the data may be a cryptographically unique hash of arbitrary data. Any data of any finite size can be appended. Using a hash may be convenient because the size of the hash value can be fixed.
  • the combine function can be an append operation, or any operation that is collision resistant.
  • the combine function may combine the data such that all the bits before combination are present after combination.
  • Non-limiting combination functions include, for example, prepend operations and mix operations.
  • a combine function may include an operation that cannot be easily attacked using existing technologies. For example, an attacker may have to create a collision between a hash and the data they are trying to append in order to attack an append operation.
  • Append operations can be collision resistant for this application because an attacker may not know in advance what piece of data may generate a known value when appended to the output and hashed. The attacker may have to attempt many possible strings to append, which may require 2^128 attempts for sha-256. In contrast, arithmetic operations, such as addition, subtraction, multiplication, division, or other operations, may not be used as a combination function because an attacker may precompute a separate sequence in parallel, and may join the real sequence and the separate sequence by inserting a segment of data that can add up to the starting value of the separate sequence.
  • a sequence of hash functions may be called as below:
  • the cryptographic hash of the data is generated, e.g., photograph_sha256.
  • the hash of the data can be appended to the binary data of the current hash, which is hash335, e.g., append(hash335, photograph_sha256).
  • the next hash function output, e.g., hash336, may be computed from the appended binary data of hash335 and the sha256 of the photograph, e.g., sha256(append(hash335, photograph_sha256)).
  • the sha256 of the photograph may also be recorded as part of the sequence output.
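  • A hedged sketch of this mix-in step (the function name record_event and the append-then-hash layout are assumptions for illustration; the patent may combine the values differently):

      import hashlib

      def record_event(state: bytes, counter: int, event_data: bytes):
          """Mix external data into the sequence and return the new state, counter, and record."""
          event_hash = hashlib.sha256(event_data).digest()       # e.g., photograph_sha256
          state = hashlib.sha256(state + event_hash).digest()    # append, then hash
          counter += 1
          record = (counter, state.hex(), event_hash.hex())      # event hash kept in the output
          return state, counter, record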
  • this change to the sequence may be carried on to the next function in the sequence.
  • anyone verifying this sequence can then recreate the change to the sequence, and the verification can still be done in parallel as described above.
  • the sequence may only need to mix a cryptographic value of the event data into the event sequence.
  • the value may be obtained using cryptographic functions such as a hash function.
  • the mapping of the cryptographic value of the event data to the actual event data as well as the actual event data can be stored outside of the sequence, and the actual event data can contain other information within itself such as metadata, real time stamps, and connection IPs.
  • Fig. 2 shows an example of a sequence of cryptographic hash values with a digital event.
  • the digital event 203, with input data, e.g., a hash value of the original data, may be inserted into the sequence.
  • this insertion of the input data into the sequence may be carried on to the next function in the sequence, and the next output 204 is unpredictably changed by the inserted input data.
  • the input, output, the input data, and index are recorded in the sequence of cryptographic hash values.
  • a timestamp for the input data may be generated based on the count/index before which it is inserted into the sequence. In this case, the event data is inserted before output and count/index 204.
  • Fig. 3 shows an example of verifying the sequence of cryptographic hash values.
  • the recorded sequence, which starts with index 301 and ends with another index 303, may be split into 2 or more sub-sequences, for example, one with a starting index 301 and an ending index 302, and the other with a starting index 302 and an ending index 303.
  • the sub-sequences are fed into different computer cores 304, 305 or computer processors that run the cryptographic hash functions in parallel.
  • the recorded output is compared with the generated output for verification.
  • the verification e.g., the fact that the recorded output matches the generated output in each iteration after insertion of the input data, can show that the input data and the timestamp of the input data are accurate.
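  • A minimal sketch of checking one slice (the slicing strategy and the function name verify_slice are assumptions; slices that contain mixed-in event data would also need the recorded event hashes):

      import hashlib

      def verify_slice(start_hash: bytes, recorded_hashes: list) -> bool:
          """Re-run the hash chain from the slice's starting value and compare each output."""
          state = start_hash
          for recorded in recorded_hashes:
              state = hashlib.sha256(state).digest()
              if state != recorded:
                  return False
          return True
      # Each core checks one slice; the recorded sequence supplies every slice's
      # starting hash, so all slices can be verified independently and in parallel.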
  • the sequence herein may be encrypted to improve security of the values of cryptographic functions, thereby improving security of the temporal order of events.
  • a public/private key encryption scheme may be used for encryption of the sequence herein.
  • the encryption function may be executed using the output of a cryptographic function as its input. Given a public and private key pair, the encrypted sequence may be generated as:
  • the encrypted sequence may also be generated as:
  • the encrypted sequence may also be generated as:
  • the encrypted sequence may also be generated as:
  • the verifier can then use the Public key to decrypt the results backwards, for example: i) decrypt(encrypted200, Public Key) -> encrypted199, 199
  • the combine function can cryptographically hash the combined data before and/or after encrypting it. Since the cryptographic hash is not reversible, the sequence may need to record the pre-combined state, and the state right after the combined result is hashed and encrypted.
  • the encryption herein may ensure the encrypted sequence can only be generated by the holder of the private key.
  • Another approach can be to use an encryption scheme on the seed of the chain, and continue the rest of the chain using an algorithm that is optimal for the current hardware. If sha256 is a commercially available intrinsic, then a secondary clock may take an initial seed as the output of the first clock, encrypt it with its private key, and start continuously hashing the encrypted value using sha256. It can then later be verified by everyone that the public key can be used to decrypt the original seed and tie it back to the primary clock. The public key, initial seed, and the encrypted result of the seed may need to be published for verifiers to confirm that it is a continuation from the original clock.
  • the encryption herein requires a private key, a public key, and an encryption function.
  • Non-limiting examples of the encryption functions include symmetric encryption functions, such as the data encryption standard (DES), Triple DES, and the advanced encryption standard (AES), and asymmetric encryption functions, such as Rivest-Shamir-Adleman (RSA).
  • This sequence may be used as a clock to determine the temporal order of event data.
  • the clock can be run as an internet service, continuously creating a sequence of cryptographic function indexes.
  • Event data, described by the cryptographic hash of the data for that event, can be recorded into the existing sequence, which may be published via the Internet. For instance, the longer the sequence, the harder it is for a malicious service to have precomputed it and “faked” the passage of time between events being entered into the sequence. If a sequence running for 1 year generates N hashes, the time between events entered into the sequence at index K and index K + N is about a year apart.
  • a malicious service with a computer that is twice as fast may have to run for 6 months to create a sequence in which the order of the two events that are N indexes apart may be switched.
  • the actual time for calculation may be adjusted as computer speed improves.

Synchronizing multiple clocks
  • the systems and methods herein enable synchronization among multiple clocks by mixing the sequence state from each clock to each other clock(s). For example,
  • Clock A may receive a data packet from Clock B, which contains the last state from Clock B and/or the last state Clock B observed from Clock A.
  • the next state hash in Clock A then may depend on the state from Clock B.
  • hash2b happens sometime before hash4a.
  • each clock can then handle a portion of external traffic, thus the overall system can handle a larger amount of events to track than a single clock at the cost of the accuracy due to network latencies between the clocks.
  • Having multiple synchronized clocks may make deployment more resistant to attacks.
  • one clock may be high bandwidth, and may receive many events to mix into its sequence; and another clock can be a high-speed, low-bandwidth clock that periodically mixes with the high bandwidth clock.
  • the high speed sequence can create a secondary sequence of data that an attacker may have to reverse. When examining the combined stream of sequences, the faster sequence may have a larger number of hashes generated. An attacker may need to generate the larger number of hashes to create a forged record of events that may show them in reverse order.
  • FIG. 4 shows an example of synchronization among two different clocks/sequences of cryptographic hash values.
  • a first cryptographic hash function 401 may generate a hash value 403 which may be received by a second cryptographic hash function 402 and recorded as its output 404.
  • the next output/state 406 of the second hash function may depend on the previous output 404, thus the last state of the first hash function 403.
  • hash value 403 occurs before hash value 406.
  • hash value in the output 406 of the second hash function may be inserted back to the first hash function 405.
  • hash value 406 occurs before any output generated using hash value 405 as its input.
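  • A sketch of the mixing in Fig. 4, with an illustrative Clock class (the class, its tick method, and the seeds are assumptions for demonstration):

      import hashlib

      class Clock:
          def __init__(self, seed: bytes):
              self.state = hashlib.sha256(seed).digest()
              self.counter = 1

          def tick(self, mixed: bytes = b"") -> bytes:
              # Mixing another clock's last state before hashing ties the two records together.
              self.state = hashlib.sha256(self.state + mixed).digest()
              self.counter += 1
              return self.state

      a, b = Clock(b"clock A seed"), Clock(b"clock B seed")
      hb = b.tick()            # clock B advances on its own
      ha = a.tick(mixed=hb)    # clock A mixes B's last state, so hb provably precedes ha
      hb2 = b.tick(mixed=ha)   # B mixes A's state back, interleaving the two sequences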
  • the sequence of hash values, events and the clock disclosed herein can be used in a distributed system. Many users may input data into the distributed system. Such data may include a computer program, a function, or any other information. Each piece of data may have a cryptographic hash value that uniquely identifies the data that is mixed into the sequence. In the case that the data is program(s), the sequence of hashes with the mixed data hashes is the order to execute the programs. Readers of the sequence can then evaluate each program in the exact same order, and each one should expect to see the same result as every other reader.
  • Users that receive a sequence of transactions/hash values can know that if they read and accept message N, then they may have received all the messages prior to message N. This may make it possible to trust the result from making all the transactions consistent across all the users that have read and accepted message N or a later message.
  • the cryptographic function can be constructed such that it can be inefficient to create a custom ASIC, either because standard CPUs from Intel or some other commercially available chip supplier already have an optimized hash function in their ASICs, or because the custom circuit size can be too large.
  • the sequence(s) herein can be executed on a standard CPU that is cooled and overclocked to the highest currently possible speed such that it may not be possible to run a CPU that is twice as fast. While it may be possible to run a faster hashing service by using a more expensive cooling solution, it may only add marginal gains, and may require a significant expense in time and money to completely overcome the real sequence after it has been functioning for a significant amount of time.
  • the cryptographic function can change and become more complex in its execution so that it is less likely to be attacked. As an example, it may require the previous N outputs to be combined into the next input.
  • a competitive market can be created between agents who are running these sequences as clocks, so the agents can compete on generating the longest sequence either using previously agreed checkpoint, or by continuously mixing sequences together.
  • Each agent can propose a generated sequence that is started from the last accepted checkpoint. For instance, each agent can mix its sequence into a central one, and only the fastest one is selected. This may provide a reward for keeping the service running at the fastest possible clock rate, and thus be more resistant to malicious hidden faster clocks.
  • CPUs typically cannot run at lowest possible temperatures with maximum frequencies for a long time because of the stress on the components.
  • CPUs may be rotated while each rotated CPU continuously mixes in whatever sequence it is able to generate. The result may be a continuous chain of hashes that were running at the maximum frequency on at least one of the CPUs.
  • disclosed herein are methods for preventing the service from creating a duplicate sequence with a partial reordering, so that the service can be trusted by the users. For example, external clients want to sequence 3 events, event1, event2, and event3, into a current sequence:
  • Both sequences may start at hash9a, so they may be equal in length (e.g., they may have the same number of hashes). However, only a single valid sequence may be desired by a client using the current sequence. To prevent this attack, each client-generated event may contain within itself the latest hash that the client observed from what it considers to be a valid sequence. Specifically, when a client creates the "Event1" hash, he or she may append the last observed hash, hash5a, as below:
  • Event1 = hash(append(event1 data, hash5a))
  • Event2 = hash(append(event2 data, hash15a))
  • Event3 = hash(append(event3 data, hash10a))
  • Event3 can reference hash25a, and if it is not ordered in the sequence prior to hash25a, the consumers of the sequence know that it is an invalid sequence.
  • the partial reordering attack may then be limited to the number of hashes produced while the client has observed an event and when the event is entered.
  • Clients may generate or use software that does not assume the order is correct for the short period of hashes between the last observed hash and the inserted hash.
  • hash30a has a reference to hash10a, and since there may be a gap between when it was inserted (hash30a) and its reference (hash10a), the true order of events in that gap may not be established.
  • the clients can submit a signature of the event data and the last observed hash instead of just a hash, for example:
  • Event3 = sign(append(event3 data, hash25a), client Private Key)
  • mapping from the event hash or signature to event data and the public key of a client can be published in a separate database for all the clients of the service to verify, for example, as i) (Public Key, hash25a, event3 data) <- lookup Event3
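  • A sketch of the back-reference check (make_event and placement_is_valid are illustrative names; the signed variant would replace the hash with a signature, as in Event3 above):

      import hashlib

      def make_event(event_data: bytes, last_observed_hash: bytes) -> bytes:
          # The client binds its event to the latest sequence hash it has observed.
          return hashlib.sha256(event_data + last_observed_hash).digest()

      def placement_is_valid(sequence_hashes: list, event_hash, referenced_hash) -> bool:
          """An event is valid only if it appears in the sequence after the hash it references."""
          position = {h: i for i, h in enumerate(sequence_hashes)}
          return position[referenced_hash] < position[event_hash]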
  • Fig. 5 shows an example of the sequence of cryptographic hash values.
  • a user event may have a hash value of 2zfd23423, and it may also contain the last hash value before the event data is generated, which is 62f5164c1.
  • the user event and its hash value may link 62f5164c1 to the next generated block with a different hash, 3d039eef, which prevents a malicious leader from reordering the events in a hidden side sequence.
  • a leader may be a node in the block chain that decides the contents of the next block, and a malicious leader may be a leader who attempts to add an incorrect block or a block that may not be intended (e.g., a block that may not be intended by the leader).
  • a specific instance of proof of stake permits quick confirmation of the current sequence produced by a proof of history (PoH) generator, voting for and selecting the next proof of history generator, and disabling any misbehaving validators.
  • This algorithm may depend on messages eventually arriving to all participating nodes within a certain timeout.
  • the term "bond," as used herein, generally refers to the equivalent of a capital expense in proof of work.
  • a miner may purchase resources, such as hardware and electricity (power), and commit such resources to a single branch in a proof of work block chain.
  • a bond may be a coin that a validator commits as collateral while they are validating transactions.
  • the term "slashing," as used herein, generally refers to a solution to a nothing-at-stake problem in a proof of stake (PoS) system.
  • the term "supermajority," as used herein, may be at least 2/3 of the validators weighted by their bonds.
  • a supermajority vote may indicate that the network has reached consensus, and at least 1/3 of the network may have had to vote maliciously for this branch to be invalid. This may put the economic cost of an attack to at least 1/3 of the market cap of the coin.
  • A bonding transaction may take a user-specified amount of coin and move it to a bonding account under the user's identity. Coins in the bonding account may not be spent and may have to remain in the account until the user removes them. The user may only remove stale coins that have timed out. Bonds may be valid after a supermajority of the current stakeholders has confirmed the sequence.
  • a proof of history generator may publish a signature of the state at a predefined period. Each bonded identity may confirm that signature by publishing its own signed signature of the state. The vote may be a simple 'yes' vote, without a 'no' vote. If a supermajority of the bonded identities has voted within a timeout, then this branch may be accepted as valid.
  • N may be a dynamic value based on the ratio of stale to active votes. N may increase as the number of stale votes increase.
  • this may allow the larger branch to recover faster than the smaller branch.
  • An election for a new PoH generator may occur when a failure of the PoH generator is detected.
  • the validator with the largest voting power, or highest public key address if there is a tie, may be selected as the new PoH generator.
  • a supermajority of confirmations may be required on the new sequence. If the new leader fails before a supermajority of confirmations is available, the next highest validator may be selected and a new set of confirmations may be required.
  • a validator may need to vote at a higher PoH sequence counter and the new vote may need to contain the votes it wants to switch. Otherwise, the second vote may be slashable. Vote switching may only occur from a level or height that does not have a supermajority.
  • a secondary may be elected to take over the transactional processing duties. If a secondary exists, it may be considered as the next leader during a primary failure.
  • Secondary and lower rank generators may be promoted to primary at a predefined schedule or if an exception is detected.
  • PoH generators may have an identity that signs the generated sequence. A fork can only occur in case the PoH generator identity has been compromised. A fork may be detected because two different historical records have been published on the same PoH identity.
  • a hardware failure or a bug, or an intentional error in the PoH generator may cause it to generate an invalid state and publish a signature of the state that does not match the local validators result.
  • Validators may publish the correct signature via gossip and this event may trigger a new round of elections. Any validators that accept an invalid state may have their bonds slashed.
  • a network timeout may trigger an election. This may occur in the event of system failure, such as a network failure.
  • Slashing may occur when a validator votes two separate sequences.
  • a proof of duplicate vote may remove the bonded coins from circulation and add them to the mining pool.
  • a vote that includes a previous vote on a contending sequence may not be eligible as proof of duplicate voting. Instead of slashing the bonds, this vote may remove the currently cast vote on the contending sequence.
  • Slashing may also occur if a vote is cast for an invalid hash generated by the PoH generator.
  • the PoH generator may be expected to randomly generate an invalid state which may trigger a fallback to Secondary.
  • Secondary and lower ranked proof of history generators may be proposed and approved.
  • a proposal is cast on the primary generator's sequence.
  • the proposal contains a timeout; if the motion is approved by a supermajority of the vote before the timeout, the Secondary is considered elected and can take over duties as scheduled.
  • Primary can do a soft handover to Secondary by inserting a message into the generated sequence indicating that a handover can occur, or inserting an invalid state and forcing the network to fallback to Secondary.
  • the secondary may be considered as the first fallback during an election.
  • the PoS verifiers may confirm the state hash generated by the PoH generator. There may be economic incentive for them to do no work and simply approve every generated state hash.
  • the PoH generator may generate an invalid hash with a probability P. Any voters for this hash may be slashed. When the hash is generated, the network may immediately promote the secondary elected PoH generator.
  • a verifier that is colluding with the PoH generator may know in advance when the invalid hash is going to be produced and not vote for it. This scenario may be no different than the PoH identity having a larger verifier stake. The PoH generator may still have to do all the work to produce the state hash.
  • Censorship or denial of service may occur when at least x (x < 1/2), e.g., 1/3, of the bond holders refuse to validate any sequences with new bonds.
  • the protocol can defend against this form of attack by dynamically adjusting how fast bonds become stale.
  • the larger partition can fork and censor the Byzantine bond holders.
  • the larger network may recover as the Byzantine bonds become stale with time.
  • the smaller Byzantine partition may not be able to move forward for a longer period of time.
  • the algorithm may work as follows: A majority of the network, e.g., 2/3, may elect a new loom, or equivalently herein, a proof of history generator. The loom may then censor the Byzantine bond holders from participating. A proof of history generator may have to continue generating a sequence to prove the passage of time, until sufficient Byzantine bonds have become stale so that the majority of the network has a 1-x, e.g., 2/3, majority. The rate at which bonds become stale may be dynamically based on what percentage of bonds is active. As such, the Byzantine minority fork of the network may have to wait longer than the majority fork to recover a supermajority. Once a supermajority has been established, slashing may be used to permanently disable the Byzantine bond holders.
  • In proof of replication, nodes that are storing a copy of data have to provide a proof that the copy has actually been stored in memory somewhere for the specified period of time. This may permit fast and streaming verifications of proof of replication, which may be enabled by keeping track of time in a PoH generated sequence. Replication may not be used as a consensus algorithm, though it may be a useful tool to account for the cost of storing the block chain history or state at a high availability.
  • a cipher block chaining (CBC) encryption encrypts each block of data in a sequence, using the previously encrypted block in a binary operation on the input data, where the bits are mixed using the XOR operator.
  • Each replication identity may generate a key by signing a hash that has been generated by a PoH sequence. This may tie the key to a replicator’s identity, and to a specific proof of history sequence. Only specific hashes can be selected (see Hash Selection below).
  • the data set may fully be encrypted block-by-block. Then, to generate a proof, the key is used to seed a pseudorandom number generator that selects a random 32 bytes from each block.
  • a merkle hash may be computed with the selected PoH hash prepended to each slice.
  • the root may be published, along with the key, and the selected hash that was generated.
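  • A simplified sketch of generating such a fast proof (the byte-selection, the use of Python's random module as the pseudorandom generator, and the merkle construction are assumptions for illustration, not the exact claimed scheme):

      import hashlib
      import random

      def replication_proof(encrypted_blocks: list, signed_poh_hash: bytes) -> bytes:
          """Select 32 pseudorandom bytes from each encrypted block and merkle-hash the slices."""
          rng = random.Random(signed_poh_hash)          # the signed hash/key seeds the selection
          leaves = []
          for block in encrypted_blocks:                # assumes blocks of more than 32 bytes
              offset = rng.randrange(0, len(block) - 31)
              slice_ = block[offset:offset + 32]
              leaves.append(hashlib.sha256(signed_poh_hash + slice_).digest())  # PoH hash prepended
          # Reduce the leaves pairwise into a merkle root (simplified: odd leaf carried up).
          while len(leaves) > 1:
              pairs = [leaves[i:i + 2] for i in range(0, len(leaves), 2)]
              leaves = [hashlib.sha256(b"".join(p)).digest() for p in pairs]
          return leaves[0]   # published along with the key and the selected PoH hash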
  • the replication node may be required to publish another proof within N hashes as they are generated by the proof of history generator, where N may correspond to approximately 1/2 the time it takes to encrypt the data.
  • the proof of history generator may publish specific hashes for proof of replication at a predefined period.
  • the replicator node may select the next published hash for generating the proof.
  • the hash may be signed and random slices may be selected from the blocks to create the merkle root.
  • the data may be re-encrypted with a new CBC key.
  • each core can stream encryption for each identity.
  • the total space required may be 2 blocks * N cores, since the previous encrypted block may be necessary to generate the next one.
  • Each core can then be used to generate all the proofs that derived from the current encrypted block.
  • the total time to verify proofs may be equal to the time it takes to encrypt.
  • the proofs themselves may consume a few random bytes from the block, in which case the amount of data to hash may be significantly lower than the encrypted block size.
  • the number of replication identities that can be verified at the same time may be equal to the number of available cores.
  • Rotation may need to be sufficiently slow such that it is practical to verify replication proofs on GPU hardware, which is slower per core than on CPUs.
  • a proof of history generator may publish a hash to be used by the entire network for encrypting one or more proofs of replication, and for use as the pseudorandom number generator for byte selection in fast proofs.
  • a hash may be published at a periodic counter that is roughly equal to 1/2 the time it takes to encrypt the data set.
  • Each replication identity may use the same hash, and use the signed result of the hash as the seed for byte selection, or the encryption key.
  • the period within which each replicator may provide a proof may be smaller than the encryption time; otherwise, the replicator could stream the encryption and delete it for each proof.
  • a malicious generator may inject data into the sequence prior to this hash to generate a specific hash. This attack is discussed in depth here.
  • the proof of history node may not validate the submitted proof(s) of replication. It may only keep track of a number of pending and verified proofs submitted by a replicator. A proof may become verified when the replicator is able to have the proof signed by 2/3 of the validators in the network.
  • the verifications may be collected by the replicator via a network (e.g., a peer to peer gossip network), and submitted as one packet that contains signatures from at least 2/3 of the validators in the network.
  • This packet may verify all the proofs prior to a specific hash generated by the proof of history sequence, and can contain multiple replicator identities at once.
  • a malicious user may create many replicator identities and spam the network with bad proofs.
  • nodes may be required to provide the encrypted data and the entire merkle tree to the rest of the network when they request verification.
  • the proof of replication may be configured for cheap verification of any additional proofs, as they take no additional space. However, each identity may consume one core of encryption time.
  • Verifiers may approve proofs without verification. Economic incentives may be aligned with verifiers actually doing work. The payout for proof of replication (PoRep) proofs may come out of the same mining budget as PoS verifiers. As such, increasing the replication target without actually verifying may only transfer mining and fee payout from PoS verifiers to PoRep producers.
  • Another approach may be to have each PoRep produce one or more false proofs.
  • the PoS verifier’s job may be to find the false proofs.
  • PoRep producers may prove that the proof is false by producing the function that produced the false data.
  • a replicator node may attempt to partially erase some of the data to avoid storing the entire state. Such an attack may be made difficult by the number of proofs and the randomness of the seed.

Collusion with PoH generator
  • the signed hash may be used to seed the sample. If a replicator could select a specific hash in advance, then the replicator may erase all bytes that are not going to be sampled.
  • a replicator identity that is colluding with the proof of history generator may inject a specific transaction at the end of the sequence before the predefined hash for random byte selection is generated. With enough cores, an attacker may generate a hash that is preferable to the replicator identity.
  • This attack may only benefit a single replicator identity. Since all the identities may have to use the same exact hash that is cryptographically signed with the elliptic curve digital signature algorithm (ECDSA), or an equivalent elliptic curve cryptographic key scheme, the resulting signature may be unique for each replicator identity and collision resistant. A single replicator identity may only have marginal gains.
  • the cost of adding an additional replicator identity may be equal to the cost of storage.
  • the cost of adding extra computational capacity to verify all the replicator identities may be equal to the cost of a CPU or GPU core per replication identity.
  • the consensus protocol chosen for the network can select a replication target, and award the replication proofs that meet the desired characteristics, such as availability on the network, bandwidth, and geolocation.
  • the PoS verifiers may confirm PoRep without doing any work.
  • the economic incentives may be aligned with the PoS verifiers to do work, such as by splitting the mining payout between the PoS verifiers and the PoRep replication nodes.
  • the PoRep verifiers can submit false proofs a small percentage of the time. They can prove the proof is false by providing the function that generated the false data. Any PoS verifier that confirmed a false proof may be slashed.
  • the proof of history generator may need to broadcast the data to other users or nodes, e.g., the rest of the network.
  • Nodes disclosed herein are verifiers in the network of the data that is broadcast by the Proof of History generator. To maintain high availability it is the goal of the network to maximize the number of verifier nodes. As the network bandwidth of the generator may have physical limit, the generator may need to efficiently split up the data between nodes with minimal overhead.
  • the generator includes an algorithm to split the data.
  • Nodes can be arranged using a predetermined structure, for example, as a heap data structure weighted by their Proof of Stake bond size.
  • the top of the heap can be the generator.
  • the next layer of nodes can represent the children.
  • the Generator is the parent of the children, and the first layer of nodes can be the parents of the second layer.
  • the number of children for each layer can depend on the distribution of the bonds.
  • at least 2/3rds + 1 of the weighted stake needs to be in the first layer.
  • a maximum packet size, limited by the network capacity and/or the Internet Protocol, determines the packet sizes that can fit into the available bandwidth of the generator.
  • the generator can split up the data into packets, optionally of same data size, one for each child node.
  • the packets may have a monotonically increasing index.
  • Each child node can compute the same heap data structure and be aware of its index in its peer list at the same layer. When the child node receives the packet with the index value that matches the child's index value, it may retransmit the packet to one or more nodes in its list, for example, every peer in its list.
  • all the peers may then receive all the packets that were transmitted by the generator.
  • some of the children may be unavailable due to hardware or network failures.
  • the generator can insert erasure coded packets to one or more child nodes, e.g., using Reed-Solomon coding, or other coding techniques. Child nodes receiving such packets can then reconstruct the full packet stream if some of the packets are missing from the erasure coded packets.
  • with some threshold, e.g. 10%, the generator can use 10% of its bandwidth for erasure coding, and all the children may be able to reconstruct the full data set.
  • Using erasure coding may allow accounting for inconsistencies in the state of the heap data structure between the nodes.
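  • As a rough illustration of the erasure-coding idea above, the sketch below uses a single XOR parity packet as a simplified stand-in for Reed-Solomon coding; it can repair only one lost packet per group, and the packet counts and sizes are illustrative rather than part of the disclosed protocol.

```python
# Simplified illustration of erasure-coded broadcast recovery. A single XOR
# parity packet stands in for Reed-Solomon coding, so only one missing packet
# per group can be rebuilt; a real deployment would use a proper RS library.

def make_parity(packets):
    """XOR all equal-length data packets into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity, group_size):
    """Reconstruct the single missing packet in a group of packets."""
    missing = [i for i in range(group_size) if i not in received]
    if len(missing) != 1:
        raise ValueError("single-parity code can only repair one loss")
    rebuilt = bytearray(parity)
    for pkt in received.values():
        for i, b in enumerate(pkt):
            rebuilt[i] ^= b
    return missing[0], bytes(rebuilt)

packets = [bytes([i]) * 8 for i in range(10)]               # 10 data packets
parity = make_parity(packets)                               # ~10% extra bandwidth
received = {i: p for i, p in enumerate(packets) if i != 3}  # packet 3 was lost
idx, data = recover(received, parity, 10)
assert idx == 3 and data == packets[3]
```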
  • the children may retransmit the data to the next layer down.
  • the child nodes can have multiple parent nodes.
  • each parent node can retransmit different packets down to different children, to avoid overlapping with other parent nodes.
  • Each child then can forward the parent packets they receive to their peers, while avoiding sending duplicate packets.
  • an algorithm may be used for controlling packet retransmission for child nodes with multiple parent nodes.
  • a non-limiting example of an algorithm is: (a) if the packet is from a parent, and index % parents.size != parent.index, the child may skip retransmitting it; otherwise the child may retransmit the packet to its peers.
  • the algorithm is designed to forward the missing packets from each parent node to each peer child node while avoiding overlap with any other peer doing the same thing.
  • the child node can then forward the packets to the downstream nodes, i.e., its child nodes.
  • a non-limiting example of an algorithm is:
  • the algorithm is designed such that the different packets are forwarded from each parent to each child node downstream, thus minimizing the number of packets each subsequent child has to retransmit to their peers.
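  • The sketch below illustrates the parent-partitioning rule reconstructed above, expressed positively: a child with several parents forwards a packet to its peers only when the packet's index maps to the parent it arrived from, so children holding different parent indices forward disjoint packet sets. Function and variable names are hypothetical.

```python
# Index-partitioned retransmission for a child node with multiple parents.
# Packets whose index is not "owned" by the sending parent are skipped, which
# is the complement of the skip rule sketched in the bullet above.

def should_retransmit_to_peers(packet_index, parent_index, num_parents):
    """Forward only packets whose index maps to the parent they came from."""
    return packet_index % num_parents == parent_index

# Two parents (indices 0 and 1): each child forwards a disjoint half of the stream.
for idx in range(6):
    owner = next(p for p in range(2) if should_retransmit_to_peers(idx, p, 2))
    print(f"packet {idx}: forwarded by the child when received from parent {owner}")
```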
  • the erasure coded packets as disclosed herein may allow the network to recover the full data set.
  • packets that are transmitted through the network may be signed by the originator of the packet.
  • the signatures can use Elliptic Curve Cryptography. Other cryptographic schemes include, but are not limited to, RSA or other asymmetric encryption schemes.
  • the verification of the packets can be a relatively slow process, e.g., taking on the order of 10s of milliseconds; to maintain high performance on the network, the verification may be accelerated with GPUs.
  • the acceleration can be pipelined with receiving data packets from the network. In some embodiments, the verification is accelerated by at least 10x with a single GPU card.
  • a non-limiting example of a method for accelerated signature verification includes an operating system thread that may continuously read data packets, e.g., from a network socket, into a memory-allocated buffer. When the buffer is full, or the data source, e.g., the socket, has no more packets, the thread may send the pointer of the buffer to a GPU queue and wake up the GPU controller thread. If there are no more packets, the thread may also perform a blocking read call, which can wake up the thread when new network data is available from that socket.
  • the GPU queue may include one or more data buffers.
  • the GPU controller thread can read data in one or more buffers from the queue simultaneously or sequentially, and send the data batch from the buffer(s), splitting it among more than one available GPU.
  • the output of the GPU processing can include a vector of Boolean values that indicate whether the data packet passed or failed signature verification, optionally for one or more batches of data packets.
  • One or more batches of data packets, and the vector of verifications, are queued for processing in the next stage of a pipeline.
  • the GPU controller thread then may check its queue to determine if there are any more batches to process.
  • the processing thread can read one or more buffers from the queue simultaneously or sequentially and processes them in parallel alongside the verification vector.
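  • A minimal sketch of the reader-to-verifier pipeline described above, assuming a worker thread as a stand-in for the GPU stage and a SHA-256 tag check as a stand-in for elliptic-curve signature verification; buffer and batch sizes are illustrative.

```python
# Reader thread fills fixed-size batches and hands them to a verifier thread,
# which stands in for the GPU controller; a SHA-256 tag check stands in for
# real signature verification so the sketch is self-contained.
import hashlib, queue, threading

BATCH = 4
work_q = queue.Queue()

def verify(packet):
    body, tag = packet[:-32], packet[-32:]
    return hashlib.sha256(body).digest() == tag      # placeholder "signature"

def verifier():
    while True:
        batch = work_q.get()
        if batch is None:                            # sentinel: no more work
            break
        results = [verify(p) for p in batch]         # would be one GPU launch
        print("verified batch:", results)

worker = threading.Thread(target=verifier)
worker.start()

packets = [b"payload-%d" % i for i in range(8)]
signed = [p + hashlib.sha256(p).digest() for p in packets]
for i in range(0, len(signed), BATCH):               # reader role: batch and enqueue
    work_q.put(signed[i:i + BATCH])
work_q.put(None)
worker.join()
```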
  • An architecture of a system that may be used to implement stake consensus is shown in FIG. 11.
  • a loom, referred to equivalently herein as a proof of history generator, is an elected proof of history generator. It may consume arbitrary user transactions and output a proof of history sequence of all the transactions that guarantees a unique global order in the system. After each batch of transactions, the loom may output a signature of the state that is the result of running the transactions in that order. This signature and the last transaction's hash may be signed with the identity of the loom.
  • a naive hash table may be indexed by a user’s address. Each cell may include the full address of the user and the memory required for this computation.
  • transactions may include [<20 byte ripemd-160(sha256(user's public key))> <8 byte account value> <4 bytes unused>] for a total of 32 bytes.
  • a proof of stake bond table may include [<20 byte ripemd-160(sha256(user's public key))> <64 bit bond value> <last cast vote, 8 bytes> <20 byte list of uncast votes> <20 bytes unused>] for a total of 64 bytes.
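  • As an illustration of the 32-byte transaction cell layout above, the sketch below packs the fields with Python's struct module. The public key is a placeholder, and a truncated SHA-256 digest stands in for RIPEMD-160, which is not present in every Python build.

```python
# Packing the 32-byte naive account cell: 20-byte address, 8-byte account
# value, 4 unused bytes. Purely illustrative of the layout described above.
import hashlib, struct

CELL = struct.Struct("<20sQ4x")           # 20 + 8 + 4 = 32 bytes, no padding
pubkey = b"\x02" * 33                     # placeholder public key bytes
# The layout calls for ripemd160(sha256(pubkey)); a truncated SHA-256 digest
# is used here only so the example runs on builds without RIPEMD-160.
addr = hashlib.sha256(pubkey).digest()[:20]
cell = CELL.pack(addr, 1_000_000)         # 8-byte account value in base units
assert CELL.size == 32 and len(cell) == 32
```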
  • These nodes may replicate the block chain state and provide high availability of the block chain state.
  • the replication target may be selected by the consensus algorithm, and the validators in the consensus algorithm select and vote for the proof of replication nodes they approve of based on off-chain defined criteria.
  • the network may be configured with a minimum proof of stake bond size, and a requirement for a single replicator identity per bond.
  • nodes may be consuming bandwidth from spools. They are virtual nodes, and can run on the same machines as the spools or the Loom, or on separate machines that are specific to the consensus algorithm configured for this network.
Examples of network limits
  • the loom takes incoming user packets, orders them in the most efficient way possible, and sequences them into a proof of history sequence that is published to downstream spools.
  • Efficiency may be based on memory access patterns of the transactions, so the transactions may be ordered to minimize faults and to maximize prefetching.
  • the minimal payload that can be supported may be 1 destination account.
  • a payload may be: [<to, 20 byte ripemd-160 hash>, <amount, 8 bytes>].
  • the minimum size may be 24 bytes.
  • the proof of history sequence may add an additional hash and counter to the output: [<current hash, 20 bytes>, <8 byte counter>, <last valid hash, 20 bytes, 8 byte counter>, <unused, 6 bits>, <size, 10 bits>, <payload>, <fee, 8 bytes>, <from, 32 bytes>, <signature, 32 bytes>].
  • Block chains or networks may generate a significantly large volume of data (e.g., almost 4 petabytes of data a year at full speed and capacity with 710,000 transactions per second on a 1 gigabit network). If every node in the network is required to store all that data, it may limit network membership to the centralized few that have sufficient storage capacity. Disclosed herein is a Proof of History system and method that can be leveraged to avoid this situation, allowing a fast-to-verify implementation of Proof of Replication and enabling a bit torrent-esque distribution of the ledger across all nodes in the network.
  • PoRep Proof of Replication
  • the basic idea of Proof of Replication may be to encrypt a dataset with a public symmetric key using CBC encryption and then hash the encrypted dataset.
  • the simple solution can be to force the hash to be done on the reverse of the encryption, or in some embodiments, with a random order. This can ensure that all the data is present during the generation of the proof and it may also require the validator to have the entirety of the encrypted data present for verification of every proof of every identity. So the space required to validate can be (Number of CBC keys)*(data size).
  • the blocks can stay in the exact same order for every PoRep and verification can stream the data and verify all the proofs in a single batch. This way multiple proofs can be verified concurrently, each one on its own CUDA core.
  • the network can support up to 14k replication identities or symmetric keys.
  • the total space required for verification can be (2 CBC blocks) * (Number of CBC keys), with a core count equal to (Number of CBC keys).
  • a CBC block is expected to be 1MB in size.
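  • A minimal sketch of the Proof of Replication idea above, assuming a SHA-256 keystream chain as a stand-in for real AES-CBC encryption and a small illustrative block size in place of 1MB CBC blocks; the proof hashes the encrypted blocks in reverse order so the full encryption must exist before the proof can be produced.

```python
# "Encrypt" a data set block by block with a CBC-style chain keyed by the
# replicator identity, then hash the encrypted blocks in reverse order.
import hashlib

BLOCK = 64  # illustrative; the text above assumes 1MB CBC blocks

def cbc_encrypt(data, key):
    prev = hashlib.sha256(key).digest()              # acts as the IV
    blocks = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK].ljust(BLOCK, b"\0")
        stream = hashlib.sha256(prev + key).digest() * (BLOCK // 32)
        enc = bytes(a ^ b for a, b in zip(chunk, stream))
        blocks.append(enc)
        prev = enc[:32]                              # chain into the next block
    return blocks

def porep_proof(encrypted_blocks):
    h = hashlib.sha256()
    for blk in reversed(encrypted_blocks):           # reverse-order hashing
        h.update(blk)
    return h.hexdigest()

identity_key = b"replicator-identity-0"
print(porep_proof(cbc_encrypt(b"ledger segment " * 100, identity_key)))
```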
  • validators for PoRep can be the same validators that are verifying transactions. They may have some stake that they have put up as collateral that ensures that their work is honest. If it can be proven that a validator verified a fake PoRep, then the validator’s stake can be slashed.
  • Replicators can be specialized thin clients, which are clients that do not need to fully replicate the state that is processed by the network.
  • the clients or replicators can download a part of the ledger and store it and provide PoReps of storing the ledger. For each verified PoRep, replicators may earn a reward of sol from the mining pool.
  • One or more of the following constraints may exist: 1) at most 14k replication identities can be used, because that's how many CUDA (GPU) cores can fit in a $5k box at the moment; and 2) verification requires generating the CBC (Cipher Block Chaining) blocks. That can require space of 2 blocks per identity, and 1 CUDA core per identity for the same dataset.
  • the network can set the replication target number, e.g., 1k.
  • 1k PoRep identities can be created from signatures of a PoH hash, so they can be tied to a specific PoH hash. It may not matter who creates them; they may simply be the last 1k validation signatures for the ledger at that count. This can be just the initial batch of identities, because identity rotation can be staggered.
  • Replicator identities can be the CBC encryption keys.
  • replicators that want to create PoRep proofs can sign the PoH hash at that count. That signature can be the seed used to pick the block and identity to replicate.
  • a block may be 1TB of ledger.
  • Periodically, at a specific PoH count, replicators can submit PoRep proofs for their selected block.
  • a signature of the PoH hash at that count can be the seed used to sample the 1TB encrypted block, and hash it. This can be done faster than it takes to encrypt the 1TB block with the original identity.
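  • The sketch below illustrates seeding the byte selection from a signature of the PoH hash, with HMAC-SHA256 standing in for an ECDSA signature; offsets, sample counts, and block contents are illustrative only.

```python
# Derive a per-identity seed from the PoH hash, sample a few byte offsets in
# the encrypted block, and hash the sampled bytes into the submitted proof.
import hashlib, hmac, random

def sample_proof(encrypted_block, poh_hash, identity_key, samples=16):
    seed = hmac.new(identity_key, poh_hash, hashlib.sha256).digest()
    rng = random.Random(seed)                     # deterministic per identity
    h = hashlib.sha256()
    for _ in range(samples):
        offset = rng.randrange(len(encrypted_block))
        h.update(encrypted_block[offset:offset + 1])
    return h.hexdigest()

block = bytes(range(256)) * 64                    # stand-in encrypted block
poh_hash = hashlib.sha256(b"poh count 1000").digest()
print(sample_proof(block, poh_hash, b"replicator-identity-0"))
```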
  • Replicators can submit some number of fake proofs, which they can prove to be fake by providing the seed for the fake hash result. Periodically, at a specific PoH count, in some embodiments, validators can sign the hash and use the signature to select the 1TB block that they need to validate. They can batch all the identities and proofs and submit approval for all the verified ones. The replicator client can submit the proofs that the fake proofs are fake.
  • the next identity can be generated by hashing itself with a PoH hash, or via some other process based on the validation signatures.
  • the game or procedure between validators and replicators is over random blocks and random encryption identities and random data samples.
  • the goal of randomization may be to prevent colluding groups from having overlap on data or validation.
  • Replicator clients can fish for lazy validators by submitting fake proofs that they can prove are fake.
  • Replication identities can be symmetric encryption keys. Some of the replication identities may be storage replication targets. Many more client identities can exist than replicator identities, so an unlimited number of clients can provide proofs of the same replicator identity.
  • the client(s) may be forced to store for multiple rounds before receiving a reward.
  • the present disclosure also provides a decentralized storage network for a multi petabyte ledger.
  • the flexible smart contracts engine disclosed elsewhere herein can be used as virtualized disk space for contracts.
  • the systems and methods herein can schedule asynchronous and eventually guaranteed transactions on the network that are called Signals.
  • the runtime can guarantee that all the state of the contract has been generated by only the contract code, and that all user inputs can be recorded on chain. Thus, all resident contract state can be recreated by running the ledger.
  • Contracts can create a Signal which can move the memory that belongs to the PublicKey into the ledger, thus reducing its costs. The contract can later call a load to bring that memory back.
  • Storage can be fairly simple.
  • the memory can be simply sent as user data over a transaction and gets put into PoH.
  • the PublicKey then can store the location on the "ledger" of where the memory may be resident.
  • Load can be much more complicated. Since nodes may not be required to keep a full copy of the ledger, each node may only maintain a randomly selected stripe. To do a load, each verifier may have to retrieve the state from the network, and then agree that the loaded state is correct. Because this operation can be slow, due to the fact that the data needs to be fetched over the network instead of being locally available, it cannot be synchronous with respect to all other operations on the chain, especially finality, which is a measurement of how fast all the nodes in the network agree on a state. So this operation may be broken up into several asynchronous steps that eventually complete:
  • each verifier fetches the data
  • each verifier computes the hash of the loaded data
  • Many rounds of finality can occur between step 1 and step 4.
  • the data the contract writes to the ledger can be recomputed from the ledger itself. So this piece of data can actually be forgotten after the load operation completes. It can indicate that when the next round of ledger slices is selected, the replicator nodes may not have to store the data bytes that were loaded into resident memory. They can actually be removed from replication.
  • What is kept or stored can be the hash of those bytes that was mixed into Proof of History, so the integrity of the clock can remain. For example, as shown in Table 1, if the data at PoH count 1000000 is memory that is loaded into the virtual machine, 65600 bytes at offset 23423288 to offset 23488888 can be removed from replication. The next round of replication can skip storing this data.
  • the present disclosure includes how smart contracts work on the Proof of History based block chain herein.
  • the present disclosure includes one or more of: high performance bytecode designed for fast verification and compilation to native code, memory management that may be designed for fast analysis of data dependencies, and execution of smart contracts that can be parallelized across as many cores as the system can provide.
  • smart contract execution can be based on how operating systems load and execute dynamic code in the kernel.
  • an untrusted client, or Userspace in Operating Systems terms, can create a program in the front-end language of her choice (like C/C++/Rust/Lua), and compile it with LLVM (Low Level Virtual Machine compiler) to the Solana Bytecode object.
  • This object file can be a standard ELF (Executable and Linkable Format) file.
  • ELF Executable and Linkable Format
  • The frontend to LLVM may take a user-supplied program in a higher-level language such as C/C++/Rust/Lua, or may be called as a library from JavaScript or any other language.
  • LLVM toolchain may perform the actual work of converting the program to an ELF.
  • the output can be an ELF with a specific bytecode as its target that is designed for quick verification and conversion to the local machine instruction set that Solana can be running on.
  • the ELF can be verified, loaded and executed. The verifier can check whether the bytecode is valid and safe to execute and may convert it to the local machine instruction set. The loader can prepare the memory necessary to load the code and mark the segment as executable. The runtime can then call the program with arguments and manage the changes to the virtual machine.
  • the specific bytecode may not matter. While, as disclosed herein, the bytecode can be based on Berkeley Packet Filter, anything that can JIT (just-in-time compile) to x86 (or SPIR-V, a GPU-specific bytecode format) can be used herein. The reason for basing the bytecode on BPF may be that what the kernel does with untrusted code can overlap almost exactly with one or more requirements:
  • Memory management may be the most performance-critical part of the engine. If all the contracts that are scheduled have no data dependencies, they can all be executed concurrently. If successful, the performance of the engine can scale with the number of cores available to execute the contracts. The throughput can double every 2 years with Moore’s law.
  • Memory management can start with the ELF (Executable and Linkable Format) itself.
  • contracts may be constrained to be a single read-only code and data segment. It can be composed of read-only executable code and read-only data, with no mutable global variables or mutable static variables. This requirement can provide a simple solution for requirement 4 as disclosed above.
  • the runtime can provide an interface for creating state. This interface can be invoked through a transaction just like any other contract method.
  • If the page address public key has no current memory region associated with it, the region can be allocated and the allocated memory set to 0. If the page address public key is unassigned, it can be assigned to the contract public key. The only code that can modify the memory that is assigned to this is code that is provided by the contract. Thus, all state transitions in that memory can be done by the contract. If one or more of the conditions disclosed herein fail, the call may fail. This interface can be called with a simple transaction.
  • the call structure can be a basic transaction. This structure can describe the context of the transaction:
  • the two most important parts of this structure for the contract can be the keys and the user data.
  • the runtime may translate the keys to the memory locations associated with them, i.e., the memory that was created with allocate memory.
  • the user data can be the random bits of memory that the user has supplied for the call. As such, the users can add dynamic external state into the runtime.
  • Smart contracts can also examine the required sigs vector to check which of the PublicKeys in the call have a valid signature. By the time the Call gets to the contract's method, the signatures may have already been verified.
  • a contract implements the following interface:
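  • The interface itself is not reproduced in this text; the sketch below is a hypothetical Call context and contract entry point assembled from the parts mentioned above (keys, a required-signatures vector, and user data), with all field and function names assumed for illustration.

```python
# Hypothetical Call context: the runtime maps keys to memory pages and tells
# the contract which keys carried valid signatures before the method runs.
from dataclasses import dataclass
from typing import List

@dataclass
class Call:
    keys: List[bytes]           # public keys, translated by the runtime to pages
    required_sigs: List[bool]   # which keys carried a valid signature
    user_data: bytes            # caller-supplied bytes passed into the call

def contract_method(call: Call, pages: List[bytearray]) -> None:
    # Mutate the first page only when the first key was actually signed.
    if call.required_sigs and call.required_sigs[0]:
        pages[0][:len(call.user_data)] = call.user_data

call = Call(keys=[b"\x01" * 32], required_sigs=[True], user_data=b"hi")
page = bytearray(64)
contract_method(call, [page])
assert page[:2] == b"hi"
```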
  • the runtime may only store the modified Page if one or more of the following rules are met:
  • Signals can be asynchronous calls or asynchronous and guaranteed function calls, i.e., calls during which the call site is not blocked from executing while waiting for the called code to finish, to programs loaded into the network. They can be created just like pages are allocated and assigned to a contract.
  • the memory that is owned by the signal’s page can store a Call structure that the runtime can examine:
  • a signal can be a way for a contract to schedule itself, or call other contracts. Once a signal becomes set, the runtime guarantees its eventual execution.
  • a client can call a contract with any number of signals, which can modify the memory of the signal and construct whatever asynchronous method the contract needs to call, including allocate memory or create signal.
Notes
  • a contract cannot allocate memory synchronously.
  • the client first can make a transaction for allocate memory, and then it can call the contract method with the page that has some memory allocated to it.
  • a contract can schedule a signal to allocate memory and call itself back in the future.
  • the runtime can execute all the non-overlapping contract calls in parallel. As an example, some preliminary results show that over 500,000 calls per second are possible, with room for optimization.
  • one or more basic primitives can be used to implement all the regular Operating Systems features, which may include, but are not limited to, SDK, signals, JIT, loader, and toolchain. Allocate memory can be used to create stack frames for threads, and writable segments for processes ELFs with persistent state. Signals can be used as a trampoline from the running process to an OS service that does dynamic memory allocation, creates additional threads, and creates other signals.
  • This framework can support all the operating system primitives in a modern OS as simple synchronous calls without compromising
  • TDMA Time-division multiple access
  • Some scheduling algorithm(s) may exist to select an order of leaders, giving each leader an assigned slot to transmit transactions, e.g., each leader (L1, L2, ...) is slotted at a PoH count interval. During this interval, one or more leaders can be designated as 'active leader' and only they can append transactions to the PoH data structure during their slot.
  • Nodes can vote on the hash and count of the PoH data structure along with the corresponding state signature. Each vote may represent a slash-able lockout. If a node votes on a different branch (e.g. a branch that doesn’t include the current vote) within the lockout they can get slashed. This vote also may double the lockout of all the previous votes, thus making it exponentially more expensive to switch branches and unroll older votes.
  • If an active leader experiences a failure and the transactions they are processing aren't successfully communicated to the rest of the network, and if every node is generating PoH, one or more nodes in the network can be generating the same virtual ticks, derived from the last slot; thus there may always be a 'fallback' in case of leader node failure. Since these virtual ticks may have no data, everyone in the network can derive the exact same values as they continuously roll the SHA256 loop.
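  • A minimal sketch of the virtual-tick fallback described above: every node rolls SHA-256 from the last slot's hash with no data mixed in, so all nodes derive identical tick values independently.

```python
# Virtual ticks: a data-free SHA-256 chain continued from the last slot hash.
import hashlib

def virtual_ticks(last_slot_hash, num_ticks):
    ticks, h = [], last_slot_hash
    for _ in range(num_ticks):
        h = hashlib.sha256(h).digest()
        ticks.append(h)
    return ticks

last_hash = hashlib.sha256(b"end of L1 slot").digest()
a = virtual_ticks(last_hash, 5)
b = virtual_ticks(last_hash, 5)   # a different node computes the same values
assert a == b
```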
  • TX/VX Transaction data/Virtual Data
  • TX represents transaction data generated by leader LX (e.g. L1, L2, L3)
  • VX can represent virtual data (ticks) generated by leader LX as shown in Table 2 below.
  • Table 2.
  • If L2 was in the 1/2+ partition, it may append to the T1 version of the ledger. Otherwise it may append to the V1 version of the ledger.
  • L3 can have a similar choice after L1's rotation. If L3 was in the L1 1/2+ partition, and L2 was in the 1/2- partition, L3 may see L2's data as invalid and continue from the V2 version of the ledger. If it was in the 1/2- partition, along with L2, it may continue with the L2 version of the ledger.
  • the potential branching tree can look as follows: i) L1: T1, V1
  • Examples of the synchronized clocks may include, but are not limited to, GPS clocks, atomic clocks provided by NIST, and clocks implemented with Network Time Protocol.
Nakamoto Consensus with time-based locks
  • the systems and methods described herein can be used to implement a modified version of the Nakamoto Consensus algorithm, which is an algorithm for verifying the authenticity of blocks in Bitcoin and other block chains.
  • the Nakamoto Consensus algorithm with time-based locks ("modified Nakamoto Consensus algorithm") may be defined by a stack of votes that each have a lockout period.
  • a lockout period may be a time period during which a vote for a particular chain of blocks by a node cannot be changed.
  • the lockout period of each preceding vote in the vote stack may be doubled.
  • the vote stack can be rolled back. Rollback may occur when preceding votes in the stack with a lower lock time than a new vote are removed from the vote stack. In some implementations, after rollback, lockouts are not doubled until the stack has as many votes as it did immediately prior to rollback.
  • the lockout period for the vote at time 9 doubles, from 2 to 4.
  • the lockout periods for the votes at times 1 and 2 do not double because those lockout periods are already double the votes immediately above them in the stack.
  • the stack has as many votes as it did before rollback.
  • the vote made at time 2 has a lock time of 10. So when a vote is made at time 11, the entire stack up to the vote made at time 2 will roll back, resulting in the following vote stack: vote time, lockout, lock time
  • Lockout periods may be used to force a node to commit time to a specific branch. Nodes that violate the lockout period and vote for a diverging branch during the lockout period can be punished. Slashing is one approach to punish a node that votes during a lockout period. If the network detects a node that makes a vote that violates a lockout period, the network can reduce the stake associated with that node, or freeze the node from receiving rewards. The network can reward nodes for selecting the right branch with the rest of the network as often as possible. This is well aligned with generating a reward when the vote stack is full and the oldest vote needs to be de-queued.
  • Each node can independently set a threshold of network commitment to a branch before that node commits to a branch. For example, at vote stack index 7, the lockout is 256 time units (2^8). A node may withhold votes and let votes 0-7 expire unless the vote at index 7 has greater than 50% commitment in the network. This allows each node to independently control how much risk to commit to a branch. Committing to a branch faster would allow the node to earn more rewards since more votes are pushed to the stack and de-queue happens more frequently.
  • the modified Nakamoto Consensus algorithm may provide the following benefits: (i) if nodes share a common ancestor then they will converge to a branch containing that ancestor no matter how they are partitioned, (ii) rollback requires exponentially more time for older votes than for newer votes, and (iii) nodes can independently configure a vote threshold they would like to see before committing a vote to a higher lockout. This allows each node to make a trade-off of risk and reward.
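  • The sketch below illustrates the lockout behavior described above in a simplified form: each new vote first expires votes whose lockouts have passed, then doubles the lockouts of the votes that remain, and is pushed with the base lockout. The exact doubling condition, and the rule that doubling pauses until the stack regrows to its pre-rollback size, are omitted for brevity.

```python
# Simplified time-based lockout stack: expire, double, then push.
BASE_LOCKOUT = 2

class VoteStack:
    def __init__(self):
        self.stack = []                               # (vote_time, lockout) pairs

    def vote(self, now):
        # Roll back: drop votes whose lock time (vote time + lockout) has passed.
        self.stack = [(t, l) for (t, l) in self.stack if t + l > now]
        # Doubling: each new vote makes the remaining older votes harder to unroll.
        self.stack = [(t, l * 2) for (t, l) in self.stack]
        self.stack.append((now, BASE_LOCKOUT))

s = VoteStack()
for t in (1, 2, 3, 9, 10, 11):
    s.vote(t)
    print(t, [(vt, vt + l) for vt, l in s.stack])     # (vote time, lock time)
```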
  • Time can be a proof of history hash count which is a verifiable delay function that provides a source of time before consensus.
  • Other sources of time can be used as well, such as radio transmitted atomic clocks, Network Time Protocol, or locally synchronized atomic clocks.
  • the systems and methods disclosed herein include a digital processing device, a computer processor, or use of the same.
  • the digital processing device includes one or more hardware central processing units (CPUs) or general purpose graphics processing units (GPGPUs) that carry out the functions of the device.
  • the digital processing device further comprises an operating system configured to perform executable instructions.
  • the digital processing device is optionally connected to a computer network.
  • the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web.
  • the digital processing device is optionally connected to a cloud computing infrastructure.
  • the digital processing device is optionally connected to an intranet.
  • the digital processing device is optionally connected to a data storage device.
  • suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Many smartphones may be suitable for use in the system described herein. Select televisions, video players, and digital music players with optional computer network connectivity may be suitable for use in the system described herein. Suitable tablet computers include those with booklet, slate, and convertible configurations.
  • the digital processing device includes an operating system configured to perform executable instructions.
  • the operating system is, for example, software, including programs and data, which manages the hardware of the device and provides services for execution of applications.
  • Suitable server operating systems may include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD ® , Linux, Apple ® Mac OS X Server ® , Oracle ® Solaris ® , Windows Server ® , and Novell ® NetWare ® .
  • Suitable personal computer operating systems may include, by way of non-limiting examples, Microsoft ® Windows ® , Apple ® Mac OS X ® , UNIX ® , and UNIX-like operating systems such as GNU/Linux ® .
  • the operating system is provided by cloud computing.
  • Suitable mobile smart phone operating systems may include, by way of non-limiting examples, Nokia ® Symbian ® OS, Apple ® iOS ® , Research In Motion ® BlackBerry OS ® , Google ® Android ® , Microsoft ® Windows Phone ® OS, Microsoft ® Windows Mobile ® OS, Linux ® , and Palm ® WebOS ® .
  • Suitable media streaming device operating systems may include, by way of non-limiting examples, Apple TV ® , Roku ® , Boxee ® , Google TV ® , Google Chromecast ® , Amazon Fire ® , and Samsung ® HomeSync ® .
  • Suitable video game console operating systems may include, by way of non-limiting examples, Sony ® PS3 ® , Sony ® PS4 ® , Microsoft ® Xbox 360 ® , Microsoft Xbox One, Nintendo ® Wii ® , Nintendo ® Wii U ® , and Ouya ® .
  • the device includes a storage and/or memory device.
  • the storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis.
  • the device is volatile memory and requires power to maintain stored information.
  • the device is non-volatile memory and retains stored information when the digital processing device is not powered.
  • the non-volatile memory comprises flash memory.
  • the non-volatile memory comprises dynamic random-access memory (DRAM).
  • the non-volatile memory comprises ferroelectric random access memory (FRAM).
  • the non-volatile memory comprises phase-change random access memory (PRAM).
  • the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tapes drives, optical disk drives, and cloud computing based storage.
  • the storage and/or memory device is a combination of devices such as those disclosed herein.
  • the digital processing device includes a display to send visual information to a user.
  • the display is a liquid crystal display (LCD).
  • the display is a thin film transistor liquid crystal display (TFT-LCD).
  • the display is an organic light emitting diode (OLED) display.
  • OLED organic light emitting diode
  • an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display.
  • the display is a plasma display.
  • the display is a video projector.
  • the display is a head- mounted display in communication with the digital processing device, such as a VR headset.
  • suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like.
  • the display is a combination of devices such as those disclosed herein.
  • the digital processing device includes an input device to receive information from a user.
  • the input device is a keyboard.
  • the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus.
  • the input device is a touch screen or a multi-touch screen.
  • the input device is a microphone to capture voice or other sound input.
  • the input device is a video camera or other sensor to capture motion or visual input.
  • the input device is a Kinect, Leap Motion, or the like.
  • the input device is a combination of devices such as those disclosed herein.
  • Fig. 6 shows a digital processing device 601 that is programmed or otherwise configured to perform method steps disclosed herein.
  • the device 601 can regulate various aspects of the present disclosure, such as the cryptographic functions, the sequence of hash values, and the recordation thereof.
  • the digital processing device 601 includes a central processing unit (CPU, also“processor” and“computer processor” herein) 605, which can be a single core or multi core processor, or a plurality of processors for parallel processing.
  • CPU central processing unit
  • processor, also "processor" and "computer processor" herein
  • the digital processing device 601 also includes memory or memory location 610 (e.g., random-access memory, read- only memory, flash memory), electronic storage unit 615 (e.g., hard disk), communication interface 620 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 625, such as cache, other memory, data storage and/or electronic display adapters.
  • the memory 610, storage unit 615, interface 620 and peripheral devices 625 are in communication with the CPU 605 through a communication bus (solid lines), such as a motherboard.
  • the storage unit 615 can be a data storage unit (or data repository) for storing data.
  • the digital processing device 601 can be operatively coupled to a computer network (“network”) 630 with the aid of the communication interface 620.
  • network computer network
  • the network 630 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
  • the network 630 in some cases is a telecommunication and/or data network.
  • the network 630 can include one or more computer servers, which can enable distributed computing, such as cloud computing.
  • the network 630, in some cases with the aid of the device 601, can implement a peer-to-peer network, which may enable devices coupled to the device 601 to behave as a client or a server.
  • the CPU 605 can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 610.
  • the instructions can be directed to the CPU 605, which can subsequently program or otherwise configure the CPU 605 to implement methods of the present disclosure. Examples of operations performed by the CPU 605 can include fetch, decode, execute, and write back.
  • the CPU 605 can be part of a circuit, such as an integrated circuit. One or more other components of the device 601 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • the storage unit 615 can store files, such as drivers, libraries and saved programs.
  • the storage unit 615 can store user data, e.g., user preferences and user programs.
  • the digital processing device 601 in some cases can include one or more additional data storage units that are external, such as located on a remote server that is in communication through an intranet or the Internet.
  • the digital processing device 601 can communicate with one or more remote computer systems through the network 630.
  • the device 601 can communicate with a remote computer system of a user.
  • remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple ® iPad, Samsung ® Galaxy Tab), telephones, smart phones (e.g., Apple ® iPhone, Android-enabled devices, Blackberry ® ), or personal digital assistants.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the digital processing device 601, such as, for example, on the memory 610 or electronic storage unit 615.
  • the machine executable or machine readable code can be provided in the form of software.
  • the code can be executed by the processor 605.
  • the code can be retrieved from the storage unit 615 and stored on the memory 610 for ready access by the processor 605.
  • the electronic storage unit 615 can be precluded, and machine-executable instructions are stored on memory 610.
  • the systems and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device.
  • a computer readable storage medium is a tangible component of a digital processing device.
  • a computer readable storage medium is optionally removable from a digital processing device.
  • a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like.
  • the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
  • the systems and methods disclosed herein include at least one computer program, or use of the same.
  • a computer program includes a sequence of instructions, executable in the CPU of the digital processing device, written to perform a specified task.
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • APIs Application Programming Interfaces
  • a computer program may be written in various versions of various languages.
  • a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
  • a computer program includes a web application.
  • a web application in various embodiments, may utilize one or more software frameworks and one or more database systems.
  • a web application is created upon a software framework such as Microsoft ® .NET or Ruby on Rails (RoR).
  • a web application utilizes one or more database systems including, by way of non-limiting examples, relational, non-relational, object oriented, associative, and XML database systems.
  • suitable relational database systems include, by way of non-limiting examples, Microsoft ® SQL Server, mySQLTM, and Oracle ® .
  • a web application in various embodiments, may be written in one or more versions of one or more languages.
  • a web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof.
  • a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or extensible Markup Language (XML).
  • a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS).
  • CSS Cascading Style Sheets
  • a web application is written to some extent in a client-side scripting language such as Asynchronous Javascript and XML (AJAX), Flash ® Actionscript, Javascript, or Silverlight ® .
  • AJAX Asynchronous Javascript and XML
  • a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion ® , Perl, JavaTM, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), PythonTM, Ruby, Tcl, Smalltalk, WebDNA ® , or Groovy.
  • a web application is written to some extent in a database query language such as Structured Query Language (SQL).
  • SQL Structured Query Language
  • a web application integrates enterprise server products such as IBM ® Lotus Domino ® .
  • a web application includes a media player element.
  • a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe ® Flash ® , HTML 5, Apple ® QuickTime ® , Microsoft ® Silverlight ® , JavaTM, and Unity ® .
  • an application provision system may comprise one or more databases 700 accessed by a relational database management system (RDBMS) 710.
  • RDBMSs include Firebird, MySQL, PostgreSQL, SQLite, Oracle Database, Microsoft SQL Server, IBM DB2, IBM Informix, SAP Sybase, Teradata, and the like.
  • the application provision system further comprises one or more application servers 720 (such as Java servers, .NET servers, PHP servers, and the like) and one or more web servers 730 (such as Apache, IIS, GWS and the like).
  • the web server(s) optionally expose one or more web services via application programming interfaces (APIs) 740.
  • APIs application programming interfaces
  • an application provision system may have a distributed, cloud-based architecture 800 and comprises elastically load balanced, auto-scaling web server resources 810 and application server resources 820 as well as synchronously replicated databases 830.
  • a computer program includes a mobile application provided to a mobile digital processing device.
  • the mobile application is provided to a mobile digital processing device at the time it is manufactured.
  • the mobile application is provided to a mobile digital processing device via the computer network described herein.
  • a mobile application may be created by various techniques using various hardware, languages, and development environments. Mobile applications may be written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, JavaTM, Javascript, Pascal, Object Pascal, PythonTM, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
  • Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator ® , Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, AndroidTM SDK, BlackBerry ® SDK, BREW SDK, Palm ® OS SDK, Symbian SDK, webOS SDK, and Windows ® Mobile SDK.
  • iOS iPhone and iPad
  • the systems and methods disclosed herein include software, server, and/or database modules, or use of the same.
  • Software modules may be created by various techniques using various machines, software, and languages.
  • the software modules disclosed herein are implemented in a multitude of ways.
  • a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
  • a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
  • the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
  • software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
Databases
  • the systems and methods disclosed herein may include one or more databases, or use of the same.
  • databases may be suitable for storage and retrieval of hash values, indexes, inputs, outputs, timestamps, hash functions, and combine functions.
  • suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity- relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase.
  • a database is internet-based.
  • a database is web-based.
  • a database is cloud computing-based.
  • a database is based on one or more local computer storage devices.
  • a set of users share a database. They want to modify the database and make sure that after the modifications, the results are the same at every step.
  • Each user submits the database changes to an agent that is creating the sequence order.
  • the order is then broadcast to all the users.
  • the users receive the sequence and modify their local databases. Since the order is the same for all the users, then all the local copies have the same result.
  • Even if the users do not have any trust in the machine generating the sequence, they can examine the output and determine that it is valid and consistent. Multiple machines can be used to create the order sequence. So there is no centralized point of failure. And the output of each machine can be deterministically combined without trusting the machines.
  • if the iterative execution of cryptographic hash functions is running continuously as a service, users can enter events and have them authenticated to have occurred at least some time before they were entered into the sequence.
  • anyone inspecting the record can verify that the data of the user is entered some time before that portion of the record is generated.
  • the generator of the record does not need to be trusted.
  • an inspector can verify the record with a multi-core computer in a fraction of the time it took to be generated.
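  • As a rough illustration of that asymmetry, the sketch below generates a SHA-256 sequence one hash at a time while recording periodic (count, hash) entries, then verifies all recorded segments concurrently on separate cores; the step size and seed are illustrative.

```python
# Sequential generation, parallel verification of the recorded segments.
import hashlib
from concurrent.futures import ProcessPoolExecutor

STEP = 10_000

def generate(seed, total):
    h, record = seed, [(0, seed)]
    for i in range(1, total + 1):
        h = hashlib.sha256(h).digest()
        if i % STEP == 0:
            record.append((i, h))        # periodically published (count, hash)
    return record

def check_segment(pair):
    (c0, h0), (c1, h1) = pair
    h = h0
    for _ in range(c1 - c0):
        h = hashlib.sha256(h).digest()
    return h == h1

if __name__ == "__main__":
    record = generate(hashlib.sha256(b"genesis").digest(), 50_000)
    segments = list(zip(record, record[1:]))
    with ProcessPoolExecutor() as pool:   # each segment checked on its own core
        assert all(pool.map(check_segment, segments))
```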

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Storage Device Security (AREA)

Abstract

Disclosed herein are computer-implemented systems and methods for cryptographically generating a local timestamp for a data set using cryptographic hash function(s).

Description

SYSTEMS AND METHODS FOR CRYPTOGRAPHIC PROVISION OF
SYNCHRONIZED CLOCKS IN DISTRIBUTED SYSTEMS
CROSS-REFERENCE
[0001] This application claims priority to U.S. Provisional Patent Application No. 62/596,678, filed on December 8, 2017, U.S. Provisional Patent Application No. 62/618,972, filed on January 18, 2018, and U.S. Provisional Patent Application No. 62/660,854, filed on April 20, 2018, each of which is entirely incorporated herein by reference.
BACKGROUND
[0002] A block chain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically can contain a timestamp and transaction data. Every block in the block chain may also contain a reference to its previous block(s).
[0003] The process of creating a block and appending it to the block chain is called mining. For proof-of-work based block chains like Bitcoin, mining can be a computationally-intensive process that requires solving a unique and difficult math problem so that the number of blocks mined each day remains steady. The math problem to be solved may be used as a proof-of-work to check whether a solution is valid, but it may be difficult to find a solution, as this requires a lot of trial and error.
SUMMARY
[0004] Every block in the block chain may contain a reference to its previous block(s), thus creating a chain from the first block (genesis block) to the current one. The block-reference is a cryptographic hash of the previous block. This may ensure the integrity of the chain, as any modification to a block may result in a different hash for the block and thus the reference in the next block may change, resulting in a different hash for every block after. As such, block chains may be inherently resistant to modification of their data. A block chain can serve as an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.
[0005] While a true time of a block chain transaction may not be available, clocks and timestamps may be used in distributed systems to indicate the time that a block chain transaction occurred. Current block chain protocols may compare a local clock time to a signed timestamp of the message/transaction data, but it may not be known whether the receiver of the
message/transaction data may reject or accept the timestamp or not. Atomic clocks for block chain may have their limitations. Block chain cannot rely on a trusted atomic clock. Block chain may not trust a shared network clock. Any externally supplied timestamp value may not necessarily be trusted. Existing systems that rely on externally supplied passage of time may need to trust that the timestamp is not fabricated. Alternatively, an externally supplied timestamp may have to compare against a local time, and reject the timestamp if it is out of bounds. With such existing systems, there may be no guarantee that a node in a given system will reject or accept the timestamp based on the comparison to the local clock. Thus, there is an urgent and unmet need to provide clocks and timestamps that can be trusted by the users of the block chain.
[0006] The present disclosure provides systems and methods to cryptographically verify passage of time between two events. A cryptographically secure hash function may be used herein whose output cannot be predicted from the input. Such a cryptographically secure function may be completely executed to generate an output, and such function may be run iteratively in a sequence so that its output from a previous execution may be used as the input in the current execution. The current output and/or how many times it has been executed or called can be periodically recorded. The output can then be recomputed and verified by external computers in parallel by checking one or more iterations in parallel on separate computer(s). Data can be timestamped into the sequence by recording the data and the index when the data is mixed into the sequence. Such timestamp then may guarantee that the data was created sometime before a next output of the secure function is generated in the sequence. Multiple clocks can synchronize amongst each other by mixing their state into each other’s sequences.
[0007] Systems and methods of the present disclosure may advantageously enable proof of time passage in between entries in a digital record (e.g. sequence) without trusting the entity that is generating the digital record. The present disclosure may advantageously enable creation of a temporal order of events that does not have to be trusted by any of the external clients. In addition, the present disclosure may allow generation of a timestamp for a digital event which is a relative timestamp as indicated by entry in the historical record without trusting the creator of the record. The present disclosure also can enable verification of the record with a multi-core computer in a fraction of the time it took to be generated. The methods herein may further enable a combination of different records, optionally generated by different computers such that the different records are continuous and time passage between events recorded on separate machines can be proven based on the records without trusting the machines.
[0008] In an aspect, disclosed herein are computer-implemented methods for cryptographically generating a local timestamp, comprising: (a) activating one or more computer processors that are individually or collectively programmed to execute a cryptographic hash function; (b) inputting at least a first data set from computer memory to the cryptographic hash function to generate as output a second data set comprising a first set of cryptographic hash values from the first data set, wherein upon generating the second data set, a counter in computer memory is incremented by a preselected value; (c) recording at least the first data set, the second data set, and the counter to a sequence of cryptographic hash values in the computer memory; (d) inputting at least the second data set and a third data set to the cryptographic hash function to generate as output a fourth data set comprising a second set of cryptographic hash values from the third data set, wherein upon generating the fourth data set, the counter in computer memory is incremented by the preselected value; (e) recording at least the second data set, the third data set, the fourth data set, and the counter to the sequence of cryptographic hash values in the computer memory; (f) using at least the fourth data set as input, repeating (b) and (c) for a first number of repetitions, (d) and (e) for a second number of repetitions or a combination thereof to yield a fifth data set, wherein upon generating the fifth data set, the counter in computer memory is incremented by the first number of repetitions, the second number of repetitions or a combination thereof; (g) recording at least the fourth data set, the fifth data set, and the counter to the sequence of cryptographic hash values in the computer memory; and using the counter to generate the local timestamp for the third data set. In some embodiments, the method further comprises verifying the sequence of cryptographic hash values using a preselected number of computer processors by: selecting a preselected number of computer processors; splitting the sequence of cryptographic hash values into the preselected number of sub-sequences, each of the sub-sequences comprising a portion of the sequence of cryptographic hash values; and executing, by the pre-selected number of processors, the cryptographic hash function based on an input of each of the preselected number of sub-sequences to generate as output the preselected number of new sub-sequences; and verifying whether the preselected number of new sub-sequences match the preselected number of sub-sequences. In some embodiments, the method further comprises encrypting, by the one or more computer processors, the sequence of cryptographic hash values comprising: inputting at least the first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating the second data set, a counter in computer memory is incremented by a preselected value; and recording the first data set, the first encrypted data, and the counter in an encrypted sequence of cryptographic hash values the computer memory. In some embodiments, the method further comprises decrypting, by the one or more computer processors, the encrypted sequence of cryptographic hash values using a public key. In some embodiments, the first data set comprises a set of cryptographic hash values from a previous data set. In some embodiments, the third data set comprises a cryptographic hash value of an event. 
In some embodiments, the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix. In some embodiments, the cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3. In some embodiments, the preselected value is an integer.
[0009] Another aspect of the present disclosure provides a computer-implemented method for consensus voting in a block chain, comprising (a) receiving a given consensus vote for a block from a node in the block chain; applying a lockout period to the given consensus vote, wherein the lockout period comprises a time period during which changing the given consensus vote violates a lockout policy; (b) removing, from a vote stack comprising preceding consensus votes made at least in part by the node prior to the given consensus vote, any of the preceding consensus votes having a lockout period that has expired; (c) increasing the lockout period of each of the preceding consensus votes still remaining in the vote stack by a factor greater than 1 ; (d) adding the given consensus vote to the vote stack; and (e) slashing a stake associated with the node if the node violates the lockout policy for the given consensus vote by voting for an additional block other than the block of (a) before the lockout period expires.
[0010] In some embodiments, the factor is greater than or equal to 1.5. In some embodiments, the factor is greater than or equal to 2.
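By way of non-limiting illustration, a minimal Python sketch of the lockout-stack handling described above is provided below; the record layout (e.g., a Vote with counter and lockout fields) and the choice of a doubling factor are assumptions made for illustration only.

from dataclasses import dataclass

LOCKOUT_FACTOR = 2  # assumed factor greater than 1 (e.g., >= 1.5 or >= 2)

@dataclass
class Vote:
    block_hash: str
    counter: int       # PoH counter at which the vote was cast
    lockout: int       # counter ticks during which switching violates the lockout policy

def apply_vote(stack, new_vote, now):
    """Apply a consensus vote to the vote stack per the lockout policy."""
    # (b) drop preceding votes whose lockout period has expired
    stack = [v for v in stack if v.counter + v.lockout > now]
    # (c) increase the lockout of every remaining preceding vote
    for v in stack:
        v.lockout *= LOCKOUT_FACTOR
    # (d) push the given vote onto the stack
    stack.append(new_vote)
    return stack

def violates_lockout(stack, block_hash, now):
    """(e) voting for a different block before a lockout expires is slashable."""
    return any(v.block_hash != block_hash and v.counter + v.lockout > now
               for v in stack)

# Example usage (hypothetical values):
stack = []
stack = apply_vote(stack, Vote("block_a", counter=10, lockout=4), now=10)
stack = apply_vote(stack, Vote("block_b", counter=12, lockout=4), now=12)
print(violates_lockout(stack, "block_c", now=13))  # True: earlier lockouts are still active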
[0011] Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.
[0012] Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
[0013] Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure.
Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
INCORPORATION BY REFERENCE
[0014] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] A better understanding of the features and advantages of the present subject matter will be obtained by reference to the following detailed description that sets forth illustrative embodiments and the accompanying drawings of which:
[0016] Fig. 1 schematically illustrates a sequence of cryptographic hash values, the hash values generated by iterative execution of a cryptographic hash function;
[0017] Fig. 2 schematically illustrates a sequence of cryptographic hash values, the hash values generated using data of a digital event and iterative execution of a cryptographic hash function;
[0018] Fig. 3 schematically illustrates verification of a recorded sequence of cryptographic hash values by splitting the sequence of cryptographic hash values into subsequences;
[0019] Fig. 4 schematically illustrates synchronization among different clocks by synchronization of sequences of cryptographic hash values;
[0020] Fig. 5 schematically illustrates a sequence of cryptographic hash values with user event data;
[0021] Fig. 6 schematically illustrates a digital processing device comprising one or more central processing units (CPUs), a memory, a communication interface, and a display;
[0022] Fig. 7 schematically illustrates a web/mobile application provision system providing browser-based and/or native mobile user interfaces;
[0023] Fig. 8 schematically illustrates a cloud-based web/mobile application provision system comprising an elastically load balanced, auto-scaling web server and application server resources as well as synchronously replicated databases;
[0024] Fig. 9 shows cipher block chaining encryption in which the data of the previous encrypted block(s) is necessary for the next block to be created;
[0025] Fig. 10 shows how a data sample from each block is combined with the last valid hash to create a merkle hash of the data;
[0026] Fig. 11 shows the system architecture showing the currently active Proof of History generator node and downstream nodes;
[0027] Fig. 12 shows the Proof of History generator node of Fig. 11 with incoming and outgoing traffic; and
[0028] Fig. 13 shows an example of high performance memory management for smart contracts.
DETAILED DESCRIPTION
[0029] While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.
[0030] Unless otherwise defined, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
[0031] As used herein, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Any reference to "or" herein is intended to encompass "and/or" unless otherwise stated.
[0032] As used herein, the term "about" refers to an amount that is near the stated amount by about 10%, 5%, or 1%, including increments therein.
[0033] Whenever the term "at least," "greater than," or "greater than or equal to" precedes the first numerical value in a series of two or more numerical values, the term "at least," "greater than," or "greater than or equal to" applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.
[0034] Whenever the term "no more than," "less than," or "less than or equal to" precedes the first numerical value in a series of two or more numerical values, the term "no more than," "less than," or "less than or equal to" applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.
[0035] The present disclosure provides systems and methods to cryptographically verify passage of time between two events. A cryptographically secure hash function may be used herein whose output cannot be predicted from the input. Such a cryptographically secure function may be completely executed to generate an output, and such function may be run iteratively in a sequence so that its output from a previous execution may be used as the input in the current execution. The current output and/or how many times it has been executed or called can be periodically recorded. The output can then be recomputed and verified by external computers in parallel by checking one or more iterations in parallel on separate computer(s). Data can be timestamped into the sequence by recording the data and the index when the data is mixed into the sequence. Such timestamp then may guarantee that the data was created sometime before a next output of the secure function is generated in the sequence. Multiple clocks can synchronize amongst each other by mixing their state into each other’s sequences.
[0036] The present disclosure may advantageously enable proof of time passage in between entries in a digital record (e.g. sequence) without trusting the entity that is generating the digital record. The present disclosure may advantageously enable creation of a temporal order of events that does not have to be trusted by any of the external clients. In addition, the present disclosure may allow generation of a timestamp for a digital event which is a relative timestamp as indicated by entry in the historical record without trusting the creator of the record. The present disclosure also can enable verification of the record with a multi-core computer at a fraction of the time it took to be generated. The methods herein may further enable combination of different records, optionally generated by different computers such that the different records are continuous and time passage between events recorded on separate machines can be proven based on the records without trusting the machines.
[0037] In an aspect, the present disclosure provides a computer-implemented method for cryptographically generating a local timestamp. The method may comprise, in a first operation, activating one or more computer processors that are individually or collectively programmed to execute a cryptographic hash function. Next, in a second operation, at least a first data set from computer memory may be inputted to the cryptographic hash function to generate as output a second data set comprising a first set of cryptographic hash values from the first data set. Upon generating the second data set, a counter in computer memory is incremented by a preselected value.
[0038] Next, in a third operation, at least the first data set, the second data set, and the counter to a sequence of cryptographic hash values may be recorded in the computer memory. In a fourth operation, at least the second data set and a third data set may be inputted to the cryptographic hash function to generate as output a fourth data set comprising a second set of cryptographic hash values from the third data set. Upon generating the fourth data set, the counter in computer memory is incremented by the preselected value.
[0039] Next, in a fifth operation, at least the second data set, the third data set, the fourth data set, and the counter to the sequence of cryptographic hash values may be recorded in the computer memory. Next, in a sixth operation, using at least the fourth data set as input, the second and third operations may be repeated for a first number of repetitions, the fourth and fifth operations may be repeated for a second number of repetitions, or a combination thereof, to yield a fifth data set. Upon generating the fifth data set, the counter in computer memory may be incremented by the first number of repetitions, the second number of repetitions, or a combination thereof.
[0040] Next, in a seventh operation, at least the fourth data set, the fifth data set, and the counter may be recorded to the sequence of cryptographic hash values in the computer memory. The counter may then be used to generate the local timestamp for the third data set.
[0041] In some embodiments, the method comprises verifying the sequence of cryptographic hash values using a preselected number of computer processors by: selecting a preselected number of computer processors; splitting the sequence of cryptographic hash values into the preselected number of sub-sequences, each of the sub-sequences comprising a portion of the sequence of cryptographic hash values; and executing, by the preselected number of processors, the cryptographic hash function based on an input of each of the preselected number of sub-sequences to generate as output the preselected number of new sub-sequences; and verifying whether the preselected number of new sub-sequences match the preselected number of sub-sequences.
[0042] In some embodiments, the method comprises encrypting, by the one or more computer processors, the sequence of cryptographic hash values, comprising: inputting at least the first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating the second data set, a counter in computer memory is incremented by a preselected value; and recording the first data set, the first encrypted data set, and the counter in an encrypted sequence of cryptographic hash values in the computer memory. In some embodiments, the method further comprises decrypting, by the one or more computer processors, the encrypted sequence of cryptographic hash values using a public key.
[0043] In some embodiments, the first data set comprises a set of cryptographic hash values from a previous data set. In some embodiments, the third data set comprises a cryptographic hash value of an event. In some embodiments, the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix. In some embodiments, the cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3. In some embodiments, the preselected value is an integer.
Cryptographic hash functions
[0044] A cryptographic hash function, such as, for example, sha256, md5, or sha-1, may be run from a starting value (e.g., first input) to generate an output. The output cannot be predicted without running the function. The output then may be passed as the input into the same function again in the next iteration, thus generating a sequence of calling or executing the same cryptographic hash function multiple times. The number of times the function has been executed/called and the output at each call/execution may be recorded.
[0045] Methods and systems of the present disclosure may be used with various hash functions, such as, for example, sha-256, sha-224, md5, sha-0, sha-1, sha-2, sha-3, ripemd-160, ripemd-320, and blake. Taking sha256(hash1) -> hash2, 2 as an example, the hash function sha256 may take as input "hash1," which may be a random starting value or a generated output from a previous execution of the same or a different hash function, and generate an output "hash2, 2." The output may include a counter, an iteration number, an index, or the like, which increases, e.g., by 1, or by any other predetermined value, for each execution of the hash function. In this case, the counter is 2, indicating the hash function has been executed two times. The output may also include one or more hash values, a set of hash values, or any other form of data, which can be input to the next execution of the hash function. The input, output, hash function, counter, or other information associated with execution of the hash function may be recorded into a sequence of hash functions/values as disclosed herein.
[0046] As such, the iteration using the cryptographic hash function can only be computed in sequence by a single computer thread since there is no way to predict what the hash value at a certain index 300 is going to be without actually running the algorithm from the starting value 300 times. As an example, sha256("any random starting value") -> hash1, 1. With input of "any random starting value," the function sha256 may generate the output as hash1 and the index as 1. As other examples, the function sha256 may be repeated as follows:
i) sha256(hash1) -> hash2, 2
ii) sha256(hash2) -> hash3, 3
[0047] Instead of publishing every hash on every index, only a subset of these hashes may be published at an interval. For example:
i) sha256("any random starting value") -> hash1, 1
ii) ...
iii) sha256(hash199) -> hash200, 200
iv) ...
v) sha256(hash299) -> hash300, 300
[0048] Fig. 1 shows an example of a sequence of cryptographic hash values. The sequence 100 may include any number of iterations that is no less than 2; for example, 3 iterations are shown here. In each iteration, the cryptographic hash function 101, e.g. sha256, takes an output from a previous iteration as its input. In the first iteration, when there is no previous iteration, a starting value may be randomly selected as the first input. The hash function generates a hash value as its output 102, which may be a string of numbers and letters. The length of the string may be 128 bits, 256 bits, or any other length. The output also includes an index/count for the iteration number. The input, output, index, and the hash function are recorded in the sequence of cryptographic hash values.
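By way of non-limiting illustration, the following Python sketch generates such a sequence by iterating sha-256, incrementing a counter by 1 on each execution, and recording only every Nth entry; the recording interval and variable names are illustrative assumptions.

import hashlib

def generate_sequence(seed: bytes, iterations: int, record_every: int = 100):
    """Iterate sha-256 from a starting value, recording (counter, hash) periodically."""
    record = []
    state = seed
    for counter in range(1, iterations + 1):
        state = hashlib.sha256(state).digest()   # output of one call becomes the next input
        if counter % record_every == 0:
            record.append((counter, state.hex()))
    return record

# Example: produce hash200- and hash300-style checkpoints from a random starting value
print(generate_sequence(b"any random starting value", 300, record_every=100))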
[0049] After the sequence has been generated, it can be verified in less time than it takes to generate it, for example, on a multi-core computer. For instance, the computation may be split between core1 and core2:
i) Core1:
ii) sha256("any random starting value") -> hash1, 1
iii) ...
iv) sha256(hash199) -> hash200, 200
v) Core2:
vi) sha256(hash199) -> hash200, 200
vii) ...
viii) sha256(hash299) -> hash300, 300
[0050] Given some number of cores, such as a GPU with 4000 cores, the verifier can split up the sequence of cryptographic functions, e.g., hashes, and their indexes into 4000 slices, and in parallel make sure that each slice is correct from the starting iteration of the function to the last iteration of the function in the slice. The expected time to produce the sequence can be calculated as: (number of hashes) / (hashes per second for 1 core). The expected time to verify the sequence can be calculated as: (number of hashes) / (hashes per second per core x number of cores available).
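A minimal Python sketch of this parallel verification is shown below, assuming the recorded sequence is a list of (index, hash) checkpoints such that each adjacent pair bounds one independently verifiable slice; the worker-pool size is an illustrative assumption.

import hashlib
from concurrent.futures import ProcessPoolExecutor

def _verify_slice(args) -> bool:
    """Re-run sha-256 from one recorded checkpoint to the next and compare."""
    start_hash_hex, start_index, end_hash_hex, end_index = args
    state = bytes.fromhex(start_hash_hex)
    for _ in range(end_index - start_index):
        state = hashlib.sha256(state).digest()
    return state.hex() == end_hash_hex

def verify_sequence(checkpoints, workers: int = 4) -> bool:
    """checkpoints: list of (index, hash_hex) pairs; adjacent pairs form independent slices."""
    slices = [(h0, i0, h1, i1) for (i0, h0), (i1, h1) in zip(checkpoints, checkpoints[1:])]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(_verify_slice, slices))

As noted above, the wall-clock verification time scales roughly with (number of hashes) / (hashes per second per core x number of workers), since each slice is checked independently.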
[0051] The cryptographic functions herein are functions that are configured to generate an unpredictable change to the output when there is any change to any of the input. Thus, when there is any change in the input, the cryptographic function may have to run to create the output.
[0052] Combine functions may be similar to cryptographic functions in that a single bit change to any of the inputs to a combine function can result in an unpredictable change to the output of the combine function. Additionally, combine functions can be configured to not introduce a collision attack vector. For example, simple addition or multiplication may introduce an attack vector because the input that is added to the state can be precomputed such that the result of the addition is a previously known value that can generate a predicted output once it is hashed.
[0053] Two or more functions that satisfy the requirement of the cryptographic hash function or combine function may be used to create a hybrid cryptographic hash function or a hybrid combine function, such as sha256(prepend(hash335, photograph_sha256)).
[0054] The cryptographic hash function herein may include one or more secure hash algorithms. Non-limiting examples of the cryptographic hash function herein include: sha-0, sha-1, sha-2, sha-3, sha-224, sha-256, sha-384, sha-512, sha-512/224, sha-512/256, sha3-224, sha3-256, sha3-384, sha3-512, shake128, shake256, or their combinations.
Timestamps for events
[0055] The sequence of cryptographic hash functions, e.g., hashes, can also be used to record that some piece of data was created before a particular cryptographic function index, e.g., hash index, was generated.
[0056] A piece of data may be combined with the current hash at the current index using a combine function. The data may be a cryptographically unique hash of arbitrary data. Any data of any finite size can be appended. Using a hash may be convenient because the size of the hash value can be fixed. The combine function can be an append operation, or any operation that is collision resistant. The combine function may combine the data such that all the bits before combination are present after combination. Non-limiting combination functions include, for example, prepend operations and mix operations. A combine function may include an operation that cannot be easily attacked using existing technologies. For example, an attacker may have to create a collision between a hash and the data they are trying to append in order to attack an append operation. An append operation can be collision resistant for this application because an attacker may not know in advance what piece of data may generate a known value when appended to the output and hashed. The attacker may have to attempt many possible strings to append, which may require 2^128 attempts for sha-256. In contrast, arithmetic operations, such as addition, subtraction, multiplication, division, or other operations, may not be used as a combination function because an attacker may precompute a separate sequence in parallel, and may join the real sequence and the separate sequence by inserting a segment of data that can add up to the starting value of the separate sequence.
[0057] As a non-limiting example, a sequence of hash functions may be called as below:
i) sha256("any starting value") -> hash1, 1
ii) ...
iii) sha256(hash199) -> hash200, 200
iv) ...
v) sha256(hash299) -> hash300, 300
[0058] When some external event occurs, e.g., a photograph is taken, or any arbitrary digital data is created, the cryptographic hash of the data is generated, e.g., photograph_sha256. The hash of the data can be appended to the binary data of the current hash, which is hash335, e.g., append(hash335, photograph_sha256). The next hash value, e.g., hash336, may be computed from the appended binary data of hash335 and the sha256 of the photograph as below:
i) sha256(hash334) -> hash335, 335, photograph_sha256
ii) sha256(append(hash335, photograph_sha256)) -> hash336, 336
iii) ...
iv) sha256(hash399) -> hash400, 400
[0059] In addition to the index, the sha256 of the photograph may also be recorded as part of the sequence output. Thus, this change to the sequence may be carried on to the next function in the sequence. In addition, anyone verifying this sequence can then recreate the change to the sequence. The verification can still be done in parallel, for example,
i) Core n
ii) sha256(hash299) -> hash300, 300
iii) sha256(hash334) -> hash335, 335, photograph_sha256
iv) Core m
v) sha256(append(hash335, photograph_sha256)) -> hash336, 336
vi) sha256(hash399) -> hash400, 400
[0060] As cryptographic functions still may be called/executed sequentially, it is known that things entered into the sequence may only occur before the future hashed value can be computed. For example, based on the sequential execution below, it can be determined that photograph2 may be created before hash 601, and photograph1 may be created before hash 336. Thus, the order in which events may occur can be advantageously determined without the need for external clocks or timestamps.
i) sha256(hash334) -> hash335, 335, photograph1_sha256
ii) sha256(append(hash335, photograph1_sha256)) -> hash336, 336
iii) ...
iv) sha256(hash599) -> hash600, 600, photograph2_sha256
v) sha256(append(hash600, photograph2_sha256)) -> hash601, 601
[0061] Inserting extra data into the sequence of hashes can result in an unpredictable change to the current output, and all subsequent output and/or input values in the sequence. Therefore, it can be impossible to precompute any future sequences based on prior knowledge of what data is mixed into the sequence.
[0062] The sequence may only need to mix a cryptographic value of the event data into the event sequence. The value may be obtained using cryptographic functions such as a hash function. The mapping of the cryptographic value of the event data to the actual event data as well as the actual event data can be stored outside of the sequence, and the actual event data can contain other information within itself such as metadata, real time stamps, and connection IPs.
[0063] Fig. 2 shows an example of a sequence of cryptographic hash values with a digital event. The digital event 203 with input data, e.g., a hash value of the original data, may be inserted into the sequence as part of the output 202 of the current cryptographic hash function 201, via a combination function. Thus, this insertion of the input data into the sequence may be carried on to the next function in the sequence, and the next output 204 is unpredictably changed because the inserted data forms part of its input. The input, output, the input data, and index are recorded in the sequence of cryptographic hash values. A timestamp for the input data may be generated based on the count/index before which it is inserted into the sequence. In this case, the event data is inserted before output and count/index 204.
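By way of non-limiting illustration, the following Python sketch mixes an event into the sequence with an append-based combine function and returns the counter as the event's relative timestamp; the tick counts and entry layout are illustrative assumptions.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class PohRecorder:
    """Iterates sha-256 and records entries; mixed-in event data receives a relative timestamp."""
    def __init__(self, seed: bytes):
        self.state = sha256(seed)
        self.counter = 1
        self.entries = [(self.counter, self.state.hex(), None)]

    def tick(self, n: int = 1):
        """Advance the sequence n iterations without external data."""
        for _ in range(n):
            self.state = sha256(self.state)
            self.counter += 1
        self.entries.append((self.counter, self.state.hex(), None))

    def record_event(self, event_data: bytes) -> int:
        """Combine the event hash with the current state via append, then hash."""
        event_hash = sha256(event_data)
        self.state = sha256(self.state + event_hash)   # append-based combine function
        self.counter += 1
        self.entries.append((self.counter, self.state.hex(), event_hash.hex()))
        return self.counter   # local (relative) timestamp for the event

poh = PohRecorder(b"any random starting value")
poh.tick(334)
print("photograph recorded at counter", poh.record_event(b"photograph bytes"))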
[0064] Fig. 3 shows an example of verifying the sequence of cryptographic hash values. After the sequence has been recorded, the recorded sequence, which starts with index 301 and ends with another index 303, may be split into 2 or more sub-sequences, for example, one with a starting index 301 and an ending index of 302, the other with a starting index 302 and an ending index of 303. The sub-sequences are fed into different computer cores 304, 305 or computer processors that run the cryptographic hash functions in parallel. In each iteration, between the starting and the ending index, the recorded output is compared with the generated output for verification. The verification, e.g., the fact that the recorded output matches the generated output in each iteration after insertion of the input data, can show that the input data and the timestamp of the input data are accurate.
Encrypted Sequence
[0065] The sequence herein may be encrypted to improve security of the values of cryptographic functions, thereby improving security of the temporal order of events. A public/private key encryption scheme may be used for encryption of the sequence herein. The encryption function may be executed using the output of a cryptographic function as its input. Given a public and private key pair, the encrypted sequence may be generated as:
i) sha256("any starting value") -> hash1, 1
ii) encrypt(hash1, Private Key) -> encrypted1, 1
iii) sha256(hash1) -> hash2, 2
iv) encrypt(hash2, Private Key) -> encrypted2, 2
v) ...
vi) encrypt(encrypted199, Private Key) -> encrypted200, 200
vii) encrypt(encryptedN, Private Key) -> encryptedN, N
[0066] The encrypted sequence may also be generated as:
sha256("any starting value") -> hash1, 1
sha256(hash1) -> hash2, 2
sha256(hashN) -> hashN+1, N+1
encrypt("any starting value", Private Key) -> encrypted0, 0
encrypt(hash1, Private Key) -> encrypted1, 1
encrypt(hashN, Private Key) -> encryptedN, N
encrypt(hashN+1, Private Key) -> encryptedN+1, N+1
The encrypted sequence may also be generated as:
sha256("any starting value") -> hash1, 1
encrypt(hash1, Private Key) -> encrypted1, 1
sha256(encrypted1, 1) -> hash2, 2
encrypt(hash2, Private Key) -> encrypted2, 2
sha256(encrypted2, 2) -> hash3, 3
encrypt(hash3, Private Key) -> encrypted3, 3
[0067] The encrypted sequence may also be generated as:
Hash(hash0, 0) -> hash1, 1
Hash(hashn, n) -> hashn+1, n+1
Encrypt(hashn+1, n+1) -> encrypt1, n+2
Hash(encrypt1, n+2) -> hashn+3, n+3
Hash(hashn+3, n+3) -> hashn+4, n+4
[0068] The verifier can then use the Public key to decrypt the results backwards, for example:
i) decrypt(encrypted200, Public Key) -> encrypted199, 199
ii) decrypt(encrypted199, Public Key) -> encrypted198, 198
iii) decrypt(encrypted1, Public Key) -> "data", 0
[0069] External data can be inserted into the sequence, for example, as below:
i) encrypt(encrypted334, Private Key) -> encrypted335, 335, photograph1_sha256
ii) encrypt(sha256(append(encrypted335, photograph1_sha256))) -> encrypted336, 336
[0070] To prevent the state (the encryptedN value) from growing in size, the combine function can cryptographically hash the combined data before and/or after encrypting it. Since the cryptographic hash is not reversible, the sequence may need to record the pre-combined state, and the state right after the combined result is hashed and encrypted.
[0071] The encryption herein may ensure the encrypted sequence can only be generated by the holder of the private key.
[0072] Another approach can be to use an encryption scheme on the seed of the chain, and continue the rest of the chain using an algorithm that is optimal for the current hardware. If sha256 is a commercially available intrinsic, then a secondary clock may take an initial seed as the output of the first clock, encrypt it with its private key, and start continuously hashing using sha256 the encrypted value. It can then be later verified by everyone that the Public key can be used to decrypt the original seed and tie it back to the primary clock. Public Key, initial seed, and the encrypted result of the seed may need to be published for verifiers to confirm that it is a continuation from the original clock.
[0073] The encryption herein requires a private key, a public key, and any encryption function. Non-limiting examples of the encryption functions include a symmetric encryption function, an asymmetric encryption function such as Rivest-Shamir-Adleman (RSA), data encryption standard (DES), Triple DES, and advanced encryption standard (AES).
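By way of non-limiting illustration, the following Python sketch follows the seed-encryption approach of paragraph [0072] using the third-party cryptography package (an assumption); a digital signature over the seed stands in here for "encrypting the seed with the private key," and verifiers check it with the public key rather than decrypting it.

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

seed = b"output of the primary clock"                 # initial seed taken from the first clock
signed_seed = private_key.sign(seed, pss, hashes.SHA256())

# Continue the secondary clock by repeatedly hashing the signed seed with sha-256.
state, sequence = hashlib.sha256(signed_seed).digest(), []
for counter in range(1, 201):
    state = hashlib.sha256(state).digest()
    if counter % 100 == 0:
        sequence.append((counter, state.hex()))

# Verifiers are given (public key, seed, signed_seed, sequence): they confirm the tie to the
# primary clock, then re-hash the sequence as usual.
public_key.verify(signed_seed, seed, pss, hashes.SHA256())   # raises an exception if invalid
print(sequence)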
Internet Service
[0074] This sequence may be used as a clock to determine the temporal order of event data. The clock can be run as an internet service, continuously creating a sequence of cryptographic function indexes. Event data, described by the cryptographic hash of the data for that event, can be recorded into the existing sequence, which may be published via the Internet. For instance, the longer the sequence, the harder it is for a malicious service to have precomputed it and "faked" the passage of time between events being entered into the sequence. If a sequence running for 1 year generates N hashes, the time between events entered into the sequence at index K and index K + N is about a year. A malicious service with a computer that is twice as fast may have to run for 6 months to create a sequence in which the order of the two events that are N indexes apart may be switched. The actual time for calculation may be adjusted as computer speed improves.
Synchronizing multiple clocks
[0075] The systems and methods herein enable synchronization among multiple clocks by mixing the sequence state from each clock to each other clock(s). For example,
i) Clock A
ii) Hash1a
iii) Hash2a
iv) Hash3a = hash(Hash2b, Hash1a)
v) Hash4a
vi) Clock B
vii) Hash1b
viii) Hash2b
ix) Hash3b = hash(Hash2a, Hash1b)
x) Hash4b
[0076] In this case, Clock A may receive a data packet from Clock B, which contains the last state from Clock B and/or the last state Clock B observed from Clock A. The next state hash in Clock A then may depend on the state from Clock B. Thus, it can be derived that hash2b happens sometime before hash4a. This property can be transitive, when three clocks are synchronized through a single common clock, B, e.g., A <= B <= C, the dependency between Clock A and C may be traced even though they are not synchronized directly. By periodically synchronizing the clocks, each clock can then handle a portion of external traffic, thus the overall system can handle a larger amount of events to track than a single clock at the cost of the accuracy due to network latencies between the clocks. Having multiple synchronized clocks may make deployment more resistant to attacks. As an example, one clock may be high bandwidth, and may receive many events to mix into its sequence; and another clock can be a high-speed, low-bandwidth clock that periodically mixes with the high bandwidth clock. The high speed sequence can create a secondary sequence of data that an attacker may have to reverse. When examining the combined stream of sequences, the faster sequence may have a larger number of hashes generated. An attacker may need to generate the larger number of hashes to create a forged record of events that may show them in reverse order.
[0077] Fig. 4 shows an example of synchronization among two different clocks/sequences of cryptographic hash values. A first cryptographic hash function 401 may generate a hash value 403 which may be received by a second cryptographic hash function 402 and recorded as its output 404. Thus, the next output/state 406 of the second hash function may depend on the previous output 404, and thus on the last state of the first hash function 403. It can be derived that hash value 403 occurs before hash value 406. Similarly, in a different direction, the hash value in the output 406 of the second hash function may be inserted back into the first hash function 405. Thus, it can be derived that hash value 406 occurs before any output generated using hash value 405 as its input.
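By way of non-limiting illustration, the following Python sketch shows two clocks mixing their latest states into each other's sequences; the message passing and entry layout are illustrative assumptions.

import hashlib

def sha256(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class Clock:
    def __init__(self, name: str, seed: bytes):
        self.name, self.state, self.counter, self.entries = name, sha256(seed), 1, []

    def tick(self):
        self.state = sha256(self.state)
        self.counter += 1
        self.entries.append((self.counter, self.state.hex(), None))

    def mix(self, other_state: bytes, other_name: str):
        """Record another clock's latest state; our next state now depends on it."""
        self.state = sha256(self.state + other_state)
        self.counter += 1
        self.entries.append((self.counter, self.state.hex(), other_name))

a, b = Clock("A", b"seed a"), Clock("B", b"seed b")
a.tick(); b.tick()                 # Hash2a, Hash2b
a.mix(b.state, "B")                # Clock A's next state depends on Clock B's last state
b.mix(a.state, "A")                # and vice versa in the other direction
a.tick(); b.tick()                 # Hash4a, Hash4b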
Use in a Distributed System
[0078] The sequence of hash values, events and the clock disclosed herein can be used in a distributed system. Many users may input data into the distributed system. Such data may include a computer program, a function, or any other information. Each piece of data may have a cryptographic hash value that uniquely identifies the data that is mixed into the sequence. In the case that the data is program(s), the sequence of hashes with the mixed data hashes is the order to execute the programs. Readers of the sequence can then evaluate each program in the exact same order, and each one should expect to see the same result as every other reader.
[0079] Users that receive a sequence of transactions/hash values can know that if they read and accept message N, then they may have received all the messages prior to message N. This may make it possible to trust the result from making all the transactions consistent across all the users that have read and accepted message N or a later message.
[0080] The closer the hash entry of the program is to the latest hash in the sequence, the more likely it may be subject to a reordering attack. Thus, in a distributed system where no reader trusts the creator of the sequence, readers can wait until the sequence gets long enough such that a reordering attack is unlikely and take action on the result of the program.
Attacks
[0081] To prevent a hidden attacker from generating a different ordered sequence with a faster computer, the cryptographic function can be constructed such that it can be inefficient to create a custom ASIC, either because standard CPUs from Intel or some other commercially available chip supplier already have an optimized hash function in their ASICs, or because the custom circuit size can be too large. The sequence(s) herein can be executed on a standard CPU that is cooled and overclocked to the highest currently possible speed such that it may not be possible to run a CPU that is twice as fast. While it may be possible to run a faster hashing service by using a more expensive cooling solution, it may only add marginal gains, and may require a significant expense in time and money to completely overcome the real sequence after it has been functioning for a significant amount of time. As computer technology advances, the cryptographic function can change and become more complex in its execution, and thus be less likely to be attacked. As an example, it may require the previous N outputs to be combined into the next input.
[0082] A competitive market can be created between agents who are running these sequences as clocks, so the agents can compete on generating the longest sequence either using previously agreed checkpoint, or by continuously mixing sequences together. Each agent can propose a generated sequence that is started from the last accepted checkpoint. For instance, each agent can mix his sequence into a central one and only the fastest one is selected. This may provide a reward for keeping the service running at the fastest possible clock rate, and thus be more resistant to malicious hidden faster clocks.
[0083] CPUs typically cannot run at lowest possible temperatures with maximum frequencies for a long time because of the stress on the components. CPUs may be rotated while each rotated CPU continuously mixes in whatever sequence it is able to generate. The result may be a continuous chain of hashes that were running at the maximum frequency on at least one of the CPUs.
Partial Reordering
[0084] In some embodiments, disclosed herein are methods for preventing the service from creating a duplicate sequence with a partial reordering, so that the service can be trusted by the users. For example, external clients may want to sequence 3 events, event1, event2, and event3 into a current sequence:
i) hash9a
ii) Event1, hash10a
iii) Event2, hash20a
iv) Event3, hash30a
[0085] If the events are all available to insert at the same time, or if a hidden clock that is slightly faster than that of the current sequence can be generated, then a second hidden sequence with the events in reverse order may be generated as follows:
i) hash9a
ii) Event3, hash10b
iii) Event2, hash20b
iv) Event1, hash30b
[0086] Both sequences may start at hash9a, so they may be equal in length (e.g. they may have the same number of hashes). However, only a single valid sequence may be desired by a client using the current sequence. To prevent this attack, each client-generated event may contain within itself the latest hash that the client observed from what it considers to be a valid sequence. Specifically, when a client creates the "Event1" hash, he or she may append the last observed hash, hash5a, as below:
i) Event1 = hash(append(event1 data, hash5a))
ii) Event1, hash10a
iii) Event2 = hash(append(event2 data, hash15a))
iv) Event2, hash20a
v) Event3 = hash(append(event3 data, hash10a))
vi) Event3, hash30a
[0087] When the sequence is published, Event3 can reference hash25a, and if hash25a is not ordered in the sequence prior to Event3, the consumers of the sequence know that it is an invalid sequence. The partial reordering attack may then be limited to the number of hashes produced between when the client observed an event and when the event is entered. Clients may generate or use software that does not assume the order is correct for the short period of hashes between the last observed hash and the inserted hash. In the example above, hash30a has a reference to hash10a, and since there may be a gap between when it was inserted (hash30a) and its reference (hash10a), the true order of events in that gap may not be established.
[0088] To prevent a malicious clock service from rewriting the client event hashes completely, the clients can submit a signature of the event data and the last observed hash instead of just a hash, for example:
i) Event3 = sign(append(event3 data, hash25a), client Private Key)
ii) Event3, hash30a
[0089] The mapping from the event hash or signature to event data and public key of a client can be published in a separate database for all the clients of the service to verify, for example, as:
i) (Public Key, hash25a, event3 data) <- lookup Event3
ii) Verify Event3 signature with Public Key
iii) Verify hash25a exists prior to hash31a in the sequence
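By way of non-limiting illustration, the following Python sketch shows a client binding its event to the last observed hash and signing the result, and a consumer verifying the signature and the reference; the Ed25519 signature scheme (via the third-party cryptography package) and the lookup-table layout are illustrative assumptions.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

client_key = Ed25519PrivateKey.generate()
last_observed_hash = bytes.fromhex("aa" * 32)         # e.g., hash25a observed by the client

event_data = b"event3 payload"
message = event_data + last_observed_hash             # append(event3 data, hash25a)
event_signature = client_key.sign(message)            # Event3 = sign(append(...), Private Key)

# Published lookup database: Event3 -> (public key, referenced hash, event data)
lookup = {hashlib.sha256(event_signature).hexdigest():
          (client_key.public_key(), last_observed_hash, event_data)}

# A consumer verifies the signature; separately, it checks that the referenced hash appears
# earlier in the published sequence than the point where the event was inserted.
pub, ref_hash, data = next(iter(lookup.values()))
pub.verify(event_signature, data + ref_hash)          # raises an exception if invalid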
[0090] Fig. 5 shows an example of the sequence of cryptographic hash values. A user event may have a hash value of 2zfd23423, and it may also contain the last hash value before the event data is generated, which is 62f5164c1. Thus, the user event and its hash value may link 62f5164c1 to the next generated block with a different hash 3d039eef, which prevents a malicious leader from reordering the events in a hidden side sequence. A leader may be a node in the block chain that decides the contents of the next block, and a malicious leader may be a leader who attempts to add an incorrect block or a block that may not be intended (e.g., a block that may not be intended by the leader).
[0091] Parallel Sequence Joining. An attacker may generate two sequences in parallel. Then "data" mixing attempts may be used to generate a hash collision such that both sequences have the same hash appear as an output value. If an attacker succeeds, then it can be possible to join the histories and create a sequence that looks longer than the time it took to compute it on a single computer core. To prevent this attack, the combine function and the sequence function herein may be collision resistant. For example, sha-256 may require 2^128 attempts to generate a collision. Using append as the combine operation may require the attacker to permute 2^128 possible combinations of data to append to generate a collision. Thus, with current technology, attacks by hash collision may be infeasible using selected collision resistant functions.
Stake Consensus
[0092] In another aspect, a specific instance of proof of stake permits quick confirmation of the current sequence produced by a proof of history (PoH) generator, for voting and selecting the next proof of history generator, and for disabling any misbehaving validators. This algorithm may depend on messages eventually arriving to all participating nodes within a certain timeout.
[0093] The term "bonds," as used herein, generally refers to a capital expense in a proof of work. A miner may purchase resources, such as hardware and electricity (power), and commit such resources to a single branch in a proof of work block chain. A bond may be a coin that a validator commits as collateral while they are validating transactions.
[0094] The term "slashing," as used herein, generally refers to a solution to a nothing-at-stake problem in a proof of stake (PoS) system. When a proof of voting for a different branch is published, that branch can destroy the validator bond. This may be an economic incentive configured to discourage validators from confirming multiple branches.
[0095] The term "supermajority," as used herein, may be at least 2/3 of the validators weighted by their bonds. A supermajority vote may indicate that the network has reached consensus, and at least 1/3 of the network may have had to vote maliciously for this branch to be invalid. This may put the economic cost of an attack to at least 1/3 of the market cap of the coin.
Bonding
[0096] A bonding transaction may take a user-specified amount of coin and move it to a bonding account under the user's identity. Coins in the bonding account may not be spent and may have to remain in the account until the user removes them. The user may only remove stale coins that have timed out. Bonds may be valid after a supermajority of the current stakeholders has confirmed the sequence.
Voting
[0097] A proof of history generator may publish a signature of the state at a predefined period. Each bonded identity may confirm that signature by publishing its own signed signature of the state. The vote may be a simple 'yes' vote, without a 'no' vote. If a supermajority of the bonded identities has voted within a timeout, then this branch may be accepted as valid.
Unbonding
[0098] Any missing N number of votes may mark the coins as stale and no longer eligible for voting. The user can issue an unbonding transaction to remove them. N may be a dynamic value based on the ratio of stale to active votes. N may increase as the number of stale votes increase.
In an event of a large network partition, this may allow the larger branch to recover faster than the smaller branch.
Elections
[0099] An election for a new PoH generator may occur when a failure of the PoH generator is detected. The validator with the largest voting power, or highest public key address if there is a tie, may be selected as the new PoH generator.
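By way of non-limiting illustration, a minimal Python sketch of this selection rule follows; the validator record layout is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Validator:
    public_key: str      # hex address
    voting_power: int    # bonded stake weighting the vote

def elect_new_generator(validators):
    # Largest voting power wins; on a tie, the numerically highest public key address wins.
    return max(validators, key=lambda v: (v.voting_power, int(v.public_key, 16)))

candidates = [Validator("0x0a", 50), Validator("0x1f", 70), Validator("0x03", 70)]
print(elect_new_generator(candidates).public_key)   # "0x1f"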
[0100] A supermajority of confirmations may be required on the new sequence. If the new leader fails before a supermajority of confirmations is available, the next highest validator may be selected and a new set of confirmations may be required.
[0101] To switch votes, a validator may need to vote at a higher PoH sequence counter and the new vote may need to contain the votes it wants to switch. Otherwise, the second vote may be slashable. Vote switching may only occur from a level or height that does not have a supermajority.
[0102] Once a PoH generator is established, a secondary may be elected to take over the transactional processing duties. If a secondary exists, it may be considered as the next leader during a primary failure.
[0103] Secondary and lower rank generators may be promoted to primary at a predefined schedule or if an exception is detected.
Election Triggers
Forked proof of history (PoH) generator
[0104] PoH generators may have an identity that signs the generated sequence. A fork can only occur in case the PoH generator identity has been compromised. A fork may be detected because two different historical records have been published on the same PoH identity.
Runtime Exceptions
[0105] A hardware failure or a bug, or an intentional error in the PoH generator, may cause it to generate an invalid state and publish a signature of the state that does not match the local validator's result. Validators may publish the correct signature via gossip and this event may trigger a new round of elections. Any validators that accept an invalid state may have their bonds slashed.
Network Timeouts
[0106] In some instances, a network timeout may trigger an election. This may occur in the event of system failure, such as a network failure.
Slashing
[0107] Slashing may occur when a validator votes two separate sequences. A proof of duplicate vote may remove the bonded coins from circulation and add them to the mining pool.
[0108] A vote that includes a previous vote on a contending sequence may not be eligible as proof of duplicate voting. Instead of slashing the bonds, this vote may remove the currently cast vote on the contending sequence.
[0109] Slashing may also occur if a vote is cast for an invalid hash generated by the PoH generator. The PoH generator may be expected to randomly generate an invalid state which may trigger a fallback to Secondary.
Secondary Elections
[0110] Secondary and lower ranked proof of history generators may be proposed and approved. A proposal is cast on the primary generator's sequence. The proposal contains a timeout; if the motion is approved by a supermajority of the vote before the timeout, the Secondary is considered elected, and can take over duties as scheduled. The Primary can do a soft handover to the Secondary by inserting a message into the generated sequence indicating that a handover can occur, or by inserting an invalid state and forcing the network to fall back to the Secondary.
[0111] If a secondary is elected and the primary fails, the secondary may be considered as the first fallback during an election.
Attacks
Tragedy of Commons
[0112] The PoS verifiers may confirm the state hash generated by the PoH generator. There may be economic incentive for them to do no work and simply approve every generated state hash.
To avoid this condition, the PoH generator may generate an invalid hash with a probability P. Any voters for this hash may be slashed. When the hash is generated, the network may immediately promote the secondary elected PoH generator.
Collusion with the PoH generator
[0113] A verifier that is colluding with the PoH generator may know in advance when the invalid hash is going to be produced and not vote for it. This scenario may be no different than the PoH identity having a larger verifier stake. The PoH generator may still have to do all the work to produce the state hash.
Censorship
[0114] Censorship or denial of service may occur when at least x (x< 1/2), e.g., 1/3, of the bond holders refuse to validate any sequences with new bonds. The protocol can defend against this form of attack by dynamically adjusting how fast bonds become stale. In the event of a denial of service, the larger partition can fork and censor the Byzantine bond holders. The larger network may recover as the Byzantine bonds become stale with time. The smaller Byzantine partition may not be able to move forward for a longer period of time.
[0115] In some examples, the algorithm may work as follows: A majority of the network, e.g., 2/3, may elect a new loom, or equivalently herein, a proof of history generator. The loom may then censor the Byzantine bond holders from participating. A proof of history generator may have to continue generating a sequence to prove the passage of time, until sufficient Byzantine bonds have become stale so the majority of the network has a 1-x, e.g., 2/3, majority. The rate at which bonds become stale may be dynamically adjusted based on what percentage of bonds is active. As such, the Byzantine minority fork of the network may have to wait longer than the majority fork to recover a supermajority. Once a supermajority has been established, slashing may be used to permanently disable the Byzantine bond holders.
Streaming proof of replication
[0116] In proof of replication, nodes that are storing a copy of data have to provide a proof that the copy has actually been stored in memory somewhere for the specified period of time. This may permit fast and streaming verifications of proof of replication, which may be enabled by keeping track of time in a PoH generated sequence. Replication may not be used as a consensus algorithm, though it may be a useful tool to account for the cost of storing the block chain history or state at a high availability.
Algorithm
[0117] With reference to FIG. 9, a cipher block chaining (CBC) encryption encrypts each block of data in a sequence, using the previously encrypted block in a binary operation on the input data where the bits are mixed using the XOR operator. Each replication identity may generate a key by signing a hash that has been generated by a PoH sequence. This may tie the key to a replicator's identity, and to a specific proof of history sequence. Only specific hashes can be selected (see Hash Selection below).
[0118] The data set may fully be encrypted block-by-block. Then, to generate a proof, the key is used to seed a pseudorandom number generator that selects a random 32 bytes from each block.
[0119] With reference to FIG. 10, a merkle hash may be computed with the selected PoH hash prepended to each slice. The root may be published, along with the key, and the selected hash that was generated. The replication node may be required to publish another proof in N hashes as they are generated by the proof of history generator, where N may be approximately ½ the time it takes to encrypt the data. The proof of history generator may publish specific hashes for proof of replication at a predefined period. The replicator node may select the next published hash for generating the proof. The hash may be signed and random slices may be selected from the blocks to create the merkle root.
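By way of non-limiting illustration, the following Python sketch generates such a proof: the data set is CBC-encrypted block by block with a key derived from the signed PoH hash, the same key seeds a pseudorandom selection of 32 bytes per block, and a merkle root is computed with the PoH hash prepended to each slice. The AES-CBC primitive (third-party cryptography package), the key derivation, and the sampling block size are illustrative assumptions.

import hashlib
import random
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16           # AES block size in bytes
SAMPLE_CHUNK = 1024  # assumed logical block size used for byte sampling

def sha256(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

def generate_proof(data: bytes, signed_poh_hash: bytes, poh_hash: bytes):
    key = sha256(signed_poh_hash)                    # key tied to identity and PoH sequence
    iv = poh_hash[:BLOCK]
    padded = data + b"\0" * (-len(data) % BLOCK)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()

    rng = random.Random(key)                         # pseudorandom byte selection seeded by key
    slices = []
    for i in range(0, len(ciphertext), SAMPLE_CHUNK):
        chunk = ciphertext[i:i + SAMPLE_CHUNK]
        sample = bytes(chunk[rng.randrange(len(chunk))] for _ in range(32))
        slices.append(poh_hash + sample)             # selected PoH hash prepended to each slice
    return merkle_root(slices)

root = generate_proof(b"replicated ledger data" * 100, b"signed hash", b"\x11" * 32)
print(root.hex())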
[0120] After a period of N proofs, the data may be re-encrypted with a new CBC key.
Verification
[0121] With N cores, each core can stream encryption for each identity. The total space required may be 2 blocks * N cores, since the previous encrypted block may be necessary to generate the next one. Each core can then be used to generate all the proofs that derived from the current encrypted block.
[0122] The total time to verify proofs may be equal to the time it takes to encrypt. The proofs themselves may consume a few random bytes from the block, in which case the amount of data to hash may be significantly lower than the encrypted block size. The number of replication identities that can be verified at the same time may be equal to the number of available cores. In some cases, a graphics processing unit (GPU) may have greater than about 3500 cores, and in some cases run at 1/2 to 1/3 the clock speed of a central processing unit (CPU).
Key Rotation
[0123] Without key rotation, the same encrypted replication can generate cheap proofs for multiple proof of history sequences. Keys are rotated periodically and each replication is re-encrypted with a new key that is tied to a unique proof of history sequence.
[0124] Rotation may need to be sufficiently slow such that it is practical to verify replication proofs on GPU hardware, which is slower per core than on CPUs.
Hash Selection
[0125] A proof of history generator may publish a hash to be used by the entire network for encrypting one or more proofs of replication, and for use as the pseudorandom number generator for byte selection in fast proofs. A hash may be published at a periodic counter that is roughly equal to 1/2 the time it takes to encrypt the data set. Each replication identity may use the same hash, and use the signed result of the hash as the seed for byte selection, or the encryption key. The period that each replicator may provide a proof may be smaller than the encryption time. Otherwise the replicator can stream the encryption and delete it for each proof.
[0126] A malicious generator may inject data into the sequence prior to this hash to generate a specific hash. This attack is discussed in depth here.
Proof validation
[0127] The proof of history node may not validate the submitted proof(s) of replication. It may only keep track of a number of pending and verified proofs submitted by a replicator. A proof may become verified when the replicator is able to have the proof signed by 2/3rds of the validators in the network.
[0128] The verifications may be collected by the replicator via a network (e.g., a peer to peer gossip network), and submitted as one packet that contains at least 2/3 of the validators in the network. This packet may verify all the proofs prior to a specific hash generated by the proof of history sequence, and can contain multiple replicator identities at once.
Attacks
Spam
[0129] A malicious user may create many replicator identities and spam the network with bad proofs. To facilitate faster verification, nodes may be required to provide the encrypted data and the entire merkle tree to the rest of the network when they request verification.
[0130] The proof of replication may be configured for cheap verification of any additional proofs, as they take no additional space. However, each identity may consume one core of encryption time.
Tragedy of Commons
[0131] Verifiers may approve proofs without verification. Economic incentives may be aligned with verifiers actually doing work. The payout for proof of replication (PoRep) proofs may come out of the same mining budget as PoS verifiers. As such, increasing the replication target without actually verifying may only transfer mining and fee payout from PoS verifiers to PoRep producers.
[0132] Another approach may be to have each PoRep produce one or more false proofs. The PoS verifier’s job may be to find the false proofs. PoRep producers may prove that the proof is false by producing the function that produced the false data.
Partial Erasure
[0133] A replicator node may attempt to partially erase some of the data to avoid storing the entire state. Such an attack may be made difficult by the number of proofs and the randomness of the seed.
Collusion with PoH generator
[0134] The signed hash may be used to seed the sample. If a replicator could select a specific hash in advance, then the replicator may erase all bytes that are not going to be sampled.
[0135] A replicator identity that is colluding with the proof of history generator may inject a specific transaction at the end of the sequence before the predefined hash for random byte selection is generated. With enough cores, an attacker may generate a hash that is preferable to the replicator identity.
[0136] This attack may only benefit a single replicator identity. Since all the identities may have to use the same exact hash that is cryptographically signed with the Elliptic Curve Digital Signature Algorithm (ECDSA), or an equivalent algorithm for signing data with an elliptic curve cryptographic key, the resulting signature may be unique for each replicator identity, and collision resistant. A single replicator identity may only have marginal gains.
Denial of Service
[0137] The cost of adding an additional replicator identity may be equal to the cost of storage. The cost of adding extra computational capacity to verify all the replicator identities may be equal to the cost of a CPU or GPU core per replication identity.
[0138] This may create an opportunity for a denial of service attack on the network by creating a large number of valid replicator identities.
[0139] To limit this attack, the consensus protocol chosen for the network can select a replication target, and award the replication proofs that meet the desired characteristics, such as availability on the network, bandwidth, and geolocation.
Tragedy of Commons
[0140] The PoS verifiers may confirm PoRep without doing any work. The economic incentives may be lined up with the PoS verifiers to do work, such as by splitting the mining payout between the PoS verifiers and the PoRep replication nodes.
[0141] To further avoid this scenario, the PoRep verifiers can submit false proofs a small percentage of the time. They can prove the proof is false by providing the function that generated the false data. Any PoS verifier that confirmed a false proof may be slashed.
Data Broadcast
[0142] The proof of history generator may need to broadcast the data to other users or nodes, e.g., the rest of the network. Nodes disclosed herein act as verifiers of the data that is broadcast by the Proof of History generator. To maintain high availability, the goal of the network is to maximize the number of verifier nodes. Because the network bandwidth of the generator may have a physical limit, the generator may need to efficiently split up the data between nodes with minimal overhead.
[0143] In some embodiments, the generator includes an algorithm to split the data. Nodes can be arranged using a predetermined structure, for example, as a heap data structure weighted by their Proof of Stake bond size. The top of the heap can be the generator. The next layer of nodes can represent its children: the generator is the parent of the first layer of nodes, and the first layer of nodes can be the parents of the second layer. The number of children for each layer can depend on the distribution of the bonds. In some embodiments, at least 2/3rds + 1 of the weighted stake needs to be in the first layer. In some embodiments, packet sizes are determined by a maximum size, limited by the network capacity and/or the Internet Protocol, that can fit into the available bandwidth of the generator.
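A non-limiting sketch of arranging nodes into such layers is shown below. The Node type, its field names, and the greedy layering strategy are illustrative assumptions rather than a definitive implementation: the first layer is filled until at least 2/3 + 1 of the total weighted stake is covered, and the remaining nodes are chunked into lower layers by a configured fan-out.

// Illustrative sketch only: Node, its fields, and the greedy layering are assumptions.
#[derive(Clone)]
struct Node {
    id: u64,
    stake: u64,
}

fn build_layers(mut nodes: Vec<Node>, fanout: usize) -> Vec<Vec<Node>> {
    // Heaviest bonds first, so the highest-staked nodes sit closest to the generator.
    nodes.sort_by(|a, b| b.stake.cmp(&a.stake));
    let total: u64 = nodes.iter().map(|n| n.stake).sum();
    let threshold = total * 2 / 3 + 1;

    // Fill the first layer until at least 2/3 + 1 of the weighted stake is covered.
    let mut first_layer = Vec::new();
    let mut covered = 0u64;
    let mut rest = Vec::new();
    for node in nodes {
        if covered < threshold {
            covered += node.stake;
            first_layer.push(node);
        } else {
            rest.push(node);
        }
    }

    // Remaining nodes form lower layers of at most `fanout` nodes each.
    let mut layers = vec![first_layer];
    for chunk in rest.chunks(fanout) {
        layers.push(chunk.to_vec());
    }
    layers
}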
[0144] The generator can split up the data into packets, optionally of the same data size, one for each child node. The packets may have a monotonically increasing index. Each child node can compute the same heap data structure and be aware of its index in its peer list at the same layer. When the child node receives the packet with the index value that matches the child's index value, it may retransmit the packet to one or more nodes in its list, for example, every peer in its list.
[0145] After each child retransmits its own packet to all of its peers, all the peers may then have received all the packets that were transmitted by the generator.
[0146] In some embodiments, some of the children may be unavailable due to hardware or network failures. The generator can insert erasure coded packets for one or more child nodes, e.g., using Reed-Solomon coding or other coding techniques. Child nodes receiving such packets can then reconstruct the full packet stream from the erasure coded packets if some of the packets are missing. As an example, if the percentage of unavailable nodes needs to be below some threshold, e.g., 10%, the generator can use 10% of its bandwidth for erasure coding, and all the children may be able to reconstruct the full data set. Using erasure coding may also account for inconsistencies in the state of the heap data structure between the nodes.
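As a non-limiting illustration of the recovery property, the sketch below uses a single XOR parity packet per group of equal-sized packets as a simplified stand-in for Reed-Solomon coding; the helper names are assumptions. With one parity packet per group, a child that is missing exactly one packet of the group can reconstruct it.

// Simplified stand-in for erasure coding (not real Reed-Solomon): one XOR parity
// packet per group lets a child recover a single lost packet of that group.
fn parity_packet(group: &[Vec<u8>]) -> Vec<u8> {
    let len = group.iter().map(|p| p.len()).max().unwrap_or(0);
    let mut parity = vec![0u8; len];
    for packet in group {
        for (i, byte) in packet.iter().enumerate() {
            parity[i] ^= byte;
        }
    }
    parity
}

// Recover the single missing packet by XOR-ing the parity with the packets received.
fn recover_missing(received: &[Vec<u8>], parity: &[u8]) -> Vec<u8> {
    let mut missing = parity.to_vec();
    for packet in received {
        for (i, byte) in packet.iter().enumerate() {
            missing[i] ^= byte;
        }
    }
    missing
}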
[0147] The children may retransmit the data to the next layer down. For the lower layers, the child nodes can have multiple parent nodes. In this case, each parent node can retransmit different packets down to different children, to avoid overlapping with other parent nodes. Each child then can forward the parent packets they receive to their peers, while avoiding sending duplicate packets.
[0148] In some embodiments, an algorithm may be used for controlling packet retransmission for child nodes with multiple parent nodes. A non-limiting example of an algorithm is:
(a) if packet is from parent
1. for each peer
i. if peer.index % parents.size != parent.index
ii. send packet to peer
[0149] In some embodiments, the algorithm is designed to forward the missing packets from each parent node to each peer child node while avoiding overlap with any other peer doing the same thing. The child node can then forward the packets to the downstream nodes, i.e., its child nodes. A non-limiting example of an algorithm is:
(a) for each child
1. send packets[(child.index + my.index) % packets.size] to child
[0150] In some embodiments, the algorithm is designed such that the different packets are forwarded from each parent to each child node downstream, thus minimizing the number of packets each subsequent child has to retransmit to their peers.
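A non-limiting sketch of both retransmission rules is shown below. The function names, the representation of peers and children as index lists, and the injected send callback are illustrative assumptions.

// Sketch of the two retransmission rules described above (hypothetical types).
// `my_index` is this node's position among its peers at the same layer.

// Rule 1: forward a packet received from a parent to every peer whose index,
// taken modulo the number of parents, does not map to that parent; that peer
// will not receive this packet from its own parent, so we cover the gap.
fn retransmit_to_peers(
    packet: &[u8],
    parent_index: usize,
    parents_len: usize,
    peers: &[usize],
    send: &mut impl FnMut(usize, &[u8]),
) {
    for &peer_index in peers {
        if peer_index % parents_len != parent_index {
            send(peer_index, packet);
        }
    }
}

// Rule 2: forward a different packet to each downstream child, offset by this
// node's own index, so sibling parents send non-overlapping packets.
fn forward_to_children(
    packets: &[Vec<u8>],
    my_index: usize,
    children: &[usize],
    send: &mut impl FnMut(usize, &[u8]),
) {
    for &child_index in children {
        let packet = &packets[(child_index + my_index) % packets.len()];
        send(child_index, packet);
    }
}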
[0151] In cases where there is inconsistency in the heap data structure keeping track of parents and children between the nodes, the erasure coded packets as disclosed herein may allow the network to recover the full data set.
GPU Accelerated signature verification
[0152] In some embodiments, packets that are transmitted through the network may be signed by the originator of the packet. The signatures can use Elliptic Curve Cryptography. Other cryptographic schemes include, but are not limited to, RSA or other asymmetric encryption schemes. The verification of the packets can be a relatively slow process, e.g., taking about 10s of milliseconds; to maintain high performance on the network, the verification may be accelerated with GPUs. The acceleration can be pipelined with receiving data packets from the network. In some embodiments, the verification is accelerated by at least 10x with a single GPU card.
[0153] A non-limiting example of a method for accelerated signature verification includes an operating system thread that may continuously read data packets, e.g., from a network socket, into a memory allocated buffer. When the buffer is full, or the data source, e.g., the socket, has no more packets, the thread may send a pointer to the entire buffer to a GPU queue and wake up the GPU controller thread. If there are no more packets, the thread may also perform a blocking read call, which can wake up the thread when new network data is available from that socket. The GPU queue may include one or more data buffers.
[0154] The GPU controller thread can read data in one or more buffers from the queue simultaneously or sequentially, and send the data batch from the buffer(s), splitting it among more than one available GPU. The output of the GPU processing can include a vector of Boolean values that indicate whether each data packet passed or failed signature verification, optionally for one or more batches of data packets. One or more batches of data packets, along with the vector of verifications, are queued for processing in the next stage of a pipeline. The GPU controller thread then may check its queue to determine if there are any more batches to process.
[0155] If there are more batches to process, the processing thread can read one or more buffers from the queue simultaneously or sequentially and process them in parallel alongside the verification vector.
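A non-limiting sketch of such a pipeline is shown below, using operating system threads and channels as the queues. The GPU kernel itself is stubbed out by a placeholder verify_batch function, and the types and channel layout are assumptions rather than a definitive implementation.

// Minimal pipeline sketch; verify_batch stands in for the GPU-accelerated check.
use std::sync::mpsc;
use std::thread;

type Packet = Vec<u8>;

// Placeholder for the GPU-accelerated signature check: one Boolean per packet.
fn verify_batch(batch: &[Packet]) -> Vec<bool> {
    batch.iter().map(|p| !p.is_empty()).collect()
}

fn main() {
    let (buf_tx, buf_rx) = mpsc::channel::<Vec<Packet>>();               // reader -> GPU controller
    let (out_tx, out_rx) = mpsc::channel::<(Vec<Packet>, Vec<bool>)>();  // controller -> next stage

    // Reader thread: fill a buffer (simulated here), then hand it off to the queue.
    let reader = thread::spawn(move || {
        for _ in 0..4 {
            let buffer: Vec<Packet> = (0..1024).map(|i| vec![i as u8; 128]).collect();
            buf_tx.send(buffer).unwrap();
        }
    });

    // GPU controller thread: drain buffers, verify in batches, queue results downstream.
    let controller = thread::spawn(move || {
        for buffer in buf_rx {
            let results = verify_batch(&buffer);
            out_tx.send((buffer, results)).unwrap();
        }
    });

    // Next pipeline stage: consume packets alongside the verification vector.
    for (packets, results) in out_rx {
        let valid = results.iter().filter(|&&ok| ok).count();
        println!("batch of {} packets, {} passed verification", packets.len(), valid);
    }

    reader.join().unwrap();
    controller.join().unwrap();
}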
System Architecture
[0156] An architecture of a system that may be used to implement stake consensus is shown in FIG. 11. In this system, a loom, used interchangeably herein with proof of history generator, is an elected proof of history generator. It may consume arbitrary user transactions and output a proof of history sequence of all the transactions that guarantees a unique global order in the system. After each batch of transactions, the loom may output a signature of the state that is the result of running the transactions in that order. This signature and the last transaction hash may be signed with the identity of the loom.
State
[0157] A naive hash table may be indexed by a user's address. Each cell may include the full address of the user and the memory required for this computation. For example, transactions may include [<20 byte ripemd-160(sha256(user's public key))><8 byte account value><4 byte unused>] for a total of 32 bytes. A proof of stake bond table may include [<20 byte ripemd-160(sha256(user's public key))><64 bits bond value><Last cast vote 8 bytes><20 byte, list of uncast votes><20 bytes unused>] for a total of 64 bytes.
Spool, State Replication
[0158] These nodes may replicate the block chain state and provide high availability of the block chain state. The replication target may be selected by the consensus algorithm, and the validators in the consensus algorithm select and vote for the proof of replication nodes they approve of based on off-chain defined criteria.
[0159] The network may be configured with a minimum proof of stake bond size, and a requirement for a single replicator identity per bond.
Validators
[0160] These nodes may be consuming bandwidth from spools. They are virtual nodes, and can run on the same machines as the spools or the Loom, or on separate machines that are specific to the consensus algorithm configured for this network.
Examples of network limits
[0161] With reference to FIG. 12, in some examples, the loom takes incoming user packets, orders them in the most efficient way possible, and sequences them into a proof of history sequence that is published to downstream spools. Efficiency may be based on memory access patterns of the transactions, so the transactions may be ordered to minimize faults and to maximize prefetching.
[0162] In an example, an incoming packet may have the following format: [<last valid hash, 20 bytes, 8 byte counter>, <unused, 6 bits>, <payload size, 10 bits>, <payload>, <fee, 8 bytes> <from, 32 bytes>, <signature, 32 bytes>]; Size: 20 + 8 + 16 + 8 + 32 + 32 = 116 bytes. The minimal payload that can be supported may be 1 destination account.
[0163] A payload may be: [<to 20 byte ripemd-l60 hash>, <amount 8 byte>]. The minimum size may be 24 bytes.
[0164] The proof of history sequence may add an additional hash and counter to the output: [<current hash, 20 byte>, <8 byte counter>, <last valid hash, 20 bytes, 8 byte counter>, <unused, 6 bits>, <size, 10 bits>, <payload>, <fee, 8 bytes> <from, 32 bytes>, <signature, 32 bytes>].
The minimum size of the output packet may be: 116 + 28 + 28 = 172.
[0165] Multiple transactions may be batched on the same hash. On a 1 gigabit per second (gbps) network connection, the maximum number of transactions may be 1 gigabit per second / 172 bytes = 728k tps max. Some loss (1-4%) may be expected due to Ethernet framing. The spare capacity over the target amount for the network can be used to increase availability by coding the output with Reed-Solomon codes and striping it to the available downstream spools.
Decentralized Storage for a Multi-petabyte Digital Ledger
[0166] Block chains or networks, such as those for cryptographically secure and trustless time source implementation using methods and systems provided elsewhere herein, may generate a significantly large volume of data, e.g., almost 4 petabytes of data a year at full speed and capacity (710,000 transactions per second on a 1 gigabit network). If every node in the network is required to store all that data, it may limit network membership to the centralized few that have sufficient storage capacity. Disclosed herein is a Proof of History system and method that can be leveraged to avoid this situation, allowing a fast-to-verify implementation of Proof of Replication and enabling a bit torrent-esque distribution of the ledger across all nodes in the network.
[0167] The basic idea of Proof of Replication (PoRep) may be to encrypt a dataset with a public symmetric key using CBC encryption and then hash the encrypted dataset. Unfortunately, the problem with this simple approach is that it can be vulnerable to attack. For instance, a dishonest storage node can stream the encryption and delete the data as it is hashed. The simple solution can be to force the hash to be done on the reverse of the encryption, or in some embodiments, in a random order. This can ensure that all the data is present during the generation of the proof, but it may also require the validator to have the entirety of the encrypted data present for verification of every proof of every identity. So the space required to validate can be (Number of CBC keys)*(data size).
[0168] Disclosed herein are systems and methods that can randomly sample the encrypted blocks faster than it takes to encrypt, and record the hash of those samples into the PoH ledger. Thus the blocks can stay in the exact same order for every PoRep, and verification can stream the data and verify all the proofs in a single batch. This way multiple proofs can be verified concurrently, each one on its own CUDA core. With the current generation of graphics cards, the network can support up to 14k replication identities or symmetric keys. The total space required for verification can be (2 CBC blocks) * (Number of CBC keys), with a core count equal to (Number of CBC keys). A CBC block is expected to be 1MB in size.
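A non-limiting sketch of the sampling step is shown below, assuming the sha2 crate for SHA-256. The seed would be the replicator's signature over the designated PoH hash; the same seed deterministically selects which encrypted CBC blocks are mixed into the proof hash, so a verifier streaming the same data can recompute the identical sample. The function name and block representation are illustrative assumptions.

// Sketch only: derive pseudo-random sample indices from the seed and mix the
// sampled encrypted blocks into a single proof hash.
use sha2::{Digest, Sha256};

fn sample_proof(encrypted_blocks: &[Vec<u8>], seed: &[u8], samples: usize) -> Vec<u8> {
    let mut hasher = Sha256::new();
    let mut state = seed.to_vec();
    for _ in 0..samples {
        // Derive the next pseudo-random index from the evolving seed state.
        state = Sha256::digest(&state).to_vec();
        let index = u64::from_le_bytes(state[..8].try_into().unwrap()) as usize
            % encrypted_blocks.len();
        // Mix the sampled encrypted block into the proof hash.
        hasher.update(&encrypted_blocks[index]);
    }
    hasher.finalize().to_vec()
}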
[0169] As an example, validators for PoRep can be the same validators that are verifying transactions. They may have some stake that they have put up as collateral that ensures that their work is honest. If it can be proven that a validator verified a fake PoRep, then the validator’s stake can be slashed.
[0170] Replicators can be specialized thin clients, which are clients that do not need to fully replicate the state that is processed by the network. The clients or replicators can download a part of the ledger, store it, and provide PoReps of storing the ledger. For each verified PoRep, replicators may earn a reward of sol from the mining pool. One or more of the following constraints may exist: 1) at most 14k replication identities can be used, because that is how many CUDA (GPU) cores can fit in a $5k box at the moment; and 2) verification requires generating the CBC (Cipher Block Chaining) blocks, which can require space of 2 blocks per identity and 1 CUDA core per identity for the same dataset. Many identities can be batched at once, with the proofs for those identities verified concurrently for the same dataset. In this example, the network can set the replication target number, e.g., 1k. 1k PoRep identities can be created from signatures of a PoH hash, so they can be tied to a specific PoH hash. It may not matter who creates them; they may simply be the last 1k validation signatures for the ledger at that count. This can be just the initial batch of identities, because identity rotation can be staggered.
[0171] Any client or users can use any of these identities to create PoRep proofs. Replicator identities can be the CBC encryption keys. Periodically, at a specific PoH count, in some embodiments, replicators that want to create PoRep proofs can sign the PoH hash at that count. That signature can be the seed used to pick the block and identity to replicate. A block may be 1TB of ledger. Periodically, at a specific PoH count, replicators can submit PoRep proofs for their selected block. A signature of the PoH hash at that count can be the seed used to sample the 1TB encrypted block, and hash it. This can be done faster than it takes to encrypt the 1TB block with the original identity. Replicators can submit some number of fake proofs, which they can prove to be fake by providing the seed for the fake hash result. Periodically at a specific PoH count, in some embodiments, validators can sign the hash and use the signature to select the 1TB block that they need to validate. They can batch all the identities and proofs and submit approval for all the verified ones. Replicator client can submit the proofs of fake proofs.
[0172] For any random seed, everyone may be forced to use a signature that is derived from a PoH hash. Everyone must use the same count, so the same PoH hash is signed by every participant. The signatures are then each cryptographically tied to the keypair, which prevents a leader from grinding on the resulting value for more than 1 identity.
[0173] It may be helpful to stagger the rotation of the identity keys. Once staggering the rotation of the identity key is being performed, the next identity can be generated by hashing itself with a PoH hash, or via some other process based on the validation signatures.
[0174] Since there are many more client identities than encryption identities, the reward can be split among multiple clients, and this can prevent Sybil attacks from generating many clients to acquire the same block of data. To remain Byzantine Fault Tolerant (BFT), a single human entity cannot store all the replications of a single chunk of the ledger.
[0175] Provided herein is a solution which may force the clients to continue using the same identity. If the first round may be used to acquire the same block for many client identities, the second round for the same client identities may force a redistribution of the signatures, and therefore PoRep identities and blocks. Thus to get a reward for storage, clients may need to store the first block for free and the network can reward long lived client identities more than new ones.
Grinding Attack Vectors
[0176] Generating many identities to select a specific block
[0177] The first time a keypair may be used to generate an identity it may not be eligible for reward and it may not be counted towards the replication target. Since identities may have no choice on what PoH to sign, and all may have to sign the same PoH for block assignment, the second generation of blocks can be unpredictable from the first set.
[0178] Generating a specific PoH hash for a predictable seed
[0179] An attack can only influence one seed, so 1 block from many. With the total number of possible blocks being CBC keys * data slices, it can be a reward loss the network can afford.
[0180] The cost of verification of PoRep can be reduced by using PoH, which can actually make it feasible to verify a large number of proofs for a global dataset.
[0181] Grinding may be eliminated by forcing everyone to sign the same PoH hash and use the signatures as the seed.
[0182] The game or procedure between validators and replicators is over random blocks and random encryption identities and random data samples. The goal of randomization may be to prevent colluding groups from having overlap on data or validation.
[0183] Replicator clients can fish for lazy validators by submitting fake proofs that they can prove are fake.
[0184] Replication identities can be symmetric encryption keys. Some of the replication identities may be storage replication targets. Many more client identities can exist than replicator identities, so an unlimited number of clients can provide proofs for the same replicator identity.
[0185] To defend against grinding client identities that try to store the same block, the client(s) may be forced to store for multiple rounds before receiving a reward.
Smart Contracts with Infinite Disk
[0186] The present disclosure also provides a decentralized storage network for a multi-petabyte ledger. The flexible smart contracts engine disclosed elsewhere herein can be used as virtualized disk space for contracts.
[0187] The systems and methods herein can schedule asynchronous and eventually guaranteed transactions on the network that are called Signals. The runtime can guarantee that all the state of the contract has been generated by only the contract code, and that all user inputs can be recorded on chain. Thus, all resident contract state can be recreated by running the ledger.
Storing this state in DDR can be the most expensive part of the runtime. Memory can be significantly more expensive than storage, and cannot be easily striped and distributed, since fast random access is needed. Thus, the following operations can be offered to programs running in the network that need long term storage of large state:
i) /// move the resident state memory into the ledger
store(page_address: PublicKey);
ii) /// restore the resident state memory from the ledger
load(page_address: PublicKey);
[0188] Contracts can create a Signal which can move the memory that belongs to the PublicKey into the ledger, thus reducing its costs. The contract can later call load to bring that memory back.
[0189] Storage can be fairly simple. The memory can simply be sent as user data over a transaction and put into PoH. The PublicKey can then store the location on the ledger where the memory is resident. Load can be much more complicated. Since nodes may not be required to keep a full copy of the ledger, each node may only maintain a randomly selected stripe. To do a load, each verifier may have to retrieve the state from the network, and then agree that the loaded state is correct. Because this operation can be slow, due to the fact that the data needs to be fetched over the network instead of being locally available, it cannot be synchronous with respect to all other operations on the chain, especially finality, which is a measurement of how fast all the nodes in the network agree on a state. So this operation may be broken up into several asynchronous steps that eventually complete:
i) load transaction is signaled
ii) each verifier fetches the data
iii) each verifier computes the hash of the loaded data
iv) when the super majority of the verifiers have agreed on the hash, the memory is
committed to the virtual machine state
v) the verification signature of the next virtual machine state reflects the change
[0190] Many rounds of finality can occur between step (i) and step (iv).
[0191] The data the contract writes to the ledger can be recomputed from the ledger itself. So this piece of data can actually be forgotten after the load operation completes. This can mean that when the next round of ledger slices is selected, the replicator nodes may not have to store the data bytes that were loaded into resident memory. They can actually be removed from replication. What is kept or stored can be the hash of those bytes that was mixed into Proof of History, so the integrity of the clock can remain. For example, as shown in Table 1, if the data at PoH count 1000000 is memory that is loaded into the virtual machine, the 65600 bytes at offset 23423288 to offset 23488888 can be removed from replication. The next round of replication can skip storing this data.
Table 1.
[0192]
| PoH Count | PoH    | Data Offset | Data Hash |
| 1000000   | 0x23a.. | 23423288   | 0x023f..  |
| 2000000   | 0x3fc.. | 23488888   | 0xf2d2..  |
High performance memory management for smart contracts
[0193] The present disclosure describes how smart contracts work on the Proof of History based block chain herein. The present disclosure includes one or more of: high performance bytecode designed for fast verification and compilation to native code, memory management designed for fast analysis of data dependencies, and execution of smart contracts that can be parallelized across as many cores as the system can provide. As an example, smart contract execution can be based on how operating systems load and execute dynamic code in the kernel.
[0194] As shown in FIG. 13, in a particular embodiment, an untrusted client, or Userspace in Operating Systems terms, can create a program in the front-end language of their choice (like C/C++/Rust/Lua), and compile it with LLVM (Low Level Virtual Machine compiler) to the Solana Bytecode object. This object file can be a standard ELF (Executable and Linkable Format) file. The computationally expensive work of converting frontend languages into programs can be done locally by the client. The frontend to LLVM may take a user-supplied program in a higher level language such as C/C++/Rust/Lua, or may be called as a library from JavaScript or any other language. The LLVM toolchain may perform the actual work of converting the program to an ELF. The output can be an ELF with a specific bytecode as its target that is designed for quick verification and conversion to the local machine instruction set that Solana can be running on.
On the blockchain network, the ELF can be verified, loaded, and executed. The verifier can check whether the bytecode is valid and safe to execute and may convert it to the local machine instruction set. The loader can prepare the memory necessary to load the code and mark the segment as executable. The runtime can call the program with arguments and manage the changes to the virtual machine.
Bytecode
[0195] In existing methods, most of the focus on performance of smart contracts has been on moving to WASM. The bytecode itself does not matter. While the bytecode disclosed herein can be based on the Berkeley Packet Filter (BPF), anything that can JIT (just-in-time compile) to x86 (or SPIR-V, a GPU-specific bytecode format) can be used herein. The reason for basing the bytecode on BPF may be that what the kernel does with untrusted code can overlap almost exactly with one or more of the following requirements:
a) Deterministic amount of time to execute the code;
b) Bytecode that is portable between machine instruction sets;
c) Verified memory accesses; and
d) Deterministic and short amount of time to load the object, verify the bytecode, and JIT to the local machine instruction set; an instruction set that is simpler, faster, and easier to verify is preferred.
Memory Management
[0196] Memory management may be the most performance-critical part of the engine. If all the contracts that are scheduled have no data dependencies, they can all be executed concurrently. If successful, the performance of the engine can scale with the number of cores available to execute the contracts. The throughput can double every 2 years with Moore’s law.
[0197] Memory management can start with the ELF (Executable and Linkable Format) itself.
Initially, contracts may be constrained to be a single read-only code and data segment. It can be composed of read-only executable code and read-only data, with no mutable global variables or mutable static variables. This requirement can provide a simple solution for requirement (d) as disclosed above.
[0198] Since smart contracts themselves can hold no state, the runtime can provide an interface for creating state. This interface can be invoked through a transaction just like any other contract method.
/// Allocate memory to the page_address PublicKey
/// Assign the Page to the Contract
allocate_memory(page_address: PublicKey, contract: PublicKey, size: u64)
Allocate Memory
[0199] If the page address public key has no current memory region associated with it, the region can be allocated and the allocated memory is set to 0. If the page address public key is unassigned, it can be assigned to the contract public key. The only code that can modify the memory assigned to this page is code that is provided by the contract. Thus, all state transitions in that memory can be done by the contract. If one or more of the conditions disclosed herein fail, the call may fail. This interface can be called with a simple transaction.
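A non-limiting sketch of these conditions is shown below. The PageTable type, the use of an all-zero key to mark an unassigned page, and the error handling are illustrative assumptions; the Page fields mirror the definition given later in this disclosure.

// Sketch only: PageTable and the "unassigned" sentinel are assumptions.
use std::collections::hash_map::Entry;
use std::collections::HashMap;

type PublicKey = [u8; 32];

struct Page {
    owner: PublicKey,
    contract: PublicKey,
    tokens: i64,
    memory: Vec<u8>,
}

struct PageTable {
    pages: HashMap<PublicKey, Page>,
}

impl PageTable {
    fn allocate_memory(&mut self, page_address: PublicKey, contract: PublicKey, size: u64)
        -> Result<(), &'static str>
    {
        match self.pages.entry(page_address) {
            // No region yet: allocate it, zero the memory, and assign it to the contract.
            Entry::Vacant(slot) => {
                slot.insert(Page {
                    owner: page_address,
                    contract,
                    tokens: 0,
                    memory: vec![0u8; size as usize],
                });
                Ok(())
            }
            Entry::Occupied(mut slot) => {
                let page = slot.get_mut();
                // An unassigned page (all-zero contract key, an assumption made here
                // for illustration) is assigned to the calling contract.
                if page.contract == [0u8; 32] {
                    page.contract = contract;
                    page.memory.resize(size as usize, 0);
                    Ok(())
                } else {
                    // Already assigned to another contract: the call fails.
                    Err("page already assigned to a contract")
                }
            }
        }
    }
}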
Execution
[0200] The call structure can be a basic transaction. This structure can describe the context of the transaction:
pub struct Call {
    /// keys to load, aka the 'to' keys
    /// keys[0] is the caller's key
    pub keys: Vec<PublicKey>,
    /// indices into 'keys' that require a valid signature
    pub required_sigs: Vec<u8>,
    /// user data in bytes
    pub user_data: Vec<u8>,
    /// contract address
    contract: PublicKey,
}
[0201] The two most important parts of this structure for the contract can be the keys and the user data. The runtime may translate the keys to the memory locations associated with them with the memory that was created with allocate memory. The user data can be the random bits of memory that the user has supplied for the call. As such, the users can add dynamic external state into the runtime.
[0202] Smart contracts can also examine the required_sigs vector to check which of the PublicKeys in the call have a valid signature. By the time the Call gets to the contract's method, the signatures may have already been verified.
Rules
[0203] A contract implements the following interface:
/// Call the contract
/// * call - the client supplied call
/// * pages - the loaded pages requested by the client supplied call
call_contract(call: &Call, pages: &mut [Page])
[0204] With the following definition for a Page:
/// Generic Page for the PageTable
pub struct Page {
    /// key that indexes this page
    pub owner: PublicKey,
    /// contract that owns this page
    /// only the contract can write to the 'memory' vector
    pub contract: PublicKey,
    /// token balance that belongs to the page
    pub tokens: i64,
    /// User memory
    pub memory: Vec<u8>,
}
[0205] As long as the caller can pay the fee to execute the contract, it can be called. The runtime may only store the modified Page if the following rules are met (a non-limiting sketch of this check follows the list below):
a) For pages assigned to the contract, the total sum of all the tokens in these pages cannot increase.
b) For pages unassigned to the contract, the individual balance of each page cannot
decrease.
c) For pages unassigned to the contract, the memory cannot change.
d) The total sum of all the tokens in all the pages cannot change.
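A non-limiting sketch of this check is shown below, using the Page struct defined above; the before/after pairing of loaded pages and the helper name are illustrative assumptions.

// Sketch only: `before` and `after` are the pages as loaded for the call and as
// left by the contract, in the same order.
fn may_commit(contract: &PublicKey, before: &[Page], after: &[Page]) -> bool {
    let mut owned_before: i64 = 0;
    let mut owned_after: i64 = 0;
    let mut total_before: i64 = 0;
    let mut total_after: i64 = 0;

    for (b, a) in before.iter().zip(after.iter()) {
        total_before += b.tokens;
        total_after += a.tokens;
        if &b.contract == contract {
            owned_before += b.tokens;
            owned_after += a.tokens;
        } else {
            // (b) the balance of a page not assigned to this contract cannot decrease
            if a.tokens < b.tokens {
                return false;
            }
            // (c) the memory of a page not assigned to this contract cannot change
            if a.memory != b.memory {
                return false;
            }
        }
    }
    // (a) tokens across the contract's own pages cannot increase,
    // (d) and the total across all pages is conserved.
    owned_after <= owned_before && total_after == total_before
}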
[0206] So a contract can move balances freely around any of the pages that have been assigned to it. It can modify the memory in those pages freely. For pages assigned to other contracts, the contract can add tokens to individual pages, and can read their memory vectors.
Signals
[0207] Signals can be asynchronous and guaranteed function calls to programs loaded into the network, i.e., calls during which the call site is not blocked from executing while waiting for the called code to finish. They can be created just like pages are allocated and assigned to a contract. The memory that is owned by the signal's page can store a Call structure that the runtime can examine:
create_signal(page_address: PublicKey, contract: PublicKey)
[0208] A signal can be a way for a contract to schedule itself, or call other contracts. Once a signal becomes set, the runtime guarantees its eventual execution. A client can call a contract with any number of signals, which can modify the memory of the signal and construct whatever asynchronous method the contract needs to call, including allocate_memory or create_signal.
Notes
[0209] A contract cannot allocate memory synchronously. The client first can make a transaction for allocate memory, and then it can call the contract method with the page that has some memory allocated to it. A contract can schedule a signal to allocate memory and call itself back in the future.
[0210] The runtime can execute all the non-overlapping contract calls in parallel. As an example, some preliminary results show that over 500,000 calls per second are possible— with room for optimization.
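A non-limiting sketch of such scheduling is shown below: calls whose key sets are disjoint are grouped into the same batch and can be executed concurrently, while a call that overlaps the current batch starts a new one. The greedy batching strategy, the minimal Call stand-in, and the helper names are illustrative assumptions rather than the definitive scheduler.

// Sketch only: batch calls with disjoint page keys so each batch can run in parallel.
use std::collections::HashSet;

type PublicKey = [u8; 32];

// Minimal stand-in for the Call structure shown above; only the keys matter here.
struct Call {
    keys: Vec<PublicKey>,
}

fn batch_calls(calls: Vec<Call>) -> Vec<Vec<Call>> {
    let mut batches: Vec<Vec<Call>> = Vec::new();
    let mut locked: HashSet<PublicKey> = HashSet::new();
    let mut current: Vec<Call> = Vec::new();

    for call in calls {
        // A shared key means a data dependency with the current batch:
        // seal the batch and start a new one.
        if call.keys.iter().any(|k| locked.contains(k)) {
            batches.push(std::mem::take(&mut current));
            locked.clear();
        }
        for k in &call.keys {
            locked.insert(*k);
        }
        current.push(call);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}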
[0211] As disclosed herein, one or more basic primitives can be used to implement all the regular Operating Systems features, which may include, but are not limited to, an SDK, signals, JIT, a loader, and a toolchain. Allocate memory can be used to create stack frames for threads, and writable segments for process ELFs with persistent state. Signals can be used as a trampoline from the running process to an OS service that does dynamic memory allocation, creates additional threads, and creates other signals. This framework can support all the operating system primitives in a modern OS as simple synchronous calls without compromising
performance.
Branching and Proof of History
[0212] The systems and methods disclosed herein use "Proof of History" (PoH) as a global, trustless source of time available before node consensus. Once nodes have awareness of a global clock, building a set of protocols around this shared march of time may become an obvious choice. In fact, this may mirror much work done in the non-blockchain/centralized protocol design world, such as time-division multiple access (TDMA) protocols.
[0213] TDMA can be based around the idea of splitting up the channel into time frames, each frame with a user-specific slot. Users can only transmit data during their slot, thus minimizing data cross talk and collisions.
[0214] Some scheduling algorithm(s) may exist to select an order of leaders, giving each leader an assigned slot to transmit transactions, e.g., each leader (L1, L2, ...) is slotted at a PoH count interval. During this interval, one or more leaders can be designated as 'active leader' and only they can append transactions to the PoH data structure during their slot.
i)   Time: ->
ii)  PoH Interval: | 100-139 | 140-179 | 180-219 | 220-259 |
iii) Slot Leader:  |   L1    |   L2    |   L3    |   L4    |
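A non-limiting sketch of mapping a PoH count to its slot leader is shown below; the slot length of 40 counts and the starting count of 100 mirror the interval above and are illustrative assumptions.

// Sketch only: round-robin leader rotation over fixed-length PoH count slots.
fn slot_leader<'a>(poh_count: u64, first_count: u64, slot_len: u64, leaders: &'a [&'a str]) -> &'a str {
    let slot = (poh_count - first_count) / slot_len;
    leaders[(slot as usize) % leaders.len()]
}

fn main() {
    let leaders = ["L1", "L2", "L3", "L4"];
    assert_eq!(slot_leader(150, 100, 40, &leaders), "L2"); // counts 140-179 belong to L2
    assert_eq!(slot_leader(230, 100, 40, &leaders), "L4"); // counts 220-259 belong to L4
}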
[0215] Using the PoH disclosed herein, when L1 appends transactions, it appends a hash of the transaction blob to the loop (e.g., the SHA256 loop); this can modify the output and create a partition.
[0216] Nodes can vote on the hash and count of the PoH data structure along with the corresponding state signature. Each vote may represent a slash-able lockout. If a node votes on a different branch (e.g. a branch that doesn’t include the current vote) within the lockout they can get slashed. This vote also may double the lockout of all the previous votes, thus making it exponentially more expensive to switch branches and unroll older votes.
[0217] As an example, if an active leader experiences a failure and the transactions they are processing are not successfully communicated to the rest of the network, and if every node is generating PoH, one or more nodes in the network can be generating the same virtual ticks, derived from the last slot; thus there may always be a 'fallback' in case of leader node failure. Since these virtual ticks may carry no data, everyone in the network can derive the exact same values as they continuously roll the SHA256 loop.
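A non-limiting sketch of deriving virtual ticks is shown below, assuming the sha2 crate for SHA-256; the tick spacing parameter is an illustrative assumption. Because no data is mixed in, every node starting from the same last hash derives the identical sequence.

// Sketch only: keep rolling the SHA-256 loop with no mixed-in data and record a tick
// every `hashes_per_tick` iterations.
use sha2::{Digest, Sha256};

fn virtual_ticks(last_hash: [u8; 32], hashes_per_tick: u64, num_ticks: u64) -> Vec<[u8; 32]> {
    let mut state = last_hash;
    let mut ticks = Vec::new();
    for _ in 0..num_ticks {
        for _ in 0..hashes_per_tick {
            let digest = Sha256::digest(&state);
            state.copy_from_slice(&digest);
        }
        // Each tick records only the running hash; with no transaction data,
        // every node computes exactly the same values.
        ticks.push(state);
    }
    ticks
}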
[0218] Another way to think of it is that an active leader can be transmitting two ledgers: the one with transactions, which can fail, and one without transactions, which cannot fail since it is locally generated by every node. This virtual/real ledger generation can be described with the TX/VX (Transmitted data/Virtual data) notation, where TX represents transaction data generated by leader LX (e.g., L1, L2, L3) and VX represents virtual data (ticks) generated by leader LX, as shown in Table 2 below.
Table 2.
i)   PoH Count: | 110 120 130 140 | 150 160 170 180
ii)  L1:        | T1  T1  T1  T1  |
iii) L2:        | V1  V1  V1  V1  | T2  T2  T2  T2
[0219] If more than one half (½+) of the network can observe all the transactional data T1 from L1, they can vote to confirm the ledger at (140, L1), while the remaining nodes (½-) of the network can submit a vote on the virtual data at (140, V1). L2 may be the next leader in line, so L2's choice of what ledger (T1 or V1) to extend may depend on which partition L2 happened to be in (i.e., ½+ vs ½-), as shown below:
i)   PoH Count: | 110 120 130 140 | 150 160 170 180
ii)  if L2 is in ½+
iii) L2: | T1 T1 T1 T1 | T2 T2 T2 T2
iv)  or if L2 is in ½-
v)   L2: | V1 V1 V1 V1 | T2' T2' T2' T2'
[0220] As shown above, if L2 was in the ½+ partition, it may append to the T1 version of the ledger; otherwise it may append to the V1 version of the ledger. L3 can have a similar choice after L1's rotation. If L3 was in the L1 ½+ partition, and L2 was in the ½- partition, L3 may see L2's data as invalid and continue from the V2 version of the ledger. If it was in the ½- partition, along with L2, it may continue with the L2 version of the ledger. The potential branching tree can look as follows:
i)   L1: T1 V1
ii)  L2: T2 V2 T2' V2
iii) L3: T3 V3 T3' V3 T3 V3 T3' V3
[0221] In the above example: L1 -> L2 (½- during L1) -> L3 (½+ during L1), as shown below:
i)    == Slot 1 ==
ii)   L1: T1 V1
iii)  L2: T2 V2 T2' V2
iv)   L3: T3 V3 T3' V3' T3 V3 T3' V3'
v)    == Slot 2 ==
vi)   L1: T1 V1
vii)  L2: T2 V2 T2' V2
viii) L3: T3 V3 T3' V3' T3 V3 T3 V3'
ix)   == Slot 3 ==
x)    L1: T1 V1
xi)   L2: T2 V2 T2' V2
xii)  L3: T3 V3 T3' V3 T3 V3 T3' V3
xiii) Confirmed ledger after slot 3: T1 | V2 | T3'
[0222] And the alternative situation: L1 -> L2 (½- during L1) -> L3 (½- during L1), as shown below:
i)    == Slot 1 ==
ii)   L1: T1 V1
iii)  L2: T2 V2 T2' V2
iv)   L3: T3 V3 T3' V3 T3 V3 T3' V3
v)    == Slot 2 ==
vi)   L1: T1 V1
vii)  L2: T2 V2 T2' V2
viii) L3: T3 V3 T3' V3 T3 V3 T3' V3
ix)   == Slot 3 ==
x)    L1: T1 V1
xi)   L2: T2 V2 T2' V2
xii)  L3: T3 V3 T3 V3 T3 V3 T3 V3
xiii) Confirmed ledger after slot 3: V1 | T2' | T3
[0223] Although the present disclosure has made reference to the PoH methods and systems as disclosed herein, systems and methods provided herein, including algorithms, may be employed for use with various types of synchronized clocks of block chains or networks.
Examples of the synchronized clocks may include, but are not limited to, GPS clocks, atomic clocks provided by NIST, and clocks implemented with Network Time Protocol.
Nakamoto Consensus with time-based locks
[0224] The systems and methods described herein can be used to implement a modified version of the Nakamoto Consensus algorithm, which is an algorithm for verifying the authenticity of blocks in Bitcoin and other block chains. The Nakamoto Consensus algorithm with time-based locks ("modified Nakamoto Consensus algorithm") may be defined by a stack of votes that each have a lockout period. A lockout period may be a time period during which a vote for a particular chain of blocks by a node cannot be changed. When a node makes a new vote that is added to the vote stack, e.g., a vote for a next block in a chain of blocks, the lockout period of each preceding vote in the vote stack may be doubled. In other words, older votes for older blocks may have longer lockout periods. The vote stack can be rolled back. Rollback may occur when preceding votes in the stack with a lower lock time than a new vote are removed from the vote stack. In some implementations, after rollback, lockouts are not doubled until the stack has as many votes as it did immediately prior to rollback.
[0225] In an example, consider the following vote stack:
vote time   lockout   lock time
4           2         6
3           4         7
2           8         10
1           16        17
[0226] If the next vote is at time 9, the resulting vote stack is:
vote time   lockout   lock time
9           2         11
2           8         10
1           16        17
[0227] That is, the votes made at time 3 and time 4 are removed because their lock times (time 7 and time 6, respectively) are lower than the lock time for the new vote (time 11). If the next vote is made at time 10, the resulting vote stack is:
vote time   lockout   lock time
10          2         12
9           4         13
2           8         10
1           16        17
[0228] The lockout period for the vote at time 9 doubles, from 2 to 4. The lockout periods for the votes at times 1 and 2 do not double because those lockout periods are already double the votes immediately above them in the stack. At time 10, the stack has as many votes as it did before rollback. However, the vote made at time 2 has a lock time of 10. So when a vote is made at time 11, the entire stack up to the vote made at time 2 will rollback, resulting in the following vote stack:
vote time   lockout   lock time
11          2         13
1           16        17
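A non-limiting sketch that reproduces the vote-stack example above is shown below. The rollback and doubling heuristics are one simplified reading of the rules described herein (slashing, rewards, and de-queueing of a full stack are omitted), and the helper names are illustrative assumptions.

// Sketch only: rollback expired votes, push the new vote, and double earlier lockouts.
#[derive(Debug, Clone, Copy)]
struct Vote {
    time: u64,
    lockout: u64,
}

impl Vote {
    fn lock_time(&self) -> u64 {
        self.time + self.lockout
    }
}

fn push_vote(stack: &mut Vec<Vote>, time: u64) {
    // Rollback: if a vote in the stack has expired relative to the new vote,
    // it and every newer vote above it are removed.
    if let Some(pos) = stack.iter().position(|v| v.lock_time() < time) {
        stack.truncate(pos);
    }
    stack.push(Vote { time, lockout: 2 });
    // Double a preceding vote's lockout only when it is not already double the
    // lockout of the vote immediately above it (a simplified doubling heuristic
    // that reproduces the tables above).
    for i in (0..stack.len() - 1).rev() {
        if stack[i].lockout < 2 * stack[i + 1].lockout {
            stack[i].lockout *= 2;
        }
    }
}

fn main() {
    // Bottom-to-top: the starting stack from the first table above.
    let mut stack = vec![
        Vote { time: 1, lockout: 16 },
        Vote { time: 2, lockout: 8 },
        Vote { time: 3, lockout: 4 },
        Vote { time: 4, lockout: 2 },
    ];
    for t in [9, 10, 11] {
        push_vote(&mut stack, t);
        println!("after vote at time {t}:");
        for v in stack.iter().rev() {
            println!("  time {:2}  lockout {:2}  lock time {:2}", v.time, v.lockout, v.lock_time());
        }
    }
}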
[0229] Lockout periods may be used to force a node to commit time to a specific branch. Nodes that violate the lockout period and vote for a diverging branch during the lockout period can be punished. Slashing is one approach to punish a node that votes during a lockout period. If the network detects a node that make a vote that violates a lockout period, the network can reduce the stake associated with that node, or freeze the node from receiving rewards. The network can reward nodes for selecting the right branch with the rest of the network as often as possible. This is well aligned with generating a reward when the vote stack is full and the oldest vote needs to be de-queued.
[0230] Each node can independently set a threshold of network commitment to a branch before that node commits to a branch. For example, at vote stack index 7, the lockout is 256 time units (2^8). A node may withhold votes and let votes 0-7 expire unless the vote at index 7 has greater than 50% commitment in the network. This allows each node to independently control how much risk to commit to a branch. Committing to a branch faster would allow the node to earn more rewards since more votes are pushed to the stack and de-queue happens more frequently.
[0231] The modified Nakamoto Consensus algorithm may provide the following benefits: (i) if nodes share a common ancestor then they will converge to a branch containing that ancestor no matter how they are partitioned, (ii) rollback requires exponentially more time for older votes than for newer votes, and (iii) nodes can independently configure a vote threshold they would like to see before committing a vote to a higher lockout. This allows each node to make a trade-off of risk and reward.
[0232] Time can be a proof of history hash count, which is a verifiable delay function that provides a source of time before consensus. Other sources of time can be used as well, such as radio transmitted atomic clocks, Network Time Protocol, or locally synchronized atomic clocks.
Digital processing device
[0233] In some embodiments, the systems and methods disclosed herein described herein include a digital processing device, a computer processor, or use of the same. In further embodiments, the digital processing device includes one or more hardware central processing units (CPUs) or general purpose graphics processing units (GPGPUs) that carry out the functions of the device. In still further embodiments, the digital processing device further comprises an operating system configured to perform executable instructions. In some embodiments, the digital processing device is optionally connected to a computer network. In further embodiments, the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web. In still further embodiments, the digital processing device is optionally connected to a cloud computing infrastructure. In other embodiments, the digital processing device is optionally connected to an intranet. In other embodiments, the digital processing device is optionally connected to a data storage device.
[0234] In accordance with the description herein, suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Many smartphones may be suitable for use in the system described herein. Select televisions, video players, and digital music players with optional computer network connectivity may be suitable for use in the system described herein. Suitable tablet computers include those with booklet, slate, and convertible configurations.
[0235] In some embodiments, the digital processing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the hardware of the device and provides services for execution of applications. Suitable server operating systems may include, by way of non limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Suitable personal computer operating systems may include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Suitable mobile smart phone operating systems may include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. Suitable media streaming device operating systems may include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®. Suitable video game console operating systems may include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
[0236] In some embodiments, the device includes a storage and/or memory device. The storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis. In some embodiments, the device is volatile memory and requires power to maintain stored information. In some embodiments, the device is non-volatile memory and retains stored information when the digital processing device is not powered. In further embodiments, the non-volatile memory comprises flash memory. In some embodiments, the non volatile memory comprises dynamic random-access memory (DRAM). In some embodiments, the non-volatile memory comprises ferroelectric random access memory (FRAM). In some embodiments, the non-volatile memory comprises phase-change random access memory (PRAM). In other embodiments, the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tapes drives, optical disk drives, and cloud computing based storage. In further embodiments, the storage and/or memory device is a combination of devices such as those disclosed herein.
[0237] In some embodiments, the digital processing device includes a display to send visual information to a user. In some embodiments, the display is a liquid crystal display (LCD). In further embodiments, the display is a thin film transistor liquid crystal display (TFT-LCD). In some embodiments, the display is an organic light emitting diode (OLED) display. In various further embodiments, an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display. In some embodiments, the display is a plasma display. In other embodiments, the display is a video projector. In yet other embodiments, the display is a head-mounted display in communication with the digital processing device, such as a VR headset. In further embodiments, suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like. In still further embodiments, the display is a combination of devices such as those disclosed herein.
[0238] In some embodiments, the digital processing device includes an input device to receive information from a user. In some embodiments, the input device is a keyboard. In some embodiments, the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus. In some embodiments, the input device is a touch screen or a multi-touch screen. In other embodiments, the input device is a microphone to capture voice or other sound input. In other embodiments, the input device is a video camera or other sensor to capture motion or visual input. In further embodiments, the input device is a Kinect, Leap Motion, or the like. In still further embodiments, the input device is a combination of devices such as those disclosed herein.
[0239] Fig. 6 shows a digital processing device 601 that is programmed or otherwise configured to perform method steps disclosed herein. The device 601 can regulate various aspects of the present disclosure, such as the cryptographic functions and the recordation of the sequence of hash values. In this embodiment, the digital processing device 601 includes a central processing unit (CPU, also "processor" and "computer processor" herein) 605, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The digital processing device 601 also includes memory or memory location 610 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 615 (e.g., hard disk), communication interface 620 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 625, such as cache, other memory, data storage and/or electronic display adapters. The memory 610, storage unit 615, interface 620 and peripheral devices 625 are in communication with the CPU 605 through a communication bus (solid lines), such as a motherboard. The storage unit 615 can be a data storage unit (or data repository) for storing data. The digital processing device 601 can be operatively coupled to a computer network ("network") 630 with the aid of the communication interface 620. The network 630 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet. The network 630 in some cases is a telecommunication and/or data network. The network 630 can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network 630, in some cases with the aid of the device 601, can implement a peer-to-peer network, which may enable devices coupled to the device 601 to behave as a client or a server.
[0240] Continuing to refer to Fig. 6, the CPU 605 can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory 610. The instructions can be directed to the CPU 605, which can subsequently program or otherwise configure the CPU 605 to implement methods of the present disclosure. Examples of operations performed by the CPU 605 can include fetch, decode, execute, and write back. The CPU 605 can be part of a circuit, such as an integrated circuit. One or more other components of the device 601 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
[0241] Continuing to refer to Fig. 6, the storage unit 615 can store files, such as drivers, libraries and saved programs. The storage unit 615 can store user data, e.g., user preferences and user programs. The digital processing device 601 in some cases can include one or more additional data storage units that are external, such as located on a remote server that is in communication through an intranet or the Internet.
[0242] Continuing to refer to Fig. 6, the digital processing device 601 can communicate with one or more remote computer systems through the network 630. For instance, the device 601 can communicate with a remote computer system of a user. Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device,
Blackberry®), or personal digital assistants.
[0243] Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the digital processing device 601, such as, for example, on the memory 610 or electronic storage unit 615. The machine executable or machine readable code can be provided in the form of software. During use, the code can be executed by the processor 605. In some cases, the code can be retrieved from the storage unit 615 and stored on the memory 610 for ready access by the processor 605.
In some situations, the electronic storage unit 615 can be precluded, and machine-executable instructions are stored on memory 610.
Non-transitory computer readable storage mediums
[0244] In some embodiments, the systems and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device. In further embodiments, a computer readable storage medium is a tangible component of a digital processing device. In still further embodiments, a computer readable storage medium is optionally removable from a digital processing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
Computer programs
[0245] In some embodiments, the systems and methods disclosed herein include at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable in the CPU of the digital processing device, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, a computer program may be written in various versions of various languages.
[0246] The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
Web applications
[0247] In some embodiments, a computer program includes a web application. In light of the disclosure provided herein, a web application, in various embodiments, may utilize one or more software frameworks and one or more database systems. In some embodiments, a web application is created upon a software framework such as Microsoft® .NET or Ruby on Rails (RoR). In some embodiments, a web application utilizes one or more database systems including, by way of non-limiting examples, relational, non-relational, object oriented, associative, and XML database systems. In further embodiments, suitable relational database systems include, by way of non-limiting examples, Microsoft® SQL Server, mySQL™, and Oracle®. A web application, in various embodiments, may be written in one or more versions of one or more languages. A web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof. In some embodiments, a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or extensible Markup Language (XML). In some embodiments, a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS). In some embodiments, a web application is written to some extent in a client-side scripting language such as Asynchronous Javascript and XML (AJAX), Flash® Actionscript, Javascript, or Silverlight®. In some embodiments, a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion®, Perl, Java™, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), Python™, Ruby, Tel, Smalltalk, WebDNA®, or Groovy. In some embodiments, a web application is written to some extent in a database query language such as Structured Query Language (SQL). In some embodiments, a web application integrates enterprise server products such as IBM® Lotus Domino®. In some embodiments, a web application includes a media player element. In various further embodiments, a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe® Flash®, HTML 5, Apple® QuickTime®, Microsoft® Silverlight®, Java™, and Unity®.
[0248] Referring to Fig. 7, an application provision system may comprise one or more databases 700 accessed by a relational database management system (RDBMS) 710. Suitable RDBMSs include Firebird, MySQL, PostgreSQL, SQLite, Oracle Database, Microsoft SQL Server, IBM DB2, IBM Informix, SAP Sybase, Teradata, and the like. In this embodiment, the application provision system further comprises one or more application servers 720 (such as Java servers, .NET servers, PHP servers, and the like) and one or more web servers 730 (such as Apache, IIS, GWS and the like). The web server(s) optionally expose one or more web services via application programming interfaces (APIs) 740. Via a network, such as the Internet, the system provides browser-based and/or mobile native user interfaces.
[0249] Referring to Fig. 8, an application provision system may have a distributed, cloud-based architecture 800 comprising elastically load balanced, auto-scaling web server resources 810 and application server resources 820, as well as synchronously replicated databases 830.
Mobile application
[0250] In some embodiments, a computer program includes a mobile application provided to a mobile digital processing device. In some embodiments, the mobile application is provided to a mobile digital processing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile digital processing device via the computer network described herein.
[0251] A mobile application may be created by various techniques using various hardware, languages, and development environments. Mobile applications may be written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, Javascript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.
[0252] Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.
[0253] Several commercial forums may be available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Google® Play, Chrome Web Store, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop.
Software modules
[0254] In some embodiments, the systems and methods disclosed herein include software, server, and/or database modules, or use of the same. Software modules may be created by various techniques using various machines, software, and languages. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
Databases
[0255] The systems and methods disclosed herein may include one or more databases, or use of the same. In view of the disclosure provided herein, many databases may be suitable for storage and retrieval of hash values, indices, inputs, outputs, time stamps, hash functions, and combine functions. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. In still further embodiments, a database is cloud computing-based. In other embodiments, a database is based on one or more local computer storage devices.
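As a minimal illustration (not drawn from the specification) of how such a database might hold the sequence, the following sketch uses Python's built-in sqlite3 module; the table name hash_sequence and the column names are assumptions introduced only for this example.

```python
import hashlib
import sqlite3
import time

# Illustrative schema for persisting the hash sequence; all names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE hash_sequence (
           counter    INTEGER PRIMARY KEY,
           input      BLOB,      -- event data mixed into this step, if any
           output     BLOB,      -- cryptographic hash value produced at this step
           time_stamp REAL       -- wall-clock time at which the row was written
       )"""
)

state, counter = hashlib.sha256(b"seed").digest(), 0
for event in (b"", b"event A", b""):
    # Advance the chain by one step, mixing in the event data (possibly empty).
    state = hashlib.sha256(state + event).digest()
    counter += 1
    conn.execute(
        "INSERT INTO hash_sequence VALUES (?, ?, ?, ?)",
        (counter, event, state, time.time()),
    )
conn.commit()

# Retrieval, e.g. to look up the hash value recorded for a given counter.
print(conn.execute("SELECT counter, output FROM hash_sequence").fetchall())
```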
EXAMPLES
Example 1. Ordering of Events
[0256] In this example, a set of users share a database. They want to modify the database and ensure that, after the modifications, the results are the same at every step. Each user submits its database changes to an agent that creates the sequence order. The order is then broadcast to all of the users, who apply the changes to their local databases in that order. Since the order is the same for all of the users, all of the local copies produce the same result. If the users do not trust the machine generating the sequence, they can examine the output and determine that it is valid and consistent. Multiple machines can be used to create the order sequence, so there is no centralized point of failure, and the output of each machine can be deterministically combined without trusting the machines.
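For concreteness, the following is a minimal sketch of such a sequencing agent, assuming a SHA-256 hash chain; the names SequencerAgent, submit, and apply_sequence, and the key=value encoding of a database change, are illustrative assumptions rather than elements of the specification.

```python
import hashlib


class SequencerAgent:
    """Illustrative agent that fixes a global order for submitted database
    changes by folding each change into a running cryptographic hash chain."""

    def __init__(self, seed: bytes = b"genesis"):
        self.state = hashlib.sha256(seed).digest()
        self.counter = 0
        self.sequence = []  # entries broadcast to all users

    def submit(self, change: bytes) -> dict:
        # Mixing the change into the chain pins its position in the order.
        self.state = hashlib.sha256(self.state + change).digest()
        self.counter += 1
        entry = {"counter": self.counter, "change": change, "hash": self.state.hex()}
        self.sequence.append(entry)
        return entry


def apply_sequence(local_db: dict, sequence: list) -> dict:
    """Each user applies the broadcast entries in counter order, so every
    local copy of the database converges to the same result."""
    for entry in sorted(sequence, key=lambda e: e["counter"]):
        key, _, value = entry["change"].partition(b"=")
        local_db[key.decode()] = value.decode()
    return local_db


agent = SequencerAgent()
for change in (b"balance/alice=10", b"balance/bob=5", b"balance/alice=7"):
    agent.submit(change)

print(apply_sequence({}, agent.sequence))  # identical on every user's machine
```

Because each recorded hash depends on every entry before it, any user can recompute the chain and confirm that the broadcast order was not altered after the fact.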
Example 2. Notarization/Authentication
[0257] In this example, the iterative execution of cryptographic hash functions runs continuously as a service. Users can enter events and have them authenticated as having occurred no later than the time they were entered into the sequence. Anyone inspecting the record can verify that the user's data was entered some time before that portion of the record was generated. The generator of the record does not need to be trusted, and an inspector can verify the record with a multi-core computer in a fraction of the time it took to generate.
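The following sketch illustrates, under assumed names (generate, verify_chunk, verify) and SHA-256 as the hash function, how such a continuously running sequence could timestamp an event and how an inspector could verify the record in parallel by splitting it into sub-sequences, each re-hashed independently from its recorded starting hash.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def generate(seed: bytes, ticks: int, events: dict) -> list:
    """Run the hash chain for `ticks` iterations, mixing in any event that
    arrives at a given counter value; the counter serves as a local timestamp."""
    state, records = sha256(seed), []
    for counter in range(1, ticks + 1):
        event = events.get(counter, b"")
        state = sha256(state + event)
        records.append((counter, event, state))
    return records


def verify_chunk(start_hash: bytes, chunk: list) -> bool:
    """Re-hash one contiguous sub-sequence and compare it against the record."""
    state = start_hash
    for _, event, recorded in chunk:
        state = sha256(state + event)
        if state != recorded:
            return False
    return True


def verify(seed: bytes, records: list, workers: int = 4) -> bool:
    """Split the sequence into sub-sequences and re-hash them in parallel."""
    size = max(1, len(records) // workers)
    starts, chunks = [], []
    for i in range(0, len(records), size):
        starts.append(sha256(seed) if i == 0 else records[i - 1][2])
        chunks.append(records[i:i + size])
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_chunk, starts, chunks))


if __name__ == "__main__":
    document_hash = sha256(b"my document")               # the user-supplied event
    recs = generate(b"genesis", ticks=100_000, events={500: document_hash})
    print(verify(b"genesis", recs))                      # True if the record is intact
```

Because each sub-sequence can be re-hashed independently of the others, an inspector with N cores can check the record in roughly 1/N of the time the generator needed to produce it.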
[0258] While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for cryptographically generating a local timestamp, comprising:
a) activating one or more computer processors that are individually or collectively programmed to execute a cryptographic hash function;
b) inputting at least a first data set from computer memory to said cryptographic hash function to generate as output a second data set comprising a first set of cryptographic hash values from said first data set, wherein upon generating said second data set, a counter in computer memory is incremented by a preselected value;
c) recording at least said first data set, said second data set, and said counter to a sequence of cryptographic hash values in said computer memory;
d) inputting at least said second data set and a third data set to said cryptographic hash function to generate as output a fourth data set comprising a second set of cryptographic hash values from said third data set, wherein upon generating said fourth data set, said counter in computer memory is incremented by said preselected value;
e) recording at least said second data set, said third data set, said fourth data set, and said counter to said sequence of cryptographic hash values in said computer memory;
f) using at least said fourth data set as input, repeating (b) and (c) for a first number of repetitions, (d) and (e) for a second number of repetitions or a combination thereof to yield a fifth data set, wherein upon generating said fifth data set, said counter in computer memory is incremented by said first number of repetitions, said second number of repetitions or a combination thereof;
g) recording at least said fourth data set, said fifth data set, and said counter to said sequence of cryptographic hash values in said computer memory; and
h) using said counter to generate said local timestamp for said third data set.
2. The method of claim 1, further comprising verifying said sequence of cryptographic hash values using a preselected number of computer processors by:
a) selecting a preselected number of computer processors;
b) splitting said sequence of cryptographic hash values into said preselected number of sub-sequences, each of said sub-sequences comprising a portion of said sequence of cryptographic hash values; and
c) executing, by said pre-selected number of processors, the cryptographic hash function based on an input of each of said preselected number of sub-sequences to generate as output said preselected number of new sub-sequences; and
d) verifying whether said preselected number of new sub-sequences match said preselected number of sub-sequences.
3. The method of claim 1, further comprising encrypting, by said one or more computer processors, said sequence of cryptographic hash values, comprising:
a) inputting at least said first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating said second data set, a counter in computer memory is incremented by a preselected value; and
b) recording said first data set, said first encrypted data set, and the counter in an encrypted sequence of cryptographic hash values in said computer memory.
4. The method of claim 3, further comprising decrypting, by said one or more computer processors, the encrypted sequence of cryptographic hash values using a public key.
5. The method of claim 1, wherein said first data set comprises a set of cryptographic hash values from a previous data set.
6. The method of claim 1, wherein said third data set comprises a cryptographic hash value of an event.
7. The method of claim 1, wherein the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix.
8. The method of claim 1, wherein said cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3.
9. The method of claim 2, wherein said preselected value is an integer.
10. A system for cryptographically generating a local timestamp comprising a digital processing device comprising: one or more computer processors, an operating system configured to perform executable instructions, a computer memory, and a computer program including instructions executable by the digital processing device configured to:
a) activate said one or more computer processors that are individually or collectively programmed to execute a cryptographic hash function;
b) input at least a first data set from said computer memory to said cryptographic hash function to generate as output a second data set comprising a first set of cryptographic hash values from said first data set, wherein upon generating said second data set, a counter in said computer memory is incremented by a preselected value;
c) record at least said first data set, said second data set, and said counter to a sequence of cryptographic hash values in said computer memory;
d) input at least said second data set and a third data set to said cryptographic hash function to generate as output a fourth data set comprising a second set of cryptographic hash values from said third data set, wherein upon generating said fourth data set, said counter in computer memory is incremented by said preselected value;
e) record at least said second data set, said third data set, said fourth data set, and said counter to said sequence of cryptographic hash values in said computer memory;
f) use at least said fourth data set as input, repeating (b) and (c) for a first number of repetitions, (d) and (e) for a second number of repetitions or a combination thereof to yield a fifth data set, wherein upon generating said fifth data set, said counter in computer memory is incremented by said first number of repetitions, said second number of repetitions or a combination thereof;
g) record at least said fourth data set, said fifth data set, and said counter to said sequence of cryptographic hash values in said computer memory; and
h) use said counter to generate said local timestamp for said third data set.
11. The system of claim 10, wherein said computer program is further configured to verify said sequence of cryptographic hash values using a preselected number of computer processors, comprising:
a) selecting a preselected number of computer processors;
b) splitting said sequence of cryptographic hash values into said preselected number of sub-sequences, each of said sub-sequences comprising a portion of said sequence of cryptographic hash values; and
c) executing, by said pre-selected number of processors, the cryptographic hash function based on an input of each of said preselected number of sub-sequences to generate as output said preselected number of new sub-sequences; and
d) verifying whether said preselected number of new sub-sequences match said preselected number of sub-sequences.
12. The system of claim 10, wherein said computer program is further configured to encrypt, by said one or more computer processors, said sequence of cryptographic hash values, comprising:
a) inputting at least said first data set and a private key to an encryption function to generate as output a first encrypted data set, wherein upon generating said second data set, a counter in computer memory is incremented by a preselected value; and
b) recording said first data set, said first encrypted data set, and the counter in an encrypted sequence of cryptographic hash values in said computer memory.
13. The system of claim 12, wherein said computer program is further configured to decrypt, by said one or more computer processors, the encrypted sequence of cryptographic hash values using a public key.
14. The system of claim 10, wherein said first data set comprises a set of cryptographic hash values from a previous data set.
15. The system of claim 10, wherein said third data set comprises a cryptographic hash value of an event.
16. The system of claim 10, wherein the cryptographic hash value comprises a plurality of characters selected from the group consisting of a number, a letter, a symbol, a string, a vector, and a matrix.
17. The system of claim 10, wherein said cryptographic hash function comprises one or more of: sha-256, sha-224, md5, sha-0, sha-1, sha-2, and sha-3.
18. The system of claim 11, wherein said preselected value is an integer.
19. A computer-implemented method for consensus voting in a block chain, comprising:
(a) receiving a given consensus vote for a block from a node in said block chain; applying a lockout period to said given consensus vote, wherein said lockout period comprises a time period during which changing said given consensus vote violates a lockout policy;
(b) removing, from a vote stack comprising preceding consensus votes made by said node prior to said given consensus vote, any of said preceding consensus votes having a lockout period that has expired;
(c) increasing said lockout period of at least one of said preceding consensus votes still remaining in said vote stack by a factor greater than 1;
(d) adding said given consensus vote to said vote stack; and
(e) slashing a stake associated with said node if said node violates said lockout policy for said given consensus vote by voting for an additional block other than said block of (a) before said lockout period expires.
20. The method of claim 19, wherein said factor is greater than or equal to 1.5.
21. The method of claim 20, wherein said factor is greater than or equal to 2.
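As a purely illustrative reading of the voting procedure of claims 19-21, the following sketch maintains a vote stack whose remaining lockouts are multiplied on each new vote; the class names, the slot-based lockout unit, and the simplified slashing check are assumptions introduced for this example, not elements recited in the claims.

```python
BASE_LOCKOUT = 1       # illustrative lockout, measured in slots
GROWTH_FACTOR = 2      # factor greater than 1 by which remaining lockouts grow


class Vote:
    def __init__(self, block_id: str, slot: int):
        self.block_id = block_id
        self.slot = slot                  # when the vote was cast
        self.lockout = BASE_LOCKOUT       # changing the vote before expiry violates policy

    def expired(self, current_slot: int) -> bool:
        return current_slot > self.slot + self.lockout


class Validator:
    def __init__(self, stake: float):
        self.stake = stake
        self.vote_stack = []              # preceding consensus votes

    def cast_vote(self, block_id: str, current_slot: int) -> None:
        # (b) remove preceding votes whose lockout period has expired
        self.vote_stack = [v for v in self.vote_stack if not v.expired(current_slot)]
        # (e) voting for a different block while a prior vote is still locked out
        #     violates the lockout policy and slashes the stake (simplified check)
        if any(v.block_id != block_id for v in self.vote_stack):
            self.stake = 0.0
            raise RuntimeError("lockout policy violated; stake slashed")
        # (c) deepen the lockout of the votes still remaining on the stack
        for v in self.vote_stack:
            v.lockout *= GROWTH_FACTOR
        # (a)/(d) apply a lockout to the new vote and add it to the stack
        self.vote_stack.append(Vote(block_id, current_slot))
```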
PCT/US2018/064547 2017-12-08 2018-12-07 Systems and methods for cryptographic provision of synchronized clocks in distributed systems WO2019113495A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762596678P 2017-12-08 2017-12-08
US62/596,678 2017-12-08
US201862618972P 2018-01-18 2018-01-18
US62/618,972 2018-01-18
US201862660854P 2018-04-20 2018-04-20
US62/660,854 2018-04-20

Publications (1)

Publication Number Publication Date
WO2019113495A1 true WO2019113495A1 (en) 2019-06-13

Family

ID=66751787

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/064547 WO2019113495A1 (en) 2017-12-08 2018-12-07 Systems and methods for cryptographic provision of synchronized clocks in distributed systems

Country Status (1)

Country Link
WO (1) WO2019113495A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120017093A1 (en) * 2007-02-21 2012-01-19 Stephen Savitzky Trustworthy timestamps and certifiable clocks using logs linked by cryptographic hashes
US20090100041A1 (en) * 2008-04-25 2009-04-16 Wilson Kelce S Public Electronic Document Dating List
US20170034197A1 (en) * 2015-07-31 2017-02-02 British Telecommunications Public Limited Company Mitigating blockchain attack
WO2017095920A1 (en) * 2015-12-02 2017-06-08 Pcms Holdings, Inc. System and method for tamper-resistant device usage metering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BRANWEN: "Easy Cryptographic Timestamping", 26 May 2017 (2017-05-26), XP055616120, Retrieved from the Internet <URL:https://web.archive.org/web/20170721082915/https://www.gwern.net/Timestamping> [retrieved on 20190328] *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11930462B2 (en) 2018-07-24 2024-03-12 T-Mobile Innovations Llc Network generated precision time
US11716697B2 (en) * 2018-07-24 2023-08-01 T-Mobile Innovations Llc Network generated precision time
US20230084297A1 (en) * 2018-07-24 2023-03-16 T-Mobile Innovations Llc Network generated precision time
US11086847B2 (en) 2018-12-29 2021-08-10 Advanced New Technologies Co., Ltd. System and method for implementing native contract on blockchain
US10733152B2 (en) 2018-12-29 2020-08-04 Alibaba Group Holding Limited System and method for implementing native contract on blockchain
US11868368B2 (en) * 2019-03-07 2024-01-09 Uvue Ltd System and method for implementing consensus in distributed ledger arrangement
US20220129481A1 (en) * 2019-03-07 2022-04-28 Uvue Ltd System and method for implementing consensus in distributed ledger arrangement
US10860350B2 (en) 2019-03-26 2020-12-08 Advanced New Technologies Co., Ltd. System and method for implementing different types of blockchain contracts
US11010184B2 (en) 2019-03-26 2021-05-18 Advanced New Technologies Co., Ltd. System and method for implementing different types of blockchain contracts
US10949231B2 (en) 2019-03-26 2021-03-16 Advanced New Technologies Co., Ltd. System and method for implementing different types of blockchain contracts
US10866823B2 (en) 2019-03-26 2020-12-15 Advanced New Technologies Co., Ltd. System and method for implementing different types of blockchain contracts
CN113238806A (en) * 2019-08-30 2021-08-10 创新先进技术有限公司 Method and apparatus for concurrently executing transactions in a blockchain
CN112804284A (en) * 2019-11-14 2021-05-14 财团法人资讯工业策进会 Data chaining device, data verification device and data verification method
TWI707573B (en) * 2019-11-14 2020-10-11 財團法人資訊工業策進會 Apparatus for adding data to blockchain, data verification apparatus, and data verification method
CN111209339A (en) * 2020-01-03 2020-05-29 腾讯科技(深圳)有限公司 Block synchronization method, device, computer and storage medium
CN111371769B (en) * 2020-02-27 2022-03-08 北京链化未来科技有限公司 Consensus processing method, consensus node, electronic device, and readable storage medium
CN111371769A (en) * 2020-02-27 2020-07-03 浙江超脑时空科技有限公司 Consensus processing method, consensus node, electronic device, and readable storage medium
CN111611606A (en) * 2020-05-22 2020-09-01 北京百度网讯科技有限公司 File encryption and decryption method and device
EP4293952A1 (en) * 2022-06-17 2023-12-20 Youssef Merzoug Information lookup through iterative hashing
CN116405187A (en) * 2023-04-21 2023-07-07 石家庄铁道大学 Distributed node intrusion situation sensing method based on block chain
CN116405187B (en) * 2023-04-21 2024-04-09 石家庄铁道大学 Distributed node intrusion situation sensing method based on block chain
CN117521113A (en) * 2024-01-03 2024-02-06 烟台业达智慧城市运营科技有限公司 Artificial intelligence data encryption storage method and system
CN117521113B (en) * 2024-01-03 2024-04-16 烟台业达智慧城市运营科技有限公司 Artificial intelligence data encryption storage method and system

Similar Documents

Publication Publication Date Title
WO2019113495A1 (en) Systems and methods for cryptographic provision of synchronized clocks in distributed systems
Yakovenko Solana: A new architecture for a high performance blockchain v0.8.13
US11770238B2 (en) Decentralized computation system architecture based on node specialization
TWI721699B (en) System and method for parallel-processing blockchain transactions
JP6690066B2 (en) Validating the integrity of data stored on the consortium blockchain using the public sidechain
CN109472696B (en) Asset transaction method, device, storage medium and computer equipment
US11778024B2 (en) Decentralized computation system architecture based on node specialization
US20190172026A1 (en) Cross blockchain secure transactions
CN112544053B (en) Methods, systems, computer program products, and computer readable media for determining data blocks and for providing time stamped transactions
JP7251035B2 (en) System and method for providing special proof of classified knowledge
CN111095326A (en) Parallel execution of transactions in a distributed ledger system
CN110678865B (en) High integrity log for distributed software services
US9589153B2 (en) Securing integrity and consistency of a cloud storage service with efficient client operations
EP3977390B1 (en) Blockchain transaction processing systems and methods
US11314885B2 (en) Cryptographic data entry blockchain data structure
CN111226209A (en) Performing mapping iterations in a blockchain based system
JP2014524204A (en) Method and system for storing and retrieving data from key-value storage
US20210311925A1 (en) Blockchain transaction processing systems and methods
TW202139127A (en) Compute services for a platform of services associated with a blockchain
JP2022532764A (en) Systems and methods for deparallelized mining in proof of work blockchain networks
US12020242B2 (en) Fair transaction ordering in blockchains
US11500845B2 (en) Blockchain transaction processing systems and methods
Paimani SonicChain: A Wait-free, Pseudo-Static Approach Toward Concurrency in Blockchains

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18886572

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18886572

Country of ref document: EP

Kind code of ref document: A1