US20150294123A1 - System and method for sharing data securely - Google Patents

System and method for sharing data securely

Info

Publication number
US20150294123A1
Authority
US
United States
Prior art keywords
data
secure
cache
nonce
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/683,924
Inventor
William V. Oxford
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KRIMMENI TECHNOLOGIES Inc
Rubicon Labs Inc
Original Assignee
KRIMMENI TECHNOLOGIES Inc
Rubicon Labs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KRIMMENI TECHNOLOGIES Inc, Rubicon Labs Inc filed Critical KRIMMENI TECHNOLOGIES Inc
Priority to US14/683,924 priority Critical patent/US20150294123A1/en
Assigned to RUBICON LABS, INC. reassignment RUBICON LABS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OXFORD, WILLIAM V.
Publication of US20150294123A1 publication Critical patent/US20150294123A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure storage of data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure computing or processing of information
    • G06F21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure computing or processing of information in cryptographic circuits
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/04 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0435 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected and the sending and receiving network entities apply symmetric encryption, i.e. same key used for encryption and decryption
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/04 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0478 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected by applying multiple layers of encryption, e.g. nested tunnels or encrypting the content with a first key and then with at least a second key
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/04 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 - Network architectures or network communication protocols for network security, for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0485 - Networking architectures for enhanced packet encryption processing, e.g. offloading of IPsec packet processing or efficient security association look-up
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/06 - Network architectures or network communication protocols for network security, for supporting key management in a packet data network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/06 - Network architectures or network communication protocols for network security, for supporting key management in a packet data network
    • H04L63/061 - Network architectures or network communication protocols for network security, for supporting key management in a packet data network, for key exchange, e.g. in peer-to-peer networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816 - Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861 - Generation of secret information, including derivation or calculation of cryptographic keys or passwords
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0894 - Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32 - Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/321 - Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, involving a third party or a trusted authority
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32 - Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3236 - Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, using cryptographic hash functions
    • H04L9/3242 - Cryptographic mechanisms or cryptographic arrangements including means for verifying the identity or authority of a user of the system or for message authentication, using cryptographic hash functions involving keyed hash functions, e.g. message authentication codes [MACs], CBC-MAC or HMAC
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 - Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
    • H04W12/041 - Key generation or derivation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 - Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06 - Authentication
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2463/00 - Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00
    • H04L2463/061 - Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00, applying further key derivation, e.g. deriving traffic keys from a pair-wise master key
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816 - Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H04L9/0819 - Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s)
    • H04L9/083 - Key transport or distribution involving a central third party, e.g. key distribution center [KDC] or trusted third party [TTP]

Definitions

  • This disclosure relates generally to security in computer systems.
  • More particularly, this disclosure relates to systems and methods by which a secure process can share selected data with other processes, whether secure or not, in a safe manner.
  • Embodiments of the systems and methods disclosed herein provide simple and effective ways for secure processes to share selected data with other processes, whether secure or not, in a safe and secure manner.
  • Systems and methods are disclosed that enable a secure data cache system to write certain data to main memory in plaintext form even though the data must first pass through a mandatory encryption process prior to being written out to main memory.
  • A secure execution controller is configured to symmetrically encrypt the data using an encryption key and store the encrypted data in the cache.
  • On write-out, the secure execution controller is configured to symmetrically encrypt the data a second time using the same encryption key and to store the twice symmetrically encrypted data in the memory as plaintext.
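The reason the twice-encrypted data lands in main memory as plaintext is that a symmetric stream-style cipher applied twice with the same key and keystream is the identity transform. The sketch below illustrates the idea in Python, with an HMAC-derived keystream standing in for the hardware cipher; all function names and key values here are illustrative assumptions, not taken from the patent:

```python
import hmac
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Stream-cipher-style transform: XOR data with an HMAC-derived keystream.
    Applying it twice with the same key and nonce restores the input."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ k for x, k in zip(data, out))

key, nonce = b"per-process key!", b"cache-line-nonce"
plaintext = b"secret working-set data"

once = keystream_xor(key, nonce, plaintext)   # copy held encrypted in the cache
twice = keystream_xor(key, nonce, once)       # second pass on write-out
assert once != plaintext                      # cache copy is ciphertext
assert twice == plaintext                     # main-memory copy is plaintext
```

Any cipher that reduces to an XOR with a key-dependent keystream (e.g., a block cipher in counter mode) has this cancellation property; a block cipher in ECB or CBC mode would not.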
  • Systems and methods are also disclosed that enable a secure data cache system to write encrypted data from one secure process to main memory, and to enable the decryption of the data by another secure process.
  • A cache having one or more data lines includes a vector used to identify a secure process for each data line.
  • An encryption key is generated for each secure process identified by a vector in the data lines.
  • The data in each line having a vector that identifies a secure process is symmetrically encrypted using the encryption key corresponding to the secure process identified by the vector of the given data line.
  • The secure process identified in the vector of a given data line is allowed to write to the vector, identifying another secure process and thereby effectively sharing the data in the data line with that other secure process.
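One way to picture the vector mechanism above is a cache line that carries an owner identifier alongside its encrypted data, where rewriting that identifier re-keys the line for the new owner. The following is a toy Python sketch under assumed names; the per-process key derivation and the XOR "cipher" are illustrative stand-ins for the hardware mechanisms and are not secure as written:

```python
import hmac
import hashlib

MASTER = b"device master secret"  # stand-in for the device-held secret

def derive_key(master: bytes, process_id: str) -> bytes:
    # One symmetric key per secure process, derived from the master secret.
    return hmac.new(master, process_id.encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream (repeated hash of the key) for illustration only.
    stream = hashlib.sha256(key).digest() * ((len(data) // 32) + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

class CacheLine:
    def __init__(self, owner: str, data: bytes):
        self.vector = owner  # identifies the owning secure process
        self.data = xor_stream(derive_key(MASTER, owner), data)

    def read(self, process_id: str) -> bytes:
        if process_id != self.vector:
            raise PermissionError("not the process identified by this line's vector")
        return xor_stream(derive_key(MASTER, self.vector), self.data)

    def share_with(self, caller: str, new_owner: str) -> None:
        # Only the current owner may rewrite the vector; the line is then
        # re-encrypted under the new process's key, sharing the data.
        plaintext = self.read(caller)
        self.vector = new_owner
        self.data = xor_stream(derive_key(MASTER, new_owner), plaintext)

line = CacheLine("proc_A", b"shared secret payload")
line.share_with("proc_A", "proc_B")
assert line.read("proc_B") == b"shared secret payload"
```

After `share_with`, the original owner can no longer read the line: `line.read("proc_A")` raises `PermissionError`, mirroring the "one owner per line" rule implied by the vector.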
  • FIG. 1 depicts one embodiment of an architecture for content distribution.
  • FIG. 2 depicts one embodiment of a target device.
  • FIG. 3 depicts one embodiment of a secure execution controller.
  • FIGS. 4A and 4B depict an embodiment of a cache architecture used for process working set isolation.
  • FIG. 5 depicts the generation in hardware of a nonce-based authCode/encryption key.
  • FIG. 6 depicts an exemplary secure processor data flow.
  • FIG. 7 depicts a secure software implementation.
  • FIG. 8 is a flowchart illustrating an exemplary pre-encryption operation.
  • FIG. 9 is a block diagram of an exemplary secure data cache system.
  • FIG. 10 is a block diagram of an exemplary secure data cache system used to place non-encrypted data in main memory.
  • FIGS. 11-13 are block diagrams of exemplary secure data cache systems using secure vector identifiers.
  • FIG. 14 depicts a 2-party secure AKE implemented without using asymmetric cryptography.
  • FIG. 15 depicts a 3-party secure AKE version of the protocol shown in FIG. 14.
  • FIG. 16 depicts an embodiment of the protocol shown in FIG. 14, but using an HSM module as an accelerator for bulk message encryption and decryption processing.
  • A solution presented herein according to embodiments comes in two parts.
  • The first part involves data that is modified while a processor is operating in secure mode.
  • When such data is written to a cache line, the data for that cache line is tagged as secure. Subsequently, only the same secure process that actually wrote the data to that cache line in the first place can read that data without error.
  • The secure page-out process can be accomplished by encrypting the data as it is written out of the data cache, using a standard (symmetric) encryption algorithm such as AES-128 or AES-256.
  • With such an algorithm, the security of the encrypted data should be entirely dependent on the security of the key for that encryption process.
  • Only the secure process that “owns” that data should be able to recreate the encryption key correctly. In that case, no other process (either secure or not) would be able to access the unencrypted form of the data that is paged out to main memory.
  • The creation of a key for this encryption process can be accomplished by using a compound key mechanism (such as that described in commonly-assigned U.S. Pat. No. 7,203,844, issued Apr. 10, 2007, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” which is hereby incorporated by reference in its entirety as if fully set forth herein).
  • With such a mechanism, this encryption key can be recreated at will using just the (public) precursors. Since the authCode for a given secure process is also known, all that would be needed for a secure process to recreate the actual encryption key is the nonce. Using this mechanism, a secure process can then safely page its secure data out to main memory.
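As a sketch of that key-recreation step, the page-out key can be modeled as a keyed hash over the public nonce, with the secure process's authCode as the secret input; the same precursors always regenerate the same key. Function and variable names here are illustrative assumptions, not taken from the patent:

```python
import hmac
import hashlib

def page_key(auth_code: bytes, nonce: bytes) -> bytes:
    # Compound-key-style derivation: the page-out encryption key depends on
    # both the secure process's authCode (secret) and a public nonce precursor.
    return hmac.new(auth_code, nonce, hashlib.sha256).digest()

auth_code = b"authCode of the owning secure process"
nonce = b"public nonce stored alongside the paged-out data"

k1 = page_key(auth_code, nonce)
k2 = page_key(auth_code, nonce)  # recreated at will from the same precursors
assert k1 == k2
assert page_key(b"some other process authCode", nonce) != k1
```

The nonce can be stored in the clear next to the paged-out ciphertext: without the authCode it is useless, which is why only the owning secure process can rebuild the key.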
  • FIG. 1 depicts one embodiment of such a topology.
  • A content distribution system 101 may operate to distribute digital content (which may be, for example, a bitstream comprising audio or video data, a software application, etc.) to one or more target units 100 (also referred to herein as target or endpoint devices) which comprise protocol engines. Topologies other than this exemplary content distribution system are also possible.
  • These target units may be part of, for example, computing devices on a wireline or wireless network, or a computing device which is not networked; such computing devices include, for example, personal computers, cellular phones, personal data assistants, tablets, and media players which may play content delivered as a bitstream over a network or on computer readable storage media that may be delivered, for example, through the mail.
  • This digital content may be composed or distributed in such a manner that the execution of the digital content may be controlled and security implemented with respect to the digital content.
  • Control over the digital content may be exercised in conjunction with a licensing authority 103.
  • This licensing authority 103 (which may be referred to as a central licensing authority (CLA), though it will be understood that such a licensing authority need not be centralized, and its function may be distributed or accomplished by content distribution system 101, by manual distribution of data on a hardware device such as a memory stick, etc.) may provide a key and/or an authorization code.
  • This key may be a compound key (DS) that is both cryptographically dependent on the digital content distributed to the target device and bound to each target device (TDn).
  • A target device may be attempting to execute an application in secure mode.
  • This secure application (which may be referred to as candidate code or a candidate code block (e.g., CC)) may be used in order to access certain digital content.
  • The licensing authority 103 supplies a correct value of a compound key (one example of which may be referred to as an Authorization Code) to the target device on which the candidate code block is attempting to execute in secure mode (e.g., supplies DS1 to TD1).
  • No other target device (e.g., TDn, where TDn ≠ TD1) can make use of the compound key (e.g., DS1), and no other compound key DSn (assuming DSn ≠ DS1) will work correctly on Target Device 100 (e.g., TD1).
  • The target device 100 engages a hash function (which may be hardware based) that creates a message digest (e.g., MD1) of that candidate code block (e.g., CC1).
  • The seed value for this hash function is the secret key for the target device 100 (e.g., TD1's secret key (e.g., SK1)).
  • Such a message digest (e.g., MD1) may be a Message Authentication Code (MAC) as well as a compound key, since the hash function result depends on the seed value of the hash, namely the secret key of the target device 100 (e.g., SK1).
  • The resulting value of the message digest (e.g., MD1) is cryptographically bound to both the secret key of the target device 100 and to the candidate code block.
  • If the licensing-authority-distributed compound key (e.g., DS1) matches the value of the message digest (e.g., MD1) computed over the candidate code block (e.g., CC1), the target device 100 can then run the candidate code block in secure mode.
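The verification flow described above can be sketched as a keyed-hash comparison: the device computes MD1 over the candidate code block with its secret key as the seed, and runs the code in secure mode only if MD1 matches the DS1 supplied by the licensing authority. This Python sketch uses HMAC-SHA256 as a stand-in for the hardware hash; all names and key values are illustrative:

```python
import hmac
import hashlib

def message_digest(secret_key: bytes, candidate_code: bytes) -> bytes:
    # Keyed hash of the candidate code block, seeded with the device secret
    # key: the result (MD1) is bound to both the code and the device.
    return hmac.new(secret_key, candidate_code, hashlib.sha256).digest()

def may_run_secure(ds: bytes, secret_key: bytes, candidate_code: bytes) -> bool:
    md = message_digest(secret_key, candidate_code)
    return hmac.compare_digest(ds, md)  # constant-time comparison

SK1 = b"target device TD1 secret key"
CC1 = b"candidate code block bytes"

DS1 = message_digest(SK1, CC1)  # what the licensing authority would supply
assert may_run_secure(DS1, SK1, CC1)                       # verified, bound code
assert not may_run_secure(DS1, SK1, CC1 + b"tampered")     # altered code fails
assert not may_run_secure(DS1, b"other device key", CC1)   # wrong device fails
```

Note that DS1, MD1, and CC1 can all be public, as the text states: without SK1 an attacker can neither forge a matching DS1 for altered code nor reuse DS1 on another device.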
  • When secure mode execution for a target device 100 is performed, the target device 100 may be executing code that has both been verified as unaltered from its original form and is cryptographically “bound” to the target device 100 on which it is executing.
  • This method of ensuring secure mode execution of a target device may be contrasted with other systems, where a processor enters secure mode upon hardware reset and then may execute in a hypervisor mode or the like in order to establish a root-of-trust.
  • Any or all of these data, such as the compound key from the licensing authority, the message digest, and the candidate code block (e.g., DS1, MD1, CC1), may be completely public as long as the secret key for the target device 100 (e.g., SK1) is not exposed.
  • The value of the secret key of a target device is never exposed, either directly or indirectly.
  • Embodiments of the systems and methods presented herein may, in addition to protecting the secret key from direct exposure, protect against indirect exposure of the secret key on target devices 100 by securing the working sets of processes executing in secure mode on target devices 100.
  • FIG. 2 shows the architecture of one embodiment of a target device that is capable of controlling the execution of the digital content or implementing security protocols in conjunction with received digital content.
  • Elements of the target unit may include a set of blocks, which allow a process to execute in a secured mode on the target device such that when a process is executing in secured mode the working set of the process may be isolated.
  • While these blocks are described as hardware in this embodiment, software may be utilized to accomplish similar functionality with equal efficacy. It will also be noted that while certain embodiments may include all the blocks described herein, other embodiments may utilize fewer or additional blocks.
  • the target device 100 may comprise a CPU execution unit 120 which may be a processor core with an execution unit and instruction pipeline.
  • Target unit 100 may also contain a true random number generator 182, which may be configured to produce a sequence of sufficiently random numbers that can then be used to supply seed values for a pseudo-random number generation system.
  • This pseudo-random number generator can also potentially be implemented in hardware, software, or “secure” software.
  • One-way hash function block 160 may be operable for implementing a hashing function substantially in hardware.
  • One-way hash function block 160 may be a part of a secure execution controller 162 that may be used to control the placement of the target device 100 in secure mode or that may be used to control memory accesses (e.g., when the target device 100 is executing in secured mode), as will be described in more detail herein at a later point.
  • One-way hash function block 160 may also be implemented in a virtual fashion, by a secure process running on the same CPU that is used to evaluate whether a given process is secure or not.
  • In that case, two conditions may be adhered to in order to ensure that such a system resolves correctly. First, the secure mode “evaluation” operation (e.g., the hash function) should itself be performed securely. Second, a chain of nested evaluations should have a definitive termination point (which may be referred to as the root of the “chain of trust” or simply the “root of trust”).
  • This “root of trust” may be the minimum portion of the system that should be implemented in some non-changeable fashion (e.g., in hardware).
  • This minimum feature may be referred to as a “hardware root of trust”.
  • For example, one such hardware root of trust might be a one-way hash function that is realized in firmware (e.g., in non-changeable software).
  • The target unit 100 may also include a hardware-assisted secure mode controller block 170.
  • This secure mode controller block 170 can be implemented in a number of ways.
  • For example, the secure mode controller block 170 may be a general purpose processor or a state machine.
  • The secure execution controller 162 also includes secure mode control registers 105, which define the configuration of the current security state on a process-by-process basis.
  • The secret key 104 and another number are run through the one-way hash function block 160.
  • The result of the hash function is repeatable and is a derivative of the secret.
  • The result of the hash function is provided to the secure mode controller block 170.
  • The terms encryption and decryption will be utilized interchangeably herein when referring to engines (algorithms, hardware, software, etc.) for performing encryption/decryption.
  • In certain embodiments, the same or similar encryption or decryption engine may be utilized for both encryption and decryption.
  • The encryption and decryption functions may or may not be substantially similar, even though the keys may be different.
  • Target device 100 may also comprise a data cache 180, an instruction cache 110 where code that is to be executed can be stored, and main memory 190.
  • Data cache 180 may be almost any type of cache desired, such as an L1 or L2 cache.
  • Data cache 180 may be configured to associate a secure process descriptor with one or more pages of the cache and may have one or more security flags associated with (all or some subset of the) lines of data cache 180.
  • A secure process descriptor may be associated with a page of data cache 180.
  • Embodiments of target device 100 may isolate the working set of a process executing in secure mode stored in data cache 180 such that the data is inaccessible to any other process, even after the original process terminates. More specifically, in one embodiment, the entire working set of a currently executing process may be stored in data cache 180, with writes to main memory 190 and write-through of that cache (e.g., to main memory 190) disallowed (e.g., by secure execution controller 162) when executing in secured mode.
  • Such cache lines may be associated with a secure process descriptor for the currently executing process.
  • The secure process descriptor may uniquely specify those associated “dirty” cache lines as belonging to the executing secure process, such that access to those cache lines can be restricted to only that process (e.g., by secure execution controller 162).
  • In the event that data must nonetheless be written out to main memory (e.g., in a page swap or page-out operation), external data transactions between the processor and the bus (e.g., an external memory bus) may be encrypted.
  • The encryption (and decryption) of data written to main memory may be controlled by secure execution controller 162.
  • The key for such an encryption may be the secure process descriptor itself or some derivative thereof, and that secure descriptor may itself be encrypted (e.g., using the target device's 100 secret key 104 or some derivative thereof) and stored in main memory 190 in encrypted form as a part of the data being written to main memory.
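A page-out under that scheme might look like the following sketch: the data key is derived from the secure process descriptor, and the descriptor itself is wrapped under the device secret before being stored next to the ciphertext. The XOR-based cipher and all names here are illustrative stand-ins, not the patent's actual construction:

```python
import hmac
import hashlib

DEVICE_SECRET = b"target device secret key 104"  # illustrative value

def xor_with(key: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream; applying it twice with the same key
    # restores the input (encrypt and decrypt are the same operation).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def page_out(descriptor: bytes, working_set: bytes) -> tuple[bytes, bytes]:
    data_key = hashlib.sha256(descriptor).digest()  # key derived from descriptor
    wrapped = xor_with(DEVICE_SECRET, descriptor)   # descriptor wrapped under device secret
    return wrapped, xor_with(data_key, working_set)

def page_in(wrapped: bytes, ciphertext: bytes) -> bytes:
    descriptor = xor_with(DEVICE_SECRET, wrapped)   # unwrap with the device secret
    return xor_with(hashlib.sha256(descriptor).digest(), ciphertext)

wrapped, ct = page_out(b"secure-process-descriptor", b"working set contents")
assert ct != b"working set contents"                 # stored encrypted
assert page_in(wrapped, ct) == b"working set contents"
```

Storing the wrapped descriptor alongside the ciphertext means main memory holds everything needed to page the data back in, yet recovering the data key still requires the device secret held only by the secure execution controller.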
  • Instruction cache 110 is typically known as an I-Cache.
  • A characteristic of portions of this I-Cache 110 is that the data contained within certain blocks is readable only by CPU execution unit 120.
  • This particular block of I-Cache 130 is execute-only and may be neither read from nor written to by any executing software.
  • This block of I-Cache 130 will also be referred to as the “secured I-Cache” 130 herein.
  • Code to be executed may be stored in this secured I-Cache block 130 by way of another block, which may or may not be depicted.
  • Normal I-Cache 150 may be utilized to store code that is to be executed normally, as is known in the art.
  • Certain blocks may be used to accelerate the operation of a secure code block.
  • A set of CPU registers 140 may be designated to be accessible only while the CPU 120 is executing secure code, or to be cleared upon completion of execution of the secure code block (instructions in the secured I-Cache block 130 executing in secured mode), or upon a jump, for any reason, to any section of code located in the non-secure or “normal” I-Cache 150 or other area during the execution of code stored in the secured I-Cache 130.
  • CPU execution unit 120 may be configured to track which registers 140 are read from or written to while executing the code stored in secured I-cache block 130 and then automatically clear or disable access to these registers upon exiting the “secured execution” mode. This allows the secured code to quickly “clean-up” after itself such that only data that is permitted to be shared between two kinds of code blocks is kept intact. Another possibility is that an author of code to be executed in the secured code block 130 can explicitly identify which registers 140 are to be cleared or disabled. In the case where a secure code block is interrupted and then resumed, then these disabled registers may potentially be re-enabled if it can be determined that the secure code that is being resumed has not been tampered with during the time that it was suspended.
  • A set of registers 140 which are to be used only when the CPU 120 is executing secured code may be identified. In one embodiment, this may be accomplished utilizing a version of the register renaming and scoreboarding mechanism which is practiced in many contemporary CPU designs. In some embodiments, the execution of a code block in secured mode is treated as an atomic action (e.g., it is non-interruptible), which may make such renaming and scoreboarding easier to implement.
  • Another method which may be utilized for protecting the results obtained during the execution of a secured code block that is interrupted mid-execution from being exposed to other execution threads within a system is to disable stack pushes while the target device 100 is operating in secured execution mode.
  • This disabling of stack pushes will mean that a secured code block is thus not interruptible in the sense that, if the secured code block is interrupted prior to its normal completion, it cannot be resumed and therefore must be restarted from the beginning.
  • If the “secured execution” mode is disabled during a processor interrupt, then the secured code block may also potentially not be able to be restarted unless the entire calling chain is restarted.
  • Each target unit 100 may also have one or more secret key constants 104 , the values of which are not software-readable.
  • the first of these keys (the primary secret key) may be organized as a set of secret keys, of which only one is readable at any particular time. If the “ownership” of a unit is changed (for example, the equipment containing the protocol engine is sold or its ownership is otherwise transferred), then the currently active primary secret key may be “cleared” or overwritten by a different value. This value can either be transferred to the unit in a secure manner or it can be already stored in the unit in such a manner that it is only used when this first key is cleared.
  • a secondary secret key may be utilized with the target unit 100 itself. Since the CPU 120 of the target unit 100 cannot ever access the values of either the primary or the secondary secret keys, in some sense, the target unit 100 does not even “know” its own secret keys 104 . These keys are only stored and used within the security execution controller 162 of the target unit 100 as will be described.
  • the two keys may be constructed as a list of “paired” keys, where one such key is implemented as a one-time-programmable register and the other key in the pair is implemented using a re-writeable register.
  • the re-writeable register may be initialized to a known value (e.g., zero) and the only option that may be available for the system to execute in secure mode in that state may be to write a value into the re-writeable portion of the register.
  • Once some value (e.g., one that may only be known by the Licensing Authority) has been written into the re-writeable register, the system may only then be able to execute more general purpose code while in secure mode. If this re-writeable value should be re-initialized for some reason, then the use of a new value each time this register is written may provide increased security in the face of potential replay attacks.
  • Yet another set of keys may operate as part of a temporary public/private key system (also known as an asymmetric key system or a PKI system).
  • the keys in this pair may be generated on the fly and may be used for establishing a secure communications link between similar units, without the intervention of a central server.
  • these keys may be larger in size than those of the set of secret keys mentioned above.
  • These keys may be used in conjunction with the value that is present in the on-chip timer block in order to guard against “replay attacks”, among other things. Since these keys may be generated on the fly, the manner by which they are generated may be dependent on the random number generation system 182 in order to increase the overall system security.
  • one method that can be used to affect a change in “ownership” of a particular target unit is to always use the primary secret key as a compound key in conjunction with another key 107 , which we will refer to as a timestamp or timestamp value, as the value of this key may be changed (in other words may have different values at different times), and may not necessarily reflect the current time of day.
  • This timestamp value itself may or may not be itself architecturally visible (e.g., it may not necessarily be a secret key), but nonetheless it will not be able to be modified unless the target unit 100 is operating in secured execution mode.
  • the consistent use of the timestamp value as a component of a compound key whenever the primary secret is used can produce essentially the same effect as if the primary secret key had been switched to a separate value, thus effectively allowing a “change of ownership” of a particular target endpoint unit without having to modify the primary secret key itself.
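The effect described above can be sketched in a few lines. This is an illustrative model only, not the patented hardware: the derivation (SHA-256 over a simple concatenation) and all key and timestamp values are assumptions chosen for the sketch.

```python
import hashlib

def compound_key(primary_secret: bytes, timestamp: bytes) -> bytes:
    # Hypothetical derivation: the primary secret is never used alone;
    # it is always hashed together with the current timestamp value.
    return hashlib.sha256(primary_secret + timestamp).digest()

secret = b"\x01" * 32                      # device primary secret (illustrative)
owner_a = compound_key(secret, b"timestamp-epoch-1")
owner_b = compound_key(secret, b"timestamp-epoch-2")

# Changing the timestamp value changes every derived key, which acts
# like a change of ownership without modifying the primary secret itself.
assert owner_a != owner_b
```

Because every use of the primary secret passes through this derivation, updating the timestamp invalidates all previously derived keys at once.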
  • the target device 100 may use secure execution controller 162 and data cache 180 to isolate the working sets of processes executing in secure mode such that the data is inaccessible to any other process, even after the original process terminates.
  • This working set isolation may be accomplished in certain embodiments by disabling off-chip writes and write-through of data cache when executing in secured mode, associating lines of the data cache written by the executing process with a secure descriptor (that may be uniquely associated with the executing process) and restricting access to those cache lines to only that process using the secure process descriptor.
  • a secure process descriptor may be a compound key such as an authorization code or some derivative value thereof.
  • the secure descriptor associated with the currently executing process may be compared with the secure descriptor associated with the requested line of the data cache. If the secure descriptors match, the data of that cache line may be provided to the executing process while if the secure descriptors do not match the data may not be provide and another action may be taken.
  • In some embodiments, before the working set is written out to main memory (e.g., in a page swap or page out operation), or before external data transactions occur between the processor and the bus (e.g., an external memory bus), the data may be encrypted. The key for such an encryption may be the secure process descriptor itself or some derivative thereof, and that secure process descriptor may be encrypted (e.g., using the target device's secret key or some derivative thereof) prior to being written out to the main memory.
  • this encryption process may be accomplished substantially using the hashing block of the target device, by use of a software encryption process running in secure mode on the processor itself or some other on-chip processing resource, or by use of an encryption function that is implemented in hardware.
  • a subset of the process's working set that is considered “secure” may be created (e.g., only a subset of the dirty cache lines for the process may be associated with the secure descriptor), and only those cache lines, or the portion of the cache containing those lines, may be encrypted when written out to external memory.
  • In some embodiments, this encryption may be performed by an off-chip storage mechanism (e.g., a page swapping module), by an interrupting process (e.g., using a DMA unit with integrated AES encryption hardware acceleration), or by a separate secure “working set encapsulation” software module that performs the encryption prior to allowing working set data to be written out to memory.
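As a rough sketch of keying page-out encryption from the secure descriptor, the following toy model uses a SHA-256-based XOR keystream in place of a real cipher (e.g., the AES hardware mentioned above); the key-derivation labels and all values are assumptions for illustration only.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Simple SHA-256-in-counter-mode keystream (illustrative only;
    # a real device would use a hardware cipher such as AES).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_page(secure_descriptor: bytes, page: bytes) -> bytes:
    # The page-out key is a derivative of the secure process descriptor.
    key = hashlib.sha256(b"page-key" + secure_descriptor).digest()
    return bytes(a ^ b for a, b in zip(page, keystream(key, len(page))))

descriptor = hashlib.sha256(b"auth-code-of-process").digest()
plaintext_page = b"secret working-set data" * 4
ciphertext = encrypt_page(descriptor, plaintext_page)

# XOR stream cipher: applying the same operation again restores the page,
# but only for a caller holding the same secure descriptor.
assert encrypt_page(descriptor, ciphertext) == plaintext_page
```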
  • secure execution controller 362 is associated with a CPU of a system in which it is included and is intended to support the running of a candidate code block in secure mode on the main CPU.
  • secure execution controller 362 may comprise one or more of registers, including a secret hardware key 310 which is not visible to the CPU, secure mode control register 350 , authorization code register 360 , secure mode status register 352 , hash seed register 312 and hardware generated compound key register 314 .
  • all but secret hardware key 310 may be readable by a CPU without affecting the overall security of the system, although any of these other registers may or may not be visible.
  • Secure mode control register 350 may be a register that may be written to in order to attempt to place the target device in a secure mode.
  • the secure mode control register 350 may have a register into which a memory location (e.g., in an I-cache or main memory) corresponding to the beginning address of a candidate code block (e.g., a code block to be executed in secured mode) may be written and a separate register into which the length of such a candidate code block may be written.
  • Authorization code register 360 may be a location into which an authorization code or another type of key or data may be written.
  • Secure mode status register 352 may be a memory-mapped location comprising one or more bits that may only be set by hardware comparison block 340 and which can indicate whether or not the target device 100 is operating in secure mode.
  • Hardware hash function block 320 may be operable for implementing a hash function substantially in hardware to generate a compound key 314 .
  • Hardware hash function block 320 may, for example, implement SHA-256 or some similar one-way hash function. However, this hash function may also be implemented in software or in firmware running on a separate processor from the CPU of the system, or even as a process that is run on the CPU in secure mode, using a virtual hardware hash function methodology as described earlier.
  • Hardware hash function block 320 may take as input one or more of the values stored in the hash seed register 312 , secret hardware key 310 or data from another location, concatenate these inputs (e.g., prepend or append one input to another) and hash the resulting data set to generate a message authentication code, which we have referred to earlier as a one-way compound key.
  • the input data for the hardware hash function may be constructed by a concatenation of the secret hardware key, a hash seed precursor key and a secure code block candidate.
  • Hardware generated compound key register 314 is configured to store the output of the hardware hash function block 320 .
  • Hardware comparison block 340 may be configured to compare the data in hardware generated compound key register 314 with the data in authorization code register 360 . If the two values are identical the hardware comparison block 340 is configured to set the one or more bits in secure mode status register 352 that place the target device in secure mode.
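The authorization flow above (hash the secret key, seed and candidate code; compare against the supplied authorization code; enter secure mode only on a match) can be modeled as follows. This is a software sketch of the hardware behavior; the concatenation order and all key values are illustrative assumptions.

```python
import hashlib
import hmac

SECRET_HW_KEY = b"\x2a" * 32   # stands in for key 310; not CPU-visible in hardware

def hw_compound_key(hash_seed: bytes, code_block: bytes) -> bytes:
    # Concatenate the secret key, hash seed and candidate code block,
    # then hash: this models the one-way compound key in register 314.
    return hashlib.sha256(SECRET_HW_KEY + hash_seed + code_block).digest()

def try_enter_secure_mode(hash_seed: bytes, code_block: bytes,
                          auth_code: bytes) -> bool:
    generated = hw_compound_key(hash_seed, code_block)
    # Models comparison block 340: the secure-mode bit is set only on a match.
    return hmac.compare_digest(generated, auth_code)

code = b"\x90\x90\xc3"                      # candidate code block image
seed = b"seed"
good_auth = hw_compound_key(seed, code)     # a valid authorization code

assert try_enter_secure_mode(seed, code, good_auth)
# Any tampering with the code image invalidates the authorization code.
assert not try_enter_secure_mode(seed, code + b"\x00", good_auth)
```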
  • Secure mode controller state machine 370 may be logic (e.g., hardware, software or some combination) that may operate based on the state of bits of secure mode control register 350 or secure mode status register 352 .
  • Secure mode controller state machine 370 is configured for controlling inputs to hardware hash function block 320 , such that the precursors may be utilized in the correct manner to generate the desired output 314 of hardware hash function block 320 .
  • secure mode controller state machine 370 may be configured to cause the resulting output to be loaded into hardware generated compound key register 314 at the proper time.
  • secure mode controller state machine 370 may be configured to cause the correct data to be written to secure mode status register 352 .
  • Secure mode controller state machine 370 may also be configured for controlling memory access when the target device is executing in secure mode. In one embodiment, when the bits in secure mode status register 352 indicate that the target device is now operating in secure mode, secure mode controller state machine 370 may be configured to determine which of the pages of the data cache have been assigned to that process and store a secure descriptor for that process in the data cache in association with one or more of the pages of the data cache. These secure process descriptors may thus be used to associate a particular set of data that is being stored in the data cache with a specific process that is executing in secured mode. Such a secure process descriptor may, for example, be a value that is based on the data located in authorization code register 360 or the hardware-generated compound key register 314 .
  • secure mode controller state machine 370 may be able to receive memory accesses by the process executing in secure mode and determine if the memory access is a read or a write access.
  • the secured mode controller state machine 370 may be configured to determine the cache line of the data cache corresponding to the address where the data is to be written and then set a security flag associated with that cache line to indicate that the data contained in that cache line is secure. In certain embodiments, secured mode controller state machine 370 is also configured to prevent any writes to any memory location which is not in the data cache, for example by disabling write-through, write-back or other operations of the data cache or memory controllers of the target device.
  • the secured mode controller state machine 370 may be configured to determine if a cache miss has occurred; if the requested address was not previously stored in the data cache, the secured mode controller state machine 370 may be configured to allow the requested data to be read from main memory and placed in the data cache in a page associated with the process. If a cache hit occurs, the secured mode controller state machine 370 may be configured to determine the cache line corresponding to the address of the memory access and check whether the security flag associated with that cache line is set. If the security flag is not set, the memory access may be allowed to proceed (e.g., the data read from the cache line).
  • secured mode controller state machine 370 may be configured to obtain the secure process descriptor associated with the page in the data cache containing that cache line and compare it with the secure process descriptor associated with the currently executing process. If the secure process descriptors match, then the memory access may be allowed to proceed. If the secure descriptors do not match, another action may be taken, such as returning a garbage or preset value in response to the memory access, or alternately returning a “no valid data at that address” message to the CPU, whereupon the CPU memory management unit may then request a replacement cache line to read in from system memory.
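The access check above can be sketched as a small model. The `CacheLine` type and the choice of returning an all-zero preset value on a descriptor mismatch are assumptions for illustration; a real implementation could instead signal "no valid data" to the memory management unit.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    data: bytes
    secure: bool = False       # models the security flag on the line
    descriptor: bytes = b""    # secure process descriptor of the owner

def read_line(line: CacheLine, current_descriptor: bytes) -> bytes:
    if not line.secure:
        return line.data                    # non-secure data: no check needed
    if line.descriptor == current_descriptor:
        return line.data                    # the owner may read its own line
    return b"\x00" * len(line.data)         # mismatch: return a preset value

line = CacheLine(data=b"secret", secure=True, descriptor=b"proc-A")
assert read_line(line, b"proc-A") == b"secret"
assert read_line(line, b"proc-B") == b"\x00" * 6
```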
  • only the data cache is used to store the entire working set of a process executing in secure mode and any writes to memory other than to the data cache by the process may be disabled.
  • any lines of the data cache that are written to (e.g., so-called “dirty” cache lines) may be tagged with a secure process descriptor that uniquely and precisely specifies to which process the “dirty” cache line belongs.
  • Access to these cache lines may only be allowed to the owner of the particular “dirty” cache line such that any cache line modified during the operation of a secure process is unreadable by any other process, even after the original process has terminated.
  • data that belongs to one instance of a process is unambiguously isolated from any other process.
  • FIGS. 4A and 4B illustrate one embodiment of the architecture of a data cache that may be utilized to effectuate isolation of working sets of processes according to certain embodiments.
  • data cache 400 may be almost any type of cache, including an L1 cache, an L2 cache, a direct mapped cache, a 2-way set associative cache, a 4-way set associative cache, a 2-way skewed associative cache, etc., that may be implemented in conjunction with almost any management or write policies desired.
  • the cache 400 may comprise a set of pages 410 . When used in referring to the cache herein, a page may be understood to mean a cache block or a cache set.
  • the data cache 400 is configured to store a secure descriptor associated with one or more pages 410 of the cache.
  • FIG. 4B depicts a view of one embodiment of a page 410 a of cache 400 .
  • the cache comprises logic 412 designed to store a secure process descriptor in association with the page 410 a and to provide the secure process descriptor in response to a request for the secure process descriptor for page 410 a or in conjunction with a read to a cache line 402 of page 410 a .
  • Each cache line 402 of the page 410 a includes bits for the data, address tags and flags 420 .
  • the flags 420 may include bits such as a valid bit or dirty bit.
  • flags 420 may include a secure bit 422 .
  • Cache 400 may be configured such that a secure bit 422 for a cache line 402 may be set (e.g., when a process executing in secure mode writes to that cache line 402 ).
  • One way in which any generic (or other) block of code (which will be referred to as a “secure work function”) may be executed in secure mode on embodiments of a system such as those described herein is to execute a pair of extra functions, one on either side (e.g., before and after) of the secure work function.
  • a function (or set of functions) that is executed immediately prior to a secure work function will be referred to as the “prologue” and a function (or set of functions) which is executed immediately after the secure work function will be referred to as the “epilogue”.
  • In order to execute a secure work function on a CPU, that secure work function should be preceded by a prologue and followed by an epilogue.
  • the purpose of the prologue is at least threefold.
  • the prologue should prepare the input arguments that are passed to the secure work function for use by the secure work function. This preparation may involve, for example, a decryption process, which may be required for those input arguments that may not be passed to the secure work function in the clear.
  • a second function of the prologue may be to construct a compound key whose value is dependent on a number of data elements.
  • Such data elements may include the hardware secret key of the target device, the Authorization Code of the parent (e.g., calling) function, a list of one or more input arguments to the secure work function (either in encrypted or non-encrypted form), the executable image of the secure work function itself, or some other information that may be used in determining whether or not the secure work function should be allowed to execute on the target device in secure mode.
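The prologue's compound-key construction over these data elements can be sketched as below. The concatenation order, the byte encodings and all input values are assumptions chosen for the sketch, not the patented format.

```python
import hashlib

def prologue_compound_key(hw_secret: bytes, parent_auth_code: bytes,
                          args: list, code_image: bytes) -> bytes:
    # Fold every data element the authorization decision depends on
    # into a single one-way hash (order is illustrative).
    h = hashlib.sha256()
    h.update(hw_secret)            # hardware secret key of the target device
    h.update(parent_auth_code)     # Authorization Code of the calling function
    for arg in args:               # input arguments, encrypted or not
        h.update(arg)
    h.update(code_image)           # executable image of the secure work function
    return h.digest()

key = prologue_compound_key(
    b"\x07" * 32,                  # hypothetical hardware secret
    b"parent-auth-code",
    [b"arg1", b"arg2"],
    b"\xde\xad\xbe\xef",
)
assert len(key) == 32
```

Any change to any one of the inputs (a tampered code image, a different caller, altered arguments) yields a different compound key, which is what lets the hardware decide whether the work function may run in secure mode.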
  • a third function of the prologue could be to initiate a request that the CPU begin executing the secure work function in secure mode.
  • the purpose of the epilogue may be to “clean up” after the execution of the secure work function is complete.
  • One function of the epilogue may be to prepare any designated output parameters for use by subsequent code blocks (e.g., to be executed after the secure work function), be they secure or not.
  • this preparation may involve encrypting the designated output (or returned data) from the secure work function so that any observing process other than the intended recipient of such output arguments, including either hardware- or software-based observers, may be precluded from effectively intercepting that data.
  • the encryption key that may be used may be a reversible compound key that is passed to the secure routine as one of its calling arguments.
  • a second function of the epilogue may be to either programmatically or automatically invalidate those portions of a data cache that were written to (e.g., by the secure work function) while the secure work function was executing.
  • the data values that were written to a secure portion of the data cache prior to the process being suspended may thus be available to the resumed secure process without having to page these secure data locations out to memory (which may involve an intervening encryption process).
  • these same data cache locations may then be made available to the secure function, since the secure process descriptor may match the currently executing authorization code, or some derivative thereof (or another value being used as a secure process descriptor).
  • FIG. 5 is a block diagram depicting a nonce-based authCode/encryption key generated in hardware.
  • a nonce 510 and the hardware secret 512 are used in conjunction with the cached code 514 to be protected, and fed into a hash function 516 .
  • the hash function 516 generates a unique NauthCode which can be used as an encryption key for securely paging cache data out to main memory. Since the nonce changes every time, the nonce based authCode NauthCode also changes every time. In some embodiments, the nonce may be generated from the previous operation's NauthCode fed back into the hash function input.
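The NauthCode generation of FIG. 5, including the feedback of the previous NauthCode as the next nonce, can be modeled as follows. The input ordering and all values are illustrative assumptions.

```python
import hashlib

HW_SECRET = b"\x55" * 32   # models hardware secret 512

def nauth_code(nonce: bytes, cached_code: bytes) -> bytes:
    # Nonce, hardware secret and the protected (cached) code are
    # fed into the hash function to produce the NauthCode.
    return hashlib.sha256(nonce + HW_SECRET + cached_code).digest()

code = b"secure code image"
n1 = b"initial-nonce"
k1 = nauth_code(n1, code)       # paging key for the first invocation

# Feedback path of FIG. 5: the previous NauthCode becomes the next nonce.
n2 = k1
k2 = nauth_code(n2, code)

# Because the nonce changes every time, so does the paging key.
assert k1 != k2
```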
  • the nonce should be generated securely; either by a software-based method (a process running in secure mode, such as in the “Prologue” section just preceding the Secure Code block as shown in FIG. 7 ) or a hardware-based mechanism (one which may need to use the processor's hardware secret, e.g., by the feedback line shown in FIG. 5 ). In this case, the nonce itself would then be considered as “secure” data and would thus not be able to be stored in the clear in main memory.
  • FIG. 6 illustrates a secure process dataflow.
  • a secure processor 610 implements a secure data cache (D$) 612 .
  • when the data stored in the data cache 612 are paged out from the secure cache 612 into main memory 614 , they are encrypted using an encryption key generated using a keyed one-way hash (or HMAC), as discussed above, based on the nonce 620 , the hardware secret key 618 , and the secure processor's authCode 622 .
  • the nonce itself is stored in the secure D$ 612 and pre-encrypted using the generated encryption key (via pre-encryption block 624 ) before being stored back in the secure D$ 612 .
  • when paged out, the data, along with the pre-encrypted nonce, are encrypted using the encryption key (via encryption block 626 ). This second encryption pass decrypts the nonce, so it is stored in the clear in main memory 614 and is available when reading data back in.
  • FIG. 7 illustrates one example of a secure software implementation.
  • a secure software implementation includes a prologue 710 .
  • the prologue 710 informs the secure mode hardware state machine where and how big the candidate secure code block is.
  • the secure code 712 generates a new nonce (the first part of the secure code) in the prologue 710 and then, in an epilogue 714 , data are exported (after being pre-encrypted, as described above) securely to main memory and the state machine is shut down.
  • the mechanism described above can potentially be subverted by an external attack based on use of the data that has been paged out to main memory. This could be accomplished if either the encrypted data is maliciously modified or if the nonce value itself is modified. For example, one might envision a “replay attack”, where a nonce from a previous invocation of a particular secure process is inserted into the data set in place of the correct nonce.
  • these kinds of data corruption problems can be detected by “signing” the encrypted data set that is paged out to main memory (e.g., with a Message Authentication Code—or MAC).
  • a MAC can be created by passing the encrypted data that is to be paged-out (including the unencrypted nonce) through a one-way hash function, along with the same secret value that was used in order to create the encryption key described above.
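The MAC construction above can be sketched with a standard HMAC. Using HMAC-SHA-256 keyed by the hardware secret, and the specific message layout (encrypted page concatenated with the clear nonce), are assumptions for illustration.

```python
import hashlib
import hmac

HW_SECRET = b"\x11" * 32   # the same secret used to create the paging key

def sign_paged_data(encrypted_page: bytes, clear_nonce: bytes) -> bytes:
    # The MAC covers the encrypted page plus the unencrypted nonce,
    # so tampering with either one is detectable.
    return hmac.new(HW_SECRET, encrypted_page + clear_nonce,
                    hashlib.sha256).digest()

page = b"\xaa" * 64                 # encrypted data set paged out to memory
nonce = b"nonce-in-the-clear"
tag = sign_paged_data(page, nonce)  # stored alongside the paged-out data

# Integrity check on resume: the stored tag verifies against the data...
assert hmac.compare_digest(tag, sign_paged_data(page, nonce))
# ...but a replayed nonce from a previous invocation no longer does.
old_nonce = b"nonce-from-last-run"
assert not hmac.compare_digest(tag, sign_paged_data(page, old_nonce))
```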
  • embodiments may include methods that will prevent this style of attack.
  • the nonce value is “seeded” at the same time that the Central Licensing Authority provisions the secure device in the first place. That nonce “seed” value will be provided to the device in an encrypted manner and will contain sufficient entropy so as to never have a secure device have a repeated nonce “seed” value.
  • Subsequent nonce values may then be generated from this seed using a Pseudo-Random Number Generator (PRNG).
  • the saved (encrypted) data set is signed by a MAC resulting from the hash of the concatenated pair of nonces (along with the hardware secret) and the paged-out encrypted data set, then every time the secure process is resumed, it can be checked for integrity prior to allowing the process to be resumed when returning from an interrupt.
  • This method is also secure against mid-encryption interruption (and thus subsequent intermediate partial results being paged out to memory), since any such partial results will themselves be encrypted and only that data which has been completely “pre-encrypted” will actually show up in the clear when it is paged out to main memory.
  • an encryption key (NauthCode) may be generated based on, for example, the process secure authcode, the processor secret hardware key, and the nonce (described in detail above).
  • Data that are desired to be exported (as well as the nonce, if necessary) may be pre-encrypted using the NauthCode encryption key (step 804 ).
  • the data may then be stored in the secure data cache (step 806 ).
  • To page the data out from the secure cache, the data are then encrypted (step 808 ) on page out using the same NauthCode encryption key; because the data were pre-encrypted, this second pass effectively decrypts them.
  • the now-unencrypted data (and nonce) are then available in main memory (step 810 ). More detailed examples of similar processes are described below.
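The "pre-encryption" trick in steps 804-810 relies on the cipher being its own inverse, as an XOR-keystream (stream) cipher is. The following toy model uses a SHA-256-derived keystream in place of real hardware; the key-derivation inputs and all values are illustrative assumptions.

```python
import hashlib

def xor_cipher(key: bytes, buf: bytes) -> bytes:
    # Involutive stream cipher: applying it twice restores the input.
    ks, ctr = b"", 0
    while len(ks) < len(buf):
        ks += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(buf, ks))

# NauthCode derived from authcode, hardware secret and nonce (step 802).
nauth = hashlib.sha256(b"authcode|hw-secret|nonce").digest()
nonce = b"current-nonce-value!"

# Step 804: pre-encrypt the nonce before storing it in the secure D$.
in_cache = xor_cipher(nauth, nonce)

# Step 808: page-out encryption is applied to everything in the cache;
# for the pre-encrypted nonce, this second pass yields plaintext (step 810).
in_main_memory = xor_cipher(nauth, in_cache)
assert in_main_memory == nonce
```

This is why the nonce can land in main memory in the clear while everything else paged out at the same time remains encrypted.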
  • this functionality can be accomplished by using a shared encryption key.
  • For security reasons, it is desirable not to share the compound encryption key described above between two different processes. Thus, we must use a different mechanism to create the encryption key for data that we wish to share between two secure processes. Also, it is desirable to further subdivide this problem into “one-way” and “bidirectional” shared data mechanisms.
  • For a “one-way” secure data export, it may be desirable to create an encryption key in such a manner that only the authorized recipient (i.e., the secure “receiver” process) could create and use the shared data decryption key.
  • the “sender” process would have access to the original data, prior to its encryption (and export).
  • the requirement of such a “one-way” shared data system would be that only the authorized recipient could subsequently have access to the shared data.
  • If the data is not modified by the “receiver” process, then it doesn't make sense to try to enforce this “one-way” stipulation, since the “sender” process can simply make a backup copy of the exported data.
  • If the data is modified, however, the “one-way” stipulation would apply; i.e., the “sender” process would not have access to the subsequently modified version of the shared data.
  • This mechanism can be supported by a number of different exemplary methods.
  • one simple method is to create a “shared” key whereby the shared data can be decrypted by both the secure “receiver” process as well as the secure “sender” process.
  • If the “receiver” process modifies the data, then by the mechanisms described above, it will automatically be tagged as belonging to the secure “receiver” process.
  • When this data is then paged back out to main memory, it would be encrypted with a different compound key (one that only the “receiver” process can recreate).
  • a “bidirectional” secure data export it may be desirable to provide a mechanism by which two independent secure processes can interact in order to recreate a shared secret key that is to be used in order to encrypt and decrypt a common data set.
  • At least one key exchange mechanism exists that does not depend on asymmetric cryptography.
  • This mechanism is the “SHAAKETM” protocol, as described in commonly assigned, co-pending U.S. Patent Application No. ______[Atty. Docket KRIM1220-1] titled “System and Method for an Efficient Authentication and Key Exchange Protocol,” filed concurrently herewith and hereby incorporated by reference in its entirety as if fully set forth herein.
  • This protocol depends only on cascaded hash functions and a single symmetric encrypt/decrypt stage, all of which impose a much lower computational load on the processor than traditional asymmetric cryptography based mechanisms.
  • the SHAAKETM protocol can enable communication between two independent secure processes, even if they are executing on completely different processors.
  • the SHAAKE protocol can also be easily extended to the case where the exported data must be shared between multiple independent secure processes, all of which may or may not be running on the same processor.
  • each process has its own nonce and authCode (Nonce 1 , authcode 1 , Nonce 2 , authCode 2 ) which are hashed with the hardware secret key Kh to obtain SPID 1 , SPID 2 .
  • SPID 1 and SPID 2 in turn are hashed with the hardware secret key Kh to obtain the final shared encryption key EK 12 .
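The two-stage derivation above (per-process SPIDs, then a shared EK12) can be modeled as below. This is not the SHAAKE protocol itself: the concatenation order, the sorting of the SPIDs so that both sides derive the same key, and all values are assumptions chosen to make the sketch symmetric.

```python
import hashlib

Kh = b"\x33" * 32   # hardware secret key used in both hashing stages

def spid(nonce: bytes, auth_code: bytes) -> bytes:
    # Stage 1: each process hashes its own nonce and authCode with Kh.
    return hashlib.sha256(nonce + auth_code + Kh).digest()

spid1 = spid(b"nonce-1", b"authcode-1")
spid2 = spid(b"nonce-2", b"authcode-2")

def shared_key(a: bytes, b: bytes) -> bytes:
    # Stage 2: the SPIDs are combined in a fixed (sorted) order so that
    # both processes derive the identical shared encryption key EK12.
    lo, hi = sorted((a, b))
    return hashlib.sha256(lo + hi + Kh).digest()

ek12 = shared_key(spid1, spid2)
# Symmetric: either process computes the same key from the two SPIDs.
assert ek12 == shared_key(spid2, spid1)
```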
  • data (for example, a nonce) can be pre-encrypted before being paged out to main memory, so that it will be decrypted with the same encryption key and thus stored in main memory unencrypted.
  • FIG. 9 is a block diagram of a secure data cache system 900 .
  • the secure data cache system 900 includes a data cache 910 and CPU 912 and corresponding CPU execution unit 914 .
  • Data cache 910 may comprise almost any type of cache, including an L1 cache, an L2 cache, a direct mapped cache, a 2-way set associative cache, a 4-way set associative cache, a 2-way skewed associative cache, etc., that may be implemented in conjunction with almost any management or write policies desired.
  • the data cache 910 includes a plurality of data lines (in this example, three lines are shown). Each data line includes bits allocated for an address tag 916 (addr_tag), data 918 (data), valid flag bit 920 (V), and dirty flag bit 922 (D).
  • the address tags, data, valid bits and dirty bits shown in FIG. 9 are used in their conventional sense. Note that the address lines may also include other information not shown, for example, a secure bit such as that shown in FIG. 4 .
  • the secure data is encrypted by encryption block 924 , using the data cache encryption key 926 .
  • the data cache encryption key 926 is derived from the hardware secret 928 (hw secret), nonce 930 (nonce), and authorization code 932 (authCode).
  • the authorization code 932 is specific to a particular process, and thus the encryption key 926 is unique to a particular process.
  • the nonce 930 is unique to a particular instance of the secure process.
  • the hardware secret 928 , nonce 930 , and authorization code 932 are provided to hardware hash function 934 , to generate the data cache encryption key 926 .
  • FIG. 10 shows one example of a secure data cache system 1000 that can preserve a particular nonce for use later in reconstructing a previous data cache encryption key.
  • the data cache 1010 includes a plurality of data lines, each including bits allocated for an address tag 1016 , data 1018 , valid flag bit 1020 , and dirty flag bit 1022 . Since the hardware secret is constant, and the authorization code is constant for a particular process, a particular data cache encryption key can be reconstructed, given the appropriate nonce. However, a nonce cannot simply be paged out to main memory, since it would be encrypted using an encryption key that will no longer be available once a new nonce is generated.
  • a secure process desires to reconstruct a previous data cache encryption key in order to access encrypted data in the main memory from a previous instance of the process. Since the hardware secret and authorization code are known, the secure process needs only to determine the value of the previous nonce to regenerate the appropriate encryption key.
  • the nonce is pre-decrypted using the symmetric decryption function and the data cache encryption key 1026 .
  • the current nonce is encrypted (pre-decrypted) and stored as data in the data line.
  • When the secure data cache 1010 is flushed to main memory 1040 , the data will be encrypted by symmetric encryption block 1024 , using the current data cache encryption key 1026 , and stored in main memory 1040 . Since the nonce was pre-decrypted, after passing through symmetric encryption block 1024 again, the nonce will be converted to plaintext and stored in main memory 1040 as plaintext. In a subsequent instance of the secure process, the previous nonce can be read from main memory 1040 , and used to reconstruct the previous data cache encryption key 1026 , thus allowing data to be accessed which was encrypted using the previous data cache encryption key. Note that any data, not just the nonce, can be stored in main memory 1040 unencrypted (in plaintext), using the same “pre-decryption” technique.
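The pre-decryption round trip can be illustrated with a toy symmetric cipher in which encryption and decryption are distinct inverse operations. A real system would use a cipher such as AES; the chained-hash keystream below is purely an illustrative stand-in:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream via chained hashing -- a stand-in for a real cipher.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # "Encryption": add the keystream byte-wise (mod 256).
    ks = _keystream(key, len(data))
    return bytes((d + k) % 256 for d, k in zip(data, ks))

def decrypt(key: bytes, data: bytes) -> bytes:
    # "Decryption": subtract the keystream byte-wise (mod 256).
    ks = _keystream(key, len(data))
    return bytes((d - k) % 256 for d, k in zip(data, ks))

key = b"current-data-cache-encryption-key"
nonce = b"nonce-for-this-instance"

# Pre-decrypt the nonce before it is written into the secure cache line:
cache_line = decrypt(key, nonce)
# On flush, the mandatory encryption block runs; the nonce emerges as plaintext:
paged_out = encrypt(key, cache_line)
```

Because encryption exactly undoes the pre-decryption, `paged_out` equals the original nonce, so the nonce lands in main memory in the clear despite passing through the mandatory encryption path.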
  • FIGS. 11-13 illustrate additional examples of techniques for sharing data between processes.
  • FIG. 11 is a block diagram of data cache system 1100 .
  • the data cache system 1100 includes a secure data cache page 1110 , including a plurality of data lines (in this example four lines are shown).
  • each data line includes bits allocated for an address tag 1116 , data 1118 , valid flag bit 1120 , and dirty flag bit 1122 .
  • Each data line also includes bits representing a vector 1142 (S_vector), which functions similar to the secure bits described above with respect to FIG. 4 . However, instead of being a single bit, the vector 1142 contains multiple bits (for example, two bits). The value of the vector 1142 serves to identify to which process the respective data line belongs. In this example, assume there are three processes which may store data in the data cache 1110 .
  • each data line is encrypted by symmetric encryption block 1124 using the secure data encryption key 1126 corresponding to the value of the vector 1142 of the respective data line.
  • data in the data cache is encrypted and stored in main memory as encrypted data.
  • data can be pre-decrypted, and then stored in main memory 1140 as plaintext after passing through the symmetric encryption block 1124 (using the techniques described above with respect to FIG. 10 ).
  • the secure cache controller knows that the nonce is not encrypted, so it will be received as plaintext, without being decrypted.
  • FIG. 12 shows a secure data cache system 1200 , similar to the system 1100 shown in FIG. 11 .
  • the data cache system 1200 includes a secure data cache page 1210 , including a plurality of data lines. As before, each data line includes bits allocated for an address tag 1216 , data 1218 , valid flag bit 1220 , and dirty flag bit 1222 .
  • Each secure process is assigned a secure process ID (SPID) value that is entered in the appropriate data line in the vector field 1242 .
  • each line of data is encrypted by symmetric encryption block 1224 using the encryption key 1226 corresponding to the secure process to which it belongs.
  • each line of data will be decrypted using the encryption key corresponding to the process to which the data belongs.
  • Secure Process 1 wishes to share the data stored in the first data line with Secure Process 3 .
  • FIG. 13 is a block diagram of the secure data cache system shown in FIG. 12 .
  • the data cache system 1300 includes a secure data cache page 1310 , including a plurality of data lines. As before, each data line includes bits allocated for an address tag 1316 , data 1318 , valid flag bit 1320 , and dirty flag bit 1322 .
  • the symmetric encryption block 1324 will encrypt data line 1 using secure data encryption key 2 , which is the key generated using the authCode and nonce of Secure Process 2 .
  • the line 1 data is encrypted using Secure Process 2 's encryption key. Therefore, when Secure Process 2 retrieves the data later from main memory 1340 , the data will be decrypted using Secure Data Encryption Key 2 (the encryption key corresponding to Secure Process 2 ).
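The ownership-transfer mechanism of FIGS. 12-13 can be sketched as follows. The cache-line layout, the key table, and the SPID values are hypothetical stand-ins for the hardware structures; the point is only that rewriting the vector field changes which key the line is encrypted under on flush:

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    addr_tag: int
    data: bytes
    valid: bool
    dirty: bool
    s_vector: int   # SPID of the secure process that owns the line

# Hypothetical per-process key table, indexed by the S_vector value.
keys = {1: b"key-for-process-1", 2: b"key-for-process-2", 3: b"key-for-process-3"}

def key_for_line(line: CacheLine) -> bytes:
    # The symmetric encryption block selects the key named by the vector.
    return keys[line.s_vector]

line = CacheLine(addr_tag=0x1000, data=b"shared result", valid=True, dirty=True,
                 s_vector=1)

# Secure Process 1 shares the line with Secure Process 3 by rewriting the
# vector; on flush the line will be encrypted with Process 3's key, so only
# Process 3 can later decrypt it from main memory.
line.s_vector = 3
```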
  • an HSM module 1030 for a party “Alice” may receive a Nonce B , message digest (MD BA ), and ciphertext from another party “Bob.”
  • Alice may receive, from a licensing authority, an authorization code (authCode A ), a nonce (NKh AB ) and an encryption key (EKe B ).
  • Alice generates a Nonce A and also sends it to Bob.
  • Alice's HSM module 1030 further generates a message digest (MD AB ) of the session key.
  • Alice sends Bob Nonce A and Bob sends Alice Nonce B .
  • Alice passes Nonce A and Nonce B through HMACs (Hash Message Authentication Code, e.g., SHA functions) 1032 and 1034 , respectively, to generate message digests Ne A and Ne B .
  • these HMACs produce hashes of the nonces (Nonce A and Nonce B ), seeded with the private keys that only Alice and only Bob know (Ke A , Ke B ).
  • Alice uses an embedded secret (architecturally invisible) KhA (and a nonce NKh AB , i.e., a random number, sent previously by the CLA) to generate (via HMAC 1036 ) the key Ke A .
  • Ne A is a “signed” nonce.
  • Alice can generate (via HMAC 1038 ) Bob's key Ke B from the CLA, which previously has sent an encrypted EKe B .
  • Alice decrypts it using the key Ke A .
  • the digest Ne B is either concatenated or XOR-ed with Ne A and used as the input to a hash function (via HMAC 1041 ) to generate session key SK AB .
  • the CLA thus sends both Bob and Alice nonces, authCodes, and encrypted keys which can be used for all subsequent communications between the two parties. Even if these are intercepted, however, only Alice and Bob can correctly generate the shared session key SK AB .
  • the session key (SK AB ) is used by symmetric encryption blocks 1042 and 1044 to decrypt ciphertext and encrypt plaintext, as illustrated in FIG. 14 . This method for generating the session key SK AB used for encrypting/decrypting plaintext/ciphertext during the session solves the perfect forward secrecy problem, with the assumption that the service (the CLA) is itself secure.
  • the man-in-the-middle problem is solved by hashing (via HMAC 1046 ) the session key SK AB with Ke B (which is Bob's key, generated similarly to Ke A but using Bob's embedded secret device key Kh B (not shown) and Bob's corresponding nonce NKh BA (not shown)).
  • the result of the hash is the message digest (MD AB ).
  • the message digest MD AB is sent after Alice receives Nonce B and performs the hash calculations. In this way, Bob can verify that he is speaking with Alice.
  • Alice receives MD BA (i.e., the hash of Ke A and SK AB ) from Bob and hashes (via HMAC 1048 ) the session key SK AB with Ke A to verify or authenticate that Alice is speaking with Bob.
  • This functionality can also be used by Alice and Bob to sign messages to each other.
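The two-party exchange above can be sketched in software, from a single vantage point that sees both sides' secrets. The HMAC-SHA-256 construction, the key names, and the keying of the final session-key hash are assumptions for illustration; in the actual protocol Alice recovers Ke B by decrypting EKe B from the CLA rather than deriving it from Bob's secret:

```python
import hashlib
import hmac

def H(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Embedded device secrets (architecturally invisible) and the nonces
# previously distributed by the CLA.
KhA, KhB = b"alice-embedded-secret", b"bob-embedded-secret"
NKhAB, NKhBA = b"cla-nonce-for-alice", b"cla-nonce-for-bob"

KeA = H(KhA, NKhAB)   # Alice's private key (HMAC 1036)
KeB = H(KhB, NKhBA)   # Bob's private key (Alice obtains it via EKeB)

# Session nonces exchanged in the clear.
NonceA, NonceB = b"nonce-from-alice", b"nonce-from-bob"

# "Signed" nonces (HMACs 1032 and 1034).
NeA = H(KeA, NonceA)
NeB = H(KeB, NonceB)

# Session key from the concatenated signed nonces (HMAC 1041); the label
# used to key this final HMAC is an assumption.
SK_AB = H(NeA + NeB, b"session-key")

# Mutual-authentication digests (HMACs 1046 and 1048).
MD_AB = H(KeB, SK_AB)   # Alice -> Bob: proves Alice knows KeB
MD_BA = H(KeA, SK_AB)   # Bob -> Alice: proves Bob knows KeA
```

An eavesdropper who intercepts the nonces cannot reproduce SK AB, because the signed nonces require Ke A and Ke B, which in turn require the embedded device secrets.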
  • FIG. 15 shows an example of how the protocol of FIG. 14 can be extended for Alice to communicate with Bob and Carol in a secure and private manner. Again, the overall execution environment for the protocol may be secured using a recursive security protocol.
  • the CLA provides authcode A , NKh AB , EKe B , and EKe C . From Bob, Alice receives ciphertext as well as Nonce B and message digest MD BA , while from Carol, Alice receives ciphertext, as well as Nonce C and message digest MD CA .
  • Alice sends Bob and Carol Nonce A
  • Bob sends Alice Nonce B
  • Carol sends Alice Nonce C
  • Alice passes Nonce A , Nonce B and Nonce C through HMACs (Hash Message Authentication Code, e.g., SHA functions) 1132 , 1134 , and 1136 respectively, to generate message digests Ne A , Ne B , and Ne C .
  • these HMACs produce hashes of the nonces (Nonce A , Nonce B and Nonce C ), seeded with the private keys that only Alice, Bob, and Carol know (Ke A , Ke B , Ke C ).
  • Ne A is a “signed” nonce.
  • Alice can generate (via HMAC 1141 ) Bob's key Ke B from the CLA, which previously has sent an encrypted EKe B .
  • Alice can also generate (via HMAC 1143 ) Carol's key Ke C from the CLA, which previously has sent an encrypted EKe C .
  • Alice decrypts both using the key Ke A .
  • the digests Ne A , Ne B and Ne C are hashed (via HMAC 1144 ) to generate session key SK ABC .
  • the CLA thus sends Bob, Alice, and Carol nonces, authCodes, and encrypted keys which can be used for all subsequent communications between the three parties. Even if these are intercepted, however, only Alice, Bob, and Carol can correctly generate the shared session key SK ABC .
  • the session key (SK ABC ) is used by symmetric encryption blocks 1146 , 1148 , and 1150 to decrypt ciphertext and encrypt plaintext, as illustrated in FIG. 15 . This method for generating the session key SK ABC used for encrypting/decrypting plaintext/ciphertext during the session solves the perfect forward secrecy problem.
  • the man-in-the-middle problem is solved by hashing (via HMAC 1152 and HMAC 1154 , respectively) the session key SK ABC with Ke B and Ke C .
  • the result of these hashes are the message digests (MD AB and MD AC ).
  • the message digests MD AB and MD AC are sent after Alice receives Nonce B and Nonce C and performs the hash calculations. In this way, Bob and Carol can each verify that they are speaking with Alice.
  • Alice receives MD BA and MD CA from Bob and Carol and hashes (via HMAC 1156 and 1158 ) the session key SK ABC with Ke A to verify or authenticate that Alice is speaking with Bob or Carol.
  • This functionality can also be used by Alice, Bob and Carol to sign messages to each other.
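The three-party extension changes only the final combining step: the group session key is derived over all three signed nonces. As in the two-party sketch, the HMAC construction and the combining rule (concatenation) are illustrative assumptions:

```python
import hashlib
import hmac

def H(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Signed nonces, each produced as in the two-party case; the key and nonce
# values here are placeholders.
NeA = H(b"KeA", b"NonceA")
NeB = H(b"KeB", b"NonceB")
NeC = H(b"KeC", b"NonceC")

# Group session key over all three signed nonces (HMAC 1144).
SK_ABC = H(NeA + NeB + NeC, b"group-session-key")
```

Each party computes the same SK ABC independently once it holds all three signed nonces, so the same symmetric key protects traffic among all three.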
  • FIG. 16 illustrates an embodiment that boosts performance by implementing the symmetric decryption and encryption blocks in a crypto co-processor.
  • FIG. 16 is a diagram of an embodiment in which a crypto co-processor is used for encryption/decryption of text.
  • FIG. 16 shows an HSM module 1230 that is similar to the module shown in FIG. 14 .
  • the blocks 1232 , 1234 , 1236 , 1238 , 1241 , and 1246 operate in the same manner as the corresponding blocks in FIG. 14 , to generate session key SK AB and message digest MD AB .
  • the HSM module 1230 performs key exchange in a manner similar to that described with reference to FIG. 14 .
  • the actual encryption of the plaintext is performed using a separate crypto co-processor 1250 (for clarity, a decryption block is not shown).
  • the crypto coprocessor receives an encrypted session key ESK AB , which is a version of the session key SK AB that is encrypted (via symmetric encryption block 1252 ) using the coprocessor OTP secret key.
  • the coprocessor OTP secret key is generated from EK OTP and Ke A via symmetric decryption block 1254 .
  • the session key SK AB is decrypted using decryption block 1256 .
  • Symmetric encryption block 1258 uses the session key to encrypt the plaintext.
  • a similar decryption block (not shown) uses the session key to decrypt ciphertext.
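The key-wrapping handoff to the co-processor can be sketched with an involutive (XOR-based) stream construction standing in for symmetric blocks 1252 and 1256; the keystream derivation and key values are illustrative only:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # XOR keystream cipher: applying it twice with the same key is identity,
    # so the same routine serves as both wrap (1252) and unwrap (1256).
    ks = hashlib.sha256(key).digest()
    while len(ks) < len(data):
        ks += hashlib.sha256(ks).digest()
    return bytes(d ^ k for d, k in zip(data, ks))

OTP_key = b"coprocessor-otp-secret-key"          # shared with the co-processor
SK_AB = b"0123456789abcdef0123456789abcdef"      # negotiated session key

ESK_AB = xor_stream(OTP_key, SK_AB)      # HSM wraps the session key
unwrapped = xor_stream(OTP_key, ESK_AB)  # co-processor recovers SK_AB
```

Only the wrapped key ESK AB crosses the boundary between the HSM and the co-processor, so the plaintext session key is never exposed on the shared bus.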
  • Embodiments discussed herein can be implemented in a computer communicatively coupled to a network (for example, the Internet), another computer, or in a standalone computer.
  • a suitable computer can include a central processing unit (“CPU”), at least one read-only memory (“ROM”), at least one random access memory (“RAM”), at least one hard drive (“HD”), and one or more input/output (“I/O”) device(s).
  • the I/O devices can include a keyboard, monitor, printer, electronic pointing device (for example, mouse, trackball, stylus, touch pad, etc.), or the like.
  • ROM, RAM, and HD are computer memories for storing computer-executable instructions executable by the CPU or capable of being compiled or interpreted to be executable by the CPU. Suitable computer-executable instructions may reside on a computer readable medium (e.g., ROM, RAM, and/or HD), hardware circuitry or the like, or any combination thereof.
  • a computer readable medium is not limited to ROM, RAM, and HD and can include any type of data storage medium that can be read by a processor.
  • a computer-readable medium may refer to a data cartridge, a data backup magnetic tape, a floppy diskette, a flash memory drive, an optical data storage drive, a CD-ROM, ROM, RAM, HD, or the like.
  • the processes described herein may be implemented in suitable computer-executable instructions that may reside on a computer readable medium (for example, a disk, CD-ROM, a memory, etc.).
  • the computer-executable instructions may be stored as software code components on a direct access storage device array, magnetic tape, floppy diskette, optical storage device, or other appropriate computer-readable medium or storage device.
  • Any suitable programming language can be used to implement the routines, methods or programs of embodiments of the invention described herein, including C, C++, Java, JavaScript, HTML, or any other programming or scripting code, etc.
  • Other software/hardware/network architectures may be used.
  • the functions of the disclosed embodiments may be implemented on one computer or shared/distributed among two or more computers in or across a network. Communications between computers implementing embodiments can be accomplished using any electronic, optical, radio frequency signals, or other suitable methods and tools of communication in compliance with known network protocols.
  • Any particular routine can execute on a single computer processing device or multiple computer processing devices, a single computer processor or multiple computer processors. Data may be stored in a single storage medium or distributed through multiple storage mediums, and may reside in a single database or multiple databases (or other data storage techniques).
  • Although steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, to the extent multiple steps are shown as sequential in this specification, some combination of such steps in alternative embodiments may be performed at the same time.
  • the sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc.
  • the routines can operate in an operating system environment or as stand-alone routines. Functions, routines, methods, steps and operations described herein can be performed in hardware, software, firmware or any combination thereof.
  • Embodiments described herein can be implemented in the form of control logic in software or hardware or a combination of both.
  • the control logic may be stored in an information storage medium, such as a computer-readable medium, as a plurality of instructions adapted to direct an information processing device to perform a set of steps disclosed in the various embodiments.
  • a person of ordinary skill in the art will appreciate other ways and/or methods to implement the invention.
  • a “computer-readable medium” may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device.
  • the computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
  • Such computer-readable medium shall generally be machine readable and include software programming or code that can be human readable (e.g., source code) or machine readable (e.g., object code).
  • non-transitory computer-readable media can include random access memories, read-only memories, hard drives, data cartridges, magnetic tapes, floppy diskettes, flash memory drives, optical data storage devices, compact-disc read-only memories, and other appropriate computer memories and data storage devices.
  • some or all of the software components may reside on a single server computer or on any combination of separate server computers.
  • a computer program product implementing an embodiment disclosed herein may comprise one or more non-transitory computer readable media storing computer instructions translatable by one or more processors in a computing environment.
  • a “processor” includes any hardware system, mechanism or component that processes data, signals or other information.
  • a processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems.
  • the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion.
  • a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus.
  • the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
  • a term preceded by “a” or “an” includes both singular and plural of such term, unless the reference “a” or “an” clearly indicates only the singular or only the plural.
  • the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.

Abstract

Embodiments of systems and methods disclosed herein provide simple and effective methods for secure processes to share selected data with other processes, either secure or not, in a safe and secure manner. More specifically, in certain embodiments, systems and methods are disclosed that enable a secure data cache system to write certain data to main memory unencrypted. In other embodiments, systems and methods are disclosed that enable a secure data cache system to write encrypted data from one secure process to main memory, and to enable the decryption of the data by another secure process. In other embodiments, the ownership of data lines in a secure data cache is selectively changed from one process to another, effectively allowing different secure processes to share data.

Description

    RELATED APPLICATIONS
  • This application claims a benefit of priority under 35 U.S.C. §119 to United States Provisional Patent Application No. 61/978,669, filed Apr. 11, 2014, entitled “SYSTEM AND METHOD FOR SHARING DATA SECURELY,” by William V. Oxford and to U.S. Provisional Patent Application No. 61/978,657, filed Apr. 11, 2014, entitled “SYSTEM AND METHOD FOR AN EFFICIENT AUTHENTICATION AND KEY EXCHANGE PROTOCOL” by William V. Oxford, which are hereby fully incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • This disclosure relates generally to security in computer systems. In particular, this disclosure relates to systems and methods by which a secure process can share selected data with other processes, either secure or not, in a nonetheless safe manner.
  • BACKGROUND
  • In the system described in commonly-assigned U.S. patent application Ser. No. 13/847,370, filed Mar. 19, 2013, entitled “Method and System for Process Working Set Isolation,” and published as US2013/0254494A1 and WO2013/142517A1, which are hereby incorporated by reference in their entirety as if fully set forth herein, a system is disclosed that can automatically isolate any intermediate results written by a secure process from external observation (by either a non-secure process or by a different secure process). However, this method can prove problematic if some subset of this isolated data must subsequently be exported for use by an external process (operating in secure or in non-secure mode). This issue is known in the cryptographic literature as the “King Midas Problem”. In effect, every datum that a secure process “touches” then becomes secure and no longer accessible by non-secure processes. In this case, there is no way to communicate the results from a secure process back to the outside world. A further complication arises (as per U.S. patent application Ser. No. 13/847,370) if the data belonging to a given secure process is then also isolated from all other secure processes. Thus, it is desirable to have a method by which a secure process can share selected data with other processes, either secure or not, in a nonetheless safe manner.
  • These, and other, aspects of the disclosure will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following description, while indicating various embodiments of the disclosure and numerous specific details thereof, is given by way of illustration and not of limitation. Many substitutions, modifications, additions and/or rearrangements may be made within the scope of the disclosure without departing from the spirit thereof, and the disclosure includes all such substitutions, modifications, additions and/or rearrangements.
  • SUMMARY
  • Embodiments of systems and methods disclosed herein provide simple and effective methods for secure processes to share selected data with other processes, either secure or not, in a safe and secure manner.
  • In particular, in one embodiment, systems and methods are disclosed that enable a secure data cache system to write certain data to main memory in plaintext form even though the data must first pass through a mandatory encryption process prior to being written out to main memory. In one example, a secure execution controller is configured to symmetrically encrypt the data using an encryption key and store the encrypted data in the cache. The secure execution controller is configured to symmetrically encrypt the data a second time using the same encryption key and store the twice symmetrically encrypted data in the memory as plaintext.
  • In other embodiments, systems and methods are disclosed that enable a secure data cache system to write encrypted data from one secure process to main memory, and to enable the decryption of the data by another secure process.
  • In other embodiments, the ownership of data lines in a secure data cache is selectively changed from one process to another, effectively allowing different secure processes to share data. In one example, a cache having one or more of data lines includes a vector used to identify a secure process for each data line. An encryption key is generated for each secure process identified by a vector in the data lines. The data in each line having a vector that identifies a secure process is symmetrically encrypted using the encryption key corresponding to the secure process identified by the vector of the given data line. In one example, the secure process identified in the vector of a given data line is allowed to write data to the vector, identifying another secure process, thereby effectively sharing the data in the data line with the other secure process.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings accompanying and forming part of this specification are included to depict certain aspects of the disclosure. It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. A more complete understanding of the disclosure and the advantages thereof may be acquired by referring to the following description, taken in conjunction with the accompanying drawings in which like reference numbers indicate like features and wherein:
  • FIG. 1 depicts one embodiment of an architecture for content distribution;
  • FIG. 2 depicts one embodiment of a target device;
  • FIG. 3 depicts one embodiment of a secure execution controller;
  • FIGS. 4A and 4B depict an embodiment of a cache architecture used for process working set isolation;
  • FIG. 5 depicts the generation in hardware of a nonce-based authCode/encryption key;
  • FIG. 6 depicts an exemplary secure processor data flow;
  • FIG. 7 depicts a secure software implementation;
  • FIG. 8 is a flowchart illustrating an exemplary pre-encryption operation;
  • FIG. 9 is a block diagram of an exemplary secure data cache system;
  • FIG. 10 is a block diagram of an exemplary secure data cache system used to place non-encrypted data in main memory;
  • FIGS. 11-13 are block diagrams of exemplary secure data cache systems using secure vector identifiers;
  • FIG. 14 depicts a 2-party secure AKE implemented without using asymmetric cryptography;
  • FIG. 15 depicts a 3-party secure AKE version of the protocol shown in FIG. 14; and
  • FIG. 16 depicts an embodiment of the protocol shown in FIG. 14, but using an HSM module as an accelerator for bulk message encryption and decryption processing.
  • DETAILED DESCRIPTION
  • The disclosure and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known starting materials, processing techniques, components and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating some embodiments of the invention, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
  • A solution presented herein according to embodiments comes in two parts. The first part involves data that is modified while a processor is operating in secure mode. As described in the U.S. patent application Ser. No. 13/847,370, when data is written out to the processor's secure data cache while the processor is in secure mode, the data for that cache line is tagged as being secure. Subsequently, only the same secure process that actually wrote the data to that cache line in the first place can read that data without error.
  • However, when a secure process creates more data than can fit into the secure data cache (i.e., a data cache “overflow” condition), a portion of the older data in the secure process' working set must be written (or “paged”) out to main memory to make room for the more recently-created data. Since main memory is a shared resource, it is necessary to devise a system where this “paged out” data nonetheless remains accessible only to the secure process that created it in the first place. In cryptographic terms, a mechanism is needed that allows a secure process's working set to be paged out to shared memory in a manner that maintains both its security and its integrity.
  • The secure page-out process can be accomplished by encrypting the data as it is written out of data cache, using a standard (symmetric) encryption algorithm, such as AES-128 or AES-256. As per standard cryptographic practice, the security of the encrypted data should thus be entirely dependent on the security of the key for that encryption process. Ideally, only the secure process that “owns” that data should be able to recreate the encryption key correctly. In that case, no other process (either secure or not) would be able to access the unencrypted form of the data that is paged out to main memory.
  • The creation of a key for this encryption process can be accomplished by using a compound key mechanism (such as that described in commonly-assigned U.S. Pat. No. 7,203,844, issued Apr. 10, 2007, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” which is hereby incorporated by reference in its entirety as if fully set forth herein). By using this compound key method (where all of the “precursors” to the final compound key but one may be freely shared), a system is described where all of the information required to reconstruct an encryption key (the compound key) can be openly shared and yet, no external observer can correctly reconstruct the resulting compound (encryption) key. The one secret precursor is, of course, the processor's hardware secret key (which is architecturally invisible).
  • While there can be any number of other (public) precursors used to generate this compound encryption key, there are at least two precursors that may be used in its generation: the secure process' authCode (as described in U.S. patent application Ser. No. 13/847,370), and a nonce (a non-repeated random value).
  • If these two elements (plus the processor's hardware secret key) are used as precursors to the compound encryption key, then this encryption key can be recreated at will using just the (public) precursors. Since the authCode for a given secure process is also known, all that would be needed for a secure process to recreate the actual encryption key is the nonce. Using this mechanism, a secure process can then safely page its secure data out to main memory.
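The compound-key property described above can be sketched as follows; the HMAC construction and all key values are illustrative assumptions. The essential point is that both precursors are public, yet only a processor holding the hardware secret can complete the key:

```python
import hashlib
import hmac

HW_SECRET = b"architecturally-invisible-hw-key"   # never readable by software

def compound_key(auth_code: bytes, nonce: bytes) -> bytes:
    # Both precursors may be freely shared; completing the compound key
    # still requires HW_SECRET, which never leaves the processor.
    return hmac.new(HW_SECRET, auth_code + nonce, hashlib.sha256).digest()

auth_code = b"authcode-of-secure-process"   # public precursor
nonce = b"per-instance-nonce"               # public, non-repeating precursor

k_pageout = compound_key(auth_code, nonce)    # encrypts paged-out data
k_recreated = compound_key(auth_code, nonce)  # recreated at will later
```

An observer who captures the authCode and nonce from shared memory still cannot derive the key, and a fresh nonce yields an entirely unrelated key for each new process instance.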
  • Before discussing embodiments in more detail, it may be helpful to give a general overview of an architecture in which embodiments of the present disclosure may be effectively utilized. FIG. 1 depicts one embodiment of such a topology. Here, a content distribution system 101 may operate to distribute digital content (which may be, for example, a bitstream comprising audio or video data, a software application, etc.) to one or more target units 100 (also referred to herein as target or endpoint devices) which comprise protocol engines. Topologies other than the exemplary content distribution system are also possible. These target units may be part of, for example, computing devices on a wireline or wireless network, or a computing device which is not networked; such computing devices include, for example, personal computers, cellular phones, personal data assistants, tablets, and media players which may play content delivered as a bitstream over a network or on computer readable storage media that may be delivered, for example, through the mail. This digital content may be composed or distributed in such a manner that execution of the digital content may be controlled and security implemented with respect to the digital content.
  • In certain embodiments, control over the digital content may be exercised in conjunction with a licensing authority 103. This licensing authority 103 (which may be referred to as a central licensing authority (CLA), though it will be understood that such a licensing authority need not be centralized; its function may be distributed, or may be accomplished by content distribution system 101, by manual distribution of data on a hardware device such as a memory stick, etc.) may provide a key and/or an authorization code. This key may be a compound key (DS) that is both cryptographically dependent on the digital content distributed to the target device and bound to each target device (TDn). In one example, a target device may be attempting to execute an application in secure mode. This secure application (which may be referred to as candidate code or a candidate code block (e.g., CC)) may be used in order to access certain digital content.
  • Accordingly, to enable a candidate code block to run in secure mode on the processor of a particular target device 100 to which the candidate code block is distributed, the licensing authority 103 supplies a correct value of a compound key (one example of which may be referred to as an Authorization Code) to the target device on which the candidate code block is attempting to execute in secure mode (e.g., supplies DS1 to TD1). No other target device (e.g., TDn, where TDn≠TD1) can run the candidate code block correctly with that compound key (e.g., DS1), and no other compound key (DSn, assuming DSn≠DS1) will work correctly with that candidate code block on that target device 100 (e.g., TD1).
  • As will be described in more detail below, when Target Device 100 (e.g., TD1) loads the candidate code block (e.g., CC1) into its instruction cache (and, for example, if CC1 is identified as code that is intended to be run in secure mode), the target device 100 (e.g., TD1) engages a hash function (which may be hardware based) that creates a message digest (e.g., MD1) of that candidate code block (e.g., CC1). The seed value for this hash function is the secret key for the target device 100 (e.g., TD1's secret key (e.g., SK1)).
  • In fact, such a message digest (e.g., MD1) may be a Message Authentication Code (MAC) as well as a compound key, since the hash function result depends on the seed value of the hash, the secret key of the target device 100 (e.g., SK1). Thus, the resulting value of the message digest (e.g., MD1) is cryptographically bound to both the secret key of the target device 100 and to the candidate code block. If the compound key distributed by the licensing authority (e.g., DS1) matches the value of the message digest (e.g., MD1), it can be assured that the candidate code block (e.g., CC1) is both unaltered and authorized to run in secure mode on the target device 100 (e.g., TD1). The target device 100 can then run the candidate code block in secure mode.
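  • The verification flow described above may be sketched in software as follows. This is an illustrative model only: the embodiments contemplate a hardware hash block, HMAC-SHA-256 is used here as a stand-in for the device's keyed hash function, and the key and code values are hypothetical.

```python
import hashlib
import hmac


def message_digest(secret_key: bytes, candidate_code: bytes) -> bytes:
    # Seed the hash with the device's secret key (e.g., SK1) so the digest
    # is cryptographically bound to both the device and the code block.
    return hmac.new(secret_key, candidate_code, hashlib.sha256).digest()


def may_enter_secure_mode(secret_key: bytes, candidate_code: bytes,
                          authorization_code: bytes) -> bool:
    # Compare the locally computed digest (e.g., MD1) against the compound
    # key (e.g., DS1) supplied by the licensing authority.
    md = message_digest(secret_key, candidate_code)
    return hmac.compare_digest(md, authorization_code)


# A licensing authority holding the same secret key would issue:
sk1 = b"device-TD1-secret-key"      # hypothetical device secret key
cc1 = b"\x90\x90\xc3"               # hypothetical candidate code block image
ds1 = message_digest(sk1, cc1)      # authorization code bound to TD1 and CC1

assert may_enter_secure_mode(sk1, cc1, ds1)
assert not may_enter_secure_mode(sk1, cc1 + b"\x00", ds1)        # altered code fails
assert not may_enter_secure_mode(b"other-device-key", cc1, ds1)  # wrong device fails
```

Note that an altered code block or a different device secret key each produce a digest that no longer matches the authorization code, which is the property the text relies on.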
  • As can be seen then, in one embodiment, when secure mode execution for a target device 100 is performed, the target device 100 may be executing code that has both been verified as unaltered from its original form, and is cryptographically “bound” to the target device 100 on which it is executing. This method of ensuring secure mode execution of a target device may be contrasted with other systems, where a processor enters secure mode upon hardware reset and then may execute in a hypervisor mode or the like in order to establish a root-of-trust.
  • Accordingly, using embodiments as disclosed, any or all of these data, such as the compound key from the licensing authority, the message digest, the candidate code block, etc. (e.g., DS1, MD1, CC1), may be completely public as long as the secret key for the target device 100 (e.g., SK1) is not exposed. Thus, it is desired that the value of the secret key of a target device is never exposed, either directly or indirectly. Accordingly, as discussed above, embodiments of the systems and methods presented herein may, in addition to protecting the secret key from direct exposure, protect against indirect exposure of the secret key on target devices 100 by securing the working sets of processes executing in secure mode on target devices 100.
  • FIG. 2 shows the architecture of one embodiment of a target device that is capable of controlling the execution of the digital content or implementing security protocols in conjunction with received digital content. Elements of the target unit may include a set of blocks, which allow a process to execute in a secured mode on the target device such that when a process is executing in secured mode the working set of the process may be isolated. It will be noted that while these blocks are described as hardware in this embodiment, software may be utilized to accomplish similar functionality with equal efficacy. It will also be noted that while certain embodiments may include all the blocks described herein, other embodiments may utilize fewer or additional blocks.
  • The target device 100 may comprise a CPU execution unit 120 which may be a processor core with an execution unit and instruction pipeline. Target unit 100 may also contain a true random number generator 182 which may be configured to produce a sequence of sufficiently random numbers, which can then be used to supply seed values for a pseudo-random number generation system. This pseudo-random number generator can also potentially be implemented in hardware, software or in “secure” software.
  • One-way hash function block 160 may be operable for implementing a hashing function substantially in hardware. One-way hash function block 160 may be a part of a secure execution controller 162 that may be used to control the placement of the target device 100 in secure mode or that may be used to control memory accesses (e.g., when the target device 100 is executing in secured mode), as will be described in more detail herein at a later point.
  • In one embodiment, one-way hash function block 160 may be implemented in a virtual fashion, by a secure process running on the same CPU that is used to evaluate whether a given process is secure or not. In certain embodiments two conditions may be adhered to, ensuring that such a system may resolve correctly. First, the secure mode “evaluation” operation (e.g., the hash function) proceeds independently of the execution of the secure process that it is evaluating. Second, a chain of nested evaluations may have a definitive termination point (which may be referred to as the root of the “chain of trust” or simply the “root of trust”). In such embodiments, this “root of trust” may be the minimum portion of the system that should be implemented in some non-changeable fashion (e.g., in hardware). This minimum feature may be referred to as a “hardware root of trust”. For example, in such embodiments, one such hardware root of trust might be a one-way hash function that is realized in firmware (e.g., in non-changeable software).
  • Another portion of the target unit 100 may be a hardware-assisted secure mode controller block 170. This secure mode controller block 170 can be implemented in a number of ways. In one example, the secure mode controller block 170 is a general purpose processor or a state machine. The secure execution controller 162 also includes secure mode control registers 105, which define the configuration of the current security state on a process by process basis. As shown in FIG. 2, the secret key 104 and another number (for example, an initialization vector or nonce) are run through the one-way hash function block 160. The result of the hash function is repeatable and is a derivative of the secret. The result of the hash function is provided to the secure mode controller block 170.
  • It is not material to embodiments exactly which encryption algorithm is used for this hardware block 170. In order to promote the maximum flexibility, it is assumed that the actual hardware is general-purpose enough to be used in a non-algorithmically specific manner, but there are many different means by which this mechanism can be implemented. It should be noted at this point that the terms encryption and decryption will be utilized interchangeably herein when referring to engines (algorithms, hardware, software, etc.) for performing encryption/decryption. As will be realized, if symmetric encryption is used in certain embodiments, the same or similar encryption or decryption engine may be utilized for both encryption and decryption. In the case of an asymmetric mechanism, the encryption and decryption functions may or may not be substantially similar, even though the keys may be different.
  • Target device 100 may also comprise a data cache 180, the instruction cache 110 where code that is to be executed can be stored, and main memory 190. Data cache 180 may be almost any type of cache desired such as a L1 or L2 cache. In one embodiment, data cache 180 may be configured to associate a secure process descriptor with one or more pages of the cache and may have one or more security flags associated with (all or some subset of the) lines of a data cache 180. For example, a secure process descriptor may be associated with a page of data cache 180.
  • Generally, embodiments of target device 100 may isolate the working set of a process executing in secure mode stored in data cache 180 such that the data is inaccessible to any other process, even after the original process terminates. More specifically, in one embodiment, the entire working set of a currently executing process may be stored in data cache 180, and writes to main memory 190 and write-through of that cache (e.g., to main memory 190) may be disallowed (e.g., by secure execution controller 162) when executing in secured mode.
  • Additionally, for any of those lines of data cache 180 that are written to while executing in secure mode (e.g., a “dirty” cache line) those cache lines (or the page that comprises those cache lines) may be associated with a secure process descriptor for the currently executing process. The secure process descriptor may uniquely specify those associated “dirty” cache lines as belonging to the executing secure process, such that access to those cache lines can be restricted to only that process (e.g., by secure execution controller 162).
  • In certain embodiments, in the event that the working set for a secure process overflows data cache 180 and portions of data cache 180 that include those dirty lines associated with the security descriptor of the currently executing process need to be written to main memory (e.g., a page swap or page out operation), external data transactions between the processor and the bus (e.g., an external memory bus) may be encrypted (e.g., using block 170 or encryption software executing in secure mode). The encryption (and decryption) of data written to main memory may be controlled by secure execution controller 162.
  • The key for such an encryption may be the secure process descriptor itself or some derivative thereof and that secure descriptor may itself be encrypted (e.g., using the target device's 100 secret key 104 or some derivative thereof) and stored in the main memory 190 in encrypted form as a part of the data being written to main memory.
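  • The page-out scheme described above may be modeled as follows. This is a hedged software sketch: a SHA-256 counter-mode keystream stands in for whatever cipher block 170 actually provides (embodiments elsewhere mention AES acceleration), and all key and descriptor values are hypothetical.

```python
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    # Simple SHA-256 counter-mode keystream; a real design would more
    # likely use a hardware block cipher such as AES.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


def page_out(page: bytes, descriptor: bytes, device_key: bytes):
    # The page encryption key is a derivative of the secure process
    # descriptor itself...
    page_key = hashlib.sha256(descriptor).digest()
    enc_page = xor(page, keystream(page_key, len(page)))
    # ...and the descriptor is stored in main memory encrypted under the
    # device's secret key, alongside the swapped-out data.
    enc_desc = xor(descriptor, keystream(device_key, len(descriptor)))
    return enc_page, enc_desc


def page_in(enc_page: bytes, enc_desc: bytes, device_key: bytes) -> bytes:
    descriptor = xor(enc_desc, keystream(device_key, len(enc_desc)))
    page_key = hashlib.sha256(descriptor).digest()
    return xor(enc_page, keystream(page_key, len(enc_page)))


device_key = b"TD1-secret"                                   # hypothetical
desc = hashlib.sha256(b"authorization-code-DS1").digest()    # hypothetical
page = b"secret working-set data" * 8
enc_page, enc_desc = page_out(page, desc, device_key)
assert enc_page != page
assert page_in(enc_page, enc_desc, device_key) == page
```

Because only a holder of the device secret key can recover the descriptor, and only the descriptor yields the page key, the swapped-out working set stays bound to both the device and the owning secure process.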
  • Instruction cache 110 is typically known as an I-Cache. In some embodiments, a characteristic of portions of this I-Cache 110 is that the data contained within certain blocks may be readable only by CPU execution unit 120. In other words, this particular block of I-Cache 130 is execute-only and may not be read from, nor written to, by any executing software. This block of I-Cache 130 will also be referred to as the “secured I-Cache” 130 herein. The manner by which code to be executed is stored in this secured I-Cache block 130 may be by way of another block which may or may not be depicted. Normal I-Cache 150 may be utilized to store code that is to be executed normally as is known in the art.
  • Additionally, in some embodiments, certain blocks may be used to accelerate the operation of a secure code block. Accordingly, a set of CPU registers 140 may be designated to be accessible only while the CPU 120 is executing secure code, and to be cleared upon completion of execution of the secure code block (instructions in the secured I-cache block 130 executing in secured mode), or if, for some reason, a jump to any section of code which is located in the non-secure or “normal” I-Cache 150 or other area occurs during the execution of code stored in the secured I-Cache 130.
  • In one embodiment, CPU execution unit 120 may be configured to track which registers 140 are read from or written to while executing the code stored in secured I-cache block 130 and then automatically clear or disable access to these registers upon exiting the “secured execution” mode. This allows the secured code to quickly “clean-up” after itself such that only data that is permitted to be shared between two kinds of code blocks is kept intact. Another possibility is that an author of code to be executed in the secured code block 130 can explicitly identify which registers 140 are to be cleared or disabled. In the case where a secure code block is interrupted and then resumed, then these disabled registers may potentially be re-enabled if it can be determined that the secure code that is being resumed has not been tampered with during the time that it was suspended.
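  • The register-tracking behavior described above may be sketched as follows. This is a software illustration of what the embodiment describes doing inside the CPU execution unit; the class name, register count, and the "shared" marking mechanism are assumptions for the example.

```python
class SecureRegisterFile:
    """Tracks which registers a secure code block touches and clears
    them on exit from secured execution mode, so that only explicitly
    shared data survives into non-secure code."""

    def __init__(self, n_regs: int = 16):
        self.regs = [0] * n_regs
        self.touched = set()
        self.secure = False

    def enter_secure(self):
        self.secure = True
        self.touched.clear()

    def write(self, idx: int, value: int):
        if self.secure:
            self.touched.add(idx)   # record registers written in secure mode
        self.regs[idx] = value

    def read(self, idx: int) -> int:
        if self.secure:
            self.touched.add(idx)   # record registers read in secure mode
        return self.regs[idx]

    def exit_secure(self, shared=()):
        # Clear every register touched in secure mode except those the
        # secure code explicitly identified as shareable.
        for idx in self.touched - set(shared):
            self.regs[idx] = 0
        self.secure = False
        self.touched.clear()


rf = SecureRegisterFile()
rf.enter_secure()
rf.write(3, 0xDEAD)        # intermediate secret value
rf.write(5, 42)            # intended result, explicitly shared
rf.exit_secure(shared={5})
assert rf.regs[3] == 0     # secret cleared on exit
assert rf.regs[5] == 42    # shared result survives
```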
  • In one embodiment, to deal with the “leaking” of data stored in registers 140 between secure and non-secure code segments, a set of registers 140 which are to be used only when the CPU 120 is executing secured code may be identified. In one embodiment, this may be accomplished utilizing a version of the register renaming and scoreboarding mechanisms practiced in many contemporary CPU designs. In some embodiments, the execution of a code block in secured mode is treated as an atomic action (e.g., it is non-interruptible), which may make such renaming and scoreboarding easier to implement.
  • Even though there may seem to be little possibility of the CPU 120 executing a mixture of “secured” code block (code from the secured I-Cache 130) and “unsecured code” (code in another location such as normal I-cache 150 or another location in memory), such a situation may arise in the process of switching contexts, such as when jumping into interrupt routines, or depending on where the CPU 120 context is stored (most CPUs store the context in main memory, where it is potentially subject to discovery and manipulation by an unsecured code block).
  • In order to help protect against this eventuality, in one embodiment, another method which may be utilized for protecting the results obtained during the execution of a secured code block that is interrupted mid-execution from being exposed to other execution threads within a system is to disable stack pushes while the target device 100 is operating in secured execution mode. This disabling of stack pushes will mean that a secured code block is thus not interruptible in the sense that, if the secured code block is interrupted prior to its normal completion, it cannot be resumed and therefore must be restarted from the beginning. It should be noted that in certain embodiments if the “secured execution” mode is disabled during a processor interrupt, then the secured code block may also potentially not be able to be restarted unless the entire calling chain is restarted.
  • Each target unit 100 may also have one or more secret key constants 104, the values of which are not software-readable. In one embodiment, the first of these keys (the primary secret key) may be organized as a set of secret keys, of which only one is readable at any particular time. If the “ownership” of a unit is changed (for example, the equipment containing the protocol engine is sold or its ownership is otherwise transferred), then the currently active primary secret key may be “cleared” or overwritten by a different value. This value can either be transferred to the unit in a secure manner or it can be already stored in the unit in such a manner that it is only used when this first key is cleared. In effect, this is equivalent to issuing a new primary secret key to that particular unit when its ownership is changed or if there is some other reason for such a change (such as a compromised key). A secondary secret key may be utilized with the target unit 100 itself. Since the CPU 120 of the target unit 100 cannot ever access the values of either the primary or the secondary secret keys, in some sense, the target unit 100 does not even “know” its own secret keys 104. These keys are only stored and used within the security execution controller 162 of the target unit 100 as will be described.
  • In another embodiment, the two keys may be constructed as a list of “paired” keys, where one such key is implemented as a one-time-programmable register and the other key in the pair is implemented using a re-writeable register. In this embodiment, the re-writeable register may be initialized to a known value (e.g., zero) and the only option that may be available for the system to execute in secure mode in that state may be to write a value into the re-writeable portion of the register. Once the value in this re-writeable register is initialized with some value (e.g., one that may only be known by the Licensing Authority, for example), then the system may only then be able to execute more general purpose code while in secure mode. If this re-writeable value should be re-initialized for some reason, then the use of a new value each time this register is written may provide increased security in the face of potential replay attacks.
  • Yet another set of keys may operate as part of a temporary public/private key system (also known as an asymmetric key system or a PKI system). The keys in this pair may be generated on the fly and may be used for establishing a secure communications link between similar units, without the intervention of a central server. As the security of such a system is typically lower than that of an equivalent key length symmetric key encryption system, these keys may be larger in size than those of the set of secret keys mentioned above. These keys may be used in conjunction with the value that is present in the on-chip timer block in order to guard against “replay attacks”, among other things. Since these keys may be generated on the fly, the manner by which they are generated may be dependent on the random number generation system 182 in order to increase the overall system security.
  • In one embodiment, one method that can be used to effect a change in “ownership” of a particular target unit is to always use the primary secret key as a compound key in conjunction with another key 107, which we will refer to as a timestamp or timestamp value, as the value of this key may be changed (in other words may have different values at different times), and may not necessarily reflect the current time of day. This timestamp value may or may not itself be architecturally visible (e.g., it may not necessarily be a secret key), but nonetheless it will not be able to be modified unless the target unit 100 is operating in secured execution mode. In such a case, the consistent use of the timestamp value as a component of a compound key whenever the primary secret is used can produce essentially the same effect as if the primary secret key had been switched to a separate value, thus effectively allowing a “change of ownership” of a particular target endpoint unit without having to modify the primary secret key itself.
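  • The timestamp-compounding mechanism above can be illustrated briefly. HMAC-SHA-256 is assumed here as the compounding function, and the key and timestamp values are hypothetical; the point is only that rewriting the timestamp changes every derived key without touching the fused primary secret.

```python
import hashlib
import hmac


def effective_key(primary_secret: bytes, timestamp: bytes) -> bytes:
    # Whenever the primary secret is used, it is compounded with the
    # current timestamp value, so rewriting the timestamp behaves like
    # issuing a new secret key without modifying the secret itself.
    return hmac.new(primary_secret, timestamp, hashlib.sha256).digest()


primary = b"fused-primary-secret"                  # hypothetical, never changes
owner_a = effective_key(primary, b"timestamp-epoch-1")
owner_b = effective_key(primary, b"timestamp-epoch-2")

# A "change of ownership" (new timestamp) invalidates all old compound keys.
assert owner_a != owner_b
```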
  • As may be understood then, the target device 100 may use secure execution controller 162 and data cache 180 to isolate the working sets of processes executing in secure mode such that the data is inaccessible to any other process, even after the original process terminates. This working set isolation may be accomplished in certain embodiments by disabling off-chip writes and write-through of data cache when executing in secured mode, associating lines of the data cache written by the executing process with a secure descriptor (that may be uniquely associated with the executing process) and restricting access to those cache lines to only that process using the secure process descriptor. Such a secure process descriptor may be a compound key such as an authorization code or some derivative value thereof.
  • When the process attempts to access data in the data cache, the secure descriptor associated with the currently executing process may be compared with the secure descriptor associated with the requested line of the data cache. If the secure descriptors match, the data of that cache line may be provided to the executing process; if the secure descriptors do not match, the data may not be provided and another action may be taken.
  • Moreover, in certain embodiments, in the event that the working set for a secure process overflows the on-chip cache, and portions of cache that include those dirty lines associated with the secure process descriptor need to be written to main memory (e.g., a page swap or page out operation), external data transactions between the processor and the bus (e.g., an external memory bus) may be encrypted. The key for such an encryption may be the secure process descriptor itself or some derivative thereof, and that secure process descriptor may be encrypted (e.g., using the target device's secret key or some derivative thereof) prior to being written out to the main memory. Again, this encryption process may be accomplished substantially using the hashing block of the target device, by use of a software encryption process running in secure mode on the processor itself or some other on-chip processing resource, or by use of an encryption function that is implemented in hardware.
  • To enhance performance, in certain cases where a secure process may have a large working set or is frequently interrupted (e.g., entailing many page swaps), a subset of the process's working set that is considered “secure” may be created (e.g., only a subset of the dirty cache lines for the process may be associated with the secure descriptor), and only those cache lines, or the portion of the cache containing those lines, may be encrypted when written out to external memory.
  • Additionally, to enhance performance, an off-chip storage mechanism (e.g., a page swapping module) can be run asynchronously in parallel with an interrupting process (e.g., using a DMA unit with integrated AES encryption hardware acceleration) and thus, could be designed to have a minimal impact on the main processor performance. In another embodiment, a separate secure “working set encapsulation” software module may be used to perform the encryption prior to allowing working set data to be written out to memory.
  • Referring to FIG. 3, one embodiment of the architecture of a secure execution controller is depicted. In this embodiment, secure execution controller 362 is associated with a CPU of a system in which it is included and is intended to support the running of a candidate code block in secure mode on the main CPU. As such, secure execution controller 362 may comprise one or more registers, including a secret hardware key 310 which is not visible to the CPU, secure mode control register 350, authorization code register 360, secure mode status register 352, hash seed register 312 and hardware generated compound key register 314. Of these registers, all but secret hardware key 310 may be readable by a CPU without affecting the overall security of the system, although any of these other registers may or may not be visible.
  • Secure mode control register 350 may be a register that may be written to in order to attempt to place the target device in a secure mode. The secure mode control register 350 may have a register into which a memory location (e.g., in an I-cache or main memory) corresponding to the beginning address of a candidate code block (e.g., a code block to be executed in secured mode) may be written and a separate register into which the length of such a candidate code block may be written. Authorization code register 360 may be a location into which an authorization code or another type of key or data may be written. Secure mode status register 352 may be a memory-mapped location comprising one or more bits that may only be set by hardware comparison block 340 and which can indicate whether or not the target device 100 is operating in secure mode.
  • Hardware hash function block 320 may be operable for implementing a hash function substantially in hardware to generate a compound key 314. Hardware hash function block 320 may, for example, implement SHA-256 or some similar one-way hash function. However, this hash function may also be implemented in software or in firmware running on either a separate processor from the CPU of the system, or even a process that is run on the CPU in secure mode, using a virtual hardware hash function methodology as described earlier.
  • Hardware hash function block 320 may take as input one or more of the values stored in the hash seed register 312, secret hardware key 310 or data from another location, concatenate these inputs (e.g., prepend or append one input to another) and hash the resulting data set to generate a message authentication code, which we have referred to earlier as a one-way compound key.
  • In certain embodiments, almost any numeric value can be provided as an input (precursor) for hardware hash function block 320. For example, the input data for the hardware hash function may be constructed by a concatenation of the secret hardware key, a hash seed precursor key and a secure code block candidate. There may be no fundamental difference in the operation of the hash function, almost no matter what the input data represent or how large any of these data sets may be. It should also be noted that there may be other inputs to the hardware hash function coming from a secure mode controller state machine that function as control inputs as opposed to input data to the hash function.
  • Hardware generated compound key register 314 is configured to store the output of the hardware hash function block 320. Hardware comparison block 340 may be configured to compare the data in hardware generated compound key register 314 with the data in authorization code register 360. If the two values are identical the hardware comparison block 340 is configured to set the one or more bits in secure mode status register 352 that place the target device in secure mode.
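  • The interaction among hash seed register 312, secret hardware key 310, hardware hash function block 320, and hardware comparison block 340 can be sketched as follows. This is a software model under stated assumptions: SHA-256 is assumed as the hash, the concatenation order of the precursors is a design choice (secret key prepended here), and registers are modeled as Python variables.

```python
import hashlib
import hmac

SECRET_HW_KEY = b"fused-secret-310"   # hypothetical; never visible to the CPU


def hardware_compound_key(hash_seed: bytes, candidate_block: bytes) -> bytes:
    # The hash block concatenates its precursor inputs (the secret
    # hardware key, the hash seed, and the candidate code block) and
    # hashes the resulting data set into a one-way compound key.
    return hashlib.sha256(SECRET_HW_KEY + hash_seed + candidate_block).digest()


def comparison_block(compound_key: bytes, authorization_code: bytes) -> bool:
    # Models hardware comparison block 340: only an exact match may set
    # the secure-mode bit(s) in secure mode status register 352.
    return hmac.compare_digest(compound_key, authorization_code)


seed = b"nonce-312"                       # hypothetical hash seed value
code = b"candidate code block image"      # hypothetical candidate code block
# An authorization code issued by a licensing authority that knows the
# device's secret hardware key:
auth = hashlib.sha256(SECRET_HW_KEY + seed + code).digest()

assert comparison_block(hardware_compound_key(seed, code), auth)
assert not comparison_block(hardware_compound_key(seed, code + b"!"), auth)
```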
  • Secure mode controller state machine 370 may be logic (e.g., hardware, software or some combination) that may operate based on the state of bits of secure mode control register 350 or secure mode status register 352. Secure mode controller state machine 370 is configured for controlling inputs to hardware hash function block 320, such that the precursors may be utilized in the correct manner to generate the desired output 314 of hardware hash function block 320. For example, secure mode controller state machine 370 may be configured to cause the resulting output to be loaded into hardware generated compound key register 314 at the proper time. Additionally, secure mode controller state machine 370 may be configured to cause the correct data to be written to secure mode status register 352.
  • Secure mode controller state machine 370 may also be configured for controlling memory access when the target device is executing in secure mode. In one embodiment, when the bits in secure mode status register 352 indicate that the target device is now operating in secure mode, secure mode controller state machine 370 may be configured to determine which of the pages of the data cache have been assigned to that process and store a secure descriptor for that process in the data cache in association with the one or more of the pages of the data cache. These secure process descriptors may thus be used to associate a particular set of data that is being stored in the data cache with a specific process that is executing in secured mode. Such a secure process descriptor may, for example, be the value that is based on the data that is located in authorization code register 360 or the hardware-generated compound key register 314.
  • Additionally, when the bits in secure mode status register 352 that place the target device in secure mode are set, secure mode controller state machine 370 may be able to receive memory accesses by the process executing in secure mode and determine if the memory access is a read or a write access.
  • If the data access consists of a write operation, the secured mode controller state machine 370 may be configured to determine the cache line of the data cache corresponding to the address where the data is to be written and then set a security flag associated with that cache line to indicate that the data contained in that cache line is secure. In certain embodiments, secured mode controller state machine 370 is also configured to prevent any writes to any memory location which is not in the data cache, for example by disabling write-through, write-back or other operations of the data cache or memory controllers of the target device.
  • If the access is a read access, the secured mode controller state machine 370 may be configured to determine if a cache miss has occurred, and if the requested address was not previously stored in the data cache the secured mode controller state machine 370 may be configured to allow the requested data to be read from main memory and placed in the data cache in a page associated with the process. If a cache hit occurs the secured mode controller state machine 370 may be configured to determine the cache line corresponding to the address of the memory access and check the security flag associated with that cache line to determine if it is set. If the security flag is not set the memory access may be allowed to proceed (e.g., the data read from the cache line).
  • Alternatively, if a security flag associated with the cache line in the data cache corresponding to the address from which data is to be read is set, secured mode controller state machine 370 may be configured to obtain the secure process descriptor associated with the page in the data cache containing that cache line and compare it with a secure process descriptor associated with the currently executing process. If the secure process descriptors match, then the memory access may be allowed to proceed. If the secure descriptors do not match, another action may be taken, such as returning a garbage or preset value in response to the memory access, or alternately returning a “no valid data at that address” message to the CPU, whereupon the CPU memory management unit may then request a replacement cache line to read in from system memory.
  • In one embodiment, only the data cache is used to store the entire working set of a process executing in secure mode and any writes to memory other than to the data cache by the process may be disabled. Additionally, any lines of the data cache that are written to (e.g., so-called “dirty” cache lines) while in secure mode are associated with a secure process descriptor that may uniquely and precisely specify the process to which the “dirty” cache line belongs. Access to these cache lines may only be allowed to the owner of the particular “dirty” cache line such that any cache line modified during the operation of a secure process is unreadable by any other process, even after the original process has terminated. Thus, data that belongs to one instance of a process is unambiguously isolated from any other process.
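  • The read/write access checks performed by the secure mode controller state machine may be sketched as follows. This is an illustrative model only: per-line descriptors stand in for the per-page association described above, and the dictionary, names, and garbage value are assumptions for the example.

```python
GARBAGE = b"\x00" * 4   # preset value returned on a descriptor mismatch


class SecureDataCache:
    """Sketch of the access checks described for the secure mode
    controller state machine: each line carries a secure flag and is
    tagged with the secure process descriptor of the writing process."""

    def __init__(self):
        self.lines = {}   # addr -> (data, secure_flag, descriptor)

    def write(self, addr: int, data: bytes, descriptor: bytes):
        # A write in secure mode marks the line as secure and tags it
        # with the writing process's secure descriptor.
        self.lines[addr] = (data, True, descriptor)

    def read(self, addr: int, descriptor: bytes):
        if addr not in self.lines:
            return None           # cache miss: fill from main memory
        data, secure, owner = self.lines[addr]
        if not secure:
            return data           # unflagged lines are freely readable
        if owner == descriptor:
            return data           # the owning process may read its own lines
        return GARBAGE            # mismatch: return a garbage/preset value


cache = SecureDataCache()
cache.write(0x1000, b"key!", descriptor=b"proc-A")
assert cache.read(0x1000, b"proc-A") == b"key!"
assert cache.read(0x1000, b"proc-B") == GARBAGE   # isolated from other processes
```

Because the descriptor check applies to every secure line, the isolation holds even after the owning process terminates: no later process can present the matching descriptor.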
  • It may also be helpful to give a general overview of an exemplary data cache architecture. FIGS. 4A and 4B illustrate one embodiment of the architecture of a data cache that may be utilized to effectuate isolation of working sets of processes according to certain embodiments. Referring first to FIG. 4A, data cache 400 may be almost any type of cache, including an L1 cache, an L2 cache, a direct mapped cache, a 2-way set associative cache, a 4-way set associative cache, a 2-way skewed associative cache, etc., that may be implemented in conjunction with almost any management or write policies desired. The cache 400 may comprise a set of pages 410. When used in referring to the cache herein, a page may be understood to mean a cache block or a cache set. The data cache 400 is configured to store a secure descriptor associated with one or more pages 410 of the cache.
  • FIG. 4B depicts a view of one embodiment of a page 410 a of cache 400. Here, the cache comprises logic 412 designed to store a secure process descriptor in association with the page 410 a and to provide the secure process descriptor in response to a request for the secure process descriptor for page 410 a or in conjunction with a read to a cache line 402 of page 410 a. Each cache line 402 of the page 410 a includes bits for the data, address tags and flags 420. The flags 420 may include bits such as a valid bit or dirty bit. In addition, flags 420 may include a secure bit 422. Cache 400 may be configured such that a secure bit 422 for a cache line 402 may be set (e.g., when a process executing in secure mode writes to that cache line 402).
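  • The per-line flags of FIG. 4B may be modeled compactly as a bitfield. The specific bit positions are assumptions for illustration; the figure only specifies that the flags include bits such as a valid bit, a dirty bit, and a secure bit 422.

```python
# Flag bits per cache line, as in FIG. 4B (bit positions are assumptions)
VALID = 1 << 0
DIRTY = 1 << 1
SECURE = 1 << 2   # secure bit 422: set when a secure-mode process writes the line


class CacheLine:
    __slots__ = ("tag", "data", "flags")

    def __init__(self, tag: int, data: bytes):
        self.tag, self.data, self.flags = tag, data, VALID

    def secure_write(self, data: bytes):
        # A write while executing in secure mode sets both the dirty
        # bit and the secure bit for the line.
        self.data = data
        self.flags |= DIRTY | SECURE


line = CacheLine(tag=0x1F, data=b"\x00" * 64)
line.secure_write(b"\xAA" * 64)
assert line.flags & SECURE
assert line.flags & DIRTY
assert line.flags & VALID
```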
  • It will now be useful to explain how embodiments of such a target device may be placed in secure mode. It should be noted that, in one embodiment, the procedure by which any generic (or otherwise) block of code (which will be referred to as a “secure work function”) may be executed in secure mode on embodiments of a system such as those described herein is to execute a pair of extra functions, one on either side (i.e., before and after) of the secure work function. A function (or set of functions) that is executed immediately prior to a secure work function will be referred to as the “prologue” and a function (or set of functions) which is executed immediately after the secure work function will be referred to as the “epilogue”.
  • Thus, in one embodiment, in order to execute a secure work function on a CPU, that secure work function should be preceded by a prologue and followed by an epilogue. In certain embodiments, the purpose of the prologue is at least threefold. First, the prologue should prepare the input arguments that are passed to the secure work function for use by the secure work function. This preparation may involve, for example, a decryption process, which may be required for those input arguments that may not be passed to the secure work function in the clear. A second function of the prologue may be to construct a compound key whose value is dependent on a number of data elements. Such data elements may include the hardware secret key of the target device, the Authorization Code of the parent (e.g., calling) function, a list of one or more input arguments to the secure work function (either in encrypted or non-encrypted form), the executable image of the secure work function itself, or some other information that may be used in determining whether or not the secure work function should be allowed to execute on the target device in secure mode. A third function of the prologue could be to initiate a request that the CPU begin executing the secure work function in secure mode.
  • The purpose of the epilogue may be to “clean up” after the execution of the secure work function is complete. One function of the epilogue may be to prepare any designated output parameters for use by subsequent code blocks (e.g., to be executed after the secure work function), be they secure or not. For example, this preparation may involve encrypting the designated output (or returned data) from the secure work function so that any observing process other than the intended recipient of such output arguments, including either hardware or software-based observers, may be precluded from effectively intercepting that data. In such a case, the encryption key that may be used may be a reversible compound key that is passed to the secure routine as one of its calling arguments.
  • A second function of the epilogue may be to either programmatically or automatically invalidate those portions of a data cache that have been written to (e.g., by the secure work function) while the secure work function was executing. Thus, in the case where a secure work function may have had its operation suspended and then resumed, the data values that were written to a secure portion of the data cache prior to the process being suspended may thus be available to the resumed secure process without having to page these secure data locations out to memory (which may involve an intervening encryption process). Then, once the secure function has been resumed, these same data cache locations may be made available to the secure function, since the secure process descriptor may match the currently executing authorization code, or some derivative thereof (or another value being used as a secure process descriptor).
  • However, once a secure process has terminated (for example, using an epilogue function), these same secure data cache locations may be marked as invalid during the epilogue function. This invalidation process would prevent any data that may still be resident in the secure portion of the data cache from being accessed after the secure work function has terminated properly, thereby precluding any unintended potential “leakage” of that data.
  • In this manner, even if a secure work function is repeated and is given the same secure process descriptor twice in a row, the second iteration of this secure work function will nonetheless be unable to access the working set data from the first iteration of that same secure work function, despite the fact that both iterations might have the same secure process descriptor. It will be noted that the descriptions of the prologue and epilogue are provided by way of example, that more or fewer functions may be accomplished by the prologue or the epilogue, and that these functions (or additional or fewer functions) may additionally be accomplished in another manner without departing from the scope of embodiments as described.
  • As stated above, if the secure process' authCode and a nonce (plus the processor's hardware secret key) are used as precursors to a compound encryption key, then this encryption key can be recreated at will using just the (public) precursors. This mechanism can be used by a secure process to safely page its secure data out to main memory. An example of this is illustrated more particularly in FIG. 5. FIG. 5 is a block diagram depicting a nonce-based authCode/encryption key generated in hardware. A nonce 510 and the hardware secret 512 are used in conjunction with the cached code 514 to be protected, and fed into a hash function 516. The hash function 516 generates a unique NauthCode which can be used as an encryption key for securely paging cache data out to main memory. Since the nonce changes every time, the nonce based authCode NauthCode also changes every time. In some embodiments, the nonce may be generated from the previous operation's NauthCode fed back into the hash function input.
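The key derivation of FIG. 5 can be sketched as a keyed one-way hash. The sketch below assumes HMAC-SHA-256 keyed by the hardware secret; the actual hash construction, input ordering, and the use of the authCode digest in place of the cached code image are all implementation assumptions, not details fixed by the description above:

```python
import hashlib
import hmac
import os

def nauth_code(hw_secret: bytes, nonce: bytes, auth_code: bytes) -> bytes:
    """Derive a nonce-based compound key (NauthCode): a one-way hash of
    the public precursors, keyed by the architecturally invisible
    hardware secret."""
    return hmac.new(hw_secret, nonce + auth_code, hashlib.sha256).digest()

hw_secret = os.urandom(32)  # hardware secret key; never visible to software
auth_code = hashlib.sha256(b"secure work function image").digest()

nonce1 = os.urandom(16)
key1 = nauth_code(hw_secret, nonce1, auth_code)

# Since the nonce changes every time, the NauthCode changes every time...
nonce2 = os.urandom(16)
key2 = nauth_code(hw_secret, nonce2, auth_code)
assert key1 != key2

# ...yet the same (public) precursors recreate the same key at will.
assert key1 == nauth_code(hw_secret, nonce1, auth_code)

# In some embodiments, the next nonce is the previous NauthCode
# fed back through the hash input (the feedback line in FIG. 5).
next_nonce = hashlib.sha256(key1).digest()[:16]
```

Only the nonce and authCode precursors are public; without the hardware secret, no observer can recompute the key from them.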
  • However, one complication can arise because the method used to generate the nonce must itself be secure. This is desirable, since one method to attack this compound key generation mechanism is to maliciously manipulate the value of the nonce (which is public). Thus, the nonce should be generated securely, either by a software-based method (a process running in secure mode, such as in the “Prologue” section just preceding the Secure Code block as shown in FIG. 7) or by a hardware-based mechanism (one which may need to use the processor's hardware secret, e.g., via the feedback line shown in FIG. 5). In that case, the nonce itself would then be considered “secure” data and thus could not be stored in the clear in main memory.
  • This situation is interesting, since the nonce itself could be encrypted prior to being paged out to main memory, but the compound key for that encryption process would have to be somehow accessible, which would lead to yet another nonce, etc. However, this seemingly classic “chicken and egg” problem can be solved. Recall that the secure process (the one that must create the compound encryption key) is actually the “owner” of both the (securely generated) nonce as well as the resultant compound encryption key. Thus, the secure process can simply “pre-encrypt” the nonce with the (known) compound encryption key and then store it back in place (in the secure data cache). Then, when the secure process overflows and the cache line containing the nonce is paged out to main memory, it will be “re-encrypted” with the very same compound key that was used to “pre-encrypt” it in the first place. Thus, when the nonce is written out to main memory, it will actually be stored in the clear, without ever having to expose the value of the encryption key to any outside observer.
  • This is illustrated more particularly in FIG. 6. FIG. 6 illustrates a secure process dataflow. A secure processor 610 implements a secure data cache (D$) 612. When the data stored in the data cache 612 are paged out from the secure cache 612 into main memory 614, they are encrypted using an encryption key generated using a keyed one way hash (or HMAC), as discussed above, based on the nonce 620, the hardware secret key 618, and the secure processor's authCode 622.
  • The nonce itself is stored in the secure D$ 612 and pre-encrypted using the generated encryption key (via pre-encryption block 624) before being stored back in the secure D$ 612. When data in the secure D$ 612 are paged out to the main memory 614, the data, along with the pre-encrypted nonce, are first encrypted using the encryption key (via encryption block 626). This second encryption undoes the pre-encryption of the nonce, so the nonce is stored in the clear in main memory 614 and is available when reading data back in.
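The “pre-encrypt, then encrypt again” trick relies on a cipher for which encrypting twice with the same key restores the plaintext, such as a stream cipher that XORs a keystream into the data. The following is a toy sketch under that assumption; the keystream derivation is illustrative only and not the cipher any particular embodiment would use:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher: encryption and decryption are the same
    operation, so applying it twice with one key is the identity."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

compound_key = os.urandom(32)  # the NauthCode-derived encryption key
nonce = os.urandom(16)         # the securely generated nonce

# The secure process "pre-encrypts" the nonce with the (known) compound
# key and stores it back in place in the secure data cache.
pre_encrypted = keystream_xor(compound_key, nonce)

# On page-out, the cache line is "re-encrypted" with the very same key...
paged_out = keystream_xor(compound_key, pre_encrypted)

# ...so the nonce lands in main memory in the clear, without the key
# ever being exposed to an outside observer.
assert paged_out == nonce
```

A block cipher in a mode where encryption is not its own inverse would not work here unchanged; the pre-encryption step would instead have to be a decryption, which is the "pre-decryption" framing used later with respect to FIG. 10.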
  • FIG. 7 illustrates one example of a secure software implementation. In the example of FIG. 7, a secure software implementation includes a prologue 710. The prologue 710 informs the secure mode hardware state machine where and how big the candidate secure code block is. Once it has been verified, the secure code 712 generates a new nonce (the first part of the secure code) in the prologue 710 and then, in an epilogue 714, data are exported (after being pre-encrypted, as described above) securely to main memory and the state machine is shut down.
  • The mechanism described above can potentially be subverted by an external attack based on use of the data that has been paged out to main memory. This could be accomplished if either the encrypted data is maliciously modified or if the nonce value itself is modified. For example, one might envision a “replay attack”, where a nonce from a previous invocation of a particular secure process is inserted into the data set in place of the correct nonce.
  • In any case, these kinds of data corruption problems can be detected by “signing” the encrypted data set that is paged out to main memory (e.g., with a Message Authentication Code—or MAC). As before, such a MAC can be created by passing the encrypted data that is to be paged-out (including the unencrypted nonce) through a one-way hash function, along with the same secret value that was used in order to create the encryption key described above. Thus, if the paged-out data had been corrupted by an external party while it was in main memory, then that corruption would be detectable when the data was subsequently read back into the secure data cache prior to decryption. Since the secret value is not accessible to the CPU (i.e., it is architecturally invisible), then there is no way that any process (secure or not) can correctly recreate the value of the resulting MAC if any of the paged-out data has been modified. Of course, this mechanism cannot protect a secure process from a denial-of-service style attack (i.e., if the encrypted paged-out data set is modified or deleted), unless some other method is used (for example, a multiply redundant backing store).
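Signing and verifying the paged-out data set might be sketched as follows, again assuming HMAC-SHA-256 keyed by the hardware secret; the function names and the exact concatenation order of the MAC inputs are hypothetical:

```python
import hashlib
import hmac
import os

def sign_paged_data(hw_secret: bytes, encrypted_pages: bytes,
                    clear_nonce: bytes) -> bytes:
    """MAC over the encrypted paged-out data plus the unencrypted nonce,
    keyed by the architecturally invisible secret value."""
    return hmac.new(hw_secret, encrypted_pages + clear_nonce,
                    hashlib.sha256).digest()

def verify_paged_data(hw_secret: bytes, encrypted_pages: bytes,
                      clear_nonce: bytes, mac: bytes) -> bool:
    """Recompute the MAC on page-in and compare in constant time."""
    expected = sign_paged_data(hw_secret, encrypted_pages, clear_nonce)
    return hmac.compare_digest(expected, mac)

hw_secret = os.urandom(32)
pages = os.urandom(64)   # encrypted working set resident in main memory
nonce = os.urandom(16)   # stored alongside it in the clear

mac = sign_paged_data(hw_secret, pages, nonce)
assert verify_paged_data(hw_secret, pages, nonce, mac)

# Tampering with the data, or replaying a nonce from a previous
# invocation, fails verification on page-in.
assert not verify_paged_data(hw_secret, pages, os.urandom(16), mac)
```

Because no process can read the secret value, no process can forge a MAC over modified paged-out data, which is exactly the corruption-detection property described above.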
  • The techniques described are also useful for multi-threaded secure operations. In the case where a secure process is interrupted by some other process, then all of its intermediate data may be paged out to main memory using this same mechanism, whereupon it can then be re-created back in the secure D$ in the clear when the secure process is resumed by reversing the encryption procedure described above. Essentially, as described above, this mechanism can be used to automatically decrypt the data as it is read back into the data cache from main memory without the processor having any “knowledge” of the architecturally invisible secret or even of the compound encryption key, outside of the secure process to which this compound key belongs. If the system is designed according to the structure that was described in U.S. Pat. No. 7,203,844, then there is no way that the processor can correctly decrypt the paged-out data, even if the nonce input value is known, without using this hardware-based “automatic” decryption mechanism.
  • Finally, in a system where every datum written by a processor executing in secure mode is automatically included in this “secured data block”, there is no way for an external process, either secure or not, to interact (correctly) with this secured data. As stated earlier, if the “precursor” input to this mechanism is truly nonce-based, then even an exact duplicate of the interrupted secure code block cannot access the data created by a distinct copy of itself, since the resulting encryption keys will be different, due to each distinct process having a different input nonce value. Thus, the security of such a system depends simply on the security of the nonce generation process, which, as stated earlier, can be accomplished either in secure mode software or by a hardware-based mechanism.
  • Let us now consider the event where a secure process is interrupted mid-execution and its working data set is paged out of secure D$ and then execution is resumed at some later time. First, from a perspective of increased security (and to some extent, simplicity), it should be clear that the process would not “reacquire” the old (paged-out) nonce value when it resumes secure execution. At the very least, that kind of mechanism (where the old nonce value was reinstated) would open up a large vulnerability window for replay-style attacks. Thus, when the secure process resumes, it will actually have a new nonce (and thus a new NauthCode value). So any newly updated data would then subsequently be paged out to memory using a different encryption key.
  • Thus, to read back in data that had been paged out while executing a previous incarnation of the process, the system would need the older value of the NauthCode value in order to read that data back into the secure D$, but once that data had been safely restored into the secure D$, the old nonce value would no longer be needed. This makes it simple to understand which nonce should be used as the precursor of any NauthCode value for any subsequent re-encryption operations (on a subsequent page-out, for example). Since that restored data would have originated as “secure” data (or else it would not have been paged out in encrypted form in the first place), then when that “old” data is restored into the secure D$, it must be marked as “secure” (i.e., the “S” bit must be set in the secure D$ line flags field).
  • This way, if any such data were to be paged out again, it would presumably be re-encrypted; but this time, using the new value of the NauthCode as the encryption key. However, if this encrypted page-out process were to be used on unmodified data, then this would constitute another potential attack vector. This attack would be based on the fact that multiple encryptions of the same data with different keys would allow the attacker to accumulate statistical data on the ciphertext and thus, would help them to guess the original plaintext. Fortunately, this potential problem can be resolved simply by not setting the secure D$ “dirty” bit when the “old” data is read in from main memory (and decrypted) and restored back into the secure D$. Then, unless the processor writes the same value to this location over and over during separate execution instantiations, this data would never get written back out to main memory encrypted with a different NauthCode value.
  • However, by itself, this mechanism will potentially allow subsequent instantiations of a secure process to read back in data that was used in a prior instantiation and thus, potentially compromise the system by reading in partial results from previous instantiations and then further operating on these “older” data results. Thus, embodiments may include methods that will prevent this style of attack.
  • Following are descriptions of exemplary methods for preventing replay attacks using data from previous secure process instantiations. There are several methods that could be used to prevent the potential issue of “rewinding and then replaying” a secure process in order to extract intermediate results from a previous instantiation. One such method would be to “timestamp” a process when it is launched and then to use that timestamp as one of the precursor inputs to the compound key hash function. However, the security of that system would then depend on the security of the clock that is used for the timestamp, which might prove problematic if that clock were required to be reset for any reason (such as during a timezone change, for example). Another possible method might be to use the process ID in the same manner, but the same problem applies here; we are then dependent on the security of the process ID mechanism. However, another similar mechanism that could be used is already present in the system and has, in fact, already been applied to this problem, namely the nonce itself. We have already determined that the nonce value is non-repeating (and that is enforced by the hardware of the system).
  • In fact, if we look at the way that a hardware-based nonce is designed to operate, then ultimately, the nonce value is “seeded” at the same time that the Central Licensing Authority provisions the secure device in the first place. That nonce “seed” value will be provided to the device in an encrypted manner and will contain sufficient entropy such that no secure device will ever have a repeated nonce “seed” value. Thus, it is reasonable to assume that if the nonce is large enough, then even a nonce value that is generated using a Pseudo-Random Number Generator (PRNG) will never be repeated in the useful lifetime of the device. We can thus use the value of the nonce to serve the same function as a clock (at least for the purposes of this embodiment), although it would be difficult to use it as a real-time reference per se.
  • Thus, if we encode the nonce value of a secure process when it is launched (as opposed to when it is actually running), then we can have a method for distinguishing between different instantiations of a single secure process. We can refer to this launch-time based nonce derivative as the “LauthCode”. Thus, we simply save that “LauthCode” nonce value along with the current “NauthCode” nonce value each and every time a particular secure process is paged out. Then, if the saved (encrypted) data set is signed by a MAC resulting from the hash of the concatenated pair of nonces (along with the hardware secret) and the paged-out encrypted data set, then every time the secure process is resumed, it can be checked for integrity prior to allowing the process to be resumed when returning from an interrupt.
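Binding both the launch-time nonce and the current nonce into the page-out MAC might be sketched as follows (same HMAC-SHA-256 assumption as before; the name `instance_mac` and the concatenation order are hypothetical):

```python
import hashlib
import hmac
import os

def instance_mac(hw_secret: bytes, launch_nonce: bytes,
                 current_nonce: bytes, encrypted_pages: bytes) -> bytes:
    """Sign the paged-out data set with the concatenated pair of nonces:
    the launch-time nonce (the LauthCode precursor) and the current
    nonce (the NauthCode precursor), keyed by the hardware secret."""
    return hmac.new(hw_secret,
                    launch_nonce + current_nonce + encrypted_pages,
                    hashlib.sha256).digest()

hw_secret = os.urandom(32)
launch_nonce = os.urandom(16)   # captured once, when the process launches
current_nonce = os.urandom(16)  # advances on every page-out
pages = os.urandom(64)

mac = instance_mac(hw_secret, launch_nonce, current_nonce, pages)

# Data "rewound and replayed" from a different instantiation carries a
# different launch-time nonce, so its MAC cannot match on resume.
other_launch = os.urandom(16)
replayed = instance_mac(hw_secret, other_launch, current_nonce, pages)
assert mac != replayed
```

The launch-time nonce thus plays the role a secure timestamp or process ID would otherwise play, without depending on the security of a clock or of the OS process ID mechanism.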
  • It may be desirable at times to export secure data to non-secure processes. Now, turning to the second part of the problem, we see that we can potentially run afoul of the “King Midas Problem” when we wish to export the final results from a secure process to a non-secure process. However, we can simply use the pre-encryption mechanism that was described earlier to export, in the clear, the (securely created) nonce value along with all portions of the resulting data that we wish to export, prior to paging the data in question out to main memory.
  • As we described above, only the same secure process that created this data in the first place is able to recreate the actual encryption key that is used to encrypt the data as it is paged out of the data cache. Thus, in order to export final results for external use, only the same secure process can correctly “pre-encrypt” the data that is then exported out to main memory in the clear.
  • This method is also secure against mid-encryption interruption (and thus subsequent intermediate partial results being paged out to memory), since any such partial results will themselves be encrypted and only that data which has been completely “pre-encrypted” will actually show up in the clear when it is paged out to main memory.
  • This is illustrated by way of example in the flowchart 800 of FIG. 8. At process step 802, an encryption key (NauthCode) may be generated based on, for example, the process secure authCode, the processor secret hardware key, and the nonce (described in detail above). Data that are desired to be exported (as well as the nonce, if necessary) may be pre-encrypted using the NauthCode encryption key (step 804). The data may then be stored in the secure data cache (step 806). To page the data out from the secure cache, the data are then encrypted (step 808) on page out using the same NauthCode encryption key, which restores the data to plaintext. Finally, the unencrypted data (and nonce) are available in main memory (step 810). More detailed examples of similar processes are described below.
  • It may be desirable at times to export secure data to other secure processes. Therefore, it would be useful to have a method by which one secure process can share data with another secure process privately; in other words, the ability to encrypt data such that only another (specific) secure process can correctly decrypt it. In more concrete terms, it would be ideal to create a system where only a single specific secure process could correctly decrypt the (encrypted) shared data.
  • In general, in one example, this functionality can be accomplished by using a shared encryption key. Of course, for security reasons, it is desirable not to share the compound encryption key described above between two different processes. Thus, we must use a different mechanism to create the encryption key for data that we wish to share between two secure processes. Also, it is desirable to further subdivide this problem into “one-way” and “bidirectional” shared data mechanisms.
  • In a first example of a “one-way” secure data export, it may be desirable to create an encryption key in such a manner that only the authorized recipient (i.e., the secure “receiver” process) could create and use the shared data decryption key. Of course, the “sender” process would have access to the original data, prior to its encryption (and export). However, once the data in question is encrypted (and subsequently exported), the requirement of such a “one-way” shared data system would be that only the authorized recipient could subsequently have access to the shared data. If the data is not modified by the “receiver” process, then it doesn't make sense to try to enforce this “one-way” stipulation, since the “sender” process can simply make a backup copy of the exported data. However, if the “receiver” process makes any changes to this data, then the “one-way” stipulation would apply; i.e., the “sender” process would not have access to the subsequently modified version of the shared data.
  • This mechanism can be supported by a number of different exemplary methods. However, one simple method is to create a “shared” key where the shared data can be decrypted by both the secure “receiver” process and the secure “sender” process. However, once such data is read back into the secure data cache (and has thus been restored to an unencrypted or cleartext form), and the “receiver” process modifies it, then by the mechanisms described above, it will automatically be tagged as belonging to the secure “receiver” process. Thus, when this data is then paged back out to main memory, it would be encrypted with a different compound key (one that only the “receiver” process can recreate).
  • Thus, we have described a simple mechanism where this “one-way” data sharing can be accomplished; based on the assumption that we have a method for creating a “shared” secret which can be used by both secure processes in order to jointly encrypt and decrypt a shared data set. Other examples of “one-way” data sharing are provided below, with respect to FIGS. 11-13.
  • In an example of a “bidirectional” secure data export, it may be desirable to provide a mechanism by which two independent secure processes can interact in order to recreate a shared secret key that is to be used in order to encrypt and decrypt a common data set. Of course, it may be desired in such a secure shared key mechanism that any externally visible communications between these secure processes are not able to be used by an attacker in order to compromise the shared secret encryption key.
  • As it turns out, this problem of creating a secure shared secret is well-known in the cryptographic literature and the process is generally referred to as “Key Exchange”. While a large number of key exchange mechanisms are already known, most of them exhibit several drawbacks that make them less attractive to be used in the case of secure data sharing between two secure processes. Traditionally, such key exchange methods require the use of asymmetric cryptography methods that, while generally secure, are typically very compute-intensive.
  • However, at least one key exchange mechanism exists that does not depend on asymmetric cryptography. This mechanism is the “SHAAKE™” protocol, as described in commonly assigned, co-pending U.S. Patent Application No. ______[Atty. Docket KRIM1220-1] titled “System and Method for an Efficient Authentication and Key Exchange Protocol,” filed concurrently herewith and hereby incorporated by reference in its entirety as if fully set forth herein. This protocol depends only on cascaded hash functions and a single symmetric encrypt/decrypt stage, all of which impose a much lower computational load on the processor than traditional asymmetric cryptography based mechanisms.
  • Furthermore, when used for the “shared data export” purposes described above, the SHAAKE™ protocol can enable communication between two independent secure processes, even if they are executing on completely different processors. In fact, the SHAAKE protocol can also be easily extended to the case where the exported data must be shared between multiple independent secure processes, all of which may or may not be running on the same processor.
  • However, in the specific case of sharing data between two different processes that are running securely on a single processor, then the SHAAKE™ protocol can be simplified, as shown in FIG. 8. In the embodiment illustrated, each process has its own nonce and authCode (Nonce1, authcode1, Nonce2, authCode2) which are hashed with the hardware secret key Kh to obtain SPID1, SPID2. SPID1 and SPID2 in turn are hashed with the hardware secret key Kh to obtain the final shared encryption key EK12.
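The simplified single-processor exchange just described can be sketched as two cascaded keyed hashes. The sketch assumes HMAC-SHA-256 as the keyed hash and a fixed SPID concatenation order; both choices, along with all variable names, are assumptions for illustration:

```python
import hashlib
import hmac
import os

def keyed_hash(kh: bytes, msg: bytes) -> bytes:
    """Keyed one-way hash (HMAC) using the hardware secret Kh as key."""
    return hmac.new(kh, msg, hashlib.sha256).digest()

kh = os.urandom(32)  # hardware secret key Kh, shared by both processes

# Each secure process has its own nonce and authCode.
nonce1, authcode1 = os.urandom(16), os.urandom(32)
nonce2, authcode2 = os.urandom(16), os.urandom(32)

# Each process hashes its own nonce and authCode with Kh to obtain
# its secure process identifier (SPID).
spid1 = keyed_hash(kh, nonce1 + authcode1)
spid2 = keyed_hash(kh, nonce2 + authcode2)

# The two SPIDs are in turn hashed with Kh to obtain the final
# shared encryption key EK12.
ek12 = keyed_hash(kh, spid1 + spid2)

# Either process can recompute EK12 from its own secrets and the
# other process's (exchangeable) SPID value.
assert ek12 == keyed_hash(kh, keyed_hash(kh, nonce1 + authcode1) + spid2)
```

Note that only hash operations are involved, which is the computational advantage claimed over asymmetric key exchange; the security argument in the following paragraphs rests on Kh being usable only by untampered secure code.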
  • The most basic problem with the protocol shown above, however, seems to be that all of the secrets involved in the generation of the final encryption key value EK12 are able to be calculated by any secure process that is running on the processor. However, recall that the authCode itself enforces the integrity of the code that can access the hardware secret (the value of Kh). Thus, Kh can only be used as the “key” input to the HMAC function if the secure code (candidate) has not been tampered with.
  • In other words, a secure process can only use the correct value of Kh if it has not been corrupted. Thus, as long as the code block is specifically designed not to somehow expose the securely calculated value of EK12 in the first place, then we can ensure that the exposure of the encryption key will not happen at runtime.
  • We can ensure this if the only results that are exported from the secure code block are cryptographic derivatives of the EK12 value, and never the value itself. Note that the only manner in which the value of EK12 is used is in order to encrypt the exported data between secure processes. Thus, if the encryption mechanism that is used is itself secure, then any examination of the encrypted exported data (or even the resultant plaintext version of said data) should not expose the value of the encryption key itself. In this manner, the security of the interprocess communications channel depends solely on the authCode issuance process.
  • As discussed above with respect to FIG. 6, data (for example, a nonce), can be pre-encrypted before being paged out to main memory, so that it will be decrypted with the same encryption key and thus stored in main memory unencrypted. Below, several other examples of similar techniques are described. To help understand the described techniques, a more detailed explanation of a secure data cache follows.
  • FIG. 9 is a block diagram of a secure data cache system 900. The secure data cache system 900 includes a data cache 910 and a CPU 912 with corresponding CPU execution unit 914. Data cache 910 may comprise almost any type of cache, including an L1 cache, an L2 cache, a direct mapped cache, a 2-way set associative cache, a 4-way set associative cache, a 2-way skewed associative cache, etc., that may be implemented in conjunction with almost any management or write policies desired. The data cache 910 includes a plurality of data lines (in this example, three lines are shown). Each data line includes bits allocated for an address tag 916 (addr_tag), data 918 (data), valid flag bit 920 (V), and dirty flag bit 922 (D). The address tags, data, valid bits and dirty bits shown in FIG. 9 are used in their conventional sense. Note that the data lines may also include other information not shown, for example, a secure bit such as that shown in FIG. 4.
  • When data in the data cache 910 is paged out to main memory (not shown) the secure data is encrypted by encryption block 924, using the data cache encryption key 926. As discussed in detail above, the data cache encryption key 926 is derived from the hardware secret 928 (hw secret), nonce 930 (nonce), and authorization code 932 (authCode). As discussed above, the authorization code 932 is specific to a particular process, and thus the encryption key 926 is unique to a particular process. Similarly, the nonce 930 is unique to a particular instance of the secure process. The hardware secret 928, nonce 930, and authorization code 932 are provided to hardware hash function 934, to generate the data cache encryption key 926. Note that, if a security bit is included in each data line of the data cache 910, then data lines containing secure information can be designated as such. However, in the example shown in FIG. 9, with the data cache encryption key 926, the entire cache can be considered to be secure, and when a process exits, the entire cache can be invalidated.
  • In the example shown in FIG. 9, different secure processes cannot access each other's data. In fact, not even the same secure process running a second time can access previous data, as long as the nonce value is unique. FIG. 10 shows one example of a secure data cache system 1000 that can preserve a particular nonce for use later in reconstructing a previous data cache encryption key. As with FIG. 9, the data cache 1010 includes a plurality of data lines, each including bits allocated for an address tag 1016, data 1018, valid flag bit 1020, and dirty flag bit 1022. Since the hardware secret is constant, and the authorization code is constant for a particular process, a particular data cache encryption key can be reconstructed, given the appropriate nonce. However, a nonce cannot simply be paged out to main memory, since it would be encrypted using an encryption key that will not be available once a new nonce is generated.
  • Now assume that a secure process desires to reconstruct a previous data cache encryption key in order to access encrypted data in the main memory from a previous instance of the process. Since the hardware secret and authorization code are known, the secure process needs only to determine the value of the previous nonce to regenerate the appropriate encryption key. As shown in FIG. 10, the nonce is pre-decrypted using the data cache encryption key 1026 (itself generated via the hash function). In other words, as illustrated in the second data line of FIG. 10, the current nonce is encrypted (pre-decrypted) and stored as data in the data line. When the secure data cache 1010 is flushed to main memory 1040, the data will be encrypted by symmetric encryption block 1024, using the current data cache encryption key 1026, and stored in main memory 1040. Since the nonce was pre-decrypted, after passing through symmetric encryption block 1024 again, the nonce will be converted to plaintext and stored in main memory 1040 as plaintext. In a subsequent instance of the secure process, the previous nonce can be read from main memory 1040 and used to reconstruct the previous data cache encryption key 1026, thus allowing access to data which was encrypted using the previous data cache encryption key. Note that any data, not just the nonce, can be stored in main memory 1040 unencrypted (in plaintext), using the same “pre-decryption” technique.
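Putting these pieces together, recovering a previous instance's key from the plaintext nonce left in main memory might look like the following sketch. It reuses a toy XOR stream cipher (for which encryption and decryption coincide, standing in for the symmetric encryption block) and an HMAC-based key derivation; all names and the cipher choice are illustrative assumptions:

```python
import hashlib
import hmac
import os

def derive_key(hw_secret: bytes, auth_code: bytes, nonce: bytes) -> bytes:
    """Data cache encryption key: keyed hash of the authCode and nonce."""
    return hmac.new(hw_secret, auth_code + nonce, hashlib.sha256).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher; applying it twice with one key is the identity."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

hw_secret = os.urandom(32)  # constant for the device
auth_code = os.urandom(32)  # constant for this secure process

# First instance: working data is encrypted under a key derived from
# nonce_old, while nonce_old itself is left in main memory in plaintext
# (via the pre-decryption trick described above).
nonce_old = os.urandom(16)
key_old = derive_key(hw_secret, auth_code, nonce_old)
main_memory = {"data": xor_cipher(key_old, b"intermediate results"),
               "nonce": nonce_old}  # stored in the clear

# Subsequent instance: a new nonce means a new current key, but the old
# key can be rebuilt from the recovered plaintext nonce.
rebuilt = derive_key(hw_secret, auth_code, main_memory["nonce"])
recovered = xor_cipher(rebuilt, main_memory["data"])
assert recovered == b"intermediate results"
```

Only a process holding the same authCode (and running on hardware with the same secret) can perform this reconstruction, which is what confines the recovered data to the original secure process.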
  • As discussed above, sometimes it is desirable for separate secure processes to share data. In addition to the techniques described above, FIGS. 11-13 illustrate additional examples of techniques for sharing data between processes.
  • FIG. 11 is a block diagram of data cache system 1100. The data cache system 1100 includes a secure data cache page 1110, including a plurality of data lines (in this example, four lines are shown). As before, each data line includes bits allocated for an address tag 1116, data 1118, valid flag bit 1120, and dirty flag bit 1122. Each data line also includes bits representing a vector 1142 (S_vector), which functions similarly to the secure bits described above with respect to FIG. 4. However, instead of being a single bit, the vector 1142 contains multiple bits (for example, two bits). The value of the vector 1142 identifies the process to which the respective data line belongs. In this example, assume there are three processes which may store data in the data cache 1110. Instead of a single data cache encryption key, like that shown in FIGS. 9-10, three separate data cache encryption keys 1126 are generated. Each is generated in the manner described above with respect to FIG. 9, using the respective nonce and authorization code, along with the hardware secret. When the data cache 1110 is flushed to main memory 1140, each data line is encrypted by symmetric encryption block 1124 using the secure data encryption key 1126 corresponding to the value of the vector 1142 of the respective data line. As before, data in the data cache is encrypted and stored in main memory as encrypted data. However, as shown in the fourth address line, data can be pre-decrypted, and then stored in main memory 1140 as plaintext after passing through the symmetric encryption block 1124 (using the technique described above with respect to FIG. 10). When the plaintext nonce is later retrieved from main memory, the secure cache controller knows that the nonce is not encrypted, so it is read in as plaintext without being decrypted.
  • FIG. 12 shows a secure data cache system 1200, similar to the system 1100 shown in FIG. 11. The data cache system 1200 includes a secure data cache page 1210, including a plurality of data lines. As before, each data line includes bits allocated for an address tag 1216, data 1218, valid flag bit 1220, and dirty flag bit 1222. In this example, there are three secure processes utilizing the data cache 1210. Each secure process is assigned a secure process ID (SPID) value that is entered in the appropriate data line in the vector field 1242. In this example, the vector 1242 is two bits. For example, assume Secure Process 1 is assigned (SPID=01), Secure Process 2 is assigned (SPID=10), and Secure Process 3 is assigned (SPID=11). Unsecure data (e.g., public data) is assigned (SPID=00), and is visible to all processes. In the example shown in FIG. 12, the data in the first line (SPID=01) belongs to Secure Process 1. The data in the second line (SPID=00) is insecure and visible to all processes. The data in the third line (SPID=10) belongs to Secure Process 2. The data in the fourth line (SPID=11) belongs to Secure Process 3. When the data in the data cache 1210 is flushed out to main memory 1240, each line of data is encrypted by symmetric encryption block 1224 using the encryption key 1226 corresponding to the secure process to which it belongs. For example, the first data line will be encrypted using Secure Data Encryption Key 1, corresponding to SPID=01 and Secure Process 1, and so forth. When data is later retrieved from main memory 1240, each line of data will be decrypted using the encryption key corresponding to the process to which the data belongs.
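The per-line key selection on flush can be sketched as follows. The two-bit SPID encoding follows FIG. 12, but the XOR stream cipher, the key-derivation hash, and all key material are illustrative assumptions standing in for symmetric encryption block 1224 and keys 1226.

```python
import hashlib

HW_SECRET = b"hw-secret"   # assumed stand-in for the hardware secret

def process_key(auth_code: bytes, nonce: bytes) -> bytes:
    # Each secure process key combines the hardware secret with that
    # process's authCode and nonce (derivation details are assumed).
    return hashlib.sha256(HW_SECRET + auth_code + nonce).digest()

# Per-SPID keys; SPID 0b00 marks public data and has no key.
KEYS = {0b01: process_key(b"authCode1", b"nonce1"),
        0b10: process_key(b"authCode2", b"nonce2"),
        0b11: process_key(b"authCode3", b"nonce3")}

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (its own inverse), standing in for block 1224.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def flush_line(spid: int, data: bytes) -> bytes:
    # Public lines (SPID=00) pass through; secure lines are encrypted
    # with the key of the owning process.
    return data if spid == 0b00 else xor_cipher(data, KEYS[spid])

line1 = flush_line(0b01, b"secret of process 1")
line2 = flush_line(0b00, b"public data")
assert line2 == b"public data"                       # stored as plaintext
assert xor_cipher(line1, KEYS[0b01]) == b"secret of process 1"
```

On a later read, the same SPID lookup selects the decryption key, so each process transparently recovers only the lines encrypted under its own key.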
  • Now assume that Secure Process 1 wishes to share the data stored in the first data line with Secure Process 3. Secure Process 1 has the ability to write data to the vector field 1242 in data line 1, since data line 1 belongs to Secure Process 1 (as shown by SPID=01, which corresponds to Secure Process 1). Therefore, by Secure Process 1 rewriting the value of vector field 1242, Secure Process 1 can control which process has access to that data line. Similarly, Secure Process 1 can make the data visible to all processes by writing SPID=00 in vector 1242.
  • FIG. 13 is a block diagram of the secure data cache system shown in FIG. 12. The data cache system 1300 includes a secure data cache page 1310, including a plurality of data lines. As before, each data line includes bits allocated for an address tag 1316, data 1318, valid flag bit 1320, and dirty flag bit 1322. In this example, Secure Process 1 has rewritten the value of vector field 1342 to SPID=10, which changes ownership of that data line to Secure Process 2. At this point, Secure Process 2 has full access to the first data line, while Secure Process 1 no longer has access. When the data cache is eventually flushed to main memory 1340, the symmetric encryption block 1324 will encrypt data line 1 using Secure Data Encryption Key 2, which is the key generated using the authCode and nonce of Secure Process 2. Thus, the line 1 data is encrypted using Secure Process 2's encryption key. Therefore, when Secure Process 2 retrieves the data later from main memory 1340, the data will be decrypted using Secure Data Encryption Key 2 (the encryption key corresponding to Secure Process 2). Once Secure Process 2 has control of the data, it can do what it wants with it, including passing it back to Secure Process 1, or to another process, by rewriting the vector field 1342. In this way, data can be easily shared and transferred between multiple secure processes.
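The ownership hand-off amounts to rewriting the line's SPID field, after which the flush path simply picks the new owner's key. In this sketch, the dict-based cache line, key store, and permission check are all hypothetical illustrations, not the patented hardware mechanism.

```python
# A cache line carrying data plus its two-bit SPID ownership vector.
line = {"spid": 0b01, "data": b"shared payload"}

# Hypothetical per-process key store (SPID -> key material).
keys = {0b01: b"key-process-1", 0b10: b"key-process-2"}

def transfer(line: dict, current_spid: int, new_spid: int) -> None:
    # Only the current owner may rewrite the vector field.
    if line["spid"] != current_spid:
        raise PermissionError("only the owning process may rewrite the vector")
    line["spid"] = new_spid

def flush_key(line: dict) -> bytes:
    # On flush, the encryption key is chosen by the line's current SPID.
    return keys[line["spid"]]

# Secure Process 1 hands the line to Secure Process 2:
transfer(line, current_spid=0b01, new_spid=0b10)
assert flush_key(line) == b"key-process-2"
```

Because the key choice is bound to the vector at flush time, no data needs to be copied or re-encrypted in the cache itself to change ownership.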
  • Details of recursive security protocols that may be used in conjunction with the teachings herein are described in U.S. Pat. No. 7,203,844, issued Apr. 10, 2007, entitled “Recursive Security Protocol System and Method for Digital Copyright Control,” U.S. Pat. No. 7,457,968, issued Nov. 25, 2008, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” U.S. Pat. No. 7,747,876, issued Jun. 29, 2010, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” U.S. Pat. No. 8,438,392, issued May 7, 2013, entitled “Method and System for Control of Code Execution on a General Purpose Computing Device and Control of Code Execution in a Recursive Security Protocol,” U.S. Pat. No. 8,726,035, issued May 13, 2014, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” U.S. patent application Ser. No. 13/745,236, filed Jan. 18, 2013, entitled “Method and System for a Recursive Security Protocol for Digital Copyright Control,” U.S. patent application Ser. No. 13/847,370, filed Mar. 19, 2013, entitled “Method and System for Process Working Set Isolation,” and U.S. Provisional Patent Application Ser. No. 61/882,796, filed Sep. 26, 2013, entitled “Method and System for Establishing and Using a Distributed Key Server,” and are hereby incorporated by reference in their entireties for all purposes.
  • It may be helpful to an understanding of embodiments as presented herein to discuss embodiments of a SHA-based AKE. As shown in FIG. 14, embodiments allow users to securely identify parties with whom they are talking and also generate secure keys for encryption. FIG. 14 shows a hardware block secured, for example, by recursive security. As shown in FIG. 14, an HSM module 1030 for a party (“Alice”) according to embodiments may receive a NonceB, message digest (MDBA), and ciphertext from another party “Bob.” In addition, Alice may receive, from a licensing authority, an authorization code (authCodeA), a nonce (NKhAB), and an encryption key (EKeB). Alice generates a NonceA and also sends it to Bob. Alice's HSM module 1030 further generates a message digest (MDAB) of the session key.
  • In operation, Alice sends Bob NonceA and Bob sends Alice NonceB. Alice passes NonceA and NonceB through HMACs (Hash Message Authentication Codes, e.g., SHA functions) 1032 and 1034, respectively, to generate message digests NeA and NeB. These message digests are hashes of the nonces (NonceA and NonceB), seeded with the private keys (KeA, KeB) known only to Alice and Bob, respectively. Alice uses an embedded secret (architecturally invisible) KhA (and a nonce NKhAB, i.e., a random number, sent previously by the CLA) to generate (via HMAC 1036) the key KeA. Thus, NeA is a “signed” nonce. Alice can recover (via HMAC 1038) Bob's key KeB from the encrypted EKeB previously sent by the CLA, decrypting it using the key KeA. The digest NeB is either concatenated or XOR-ed with NeA and used as the input to a hash function (via HMAC 1041) to generate session key SKAB.
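The derivation just described can be sketched end to end. The HMAC-SHA256 construction, the concatenation order, and all key material below are assumptions for illustration; the patent specifies only the block structure of FIG. 14, not a wire format.

```python
import hashlib
import hmac
import os

def H(key: bytes, msg: bytes) -> bytes:
    # HMAC-SHA256 stands in for the HMAC/SHA blocks of FIG. 14.
    return hmac.new(key, msg, hashlib.sha256).digest()

# Embedded, architecturally invisible device secrets (assumed values):
KhA, KhB = b"alice-device-secret", b"bob-device-secret"
# CLA-supplied nonces used to derive each party's key:
NKhAB, NKhBA = b"cla-nonce-for-alice", b"cla-nonce-for-bob"

KeA = H(KhA, NKhAB)          # Alice's key (HMAC 1036)
KeB = H(KhB, NKhBA)          # Bob's key; the CLA delivers it to Alice as EKeB

# Session nonces, exchanged in the clear:
NonceA, NonceB = os.urandom(16), os.urandom(16)

# "Signed" nonces: digests of the nonces seeded with the private keys.
NeA = H(KeA, NonceA)         # HMAC 1032
NeB = H(KeB, NonceB)         # HMAC 1034

# Session key: hash of the concatenated signed nonces (HMAC 1041).
SKAB_alice = hashlib.sha256(NeA + NeB).digest()

# Bob performs the mirror-image computation and derives the same key:
SKAB_bob = hashlib.sha256(H(KeA, NonceA) + H(KeB, NonceB)).digest()
assert SKAB_alice == SKAB_bob

# Man-in-the-middle defense (HMAC 1046): Alice proves knowledge of both
# SKAB and KeB by sending MDAB = HMAC(KeB, SKAB), which Bob recomputes.
MDAB = H(KeB, SKAB_alice)
assert MDAB == H(KeB, SKAB_bob)
```

An eavesdropper who intercepts NonceA and NonceB cannot form NeA or NeB without KeA and KeB, so the session key remains out of reach even on an open channel.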
  • The CLA thus sends both Bob and Alice nonces, authCodes, and encrypted keys which can be used for all subsequent communications between the two parties. Even if these are intercepted, however, only Alice and Bob can correctly generate the shared session key SKAB. The session key (SKAB) is used by symmetric encryption blocks 1042 and 1044 to decrypt ciphertext and encrypt plaintext, as illustrated in FIG. 14. This method for generating the session key SKAB used for encrypting/decrypting plaintext/ciphertext during the session solves the perfect forward secrecy problem, with the assumption that the service (the CLA) is itself secure.
  • The man-in-the-middle problem is solved by hashing (via HMAC 1046) the session key SKAB with KeB (which is Bob's key, generated similarly to KeA but using Bob's embedded secret device key KhB (not shown) and Bob's corresponding nonce NKhBA (not shown)). The result of the hash is the message digest (MDAB). The message digest MDAB is sent after Alice receives NonceB and performs the hash calculations. In this way, Bob can verify that he is speaking with Alice. Correspondingly, Alice receives MDBA (i.e., the hash of KeA and SKAB) from Bob and hashes (via HMAC 1048) the session key SKAB with KeA to verify or authenticate that Alice is speaking with Bob. This functionality can also be used by Alice and Bob to sign messages to each other.
  • It is noted that this mechanism scales linearly. All that is required is an additional symmetric encryption block and that the CLA provide the additional party's encrypted key.
  • An example of a three-party version of the SHAAKE protocol executing on an HSM module 1130 is shown in FIG. 15. FIG. 15 shows an example of how the protocol of FIG. 14 can be extended for Alice to communicate with Bob and Carol in a secure and private manner. Again, the overall execution environment for the protocol may be secured using a recursive security protocol. As shown, the CLA provides authCodeA, NKhAB, EKeB, and EKeC. From Bob, Alice receives ciphertext as well as NonceB and message digest MDBA, while from Carol, Alice receives ciphertext, as well as NonceC and message digest MDCA.
  • In operation, Alice sends Bob and Carol NonceA, Bob sends Alice NonceB, and Carol sends Alice NonceC. Alice passes NonceA, NonceB, and NonceC through HMACs (Hash Message Authentication Codes, e.g., SHA functions) 1132, 1134, and 1136, respectively, to generate message digests NeA, NeB, and NeC. These message digests are hashes of the nonces (NonceA, NonceB, and NonceC), seeded with the private keys (KeA, KeB, KeC) known only to Alice, Bob, and Carol, respectively. Alice uses an embedded secret (architecturally invisible) KhA (and a nonce NKhAB, i.e., a random number, sent previously by the CLA) to generate (via HMAC 1138) the key KeA. Thus, NeA is a “signed” nonce. Alice can recover (via HMAC 1141) Bob's key KeB from the encrypted EKeB previously sent by the CLA, and can likewise recover (via HMAC 1143) Carol's key KeC from the encrypted EKeC. Alice decrypts both using the key KeA. The nonces NeA, NeB, and NeC are hashed (via HMAC 1144) to generate session key SKABC.
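Extending the two-party derivation, the three-party session key hashes all three signed nonces together. As before, the HMAC-SHA256 construction, the concatenation order, and the key values are illustrative assumptions.

```python
import hashlib
import hmac
import os

def H(key: bytes, msg: bytes) -> bytes:
    # HMAC-SHA256 stands in for the HMAC/SHA blocks of FIG. 15.
    return hmac.new(key, msg, hashlib.sha256).digest()

# Per-party keys KeA, KeB, KeC, derived as in FIG. 14 (values assumed):
KeA, KeB, KeC = b"KeA" * 8, b"KeB" * 8, b"KeC" * 8

# Nonces exchanged in the clear:
NonceA, NonceB, NonceC = os.urandom(16), os.urandom(16), os.urandom(16)

# Signed nonces (HMACs 1132, 1134, 1136):
NeA, NeB, NeC = H(KeA, NonceA), H(KeB, NonceB), H(KeC, NonceC)

# Three-party session key (HMAC 1144):
SKABC = hashlib.sha256(NeA + NeB + NeC).digest()

# Any party holding KeA, KeB, and KeC plus the public nonces derives the
# same key; scaling to N parties adds one signed nonce per extra party.
assert SKABC == hashlib.sha256(
    H(KeA, NonceA) + H(KeB, NonceB) + H(KeC, NonceC)).digest()
```

The linear scaling noted above is visible here: adding a party adds one HMAC over that party's nonce and one more term in the final hash, with no change to the existing terms.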
  • The CLA thus sends Bob, Alice, and Carol nonces, authCodes, and encrypted keys which can be used for all subsequent communications between the three parties. Even if these are intercepted, however, only Alice, Bob, and Carol can generate their session keys SKABC. The session key (SKABC) is used by symmetric encryption blocks 1146, 1148, and 1150 to decrypt ciphertext and encrypt plaintext, as illustrated in FIG. 15. This method for generating the session key SKABC used for encrypting/decrypting plaintext/ciphertext during the session solves the perfect forward secrecy problem.
  • As with the example shown in FIG. 14, the man-in-the-middle problem is solved by hashing (via HMAC 1152 and HMAC 1154, respectively) the session key SKABC with KeB and KeC. The results of these hashes are the message digests (MDAB and MDAC). The message digests MDAB and MDAC are sent after Alice receives NonceB and NonceC and performs the hash calculations. In this way, Bob and Carol can each verify that they are speaking with Alice. Correspondingly, Alice receives MDBA and MDCA from Bob and Carol and hashes (via HMACs 1156 and 1158) the session key SKABC with KeA to verify or authenticate that Alice is speaking with Bob or Carol. This functionality can also be used by Alice, Bob, and Carol to sign messages to each other.
  • Referring back to FIG. 14, the process described for generating session key SKAB need be performed only once per session. Once the session key is generated, text is encrypted and decrypted without regenerating the session key. Therefore, the bulk of the processing in FIG. 14 is performed by the symmetric decryption and encryption blocks 1042 and 1044. FIG. 16 illustrates an embodiment that boosts performance by implementing the symmetric decryption and encryption blocks in a crypto co-processor.
  • FIG. 16 is a diagram of an embodiment in which a crypto co-processor is used for encryption/decryption of text. FIG. 16 shows an HSM module 1230 that is similar to the module shown in FIG. 14. In general, the blocks 1232, 1234, 1236, 1238, 1241, and 1246 operate in the same manner as the corresponding blocks in FIG. 14, to generate session key SKAB and message digest MDAB. The HSM module 1230 performs key exchange in a manner similar to that described with reference to FIG. 14. However, the actual encryption of the plaintext is performed using a separate crypto co-processor 1250 (for clarity, a decryption block is not shown). The crypto co-processor receives an encrypted session key ESKAB, which is a version of the session key SKAB that is encrypted (via symmetric encryption block 1252) using the co-processor OTP secret key. The co-processor OTP secret key is generated from EKOTP and KeA via symmetric decryption block 1254. In the crypto co-processor 1250, the session key SKAB is decrypted using decryption block 1256. Symmetric encryption block 1258 uses the session key to encrypt the plaintext. A similar decryption block (not shown) uses the session key to decrypt ciphertext.
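The hand-off to the co-processor amounts to key wrapping: the HSM encrypts SKAB under the co-processor's OTP secret, and only the co-processor can unwrap it. In this sketch the self-inverse XOR wrap stands in for the symmetric blocks 1252 and 1256, and the OTP secret value is assumed.

```python
import hashlib
import os

def xor_wrap(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (its own inverse), standing in for the
    # symmetric encryption/decryption blocks 1252 and 1256.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

otp_secret = hashlib.sha256(b"coprocessor-otp").digest()  # assumed value
SKAB = os.urandom(32)                                     # session key from the AKE

# HSM side: wrap the session key under the co-processor OTP secret.
ESKAB = xor_wrap(SKAB, otp_secret)

# Co-processor side: unwrap, then use SKAB for bulk encryption.
unwrapped = xor_wrap(ESKAB, otp_secret)
assert unwrapped == SKAB
```

This keeps the plaintext session key inside the two trusted endpoints: the bus between the HSM and the co-processor only ever carries ESKAB, so the slow key-exchange logic and the fast bulk cipher can live in separate hardware.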
  • Although the invention has been described with respect to specific embodiments thereof, these embodiments are merely illustrative, and not restrictive of the invention. The description herein of illustrated embodiments of the invention, including the description in the Summary, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein (and in particular, the inclusion of any particular embodiment, feature or function within the Summary is not intended to limit the scope of the invention to such embodiment, feature or function). Rather, the description is intended to describe illustrative embodiments, features and functions in order to provide a person of ordinary skill in the art context to understand the invention without limiting the invention to any particularly described embodiment, feature or function, including any such embodiment feature or function described in the Summary. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the invention in light of the foregoing description of illustrated embodiments of the invention and are to be included within the spirit and scope of the invention. Thus, while the invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. 
Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the invention.
  • Reference throughout this specification to “one embodiment”, “an embodiment”, or “a specific embodiment” or similar terminology means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment and may not necessarily be present in all embodiments. Thus, respective appearances of the phrases “in one embodiment”, “in an embodiment”, or “in a specific embodiment” or similar terminology in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any particular embodiment may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the invention.
  • In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that an embodiment may be able to be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, components, systems, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the invention. While the invention may be illustrated by using a particular embodiment, this is not and does not limit the invention to any particular embodiment and a person of ordinary skill in the art will recognize that additional embodiments are readily understandable and are a part of this invention.
  • Embodiments discussed herein can be implemented in a computer communicatively coupled to a network (for example, the Internet), another computer, or in a standalone computer. As is known to those skilled in the art, a suitable computer can include a central processing unit (“CPU”), at least one read-only memory (“ROM”), at least one random access memory (“RAM”), at least one hard drive (“HD”), and one or more input/output (“I/O”) device(s). The I/O devices can include a keyboard, monitor, printer, electronic pointing device (for example, mouse, trackball, stylus, touch pad, etc.), or the like.
  • ROM, RAM, and HD are computer memories for storing computer-executable instructions executable by the CPU or capable of being compiled or interpreted to be executable by the CPU. Suitable computer-executable instructions may reside on a computer readable medium (e.g., ROM, RAM, and/or HD), hardware circuitry or the like, or any combination thereof. Within this disclosure, the term “computer readable medium” is not limited to ROM, RAM, and HD and can include any type of data storage medium that can be read by a processor. For example, a computer-readable medium may refer to a data cartridge, a data backup magnetic tape, a floppy diskette, a flash memory drive, an optical data storage drive, a CD-ROM, ROM, RAM, HD, or the like. The processes described herein may be implemented in suitable computer-executable instructions that may reside on a computer readable medium (for example, a disk, CD-ROM, a memory, etc.). Alternatively, the computer-executable instructions may be stored as software code components on a direct access storage device array, magnetic tape, floppy diskette, optical storage device, or other appropriate computer-readable medium or storage device.
  • Any suitable programming language can be used to implement the routines, methods or programs of embodiments of the invention described herein, including C, C++, Java, JavaScript, HTML, or any other programming or scripting code, etc. Other software/hardware/network architectures may be used. For example, the functions of the disclosed embodiments may be implemented on one computer or shared/distributed among two or more computers in or across a network. Communications between computers implementing embodiments can be accomplished using any electronic, optical, radio frequency signals, or other suitable methods and tools of communication in compliance with known network protocols.
  • Different programming techniques can be employed such as procedural or object oriented. Any particular routine can execute on a single computer processing device or multiple computer processing devices, a single computer processor or multiple computer processors. Data may be stored in a single storage medium or distributed through multiple storage mediums, and may reside in a single database or multiple databases (or other data storage techniques). Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, to the extent multiple steps are shown as sequential in this specification, some combination of such steps in alternative embodiments may be performed at the same time. The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc. The routines can operate in an operating system environment or as stand-alone routines. Functions, routines, methods, steps and operations described herein can be performed in hardware, software, firmware or any combination thereof.
  • Embodiments described herein can be implemented in the form of control logic in software or hardware or a combination of both. The control logic may be stored in an information storage medium, such as a computer-readable medium, as a plurality of instructions adapted to direct an information processing device to perform a set of steps disclosed in the various embodiments. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the invention.
  • It is also within the spirit and scope of the invention to implement in software programming or code any of the steps, operations, methods, routines or portions thereof described herein, where such software programming or code can be stored in a computer-readable medium and can be operated on by a processor to permit a computer to perform any of the steps, operations, methods, routines or portions thereof described herein. The invention may be implemented by using software programming or code in one or more general purpose digital computers, or by using application specific integrated circuits, programmable logic devices, field programmable gate arrays, or optical, chemical, biological, quantum or nanoengineered systems, components and mechanisms. In general, the functions of the invention can be achieved by any means as is known in the art. For example, distributed or networked systems, components and circuits can be used. In another example, communication or transfer (or otherwise moving from one place to another) of data may be wired, wireless, or by any other means.
  • A “computer-readable medium” may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory. Such computer-readable medium shall generally be machine readable and include software programming or code that can be human readable (e.g., source code) or machine readable (e.g., object code). Examples of non-transitory computer-readable media can include random access memories, read-only memories, hard drives, data cartridges, magnetic tapes, floppy diskettes, flash memory drives, optical data storage devices, compact-disc read-only memories, and other appropriate computer memories and data storage devices. In an illustrative embodiment, some or all of the software components may reside on a single server computer or on any combination of separate server computers. As one skilled in the art can appreciate, a computer program product implementing an embodiment disclosed herein may comprise one or more non-transitory computer readable media storing computer instructions translatable by one or more processors in a computing environment.
  • A “processor” includes any hardware system, mechanism or component that processes data, signals or other information. A processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems.
  • It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. Additionally, any signal arrows in the drawings/figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted.
  • As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus.
  • Furthermore, the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). As used herein, a term preceded by “a” or “an” (and “the” when antecedent basis is “a” or “an”) includes both singular and plural of such term, unless the context clearly dictates otherwise (i.e., unless the reference “a” or “an” clearly indicates only the singular or only the plural). Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.

Claims (19)

What is claimed is:
1. A system, comprising:
a processor;
a memory;
a secret key stored in hardware;
a cache having a data line comprising data of a process executed on the processor in a secure mode;
a secure execution controller configured to symmetrically encrypt the data using an encryption key and store the encrypted data in the cache; and
wherein the secure execution controller is configured to symmetrically encrypt the data a second time using the encryption key and storing the twice symmetrically encrypted data in the memory.
2. The system of claim 1, wherein the data stored in the memory is plaintext data.
3. The system of claim 1, wherein the second encryption of the data decrypts the data.
4. The system of claim 1, wherein the encryption key is derived from a secret key stored in hardware.
5. The system of claim 4, wherein the encryption key is derived from the secret key stored in hardware and a nonce.
6. The system of claim 5, wherein the encryption key is derived from the secret key stored in hardware, the nonce, and an authentication code derived from the process.
7. The system of claim 1, wherein the data includes a nonce.
8. A system, comprising:
a processor;
a memory;
a secret key stored in hardware;
a cache having one or more data lines, each data line comprising data of a process executed on the processor, wherein each data line includes a vector used to identify a secure process; and
a secure execution controller configured to generate an encryption key for each secure process identified by a vector in the one or more data lines.
9. The system of claim 8, wherein the secure execution controller is configured to symmetrically encrypt the data in a given data line using an encryption key corresponding to a secure process identified by the vector of the given data line.
10. The system of claim 8, wherein a secure process identified in the vector of a given data line is allowed to write data to the vector.
11. The system of claim 10, wherein the secure process identified in the vector of the given data line is allowed to write data to the vector identifying a second secure process.
12. The system of claim 11, wherein the secure execution controller is configured to symmetrically encrypt the data in the given data line using an encryption key corresponding to the second secure process identified by the vector of the given data line.
13. A method of writing data to memory from a secure data cache, comprising:
writing data to a data line in a secure data cache, the data corresponding to a secure process;
generating an encryption key derived from the secure process;
symmetrically encrypting the data in the data line in the secure data cache using the generated encryption key; and
before writing the data to memory, symmetrically encrypting the data a second time using the generated encryption key.
14. The method of claim 13, wherein symmetrically encrypting the data a second time using the generated encryption key decrypts the data into plaintext.
15. The method of claim 13, wherein the data is a nonce.
16. The method of claim 13, wherein the encryption key is derived from a secret key stored in hardware and the secure process.
17. The method of claim 16, wherein the encryption key is derived from the secret key stored in hardware, the secure process, and a nonce.
18. The method of claim 13, further comprising writing a vector value to the data line, wherein the vector value identifies a specific secure process.
19. The method of claim 18, wherein the encryption key is derived from the specific secure process identified by the vector value.
US14/683,924 2014-04-11 2015-04-10 System and method for sharing data securely Abandoned US20150294123A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/683,924 US20150294123A1 (en) 2014-04-11 2015-04-10 System and method for sharing data securely

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201461978669P 2014-04-11 2014-04-11
US201461978657P 2014-04-11 2014-04-11
US14/683,924 US20150294123A1 (en) 2014-04-11 2015-04-10 System and method for sharing data securely

Publications (1)

Publication Number Publication Date
US20150294123A1 true US20150294123A1 (en) 2015-10-15

Family

ID=54265304

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/683,924 Abandoned US20150294123A1 (en) 2014-04-11 2015-04-10 System and method for sharing data securely
US14/683,988 Active 2035-11-01 US9734355B2 (en) 2014-04-11 2015-04-10 System and method for an efficient authentication and key exchange protocol

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/683,988 Active 2035-11-01 US9734355B2 (en) 2014-04-11 2015-04-10 System and method for an efficient authentication and key exchange protocol

Country Status (2)

Country Link
US (2) US20150294123A1 (en)
WO (2) WO2015157693A2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170012975A1 (en) * 2015-07-12 2017-01-12 Broadcom Corporation Network Function Virtualization Security and Trust System
US9734355B2 (en) 2014-04-11 2017-08-15 Rubicon Labs, Inc. System and method for an efficient authentication and key exchange protocol
US20170285976A1 (en) * 2016-04-01 2017-10-05 David M. Durham Convolutional memory integrity
EP3486832A1 (en) * 2017-11-21 2019-05-22 Renesas Electronics Corporation Semiconductor device, authentication system, and authentication method
US10505948B2 (en) * 2015-11-05 2019-12-10 Trilliant Networks, Inc. Method and apparatus for secure aggregated event reporting
CN110932842A (en) * 2018-09-19 2020-03-27 微安科技有限公司 System on chip for performing virtual private network functions and system including the same
CN111143247A (en) * 2019-12-31 2020-05-12 海光信息技术有限公司 Storage device data integrity protection method, controller thereof and system on chip
US10686605B2 (en) 2017-09-29 2020-06-16 Intel Corporation Technologies for implementing mutually distrusting domains
US20210028932A1 (en) * 2019-07-23 2021-01-28 Mastercard International Incorporated Methods and computing devices for auto-submission of user authentication credential
US11093658B2 (en) * 2017-05-09 2021-08-17 Stmicroelectronics S.R.L. Hardware secure element, related processing system, integrated circuit, device and method
US11323265B2 (en) * 2019-05-08 2022-05-03 Samsung Electronics Co., Ltd. Storage device providing high security and electronic device including the storage device
US20220321540A1 (en) * 2021-03-31 2022-10-06 Sophos Limited Encrypted cache protection

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9806887B1 (en) * 2014-09-23 2017-10-31 Amazon Technologies, Inc. Authenticating nonces prior to encrypting and decrypting cryptographic keys
US9641400B2 (en) 2014-11-21 2017-05-02 Afero, Inc. Internet of things device for registering user selections
US20160180100A1 (en) 2014-12-18 2016-06-23 Joe Britt System and method for securely connecting network devices using optical labels
US10291595B2 (en) 2014-12-18 2019-05-14 Afero, Inc. System and method for securely connecting network devices
US9832173B2 (en) 2014-12-18 2017-11-28 Afero, Inc. System and method for securely connecting network devices
US10075447B2 (en) * 2015-03-04 2018-09-11 Neone, Inc. Secure distributed device-to-device network
US9704318B2 (en) 2015-03-30 2017-07-11 Afero, Inc. System and method for accurately sensing user location in an IoT system
US10045150B2 (en) 2015-03-30 2018-08-07 Afero, Inc. System and method for accurately sensing user location in an IoT system
US20160350861A1 (en) * 2015-05-29 2016-12-01 Yoti Ltd Electronic systems and methods for asset tracking
US9717012B2 (en) 2015-06-01 2017-07-25 Afero, Inc. Internet of things (IOT) automotive device, system, and method
US9699814B2 (en) 2015-07-03 2017-07-04 Afero, Inc. Apparatus and method for establishing secure communication channels in an internet of things (IoT) system
US9729528B2 (en) * 2015-07-03 2017-08-08 Afero, Inc. Apparatus and method for establishing secure communication channels in an internet of things (IOT) system
US10015766B2 (en) 2015-07-14 2018-07-03 Afero, Inc. Apparatus and method for securely tracking event attendees using IOT devices
US10341311B2 (en) * 2015-07-20 2019-07-02 Schweitzer Engineering Laboratories, Inc. Communication device for implementing selective encryption in a software defined network
US9793937B2 (en) 2015-10-30 2017-10-17 Afero, Inc. Apparatus and method for filtering wireless signals
US10095746B2 (en) 2015-12-03 2018-10-09 At&T Intellectual Property I, L.P. Contextual ownership
US10178530B2 (en) 2015-12-14 2019-01-08 Afero, Inc. System and method for performing asset and crowd tracking in an IoT system
RU2018125626A (en) * 2015-12-16 2020-01-16 Виза Интернэшнл Сервис Ассосиэйшн SYSTEMS AND METHODS OF PROTECTED MULTILATERAL COMMUNICATION USING AN INTERMEDIARY
US9584493B1 (en) 2015-12-18 2017-02-28 Wickr Inc. Decentralized authoritative messaging
US10523437B2 (en) * 2016-01-27 2019-12-31 Lg Electronics Inc. System and method for authentication of things
CN105610860A (en) * 2016-02-01 2016-05-25 厦门优芽网络科技有限公司 User private data sharing protocol and application method
US11551074B2 (en) * 2017-01-20 2023-01-10 Tsinghua University Self-adaptive threshold neuron information processing method, self-adaptive leakage value neuron information processing method, system computer device and readable storage medium
US10615970B1 (en) * 2017-02-10 2020-04-07 Wells Fargo Bank, N.A. Secure key exchange electronic transactions
US10615969B1 (en) 2017-02-10 2020-04-07 Wells Fargo Bank, N.A. Database encryption key management
US11671250B2 (en) * 2017-06-04 2023-06-06 Apple Inc. Migration for wearable to new companion device
US10534725B2 (en) 2017-07-25 2020-01-14 International Business Machines Corporation Computer system software/firmware and a processor unit with a security module
US10855440B1 (en) * 2017-11-08 2020-12-01 Wickr Inc. Generating new encryption keys during a secure communication session
US11005658B2 (en) * 2017-12-13 2021-05-11 Delta Electronics, Inc. Data transmission system with security mechanism and method thereof
EP3793129A4 (en) * 2018-05-30 2021-11-17 Huawei International Pte. Ltd. Key exchange system, method, and apparatus
US11133940B2 (en) * 2018-12-04 2021-09-28 Journey.ai Securing attestation using a zero-knowledge data management network
WO2020118071A1 (en) 2018-12-06 2020-06-11 Schneider Electric Systems Usa, Inc. One-time pad encryption for industrial wireless instruments
EP3758322A1 (en) * 2019-06-25 2020-12-30 Gemalto Sa Method and system for generating encryption keys for transaction or connection data
US20220277110A1 (en) * 2019-08-07 2022-09-01 Nec Corporation Secure computation system, secure computation method, and secure computation program
US11750585B2 (en) 2019-09-30 2023-09-05 Acumera, Inc. Secure ephemeral access to insecure devices
WO2021080586A1 (en) * 2019-10-24 2021-04-29 Hewlett-Packard Development Company, L.P. Authentication of write requests
US11475167B2 (en) 2020-01-29 2022-10-18 International Business Machines Corporation Reserving one or more security modules for a secure guest
US20230198978A1 (en) * 2021-12-22 2023-06-22 Mcafee, Llc Deterministic hash to secure personal data and passwords
DE102022202824A1 (en) 2022-03-23 2023-01-19 Vitesco Technologies GmbH Method for detecting manipulation of transmission measurement signals of a sensor unit of a system and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040123312A1 (en) * 2002-08-16 2004-06-24 Fujitsu Limited Contents distributing method via a communications network
US7203844B1 (en) * 2002-06-20 2007-04-10 Oxford William V Method and system for a recursive security protocol for digital copyright control
US20080247540A1 (en) * 2007-04-05 2008-10-09 Samsung Electronics Co., Ltd. Method and apparatus for protecting digital contents stored in usb mass storage device
US20080282345A1 (en) * 2007-05-11 2008-11-13 Echostar Technologies L.L.C. Apparatus for controlling processor execution in a secure environment
US20090157936A1 (en) * 2007-12-13 2009-06-18 Texas Instruments Incorporated Interrupt morphing and configuration, circuits, systems, and processes
US20100122088A1 (en) * 2002-06-20 2010-05-13 Oxford William V Method and system for control of code execution on a general purpose computing device and control of code execution in a recursive security protocol
US20110235806A1 (en) * 2008-12-05 2011-09-29 Panasonic Electric Works Co., Ltd. Key distribution system
US8214654B1 (en) * 2008-10-07 2012-07-03 Nvidia Corporation Method and system for loading a secure firmware update on an adapter device of a computer system
US20140236842A1 (en) * 2011-09-28 2014-08-21 Onsun Oy Payment system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7434050B2 (en) * 2003-12-11 2008-10-07 International Business Machines Corporation Efficient method for providing secure remote access
US20080095361A1 (en) * 2006-10-19 2008-04-24 Telefonaktiebolaget L M Ericsson (Publ) Security-Enhanced Key Exchange
US8429643B2 (en) * 2007-09-05 2013-04-23 Microsoft Corporation Secure upgrade of firmware update in constrained memory
US9172529B2 (en) * 2011-09-16 2015-10-27 Certicom Corp. Hybrid encryption schemes
US9575906B2 (en) 2012-03-20 2017-02-21 Rubicon Labs, Inc. Method and system for process working set isolation
US20160065362A1 (en) * 2013-04-05 2016-03-03 Interdigital Patent Holdings, Inc. Securing peer-to-peer and group communications
KR101460541B1 (en) * 2013-07-15 2014-11-11 고려대학교 산학협력단 Public encryption method based on user ID
WO2015157693A2 (en) 2014-04-11 2015-10-15 Rubicon Labs, Inc. System and method for an efficient authentication and key exchange protocol


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9734355B2 (en) 2014-04-11 2017-08-15 Rubicon Labs, Inc. System and method for an efficient authentication and key exchange protocol
US20170012975A1 (en) * 2015-07-12 2017-01-12 Broadcom Corporation Network Function Virtualization Security and Trust System
US10341384B2 (en) * 2015-07-12 2019-07-02 Avago Technologies International Sales Pte. Limited Network function virtualization security and trust system
US10505948B2 (en) * 2015-11-05 2019-12-10 Trilliant Networks, Inc. Method and apparatus for secure aggregated event reporting
US20170285976A1 (en) * 2016-04-01 2017-10-05 David M. Durham Convolutional memory integrity
US10585809B2 (en) * 2016-04-01 2020-03-10 Intel Corporation Convolutional memory integrity
US11010310B2 (en) 2016-04-01 2021-05-18 Intel Corporation Convolutional memory integrity
US11093658B2 (en) * 2017-05-09 2021-08-17 Stmicroelectronics S.R.L. Hardware secure element, related processing system, integrated circuit, device and method
US11921910B2 (en) * 2017-05-09 2024-03-05 Stmicroelectronics Application Gmbh Hardware secure element, related processing system, integrated circuit, and device
US20210357538A1 (en) * 2017-05-09 2021-11-18 Stmicroelectronics S.R.I. Hardware secure element, related processing system, integrated circuit, and device
US10686605B2 (en) 2017-09-29 2020-06-16 Intel Corporation Technologies for implementing mutually distrusting domains
US10949527B2 (en) 2017-11-21 2021-03-16 Renesas Electronics Corporation Semiconductor device, authentication system, and authentication method
EP3486832A1 (en) * 2017-11-21 2019-05-22 Renesas Electronics Corporation Semiconductor device, authentication system, and authentication method
CN110932842A (en) * 2018-09-19 2020-03-27 微安科技有限公司 System on chip for performing virtual private network functions and system including the same
US11323265B2 (en) * 2019-05-08 2022-05-03 Samsung Electronics Co., Ltd. Storage device providing high security and electronic device including the storage device
US20210028932A1 (en) * 2019-07-23 2021-01-28 Mastercard International Incorporated Methods and computing devices for auto-submission of user authentication credential
US11757629B2 (en) * 2019-07-23 2023-09-12 Mastercard International Incorporated Methods and computing devices for auto-submission of user authentication credential
CN111143247A (en) * 2019-12-31 2020-05-12 海光信息技术有限公司 Storage device data integrity protection method, controller thereof and system on chip
US20220321540A1 (en) * 2021-03-31 2022-10-06 Sophos Limited Encrypted cache protection
US11929992B2 (en) * 2021-03-31 2024-03-12 Sophos Limited Encrypted cache protection

Also Published As

Publication number Publication date
WO2015157693A3 (en) 2015-12-03
WO2015157693A2 (en) 2015-10-15
WO2015157690A1 (en) 2015-10-15
US9734355B2 (en) 2017-08-15
US20150295713A1 (en) 2015-10-15

Similar Documents

Publication Publication Date Title
US20150294123A1 (en) System and method for sharing data securely
US9575906B2 (en) Method and system for process working set isolation
US20170063544A1 (en) System and method for sharing data securely
US9842212B2 (en) System and method for a renewable secure boot
US20200153808A1 (en) Method and System for an Efficient Shared-Derived Secret Provisioning Mechanism
CN103069428B (en) Secure virtual machine in insincere cloud infrastructure guides
AU2012204448B2 (en) System and method for in-place encryption
CN104883256B (en) A kind of cryptographic key protection method for resisting physical attacks and system attack
US20190087354A1 (en) System, Apparatus And Method For Integrity Protecting Tenant Workloads In A Multi-Tenant Computing Environment
US20160188874A1 (en) System and method for secure code entry point control
JP2017515413A5 (en)
CN103038746A (en) Method and apparatus for trusted execution in infrastructure as a service cloud environments
CN103026347A (en) Virtual machine memory compartmentalization in multi-core architectures
US20130036312A1 (en) Method and Device for Protecting Memory Content
US20150363333A1 (en) High performance autonomous hardware engine for inline cryptographic processing
JP2019532559A (en) Key thread ownership for hardware-accelerated cryptography
CN105678173A (en) vTPM safety protection method based on hardware transactional memory
Gross et al. Breaking trustzone memory isolation through malicious hardware on a modern fpga-soc
Wong et al. SMARTS: secure memory assurance of RISC-V trusted SoC
US11019098B2 (en) Replay protection for memory based on key refresh
GB2528780A (en) Security against memory replay attacks in computing systems
JP2017526220A (en) Inferential cryptographic processing for out-of-order data
US10169251B1 (en) Limted execution of software on a processor
JP2009253490A (en) Memory system encrypting system
US20160352733A1 (en) Distributed and hierarchical device activation mechanisms

Legal Events

Date Code Title Description
AS Assignment

Owner name: RUBICON LABS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OXFORD, WILLIAM V.;REEL/FRAME:035840/0006

Effective date: 20150609

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION