US20070174921A1 - Manifest-Based Trusted Agent Management in a Trusted Operating System Environment - Google Patents
- Publication number
- US20070174921A1 (application Ser. No. 11/558,125)
- Authority
- US
- United States
- Prior art keywords
- trusted
- manifest
- binaries
- correspond
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/54—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/53—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Definitions
- This invention relates to trusted environments generally, and more particularly to manifest-based trusted agent management in a trusted operating system environment.
- This trust generally focuses on the ability to trust the computer to use the information it stores or receives correctly. Exactly what this trust entails can vary based on the circumstances. For example, multimedia content providers would like to be able to trust computers to not improperly copy their content. By way of another example, users would like to be able to trust their computers to forward confidential financial information (e.g., bank account numbers) only to appropriate destinations (e.g., allow the information to be passed to their bank, but nowhere else). Unfortunately, given the generally open nature of most computers, a wide range of applications can be run on most current computers without the user's knowledge, and these applications can compromise this trust (e.g., forward the user's financial information to some other destination for malicious use).
- these mechanisms entail some sort of authentication procedure where the computer can authenticate or certify that at least a portion of it (e.g., certain areas of memory, certain applications, etc.) are at least as trustworthy as they present themselves to be (e.g., that the computer or application actually is what it claims to be). In other words, these mechanisms prevent a malicious application from impersonating another application (or allowing a computer to impersonate another computer).
- the user or others can make a judgment as to whether or not to accept a particular platform and application as trustworthy (e.g., a multimedia content provider may accept a particular application as being trustworthy, once the computer can certify to the content provider's satisfaction that the particular application is the application it claims to be).
- components and modules of an application are allowed to be changed (e.g., in response to user preferences) and/or upgraded fairly frequently.
- applications frequently include various dynamic link libraries (DLLs), plug-ins, etc., and allow for different software configurations, each of which can alter the binaries which execute as the application.
- the manifest-based trusted agent management in a trusted operating system environment described herein provides such a security model.
- a request to execute a process is received and a virtual memory space for the process is set up.
- a manifest corresponding to the process is accessed, and which of a plurality of binaries can be executed in the virtual memory space is limited based on indicators, of the binaries, that are included in the manifest.
- a manifest includes a first portion including data representing a unique identifier of the trusted application, a second portion including data indicating whether a particular one or more binaries can be loaded into the process space for the trusted application, and a third portion derived from the data in both the first portion and the second portion by generating a digital signature over the first and second portions.
- the manifest can also include a portion that includes data representing a list of one or more export statements that allow a secret associated with the trusted application to be exported to another trusted application, a portion that includes data representing a set of properties corresponding to the data structure, and a portion that includes data representing a list of entry points into the executing trusted application.
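As an illustration of the manifest structure described above, the following is a minimal Python sketch. All names are hypothetical, and an HMAC is used as a stand-in for the public-key digital signature the patent describes (the Python standard library has no public-key signing), so this is a shape-of-the-data sketch rather than the patented mechanism:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real manifest would carry a public-key
# digital signature from the publisher rather than an HMAC.
SIGNING_KEY = b"publisher-signing-key"


def make_manifest(app_id: str, allowed_binaries: dict) -> dict:
    """Build a manifest: an identity portion, a binary-indicator portion,
    and a third portion (signature) derived from the first two."""
    body = {
        "app_id": app_id,              # first portion: unique identifier
        "binaries": allowed_binaries,  # second portion: binary name -> allowed digest
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body


def verify_manifest(manifest: dict) -> bool:
    """Check that the signature portion matches the first two portions."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


def may_load(manifest: dict, name: str, binary: bytes) -> bool:
    """Limit which binaries can execute in the virtual memory space:
    the binary's digest must match the indicator in the manifest."""
    digest = hashlib.sha256(binary).hexdigest()
    return verify_manifest(manifest) and manifest["binaries"].get(name) == digest
```

Under this sketch, a tampered binary (or a tampered manifest) simply fails the check and is not loaded into the trusted application's process space.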
- FIG. 1 illustrates an exemplary trusted operating system environment.
- FIG. 2 illustrates one exemplary architecture that can be implemented on a client computing device.
- FIG. 3 illustrates another exemplary architecture that can be used with the invention.
- FIG. 4 illustrates an exemplary relationship between a gatekeeper storage key and trusted application secrets.
- FIG. 5 illustrates an exemplary process for securely storing secrets using a gatekeeper storage key.
- FIG. 6 illustrates an exemplary upgrade from one trusted core to another trusted core on the same client computing device.
- FIG. 7 illustrates an exemplary process for upgrading a trusted core.
- FIG. 8 illustrates another exemplary process for upgrading a trusted core.
- FIG. 9 illustrates an exemplary secret storage architecture employing hive keys.
- FIG. 10 illustrates an exemplary process for securely storing secrets using hive keys.
- FIG. 11 illustrates an exemplary process for migrating secrets from a source computing device to a destination computing device.
- FIG. 12 illustrates an exemplary manifest corresponding to a trusted application.
- FIG. 13 illustrates an exemplary process for controlling execution of processes in an address space based on a manifest.
- FIG. 14 illustrates an exemplary process for upgrading to a new version of a trusted application.
- FIG. 15 illustrates a general exemplary computer environment, which can be used to implement various devices and processes described herein.
- code being “trusted” refers to code that is immutable in nature and immutable in identity. Code that is trusted is immune to being tampered with by other parts (e.g. code) of the computer and it can be reliably and unambiguously identified. In other words, any other entity or component asking “who is this code” can be told “this is code xyz”, and can be assured both that the code is indeed code xyz (rather than some imposter) and that code xyz is unadulterated. Trust does not deal with any quality or usefulness aspects of the code—only immutability of nature and immutability of identity.
- the execution environment of the trusted code affects the overall security.
- the execution environment includes the machine or machine class on which the code is executing.
- FIG. 1 illustrates an exemplary trusted operating system environment 100 .
- multiple client computing devices 102 are coupled to multiple server computing devices 104 via a network 106 .
- Network 106 is intended to represent any of a wide variety of conventional network topologies and types (including wired and/or wireless networks), employing any of a wide variety of conventional network protocols (including public and/or proprietary protocols).
- Network 106 may include, for example, the Internet as well as possibly at least portions of one or more local area networks (LANs).
- Computing devices 102 and 104 can each be any of a wide variety of conventional computing devices, including desktop PCs, workstations, mainframe computers, Internet appliances, gaming consoles, handheld PCs, cellular telephones, personal digital assistants (PDAs), etc.
- One or more of devices 102 and 104 can be the same types of devices, or alternatively different types of devices.
- Each of client computing devices 102 includes a secure operating system (OS) 108 .
- Secure operating system 108 is designed to provide a level of trust to users of client devices 102 as well as server devices 104 that are in communication with client devices 102 via a network 106 .
- Secure operating system 108 can be designed in different ways to provide such trust, as discussed in more detail below. By providing this trust, the user of device 102 and/or the server devices 104 can be assured that secure operating system 108 will use data appropriately and take various measures to protect that data.
- Each of client computing devices 102 may also execute one or more trusted applications (also referred to as trusted agents or processes) 110 .
- Each trusted application is software (or alternatively firmware) that is made up of multiple instructions to be executed by a processor(s) of device 102 .
- a trusted application is made up of multiple individual files (also referred to as binaries) that together include the instructions that comprise the trusted application.
- a client device 102 may obtain digital content (e.g., a movie, song, electronic book, etc.) from a server device 104 .
- Secure operating system 108 on client device 102 assures server device 104 that operating system 108 will not use the digital content inappropriately (e.g., will not communicate copies of the digital content to other devices) and will take steps to protect the digital content (e.g., will not allow unauthorized applications to access decrypted content).
- a client device 102 may communicate with a server device 104 and exchange confidential financial information (e.g., to purchase or sell a product or service, to perform banking operations such as withdrawal or transfer of funds, etc.).
- Secure operating system 108 on the client device 102 assures server device 104 , as well as the user of client device 102 , that it will not use the financial information inappropriately (e.g., will not steal account numbers or funds) and will take steps to protect the financial information (e.g., will not allow unauthorized applications to access decrypted content).
- Secure operating system 108 may be employed to maintain various secrets by different trusted applications 110 executing on client devices 102 .
- confidential information may be encrypted by a trusted application 110 and a key used for this encryption securely stored by secure operating system 108 .
- the confidential information itself may be passed to secure operating system 108 for secure storage.
- secure operating system 108 provides two primary functions: (1) the ability to securely store secrets for trusted applications 110 ; and (2) the ability to allow trusted applications 110 to authenticate themselves.
- the secure storage of secrets allows trusted applications 110 to save secrets to secure operating system 108 and subsequently retrieve those secrets so long as neither the trusted application 110 nor operating system 108 has been altered. If either the trusted application 110 or the operating system 108 has been altered (e.g., by a malicious user or application in an attempt to subvert the security of operating system 108 ), then the secrets are not retrievable by the altered application and/or operating system.
- a secret refers to any type of data that the trusted application does not want to make publicly available, such as an encryption key, a user password, a password to access a remote computing device, digital content (e.g., a movie, a song, an electronic book, etc.) or a key(s) used to encrypt the digital content, financial data (e.g., account numbers, personal identification numbers (PINs), account balances, etc.), and so forth.
- the ability of a trusted application 110 to authenticate itself allows the trusted application to prove its identity to a third party (e.g., a server device 104). This allows, for example, a server device 104 to be assured that it is communicating digital content to a trusted content player executing on a trusted operating system, or that it is communicating with a trusted e-commerce application on the client device rather than with a virus (or some other malicious or untrusted application).
- the security model discussed herein provides for authentication and secret storage in a trusted operating system environment, while at the same time allowing one or more of the binaries that make up a trusted application to be changed and/or upgraded.
- encryption refers to a process in which the data to be encrypted (often referred to as plaintext) is input to an encryption algorithm that operates, using a key (commonly referred to as the encryption key), on the plaintext to generate ciphertext.
- Encryption algorithms are designed so that it is extremely difficult to re-generate the plaintext without knowing a decryption key (which may be the same as the encryption key, or alternatively a different key).
- a variety of conventional encryption algorithms can be used, such as DES (Data Encryption Standard), RSA (Rivest, Shamir, Adleman), RC4 (Rivest Cipher 4), RC5 (Rivest Cipher 5), etc.
- the public-private key pair includes two keys (one private key and one public key) that are selected so that it is relatively straight-forward to decrypt the ciphertext if both keys are known, but extremely difficult to decrypt the ciphertext if only one (or neither) of the keys is known. Additionally, the encryption algorithm is designed and the keys selected such that it is extremely difficult to determine one of the keys based on the ciphertext alone and/or only one key.
- the owner of a public-private key pair typically makes its public key publicly available, but keeps its private key secret. Any party or component desiring to encrypt data for the owner can encrypt the data using the owner's public key, thus allowing only the owner (who possesses the corresponding private key) to readily decrypt the data.
- the key pair can also be used for the owner to digitally sign data. In order to add a digital signature to data, the owner encrypts the data using the owner's private key and makes the resultant ciphertext available with the digitally signed data.
- a recipient of the digitally signed data can decrypt the ciphertext using the owner's public key and compare the decrypted data to the data sent by the owner to verify that the owner did in fact generate that data (and that it has not been altered since being generated).
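The sign-with-private-key, verify-with-public-key flow just described can be demonstrated with textbook RSA. The tiny primes below are purely illustrative and offer no security; a real system would use a vetted library and key sizes:

```python
import hashlib

# Toy textbook-RSA parameters (tiny primes; illustrative only, never secure).
p, q = 61, 53
n = p * q                              # public modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+ modular inverse)


def sign(data: bytes) -> int:
    """Owner: encrypt a digest of the data with the private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)


def verify(data: bytes, signature: int) -> bool:
    """Recipient: decrypt the signature with the public key and compare
    it to a freshly computed digest of the received data."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h
```

Note the asymmetry the text relies on: anyone holding the public key can verify, but only the holder of the private key can produce a signature that verifies.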
- Secure operating system 108 of FIG. 1 includes at least a portion that is trusted code, referred to as the “trusted core”.
- the trusted core may be a full operating system, a microkernel, a Hypervisor, or some smaller component that provides specific security services.
- FIG. 2 illustrates one exemplary architecture that can be implemented on a client computing device 102 .
- the trusted core is implemented by taking advantage of different privilege levels of the processor(s) of the client computing device 102 (e.g., referred to as “rings” in an x86 architecture processor).
- these privilege levels are referred to as rings, although alternate implementations using different processor architectures may use different nomenclature.
- the multiple rings provide a set of prioritized privilege levels at which software can execute, often including four levels (Rings 0, 1, 2, and 3).
- Ring 0 is typically referred to as the most privileged ring.
- Software processes executing in Ring 0 can typically access more features (e.g., instructions) than processes executing in less privileged rings.
- a processor executing in a particular ring cannot alter code or data in a more privileged ring.
- in the illustrated example, a trusted core 120 executes in Ring 0, an operating system 122 executes in Ring 1, and trusted applications 124 execute in Ring 3.
- trusted core 120 operates at a more privileged level and can control the execution of operating system 122 from this level.
- the code and/or data of trusted core 120 cannot be altered directly by operating system 122 (executing in Ring 1 ) or trusted applications 124 (executing in Ring 3 ).
- any such alterations would have to be made by the operating system 122 or a trusted application 124 requesting trusted core 120 to make the alteration (e.g., by sending a message to trusted core 120 , invoking a function of trusted core 120 , etc.).
- Trusted core 120 also maintains a secret store 126 where secrets passed to and encrypted by trusted core 120 (e.g., originating with trusted applications 124 , OS 122 , or trusted core 120 ) are securely stored. The storage of secrets is discussed in more detail below.
- a cryptographic measure of trusted core 120 is also generated when it is loaded into the memory of computing device 102 and stored in a digest register of the hardware.
- the digest register is designed to be written to only once after each time the computing device is reset, thereby preventing a malicious user or application from overwriting the digest of the trusted core.
- This cryptographic measure can be generated by different components, such as a security processor of computing device 102 , a trusted BIOS, etc.
- the cryptographic measure provides a small (relative to the size of the trusted core) measure of the trusted core that can be used to verify the trusted core that is loaded.
- Given the nature of the cryptographic measure, it is most likely that any changes made to a trusted core (e.g., to circumvent its trustworthiness) will be reflected in the cryptographic measure, so that the altered core and the original core will produce different cryptographic measures.
- This cryptographic measure is used as a basis for securely storing data, as discussed in more detail below.
- A variety of cryptographic measures can be used. One such cryptographic measure is a digest; for ease of explanation the cryptographic measure will be discussed primarily herein as a digest, although other measures could alternatively be used.
- the digest is calculated using a one-way hashing operation, such as SHA-1 (Secure Hash Algorithm 1), MD4 (Message Digest 4), MD5 (Message Digest 5), etc.
- the cryptographic digest has the property that it is extremely difficult to find a second pre-image (in this case, a second trusted core) that when digested produces the same hash value.
- the digest register contains a value that can be considered to uniquely represent the trusted core in use.
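The measurement step described above reduces, in sketch form, to hashing the trusted-core image. SHA-256 is used here (the text names SHA-1, MD4, and MD5 as options); the image bytes are stand-ins:

```python
import hashlib


def measure_trusted_core(image: bytes) -> str:
    """Compute a cryptographic digest over the trusted-core image.

    The one-way property of the hash makes it extremely difficult to
    construct a second, altered core that produces the same value."""
    return hashlib.sha256(image).hexdigest()


# The same core always measures to the same value; an altered core
# measures to a different value, so the digest register effectively
# uniquely identifies the trusted core in use.
original = measure_trusted_core(b"trusted core v1 image bytes")
altered = measure_trusted_core(b"trusted core v1 image bytes, patched")
```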
- An alternative cryptographic measure to a digest is the public key of a properly formed certificate on the digest.
- a publisher can generate a sequence of trusted cores that are treated as identical or equivalent by the platform (e.g., based on the public key of the publisher).
- the platform refers to the basic hardware of the computing device (e.g., processor and chipset) as well as the firmware associated with this hardware (e.g., microcode in the processor and/or chipset).
- the operating system may be separated into a memory manager component that operates as trusted core 120 with the remainder of the operating system operating as OS 122 .
- the trusted core 120 then controls all page maps and is thus able to shield trusted agents executing in Ring 3 from other components (including OS 122 ).
- additional control is also added to protect the trusted core 120 from other busmasters that do not obey ring privileges.
- FIG. 3 illustrates another exemplary architecture that can be used with the invention.
- the trusted core is implemented by establishing two separate “spaces” within a client computing device 102 of FIG. 1 : a trusted space 140 (also referred to as a protected parallel area, or curtained memory) and a normal (untrusted) space 142 . These spaces can be, for example, one or more address ranges within computing device 102 .
- Both trusted space 140 and normal space 142 include a user space and a kernel space, with the trusted core 144 and secret store 146 being implemented in the kernel space of trusted space 140 .
- a cryptographic measure, such as a digest, of trusted core 144 is also generated and used analogous to the cryptographic measure of trusted core 120 discussed above.
- trusted applets, trusted applications, and/or trusted agents 148 can execute within the user space of trusted space 140 , under the control of trusted core 144 . However, any application 150 , operating system 152 , or device driver 154 executing in normal space 142 is prevented, by trusted core 144 , from accessing trusted space 140 . Thus, no alterations can be made to trusted applications or data in trusted space 140 unless approved by trusted core 144 .
- the digest of a trusted core is discussed herein as a single digest of the trusted core.
- the digest may be made up of multiple parts.
- the boot process may involve a trusted BIOS loading a platform portion of the trusted core and generating a digest of the platform portion.
- the platform portion in turn loads an operating system portion of the trusted core and generates a digest for the operating system portion.
- the operating system portion in turn loads a gatekeeper portion of the trusted core and generates a digest for the gatekeeper portion.
- a composite of these multiple generated digests is used as the digest of the trusted core.
- These multiple generated digests may be stored individually in separate digest registers with the composite of the digests being the concatenation of the different register values.
- each new digest may be used to generate a new digest value by generating a cryptographic hash of the previous digest value concatenated with the new digest—the last new digest value generated (e.g., by the operating system portion) is stored in a single digest register.
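The chained single-register variant just described can be sketched as a hash-extend loop (the same pattern a TPM uses for PCR extension). The component names and the zeroed starting value are assumptions for illustration:

```python
import hashlib


def extend(previous: bytes, new_measurement: bytes) -> bytes:
    """Fold a new component digest into the running register value:
    next = H(previous || new)."""
    return hashlib.sha256(previous + new_measurement).digest()


# Boot chain: the trusted BIOS measures the platform portion, which
# measures the operating system portion, which measures the gatekeeper
# portion; only the final chained value lands in the digest register.
register = bytes(32)  # assume the register starts zeroed at reset
for component in (b"platform-portion", b"os-portion", b"gatekeeper-portion"):
    register = extend(register, hashlib.sha256(component).digest())
```

Because each step hashes over the previous value, changing any component, or even just the order of loading, changes the final register value.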
- Two fundamental types of primitive operations are supported by the hardware and software of a client computing device 102 of FIG. 1 . These fundamental types are secret storage primitives and authentication primitives.
- the hardware of a device 102 makes these primitive operations available to the trusted core executing on the device 102 , and the trusted core makes variations of these primitive operations available to the trusted applications executing on the device 102 .
- the Seal primitive operation uses at least two parameters—one parameter is the secret that is to be securely stored and the other parameter is an identification of the module or component that is to be able to subsequently retrieve the secret.
- the Seal primitive operation provided by the hardware of client computing device 102 (e.g., by a cryptographic or security processor of device 102 ) takes the following form:
- Seal(secret, digest_to_unseal, current_digest)
- secret represents the secret to be securely stored
- digest_to_unseal represents a cryptographic digest of the trusted core that is authorized to subsequently retrieve the secret
- current_digest represents a cryptographic digest of the trusted core at the time the Seal operation was invoked.
- the current_digest is automatically set by the security processor to the value in the digest register of the device 102 rather than being explicitly settable as an external parameter (thereby removing the possibility that the module or component invoking the Seal operation provides an inaccurate current_digest).
- When the Seal primitive operation is invoked, the security processor encrypts the parameters provided (e.g., secret, digest_to_unseal, and current_digest).
- the digest_to_unseal (and optionally the current_digest as well) may not be encrypted, but rather stored in non-encrypted form and a correspondence maintained between the encrypted secret and the digest_to_unseal.
- comparisons performed in response to the Unseal primitive operation discussed below can be carried out without decrypting the ciphertext.
- the security processor can encrypt the data of the Seal operation in any of a wide variety of conventional manners.
- the security processor may have an individual key that it keeps secret and divulges to no component or module, and/or a public-private key pair.
- the security processor could use the individual key, the public key from its public-private key pair, or a combination thereof.
- the security processor can use any of a wide variety of conventional encryption algorithms to encrypt the data.
- the resultant ciphertext is then stored as a secret (e.g., in secret store 126 of FIG. 2 or 146 of FIG. 3 ).
- the Unseal primitive operation is the converse of the Seal primitive operation, and takes as a single parameter the ciphertext produced by an earlier Seal operation.
- the security processor obtains the cryptographic digest of the trusted core currently executing on the computing device and also obtains the digest_to_unseal. If the digest_to_unseal exists in a non-encrypted state (e.g., associated with the ciphertext, but not encrypted as part of the ciphertext), then this non-encrypted version of the digest_to_unseal is obtained by the security processor. However, if no such non-encrypted version of the digest_to_unseal exists, then the security processor decrypts the ciphertext to obtain the digest_to_unseal.
- the security processor compares the two digests to determine if they are the same. If the two digests are identical, then the trusted core currently executing on the computing device is authorized to retrieve the secret, and the security processor returns the secret (decrypting the secret, if it has not already been decrypted) to the component or module invoking the Unseal operation. However, if the two digests are not identical, then the trusted core currently executing on the computing device is not authorized to retrieve the secret and the security processor does not return the secret (e.g., returning a "fail" notification).
- failures of the Unseal operation will also occur if the ciphertext was generated on a different platform (e.g., a computing device using different platform firmware) using a different encryption or integrity key, or if the ciphertext was generated by some other process (although the security processor may decrypt the secret and make it available to the trusted core, the trusted core would not return the secret to the other process).
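Putting Seal and Unseal together, a minimal Python sketch of the two primitives follows. Everything here is assumed for illustration: the per-device key, the deterministic nonce, and the toy SHA-256 keystream (a real security processor would use a vetted cipher such as AES with random nonces):

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device key known only to the security processor"  # assumed
DIGEST_REGISTER = hashlib.sha256(b"trusted core v1").digest()  # set once per boot


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (sketch only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def seal(secret: bytes, digest_to_unseal: bytes) -> dict:
    """Seal(secret, digest_to_unseal, current_digest): current_digest is
    taken from the digest register, never from the caller."""
    blob = json.dumps({
        "secret": secret.hex(),
        "digest_to_unseal": digest_to_unseal.hex(),
        "current_digest": DIGEST_REGISTER.hex(),
    }).encode()
    nonce = hashlib.sha256(blob).digest()[:8]  # deterministic, for the sketch
    ct = bytes(a ^ b for a, b in zip(blob, _keystream(DEVICE_KEY, nonce, len(blob))))
    tag = hmac.new(DEVICE_KEY, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce, "ciphertext": ct, "tag": tag}


def unseal(sealed: dict):
    """Return the secret only if the currently running trusted core is
    the one the secret was sealed to; otherwise signal failure."""
    expected = hmac.new(DEVICE_KEY, sealed["nonce"] + sealed["ciphertext"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sealed["tag"]):
        return None  # ciphertext from another platform or process
    blob = bytes(a ^ b for a, b in zip(
        sealed["ciphertext"],
        _keystream(DEVICE_KEY, sealed["nonce"], len(sealed["ciphertext"]))))
    fields = json.loads(blob)
    if bytes.fromhex(fields["digest_to_unseal"]) != DIGEST_REGISTER:
        return None  # this trusted core is not authorized: "fail"
    return bytes.fromhex(fields["secret"])
```

A secret sealed to the current core's digest round-trips; a secret sealed to a different core's digest (or a tampered blob) yields a failure instead.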
- the authentication primitive operations are Quote and Unwrap (also referred to as PK_Unseal).
- the Quote primitive takes one parameter, and causes the security processor to generate a signed statement associating the supplied parameter with the digest of the currently running trusted core.
- the security processor generates a certificate that includes the public key of a public-private key pair of the security processor as well as the digest of the currently running trusted core and the external parameter. The security processor then digitally signs this certificate and returns it to the component or module (and possibly ultimately to a remote third party), which can use the public key in the certificate to verify the signature.
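The Quote operation can be sketched as follows. An HMAC stands in for the security processor's public-key signature (so the verifier here shares the key; in the real design the remote party checks the signature with the public key carried in the certificate), and all names are illustrative:

```python
import hashlib
import hmac

PROCESSOR_KEY = b"security-processor signing key"  # stand-in for its private key
DIGEST_REGISTER = hashlib.sha256(b"trusted core v1").digest()


def quote(external_param: bytes) -> dict:
    """Return a signed statement binding the caller-supplied parameter
    to the digest of the currently running trusted core."""
    statement = DIGEST_REGISTER + external_param
    return {
        "trusted_core_digest": DIGEST_REGISTER,
        "param": external_param,
        "signature": hmac.new(PROCESSOR_KEY, statement, hashlib.sha256).digest(),
    }


def verify_quote(q: dict) -> bool:
    """Remote party: check that digest and parameter are bound together
    by a valid signature (here, by recomputing the HMAC)."""
    statement = q["trusted_core_digest"] + q["param"]
    expected = hmac.new(PROCESSOR_KEY, statement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, q["signature"])
```

Because the digest and the external parameter (typically a fresh challenge from the remote party) are signed together, neither can be swapped without invalidating the quote.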
- the Unwrap (or PK_Unseal) primitive operation has ciphertext as its single parameter.
- the party invoking the Unwrap or PK_Unseal operation initially generates a structure that includes two parts—a secret and a digest_to_unseal.
- the party then encrypts this structure using the public key of a public-private key pair of the security processor on the client computing device 102 .
- the security processor responds to the Unwrap (or PK_Unseal) primitive operation by using its private key of the public-private key pair to decrypt the ciphertext received from the invoking party.
- the security processor compares the digest of the trusted core currently running on the client computing device 102 to the digest_to_unseal from the decrypted ciphertext. If the two digests are identical, then the trusted core currently executing on the computing device is authorized to retrieve the secret, and the security processor provides the secret to the trusted core. However, if the two digests are not identical, then the trusted core currently executing on the computing device is not authorized to retrieve the secret and the security processor does not provide the secret to the trusted core (e.g., instead providing a "fail" notification).
- Both Quote and Unwrap can be used as part of a cryptographic protocol that allows a remote party to be assured that it is communicating with a trusted platform running a specific piece of trusted core software (by knowing its digest).
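The Unwrap/PK_Unseal flow can be sketched with the same toy textbook RSA used earlier (tiny primes, per-byte encryption, purely illustrative and never secure): the remote party encrypts a (secret, digest_to_unseal) structure under the security processor's public key, and the processor releases the secret only to a matching trusted core:

```python
import hashlib

# Toy RSA key pair for the security processor (illustrative only).
p, q = 61, 53
n = p * q
e = 17                               # public exponent (known to the remote party)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept by the processor)

DIGEST_REGISTER = hashlib.sha256(b"trusted core v1").digest()


def pk_wrap(secret: bytes, digest_to_unseal: bytes) -> list:
    """Remote party: encrypt (secret || digest_to_unseal) under the
    security processor's public key, one small value at a time."""
    plain = bytes([len(secret)]) + secret + digest_to_unseal
    return [pow(b, e, n) for b in plain]


def pk_unseal(ciphertext: list):
    """Security processor: decrypt with the private key, then release
    the secret only if the running core matches digest_to_unseal."""
    plain = bytes(pow(c, d, n) for c in ciphertext)
    length = plain[0]
    secret, digest_to_unseal = plain[1:1 + length], plain[1 + length:]
    if digest_to_unseal != DIGEST_REGISTER:
        return None  # "fail": this trusted core is not authorized
    return secret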
- GSK gatekeeper storage key
- the gatekeeper storage key is used to facilitate upgrading of the secure part of the operating system (the trusted core) and also to reduce the frequency with which the hardware Seal primitive operation is invoked.
- the gatekeeper storage key is generated by the trusted core and then securely stored using the Seal operation with the digest of the trusted core itself being the digest_to_unseal (this is also referred to as sealing the gatekeeper storage key to the trusted core with the digest digest_to_unseal).
- Securely storing the gatekeeper storage key using the Seal operation allows the trusted core to retrieve the gatekeeper storage key when the trusted core is subsequently re-booted (assuming that the trusted core has not been altered, and thus that its digest has not been altered).
- the trusted core should not disclose the GSK to any other parties, apart from under the strict rules detailed below.
- the gatekeeper storage key is used as a root key to securely store any trusted application, trusted core, or other operating system secrets.
- a trusted application desiring to store data as a secret invokes a software implementation of Seal supported by the trusted core (e.g., exposed by the trusted core via an application programming interface (API)).
- the trusted core encrypts the received trusted application secret using an encryption algorithm that uses the gatekeeper storage key as its encryption key. Any of a wide variety of conventional encryption algorithms can be used.
- the encrypted secret is then stored by the trusted core (e.g., in secret store 126 of FIG. 2 , secret store 146 of FIG. 3 , or alternatively elsewhere (typically, but not necessarily, on the client device)).
- When the trusted application subsequently desires to retrieve the stored secret, the trusted application invokes an Unseal operation supported by the trusted core (e.g., exposed by the trusted core via an API) and based on the GSK as the encryption key.
- the trusted core determines whether to allow the trusted application to retrieve the secret based on information the trusted core has about the trusted application that saved the secret as well as the trusted application that is requesting the secret. Retrieval of secrets is discussed in more detail below with reference to manifests.
- the gatekeeper storage key allows multiple trusted application secrets to be securely stored without the Seal operation of the hardware being invoked a corresponding number of times.
- security of the trusted application secrets is still maintained because a mischievous trusted core will not be able to decrypt the trusted application secrets (it will not be able to recover the gatekeeper storage key that was used to encrypt the trusted application secrets, and thus will not be able to decrypt the encrypted trusted application secrets).
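The software Seal rooted in the gatekeeper storage key can be sketched as below. The text says only that "any of a wide variety of conventional encryption algorithms can be used"; the XOR-against-a-SHA-256-keystream cipher here is a toy stand-in chosen so the sketch is self-contained, not the algorithm the trusted core actually uses, and `software_seal`/`software_unseal` are invented names.

```python
import hashlib, secrets

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher -- a stand-in for
    # whatever conventional symmetric algorithm is actually used.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def software_seal(gsk: bytes, app_secret: bytes) -> bytes:
    """Encrypt a trusted-application secret under the gatekeeper storage key."""
    return bytes(a ^ b
                 for a, b in zip(app_secret, _keystream(gsk, len(app_secret))))

software_unseal = software_seal   # XOR against the same keystream is its own inverse

gsk = secrets.token_bytes(32)     # gatekeeper storage key (the root key)
stored = software_seal(gsk, b"app secret")
```

A mischievous core that cannot recover `gsk` gets only garbage back from `software_unseal`, which is the property the surrounding text relies on.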
- FIG. 4 illustrates an exemplary relationship between the gatekeeper storage key and trusted application secrets.
- a single gatekeeper storage key 180 is a root key and multiple (n) trusted application secrets 182 , 184 , and 186 are securely stored based on key 180 .
- Trusted application secrets 182 , 184 , and 186 can be stored by a single trusted application or alternatively multiple trusted applications.
- Each trusted application secret 182 , 184 , and 186 optionally includes a policy statement 188 , 190 , and 192 , respectively.
- the policy statement includes policy information regarding the storage, usage, and/or migration conditions that the trusted application desires to be imposed on the corresponding trusted application secret.
- FIG. 5 illustrates an exemplary process 200 for securely storing secrets using a gatekeeper storage key.
- the process of FIG. 5 is carried out by the trusted core of a client computing device, and may be performed in software.
- a gatekeeper storage key is obtained (act 202 ) and optionally sealed, using a cryptographic measure of the trusted core, to the trusted core (act 204 ).
- the gatekeeper storage key may not be sealed, depending on the manner in which the gatekeeper storage keys are generated, as discussed in more detail below.
- the gatekeeper storage key can be generated in a variety of different manners.
- the trusted core generates a gatekeeper storage key by generating a random number (or pseudo-random number) and uses a seal primitive to save and protect it between reboots. This generated gatekeeper storage key can also be transferred to other computing devices under certain circumstances, as discussed in more detail below.
- platform firmware on a computing device generates a gatekeeper storage key according to a particular procedure that allows any previous gatekeeper storage keys to be obtained by the trusted core, but does not allow the trusted core to obtain any future gatekeeper storage keys; in this case an explicit seal/unseal step need not be performed.
- the trusted core on the client computing device may be upgraded to a new trusted core and these secrets maintained.
- FIG. 6 illustrates an exemplary upgrade from one trusted core to another trusted core on the same client computing device.
- the initial trusted core executing on the client computing device is trusted core( 0 ) 230 , which is to be upgraded to trusted core( 1 ) 232 .
- Trusted core 230 includes (or corresponds to) a certificate 234 , a public key 236 , and a gatekeeper storage key 238 (GSK 0 ).
- Public key 236 is the public key of a public-private key pair of the component or device that is the source of trusted core 230 (e.g., the manufacturer of trusted core 230 ).
- Certificate 234 is digitally signed by the source of trusted core 230 , and includes the digest 240 of trusted core 230 .
- trusted core 232 includes (or corresponds to) a certificate 242 including a digest 244 , and a public key 246 .
- trusted core 232 will also include a gatekeeper storage key 248 (GSK 1 ), as well as gatekeeper storage key 238 (GSK 0 ).
- trusted cores 230 and 232 may also include version identifiers 250 and 252 , respectively.
- FIG. 7 illustrates an exemplary process 270 for upgrading a trusted core which uses the seal/unseal primitives.
- the process of FIG. 7 is carried out by the two trusted cores.
- the process of FIG. 7 is discussed with reference to components of FIG. 6 .
- the acts performed by the initial trusted core are on the left-hand side of FIG. 7 and the acts performed by the new trusted core (trusted core( 1 )) are on the right-hand side of FIG. 7 .
- a request to upgrade trusted core( 0 ) to trusted core( 1 ) is received (act 272 ).
- the upgrade request is accompanied by the certificate belonging to the proposed upgrade trusted core (trusted core ( 1 )).
- Trusted core( 0 ) verifies the digest of proposed-upgraded trusted core( 1 ) (act 274 ), such as by using public key 246 to verify certificate 242 .
- Trusted core( 0 ) also optionally checks whether one or more other upgrade conditions are satisfied (act 276 ). Any of a variety of upgrade conditions may be imposed.
- trusted core( 0 ) imposes the restriction that trusted cores are upgraded in strictly increasing version numbers and are signed by the same certification authority as the one that certified the currently running trusted core (or alternatively signed by some other key known by the currently running trusted core to be held by a trusted publisher).
- for example, version 0 can only be replaced by version 1, version 1 only by version 2, and so forth.
- it is generally not desirable to allow “downgrades” to earlier versions (e.g., earlier versions may have more security vulnerabilities).
- if verification of the digest or any of the upgrade conditions fails, the upgrade process fails and the trusted core refuses to seal the gatekeeper storage key to the prospective-newer trusted core (act 278 ).
- otherwise, the upgrade process is authorized to proceed and trusted core( 0 ) uses the Seal primitive operation to seal gatekeeper storage key 238 to the digest of trusted core( 1 ) as stated in the certificate received in act 272 (act 280 ).
- trusted core( 0 ) uses the Seal operation with digest 244 being the digest_to_unseal parameter.
- trusted core( 1 ) may be loaded and booted. This may be an automated step (e.g., performed by trusted core( 0 )), or alternatively a manual step performed by a user or system administrator.
- trusted core( 1 ) obtains the sealed gatekeeper storage key 238 (act 282 ).
- Trusted core( 1 ) unseals gatekeeper storage key 238 (act 284 ), which it is able to successfully do as its digest 244 matches the digest_to_unseal parameter used to seal gatekeeper storage key 238 .
- Trusted core( 1 ) then generates its own gatekeeper storage key 248 (act 286 ) and seals gatekeeper storage key 248 to the trusted core( 1 ) digest (act 288 ), thereby allowing gatekeeper storage key 248 to be subsequently retrieved by trusted core( 1 ).
- Trusted core ( 1 ) may also optionally seal gatekeeper storage key 238 to the trusted core( 1 ) digest.
- trusted core( 1 ) uses gatekeeper storage key 248 to securely store the secrets (act 290 ).
- trusted core( 1 ) uses gatekeeper storage key 238 to retrieve old secrets (secrets that were sealed by trusted core( 0 )), and uses gatekeeper storage key 248 to retrieve new secrets (secrets that were sealed by trusted core( 1 )) (act 292 ).
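The upgrade handoff of FIG. 7 can be sketched end to end. This is an illustrative sequence under the same toy seal model as earlier sketches (names such as `ToyHardware` are invented); real hardware would encrypt the sealed blob rather than return it in the clear.

```python
import hashlib, secrets

def digest(core: bytes) -> bytes:
    return hashlib.sha256(core).digest()

class ToyHardware:
    """Toy seal/unseal: a real security processor would encrypt the blob."""
    def seal(self, secret: bytes, digest_to_unseal: bytes) -> tuple:
        return (secret, digest_to_unseal)
    def unseal(self, blob: tuple, running_digest: bytes):
        secret, d = blob
        return secret if d == running_digest else "fail"

hw = ToyHardware()
core0, core1 = b"trusted core v0", b"trusted core v1"
gsk0 = secrets.token_bytes(32)

# Act 280: core(0), having verified core(1)'s certificate, seals GSK0
# to the digest of core(1) taken from that certificate.
handoff = hw.seal(gsk0, digest(core1))

# Acts 282-288: after reboot, core(1) unseals GSK0 (its digest matches),
# then generates and seals its own GSK1 for storing new secrets.
recovered_gsk0 = hw.unseal(handoff, digest(core1))
gsk1 = secrets.token_bytes(32)
own = hw.seal(gsk1, digest(core1))
```

Per act 292, core(1) would then use `recovered_gsk0` for old secrets and `gsk1` for new ones, while any other core's digest fails the unseal.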
- another manner in which the gatekeeper storage key may be obtained (act 202 ) is by having the platform generate a set of one or more keys to be used as gatekeeper storage keys.
- When booting a particular trusted core “n”, the platform generates the family of keys from 1 to n and provides them to trusted core “n.” Each time trusted core n boots, it has access to all secrets stored with key n (which is used as a GSK). But additionally, it has access to all secrets stored with previous versions of the trusted core, because the platform has provided the trusted core with all earlier keys.
- a trusted core cannot get access to secrets stored by future trusted cores because trusted core “n” obtains the family of keys 1 to n from the platform, but does not obtain key n+1 or any other keys beyond n.
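One concrete way a platform could realize this one-way family of keys is a backward hash chain, so that holding key n lets the core walk to every earlier key but never forward. The text does not commit to this construction; the chain, the `PLATFORM_MASTER` secret, and the fixed `MAX_VERSION` are assumptions of the sketch.

```python
import hashlib

PLATFORM_MASTER = b"platform master secret"   # held only by platform firmware (invented)
MAX_VERSION = 64                              # chain length fixed by the platform

def platform_key(n: int) -> bytes:
    """Key for trusted-core version n, where key(n-1) == H(key(n))."""
    k = hashlib.sha256(PLATFORM_MASTER).digest()   # key for MAX_VERSION
    for _ in range(MAX_VERSION - n):
        k = hashlib.sha256(k).digest()
    return k

def derive_earlier(key_n: bytes, n: int, m: int) -> bytes:
    """Trusted core n can walk forward along the chain to any m <= n;
    reaching key n+1 would require inverting the hash."""
    k = key_n
    for _ in range(n - m):
        k = hashlib.sha256(k).digest()
    return k
```

Under this scheme the platform need only hand core n the single value `platform_key(n)`; all earlier GSKs follow by hashing, and no future GSK is computable.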
- secrets available to each family of trusted cores are inaccessible to cores generated by a different software publisher that does not have access to the private key used to generate the certificates.
- the certificates are provided along with the trusted core (e.g., shipped by the publisher along with the trusted core), allowing the platform to generate gatekeeper storage keys for that publisher's trusted cores (based on the publisher's public key).
- FIG. 8 illustrates an exemplary process 300 for upgrading a trusted core which uses the family-based set of platform-generated gatekeeper storage keys.
- the process of FIG. 8 is carried out by the trusted core and the platform.
- the acts performed by the trusted core are on the left-hand side of FIG. 8 and the acts performed by the platform are on the right-hand side of FIG. 8 .
- trusted core (n) requests a set of keys from the platform (act 302 ). This request is typically issued when trusted core (n) is booted. In response to the request, the platform generates a set of keys from 1 to n (act 304 ) and returns the set of keys to trusted core (n) (act 306 ). Trusted core (n) eventually receives requests to store and/or retrieve secrets, and uses the received set of keys to store and retrieve such secrets. Trusted core (n) uses key (n) as the gatekeeper storage key to store and retrieve any new secrets (act 308 ), and uses key (n-a) as the gatekeeper storage key to retrieve any old secrets stored by a previous trusted core (n-a)(act 310 ).
- the process of FIG. 8 is the process performed by a trusted core when it executes, regardless of whether it is a newly upgraded-to trusted core or a trusted core that has been installed and running for an extended period of time. Requests to upgrade to new trusted cores can still be received and upgrades can still occur with the process of FIG. 8 , but sealing of a gatekeeper storage key to the digest of the new trusted core need not be performed.
- trusted core ( 1 ) has a storage facility (GSK 1 ) that allows it to store new secrets that will be inaccessible to trusted core ( 0 ), and yet still has access to the secrets stored by trusted core ( 0 ) by virtue of its access to GSK 0 . Furthermore, a user can still boot the older trusted core ( 0 ) and have access to secrets that it has stored, and yet not have access to newer secrets obtained or generated by trusted core ( 1 ).
- multiple gatekeeper storage keys may be used by a computing device. These additional second-level gatekeeper storage key(s) may be used during normal operation of the device, or alternatively only during the upgrade process.
- Using multiple gatekeeper storage keys allows trusted applications to prevent their secrets from being available to an upgraded trusted core. Some trusted applications may allow their secrets to be available to an upgraded trusted core, whereas other trusted applications may prevent their secrets from being available to the upgraded trusted core. Additionally, a particular trusted application may allow some of its secrets to be available to the upgraded trusted core, but not other secrets.
- when a trusted application stores a secret, it indicates to the trusted core whether the secret should be accessible to an upgraded trusted core, and this indication is saved as part of the policy corresponding to the secret (e.g., policy 188 , 190 , or 192 of FIG. 4 ).
- the family of second-level gatekeeper storage keys can be generated randomly and held encrypted by the root (sealed) gatekeeper storage key. During the trusted core upgrade process, only those trusted application secrets that are to be accessible to an upgraded trusted core are encrypted so as to be retrievable by the upgraded trusted core.
- the trusted core being upgraded can generate a temporary gatekeeper storage key and encrypt a subset of the trusted application secrets (all of the secrets that are to be retrievable by the upgraded trusted core) using the temporary gatekeeper storage key.
- the temporary gatekeeper storage key is then sealed to the digest of the new trusted core, but the other gatekeeper storage key used by the trusted core is not sealed to the digest of the new trusted core.
- the new trusted core will be able to retrieve the temporary gatekeeper storage key and thus retrieve all of the trusted application secrets that were saved using the temporary gatekeeper storage key, but not trusted application secrets that were saved using the other gatekeeper storage key.
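The selective handoff via a temporary gatekeeper storage key can be sketched as below. The toy XOR-keystream cipher again stands in for a real algorithm, and the policy flags and variable names are invented for illustration; only the temporary key (not `main_gsk`) would be sealed to the new core's digest.

```python
import hashlib, secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR against a SHA-256 keystream; a stand-in for a real cipher.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt   # XOR is its own inverse

secrets_with_policy = [
    (b"license key",   True),    # policy: accessible to an upgraded core
    (b"machine bound", False),   # policy: must not survive the upgrade
]

temp_gsk = secrets.token_bytes(32)  # temporary GSK, sealed to the NEW core's digest
main_gsk = secrets.token_bytes(32)  # never sealed to the new core

# Only secrets whose policy allows it are re-encrypted under the temporary key.
handoff = [toy_encrypt(temp_gsk, s) for s, ok in secrets_with_policy if ok]
kept    = [toy_encrypt(main_gsk, s) for s, ok in secrets_with_policy if not ok]
```

After the upgrade, the new core can unseal `temp_gsk` and read `handoff`, but the secrets in `kept` remain bound to the old core.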
- the trusted core upgrade process allows the new upgraded trusted core to access secrets that were securely stored by the previous trusted core(s), as the new upgraded trusted core has access to the gatekeeper storage key used by the previous trusted core(s), while any other core (e.g., a mischievous core) does not.
- the trusted core upgrade process allows the new upgraded trusted core to be authenticated to third parties.
- the security processor uses the digest of the new upgraded trusted core in performing any Quote or Unwrap/PK Unseal primitive operations.
- Secret use and storage by trusted applications executing on a client computing device 102 of FIG. 1 can be further based on multiple additional keys referred to as “hive” keys.
- the hive keys are used to facilitate migrating of trusted application secrets from one computing device to another computing device.
- up to three different types or classes of secrets can be securely stored: non-migrateable secrets, user-migrateable secrets, and third party-migrateable secrets.
- One or more hive keys may be used in a computing device 102 for each type of secret. Trusted application secrets are securely stored by encrypting the secrets using one of these hive keys.
- Which type of secret is being stored (and thus which hive key to use) is identified by the trusted application when storing the secret (e.g., is a parameter of the seal operation that the trusted core makes available to the trusted applications). Whether a particular trusted application secret can be migrated to another computing device is dependent on which type of secret it is.
- FIG. 9 illustrates an exemplary secret storage architecture employing hive keys.
- a root gatekeeper storage key 320 and three types of hive keys are included: a non-migrateable key 322 , one or more user-migrateable keys 324 , and one or more third party-migrateable keys 326 .
- Non-migrateable trusted application secrets 328 are encrypted by the trusted core using non-migrateable key 322
- user-migrateable trusted application secrets 330 are encrypted by the trusted core using user-migrateable key 324
- third party-migrateable secrets 332 are encrypted by the trusted core using third party-migrateable key 326 .
- Each of the hive keys 322 , 324 , and 326 is encrypted by the trusted core using gatekeeper storage key 320 , and the encrypted ciphertext stored.
- the trusted core can retrieve gatekeeper storage key 320 , it can decrypt the hive keys 322 , 324 , and 326 , and then use the hive keys to decrypt trusted application secrets 328 , 330 , and 332 .
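The two-level hierarchy of FIG. 9 (root GSK wrapping class-specific hive keys, hive keys wrapping secrets) can be sketched directly. As before, the XOR-keystream cipher is a toy stand-in and the dictionary layout is an illustration, not the actual storage format.

```python
import hashlib, secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 keystream); not the real algorithm.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt

gsk = secrets.token_bytes(32)   # root gatekeeper storage key (key 320)

# One hive key per class of secret (keys 322/324/326 in FIG. 9).
hive_keys = {
    "non-migrateable":  secrets.token_bytes(32),
    "user-migrateable": secrets.token_bytes(32),
    "third-party":      secrets.token_bytes(32),
}

# Secrets are encrypted under their class's hive key ...
stored = toy_encrypt(hive_keys["user-migrateable"], b"user secret")
# ... and each hive key is itself encrypted under the root GSK.
wrapped = {name: toy_encrypt(gsk, k) for name, k in hive_keys.items()}

# A core that can recover the GSK unwraps a hive key, then the secrets under it.
recovered_key = toy_decrypt(gsk, wrapped["user-migrateable"])
```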
- Non-migrateable secrets 328 are unconditionally non-migrateable—they cannot be transferred to another computing device.
- Non-migrateable secrets 328 are encrypted by an encryption algorithm that uses, as an encryption key, non-migrateable key 322 .
- the trusted core will not divulge non-migrateable key 322 to another computing device, so no other device will be able to decrypt trusted application secrets 328 .
- an upgraded trusted core (executing on the same computing device) may still be able to access trusted application secrets 328 because, as discussed above, the upgraded trusted core will be able to retrieve gatekeeper storage key 320 .
- although only a single non-migrateable key 322 is illustrated, multiple non-migrateable keys may alternatively be used.
- User-migrateable secrets 330 can be migrated/transferred to another computing device, but only under the control or direction of the user.
- User-migrateable key 324 can be transferred, under the control or direction of the user, to another computing device.
- the encrypted trusted application secrets 330 can also be transferred to the other computing device which, so long as the trusted core of the other computing device has user-migrateable key 324 , can decrypt trusted application secrets 330 .
- Each trusted application that stores user-migrateable secrets may use a different user-migrateable key (thereby allowing the migration of secrets for different trusted applications to be controlled separately), or a single trusted application may use different user-migrateable keys for different ones of its secrets.
- Which user-migrateable key 324 to use to encrypt a particular trusted application secret is identified by the trusted application when requesting secure storage of the secret.
- this user control is created by use of a passphrase.
- the user can input his or her own passphrase on the source computing device, or alternatively the trusted core executing on the source computing device may generate a passphrase and provide it to the user.
- the trusted core encrypts user-migrateable key 324 to the passphrase, using the passphrase as the encryption key.
- the ciphertext that is the encrypted trusted application secrets 330 can be transferred to the destination computing device in any of a variety of manners (e.g., copied onto a removable storage medium (e.g., optical or magnetic disk) and the medium moved to and inserted into the destination computing device, copied via a network connection, etc.).
- the user also inputs the passphrase (regardless of who/what created the passphrase) into the destination computing device.
- the encrypted user-migrateable key 324 can then be decrypted by the trusted core at the destination computing device using the passphrase.
- the trusted core at the destination device can then encrypt user-migrateable key 324 using the gatekeeper storage key of the trusted core at the destination device.
- the trusted core at the destination device is able to retrieve secrets securely stored using key 324 , assuming that the trusted core executing on the destination device is not a different trusted core than (or an earlier version of) the trusted core executing on the source device.
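The passphrase-controlled migration path (acts around encrypting key 324 to the passphrase) can be sketched with a standard password-based key derivation. The text says only that the passphrase is used as the encryption key; the PBKDF2 step, the salt, and the toy cipher are assumptions of this sketch.

```python
import hashlib, secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 keystream); a stand-in for a real cipher.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt

def passphrase_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2 turns a user passphrase into an encryption key;
    # the salt travels with the ciphertext.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

# Source device: wrap the user-migrateable hive key under the passphrase.
user_migrateable_key = secrets.token_bytes(32)
salt = secrets.token_bytes(16)
wrapped = toy_encrypt(passphrase_key("correct horse", salt), user_migrateable_key)

# Destination device: the user re-enters the passphrase to unwrap the key,
# after which the trusted core re-encrypts it under its own GSK.
unwrapped = toy_decrypt(passphrase_key("correct horse", salt), wrapped)
```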
- the retrieval of secrets is based on a manifest, as discussed in more detail below.
- the trusted core also typically authenticates the destination computing device before allowing the encrypted user-migrateable key 324 to be transferred to the destination computing device. Alternatively, at the user's discretion, authentication of the destination computing device may not be performed.
- the trusted core may perform the authentication itself, or alternatively rely on another party (e.g., a remote authentication party trusted by the trusted core) to perform the authentication or assist in the authentication.
- the destination computing device can be authenticated in a variety of different manners.
- the quote and/or pk_unseal operations are used to verify that the trusted core executing on the destination computing device is the same as or is known to the trusted core executing on the source computing device (e.g., identified as or determined to be trustworthy to the trusted core on the source computing device).
- the authentication may also involve checking a list of “untrustworthy” certificates (e.g., a revocation list) to verify that the trusted core on the destination computing device (based on its certificate) has not been identified as being untrustworthy (e.g., broken by a mischievous user).
- the authentication may also optionally include, analogous to verifying the trustworthiness of the trusted core on the destination computing device, verifying the trustworthiness of the destination computing device hardware (e.g., based on a certificate of the hardware or platform), as well as verifying the trustworthiness of one or more trusted applications executing on the destination computing device.
- Third party-migrateable secrets 332 can be migrated/transferred to another computing device, but only under the control or direction of a third party.
- This third party could be the party that provided the secret to the trusted application, or alternatively could be another party (such as a party that agrees to operate as a controller/manager of how data is migrated amongst devices).
- examples of third party control include keys that control access to premium content (e.g., movies), which may be licensed to several of a user's devices and yet not be freely movable to any other device, or credentials used to log on to a corporate LAN (Local Area Network), which can be moved, but only under the control of the LAN administrator.
- This third party could also be another device, such as a smartcard that tracks and limits the number of times the secret is migrated.
- Third party-migrateable key 326 can be transferred, under the control or direction of the third party, to another computing device.
- the encrypted trusted application secrets 332 can also be transferred to the other computing device which, so long as the trusted core of the other computing device has third party-migrateable key 326 , can decrypt trusted application secrets 332 (assuming that the trusted core executing on the destination device is not a different trusted core (or an earlier version of the trusted core) executing on the source device).
- this third party control is created by use of a public-private key pair associated with the third party responsible for controlling migration of secrets amongst machines. Multiple such third parties may exist, each having its own public-private key pair and each having its own corresponding third party-migrateable key 326 . Each third party-migrateable key 326 has a corresponding certificate 334 that includes the public key of the corresponding third party. Each time that a trusted application requests secure storage of a third party-migrateable secret, the trusted application identifies the third party that is responsible for controlling migration of the secret. If a key 326 already exists for the identified third party, then that key is used to encrypt the secret. However, if no such key already exists, then a new key corresponding to the identified third party is generated, added as one of keys 326 , and is used to encrypt the secret.
- the trusted core encrypts the third party-migrateable key 326 used to encrypt that secret with the public key of the certificate 334 corresponding to the key 326 .
- the ciphertext that is the encrypted trusted application secrets 332 can be transferred to the destination computing device in any of a variety of manners (e.g., copied onto a removable storage medium (e.g., optical or magnetic disk) and the medium moved to and inserted into the destination computing device, copied via a network connection, etc.).
- the encrypted third party-migrateable key 326 is also transferred to the destination computing device, and may be transferred along with (or alternatively separately from) the encrypted trusted application secrets 332 .
- the trusted core executing on the source computing device also typically authenticates the destination computing device before allowing the encrypted third party-migrateable key 326 to be transferred to the destination computing device.
- authentication of the destination computing device may not be performed.
- the trusted core (or third party) may perform the authentication itself, or alternatively rely on another party (e.g., a remote authentication party trusted by the trusted core or third party) to perform or assist in performing the authentication.
- the trusted core executing on the destination computing device can then access the third party corresponding to the encrypted third party-migrateable key 326 in order to have the key 326 decrypted.
- the third party can impose whatever type of verification or other constraints that it desires in determining whether to decrypt the key 326 .
- the third party may require the trusted core executing on the destination computing device to authenticate itself, or may decrypt the key 326 only if fewer than an upper limit number of computing devices have requested to decrypt the key 326 , or may require the user to verify certain information over the telephone, etc.
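The third party's gatekeeping role can be sketched as a small policy-enforcing service. The device-limit rule shown is just one of the example constraints named above; the class, its method names, and the "secure channel" comment are all invented for the sketch (a real third party would recover the hive key with its private key rather than hold it directly).

```python
class ToyThirdParty:
    """Sketch of a third party that controls migration of a hive key,
    here enforcing an upper limit on how many devices may receive it."""

    def __init__(self, hive_key: bytes, device_limit: int):
        self._hive_key = hive_key        # in reality, recovered via its private key
        self._device_limit = device_limit
        self._granted = set()            # devices already given the key

    def request_key(self, device_id: str, authenticated: bool):
        if not authenticated:
            return "refused"             # destination failed authentication
        if device_id not in self._granted and len(self._granted) >= self._device_limit:
            return "refused"             # upper limit of devices reached
        self._granted.add(device_id)
        return self._hive_key            # returned over some secure channel

party = ToyThirdParty(b"k" * 32, device_limit=2)
```

A repeat request from an already-granted device succeeds without consuming another slot, which matches the idea of licensing to a bounded set of a user's devices.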
- if the third party refuses to decrypt the key 326 , then the destination computing device is not able to decrypt encrypted trusted application secrets 332 . However, if the third party does decrypt the key 326 , then the third party returns the decrypted key to the destination computing device.
- the decrypted key can be returned in a variety of different secure methods, such as via a voice telephone call between the user of the destination computing device and a representative of the third party, using network security protocols (such as HTTPS (Secure HyperText Transfer Protocol)), encrypting the key with a public key of a public-private key pair of the destination computing device, etc.
- the trusted core at the destination device can then encrypt third party-migrateable key 326 using the gatekeeper storage key of the trusted core at the destination device.
- Storing application secrets based on classes or types facilitates the migration of the application secrets to other computing devices.
- the application secrets are classed together, with only one key typically being needed for the user-migrateable class and only one key per third party typically being needed for the third party-migrateable class.
- an “all” class can also exist (e.g., associated with gatekeeper storage key 320 of FIG. 9 ).
- FIG. 10 illustrates an exemplary process 360 for securely storing secrets using hive keys.
- the process of FIG. 10 is carried out by the trusted core of a client computing device, and may be performed in software.
- a gatekeeper storage key is generated (act 362 ) and sealed, using a cryptographic measure of the trusted core, to the trusted core (act 364 ).
- a request to store a secret is received by the trusted core from a trusted application (act 366 ), and the request includes an identification of the type of secret (non-migrateable, user-migrateable, or third party-migrateable).
- the trusted core generates a hive key for that type of secret if needed (act 368 ).
- a hive key is needed if no hive key of that type has been created by the trusted core yet, or if the identified user-migrateable key has not been created yet, or if a hive key corresponding to the third party of a third party-migrateable secret has not been created yet.
- the trusted core uses the hive key to encrypt the trusted application secret (act 370 ). Additionally, the trusted core uses the gatekeeper storage key to encrypt the hive key (act 372 ).
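Act 368's "generate a hive key for that type of secret if needed" can be sketched as key lookup with on-demand creation, keyed by the secret type plus a qualifier (a particular user key, or a particular third party). The class and method names are invented for illustration.

```python
import secrets

class ToyTrustedCore:
    """Sketch of act 368: pick the hive key named by a storage request,
    creating it only if that (type, qualifier) pair has no key yet."""

    def __init__(self):
        self._hive_keys = {}   # keyed by (secret_type, qualifier)

    def hive_key(self, secret_type: str, qualifier: str = "") -> bytes:
        key_id = (secret_type, qualifier)
        if key_id not in self._hive_keys:          # create only if needed
            self._hive_keys[key_id] = secrets.token_bytes(32)
        return self._hive_keys[key_id]

core = ToyTrustedCore()
k1 = core.hive_key("third-party", "studio A")
k2 = core.hive_key("third-party", "studio A")   # same party: key is reused
k3 = core.hive_key("third-party", "studio B")   # distinct party: new key
```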
- FIG. 11 illustrates an exemplary process 400 for migrating secrets from a source computing device to a destination computing device.
- the process of FIG. 11 is carried out by the trusted cores on the two computing devices.
- the process of FIG. 11 is discussed with reference to components of FIG. 9 .
- a request to migrate or transfer secrets to a destination computing device is received at the source computing device (act 402 ).
- the trusted core on the source computing device determines whether/how to allow the transfer of secrets based on the type of secret (act 404 ). If the secret is a non-migrateable secret, then the trusted core does not allow the secret to be transferred or migrated (act 406 ).
- if the secret is a user-migrateable secret, then the trusted core obtains a user passphrase (act 408 ) and encrypts the hive key corresponding to the secret using the passphrase (act 410 ).
- the trusted core also authenticates the destination computing device as being trusted to receive the secret (act 412 ). If the destination computing device is not authenticated, then the trusted core does not transfer the encrypted hive key to the destination computing device. Assuming the destination computing device is authenticated, the encrypted hive key as well as the encrypted secret is received at the destination computing device (act 414 ), and the trusted core at the destination computing device also receives the passphrase from the user (act 416 ). The trusted core at the destination computing device uses the passphrase to decrypt the hive key (act 418 ), thereby allowing the trusted core to decrypt the encrypted secrets when requested.
- if the secret is a third party-migrateable secret, then the trusted core on the source computing device encrypts the hive key corresponding to the secret using the public key of the corresponding third party (act 420 ).
- the trusted core on the source computing device, or alternatively the third party corresponding to the hive key, also authenticates the destination computing device (act 422 ). If the destination computing device is not authenticated, then the trusted core does not transfer the encrypted hive key to the destination computing device (or alternatively, the third party does not decrypt the hive key). Assuming the destination computing device is authenticated, the encrypted hive key as well as the encrypted secret is received at the destination computing device (act 424 ). The trusted core at the destination computing device contacts the corresponding third party to decrypt the hive key (act 426 ), thereby allowing the trusted core to decrypt the encrypted secrets when requested.
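The branching in act 404 of FIG. 11 reduces to a three-way dispatch on the secret's type. The function name and the action strings are invented; they summarize the acts each branch leads to.

```python
def migration_action(secret_type: str) -> str:
    """Act 404 of FIG. 11: decide how (or whether) a secret's hive key
    may leave the source device, based on the type of secret."""
    if secret_type == "non-migrateable":
        return "deny"                                          # act 406
    if secret_type == "user-migrateable":
        return "wrap hive key with user passphrase"            # acts 408-410
    if secret_type == "third-party-migrateable":
        return "wrap hive key with third party's public key"   # act 420
    raise ValueError(f"unknown secret type: {secret_type}")
```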
- secure secret storage is maintained by allowing trusted processes to restrict whether and how trusted application secrets can be migrated to other computing devices, and by the trusted core enforcing such restrictions.
- migration of secrets to other computing devices is facilitated by the use of the gatekeeper storage key and hive keys, as only one or a few keys need to be moved in order to have access to the application secrets held by the source device.
- the use of hive keys to migrate secrets to other computing devices does not interfere with the ability of the trusted applications or the trusted core to authenticate itself to third parties.
- two situations may call for recovery of securely stored secrets: the first is the failure of the mass storage device that stores the trusted core (e.g., a hard disk) or of the operating system executing on the computing device, and the second is damage to the computing device sufficient to justify its replacement with a new computing device (e.g., a heavy object fell on the computing device, or a power surge destroyed one or more components).
- the contents of the mass storage device are backed up when the computing device is functioning properly.
- the mass storage device can be erased (e.g., formatted) or replaced, and the backed up data stored to the newly erased (or new) mass storage device.
- the computing device may have an associated “recovery” disk (or other media) that the manufacturer provides and that can be used to copy the trusted core from when recovering from a failure.
- the trusted core will have the same digest as the trusted core prior to the failure, so that the new trusted core will be able to decrypt the gatekeeper storage key and thus the trusted application secrets.
- the backing up of securely stored secrets is accomplished in a manner very similar to the migration of secrets from one computing device to another.
- the backing up is essentially migrating the trusted application secrets from a source computing device (the old, damaged device) to a destination computing device (the new, replacement device).
- Non-migrateable secrets are not backed up. This can be accomplished by the trusted core not allowing the non-migrateable secrets to be copied from the computing device, or not allowing the non-migrateable key to be copied from the computing device, when backing up data.
- User-migrateable secrets are backed up using a passphrase.
- a user passphrase(s) is obtained and used to encrypt the user-migrateable key(s), with the encrypted keys being stored on a backup medium (e.g., a removable storage medium such as a disk or tape, a remote device such as a file server, etc.).
- the user can copy the backed up encrypted trusted application secrets, as well as the user-migrateable key(s) encrypted to the passphrase(s), to any other device he or she desires. Then, by entering the passphrase(s) to the other device, the user can allow the trusted core to decrypt and retrieve the trusted application secrets.
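The passphrase-based backup of a user-migrateable key described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: all names are hypothetical, PBKDF2 derives the wrapping key from the passphrase, and a toy XOR keystream stands in for whatever cipher a real trusted core would use (e.g., an authenticated cipher).

```python
import hashlib
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher -- illustrative only, not a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def backup_user_migrateable_key(hive_key: bytes, passphrase: str) -> dict:
    """Encrypt a user-migrateable hive key to a passphrase-derived key,
    producing a record suitable for storage on a backup medium."""
    salt, nonce = os.urandom(16), os.urandom(16)
    wrap_key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return {"salt": salt, "nonce": nonce,
            "wrapped_key": _keystream_xor(wrap_key, nonce, hive_key)}

def restore_user_migrateable_key(backup: dict, passphrase: str) -> bytes:
    """On the destination device, re-derive the wrapping key from the
    entered passphrase and unwrap the hive key."""
    wrap_key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                   backup["salt"], 100_000)
    return _keystream_xor(wrap_key, backup["nonce"], backup["wrapped_key"])
```

With the hive key restored, the trusted core on the destination device can decrypt the backed-up trusted application secrets as described.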
- Third party-migrateable secrets are backed up using a public key(s) of the third party or parties responsible for controlling the migration of the secrets.
- the trusted core encrypts the third party-migrateable key(s) with the public key(s) of the corresponding third parties, and the encrypted keys are stored on a backup medium (e.g., a removable storage medium such as a disk or tape, a remote device such as a file server, etc.).
- the user can copy the backed up encrypted trusted application secrets to any other device he or she desires, and contact the appropriate third party or parties to decrypt the encrypted keys stored on the backup medium.
- the third party or parties decrypt the keys and return (typically in a secure manner) the third party-migrateable key(s) to the other computing device, which the trusted core can use to decrypt and retrieve the trusted application secrets.
- trusted processes are allowed to restrict whether and how trusted application secrets can be backed up, and the trusted core enforces such restrictions. Additionally, the backing up of secrets does not interfere with the ability of the trusted applications or the trusted core to authenticate itself to third parties.
- trusted application components and modules are more likely to be upgraded than are components and modules of the trusted core.
- Trusted applications frequently include various dynamic link libraries (DLL's), plug-ins, etc. and allow for different software configurations, each of which can alter the binaries which execute as the trusted application.
- Using a digest for the trusted application can thus be burdensome as the digest would be changing every time one of the binaries for the trusted application changes.
- a security model is defined for trusted applications that relies on manifests.
- a manifest is a policy statement which attempts to describe what types of binaries are allowed to be loaded into a process space for a trusted application.
- This process space is typically a virtual memory space, but alternatively may be a non-virtual memory space.
- the manifest specifies a set of binaries, is uniquely identifiable, and is used to gate access to secrets.
- Multiple manifests can be used in a computing device at any one time—one manifest may correspond to multiple different applications (sets of binaries), and one application (set of binaries) may correspond to multiple different manifests.
- FIG. 12 illustrates an exemplary manifest 450 corresponding to a trusted application.
- Manifest 450 can be created by anybody—there need not be any restrictions on who can create manifests. Certain trust models may insist on authorization by some given authority in order to generate manifests. However, this is not an inherent property of manifests, but a way of using them—in principle, no authorization is needed to create a manifest.
- Manifest 450 includes several portions: an identifier portion 452 made up of a triple (K, U, V), a signature portion 454 including a digital signature over manifest 450 (except for signature portion 454 ), a digest list portion 456 , an export statement list portion 458 , and a set of properties portion 460 .
- An entry point list 462 may optionally be included.
- Identifier portion 452 is an identifier of the manifest.
- the manifest identifier is a triple (K, U, V), in which K is a public key of a public-private key pair of the party that generates manifest 450 .
- U is an arbitrary identifier.
- U is a member of a set M u , where the exact definition of M u is dependent upon the specific implementation.
- One condition on set M u is that all of its elements have a finite representation (that is, M u is countable).
- M u could be, for example, the set of integers, the set of strings of finite length over the Latin alphabet, the set of rational numbers, etc.
- the value U is a friendly name or unique identifier of the party that generates manifest 450 .
- V is similar to U, and can be a member of a set M v having the same conditions as M u (which may be the same set that U is a member of, or alternatively a different set). Additionally, there is a (total or partial) order defined on the set M v (e.g., increasing numerical order, alphabetical order, or some arbitrarily defined order).
- V is the version number of manifest 450 .
- the trusted application corresponding to manifest 450 is identified by the triple in portion 452 .
- manifest identifier portion 452 is described herein primarily with reference to the triple (K, U, V). Alternatively, manifest identifiers may not include all three elements K, U, and V. For example, if version management is not needed, the V component can be omitted.
- manifest identifiers may also be used.
- any of a variety of conventional cryptographic hashing functions such as SHA-1 may be used to generate a hash of one or more portions of manifest 450 (e.g., portion 456 ).
- the resultant hash value can be used as the manifest identifier.
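The manifest layout of portions 452 through 460, together with the hash-based identifier alternative just described, can be sketched as a simple data structure. This is an illustrative sketch: the class and field names are hypothetical, and SHA-256 stands in for the SHA-1 example given in the text.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Manifest:
    # Identifier triple (K, U, V) -- portion 452: issuer public key,
    # arbitrary identifier, and version.
    K: bytes
    U: str
    V: int
    S: list = field(default_factory=list)        # inclusion list of cert hashes (portion 456)
    T: list = field(default_factory=list)        # exclusion list of cert hashes (portion 456)
    exports: list = field(default_factory=list)  # export statements (portion 458)
    properties: dict = field(default_factory=dict)  # portion 460
    signature: bytes = b""                       # portion 454

    def identifier(self) -> tuple:
        """The (K, U, V) triple identifying the manifest."""
        return (self.K, self.U, self.V)

    def hash_identifier(self) -> str:
        """Alternative identifier: a hash over the certificate-list
        portion 456, which inherently ties the lists to the identifier."""
        h = hashlib.sha256()
        for entry in self.S + self.T:
            h.update(entry)
        return h.hexdigest()
```

Because the hash identifier is derived only from portion 456, two manifests with the same S and T lists share it even if their (K, U, V) triples differ.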
- Signature portion 454 includes a digital signature over the portions of manifest 450 other than signature portion 454 (that is, portions 452 , 456 , 458 , and 460 ). Alternatively, one or more other portions of manifest 450 may also be excluded from being covered by the digital signature, such as portion 458 .
- the digital signature is generated by the party that generates manifest 450 , and is generated using the private key corresponding to the public key K in portion 452 .
- a device (such as a trusted core) can verify manifest 450 by checking the manifest signature 454 using the public key K. Additionally, this verification may be indirected through a certificate chain.
- a digital signature over a portion(s) of manifest 450 may not be included in manifest 450 .
- the digital signature in portion 454 serves to tie lists portion 456 to the manifest identifier.
- other mechanisms may be used to tie lists portion 456 to the manifest identifier. For example, if the manifest identifier is a hash value generated by hashing portion 456 , then the manifest identifier inherently ties lists portion 456 to the manifest identifier.
- Certificate lists 456 are two lists (referred to as S and T) of public key representations.
- lists 456 are each a list of certificate hashes.
- the S list is referred to as an inclusion list while the T list is referred to as an exclusion list.
- the certificate hashes are generated using any of a wide variety of conventional cryptographic hashing operations, such as SHA-1.
- List S is a list of hashes of certificates that certify the public key which corresponds to the private key that was used to sign the certificates in the chain that corresponds to the binaries that are authorized by manifest 450 to execute in the virtual memory space.
- a particular manufacturer may digitally sign multiple binaries using the same private key, and thus the single certificate that includes the public key corresponding to this private key may be used to authorize multiple binaries to execute in the virtual memory space.
- a manufacturer can generate an entirely new key for each binary, deleting the key after signing. This results in the same mechanism being used to identify a single, unique application as opposed to one from a family.
- the “hash-of-a-certificate” scheme is hence a very flexible scheme for describing applications or families of applications.
- List T is a list of hashes of certificates that certify the public key which corresponds to the private key that was used to sign the certificates in the chain that corresponds to the binaries that are not authorized by manifest 450 to execute in the virtual memory space.
- List T may also be referred to as a revocation list. Adding a particular certificate to list T thus allows manifest 450 to particularly identify one or more binaries that are not allowed to execute in the virtual memory space.
- the entries in list T override the entries in list S.
- in order for a binary to be authorized to execute in a virtual memory space corresponding to manifest 450 , the binary must have a certificate hash that is the same as a certificate hash in list S (or have a certificate that identifies a chain of one or more additional certificates, at least one of which is in list S) but is not the same as any certificate hash in list T.
- additionally, none of the certificates in the chain from the certificate in S to the leaf certificate that contains the hash of the binary can be contained in list T. If both of these conditions are not satisfied, then the binary is not authorized to execute in the virtual memory space corresponding to manifest 450 .
- the T list in conjunction with the S list, can be used flexibly. For example, given an inclusion of all applications certified by a given root in the inclusion list (S), the exclusion list (T) can be used to exclude one or more applications that are known to be vulnerable or have other bugs. Similarly, given a certification hierarchy, with the root certificate on the inclusion list (S), the exclusion list (T) can be used to remove one or more of the child keys in the hierarchy (and binaries certified by them) that have been compromised.
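The S/T gating rule above, where a binary's certificate chain must reach the inclusion list S while touching nothing in the exclusion list T (T overriding S), can be sketched as follows. Function names are hypothetical, and SHA-256 stands in for the SHA-1 example in the text.

```python
import hashlib

def cert_hash(cert: bytes) -> bytes:
    """Hash of a certificate, as stored in the S and T lists of portion 456."""
    return hashlib.sha256(cert).digest()

def binary_authorized(cert_chain: list, S: set, T: set) -> bool:
    """A binary is authorized iff at least one certificate in its chain
    hashes into the inclusion list S and no certificate in the chain
    hashes into the exclusion list T (entries in T override entries in S)."""
    hashes = [cert_hash(c) for c in cert_chain]
    if any(h in T for h in hashes):
        return False
    return any(h in S for h in hashes)
```

For example, placing a manufacturer's root certificate hash in S authorizes the whole family of binaries it certifies, while adding the hash of one compromised child certificate to T revokes just the binaries certified by that child.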
- the S and T lists may be the actual certificates that certify the public keys which correspond to the private keys that were used to sign the certificates in the chains that correspond to the binaries that are authorized by manifest 450 to execute (the S list) or not execute (the T list) in the virtual memory space.
- the S and T lists may be just the public keys which correspond to the private keys that were used to sign the certificates in the chains that correspond to the binaries that are authorized by manifest 450 to execute (the S list) or not execute (the T list) in the virtual memory space.
- Export statement list portion 458 includes a list of zero or more export statements that allow a trusted application secret associated with manifest 450 to be exported (migrated) to another trusted application on the same computing device.
- Each trusted application executing on a client computing device 102 of FIG. 1 has a corresponding manifest 450 , and thus each trusted application secret securely saved by the trusted application is associated with manifest 450 .
- Export statement list portion 458 allows the party that generates manifest 450 to identify the other trusted applications to which the trusted application secrets associated with manifest 450 can be exported and made available for retrieving.
- Each export statement includes a triple (A, B, S), where A is the identifier (K, U, V) of the source manifest, B is the identifier (K, U, V) of the destination manifest, and S is a digital signature over the source and destination manifest identifiers.
- B may identify a single destination manifest, or alternatively a set of destination manifests.
- a (possibly open) interval of V values may optionally be allowed (e.g., “version 3 and higher”, or “versions 2 through 5”).
- the digital signature S is made using the same private key as was used to sign manifest 450 (in order to generate the signature in portion 454 ).
- Export statements may be device-independent and thus not limited to being used on any particular computing device.
- an export statement may be device-specific, with the export statement being useable on only one particular computing device (or set of computing devices). This one particular computing device may be identified in different manners, such as via a hardware id or a cryptographic mechanism (e.g., the export statement may be encrypted using the public key associated with the particular computing device). If a hardware id is used to identify a particular computing device, the export statement includes an additional field which states the hardware id (thus, the issuer of the manifest could control on a very fine granularity who can move secrets).
- one or more export statements may be separate from, but associated with, manifest 450 .
- the party that generates manifest 450 may generate one or more export statements after manifest 450 is generated and distributed. These export statements are associated with the manifest 450 and thus have the same effect as if they were included in manifest 450 .
- a new trusted application may be developed after the manifest 450 is generated, but the issuer of the manifest 450 would like the new trusted application to have access to secrets from the application associated with the manifest 450 .
- the issuer of the manifest 450 can then distribute an export statement (e.g., along with the new trusted application or alternatively separately) allowing the secrets to be migrated to the new trusted application.
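The export statement triple (A, B, S) described above can be sketched as follows. This is an illustrative sketch with hypothetical names: an HMAC stands in for the issuer's public-key digital signature (which in the text must be made with the same private key used to sign the manifest).

```python
import hashlib
import hmac

def _sign(issuer_key: bytes, message: bytes) -> bytes:
    """Stand-in for the issuer's digital signature (real systems would
    sign with the private key matching the manifest's public key K)."""
    return hmac.new(issuer_key, message, hashlib.sha256).digest()

def make_export_statement(issuer_key: bytes, source_id: tuple, dest_id: tuple) -> tuple:
    """Export statement triple (A, B, S): source manifest identifier,
    destination manifest identifier, and a signature over both."""
    message = repr((source_id, dest_id)).encode()
    return (source_id, dest_id, _sign(issuer_key, message))

def verify_export_statement(issuer_key: bytes, statement: tuple) -> bool:
    """The trusted core accepts an export statement only if the signature
    over (A, B) verifies against the issuer's key."""
    source_id, dest_id, sig = statement
    message = repr((source_id, dest_id)).encode()
    return hmac.compare_digest(sig, _sign(issuer_key, message))
```

A statement distributed separately from the manifest (e.g., alongside a new trusted application) would be verified the same way before secrets are allowed to migrate.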
- the trusted core checks to ensure that the manifest identifier of the desired destination trusted application is included in export statement list portion 458 . If the manifest identifier of the desired destination trusted application is included in export statement list portion 458 , then the trusted core allows the destination trusted application to have access to the source trusted application secrets; otherwise, the trusted core does not allow the destination trusted application to have access to the source trusted application secrets. Thus, although a user may request that trusted application secrets be exported to another trusted application, the party that generates the manifest for the trusted application has control over whether the secrets can actually be exported to the other trusted application.
- Properties portion 460 identifies a set of zero or more properties for the manifest 450 and/or executing process corresponding to manifest 450 .
- Various properties can be included in portion 460 .
- Example properties include: whether the process is debuggable, whether to allow (or under what conditions to allow) additional binaries to be added to the virtual memory space after the process begins executing, whether to allow implicit upgrades to higher manifest version numbers (e.g., allow upgrades from one manifest to another based on the K and U values of identifier 452 , without regard for the V value), whether other processes (and what other processes) should have access to the virtual memory space of the process (e.g. to support secure shared memory), what/whether other resources should be shareable (e.g. “pipe” connections, mutexes (mutual exclusion objects), or other OS resources), and so forth.
- Entry point list 462 is optional and need not be included in manifest 450 .
- an entry point list is included in the binary or a certificate for the binary, and thus not included in manifest 450 .
- entry point list 462 may be included in manifest 450 .
- Entry point list 462 is a list of entry points into the executing process. Entry point list 462 is typically generated by the party that generates manifest 450 . These entry points can be stored in a variety of different manners, such as particular addresses (e.g., offsets relative to some starting location, such as the beginning address of a particular binary), names of functions or procedures, and so forth.
- entry points are the only points of the process that can be accessed by other processes (e.g., to invoke functions or methods of the process).
- the trusted core checks whether the particular address corresponds to an entry point in entry point list 462 . If the particular address does correspond to an entry point in entry point list 462 , then the access is allowed; otherwise, the trusted core denies the access.
- the manifest is used by the trusted core in controlling authentication of trusted application processes and access to securely stored secrets by trusted application processes executing on the client computing device.
- the trusted core (or any other entity) can refer to its identifier (the triple K, U, V).
- the trusted core exposes versions of the Seal, Unseal, Quote, and Unwrap operations analogous to those primitive operations discussed above, except that it is the trusted core that is exposing the operations rather than the underlying hardware of the computing device, and the parameters may vary.
- the versions of the Seal, Unseal, Quote, and Unwrap operations that are exposed by the trusted core and that can be invoked by the trusted application processes are as follows.
- the Seal operation exposed by the trusted core takes the following form:
- secret_key represents the K component of a manifest identifier
- identifier represents the U component of a manifest identifier
- version represents the V value of a manifest identifier
- secret_type is the type of secret to be stored (e.g., non-migrateable, user-migrateable, or third party-migrateable).
- the manifest identifier (the K, U, and V components) is a manifest identifier as described above (e.g., with reference to manifest 450 ).
- the K and U portions of the manifest identifier refer to the party that generated the manifest for the process storing the secret, while the V portion refers to the versions of the manifest that should be allowed to retrieve the secret.
- the (K,U,V) triple can be a list of such triples and the value V can be a range of values.
- the trusted core When the Seal operation is invoked, the trusted core encrypts the secret and optionally additional parameters of the operation using the appropriate hive key based on the secret_type.
- the encrypted secret is then stored by the trusted core in secret store 126 of FIG. 2 or 146 of FIG. 3 cryptographically bound to the associated rules (the list ⁇ (K,U,V) ⁇ ), or alternatively in some other location.
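The Seal behavior just described, encrypting the secret together with its {(K, U, V)} access list under the hive key selected by secret_type, can be sketched as follows. This is an illustrative sketch with hypothetical names: a toy XOR keystream stands in for the trusted core's real cipher, and the secret store is elided.

```python
import hashlib
import json
import os

# Hypothetical hive keys indexed by secret_type; a real trusted core
# holds and protects these keys itself.
HIVE_KEYS = {"non-migrateable": os.urandom(32),
             "user-migrateable": os.urandom(32),
             "third-party-migrateable": os.urandom(32)}

def _xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy keystream cipher -- illustrative only."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(secret: bytes, kuv_list: list, secret_type: str) -> bytes:
    """Seal: encrypt the secret cryptographically bound to the list of
    manifest identifiers allowed to retrieve it, under the hive key
    selected by secret_type."""
    payload = json.dumps({"secret": secret.hex(), "allowed": kuv_list}).encode()
    nonce = os.urandom(16)
    return nonce + _xor_stream(HIVE_KEYS[secret_type], nonce, payload)

def unseal_payload(ciphertext: bytes, secret_type: str) -> dict:
    """Inverse used by Unseal: recover the secret and its (K, U, V) list."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    return json.loads(_xor_stream(HIVE_KEYS[secret_type], nonce, body))
```

Because the access list rides inside the ciphertext, the trusted core can recover the sealer's rules at Unseal time before deciding whether to reveal the secret.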
- the Unseal operation exposed by the trusted core takes the following form:
- encrypted_secret represents the ciphertext that has encrypted in it the secret to be retrieved together with the (K, U, V) list that names the application(s) qualified to retrieve the secret.
- the trusted core obtains the encrypted secret and determines whether to reveal the secret to the requesting process.
- the trusted core reveals the secret to the requesting process under two different sets of conditions; if neither of these sets of conditions is satisfied then the trusted core does not reveal the secret to the requesting process.
- the first set of conditions is that the requesting process was initiated with a manifest that is properly formed and is included in the (K, U, V) list (or the K, U, V value) indicated by the sealer. This is the common case: An application can seal a secret naming its own manifest, or all possible future manifests from the same software vendor. In this case, the same application or any future application in the family has automatic access to its secrets.
- the second set of conditions allows a manifest issuer to make a specific allowance for other applications to have access to the secrets previously sealed with more restrictive conditions.
- This is managed by an export certificate, which provides an override that allows secrets to be migrated to other applications from other publishers not originally named in the (K, U, V) list of the sealer.
- export lists should originate from the publisher of the original manifest. This restriction is enforced by requiring that the publisher sign the export certificate with the key originally used to sign the manifest of the source application. This signature requirement may also be indirected through certificate chains.
- the trusted core is a) furnished with the manifest from the original publisher (i.e., the manifest issuer), b) furnished with the export certificate itself which is signed by the original publisher, and c) running a process that is deemed trustworthy in the export certificate. If all these requirements are met, the running process has access to the secrets sealed by the original process.
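The two sets of Unseal conditions above can be sketched as a single access decision. This is an illustrative sketch with hypothetical names; verification of the export certificate's signature against the original publisher's key (requirement (b) above) is elided for brevity.

```python
def unseal_allowed(requesting_id: tuple, allowed_list: list,
                   export_certs: tuple = ()) -> bool:
    """Decide whether the trusted core reveals a sealed secret.

    First set of conditions: the requester's manifest identifier is in
    the sealer's (K, U, V) list.
    Second set of conditions: an export certificate from the original
    issuer names the requester as a permitted destination, with a source
    that is in the sealer's list."""
    if requesting_id in allowed_list:
        return True
    return any(dest == requesting_id
               for (src, dest, sig) in export_certs
               if src in allowed_list)
```

In the common case the application seals a secret naming its own manifest, so the first branch suffices; the export-certificate branch only comes into play when secrets migrate to applications from other publishers.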
- the Quote and Unwrap operations provide a way for the trusted core to authenticate to a third party that it is executing a trusted application process with a manifest that meets certain requirements.
- the Unwrap operation uses ciphertext as its single parameter.
- a third (arbitrary) party initially generates a structure that includes five parts: a secret, a public_key K, an identifier U, a version V, and a hive_id.
- secret represents the secret to be revealed if the appropriate conditions are satisfied
- public_key K represents the public key of the party that needs to have digitally signed the manifest for the process
- identifier U is the identifier of the party that needs to have generated the manifest for the process
- version V is a set of zero or more acceptable versions of the manifest
- hive_id is the type of secret being revealed (e.g., non-migrateable, user-migrateable, or third party-migrateable).
- the party then encrypts this structure using the public key of the public-private key pair known to belong to a trustworthy trusted core (presumably because of certification of the public part of this key).
- the manner in which the trusted core gets this key is discussed in additional detail in U.S. patent application Ser. No. 09/227,611 entitled “Loading and Identifying a Digital Rights Management Operating System” and U.S. patent application Ser. No. 09/227,561 entitled “Digital Rights Management Operating System”.
- a trusted application receives the ciphertext generated by the third party and invokes the Unwrap operation exposed by the trusted core.
- the trusted core responds to the Unwrap operation by using its private key of the public-private key pair to decrypt the ciphertext received from the invoking party.
- the trusted core compares the conditions in or associated with the encrypted ciphertext to the manifest associated with the appropriate trusted application process.
- the appropriate trusted application process can be identified explicitly by the third party that generated the ciphertext being unwrapped, or alternatively inherently as the trusted application invoking the Unwrap operation (so the trusted core knows that whichever process invokes the Unwrap operation is the appropriate trusted application process). If the manifest associated with the process satisfies all of the conditions in the encrypted ciphertext, then the process is authorized to retrieve the secret, and the trusted core provides the secret to the process. However, if one or more of the conditions in the encrypted ciphertext are not satisfied by the manifest associated with the process, then the process is not authorized to retrieve the secret and the trusted core does not provide the secret to the process.
- the Unwrap operation may also have conditions on the data of the secret. If the conditions on the data (e.g., to verify its integrity) are not satisfied then the trusted core does not provide the secret to the process (even if the manifest conditions are satisfied).
- the encrypted secret may include both the data of the secret and a cryptographic hash of the data. The trusted core verifies the integrity of the data by hashing the data and verifying the resultant hash value.
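The Unwrap condition checking described above, after the trusted core has decrypted the structure with its private key, can be sketched as follows. This is an illustrative sketch with hypothetical names; the decryption step and the hive_id handling are elided.

```python
import hashlib

def unwrap_conditions_met(structure: dict, process_manifest: tuple,
                          data: bytes = None):
    """Compare the conditions embedded in the (already decrypted)
    structure against the manifest (K, U, V) of the appropriate trusted
    application process. Returns the secret if every condition holds,
    or None otherwise."""
    mK, mU, mV = process_manifest
    if (mK, mU) != (structure["K"], structure["U"]):
        return None  # wrong manifest issuer or identifier
    if structure["V"] and mV not in structure["V"]:
        return None  # manifest version not in the acceptable set
    if data is not None and \
            hashlib.sha256(data).digest() != structure.get("data_hash"):
        return None  # integrity condition on the secret's data failed
    return structure["secret"]
```

An empty version set is treated here as "any version"; a real implementation would follow whatever convention the third party and trusted core agree on.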
- the Unwrap operation, by naming the manifest or manifests of the application(s) allowed to decrypt the secret, allows a remote party to conveniently express that a secret should only be revealed to a certain application or set of applications on a particular host computer running a particular trusted core.
- An alternative technique is based on the use of the quote operation, which allows an application value to be cryptographically associated with the manifest of the application requesting the quote operation.
- the quote operation associates an application-supplied value with an identifier for the running software.
- the quote operation was implemented in hardware, and allowed the digest of the trusted core to be cryptographically associated with some trusted core-supplied data.
- the quote operation will generate a signed statement that a particular value X was supplied by a process running under a particular manifest (K, U, V), where the value X is an input parameter to the quote operation.
- the value X can be used as part of a more general authentication protocol.
- such a statement can be sent as part of a cryptographic interchange between a client and a server to allow the server to determine that the client it is talking to is a good device running a trusted core, and an application that it trusts before revealing any secret data to it.
- the requesting party can analyze the manifest and make its own determination of whether it is willing to trust the process.
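The quote operation's signed statement, binding a supplied value X to the manifest (K, U, V) of the running process, can be sketched as follows. This is an illustrative sketch with hypothetical names: an HMAC under a symmetric key stands in for the trusted core's real public-key signature.

```python
import hashlib
import hmac

# Hypothetical stand-in for the trusted core's signing key; a real
# trusted core would sign with a certified private key.
TRUSTED_CORE_KEY = b"trusted-core-signing-key"

def quote(value_x: bytes, manifest_id: tuple) -> dict:
    """Produce a signed statement that value X was supplied by a process
    running under manifest (K, U, V)."""
    message = repr((value_x, manifest_id)).encode()
    return {"value": value_x, "manifest": manifest_id,
            "sig": hmac.new(TRUSTED_CORE_KEY, message, hashlib.sha256).digest()}

def verify_quote(stmt: dict) -> bool:
    """A server verifies the statement before trusting the client,
    then inspects the quoted manifest to decide whether it trusts it."""
    message = repr((stmt["value"], stmt["manifest"])).encode()
    expected = hmac.new(TRUSTED_CORE_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(stmt["sig"], expected)
```

The value X would typically be a server-supplied nonce or key-exchange value, so the statement can slot into a larger authentication protocol.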
- FIG. 13 illustrates an exemplary process 500 for controlling execution of processes in an address space based on a manifest. The process of FIG. 13 is discussed with reference to components in FIG. 12 , and is implemented by a trusted core.
- a request to execute a process is received by the trusted core (act 502 ).
- This request may be received from a user or alternatively another process executing on the same client computing device as the trusted core or alternatively on another computing device in communication with the client computing device.
- a virtual memory space for the process is set up by the trusted core (act 504 ) and the binaries necessary to execute the process are loaded into the virtual memory space (act 506 ). It should be noted that, in act 506 , the binaries are loaded into the memory space but execution of the binaries has not yet begun.
- the trusted core then initializes the environment and obtains a manifest for the process (act 508 ). Typically, the manifest is provided to the trusted core as part of the request to execute the process.
- the trusted core checks whether all of the loaded binaries are consistent with the manifest (act 510 ). In one implementation, this check for consistency involves verifying that the certificate (or certificate hash) of each binary is in the S list in portion 456 of manifest 450 , and that certificates (or certificate hashes) for none of the binaries are in the T list in portion 456 . This certificate verification may be indirected through a certificate chain. If the loaded binaries are not consistent with the manifest (e.g., at least one is not in the S list and/or at least one is in the T list), then process 500 fails—the requested process is not executed (act 512 ).
- the trusted core allows the processor to execute the binaries in the virtual memory space (act 514 ).
- Execution of the loaded binaries typically is triggered by an explicit request from an outside entity (e.g. another process).
- a request may be subsequently received, typically from the executing process or some other process, to load an additional binary into the virtual memory space.
- the trusted core continues executing the process if no such request is received (acts 514 and 516 ). However, when such a request is received, the trusted core checks whether the additional binary is consistent with manifest 450 (act 518 ).
- Consistency in act 518 is determined in the same manner as act 510 —the additional binary is consistent with manifest 450 if its certificate (or certificate hash) is in the S list in portion 456 of manifest 450 and is not in the T list in portion 456 .
- if the additional binary is not consistent with manifest 450 , then the additional binary is not loaded into the virtual memory space or allowed to execute, and processing continues to act 514 . However, if the additional binary is consistent with manifest 450 , then the additional binary is loaded into the virtual memory space (act 520 ), and processing of the binaries (including the additional binary) continues.
- the manifest can be obtained prior to loading the binaries into the virtual memory space (e.g., provided as part of the initial request to execute a trusted process in act 502 ). In this case, each request to load a binary is checked against the manifest. Binaries which are not allowed by the manifest are not loaded into the virtual memory space, whereas binaries that are allowed are loaded into the virtual memory space.
- FIG. 14 illustrates an exemplary process 540 for upgrading to a new version of a trusted application.
- the process of FIG. 14 is discussed with reference to components in FIG. 12 , and is implemented by a computing device (typically other than the client computing device).
- the upgraded version of a trusted application is prepared by the same party that prepared the previous version of the trusted application.
- a trusted application upgrade request is received along with one or more new components or modules (e.g., binaries) for the trusted application to be upgraded (act 542 ).
- These new components or modules may replace previous versions of the components or modules in the previous version of the process, or alternatively may be new components or modules that have no counterpart in the previous version.
- a party begins generating a new manifest 450 ′ for the new version of the trusted application including a new triple (K′, U′, V′) identifier for the new version and appropriate certificate hashes (or alternatively certificates) in the appropriate S and T lists in portion 456 (act 544 ).
- the K′ and U′ parts of the triple will be the same as the K and U parts of the triple identifier of the previous version, so that only V and V′ differ (that is, only the versions in the identifier differ).
- the new manifest 450 ′ is then made available to the client computing device(s) where the new version of the trusted application is to be executed (act 546 ).
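The versioning rule of FIG. 14, where an upgraded manifest keeps K and U and changes only V, combined with the "allow implicit upgrades" property from portion 460, can be sketched as a small check. Names are hypothetical and versions are assumed to be integers under the usual order.

```python
def implicit_upgrade_allowed(old_id: tuple, new_id: tuple,
                             allow_implicit: bool) -> bool:
    """An implicit upgrade is permitted only when the manifest property
    allows it, K and U are unchanged, and the new version V' is higher
    under the order defined on the version set."""
    (K, U, V), (K2, U2, V2) = old_id, new_id
    return allow_implicit and K == K2 and U == U2 and V2 > V
```

When this check passes, secrets sealed under the old manifest need not be migrated, since the new manifest is just a new version of the old one.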
- the first situation is where some binaries for the application are changed, added, and/or removed, but the old manifest allows the new binaries to be loaded and loading the old binaries is not considered to harm security.
- the manifest does not have to change at all and no secrets have to be migrated. The user simply installs the new binaries on his or her machine and they are allowed to execute.
- the second situation is where some binaries are changed, added, and/or removed, and the old manifest is no longer acceptable because some of the old binaries (which can still be loaded under the old manifest) compromise security and/or some of the changed or new binaries cannot be loaded under the old manifest.
- the issuer of the old manifest decides to issue a new manifest with the same K,U.
- the software manufacturer produces new binaries. These new binaries are digitally signed (certificates are issued) and a new manifest is created. This new manifest (via its S and T lists) allows the new binaries to be executed but does not allow the old binaries to be executed (at least not the binaries that compromise security).
- a user then receives all three things (the new binaries, the certificates for the new binaries, and the new manifest) and installs all three on his or her machine. Secrets do not have to be migrated, because the new manifest is just a new version of the old one. The new binaries are allowed to execute, but the old binaries are not.
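- This second situation can be sketched as follows. The dictionaries below are illustrative assumptions: the patent stores secrets via the trusted core and expresses allowed binaries via the S and T lists, not Python structures.

```python
import hashlib

def digest(b):
    return hashlib.sha256(b).hexdigest()

old_binary = b"player v1 code with a security hole"
new_binary = b"player v2 code with the hole fixed"

# Old and new manifests for the same application (same K and U; only V
# differs). Each "allowed" set stands in for the manifest's S and T lists,
# naming the binaries that manifest permits to execute.
old_manifest = {"K": "pub", "U": "player", "V": 1,
                "allowed": {digest(old_binary)}}
new_manifest = {"K": "pub", "U": "player", "V": 2,
                "allowed": {digest(new_binary)}}

# Under the new manifest the new binary loads, but the old (insecure)
# binary can no longer be executed.
assert digest(new_binary) in new_manifest["allowed"]
assert digest(old_binary) not in new_manifest["allowed"]

# Secrets associated with the application's unchanged (K, U) identity
# survive the upgrade as-is, so nothing needs to be migrated.
secret_store = {("pub", "player"): b"content-decryption-key"}
key = (new_manifest["K"], new_manifest["U"])
assert secret_store[key] == b"content-decryption-key"
```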
- the third situation is where secrets have to be migrated between different applications that are not versions of each other. This situation is handled as described above regarding export statements.
- secure secret storage is maintained by the trusted core imposing restrictions, based on the manifests, on which trusted processes can retrieve particular secrets.
- the manifests also provide a way for trusted applications to be authenticated to remote parties.
- FIG. 15 illustrates a general exemplary computer environment 600 , which can be used to implement various devices and processes described herein.
- the computer environment 600 is only one example of a computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the computer and network architectures. Neither should the computer environment 600 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computer environment 600 .
- Computer environment 600 includes a general-purpose computing device in the form of a computer 602 .
- Computer 602 can be, for example, a client computing device 102 or server device 104 of FIG. 1 , a device used to generate a trusted application or manifest, etc.
- the components of computer 602 can include, but are not limited to, one or more processors or processing units 604 , a system memory 606 , and a system bus 608 that couples various system components including the processor 604 to the system memory 606 .
- the system bus 608 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
- bus architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus.
- Computer 602 typically includes a variety of computer readable media. Such media can be any available media that is accessible by computer 602 and includes both volatile and non-volatile media, removable and non-removable media.
- Computer 602 may also include other removable/non-removable, volatile/non-volatile computer storage media.
- FIG. 15 illustrates a hard disk drive 616 for reading from and writing to a non-removable, non-volatile magnetic media (not shown), a magnetic disk drive 618 for reading from and writing to a removable, non-volatile magnetic disk 620 (e.g., a “floppy disk”), and an optical disc drive 622 for reading from and/or writing to a removable, non-volatile optical disc 624 such as a CD-ROM, DVD-ROM, or other optical media.
- the hard disk drive 616 , magnetic disk drive 618 , and optical disc drive 622 are each connected to the system bus 608 by one or more data media interfaces 626 .
- Alternatively, the hard disk drive 616, magnetic disk drive 618, and optical disc drive 622 can be connected to the system bus 608 by one or more interfaces (not shown).
- the various drives and their associated computer storage media provide non-volatile storage of computer readable instructions, data structures, program modules, and other data for computer 602 .
- Although a hard disk 616, a removable magnetic disk 620, and a removable optical disc 624 are illustrated, other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile discs (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like, can also be utilized to implement the exemplary computing system and environment.
- Any number of program modules can be stored on the hard disk 616 , magnetic disk 620 , optical disc 624 , ROM 612 , and/or RAM 610 , including by way of example, an operating system 626 , one or more application programs 628 (e.g., trusted applications), other program modules 630 , and program data 632 .
- an operating system 626 may implement all or part of the resident components that support the distributed file system.
- a user can enter commands and information into computer 602 via input devices such as a keyboard 634 and a pointing device 636 (e.g., a “mouse”).
- Other input devices 638 may include a microphone, joystick, game pad, satellite dish, serial port, scanner, and/or the like.
- These and other input devices are connected to the processing unit 604 via input/output interfaces 640 that are coupled to the system bus 608, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).
- a monitor 642 or other type of display device can also be connected to the system bus 608 via an interface, such as a video adapter 644 .
- other output peripheral devices can include components such as speakers (not shown) and a printer 646 which can be connected to computer 602 via the input/output interfaces 640 .
- Computer 602 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 648 .
- the remote computing device 648 can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and the like.
- the remote computing device 648 is illustrated as a portable computer that can include many or all of the elements and features described herein relative to computer 602 .
- Logical connections between computer 602 and the remote computer 648 are depicted as a local area network (LAN) 650 and a general wide area network (WAN) 652 .
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
- When implemented in a LAN networking environment, the computer 602 is connected to a local network 650 via a network interface or adapter 654. When implemented in a WAN networking environment, the computer 602 typically includes a modem 656 or other means for establishing communications over the wide area network 652.
- The modem 656, which can be internal or external to computer 602, can be connected to the system bus 608 via the input/output interfaces 640 or other appropriate mechanisms. It is to be appreciated that the illustrated network connections are exemplary and that other means of establishing communication link(s) between the computers 602 and 648 can be employed.
- remote application programs 658 reside on a memory device of remote computer 648 .
- application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 602 , and are executed by the data processor(s) of the computer.
- Computer 602 typically includes at least some form of computer readable media.
- Computer readable media can be any available media that can be accessed by computer 602 .
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by computer 602 .
- Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- The invention may be implemented in hardware or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) could be designed or programmed to carry out the invention.
- Thus, a security model for a trusted environment has been described in which secrets can be securely stored for trusted applications and in which the trusted applications can be authenticated to remote parties. These properties of the trusted environment are maintained, even though various parts of the environment may be upgraded or changed in a controlled way on the same computing device or migrated to a different computing device.
Abstract
Manifest-based trusted agent management in a trusted operating system environment includes receiving a request to execute a process and setting up a virtual memory space for the process. Additionally, a manifest corresponding to the process is accessed, and which of a plurality of binaries can be executed in the virtual memory space is limited based on indicators of the binaries that are included in the manifest.
Description
- This application is a continuation of U.S. patent application Ser. No. 09/993,370, filed Nov. 16, 2001, which is hereby incorporated by reference herein.
- This invention relates to trusted environments generally, and more particularly to manifest-based trusted agent management in a trusted operating system environment.
- Having people be able to trust computers has become an increasingly important goal. This trust generally focuses on the ability to trust the computer to use the information it stores or receives correctly. Exactly what this trust entails can vary based on the circumstances. For example, multimedia content providers would like to be able to trust computers to not improperly copy their content. By way of another example, users would like to be able to trust their computers to forward confidential financial information (e.g., bank account numbers) only to appropriate destinations (e.g., allow the information to be passed to their bank, but nowhere else). Unfortunately, given the generally open nature of most computers, a wide range of applications can be run on most current computers without the user's knowledge, and these applications can compromise this trust (e.g., forward the user's financial information to some other destination for malicious use).
- To address these trust issues, different mechanisms have been proposed (and new mechanisms are being developed) that allow a computer or portions thereof to be trusted. Generally, these mechanisms entail some sort of authentication procedure where the computer can authenticate or certify that at least a portion of it (e.g., certain areas of memory, certain applications, etc.) are at least as trustworthy as they present themselves to be (e.g., that the computer or application actually is what it claims to be). In other words, these mechanisms prevent a malicious application from impersonating another application (or allowing a computer to impersonate another computer). Once such a mechanism can be established, the user or others (e.g., content providers) can make a judgment as to whether or not to accept a particular platform and application as trustworthy (e.g., a multimedia content provider may accept a particular application as being trustworthy, once the computer can certify to the content provider's satisfaction that the particular application is the application it claims to be).
- Oftentimes, components and modules of an application are allowed to be changed (e.g., in response to user preferences) and/or upgraded fairly frequently. For example, applications frequently include various dynamic link libraries (DLL's), plug-ins, etc. and allow for different software configurations, each of which can alter the binaries which execute as the application. Currently, it is difficult (if possible at all) in many systems to allow for such changes and differing configurations of applications, while at the same time maintaining the trustworthiness of the computer. Thus, it would be beneficial to have a security model that allows for these differences and changes, while at the same time maintaining the trustworthiness of the computer. The manifest-based trusted agent management in a trusted operating system environment described herein provides such a security model.
- Manifest-based trusted agent management in a trusted operating system environment is described herein.
- According to one aspect, a request to execute a process is received and a virtual memory space for the process is set up. A manifest corresponding to the process is accessed, and which of a plurality of binaries can be executed in the virtual memory space is limited based on indicators of the binaries that are included in the manifest.
- According to another aspect, a manifest includes a first portion including data representing a unique identifier of the trusted application, a second portion including data indicating whether a particular one or more binaries can be loaded into the process space for the trusted application, and a third portion derived from the data in both the first portion and the second portion by generating a digital signature over the first and second portions. The manifest can also include a portion that includes data representing a list of one or more export statements that allow a secret associated with the trusted application to be exported to another trusted application, a portion that includes data representing a set of properties corresponding to the data structure, and a portion that includes data representing a list of entry points into the executing trusted application.
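- The portions described above can be sketched as a data structure. The field names are assumptions, and an HMAC stands in for the publisher's digital signature (a real manifest would carry a public-key signature over the first and second portions):

```python
import hashlib
import hmac
from dataclasses import dataclass, field

@dataclass
class Manifest:
    app_id: tuple                 # first portion: unique identifier of the application
    allowed_binaries: frozenset   # second portion: digests of loadable binaries
    exports: list = field(default_factory=list)      # export statements
    properties: dict = field(default_factory=dict)   # properties of the structure
    entry_points: list = field(default_factory=list) # entry points into the application
    signature: bytes = b""        # third portion: derived from the first two

    def signing_input(self):
        # The signature covers the data of the first and second portions.
        return repr((self.app_id, sorted(self.allowed_binaries))).encode()

def sign(manifest, key):
    # HMAC used as a stand-in for a real public-key digital signature.
    manifest.signature = hmac.new(key, manifest.signing_input(),
                                  hashlib.sha256).digest()

def verify(manifest, key):
    expected = hmac.new(key, manifest.signing_input(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(manifest.signature, expected)

m = Manifest(app_id=("K", "U", 1),
             allowed_binaries=frozenset({hashlib.sha256(b"bin").hexdigest()}))
sign(m, b"publisher-key")
assert verify(m, b"publisher-key")

# Tampering with the second portion invalidates the third portion.
m.allowed_binaries = frozenset({hashlib.sha256(b"other").hexdigest()})
assert not verify(m, b"publisher-key")
```

Because the third portion is derived from the first two, neither the identifier nor the list of loadable binaries can be altered without the change being detectable.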
- FIG. 1 illustrates an exemplary trusted operating system environment.
- FIG. 2 illustrates one exemplary architecture that can be implemented on a client computing device.
- FIG. 3 illustrates another exemplary architecture that can be used with the invention.
- FIG. 4 illustrates an exemplary relationship between a gatekeeper storage key and trusted application secrets.
- FIG. 5 illustrates an exemplary process for securely storing secrets using a gatekeeper storage key.
- FIG. 6 illustrates an exemplary upgrade from one trusted core to another trusted core on the same client computing device.
- FIG. 7 illustrates an exemplary process for upgrading a trusted core.
- FIG. 8 illustrates another exemplary process for upgrading a trusted core.
- FIG. 9 illustrates an exemplary secret storage architecture employing hive keys.
- FIG. 10 illustrates an exemplary process for securely storing secrets using hive keys.
- FIG. 11 illustrates an exemplary process for migrating secrets from a source computing device to a destination computing device.
- FIG. 12 illustrates an exemplary manifest corresponding to a trusted application.
- FIG. 13 illustrates an exemplary process for controlling execution of processes in an address space based on a manifest.
- FIG. 14 illustrates an exemplary process for upgrading to a new version of a trusted application.
- FIG. 15 illustrates a general exemplary computer environment, which can be used to implement various devices and processes described herein.
- As used herein, code being “trusted” refers to code that is immutable in nature and immutable in identity. Code that is trusted is immune to being tampered with by other parts (e.g., code) of the computer, and it can be reliably and unambiguously identified. In other words, any other entity or component asking “who is this code” can be told “this is code xyz”, and can be assured both that the code is indeed code xyz (rather than some imposter) and that code xyz is unadulterated. Trust does not deal with any quality or usefulness aspects of the code—only immutability of nature and immutability of identity.
- Additionally, the execution environment of the trusted code affects the overall security. The execution environment includes the machine or machine class on which the code is executing.
- General Operating Environment
- FIG. 1 illustrates an exemplary trusted operating system environment 100. In environment 100, multiple client computing devices 102 are coupled to multiple server computing devices 104 via a network 106. Network 106 is intended to represent any of a wide variety of conventional network topologies and types (including wired and/or wireless networks), employing any of a wide variety of conventional network protocols (including public and/or proprietary protocols). Network 106 may include, for example, the Internet as well as possibly at least portions of one or more local area networks (LANs).
- Computing devices 102 and 104 can each be any of a wide variety of conventional computing devices.
- Each of client computing devices 102 includes a secure operating system (OS) 108. Secure operating system 108 is designed to provide a level of trust to users of client devices 102 as well as server devices 104 that are in communication with client devices 102 via a network 106. Secure operating system 108 can be designed in different ways to provide such trust, as discussed in more detail below. By providing this trust, the user of device 102 and/or the server devices 104 can be assured that secure operating system 108 will use data appropriately and take various measures to protect that data.
- Each of client computing devices 102 may also execute one or more trusted applications (also referred to as trusted agents or processes) 110. Each trusted application is software (or alternatively firmware) that is made up of multiple instructions to be executed by a processor(s) of device 102. Oftentimes a trusted application is made up of multiple individual files (also referred to as binaries) that together include the instructions that comprise the trusted application.
- One example of the usage of environment 100 is to maintain rights to digital content, often referred to as “digital rights management”. A client device 102 may obtain digital content (e.g., a movie, song, electronic book, etc.) from a server device 104. Secure operating system 108 on client device 102 assures server device 104 that operating system 108 will not use the digital content inappropriately (e.g., will not communicate copies of the digital content to other devices) and will take steps to protect the digital content (e.g., will not allow unauthorized applications to access decrypted content).
- Another example of the usage of environment 100 is for electronic commerce (also referred to as e-commerce). A client device 102 may communicate with a server device 104 and exchange confidential financial information (e.g., to purchase or sell a product or service, to perform banking operations such as withdrawal or transfer of funds, etc.). Secure operating system 108 on the client device 102 assures server device 104, as well as the user of client device 102, that it will not use the financial information inappropriately (e.g., will not steal account numbers or funds) and will take steps to protect the financial information (e.g., will not allow unauthorized applications to access decrypted content).
- Secure operating system 108 may be employed to maintain various secrets for different trusted applications 110 executing on client devices 102. For example, confidential information may be encrypted by a trusted application 110 and a key used for this encryption securely stored by secure operating system 108. By way of another example, the confidential information itself may be passed to secure operating system 108 for secure storage.
- There are two primary functions that secure operating system 108 provides: (1) the ability to securely store secrets for trusted applications 110; and (2) the ability to allow trusted applications 110 to authenticate themselves. The secure storage of secrets allows trusted applications 110 to save secrets to secure operating system 108 and subsequently retrieve those secrets so long as neither the trusted application 110 nor operating system 108 has been altered. If either the trusted application 110 or the operating system 108 has been altered (e.g., by a malicious user or application in an attempt to subvert the security of operating system 108), then the secrets are not retrievable by the altered application and/or operating system. A secret refers to any type of data that the trusted application does not want to make publicly available, such as an encryption key, a user password, a password to access a remote computing device, digital content (e.g., a movie, a song, an electronic book, etc.) or a key(s) used to encrypt the digital content, financial data (e.g., account numbers, personal identification numbers (PINs), account balances, etc.), and so forth.
- The ability for a trusted application 110 to authenticate itself allows the trusted application to authenticate itself to a third party (e.g., a server device 104). This allows, for example, a server device 104 to be assured that it is communicating digital content to a trusted content player executing on a trusted operating system, or for the server device 104 to be assured that it is communicating with a trusted e-commerce application on the client device rather than with a virus (or some other malicious or untrusted application).
- Various concerns exist for the upgrading, migrating, and backing up of various components of the client devices 102. As discussed in more detail below, the security model discussed herein provides for authentication and secret storage in a trusted operating system environment, while at the same time allowing one or more of:
- secure operating system upgrades
- migration of secrets to other computing devices
- backup of secrets
- trusted application upgrades
- Reference is made herein to encrypting data using a key. Generally, encryption refers to a process in which the data to be encrypted (often referred to as plaintext) is input to an encryption algorithm that operates, using a key (commonly referred to as the encryption key), on the plaintext to generate ciphertext. Encryption algorithms are designed so that it is extremely difficult to re-generate the plaintext without knowing a decryption key (which may be the same as the encryption key, or alternatively a different key). A variety of conventional encryption algorithms can be used, such as DES (Data Encryption Standard), RSA (Rivest, Shamir, Adleman), RC4 (Rivest Cipher 4), RC5 (Rivest Cipher 5), etc.
- One type of encryption uses a public-private key pair. The public-private key pair includes two keys (one private key and one public key) that are selected so that it is relatively straight-forward to decrypt the ciphertext if both keys are known, but extremely difficult to decrypt the ciphertext if only one (or neither) of the keys is known. Additionally, the encryption algorithm is designed and the keys selected such that it is extremely difficult to determine one of the keys based on the ciphertext alone and/or only one key.
- The owner of a public-private key pair typically makes its public key publicly available, but keeps its private key secret. Any party or component desiring to encrypt data for the owner can encrypt the data using the owner's public key, thus allowing only the owner (who possesses the corresponding private key) to readily decrypt the data. The key pair can also be used for the owner to digitally sign data. In order to add a digital signature to data, the owner encrypts the data using the owner's private key and makes the resultant ciphertext available with the digitally signed data. A recipient of the digitally signed data can decrypt the ciphertext using the owner's public key and compare the decrypted data to the data sent by the owner to verify that the owner did in fact generate that data (and that it has not been altered since being generated).
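- The sign-and-verify round trip described above can be illustrated with textbook RSA on deliberately tiny numbers. This is purely illustrative: real digital signatures use large keys, message digests, and padding schemes.

```python
# Textbook RSA with tiny primes, to show the sign/verify round trip.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (made publicly available)
d = pow(e, -1, phi)       # private exponent (kept secret by the owner)

message = 65              # stands in for a small digest of the data

# The owner "encrypts" with the private key to produce the signature;
# any recipient applies the public key and compares the result to the
# data to verify that the owner generated it.
signature = pow(message, d, n)
recovered = pow(signature, e, n)
assert recovered == message
```

The asymmetry is the point: knowing only (e, n) lets anyone check a signature, but producing one requires d.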
- The discussions herein assume a basic understanding of cryptography. For a basic introduction of cryptography, the reader is directed to a text written by Bruce Schneier and entitled “Applied Cryptography: Protocols, Algorithms, and Source Code in C,” published by John Wiley & Sons with copyright 1994 (or second edition with copyright 1996).
- Exemplary Computing Device Architectures
- Secure operating system 108 of FIG. 1 includes at least a portion that is trusted code, referred to as the “trusted core”. The trusted core may be a full operating system, a microkernel, a Hypervisor, or some smaller component that provides specific security services.
- FIG. 2 illustrates one exemplary architecture that can be implemented on a client computing device 102. In FIG. 2, the trusted core is implemented by taking advantage of different privilege levels of the processor(s) of the client computing device 102 (e.g., referred to as “rings” in an x86 architecture processor). In the illustrated example, these privilege levels are referred to as rings, although alternate implementations using different processor architectures may use different nomenclature. The multiple rings provide a set of prioritized levels that software can execute at, often including four levels (Rings 0 through 3). Ring 0 is typically referred to as the most privileged ring. Software processes executing in Ring 0 can typically access more features (e.g., instructions) than processes executing in less privileged rings.
- Furthermore, a processor executing in a particular ring cannot alter code or data in a higher priority ring. In the illustrated example, a trusted core 120 executes in Ring 0, while an operating system 122 executes in Ring 1 and trusted applications 124 execute in Ring 3. Thus, trusted core 120 operates at a more privileged level and can control the execution of operating system 122 from this level. Additionally, the code and/or data of trusted core 120 (executing in Ring 0) cannot be altered directly by operating system 122 (executing in Ring 1) or trusted applications 124 (executing in Ring 3). Rather, any such alterations would have to be made by the operating system 122 or a trusted application 124 requesting trusted core 120 to make the alteration (e.g., by sending a message to trusted core 120, invoking a function of trusted core 120, etc.).
- Trusted core 120 also maintains a secret store 126 where secrets passed to and encrypted by trusted core 120 (e.g., originating with trusted applications 124, OS 122, or trusted core 120) are securely stored. The storage of secrets is discussed in more detail below.
- A cryptographic measure of trusted core 120 is also generated when it is loaded into the memory of computing device 102 and stored in a digest register of the hardware. In one implementation, the digest register is designed to be written to only once after each time the computing device is reset, thereby preventing a malicious user or application from overwriting the digest of the trusted core. This cryptographic measure can be generated by different components, such as a security processor of computing device 102, a trusted BIOS, etc. The cryptographic measure provides a small (relative to the size of the trusted core) measure of the trusted core that can be used to verify the trusted core that is loaded. Given the nature of the cryptographic measure, it is most likely that any changes made to a trusted core (e.g., to circumvent its trustworthiness) will be reflected in the cryptographic measure, so that the altered core and the original core will produce different cryptographic measures. This cryptographic measure is used as a basis for securely storing data, as discussed in more detail below.
- An alternative cryptographic measure to a digest, is the public key of a properly formed certificate on the digest. Using this technique, a publisher can generate a sequence of trusted-cores that are treated as identical or equivalent by the platform (e.g., based on the public key of the publisher). The platform refers to the basic hardware of the computing device (e.g., processor and chipset) as well as the firmware associated with this hardware (e.g., microcode in the processor and/or chipset).
- Alternatively, the operating system may be separated into a memory manager component that operates as trusted
core 120 with the remainder of the operating system operating as OS 122. The trusted core 120 then controls all page maps and is thus able to shield trusted agents executing in Ring 3 from other components (including OS 122). In this alternative, additional control is also added to protect the trusted core 120 from other busmasters that do not obey ring privileges. -
FIG. 3 illustrates another exemplary architecture that can be used with the invention. InFIG. 3 , the trusted core is implemented by establishing two separate “spaces” within aclient computing device 102 ofFIG. 1 : a trusted space 140 (also referred to as a protected parallel area, or curtained memory) and a normal (untrusted)space 142. These spaces can be, for example, one or more address ranges withincomputing device 102. Both trusted space 140 andnormal space 142 include a user space and a kernel space, with the trustedcore 144 andsecret store 146 being implemented in the kernel space of trusted space 140. A cryptographic measure, such as a digest, of trustedcore 144 is also generated and used analogous to the cryptographic measure of trustedcore 120 discussed above. - A variety of trusted applets, trusted applications, and/or trusted
agents 148 can execute within the user space of trusted space 140, under the control of trustedcore 144. However, anyapplication 150,operating system 152, ordevice driver 154 executing innormal space 142 is prevented, by trustedcore 144, from accessing trusted space 140. Thus, no alterations can be made to trusted applications or data in trusted space 140 unless approved by trustedcore 144. - Additional information regarding these computing device architectures can be found in the following four U.S. Patent Applications, each of which is hereby incorporated by reference: U.S. patent application Ser. No. 09/227,611, entitled “Loading and Identifying a Digital Rights Management Operating System”, which was filed Jan. 8, 1999, in the names of Paul England et al.; U.S. patent application Ser. No. 09/227,561, entitled “Digital Rights Management Operating System”, which was filed Jan. 8, 1999, in the names of Paul England et al.; U.S. patent application Ser. No. 09/287,393, entitled “Secure Execution of Program Code”, which was filed Apr. 6, 1999, in the names of Paul England et al.; and U.S. patent application Ser. No. 09/287,698, entitled “Hierarchical Trusted Code for Content Protection in Computers”, which was filed Apr. 6, 1999, in the name of Paul England.
- For ease of explanation, the digest of a trusted core is discussed herein as a single digest of the trusted core. However, in different implementations, the digest may be made up of multiple parts. By way of example, the boot process may involve a trusted BIOS loading a platform portion of the trusted core and generating a digest of the platform portion. The platform portion in turn loads an operating system portion of the trusted core and generates a digest for the operating system portion. The operating system portion in turn loads a gatekeeper portion of the trusted core and generates a digest for the gatekeeper portion. A composite of these multiple generated digests is used as the digest of the trusted core. These multiple generated digests may be stored individually in separate digest registers with the composite of the digests being the concatenation of the different register values. Alternatively, each new digest may be used to generate a new digest value by generating a cryptographic hash of the previous digest value concatenated with the new digest—the last new digest value generated (e.g., by the operating system portion) is stored in a single digest register.
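The chained variant, in which each newly measured portion is folded into a single register by hashing the previous register value concatenated with the new digest, can be sketched as follows. The helper name and the all-zero reset value are assumptions for illustration.

```python
import hashlib

def extend(register_value: bytes, new_digest: bytes) -> bytes:
    # Fold a newly measured component into the register: the new register
    # value is the hash of the previous value concatenated with the new digest.
    return hashlib.sha1(register_value + new_digest).digest()

stages = [b"platform portion", b"operating system portion", b"gatekeeper portion"]
register_value = b"\x00" * 20  # assumed register contents after reset
for image in stages:
    register_value = extend(register_value, hashlib.sha1(image).digest())
```

Note that the final value depends on both the digests and the order in which the portions were loaded, so the single register still captures the entire boot sequence.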
- Primitive Operations
- Two fundamental types of primitive operations are supported by the hardware and software of a
client computing device 102 of FIG. 1 . These fundamental types are secret storage primitives and authentication primitives. The hardware of a device 102 makes these primitive operations available to the trusted core executing on the device 102, and the trusted core makes variations of these primitive operations available to the trusted applications executing on the device 102. - Two secret storage primitive operations are supported: Seal and Unseal. The Seal primitive operation uses at least two parameters—one parameter is the secret that is to be securely stored and the other parameter is an identification of the module or component that is to be able to subsequently retrieve the secret. In one implementation, the Seal primitive operation provided by the hardware of client computing device 102 (e.g., by a cryptographic or security processor of device 102) takes the following form:
- Seal (secret, digest_to_unseal, current_digest) where secret represents the secret to be securely stored, digest_to_unseal represents a cryptographic digest of the trusted core that is authorized to subsequently retrieve the secret, and current_digest represents a cryptographic digest of the trusted core at the time the Seal operation was invoked. The current_digest is automatically supplied by the security processor as the value in the digest register of the
device 102 rather than being explicitly settable as an external parameter (thereby removing the possibility that the module or component invoking the Seal operation provides an inaccurate current_digest). - When the Seal primitive operation is invoked, the security processor encrypts the parameters provided (e.g., secret, digest_to_unseal, and current_digest). Alternatively, the digest_to_unseal (and optionally the current_digest as well) may not be encrypted, but rather stored in non-encrypted form and a correspondence maintained between the encrypted secret and the digest_to_unseal. By not encrypting the digest_to_unseal, comparisons performed in response to the Unseal primitive operation discussed below can be carried out without decrypting the ciphertext.
- The security processor can encrypt the data of the Seal operation in any of a wide variety of conventional manners. For example, the security processor may have an individual key that it keeps secret and divulges to no component or module, and/or a public-private key pair. The security processor could use the individual key, the public key from its public-private key pair, or a combination thereof. The security processor can use any of a wide variety of conventional encryption algorithms to encrypt the data. The resultant ciphertext is then stored as a secret (e.g., in
secret store 126 ofFIG. 2 or 146 ofFIG. 3 ). - The Unseal primitive operation is the converse of the Seal primitive operation, and takes as a single parameter the ciphertext produced by an earlier Seal operation. The security processor obtains the cryptographic digest of the trusted core currently executing on the computing device and also obtains the digest_to_unseal. If the digest_to_unseal exists in a non-encrypted state (e.g., associated with the ciphertext, but not encrypted as part of the ciphertext), then this non-encrypted version of the digest_to_unseal is obtained by the security processor. However, if no such non-encrypted version of the digest_to_unseal exists, then the security processor decrypts the ciphertext to obtain the digest_to_unseal.
- Once the digest_to_unseal and the cryptographic digest of the trusted core currently executing on the computing device are both obtained, the security processor compares the two digests to determine if they are the same. If the two digests are identical, then the trusted core currently executing on the computing device is authorized to retrieve the secret, and the security processor returns the secret (decrypting the secret, if it has not already been decrypted) to the component or module invoking the Unseal operation. However, if the two digests are not identical, then the trusted core currently executing on the computing device is not authorized to retrieve the secret and the security processor does not return the secret (e.g., returning a “fail” notification). Note that failures of the Unseal operation will also occur if the ciphertext was generated on a different platform (e.g., a computing device using a different platform firmware) using a different encryption or integrity key, or if the ciphertext was generated by some other process (although the security processor may decrypt the secret and make it available to the trusted core, the trusted core would not return the secret to the other process).
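The Seal/Unseal pair can be sketched as follows. This is an illustrative model only: the toy keystream cipher stands in for whatever conventional encryption algorithm the security processor actually uses, and the class and helper names are hypothetical.

```python
import hashlib
import os

def _xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy SHA-1-based keystream (encrypt == decrypt); a stand-in for the
    # processor's real encryption algorithm, not a secure cipher.
    pad = b""
    counter = 0
    while len(pad) < len(data):
        pad += hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, pad))

class SecurityProcessor:
    def __init__(self, digest_register: bytes):
        self._key = os.urandom(20)              # individual key, never divulged
        self.digest_register = digest_register  # digest of the running trusted core

    def seal(self, secret: bytes, digest_to_unseal: bytes) -> bytes:
        # current_digest is read from the digest register, never from the caller.
        blob = (len(secret).to_bytes(2, "big") + secret +
                digest_to_unseal + self.digest_register)
        return _xor_stream(self._key, blob)

    def unseal(self, ciphertext: bytes) -> bytes:
        blob = _xor_stream(self._key, ciphertext)
        n = int.from_bytes(blob[:2], "big")
        secret, digest_to_unseal = blob[2:2 + n], blob[2 + n:2 + n + 20]
        if digest_to_unseal != self.digest_register:
            raise PermissionError("fail")       # running core is not authorized
        return secret
```

A core whose digest matches the digest_to_unseal recovers the secret; any altered core presents a different register value and receives only the "fail" notification.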
- Two authentication primitive operations are also supported: Quote and Unwrap (also referred to as PK_Unseal). The Quote primitive takes one parameter, and causes the security processor to generate a signed statement associating the supplied parameter with the digest of the currently running trusted core. In one implementation, the security processor generates a certificate that includes the public key of a public-private key pair of the security processor as well as the digest of the currently running trusted core and the external parameter. The security processor then digitally signs this certificate and returns it to the component or module (and possibly ultimately to a remote third party), which can use the public key in the certificate to verify the signature.
- The Unwrap or PK_Unseal primitive operation has ciphertext as its single parameter. The party invoking the Unwrap or PK_Unseal operation initially generates a structure that includes two parts—a secret and a digest_to_unseal. The party then encrypts this structure using the public key of a public-private key pair of the security processor on the
client computing device 102. The security processor responds to the Unwrap or PK_Unseal primitive operation by using its private key of the public-private key pair to decrypt the ciphertext received from the invoking party. Similar to the Unseal primitive operation discussed above, the security processor compares the digest of the trusted core currently running on the client computing device 102 to the digest_to_unseal from the decrypted ciphertext. If the two digests are identical, then the trusted core currently executing on the computing device is authorized to retrieve the secret, and the security processor provides the secret to the trusted core. However, if the two digests are not identical, then the trusted core currently executing on the computing device is not authorized to retrieve the secret and the security processor does not provide the secret to the trusted core (e.g., instead providing a “fail” notification). - Both Quote and Unwrap can be used as part of a cryptographic protocol that allows a remote party to be assured that he is communicating with a trusted platform running a specific piece of trusted core software (by knowing its digest).
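The Quote primitive can be sketched in the same spirit. Here an HMAC stands in for the security processor's public-key signature, so what the sketch shows is the structure of the signed statement (the digest of the running core bound to the caller-supplied parameter), not the real cryptography; all names are hypothetical.

```python
import hashlib
import hmac
import json

def quote(processor_key: bytes, digest_register: bytes, param: bytes) -> dict:
    # Bind the caller-supplied parameter to the digest of the running trusted
    # core, then "sign" the pair (HMAC as a stand-in for a real signature).
    statement = {"core_digest": digest_register.hex(), "param": param.hex()}
    payload = json.dumps(statement, sort_keys=True).encode()
    statement["signature"] = hmac.new(processor_key, payload, hashlib.sha1).hexdigest()
    return statement
```

A remote party supplying a fresh nonce as the parameter can check the signature and then compare the reported core digest against digests it considers trustworthy.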
- Gatekeeper Storage Key and Trusted Core Updates
- Secret use and storage by trusted applications executing on a
client computing device 102 of FIG. 1 is based on a key generated by the trusted core, referred to as the gatekeeper storage key (GSK). The gatekeeper storage key is used to facilitate upgrading of the secure part of the operating system (the trusted core) and also to reduce the frequency with which the hardware Seal primitive operation is invoked. The gatekeeper storage key is generated by the trusted core and then securely stored using the Seal operation with the digest of the trusted core itself being the digest_to_unseal (this is also referred to as sealing the gatekeeper storage key to the trusted core, with that digest as the digest_to_unseal). Securely storing the gatekeeper storage key using the Seal operation allows the trusted core to retrieve the gatekeeper storage key when the trusted core is subsequently re-booted (assuming that the trusted core has not been altered, and thus that its digest has not been altered). The trusted core should not disclose the GSK to any other parties, except under the strict rules detailed below. - The gatekeeper storage key is used as a root key to securely store any trusted application, trusted core, or other operating system secrets. A trusted application desiring to store data as a secret invokes a software implementation of Seal supported by the trusted core (e.g., exposed by the trusted core via an application programming interface (API)). The trusted core encrypts the received trusted application secret using an encryption algorithm that uses the gatekeeper storage key as its encryption key. Any of a wide variety of conventional encryption algorithms can be used. The encrypted secret is then stored by the trusted core (e.g., in
secret store 126 ofFIG. 2 ,secret store 146 ofFIG. 3 , or alternatively elsewhere (typically, but not necessarily, on the client device)). - When the trusted application desires to subsequently retrieve the stored secret, the trusted application invokes an Unseal operation supported by the trusted core (e.g., exposed by the trusted core via an API) and based on the GSK as the encryption key. The trusted core determines whether to allow the trusted application to retrieve the secret based on information the trusted core has about the trusted application that saved the secret as well as the trusted application that is requesting the secret. Retrieval of secrets is discussed in more detail below with reference to manifests.
- Thus, the gatekeeper storage key allows multiple trusted application secrets to be securely stored without the Seal operation of the hardware being invoked a corresponding number of times. However, security of the trusted application secrets is still maintained because a mischievous trusted core will not be able to decrypt the trusted application secrets (it will not be able to recover the gatekeeper storage key that was used to encrypt the trusted application secrets, and thus will not be able to decrypt the encrypted trusted application secrets).
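The software Seal/Unseal that the trusted core exposes over the gatekeeper storage key can be sketched as follows. All names are hypothetical, the toy keystream stands in for a conventional encryption algorithm, and a simple owner check stands in for the manifest-based policy discussed later.

```python
import hashlib
import os

def _xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (encrypt == decrypt); illustration only.
    pad = b""
    counter = 0
    while len(pad) < len(data):
        pad += hashlib.sha1(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, pad))

class TrustedCore:
    def __init__(self):
        self.gsk = os.urandom(20)  # generated once; sealed to this core's digest
        self._store = {}           # stands in for the secret store

    def seal_for_app(self, app_id: str, secret: bytes) -> int:
        # Many application secrets are encrypted under the single GSK, so the
        # hardware Seal primitive need not be invoked once per secret.
        handle = len(self._store)
        self._store[handle] = (app_id, _xor_stream(self.gsk, secret))
        return handle

    def unseal_for_app(self, app_id: str, handle: int) -> bytes:
        owner, ciphertext = self._store[handle]
        if owner != app_id:        # simplified stand-in for the manifest check
            raise PermissionError("fail")
        return _xor_stream(self.gsk, ciphertext)
```

Only the hardware Seal of the GSK itself touches the security processor; every per-application secret is handled in software under that one root key.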
-
FIG. 4 illustrates an exemplary relationship between the gatekeeper storage key and trusted application secrets. A single gatekeeper storage key 180 is a root key, and multiple (n) trusted application secrets are encrypted using key 180. Each trusted application secret has a corresponding policy statement. -
FIG. 5 illustrates anexemplary process 200 for securely storing secrets using a gatekeeper storage key. The process ofFIG. 5 is carried out by the trusted core of a client computing device, and may be performed in software. - The first time the trusted core is booted, a gatekeeper storage key is obtained (act 202) and optionally sealed, using a cryptographic measure of the trusted core, to the trusted core (act 204). The gatekeeper storage key may not be sealed, depending on the manner in which the gatekeeper storage keys are generated, as discussed in more detail below. Eventually, a request to store a secret is received by the trusted core from a trusted application (act 206). The trusted core uses the gatekeeper storage key to encrypt the trusted application secret (act 208), and stores the encrypted secret.
- The gatekeeper storage key can be generated in a variety of different manners. In one implementation, the trusted core generates a gatekeeper storage key by generating a random number (or pseudo-random number) and uses a seal primitive to save and protect it between reboots. This generated gatekeeper storage key can also be transferred to other computing devices under certain circumstances, as discussed in more detail below. In another implementation, platform firmware on a computing device generates a gatekeeper storage key according to a particular procedure that allows any previous gatekeeper storage keys to be obtained by the trusted core, but does not allow the trusted core to obtain any future gatekeeper storage keys; in this case an explicit seal/unseal step need not be performed.
- With this secret storage structure based on the gatekeeper storage key, the trusted core on the client computing device may be upgraded to a new trusted core and these secrets maintained.
FIG. 6 illustrates an exemplary upgrade from one trusted core to another trusted core on the same client computing device. - The initial trusted core executing on the client computing device is trusted core(0) 230, which is to be upgraded to trusted core(1) 232.
Trusted core 230 includes (or corresponds to) a certificate 234, a public key 236, and a gatekeeper storage key 238 (GSK0). Public key 236 is the public key of a public-private key pair of the component or device that is the source of trusted core 230 (e.g., the manufacturer of trusted core 230). Certificate 234 is digitally signed by the source of trusted core 230, and includes the digest 240 of trusted core 230. Similarly, trusted core 232 includes (or corresponds to) a certificate 242 including a digest 244, and a public key 246. After trusted core 230 is upgraded to trusted core 232, trusted core 232 will also include a gatekeeper storage key 248 (GSK1), as well as gatekeeper storage key 238 (GSK0). Optionally, trusted cores 230 and 232 include version identifiers. -
FIG. 7 illustrates anexemplary process 270 for upgrading a trusted core which uses the seal/unseal primitives. The process ofFIG. 7 is carried out by the two trusted cores. The process ofFIG. 7 is discussed with reference to components ofFIG. 6 . For ease of explanation, the acts performed by the initial trusted core (trusted core(0)) are on the left-hand side ofFIG. 7 and the acts performed by the new trusted core (trusted core(1)) are on the right-hand side ofFIG. 7 . - Initially, a request to upgrade trusted core(0) to trusted core(1) is received (act 272). The upgrade request is accompanied by the certificate belonging to the proposed upgrade trusted core (trusted core (1)). Trusted core(0) verifies the digest of proposed-upgraded trusted core(1) (act 274), such as by using
public key 246 to verify certificate 242. Trusted core(0) also optionally checks whether one or more other upgrade conditions are satisfied (act 276). Any of a variety of upgrade conditions may be imposed. In one implementation, trusted core(0) imposes the restriction that trusted cores are upgraded in strictly increasing version numbers and are signed by the same certification authority as the one that certified the currently running trusted core (or alternatively signed by some other key known by the currently running trusted core to be held by a trusted publisher). Thus, version 0 can only be replaced by version 1, version 1 can only be replaced by version 2, and so forth. In most cases, it is also desirable to allow version 0 to be upgraded to version 2 in a single step (e.g., without having to be upgraded to version 1 in between). However, it is generally not desirable to allow “downgrades” to earlier versions (e.g., earlier versions may have more security vulnerabilities).
act 276 determines that the various conditions (including the verification of the digest in act 274) are not satisfied, then the upgrade process fails and the trusted core refuses to seal the gatekeeper storage key to the prospective-newer trusted core (act 278). Thus, even if the prospective-newer trusted core were to be installed on the computing device, it would not have access to any secrets stored by trusted core(0). However, if the various conditions are satisfied, then the upgrade process is authorized to proceed and trusted core(0) uses the Seal primitive operation to seal gatekeeper storage key 238 to the digest of trusted core(1) as stated in the certificate received in act 272 (act 280). In sealing the GSK 238 to the digest of trusted core(1), trusted core(0) uses the Seal operation with digest 244 being the digest_to_unseal parameter. - Once the Seal operation is completed, trusted core(1) may be loaded and booted. This may be an automated step (e.g., performed by trusted core(0)), or alternatively a manual step performed by a user or system administrator.
- Once trusted core(1) is loaded and booted, trusted core(1) obtains the sealed gatekeeper storage key 238 (act 282). Trusted core(1) unseals gatekeeper storage key 238 (act 284), which it is able to successfully do as its digest 244 matches the digest_to_unseal parameter used to seal
gatekeeper storage key 238. Trusted core(1) then generates its own gatekeeper storage key 248 (act 286) and seals gatekeeperstorage key 248 to the trusted core(1) digest (act 288), thereby allowinggatekeeper storage key 248 to be subsequently retrieved by trusted core(1). Trusted core (1) may also optionally sealgatekeeper storage key 238 to the trusted core(1) digest. For subsequent requests by trusted applications to store secrets, trusted core(1) usesgatekeeper storage key 248 to securely store the secrets (act 290). For subsequent requests by trusted applications to retrieve secrets, trusted core(1) usesgatekeeper storage key 238 to retrieve old secrets (secrets that were sealed by trusted core(0)), and usesgatekeeper storage key 248 to retrieve new secrets (secrets that were sealed by trusted core(1)) (act 292). - Returning to
FIG. 5 , another way in which the gatekeeper storage key may be obtained (act 202) is by having the platform generate a set of one or more keys to be used as gatekeeper storage keys. By way of example, the platform can generate a set of gatekeeper storage keys (SK) for trusted cores according to the following calculation:
SK_n = SHA-1(cat(BK, public_key, n)), for n = 0 to N
where BK is a unique platform key called a binding key which is not disclosed to other parties, and is only used for the generation of keys as described above, public_key represents the public key of the party that generated the trusted core for which the gatekeeper storage keys are being generated, and N represents the version number of the trusted core. When booting a particular trusted core “n”, the platform generates the family of keys from 1 to n and provides them to trusted core “n.” Each time trusted core n boots, it has access to all secrets stored with key n (which is used as a GSK). But additionally, it has access to all secrets stored with previous versions of the trusted core, because the platform has provided the trusted core with all of the earlier keys. - It should be noted, however, that the core cannot get access to secrets stored by future trusted cores because trusted core “n” obtains the family of
keys 1 to n from the platform, but does not obtain key n+1 or any other keys beyond n. Additionally, secrets available to each family of trusted cores (identified by the public key of the signer of the trusted cores) are inaccessible to cores generated by a different software publisher that does not have access to the private key used to generate the certificates. The certificates are provided along with the trusted core (e.g., shipped by the publisher along with the trusted core), allowing the platform to generate gatekeeper storage keys for that publisher's trusted cores (based on the publisher's public key). -
FIG. 8 illustrates anexemplary process 300 for upgrading a trusted core which uses the family-based set of platform-generated gatekeeper storage keys. The process ofFIG. 8 is carried out by the trusted core and the platform. For ease of explanation, the acts performed by the trusted core are on the left-hand side ofFIG. 8 and the acts performed by the platform are on the right-hand side ofFIG. 8 . - Initially, trusted core (n) requests a set of keys from the platform (act 302). This request is typically issued when trusted core (n) is booted. In response to the request, the platform generates a set of keys from 1 to n (act 304) and returns the set of keys to trusted core (n) (act 306). Trusted core (n) eventually receives requests to store and/or retrieve secrets, and uses the received set of keys to store and retrieve such secrets. Trusted core (n) uses key (n) as the gatekeeper storage key to store and retrieve any new secrets (act 308), and uses key (n-a) as the gatekeeper storage key to retrieve any old secrets stored by a previous trusted core (n-a)(act 310).
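The platform-side derivation described above, SK_n = SHA-1(cat(BK, public_key, n)), can be sketched as follows. The 4-byte encoding of n is an assumption, since the concatenation format is not pinned down; the function name is hypothetical.

```python
import hashlib

def gsk_family(binding_key: bytes, publisher_public_key: bytes, n: int) -> list:
    # SK_i = SHA-1(cat(BK, public_key, i)) for i = 1..n: a booting trusted
    # core "n" receives keys 1..n and can never derive key n+1 itself.
    return [
        hashlib.sha1(binding_key + publisher_public_key + i.to_bytes(4, "big")).digest()
        for i in range(1, n + 1)
    ]
```

Because the derivation is deterministic in BK, the publisher's public key, and the version number, the keys handed to version 3 begin with exactly the keys handed to version 2, which is what lets an upgraded core read its predecessors' secrets, while a different publisher's cores receive an entirely disjoint family.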
- It should be noted that the process of
FIG. 8 is the process performed by a trusted core when it executes, regardless of whether it is a newly upgraded-to trusted core or a trusted core that has been installed and running for an extended period of time. Requests to upgrade to new trusted cores can still be received and upgrades can still occur with the process of FIG. 8 , but sealing of a gatekeeper storage key to the digest of the new trusted core need not be performed. - Following a successful upgrade (regardless of the manner in which gatekeeper storage keys are obtained by the trusted cores), trusted core (1) has a storage facility (GSK1) that allows it to store new secrets that will be inaccessible to trusted core (0), and yet still has access to the secrets stored by trusted core (0) by virtue of its access to GSK0. Furthermore, a user can still boot the older trusted core (0) and have access to secrets that it has stored, and yet not have access to newer secrets obtained or generated by trusted core (1).
- Alternatively, rather than a single gatekeeper storage key, multiple gatekeeper storage keys may be used by a computing device. These additional second-level gatekeeper storage key(s) may be used during normal operation of the device, or alternatively only during the upgrade process. Using multiple gatekeeper storage keys allows trusted applications to prevent their secrets from being available to an upgraded trusted core. Some trusted applications may allow their secrets to be available to an upgraded trusted core, whereas other trusted applications may prevent their secrets from being available to the upgraded trusted core. Additionally, a particular trusted application may allow some of its secrets to be available to the upgraded trusted core, but not other secrets. In one implementation, when a trusted application stores a secret it indicates to the trusted core whether the secret should be accessible to an upgraded trusted core, and this indication is saved as part of the policy corresponding to the secret (e.g.,
a policy of FIG. 4 ). The family of second-level gatekeeper storage keys can be generated randomly and held encrypted by the root (sealed) gatekeeper storage key. During the trusted core upgrade process, only those trusted application secrets that are to be accessible to an upgraded trusted core are encrypted so as to be retrievable by the upgraded trusted core. For example, the trusted core being upgraded can generate a temporary gatekeeper storage key and encrypt a subset of the trusted application secrets (all of the secrets that are to be retrievable by the upgraded trusted core) using the temporary gatekeeper storage key. The temporary gatekeeper storage key is then sealed to the digest of the new trusted core, but the other gatekeeper storage key used by the trusted core is not sealed to the digest of the new trusted core. Thus, when the new trusted core is loaded and booted, the new trusted core will be able to retrieve the temporary gatekeeper storage key and thus retrieve all of the trusted application secrets that were saved using the temporary gatekeeper storage key, but not trusted application secrets that were saved using the other gatekeeper storage key. - Thus, the trusted core upgrade process allows the new upgraded trusted core to access secrets that were securely stored by the previous trusted core(s), as the new upgraded trusted core has access to the gatekeeper storage key used by the previous trusted core(s). However, any other core (e.g., a mischievous core) would not have the same digest as the new upgraded trusted core, or would not have a valid certificate (digitally signed with the private key of the publisher of the new upgraded trusted core) with the public key of the publisher of the new upgraded trusted core, and thus would not have access to the secrets.
Furthermore, if a previous trusted core were to be loaded and executed after secrets were stored by the new upgraded trusted core, the previous trusted core would not have access to the secrets stored by the new upgraded trusted core because the previous trusted core is not able to retrieve the gatekeeper storage key of the new upgraded trusted core. Additionally, the trusted core upgrade process allows the new upgraded trusted core to be authenticated to third parties. The security processor uses the digest of the new upgraded trusted core in performing any Quote or Unwrap/PK_Unseal primitive operations.
- Hive Keys and Secret Migration
- Secret use and storage by trusted applications executing on a
client computing device 102 ofFIG. 1 can be further based on multiple additional keys referred to as “hive” keys. The hive keys are used to facilitate migrating of trusted application secrets from one computing device to another computing device. In one implementation, up to three different types or classes of secrets can be securely stored: non-migrateable secrets, user-migrateable secrets, and third party-migrateable secrets. One or more hive keys may be used in acomputing device 102 for each type of secret. Trusted application secrets are securely stored by encrypting the secrets using one of these hive keys. Which type of secret is being stored (and thus which hive key to use) is identified by the trusted application when storing the secret (e.g., is a parameter of the seal operation that the trusted core makes available to the trusted applications). Whether a particular trusted application secret can be migrated to another computing device is dependent on which type of secret it is. -
FIG. 9 illustrates an exemplary secret storage architecture employing hive keys. A rootgatekeeper storage key 320 and three types of hive keys are included: anon-migrateable key 322, one or more user-migrateable keys 324, and one or more third party-migrateable keys 326. Non-migrateabletrusted application secrets 328 are encrypted by the trusted core usingnon-migrateable key 322, user-migrateable trustedapplication secrets 330 are encrypted by the trusted core using user-migrateable key 324, and third party-migrateable secrets 332 are encrypted by the trusted core using third party-migrateable key 326. - Each of the
hive keys 322, 324, and 326 is encrypted using gatekeeper storage key 320, and the encrypted ciphertext is stored. Thus, so long as the trusted core can retrieve gatekeeper storage key 320, it can decrypt the hive keys and thus the trusted application secrets 328, 330, and 332. -
Non-migrateable secrets 328 are unconditionally non-migrateable—they cannot be transferred to another computing device.Non-migrateable secrets 328 are encrypted by an encryption algorithm that uses, as an encryption key,non-migrateable key 322. The trusted core will not divulge non-migrateable key 322 to another computing device, so no other device will be able to decrypt trustedapplication secrets 328. However, an upgraded trusted core (executing on the same computing device) may still be able to access trustedapplication secrets 328 because, as discussed above, the upgraded trusted core will be able to retrievegatekeeper storage key 320. Although only a singlenon-migrateable key 322 is illustrated, alternatively multiple non-migrateable keys may be used. - User-
migrateable secrets 330 can be migrated/transferred to another computing device, but only under the control or direction of the user. User-migrateable key 324 can be transferred, under the control or direction of the user, to another computing device. The encrypted trusted application secrets 330 can also be transferred to the other computing device which, so long as the trusted core of the other computing device has user-migrateable key 324, can decrypt trusted application secrets 330. - Multiple user-
migrateable keys 324 may be used. For example, each trusted application that stores user-migrateable secrets may use a different user-migrateable key (thereby allowing the migration of secrets for different trusted applications to be controlled separately), or a single trusted application may use different user-migrateable keys for different ones of its secrets. Which user-migrateable key 324 to use to encrypt a particular trusted application secret is identified by the trusted application when requesting secure storage of the secret. - In one implementation, this user control is created by use of a passphrase. The user can input his or her own passphrase on the source computing device, or alternatively the trusted core executing on the source computing device may generate a passphrase and provide it to the user. The trusted core encrypts user-
migrateable key 324 to the passphrase, using the passphrase as the encryption key. The ciphertext that is the encrypted trusted application secrets 330 can be transferred to the destination computing device in any of a variety of manners (e.g., copied onto a removable storage medium (e.g., optical or magnetic disk) and the medium moved to and inserted into the destination computing device, copied via a network connection, etc.). - The user also inputs the passphrase (regardless of who/what created the passphrase) into the destination computing device. The encrypted user-
migrateable key 324 can then be decrypted by the trusted core at the destination computing device using the passphrase. The trusted core at the destination device can then encrypt user-migrateable key 324 using the gatekeeper storage key of the trusted core at the destination device. Given user-migrateable key 324, the trusted core at the destination device is able to retrieve secrets securely stored using key 324, assuming that the trusted core executing on the destination device is not a different trusted core than (or an earlier version of) the trusted core executing on the source device. The retrieval of secrets is based on a manifest, as discussed in more detail below. - The trusted core also typically authenticates the destination computing device before allowing the encrypted user-
migrateable key 324 to be transferred to the destination computing device. Alternatively, at the user's discretion, authentication of the destination computing device may not be performed. The trusted core may perform the authentication itself, or alternatively rely on another party (e.g., a remote authentication party trusted by the trusted core) to perform the authentication or assist in the authentication. - The destination computing device can be authenticated in a variety of different manners. In one implementation, the quote and/or pk_unseal operations are used to verify that the trusted core executing on the destination computing device is the same as or is known to the trusted core executing on the source computing device (e.g., identified as or determined to be trustworthy to the trusted core on the source computing device). The authentication may also involve checking a list of “untrustworthy” certificates (e.g., a revocation list) to verify that the trusted core on the destination computing device (based on its certificate) has not been identified as being untrustworthy (e.g., broken by a mischievous user). The authentication may also optionally include, analogous to verifying the trustworthiness of the trusted core on the destination computing device, verifying the trustworthiness of the destination computing device hardware (e.g., based on a certificate of the hardware or platform), as well as verifying the trustworthiness of one or more trusted applications executing on the destination computing device.
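The passphrase-based wrapping of the user-migrateable key described above can be sketched as follows. This is an illustrative sketch only: the function names are hypothetical, PBKDF2 stands in for whatever key derivation the trusted core actually uses, and the SHA-256 counter keystream is a stand-in for a real authenticated cipher.

```python
import hashlib
import hmac
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter keystream (illustration only;
    a real trusted core would use an authenticated cipher)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap_key_with_passphrase(passphrase: str, hive_key: bytes) -> dict:
    """Source device: encrypt a user-migrateable hive key to a passphrase
    by deriving a key-encryption key from the passphrase."""
    salt, nonce = os.urandom(16), os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    ciphertext = _keystream_xor(kek, nonce, hive_key)
    tag = hmac.new(kek, nonce + ciphertext, hashlib.sha256).hexdigest()
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext, "tag": tag}

def unwrap_key_with_passphrase(passphrase: str, blob: dict) -> bytes:
    """Destination device: recover the hive key by re-entering the passphrase."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), blob["salt"], 100_000)
    expected = hmac.new(kek, blob["nonce"] + blob["ciphertext"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("wrong passphrase or corrupted blob")
    return _keystream_xor(kek, blob["nonce"], blob["ciphertext"])
```

Only the wrapped key and the encrypted secrets travel to the destination device; the passphrase itself is conveyed out of band by the user.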
- Third party-
migrateable secrets 332 can be migrated/transferred to another computing device, but only under the control or direction of a third party. This third party could be the party that provided the secret to the trusted application, or alternatively could be another party (such as a party that agrees to operate as a controller/manager of how data is migrated amongst devices). Examples of third party control include keys that control access to premium content (e.g., movies), which may be licensed to several of a user's devices, and yet not freely movable to any other device, or credentials used to log on to a corporate LAN (Local Area Network), which can be moved, but only under the control of the LAN administrator. This third party could also be another device, such as a smartcard that tracks and limits the number of times the secret is migrated. Third party-migrateable key 326 can be transferred, under the control or direction of the third party, to another computing device. The encrypted trusted application secrets 332 can also be transferred to the other computing device which, so long as the trusted core of the other computing device has third party-migrateable key 326, can decrypt trusted application secrets 332 (assuming that the trusted core executing on the destination device is not a different trusted core than (or an earlier version of) the trusted core executing on the source device). - In one implementation, this third party control is created by use of a public-private key pair associated with the third party responsible for controlling migration of secrets amongst machines. Multiple such third parties may exist, each having its own public-private key pair and each having its own corresponding third party-
migrateable key 326. Each third party-migrateable key 326 has a corresponding certificate 334 that includes the public key of the corresponding third party. Each time that a trusted application requests secure storage of a third party-migrateable secret, the trusted application identifies the third party that is responsible for controlling migration of the secret. If a key 326 already exists for the identified third party, then that key is used to encrypt the secret. However, if no such key already exists, then a new key corresponding to the identified third party is generated, added as one of keys 326, and is used to encrypt the secret. - In order to migrate a third party-migrateable secret, the trusted core encrypts the third party-
migrateable key 326 used to encrypt that secret with the public key of the certificate 334 corresponding to the key 326. The ciphertext that is the encrypted trusted application secrets 332 can be transferred to the destination computing device in any of a variety of manners (e.g., copied onto a removable storage medium (e.g., optical or magnetic disk) and the medium moved to and inserted into the destination computing device, copied via a network connection, etc.). The encrypted third party-migrateable key 326 is also transferred to the destination computing device, and may be transferred along with (or alternatively separately from) the encrypted trusted application secrets 332. - The trusted core executing on the source computing device, or alternatively the third party corresponding to the encrypted third party-migrateable key, also typically authenticates the destination computing device before allowing the encrypted third party-
migrateable key 326 to be transferred to the destination computing device. Alternatively, at the discretion of the third party corresponding to the encrypted third party-migrateable key, authentication of the destination computing device may not be performed. The trusted core (or third party) may perform the authentication itself, or alternatively rely on another party (e.g., a remote authentication party trusted by the trusted core or third party) to perform or assist in performing the authentication. - The trusted core executing on the destination computing device can then access the third party corresponding to the encrypted third party-
migrateable key 326 in order to have the key 326 decrypted. The third party can impose whatever verification or other constraints it desires in determining whether to decrypt the key 326. For example, the third party may require the trusted core executing on the destination computing device to authenticate itself, or may decrypt the key 326 only if fewer than an upper limit number of computing devices have requested to decrypt the key 326, or may require the user to verify certain information over the telephone, etc. - If the third party refuses to decrypt the key 326, then the destination computing device is not able to decrypt encrypted trusted
application secrets 332. However, if the third party does decrypt the key 326, then the third party returns the decrypted key to the destination computing device. The decrypted key can be returned in any of a variety of secure manners, such as via a voice telephone call between the user of the destination computing device and a representative of the third party, using network security protocols (such as HTTPS (HyperText Transfer Protocol Secure)), encrypting the key with a public key of a public-private key pair of the destination computing device, etc. The trusted core at the destination device can then encrypt third party-migrateable key 326 using the gatekeeper storage key of the trusted core at the destination device. - Storing application secrets based on classes or types facilitates the migration of the application secrets to other computing devices. Rather than using a separate key for each application secret, the application secrets are classed together, with only one key typically being needed for the user-migrateable class and only one key per third party typically being needed for the third party-migrateable class. Thus, for example, rather than requiring each user-migrateable secret to have its own key that needs to be transferred to the destination device in order to migrate the secrets to the destination device, only the single user-migrateable key need be transferred to the destination device. Additionally, an “all” class can also exist (e.g., associated with
gatekeeper storage key 320 of FIG. 9) that allows all of the secrets (except the non-migrateable secrets) to be migrated to the destination device by transferring and having decrypted only the gatekeeper storage key (which can in turn be used to decrypt the encrypted hive keys). The non-migrateable secrets can be kept from being migrated by not allowing the encrypted non-migrateable hive key to be copied. -
FIG. 10 illustrates an exemplary process 360 for securely storing secrets using hive keys. The process of FIG. 10 is carried out by the trusted core of a client computing device, and may be performed in software. - The first time the trusted core is booted, a gatekeeper storage key is generated (act 362) and sealed, using a cryptographic measure of the trusted core, to the trusted core (act 364). Eventually, a request to store a secret is received by the trusted core from a trusted application (act 366), and the request includes an identification of the type of secret (non-migrateable, user-migrateable, or third party-migrateable). The trusted core generates a hive key for that type of secret if needed (act 368). A hive key is needed if no hive key of that type has been created by the trusted core yet, or if the identified user-migrateable key has not been created yet, or if a hive key corresponding to the third party of a third party-migrateable secret has not been created yet.
- Once the correct hive key is available, the trusted core uses the hive key to encrypt the trusted application secret (act 370). Additionally, the trusted core uses the gatekeeper storage key to encrypt the hive key (act 372).
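Acts 362 through 372 can be outlined in a short sketch. The class name and the XOR-based "cipher" are hypothetical stand-ins (a real trusted core would seal the gatekeeper storage key to its own digest and use a proper authenticated cipher); the sketch only shows the two-level key structure: secrets encrypted by hive keys, hive keys encrypted by the gatekeeper storage key.

```python
import hashlib
import os

def _xor32(key: bytes, data: bytes) -> bytes:
    """Illustrative 32-byte XOR 'cipher'; XORing twice with the same
    key decrypts. A real implementation would use an authenticated cipher."""
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(pad, data))

class TrustedCoreStorage:
    """Sketch of the two-level key structure: application secrets are
    encrypted by hive keys, and hive keys by the gatekeeper storage key."""

    def __init__(self):
        # Act 362: generated the first time the trusted core boots.
        # Act 364 (sealing this key to the trusted core's digest) is not modeled.
        self.gatekeeper_key = os.urandom(32)
        self.encrypted_hive_keys = {}  # secret class -> hive key encrypted at rest
        self._plain_hive_keys = {}     # held only while the trusted core runs

    def store_secret(self, secret_class: str, secret: bytes) -> bytes:
        # Act 368: create a hive key for this class only if none exists yet.
        if secret_class not in self._plain_hive_keys:
            hive = os.urandom(32)
            self._plain_hive_keys[secret_class] = hive
            # Act 372: the hive key is stored encrypted to the gatekeeper key.
            self.encrypted_hive_keys[secret_class] = _xor32(self.gatekeeper_key, hive)
        # Act 370: encrypt the trusted application secret with the hive key.
        return _xor32(self._plain_hive_keys[secret_class], secret)

    def retrieve_secret(self, secret_class: str, ciphertext: bytes) -> bytes:
        hive = _xor32(self.gatekeeper_key, self.encrypted_hive_keys[secret_class])
        return _xor32(hive, ciphertext)
```

Because each class (or third party) shares one hive key, migrating a whole class later requires moving only that single key rather than a key per secret.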
-
FIG. 11 illustrates an exemplary process 400 for migrating secrets from a source computing device to a destination computing device. The process of FIG. 11 is carried out by the trusted cores on the two computing devices. The process of FIG. 11 is discussed with reference to components of FIG. 9. - Initially, a request to migrate or transfer secrets to a destination computing device is received at the source computing device (act 402). The trusted core on the source computing device determines whether/how to allow the transfer of secrets based on the type of secret (act 404). If the secret is a non-migrateable secret, then the trusted core does not allow the secret to be transferred or migrated (act 406).
- If the secret is a user-migrateable secret, then the trusted core obtains a user passphrase (act 408) and encrypts the hive key corresponding to the secret using the passphrase (act 410). The trusted core also authenticates the destination computing device as being trusted to receive the secret (act 412). If the destination computing device is not authenticated, then the trusted core does not transfer the encrypted hive key to the destination computing device. Assuming the destination computing device is authenticated, the encrypted hive key as well as the encrypted secret is received at the destination computing device (act 414), and the trusted core at the destination computing device also receives the passphrase from the user (act 416). The trusted core at the destination computing device uses the passphrase to decrypt the hive key (act 418), thereby allowing the trusted core to decrypt the encrypted secrets when requested.
- If the secret is a third party-migrateable secret, then the trusted core on the source computing device encrypts the hive key corresponding to the secret using the public key of the corresponding third party (act 420). The trusted core on the source computing device, or alternatively the third party corresponding to the hive key, also authenticates the destination computing device (act 422). If the destination computing device is not authenticated, then the trusted core does not transfer the encrypted hive key to the destination computing device (or alternatively, the third party does not decrypt the hive key). Assuming the destination computing device is authenticated, the encrypted hive key as well as the encrypted secret is received at the destination computing device (act 424). The trusted core at the destination computing device contacts the corresponding third party to decrypt the hive key (act 426), thereby allowing the trusted core to decrypt the encrypted secrets when requested.
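The per-type gating of acts 404 through 426 amounts to a dispatch over the secret type, which can be sketched as follows. The function and parameter names are hypothetical; the two wrap callables stand in for passphrase wrapping and third-party public-key encryption, which the caller supplies.

```python
def migrate_hive_key(secret_type, hive_key, *, wrap_to_passphrase=None,
                     wrap_to_third_party=None, destination_authenticated=False):
    """Gate migration of a hive key by secret type (acts 404-426 in outline).
    The wrap_to_* callables are hypothetical stand-ins supplied by the caller."""
    if secret_type == "non-migrateable":
        # Act 406: non-migrateable secrets never leave the source device.
        raise PermissionError("non-migrateable secrets cannot be transferred")
    if not destination_authenticated:
        # Acts 412/422: the destination device must first be authenticated.
        raise PermissionError("destination computing device not authenticated")
    if secret_type == "user-migrateable":
        # Acts 408-410: encrypt the hive key using the user's passphrase.
        return wrap_to_passphrase(hive_key)
    if secret_type == "third-party-migrateable":
        # Act 420: encrypt the hive key to the controlling third party's public key.
        return wrap_to_third_party(hive_key)
    raise ValueError("unknown secret type: " + secret_type)
```

In every allowed case it is the hive key, not the individual secrets, that gets wrapped for transfer; the encrypted secrets travel as-is.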
- Thus, secure secret storage is maintained by allowing trusted processes to restrict whether and how trusted application secrets can be migrated to other computing devices, and by the trusted core enforcing such restrictions. Furthermore, migration of secrets to other computing devices is facilitated by the use of the gatekeeper storage key and hive keys, as only one or a few keys need to be moved in order to have access to the application secrets held by the source device. Additionally, the use of hive keys to migrate secrets to other computing devices does not interfere with the ability of the trusted applications or the trusted core to authenticate themselves to third parties.
- Backup of Secrets
- Situations can arise where the hardware or software of a
client computing device 102 of FIG. 1 is damaged or fails. Because of the possibility of such situations arising, it is generally prudent to back up the data stored on client computing device 102, including the securely stored secrets. However, care should be taken to ensure that the backup of the securely stored secrets does not compromise the security of the storage. - There are two primary situations that data backups are used to recover from. The first is the failure of the mass storage device that stores the trusted core (e.g., a hard disk) or the operating system executing on the computing device, and the second is damage to the device sufficient to justify replacement of the computing device with a new computing device (e.g., a heavy object fell on the computing device, or a power surge destroyed one or more components).
- In order to recover from the first situation (failure of the mass storage device or operating system), the contents of the mass storage device (particularly the trusted core and the trusted application secrets) are backed up when the computing device is functioning properly. Upon failure of the mass storage device or operating system, the mass storage device can be erased (e.g., formatted) or replaced, and the backed up data stored to the newly erased (or new) mass storage device. Alternatively, rather than backing up the trusted core, the computing device may have an associated “recovery” disk (or other media) that the manufacturer provides and that can be used to copy the trusted core from when recovering from a failure. When the computing device is booted with the backed up data, the trusted core will have the same digest as the trusted core prior to the failure, so that the new trusted core will be able to decrypt the gatekeeper storage key and thus the trusted application secrets.
- In order to recover from the second situation (replacement of the computer), the backing up of securely stored secrets is accomplished in a manner very similar to the migration of secrets from one computing device to another. In the situation where the
computing device 102 is damaged and replaced with another computing device, the backing up is essentially migrating the trusted application secrets from a source computing device (the old, damaged device) to a destination computing device (the new, replacement device). - Recovery from the second situation varies for different trusted application secrets based on the secret types. Non-migrateable secrets are not backed up. This can be accomplished by the trusted core not allowing the non-migrateable secrets to be copied from the computing device, or not allowing the non-migrateable key to be copied from the computing device, when backing up data.
- User-migrateable secrets are backed up using a passphrase. During the backup procedure, a user passphrase(s) is obtained and used to encrypt the user-migrateable key(s), with the encrypted keys being stored on a backup medium (e.g., a removable storage medium such as a disk or tape, a remote device such as a file server, etc.). To recover the backup data, the user can copy the backed up encrypted trusted application secrets, as well as the user-migrateable key(s) encrypted to the passphrase(s), to any other device he or she desires. Then, by entering the passphrase(s) to the other device, the user can allow the trusted core to decrypt and retrieve the trusted application secrets.
- Third party-migrateable secrets are backed up using a public key(s) of the third party or parties responsible for controlling the migration of the secrets. During the backup procedure, the trusted core encrypts the third party-migrateable key(s) with the public key(s) of the corresponding third parties, and the encrypted keys are stored on a backup medium (e.g., a removable storage medium such as a disk or tape, a remote device such as a file server, etc.). To recover the backup data, the user can copy the backed up encrypted trusted application secrets to any other device he or she desires, and contact the appropriate third party or parties to decrypt the encrypted keys stored on the backup medium. Assuming the third party or parties authorize the retrieval of the keys, the third party or parties decrypt the keys and return (typically in a secure manner) the third party-migrateable key(s) to the other computing device, which the trusted core can use to decrypt and retrieve the trusted application secrets.
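The public-key step used when backing up (or migrating) third party-migrateable keys can be illustrated with textbook RSA on toy parameters. Everything here is illustrative: real systems use full-size keys with padding (e.g., RSA-OAEP), and the `migration_approved` flag stands in for whatever policy checks the third party actually imposes before releasing the key.

```python
# Toy RSA parameters standing in for the third party's key pair
# (illustration only; never use keys this small in practice).
P, Q = 61, 53
N = P * Q        # public modulus (3233)
E = 17           # third party's public exponent
D = 2753         # private exponent: (E * D) % 3120 == 1, where 3120 = (P-1)*(Q-1)

def wrap_for_third_party(hive_key: bytes) -> list:
    """Source device: encrypt the third party-migrateable key to the third
    party's public key (byte-at-a-time only because the toy modulus is tiny)."""
    return [pow(b, E, N) for b in hive_key]

def third_party_unwrap(wrapped: list, migration_approved: bool) -> bytes:
    """Third party: decrypt the key only if its migration policy is satisfied."""
    if not migration_approved:
        raise PermissionError("third party refused to decrypt the key")
    return bytes(pow(c, D, N) for c in wrapped)
```

Only the third party holds D, so possession of the backup medium alone (the wrapped key plus the encrypted secrets) is not enough to recover the secrets.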
- Thus, analogous to the discussion of hive keys and secret migration above, trusted processes are allowed to restrict whether and how trusted application secrets can be backed up, and the trusted core enforces such restrictions. Additionally, the backing up of secrets does not interfere with the ability of the trusted applications or the trusted core to authenticate themselves to third parties.
- Manifests and Application Security Policies
- Oftentimes, trusted application components and modules are more likely to be upgraded than are components and modules of the trusted core. Trusted applications frequently include various dynamic link libraries (DLLs), plug-ins, etc., and allow for different software configurations, each of which can alter the binaries which execute as the trusted application. Using a digest for the trusted application can thus be burdensome, as the digest would change every time one of the binaries for the trusted application changes. Thus, rather than using a digest for the trusted applications as is described above for the trusted core, a security model is defined for trusted applications that relies on manifests. A manifest is a policy statement which attempts to describe what types of binaries are allowed to be loaded into a process space for a trusted application. This process space is typically a virtual memory space, but alternatively may be a non-virtual memory space. Generally, the manifest specifies a set of binaries, is uniquely identifiable, and is used to gate access to secrets. Multiple manifests can be used in a computing device at any one time—one manifest may correspond to multiple different applications (sets of binaries), and one application (set of binaries) may correspond to multiple different manifests.
-
FIG. 12 illustrates an exemplary manifest 450 corresponding to a trusted application. Manifest 450 can be created by anybody—there need not be any restrictions on who can create manifests. Certain trust models may insist on authorization by some given authority in order to generate manifests. However, this is not an inherent property of manifests, but a way of using them—in principle, no authorization is needed to create a manifest. Manifest 450 includes several portions: an identifier portion 452 made up of a triple (K, U, V), a signature portion 454 including a digital signature over manifest 450 (except for signature portion 454), a digest list portion 456, an export statement list portion 458, and a set of properties portion 460. An entry point list 462 may optionally be included. -
Identifier portion 452 is an identifier of the manifest. In the illustrated example the manifest identifier is a triple (K, U, V), in which K is a public key of a public-private key pair of the party that generates manifest 450. U is an arbitrary identifier. Generally, U is a member of a set Mu, where the exact definition of Mu is dependent upon the specific implementation. One condition on set Mu is that all of its elements have a finite representation (that is, Mu is countable). Mu could be, for example, the set of integers, the set of strings of finite length over the Latin alphabet, the set of rational numbers, etc. In one implementation, the value U is a friendly name or unique identifier of the party that generates manifest 450. V is similar to U, and can be a member of a set Mv having the same conditions as Mu (which may be the same set that U is a member of, or alternatively a different set). Additionally, there is an ordering (total or partial) defined on the set Mv (e.g., increasing numerical order, alphabetical order, or some arbitrarily defined order). In one implementation, V is the version number of manifest 450. The trusted application corresponding to manifest 450 is identified by the triple in portion 452. -
Manifest identifier portion 452 is described herein primarily with reference to the triple (K, U, V). Alternatively, manifest identifiers may not include all three elements K, U, and V. For example, if version management is not needed, the V component can be omitted. - Alternatively, different manifest identifiers may also be used. For example, any of a variety of conventional cryptographic hashing functions (such as SHA-1) may be used to generate a hash of one or more portions of manifest 450 (e.g., portion 456). The resultant hash value can be used as the manifest identifier.
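The (K, U, V) identifier and the ordering on V can be sketched as a small data type. The concrete field types are assumptions for illustration (an opaque public key, a string name, an integer version); the text only requires that U be drawn from a countable set and that the set V is drawn from carry a defined order.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ManifestId:
    """The (K, U, V) manifest identifier described above."""
    K: bytes  # public key of the party that generated the manifest (opaque here)
    U: str    # arbitrary identifier, e.g. a friendly name (from a countable set Mu)
    V: int    # version; int supplies the ordering required on the set Mv

def is_implicit_upgrade(old, new):
    """An implicit upgrade keeps K and U fixed and moves to a strictly
    later V under the ordering defined on Mv."""
    return old.K == new.K and old.U == new.U and new.V > old.V
```

A comparison like this is what the "allow implicit upgrades to higher manifest version numbers" property in properties portion 460 would consult.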
-
Signature portion 454 includes a digital signature over the portions of manifest 450 other than signature portion 454 (that is, the remaining portions of manifest 450). Alternatively, other portions of manifest 450 may also be excluded from being covered by the digital signature, such as portion 458. The digital signature is generated by the party that generates manifest 450, and is generated using the private key corresponding to the public key K in portion 452. Thus, given manifest 450, a device (such as a trusted core) can verify manifest 450 by checking the manifest signature 454 using the public key K. Additionally, this verification may be indirected through a certificate chain. - Alternatively, a digital signature over a portion(s) of
manifest 450 may not be included in manifest 450. The digital signature in portion 454 serves to tie lists portion 456 to the manifest identifier. In various alternatives, other mechanisms may be used to tie lists portion 456 to the manifest identifier. For example, if the manifest identifier is a hash value generated by hashing portion 456, then the manifest identifier inherently ties lists portion 456 to the manifest identifier. - Certificate lists 456 are two lists (referred to as S and T) of public key representations. In one implementation, lists 456 are each a list of certificate hashes. The S list is referred to as an inclusion list while the T list is referred to as an exclusion list. The certificate hashes are generated using any of a wide variety of conventional cryptographic hashing operations, such as SHA-1. List S is a list of hashes of certificates that certify the public key which corresponds to the private key that was used to sign the certificates in the chain that corresponds to the binaries that are authorized by
manifest 450 to execute in the virtual memory space. A particular manufacturer (e.g., Microsoft Corporation) may digitally sign multiple binaries using the same private key, and thus the single certificate that includes the public key corresponding to this private key may be used to authorize multiple binaries to execute in the virtual memory space. Alternatively, a manufacturer can generate an entirely new key for each binary, with the key subsequently deleted. This will result in the same mechanism being used to identify a single, unique application as opposed to one from a family. The “hash-of-a-certificate” scheme is hence a very flexible scheme for describing applications or families of applications. - List T is a list of hashes of certificates that certify the public key which corresponds to the private key that was used to sign the certificates in the chain that corresponds to the binaries that are not authorized by
manifest 450 to execute in the virtual memory space. List T may also be referred to as a revocation list. Adding a particular certificate to list T thus allows manifest 450 to particularly identify one or more binaries that are not allowed to execute in the virtual memory space. The entries in list T override the entries in list S. Thus, in order for a binary to be authorized to execute in a virtual memory space corresponding to manifest 450, the binary must have a certificate hash that is the same as a certificate hash in list S (or have a certificate that identifies a chain of one or more additional certificates, at least one of which is in list S) but is not the same as any certificate hash in list T. In addition, none of the certificates in the chain from the certificate in S to the leaf certificate that contains the hash of the binary can be contained in list T. If these conditions are not both satisfied, then the binary is not authorized to execute in the virtual memory space corresponding to manifest 450. - The T list, in conjunction with the S list, can be used flexibly. For example, given an inclusion of all applications certified by a given root in the inclusion list (S), the exclusion list (T) can be used to exclude one or more applications that are known to be vulnerable or have other bugs. Similarly, given a certification hierarchy, with the root certificate on the inclusion list (S), the exclusion list (T) can be used to remove one or more of the child keys in the hierarchy (and binaries certified by them) that have been compromised.
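The inclusion/exclusion check can be sketched as follows, assuming the certificate chain is presented root-first down to the leaf certificate containing the binary's hash. The function name is hypothetical, and SHA-256 stands in for the hashing operation (the text mentions SHA-1 as one option).

```python
import hashlib

def cert_hash(cert: bytes) -> str:
    # The S and T lists hold hashes of certificates.
    return hashlib.sha256(cert).hexdigest()

def binary_authorized(chain, s_list, t_list) -> bool:
    """`chain` is the binary's certificate chain, root first, ending at the
    leaf certificate that contains the binary's hash. The binary is
    authorized only if some certificate in the chain appears in inclusion
    list S, and neither that certificate nor any certificate below it down
    to the leaf appears in exclusion list T (entries in T override S)."""
    hashes = [cert_hash(c) for c in chain]
    for i, h in enumerate(hashes):
        if h in t_list:
            return False  # excluded before any inclusion anchor is reached
        if h in s_list:
            # anchored in S; everything from here down to the leaf must avoid T
            return all(other not in t_list for other in hashes[i + 1:])
    return False  # nothing in the chain is in S
```

This mirrors the stated rule that T overrides S: a revoked intermediate blocks every binary below it even when the root remains on the inclusion list.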
- Alternatively, other public key representations or encodings besides certificate hashes can be used as one or both of the S and T lists. For example, rather than certificate hashes, the S and T lists may be the actual certificates that certify the public keys which correspond to the private keys that were used to sign the certificates in the chains that correspond to the binaries that are authorized by
manifest 450 to execute (the S list) or not execute (the T list) in the virtual memory space. By way of another example, the S and T lists may be just the public keys which correspond to the private keys that were used to sign the certificates in the chains that correspond to the binaries that are authorized bymanifest 450 to execute (the S list) or not execute (the T list) in the virtual memory space. - Export
statement list portion 458 includes a list of zero or more export statements that allow a trusted application secret associated with manifest 450 to be exported (migrated) to another trusted application on the same computing device. Each trusted application executing on a client computing device 102 of FIG. 1 has a corresponding manifest 450, and thus each trusted application secret securely saved by the trusted application is associated with manifest 450. Export statement list portion 458 allows the party that generates manifest 450 to identify the other trusted applications to which the trusted application secrets associated with manifest 450 can be exported and made available for retrieving. - Each export statement includes a triple (A, B, S), where A is the identifier (K, U, V) of the source manifest, B is the identifier (K, U, V) of the destination manifest, and S is a digital signature over the source and destination manifest identifiers. B may identify a single destination manifest, or alternatively a set of destination manifests. Additionally, for each (K, U) in B, a (possibly open) interval of V values may optionally be allowed (e.g., “
version 3 and higher”, or “versions 2 through 5”). The digital signature S is made using the same private key as was used to sign manifest 450 (in order to generate the signature in portion 454). - Export statements may be device-independent and thus not limited to being used on any particular computing device. Alternatively, an export statement may be device-specific, with the export statement being useable on only one particular computing device (or set of computing devices). This one particular computing device may be identified in different manners, such as via a hardware id or a cryptographic mechanism (e.g., the export statement may be encrypted using the public key associated with the particular computing device). If a hardware id is used to identify a particular computing device, the export statement includes an additional field which states the hardware id (thus, the issuer of the manifest could control on a very fine granularity who can move secrets).
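The export-statement check can be sketched as follows. This is an illustrative sketch: a keyed MAC stands in for the digital signature S (the patent uses the same private key that signed the manifest), and the function names and the use of `repr` as a message encoding are assumptions for illustration.

```python
import hashlib
import hmac

ISSUER_SECRET = b"issuer-signing-key"  # stands in for the issuer's private key

def toy_sign(message: bytes) -> str:
    # Stand-in for the digital signature S over (A, B).
    return hmac.new(ISSUER_SECRET, message, hashlib.sha256).hexdigest()

def toy_verify(message: bytes, sig: str) -> bool:
    return hmac.compare_digest(toy_sign(message), sig)

def export_allowed(export_statements, src_id, dst_id, verify) -> bool:
    """Each export statement is a triple (A, B, S): source manifest id,
    destination manifest id, and a signature over both. Secrets associated
    with src_id may be exported to dst_id only if a matching, correctly
    signed statement exists."""
    for a, b, s in export_statements:
        if a == src_id and b == dst_id and verify(repr((a, b)).encode(), s):
            return True
    return False
```

Because only the manifest issuer can produce a valid S, the issuer retains control over which destination manifests may receive the secrets, even when a user initiates the export.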
- Additionally, although illustrated as part of
manifest 450, one or more export statements may be separate from, but associated with, manifest 450. For example, the party that generates manifest 450 may generate one or more export statements after manifest 450 is generated and distributed. These export statements are associated with the manifest 450 and thus have the same effect as if they were included in manifest 450. For example, a new trusted application may be developed after the manifest 450 is generated, but the issuer of the manifest 450 would like the new trusted application to have access to secrets from the application associated with the manifest 450. The issuer of the manifest 450 can then distribute an export statement (e.g., along with the new trusted application or alternatively separately) allowing the secrets to be migrated to the new trusted application. - If a user or trusted application desires to export trusted application secrets from a source trusted application to a destination trusted application, the trusted core checks to ensure that the manifest identifier of the desired destination trusted application is included in export statement list portion 458. If the manifest identifier of the desired destination trusted application is included in export statement list portion 458, then the trusted core allows the destination trusted application to have access to the source trusted application secrets; otherwise, the trusted core does not allow the destination trusted application to have access to the source trusted application secrets. Thus, although a user may request that trusted application secrets be exported to another trusted application, the party that generates the manifest for the trusted application has control over whether the secrets can actually be exported to the other trusted application.
-
Properties portion 460 identifies a set of zero or more properties for the manifest 450 and/or executing process corresponding to manifest 450. Various properties can be included in portion 460. Example properties include: whether the process is debuggable, whether to allow (or under what conditions to allow) additional binaries to be added to the virtual memory space after the process begins executing, whether to allow implicit upgrades to higher manifest version numbers (e.g., allow upgrades from one manifest to another based on the K and U values of identifier 452, without regard for the V value), whether other processes (and what other processes) should have access to the virtual memory space of the process (e.g., to support secure shared memory), what/whether other resources should be shareable (e.g., “pipe” connections, mutexes (mutual exclusion locks), or other OS resources), and so forth. -
Entry point list 462 is optional and need not be included in manifest 450. In one implementation, an entry point list is included in the binary or a certificate for the binary, and thus not included in manifest 450. However, in alternative embodiments entry point list 462 may be included in manifest 450. Entry point list 462 is a list of entry points into the executing process. Entry point list 462 is typically generated by the party that generates manifest 450. These entry points can be stored in a variety of different manners, such as particular addresses (e.g., offsets relative to some starting location, such as the beginning address of a particular binary), names of functions or procedures, and so forth. These entry points are the only points of the process that can be accessed by other processes (e.g., to invoke functions or methods of the process). When a request to access a particular address in the virtual memory space of an executing process associated with manifest 450 is received, the trusted core checks whether the particular address corresponds to an entry point in entry point list 462. If the particular address does correspond to an entry point in entry point list 462, then the access is allowed; otherwise, the trusted core denies the access. - The manifest is used by the trusted core in controlling authentication of trusted application processes and access to securely stored secrets by trusted application processes executing on the client computing device. When referencing a trusted application process, the trusted core (or any other entity) can refer to its identifier (the triple K, U, V). The trusted core exposes versions of the Seal, Unseal, Quote, and Unwrap operations analogous to those primitive operations discussed above, except that it is the trusted core that is exposing the operations rather than the underlying hardware of the computing device, and the parameters may vary.
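The entry-point enforcement described above can be sketched as follows, modeling entry points as offsets relative to a binary's base address. The names and the offset representation are illustrative assumptions, not details from the specification.

```python
def is_access_allowed(entry_points, base_address, requested_address):
    """Permit an inter-process access only if it targets a listed entry
    point; all other addresses in the virtual memory space are denied."""
    offset = requested_address - base_address
    return offset in entry_points

# Hypothetical entry point list: offsets relative to the binary's start
entry_points = {0x0, 0x40, 0x180}
```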
In one implementation, the versions of the Seal, Unseal, Quote, and Unwrap operations that are exposed by the trusted core and that can be invoked by the trusted application processes are as follows.
- The Seal operation exposed by the trusted core takes the following form:
- Seal (secret, public_key (K), identifier, version, secret_type)
- where secret represents the secret to be securely stored, public_key (K) represents the K component of a manifest identifier, identifier represents the U component of a manifest identifier, version represents the V value of a manifest identifier, and secret_type is the type of secret to be stored (e.g., non-migrateable, user-migrateable, or third party-migrateable). The manifest identifier (the K, U, and V components) is a manifest identifier as described above (e.g., with reference to manifest 450). The K and U portions of the manifest identifier refer to the party that generated the manifest for the process storing the secret, while the V portion refers to the versions of the manifest that should be allowed to retrieve the secret. In the general case, the (K,U,V) triple can be a list of such triples and the value V can be a range of values.
- When the Seal operation is invoked, the trusted core encrypts the secret and optionally additional parameters of the operation using the appropriate hive key based on the secret_type. The encrypted secret is then stored by the trusted core in
secret store 126 of FIG. 2 or 146 of FIG. 3 cryptographically bound to the associated rules (the list {(K,U,V)}), or alternatively in some other location. - The Unseal operation exposed by the trusted core takes the following form:
- Unseal (encrypted_secret)
- where encrypted_secret represents the ciphertext that has encrypted in it the secret to be retrieved together with the (K, U, V) list that names the application(s) qualified to retrieve the secret. In response to the Unseal operation, the trusted core obtains the encrypted secret and determines whether to reveal the secret to the requesting process. The trusted core reveals the secret to the requesting process under two different sets of conditions; if neither of these sets of conditions is satisfied then the trusted core does not reveal the secret to the requesting process. The first set of conditions is that the requesting process was initiated with a manifest that is properly formed and is included in the (K, U, V) list (or the K, U, V value) indicated by the sealer. This is the common case: an application can seal a secret naming its own manifest, or all possible future manifests from the same software vendor. In this case, the same application or any future application in the family has automatic access to its secrets.
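A minimal model of the Seal/Unseal access-control logic described above might look like the following. A real trusted core would encrypt the secret with the appropriate hive key; here the "ciphertext" is simply an opaque handle into trusted-core-private state, so only the (K, U, V) gating logic is shown, and every name is an assumption.

```python
# Trusted-core-private store standing in for the encrypted secret store:
# handle -> (secret, (K, U, allowed versions), secret_type)
_sealed = {}

def seal(secret, k, u, versions, secret_type):
    """Bind a secret to the (K, U, V) list naming who may retrieve it."""
    handle = len(_sealed)
    _sealed[handle] = (secret, (k, u, tuple(versions)), secret_type)
    return handle

def unseal(handle, caller_manifest_id):
    """Reveal the secret only if the caller's manifest (K, U, V) is named."""
    secret, (k, u, versions), _ = _sealed[handle]
    ck, cu, cv = caller_manifest_id
    if ck == k and cu == u and cv in versions:
        return secret
    return None   # neither set of conditions satisfied: secret not revealed
```

(The second set of conditions, involving export certificates, is treated separately below in the specification.)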
- The second set of conditions allows a manifest issuer to make a specific allowance for other applications to have access to the secrets previously sealed with more restrictive conditions. This is managed by an export certificate, which provides an override that allows secrets to be migrated to other applications from other publishers not originally named in the (K, U, V) list of the sealer. To avoid uncontrolled and insecure migration, export lists should originate from the publisher of the original manifest. This restriction is enforced by requiring that the publisher sign the export certificate with the key originally used to sign the manifest of the source application. This signature requirement may also be indirected through certificate chains.
- To process an export certificate, the trusted core is a) furnished with the manifest from the original publisher (i.e., the manifest issuer), b) furnished with the export certificate itself which is signed by the original publisher, and c) running a process that is deemed trustworthy in the export certificate. If all these requirements are met, the running process has access to the secrets sealed by the original process.
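The three requirements (a), (b), and (c) for processing an export certificate can be sketched as follows. A real implementation would verify a public-key signature against the publisher's key from the original manifest; an HMAC stands in for that signature here purely so the control flow is runnable, and all field names are assumptions.

```python
import hashlib
import hmac

def sign(signing_key, payload):
    """Stand-in for the publisher's signature over the export certificate."""
    return hmac.new(signing_key, payload, hashlib.sha256).digest()

def process_export_certificate(original_manifest, export_cert, running_process):
    # a) the original publisher's manifest is furnished, supplying its key
    publisher_key = original_manifest["publisher_key"]
    # b) the export certificate must be signed with that same key
    expected = sign(publisher_key, export_cert["payload"])
    if not hmac.compare_digest(expected, export_cert["signature"]):
        return False
    # c) the running process must be deemed trustworthy in the certificate
    return running_process in export_cert["trusted_processes"]
```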
- The Quote and Unwrap operations provide a way for the trusted core to authenticate to a third party that it is executing a trusted application process with a manifest that meets certain requirements.
- The Unwrap operation uses ciphertext as its single parameter. A third (arbitrary) party initially generates a structure that includes five parts: a secret, a public_key K, an identifier U, a version V, and a hive_id. Here, secret represents the secret to be revealed if the appropriate conditions are satisfied, public_key K represents the public key of the party that needs to have digitally signed the manifest for the process, identifier U is the identifier of the party that needs to have generated the manifest for the process, version V is a set of zero or more acceptable versions of the manifest, and hive_id is the type of secret being revealed (e.g., non-migrateable, user-migrateable, or third party-migrateable). The party then encrypts this structure using the public key of the public-private key pair known to belong to a trustworthy trusted core (presumably because of certification of the public part of this key). The manner in which the trusted core gets this key is discussed in additional detail in U.S. patent application Ser. No. 09/227,611 entitled “Loading and Identifying a Digital Rights Management Operating System” and U.S. patent application Ser. No. 09/227,561 entitled “Digital Rights Management Operating System”. A trusted application receives the ciphertext generated by the third party and invokes the Unwrap operation exposed by the trusted core.
- The trusted core responds to the Unwrap operation by using its private key of the public-private key pair to decrypt the ciphertext received from the invoking party. The trusted core compares the conditions in or associated with the encrypted ciphertext to the manifest associated with the appropriate trusted application process. The appropriate trusted application process can be identified explicitly by the third party that generated the ciphertext being unwrapped, or alternatively inherently as the trusted application invoking the Unwrap operation (so the trusted core knows that whichever process invokes the Unwrap operation is the appropriate trusted application process). If the manifest associated with the process satisfies all of the conditions in the encrypted ciphertext, then the process is authorized to retrieve the secret, and the trusted core provides the secret to the process. However, if one or more of the conditions in the encrypted ciphertext are not satisfied by the manifest associated with the process, then the process is not authorized to retrieve the secret and the trusted core does not provide the secret to the process.
- In addition to manifest-based conditions, the Unwrap operation may also have conditions on the data of the secret. If the conditions on the data (e.g., to verify its integrity) are not satisfied then the trusted core does not provide the secret to the process (even if the manifest conditions are satisfied). For example, the encrypted secret may include both the data of the secret and a cryptographic hash of the data. The trusted core verifies the integrity of the data by hashing the data and verifying the resultant hash value.
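The two-stage Unwrap check described above (manifest conditions first, then data integrity) can be sketched as follows. The structure's field names are assumptions; the sketch operates on the already-decrypted structure, since the decryption step depends on the trusted core's key pair.

```python
import hashlib

def unwrap(decrypted, process_manifest_id):
    """Release the secret only if both the manifest conditions and the
    integrity condition on the secret's data are satisfied."""
    k, u, v = process_manifest_id
    # Manifest-based conditions: K, U must match and V must be acceptable
    if not (k == decrypted["public_key"] and u == decrypted["identifier"]
            and v in decrypted["versions"]):
        return None
    # Data condition: hash the data and verify the stored hash value
    if hashlib.sha256(decrypted["secret"]).digest() != decrypted["secret_hash"]:
        return None
    return decrypted["secret"]
```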
- The Unwrap operation naming the manifest or manifests of the application(s) allowed to decrypt the secret allows a remote party to conveniently express that a secret should only be revealed to a certain application or set of applications on a particular host computer running a particular trusted core.
- An alternative technique is based on the use of the quote operation, which allows an application value to be cryptographically associated with the manifest of the application requesting the quote operation. The quote operation associates an application-supplied value with an identifier for the running software. When previously introduced, the quote operation was implemented in hardware, and allowed the digest of the trusted core to be cryptographically associated with some trusted core-supplied data. When implemented by the trusted core on behalf of applications, the quote operation will generate a signed statement that a particular value X was supplied by a process running under a particular manifest (K, U, V), where the value X is an input parameter to the quote operation. The value X can be used as part of a more general authentication protocol. For example, such a statement can be sent as part of a cryptographic interchange between a client and a server to allow the server to determine that the client it is talking to is a good device running a trusted core and an application that it trusts, before revealing any secret data to it. The requesting party can analyze the manifest and make its own determination of whether it is willing to trust the process.
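The quote operation just described can be sketched as producing a signed statement binding the application-supplied value X to the manifest identifier (K, U, V). A real trusted core would sign with its certified private key; an HMAC under a core-held key stands in for that signature here, and all names are illustrative.

```python
import hashlib
import hmac
import json

CORE_KEY = b"trusted-core-signing-key"   # stand-in for the core's private key

def quote(value_x, manifest_id):
    """Return a signed statement that value_x was supplied by a process
    running under the manifest (K, U, V)."""
    statement = json.dumps({"value": value_x,
                            "manifest": list(manifest_id)}).encode()
    signature = hmac.new(CORE_KEY, statement, hashlib.sha256).digest()
    return statement, signature

def verify_quote(statement, signature):
    """What a remote party (holding the core's verification key) would do."""
    expected = hmac.new(CORE_KEY, statement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

In a protocol, value X might be a server-supplied nonce, letting the server confirm both freshness and which application produced the response.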
-
FIG. 13 illustrates an exemplary process 500 for controlling execution of processes in an address space based on a manifest. The process of FIG. 13 is discussed with reference to components in FIG. 12, and is implemented by a trusted core. - Initially, a request to execute a process is received by the trusted core (act 502). This request may be received from a user or alternatively another process executing on the same client computing device as the trusted core or alternatively on another computing device in communication with the client computing device. In response to the request, a virtual memory space for the process is set up by the trusted core (act 504) and the binaries necessary to execute the process are loaded into the virtual memory space (act 506). It should be noted that, in
act 506, the binaries are loaded into the memory space but execution of the binaries has not yet begun. The trusted core then initializes the environment and obtains a manifest for the process (act 508). Typically, the manifest is provided to the trusted core as part of the request to execute the process. - The trusted core checks whether all of the loaded binaries are consistent with the manifest (act 510). In one implementation, this check for consistency involves verifying that the certificate (or certificate hash) of each binary is in the S list in
portion 456 of manifest 450, and that certificates (or certificate hashes) for none of the binaries are in the T list in portion 456. This certificate verification may be indirected through a certificate list. If the loaded binaries are not consistent with the manifest (e.g., at least one is not in the S list and/or at least one is in the T list), then process 500 fails—the requested process is not executed (act 512). - However, if the loaded binaries are consistent with the manifest, then the trusted core allows the processor to execute the binaries in the virtual memory space (act 514). Execution of the loaded binaries typically is triggered by an explicit request from an outside entity (e.g. another process). A request may be subsequently received, typically from the executing process or some other process, to load an additional binary into the virtual memory space. The trusted core continues executing the process if no such request is received (
acts 514 and 516). However, when such a request is received, the trusted core checks whether the additional binary is consistent with manifest 450 (act 518). Consistency in act 518 is determined in the same manner as act 510—the additional binary is consistent with manifest 450 if its certificate (or certificate hash) is in the S list in portion 456 of manifest 450 and is not in the T list in portion 456. - If the additional binary is not consistent with
manifest 450, then the additional binary is not loaded into the virtual memory space and is not allowed to execute, and processing continues to act 514. However, if the additional binary is consistent with manifest 450, then the additional binary is loaded into the virtual memory space (act 520), and processing of the binaries (including the additional binary) continues. - Alternatively, rather than loading the binaries (act 506) and checking whether the loaded binaries are consistent with the manifest (act 510), the manifest can be obtained prior to loading the binaries into the virtual memory space (e.g., provided as part of the initial request to execute a trusted process in act 502). In this case, each request to load a binary is checked against the manifest. Binaries which are not allowed by the manifest are not loaded into the virtual memory space, whereas binaries that are allowed are loaded into the virtual memory space.
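The consistency checks of acts 510 and 518 can be sketched as a simple membership test against the manifest's S (allowed) and T (forbidden) lists. The certificate hashes here are arbitrary placeholder strings, and the dictionary layout is an assumption.

```python
def binary_allowed(manifest, cert_hash):
    """A binary is consistent with the manifest if its certificate hash is
    in the S list and not in the T list (acts 510 and 518)."""
    return cert_hash in manifest["S"] and cert_hash not in manifest["T"]

def check_loaded_binaries(manifest, loaded_cert_hashes):
    """Act 510: every loaded binary must be consistent, or process 500 fails."""
    return all(binary_allowed(manifest, h) for h in loaded_cert_hashes)
```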
-
FIG. 14 illustrates an exemplary process 540 for upgrading to a new version of a trusted application. The process of FIG. 14 is discussed with reference to components in FIG. 12, and is implemented by a computing device (typically other than the client computing device). Typically, the upgraded version of a trusted application is prepared by the same party that prepared the previous version of the trusted application. - Initially, a trusted application upgrade request is received along with one or more new components or modules (e.g., binaries) for the trusted application to be upgraded (act 542). These new components or modules may replace previous versions of the components or modules in the previous version of the process, or alternatively may be new components or modules that have no counterpart in the previous version. A party begins generating a
new manifest 450′ for the new version of the trusted application including a new triple (K′, U′, V′) identifier for the new version and appropriate certificate hashes (or alternatively certificates) in the appropriate S and T lists in portion 456 (act 544). Oftentimes (e.g., when the issuer of the new manifest is also the issuer of the old manifest and chooses K=K′) the K′ and U′ parts of the triple will be the same as the K and U parts of the triple identifier of the previous version, so that only V and V′ differ (that is, only the versions in the identifier differ). The new manifest 450′ is then made available to the client computing device(s) where the new version of the trusted application is to be executed (act 546). - Generally, there are three situations for application upgrades. The first situation is where some binaries for the application are changed, added, and/or removed, but the old manifest allows the new binaries to be loaded and loading the old binaries is not considered to harm security. In this situation, the manifest does not have to change at all and no secrets have to be migrated. The user simply installs the new binaries on his machine and they are allowed to execute.
- The second situation is where some binaries are changed, added, and/or removed, and the old manifest is no longer acceptable because some of the old binaries (which can still be loaded under the old manifest) compromise security and/or some of the changed or new binaries cannot be loaded under the old manifest. The issuer of the old manifest decides to issue a new manifest with the same K,U. Initially, the software manufacturer produces new binaries. These new binaries are digitally signed (certificates are issued) and a new manifest is created. This new manifest (via its S and T lists) allows the new binaries to be executed but does not allow the old binaries to be executed (at least not the binaries that compromise security). It should be noted that there is no inherent relationship between the S and T lists of the old manifest and the S and T lists of the new manifest. It should also be noted that, if the S list is completely changed in the new manifest, and some old binaries are re-used, the old binaries may need to be signed with a new private key.
- A user then receives all three things (the new binaries, the certificates for the new binaries, and the new manifest) and installs all three on his or her machine. Secrets do not have to be migrated, because the new manifest is just a new version of the old one. The new binaries are allowed to execute, but the old binaries are not.
- The third situation is where secrets have to be migrated between different applications that are not versions of each other. This situation is handled as described above regarding export statements.
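The three upgrade situations above can be summarized as a decision over the old and new manifest identifiers: identical manifests need nothing, same (K, U) with a new V is a new version whose secrets follow automatically, and different (K, U) requires export statements. The following classifier is a sketch under those assumptions, not language from the specification.

```python
def upgrade_kind(old_id, new_id):
    """Classify an upgrade by comparing (K, U, V) manifest identifiers."""
    if old_id == new_id:
        # First situation: old manifest still acceptable, only binaries change
        return "same manifest: install new binaries, no migration needed"
    ok, ou, _ = old_id
    nk, nu, _ = new_id
    if (ok, ou) == (nk, nu):
        # Second situation: new manifest, same issuer and identifier
        return "new version: new manifest, secrets follow automatically"
    # Third situation: different applications, export statements required
    return "different application: secrets migrate via export statements"
```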
- Thus, secure secret storage is maintained by the trusted core imposing restrictions, based on the manifests, on which trusted processes can retrieve particular secrets. The manifests also provide a way for trusted applications to be authenticated to remote parties.
- Exemplary Computing Device
-
FIG. 15 illustrates a general exemplary computer environment 600, which can be used to implement various devices and processes described herein. The computer environment 600 is only one example of a computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the computer and network architectures. Neither should the computer environment 600 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computer environment 600. -
Computer environment 600 includes a general-purpose computing device in the form of a computer 602. Computer 602 can be, for example, a client computing device 102 or server device 104 of FIG. 1, a device used to generate a trusted application or manifest, etc. The components of computer 602 can include, but are not limited to, one or more processors or processing units 604, a system memory 606, and a system bus 608 that couples various system components including the processor 604 to the system memory 606. - The
system bus 608 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, also known as a Mezzanine bus. -
Computer 602 typically includes a variety of computer readable media. Such media can be any available media that is accessible by computer 602 and includes both volatile and non-volatile media, removable and non-removable media. - The
system memory 606 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 610, and/or non-volatile memory, such as read only memory (ROM) 612. A basic input/output system (BIOS) 614, containing the basic routines that help to transfer information between elements within computer 602, such as during start-up, is stored in ROM 612. RAM 610 typically contains data and/or program modules that are immediately accessible to and/or presently operated on by the processing unit 604. -
Computer 602 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, FIG. 15 illustrates a hard disk drive 616 for reading from and writing to a non-removable, non-volatile magnetic media (not shown), a magnetic disk drive 618 for reading from and writing to a removable, non-volatile magnetic disk 620 (e.g., a “floppy disk”), and an optical disc drive 622 for reading from and/or writing to a removable, non-volatile optical disc 624 such as a CD-ROM, DVD-ROM, or other optical media. The hard disk drive 616, magnetic disk drive 618, and optical disc drive 622 are each connected to the system bus 608 by one or more data media interfaces 626. Alternatively, the hard disk drive 616, magnetic disk drive 618, and optical disc drive 622 can be connected to the system bus 608 by one or more interfaces (not shown). - The various drives and their associated computer storage media provide non-volatile storage of computer readable instructions, data structures, program modules, and other data for
computer 602. Although the example illustrates a hard disk 616, a removable magnetic disk 620, and a removable optical disc 624, it is to be appreciated that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile discs (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like, can also be utilized to implement the exemplary computing system and environment. - Any number of program modules can be stored on the
hard disk 616, magnetic disk 620, optical disc 624, ROM 612, and/or RAM 610, including by way of example, an operating system 626, one or more application programs 628 (e.g., trusted applications), other program modules 630, and program data 632. Each of such operating system 626, one or more application programs 628, other program modules 630, and program data 632 (or some combination thereof) may implement all or part of the resident components that support the distributed file system. - A user can enter commands and information into
computer 602 via input devices such as a keyboard 634 and a pointing device 636 (e.g., a “mouse”). Other input devices 638 (not shown specifically) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, and/or the like. These and other input devices are connected to the processing unit 604 via input/output interfaces 640 that are coupled to the system bus 608, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). - A
monitor 642 or other type of display device can also be connected to the system bus 608 via an interface, such as a video adapter 644. In addition to the monitor 642, other output peripheral devices can include components such as speakers (not shown) and a printer 646 which can be connected to computer 602 via the input/output interfaces 640. -
Computer 602 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 648. By way of example, the remote computing device 648 can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and the like. The remote computing device 648 is illustrated as a portable computer that can include many or all of the elements and features described herein relative to computer 602. - Logical connections between
computer 602 and the remote computer 648 are depicted as a local area network (LAN) 650 and a general wide area network (WAN) 652. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. - When implemented in a LAN networking environment, the
computer 602 is connected to a local network 650 via a network interface or adapter 654. When implemented in a WAN networking environment, the computer 602 typically includes a modem 656 or other means for establishing communications over the wide network 652. The modem 656, which can be internal or external to computer 602, can be connected to the system bus 608 via the input/output interfaces 640 or other appropriate mechanisms. It is to be appreciated that the illustrated network connections are exemplary and that other means of establishing communication link(s) between the computers - In a networked environment, such as that illustrated with
computing environment 600, program modules depicted relative to the computer 602, or portions thereof, may be stored in a remote memory storage device. By way of example, remote application programs 658 reside on a memory device of remote computer 648. For purposes of illustration, application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 602, and are executed by the data processor(s) of the computer. -
Computer 602 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by computer 602. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by computer 602. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media. - The invention has been described herein in part in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
- For purposes of illustration, programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer, and are executed by the data processor(s) of the computer.
- Alternatively, the invention may be implemented in hardware or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) could be designed or programmed to carry out the invention.
- Thus, a security model for a trusted environment has been described in which secrets can be securely stored for trusted applications and in which the trusted applications can be authenticated to third parties. These properties of the trusted environment are maintained, even though various parts of the environment may be upgraded or changed in a controlled way on the same computing device or migrated to a different computing device.
- Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.
Claims (20)
1. A computer storage medium having stored thereon a data structure that describes what types of binaries can be loaded into a process space for a trusted application, the data structure comprising:
a first portion including data representing a unique identifier of the trusted application;
a second portion including data indicating whether a particular one or more binaries can be loaded into the process space for the trusted application; and
a third portion derived from the data in both the first portion and the second portion by generating a digital signature over the first and second portions.
2. A computer storage medium as recited in claim 1 , wherein the data structure, when populated with data, is a manifest corresponding to the trusted application, and wherein the unique identifier of the trusted application comprises:
a public key of a public-private key pair of a party that generates the manifest;
an identifier of the party that generates the manifest; and
a version number of the manifest.
3. A computer storage medium as recited in claim 1, wherein the data in the second portion comprises:
a list of one or more hashes of certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
4. A computer storage medium as recited in claim 3, wherein the data in the second portion further comprises:
a list of one or more additional hashes of certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
5. A computer storage medium as recited in claim 1, wherein the data in the second portion comprises:
a list of one or more certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
6. A computer storage medium as recited in claim 5, wherein the data in the second portion further comprises:
a list of one or more additional certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
7. A computer storage medium as recited in claim 1, wherein the data in the second portion comprises:
a list of one or more public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
8. A computer storage medium as recited in claim 7, wherein the data in the second portion further comprises:
a list of one or more public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
9. A computer storage medium as recited in claim 1, wherein the data structure further comprises:
another portion that includes data representing a set of properties corresponding to the data structure.
10. A computer storage medium as recited in claim 9, wherein the set of properties includes:
whether the trusted application is debuggable.
11. A computer storage medium as recited in claim 9, wherein the set of properties includes:
whether to allow an additional binary to be added to the process space after the trusted application begins executing.
12. A computer storage medium as recited in claim 9, wherein the set of properties includes:
whether to allow implicit upgrades to a higher version number.
13. A computer storage medium as recited in claim 1, wherein the data structure further comprises:
another portion that includes data representing a list of entry points into the executing trusted application.
14. A method implemented at least in part by a computing device, the method comprising:
obtaining a manifest that describes what types of binaries can be loaded into a process space for a trusted application, the manifest comprising:
a first portion including data representing a unique identifier of the trusted application;
a second portion including data indicating whether a particular one or more binaries can be loaded into the process space for the trusted application; and
a third portion derived from the data in both the first portion and the second portion by generating a digital signature over the first and second portions; and
using the manifest to control loading of binaries into the process space for the trusted application.
15. A method as recited in claim 14, wherein the data in the second portion comprises:
a list of one or more hashes of certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
16. A method as recited in claim 15, wherein the data in the second portion further comprises:
a list of one or more additional hashes of certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
17. A method as recited in claim 14, wherein the data in the second portion comprises:
a list of one or more certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
18. A method as recited in claim 17, wherein the data in the second portion further comprises:
a list of one or more additional certificates that certify public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
19. A method as recited in claim 14, wherein the data in the second portion comprises:
a list of one or more public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are authorized to execute in the process space.
20. A method as recited in claim 19, wherein the data in the second portion further comprises:
a list of one or more public keys which correspond to private keys that were used to sign the certificates that correspond to binaries that are not authorized to execute in the process space.
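As an illustrative sketch only (not part of the claims), the three-portion manifest of claims 1 and 14 and the loading control of claim 14 might be modeled as follows. The HMAC here is a stand-in for the public-key digital signature the claims describe, and all key names, field names, and hash values are hypothetical:

```python
import hashlib
import hmac
import json

# Stand-in for the manifest author's private key; a real trusted-OS
# implementation would use an asymmetric signing key pair instead.
SIGNING_KEY = b"example-manifest-signing-key"

def _portion_bytes(identifier: dict, policy: dict) -> bytes:
    # Canonical encoding of the first portion (unique identifier) and
    # second portion (binary-loading policy) so signing is reproducible.
    return json.dumps({"identifier": identifier, "policy": policy},
                      sort_keys=True).encode()

def build_manifest(identifier: dict, policy: dict) -> dict:
    # Third portion: a signature derived from portions one and two.
    sig = hmac.new(SIGNING_KEY, _portion_bytes(identifier, policy),
                   hashlib.sha256).hexdigest()
    return {"identifier": identifier, "policy": policy, "signature": sig}

def verify_manifest(manifest: dict) -> bool:
    # Recompute the signature over portions one and two and compare.
    expected = hmac.new(SIGNING_KEY,
                        _portion_bytes(manifest["identifier"],
                                       manifest["policy"]),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

def may_load(manifest: dict, signer_key_hash: str) -> bool:
    # Loading control: consult the allow/deny lists of hashes of the
    # keys that signed a candidate binary's certificate (claims 3-4).
    policy = manifest["policy"]
    if signer_key_hash in policy.get("denied_key_hashes", ()):
        return False
    return signer_key_hash in policy.get("allowed_key_hashes", ())
```

A loader following this sketch would reject any binary whose signer is not on the allow list, and would detect tampering with either portion because the signature check fails.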
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/558,125 US20070174921A1 (en) | 2001-11-16 | 2006-11-09 | Manifest-Based Trusted Agent Management in a Trusted Operating System Environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/993,370 US7137004B2 (en) | 2001-11-16 | 2001-11-16 | Manifest-based trusted agent management in a trusted operating system environment |
US11/558,125 US20070174921A1 (en) | 2001-11-16 | 2006-11-09 | Manifest-Based Trusted Agent Management in a Trusted Operating System Environment |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/993,370 Continuation US7137004B2 (en) | 2001-11-16 | 2001-11-16 | Manifest-based trusted agent management in a trusted operating system environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070174921A1 (en) | 2007-07-26 |
Family
ID=25539461
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/993,370 Expired - Fee Related US7137004B2 (en) | 2001-11-16 | 2001-11-16 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,578 Expired - Fee Related US7305553B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,585 Expired - Fee Related US7634661B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/207,081 Abandoned US20050278477A1 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,579 Expired - Fee Related US7257707B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,519 Expired - Fee Related US7107463B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/558,125 Abandoned US20070174921A1 (en) | 2001-11-16 | 2006-11-09 | Manifest-Based Trusted Agent Management in a Trusted Operating System Environment |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/993,370 Expired - Fee Related US7137004B2 (en) | 2001-11-16 | 2001-11-16 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,578 Expired - Fee Related US7305553B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,585 Expired - Fee Related US7634661B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/207,081 Abandoned US20050278477A1 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,579 Expired - Fee Related US7257707B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
US11/206,519 Expired - Fee Related US7107463B2 (en) | 2001-11-16 | 2005-08-18 | Manifest-based trusted agent management in a trusted operating system environment |
Country Status (1)
Country | Link |
---|---|
US (7) | US7137004B2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060047944A1 (en) * | 2004-09-01 | 2006-03-02 | Roger Kilian-Kehr | Secure booting of a computing device |
US20080263637A1 (en) * | 2005-02-21 | 2008-10-23 | Masao Nonaka | Information Distribution System and Terminal Device |
US7500100B1 (en) * | 2003-09-10 | 2009-03-03 | Cisco Technology, Inc. | Method and apparatus for verifying revocation status of a digital certificate |
US20090083728A1 (en) * | 2007-09-25 | 2009-03-26 | Lehman Brothers Inc. | System and method for application management |
US20100031355A1 (en) * | 2008-07-30 | 2010-02-04 | Sun Microsystems, Inc. | Unvalidated privilege cap |
US20100138833A1 (en) * | 2008-12-01 | 2010-06-03 | Microsoft Corporation | Resource coverage and analysis |
US20120166795A1 (en) * | 2010-12-24 | 2012-06-28 | Wood Matthew D | Secure application attestation using dynamic measurement kernels |
EP3438868A4 (en) * | 2016-04-01 | 2019-09-11 | China Unionpay Co., Ltd. | TEE access control method and mobile terminal implementing same
Families Citing this family (126)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7340058B2 (en) * | 2001-04-09 | 2008-03-04 | Lucent Technologies Inc. | Low-overhead secure information processing for mobile gaming and other lightweight device applications |
US7392541B2 (en) * | 2001-05-17 | 2008-06-24 | Vir2Us, Inc. | Computer system architecture and method providing operating-system independent virus-, hacker-, and cyber-terror-immune processing environments |
US7137004B2 (en) * | 2001-11-16 | 2006-11-14 | Microsoft Corporation | Manifest-based trusted agent management in a trusted operating system environment |
US7159240B2 (en) * | 2001-11-16 | 2007-01-02 | Microsoft Corporation | Operating system upgrades in a trusted operating system environment |
US7571467B1 (en) * | 2002-02-26 | 2009-08-04 | Microsoft Corporation | System and method to package security credentials for later use |
US7900054B2 (en) * | 2002-03-25 | 2011-03-01 | Intel Corporation | Security protocols for processor-based systems |
US7516491B1 (en) * | 2002-10-17 | 2009-04-07 | Roger Schlafly | License tracking system |
JP2004094617A (en) * | 2002-08-30 | 2004-03-25 | Fujitsu Ltd | Backup method by difference compression, system and difference compression method |
DE60220959T2 (en) | 2002-09-17 | 2008-02-28 | Errikos Pitsos | Method and apparatus for providing a list of public keys in a public key system |
US8784195B1 (en) | 2003-03-05 | 2014-07-22 | Bally Gaming, Inc. | Authentication system for gaming machines |
US7555749B2 (en) * | 2003-03-10 | 2009-06-30 | Microsoft Corporation | Software updating system and method |
US7584467B2 (en) | 2003-03-17 | 2009-09-01 | Microsoft Corporation | Software updating system and method |
US7539876B2 (en) * | 2003-04-18 | 2009-05-26 | Via Technologies, Inc. | Apparatus and method for generating a cryptographic key schedule in a microprocessor |
US7529368B2 (en) | 2003-04-18 | 2009-05-05 | Via Technologies, Inc. | Apparatus and method for performing transparent output feedback mode cryptographic functions |
US7844053B2 (en) | 2003-04-18 | 2010-11-30 | Ip-First, Llc | Microprocessor apparatus and method for performing block cipher cryptographic functions |
US7529367B2 (en) * | 2003-04-18 | 2009-05-05 | Via Technologies, Inc. | Apparatus and method for performing transparent cipher feedback mode cryptographic functions |
US7925891B2 (en) * | 2003-04-18 | 2011-04-12 | Via Technologies, Inc. | Apparatus and method for employing cryptographic functions to generate a message digest |
US7900055B2 (en) * | 2003-04-18 | 2011-03-01 | Via Technologies, Inc. | Microprocessor apparatus and method for employing configurable block cipher cryptographic algorithms |
US7532722B2 (en) * | 2003-04-18 | 2009-05-12 | Ip-First, Llc | Apparatus and method for performing transparent block cipher cryptographic functions |
US8060755B2 (en) * | 2003-04-18 | 2011-11-15 | Via Technologies, Inc. | Apparatus and method for providing user-generated key schedule in a microprocessor cryptographic engine
US7542566B2 (en) * | 2003-04-18 | 2009-06-02 | Ip-First, Llc | Apparatus and method for performing transparent cipher block chaining mode cryptographic functions |
US7536560B2 (en) | 2003-04-18 | 2009-05-19 | Via Technologies, Inc. | Microprocessor apparatus and method for providing configurable cryptographic key size |
US7502943B2 (en) * | 2003-04-18 | 2009-03-10 | Via Technologies, Inc. | Microprocessor apparatus and method for providing configurable cryptographic block cipher round results |
US7519833B2 (en) * | 2003-04-18 | 2009-04-14 | Via Technologies, Inc. | Microprocessor apparatus and method for enabling configurable data block size in a cryptographic engine |
US20040250086A1 (en) * | 2003-05-23 | 2004-12-09 | Harris Corporation | Method and system for protecting against software misuse and malicious code |
US7325165B2 (en) * | 2003-05-30 | 2008-01-29 | Broadcom Corporation | Instruction sequence verification to protect secured data |
US8086844B2 (en) * | 2003-06-03 | 2011-12-27 | Broadcom Corporation | Online trusted platform module |
US20050044363A1 (en) * | 2003-08-21 | 2005-02-24 | Zimmer Vincent J. | Trusted remote firmware interface |
US7299354B2 (en) * | 2003-09-30 | 2007-11-20 | Intel Corporation | Method to authenticate clients and hosts to provide secure network boot |
US7698739B2 (en) * | 2004-03-30 | 2010-04-13 | Marvell International Ltd. | Updating code with validation |
US7330981B2 (en) * | 2004-04-23 | 2008-02-12 | Microsoft Corporation | File locker and mechanisms for providing and using same |
US7380119B2 (en) * | 2004-04-29 | 2008-05-27 | International Business Machines Corporation | Method and system for virtualization of trusted platform modules |
US7664965B2 (en) * | 2004-04-29 | 2010-02-16 | International Business Machines Corporation | Method and system for bootstrapping a trusted server having redundant trusted platform modules |
US7484091B2 (en) * | 2004-04-29 | 2009-01-27 | International Business Machines Corporation | Method and system for providing a trusted platform module in a hypervisor environment |
WO2006000566A1 (en) * | 2004-06-24 | 2006-01-05 | International Business Machines Corporation | Access control over multicast |
US7694121B2 (en) * | 2004-06-30 | 2010-04-06 | Microsoft Corporation | System and method for protected operating system boot using state validation |
US20060026418A1 (en) * | 2004-07-29 | 2006-02-02 | International Business Machines Corporation | Method, apparatus, and product for providing a multi-tiered trust architecture |
US20060075199A1 (en) * | 2004-10-06 | 2006-04-06 | Mahesh Kallahalla | Method of providing storage to virtual computer cluster within shared computing environment |
US8095928B2 (en) * | 2004-10-06 | 2012-01-10 | Hewlett-Packard Development Company, L.P. | Method of forming virtual computer cluster within shared computing environment |
US8347078B2 (en) | 2004-10-18 | 2013-01-01 | Microsoft Corporation | Device certificate individualization |
JP4496061B2 (en) * | 2004-11-11 | 2010-07-07 | パナソニック株式会社 | Confidential information processing device |
US8336085B2 (en) | 2004-11-15 | 2012-12-18 | Microsoft Corporation | Tuning product policy using observed evidence of customer behavior |
WO2006101549A2 (en) | 2004-12-03 | 2006-09-28 | Whitecell Software, Inc. | Secure system for allowing the execution of authorized computer program code |
JP4669708B2 (en) * | 2005-02-16 | 2011-04-13 | 株式会社日立製作所 | Storage system, data migration method and management computer |
US20060209328A1 (en) * | 2005-03-15 | 2006-09-21 | Microsoft Corporation | Systems and methods that facilitate selective enablement of a device driver feature(s) and/or application(s) |
US8099324B2 (en) * | 2005-03-29 | 2012-01-17 | Microsoft Corporation | Securely providing advertising subsidized computer usage |
US7953980B2 (en) | 2005-06-30 | 2011-05-31 | Intel Corporation | Signed manifest for run-time verification of software program identity and integrity |
US8839450B2 (en) | 2007-08-02 | 2014-09-16 | Intel Corporation | Secure vault service for software components within an execution environment |
US8132005B2 (en) * | 2005-07-07 | 2012-03-06 | Nokia Corporation | Establishment of a trusted relationship between unknown communication parties |
US7434218B2 (en) * | 2005-08-15 | 2008-10-07 | Microsoft Corporation | Archiving data in a virtual application environment |
US8560853B2 (en) * | 2005-09-09 | 2013-10-15 | Microsoft Corporation | Digital signing policy |
US8539590B2 (en) * | 2005-12-20 | 2013-09-17 | Apple Inc. | Protecting electronic devices from extended unauthorized use |
US9158941B2 (en) * | 2006-03-16 | 2015-10-13 | Arm Limited | Managing access to content in a data processing apparatus |
JP4769608B2 (en) * | 2006-03-22 | 2011-09-07 | 富士通株式会社 | Information processing apparatus having start verification function |
US20070250512A1 (en) * | 2006-04-24 | 2007-10-25 | Dell Products L.P. | Video interactivity via connectivity through a conditional access system |
WO2008018055A2 (en) * | 2006-08-09 | 2008-02-14 | Neocleus Ltd | Extranet security |
US8082442B2 (en) * | 2006-08-10 | 2011-12-20 | Microsoft Corporation | Securely sharing applications installed by unprivileged users |
US7962499B2 (en) * | 2006-08-18 | 2011-06-14 | Falconstor, Inc. | System and method for identifying and mitigating redundancies in stored data |
US8590002B1 (en) | 2006-11-29 | 2013-11-19 | Mcafee Inc. | System, method and computer program product for maintaining a confidentiality of data on a network |
US7890723B2 (en) * | 2006-12-29 | 2011-02-15 | Sandisk Corporation | Method for code execution |
US7890724B2 (en) * | 2006-12-29 | 2011-02-15 | Sandisk Corporation | System for code execution |
US9244863B2 (en) * | 2007-02-05 | 2016-01-26 | Intel Deutschland Gmbh | Computing device, with data protection |
US9246687B2 (en) * | 2007-02-28 | 2016-01-26 | Broadcom Corporation | Method for authorizing and authenticating data |
EP2130322B1 (en) * | 2007-03-21 | 2014-06-25 | Intel Corporation | Protection against impersonation attacks |
WO2008114256A2 (en) * | 2007-03-22 | 2008-09-25 | Neocleus Ltd. | Trusted local single sign-on |
MX2009010490A (en) * | 2007-03-29 | 2010-02-09 | Christopher Murphy | Methods and systems for internet security via virtual software. |
US8484701B2 (en) | 2007-03-29 | 2013-07-09 | Christopher Murphy | Methods for internet security via multiple user authorization in virtual software |
US8701187B2 (en) * | 2007-03-29 | 2014-04-15 | Intel Corporation | Runtime integrity chain verification |
US8302458B2 (en) * | 2007-04-20 | 2012-11-06 | Parker-Hannifin Corporation | Portable analytical system for detecting organic chemicals in water |
US8621008B2 (en) | 2007-04-26 | 2013-12-31 | Mcafee, Inc. | System, method and computer program product for performing an action based on an aspect of an electronic mail message thread |
KR101495535B1 (en) * | 2007-06-22 | 2015-02-25 | 삼성전자주식회사 | Method and system for transmitting data through checking revocation of contents device and data server thereof |
US8209540B2 (en) | 2007-06-28 | 2012-06-26 | Apple Inc. | Incremental secure backup and restore of user settings and data |
US8199965B1 (en) | 2007-08-17 | 2012-06-12 | Mcafee, Inc. | System, method, and computer program product for preventing image-related data loss |
US20130276061A1 (en) | 2007-09-05 | 2013-10-17 | Gopi Krishna Chebiyyam | System, method, and computer program product for preventing access to data with respect to a data access attempt associated with a remote data sharing session |
US8190920B2 (en) * | 2007-09-17 | 2012-05-29 | Seagate Technology Llc | Security features in an electronic device |
US8639941B2 (en) * | 2007-12-05 | 2014-01-28 | Bruce Buchanan | Data security in mobile devices |
US8474037B2 (en) | 2008-01-07 | 2013-06-25 | Intel Corporation | Stateless attestation system |
US8893285B2 (en) | 2008-03-14 | 2014-11-18 | Mcafee, Inc. | Securing data using integrated host-based data loss agent with encryption detection |
US8353053B1 (en) * | 2008-04-14 | 2013-01-08 | Mcafee, Inc. | Computer program product and method for permanently storing data based on whether a device is protected with an encryption mechanism and whether data in a data structure requires encryption |
US8214646B2 (en) | 2008-05-06 | 2012-07-03 | Research In Motion Limited | Bundle verification |
EP2116953B1 (en) * | 2008-05-06 | 2018-12-26 | BlackBerry Limited | Modified bundle signature verification |
US20090307705A1 (en) * | 2008-06-05 | 2009-12-10 | Neocleus Israel Ltd | Secure multi-purpose computing client |
US9077684B1 (en) | 2008-08-06 | 2015-07-07 | Mcafee, Inc. | System, method, and computer program product for determining whether an electronic mail message is compliant with an etiquette policy |
US8839458B2 (en) * | 2009-05-12 | 2014-09-16 | Nokia Corporation | Method, apparatus, and computer program for providing application security |
EP2278514B1 (en) * | 2009-07-16 | 2018-05-30 | Alcatel Lucent | System and method for providing secure virtual machines |
US20130117550A1 (en) * | 2009-08-06 | 2013-05-09 | Imation Corp. | Accessing secure volumes |
US10242182B2 (en) | 2009-10-23 | 2019-03-26 | Secure Vector, Llc | Computer security system and method |
US8429429B1 (en) * | 2009-10-23 | 2013-04-23 | Secure Vector, Inc. | Computer security system and method |
US9454652B2 (en) | 2009-10-23 | 2016-09-27 | Secure Vector, Llc | Computer security system and method |
US8775802B1 (en) | 2009-10-23 | 2014-07-08 | Secure Vector | Computer security system and method |
US8499357B1 (en) * | 2010-08-06 | 2013-07-30 | Emc Corporation | Signing a library file to verify a callback function |
US9306737B2 (en) * | 2011-05-18 | 2016-04-05 | Citrix Systems, Inc. | Systems and methods for secure handling of data |
CN103582889B (en) * | 2011-06-06 | 2015-11-25 | 株式会社索思未来 | Content-data renovation process and thumbnail image generation method |
US9032214B2 (en) | 2011-06-30 | 2015-05-12 | Dell Products L.P. | System and method for providing an image to an information handling system |
GB2509022B (en) | 2011-09-07 | 2018-01-31 | Parker Hannifin Corp | Analytical system and method for detecting volatile organic compounds in water |
US8683206B2 (en) * | 2011-09-19 | 2014-03-25 | GM Global Technology Operations LLC | System and method of authenticating multiple files using a detached digital signature |
CN103282911A (en) * | 2011-11-04 | 2013-09-04 | Sk普兰尼特有限公司 | Method for interworking trust between a trusted region and an untrusted region, method, server, and terminal for controlling the downloading of trusted applications, and control system applying same |
US9703945B2 (en) | 2012-09-19 | 2017-07-11 | Winbond Electronics Corporation | Secured computing system with asynchronous authentication |
US9412066B1 (en) | 2013-03-11 | 2016-08-09 | Symantec Corporation | Systems and methods for predicting optimum run times for software samples |
EP2840492A1 (en) * | 2013-08-23 | 2015-02-25 | British Telecommunications public limited company | Method and apparatus for modifying a computer program in a trusted manner |
US9455962B2 (en) | 2013-09-22 | 2016-09-27 | Winbond Electronics Corporation | Protecting memory interface |
US9542568B2 (en) * | 2013-09-25 | 2017-01-10 | Max Planck Gesellschaft Zur Foerderung Der Wissenschaften E.V. | Systems and methods for enforcing third party oversight of data anonymization |
US9343162B2 (en) | 2013-10-11 | 2016-05-17 | Winbond Electronics Corporation | Protection against side-channel attacks on non-volatile memory |
US9225715B2 (en) * | 2013-11-14 | 2015-12-29 | Globalfoundries U.S. 2 Llc | Securely associating an application with a well-known entity |
US9805115B1 (en) | 2014-03-13 | 2017-10-31 | Symantec Corporation | Systems and methods for updating generic file-classification definitions |
US9684705B1 (en) | 2014-03-14 | 2017-06-20 | Symantec Corporation | Systems and methods for clustering data |
US9318221B2 (en) | 2014-04-03 | 2016-04-19 | Winbond Electronics Corporation | Memory device with secure test mode
US11100242B2 (en) * | 2014-05-30 | 2021-08-24 | Apple Inc. | Restricted resource classes of an operating system |
IL234956A (en) | 2014-10-02 | 2017-10-31 | Kaluzhny Uri | Bus protection with improved key entropy |
US9674162B1 (en) | 2015-03-13 | 2017-06-06 | Amazon Technologies, Inc. | Updating encrypted cryptographic key pair |
US9893885B1 (en) | 2015-03-13 | 2018-02-13 | Amazon Technologies, Inc. | Updating cryptographic key pair |
US9479340B1 (en) | 2015-03-30 | 2016-10-25 | Amazon Technologies, Inc. | Controlling use of encryption keys |
US10003467B1 (en) | 2015-03-30 | 2018-06-19 | Amazon Technologies, Inc. | Controlling digital certificate use |
WO2016196911A1 (en) | 2015-06-05 | 2016-12-08 | Parker-Hannifin Corporation | Analysis system and method for detecting volatile organic compounds in liquid |
US10484172B2 (en) | 2015-06-05 | 2019-11-19 | Apple Inc. | Secure circuit for encryption key generation |
US10511485B2 (en) * | 2015-08-11 | 2019-12-17 | At&T Intellectual Property I, L.P. | Dynamic virtual network topology discovery engine |
GB2547921B (en) * | 2016-03-03 | 2019-05-29 | F Secure Corp | Authenticating or controlling software application on end user device |
US10019571B2 (en) | 2016-03-13 | 2018-07-10 | Winbond Electronics Corporation | Protection from side-channel attacks by varying clock delays |
US11194823B2 (en) | 2016-05-10 | 2021-12-07 | Aircloak Gmbh | Systems and methods for anonymized statistical database queries using noise elements |
US10034407B2 (en) | 2016-07-22 | 2018-07-24 | Intel Corporation | Storage sled for a data center |
US10686766B2 (en) * | 2016-09-16 | 2020-06-16 | Pivotal Software, Inc. | Credential management in cloud-based application deployment |
US10621351B2 (en) | 2016-11-01 | 2020-04-14 | Raptor Engineering, LLC. | Systems and methods for tamper-resistant verification of firmware with a trusted platform module |
US10956615B2 (en) | 2017-02-17 | 2021-03-23 | Microsoft Technology Licensing, Llc | Securely defining operating system composition without multiple authoring |
US11818100B2 (en) | 2017-12-04 | 2023-11-14 | Telefonaktiebolaget Lm Ericsson (Publ) | Automatic provisioning of streaming policies for video streaming control in CDN |
US10963269B2 (en) * | 2019-03-28 | 2021-03-30 | Lenovo (Singapore) Pte. Ltd. | Apparatus, method, and program product for storing a hardware manifest |
CA3191973A1 (en) * | 2020-09-08 | 2022-03-17 | Jason GAGNE-KEATS | Mobile device with secure private memory |
Citations (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4827508A (en) * | 1986-10-14 | 1989-05-02 | Personal Library Software, Inc. | Database usage metering and protection system and method |
US4969189A (en) * | 1988-06-25 | 1990-11-06 | Nippon Telegraph & Telephone Corporation | Authentication system and apparatus therefor |
US4977594A (en) * | 1986-10-14 | 1990-12-11 | Electronic Publishing Resources, Inc. | Database usage metering and protection system and method |
US5023907A (en) * | 1988-09-30 | 1991-06-11 | Apollo Computer, Inc. | Network license server |
US5050213A (en) * | 1986-10-14 | 1991-09-17 | Electronic Publishing Resources, Inc. | Database usage metering and protection system and method |
US5136647A (en) * | 1990-08-02 | 1992-08-04 | Bell Communications Research, Inc. | Method for secure time-stamping of digital documents |
US5140634A (en) * | 1987-09-07 | 1992-08-18 | U.S Philips Corporation | Method and apparatus for authenticating accreditations and for authenticating and signing messages |
US5276311A (en) * | 1989-03-01 | 1994-01-04 | Hartmut Hennige | Method and device for simplifying the use of a plurality of credit cards, or the like |
US5335334A (en) * | 1990-08-31 | 1994-08-02 | Hitachi, Ltd. | Data processing apparatus having a real memory region with a corresponding fixed memory protection key value and method for allocating memories therefor |
US5390247A (en) * | 1992-04-06 | 1995-02-14 | Fischer; Addison M. | Method and apparatus for creating, supporting, and using travelling programs |
US5412717A (en) * | 1992-05-15 | 1995-05-02 | Fischer; Addison M. | Computer system security method and apparatus having program authorization information data structures |
US5473692A (en) * | 1994-09-07 | 1995-12-05 | Intel Corporation | Roving software license for a hardware agent |
US5473690A (en) * | 1991-01-18 | 1995-12-05 | Gemplus Card International | Secured method for loading a plurality of applications into a microprocessor memory card |
US5491827A (en) * | 1994-01-14 | 1996-02-13 | Bull Hn Information Systems Inc. | Secure application card for sharing application data and procedures among a plurality of microprocessors |
US5544246A (en) * | 1993-09-17 | 1996-08-06 | At&T Corp. | Smartcard adapted for a plurality of service providers and for remote installation of same |
US5557518A (en) * | 1994-04-28 | 1996-09-17 | Citibank, N.A. | Trusted agents for open electronic commerce |
US5654746A (en) * | 1994-12-01 | 1997-08-05 | Scientific-Atlanta, Inc. | Secure authorization and control method and apparatus for a game delivery service |
US5664016A (en) * | 1995-06-27 | 1997-09-02 | Northern Telecom Limited | Method of building fast MACS from hash functions |
US5671280A (en) * | 1995-08-30 | 1997-09-23 | Citibank, N.A. | System and method for commercial payments using trusted agents |
US5721781A (en) * | 1995-09-13 | 1998-02-24 | Microsoft Corporation | Authentication system and method for smart card transactions |
US5745886A (en) * | 1995-06-07 | 1998-04-28 | Citibank, N.A. | Trusted agents for open distribution of electronic money |
US5757919A (en) * | 1996-12-12 | 1998-05-26 | Intel Corporation | Cryptographically protected paging subsystem |
US5787427A (en) * | 1996-01-03 | 1998-07-28 | International Business Machines Corporation | Information handling system, method, and article of manufacture for efficient object security processing by grouping objects sharing common control access policies |
US5796824A (en) * | 1992-03-16 | 1998-08-18 | Fujitsu Limited | Storage medium for preventing an irregular use by a third party |
US5812980A (en) * | 1994-02-22 | 1998-09-22 | Sega Enterprises, Ltd. | Program operating apparatus |
US5812662A (en) * | 1995-12-18 | 1998-09-22 | United Microelectronics Corporation | Method and apparatus to protect computer software |
US5841869A (en) * | 1996-08-23 | 1998-11-24 | Cheyenne Property Trust | Method and apparatus for trusted processing |
US5850442A (en) * | 1996-03-26 | 1998-12-15 | Entegrity Solutions Corporation | Secure world wide electronic commerce over an open network |
US5872847A (en) * | 1996-07-30 | 1999-02-16 | Itt Industries, Inc. | Using trusted associations to establish trust in a computer network |
US5892904A (en) * | 1996-12-06 | 1999-04-06 | Microsoft Corporation | Code certification for network transmission |
US5892902A (en) * | 1996-09-05 | 1999-04-06 | Clark; Paul C. | Intelligent token protected system with network authentication |
US5892900A (en) * | 1996-08-30 | 1999-04-06 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5910987A (en) * | 1995-02-13 | 1999-06-08 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5919257A (en) * | 1997-08-08 | 1999-07-06 | Novell, Inc. | Networked workstation intrusion detection system |
US5933498A (en) * | 1996-01-11 | 1999-08-03 | Mrj, Inc. | System for controlling access and distribution of digital property |
US5940504A (en) * | 1991-07-01 | 1999-08-17 | Infologic Software, Inc. | Licensing management system and method in which datagrams including an address of a licensee and indicative of use of a licensed product are sent from the licensee's site |
US5953502A (en) * | 1997-02-13 | 1999-09-14 | Helbig, Sr.; Walter A | Method and apparatus for enhancing computer system security |
US5958051A (en) * | 1996-11-27 | 1999-09-28 | Sun Microsystems, Inc. | Implementing digital signatures for data streams and data archives |
US5958050A (en) * | 1996-09-24 | 1999-09-28 | Electric Communities | Trusted delegation system |
US5963980A (en) * | 1993-12-07 | 1999-10-05 | Gemplus Card International | Microprocessor-based memory card that limits memory accesses by application programs and method of operation |
US5991876A (en) * | 1996-04-01 | 1999-11-23 | Copyright Clearance Center, Inc. | Electronic rights management and authorization system |
US5991399A (en) * | 1997-12-18 | 1999-11-23 | Intel Corporation | Method for securely distributing a conditional use private key to a trusted entity on a remote system |
US6006332A (en) * | 1996-10-21 | 1999-12-21 | Case Western Reserve University | Rights management system for digital media |
US6009274A (en) * | 1996-12-13 | 1999-12-28 | 3Com Corporation | Method and apparatus for automatically updating software components on end systems over a network |
US20010025281A1 (en) * | 2000-03-27 | 2001-09-27 | International Business Machines Corporation | Method for access control of aggregated data |
US6327656B2 (en) * | 1996-07-03 | 2001-12-04 | Timestamp.Com, Inc. | Apparatus and method for electronic document certification and verification |
US20020007452A1 (en) * | 1997-01-30 | 2002-01-17 | Chandler Brendan Stanton Traw | Content protection for digital transmission systems |
US20020069365A1 (en) * | 1999-02-08 | 2002-06-06 | Christopher J. Howard | Limited-use browser and security system |
US20020107803A1 (en) * | 1998-08-13 | 2002-08-08 | International Business Machines Corporation | Method and system of preventing unauthorized rerecording of multimedia content |
US20020120936A1 (en) * | 2000-10-10 | 2002-08-29 | Del Beccaro David J. | System and method for receiving broadcast audio/video works and for enabling a consumer to purchase the received audio/video works |
US20020152173A1 (en) * | 2001-04-05 | 2002-10-17 | Rudd James M. | System and methods for managing the distribution of electronic content |
US20030056102A1 (en) * | 2001-09-20 | 2003-03-20 | International Business Machines Corporation | Method and apparatus for protecting ongoing system integrity of a software product using digital signatures |
US6920861B2 (en) * | 2003-06-30 | 2005-07-26 | Aisan Kogyo Kabushiki Kaisha | Fuel injection control devices for internal combustion engines |
US6950943B1 (en) * | 1998-12-23 | 2005-09-27 | International Business Machines Corporation | System for electronic repository of data enforcing access control on data search and retrieval |
US7305553B2 (en) * | 2001-11-16 | 2007-12-04 | Microsoft Corporation | Manifest-based trusted agent management in a trusted operating system environment |
Family Cites Families (73)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3513345A1 (en) * | 1985-04-13 | 1986-10-16 | Brown, Boveri & Cie Ag, 6800 Mannheim | MEASURING DEVICE HOUSING |
DE3812235A1 (en) * | 1988-04-13 | 1989-10-26 | Draegerwerk Ag | BREATH ANALYSIS SYSTEM |
JPH0779509B2 (en) * | 1989-11-29 | 1995-08-23 | モトローラ・インコーポレーテッド | Signal jamming protection method and device |
US5470450A (en) * | 1992-04-20 | 1995-11-28 | Mitsubishi Materials Corporation | Edge protector for electrolytic electrode, and spreader bar |
AU683038B2 (en) * | 1993-08-10 | 1997-10-30 | Addison M. Fischer | A method for operating computers and for processing information among computers |
US5575518A (en) * | 1994-12-19 | 1996-11-19 | Payne; Douglas F. | Gripper tool for handling lumber |
US6157721A (en) | 1996-08-12 | 2000-12-05 | Intertrust Technologies Corp. | Systems and methods using cryptography to protect secure computing environments |
US5943422A (en) | 1996-08-12 | 1999-08-24 | Intertrust Technologies Corp. | Steganographic techniques for securely delivering electronic digital rights management control information over insecure communication channels |
JPH09212261A (en) * | 1996-01-31 | 1997-08-15 | Hitachi Ltd | Power supply control system for information processor |
US6038551A (en) * | 1996-03-11 | 2000-03-14 | Microsoft Corporation | System and method for configuring and managing resources on a multi-purpose integrated circuit card using a personal computer |
US5978484A (en) * | 1996-04-25 | 1999-11-02 | Microsoft Corporation | System and method for safely distributing executable objects |
US5944821A (en) * | 1996-07-11 | 1999-08-31 | Compaq Computer Corporation | Secure software registration and integrity assessment in a computer system |
US5958061A (en) * | 1996-07-24 | 1999-09-28 | Transmeta Corporation | Host microprocessor with apparatus for temporarily holding target processor state |
US6253323B1 (en) * | 1996-11-01 | 2001-06-26 | Intel Corporation | Object-based digital signatures |
US6154844A (en) * | 1996-11-08 | 2000-11-28 | Finjan Software, Ltd. | System and method for attaching a downloadable security profile to a downloadable |
US6367012B1 (en) | 1996-12-06 | 2002-04-02 | Microsoft Corporation | Embedding certifications in executable files for network transmission |
JPH10171607A (en) * | 1996-12-09 | 1998-06-26 | Matsushita Electric Ind Co Ltd | Data transfer system |
US6381741B1 (en) | 1998-05-18 | 2002-04-30 | Liberate Technologies | Secure data downloading, recovery and upgrading |
US6192473B1 (en) * | 1996-12-24 | 2001-02-20 | Pitney Bowes Inc. | System and method for mutual authentication and secure communications between a postage security device and a meter server |
US6073124A (en) * | 1997-01-29 | 2000-06-06 | Shopnow.Com Inc. | Method and system for securely incorporating electronic information into an online purchasing application |
US5920861A (en) * | 1997-02-25 | 1999-07-06 | Intertrust Technologies Corp. | Techniques for defining, using and manipulating rights management data structures |
US6477648B1 (en) | 1997-03-23 | 2002-11-05 | Novell, Inc. | Trusted workstation in a networked client/server computing system |
US6212636B1 (en) * | 1997-05-01 | 2001-04-03 | Itt Manufacturing Enterprises | Method for establishing trust in a computer network via association |
US6175924B1 (en) * | 1997-06-20 | 2001-01-16 | International Business Machines Corp. | Method and apparatus for protecting application data in secure storage areas |
US6229894B1 (en) * | 1997-07-14 | 2001-05-08 | Entrust Technologies, Ltd. | Method and apparatus for access to user-specific encryption information |
JPH1145507A (en) | 1997-07-24 | 1999-02-16 | Toshiba Corp | Information reproducing device, recognition device, and information processing system |
US6233685B1 (en) * | 1997-08-29 | 2001-05-15 | Sean William Smith | Establishing and employing the provable untampered state of a device |
US6032257A (en) * | 1997-08-29 | 2000-02-29 | Compaq Computer Corporation | Hardware theft-protection architecture |
US6185678B1 (en) * | 1997-10-02 | 2001-02-06 | Trustees Of The University Of Pennsylvania | Secure and reliable bootstrap architecture |
US6148387A (en) * | 1997-10-09 | 2000-11-14 | Phoenix Technologies, Ltd. | System and method for securely utilizing basic input and output system (BIOS) services |
US6026166A (en) * | 1997-10-20 | 2000-02-15 | Cryptoworx Corporation | Digitally certifying a user identity and a computer system in combination |
US6112181A (en) * | 1997-11-06 | 2000-08-29 | Intertrust Technologies Corporation | Systems and methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information |
US6560706B1 (en) | 1998-01-26 | 2003-05-06 | Intel Corporation | Interface for ensuring system boot image integrity and authenticity |
US6725373B2 (en) * | 1998-03-25 | 2004-04-20 | Intel Corporation | Method and apparatus for verifying the integrity of digital objects using signed manifests |
US6148402A (en) * | 1998-04-01 | 2000-11-14 | Hewlett-Packard Company | Apparatus and method for remotely executing commands using distributed computing environment remote procedure calls |
US6009401A (en) * | 1998-04-06 | 1999-12-28 | Preview Systems, Inc. | Relicensing of electronically purchased software |
US6175917B1 (en) * | 1998-04-23 | 2001-01-16 | Vpnet Technologies, Inc. | Method and apparatus for swapping a computer operating system |
US6118873A (en) * | 1998-04-24 | 2000-09-12 | International Business Machines Corporation | System for encrypting broadcast programs in the presence of compromised receiver devices |
US6092189A (en) * | 1998-04-30 | 2000-07-18 | Compaq Computer Corporation | Channel configuration program server architecture |
US6223284B1 (en) * | 1998-04-30 | 2001-04-24 | Compaq Computer Corporation | Method and apparatus for remote ROM flashing and security management for a computer system |
WO1999057634A1 (en) * | 1998-05-06 | 1999-11-11 | Jcp Computer Services Ltd. | Processing apparatus and method |
US6363486B1 (en) | 1998-06-05 | 2002-03-26 | Intel Corporation | Method of controlling usage of software components |
US6189100B1 (en) * | 1998-06-30 | 2001-02-13 | Microsoft Corporation | Ensuring the integrity of remote boot client data |
US6105137A (en) * | 1998-07-02 | 2000-08-15 | Intel Corporation | Method and apparatus for integrity verification, authentication, and secure linkage of software modules |
US6230285B1 (en) * | 1998-09-08 | 2001-05-08 | Symantec Corporation | Boot failure recovery |
US6463535B1 (en) * | 1998-10-05 | 2002-10-08 | Intel Corporation | System and method for verifying the integrity and authorization of software before execution in a local platform |
US6327652B1 (en) | 1998-10-26 | 2001-12-04 | Microsoft Corporation | Loading and identifying a digital rights management operating system |
US6820063B1 (en) * | 1998-10-26 | 2004-11-16 | Microsoft Corporation | Controlling access to content based on certificates and access predicates |
US7194092B1 (en) * | 1998-10-26 | 2007-03-20 | Microsoft Corporation | Key-based secure storage |
US6330670B1 (en) | 1998-10-26 | 2001-12-11 | Microsoft Corporation | Digital rights management operating system |
US6330588B1 (en) | 1998-12-21 | 2001-12-11 | Philips Electronics North America Corporation | Verification of software agents and agent activities |
US6470450B1 (en) * | 1998-12-23 | 2002-10-22 | Entrust Technologies Limited | Method and apparatus for controlling application access to limited access based data |
US6272629B1 (en) * | 1998-12-29 | 2001-08-07 | Intel Corporation | Method and apparatus for establishing network connection for a processor without an operating system boot |
US6263431B1 (en) * | 1998-12-31 | 2001-07-17 | Intel Corporation | Operating system bootstrap security mechanism |
US6539093B1 (en) * | 1998-12-31 | 2003-03-25 | International Business Machines Corporation | Key ring organizer for an electronic business using public key infrastructure |
US6480961B2 (en) | 1999-03-02 | 2002-11-12 | Audible, Inc. | Secure streaming of digital audio/visual content |
US6389537B1 (en) | 1999-04-23 | 2002-05-14 | Intel Corporation | Platform and method for assuring integrity of trusted agent communications |
US6675382B1 (en) * | 1999-06-14 | 2004-01-06 | Sun Microsystems, Inc. | Software packaging and distribution system |
US6629150B1 (en) * | 1999-06-18 | 2003-09-30 | Intel Corporation | Platform and method for creating and using a digital container |
CA2310535A1 (en) * | 1999-06-30 | 2000-12-30 | International Business Machines Corporation | Vault controller context manager and methods of operation for securely maintaining state information between successive browser connections in an electronic business system |
US6477252B1 (en) | 1999-08-29 | 2002-11-05 | Intel Corporation | Digital video content transmission ciphering and deciphering method and apparatus |
US6748538B1 (en) * | 1999-11-03 | 2004-06-08 | Intel Corporation | Integrity scanner |
US6757824B1 (en) * | 1999-12-10 | 2004-06-29 | Microsoft Corporation | Client-side boot domains and boot rules |
US6871344B2 (en) * | 2000-04-24 | 2005-03-22 | Microsoft Corporation | Configurations for binding software assemblies to application programs |
US6874143B1 (en) * | 2000-06-21 | 2005-03-29 | Microsoft Corporation | Architectures for and methods of providing network-based software extensions |
US6766353B1 (en) * | 2000-07-11 | 2004-07-20 | Motorola, Inc. | Method for authenticating a JAVA archive (JAR) for portable devices |
US6931545B1 (en) * | 2000-08-28 | 2005-08-16 | Contentguard Holdings, Inc. | Systems and methods for integrity certification and verification of content consumption environments |
US6915433B1 (en) * | 2000-09-28 | 2005-07-05 | Sumisho Computer Systems Corporation | Securely extensible component meta-data |
US20030182414A1 (en) * | 2003-05-13 | 2003-09-25 | O'neill Patrick J. | System and method for updating and distributing information |
US6910128B1 (en) * | 2000-11-21 | 2005-06-21 | International Business Machines Corporation | Method and computer program product for processing signed applets |
US7043637B2 (en) * | 2001-03-21 | 2006-05-09 | Microsoft Corporation | On-disk file format for a serverless distributed file system |
US7107618B1 (en) * | 2001-09-25 | 2006-09-12 | Mcafee, Inc. | System and method for certifying that data received over a computer network has been checked for viruses |
US7243230B2 (en) * | 2001-11-16 | 2007-07-10 | Microsoft Corporation | Transferring application secrets in a trusted operating system environment |
- 2001
  - 2001-11-16 US US09/993,370 patent/US7137004B2/en not_active Expired - Fee Related
- 2005
  - 2005-08-18 US US11/206,578 patent/US7305553B2/en not_active Expired - Fee Related
  - 2005-08-18 US US11/206,585 patent/US7634661B2/en not_active Expired - Fee Related
  - 2005-08-18 US US11/207,081 patent/US20050278477A1/en not_active Abandoned
  - 2005-08-18 US US11/206,579 patent/US7257707B2/en not_active Expired - Fee Related
  - 2005-08-18 US US11/206,519 patent/US7107463B2/en not_active Expired - Fee Related
- 2006
  - 2006-11-09 US US11/558,125 patent/US20070174921A1/en not_active Abandoned
Patent Citations (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5410598A (en) * | 1986-10-14 | 1995-04-25 | Electronic Publishing Resources, Inc. | Database usage metering and protection system and method |
US4977594A (en) * | 1986-10-14 | 1990-12-11 | Electronic Publishing Resources, Inc. | Database usage metering and protection system and method |
US5050213A (en) * | 1986-10-14 | 1991-09-17 | Electronic Publishing Resources, Inc. | Database usage metering and protection system and method |
US4827508A (en) * | 1986-10-14 | 1989-05-02 | Personal Library Software, Inc. | Database usage metering and protection system and method |
US5140634A (en) * | 1987-09-07 | 1992-08-18 | U.S Philips Corporation | Method and apparatus for authenticating accreditations and for authenticating and signing messages |
US4969189A (en) * | 1988-06-25 | 1990-11-06 | Nippon Telegraph & Telephone Corporation | Authentication system and apparatus therefor |
US5023907A (en) * | 1988-09-30 | 1991-06-11 | Apollo Computer, Inc. | Network license server |
US5276311A (en) * | 1989-03-01 | 1994-01-04 | Hartmut Hennige | Method and device for simplifying the use of a plurality of credit cards, or the like |
US5136647A (en) * | 1990-08-02 | 1992-08-04 | Bell Communications Research, Inc. | Method for secure time-stamping of digital documents |
US5335334A (en) * | 1990-08-31 | 1994-08-02 | Hitachi, Ltd. | Data processing apparatus having a real memory region with a corresponding fixed memory protection key value and method for allocating memories therefor |
US5473690A (en) * | 1991-01-18 | 1995-12-05 | Gemplus Card International | Secured method for loading a plurality of applications into a microprocessor memory card |
US5940504A (en) * | 1991-07-01 | 1999-08-17 | Infologic Software, Inc. | Licensing management system and method in which datagrams including an address of a licensee and indicative of use of a licensed product are sent from the licensee's site |
US5796824A (en) * | 1992-03-16 | 1998-08-18 | Fujitsu Limited | Storage medium for preventing an irregular use by a third party |
US5390247A (en) * | 1992-04-06 | 1995-02-14 | Fischer; Addison M. | Method and apparatus for creating, supporting, and using travelling programs |
US5412717A (en) * | 1992-05-15 | 1995-05-02 | Fischer; Addison M. | Computer system security method and apparatus having program authorization information data structures |
US5544246A (en) * | 1993-09-17 | 1996-08-06 | At&T Corp. | Smartcard adapted for a plurality of service providers and for remote installation of same |
US5963980A (en) * | 1993-12-07 | 1999-10-05 | Gemplus Card International | Microprocessor-based memory card that limits memory accesses by application programs and method of operation |
US5491827A (en) * | 1994-01-14 | 1996-02-13 | Bull Hn Information Systems Inc. | Secure application card for sharing application data and procedures among a plurality of microprocessors |
US5812980A (en) * | 1994-02-22 | 1998-09-22 | Sega Enterprises, Ltd. | Program operating apparatus |
US5557518A (en) * | 1994-04-28 | 1996-09-17 | Citibank, N.A. | Trusted agents for open electronic commerce |
US5473692A (en) * | 1994-09-07 | 1995-12-05 | Intel Corporation | Roving software license for a hardware agent |
US5654746A (en) * | 1994-12-01 | 1997-08-05 | Scientific-Atlanta, Inc. | Secure authorization and control method and apparatus for a game delivery service |
US5982891A (en) * | 1995-02-13 | 1999-11-09 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5917912A (en) * | 1995-02-13 | 1999-06-29 | Intertrust Technologies Corporation | System and methods for secure transaction management and electronic rights protection |
US5910987A (en) * | 1995-02-13 | 1999-06-08 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5745886A (en) * | 1995-06-07 | 1998-04-28 | Citibank, N.A. | Trusted agents for open distribution of electronic money |
US5664016A (en) * | 1995-06-27 | 1997-09-02 | Northern Telecom Limited | Method of building fast MACS from hash functions |
US5671280A (en) * | 1995-08-30 | 1997-09-23 | Citibank, N.A. | System and method for commercial payments using trusted agents |
US5721781A (en) * | 1995-09-13 | 1998-02-24 | Microsoft Corporation | Authentication system and method for smart card transactions |
US5812662A (en) * | 1995-12-18 | 1998-09-22 | United Microelectronics Corporation | Method and apparatus to protect computer software |
US5787427A (en) * | 1996-01-03 | 1998-07-28 | International Business Machines Corporation | Information handling system, method, and article of manufacture for efficient object security processing by grouping objects sharing common control access policies |
US5933498A (en) * | 1996-01-11 | 1999-08-03 | Mrj, Inc. | System for controlling access and distribution of digital property |
US5850442A (en) * | 1996-03-26 | 1998-12-15 | Entegrity Solutions Corporation | Secure world wide electronic commerce over an open network |
US5991876A (en) * | 1996-04-01 | 1999-11-23 | Copyright Clearance Center, Inc. | Electronic rights management and authorization system |
US6327656B2 (en) * | 1996-07-03 | 2001-12-04 | Timestamp.Com, Inc. | Apparatus and method for electronic document certification and verification |
US5872847A (en) * | 1996-07-30 | 1999-02-16 | Itt Industries, Inc. | Using trusted associations to establish trust in a computer network |
US5841869A (en) * | 1996-08-23 | 1998-11-24 | Cheyenne Property Trust | Method and apparatus for trusted processing |
US5892900A (en) * | 1996-08-30 | 1999-04-06 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5892902A (en) * | 1996-09-05 | 1999-04-06 | Clark; Paul C. | Intelligent token protected system with network authentication |
US5958050A (en) * | 1996-09-24 | 1999-09-28 | Electric Communities | Trusted delegation system |
US6006332A (en) * | 1996-10-21 | 1999-12-21 | Case Western Reserve University | Rights management system for digital media |
US5958051A (en) * | 1996-11-27 | 1999-09-28 | Sun Microsystems, Inc. | Implementing digital signatures for data streams and data archives |
US5892904A (en) * | 1996-12-06 | 1999-04-06 | Microsoft Corporation | Code certification for network transmission |
US5757919A (en) * | 1996-12-12 | 1998-05-26 | Intel Corporation | Cryptographically protected paging subsystem |
US6009274A (en) * | 1996-12-13 | 1999-12-28 | 3Com Corporation | Method and apparatus for automatically updating software components on end systems over a network |
US20020007452A1 (en) * | 1997-01-30 | 2002-01-17 | Chandler Brendan Stanton Traw | Content protection for digital transmission systems |
US5953502A (en) * | 1997-02-13 | 1999-09-14 | Helbig, Sr.; Walter A | Method and apparatus for enhancing computer system security |
US5919257A (en) * | 1997-08-08 | 1999-07-06 | Novell, Inc. | Networked workstation intrusion detection system |
US5991399A (en) * | 1997-12-18 | 1999-11-23 | Intel Corporation | Method for securely distributing a conditional use private key to a trusted entity on a remote system |
US20020107803A1 (en) * | 1998-08-13 | 2002-08-08 | International Business Machines Corporation | Method and system of preventing unauthorized rerecording of multimedia content |
US6950943B1 (en) * | 1998-12-23 | 2005-09-27 | International Business Machines Corporation | System for electronic repository of data enforcing access control on data search and retrieval |
US20020069365A1 (en) * | 1999-02-08 | 2002-06-06 | Christopher J. Howard | Limited-use browser and security system |
US20010025281A1 (en) * | 2000-03-27 | 2001-09-27 | International Business Machines Corporation | Method for access control of aggregated data |
US20020120936A1 (en) * | 2000-10-10 | 2002-08-29 | Del Beccaro David J. | System and method for receiving broadcast audio/video works and for enabling a consumer to purchase the received audio/video works |
US20020152173A1 (en) * | 2001-04-05 | 2002-10-17 | Rudd James M. | System and methods for managing the distribution of electronic content |
US20030056102A1 (en) * | 2001-09-20 | 2003-03-20 | International Business Machines Corporation | Method and apparatus for protecting ongoing system integrity of a software product using digital signatures |
US7305553B2 (en) * | 2001-11-16 | 2007-12-04 | Microsoft Corporation | Manifest-based trusted agent management in a trusted operating system environment |
US6920861B2 (en) * | 2003-06-30 | 2005-07-26 | Aisan Kogyo Kabushiki Kaisha | Fuel injection control devices for internal combustion engines |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7500100B1 (en) * | 2003-09-10 | 2009-03-03 | Cisco Technology, Inc. | Method and apparatus for verifying revocation status of a digital certificate |
US20060047944A1 (en) * | 2004-09-01 | 2006-03-02 | Roger Kilian-Kehr | Secure booting of a computing device |
US8683552B2 (en) * | 2005-02-21 | 2014-03-25 | Panasonic Corporation | Information distribution system and terminal device |
US20080263637A1 (en) * | 2005-02-21 | 2008-10-23 | Masao Nonaka | Information Distribution System and Terminal Device |
US20090083728A1 (en) * | 2007-09-25 | 2009-03-26 | Lehman Brothers Inc. | System and method for application management |
US8490078B2 (en) | 2007-09-25 | 2013-07-16 | Barclays Capital, Inc. | System and method for application management |
WO2009041990A1 (en) * | 2007-09-25 | 2009-04-02 | Barclays Capital, Inc. | System and method for application management |
US20100031355A1 (en) * | 2008-07-30 | 2010-02-04 | Sun Microsystems, Inc. | Unvalidated privilege cap |
US8856938B2 (en) * | 2008-07-30 | 2014-10-07 | Oracle America, Inc. | Unvalidated privilege cap |
US20100138833A1 (en) * | 2008-12-01 | 2010-06-03 | Microsoft Corporation | Resource coverage and analysis |
US20120166795A1 (en) * | 2010-12-24 | 2012-06-28 | Wood Matthew D | Secure application attestation using dynamic measurement kernels |
US9087196B2 (en) * | 2010-12-24 | 2015-07-21 | Intel Corporation | Secure application attestation using dynamic measurement kernels |
EP3438868A4 (en) * | 2016-04-01 | 2019-09-11 | China Unionpay Co., Ltd. | Tee access control method and mobile terminal implementing same |
US11544378B2 (en) * | 2016-04-01 | 2023-01-03 | China Unionpay Co., Ltd. | Tee access control method and mobile terminal implementing same |
Also Published As
Publication number | Publication date |
---|---|
US7107463B2 (en) | 2006-09-12 |
US20030097579A1 (en) | 2003-05-22 |
US7305553B2 (en) | 2007-12-04 |
US20050278531A1 (en) | 2005-12-15 |
US20050278477A1 (en) | 2005-12-15 |
US20050289351A1 (en) | 2005-12-29 |
US7634661B2 (en) | 2009-12-15 |
US20050278530A1 (en) | 2005-12-15 |
US7257707B2 (en) | 2007-08-14 |
US7137004B2 (en) | 2006-11-14 |
US20060005230A1 (en) | 2006-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7257707B2 (en) | Manifest-based trusted agent management in a trusted operating system environment |
US7577839B2 (en) | Transferring application secrets in a trusted operating system environment | |
US7159240B2 (en) | Operating system upgrades in a trusted operating system environment | |
US7434263B2 (en) | System and method for secure storage data using a key | |
US8549313B2 (en) | Method and system for integrated securing and managing of virtual machines and virtual appliances | |
US7694121B2 (en) | System and method for protected operating system boot using state validation | |
US8938618B2 (en) | Device booting with an initial protection component | |
US6327652B1 (en) | Loading and identifying a digital rights management operating system | |
US6820063B1 (en) | Controlling access to content based on certificates and access predicates | |
US6330670B1 (en) | Digital rights management operating system | |
EP1391802B1 (en) | Saving and retrieving data based on symmetric key encryption | |
US20050060549A1 (en) | Controlling access to content based on certificates and access predicates | |
CN114651253A (en) | Virtual environment type verification for policy enforcement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: MICROSOFT CORPORATION; REEL/FRAME: 034766/0001; Effective date: 20141014 |