US20090222915A1 - System and Method for Securely Clearing Secret Data that Remain in a Computer System Memory - Google Patents
- Publication number
- US20090222915A1 (application Ser. No. 12/040,953)
- Authority
- US
- United States
- Prior art keywords
- memory
- secret
- counter
- requesters
- security module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/70—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
- G06F21/78—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
- G06F21/79—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
Definitions
- the present invention relates to a system and method that securely clears secret data from computer system memory. More particularly, the present invention relates to a system and method that securely clears secret data that has been provided by a Trusted Platform Module (TPM).
- While the TPM is quite useful in releasing secrets only when proper authentication is provided, a challenge exists in ensuring that secrets, having been released to authenticated requesters, are not compromised when the system is re-booted.
- a requestor might store a secret in RAM that has been allocated to the requestor, but when the system is re-booted the RAM where the secret was stored no longer belongs to the original requestor and may fall into the hands of a malevolent user.
- One approach is to have requestors clean up (e.g. write over) the secret once the requestor is finished using it.
- a challenge to this approach is that the system can generally be booted at any time and, therefore, the requestor might not have the opportunity to clean up the memory where secrets are stored prior to a re-boot.
- Another approach would be to clear (write over) all of the RAM every time the system is rebooted so that any secret data would be written over before the system could be used by a malevolent user.
- the substantial challenge to this approach is that modern systems often contain many megabytes of RAM and, consequently, this approach would often require a long time to clear all of the memory, likely leading to user frustration and dissatisfaction while waiting to use the system.
- a security module (e.g., a TPM) is included in the computer system.
- the security module receives requests for a secret that is secured by the security module.
- the requests are received from requesters, such as processes and applications running on the computer system.
- the security module releases the secret to the requesters and the released secrets are stored in memory areas allocated to the requesters (e.g., system RAM).
- Each time the secret is released by the security module a counter is incremented.
- When a requestor is finished using the secret, it sends a notification to the security module that indicates that the requestor has removed the secret from the requestor's allocated memory area.
- the security module then decrements the counter each time one of the notifications is received.
- when the computer system is subsequently booted, the counter is compared to the initialization value to determine whether notifications were not received from one of the requesters during the previous running of the computer system. If the counter was not decremented back to the initialization value (e.g., zero), then a memory area is scrubbed.
- the memory area that is scrubbed includes the memory areas where the secret was stored in system memory (RAM).
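- The counter bookkeeping described above can be sketched as follows (an illustrative Python sketch, not the claimed implementation; the class and method names are hypothetical):

```python
class SecurityModule:
    """Illustrative stand-in for the security module's secret-release bookkeeping."""

    INIT_VALUE = 0  # the counter's initialization value (e.g., zero)

    def __init__(self, secret: bytes):
        self._secret = secret
        self._counter = self.INIT_VALUE

    def release(self) -> bytes:
        # Each time the secret is released, the counter is incremented.
        self._counter += 1
        return self._secret

    def notify_scrubbed(self) -> None:
        # Each notification that a requester scrubbed its copy decrements it.
        self._counter -= 1

    def needs_scrub_on_boot(self) -> bool:
        # At boot, a counter not back at its initialization value means a
        # released secret was never accounted for, so memory must be scrubbed.
        return self._counter != self.INIT_VALUE


tpm = SecurityModule(b"key-material")
tpm.release()
tpm.release()
tpm.notify_scrubbed()
print(tpm.needs_scrub_on_boot())  # one release unaccounted for -> prints True
```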
- FIG. 1 is a block diagram of a data processing system in which the methods described herein can be implemented
- FIG. 2 provides an extension of the information handling system environment shown in FIG. 1 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment;
- FIG. 3 is a high level diagram showing the interaction between the Trusted Platform Module (TPM) and the application that is using the secrets to keep a counter corresponding to the various secrets maintained by the TPM;
- FIG. 4 is a flowchart showing steps by the BIOS and the TPM when booting a system and checking whether any secrets are potentially at risk and handling the situation accordingly;
- FIG. 5 is a flowchart showing the interaction between the requesting application and the TPM in releasing secrets and accounting for secrets that have been scrubbed by the application;
- FIG. 6 is a flowchart showing steps performed by the TPM to validate an application's scrub notice and decrement the counter corresponding to the secret;
- FIG. 7 is a flowchart showing steps taken by the TPM to process a notification received from a requester that a requester is no longer using a secret
- FIG. 8 is a flowchart showing steps performed during system bring-up to check if any secrets are at risk and writing over selective memory where secrets were stored during a prior usage of the computer system.
- FIG. 9 is a flowchart showing steps taken by the bring-up process to retrieve the memory addresses where secrets were stored during the prior usage of the computer system.
- FIG. 1 illustrates a computing environment that is suitable for implementing the software and/or hardware techniques associated with the invention.
- FIG. 2 illustrates a networked environment as an extension of the basic computing environment, emphasizing that modern computing techniques can be performed across multiple discrete devices.
- FIG. 1 illustrates information handling system 100 which is a simplified example of a computer system capable of performing the computing operations described herein.
- Information handling system 100 includes one or more processors 110 which are coupled to processor interface bus 112 .
- Processor interface bus 112 connects processors 110 to Northbridge 115 , which is also known as the Memory Controller Hub (MCH).
- Northbridge 115 is connected to system memory 120 and provides a means for processor(s) 110 to access the system memory.
- Graphics controller 125 is also connected to Northbridge 115 .
- PCI Express bus 118 is used to connect Northbridge 115 to graphics controller 125 .
- Graphics controller 125 is connected to display device 130 , such as a computer monitor.
- Northbridge 115 and Southbridge 135 are connected to each other using bus 119 .
- the bus is a Direct Media Interface (DMI) bus that transfers data at high speeds in each direction between Northbridge 115 and Southbridge 135 .
- a Peripheral Component Interconnect (PCI) bus is used to connect the Northbridge and the Southbridge.
- Southbridge 135 also known as the I/O Controller Hub (ICH) is a chip that generally implements capabilities that operate at slower speeds than the capabilities provided by the Northbridge.
- Southbridge 135 typically provides various busses used to connect various components. These busses can include PCI and PCI Express busses, an ISA bus, a System Management Bus (SMBus or SMB), and a Low Pin Count (LPC) bus.
- the LPC bus is often used to connect low-bandwidth devices, such as boot ROM 196 and “legacy” I/O devices (using a “super I/O” chip).
- the “legacy” I/O devices ( 198 ) can include serial and parallel ports, keyboard, mouse, floppy disk controller.
- the LPC bus is also used to connect Southbridge 135 to Trusted Platform Module (TPM) 195 .
- Other components often included in Southbridge 135 include a Direct Memory Access (DMA) controller, a Programmable Interrupt Controller (PIC), and a storage device controller, which connects Southbridge 135 to nonvolatile storage device 185 , such as a hard disk drive, using bus 184 .
- ExpressCard 155 is a slot used to connect hot-pluggable devices to the information handling system.
- ExpressCard 155 supports both PCI Express and USB connectivity as it is connected to Southbridge 135 using both the Universal Serial Bus (USB) and the PCI Express bus.
- Southbridge 135 includes USB Controller 140 that provides USB connectivity to devices that connect to the USB. These devices include webcam (camera) 150 , infrared (IR) receiver 148 , Bluetooth device 146 which provides for wireless personal area networks (PANs), keyboard and trackpad 144 , and other miscellaneous USB connected devices 142 , such as a mouse, removable nonvolatile storage device 145 , modems, network cards, ISDN connectors, fax, printers, USB hubs, and many other types of USB connected devices. While removable nonvolatile storage device 145 is shown as a USB-connected device, removable nonvolatile storage device 145 could be connected using a different interface, such as a Firewire interface, etc.
- Wireless Local Area Network (LAN) device 175 is connected to Southbridge 135 via the PCI or PCI Express bus 172 .
- LAN device 175 typically implements one of the IEEE 802.11 standards of over-the-air modulation techniques that all use the same protocol to wirelessly communicate between information handling system 100 and another computer system or device.
- Optical storage device 190 is connected to Southbridge 135 using Serial ATA (SATA) bus 188 .
- Serial ATA adapters and devices communicate over a high-speed serial link.
- the Serial ATA bus is also used to connect Southbridge 135 to other forms of storage devices, such as hard disk drives.
- Audio circuitry 160 , such as a sound card, is connected to Southbridge 135 via bus 158 .
- Audio circuitry 160 is used to provide functionality such as audio line-in and optical digital audio in port 162 , optical digital output and headphone jack 164 , internal speakers 166 , and internal microphone 168 .
- Ethernet controller 170 is connected to Southbridge 135 using a bus, such as the PCI or PCI Express bus. Ethernet controller 170 is used to connect information handling system 100 with a computer network, such as a Local Area Network (LAN), the Internet, and other public and private computer networks.
- an information handling system may take many forms.
- an information handling system may take the form of a desktop, server, portable, laptop, notebook, or other form factor computer or data processing system.
- an information handling system may take other form factors such as a personal digital assistant (PDA), a gaming device, an ATM machine, a portable telephone device, a communication device, or other devices that include a processor and memory.
- the Trusted Platform Module (TPM 195 ) shown in FIG. 1 and described herein to provide security functions is but one example of a hardware security module (HSM). Therefore, the TPM described and claimed herein includes any type of HSM including, but not limited to, hardware security devices that conform to the Trusted Computing Group (TCG) standard entitled "Trusted Platform Module (TPM) Specification Version 1.2."
- the TPM is a hardware security subsystem that may be incorporated into any number of information handling systems, such as those outlined in FIG. 2 .
- FIG. 2 provides an extension of the information handling system environment shown in FIG. 1 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment.
- Types of information handling systems range from small handheld devices, such as handheld computer/mobile telephone 210 to large mainframe systems, such as mainframe computer 270 .
- Examples of handheld computer 210 include personal digital assistants (PDAs) and personal entertainment devices, such as MP3 players, portable televisions, and compact disc players.
- Other examples of information handling systems include pen, or tablet, computer 220 , laptop, or notebook, computer 230 , workstation 240 , personal computer system 250 , and server 260 .
- Other types of information handling systems that are not individually shown in FIG. 2 are represented by information handling system 280 .
- the various information handling systems can be networked together using computer network 200 .
- Types of computer networks that can be used to interconnect the various information handling systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN), other wireless networks, and any other network topology that can be used to interconnect the information handling systems.
- Many of the information handling systems include nonvolatile data stores, such as hard drives and/or nonvolatile memory.
- nonvolatile data store 265 can be a component that is external to the various information handling systems or can be internal to one of the information handling systems.
- removable nonvolatile storage device 145 can be shared amongst two or more information handling systems using various techniques, such as connecting the removable nonvolatile storage device 145 to a USB port or other connector of the information handling systems.
- FIG. 3 is a high level diagram showing the interaction between the Trusted Platform Module (TPM) and the application that is using the secrets to keep a counter corresponding to the various secrets maintained by the TPM.
- TPM 195 is a security module that, among other activities, safeguards secrets (e.g., encryption keys, etc.) so that unauthorized (e.g., malevolent) users and processes are unable to retrieve and abuse the secrets.
- TPM 195 includes nonvolatile storage, such as nonvolatile memory, in which secrets 310 are stored.
- TPM 195 has counters 314 that keep track of the number of times a secret has been requested.
- validation data 312 is used, as will be explained in further detail below.
- Processes 360 include instructions that are executed by processor(s) 110 of an information handling system, such as information handling system 100 shown in FIG. 1 . Some of these processes are “requesters” of secrets 310 that are maintained by TPM 195 .
- a process requests a secret (e.g., an encryption key) from the TPM.
- the TPM performs authentication processes to ensure that the secret is only provided to authenticated requesters. If authentication is successful, then TPM 195 releases the secret to the requester where, at step 370 , the requestor receives and uses the secret (e.g., uses an encryption key to encrypt a file or data packet, etc.). While using the secret, the requester stores the secret in memory (e.g., RAM) that has been allocated to the requester (memory 375 ). The operating system ensures that malevolent users and processes cannot access the memory that has been allocated to the requestor process.
- the TPM when the TPM releases the secret to the requesting process it also sends validation data to the requestor.
- the validation data is used by the requester when notifying the TPM that the requester is no longer using the secret and has scrubbed the memory where the secret was stored in memory 375 .
- the requestor is finished using the secret and scrubs the memory so that the secret no longer remains in memory 375 .
- the requestor scrubs the memory by invoking a command (or commands) that writes a value (e.g., zeros) to the memory location where the secret was stored in memory 375 .
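- As a minimal sketch (assuming a mutable buffer stands in for the requestor's allocated memory region), scrubbing amounts to overwriting every byte with a predetermined value:

```python
def scrub(buffer: bytearray) -> None:
    """Overwrite the region that held the secret with zeros."""
    for i in range(len(buffer)):
        buffer[i] = 0


secret_area = bytearray(b"encryption-key")  # requestor's copy of the secret
scrub(secret_area)
assert all(b == 0 for b in secret_area)     # no trace of the secret remains
```

In a systems language, a compiler-resistant primitive intended for clearing secrets would be needed so the overwrite is not optimized away; Python is used here purely for illustration.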
- the requestor sends a notification to the TPM that informs the TPM that the requester is no longer using the secret.
- the notification would be accompanied by validation data that corresponds to the original validation data that was sent by the TPM.
- the TPM checks to make sure that the validation data sent by the process corresponds to the validation data that was used when the secret was released.
- in one embodiment, the requester returns the same validation data value (e.g., a random number) that the TPM originally sent.
- in another embodiment, the validation data value sent by the requester corresponds to the expected validation data value but is not the same value.
- for example, the validation data value that was sent may be processed by an algorithm to generate the expected validation data value.
- each secret has a separate counter value that is incremented and decremented as outlined above and as further described herein.
- a single counter is maintained for all secrets and this counter is incremented each time any secret is released and is also decremented each time any secret is accounted for by the requestor (e.g., whenever a notification is received from a requestor).
- secure BIOS 390 operates to scrub memory 375 if, during the boot process, it is discovered that any of the counters that track usage of secrets are not set to zero.
- the BIOS receives the counter value(s) from TPM 195 . The BIOS checks that each of the counters are set to the initialization value (e.g., zero).
- Predefined process 395 executed by secure BIOS 390 , is responsible for scrubbing memory 375 (e.g., writing zeros to the memory addresses) if any counters corresponding to any of the secrets are not at their initialization value (e.g., zero) when the system is booted. If all of the counters are set to their initialization values, then BIOS 390 does not scrub the memory as no secrets are in jeopardy.
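- The boot-time check performed by the secure BIOS can be sketched as follows (function and variable names are hypothetical; a dict of counters and a bytearray stand in for the TPM's counter data and system RAM):

```python
def secure_boot_check(counters: dict, memory: bytearray, init_value: int = 0) -> bool:
    """Scrub memory and reset counters if any counter is off its initialization value.

    Returns True when a scrub was performed.
    """
    if all(value == init_value for value in counters.values()):
        return False  # all secrets accounted for; skip the time-consuming scrub
    for i in range(len(memory)):  # write a predetermined value (zeros) to memory
        memory[i] = 0
    for key in counters:          # reset counters while in the secure state
        counters[key] = init_value
    return True
```

Note the design choice the patent motivates: the expensive full-memory write happens only on the rare boot where a secret went unaccounted for, not on every boot.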
- FIG. 4 is a flowchart showing steps by the BIOS and the TPM when booting a system and checking whether any secrets are potentially at risk and handling the situation accordingly.
- Secure BIOS processing commences at 400 when the computer is initialized (e.g., re-booted with a command such as ctrl+alt+del or booted by having a main power switch of the computer system turned "ON", etc.).
- the secure BIOS requests secret counter data from the TPM.
- a counter is maintained for each secret managed by the TPM while in another embodiment an overall counter is maintained for all secrets managed by the TPM.
- TPM processing commences at 410 where, at step 415 , the TPM receives the request from the secure BIOS for the counter data.
- the TPM reads secret counter data 314 from the TPM's nonvolatile storage 308 , such as the TPM's nonvolatile memory.
- a determination is made by the TPM as to whether any of the counters are not equal to the counter's initialization value, such as zero (0).
- decision 425 branches to “yes” branch 430 whereupon, at step 435 , the TPM returns a response to the secure BIOS (the caller) indicating that there are counter values that are not equal to their expected initialization values (e.g., zero).
- decision 425 branches to “no” branch 440 whereupon, at step 445 , the TPM returns a response to the secure BIOS indicating that all counter values are as expected (i.e., equal to their respective initialization values, such as zero).
- the secure BIOS receives a response from the TPM regarding the counter values.
- a determination is made as to whether the response indicates that at least one counter is not at its expected initialization value (decision 460 ). If one or more counters are not at their expected initialization values, then decision 460 branches to “yes” branch 465 whereupon, at step 470 , the memory that was used by the processes that accessed the secrets is scrubbed.
- scrubbing the memory includes writing a predetermined value, such as zeros, to the memory locations included in the memory.
- the secure BIOS requests that the secret counters be reset to their initialization values (e.g., zero).
- the TPM receives the request to reset the secret counters and, at step 485 , the TPM resets the counters but only if the TPM determines that the computer system is in a secure state (e.g., under the control of the secure BIOS).
- returning to decision 460 : if the response received from the TPM at step 450 indicates that the counters are all at their expected initialization values, then decision 460 branches to "no" branch 490 , bypassing steps 470 and 475 .
- at step 495 , either after scrubbing memory (if counters were not at their initialization values) or after steps 470 and 475 have been bypassed, the remaining boot operations, including any user-configurable or non-secure BIOS operations, are performed, and the BIOS also executes a bootstrapping process that loads the operating system, such as a Windows-based operating system distributed by Microsoft Corporation.
- a hypervisor is loaded and communicates with the TPM.
- guest operating systems are loaded under the hypervisor and one or more virtual machines (VMs) may be executed by the hypervisor.
- the hypervisor, or one of the VMs, interfaces with the TPM and the operating systems do not directly communicate with the TPM. Instead, the operating systems communicate with the hypervisor (or with a VM running in the hypervisor) to make TPM requests.
- memory can be segregated into hypervisor memory that is used by the hypervisor and the virtual machines and non-hypervisor memory that is used by the operating systems (e.g., guest operating systems, etc.).
- the secrets released by the TPM will only be stored in the hypervisor's memory area and will not be stored in the operating systems' memory area.
- if a counter is not at its initial value when the system is booted, only the hypervisor memory (or areas thereof) would have to be scrubbed because any released secrets would only be stored in the hypervisor memory.
- FIG. 5 is a flowchart showing the interaction between the requesting application and the TPM in releasing secrets and accounting for secrets that have been scrubbed by the application.
- Requestor processing is shown commencing at 500 .
- the requestor is a software application running under the control of an operating system.
- the requestor is a process running in a hypervisor or a virtual machine executed by a hypervisor.
- Requestor processing commences at 500 whereupon, at step 505 , the requestor sends a request to the TPM for a particular secret.
- TPM processing commences at 510 whereupon, at step 515 , the TPM receives the request for the secret.
- a determination is made by the TPM (e.g., based on PCR values, etc.) as to whether to release the secret to the requester (decision 520 ). If the TPM decides not to release the requested secret, then decision 520 branches to “no” branch 522 whereupon, at step 525 an error is returned to the requestor.
- decision 520 branches to “yes” branch 528 whereupon, at predefined process 530 , the secret is released to the requestor and the counter is incremented.
- a counter is maintained for each secret that is released, while in another embodiment, a single counter is maintained for all of the combined secrets that are released.
- the process of "incrementing" and "decrementing" can be performed in many ways. In one embodiment, a positive value (e.g., +1) is used when incrementing and a negative value (e.g., −1) is used when decrementing.
- the incrementing can also be implemented in a “countdown” fashion.
- the counters can be initialized to a high initialization value and these values can be incremented by a negative number (e.g., −1) to keep track of the number of times a secret was released (such as in a system where a maximum number of "releases" is implemented).
- the decrementing would be performed by adding a positive number (e.g., +1) so that, if all of the releases are accounted for, the ending counter value is again equal to the initialization value.
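- The "countdown" embodiment can be illustrated with simple arithmetic (MAX_RELEASES is a hypothetical maximum; the patent does not fix a value):

```python
MAX_RELEASES = 100     # hypothetical high initialization value

counter = MAX_RELEASES
counter += -1          # release: "incremented by a negative number"
counter += -1          # a second release
counter += +1          # notification: decrement "by adding a positive number"
counter += +1          # second notification
print(counter == MAX_RELEASES)  # prints True: every release accounted for
```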
- at step 535 , the requestor receives a response from the TPM.
- a determination is made as to whether the secret was released to the requester (decision 540 ). If the secret was not released, then decision 540 branches to “no” branch 542 whereupon processing ends with an error at 545 . On the other hand, if the secret was released to the requestor, then decision 540 branches to “yes” branch 548 whereupon, at step 550 , the secret is stored in memory 551 that has been allocated within system memory 375 to the requestor. If validation data is being used to notify the TPM when the requestor has scrubbed the secret, then the validation data is stored in memory 552 which is also memory that has been allocated within system memory 375 to the requestor.
- memory is segregated between the hypervisor (and its virtual machines) and non-hypervisor applications.
- the memory that is allocated to the requester (memory areas 551 and 552 ) is allocated from the hypervisor's memory area as the requestor is either a hypervisor process or a virtual machine running under the hypervisor.
- the requestor uses the secret (e.g., to encrypt or decrypt data when the secret is an encryption key, etc.).
- the requestor scrubs the memory area where the secret was stored (e.g., by writing zeros to memory area 551 , using a hardware command designed to clear memory area 551 , etc.).
- the requestor sends a notification to the TPM that the secret has been scrubbed from the requestor's memory. If validation data is being used in conjunction with sending the notification, then validation data is also sent to the TPM by the requestor at step 565 .
- the validation data returned to the TPM is the same validation data that the TPM sent to the requestor (e.g., a random number generated by the TPM, etc.).
- the validation data returned to the TPM is a second validation value that corresponds to the validation value initially sent by the TPM but is not the same exact value (e.g., executing an algorithm using the initial validation value to generate the second validation value that can then be verified by the TPM, etc.).
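- This indirect scheme can be sketched with a one-way transform (SHA-256 is an assumed stand-in; the patent does not name a specific algorithm, and the function names here are illustrative):

```python
import hashlib
import os


def derive(validation_value: bytes) -> bytes:
    # Agreed transform applied by the requester to the value the TPM sent;
    # the TPM recomputes the same transform to verify the notification.
    return hashlib.sha256(validation_value).digest()


initial = os.urandom(16)           # validation value generated and stored by the TPM
second = derive(initial)           # second value returned with the scrub notification
assert second == derive(initial)   # TPM check: recompute and compare before decrementing
```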
- the TPM receives the notification from the requestor that the secret has been scrubbed (i.e., cleared from the requestor's memory).
- the notification received by the TPM includes an identification of the secret that was scrubbed.
- the notification received by the TPM includes an identification of the requestor that is sending the notification.
- the notification includes validation data (either the same validation data sent by the TPM or a second validation value that corresponds to the validation value sent by the TPM). The various embodiments can be combined as needed.
- the TPM validates the notification as needed and, if the notification is valid, decrements the counter.
- the TPM uses data maintained in the TPM's nonvolatile storage 308 that is inaccessible outside of the TPM. This data includes the secret counter ( 314 ), and validation data 312 (if validation is being used to decrement the counter).
- FIG. 6 is a flowchart showing steps performed by the TPM to validate an application's scrub notice and decrement the counter corresponding to the secret.
- TPM processing commences at 600 whereupon, at step 610 , the secret is retrieved from secret memory area 310 within the TPM's nonvolatile storage (memory) 308 .
- a determination is made as to whether validation data (a validation value) is being used (decision 620 ). If a validation value is being used, then decision 620 branches to "yes" branch 625 whereupon, at step 630 , a validation value is generated, such as a random number.
- the generated validation value is stored in validation data memory 312 within the TPM's nonvolatile storage 308 .
- if validation data is not being used, then decision 620 branches to "no" branch 645 , bypassing steps 630 and 640 .
- a determination is made as to whether localities are being used to track the release of secrets (decision 650 ). If localities are being used, then decision 650 branches to "yes" branch 655 whereupon, at step 660 , the counter that is associated with the locality where the secret is being released is incremented.
- Secret counters 314 are shown with two different implementations.
- Secret counter implementation 670 shows secrets being incremented based on locality, while secret counter implementation 685 shows the counter being incremented without using locality data.
- each implementation can be used to count the release of individual secrets or the overall release of secrets. If only the overall release of secrets is being maintained, then implementation 670 will have a count of the total secrets released to the various localities while implementation 685 will have a total count of secrets released to any process in the computer system.
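- The two counter implementations can be contrasted in a short sketch (the locality names are hypothetical):

```python
from collections import defaultdict

locality_counters = defaultdict(int)  # implementation 670: one counter per locality
global_counter = 0                    # implementation 685: no locality data


def release_secret(locality: str) -> None:
    """Record a secret release under both bookkeeping schemes."""
    global global_counter
    locality_counters[locality] += 1
    global_counter += 1


release_secret("hypervisor")
release_secret("os-process")
release_secret("hypervisor")
print(locality_counters["hypervisor"], global_counter)  # prints: 2 3
```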
- if localities are not being used to track the release of secrets, then decision 650 branches to "no" branch 675 whereupon, at step 680 , the counter ( 685 ) is incremented.
- the secret value that was requested is returned to the requestor.
- the validation value generated in step 630 is also returned to the requestor. This validation value will be used, either directly or indirectly, when the requester notifies the TPM that the requester is no longer using the secret and has scrubbed the memory where the secret was stored.
- FIG. 7 is a flowchart showing steps taken by the TPM to process a notification received from a requestor that the requestor is no longer using a secret. Processing commences at 700 whereupon a determination is made as to whether a validation value is being used with notifications (decision 710 ). If validation values are being used, then decision 710 branches to "yes" branch 715 whereupon, at step 720 , the TPM reads the validation value that the requester included with the scrub notification. In addition, the TPM compares the validation value provided by the requestor against the expected validation value that was stored in validation data memory 312 when the secret was released. A determination is made as to whether the validation value received from the requester matches the stored validation value, either directly or indirectly (decision 730 ).
- If an algorithm is being used to manipulate or compute the validation value, then the validation value provided by the requestor is processed by the algorithm and the resulting value is compared with the stored validation value to determine if they match. If no manipulation or computation of the validation value is being performed, then a simple comparison is made as to whether the validation value provided by the requester is the same as the validation value that was stored in validation data memory 312. If the validation values do not match, then decision 730 branches to “no” branch 735 whereupon processing ends at 740 without decrementing the counter. For example, if the validation value is not included in the notification or an incorrect validation value is used, this may indicate that a malevolent user is attempting to decrement the counters so that the secrets remain in memory and are not scrubbed when the system is rebooted. By not decrementing the counter without proper validation, more assurance is provided that the secrets have actually been accounted for and scrubbed by the applications before the counter is decremented.
- Returning to decision 730, if the validation value provided by the requester matches the stored validation value (decision 730 branching to “yes” branch 745), or if validation values are not being used (decision 710 branching to “no” branch 748, bypassing steps 720 to 740), then a determination is made as to whether localities are being used, as previously described in conjunction with FIG. 6. If localities are not being used, then decision 750 branches to “no” branch 755 whereupon, at step 760, the counter (secret counter 314 as implemented by non-locality counter 685) is decremented.
- On the other hand, if localities are being used, then decision 750 branches to “yes” branch 765 whereupon, at step 770, a search is made of the counters in counter implementation 670 for the counter that corresponds to the requestor's locality. A determination is made as to whether the requestor's locality was found (decision 775). If the requestor's locality was not found, which again may indicate a malevolent user or process attempting to decrement the counters without actually scrubbing the secret from memory, then decision 775 branches to “no” branch 780 whereupon processing ends at 780 without decrementing the counter. However, if the requestor's locality was found, then decision 775 branches to “yes” branch 790 whereupon, at step 795, the counter corresponding to the requestor's locality shown in counter implementation 670 is decremented.
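The notification-processing logic of FIG. 7 (decisions 730, 750, and 775) can be sketched the same way. The function and variable names are illustrative assumptions, and the seeded counter values and validation value are made up for the example:

```python
# State as it might exist after secrets were released (illustrative values).
locality_counters = {1: 2}      # locality 1 holds two released secrets
overall_counter = 2
stored_validation = {"abc123"}  # expected validation values

def process_scrub_notification(validation_value, locality=None):
    """Decrement a counter only if the notification checks out.

    Returns True if a counter was decremented, False if the
    notification was rejected (bad validation value or unknown
    locality), mirroring decisions 730 and 775."""
    global overall_counter
    if validation_value not in stored_validation:   # decision 730: "no"
        return False                                # end 740: no decrement
    stored_validation.discard(validation_value)     # treat the value as single-use
    if locality is None:                            # decision 750: "no"
        overall_counter -= 1                        # step 760
        return True
    if locality not in locality_counters:           # decision 775: "no"
        return False                                # possible malevolent caller
    locality_counters[locality] -= 1                # step 795
    return True
```

Rejecting the notification rather than raising an error models the flowchart's silent termination at 740: the counter stays elevated, so the memory is still scrubbed at the next boot.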
- FIG. 8 is a flowchart showing steps performed during system bring-up to check whether any secrets are at risk and to write over selected memory where secrets were stored during a prior usage of the computer system.
- Processing commences at 800 whereupon, at step 805, one or more counters are retrieved from counters memory area 314 within the TPM 195's nonvolatile storage 308.
- A determination is made as to whether there are any secret counters that are not equal to their initialization value, usually zero (decision 810). If all counters are at their initialization values (e.g., zero), then decision 810 branches to “no” branch 815 and processing returns at 820 because no secrets are in jeopardy.
- On the other hand, if any counter is not at its initialization value, then decision 810 branches to the “yes” branch in order to scrub memory where the secret was stored.
- At predefined process 830, processing retrieves localities data and metadata regarding where secrets were stored in memory. Based on the data retrieved in predefined process 830, at step 840 an attempt is made to retrieve a list of memory addresses where the secrets were previously stored by requesters during the prior execution of the computer system.
- Memory map 850 shows how memory was segregated between various localities during the prior execution of the computer system.
- Locality 851 is memory that was segregated to the hypervisor and any virtual machines (VMs) that were running under the hypervisor.
- Locality 852 is memory that was segregated to one or more operating systems that were running on the computer system.
- Memory area 853 stores a list of the memory addresses where secrets were stored by a particular locality, in this case locality 851, which corresponds to the hypervisor.
- The various memory addresses where secrets were stored in the locality are depicted as memory addresses 854 (showing where any number of secrets A, B, and N were stored).
- At decision 860, if the process is able to retrieve a list of the memory addresses where secrets were stored during the prior execution of the computer system, then decision 860 branches to “yes” branch 885 whereupon, at step 890, the data in the particular memory addresses (memory addresses 854) is scrubbed (e.g., by writing over the memory addresses with zeros, using a hardware command to clear the memory, etc.). Processing then returns to the calling process at 895.
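Step 890's scrub of the recorded addresses might look like the following sketch, which models system RAM as a Python bytearray. The function name, the offsets, and the fixed per-secret length are assumptions made for illustration:

```python
def scrub_addresses(memory, address_list, length=32):
    """Write zeros over each region where a secret was stored
    (step 890). `memory` models RAM as a bytearray; `address_list`
    holds the start offsets recorded for secrets A, B, ..., N, and
    `length` is the assumed size of each stored secret."""
    for addr in address_list:
        memory[addr:addr + length] = bytes(length)  # zero-fill the region
    return memory

# Example: two secrets recorded at offsets 0x10 and 0x40.
ram = bytearray(b"\xff" * 128)
scrub_addresses(ram, [0x10, 0x40], length=16)
```

Because only the listed regions are written, the cost of the boot-time scrub is proportional to the number of secrets rather than to total RAM, which is the point of keeping the address list.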
- FIG. 9 is a flowchart showing steps taken by the bring-up process to retrieve the memory addresses where secrets were stored during the prior usage of the computer system. Processing commences at 900 whereupon, at step 910, the TPM 195's nonvolatile storage 308 is checked for address ranges of localities 901 and addresses of secret address list(s) 902. If secrets were released to two localities (e.g., localities 851 and 852 shown in FIG. 8), then address ranges 901 would indicate the address ranges of the two localities. Likewise, if a list of where in the locality the secrets were stored is maintained by the localities, then address lists 902 would include one or more addresses for each locality pointing to where in the localities the secrets were stored.
- If the location data is maintained in the TPM's nonvolatile storage, then the address ranges that were formerly used by the various localities (e.g., the hypervisor's locality, etc.) and the address lists identifying where the secrets were stored in the various localities are retrieved from the TPM's nonvolatile memory (memory areas 901 and 902).
- Otherwise, decision 920 branches to “no” branch 955 whereupon, at step 960, the address ranges that were formerly used by the various localities (e.g., the hypervisor's locality, etc.) are retrieved from the general nonvolatile memory 970 (memory area 901).
- Likewise, the address lists identifying where the secrets were stored in the various localities are retrieved from general nonvolatile memory 970 (memory area 902).
- At step 980, the general nonvolatile memory 970 used to store memory areas 901 and 902 is cleared, and processing returns at 995.
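The retrieval-with-fallback behavior of FIG. 9 (TPM nonvolatile storage first, general nonvolatile memory 970 otherwise, followed by the step 980 clear) can be sketched as follows. Modeling the two stores as Python dicts keyed by area ("901", "902") is an assumption of the example, not the patent's layout:

```python
def load_secret_locations(tpm_nv, general_nv):
    """Prefer the TPM's nonvolatile storage for the locality address
    ranges (area 901) and secret address lists (area 902); otherwise
    fall back to general nonvolatile memory and clear what was read
    (step 980), so the location data itself does not linger there."""
    if "901" in tpm_nv and "902" in tpm_nv:         # decision 920: "yes"
        return tpm_nv["901"], tpm_nv["902"]
    ranges = general_nv.pop("901", [])              # step 960: locality ranges
    lists_ = general_nv.pop("902", {})              # address lists per locality
    return ranges, lists_

# Example: nothing in TPM NV, so the general store is used and emptied.
general = {"901": [(0, 1024)], "902": {0: [0x10, 0x40]}}
ranges, lists_ = load_secret_locations({}, general)
```

Clearing the general store after use matters because, unlike the TPM's storage, it is not access-protected; leaving the address lists behind would tell an attacker exactly where secrets once lived.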
- One of the preferred implementations of the invention is a client application, namely, a set of instructions (program code) or other functional descriptive material in a code module that may, for example, be resident in the random access memory of the computer.
- The set of instructions may be stored in another computer memory, for example, in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD ROM) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or other computer network.
- The present invention may be implemented as a computer program product for use in a computer.
- Functional descriptive material is information that imparts functionality to a machine.
- Functional descriptive material includes, but is not limited to, computer programs, instructions, rules, facts, definitions of computable functions, objects, and data structures.
Description
- 1. Technical Field
- The present invention relates to a system and method that securely clears secret data from computer system memory. More particularly, the present invention relates to a system and method that securely clears secret data that has been provided by a Trusted Platform Module (TPM).
- 2. Description of the Related Art
- Security of sensitive data and intellectual property is of increased concern in modern computer systems. To address this concern, special security modules, such as a Trusted Platform Module (TPM) have been developed and incorporated in computer systems in order to perform various security and cryptographic functions. The security module (hereinafter, the TPM) releases sensitive (“secret”) data only when the requestor has been properly authenticated.
- While the TPM is quite useful in only releasing secrets when proper authentication is provided, a challenge exists with ensuring that secrets, having been released to authenticated requesters, are not compromised when the system is re-booted. For example, a requestor might store a secret in RAM that has been allocated to the requestor, but when the system is re-booted the RAM where the secret was stored no longer belongs to the original requestor and may fall into the hands of a malevolent user. One approach is to have requestors clean up (e.g., write over) the secret once the requestor is finished using it. A challenge to this approach is that the system can generally be booted at any time and, therefore, the requestor might not have the opportunity to clean up the memory where secrets are stored prior to a re-boot. Another approach would be to clear (write over) all of the RAM every time the system is rebooted so that any secret data would be written over before the system could be used by a malevolent user. The substantial challenge to this approach is that modern systems often contain large amounts of RAM and, consequently, this approach would often require a long time to clear all of the memory and would likely lead to user frustration and dissatisfaction in waiting such a long time before being able to use the system.
- It has been discovered that the aforementioned challenges are resolved using a system, method, and computer program product that initializes a counter maintained in a nonvolatile memory of a security module (e.g., a TPM) to an initialization value, such as zero. The security module receives requests for a secret that is secured by the security module. The requests are received from requesters, such as processes and applications running on the computer system. The security module releases the secret to the requesters and the released secrets are stored in memory areas allocated to the requesters (e.g., system RAM). Each time the secret is released by the security module, the counter is incremented. When a requestor is finished using the secret, it sends a notification to the security module that indicates that the requestor has removed the secret from the requestor's allocated memory area. The security module then decrements the counter each time one of the notifications is received.
- When the computer system is rebooted, the counter is compared to the initialization value to determine if notifications were not received from one of the requesters during the previous running of the computer system. If the counter was not decremented back to the initialization value (e.g., zero), then a memory area is scrubbed. The memory area that is scrubbed includes the memory areas where the secret was stored in system memory (RAM).
- The foregoing is a summary and thus contains, by necessity, simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present invention, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
- The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings, wherein:
- FIG. 1 is a block diagram of a data processing system in which the methods described herein can be implemented;
- FIG. 2 provides an extension of the information handling system environment shown in FIG. 1 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment;
- FIG. 3 is a high level diagram showing the interaction between the Trusted Platform Module (TPM) and the application that is using the secrets to keep a counter corresponding to the various secrets maintained by the TPM;
- FIG. 4 is a flowchart showing steps taken by the BIOS and the TPM when booting a system and checking whether any secrets are potentially at risk and handling the situation accordingly;
- FIG. 5 is a flowchart showing the interaction between the requesting application and the TPM in releasing secrets and accounting for secrets that have been scrubbed by the application;
- FIG. 6 is a flowchart showing steps performed by the TPM to validate an application's scrub notice and decrement the counter corresponding to the secret;
- FIG. 7 is a flowchart showing steps taken by the TPM to process a notification received from a requester that a requester is no longer using a secret;
- FIG. 8 is a flowchart showing steps performed during system bring-up to check whether any secrets are at risk and to write over selected memory where secrets were stored during a prior usage of the computer system; and
- FIG. 9 is a flowchart showing steps taken by the bring-up process to retrieve the memory addresses where secrets were stored during the prior usage of the computer system.
- Certain specific details are set forth in the following description and figures to provide a thorough understanding of various embodiments of the invention. Certain well-known details often associated with computing and software technology are not set forth in the following disclosure, however, to avoid unnecessarily obscuring the various embodiments of the invention. Further, those of ordinary skill in the relevant art will understand that they can practice other embodiments of the invention without one or more of the details described below. Finally, while various methods are described with reference to steps and sequences in the following disclosure, the description as such is for providing a clear implementation of embodiments of the invention, and the steps and sequences of steps should not be taken as required to practice this invention. Instead, the following is intended to provide a detailed description of an example of the invention and should not be taken to be limiting of the invention itself. Rather, any number of variations may fall within the scope of the invention, which is defined by the claims that follow the description.
- The following detailed description will generally follow the summary of the invention, as set forth above, further explaining and expanding the definitions of the various aspects and embodiments of the invention as necessary. To this end, this detailed description first sets forth a computing environment in FIG. 1 that is suitable to implement the software and/or hardware techniques associated with the invention. A networked environment is illustrated in FIG. 2 as an extension of the basic computing environment, to emphasize that modern computing techniques can be performed across multiple discrete devices. -
FIG. 1 illustrates information handling system 100 which is a simplified example of a computer system capable of performing the computing operations described herein. Information handling system 100 includes one or more processors 110 which are coupled to processor interface bus 112. Processor interface bus 112 connects processors 110 to Northbridge 115, which is also known as the Memory Controller Hub (MCH). Northbridge 115 is connected to system memory 120 and provides a means for processor(s) 110 to access the system memory. Graphics controller 125 is also connected to Northbridge 115. In one embodiment, PCI Express bus 118 is used to connect Northbridge 115 to graphics controller 125. Graphics controller 125 is connected to display device 130, such as a computer monitor.
- Northbridge 115 and Southbridge 135 are connected to each other using bus 119. In one embodiment, the bus is a Direct Media Interface (DMI) bus that transfers data at high speeds in each direction between Northbridge 115 and Southbridge 135. In another embodiment, a Peripheral Component Interconnect (PCI) bus is used to connect the Northbridge and the Southbridge. Southbridge 135, also known as the I/O Controller Hub (ICH), is a chip that generally implements capabilities that operate at slower speeds than the capabilities provided by the Northbridge. Southbridge 135 typically provides various busses used to connect various components. These busses can include PCI and PCI Express busses, an ISA bus, a System Management Bus (SMBus or SMB), and a Low Pin Count (LPC) bus. The LPC bus is often used to connect low-bandwidth devices, such as boot ROM 196 and “legacy” I/O devices (using a “super I/O” chip). The “legacy” I/O devices (198) can include serial and parallel ports, keyboard, mouse, and floppy disk controller. The LPC bus is also used to connect Southbridge 135 to Trusted Platform Module (TPM) 195. Other components often included in Southbridge 135 include a Direct Memory Access (DMA) controller, a Programmable Interrupt Controller (PIC), and a storage device controller, which connects Southbridge 135 to nonvolatile storage device 185, such as a hard disk drive, using bus 184.
- ExpressCard 155 is a slot used to connect hot-pluggable devices to the information handling system. ExpressCard 155 supports both PCI Express and USB connectivity as it is connected to Southbridge 135 using both the Universal Serial Bus (USB) and the PCI Express bus. Southbridge 135 includes USB Controller 140 that provides USB connectivity to devices that connect to the USB. These devices include webcam (camera) 150, infrared (IR) receiver 148, Bluetooth device 146 which provides for wireless personal area networks (PANs), keyboard and trackpad 144, and other miscellaneous USB connected devices 142, such as a mouse, removable nonvolatile storage device 145, modems, network cards, ISDN connectors, fax, printers, USB hubs, and many other types of USB connected devices. While removable nonvolatile storage device 145 is shown as a USB-connected device, removable nonvolatile storage device 145 could be connected using a different interface, such as a Firewire interface, etc.
- Wireless Local Area Network (LAN) device 175 is connected to Southbridge 135 via the PCI or PCI Express bus 172. LAN device 175 typically implements one of the IEEE 802.11 standards of over-the-air modulation techniques that all use the same protocol to wirelessly communicate between information handling system 100 and another computer system or device. Optical storage device 190 is connected to Southbridge 135 using Serial ATA (SATA) bus 188. Serial ATA adapters and devices communicate over a high-speed serial link. The Serial ATA bus is also used to connect Southbridge 135 to other forms of storage devices, such as hard disk drives. Audio circuitry 160, such as a sound card, is connected to Southbridge 135 via bus 158. Audio circuitry 160 is used to provide functionality such as audio line-in and optical digital audio in port 162, optical digital output and headphone jack 164, internal speakers 166, and internal microphone 168. Ethernet controller 170 is connected to Southbridge 135 using a bus, such as the PCI or PCI Express bus. Ethernet controller 170 is used to connect information handling system 100 with a computer network, such as a Local Area Network (LAN), the Internet, and other public and private computer networks.
- While FIG. 1 shows one information handling system, an information handling system may take many forms. For example, an information handling system may take the form of a desktop, server, portable, laptop, notebook, or other form factor computer or data processing system. In addition, an information handling system may take other form factors such as a personal digital assistant (PDA), a gaming device, an ATM machine, a portable telephone device, a communication device, or other devices that include a processor and memory.
- The Trusted Platform Module (TPM 195) shown in FIG. 1 and described herein to provide security functions is but one example of a hardware security module (HSM). Therefore, the TPM described and claimed herein includes any type of HSM including, but not limited to, hardware security devices that conform to the Trusted Computing Group (TCG) standard entitled “Trusted Platform Module (TPM) Specification Version 1.2.” The TPM is a hardware security subsystem that may be incorporated into any number of information handling systems, such as those outlined in FIG. 2. -
FIG. 2 provides an extension of the information handling system environment shown in FIG. 1 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment. Types of information handling systems range from small handheld devices, such as handheld computer/mobile telephone 210, to large mainframe systems, such as mainframe computer 270. Examples of handheld computer 210 include personal digital assistants (PDAs), personal entertainment devices, such as MP3 players, portable televisions, and compact disc players. Other examples of information handling systems include pen, or tablet, computer 220, laptop, or notebook, computer 230, workstation 240, personal computer system 250, and server 260. Other types of information handling systems that are not individually shown in FIG. 2 are represented by information handling system 280. As shown, the various information handling systems can be networked together using computer network 200. Types of computer networks that can be used to interconnect the various information handling systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN), other wireless networks, and any other network topology that can be used to interconnect the information handling systems. Many of the information handling systems include nonvolatile data stores, such as hard drives and/or nonvolatile memory. Some of the information handling systems shown in FIG. 2 are depicted with separate nonvolatile data stores (server 260 is shown with nonvolatile data store 265, mainframe computer 270 is shown with nonvolatile data store 275, and information handling system 280 is shown with nonvolatile data store 285). The nonvolatile data store can be a component that is external to the various information handling systems or can be internal to one of the information handling systems.
In addition, removable nonvolatile storage device 145 can be shared amongst two or more information handling systems using various techniques, such as connecting the removable nonvolatile storage device 145 to a USB port or other connector of the information handling systems. -
FIG. 3 is a high level diagram showing the interaction between the Trusted Platform Module (TPM) and the application that is using the secrets to keep a counter corresponding to the various secrets maintained by the TPM. TPM 195 is a security module that, among other activities, safeguards secrets (e.g., encryption keys, etc.) so that unauthorized (e.g., malevolent) users and processes are unable to retrieve and abuse the secrets. As shown, TPM 195 includes nonvolatile storage, such as nonvolatile memory, in which secrets 310 are stored. As explained in further detail herein, TPM 195 has counters 314 that keep track of the number of times a secret has been requested. These counters are decremented when the requesting process informs the TPM that the process has erased the secret from memory and is no longer using the secret. To ensure that malevolent users and processes do not decrement counters, validation data 312 is used, as will be explained in further detail below.
- Processes 360 include instructions that are executed by processor(s) 110 of an information handling system, such as information handling system 100 shown in FIG. 1. Some of these processes are “requesters” of secrets 310 that are maintained by TPM 195. At step 365, a process requests a secret (e.g., an encryption key) from the TPM. The TPM performs authentication processes to ensure that the secret is only provided to authenticated requesters. If authentication is successful, then TPM 195 releases the secret to the requester where, at step 370, the requestor receives and uses the secret (e.g., uses an encryption key to encrypt a file or data packet, etc.). While using the secret, the requester stores the secret in memory (e.g., RAM) that has been allocated to the requester (memory 375). The operating system ensures that malevolent users and processes cannot access the memory that has been allocated to the requestor process.
- In one embodiment, when the TPM releases the secret to the requesting process it also sends validation data to the requestor. The validation data is used by the requester when notifying the TPM that the requester is no longer using the secret and has scrubbed the memory where the secret was stored in memory 375. At step 380, the requestor is finished using the secret and scrubs the memory so that the secret no longer remains in memory 375. In one embodiment, the requestor scrubs the memory by invoking a command (or commands) that writes a value (e.g., zeros) to the memory location where the secret was stored in memory 375. At step 385, the requestor sends a notification to the TPM that informs the TPM that the requester is no longer using the secret. In the embodiment that uses validation data, the notification would be accompanied by validation data that corresponds to the original validation data that was sent by the TPM. The TPM checks to make sure that the validation data sent by the process corresponds to the validation data that was used when the secret was released. In one embodiment, the same validation data value (e.g., a random number) is used when the secret is released as well as when the notification is sent that the secret is no longer being used or stored by the requestor. In another embodiment, the validation data value sent by the TPM corresponds to the expected validation data value but is not the same value. For example, the validation data value that was sent may be processed by an algorithm to generate the expected validation data value. If the validation data value sent with the notification does not correspond to (e.g., is not equal to) the expected validation value stored in validation data 312, then the counter is not decremented. On the other hand, if the validation value does correspond to the expected validation value (or if validation values are not being used in the implementation), then the counter corresponding to the secret is decremented. In one embodiment, each secret has a separate counter value that is incremented and decremented as outlined above and as further described herein.
In another embodiment, a single counter is maintained for all secrets and this counter is incremented each time any secret is released and is also decremented each time any secret is accounted for by the requestor (e.g., whenever a notification is received from a requestor).
- As outlined in the Background Section, above, in a traditional system once the computer system is rebooted the memory is no longer allocated to the requesting process by the operating system, which may allow a malevolent user or process to obtain the secret that was stored in memory 375. To prevent this from happening, secure BIOS 390 operates to scrub memory 375 if, during the boot process, it is discovered that any of the counters that track usage of secrets are not set to zero. In one embodiment, the BIOS receives the counter value(s) from TPM 195. The BIOS checks that each of the counters is set to the initialization value (e.g., zero). Predefined process 395, executed by secure BIOS 390, is responsible for scrubbing memory 375 (e.g., writing zeros to the memory addresses) if any counters corresponding to any of the secrets are not at their initialization value (e.g., zero) when the system is booted. If all of the counters are set to their initialization values, then BIOS 390 does not scrub the memory as no secrets are in jeopardy. -
FIG. 4 is a flowchart showing steps taken by the BIOS and the TPM when booting a system and checking whether any secrets are potentially at risk and handling the situation accordingly. Secure BIOS processing commences at 400 when the computer is initialized (e.g., re-booted with a command such as ctrl+alt+del or booted by having a main power switch of the computer system turned “ON”, etc.). At step 405, before a user or application program is able to use the system, the secure BIOS requests secret counter data from the TPM. As previously mentioned, in one embodiment a counter is maintained for each secret managed by the TPM while in another embodiment an overall counter is maintained for all secrets managed by the TPM. TPM processing commences at 410 where, at step 415, the TPM receives the request from the secure BIOS for the counter data. At step 420, the TPM reads secret counter data 314 from the TPM's nonvolatile storage 308, such as the TPM's nonvolatile memory. A determination (decision 425) is made by the TPM as to whether any of the counters are not equal to the counter's initialization value, such as zero (0). If any of the counters are not equal to zero, then decision 425 branches to “yes” branch 430 whereupon, at step 435, the TPM returns a response to the secure BIOS (the caller) indicating that there are counter values that are not equal to their expected initialization values (e.g., zero). On the other hand, if the counters are all equal to the initialization values, then decision 425 branches to “no” branch 440 whereupon, at step 445, the TPM returns a response to the secure BIOS indicating that all counter values are as expected (i.e., equal to their respective initialization values, such as zero).
- Returning to secure BIOS processing, at step 450, the secure BIOS receives a response from the TPM regarding the counter values. A determination is made as to whether the response indicates that at least one counter is not at its expected initialization value (decision 460). If one or more counters are not at their expected initialization values, then decision 460 branches to “yes” branch 465 whereupon, at step 470, the memory that was used by the processes that accessed the secrets is scrubbed. In one embodiment, scrubbing the memory includes writing a predetermined value, such as zeros, to the memory locations included in the memory. After the memory has been scrubbed, at step 475, the secure BIOS requests that the secret counters be reset to their initialization values (e.g., zero). At step 480, the TPM receives the request to reset the secret counters and, at step 485, the TPM resets the counters but only if the TPM determines that the computer system is in a secure state (e.g., under the control of the secure BIOS).
- Returning to secure BIOS processing, if the response received from the TPM at step 450 indicates that the counters are all at their expected initialization values, then decision 460 branches to “no” branch 490 bypassing steps 470 and 475. At step 495, either after scrubbing memory if counters are not at their initialization values or if steps 470 and 475 have been bypassed, the remaining boot operations, including any user-configurable or non-secure BIOS operations, are performed and the BIOS also executes a bootstrapping process that loads the operating system, such as a Windows-based operating system distributed by Microsoft Corporation. In a second embodiment, a hypervisor is loaded and communicates with the TPM.
- In this second embodiment, guest operating systems are loaded under the hypervisor and one or more virtual machines (VMs) may be executed by the hypervisor. In this second embodiment, the hypervisor, or one of the VMs, interfaces with the TPM and the operating systems do not directly communicate with the TPM. Instead, the operating systems communicate with the hypervisor (or with a VM running in the hypervisor) to make TPM requests. In this second embodiment, memory can be segregated into hypervisor memory that is used by the hypervisor and the virtual machines and non-hypervisor memory that is used by the operating systems (e.g., guest operating systems, etc.). In this manner, using the hypervisor and/or virtual machines to facilitate communications between the TPM and applications or processes running in the operating systems, the secrets released by the TPM will only be stored in the hypervisor's memory area and will not be stored in the operating systems' memory area. Using this embodiment, if a counter is not at its initial value when the system is booted, only the hypervisor memory (or areas thereof) would have to be scrubbed because any released secrets would only be stored in the hypervisor memory.
Taking as an example a system with 8 GB of RAM, segregated so that 1 GB of RAM is dedicated to the hypervisor and any of its virtual machines and 7 GB is dedicated to the primary and guest operating systems, only 1 GB of memory (or less) would have to be scrubbed rather than all 8 GB of memory, so long as the hypervisor and its virtual machines are programmed to ensure that the secrets are stored only in the memory segregated to the hypervisor.
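The boot-time check described above (counters inspected, at-risk memory scrubbed, counters reset only while the platform is in a secure state) can be sketched as follows. This is a minimal illustration under assumed names (`SecurityModule`, `secure_bios_boot`); a real TPM is driven through its command interface, and here one array element stands in for a unit of memory.

```python
class SecurityModule:
    """Stands in for the TPM's nonvolatile secret counters."""

    def __init__(self, counters, init_value=0):
        self.counters = dict(counters)
        self.init_value = init_value

    def counters_at_init(self):
        return all(v == self.init_value for v in self.counters.values())

    def reset_counters(self, system_is_secure):
        # The TPM resets the counters only while the platform is in a
        # secure state (e.g., still under control of the secure BIOS).
        if system_is_secure:
            self.counters = {k: self.init_value for k in self.counters}


def secure_bios_boot(module, memory, hypervisor_range=None):
    """Scrub memory if any secret may remain. With segregation, only the
    hypervisor's region (where released secrets are confined) is scrubbed."""
    if module.counters_at_init():
        return 0                              # no secrets in jeopardy
    start, end = hypervisor_range or (0, len(memory))
    for addr in range(start, end):
        memory[addr] = 0                      # predetermined value (zeros)
    module.reset_counters(system_is_secure=True)
    return end - start                        # locations scrubbed


mem = bytearray(b"\xaa" * 8)                  # 8 units, 1 for the hypervisor
tpm = SecurityModule({"secret_a": 1})         # one release went unaccounted
print(secure_bios_boot(tpm, mem, (0, 1)))     # -> 1 (not 8)
```

With segregation, the scrub cost tracks the hypervisor region's size rather than total memory, which is the point of the 1 GB versus 8 GB example above.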
-
FIG. 5 is a flowchart showing the interaction between the requesting application and the TPM in releasing secrets and accounting for secrets that have been scrubbed by the application. Requestor processing is shown commencing at 500. In one embodiment, the requestor is a software application running under the control of an operating system. In a second embodiment, introduced in the discussion of FIG. 4, the requestor is a process running in a hypervisor or a virtual machine executed by a hypervisor. - Processing commences at 500 whereupon, at
step 505, the requestor sends a request to the TPM for a particular secret. TPM processing commences at 510 whereupon, at step 515, the TPM receives the request for the secret. A determination is made by the TPM (e.g., based on PCR values, etc.) as to whether to release the secret to the requestor (decision 520). If the TPM decides not to release the requested secret, then decision 520 branches to “no” branch 522 whereupon, at step 525, an error is returned to the requestor. - On the other hand, if the TPM decides to release the secret to the requestor, then
decision 520 branches to “yes” branch 528 whereupon, at predefined process 530, the secret is released to the requestor and the counter is incremented. As previously described, in one embodiment a counter is maintained for each secret that is released, while in another embodiment a single counter is maintained for all of the secrets that are released. In addition, as known by those skilled in the art, the processes of “incrementing” and “decrementing” can be performed in many ways. In one embodiment, a positive value (e.g., +1) is used when incrementing and a negative value (e.g., −1) is used when decrementing. However, the incrementing can also be implemented in a “countdown” fashion. For example, the counters can be initialized to a high initialization value, and these values can be incremented by a negative number (e.g., −1) to keep track of the number of times a secret was released (such as in a system where a maximum number of “releases” is enforced). In this example, consequently, the decrementing would be performed by adding a positive number (e.g., +1) so that, if all of the releases are accounted for, the ending counter value is again equal to the initialization value. - Returning to requestor processing, at
step 535, the requestor receives a response from the TPM. A determination is made as to whether the secret was released to the requestor (decision 540). If the secret was not released, then decision 540 branches to “no” branch 542, whereupon processing ends with an error at 545. On the other hand, if the secret was released to the requestor, then decision 540 branches to “yes” branch 548 whereupon, at step 550, the secret is stored in memory 551 that has been allocated within system memory 375 to the requestor. If validation data is being used to notify the TPM when the requestor has scrubbed the secret, then the validation data is stored in memory 552, which is also memory that has been allocated within system memory 375 to the requestor. As previously introduced, in one embodiment, memory is segregated between the hypervisor (and its virtual machines) and non-hypervisor applications. In this embodiment, the memory that is allocated to the requestor (memory areas 551 and 552) is allocated from the hypervisor's memory area because the requestor is either a hypervisor process or a virtual machine running under the hypervisor. - At step 555, the requestor uses the secret (e.g., to encrypt or decrypt data when the secret is an encryption key, etc.). When the requestor is finished using the secret, at
step 560, the requestor scrubs the memory area where the secret was stored (e.g., by writing zeros to memory area 551, using a hardware command designed to clear memory area 551, etc.). At step 565, the requestor sends a notification to the TPM that the secret has been scrubbed from the requestor's memory. If validation data is being used in conjunction with sending the notification, then validation data is also sent to the TPM by the requestor at step 565. In one embodiment, the validation data returned to the TPM is the same validation data that the TPM sent to the requestor (e.g., a random number generated by the TPM, etc.). In another embodiment, the validation data returned to the TPM is a second validation value that corresponds to the validation value initially sent by the TPM but is not the exact same value (e.g., an algorithm is executed using the initial validation value to generate a second validation value that can then be verified by the TPM, etc.). - Turning now to TPM processing, at
step 570, the TPM receives the notification from the requestor that the secret has been scrubbed (i.e., cleared from the requestor's memory). In one embodiment, the notification received by the TPM includes an identification of the secret that was scrubbed. In one embodiment, the notification received by the TPM includes an identification of the requestor that is sending the notification. In another embodiment, the notification includes validation data (either the same validation data sent by the TPM or a second validation value that corresponds to the validation value sent by the TPM). The various embodiments can be combined as needed. - At predefined process 575, the TPM validates the notification as needed and, if the notification is valid, decrements the counter. To perform predefined process 575, the TPM uses data maintained in the TPM's
nonvolatile storage 308 that is inaccessible outside of the TPM. This data includes the secret counter (314) and validation data 312 (if validation is being used to decrement the counter). -
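The requestor's side of the FIG. 5 exchange above can be sketched as follows. The function names and the XOR "use" step are illustrative assumptions, not the patent's method; the callables stand in for the TPM interface.

```python
def requestor_session(release_secret, send_scrub_notice):
    """Obtain a secret, use it, scrub the memory that held it, then
    notify the security module, echoing steps 505-565 above."""
    secret, validation = release_secret()          # steps 505/535: request/receive
    buf = bytearray(secret)                        # step 550: store in memory 551
    ciphertext = bytes(b ^ 0x5A for b in buf)      # step 555: use the secret
    for i in range(len(buf)):                      # step 560: scrub area 551
        buf[i] = 0
    send_scrub_notice(validation)                  # step 565: notify + validation
    return ciphertext, bytes(buf)


notices = []
ct, residue = requestor_session(
    release_secret=lambda: (b"\x01\x02", "token-123"),
    send_scrub_notice=notices.append,
)
print(residue, notices)   # b'\x00\x00' ['token-123']
```

The key property is that the scrub happens before the notification is sent, so the counter is only decremented once the secret no longer resides in the requestor's memory.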
FIG. 6 is a flowchart showing steps performed by the TPM to validate an application's scrub notice and decrement the counter corresponding to the secret. TPM processing commences at 600 whereupon, at step 610, the secret is retrieved from secret memory area 310 within the TPM's nonvolatile storage (memory) 308. A determination is made as to whether validation data (a validation value) is being used (decision 620). If a validation value is being used, then decision 620 branches to “yes” branch 625 whereupon, at step 630, a validation value, such as a random number, is generated. At step 640, the generated validation value is stored in validation data memory 312 within the TPM's nonvolatile storage 308. Returning to decision 620, if validation data is not being used, then decision 620 branches to “no” branch 645, bypassing steps 630 and 640. - A determination is made as to whether localities are being used to store counters associated with secrets (decision 650). Localities are used when memory is segregated between the hypervisor and other entities, such as operating systems. If memory is segregated, then one locality can be established for the hypervisor, and other localities can be established for other units of memory segregation, such as operating systems and the like. In this manner, the counters can keep track of the localities that have received secrets so that, upon booting, only the memory of localities with non-zero counters will have all or part of their memory scrubbed. If the scrubbing routine can ascertain which memory addresses were used by the locality to store secrets, then just those memory addresses will be scrubbed. However, if the scrubbing routine cannot ascertain which memory addresses were used to store secrets, then all of the memory in the locality will be scrubbed. 
Using an example of a system with three localities, each of which includes 2 GB of memory: upon system startup, if one of the localities has a secret count not equal to zero, then just the memory in that locality would be scrubbed (in the worst case, 2 GB). However, if localities were not being used in the same system, with its 6 GB of system memory, and the scrubbing process could not ascertain where in memory the secrets were stored, then the scrubbing process would scrub all 6 GB of memory, taking roughly three times as long as the worst case when the memory is segregated into localities.
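The locality bookkeeping described above can be sketched as follows; the helper name and dictionary layout are assumptions for illustration. At boot, only localities whose counters never returned to their initialization value need scrubbing.

```python
def localities_to_scrub(locality_counters, init_value=0):
    """Return the localities whose secret counters are not at their
    initialization value, i.e., those that may still hold secrets."""
    return sorted(k for k, v in locality_counters.items() if v != init_value)


# Three localities of 2 GB each, as in the example above; only the
# hypervisor's locality received a secret that was never scrubbed.
counters = {"hypervisor": 1, "os_a": 0, "os_b": 0}
print(localities_to_scrub(counters))   # ['hypervisor']
```

Here the worst case is scrubbing one 2 GB locality, versus the full 6 GB when no locality information is available.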
- If memory is segregated into localities, then decision 650 branches to “yes”
branch 655 whereupon, at step 660, the counter that is associated with the locality where the secret is being released is incremented. Secret counters 314 are shown with two different implementations. Secret counter implementation 670 shows counters being incremented based on locality, while secret counter implementation 685 shows the counter being incremented without using locality data. Moreover, each implementation can be used to count the release of individual secrets or the overall release of secrets. If only the overall release of secrets is being maintained, then implementation 670 will have a count of the total secrets released to the various localities while implementation 685 will have a total count of secrets released to any process in the computer system. Returning to decision 650, if localities are not being used to track the release of secrets, then decision 650 branches to “no” branch 675 whereupon, at step 680, the counter (685) is incremented. - At
step 690, the secret value that was requested is returned to the requestor. In addition, if validation values are being used, then the validation value generated in step 630 is also returned to the requestor. This validation value will be used, either directly or indirectly, when the requestor notifies the TPM that the requestor is no longer using the secret and has scrubbed the memory where the secret was stored. -
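The two counting schemes discussed above (counting up from zero, or counting down from a high initialization value that doubles as a release limit) can be sketched as follows; the class and method names are assumptions for illustration.

```python
class SecretCounter:
    """One counter per secret (or one for all secrets combined)."""

    def __init__(self, init_value=0, step=1):
        # Count-up scheme: init_value=0, step=+1.
        # Countdown scheme: high init_value, step=-1, so "incrementing"
        # adds a negative number and a maximum release count falls out.
        self.init_value = init_value
        self.value = init_value
        self.step = step

    def on_release(self):
        """Secret released to a requestor: increment."""
        self.value += self.step

    def on_scrub_notice(self):
        """Validated scrub notice received: decrement."""
        self.value -= self.step

    def balanced(self):
        """All releases accounted for when back at the init value."""
        return self.value == self.init_value


up = SecretCounter(0, +1)
down = SecretCounter(100, -1)   # e.g., at most 100 releases permitted
for c in (up, down):
    c.on_release()
    c.on_release()
    c.on_scrub_notice()
    c.on_scrub_notice()
print(up.balanced(), down.balanced())   # True True
```

Either way, a counter that is not back at its initialization value at boot signals that a released secret was never reported scrubbed.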
FIG. 7 is a flowchart showing steps taken by the TPM to process a notification received from a requestor that the requestor is no longer using a secret. Processing commences at 700 whereupon a determination is made as to whether a validation value is being used with notifications (decision 710). If validation values are being used, then decision 710 branches to “yes” branch 715 whereupon, at step 720, the TPM reads the validation value that the requestor included with the scrub notification. In addition, the TPM compares the validation value provided by the requestor against the expected validation value that was stored in validation data memory 312 when the secret was released. A determination is made as to whether the validation value received from the requestor matches the stored validation value, either directly or indirectly (decision 730). If an algorithm is being used, then the validation value provided by the requestor is processed by the algorithm and the resulting value is compared with the stored validation value to determine whether they match. If no manipulation or computation of the validation value is being performed, then a simple comparison is made as to whether the validation value provided by the requestor is the same as the validation value that was stored in validation data memory 312. If the validation values do not match, then decision 730 branches to “no” branch 735, whereupon processing ends at 740 without decrementing the counter. For example, if the validation value is not included in the notification or an incorrect validation value is used, this may indicate that a malevolent user is attempting to decrement the counters so that the secrets remain in memory and are not scrubbed when the system is rebooted. By not decrementing the counter without proper validation, more assurance is provided that the secrets have actually been accounted for and scrubbed by the applications before the counter is decremented. - Returning to
decision 730, if the validation value provided by the requestor matches the stored validation value (decision 730 branching to “yes” branch 745), or if validation values are not being used (decision 710 branching to “no” branch 748, bypassing steps 720 to 740), then a determination is made as to whether localities are being used, as previously described in conjunction with FIG. 6 (decision 750). If localities are not being used, then decision 750 branches to “no” branch 755 whereupon, at step 760, the counter (secret counter 314 as implemented by non-locality counter 685) is decremented. On the other hand, if a locality is being used, then decision 750 branches to “yes” branch 765 whereupon, at step 770, a search is made of the counters in counter implementation 670 for the counter that corresponds to the requestor's locality. A determination is made as to whether the requestor's locality was found (decision 775). If the requestor's locality was not found, which again may indicate a malevolent user or process attempting to decrement the counters without actually scrubbing the secret from memory, then decision 775 branches to “no” branch 780, whereupon processing ends at 780 without decrementing the counter. However, if the requestor's locality was found, then decision 775 branches to “yes” branch 790 whereupon, at step 795, the counter corresponding to the requestor's locality shown in counter implementation 670 is decremented. -
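The FIG. 7 checks above can be sketched as one fail-closed routine: the scrub notice must carry a matching validation value (directly, or via an agreed algorithm; the SHA-256 derivation here is an illustrative assumption, not the patent's algorithm), and the requestor's locality counter must exist, or else the counter is left untouched.

```python
import hashlib
import hmac


def process_scrub_notice(counters, locality, provided, stored, derive=None):
    """Decrement the right counter only for a valid notice; otherwise
    return False without decrementing (ending at 740 or 780 above)."""
    expected = derive(provided) if derive else provided   # decision 730
    if not hmac.compare_digest(expected, stored):
        return False                    # bad validation value: no decrement
    if locality is None:                # non-locality counter (685)
        counters["total"] -= 1          # step 760
        return True
    if locality not in counters:        # decision 775: locality unknown,
        return False                    # possibly a malevolent requestor
    counters[locality] -= 1             # step 795
    return True


counters = {"hypervisor": 2}
derive = lambda v: hashlib.sha256(v).digest()
stored = derive(b"nonce-42")            # kept in validation data memory 312
print(process_scrub_notice(counters, "hypervisor", b"nonce-42", stored, derive))
print(process_scrub_notice(counters, "rogue", b"nonce-42", stored, derive))
```

`hmac.compare_digest` is used for the comparison so that a forged validation value cannot be guessed byte-by-byte via timing; that choice is this sketch's, not the document's.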
FIG. 8 is a flowchart showing steps performed during system bring-up to check whether any secrets are at risk and to write over selected memory where secrets were stored during a prior usage of the computer system. Processing commences at 800 whereupon, at step 805, one or more counters are retrieved from counters memory area 314 within the TPM 195's nonvolatile storage 308. A determination is made as to whether there are any secret counters that are not equal to their initialization value, usually zero (decision 810). If all counters are at their initialization values (e.g., zero), then decision 810 branches to “no” branch 815 and processing returns at 820 because no secrets are in jeopardy. - On the other hand, if one or more counters are not equal to their initialization values, indicating that validated notifications were not received for all released secrets, then decision 810 branches to “yes” branch in order to scrub the memory where the secrets were stored. At predefined process 830, processing retrieves localities data and metadata regarding where secrets were stored in memory. Based on the data retrieved in predefined process 830, at
step 840, an attempt is made to retrieve a list of memory addresses where the secrets were previously stored by requestors during the prior execution of the computer system. Memory map 850 shows how memory was segregated between various localities during the prior execution of the computer system. In the example, two localities are shown: locality 851 is memory that was segregated to the hypervisor and any virtual machines (VMs) that were running under the hypervisor, and locality 852 is memory that was segregated to one or more operating systems that were running on the computer system. In the example shown, memory area 853 is where a list of the memory addresses at which secrets were stored is kept by a particular locality, in this case locality 851, which corresponds to the hypervisor. The various memory addresses where secrets were stored in the locality are depicted as memory addresses 854 (showing where any number of secrets A, B, and N were stored). - A determination is made as to whether the address list of where the secrets were stored by the locality was able to be retrieved (decision 860). If the list of addresses was not able to be retrieved (e.g., the data was corrupted, the locality did not keep a list of where the secret data was stored, etc.), then
decision 860 branches to “no” branch 865 whereupon, at step 870, the memory in the entire locality is scrubbed (in this example, the memory in locality 851). Moreover, if localities were not being used, then at step 870 the memory in the entire computer system would be scrubbed. Using a prior example, if the computer system were previously segregated into two localities, with one locality having 1 GB of memory and running the hypervisor (e.g., locality 851) and the other locality having 7 GB and running the operating system and the user's application programs, then when the memory in the hypervisor's locality is scrubbed, only 1 GB of data is scrubbed rather than all 8 GB of memory. However, if localities were not used, then the entire 8 GB of memory would be scrubbed at step 870. Processing thereafter returns to the calling process at 875. - Returning to
decision 860, if the process is able to retrieve a list of the memory addresses where secrets were stored during the prior execution of the computer system, then decision 860 branches to “yes” branch 885 whereupon, at step 890, the data in the particular memory addresses (memory addresses 854) is scrubbed (e.g., by writing over the memory addresses with zeros, using a hardware command to clear the memory, etc.). Processing then returns to the calling process at 895. -
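The scrub decision in FIG. 8 (precise addresses when the list survived, the whole locality otherwise) can be sketched like this; a `bytearray` stands in for physical memory and all names are assumptions.

```python
def boot_scrub(memory, locality_range, secret_addrs=None):
    """Scrub secrets left over from a prior execution.

    memory: bytearray standing in for system RAM.
    locality_range: (start, end) of the locality that received secrets.
    secret_addrs: recorded secret addresses, or None when the list
    could not be retrieved (corrupted, never kept, etc.).
    """
    if secret_addrs is None:                 # step 870: scrub whole locality
        start, end = locality_range
        for addr in range(start, end):
            memory[addr] = 0
    else:                                    # step 890: scrub precisely
        for addr in secret_addrs:
            memory[addr] = 0
    return memory


mem = bytearray(b"\xff" * 8)
boot_scrub(mem, (0, 4), secret_addrs=[1, 3])
print(mem)   # bytearray(b'\xff\x00\xff\x00\xff\xff\xff\xff')
```

When the address list survives, only the recorded locations are overwritten; when it does not, the fallback is the whole locality (or, with no localities at all, the whole memory range).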
FIG. 9 is a flowchart showing steps taken by the bring-up process to retrieve the memory addresses where secrets were stored during the prior usage of the computer system. Processing commences at 900 whereupon, at step 910, the TPM 195's nonvolatile storage 308 is checked for address ranges of localities 901 and addresses of secret address list(s) 902. If secrets were released to two localities (e.g., localities 851 and 852 shown in FIG. 8), then address ranges 901 would indicate the address ranges of the two localities. Likewise, if a list of where in the locality the secrets were stored is maintained by the localities, then address lists 902 would include one or more addresses for each locality pointing to where in the localities the secrets were stored. - A determination is made as to whether the address data was stored in the TPM (decision 920). If the address data is stored in the TPM, then
decision 920 branches to “yes” branch 925 whereupon, at step 930, the address ranges that were formerly used by the various localities (e.g., the hypervisor's locality, etc.) are retrieved from the TPM's nonvolatile memory (memory area 901). At step 935, the address lists identifying where the secrets were stored in the various localities are retrieved from the TPM's nonvolatile memory (memory area 902). At step 940, the TPM's nonvolatile memory areas (901 and 902) are cleared, and processing returns at 945. - Returning to
decision 920, if the address data is not stored in the TPM's nonvolatile storage, then decision 920 branches to “no” branch 955 whereupon, at step 960, the address ranges that were formerly used by the various localities (e.g., the hypervisor's locality, etc.) are retrieved from general nonvolatile memory 970 (memory area 901). At step 975, the address lists identifying where the secrets were stored in the various localities are retrieved from general nonvolatile memory 970 (memory area 902). At step 980, the general nonvolatile memory 970 used to store memory areas 901 and 902 is cleared, and processing returns. - One of the preferred implementations of the invention is a client application, namely, a set of instructions (program code) or other functional descriptive material in a code module that may, for example, be resident in the random access memory of the computer. Until required by the computer, the set of instructions may be stored in another computer memory, for example, in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD-ROM drive) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or another computer network. Thus, the present invention may be implemented as a computer program product for use in a computer. In addition, although the various methods described are conveniently implemented in a general-purpose computer selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the required method steps. Functional descriptive material is information that imparts functionality to a machine. Functional descriptive material includes, but is not limited to, computer programs, instructions, rules, facts, definitions of computable functions, objects, and data structures.
- While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects. Therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. As a non-limiting example, and as an aid to understanding, the following appended claims contain usage of the introductory phrases “at least one” and “one or more” to introduce claim elements. However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an”; the same holds true for the use in the claims of definite articles.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/040,953 US8312534B2 (en) | 2008-03-03 | 2008-03-03 | System and method for securely clearing secret data that remain in a computer system memory |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090222915A1 true US20090222915A1 (en) | 2009-09-03 |
US8312534B2 US8312534B2 (en) | 2012-11-13 |
Family
ID=41014266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/040,953 Active 2031-08-31 US8312534B2 (en) | 2008-03-03 | 2008-03-03 | System and method for securely clearing secret data that remain in a computer system memory |
Country Status (1)
Country | Link |
---|---|
US (1) | US8312534B2 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100146267A1 (en) * | 2008-12-10 | 2010-06-10 | David Konetski | Systems and methods for providing secure platform services |
US20110093689A1 (en) * | 2009-10-16 | 2011-04-21 | Dell Products L.P. | System and Method for Bios and Controller Communication |
US20110289386A1 (en) * | 2009-06-24 | 2011-11-24 | Magic Technologies, Inc. | Method and apparatus for scrubbing accumulated data errors from a memory system |
US8261320B1 (en) * | 2008-06-30 | 2012-09-04 | Symantec Corporation | Systems and methods for securely managing access to data |
US20140089712A1 (en) * | 2012-09-25 | 2014-03-27 | Apple Inc. | Security Enclave Processor Power Control |
US9047471B2 (en) | 2012-09-25 | 2015-06-02 | Apple Inc. | Security enclave processor boot control |
US9323552B1 (en) | 2013-03-14 | 2016-04-26 | Amazon Technologies, Inc. | Secure virtual machine memory allocation management via dedicated memory pools |
US9419794B2 (en) | 2012-09-25 | 2016-08-16 | Apple Inc. | Key management using security enclave processor |
US9507540B1 (en) * | 2013-03-14 | 2016-11-29 | Amazon Technologies, Inc. | Secure virtual machine memory allocation management via memory usage trust groups |
US9547778B1 (en) | 2014-09-26 | 2017-01-17 | Apple Inc. | Secure public key acceleration |
US20230055285A1 (en) * | 2021-08-19 | 2023-02-23 | Lenovo (Singapore) Pte. Ltd. | Secure erase of user data using storage regions |
US11669441B1 (en) | 2013-03-14 | 2023-06-06 | Amazon Technologies, Inc. | Secure virtual machine reboot via memory allocation recycling |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102009054114A1 (en) * | 2009-11-20 | 2011-05-26 | Siemens Aktiengesellschaft | Method and device for accessing control data according to provided rights information |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020159601A1 (en) * | 2001-04-30 | 2002-10-31 | Dennis Bushmitch | Computer network security system employing portable storage device |
US6480982B1 (en) * | 1999-06-04 | 2002-11-12 | International Business Machines Corporation | Computer RAM memory system with enhanced scrubbing and sparing |
US20030028771A1 (en) * | 1998-01-02 | 2003-02-06 | Cryptography Research, Inc. | Leak-resistant cryptographic payment smartcard |
US6560725B1 (en) * | 1999-06-18 | 2003-05-06 | Madrone Solutions, Inc. | Method for apparatus for tracking errors in a memory system |
US20030196100A1 (en) * | 2002-04-15 | 2003-10-16 | Grawrock David W. | Protection against memory attacks following reset |
US6757832B1 (en) * | 2000-02-15 | 2004-06-29 | Silverbrook Research Pty Ltd | Unauthorized modification of values in flash memory |
US20060071981A1 (en) * | 2002-12-02 | 2006-04-06 | Silverbrook Research Pty Ltd | Data rate supply proportional to the ratio of different printhead lengths |
US20060247849A1 (en) * | 2005-04-27 | 2006-11-02 | Proxemics, Inc. | Wayfinding |
US20070006226A1 (en) * | 2005-06-29 | 2007-01-04 | Microsoft Corporation | Failure management for a virtualized computing environment |
US20070005985A1 (en) * | 2005-06-30 | 2007-01-04 | Avigdor Eldar | Techniques for password attack mitigation |
US20080120710A1 (en) * | 2006-11-17 | 2008-05-22 | Prime Technology Llc | Data management |
US20080134321A1 (en) * | 2006-12-05 | 2008-06-05 | Priya Rajagopal | Tamper-resistant method and apparatus for verification and measurement of host agent dynamic data updates |
US20080189557A1 (en) * | 2005-01-19 | 2008-08-07 | Stmicroelectronics S.R.I. | Method and architecture for restricting access to a memory device |
US20080235505A1 (en) * | 2007-03-21 | 2008-09-25 | Hobson Louis B | Methods and systems to selectively scrub a system memory |
US20090172806A1 (en) * | 2007-12-31 | 2009-07-02 | Natu Mahesh S | Security management in multi-node, multi-processor platforms |
US20100005317A1 (en) * | 2007-07-11 | 2010-01-07 | Memory Experts International Inc. | Securing temporary data stored in non-volatile memory using volatile memory |
US7646873B2 (en) * | 2004-07-08 | 2010-01-12 | Magiq Technologies, Inc. | Key manager for QKD networks |
US7725703B2 (en) * | 2005-01-07 | 2010-05-25 | Microsoft Corporation | Systems and methods for securely booting a computer with a trusted processing module |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030028771A1 (en) * | 1998-01-02 | 2003-02-06 | Cryptography Research, Inc. | Leak-resistant cryptographic payment smartcard |
US6480982B1 (en) * | 1999-06-04 | 2002-11-12 | International Business Machines Corporation | Computer RAM memory system with enhanced scrubbing and sparing |
US6560725B1 (en) * | 1999-06-18 | 2003-05-06 | Madrone Solutions, Inc. | Method for apparatus for tracking errors in a memory system |
US20030135794A1 (en) * | 1999-06-18 | 2003-07-17 | Longwell Michael L. | Method for apparatus for tracking errors in a memory system |
US6757832B1 (en) * | 2000-02-15 | 2004-06-29 | Silverbrook Research Pty Ltd | Unauthorized modification of values in flash memory |
US20020159601A1 (en) * | 2001-04-30 | 2002-10-31 | Dennis Bushmitch | Computer network security system employing portable storage device |
US20030196100A1 (en) * | 2002-04-15 | 2003-10-16 | Grawrock David W. | Protection against memory attacks following reset |
US20060071981A1 (en) * | 2002-12-02 | 2006-04-06 | Silverbrook Research Pty Ltd | Data rate supply proportional to the ratio of different printhead lengths |
US7646873B2 (en) * | 2004-07-08 | 2010-01-12 | Magiq Technologies, Inc. | Key manager for QKD networks |
US7725703B2 (en) * | 2005-01-07 | 2010-05-25 | Microsoft Corporation | Systems and methods for securely booting a computer with a trusted processing module |
US20080189557A1 (en) * | 2005-01-19 | 2008-08-07 | Stmicroelectronics S.R.I. | Method and architecture for restricting access to a memory device |
US20060247849A1 (en) * | 2005-04-27 | 2006-11-02 | Proxemics, Inc. | Wayfinding |
US20070006226A1 (en) * | 2005-06-29 | 2007-01-04 | Microsoft Corporation | Failure management for a virtualized computing environment |
US20070005985A1 (en) * | 2005-06-30 | 2007-01-04 | Avigdor Eldar | Techniques for password attack mitigation |
US20080120710A1 (en) * | 2006-11-17 | 2008-05-22 | Prime Technology Llc | Data management |
US20080134321A1 (en) * | 2006-12-05 | 2008-06-05 | Priya Rajagopal | Tamper-resistant method and apparatus for verification and measurement of host agent dynamic data updates |
US20080235505A1 (en) * | 2007-03-21 | 2008-09-25 | Hobson Louis B | Methods and systems to selectively scrub a system memory |
US20100005317A1 (en) * | 2007-07-11 | 2010-01-07 | Memory Experts International Inc. | Securing temporary data stored in non-volatile memory using volatile memory |
US20090172806A1 (en) * | 2007-12-31 | 2009-07-02 | Natu Mahesh S | Security management in multi-node, multi-processor platforms |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8261320B1 (en) * | 2008-06-30 | 2012-09-04 | Symantec Corporation | Systems and methods for securely managing access to data |
US20100146267A1 (en) * | 2008-12-10 | 2010-06-10 | David Konetski | Systems and methods for providing secure platform services |
US20110289386A1 (en) * | 2009-06-24 | 2011-11-24 | Magic Technologies, Inc. | Method and apparatus for scrubbing accumulated data errors from a memory system |
US8775865B2 (en) * | 2009-06-24 | 2014-07-08 | Headway Technologies, Inc. | Method and apparatus for scrubbing accumulated disturb data errors in an array of SMT MRAM memory cells including rewriting reference bits |
US8918652B2 (en) * | 2009-10-16 | 2014-12-23 | Dell Products L.P. | System and method for BIOS and controller communication |
US20110093689A1 (en) * | 2009-10-16 | 2011-04-21 | Dell Products L.P. | System and Method for Bios and Controller Communication |
US8321657B2 (en) * | 2009-10-16 | 2012-11-27 | Dell Products L.P. | System and method for BIOS and controller communication |
US20130061031A1 (en) * | 2009-10-16 | 2013-03-07 | Alok Pant | System and method for bios and controller communication |
US20140089712A1 (en) * | 2012-09-25 | 2014-03-27 | Apple Inc. | Security Enclave Processor Power Control |
US9043632B2 (en) * | 2012-09-25 | 2015-05-26 | Apple Inc. | Security enclave processor power control |
US9047471B2 (en) | 2012-09-25 | 2015-06-02 | Apple Inc. | Security enclave processor boot control |
US9202061B1 (en) | 2012-09-25 | 2015-12-01 | Apple Inc. | Security enclave processor boot control |
US9419794B2 (en) | 2012-09-25 | 2016-08-16 | Apple Inc. | Key management using security enclave processor |
US9323552B1 (en) | 2013-03-14 | 2016-04-26 | Amazon Technologies, Inc. | Secure virtual machine memory allocation management via dedicated memory pools |
US9507540B1 (en) * | 2013-03-14 | 2016-11-29 | Amazon Technologies, Inc. | Secure virtual machine memory allocation management via memory usage trust groups |
US11669441B1 (en) | 2013-03-14 | 2023-06-06 | Amazon Technologies, Inc. | Secure virtual machine reboot via memory allocation recycling |
US9547778B1 (en) | 2014-09-26 | 2017-01-17 | Apple Inc. | Secure public key acceleration |
US9892267B1 (en) | 2014-09-26 | 2018-02-13 | Apple Inc. | Secure public key acceleration |
US10114956B1 (en) | 2014-09-26 | 2018-10-30 | Apple Inc. | Secure public key acceleration |
US10521596B1 (en) | 2014-09-26 | 2019-12-31 | Apple Inc. | Secure public key acceleration |
US10853504B1 (en) | 2014-09-26 | 2020-12-01 | Apple Inc. | Secure public key acceleration |
US11630903B1 (en) | 2014-09-26 | 2023-04-18 | Apple Inc. | Secure public key acceleration |
US12079350B2 (en) | 2014-09-26 | 2024-09-03 | Apple Inc. | Secure public key acceleration |
US20230055285A1 (en) * | 2021-08-19 | 2023-02-23 | Lenovo (Singapore) Pte. Ltd. | Secure erase of user data using storage regions |
Also Published As
Publication number | Publication date |
---|---|
US8312534B2 (en) | 2012-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8312534B2 (en) | | System and method for securely clearing secret data that remain in a computer system memory |
US8201161B2 (en) | | System and method to update device driver or firmware using a hypervisor environment without system shutdown |
US7853804B2 (en) | | System and method for secure data disposal |
US8201239B2 (en) | | Extensible pre-boot authentication |
US8151262B2 (en) | | System and method for reporting the trusted state of a virtual machine |
US9319380B2 (en) | | Below-OS security solution for distributed network endpoints |
US9354857B2 (en) | | System and method to update firmware on a hybrid drive |
JP6096301B2 (en) | | Theft prevention in firmware |
US8909940B2 (en) | | Extensible pre-boot authentication |
KR101359841B1 (en) | | Methods and apparatus for trusted boot optimization |
CN107092495B (en) | | Platform firmware armoring technology |
US7836299B2 (en) | | Virtualization of software configuration registers of the TPM cryptographic processor |
EP2652666B1 (en) | | Storage drive based antimalware methods and apparatuses |
US8607071B2 (en) | | Preventing replay attacks in encrypted file systems |
US8499345B2 (en) | | Blocking computer system ports on per user basis |
US10853086B2 (en) | | Information handling systems and related methods for establishing trust between boot firmware and applications based on user physical presence verification |
US10146704B2 (en) | | Volatile/non-volatile memory device access provisioning system |
US20170140149A1 (en) | | Detecting malign code in unused firmware memory |
US11909882B2 (en) | | Systems and methods to cryptographically verify an identity of an information handling system |
WO2019103902A1 (en) | | Software packages policies management in a securely booted enclave |
US8359635B2 (en) | | System and method for dynamic creation of privileges to secure system services |
US10146943B2 (en) | | System and method to disable the erasure of an administrator password in an information handling system |
US12008111B2 (en) | | System and method for efficient secured startup of data processing systems |
JP5476381B2 (en) | | Improved I/O control and efficiency in encrypted file systems |
US20090222635A1 (en) | | System and Method to Use Chipset Resources to Clear Sensitive Data from Computer System Memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: LENOVO (SINGAPORE) PTE. LTD., SINGAPORE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHALLENER, DAVID CARROLL;CROMER, DARYL CARVIS;LOCKER, HOWARD JEFFREY;AND OTHERS;REEL/FRAME:020657/0005. Effective date: 20080218 |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: LENOVO PC INTERNATIONAL, HONG KONG. Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:LENOVO (SINGAPORE) PTE LTD.;REEL/FRAME:037160/0001. Effective date: 20130401 |
| FPAY | Fee payment | Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 12 |