CN117807644A - Managing responses to resets in response to tamper activity detection

Info

Publication number: CN117807644A
Authority: CN (China)
Prior art keywords: reset, response, semiconductor package, secure, indication
Legal status: Pending
Application number: CN202310825398.1A
Other languages: Chinese (zh)
Inventors: T. F. Emerson, C. M. Wesneski, D. J. Zink
Current Assignee: Hewlett Packard Enterprise Development LP
Original Assignee: Hewlett Packard Enterprise Development LP
Priority claimed from US 18/166,717 (published as US20240111909A1)
Application filed by Hewlett Packard Enterprise Development LP

Abstract

The present disclosure relates to managing a response to a reset in response to tamper activity detection. A process includes receiving a given reset indication for resetting a semiconductor package. The given reset indication is one of a time series of reset indications received by the semiconductor package. The semiconductor package includes a hardware root of trust. The process includes detecting an activity associated with the semiconductor package that corresponds to tamper activity. The process includes managing a response of the semiconductor package to the given reset indication in response to detecting the activity.

Description

Managing responses to resets in response to tamper activity detection
Technical Field
The present disclosure relates to the field of computers, and more particularly to managing responses to resets in response to tamper activity detection. In particular, the present disclosure relates to a method, baseboard management controller, and computer platform for managing responses to resets.
Background
The computer platform may be subject to security attacks that aim to access information stored on the computer platform, damage components of the computer platform, and so forth. To prevent security attacks, or at least to limit the extent of the potential damage they cause, computer platforms may have different levels of security protection. For example, the computer platform may have various mechanisms to restrict access, such as firewalls, passwords, and keys. As another example, a computer platform may have a secure processor. The secure processor may provide a number of security-related functions that harden the computer platform against security attacks. As an example, a security-related function may be secure storage of platform secrets. As another example, a security-related function may be verification of firmware. As another example, a security-related function may be protection of firmware updates. As other examples, security-related functions may include cryptographic key generation, sealing cryptographic keys, and unsealing cryptographic keys.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method comprising: receiving a given reset indication for resetting a semiconductor package, wherein the given reset indication is one of a time series of reset indications received by the semiconductor package, and the semiconductor package includes a hardware root of trust; detecting an activity associated with the semiconductor package that corresponds to tamper activity; and in response to detecting the activity, managing a response of the semiconductor package to the given reset indication.
According to another aspect of the present disclosure, there is provided a baseboard management controller comprising: a management processor; and a secure enclave separate from the management processor, wherein the secure enclave has an associated cryptographic boundary and comprises: a secure processing core; a root of trust engine for verifying machine-readable instructions to be executed by the secure processing core, wherein the root of trust engine comprises a reset input; and a reset controller for: receiving a time sequence of reset indications, including receiving a current reset indication in the time sequence of reset indications and receiving at least one previous reset indication in the time sequence of reset indications; transmitting a reset signal to the reset input to place the root of trust engine in reset in response to the current reset indication; and controlling a delay applied when releasing the reset in response to detecting tampering with the secure enclave.
According to yet another aspect of the present disclosure, there is provided a computer platform comprising: a main processing core; and a secure processor, the secure processor comprising: a secure processing core; a root of trust engine, wherein the root of trust engine is to verify a first firmware instruction portion to be executed by the secure processing core, the first firmware instruction portion being part of a chain of trust, and the chain of trust comprising a second firmware instruction portion to be executed by the main processing core; and a reset controller for: receiving a series of reset requests for resetting the secure processor; responding to the series of reset requests, wherein responding to the series of reset requests includes providing a reset signal to the root of trust engine in response to each reset request in the series of reset requests; and throttling the response to the series of reset requests in response to detecting tampering with the secure processor.
Other aspects are also provided.
Drawings
FIG. 1 is a schematic diagram of a computer platform having a reset controller for protecting a secure processor of the computer platform from environmental condition-induced security attacks, according to an example embodiment.
FIG. 2 is a schematic diagram of a secure enclave of a secure processor, according to an example embodiment.
FIG. 3 is a block diagram of a subsystem of a secure enclave for adjusting a reset hold time, according to an example embodiment.
FIG. 4 is a flowchart depicting a process performed by a secure processor to regulate a response of the secure processor to a reset request in accordance with an example embodiment.
FIG. 5 depicts a silicon root-of-trust (SRoT) engine of a secure processor, illustrating the incorporation of digital canary circuitry in the SRoT engine, according to an example embodiment.
FIG. 6 is an illustration of a spatial mix of root of trust engine logic gates and canary circuit logic gates in accordance with an example embodiment.
FIG. 7 depicts a processing core of a secure processor, illustrating the incorporation of digital canary circuitry in the processing core, according to an example embodiment.
Fig. 8 is a diagram of a spatial mix of processing core logic gates and canary circuit logic gates in accordance with an example embodiment.
Fig. 9 is a block diagram of a digital canary circuit in accordance with an example embodiment.
FIG. 10 is a flowchart of a process for managing the response of an integrated circuit to a reset indication in response to detecting tampering activity, according to an example embodiment.
FIG. 11 is a block diagram of a Baseboard Management Controller (BMC) having a reset controller for controlling, in response to tamper detection, a delay applied when releasing a secure enclave of the BMC from a reset state, according to an example embodiment.
FIG. 12 is a block diagram of a computer platform having a reset controller for throttling reset responses of a secure enclave, according to an example embodiment.
Detailed Description
The computer platform may include a secure processor that performs one or more security-related services for the computer platform. For example, the secure processor may verify firmware instructions as part of a secure boot of the computer platform. As a more specific example, the secure processor may include a root of trust engine to verify firmware instructions associated with a cryptographic chain of trust in response to a reset or power-up of the computer platform. As other examples, the secure processor may perform one or more of the following: storing a measurement hash, loading a reference measurement hash, storing a cryptographic key, retrieving a cryptographic key, generating a cryptographic key, retrieving a cryptographic platform identity, creating a certificate, storing a certificate, adding a certificate, deleting a certificate, sealing a cryptographic key, and unsealing a cryptographic key.
One potential way to attack a computer platform is to manipulate the environmental conditions of the platform's secure processor so that they fall well outside the processor's specified ranges for those conditions. Here, an "environmental condition" of the secure processor refers to a parameter or characteristic of the secure processor's operating state that has an expected operating range and that may be affected by stimuli external to the secure processor. As examples, an environmental condition of the secure processor may be the magnitude of a supply voltage provided to the secure processor, the rate of a clock signal received by the secure processor, the die temperature of the secure processor, the radiation level at the secure processor, or the strength of an electromagnetic field to which the secure processor is exposed. Purposeful manipulation of the secure processor's environmental conditions so that they fall outside the corresponding expected ranges is referred to herein as an "environmental condition-induced" security attack.
As an example of an environmental condition-induced security attack, the secure processor may be provided with a supply voltage outside its specified supply voltage range. As another example, the rate of the clock signal provided to the secure processor may be greater than the maximum clock frequency specified for the secure processor. As another example, the ambient temperature of the secure processor may be manipulated (e.g., a fan may be turned off, a heat sink may be removed, thermal energy may be applied to the surrounding environment, or another mechanism may be used to raise the ambient temperature) so that the die temperature of the secure processor exceeds the maximum die temperature specified for the secure processor.
The goal of an environmental condition-induced security attack on a secure processor is to cause the secure processor to fail, or to cause one or more minor faults, in order to open an attack path into a system that the secure processor would otherwise shut down. A failure may manifest in a number of different ways. As examples, a secure processor failure may cause a bit state to flip; program instructions to be corrupted; program execution to deviate from expected behavior; protected instructions that should not be executed to be executed; instructions whose execution should not be bypassed to be bypassed; firmware verification to be bypassed; or, in general, anomalies in the behavior of the secure processor. In other words, an environmental condition-induced security attack may provide an attack path that bypasses secure processor safeguards that are otherwise in place when the secure processor operates properly. Because the nature of an environmental condition-induced security attack is to cause the secure processor to operate outside of its design specifications, the exact root cause(s) of the fault may be unknown and may not be predictable.
Subjecting the secure processor to an environmental condition-induced security attack, coupled with repeatedly resetting the secure processor, may increase the likelihood that the attack succeeds. For example, the secure processor may be part of a computer platform with a secure boot that ensures that the firmware and software executing on the computer platform are authenticated. Secure booting involves the computer platform establishing a cryptographic chain of trust for the machine-readable instructions (e.g., firmware and software) executed by the computer platform. The chain of trust begins with an initial link, or "trust anchor." In general, secure booting involves each set of machine-readable instructions corresponding to a link of the chain of trust being deemed trusted, then being loaded and executed to verify that the next set of machine-readable instructions, corresponding to the next link, is trusted before that next set is allowed to be loaded and executed.
Because the secure processor may play a role in establishing the computer platform's chain of trust when the component(s) of the secure processor are released from reset, each reset of the secure processor is another opportunity for an induced failure to benefit an attacker. For example, a secure processor may contain one or more components that establish one or more links of the cryptographic chain of trust, and a failure of one or more such components may present an attack path.
As a more specific example, the secure processor may contain a root of trust engine that establishes the trust anchor for the computer platform's chain of trust when the root of trust engine is released from reset. If the secure processor faults at or near the time the root of trust engine is released from reset, the fault may, for example, result in a set of malicious initial firmware instructions being erroneously authenticated (i.e., erroneously deemed trusted). As another example, the firmware executed by a processing core of the secure processor may be trusted, but after being released from reset the processing core may erroneously authenticate a set of malicious firmware instructions corresponding to the next link of the chain of trust. As another example, the initial set of firmware may be trusted, but after being released from reset the processing core may fail to properly perform initial fault and/or security checks, or may generally exhibit unexpected execution behavior.
According to example embodiments described herein, a semiconductor package has a reset controller for controlling, or regulating, the reset response of a system. Here, regulating the reset response of a system may refer to regulating the reset response of the entire system or of a particular component or subsystem of the system. As an example, according to some embodiments, regulating the reset response of the system may include regulating the reset response of the semiconductor package that contains the reset controller. As another example, regulating the reset response of the system may include regulating the reset response of a subsystem (e.g., a management subsystem) or component (e.g., a management controller) that includes the semiconductor package containing the reset controller. As another example, regulating the reset response of the system may include regulating the reset response of the computer platform (e.g., power cycling of the computer platform) that includes the semiconductor package containing the reset controller. According to example embodiments, controlling or regulating the "reset response" refers to controlling or regulating the time that the component, subsystem, or system remains in reset after being placed in reset. In other words, the reset controller controls or regulates the "reset hold time," i.e., the continuous time between the start of the reset and the end of the reset, also called the release of the reset. Although specific examples described herein control or regulate the reset response of the semiconductor package containing the reset controller, it should be understood that, according to further example embodiments, the regulated reset response may extend beyond that semiconductor package.
In general, the reset controller controls the response of the semiconductor package to a "reset request." Here, a "reset request" refers to an indication that is provided or generated to reset one or more components of the semiconductor package. As an example, a reset request may be provided by asserting an electrical reset signal. Many different components inside and outside the semiconductor package may cause assertion of a reset signal. As an example, a power monitoring circuit external to the semiconductor package may assert a reset signal. As another example, an internal watchdog timer of the semiconductor package may assert a reset signal in response to the timer expiring. As another example, an internal circuit may assert the reset signal in response to a particular bit in a control register being written.
The reset signal may have a first state (e.g., an asserted state, such as a logic zero state) for placing the semiconductor package in reset (or requesting initiation of a reset) and a second state (e.g., a de-asserted state, such as a logic one state) for releasing the semiconductor package from reset. According to an example embodiment, the reset controller receives an input reset signal (e.g., a reset signal asserted to provide a reset request) and provides an output reset signal to the reset terminal(s) of one or more components of the semiconductor package. The reset controller regulates the reset hold time, i.e., the delay between the time the reset controller asserts the output reset signal (to begin the reset) and the time the reset controller de-asserts the output reset signal (to release the reset).
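As a rough illustration of this behavior, the following C sketch mirrors an input reset request onto the output reset signal while enforcing a hold time. The active-low polarity and the accessor functions (read_input_reset, write_output_reset, now_ms) are assumptions made for illustration, not details taken from this disclosure.

    #include <stdint.h>

    /* Assumed active-low polarity: logic 0 asserts reset, logic 1 releases it. */
    #define RESET_ASSERTED    0u
    #define RESET_DEASSERTED  1u

    /* Hypothetical hardware accessors; a real design would map these onto
     * memory-mapped registers of the reset control circuit. */
    extern uint32_t read_input_reset(void);
    extern void write_output_reset(uint32_t state);
    extern uint64_t now_ms(void);

    /* Mirror an input reset request onto the output reset signal, but keep
     * the output asserted for at least hold_ms, independently of when the
     * input reset signal de-asserts. */
    void service_reset_request(uint64_t hold_ms)
    {
        if (read_input_reset() == RESET_ASSERTED) {
            uint64_t start = now_ms();
            write_output_reset(RESET_ASSERTED);       /* begin the reset */
            while (now_ms() - start < hold_ms) {
                /* Hold time elapses regardless of the input signal state. */
            }
            write_output_reset(RESET_DEASSERTED);     /* release the reset */
        }
    }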
According to example embodiments, the reset controller may regulate the reset hold time independently of any reset hold time indicated by the input reset signal. In other words, while the reset controller may initiate a reset in response to assertion of the input reset signal, the reset controller's release of the reset may be independent of the time at which the input reset signal is de-asserted.
According to an example embodiment, the reset controller adjusts the reset hold time based on whether tamper activity involving the semiconductor package has been detected. For example, when no tamper activity has been detected, the reset controller may initially apply a first, smaller reset hold time (e.g., a reset hold time commensurate with the hold time of the input reset signal, or a predefined minimum reset hold time). The reset controller may increase the reset hold time when tamper activity is detected. Here, detecting "tamper activity" refers to detecting or identifying a signature of one or more events (a temporal pattern of events, attributes of events, a specific sequence of events), where the signature is consistent with the signature of a security attack.
As examples, tamper activity may be detected via a switch sensor indicating that a computer platform (e.g., a blade server) has been removed from an enclosure (e.g., a rack), or via a switch sensor indicating that a chassis lid of the computer platform has been opened. As other examples, tamper activity may be detected in response to detecting that the magnitude of the supply voltage deviates from a specified range, that the clock rate deviates from a specified range, or that the die temperature deviates from a specified range.
As another example, according to some embodiments, a semiconductor package may have one or more digital canary circuits that are purposefully designed to fail at, near, or before the time that one or more protected components of the semiconductor package fail. As further described herein, according to example embodiments, the acute sensitivity of the canary circuits to environmental condition-induced security attacks enables timely action to be taken to mitigate or prevent security damage. As used herein, a "canary circuit" refers to a circuit that faults under an environmental condition-induced security attack and provides an observable indication when the fault occurs, such that the indication can be used as an indicator of the environmental condition-induced security attack. According to some embodiments, the canary circuit performs one or more cryptographic transformations that produce an output based on a known input, and the output may be used as the indicator. In this way, a deviation of the canary circuit's output from the expected output indicates a fault, and therefore indicates an environmental condition-induced security attack.
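The following sketch illustrates a known-answer check of the kind described above; the transformation, the test vectors, and the function names are hypothetical placeholders for whatever cryptographic transformation a given canary circuit implements.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical transformation implemented by the canary logic; the
     * description above only requires that it produce a known output for a
     * known input. */
    extern void canary_transform(const uint8_t in[16], uint8_t out[16]);

    static const uint8_t known_input[16];      /* fixed test vector (assumed) */
    static const uint8_t expected_output[16];  /* precomputed known answer (assumed) */

    /* A mismatch means the canary logic faulted, which serves as an indicator
     * of an environmental condition-induced security attack. */
    bool canary_indicates_tamper(void)
    {
        uint8_t out[16];
        canary_transform(known_input, out);
        return memcmp(out, expected_output, sizeof(out)) != 0;
    }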
Because the reset controller adjusts the reset hold time based on detected tamper activity, the reset controller can advantageously throttle the number of resets that may occur during a security attack, thereby reducing the likelihood that the attack compromises the integrity of components within the semiconductor package.
Although the semiconductor package may include a dedicated tamper detection circuit, according to some embodiments the reset controller may also detect tamper activity itself. For example, according to some embodiments, the reset controller may detect tamper activity based on the observed temporal pattern of reset requests, such as by detecting when the current rate of reset requests exceeds a predefined threshold rate.
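A minimal sketch of such a rate check follows, assuming illustrative threshold values and a caller that invokes the function once per reset request; neither the constants nor the function names come from this disclosure.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative thresholds: flag tampering when more than MAX_RESETS
     * reset requests arrive within WINDOW_MS milliseconds. */
    #define MAX_RESETS  4u
    #define WINDOW_MS   10000u

    static uint64_t timestamps[MAX_RESETS];  /* ring buffer of request times */
    static uint32_t request_count;

    /* Call once per reset request with the current time; returns true when
     * the observed request rate exceeds the predefined threshold. */
    bool reset_rate_exceeds_threshold(uint64_t now_ms)
    {
        uint32_t slot = request_count % MAX_RESETS;
        uint64_t oldest = timestamps[slot];  /* request from MAX_RESETS calls ago */
        timestamps[slot] = now_ms;
        request_count++;
        return request_count > MAX_RESETS && (now_ms - oldest) < WINDOW_MS;
    }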
The semiconductor package may initiate and/or perform one or more actions in response to detecting tamper activity, and these actions may extend beyond changing the reset response, serving to counter, report, and/or mitigate the detected tamper activity. As an example, a power-down of the semiconductor package may be initiated in response to detecting tamper activity. As another example, the semiconductor package may log that tamper activity was detected. As another example, the semiconductor package may send, or initiate the sending of, an alert (e.g., a message or other notification to a system administrator) in response to the detected tamper activity. As another example, the semiconductor package may erase a secret stored in secure memory in response to detecting tamper activity.
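A hypothetical handler tying these responsive actions together might look as follows; each helper function is assumed for illustration and is not named by this disclosure.

    /* Hypothetical responsive-action dispatch; which actions run, and in
     * what order, is a policy choice. */
    extern void log_tamper_event(void);         /* record the detection */
    extern void erase_stored_secrets(void);     /* scrub secrets from secure memory */
    extern void send_administrator_alert(void); /* notify a management server */
    extern void initiate_power_down(void);      /* optionally power down the package */

    void on_tamper_detected(void)
    {
        log_tamper_event();
        erase_stored_secrets();
        send_administrator_alert();
        initiate_power_down();
    }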
Referring to FIG. 1, as a more specific example, according to some embodiments a computer platform 100 includes a semiconductor package 153 that includes a reset controller 135. The semiconductor package 153 can be any of a number of different semiconductor packages, such as a surface mount package, a through-hole package, a ball grid array package, a low profile package, a chip scale package, or any other container containing one or more semiconductor dies 157. As further described herein, according to example embodiments, the reset controller 135 controls the reset response of the semiconductor package 153, i.e., how the semiconductor package 153 responds to reset requests. More specifically, according to an example embodiment, the reset controller 135 adjusts the reset hold time of the semiconductor package 153 based on whether tamper activity involving the semiconductor package 153 has been detected.
According to some embodiments, tamper activity may be detected by a tamper detection circuit of the semiconductor package. According to some embodiments, tamper activity may be detected by the reset controller 135. For example, according to an example embodiment, the reset controller 135 monitors the temporal pattern of reset requests of the semiconductor package 153 to determine whether the rate at which resets are requested exceeds a predefined threshold (which indicates that tamper activity has been detected). According to example embodiments, the semiconductor package 153 may initiate and/or perform one or more responsive actions in response to detecting tamper activity, to counter the tamper activity, report the tamper activity, and/or mitigate its effects.
Depending on the particular implementation, the reset controller 135 may be part of any of a number of different semiconductor packages. As one example, as depicted in FIG. 1, the semiconductor package 153 may contain the secure processor 130, and the reset controller 135 may be part of the secure processor 130. As an example, the secure processor 130 may be fabricated on one or more semiconductor dies 157 of the semiconductor package 153. As used herein, a "secure processor" refers to a hardware component of an electronic device (e.g., computer platform 100) that performs one or more security-related services for the electronic device.
According to further embodiments, the semiconductor package may contain the reset controller 135, but the semiconductor package may not include a secure processor. For example, according to some embodiments, a Central Processing Unit (CPU) semiconductor package (or "socket") may include a reset controller 135 and one or more processing cores (e.g., CPU cores), and the CPU semiconductor package may not include a secure processor. As another example, according to further embodiments, the semiconductor package may include the reset controller 135, but not include any processing cores or any secure processors.
The secure processor 130 may have any of a number of different forms, depending on the particular implementation. For example, according to some embodiments, the secure processor 130 may correspond to a separate, security-dedicated semiconductor package that contains a hardware root of trust, verifies a firmware image of the computer platform, and controls booting of the computer platform based on the results of the verification. As another example, according to some embodiments, the secure processor 130 may be a Trusted Platform Module (TPM). As another example, according to some embodiments, the secure processor 130 may be a coprocessor of a multi-CPU-core CPU semiconductor package (or "socket").
For the example embodiment depicted in fig. 1, secure processor 130 is part of a management controller (such as BMC 129). According to some embodiments, secure processor 130 and BMC 129 may be fabricated on the same semiconductor die 157. According to a further example embodiment, the secure processor 130 and the BMC 129 may be fabricated on respective semiconductor die 157.
In the context of this document, a "BMC" or "baseboard management controller" is a dedicated service processor that uses sensors to monitor the physical state of a server or other hardware and communicates with a management system through a management network. The baseboard management controller may also communicate with applications executing at the operating system level through an input/output controller (IOCTL) interface driver, a Representational State Transfer (REST) Application Programming Interface (API), or some other system software proxy that facilitates communication between the baseboard management controller and the applications. The baseboard management controller may have hardware-level access to hardware devices located in a server chassis, including system memory. The baseboard management controller may be capable of directly modifying the hardware devices. The baseboard management controller may operate independently of the operating system of the system in which the baseboard management controller is disposed. The baseboard management controller may be located on the motherboard or main circuit board of the server or other device to be monitored.
The fact that the baseboard management controller is mounted on or otherwise connected or attached to the motherboard of the managed server/hardware does not prevent the baseboard management controller from being considered "separate" from the server/hardware. As used herein, a baseboard management controller has the ability to manage subsystems of a computing device and is separate from processing resources executing the operating system of the computing device. The baseboard management controller is separate from a processor (e.g., a central processing unit) that executes a high-level operating system or hypervisor on the system.
According to an example embodiment, computer platform 100 is a modular unit that includes a frame or chassis. In addition, the modular unit may include hardware mounted on the chassis and capable of executing machine readable instructions. According to some embodiments, a blade server is an example of computer platform 100. According to further embodiments, the computer platform 100 may have a variety of other forms, such as a rack-mounted server, a stand-alone server, a client, a desktop, a smart phone, a wearable computer, a network component, a gateway, a network switch, a storage array, a portable electronic device, a portable computer, a tablet computer, a thin client, a laptop computer, a television, a modular switch, a consumer electronic device, an appliance, an edge processing system, a sensor system, a watch, a removable peripheral card, or generally any other processor-based platform.
According to an example embodiment, the computer platform 100 may be connected to a network fabric 161. The network fabric 161 may be associated with one or more types of communication networks, such as, by way of example, a Fibre Channel network, a Compute Express Link (CXL) fabric, a dedicated management network, a Local Area Network (LAN), a Wide Area Network (WAN), a global network (e.g., the Internet), a wireless network, or any combination thereof.
According to an example embodiment, the BMC 129 may execute a set of firmware instructions, referred to as a "firmware management stack," to perform various management-related functions for the host 101 of the computer platform 100. As examples, the BMC 129 may provide management-related functions such as operating system runtime services; resource detection and initialization; and pre-operating system services. The management-related functions may also include remote management functions. As examples, the remote management functions may include: Keyboard Video Mouse (KVM) functions; virtual power functions (e.g., functions for remotely setting a power state, such as a power saving state, a power-on state, a reset state, or a power-off state); virtual media management functions; and one or more other management-related functions for the host 101.
A "host" (or "host instance") is associated with an operating system 113 instance (e.g., a Linux or Windows operating system instance) and is provided by a corresponding set of resources of computer platform 100. For the example embodiment depicted in fig. 1, the resources of host 101 may include one or more main CPU cores 102 (e.g., CPU processing cores, semiconductors containing CPU processor cores), and memory devices connected to CPU core(s) 102 to form system memory 104. The CPU core(s) 102 may be coupled to one or more input/output (I/O) bridges 106 that allow communication between the CPU core(s) 102 and the BMC 129, as well as with various I/O devices, such as a storage drive 122; one or more Network Interface Controllers (NICs) 124; one or more Universal Serial Bus (USB) devices 126; an I/O device; a video controller; etc. Further, as also depicted in fig. 1, computer platform 100 may include one or more peripheral component interconnect express (PCIe) devices 110 (e.g., PCIe expansion cards) that may be coupled to CPU core(s) 102 through corresponding individual PCIe bus(s) 108. According to further example embodiments, PCIe device(s) 110 may be coupled to I/O bridge(s) 106 instead of CPU core(s) 102. According to still further embodiments, the I/O bridge(s) 106 and PCIe interface may be part of the CPU core(s) 102.
According to some embodiments, computer platform 100 may contain multiple hosts 101 (e.g., each host 101 may correspond to an associated CPU multi-core package (or "socket")). The BMC 129 may provide management related services and security related services for each host 101.
In general, the memory devices that form the system memory 104, as well as the other memories and storage media described herein, may be formed from non-transitory memory devices, such as semiconductor memory devices, flash memory devices, memristors, phase change memory devices, combinations of one or more of the foregoing storage technologies, and the like. Further, unless otherwise indicated herein, a memory device may be a volatile memory device (e.g., a Dynamic Random Access Memory (DRAM) device, a Static Random Access Memory (SRAM) device, etc.) or a non-volatile memory device (e.g., a flash memory device, a Read Only Memory (ROM) device, etc.).
According to some embodiments, one or more NICs 124 may be intelligent input/output peripherals, or "smart I/O peripherals," which may provide backend I/O services for one or more applications 115 (or application instances) executing on computer platform 100. According to some embodiments, one or more of PCIe devices 110 may be intelligent I/O peripherals.
According to an example embodiment, the BMC 129 includes one or more main management processing cores 154 (referred to herein as "main processing core(s) 154") that execute machine-readable instructions, as part of the BMC's management plane, to perform management functions for the host 101. These instructions may correspond to the firmware management stack of the BMC 129. The main processing core(s) 154 execute the firmware management stack to allow the BMC 129 to perform various management roles for the host 101, such as monitoring sensors; monitoring operating system status; monitoring power status; logging computer system events; providing a remote console; providing remote control functions and other virtual presence technologies; and other management activities. According to an example embodiment, the BMC 129 may communicate with a remote management server 190 via a NIC 158 of the BMC 129.
According to further embodiments, the BMC 129 may communicate with the remote management server 190 via the NIC 124 over a sideband bus 125 (e.g., a bus corresponding to a network controller sideband interface (NC-SI) electrical interface and protocol defined by the Distributed Management Task Force (DMTF)).
In addition to providing management functions for the host(s) 101, the BMC 129 may provide security-related features that protect the host(s) 101 against security attacks. More specifically, according to an example embodiment, the security plane of the BMC includes a secure enclave 140. Here, a "secure enclave" refers to a subsystem of the BMC 129 for which access into and out of the subsystem is tightly controlled. As further described herein, according to example embodiments, the secure enclave 140 may include, among its other features: the reset controller 135; one or more digital canary circuits 134 (also referred to herein as "canary circuits 134"); a secure memory 144; and a secure processing core 142.
According to an example embodiment, the secure enclave 140 performs cryptographic functions for the host(s) 101 and is disposed entirely within a cryptographic boundary. Here, a "cryptographic boundary" refers to a contiguous boundary, or perimeter, that contains the logical and physical components of a cryptographic subsystem, such as the BMC components that form the secure enclave 140.
According to an example embodiment, the secure enclave 140 of the BMC 129 is isolated from the management plane of the BMC (as well as other non-secure components of the BMC 129 that are external to the secure enclave 140). According to an example embodiment, the secure enclave 140 includes hardware or silicon RoT (referred to herein as "SRoT"), which may be provided via an SRoT engine 143.
More specifically, according to an example embodiment, the secure enclave 140 stores an immutable fingerprint that is used by the SRoT engine 143 to verify an initial portion of the firmware 170 (i.e., to verify that the initial portion is trustworthy) before the initial portion of the firmware 170 is executed. According to an example embodiment, the SRoT engine 143 holds the secure processing core 142, the management processing core 154, and the main CPU cores 102 in reset until the SRoT engine 143 verifies the initial portion of the firmware 170. In response to a power-on/reset, the SRoT engine 143 verifies the initial portion of the firmware 170 and then loads it into the memory 151 of the secure enclave 140, so that this firmware portion is now trusted. The SRoT engine 143 then releases the secure processing core 142 from reset to allow the secure processing core 142 to boot and execute the loaded firmware instructions.
By executing the firmware instructions, the secure processing core 142 may then verify one or more portions of the firmware 170 that contain additional instructions for the secure processing core 142 to execute. In addition, the secure processing core 142 may then verify another portion of the firmware 170 corresponding to a portion of the BMC management firmware stack and, after verification, load that portion of the firmware stack into the memory 155 of the BMC 129. That portion of the management firmware stack may then be executed by the BMC's main processing core(s) 154 (when released from reset), which causes the main processing core(s) 154 to load additional portions of the firmware 170 and place the loaded portions into the memory 164. Access to the memory 164 may involve additional training and initialization steps (e.g., the training and initialization steps set forth by the DDR4 specification). These instructions may be executed from the verified portion of the BMC firmware management stack in the memory 155. According to an example embodiment, the secure enclave 140 may lock the memory 155 to prevent modification of, or tampering with, the verified firmware portion(s) stored in the memory 155.
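The hierarchical verify-then-release flow described above can be sketched as follows; the helper functions and core identifiers are assumptions, and a real SRoT engine would implement the fingerprint comparison in hardware.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helpers; fingerprint_matches stands in for comparing a
     * measured digest against the immutable fingerprint held by the SRoT. */
    extern bool fingerprint_matches(const uint8_t *img, size_t len);
    extern bool signature_verifies(const uint8_t *img, size_t len);
    extern void copy_to_secure_memory(const uint8_t *img, size_t len);
    extern void release_core_from_reset(int core_id);

    enum { SECURE_PROCESSING_CORE, MAIN_PROCESSING_CORE };

    /* All cores are held in reset on entry; each link of the chain is
     * verified before the core that executes it is released. */
    bool srot_boot(const uint8_t *initial_fw, size_t initial_len,
                   const uint8_t *next_fw, size_t next_len)
    {
        if (!fingerprint_matches(initial_fw, initial_len))
            return false;                        /* untrusted: stay in reset */
        copy_to_secure_memory(initial_fw, initial_len);
        release_core_from_reset(SECURE_PROCESSING_CORE);

        /* The trusted initial firmware extends the chain by verifying the
         * next portion before the main processing core may execute it. */
        if (!signature_verifies(next_fw, next_len))
            return false;
        release_core_from_reset(MAIN_PROCESSING_CORE);
        return true;
    }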
Thus, according to an example embodiment, the cryptographic chain of trust anchored by the SRoT may extend from the SRoT to the firmware management stack executed by the BMC's main processing core(s) 154. Further, according to example embodiments, the firmware management stack executed by the main processing core(s) 154 may verify the host system firmware, such as Unified Extensible Firmware Interface (UEFI) firmware 111, thereby extending the chain of trust to the host system firmware. According to an example embodiment, the UEFI firmware 111 may be provided from the firmware 170 over a bus fabric.
According to an example embodiment, the BMC 129 is configured to prevent a given domain or entity of the BMC 129 from powering up or exiting reset until the secure enclave 140 verifies that domain/entity. Furthermore, according to example embodiments, the BMC 129 may prevent components of the BMC 129 from accessing resources of the BMC 129 and resources of the computer platform 100 until the secure enclave 140 approves/verifies those resources. The BMC 129 may perform bus filtering and monitoring (e.g., on a Serial Peripheral Interface (SPI) bus, a System Management Bus (SMBus), an Inter-Integrated Circuit (I2C) bus, an Improved I2C (I3C) bus, and so forth) to prevent unwanted access to bus devices. For example, the BMC 129 may perform bus filtering and monitoring on a bus 167 (e.g., an SPI bus) coupled to a non-volatile memory 168 that stores the firmware 170.
According to an example embodiment, the reset controller 135 may be fabricated on the same semiconductor die 157 as the secure enclave 140. As described herein, according to example embodiments, the secure processor 130 has a hierarchical reset: although the secure processing core 142 and the SRoT engine 143 are placed in reset at the same time, they are released from reset at different times. In this manner, according to an example embodiment, the SRoT engine 143 is first released from reset (while the secure processing core 142 remains in reset) to verify the initial portion of the firmware 170. According to an example embodiment, after verification, the SRoT engine 143 loads the initial portion of the firmware 170 into the memory 151 and releases the secure processing core 142 from reset to execute it. According to an example embodiment, the reset hold time (as regulated by the reset controller 135) controls the time that the SRoT engine 143 remains in reset. By imposing a controllable reset hold time on the reset state, the reset controller 135 may throttle, or limit, reset requests, thereby imposing a limit on the rate at which the secure processor 130 may be reset. According to an example embodiment, all of these components (i.e., the BMC 129, the secure processing core 142, and the SRoT engine 143) may be placed in reset at the same time.
According to some embodiments, the reset controller 135 adjusts the duration of the reset hold time based on whether tamper activity has been detected. For example, according to an example embodiment, when a tamper detection history (e.g., a history represented by non-volatile memory bit(s)) indicates no previous tamper detection, the reset controller 135 may apply a first, smaller reset hold time, and the reset controller 135 may increase the reset hold time in response to tamper activity being detected. According to some embodiments, an indication of tamper activity may result from a sensor of a tamper detection circuit of the secure processor 130 detecting that an environmental condition (e.g., a supply voltage magnitude, a clock rate, or a die temperature) has moved outside its specified range. According to an example embodiment, a canary circuit 134 of the secure processor 130 may detect an environmental condition-induced security attack and provide a corresponding tamper detection indication.
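A minimal sketch of this hold-time policy follows, assuming illustrative hold-time values and a hypothetical accessor for the non-volatile tamper history bit(s).

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative hold times only; the disclosure does not give values. */
    #define HOLD_MS_BASELINE   10u       /* small hold when no tampering seen */
    #define HOLD_MS_ESCALATED  60000u    /* longer hold after tamper detection */

    extern bool tamper_history_bit_set(void);  /* non-volatile history bit(s) */

    uint64_t select_reset_hold_ms(void)
    {
        return tamper_history_bit_set() ? HOLD_MS_ESCALATED : HOLD_MS_BASELINE;
    }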
According to some embodiments, the reset controller 135 may detect tamper activity based on the temporal pattern of the reset requests provided to the secure processor 130. For example, according to some embodiments, the reset controller 135 may detect tamper activity in response to the rate of reset requests exceeding a predefined rate threshold (e.g., more than a maximum number N of reset requests within a time period T).
Referring to fig. 2, according to an example embodiment, the BMC 129 may be a complete system on a chip (SOC) and the secure enclave 140 may be contained within a tightly controlled cryptographic boundary 204. In general, components of the secure enclave 140 may communicate using the bus infrastructure 205. According to an example embodiment, the bus infrastructure 205 may include features such as a data bus, a control bus, an address bus, a system bus, one or more buses, one or more bridges, and the like.
The volatile memory 151 may be, for example, Static Random Access Memory (SRAM), and may store data representing Trusted Computing Base (TCB) measurements, such as one or more PCR banks. The secure memory 144 may be, for example, non-volatile RAM (NVRAM). The secure enclave 140 may include registers 240. Depending on the particular implementation, the registers 240 may be software registers, hardware registers, or a combination of hardware and software registers. For example, according to some embodiments, the registers 240 include cryptographically secure registers, such as software PCRs. Further, according to an example embodiment, the registers 240 may include operational registers, such as hardware registers that provide control, status, and configuration functions for the secure enclave 140.
According to an example embodiment, the secure enclave 140 includes a secure bridge 214 that controls access to the secure enclave 140 via a secure interconnect 218 (i.e., establishes a firewall for the secure enclave 140). By way of example, the interconnect 218 may include a bus or an internal interconnect fabric, such as an Advanced Microcontroller Bus Architecture (AMBA) fabric or an Advanced eXtensible Interface (AXI) fabric. As an example, according to some embodiments, the interconnect 218 may include an SPI bus controller for coupling one or more SPI devices to the secure enclave 140. The secure bridge 214 may provide an additional upstream interface to allow the secure enclave 140 to "reach out" to the interconnect 218. The secure enclave 140 may use the upstream interface to fetch its firmware and, in general, to verify the firmware 170 (FIG. 1). The secure bridge 214 may apply filtering and monitoring on the interconnect 218 to prevent unauthorized access to the memory 151. According to an example embodiment, the management plane of the BMC 129 may communicate with the secure enclave 140 via execution of one or more secure service Application Programming Interfaces (APIs).
As also depicted in FIG. 2, according to an example embodiment, the secure enclave 140 may include a tamper detection circuit 234. According to an example embodiment, the tamper detection circuit 234 receives one or more environmental signals 236 (e.g., sensor signals representing die temperature, clock rate, supply voltage magnitude, enclosure-open state, removal state, etc.), which the tamper detection circuit 234 may use to detect tampering. For example, the tamper detection circuit 234 may compare a value (e.g., a supply voltage) represented by a particular environmental signal 236 to a threshold (e.g., an upper or lower supply voltage threshold defining a particular supply voltage range) to determine whether tamper activity is detected. As another example, the tamper detection circuit 234 may determine whether a particular environmental signal 236 (e.g., a switch state indicating whether a chassis lid has been opened) has a state indicating that a lid of the computer platform 100 has been opened.
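As a simple illustration, a threshold comparison of this kind might be sketched as follows; the limit values are placeholders, as the actual limits would come from the secure processor's specifications.

    #include <stdbool.h>

    /* Illustrative specification limits; real limits come from the part's
     * datasheet. */
    #define VDD_MIN_MV      850
    #define VDD_MAX_MV     1150
    #define DIE_TEMP_MAX_C  105
    #define CLK_MAX_MHZ     500

    bool environment_indicates_tamper(int vdd_mv, int die_temp_c,
                                      int clk_mhz, bool lid_open)
    {
        return vdd_mv < VDD_MIN_MV || vdd_mv > VDD_MAX_MV ||
               die_temp_c > DIE_TEMP_MAX_C ||
               clk_mhz > CLK_MAX_MHZ ||
               lid_open;
    }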
According to an example embodiment, the tamper detection circuit 234 may monitor tamper indication signals 237 provided by the canary circuits 134 of the secure enclave 140 to determine whether any of the canary circuits 134 has detected environmental condition-induced tamper activity. Further, according to some embodiments, the tamper detection circuit 234 may receive an indication of detected tamper activity from the reset controller 135 (e.g., an indication that the reset controller 135 has detected a rate of reset requests exceeding a predefined rate threshold) via one or more communication lines 274.
According to some implementations, the tamper detection circuit 234 may monitor a reset indicator 283 (e.g., a bit stored in non-volatile memory) associated with a real-time clock (RTC) device 280 to detect tamper activity associated with the RTC device 280. According to example embodiments, the reset indicator 283 may be coupled to the tamper detection circuit 234 and/or to a controller internal to the reset controller 135. In this manner, as further described herein, the RTC device 280 may be used by the reset controller 135 to measure the reset hold time, according to example embodiments. The RTC device 280 may be coupled to a backup battery 285 (e.g., a "coin cell" battery), and if the backup battery 285 is removed, the volatile memory of the RTC device 280 may be erased, thereby resetting the RTC device 280.
Because the primary way to issue repeated resets may be to power cycle the secure enclave 140 (and possibly the BMC and/or the computer platform), and because the logic of the secure enclave 140 (such as the reset controller 135) may be volatile, power cycling might otherwise be a way to circumvent the reset hold regulation of the reset controller 135. According to an example embodiment, to prevent such a bypass, a time reference that is non-volatile across system power cycles, provided by the RTC device 280, may be used. Accordingly, if the backup battery 285 of the RTC device 280 is disconnected, a "battery loss" indication informs the reset controller 135 to hold the first reset for a predetermined period of time, which may be the maximum hold time interval imposed by the reset controller 135. For example, according to some embodiments, the reset controller 135 may use an alarm timer of the RTC device 280 such that when the timer reaches a time value that the reset controller 135 wrote to a register of the RTC device 280, the RTC device 280 generates an indication (e.g., asserts an interrupt signal). The reset controller 135 may use the indication provided by the RTC device 280 to trigger the release from reset (i.e., to set the duration of the reset hold time). If the RTC device 280 were reset, however, the reset hold time measurement would effectively be reset as well, preventing longer reset hold times from being applied, which could benefit an attacker.
According to an example embodiment, removal of the backup battery 285 may be detected, and in response, the state of the reset indicator 283 may be set to a value indicating that the RTC device 280 has been reset. More specifically, according to some embodiments, the backup battery 285 is what renders the RTC device 280 non-volatile, and a battery removal condition manifests as the loss of a volatile "time is OK and valid" indicator. According to some embodiments, the RTC device 280 may detect its own reset and set the reset indicator 283 to indicate the reset. According to further embodiments, the tamper detection circuit 234 may control the setting of the reset indicator 283. It should be noted that although FIG. 2 depicts the RTC device 280 outside of the secure enclave 140, according to further embodiments the RTC device 280 may be part of the secure enclave 140. According to some embodiments, the RTC device 280 may be part of the BMC 129 or one of the set of timers 254 of the secure enclave 140.
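A sketch of this battery-loss policy follows, assuming a hypothetical accessor for the RTC's validity flag and an illustrative maximum hold interval; select_reset_hold_ms is the assumed tamper-history policy from the earlier sketch.

    #include <stdbool.h>
    #include <stdint.h>

    #define HOLD_MS_MAX  (15u * 60u * 1000u)    /* illustrative maximum hold */

    extern bool rtc_time_valid(void);           /* cleared when battery power is lost */
    extern uint64_t select_reset_hold_ms(void); /* policy based on tamper history */

    /* If the RTC has been reset (e.g., battery removed), the prior timing
     * state is unknown, so the first reset is held for the maximum interval. */
    uint64_t effective_reset_hold_ms(void)
    {
        return rtc_time_valid() ? select_reset_hold_ms() : HOLD_MS_MAX;
    }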
According to example embodiments, when the tamper detection circuit 234 detects tampering, the tamper detection circuit 234 may initiate and/or perform one or more actions in response to the detected tamper activity. For example, the tamper detection circuit 234 may communicate with the bus infrastructure 205 via a communication line 290 to initiate one or more responsive actions to counter, report, and/or mitigate the effects of the tamper activity. As another example, the tamper detection circuit 234 may communicate with the reset controller 135 via one or more communication lines 274 to cause the reset controller 135 to increase the reset hold time. As other examples, the tamper detection circuit 234 may cause the secure enclave 140 to remove sensitive information (e.g., erase certain secrets stored in the secure memory 144); assert a signal or message to alert an external component (e.g., the main processing core 154, the operating system 113 (FIG. 1), the remote management server 190 (FIG. 1), or another entity) to the tamper activity; reset the main processing core(s) 154; or perform one or more other responsive actions to counter, report, and/or mitigate the effects of the tamper activity.
According to some embodiments, the reset controller 135 may be coupled to a reset control circuit 282 that generates one or more reset signals (to initiate corresponding resets) for one or more corresponding circuits of the computer platform in response to an input reset signal received by the reset controller 135. As depicted in FIG. 2, according to an example embodiment, the reset controller 135 includes an input 207 that receives an input reset signal, which may convey a reset request 201. For example, a particular reset request 201 may be generated by asserting (e.g., driving to a logic zero level) the input reset signal. According to an example embodiment, in response to assertion of the input reset signal, the reset controller 135 may cause the reset control circuit 282 to generate corresponding reset signals on reset lines 292, 294, and 203 for the BMC, the secure enclave, and the SRoT engine 143. In the absence of the reset controller 135 (i.e., if the input reset signal and the output reset signal were the same), de-assertion of the input reset signal (e.g., allowing the input reset signal to return to a logic one level) would release the SRoT engine 143 from reset. According to an example embodiment, the SRoT engine 143 may provide a signal indicating successful verification of the firmware (e.g., the firmware 170 of FIG. 1) to allow the secure enclave 140, and potentially other components, to exit reset.
According to some embodiments, if no tamper activity has been detected, the reset controller 135 releases the reset (e.g., the reset of the SRoT engine 143, the BMC, and/or the computer platform), for example by de-asserting the output reset signal when the input reset signal is de-asserted or within a relatively short time thereafter. However, according to an example embodiment, if tamper activity has been detected, the reset controller 135 imposes a relatively long delay before releasing the SRoT engine 143 from reset (i.e., the reset controller 135 imposes a long reset hold time).
Among its other features, according to some embodiments, as depicted in FIG. 2, the secure enclave 140 may include a cryptographic processing engine 270 that encrypts data written to the secure memory 144 and decrypts data read from the secure memory 144. Depending on the particular implementation, the encryption and decryption may use an Advanced Encryption Standard XOR-encrypt-XOR (XEX)-based tweaked-codebook mode with ciphertext stealing (or "AES-XTS") block cipher, or another block cipher. According to further embodiments, the encryption and/or decryption may be performed by the secure processing core 142.
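For illustration, a transparent secure-memory encryption interface of this kind might be sketched as follows. The prototypes are hypothetical rather than a real crypto library's API, and the use of the memory address as the XTS tweak is one common design choice, not a detail stated by this disclosure.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical tweakable block cipher interface; illustrative only. */
    typedef struct { uint8_t key1[32], key2[32]; } xts_key;

    extern void aes_xts_encrypt(const xts_key *k, uint64_t tweak,
                                const uint8_t *in, uint8_t *out, size_t len);
    extern void aes_xts_decrypt(const xts_key *k, uint64_t tweak,
                                const uint8_t *in, uint8_t *out, size_t len);

    /* Transparent secure-memory access: the memory address serves as the
     * tweak, so identical plaintext stored at different addresses yields
     * different ciphertext. */
    void secure_mem_store(const xts_key *k, uint64_t addr,
                          const uint8_t *plain, uint8_t *enc_dest, size_t len)
    {
        aes_xts_encrypt(k, addr, plain, enc_dest, len);
    }

    void secure_mem_load(const xts_key *k, uint64_t addr,
                         const uint8_t *enc_src, uint8_t *plain_dest, size_t len)
    {
        aes_xts_decrypt(k, addr, enc_src, plain_dest, len);
    }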
According to an example embodiment, the secure enclave 140 may include cryptographic accelerators 244 (e.g., symmetric and asymmetric cryptographic accelerators) that assist the secure processing core 142 with operations such as key generation, signature verification, encryption, and decryption. Further, the cryptographic accelerators 244 may include a true random number generator that provides a source of trusted entropy for cryptographic operations.
According to an example embodiment, the secure enclave 140 may include one-time programmable (OTP) fuses 258 that store data representing truly immutable attributes. For example, according to some embodiments, the fuses 258 may store data representing a master secret from which other private keys and secrets may be derived. As another example, according to some embodiments, the fuses 258 may store a silicon root of trust Secure Hash Algorithm-2 (SHA-2) signature (e.g., the immutable fingerprint used by the SRoT engine 143). As another example, according to some embodiments, the fuses 258 may store a unique identifier (e.g., an identifier selected for a platform identity certificate). According to further example embodiments, the fuses 258 may store data representing a security-enablement fingerprint. Those of ordinary skill in the art will appreciate that the secure enclave 140 may have other components found in processor-based architectures, such as timers 254, an interrupt controller 250 (which receives interrupt trigger stimuli from the timers 254 and other sources), and so forth.
Furthermore, the secure enclave 140 may contain interfaces that assist in the initial development and debugging of the secure enclave 140 (in a pre-production mode of the secure enclave 140), but those interfaces may be completely disabled, or their functionality changed (for a production mode of the secure enclave 140), when certain fuses (e.g., certain OTP fuses 258) are blown. For example, these interfaces may include a Universal Asynchronous Receiver/Transmitter (UART) 262 that may be used to debug and develop the secure enclave 140 and then be fixed in a transmit-only configuration for the production mode of the secure enclave 140. As an example, according to some embodiments, the UART 262 may be configured by the OTP fuses 258 to provide one-way status and health information from the secure enclave 140 in the production mode. As another example, according to further embodiments, the OTP fuses 258 may disable the UART 262 for the production mode, thereby cutting off all communication with the UART 262 and preventing any such communication across the cryptographic boundary 204. As another example of an interface that may assist in the initial development and debugging of the secure enclave 140 but may be modified or disabled for the production mode, the secure enclave 140 may include a Joint Test Action Group (JTAG) interface (not shown) for the secure processor, and the JTAG interface may be disabled for the production mode of the secure enclave 140.
Fig. 3 depicts a subsystem 300 of the secure enclave 140 that governs the response of the secure enclave 140 to reset requests 201, according to an example embodiment. Referring to FIG. 3, according to an example embodiment, the subsystem 300 includes the reset governor 135 and the RTC device 280. According to an example embodiment, the reset governor 135 receives reset requests 201 from one or more reset request sources 350. A given reset request source 350 may be external or internal to the secure processor 130 (fig. 1). According to some embodiments, a reset request source 350 may correspond to a power cycle of the computer platform.
As an example, the reset request source 350 may be a power monitoring circuit. As another example, according to some embodiments, the reset request source 350 may be circuitry that generates a reset request 201 in response to the secure processing core 142 (fig. 2) writing a reset bit of a control register. As another example, according to some embodiments, the reset request source 350 may be a watchdog timer. Moreover, although a reset request source 350 may be legitimate circuitry of the secure processor 130 (fig. 1), the BMC 129 (fig. 1), or the computer platform 100 (fig. 1), the source may potentially be manipulated by a malicious entity to generate reset requests 201 as part of a security attack.
As depicted in fig. 3, the reset governor 135 includes a reset release delay circuit 310 and a controller 304, according to some embodiments. According to an example embodiment, the reset release delay circuit 310 passes the assertion of the input reset signal (received at input 207) through to the output reset signal (provided at output 282). In other words, according to an example embodiment, the reset release delay circuit 310 applies little to no delay in asserting the output reset signal after the input reset signal is asserted. According to an example embodiment, an edge (e.g., a positive edge) of the input reset signal associated with the assertion triggers the reset governor 135 to measure the reset hold time, i.e., the time that elapses before the reset governor 135 de-asserts the output reset signal to release the reset.
More specifically, according to some embodiments, in response to the assertion of the input reset signal, the controller 304 writes data representing a time value to an alarm timer register 360 of the RTC device 280. The time value represents a future RTC time that is offset from the current RTC time by the reset hold time. In this way, the RTC device 280 measures the reset hold time and generates an indication (e.g., asserts an interrupt signal) that the reset hold time has elapsed. According to an example embodiment, the reset release delay circuit 310 waits for the RTC device 280 to generate the indication that the reset hold time has elapsed. In response to the RTC device 280 generating the indication, the reset release delay circuit 310 de-asserts the output reset signal on the output 282 to release the reset.
According to an example embodiment, the controller 304 may adjust the reset hold time (and accordingly write the appropriate value to the alarm timer register 360) based on whether tampering activity has been detected. For example, when no tampering activity has been detected, the controller 304 may write data to the alarm timer register 360 of the RTC device 280 representing a time offset corresponding to a minimum reset hold time. When tampering activity has been detected, the controller 304 may write data to the alarm timer register 360 of the RTC device 280 representing a predefined, longer reset hold time offset from the current time.
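The alarm-based hold-time measurement may be sketched as follows. This is a minimal C sketch, assuming a hypothetical RTC interface standing in for the RTC device 280 and the alarm timer register 360; the hold-time constants are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative constants; actual hold times are implementation-specific. */
    #define MIN_RESET_HOLD_S     1u    /* normal operation                 */
    #define TAMPER_RESET_HOLD_S  300u  /* after tampering activity is seen */

    /* Hypothetical RTC device interface, standing in for RTC device 280. */
    extern uint64_t rtc_read_seconds(void);
    extern void rtc_write_alarm_seconds(uint64_t t);  /* alarm timer register 360 */

    /* On assertion of the input reset signal, the controller programs the RTC
     * alarm to fire when the reset hold time has elapsed; the reset release
     * delay circuit de-asserts the output reset only on that alarm. */
    void program_reset_hold(bool tamper_detected)
    {
        uint64_t hold = tamper_detected ? TAMPER_RESET_HOLD_S : MIN_RESET_HOLD_S;
        rtc_write_alarm_seconds(rtc_read_seconds() + hold);
    }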
According to some embodiments, the controller 304 may adjust the reset hold time based on the type of tampering activity detected. For example, the controller 304 may apply a longer reset hold time in response to an environmental condition-induced security attack than it applies for other types of tampering activity. Further, according to some embodiments, the controller 304 may adjust the reset hold time upward for certain type(s) of tampering (e.g., environmental condition-induced security attacks) but not for other type(s) of tampering (e.g., detection of a lid opening).
According to some embodiments, the controller 304 may apply more than two reset hold times. Further, the controller 304 may adjust the reset hold time based on a tamper activity detection history. For example, the controller 304 may apply a first reset hold time in response to an initial detection of tampering activity, and the controller 304 may apply a second reset hold time, greater than the first reset hold time, in response to a subsequent detection of tampering activity.
According to some embodiments, the controller 304 may reset the tamper activity detection history maintained by the reset governor 135. For example, the controller 304 may store the tamper activity detection history of the reset governor 135 in a non-volatile memory, adjust the reset hold time based on the history, and clear the tamper activity detection history after a predetermined time has elapsed without any tampering activity (associated with the reset governor 135) being detected. According to further embodiments, the controller 304 may clear the tamper activity detection history regardless of how much time has elapsed since tampering activity was last detected.
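One way to realize a history-based adjustment is an escalation ladder of hold times. The following C sketch is illustrative only: the ladder values are assumptions, and a real implementation would persist the detection count in non-volatile memory as described above.

    #include <stdint.h>

    /* Illustrative escalation ladder; real values are implementation-specific. */
    static const uint32_t hold_ladder_s[] = { 1u, 60u, 600u, 3600u };
    #define LADDER_LEN (sizeof(hold_ladder_s) / sizeof(hold_ladder_s[0]))

    /* Tamper detection count; persisted in non-volatile memory in practice. */
    static uint32_t tamper_count;

    /* Record one tampering detection and return the escalated hold time. */
    uint32_t on_tamper_detected(void)
    {
        if (tamper_count < LADDER_LEN - 1u)
            tamper_count++;
        return hold_ladder_s[tamper_count];
    }

    /* Clear the history, e.g., after a quiet period with no detections. */
    void clear_tamper_history(void)
    {
        tamper_count = 0;
    }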
According to an example embodiment, the controller 304 may be formed from one or more hardware processing cores that execute machine-executable instructions to perform the functions of the reset governor 135, as described herein. According to further embodiments, all or part of the functions of the reset governor 135 may be performed by dedicated hardware (e.g., logic gates) without executing machine-executable instructions. In this manner, depending on the particular implementation, the hardware may be an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), or the like.
According to some embodiments, the reset release delay circuit 310 may be logic, such as combinational logic gates and flip-flops. According to further embodiments, the operation of the reset release delay circuit 310 may be independent of the RTC device 280. For example, according to some embodiments, the reset release delay circuit 310 may measure the reset hold time using a chain of serially-coupled delay elements, and the controller 304 may control the number of delay elements in the chain to adjust the reset hold time. As another example, according to further embodiments, the reset release delay circuit 310 may include a timer circuit or another delay element to measure the reset hold time.
FIG. 4 depicts an example process 400 that may be performed by the secure processor 130 (fig. 1) to control the reset response of the secure processor 130, according to an example embodiment. The process 400 may involve actions taken by various components of the secure processor 130, such as the reset governor 135 (figs. 2 and 3), the tamper detection circuit 234 (fig. 2), the canary circuit(s) 134 (fig. 2), and the RTC device 280 (fig. 2), as described herein.
Referring to fig. 4, according to an example embodiment, the process 400 begins at power up and includes determining (decision block 408) whether new tampering activity is detected by the tamper detection circuitry, and if so, performing one or more responsive actions for the tampering activity (as depicted at 412). More specifically, according to an example embodiment, performing the responsive action(s) 412 may include increasing the reset hold time, per block 416, and recording the tamper activity detection, per block 420. Further, as also depicted in fig. 4, according to some embodiments, performing the responsive action(s) 412 may include initiating (block 424) an alarm notification (e.g., notifying the management processing core 154 of the BMC 129, or notifying the remote management server 190 (fig. 1), of the tamper activity detection).
The process 400 further includes determining, per decision block 430, whether a temporal pattern of recent reset indications indicates tampering activity. If so, according to an example embodiment, control passes to block 412 for performing the responsive action(s). Otherwise, if no new tampering activity is detected, the process 400 includes determining (decision block 438) whether one or more predefined criteria for restoring an initial reset hold time (e.g., a predefined minimum reset hold time) have been met. As an example, according to some embodiments, after a predetermined period of time has elapsed, the process 400 may include the reset governor restoring (block 439) the initial reset hold time.
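The control flow of process 400 can be summarized in a short C sketch. The hook functions below are hypothetical stand-ins for the blocks of fig. 4 and are not part of the disclosed design.

    #include <stdbool.h>

    /* Hypothetical hooks into the secure processor, for illustration only. */
    extern bool tamper_circuit_new_event(void);         /* decision block 408 */
    extern bool reset_pattern_exceeds_rate(void);       /* decision block 430 */
    extern bool quiet_period_elapsed(void);             /* decision block 438 */
    extern void increase_reset_hold_time(void);         /* block 416 */
    extern void record_tamper_detection(void);          /* block 420 */
    extern void send_alarm_notification(void);          /* block 424 */
    extern void restore_initial_reset_hold_time(void);  /* block 439 */

    /* One pass of a loop approximating process 400 of FIG. 4. */
    void process_400_step(void)
    {
        if (tamper_circuit_new_event() || reset_pattern_exceeds_rate()) {
            /* Responsive actions 412. */
            increase_reset_hold_time();
            record_tamper_detection();
            send_alarm_notification();
        } else if (quiet_period_elapsed()) {
            restore_initial_reset_hold_time();
        }
    }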
According to example embodiments, one or more canary circuits may be provided in proximity to respective components of a secure processor to be protected. For example, according to some embodiments, a canary circuit may be disposed near an SRoT engine, and another canary circuit may be disposed near a secure processing core of a secure processor.
According to further embodiments, sub-components of the component to be protected may be spatially mixed with components of the associated canary circuit. For example, according to some embodiments, components of the canary circuit may be spatially mixed with components of the SRoT engine. Referring to fig. 5, as a more specific example, according to some embodiments, the SRoT engine 143 may include SRoT logic elements 508. As used herein, a "logic element" refers to a circuit formed from a set of logic gates. As depicted in fig. 5, a logic element 508 may include one or more logic gates 520 (referred to herein as "SRoT logic gates 520"). As an example, an SRoT logic gate 520 may be a combinational logic gate, such as an AND gate, an OR gate, a NAND gate, a NOT gate, a NOR gate, an XOR gate, or, in general, a device that applies a Boolean algebraic expression to one or more Boolean inputs to provide a Boolean output in accordance with the Boolean algebraic expression. As an example, an SRoT logic element 508 may be a flip-flop, a register, a counter, a timer, a delay circuit, a comparison circuit, a circuit that implements a state of a state machine, or, in general, a set of logic gates that performs a function of the SRoT engine 143.
The SRoT engine 143 may further include canary logic elements 512 of a canary circuit. Each canary logic element 512 may include one or more logic gates 516 (referred to herein as "canary logic gates 516"), which may be combinational logic gates. As examples, a canary logic element 512 may be a flip-flop, a register, a counter, a timer, a delay circuit, a comparison circuit, a circuit that implements a state of a state machine, a circuit that performs a cryptographic cipher (e.g., an AES cipher block), or, in general, a set of logic gates that performs a function of the canary circuit.
As depicted at reference numeral 514 of fig. 5, the canary logic gates 516 may be spatially mixed with the SRoT logic gates 520, according to example embodiments. In this context, "spatial mixing" (or "spatial commingling") of the logic gates 516 and 520 refers to arranging the logic gates 516 and 520 in a region of the semiconductor die such that, in the region, the logic gates 516 and 520 are mixed or intermixed along a first path parallel to a first dimension axis, and are also mixed or intermixed along a second path parallel to a second dimension axis orthogonal to the first dimension axis. Because the canary logic gates 516 are spatially mixed with the SRoT logic gates 520, the canary logic gates 516 experience all, or substantially all, of the same environmental conditions as the SRoT logic gates 520. Thus, the canary circuit formed from the canary logic gates 516 can accurately sense the environmental conditions of the SRoT engine 143 and quickly provide an indication of an environmental condition-induced security attack affecting the SRoT engine 143.
Fig. 6 depicts an example semiconductor die area 600 according to an example embodiment. According to some embodiments, semiconductor die region 600 corresponds to a region of semiconductor die 157 (fig. 1) in which SRoT engine 143 is partially or fully fabricated. Note that SRoT engine 143 may be further fabricated in one or more other areas of semiconductor die 157.
Referring to fig. 6, a semiconductor die region 600 extends laterally across the semiconductor die along X and Y dimensions corresponding to X and Y axes 602 and 601, respectively. The semiconductor die has a thickness Z dimension corresponding to Z-axis 603. The Z-axis 603 extends in a direction aligned with the surface normal of the wafer from which the die is created.
For the particular example embodiment depicted in fig. 6, the spatial mixing is along the X-dimension and the Y-dimension. More specifically, in the region 600, SRoT logic gates 520 (e.g., SRoT logic gates 520-1, 520-2, 520-3, and 520-4) and canary logic gates 516 (e.g., canary logic gates 516-1 and 516-3) are mixed along a first path 650 that is parallel to the Y-axis 601. Also in the region 600, SRoT logic gates 520 (e.g., SRoT logic gates 520-3, 520-5, and 520-6) and canary logic gates 516 (e.g., canary logic gates 516-1, 516-2, 516-4, and 516-5) are mixed along a second path 654 that is parallel to the X-axis 602 (which is perpendicular to the Y-axis 601).
For this example embodiment, canary logic gates 516 and SRoT logic gates 520 may be spatially mixed by using layout design tools for placing transistors, metal interconnects, and other features of the semiconductor die. For example, the user input of the layout design tool may specify the boundaries of a first X-Y window of canary logic gate 516, and further specify the boundaries of a second X-Y window of SRoT logic gate 520 (which at least partially overlaps the first X-Y window). For example, a first X-Y window may be assigned to a cell corresponding to a canary circuit and a second X-Y window may be assigned to a cell corresponding to an SRoT engine. The layout design tool may place the transistors, interconnects, and other features of the canary circuit and SRoT engine in the semiconductor die corresponding to the first and second X-Y windows according to placement rules and generate a file containing data describing the layout.
As another example, according to some embodiments, user input may be provided to a layout design tool to specify specific locations of canary logic gates 516 and SRoT logic gates 520 by spatially mixing gates 516 and 520 along one or more dimensions of the semiconductor die. As another example, according to some embodiments, user input may be provided to a layout design tool to specify specific locations of transistors of canary logic gate 516 and SRoT logic gate 520 by spatially mixing the transistors (and thus spatially mixing gates 516 and 520) along one or more dimensions of the semiconductor die.
According to further embodiments, the spatial mix of canary logic gates 516 and SRoT gates 520 may extend along the Z-axis 603.
As another example, according to some embodiments, components of the canary circuit may be spatially mixed with components of the secure processing core. Referring to fig. 7, according to some embodiments, the secure processing core 142 may include processing core logic elements 724, and as depicted in fig. 7, a logic element 724 may include one or more logic gates 740 (referred to herein as "processing core logic gates 740"), which may be combinational logic gates. By way of example, a processing core logic element 724 may be a flip-flop, a register, a counter, a timer, a delay circuit, a comparison circuit, a circuit that implements a state of a state machine, or, in general, a set of logic gates that performs a function of the secure processing core 142. According to example embodiments, a given processing core logic element 724 may form all or part of an arithmetic logic unit, a control unit, a cache, a register, an execution unit, an instruction fetch unit, a memory management unit, or another component of the secure processing core 142.
As depicted in fig. 7, the secure processing core 142 may further include canary logic elements 512, and each canary logic element 512 may include one or more canary logic gates 516. As depicted at reference numeral 714 of fig. 7, the canary logic gates 516 may be spatially mixed with the processing core logic gates 740, according to example embodiments. Because the canary logic gates 516 are spatially mixed with the processing core logic gates 740, the canary logic gates 516 experience all, or substantially all, of the same environmental conditions as the processing core logic gates 740. Thus, the canary circuit formed from the canary logic gates 516 can accurately sense the environmental conditions of the secure processing core 142 and quickly provide an indication of an environmental condition-induced security attack affecting the secure processing core 142.
Fig. 8 depicts an example semiconductor die area 800 according to an example embodiment. According to some embodiments, semiconductor die region 800 corresponds to a region of semiconductor die 157 (fig. 1) in which security processing core 142 is partially or fully fabricated. It should be noted that the security processing core 142 may be further fabricated in one or more other areas of the semiconductor die 157.
Referring to fig. 8, a semiconductor die region 800 extends laterally across the semiconductor die along an X-dimension and a Y-dimension corresponding to an X-axis 802 and a Y-axis 801, respectively. The semiconductor die has a thickness Z dimension extending along a Z-axis 803. According to an example embodiment, spatial mixing of canary logic gate 516 and secure processing core logic gate 740 may occur along the X-dimension and the Y-dimension. According to some embodiments, the spatial mixing may extend along the Z-dimension. The spatial mix may be established using layout design tools as described herein in discussing spatial mixes of canary logic gates and SRoT logic gates.
Referring to fig. 9, the canary circuit 134 may include a chain 908 of cryptographic processing stages 912, according to some embodiments. According to an example embodiment, each stage 912 may correspond to an instance of a cryptographic cipher block or transform. For example, as depicted in fig. 9, according to some embodiments, a stage 912 may correspond to an Advanced Encryption Standard (AES) cipher or transform, and may correspond to a particular AES iteration. According to an example embodiment, a particular stage 912 may be implemented by a set of combinational logic, or logic cone. In general, according to an example embodiment, the logic cones are configured such that the canary circuit 134 just barely meets timing within a given clock cycle (such that, if an environmental condition-induced security attack occurs, the canary circuit 134 fails to meet timing within the clock cycle).
According to an example embodiment, the canary circuit 134 performs multiple AES transform iterations (via the respective stages 912) in a single cycle of a clock signal 921. More specifically, according to some embodiments, the clock signal 921 may be the same clock signal that clocks the operation of the secure processing core 142 (fig. 2) and/or the SRoT engine 143 (fig. 2). According to an example embodiment, the canary circuit 134 generates an output 916 for each cycle of the clock signal 921. In other words, according to an example embodiment, in a single clock cycle, the chain 908 receives an input, or input vector 904, and performs multiple AES transform iterations (via the respective stages 912) to produce the output 916.
According to an example embodiment, if the canary circuit 134 is operating properly and has not faulted (e.g., has not failed due to an environmental condition-induced fault), the chain 908 produces an output 916 that corresponds to an expected output 920. Otherwise, the output 916 does not match the expected output 920, and the mismatch causes the canary circuit 134 to provide an indication of detected tampering activity (e.g., an indication that an environmental condition-induced security attack has been detected).
As depicted in fig. 9, according to some embodiments, control logic 940 of the canary circuit 134 includes a comparator 924 that compares the output 916 of the chain 908 with the expected output 920. If these outputs do not match, according to an example embodiment, the comparator 924 provides a tamper indication at the output 237 of the canary circuit 134 (e.g., the comparator 924 asserts a tamper indication signal to alert the tamper detection circuit 234 (fig. 2) that tampering activity has been detected).
Due to the nature of the AES transform, a change in a single logical value within the transform may result in a multi-bit change in the output of the transform. Thus, using an AES transform (and particularly multiple cascaded AES transforms) amplifies the fault indicator (the difference between the output 916 and the expected output 920) and accordingly makes the canary circuit 134 highly sensitive to environmental condition-induced security attacks.
According to some implementations, due to the complex nature of the logic of the canary circuit 134 (e.g., the AES stages 912), propagation of the input vector 904 through all of the transforms consumes a significant portion (e.g., 80% to 90% or more) of a clock cycle. For example, according to an example embodiment, the number of AES stages 912 in the chain 908 may be selected such that the chain 908 barely meets the timing specification within a single clock cycle. For example, accounting for manufacturing variations, the number of AES stages 912 may be selected such that the slowest expected silicon provides the output 916 just at the end of a clock cycle. According to an example embodiment, the canary circuit 134 is purposely tuned to be among the first (if not the first) circuits of the secure processor 130 to fail (e.g., the output 916 does not match the expected output 920) in the event of an environmental condition-induced security attack.
According to an example embodiment, the control logic 940 is configured to sample the output 916 on each cycle of the clock signal 921. For example, according to an example embodiment, the control logic 940 may provide the input vector 904 to the chain 908 in response to a particular edge (e.g., a positive, or rising, edge) of the clock signal 921, and the comparator 924 may sample the output 916 on the next such edge of the clock signal 921. Thus, according to some embodiments, the transformation begins at the start of a clock cycle, and the result of the transformation is sampled at the end of the clock cycle.
To refresh the logic gates of the canary circuit 134 on each clock cycle, so that the logic gates transition (rather than remain static) during the clock cycle, the control logic 940 provides different input vectors 904 to the chain 908 on alternating clock cycles. In this manner, as depicted in fig. 9, according to some embodiments, the control logic 940 may select a particular candidate input vector 946 as the input vector 904 for a clock cycle A. The candidate input vector 946 has a corresponding expected output 948, which the control logic 940 compares with the output 916 at the end of clock cycle A. For the next consecutive clock cycle B, the control logic 940 may select another candidate input vector 946 as the input vector 904 for clock cycle B. The other candidate input vector 946 has a corresponding expected output 948, which the control logic 940 compares with the output 916 at the end of clock cycle B.
According to some embodiments, the control logic 940 may alternate between providing two different input vectors to the chain 908 on respective alternating clock cycles. According to further embodiments, the control logic 940 may alternate among more than two input vectors. According to still further embodiments, the same input vector may be provided to the chain 908 every clock cycle.
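The canary behavior described in connection with fig. 9 can be modeled behaviorally in C. This is an illustrative sketch only: the per-stage transform below is a stand-in mixing function rather than an actual AES round, and in the real circuit a mismatch arises from a timing failure in the combinational logic, not from the computation itself (which, absent a fault, always matches).

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_STAGES 8u  /* illustrative; chosen so the chain barely meets timing */

    /* One cryptographic stage (stage 912). A real canary circuit would
     * instantiate an AES round here; a simple avalanche-style mixer stands
     * in so that this behavioral model stays self-contained. */
    static uint64_t stage_transform(uint64_t v)
    {
        v ^= v >> 33;
        v *= 0xff51afd7ed558ccdULL;
        v ^= v >> 29;
        return v;
    }

    /* Chain 908: propagate the input vector through all stages in "one cycle". */
    static uint64_t chain_output(uint64_t input_vector)
    {
        for (unsigned i = 0; i < NUM_STAGES; i++)
            input_vector = stage_transform(input_vector);
        return input_vector;
    }

    /* Two candidate input vectors 946 with precomputed expected outputs 948,
     * alternated on successive clock cycles to keep the gates toggling. */
    static const uint64_t candidate_vec[2] = { 0xA5A5A5A5A5A5A5A5ULL,
                                               0x5A5A5A5A5A5A5A5AULL };
    static uint64_t expected_out[2];
    static unsigned cycle_parity;

    void canary_init(void)  /* call once before the first tick */
    {
        expected_out[0] = chain_output(candidate_vec[0]);
        expected_out[1] = chain_output(candidate_vec[1]);
    }

    /* Called once per clock cycle; returns true if a tamper indication
     * (output 916 differing from expected output 920) should be asserted. */
    bool canary_clock_tick(void)
    {
        unsigned i = cycle_parity;
        cycle_parity ^= 1u;
        return chain_output(candidate_vec[i]) != expected_out[i];
    }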
According to a further embodiment, stage 912 may correspond to a cipher other than an AES cipher. For example, according to further embodiments, stage 912 may correspond to a secure hash algorithm-3 (SHA-3) cipher, and may correspond to a particular SHA-3 iteration.
Referring to fig. 10, according to an example embodiment, a process 1000 includes receiving (block 1004) a given reset indication to reset a semiconductor package. As an example, according to some embodiments, the semiconductor package may include the secure processor 130 (figs. 1 and 2), and the given reset indication may be a request to reset the secure processor 130. According to an example embodiment, the reset indication may be a state of an electrical signal (e.g., a state corresponding to a logic zero). The reset indication may be generated external to the semiconductor package, may be generated internal to the semiconductor package, may be generated by circuitry, or may be generated in response to execution of machine-readable instructions. According to an example embodiment, the semiconductor package may be associated with a management controller, such as the BMC 129 (fig. 1).
The given reset indication is one of a time series of recent reset indications received by the semiconductor package. The semiconductor package includes a hardware root of trust. The process 1000 includes detecting (block 1008) an activity associated with the semiconductor package that is consistent with tampering activity. The activity consistent with tampering activity may be a temporal pattern of reset requests that exceeds a specified rate threshold, and the tampering activity may correspond to an environmental condition-induced security attack. The process 1000 includes managing (block 1012) a response of the semiconductor package to the given reset indication in response to detecting the activity. According to an example embodiment, managing the response of the semiconductor package to the given reset indication includes applying a reset hold time before releasing the semiconductor package from reset.
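Detection of a suspicious temporal pattern can be sketched as a sliding-window rate check over recent reset indications. The window size and thresholds below are illustrative assumptions, not disclosed values.

    #include <stdbool.h>
    #include <stdint.h>

    #define WINDOW         8u   /* number of recent reset indications tracked */
    #define RATE_THRESHOLD 4u   /* illustrative: >4 resets within RATE_WINDOW_S */
    #define RATE_WINDOW_S  10u

    static uint64_t reset_times[WINDOW];  /* ring buffer of RTC timestamps */
    static unsigned head;

    /* Record a reset indication and report whether the temporal pattern of
     * recent indications exceeds the specified rate threshold (block 1008). */
    bool record_reset_and_check(uint64_t now_s)
    {
        reset_times[head] = now_s;
        head = (head + 1u) % WINDOW;

        unsigned recent = 0;
        for (unsigned i = 0; i < WINDOW; i++) {
            if (reset_times[i] != 0 && now_s - reset_times[i] <= RATE_WINDOW_S)
                recent++;
        }
        return recent > RATE_THRESHOLD;
    }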
Referring to fig. 11, according to an example embodiment, a baseboard management controller 1100 includes a management processor 1104 and a secure enclave 1108 that is separate from the management processor 1104. According to some embodiments, the baseboard management controller 1100 may be a BMC, such as the BMC 129. According to an example embodiment, the management processor 1104 may be a processing core 154 (fig. 1) that executes management stack firmware. The secure enclave 1108 has an associated cryptographic boundary and includes a secure processing core 1112; a root of trust engine 1116; and a reset governor 1120. The root of trust engine 1116 verifies machine-readable instructions to be executed by the secure processing core 1112. The root of trust engine 1116 includes a reset input 1117. According to an example embodiment, the secure enclave 1108 may have a hierarchical reset in which the root of trust engine 1116, responsive to being reset, controls when the secure processing core 1112 exits reset.
The reset governor 1120 receives a time sequence of reset indications, including receiving a current reset indication in the time sequence of reset indications and receiving at least one previous reset indication in the time sequence of reset indications. According to an example embodiment, a reset indication may be a state of an electrical signal (e.g., a state corresponding to a logic zero). The reset indication may be generated outside the secure enclave 1108, may be generated inside the secure enclave 1108, may be generated by circuitry, or may be generated in response to execution of machine-readable instructions. The reset governor 1120 communicates a reset signal to the reset input 1117 of the root of trust engine 1116 in response to the current reset indication to place the root of trust engine 1116 in reset. The reset governor 1120 controls a delay applied when releasing the reset, in response to detecting tampering with the secure enclave 1108. According to example embodiments, detecting tampering may include the reset governor 1120 detecting a temporal pattern of reset requests that exceeds a specified rate threshold, and the tampering may correspond to an environmental condition-induced security attack.
Referring to fig. 12, according to an example embodiment, a computer platform 1200 includes a main processing core 1204 and a secure processor 1202. According to some embodiments, the main CPU processor core 102 (fig. 1) and the secure enclave 140 (fig. 1) are examples of the main processing core 1204 and the secure processor 1202, respectively. The secure processor 1202 includes a secure processing core 1208; a root of trust engine 1216; and a reset governor 1212. The root of trust engine 1216 verifies a first portion of firmware instructions to be executed by the secure processing core 1208. The first firmware instruction portion is part of a chain of trust, and the chain of trust includes a second firmware instruction portion to be executed by the main processing core 1204. The reset governor 1212 receives a series of reset requests to reset the secure processor 1202 and responds to the series of reset requests. According to an example embodiment, a request may be a state of an electrical signal (e.g., a state corresponding to a logic zero). The request may be generated external to the secure processor 1202, may be generated internal to the secure processor 1202, may be generated by circuitry, or may be generated in response to execution of machine-readable instructions. According to an example embodiment, the secure processor 1202 may have a hierarchical reset in which the root of trust engine 1216, responsive to being reset, controls when the secure processing core 1208 exits reset.
The reset governor 1212 responds to the series of reset requests by providing a reset signal to the root of trust engine 1216 in response to each reset request. The reset governor 1212 throttles the response to the series of reset requests in response to detecting tampering with the secure processor 1202. According to an example embodiment, detecting tampering may include the reset governor 1212 detecting a temporal pattern of reset requests that exceeds a specified rate threshold, and the tampering may correspond to an environmental condition-induced security attack. According to an example embodiment, throttling the response to the series of reset requests may include applying a predefined reset hold time.
According to an example embodiment, detecting the activity includes detecting a failure of the semiconductor package based on an output of a canary circuit of the semiconductor package. Managing the response of the semiconductor package to the given reset indication includes adjusting the reset hold time. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, detecting the fault further includes providing an input vector to the canary circuit and processing, by the canary circuit, the input vector with logic corresponding to a cryptographic cipher to cause the canary circuit to provide the output. Detecting the fault further includes comparing the output of the canary circuit with an expected output. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, detecting the activity includes detecting a pattern in the time series of reset indications. Managing the response of the semiconductor package to the given reset indication includes applying a predefined reset hold time in response to the detection of the pattern. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to further embodiments, managing the response of the semiconductor package to the given reset indication includes limiting the rate at which the semiconductor package is reset, in response to detecting the activity. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, the rate at which resets of the semiconductor package occur is limited to a maximum rate. Managing the response of the semiconductor package to the given reset indication includes reducing the maximum rate in response to detecting the activity. A particular advantage is that security attacks caused by environmental conditions can be prevented.
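A rate limit with a reducible maximum may be sketched as a simple windowed counter. The window length and initial maximum below are illustrative assumptions only.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative windowed limiter on how often resets may occur. */
    static uint32_t max_resets_per_min = 6u;  /* current maximum rate */
    static uint32_t resets_this_min;
    static uint64_t window_start_s;

    /* Returns true if the reset may proceed now, false if it must be deferred. */
    bool reset_permitted(uint64_t now_s)
    {
        if (now_s - window_start_s >= 60u) {  /* start a new one-minute window */
            window_start_s = now_s;
            resets_this_min = 0;
        }
        if (resets_this_min >= max_resets_per_min)
            return false;
        resets_this_min++;
        return true;
    }

    /* On detecting activity consistent with tampering, reduce the maximum rate. */
    void on_tamper_reduce_rate(void)
    {
        if (max_resets_per_min > 1u)
            max_resets_per_min /= 2u;
    }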
According to an example embodiment, the process further includes managing the response of the semiconductor package to the time series of reset indications in response to a clock signal provided by a real time clock (RTC) device. Detecting the activity includes detecting a reset of the RTC device. Managing the response of the semiconductor package to the given reset indication includes adjusting the reset hold time in response to detecting the reset of the RTC device. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, managing the response of the semiconductor package to the given reset indication includes measuring, in response to a timing indication provided by a real time clock (RTC) device, the time for which the semiconductor package is held in reset in response to detecting the activity. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, the process includes reporting the detection of the activity. A particular advantage is that security attacks caused by environmental conditions can be prevented.
According to an example embodiment, the semiconductor package includes a secure enclave of a baseboard management controller, and the secure enclave is within a cryptographic boundary. A particular advantage is that security attacks caused by environmental conditions can be prevented.
While the present disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.

Claims (20)

1. A method, comprising:
receiving a given reset indication for resetting a semiconductor package, wherein the given reset indication is one of a time series of reset indications received by the semiconductor package, and the semiconductor package comprises a hardware root of trust;
detecting an activity associated with the semiconductor package that corresponds to a tampering activity; and
governing a response of the semiconductor package to the given reset indication in response to detecting the activity.
2. The method of claim 1, wherein:
detecting the activity includes detecting a failure of the semiconductor package based on an output of a canary circuit of the semiconductor package; and
managing the response of the semiconductor package to the given reset indication includes adjusting a reset hold time.
3. The method of claim 2, wherein detecting the fault further comprises:
providing an input vector to the canary circuit;
processing, by the canary circuit, the input vector with logic corresponding to a cryptographic cipher to cause the canary circuit to provide the output; and
comparing the output of the canary circuit with an expected output.
4. The method of claim 1, wherein:
detecting the activity includes detecting a pattern in the time series of reset indications; and
managing the response of the semiconductor package to the given reset indication includes applying a predefined reset hold time in response to detection of the pattern.
5. The method of claim 1, wherein managing the response of the semiconductor package to the given reset indication comprises limiting a rate at which the semiconductor package resets in response to detecting the activity.
6. The method of claim 1, further comprising limiting a rate at which a reset of the semiconductor package occurs to a maximum rate,
wherein managing the response of the semiconductor package to the given reset indication includes reducing the maximum rate in response to detecting the activity.
7. The method of claim 1, further comprising managing the response of the semiconductor package to the time series of reset indications in response to a clock signal provided by a real time clock (RTC) device,
wherein:
detecting the activity includes detecting a reset of the RTC device; and
managing the response of the semiconductor package to the given reset indication includes adjusting a reset hold time in response to detecting the reset of the RTC device.
8. The method of claim 1, wherein managing the response of the semiconductor package to the given reset indication comprises measuring, in response to a timing indication provided by a real time clock (RTC) device, a time for which a reset of the semiconductor package is held in response to detecting the activity.
9. The method of claim 1, further comprising reporting the detection of the activity.
10. The method of claim 1, wherein the semiconductor package comprises a secure enclave of a baseboard management controller, and the secure enclave is within a cryptographic boundary.
11. A baseboard management controller comprising:
a management processor; and
a secure enclave separate from the management processor, wherein the secure enclave has an associated cryptographic boundary and comprises:
a secure processing core;
a root of trust engine to verify machine-readable instructions to be executed by the secure processing core, wherein the root of trust engine includes a reset input; and
a reset governor to:
receiving a time sequence of reset indications, including receiving a current reset indication in the time sequence of reset indications and receiving at least one previous reset indication in the time sequence of reset indications;
transmitting a reset signal to the reset input to place the root of trust engine in reset in response to the current reset indication; and
controlling a delay applied when releasing the reset in response to detecting tampering with the secure enclave.
12. The baseboard management controller of claim 11, wherein the secure enclave further comprises a canary circuit, and the canary circuit comprises:
a chain of stages corresponding to a cryptographic transform, the chain to process an input vector to provide an output value; and
a comparator to compare the output value with an expected value and to generate a signal indicating the detected tampering in response to the comparison.
13. The baseboard management controller of claim 12, wherein the output value differs from the expected value in response to an environmental condition-induced instability of the semiconductor package attributable to at least one of a clock frequency of the secure enclave, a die temperature of the secure enclave, or a supply voltage of the secure enclave.
14. The baseboard management controller of claim 11, wherein the reset indication comprises a state of a reset signal, and the state of the reset signal is manipulated inside or outside the secure enclave.
15. The baseboard management controller of claim 13, further comprising a clock source to provide an indication of measured time,
wherein the reset governor is further to use the indication of measured time to control the delay applied when releasing the reset in response to the detection of tampering.
16. A computer platform, comprising:
a main processing core; and
a secure processor, the secure processor comprising:
a secure processing core;
a root of trust engine, wherein the root of trust engine is to verify a first firmware instruction portion to be executed by the secure processing core, the first firmware instruction portion being part of a chain of trust, and the chain of trust including a second firmware instruction portion to be executed by the main processing core; and
a reset governor to:
receiving a series of reset requests for resetting the secure processor;
responding to the series of reset requests, wherein responding to the series of reset requests includes providing a reset signal to the root of trust engine in response to each reset request in the series of reset requests; and
throttling the response to the series of reset requests in response to detecting tampering with the secure processor.
17. The computer platform of claim 16, further comprising a canary circuit to provide an indication of the detected tampering in response to a failure of the canary circuit, wherein the reset governor is further to increase a reset hold time associated with the throttling in response to the indication.
18. The computer platform of claim 16, wherein the reset governor is further to detect a current rate associated with the series of reset requests and to detect the tampering in response to the current rate exceeding a predetermined threshold.
19. The computer platform of claim 16, further comprising a tamper detection circuit to determine whether a clock source associated with a backup power source has been reset and to detect the tampering in response to the determination.
20. The computer platform of claim 16, further comprising a baseboard management controller, wherein the baseboard management controller includes the security processor.
CN202310825398.1A 2022-09-30 2023-07-06 Managing responses to resets in response to tamper activity detection Pending CN117807644A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/412,040 2022-09-30
US18/166,717 2023-02-09
US18/166,717 US20240111909A1 (en) 2022-09-30 2023-02-09 Governing responses to resets responsive to tampering activity detection

Publications (1)

Publication Number Publication Date
CN117807644A (en)

Family

ID=90430772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310825398.1A Pending CN117807644A (en) 2022-09-30 2023-07-06 Managing responses to resets in response to tamper activity detection

Country Status (1)

Country Link
CN (1) CN117807644A (en)

Similar Documents

Publication Publication Date Title
US11843705B2 (en) Dynamic certificate management as part of a distributed authentication system
US11809544B2 (en) Remote attestation for multi-core processor
US7900252B2 (en) Method and apparatus for managing shared passwords on a multi-user computer
US7322042B2 (en) Secure and backward-compatible processor and secure software execution thereon
US11354417B2 (en) Enhanced secure boot
EP3646224B1 (en) Secure key storage for multi-core processor
CN113568799A (en) Simulation of physical security devices
EP3757838B1 (en) Warm boot attack mitigations for non-volatile memory modules
US20230134324A1 (en) Managing storage of secrets in memories of baseboard management controllers
US20230246827A1 (en) Managing use of management controller secrets based on firmware ownership history
US20240111862A1 (en) Detecting and responding to environmental condition-induced security attacks on semiconductor packages
US20240111909A1 (en) Governing responses to resets responsive to tampering activity detection
CN117807644A (en) Managing responses to resets in response to tamper activity detection
CN117807639A (en) Detecting and responding to security attacks on semiconductor packages caused by environmental conditions
US11734457B2 (en) Technology for controlling access to processor debug features
Noubir et al. Towards malicious exploitation of energy management mechanisms
US20230342446A1 (en) Management controller-based verification of platform certificates
US20230078058A1 (en) Computing systems employing a secure boot processing system that disallows inbound access when performing immutable boot-up tasks for enhanced security, and related methods
DE102023110485A1 (en) DETECTING AND RESPOND TO ENVIRONMENTAL SECURITY ATTACKS ON SEMICONDUCTOR PACKAGING
DE102023110486A1 (en) RESPONSE TO RESETS RULES IN RESPONSE TO THE DETECTION OF TAMPERING ACTIVITIES

Legal Events

Date Code Title Description
PB01 Publication