WO2020197722A1 - Generating an identity for a computing device using a physical unclonable function - Google Patents

Generating an identity for a computing device using a physical unclonable function

Info

Publication number
WO2020197722A1
Authority
WO
WIPO (PCT)
Prior art keywords
key
secret
computing device
memory
puf
Prior art date
Application number
PCT/US2020/020906
Other languages
French (fr)
Inventor
Antonino Mondello
Alberto TROIA
Original Assignee
Micron Technology, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology, Inc. filed Critical Micron Technology, Inc.
Priority to CN202080024834.XA priority Critical patent/CN113632417A/en
Priority to JP2021557290A priority patent/JP2022527757A/en
Priority to KR1020217034176A priority patent/KR20210131444A/en
Priority to EP20778971.0A priority patent/EP3949257A4/en
Publication of WO2020197722A1 publication Critical patent/WO2020197722A1/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
                    • H04L9/06 … the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
                        • H04L9/0643 Hash functions, e.g. MD5, SHA, HMAC or f9 MAC
                        • H04L9/065 Encryption by serially and continuously modifying data stream elements, e.g. stream cipher systems, RC4, SEAL or A5/3
                            • H04L9/0656 Pseudorandom key sequence combined element-for-element with data sequence, e.g. one-time-pad [OTP] or Vernam's cipher
                                • H04L9/0662 … with particular pseudorandom sequence generator
                    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
                        • H04L9/0816 Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
                            • H04L9/0819 Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s)
                                • H04L9/0825 … using asymmetric-key encryption or public key infrastructure [PKI], e.g. key signature or public key certificates
                        • H04L9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
                            • H04L9/0866 … involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics
                            • H04L9/0877 … using additional device, e.g. trusted platform module [TPM], smartcard, USB or hardware security module [HSM]
                        • H04L9/0894 Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
                            • H04L9/0897 … involving additional devices, e.g. trusted platform module [TPM], smartcard or USB
                    • H04L9/30 Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy
                    • H04L9/32 … including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
                        • H04L9/321 … involving a third party or a trusted authority
                            • H04L9/3213 … using tickets or tokens, e.g. Kerberos
                        • H04L9/3218 … using proof of knowledge, e.g. Fiat-Shamir, GQ, Schnorr, or non-interactive zero-knowledge proofs
                            • H04L9/3221 … interactive zero-knowledge proofs
                        • H04L9/3226 … using a predetermined code, e.g. password, passphrase or PIN
                            • H04L9/3228 One-time or temporary data, i.e. information which is sent for every authentication or authorization, e.g. one-time-password, one-time-token or one-time-key
                        • H04L9/3234 … involving additional secure or trusted devices, e.g. TPM, smartcard, USB or software token
                        • H04L9/3236 … using cryptographic hash functions
                            • H04L9/3242 … involving keyed hash functions, e.g. message authentication codes [MACs], CBC-MAC or HMAC
                        • H04L9/3247 … involving digital signatures
                        • H04L9/3263 … involving certificates, e.g. public key certificate [PKC] or attribute certificate [AC]; Public key infrastructure [PKI] arrangements
                            • H04L9/3268 … using certificate validation, registration, distribution or revocation, e.g. certificate revocation list [CRL]
                        • H04L9/3271 … using challenge-response
                            • H04L9/3278 … using physically unclonable functions [PUF]
                • H04L63/00 Network architectures or network communication protocols for network security
                    • H04L63/08 … for authentication of entities
                        • H04L63/0823 … using certificates
                        • H04L63/0876 … based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
                • H04L2209/00 Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
                    • H04L2209/80 Wireless
                        • H04L2209/805 Lightweight hardware, e.g. radio-frequency identification [RFID] or sensor
                    • H04L2209/84 Vehicles
            • H04W WIRELESS COMMUNICATION NETWORKS
                • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
                    • H04W12/06 Authentication
                        • H04W12/069 Authentication using certificates or pre-shared keys
                    • H04W12/08 Access security
                        • H04W12/086 Access security using security domains
                    • H04W12/60 Context-dependent security
                        • H04W12/69 Identity-dependent
                            • H04W12/71 Hardware identity
    • G PHYSICS
        • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
            • G09C CIPHERING OR DECIPHERING APPARATUS FOR CRYPTOGRAPHIC OR OTHER PURPOSES INVOLVING THE NEED FOR SECRECY
                • G09C1/00 Apparatus or methods whereby a given sequence of signs, e.g. an intelligible text, is transformed into an unintelligible sequence of signs by transposing the signs or groups of signs or by replacing them by others according to a predetermined system

Definitions

  • At least some embodiments disclosed herein relate to identity for computing devices in general, and more particularly, but not limited to, generating an identity for a computing device using a physical unclonable function.
  • A physical unclonable function (PUF) provides, for example, a digital value that can serve as a unique identity for a semiconductor device.
  • PUFs are based, for example, on physical variations that occur naturally during semiconductor manufacturing and that permit differentiation between otherwise identical semiconductor chips. PUFs are typically used in cryptography. A PUF can be, for example, a physical entity that is embodied in a physical structure. PUFs can be used as a unique device identifier, for secure key generation, and as a source of randomness.
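As a software illustration of this idea (a simulation only; real PUF bits come from physical device variation, e.g. SRAM power-up state, and are not chosen in code):

```python
import hashlib

# Simulated per-device physical variation: in a real SRAM PUF these
# bits would come from the chip's power-up state, not from software.
device_a_bits = bytes([0b10110010, 0b01011100] * 16)
device_b_bits = bytes([0b10110011, 0b01011100] * 16)  # differs by one bit

# Hashing the raw response gives a fixed-size identity per device.
id_a = hashlib.sha256(device_a_bits).hexdigest()
id_b = hashlib.sha256(device_b_bits).hexdigest()

assert id_a != id_b  # tiny physical differences yield distinct identities
```

Even a one-bit difference in the underlying physical response produces an entirely different identity value, which is what makes otherwise identical chips distinguishable.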
  • The Microsoft® Azure® IoT platform is a set of cloud services provided by Microsoft.
  • The Azure® IoT platform supports the Device Identity Composition Engine (DICE) and many different kinds of Hardware Security Modules (HSMs).
  • DICE is an upcoming standard at the Trusted Computing Group (TCG) for device identification and attestation, which enables manufacturers to use silicon gates to create device identification based in hardware.
  • HSMs are used to secure device identities and provide advanced functionality such as hardware-based device attestation and zero-touch provisioning.
  • DICE offers a scalable security framework that uses an HSM footprint to anchor trust for use in building security solutions like authentication, secure boot, and remote attestation.
  • DICE is useful for the current environment of constrained computing that characterizes IoT devices, and provides an alternative to more traditional security framework standards like the Trusted Computing Group's (TCG) Trusted Platform Module (TPM).
  • The Azure® IoT platform has HSM support for DICE in HSMs from some silicon vendors.
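The DICE-style derivation described above can be sketched as follows (an illustration, not the normative TCG algorithm): each boot layer derives its secret from the previous layer's secret and a measurement of the code about to run, so identity is anchored in a hardware Unique Device Secret (UDS) without storing long-lived keys in software. All values below are hypothetical.

```python
import hashlib
import hmac

def derive_layer_secret(previous_secret: bytes, next_layer_code: bytes) -> bytes:
    """Compound-identifier-style derivation:
    HMAC(previous secret, hash of the code about to run)."""
    measurement = hashlib.sha256(next_layer_code).digest()
    return hmac.new(previous_secret, measurement, hashlib.sha256).digest()

uds = b"\x01" * 32                      # Unique Device Secret (held in hardware)
layer0_code = b"bootloader image bytes"
layer1_code = b"application firmware bytes"

cdi = derive_layer_secret(uds, layer0_code)        # layer 0 identity
l1_secret = derive_layer_secret(cdi, layer1_code)  # layer 1 identity

# Any change to the measured code yields a different derived secret,
# so tampered firmware cannot reproduce the device's identity.
assert derive_layer_secret(uds, b"tampered image") != cdi
```

Because each layer's secret depends on the measurement of the next layer, compromised software can be patched and re-provisioned: new code simply derives a new identity.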
  • The Robust Internet-of-Things (RIoT) is an architecture for providing trust services to computing devices.
  • The trust services include device identity, attestation, and data integrity.
  • RIoT services can be provided at low cost on even very small devices.
  • RIoT provides a foundation for cryptographic operations and key management for many security scenarios. Authentication, integrity verification, and data protection require cryptographic keys to encrypt and decrypt, as well as mechanisms to hash and sign data. Most internet-connected devices also use cryptography to secure communication with other devices.
  • The cryptographic services provided by RIoT include device identity, data protection, and attestation.
  • Device identity: devices typically authenticate themselves by proving possession of a cryptographic key. If the key associated with a device is extracted and cloned, then the device can be impersonated.
  • Data protection: devices typically use cryptography to encrypt and integrity-protect locally stored data. If the cryptographic keys are accessible only to authorized code, then unauthorized software is not able to decrypt or modify the data.
  • Attestation: devices sometimes need to report the code they are running and their security configuration. For example, attestation is used to prove that a device is running up-to-date code.
  • If keys are managed in software alone, then bugs in software components can result in key compromise.
  • The primary way to restore trust following a key compromise is to install updated software and provision new keys for the device. This is time-consuming for server and mobile devices, and not possible when devices are physically inaccessible.
  • TPMs are widely available on computing platforms (e.g., using SoC-integrated and processor-mode-isolated firmware TPMs).
  • TPMs are often impractical, however. For example, a small IoT device is not able to support a TPM without a substantial increase in cost and power needs.
  • RIoT can be used to provide device security for small computing devices, but it can also be applied to any processor or computer system. If software components outside of the RIoT core are compromised, then RIoT provides for secure patching and re-provisioning. RIoT also uses a different approach to cryptographic key protection: the most-protected cryptographic keys used by the RIoT framework are only available briefly during boot.
  • FIG. 1 shows a host device that verifies the identity of a computing device, according to one embodiment.
  • FIG. 2 shows an example computing system having an identity component and a verification component, according to one embodiment.
  • FIG. 3 shows an example computing device of a vehicle, according to one embodiment.
  • FIG. 4 shows an example host device communicating with an example computing device of a vehicle, according to one embodiment.
  • FIG. 5A shows an application board that generates an identifier, certificate, and key for a host device, according to one embodiment.
  • FIG. 5B shows an example computing system booting in stages using layers, according to one embodiment.
  • FIG. 6 shows an example computing device generating an identifier, certificate, and key using asymmetric generators, according to one embodiment.
  • FIG. 7 shows a verification component that verifies the identity of a computing device using decryption operations, according to one embodiment.
  • FIG. 8 shows a block diagram of an example process to verify a certificate, according to one embodiment.
  • FIG. 9 shows a method to verify an identity of a computing device using an identifier, certificate, and a key, according to one embodiment.
  • FIG. 10 shows a system for generating a unique key from an output of a message authentication code (MAC) that receives an input from a physical unclonable function (PUF) device, according to one embodiment.
  • FIG. 11 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices selected by a selector module, according to one embodiment.
  • FIG. 12 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like a NONCE, time-stamp, etc.), according to one embodiment.
  • FIG. 13 shows a method to generate an output from a MAC that uses one or more input values provided from one or more PUFs, according to one embodiment.
  • FIG. 14 shows a system for generating a root key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like a NONCE, time-stamp, etc.), and that adds an additional MAC to generate a session key, according to one embodiment.
  • FIG. 15 shows a computing device for storing an obfuscated key in non-volatile memory, according to one embodiment.
  • FIG. 16 shows an example of an intermediate key generated during an obfuscation process, according to one embodiment.
  • FIG. 17 shows an example of another intermediate key generated during the obfuscation process of FIG. 16, according to one embodiment.
  • FIG. 18 shows a method for generating and storing an obfuscated key in a non-volatile memory, according to one embodiment.
  • FIG. 19 shows a computing device for generating an initial key based on key injection, obfuscating the initial key, and storing the obfuscated key in non-volatile memory, according to one embodiment.
  • FIG. 20 shows a computing device for generating an identity using a physical unclonable function (PUF), according to one embodiment.
  • FIG. 21 shows a system that sends an initial value provided by a monotonic counter of the system for use in determining whether tampering with the system has occurred, according to one embodiment.
  • FIG. 22 shows a method for generating an identity for a computing device using a physical unclonable function (PUF), according to one embodiment.
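The key-generation idea of FIGS. 10-14 can be sketched as follows (an illustrative reading only, with HMAC-SHA-256 standing in for the MAC and fixed bytes standing in for the hardware-provided PUF and counter values):

```python
import hashlib
import hmac

# Stand-ins for hardware values; a real device would read these from a
# PUF circuit and a monotonic-counter register.
puf_value = b"\xa7" * 32                     # hypothetical stable PUF response
monotonic_counter = (42).to_bytes(8, "big")  # freshness input

# Unique/root key: a MAC keyed by the PUF response over the freshness value.
root_key = hmac.new(puf_value, monotonic_counter, hashlib.sha256).digest()

# FIG. 14 adds an additional MAC stage to derive a session key from the
# root key, here keyed by the root key over a per-session nonce.
session_nonce = b"\x10" * 16
session_key = hmac.new(root_key, session_nonce, hashlib.sha256).digest()

# A different PUF response (i.e., a different chip) yields a different key.
other_root = hmac.new(b"\xa8" * 32, monotonic_counter, hashlib.sha256).digest()
assert other_root != root_key
```

The MAC acts as a one-way compression of the PUF output, so the key is device-unique but the raw PUF response is never exposed; the counter or nonce lets fresh keys be derived without changing the underlying hardware secret.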
  • a host device verifies the identity of a computing device by sending a message to the computing device.
  • the computing device uses the message to generate an identifier, a certificate, and a key, which are sent to the host device.
  • the host device uses the generated identifier, certificate, and key to verify the identity of the computing device.
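A minimal shape of such a message-based verification might look like the following (a sketch only: it uses a shared-secret HMAC for brevity, whereas the embodiments described here have the device generate an identifier, certificate, and key from the message):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret; in the described device, the proof
# material would be derived on-chip from a PUF rather than stored.
shared_secret = b"\x42" * 32

def device_respond(message: bytes) -> bytes:
    # The device proves possession of the secret without revealing it.
    return hmac.new(shared_secret, message, hashlib.sha256).digest()

def host_verify(message: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = os.urandom(16)  # host sends a fresh message to prevent replay
assert host_verify(challenge, device_respond(challenge))
```

Sending a fresh random message on each verification prevents a recorded response from being replayed by a device that does not actually possess the secret.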
  • the computing device can be a flash memory device.
  • flash memory is leveraged to add a strong level of security capability in a computing system (e.g., an application controller of an autonomous vehicle).
  • Flash memory is used in numerous computer systems.
  • serial NOR is used in a wide array of applications like medical devices, factory automation boards, automotive ECUs, smart meters, and internet gateways.
  • while there is a wide variety of chipset architectures (processors, controllers, or SoCs), operating systems, and supply chains used across these applications, flash memory is a common denominator building block in these systems.
  • a computing device integrates hardware-based roots of trust into a flash memory device, enabling strong cryptographic identity and health management for IoT devices.
  • By moving essential security primitives in-memory it becomes simpler to protect the integrity of code and data housed within the memory itself. This approach can significantly enhance system level security while minimizing the complexity and cost of implementations.
  • a new IoT device management capability leverages flash memory by enabling device onboarding and management by the Microsoft® Azure® IoT cloud using flash memory and associated software.
  • the solutions provide a cryptographic identity that becomes the basis for critical device provisioning services (e.g., the Azure IoT Hub Device Provisioning Service (DPS)).
  • this DPS along with the enabled memory can enable zero-touch provisioning of devices to the correct IoT hub as well as other services.
  • the Device Identity Composition Engine (DICE) is an upcoming standard from the Trusted Computing Group (TCG).
  • the enabled memory permits only trusted hardware to gain access to the Microsoft Azure IoT cloud.
  • the health and identity of an IoT device is verified in memory where critical code is typically stored. The unique identity of each IoT device can now offer end-to-end device integrity at a new level, starting at the boot process. This can enable additional functionality like hardware-based device attestation and provisioning as well as administrative remediation of the device if necessary.
  • a method includes: receiving, by a computing device (e.g., a serial NOR flash memory device), a message from a host device (e.g., a CPU, GPU, FPGA, or an application controller of a vehicle); generating, by the computing device, an identifier (e.g., a public identifier IDu public), a certificate (e.g., IDu certificate), and a key (e.g., Ku public), wherein the identifier is associated with an identity of the computing device, and the certificate is generated using the message; and sending, by the computing device, the identifier, the certificate, and the key to the host device, wherein the host device is configured to verify the identity of the computing device using the identifier, the certificate, and the key.
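The device-side flow of this method can be pictured with a minimal Python sketch. This is an illustration only: it substitutes HMAC-based toy primitives for the asymmetric DICE-RIoT keys, and the constants and names here (e.g., the layer-1 identity string) are assumptions, not values from the specification.

```python
import hashlib
import hmac

def generate_triple(device_secret: bytes, host_message: bytes):
    """Toy sketch: derive an identifier, certificate, and key from a
    device secret and a host message. A real DICE-RIoT device would use
    asymmetric key pairs; HMAC stands in here to keep the sketch
    self-contained."""
    # Derived secret (stand-in for the FDS): MAC of the device secret
    # over the identity of the next boot layer (assumed string).
    fds = hmac.new(device_secret, b"layer-1 code identity",
                   hashlib.sha256).digest()
    # Public identifier: a hash of the derived secret (safe to expose).
    identifier = hashlib.sha256(fds).hexdigest()
    # Key: a verification key derived from the FDS (toy: another hash).
    key = hashlib.sha256(fds + b"public-key").hexdigest()
    # Certificate: binds the host's message to the device's secret state.
    certificate = hmac.new(fds, host_message + key.encode(),
                           hashlib.sha256).hexdigest()
    return identifier, certificate, key

triple = generate_triple(b"unique-device-secret", b"host message")
```

The host can later recompute the same values from shared knowledge to check them, which is the substance of the verification step described below in the document.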
  • the computing device above integrates DICE-RIoT functionality, which is used to generate the identifier, certificate, and key described above and used by the host device to verify the identity of the computing device.
  • the computing device stores a device secret that acts as a primitive key on which the sequence of identification steps between layers of the DICE-RIoT protocol is based.
  • layers L0 and L1 of the DICE-RIoT functionality are implemented in the computing device using hardware and/or software.
  • layer L0 is implemented solely in hardware.
  • FIG. 1 shows a host device 151 that verifies the identity of a computing device 141, according to one embodiment.
  • Host device 151 sends a message to the computing device 141.
  • host device 151 includes a freshness mechanism (not shown) that generates a freshness for use in sending messages to the computing device 141 to avoid replay attacks.
  • each message sent to the computing device 141 includes a freshness generated by a monotonic counter.
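A hedged sketch of how counter-based freshness can defeat replay (toy Python; a real device would use a hardware monotonic counter and an agreed key-derivation scheme, and all names below are illustrative):

```python
import hashlib
import hmac
import itertools

class HostMessenger:
    """Toy host that tags each message with a freshness value from a
    monotonic counter, so a replayed message can be detected."""
    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._counter = itertools.count(1)  # stand-in monotonic counter

    def send(self, payload: bytes):
        freshness = next(self._counter)
        tag = hmac.new(self._key, payload + freshness.to_bytes(8, "big"),
                       hashlib.sha256).hexdigest()
        return payload, freshness, tag

class DeviceReceiver:
    """Rejects messages whose freshness is not strictly increasing."""
    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._last_seen = 0

    def accept(self, payload, freshness, tag) -> bool:
        expected = hmac.new(self._key, payload + freshness.to_bytes(8, "big"),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return False          # forged or corrupted message
        if freshness <= self._last_seen:
            return False          # replayed or stale message
        self._last_seen = freshness
        return True
```

Because the counter never repeats a value, an attacker who records a valid message cannot successfully resend it later.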
  • the message is an empty string, a conventional known string (e.g., an alphanumeric string known to the manufacturer or operator of host device 151), or can be another value (e.g., an identity value assigned to the computing device).
  • the message is a unique identity of the device (UID).
  • In response to receiving the message, computing device 141 generates an identifier, a certificate, and a key.
  • the identifier is associated with an identity of the computing device 141.
  • Computing device 141 includes one or more processors 143 that control the operation of identity component 147 and/or other functions of computing device 141.
  • the identifier, the certificate, and the key are generated by identity component 147 and are based on device secret 149.
  • device secret 149 is a unique device secret (UDS) stored in memory of computing device 141.
  • identity component 147 uses the UDS as a primitive key for implementation of the DICE-RIoT protocol.
  • the identifier, certificate, and key are outputs from layer L1 of the DICE-RIoT protocol (see, e.g., FIG. 6).
  • the identity of layer L1 corresponds to the identity of computing device 141 itself, the manufacturer of computing device 141, the manufacturer of a thing that includes computing device 141 as a component, and/or an application or other software stored in memory of computing device 141.
  • the application identity (e.g., an ID number) is for a mobile phone, a TV, an STB, etc., for which a unique combination of characters and numbers is used to identify the thing.
  • the identity of layer L1 is an ASCII string.
  • the identity can be a manufacturer name concatenated with a thing name (e.g., LG …).
  • the identity can be represented in hexadecimal form (e.g., 53 61 6D 73 75 6E 67 20 7C 20 54 56 5F 6D 6F 64 65 6C 5F 31 32 33 5F 79 65 61 72 5F 32 30 31 38).
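This hexadecimal form is simply the byte-wise ASCII encoding of the identity string; a short Python snippet makes the correspondence concrete:

```python
# Decode the hexadecimal identity example given above back to ASCII.
hex_identity = ("53 61 6D 73 75 6E 67 20 7C 20 54 56 5F 6D 6F 64 65 6C"
                " 5F 31 32 33 5F 79 65 61 72 5F 32 30 31 38")
ascii_identity = bytes.fromhex(hex_identity.replace(" ", "")).decode("ascii")
# ascii_identity == "Samsung | TV_model_123_year_2018"
```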
  • a manufacturer can use a UDS for a class or set of items that are being produced.
  • each item can have its own unique UDS.
  • the device secret 149 is a secret key stored by computing device 141 in memory 145.
  • Identity component 147 uses the secret key as an input to a message authentication code (MAC) to generate a derived secret.
  • the derived secret is a fused derived secret (FDS) in the DICE-RIoT protocol.
  • memory 145 includes read-only memory (ROM) that stores initial boot code for booting computing device 141.
  • the FDS is a key provided to the initial boot code by processor 143 during a booting operation.
  • the ROM corresponds to layer L0 of the DICE-RIoT protocol.
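The L0-level derivation described above can be sketched as follows. This is a DICE-style illustration in Python; the use of HMAC-SHA256 and the measurement of the layer-1 code are assumptions consistent with DICE-like designs, not an algorithm mandated by this document.

```python
import hashlib
import hmac

def fused_derived_secret(uds: bytes, layer1_code: bytes) -> bytes:
    """Sketch: the ROM (layer L0) measures the next boot layer's code and
    mixes that measurement with the unique device secret (UDS). Only the
    resulting FDS is handed to the boot code; the UDS never leaves L0."""
    measurement = hashlib.sha256(layer1_code).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()
```

Because the FDS depends on both the UDS and the measured code, any change to the layer-1 code yields a different derived secret, which is what anchors the chain of trust at boot.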
  • Host device 151 uses the identifier, certificate, and key as inputs to a verification component 153, which verifies the identity of the computing device 141.
  • verification component 153 performs at least one decryption operation using the identifier to provide a result.
  • the result is compared to the key to determine whether the identity of the computing device 141 is valid. If so, host device 151 performs further communications with computing device 141 using the key received from computing device 141. For example, once host device 151 verifies the “triple” (the identifier, certificate, and key), the key can be used to attest any other information exchanged between computing device 141 and host device 151.
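One way to picture the host-side check is with a toy symmetric scheme in Python: the host recomputes the expected certificate over the message, identifier, and key, and compares it to the received one. This is only a sketch; actual DICE-RIoT verification checks an asymmetric certificate chain, and every name below is an assumption.

```python
import hashlib
import hmac

def verify_triple(shared_secret, message, identifier, certificate, key) -> bool:
    """Toy verification: recompute the expected certificate over the
    message, identifier, and key, and compare in constant time."""
    expected = hmac.new(shared_secret, message + identifier + key,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, certificate)

def make_triple(shared_secret, message):
    """Device side of the same toy scheme, shown so the check is testable."""
    identifier = hashlib.sha256(shared_secret).hexdigest().encode()
    key = hashlib.sha256(identifier).hexdigest().encode()
    certificate = hmac.new(shared_secret, message + identifier + key,
                           hashlib.sha256).hexdigest()
    return identifier, certificate, key
```

A triple whose key (or identifier, or certificate) has been tampered with fails the comparison, which is why the verified key can then be trusted for attesting later exchanges.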
  • a digital identification is assigned to numerous “things” (e.g., as per the Internet of Things).
  • the thing is a physical object such as a vehicle or a physical item present inside the vehicle.
  • the thing is a person or animal.
  • each person or animal can be assigned a unique digital identifier.
  • computing device 141 implements the DICE-RIoT protocol using identity component 147 in order to associate unique signatures to a chain of trust corresponding to computing device 141.
  • Computing device 141 establishes layers L0 and L1.
  • the chain of trust is continued by host device 151, which establishes layers L2, ....
  • a unique identifier can be assigned to every object, person, and animal in any defined environment (e.g., a trust zone defined by geographic parameters).
  • computing device 141 is a component in the thing that is desired to be assigned an identity.
  • the thing can be an autonomous vehicle including computing device 141.
  • computing device 141 can be flash memory that is used by an application controller of the vehicle.
  • the manufacturer can inject a UDS into memory 145.
  • the UDS can be agreed to and shared with a customer that will perform additional manufacturing operations using computing device 141.
  • the UDS can be generated randomly by the original manufacturer and then communicated to the customer using a secure infrastructure (e.g., over a network such as the internet).
  • the customer can be a manufacturer of a vehicle that incorporates computing device 141.
  • the vehicle manufacturer desires to change the UDS so that it is unknown to the seller of computing device 141.
  • the customer can replace the UDS using an authenticated replace command that is provided by host device 151 to computing device 141.
  • the customer can inject customer immutable information into memory 145 of computing device 141.
  • the immutable information is used to generate a unique FDS, and is not solely used as a differentiator.
  • the customer immutable information is used to differentiate various objects that are manufactured by the customer.
  • customer immutable information can be a combination of letters and/or numbers to define primitive information (e.g., a combination of some or all of the following information: date, time, lot position, wafer position, x, y location in a wafer, etc.).
  • the immutable information also includes data from cryptographic feature configuration performed by a user (e.g., a customer who receives a device from a manufacturer). This configuration or setting can be done only by using authenticated commands (commands that require knowledge of a key to be executed). The user has knowledge of the key (e.g., based on being provided the key over a secure infrastructure from the manufacturer).
  • the immutable information represents a form of cryptographic identity of a computing device, which is different from the unique ID (UID) of the device.
  • the inclusion of the cryptographic configuration in the immutable set of information provides the user with a tool useful to self-customize the immutable information.
  • computing device 141 includes a freshness mechanism that generates a freshness.
  • the freshness can be provided with the identifier, certificate, and key when sent to host device 151.
  • the freshness can also be used with other communications with host device 151.
  • computing device 141 is a component on an application board. Another component (not shown) on the application board can verify the identity of computing device 141 using knowledge of device secret 149 (e.g., knowledge of an injected UDS). The component requests that computing device 141 generate an output using a message authentication code in order to prove possession of the UDS.
  • the message authentication code can be as follows: HMAC(UDS, “application board message …”).
  • the FDS can also be used as criteria to prove the possession of the device (e.g., the knowledge of the secret key(s)).
  • the message authentication code can be as follows: HMAC(FDS, “application board message …”).
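The possession proof in these two examples can be sketched as a simple challenge-response (Python illustration; the challenge text and key values are placeholders, not values from the document):

```python
import hashlib
import hmac

def prove_possession(secret: bytes, challenge: bytes) -> str:
    """The device answers a board-level challenge with
    HMAC(secret, challenge); only a holder of the UDS (or FDS)
    can produce the right tag."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def check_possession(secret: bytes, challenge: bytes, response: str) -> bool:
    """The verifier, which also knows the secret, recomputes the tag and
    compares in constant time."""
    return hmac.compare_digest(prove_possession(secret, challenge), response)
```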
  • FIG. 2 shows an example computing system having an identity component 107 and a verification component 109, according to one embodiment.
  • a host system 101 communicates over a bus 103 with a memory system 105.
  • processing device 111 of memory system 105 has read/write access to memory regions 111, 113, ..., 119 of non-volatile memory 121.
  • host system 101 also reads data from and writes data to volatile memory 123.
  • identity component 107 supports layers L0 and L1 of the DICE-RIoT protocol.
  • non-volatile memory 121 stores boot code.
  • Verification component 109 is used to verify an identity of memory system 105. Verification component 109 uses a triple including an identifier, certificate, and key generated by identity component 107 in response to receiving a host message from host system 101, for example as described above.
  • Identity component 107 is an example of identity component 147 of FIG. 1.
  • Verification component 109 is an example of verification component 153 of FIG. 1.
  • Memory system 105 includes key storage 157 and key generators 159.
  • key storage 157 can store root keys, session keys, a UDS (DICE-RIoT), and/or other keys used for cryptographic operations by memory system 105.
  • key generators 159 generate a public key sent to host system 101 for use in verification by verification component 109.
  • the public key is sent as part of a triple that also includes an identifier and certificate, as described above.
  • Memory system 105 includes a freshness generator 155.
  • freshness generator 155 can be used for authenticated commands.
  • multiple freshness generators 155 can be used.
  • freshness generator 155 is available for use by host system 101.
  • the processing device 111 and the memory regions 111, 113, ..., 119 are on the same chip or die.
  • the memory regions store data used by the host system 101 and/or the processing device 111 during machine learning processing or other run-time data generated by software process(es) executing on host system 101 or on processing device 111.
  • the computing system can include a write component in the memory system 105 that selects a memory region 111 (e.g., a recording segment of flash memory) for recording new data from host system 101.
  • the computing system 100 can further include a write component in the host system 101 that coordinates with the write component 107 in the memory system 105 to at least facilitate selection of the memory region 111.
  • volatile memory 123 is used as system memory for a processing device (not shown) of host system 101.
  • a process of host system 101 selects memory regions for writing data.
  • the host system 101 can select a memory region based in part on data from sensors and/or software processes executing on an autonomous vehicle.
  • the foregoing data is provided by the host system 101 to processing device 111, which selects the memory region.
  • host system 101 or processing device 111 includes at least a portion of the identity component 107 and/or verification component 109.
  • the processing device 111 and/or a processing device in the host system 101 includes at least a portion of the identity component 107 and/or verification component 109.
  • processing device 111 and/or a processing device of the host system 101 can include logic circuitry implementing the identity component 107 and/or verification component 109.
  • the host system 101 can be configured to execute instructions stored in memory for performing the operations of the identity component 107 and/or verification component 109 described herein.
  • the identity component 107 is implemented in an integrated circuit chip disposed in the memory system 105.
  • the verification component 109 in the host system 101 is part of an operating system of the host system 101 , a device driver, or an application.
  • An example of memory system 105 is a memory module that is connected to a central processing unit (CPU) via a memory bus.
  • memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc.
  • the memory system can be a hybrid memory/storage system that provides both memory functions and storage functions.
  • a host system can utilize a memory system that includes one or more memory regions. The host system can provide data to be stored at the memory system and can request data to be retrieved from the memory system.
  • a host can access various types of memory, including volatile and non-volatile memory.
  • the host system 101 can be a computing device such as a controller in a vehicle, a network server, a mobile device, a cellular telephone, an embedded system (e.g., an embedded system having a system-on-chip (SOC) and internal or external memory), or any computing device that includes a memory and a processing device.
  • the host system 101 can include or be coupled to the memory system 105 so that the host system 101 can read data from or write data to the memory system 105.
  • the host system 101 can be coupled to the memory system 105 via a physical host interface.
  • “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components).
  • examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc.
  • the physical host interface can be used to transmit data between the host system 101 and the memory system 105.
  • the physical host interface can provide an interface for passing control, address, data, and other signals between the memory system 105 and the host system 101.
  • FIG. 2 illustrates a memory system 105 as an example.
  • the host system 101 can access multiple memory systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.
  • the host system 101 can include a processing device and a controller.
  • the processing device of the host system 101 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc.
  • the controller of the host system can be referred to as a memory controller, a memory management unit, and/or an initiator.
  • the controller controls the communications over bus 103 between the host system 101 and the memory system 105. These communications include sending of a host message for verifying the identity of memory system 105 as described above.
  • a controller of the host system 101 can communicate with a controller of the memory system 105 to perform operations such as reading data, writing data, or erasing data at the memory regions of non-volatile memory 121.
  • the controller is integrated within the same package as the processing device 111.
  • the controller is separate from the package of the processing device 111.
  • the controller and/or the processing device can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof.
  • the controller and/or the processing device can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
  • the memory regions 111, 113, ..., 119 can include any combination of different types of non-volatile memory components.
  • the memory cells of the memory regions can be grouped as memory pages or data blocks that can refer to a unit used to store data.
  • the volatile memory 123 can be, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM).
  • one or more controllers of the memory system 105 can communicate with the memory regions 111, 113, ..., 119 to perform operations such as reading data, writing data, or erasing data.
  • Each controller can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof.
  • Each controller can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
  • the controller(s) can include a processing device (processor) configured to execute instructions stored in local memory.
  • local memory of the controller includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory system 105, including handling communications between the memory system 105 and the host system 101.
  • the local memory can include memory registers storing memory pointers, fetched data, etc.
  • the local memory can also include read-only memory (ROM) for storing micro-code.
  • controller(s) of memory system 105 can receive commands or operations from the host system 101 and/or processing device 111 and can convert the commands or operations into instructions or appropriate commands to achieve selection of a memory region based on data write counters for the memory regions.
  • the controller can also be responsible for other operations such as wear-leveling, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory regions.
  • the controller can further include host interface circuitry to communicate with the host system 101 via the physical host interface.
  • the host interface circuitry can convert the commands received from the host system into command instructions to access one or more of the memory regions, as well as convert responses associated with the memory regions into information for the host system 101.
  • the memory system 105 can also include additional circuitry or components that are not illustrated.
  • the memory system 105 can include a cache or buffer (e.g., DRAM or SRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from one or more controllers and decode the address to access the memory regions.
  • a controller in the host system 101 or memory system 105, and/or the processing device 111 includes at least a portion of the identity component 107 and/or verification component 109.
  • the controller and/or the processing device 111 can include logic circuitry implementing the identity component 107 and/or verification component 109.
  • a processing device can be configured to execute instructions stored in memory for performing operations that provide read/write access to memory regions for the identity component 107 as described herein.
  • the verification component 109 is part of an operating system, a device driver, or an application.
  • FIG. 3 shows an example computing device of a vehicle 100, according to one embodiment.
  • the vehicle 100 can be an autonomous vehicle, a non-autonomous vehicle, an emergency vehicle, a service vehicle, or the like.
  • the vehicle 100 includes a vehicle computing device 110, such as an on-board computer.
  • Vehicle computing device 110 is an example of host device 151 of FIG. 1.
  • vehicle computing device 110 is an example of host system 101 of FIG. 2, and memory 160 is an example of memory system 105.
  • the vehicle computing device 110 includes a processor 120 coupled to a vehicular communication component 130, such as a reader, writer, and/or other computing device capable of performing the functions described below, that is coupled to (or includes) an antenna 140.
  • the vehicular communication component 130 includes a processor 150 coupled to a memory 160, such as a non-volatile flash memory, although embodiments are not limited to this kind of memory device.
  • the memory 160 is adapted to store all the information related to the vehicle (e.g., driver, passengers, and carried goods) in such a way that the vehicle 100 is able to provide this information when approaching a check point by using a communication interface (for example, the so-called DICE-RIoT protocol), as described below.
  • the vehicle information (such as vehicle ID/plate number) is already stored in the vehicle memory 160, and the vehicle 100 is able to identify, for example through the communication component 130 and by using a known DICE-RIoT protocol or a similar protocol, the electronic ID of the passengers and/or the IDs of the carried luggage, goods and the like, and then to store this information in the memory 160.
  • electronic IDs, transported luggage and goods containers are equipped with wireless transponders, NFC, Bluetooth, RFID, touchless sensors, magnetic bars, and the like, and the communication component 130 can use readers and/or an electromagnetic field to acquire the needed information from such remote sources.
  • all the passenger IDs and/or the IDs of the carried luggage, goods and the like are equipped with electronic devices capable of exchanging data with a communication component.
  • Those electronic devices may be active elements, in the sense that they are supplied by electric power, or passive elements that are activated and powered by an external electric supply source that provides the required electric supply only when the electronic device is in its proximity.
  • Rental vehicles or autonomous vehicles can use readers and/or an electromagnetic field to acquire information inside or in the proximity of the vehicle or, as an alternative, may receive information even from remote sources, for instance when the driver of a rental vehicle is already known to the rental system because of a previous reservation. A further check may be performed in real time when the driver arrives to pick up the vehicle.
  • all the information about the transported luggage and goods (and also about the passengers) carried by the vehicle 100 may be maintained so as to be always up-to-date.
  • the electronic IDs of the passengers and/or the IDs of the carried luggage and goods are updated in real time via the wireless transponders associated with the luggage and goods or carried by the passengers (not shown).
  • the communication between the vehicular communication component 130 and the proximity sources occurs via the DICE-RIoT protocol.
  • the vehicle computing device 110 can control operational parameters of the vehicle 100, such as steering and speed.
  • a controller (not shown) can be coupled to a steering control system 170 and a speed control system 180.
  • the vehicle computing device 110 can be coupled to an information system 190.
  • Information system 190 can be configured to display a message, such as route information or a check point security message, and can display visual warnings and/or output audible warnings.
  • the communication component 130 can receive information from additional computing devices, such as from an external computing device (not shown).
  • FIG. 4 shows an example system 390 having a host device 350, according to one embodiment.
  • the computing device includes a passive communication component 310, such as a short-range communication device (e.g., an NFC tag).
  • the communication component 310 can be in the vehicle 300, which can be configured as shown in FIG. 3 for the vehicle 100 and include the components of vehicle 100 in addition to the communication component 310, which can be configured as the vehicular communication component 130.
  • the communication component 310 includes a chip 320 (e.g., implementing a CPU or application controller for vehicle 300) having a non-volatile storage component 330 that stores information about the vehicle 300 (such as vehicle ID, driver/passenger information, carried goods information, etc.).
  • the communication component 310 can include an antenna 340.
  • the host device 350 is, for example, an active communications device (e.g., that includes a power supply), which can receive information from the communication component 310 and/or transmit information thereto.
  • the host device 350 can include a reader (e.g., an NFC reader), such as a toll reader, or other components.
  • the host device 350 can be an external device arranged (e.g., embedded) in proximity of a check point (e.g., at the boundary of a trust zone) or in general in proximity of limited access areas.
  • the host device 350 can also be carried by a policeman for use as a portable device.
  • the host device 350 can include a processor 360, a memory 370, such as a non-volatile memory, and an antenna 380.
  • the memory 370 can include an NFC protocol that allows the host device 350 to communicate with the communication component 310.
  • the host device 350 and the communication component 310 can communicate using the NFC protocol, such as for example at about 13.56 MHz and according to the ISO/IEC 18000-3 international standard. Other approaches that use RFID tags can be used.
  • the host device 350 can also communicate with a server or other computing device (e.g., communicate over a wireless network with a central operation center).
  • the host device 350 can be wirelessly coupled or hardwired to the server or a communication center.
  • the host device 350 can communicate with the operation center via WIFI or over the Internet.
  • the host device 350 can energize the communication component 310 when the vehicle 300 brings antenna 340 within a communication distance of antenna 380.
  • the host device 350 can receive real-time information from the operation center and can transmit that information to vehicle 300.
  • the communication component 310 can have its own battery.
  • the host device 350 is adapted to read/send information from/to the vehicle 300, which is equipped with the communication component 310 (e.g., an active device) configured to allow information exchange.
  • the vehicular communication component 130 of the vehicle 100 can be active internally to pick up in real time pertinent information concerning the passenger IDs and the transported luggage and/or goods (e.g., when equipped with the corresponding wireless communication component discussed with respect to FIG. 4 above).
  • the vehicle’s computing device may detect information in a range of a few meters (e.g., 2-3 meters), so that all data corresponding to passengers, luggage and goods may be acquired. In one example, this occurs when the vehicle approaches an external communication component (e.g., a server or other computing device acting as a host device) within a particular proximity so that communication can begin and/or become strengthened.
  • the communication distance is for example 2-3 meters.
  • the vehicular communication component 130 can encrypt data when communicating to external entities and/or with internal entities.
  • data concerning transported luggage, goods or even passengers may be confidential or include confidential information (e.g., the health status of a passenger or confidential documents or a dangerous material).
  • a method for encrypted and decrypted communication can be used between the internal vehicle computing device and the external entity (e.g., a server acting as a host device).
  • this method may be applied even between the internal vehicle computing device and the electronic components associated with passengers, luggage and goods boarded on the vehicle.
  • the vehicular communication component 130 sends a vehicular public key to the external communication component (e.g., acting as a host device 151 ), and the external communication component sends an external public key to the vehicular communication component 130.
  • These public keys can be used to encrypt data sent to each respective communication component and verify an identity of each, and also exchange confirmations and other information.
  • the vehicular communication component 130 can encrypt data using the received external public key and send the encrypted data to the external communication component.
  • the external communication component can encrypt data using the received vehicular public key and send the encrypted data to the vehicular communication component 130.
  • Data sent by the vehicle 100 can include car information, passenger information, goods information, and the like.
  • the information can optionally be sent with a digital signature to verify an identity of the vehicle 100.
  • information can be provided to the vehicle 100 and displayed on a dashboard of the vehicle 100 or sent to an email of a computing device (e.g., a user device or central server that monitors vehicles) associated with the vehicle 100.
  • the vehicle can be recognized based on an identification of the vehicle, a VIN, etc., along with a vehicular digital signature.
  • data exchanged between the vehicle and the external entity can include a freshness used by the other party.
  • data sent by the vehicle to the external entity to indicate identical instructions can be altered at each particular time frame or for each particular amount of data sent. This can prevent a hacker from intercepting confidential information contained in previously sent data and sending the same data again to produce the same outcome. Because the data is slightly altered while still indicating the same instruction, identical information re-sent by the hacker at a later point in time would not cause the instruction to be carried out, since the recipient expects the altered data for that instruction.
  • the data exchanged between the vehicle 100 and an external entity can be performed using a number of encryption and/or decryption methods as described below.
  • the securing of the data can ensure that unauthorized activity is prevented from interfering with the operation of the vehicle 100 and the external entity.
  • FIG. 5A shows an application board that generates a triple including an identifier, certificate, and key that is sent to a host device, according to one embodiment.
  • the host device uses the triple to verify an identity of the application board.
  • the application board is an example of computing device 141 of FIG. 1.
  • the host device is an example of host device 151 of FIG. 1.
  • the application board and the host device each implement the DICE-RIoT protocol (device identification composition engine, robust internet of things).
  • the DICE-RIoT protocol is applied to communication between the vehicular communication component and an external communication component, as well as to a communication performed internally to the vehicle environment between the vehicle communication component and the various wireless electronic devices that are associated with each of the passenger IDs, the luggage, the goods and the like.
  • FIG. 5B shows an example computing system that boots in stages using layers, according to one embodiment.
  • the system includes an external communication component 430’ and a vehicular communication component 430”.
  • the associated vehicular communication component 430” of the vehicle can exchange data with the external entity as described above for example using a sensor (e.g., a radio frequency identification sensor, or RFID, or the like).
  • a sensor e.g., a radio frequency identification sensor, or RFID, or the like.
  • the component 430’ can be an application board located in a vehicle, and the component 430” can be a host device also located in the vehicle that uses the DICE-RloT protocol to verify an identity of component 430’ (for example, as discussed with respect to FIG. 1 above).
  • the DICE-RIoT protocol is used by a computing device to boot in stages using layers, with each layer authenticating and loading a subsequent layer and providing increasingly sophisticated runtime services at each layer.
  • a layer can thus be served by a prior layer and serve a subsequent layer, thereby creating an interconnected web of layers that builds upon lower layers and serves higher order layers.
  • other protocols can be used instead of the DICE-RIoT protocol.
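As an illustrative sketch (not the literal implementation described here), the layered boot pattern above can be modeled as each layer deriving the next layer's secret from its own secret and a measurement (hash) of the next layer's code; the key sizes and placeholder values below are assumptions:

```python
import hashlib
import hmac

def derive_next_layer_secret(current_secret: bytes, next_layer_code: bytes) -> bytes:
    """Derive the secret handed to the next boot layer from the current
    layer's secret and a measurement (hash) of the next layer's code."""
    measurement = hashlib.sha256(next_layer_code).digest()
    return hmac.new(current_secret, measurement, hashlib.sha256).digest()

# Illustrative boot chain: UDS -> Layer 0 -> Layer 1 -> Layer 2.
uds = b"\x11" * 32                      # placeholder device secret
layers = [b"layer 1 firmware image", b"layer 2 os image"]

secret = uds
for code in layers:
    secret = derive_next_layer_secret(secret, code)
# Each layer sees only its own derived secret, never the UDS itself.
```

Because each derivation is one-way, a compromised later layer cannot recover the UDS or any earlier layer's secret.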
  • security of the communication protocol is based on a secret value, a device secret (e.g., a UDS), that is set during manufacture (or, in some cases, later).
  • the device secret UDS exists within the device on which it was provisioned (e.g., stored as device secret 149 of FIG. 1 ).
  • the device secret UDS is accessible to the first stage ROM-based boot loader at boot time.
  • the system then provides a mechanism rendering the device secret inaccessible until the next boot cycle, and only the boot loader (e.g., the boot layer) can ever access the device secret UDS.
  • the boot is layered in a specific architecture that begins with the device secret UDS.
  • Layer 0 (L0) and Layer 1 (L1) are within the external communication component 430’.
  • Layer 0 (L0) can provide a fuse derived secret (FDS) key to Layer 1 (L1).
  • the FDS key can be based on the identity of code in Layer 1 (L1) and other security relevant data.
  • a particular protocol (such as the robust internet of things (RIoT) core protocol) can use the FDS to validate the code of Layer 1 (L1) that it loads.
  • the particular protocol can include a device identification composition engine (DICE) and/or the RIoT core protocol.
  • the FDS can include a Layer 1 (L1) firmware image itself, a manifest that cryptographically identifies authorized Layer 1 (L1) firmware, a firmware version number of signed firmware in the context of a secure boot implementation, and/or security-critical configuration settings for the device.
  • the device secret UDS can be used to create the FDS, and is stored in the memory of the external communication component.
  • Layer 0 (L0) never reveals the actual device secret UDS; it provides only a derived key (e.g., the FDS key) to the next layer in the boot chain.
  • the external communication component 430’ is adapted to transmit data, as illustrated by arrow 410’, to the vehicular communication component 430”.
  • the transmitted data can include an external identification that is public, a certificate (e.g., an external identification certificate), and/or an external public key, as will be illustrated in connection with FIG. 6.
  • Layer 2 (L2) of the vehicular communication component 430” can receive the transmitted data and execute the data in operations of the operating system (OS), for example on a first application App1 and a second application App2.
  • the vehicular communication component 430” can transmit data, as illustrated by arrow 410”, including a vehicular identification that is public, a certificate (e.g., a vehicular identification certificate), and/or a vehicular public key.
  • the vehicular communication component 430” can send a vehicle identification number (VIN) for further authentication, identification, and/or verification of the vehicle.
  • the external communication component 430’ can read the device secret UDS, hash an identity of Layer 1 (L1), and perform the following calculation:
  • FDS = KDF [UDS, Hash ("immutable information")]
  • KDF is a cryptographic one-way key derivation function (e.g., HMAC-SHA256).
  • Hash can be any cryptographic primitive, such as SHA-256, MD5, SHA-3, etc.
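The FDS calculation above can be sketched directly, assuming HMAC-SHA256 as the KDF and SHA-256 as the hash (both named in the text); the UDS value here is only a placeholder:

```python
import hashlib
import hmac

def compute_fds(uds: bytes, immutable_info: bytes) -> bytes:
    """FDS = KDF[UDS, Hash("immutable information")], with HMAC-SHA256 as
    the one-way key derivation function and SHA-256 as the hash."""
    digest = hashlib.sha256(immutable_info).digest()
    return hmac.new(uds, digest, hashlib.sha256).digest()

uds = bytes.fromhex("aa" * 32)          # placeholder device secret
fds = compute_fds(uds, b"identity of Layer 1 code")
# Layer 0 hands only this derived FDS key to Layer 1, never the UDS itself.
```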
  • the vehicle can communicate using either an anonymous log in or an authenticated log in.
  • the authenticated log in can allow the vehicle to obtain additional information that may not be accessible when using an anonymous log in.
  • the authentication can include providing the vehicular identification number (VIN) and/or authentication information, such as an exchange of public keys, as will be described below.
  • the external entity (e.g., a check point police at a boundary of a trust zone) can communicate with the vehicle to provide the external public key associated with the external entity to the vehicle.
  • FIG. 6 shows an example computing device generating an identifier, certificate, and key using asymmetric generators, according to one embodiment.
  • the computing device implements a process to determine parameters (e.g., within Layer 1 (L1) of an external device, or Layer 1 (L1) of an internal computing device in alternative embodiments).
  • the parameters determined include the external public identification, the external certificate, and the external public key, which are then sent (as indicated by arrow 510’) to Layer 2 (L2) of the vehicular communication component (e.g., reference 430” in FIG. 5B). Arrows 510’ and 510” of FIG. 6 correspond to the layers of FIG. 5B.
  • a message (“Host Message”) from the host device is merged with the external public key by pattern (data) merging 531 to provide merged data for encryption.
  • the merged data is an input to encryptor 530.
  • the host message is concatenated with the external public key.
  • the generated parameters include a triple that is sent to a host device and used to verify an identity of a computing device.
  • the external public identification, the external certificate, and the external public key are used by a verification component of the host device to verify the identity.
  • the host device is host device 151 of FIG. 1.
  • the FDS from Layer 0 (L0) is sent to Layer 1 (L1) and used by an asymmetric ID generator 520 to generate a public identification, IDlk public, and a private identification, IDlk private.
  • in "IDlk public," the "lk" indicates a generic Layer k (in this example, Layer 1 (L1)), and "public" indicates that the identification is openly shared.
  • the public identification IDlk public is illustrated as shared by the arrow extending to the right and outside of Layer 1 (L1) of the external communication component.
  • the generated private identification IDlk private is used as a key input into an encryptor 530.
  • the encryptor 530 can be, for example, any processor, computing device, etc. used to encrypt data.
  • Layer 1 (L1) of the external communication component can include an asymmetric key generator 540.
  • a random number generator (RND) can optionally input a random number into the asymmetric key generator 540.
  • the asymmetric key generator 540 can generate a public key, KLk public (referred to as an external public key), and a private key, KLk private (referred to as an external private key), associated with an external communication component such as the external communication component 430’ in FIG. 5B.
  • the external public key KLk public can be an input (as "data") into the encryptor 530.
  • a host message previously received from the host device as part of an identity verification process is merged with KLk public to provide the merged data as the input data to encryptor 530.
  • the encryptor 530 can generate a result K’ using the inputs of the external private identification IDlk private and the external public key KLk public.
  • the external private key KLk private and the result K’ can be input into an additional encryptor 550, resulting in output K”.
  • the output K” is the external certificate IDL1certificate.
  • the external certificate IDL1certificate can provide an ability to verify and/or authenticate an origin of data sent from a device.
  • data sent from the external communication component can be associated with an identity of the external communication component by verifying the certificate, as will be described further in association with FIG. 7.
  • the external public key KL1public can be transmitted to Layer 2 (L2). Therefore, the public identification IDL1public, the certificate IDL1certificate, and the external public key KL1public of the external communication component can be transmitted to Layer 2 (L2) of the vehicular communication component.
  • FIG. 7 shows a verification component that verifies the identity of a computing device using decryption operations, according to one embodiment.
  • the verification component includes decryptors 730, 750.
  • the verification component implements a process to verify a certificate in accordance with an embodiment of the present disclosure.
  • a public key KL1public, a certificate IDL1certificate, and a public identification IDL1public are provided from the external communication component (e.g., from Layer 1 (L1) of the external communication component 430’ in FIG. 5B).
  • the data of the certificate IDL1certificate and the external public key KL1public can be used as inputs into decryptor 730.
  • the decryptor 730 can be any processor, computing device, etc. used to decrypt data.
  • the result of the decryption of the certificate IDL1certificate and the external public key KL1public can be used as an input into decryptor 750 along with the public identification IDL1public, resulting in an output.
  • the external public key KL1public and the output from the decryptor 750 can indicate, as illustrated at block 760, whether the certificate is verified, resulting in a yes or no as an output. Private keys are associated with single layers, and a specific certificate can only be generated by a specific layer.
  • data received from the device being verified can be accepted, decrypted, and/or processed.
  • data received from the device being verified can be discarded, removed, and/or ignored. In this way, unauthorized devices sending nefarious data can be detected and avoided. As an example, a hacker sending data to be processed can be identified and the hacking data not processed.
  • the public key KL1public, the certificate IDL1certificate, and the public identification IDL1public are provided from computing device 141 of FIG. 1, or from memory system 105 of FIG. 2. This triple is generated by computing device 141 in response to receiving a host message from the host device. Prior to providing IDL1certificate as an input to decryptor 730, the IDL1certificate and a message from the host device are merged by pattern (data) merging 731.
  • the merging is a concatenation of data.
  • the merged data is provided as the input to decryptor 730. The verification process then proceeds otherwise as described above.
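The triple generation (FIG. 6) and certificate verification (FIG. 7) flows above can be sketched end to end. The real scheme uses asymmetric ID/key generators and asymmetric encryptors; the sketch below substitutes a toy XOR keystream cipher in which each "key pair" is a single shared value, purely to make the data flow runnable, so it is not the actual cryptography:

```python
import hashlib
import hmac

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher standing in for the encryptors/decryptors;
    applying it twice with the same key returns the original data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def generate_triple(fds: bytes, host_msg: bytes, rnd: bytes):
    """FIG. 6 flow: derive the ID pair from the FDS, the key pair from RND,
    then compute K' and the certificate K''."""
    id_l1 = hmac.new(fds, b"asymmetric-id", hashlib.sha256).digest()
    kl1 = hmac.new(rnd, b"asymmetric-key", hashlib.sha256).digest()
    k_prime = keystream_xor(id_l1, host_msg + kl1)  # K'  = encrypt(ID private, msg | K public)
    certificate = keystream_xor(kl1, k_prime)       # K'' = encrypt(K private, K')
    return id_l1, certificate, kl1                  # the triple {ID, certificate, key}

def verify_triple(host_msg: bytes, id_l1: bytes, certificate: bytes, kl1: bytes) -> bool:
    """FIG. 7 flow: decryptor 730, then decryptor 750, then compare to the key."""
    step1 = keystream_xor(kl1, certificate)
    step2 = keystream_xor(id_l1, step1)
    return step2 == host_msg + kl1
```

A tampered host message or certificate makes the final comparison fail, which mirrors the yes/no output at block 760.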
  • FIG. 8 shows a block diagram of an example process to verify a certificate, according to one embodiment.
  • a signature can be generated and sent with the data.
  • a first device may make a request of a second device and once the second device performs the request, the first device may indicate that the first device never made such a request.
  • An anti-repudiation approach, such as using a signature, can avoid repudiation by the first device and ensure that the second device can perform the requested task without subsequent difficulty.
  • a vehicle computing device 810 (e.g., vehicle computing device 110 in FIG. 3 or computing device 141 of FIG. 1) can send data Dat” to an external computing device 810’ (or to any other computing device in general).
  • the vehicle computing device 810 can generate a signature Sk using the vehicular private key KLk private.
  • the signature Sk can be transmitted to the external computing device 810’.
  • the external computing device 810’ can verify the signature using data Dat’ and the public key KLk public previously received (e.g., the vehicular public key). In this way, signature verification operates by using a private key to encrypt the signature and a public key to decrypt the signature.
  • a unique signature for each device can remain private to the device sending the signature while allowing the receiving device to be able to decrypt the signature for verification.
  • This is in contrast to encryption/decryption of the data, which is encrypted by the sending device using the public key of the receiving device and decrypted by the receiving device using the private key of the receiver.
  • the vehicle can verify the digital signature by using an internal cryptography process (e.g., the Elliptic Curve Digital Signature Algorithm (ECDSA) or a similar process).
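The statement that a signature is "encrypted" with the private key and "decrypted" with the public key matches a textbook RSA signature. Below is a minimal sketch with toy parameters (deliberately insecure, for illustration only; the process named in the text is ECDSA, which works differently internally but offers the same sign/verify interface):

```python
import hashlib

# Textbook RSA with toy primes p=61, q=53, purely to illustrate signing
# with the private exponent and verifying with the public exponent.
p, q = 61, 53
n = p * q                              # modulus (3233)
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def sign(data: bytes) -> int:
    """Sk: 'encrypt' the message digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)

def verify(data: bytes, signature: int) -> bool:
    """'Decrypt' the signature with the public key and compare digests."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == digest
```

The private exponent d never leaves the signer, while any holder of (n, e) can check the signature, which is what lets the receiving device verify without being able to forge.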
  • Due to the exchange and verification of the certificates and of the public keys, the devices are able to communicate in a secure way with each other.
  • when the vehicle approaches an external entity (e.g., a trust zone boundary, a border security entity or, generally, an electronically-controlled host device), the respective communication devices, which have the capability shown in FIG. 7 of verifying the respective certificate, exchange and verify certificates and public keys.
  • after the authentication (e.g., after receiving/verifying the certificate and the public key from the external entity), the vehicle is thus able to communicate all the needed information related thereto and stored in the memory thereof, such as plate number/ID, VIN, insurance number, driver info (e.g., IDs, eventual permission for border transition), passenger info, transported goods info and the like. Then, after checking the received info, the external entity communicates to the vehicle the result of the transition request, this info being possibly encrypted using the public key of the receiver.
  • the exchanged messages/info can be encrypted/decrypted using the above-described DICE-RIoT protocol.
  • the so-called immutable info (such as plate number/ID, VIN, insurance number) is not encrypted, while other info is encrypted.
  • in the exchanged message, there can be non-encrypted data as well as encrypted data: the info can thus be encrypted or not, or mixed.
  • the correctness of the message is then ensured by using the certificate/public key to validate that the content of the message is valid.
  • FIG. 9 shows a method to verify an identity of a computing device using an identifier, certificate, and a key, according to one embodiment.
  • the method of FIG. 9 can be implemented in the system of FIGs. 1-7.
  • the method of FIG. 9 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
  • the method of FIG. 9 is performed at least in part by the identity component 147 and verification component 153 of FIG. 1.
  • a message is received from a host device.
  • computing device 141 receives a message (e.g., a “host message”) from the host device.
  • an identifier, a certificate, and a key are generated.
  • the identifier is associated with an identity of a computing device.
  • the certificate is generated using the message (e.g.,“host message”) from the host device.
  • the message is merged with the public key prior to encryption.
  • This encryption uses the private identifier IDL1private as a key.
  • the private identifier IDL1private is associated with the public identifier IDL1public (e.g., an associated pair that is generated by asymmetric ID generator 520).
  • identity component 147 generates the identifier, certificate, and key to provide a triple.
  • the triple is generated based on the DICE-RIoT protocol.
  • the triple is generated as illustrated in FIG. 6.
  • each layer (Lk) provides to the next layer (Lk+1) a set of keys and certificates, and each certificate can be verified by the receiving layer.
  • Layer 1 (L1) in a DICE-RIoT architecture generates the certificate using a host message sent by the host device.
  • Layer 1 calculates two associated key pairs as follows: (IDlk public, IDlk private) and (KLk public, KLk private).
  • Layer 1 also calculates two signatures as follows:
  • K’ = encrypt (IDlk private, KLk public)
  • K” = encrypt (KLk private, K’)
  • Layer 1 provides a triple as follows: {IDl1 public, IDL1certificate, KL1public}.
  • more generally, each layer provides a triple as follows: {IDlk public, IDlk certificate, KLk public}.
  • each layer is able to prove its identity to the next layer.
  • layer 2 corresponds to application firmware
  • subsequent layers correspond to an operating system and/or applications of the host device.
  • the generated identifier, certificate, and key are sent to the host device.
  • the host device verifies the identity of the computing device using the identifier, certificate, and key.
  • host device 151 receives the identifier, certificate, and key from computing device 141.
  • Host device 151 uses verification component 153 to verify the identity of the computing device.
  • verification component 153 performs decryption operations as part of the verification process.
  • the decryption includes merging the message from the host with the certificate prior to decryption using the key received from computing device 141.
  • verification of the identity of the computing device is performed as illustrated in FIG. 7.
  • the decryption operations are performed as follows: Result = decrypt [IDL1public, decrypt (KL1public, host message | IDL1certificate)].
  • the Result is compared to KL1public. If the Result is equal to KL1public, then the identity is verified. In one example, an application board identity is verified.
  • an identity of a human or animal is proven.
  • Verification of the identity of a person is performed similarly as for verifying the identity of a computing device 141 as described above.
  • computing device 141 is integrated into a passport for a person.
  • a public administration department of a country that has issued the passport can use a UDS that is specific for a class of document (e.g., driver license, passport, ID card, etc.).
  • for example, the UDS 0x12234...4444 can be used for one class of document, and the UDS 0xaabb....00322 for another.
  • L1 is an ASCII string as follows:
  • The “granularity” of the assignment can be determined by the public administration of each country.
  • the method is usable for mass production of things.
  • the customer UDS is protected at a hardware level (e.g., inaccessibility of layer 0 external to the component).
  • the UDS cannot be read by anyone, but it can be replaced (e.g., only the customer can do this by using a secure protocol).
  • secure protocols include security protocols based on authenticated, replay-protected commands and/or on secret sharing algorithms such as Diffie-Hellman (e.g., ECDH, elliptic curve Diffie-Hellman).
  • the UDS is communicated to the customer (not the end user) by using secure infrastructure.
  • the UDS is customizable by the customer.
  • the component recognition can work in the absence of an internet or other network connection.
  • the method can be used to readily check identity of: things, animals, and humans at a trust zone boundary (e.g., a country border, internal check point, etc.).
  • knowledge of the UDS permits the host device to securely replace the UDS. For example, replacement can be done if: the host desires to change the identity of a thing, or the host desires that the thing’s identity is unknown to anyone else (including the original manufacturer).
  • a replace command is used by the host device.
  • the host device can send a replace UDS command to the computing device.
  • the replace command includes the existing UDS and the new UDS to be attributed to the computing device.
  • the replace command has a field including a hash value as follows: hash (existing UDS | new UDS).
  • an authenticated, replay-protected command is used that has a field as follows: Replace_command | signature, where signature = MAC [secret key, Replace_command | hash (existing UDS | new UDS)].
  • the secret key is an additional key, and is the key used for authenticated commands present on the device.
  • the secret key can be a session key as described below (see, e.g., FIG. 12).
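The authenticated replace-command field above can be sketched as follows. The wire layout (carrying the new UDS in the clear alongside the hash) and all field/key sizes are assumptions for illustration, not the actual command format:

```python
import hashlib
import hmac

CMD = b"Replace_command"

def build_replace_field(secret_key: bytes, existing_uds: bytes, new_uds: bytes) -> bytes:
    """field = Replace_command | signature, where
    signature = MAC[secret key, Replace_command | hash(existing UDS | new UDS)]."""
    command = CMD + new_uds + hashlib.sha256(existing_uds + new_uds).digest()
    signature = hmac.new(secret_key, command, hashlib.sha256).digest()
    return command + signature

def accept_replace_field(secret_key: bytes, stored_uds: bytes, field: bytes):
    """Device side: check the MAC, then check that the hash binds the command
    to the device's current UDS. Returns the new UDS if accepted, else None."""
    command, signature = field[:-32], field[-32:]
    expected = hmac.new(secret_key, command, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return None                     # reject: bad authentication
    new_uds, digest = command[len(CMD):-32], command[-32:]
    if digest != hashlib.sha256(stored_uds + new_uds).digest():
        return None                     # reject: existing UDS does not match
    return new_uds
```

Binding the hash to the existing UDS means only a host that already knows the current device secret can replace it, which matches the knowledge requirement stated above.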
  • a method comprises: receiving, by a computing device (e.g., computing device 141 ), a message from a host device (e.g., host device 151 ); generating, by the computing device, an identifier, a certificate, and a key, wherein the identifier is associated with an identity of the computing device, and the certificate is generated using the message; and sending, by the computing device, the identifier, the certificate, and the key to the host device, wherein the host device is configured to verify the identity of the computing device using the identifier, the certificate, and the key.
  • verifying the identity of the computing device comprises concatenating the message and the certificate to provide first data.
  • verifying the identity of the computing device further comprises decrypting the first data using the key to provide second data.
  • verifying the identity of the computing device further comprises decrypting the second data using the identifier to provide a result, and comparing the result to the key.
  • the identifier is a public identifier
  • the computing device stores a secret key
  • the method further comprising: using the secret key as an input to a message authentication code to generate a derived secret; wherein the public identifier is generated using the derived secret as an input to an asymmetric generator.
  • the identifier is a first public identifier
  • the computing device stores a first device secret used to generate the first public identifier
  • the method further comprising: receiving a replace command from the host device; in response to receiving the replace command, replacing the first device secret with a second device secret; and sending, to the host device, a second public identifier generated using the second device secret.
  • the key is a public key
  • generating the certificate includes concatenating the message with the public key to provide a data input for encryption.
  • the identifier is a public identifier
  • a first asymmetric generator generates the public identifier and a private identifier as an associated pair
  • the key is a public key
  • a second asymmetric generator generates the public key and a private key as an associated pair
  • generating the certificate comprises: concatenating the message with the public key to provide first data; encrypting the first data using the private identifier to provide second data; and encrypting the second data using the private key to provide the certificate.
  • the key is a public key
  • the method further comprising generating a random number as an input to an asymmetric key generator, wherein the public key and an associated private key are generated using the asymmetric key generator.
  • the random number is generated using a physical unclonable function (PUF).
  • a system comprises: at least one processor; and memory containing instructions configured to instruct the at least one processor to: send a message to a computing device; receive, from the computing device, an identifier, a certificate, and a key, wherein the identifier is associated with an identity of the computing device, and the certificate is generated by the computing device using the message; and verify the identity of the computing device using the identifier, the certificate, and the key.
  • verifying the identity of the computing device comprises: concatenating the message and the certificate to provide first data; decrypting the first data using the key to provide second data; decrypting the second data using the identifier to provide a result; and comparing the result to the key.
  • the identifier is a first public identifier
  • the computing device stores a first device secret used to generate the first public identifier
  • the instructions are further configured to instruct the at least one processor to: send a replace command to the computing device, the replace command to cause the computing device to replace the first device secret with a second device secret, and receive, from the computing device, a second public identifier generated using the second device secret.
  • the computing device is configured to use the second device secret as an input to a message authentication code that provides a derived secret, and to generate the second public identifier using the derived secret.
  • the replace command includes a field having a value based on the first device secret.
  • the system further comprises a freshness mechanism configured to generate a freshness, wherein the message sent to the computing device includes the freshness.
  • the identity of the computing device includes an alphanumeric string.
  • a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to at least: receive a message from a host device; generate an identifier, a certificate, and a key, wherein the identifier corresponds to an identity of the computing device, and the certificate is generated using the message; and send the identifier, the certificate, and the key to the host device for use in verifying the identity of the computing device.
  • the identifier is a public identifier associated with a private identifier
  • the key is a public key associated with a private key
  • generating the certificate comprises: concatenating the message with the public key to provide first data; encrypting the first data using the private identifier to provide second data; and encrypting the second data using the private key to provide the certificate.
  • verifying the identity of the computing device comprises performing a decryption operation using the identifier to provide a result, and comparing the result to the key.
  • the PUF value can itself be used as a device secret, or used to generate a device secret.
  • the PUF value is used as a unique device secret (UDS) for use with the DICE-RIoT protocol as described above (e.g., see FIG. 5A and FIG. 5B).
  • a value generated by a PUF is used as an input to a message authentication code (MAC). The output from the MAC is used as the UDS.
  • the PUF value, or a value generated from the PUF value, can be used as a random number (e.g., a device-specific random number).
  • the random number (e.g., RND) is used as an input when generating the associated public key and private key via the asymmetric key generator described above (e.g., see FIG. 6).
  • the architecture below generates an output by feeding inputs provided from one or more PUFs into a message authentication code (MAC).
  • the output from the MAC provides the improved PUF (e.g., the UDS above).
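The PUF-into-MAC construction above can be sketched as follows; the raw PUF readouts are simulated placeholder bytes, and HMAC-SHA256 stands in for the MAC:

```python
import hashlib
import hmac

def improved_puf_output(raw_puf_bits: bytes, context: bytes = b"UDS") -> bytes:
    """Feed the raw PUF readout through a MAC (HMAC-SHA256 here) so the
    output is uniformly distributed even if the raw PUF bits are biased
    or partially correlated across dice."""
    return hmac.new(raw_puf_bits, context, hashlib.sha256).digest()

# Two simulated "chips" whose raw readouts differ in only one bit (a low
# Hamming distance) still yield outputs that differ in roughly half their bits.
chip_a = bytes([0x5A] * 32)
chip_b = bytes([0x5A] * 31 + [0x5B])
uds_a = improved_puf_output(chip_a)
uds_b = improved_puf_output(chip_b)
distance = sum(bin(x ^ y).count("1") for x, y in zip(uds_a, uds_b))
```

Because the output is recomputed from the physical PUF readout at each power-up, the derived key never needs to be stored in non-volatile memory.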
  • key injection is the programming of a unique secret key for each chip or die, for example, provided from a semiconductor wafer. It is desired that key injection be performed in a secure environment to avoid leaking or disclosing the secret keys injected into the chips. It is also desired to ensure that the key cannot be hacked or read back after production of the chip. In some cases, for example, key injection procedures are certified or executed by a third-party infrastructure.
  • Chip manufacturers desire to reduce the production cost of chips that include cryptographic capabilities. Chip manufacturers also desire to simplify production flows while maintaining a consistent level of security performance of the manufactured chips. However, key injection is one of the more expensive production steps.
  • Chip manufacturers also face the problem of improving the uniformity of PUFs when used as pseudo-random number generators. In some cases, this problem may include a cross-correlation between dice because of the phenomena on which a seed value provided by the PUF is based.
  • a PUF is based on unpredictable physical phenomena such as, for example, on-chip parasitic effect, on-chip path delays, etc., which are unique for each die. These phenomena are used, for example, to provide a seed value for a pseudo-random number generator.
  • Two different chips selected in the production line must have different PUF values.
  • the PUF value generated in each chip must not change during the life of the device. If two chips have similar keys (e.g., there is a low Hamming distance between them), it may be possible to use a key of one chip to guess the key of another chip (e.g., a preimage attack).
  • the improved PUF architecture described below can provide a solution to one or more of the above problems by providing output values suitable for providing the function of a PUF on each chip or die.
  • the improved PUF architecture below uses a PUF, which enables each chip or die to automatically generate a unique secure key at each power-up of the chip or die.
  • the secure key does not need to be stored in a non-volatile memory, which might be hacked or otherwise compromised.
  • the improved PUF architecture further uses a MAC to generate the improved PUF output (e.g., a unique key) for use by, for example, cryptographic functions or processes that are integrated into the semiconductor chip.
  • the use of the MAC can, for example, increase the Hamming distance between keys generated on different chips.
  • an improved PUF architecture using the output from a MAC is provided as a way to generate seed or other values.
  • the improved PUF architecture provides, for example, a way to perform key injection that reduces cost of manufacture, and that improves reliability and/or uniformity of PUF operation on the final chip.
  • a method includes: providing, by at least one PUF, at least one value; and generating, based on a MAC, a first output, wherein the MAC uses the at least one value provided by the at least one PUF as an input for generating the first output.
  • a system includes: at least one PUF device; a message authentication code MAC module configured to receive a first input based on at least one value provided by the at least one PUF device; at least one processor; and memory containing instructions configured to instruct the at least one processor to generate, based on the first input, a first output from the MAC module.
  • the MAC module can be implemented using hardware and/or software.
  • the system further includes a selector module that is used to select one or more of the PUF devices for use in providing values to the MAC module. For example, values provided from several PUF devices can be linked and provided as an input to the MAC module.
  • the selector module can be implemented using hardware and/or software.
  • FIG. 10 shows a system for generating a unique key 125 from an output of a message authentication code (MAC) 123 that receives an input from a physical unclonable function (PUF) device 121, according to one embodiment.
  • the system provides a PUF architecture 111 used to generate the unique key 125 (or other value) from an output of message authentication code (MAC) module 123.
  • the MAC module 123 receives an input value obtained from the physical unclonable function (PUF) device 121.
  • the PUF device 121 in FIG. 10 can be, for example, any one of various different, known types of PUFs.
  • the MAC module 123 provides, for example, a one-way function such as SHA1, SHA2, MD5, CRC, TIGER, etc.
  • the architecture 111 can, for example, improve the Hamming distance of the PUF values or codes generated between chips.
  • the MAC functions are unpredictable (e.g., two input sequences differing by just a single bit produce two completely different output results). Thus, the input to the MAC function cannot be recognized or determined from knowledge of the output alone.
  • the architecture 111 also can, for example, improve the uniformity of the PUF as a pseudo-random number generator.
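The avalanche behavior described above can be observed directly: MACing two PUF values that differ in a single bit yields outputs that differ in roughly half of their 256 bits. HMAC-SHA256 with a fixed key stands in for the MAC module here, and the input values are illustrative.

```python
import hmac
import hashlib

def mac256(data: bytes) -> bytes:
    # HMAC-SHA256 with a fixed key stands in for the MAC module.
    return hmac.new(b"\x00" * 32, data, hashlib.sha256).digest()

def hamming_distance(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

puf_a = b"\x00" * 16
puf_b = b"\x01" + b"\x00" * 15   # differs from puf_a in exactly one bit

raw_distance = hamming_distance(puf_a, puf_b)                  # 1 at the input
mac_distance = hamming_distance(mac256(puf_a), mac256(puf_b))  # typically near half of 256
```

This is the sense in which the MAC "increases the Hamming distance" between keys generated from nearby PUF values.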
  • the value generated by the PUF architecture 111 may be a number having N bits, where N depends on a cryptographic algorithm implemented on a chip (e.g., memory device 103 or another device) that includes the PUF architecture 111.
  • the chip implements a cryptographic function that uses HMAC-SHA256, in which case the output from MAC module 123 has a size N of 256 bits. The use of the output from the MAC module 123 provides a message length for the output value that is suitable for use as a key (without needing further compression or padding).
  • the PUF architecture 111 is implemented in a device such as the illustrated memory device 103, or can be implemented in other types of computing devices such as, for example, integrated circuits implemented in a number of semiconductor chips provided by a wafer manufacturing production line.
  • the MAC module 123 cooperates with and/or is integrated into the cryptographic module 127, which can, for example, provide cryptographic functions for memory device 103.
  • the output of the MAC module 123 can be suitable to be used as a key due to the MAC being used by the memory device 103 for other cryptographic purposes.
  • the operation of the PUF architecture 111, the cryptographic module 127, and/or other functions of the memory device 103 can be controlled by a controller 107.
  • the controller 107 can include, for example, one or more microprocessors.
  • a host 101 can communicate with the memory device 103 via a communication channel.
  • the host 101 can be a computer having one or more Central Processing Units (CPUs) to which computer peripheral devices, such as the memory device 103, may be attached via an interconnect, such as a computer bus (e.g., Peripheral Component Interconnect (PCI), PCI extended (PCI-X), PCI Express (PCIe)), a communication port, and/or a computer network.
  • unique key 125 is used as a UDS to provide an identity for memory device 103.
  • Controller 107 implements layer 0 (L0) and layer 1 (L1) in a DICE-RIoT architecture.
  • cryptographic module 127 performs processing to generate a triple, as described above.
  • Host 101 uses the triple to verify the identity of memory device 103.
  • Memory device 103 is an example of computing device 141.
  • a first problem is to prove the identity of the board to the host.
  • the problem can be handled by the use of the public triple and asymmetric cryptography described above.
  • the second problem could be solved using the public triple and asymmetric cryptography above.
  • a lighter security mechanism based simply on a MAC function is often sufficient for handling the second problem.
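A minimal sketch of such a lighter, MAC-based mechanism, assuming a key already shared between host 101 and the device; function names like `device_respond` and `host_verify` are hypothetical, not from the patent. The host issues a fresh challenge and checks the device's HMAC over it.

```python
import hmac
import hashlib
import secrets

shared_key = b"\x11" * 32  # secret shared between host and device (illustrative)

def device_respond(challenge: bytes) -> bytes:
    # The device proves knowledge of the shared key by MACing the challenge.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

challenge = secrets.token_bytes(16)  # freshness: a new nonce per exchange
```

Here a random nonce provides freshness; a monotonic counter value, as discussed later in the document, could serve the same role.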
  • the memory device 103 can be used to store data for the host 101 , for example, in the non-volatile storage media 109.
  • Examples of memory devices in general include hard disk drives (HDDs), solid state drives (SSDs), flash memory, dynamic random-access memory, magnetic tapes, network attached storage device, etc.
  • the memory device 103 has a host interface 105 that implements communications with the host 101 using the communication channel.
  • the communication channel between the host 101 and the memory device 103 is a Peripheral Component Interconnect Express (PCI Express or PCIe) bus in one embodiment; and the host 101 and the memory device 103 communicate with each other using NVMe protocol (Non-Volatile Memory Host Controller Interface Specification).
  • the communication channel between the host 101 and the memory device 103 includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communications network, a broadband high-speed always-connected wireless communication connection (e.g., a current or future generation of mobile network link); and the host 101 and the memory device 103 can be configured to communicate with each other using data storage management and usage commands similar to those in the NVMe protocol.
  • the controller 107 can run firmware 104 to perform operations responsive to the communications from the host 101 , and/or other operations.
  • Firmware in general is a type of computer program that provides control, monitoring and data manipulation of engineered computing devices.
  • the firmware 104 controls the operations of the controller 107 in operating the memory device 103, such as the operation of the PUF architecture 1 1 1 , as further discussed below.
  • the memory device 103 has non-volatile storage media 109, such as magnetic material coated on rigid disks, and/or memory cells in an integrated circuit.
  • the storage media 109 is non-volatile in that no power is required to maintain the data/information stored in the non-volatile storage media 109, which data/information can be retrieved after the non-volatile storage media 109 is powered off and then powered on again.
  • the memory cells may be implemented using various memory/storage technologies, such as NAND-gate-based flash memory, phase-change memory (PCM), magnetic memory (MRAM), resistive random-access memory, and 3D XPoint, such that the storage media 109 is non-volatile and can retain data stored therein without power for days, months, and/or years.
  • the memory device 103 includes volatile Dynamic Random-Access Memory (DRAM) 106 for the storage of run-time data and instructions used by the controller 107 to improve the computation performance of the controller 107 and/or provide buffers for data transferred between the host 101 and the non-volatile storage media 109.
  • DRAM 106 is volatile in that it requires power to maintain the data/information stored therein, which data/information is lost immediately or rapidly when the power is interrupted.
  • Volatile DRAM 106 typically has less latency than non-volatile storage media 109, but loses its data quickly when power is removed.
  • the volatile DRAM 106 is replaced with volatile Static Random-Access Memory (SRAM) that uses less power than DRAM in some applications.
  • the volatile DRAM 106 can be eliminated; and the controller 107 can perform computing by operating on the non-volatile storage media 109 for instructions and data instead of operating on the volatile DRAM 106.
  • cross point storage and memory devices have data access performance comparable to volatile DRAM 106.
  • a cross point memory device uses transistor-less memory elements, each of which has a memory cell and a selector that are stacked together as a column. Memory element columns are connected via two perpendicular layers of wires, where one layer is above the memory element columns and the other layer below the memory element columns. Each memory element can be individually selected at a cross point of one wire on each of the two layers.
  • Cross point memory devices are fast and non-volatile and can be used as a unified memory pool for processing and storage.
  • the controller 107 has in-processor cache memory with data access performance that is better than the volatile DRAM 106 and/or the non-volatile storage media 109. Thus, parts of instructions and data used in the current computing task are cached in the in-processor cache memory of the controller 107 during the computing operations of the controller 107. In some instances, the controller 107 has multiple processors, each having its own in-processor cache memory.
  • the controller 107 performs data intensive, in-memory processing using data and/or instructions organized in the memory device 103. For example, in response to a request from the host 101, the controller 107 performs a real-time analysis of a set of data stored in the memory device 103 and communicates a reduced data set to the host 101 as a response.
  • the memory device 103 is connected to real-time sensors to store sensor inputs; and the processors of the controller 107 are configured to perform machine learning and/or pattern recognition based on the sensor inputs to support an artificial intelligence (Al) system that is implemented at least in part via the memory device 103 and/or the host 101.
  • the processors of the controller 107 are integrated with memory (e.g., 106 or 109) in computer chip fabrication to enable processing in memory and thus overcome the von Neumann bottleneck that limits computing performance as a result of a limit in throughput caused by latency in data moves between a processor and memory configured separately according to the von Neumann architecture.
  • the integration of processing and memory increases processing speed and memory transfer rate, and decreases latency and power usage.
  • the memory device 103 can be used in various computing systems, such as a cloud computing system, an edge computing system, a fog computing system, and/or a standalone computer.
  • In a cloud computing system, remote computer servers are connected in a network to store, manage, and process data.
  • An edge computing system optimizes cloud computing by performing data processing at the edge of the computer network that is close to the data source and thus reduces data communications with a centralized server and/or data storage.
  • a fog computing system uses one or more end-user devices or near-user edge devices to store data and thus reduces or eliminates the need to store the data in a centralized data warehouse.
  • At least some embodiments disclosed herein can be implemented using computer instructions executed by the controller 107, such as the firmware 104.
  • hardware circuits can be used to implement at least some of the functions of the firmware 104.
  • the firmware 104 can be initially stored in the non-volatile storage media 109, or another non-volatile device, and loaded into the volatile DRAM 106 and/or the in-processor cache memory for execution by the controller 107.
  • the firmware 104 can be configured to use the techniques discussed below in operating the PUF architecture.
  • the techniques discussed below are not limited to being used in the computer system of FIG. 10 and/or the examples discussed above.
  • the output of the MAC module 123 can be used to provide, for example, a root key or a seed value. In other implementations, the output can be used to generate one or more session keys.
  • the output from the MAC module 123 can be transmitted to another computing device.
  • the unique key 125 can be transmitted via host interface 105 to host 101.
  • FIG. 11 shows a system for generating unique key 125 from an output of MAC 123, which receives inputs from one or more PUF devices selected by a selector module 204, according to one embodiment.
  • the system generates the unique key 125 from an output of the MAC module 123 using a PUF architecture similar to architecture 111 of FIG. 10, but including multiple PUF devices 202 and selector module 204, according to one embodiment.
  • the MAC module 123 receives inputs from one or more PUF devices 202 selected by the selector module 204.
  • PUF devices 202 include PUF device 121 .
  • the PUF devices 202 can be, for example, identical or different (e.g., based on different random physical phenomena).
  • selector module 204 acts as an intelligent PUF selection block or circuit to select one or more of PUF devices 202 from which to obtain values to provide as inputs to the MAC module 123.
  • the selector module 204 bases the selection of the PUF devices 202 at least in part on results from testing the PUF devices 202. For example, the selector module 204 can test the repeatability of each PUF device 202. If any PUF device 202 fails testing, then the selector module 204 excludes the failing device from providing an input value to the MAC module 123. In one example, the failing device can be excluded temporarily or indefinitely.
  • the selector module 204 permits testing the PUF functionality of each chip during production and/or during use in the field (e.g., by checking the repeatability of the value provided by each PUF device 202). If two or more values provided by a given PUF device are different, then the PUF device is determined to be failing and is excluded from use as an input to the MAC module 123.
  • the selector module 204 is used to concurrently use multiple PUF devices 202 as sources for calculating an improved PUF output from the MAC module 123. For example, the selector module 204 can link a value from a first PUF device with a value from a second PUF device to provide as an input to the MAC module 123. In some implementations, this architecture permits obtaining a robust PUF output due to its dependence on several different physical phenomena.
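The selector behavior described above might be sketched as follows, assuming each PUF device can be read more than once for the repeatability test; the readings and function names are illustrative, not the patent's implementation.

```python
import hmac
import hashlib

def is_repeatable(readings) -> bool:
    # A PUF device passes if repeated readings of it are identical.
    return len(set(readings)) == 1

def select_and_link(per_device_readings) -> bytes:
    # Link (concatenate) one value from every PUF device that passes the
    # repeatability test; failing devices are excluded from the MAC input.
    linked = b""
    for readings in per_device_readings:
        if is_repeatable(readings):
            linked += readings[0]
    return linked

# Two readings per device; the second device is unstable and gets excluded.
devices = [
    [b"\xaa" * 8, b"\xaa" * 8],
    [b"\xbb" * 8, b"\xbc" * 8],  # repeated readings differ -> failing device
    [b"\xcc" * 8, b"\xcc" * 8],
]
mac_input = select_and_link(devices)
unique_key = hmac.new(b"\x00" * 32, mac_input, hashlib.sha256).digest()
```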
  • FIG. 12 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter 302 (and/or an input from another freshness mechanism like NONCE, time-stamp, etc.), according to one embodiment.
  • the system generates the unique key 125 from an output of the MAC module 123, according to one embodiment.
  • the PUF architecture illustrated in FIG. 12 is similar to the PUF architecture illustrated in FIG. 11 , except that a monotonic counter 302 is included to provide values to selector module 204.
  • the monotonic counter 302 can be implemented using hardware and/or software.
  • the MAC module 123 receives inputs from one or more PUF devices 202 and an input from the monotonic counter 302.
  • values obtained from the PUF devices 202 and the monotonic counter 302 are linked and then provided as an input to the MAC module 123.
  • the monotonic counter 302 is a non-volatile counter that only increments its value when requested. In some embodiments, the monotonic counter 302 is incremented after each power-up cycle of a chip.
  • the PUF architecture of FIG. 12 can be used to provide a way to securely share keys between a semiconductor chip and other components in an application, such as for example a public key mechanism.
  • the monotonic counter 302 is incremented before each calculation of a PUF, which ensures that the input of the MAC module 123 is different at each cycle, and thus the output (and/or pattern of output) provided is different.
  • this approach can be used to generate a session key, where each session key is different.
  • the selector module 204 can selectively include or exclude the monotonic counter 302 (or other freshness mechanism like NONCE, timestamp) from providing a counter value as an input to the MAC module 123.
  • the monotonic counter 302 is also used by cryptographic module 127.
  • a PUF architecture that includes the monotonic counter can be used as a session key generator to guarantee a different key at each cycle.
  • a mechanism is used as follows:
  • Session_key = MAC_key_based[Root_Key, MTC or other freshness mechanism]
  • Root_Key = an output value provided from the MAC module 123 above, or any other kind of key that is present on the chip.
  • the MACkey_based function above is, for example, a MAC algorithm based on a secret key.
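A toy model of this session-key mechanism, with HMAC-SHA256 as the key-based MAC and a volatile stand-in for the non-volatile monotonic counter 302; the root key value and the name `next_session_key` are illustrative assumptions.

```python
import hmac
import hashlib

class MonotonicCounter:
    # Volatile stand-in for the non-volatile monotonic counter 302.
    def __init__(self) -> None:
        self.value = 0
    def increment(self) -> int:
        self.value += 1
        return self.value

root_key = b"\x22" * 32   # e.g., an output of the MAC module, or an injected key
mtc = MonotonicCounter()

def next_session_key() -> bytes:
    # Session_key = MAC_key_based[Root_Key, MTC]: increment before each
    # calculation so the MAC input (hence the key) differs every cycle.
    count = mtc.increment().to_bytes(8, "big")
    return hmac.new(root_key, count, hashlib.sha256).digest()

k1, k2 = next_session_key(), next_session_key()
```

Because the counter only moves forward, each cycle yields a different session key even though the root key is fixed.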
  • there can be two types of MAC algorithm in cryptography:
  • an algorithm that is based on a secret key (e.g., HMAC-SHA256 is key-based); and
  • an algorithm that is not based on a secret key, for example SHA256 (SHA stand-alone is not key-based).
  • a MAC that is key-based can be transformed into a MAC that is not key-based by setting the key to a known value (e.g., 0x000...
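For example, pinning the HMAC key to a known constant (all zeros here) yields a function that behaves as an unkeyed MAC in the sense above; `unkeyed_mac` is an illustrative name.

```python
import hmac
import hashlib

KNOWN_KEY = b"\x00" * 32  # fixed, publicly known value

def unkeyed_mac(message: bytes) -> bytes:
    # With the key pinned to a known constant, HMAC-SHA256 behaves as an
    # unkeyed one-way function: anyone can recompute it, yet it remains
    # preimage-resistant like SHA-256 itself.
    return hmac.new(KNOWN_KEY, message, hashlib.sha256).digest()
```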
  • FIG. 13 shows a method to generate an output from a MAC that uses one or more input values provided from one or more PUFs, according to one embodiment.
  • the method of FIG. 13 can be implemented in the memory device 103 of FIG. 10.
  • the method of FIG. 13 includes, at block 411, providing one or more values by at least one PUF (e.g., providing values from one or more of PUF devices 202).
  • repeatability of one or more of the PUFs can be tested, for example as was described above. This testing is optional.
  • the failing PUF device is excluded from providing an input to the MAC. This excluding may be performed, for example, by selector module 204, as was discussed above.
  • a value is provided from a monotonic counter (e.g., monotonic counter 302).
  • the use of the monotonic counter in the PUF architecture is optional.
  • an output is generated from the MAC, which uses one or more values provided by the PUFs (and optionally at least one value from the monotonic counter) as inputs to the MAC.
  • the computing device is a first computing device, and the method further comprises transmitting the first output to a second computing device, wherein the first output is a unique identifier of the first computing device.
  • providing the at least one value comprises selecting a first value from a first PUF and selecting a second value from a second PUF.
  • the method further comprises: providing a value from a monotonic counter; wherein generating the first output further comprises using the value from the monotonic counter as an additional input to the MAC for generating the first output.
  • the method further comprises: generating a plurality of session keys based on respective outputs provided by the MAC, wherein the monotonic counter provides values used as inputs to the MAC; and incrementing the monotonic counter after generating each of the session keys.
  • the method further comprises: testing repeatability of a first PUF of the at least one PUF; and based on determining that the first PUF fails the testing, excluding the first PUF from providing any input to the MAC when generating the first output.
  • the testing comprises comparing two or more values provided by the first PUF.
  • the computing device is a memory device, and the memory device comprises a non-volatile storage media configured to store an output value generated using the MAC.
  • the method further comprises performing, by at least one processor, at least one cryptographic function, wherein performing the at least one cryptographic function comprises using an output value generated using the MAC.
  • a non-transitory computer storage medium stores instructions which, when executed on a memory device (e.g., the memory device 103), cause the memory device to perform a method, the method comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, based on a message authentication code (MAC), a first output, wherein the MAC uses the at least one value provided by the at least one PUF as an input for generating the first output.
  • the method of FIG. 4 can be performed on a system that includes: at least one physical unclonable function (PUF) device; a message authentication code (MAC) module configured to receive a first input based on at least one value provided by the at least one PUF device; at least one processor; and memory containing instructions configured to instruct the at least one processor to generate, based on the first input, a first output from the MAC module.
  • In one embodiment, the MAC module includes a circuit.
  • the first output from the MAC module is a key that identifies a die.
  • the first output from the MAC module is a root key, and the instructions are further configured to instruct the at least one processor to generate a session key using an output from the MAC module.
  • the system is part of a semiconductor chip (e.g., one chip of several chips obtained from a semiconductor wafer), the first output from the MAC module is a unique value that identifies the chip, and the instructions are further configured to instruct the at least one processor to transmit the unique value to a computing device.
  • the at least one PUF device comprises a plurality of PUF devices (e.g., PUF devices 202), and the system further comprises a selector module configured to select the at least one PUF device that provides the at least one value.
  • the selector module is further configured to generate the first input for the MAC module by linking a first value from a first PUF device and a second value from a second PUF device.
  • the system further comprises a monotonic counter configured to provide a counter value
  • the instructions are further configured to instruct the at least one processor to generate the first input by linking the counter value with the at least one value provided by the at least one PUF device.
  • the system further comprises a selector module configured to select the at least one PUF device that provides the at least one value, wherein linking the counter value with the at least one value provided by the at least one PUF device is performed by the selector module.
  • the monotonic counter is further configured to increment, after generating the first input, the counter value to provide an incremented value.
  • the instructions are further configured to instruct the at least one processor to generate, based on the incremented value and at least one new value provided by the at least one PUF device, a second output from the MAC module.
  • FIG. 14 shows a system for generating a root key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like NONCE, time- stamp, etc.), and that adds an additional MAC to generate a session key, according to one embodiment.
  • the system generates the root key from an output of a MAC that receives inputs from one or more PUF devices 202 and an input from a monotonic counter 302 (and/or an input from another freshness mechanism like NONCE, time-stamp, etc.), and that adds an additional MAC module 504 to generate a session key using a root key input, according to one embodiment.
  • MAC module 123 provides root key 502 as the output from MAC module 123.
  • the root key input in this key-based function can be root key 502, as illustrated.
  • monotonic counter 302 can provide an input to the MAC module 504.
  • a different monotonic counter or other value from the chip can be provided as an input to MAC module 504 instead of using monotonic counter 302.
  • the monotonic counter 302 provides a counter value to MAC module 504, but not to selector module 204.
  • the counter value can be provided to both MAC modules, or excluded from both modules.
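The two-stage structure of FIG. 14 can be sketched end to end, with HMAC-SHA256 standing in for both MAC module 123 and MAC module 504, and illustrative values for the linked PUF readings and the counter:

```python
import hmac
import hashlib

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# Stage 1 (MAC module 123): linked PUF values -> root key 502.
linked_puf_values = b"\xaa" * 16 + b"\xcc" * 16   # illustrative PUF readings
root_key = mac(b"\x00" * 32, linked_puf_values)

# Stage 2 (MAC module 504): root key + monotonic counter value -> session key.
counter_value = (7).to_bytes(8, "big")            # from monotonic counter 302
session_key = mac(root_key, counter_value)
```

The root key stays fixed across sessions while the counter input makes each derived session key unique.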
  • PUFs can be used for secure key generation.
  • Various embodiments discussed below relate to generating an initial key using at least one PUF, applying processing to increase obfuscation of the initial key, and storing the final obfuscated key in a non-volatile memory.
  • the final obfuscated key and/or an intermediate key used to generate the final obfuscated key can be shared with another computing device and used for secure communication with the other computing device (e.g., messaging using symmetric cryptography based on a shared key).
  • the secure key generation is done for computing devices to be used in automotive applications (e.g., a controller in an autonomous vehicle).
  • the initial key is generated in other ways that do not require using the at least one PUF above.
  • the initial key can be generated by using an injected key.
  • the initial key is present in a chip due to being injected in a factory or other secure environment.
  • in this case, the processing to increase obfuscation of the initial key is performed by applying the obfuscation processing to the injected key.
  • the automotive environment can affect key generation in various ways.
  • engine power-on can cause a drop in application power to a computing device, resulting in a key being generated incorrectly. Temperature extremes can also affect the circuit that generates the key. Other sources such as magnetic fields from power lines can cause inter-symbol interference or crosstalk, causing a host to fail to recognize the device.
  • a safe environment can be, for example, directly mounted in a car, in a test environment, or in a factory.
  • ADAS or other computing systems as used in vehicles are subject to power supply variations. This can occur, for example, during turning on the vehicle, braking, powering the engine, etc.
  • Various embodiments to generate and store a key as discussed below provide the advantage of being substantially independent from external factors (e.g., power supply variations, temperature, and other external sources of noise). Another advantage in some embodiments is that the generation of the key vector is the same for every cycle.
  • the key is substantially immune against hardware attack (e.g., that hackers might put in place).
  • one such attack is monitoring the power-on current of a device so as to associate current variations with bits of the key.
  • Other attacks can use, for example, voltage measurements (e.g., a Vdd supply voltage).
  • Some attacks can use, for example, temperature variations to interfere with operation of a device.
  • the initial key can be generated using the approaches and/or architectures as described above for FIGs. 10-14.
  • a PUF is used to generate the key for every power-on cycle of the computing device that is storing the key.
  • other approaches can be used to generate the initial key.
  • key injection uses at least one PUF and a MAC algorithm (e.g., SHA256) to generate a key for a device that is significantly different from other devices (e.g., from adjacent die located on a wafer).
  • the MAC cryptography algorithm provides the benefit of increasing the entropy of the bits generated by the PUF.
  • the generated key (e.g., the initial key as provided from a PUF and then a MAC algorithm) is stored in a non-volatile area of the device after pre-processing is performed on the key in order to diminish or avoid hacker attacks, and also to improve reliability of the stored key.
  • the circuit generating the key can be disabled.
  • the pre-processing is generally referred to herein as obfuscation processing.
  • circuitry and/or other logic is used to implement the obfuscation processing on the device.
  • the stored key can be read by the device because the key is independent of external sources of noise. An internal mechanism is used to read any data of the device.
  • storing the key as described herein increases the margin against noise. Also, this makes it difficult for a hacker to read the stored key, for example, using a power monitoring or other hacking method.
  • At least some embodiments herein use a PUF and an encryption algorithm (e.g., HMAC-SHA256) to make the key generation independent from external factors such as temperature or voltage that may otherwise cause the key to be different from one power-on of the device to the next power-on. If this occurs, it can be a problem for a host to be able to exchange messages with the device.
  • Various embodiments make the key generation more robust by placing the stored key in memory such that it is not impacted by external factors.
  • the key is generated once on a device and stored in non-volatile memory of the device.
  • the key can be generated using the content of an SRAM before a reset is applied to the SRAM.
  • the key which is a function of the PUF, is generated using the pseudo random value output from the PUF.
  • the content of the SRAM is read before a reset of the appliance or other device.
  • the key can also be re-generated at other times through a command sequence, as may be desired.
  • the generated key is used as a UDS in the DICE-RIoT protocol, as described above.
  • the command sequence uses a replace command to replace a previously-generated UDS with a new UDS, as described above.
  • the key generation is independent of the cryptography implemented by the device.
  • the generated key is shared with a host.
  • This embodiment stores the key and/or reads the key in the device in a way that avoids an attacker guessing the key and using it internally, such as for example by analyzing the shape of the current that the device absorbs during key usage.
  • the generated key becomes the variable password that is the secret key of the system.
  • the key is not shared with others.
  • the key is used to generate a corresponding public key.
  • an initial key is generated using an injected key or using one or more PUFs (e.g., to provide an initial key PUF0).
  • the initial key is then subjected to one or more steps of obfuscation processing to provide intermediate keys (e.g., PUF1, PUF2, ..., PUF5).
  • the output (e.g., PUF5) from this processing is an obfuscated key that is stored in non-volatile memory of the device.
  • obfuscation processing is applied to the injected key similarly as described below for the non-limiting example of PUF0.
  • Session key = MAC_key_based[Root_Key, MTC or other freshness mechanism]
  • Root_Key = any other kind of key that is present on the chip (e.g., the key can be an initial key injected in the chip in a factory or other secure environment)
  • a special sequence wakes up at least one circuit (e.g., a read circuit) of the device and verifies that the circuit(s) is executing properly.
  • the device then generates an initial key PUF0, as mentioned above. This key can be stored or further processed to make it more robust for secure storage, as described below.
  • An intermediate key, PUF1, is generated by concatenating PUF0 with a predetermined bit sequence (e.g., a sequence known by others).
  • PUF1 is used to verify the ability of the device to correctly read the key and to ensure that noise, such as fluctuations in the power supply, is not affecting the generated key.
  • PUF1 is interleaved with an inverted bit pattern (e.g., formed by inverting the bits of PUF1, and sometimes referred to herein as PUF1 bar) to generate PUF2.
  • PUF2 has the same number of 0 bits and 1 bits. This makes the shape of the device current substantially the same for any key (e.g., any key stored on the device). This reduces the possibility of an attacker guessing the key value by looking at the shape of the device current when the key is being read by the device.
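  • The PUF1 and PUF2 steps above can be sketched as follows (a sketch with illustrative bit lists and helper names, not the on-chip circuit):

```python
def make_puf1(puf0_bits, known_pattern):
    # PUF1 = PUF0 concatenated with a predetermined bit sequence that
    # lets the device check its read circuitry resolves 0s and 1s
    return puf0_bits + known_pattern

def make_puf2(puf1_bits):
    # Interleave each PUF1 bit with its complement ("PUF1 bar") so the
    # stored word always holds an equal number of 0 and 1 bits
    out = []
    for b in puf1_bits:
        out.append(b)
        out.append(b ^ 1)
    return out

puf1 = make_puf1([1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 0, 1])
puf2 = make_puf2(puf1)
assert puf2.count(0) == puf2.count(1)  # balanced for any key value
```

  Since PUF2 is balanced regardless of the key's value, the current drawn while reading it gives an attacker no per-key signal.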
  • a next intermediate key, PUF3, is generated.
  • the bits of PUF2 are interleaved with pseudo-random bits to form PUF3. This further helps to obfuscate the key.
  • the pseudo-random bits are derived from PUF1 or PUF2 by using a hash function. For example, these derived bits are added to PUF2 to form PUF3.
  • bits of an error correction code (ECC) are added to PUF3 to generate PUF4.
  • the ECC bits help guard against the effects of non-volatile memory (e.g., NVRAM) aging that can be caused by, for example, device endurance limits, X-rays and particles.
  • Non-volatile memory aging can also be caused, for example, by an increase in the number of electrons in the NV cell which can cause bits to flip.
  • PUF5 is a concatenation of several copies of PUF4. Having the redundancy of multiple PUF4 copies present in PUF5 further increases robustness by increasing the likelihood of being able to correctly read the key at a later time.
  • several copies of PUF5 are stored in various regions of non-volatile memory storage to further increase robustness. For example, even if PUF5 is corrupted in one of the regions, PUF5 can be read from other of the regions, and thus the correct key can be extracted.
  • PUF1 or PUF3 is the key that is shared with a host for symmetric cryptography, or used to generate a public key for asymmetric cryptography.
  • PUF4 and PUF5 are not shared with end users or a host.
  • one or more of the foregoing obfuscation steps can be applied to the initial key, and further the ordering can be varied. For example, the number of obfuscation steps can be decreased for a system that is known not to have Vdd voltage supply drops.
  • when storing the obfuscated key, the bit patterns will be physically spread around the non-volatile storage media (e.g., in different rows and words). For example, this allows the device to read the bits at the same time and protect against multi-bit errors.
  • FIG. 15 shows a computing device 603 for storing an obfuscated key 635 in non-volatile memory (e.g., non-volatile storage media 109), according to one embodiment.
  • Computing device 603 is an example of computing device 141 of FIG. 1.
  • the obfuscated key is used as a UDS.
  • the obfuscation adds entropy to the bits of the key to avoid a possible attempt by a hacker to understand the value of the key. The device is always able to extract the key by removing the added bits used for obfuscation.
  • a common hacker attack consists of guessing the secret key generated/elaborated inside the device by processing, with statistical tools, the current profile absorbed by the device in some particular timeframe. The obfuscation considerably mitigates this problem.
  • An initial key 625 is generated based on a value provided by at least one physical unclonable function device 121.
  • the obfuscated key 635 is generated based on initial key 625. After being generated, the obfuscated key 635 is stored in non-volatile storage media 109.
  • a message authentication code (MAC) 123 uses the value from PUF device 121 as an input and provides the initial key 625 as an output.
  • obfuscation processing module 630 is used to perform processing on initial key 625 in order to provide obfuscated key 635 (e.g., PUF5), for example as was discussed above.
  • initial key 625 and/or any one or more of the intermediate keys from the obfuscation processing described herein can be securely distributed in the same or a similar manner.
  • an end user/customer uses the foregoing approach to read the value of an initial key (e.g., PUF0), an intermediate key, and/or a final obfuscated key (e.g., PUF5).
  • the end user can verify the proper execution of the internal generation of the key by the device, and/or monitor the statistical quality of the key generation.
  • FIG. 16 shows an example of an intermediate key (PUF2) generated during an obfuscation process by obfuscation processing module 630, according to one embodiment.
  • bits of PUF1 are inverted to provide inverted bits 702.
  • Bits 702 are interleaved with the bits of PUF1 as illustrated. For example, every second bit in the illustrated key is an interleaved inverted bit 702.
  • FIG. 17 shows an example of another intermediate key (PUF3) generated during the obfuscation process of FIG. 16 (PUF3 is based on PUF2 in this example), according to one embodiment.
  • the bits of PUF2 are further interleaved with pseudo-random bits 802.
  • bits 802 are interleaved with the bits of PUF2 such that every third bit in the illustrated key is an interleaved pseudo-random bit 802.
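  • The stuffing illustrated in FIGs. 16-17 can be sketched as follows, assuming (as in the illustrated example) that every third bit of PUF3 is a hash-derived pseudo-random bit; the helper names are illustrative:

```python
import hashlib

def pseudo_random_bits(seed_bits, count):
    # Derive filler bits by hashing earlier key material (e.g., PUF2)
    digest = hashlib.sha256(bytes(seed_bits)).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return bits[:count]

def make_puf3(puf2_bits):
    # Stuff one hash-derived bit after every second PUF2 bit, so every
    # third bit of PUF3 is pseudo-random (as in the FIG. 17 example)
    filler = iter(pseudo_random_bits(puf2_bits, (len(puf2_bits) + 1) // 2))
    out = []
    for i, b in enumerate(puf2_bits, start=1):
        out.append(b)
        if i % 2 == 0:
            out.append(next(filler))
    return out

puf2 = [1, 0, 1, 1, 0, 0]
puf3 = make_puf3(puf2)
# dropping every third bit recovers the original PUF2
assert [puf3[i] for i in range(len(puf3)) if i % 3 != 2] == puf2
```

  Deriving the filler from the key material itself means no extra random source is needed, while the device can still strip the filler deterministically.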
  • FIG. 18 shows a method for generating and storing an obfuscated key (e.g., obfuscated key 635) in a non-volatile memory (e.g., non-volatile storage media 109), according to one embodiment.
  • memory system 105 of FIG. 2 stores the obfuscated key in non-volatile memory 121.
  • an initial key is generated based on a value provided by at least one physical unclonable function (PUF).
  • the initial key is generated by key injection.
  • the initial key can simply be a value injected into a chip during manufacture.
  • an obfuscated key is generated based on the initial key.
  • the generated obfuscated key is PUF3 or PUF5.
  • the obfuscated key is stored in a non-volatile memory of a computing device.
  • the obfuscated key is stored in NAND flash memory or an EEPROM.
  • a method includes: generating an initial key using key injection; generating an obfuscated key based on the initial key; and storing the obfuscated key in non-volatile memory.
  • the initial key can be the key injected during a key injection process at the time of manufacture.
  • a method comprises: generating an initial key provided by key injection or based on a value provided by at least one physical unclonable function (PUF); generating an obfuscated key based on the initial key; and storing the obfuscated key in a non-volatile memory of the computing device.
  • generating the initial key comprises using the value from the PUF (or, for example, another value on the chip) as an input to a message authentication code (MAC) to generate the initial key.
  • the obfuscated key is stored in the non-volatile memory outside of user-addressable memory space.
  • generating the obfuscated key comprises concatenating the initial key with a predetermined pattern of bits, which provides a first key (e.g., PUF1); and generating the obfuscated key further comprises interleaving the first key with an inverted bit pattern, wherein the inverted bit pattern is provided by inverting bits of the first key.
  • interleaving the first key with the inverted bit pattern provides a second key (e.g., PUF2); and generating the obfuscated key further comprises interleaving the second key with pseudo-random bits.
  • the method further comprises deriving the pseudo-random bits from the first key or the second key using a hash function.
  • interleaving the second key with pseudo-random bits provides a third key (e.g., PUF3); and generating the obfuscated key further comprises concatenating the third key with error correction code bits.
  • the computing device is a first computing device, the method further comprising sharing at least one of the initial key, the first key, or the third key with a second computing device, and receiving messages from the second computing device encrypted using the shared at least one of the initial key, the first key, or the third key.
  • concatenating the third key with error correction code bits provides a fourth key (e.g., PUF4); and generating the obfuscated key further comprises concatenating the fourth key with one or more copies of the fourth key.
  • concatenating the fourth key with one or more copies of the fourth key provides a fifth key (e.g., PUF5); and storing the obfuscated key comprises storing a first copy of the fifth key on at least one of a different row or block of the non-volatile memory than a row or block on which a second copy of the fifth key is stored.
  • a system comprises: at least one physical unclonable function (PUF) device (e.g., PUF device 121) configured to provide a first value; a non-volatile memory (e.g., non-volatile storage media 109) configured to store an obfuscated key (e.g., key 635); at least one processor; and memory containing instructions configured to instruct the at least one processor to: generate an initial key based on the first value provided by the at least one PUF device; generate an obfuscated key based on the initial key; and store the obfuscated key in the non-volatile memory.
  • the system further comprises a message authentication code (MAC) module (e.g., MAC 123) configured to receive values provided by the at least one PUF device, wherein generating the initial key comprises using the first value as an input to the MAC module to generate the initial key.
  • generating the obfuscated key comprises at least one of: concatenating a key with a predetermined pattern of bits; interleaving a first key with an inverted bit pattern of the first key; interleaving a key with pseudo-random bits; concatenating a key with error correction code bits; or concatenating a second key with one or more copies of the second key.
  • the stored obfuscated key has an equal number of zero bits and one bits.
  • generating the obfuscated key comprises concatenating the initial key with a first pattern of bits; concatenating the initial key with the first pattern of bits provides a first key; and generating the obfuscated key further comprises interleaving the first key with a second pattern of bits.
  • generating the obfuscated key further comprises interleaving a key with pseudo-random bits.
  • generating the obfuscated key further comprises concatenating a key with error correction code bits.
  • a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to perform a method, the method comprising: generating an initial key using at least one physical unclonable function (PUF); generating an obfuscated key based on the initial key; and storing the obfuscated key in non-volatile memory.
  • FIG. 19 shows computing device 1003 used for generating initial key 625 based on key injection 1010, obfuscating the initial key, and storing the obfuscated key in non-volatile memory, according to one embodiment.
  • the initial key 625 is generated by using the injected key 1010.
  • initial key 625 is present in a chip by being injected in a factory or other secure environment during manufacture, or other assembly or testing.
  • the initial key 625 is used as an initial UDS for computing device 1003.
  • the obfuscation can also be applied to the UDS.
  • the UDS is the secret that DICE-RIoT uses to start the secure generation of keys and certificates.
  • processing to increase obfuscation of the initial key is performed by applying obfuscation processing (via module 630) to the injected key (e.g., the value from key injection 1010).
  • obfuscation processing can be applied to any other value that may be stored or otherwise present on a chip or die.
  • a special sequence is activated to turn on the device containing a cryptographic engine (e.g., cryptographic module 127).
  • the sequence further wakes up the internal PUF and verifies its functionality, then the PUF generates an initial value PUF0, for instance as described above.
  • the PUF0 value is processed by an on-chip algorithm (e.g., by obfuscation processing module 630) and written in a special region of a non-volatile array (out of the user addressable space).
  • an injected key is processed by the on-chip algorithm similarly as described below to provide an obfuscated key for storage.
  • obfuscation processing is performed to prevent Vdd (voltage) and/or temperature fault hacker attacks.
  • This processing includes concatenating PUF0 with a well-known pattern (e.g., which contains a fixed number of 0/1 bits). These bits make it possible, when the PUF value is internally read during the life of the device (e.g., chip), to determine whether the read circuitry is able to properly discriminate 0/1 bits.
  • PUF1 = PUF0 concatenated with the well-known pattern.
  • the result of the above processing (e.g., PUF1) is further augmented with dummy bits (e.g., to avoid Icc hacker analysis).
  • the bits of PUF1 are interleaved with an inverted version of PUF1 (i.e., PUF1 bar, which is formed by inverting each bit of PUF1).
  • PUF2 = PUF1 interleaved with PUF1 bar.
  • the rule of interleaving depends on the kind of column decoder (e.g., of a non-volatile array) that is present on the chip/device.
  • the device ensures that at each read of the PUF value (from the non-volatile array), the read circuitry processes (in a single shot) the same number of bits from PUF1 and PUF1 bar. This ensures reading the same number of bits at values of 0 and 1, which provides a regular shape in the supply current (Idd).
  • the bits of PUF2 are further interleaved with pseudo-random bits.
  • the interleaving depends on the non-volatile array column decoder structure.
  • the output contains the same PUF2 bits stuffed with a certain number of pseudo-random bits (e.g., in order to obfuscate any residual correlation that may be present in the PUF2 pattern).
  • the pseudo-random bits can be derived from PUF1 or PUF2 by using a hash function. Other alternative approaches can also be used.
  • the bits of PUF3 are concatenated with error correction code (ECC) bits.
  • ECC error correction code
  • the bits of PUF4 are optionally replicated one or more times (which also extends ECC capabilities).
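  • The replication step above can be sketched as follows, with a byte-wise majority vote on read-back; the vote is one plausible recovery method under these assumptions, not necessarily the patented mechanism:

```python
from collections import Counter

def make_puf5(puf4: bytes, copies: int = 3) -> bytes:
    # PUF5 is PUF4 concatenated with redundant copies of itself
    return puf4 * copies

def recover_puf4(puf5: bytes, copies: int = 3) -> bytes:
    # Read back by majority vote across the stored copies, so a
    # flipped bit in one copy does not corrupt the extracted key
    n = len(puf5) // copies
    chunks = [puf5[i * n:(i + 1) * n] for i in range(copies)]
    return bytes(Counter(col).most_common(1)[0][0] for col in zip(*chunks))

puf4 = b"\x5a\xc3\x99\x0f"
puf5 = bytearray(make_puf5(puf4))
puf5[1] ^= 0xFF                      # simulate aging/bit-flip in one copy
assert recover_puf4(bytes(puf5)) == puf4
```

  The redundancy trades storage for robustness: the more copies stored, the more corrupted cells the vote can tolerate.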
  • the foregoing may be implemented on a NAND memory.
  • PUF5 = PUF4 concatenated with one or more copies of PUF4.
  • the value of PUF5 can be written two or more times on different rows and/or blocks of a non-volatile memory array.
  • the value can be used with diminished or no concern about key reliability (e.g., due to noise, or charge loss), or any attempt to infer its value by Idd analysis or forcing its value by Vdd fault attack.
  • the PUF circuitry can be disabled.
  • the PUF device can provide values used internally on a device for other purposes (e.g., using a standard read operation inside the non-volatile array).
  • key bits are differentiated from random bits when extracting a key from PUF3.
  • internal logic of a device storing a key is aware of the position and method required to return from PUF5 to a prior or original PUF (e.g., PUF3).
  • the bit positions of key bits are known by the device extracting the key.
  • the internal logic of the device can receive one of the intermediate PUFs or the final key PUF5, depending on design choice. Then, applying the operation(s) in the reverse order will obtain the original PUF.
  • the processing steps from PUF1 to PUF5 are executed to store the obfuscated PUF in a manner that a hacker would have to both: read the content (e.g., key bits), and also know the operation(s) that were applied in order to get back to and determine the original key.
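  • For example, the reverse walk can be sketched as below, assuming the every-second-bit interleave and trailing check pattern used in the earlier sketches (both assumptions, since the actual interleave rule is decoder-dependent):

```python
def deobfuscate_puf2(puf2_bits, pattern_len):
    # Reverse the operations in reverse order: first drop the
    # interleaved inverted bits (every second bit), then strip the
    # trailing predetermined check pattern to recover PUF0
    puf1 = puf2_bits[0::2]
    puf0 = puf1[:len(puf1) - pattern_len]
    check = puf1[len(puf1) - pattern_len:]
    return puf0, check

puf0 = [1, 0, 1, 1]
pattern = [0, 1]
puf2 = [x for b in (puf0 + pattern) for x in (b, b ^ 1)]
recovered, check = deobfuscate_puf2(puf2, len(pattern))
assert recovered == puf0 and check == pattern
```

  An attacker who dumps the stored bits but does not know the interleave rule or the pattern position cannot perform this walk-back.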
  • a computing device generates an identity using one or more PUFs.
  • the identity is a UDS.
  • the generation of identity for the computing device is assigned in an automatic way (e.g., based upon a scheduled time or occurrence of a predetermined event, in response to which a computing device will self-generate a UDS using a PUF).
  • the computing device After the computing device generates the identity, it can be used to generate a triple of an identifier, a certificate, and a key. In one embodiment, the triple is generated in response to receiving a message from a host device. The host device can use the generated identifier, certificate, and key to verify the identity of the computing device. After the identity is verified, further secure communications by the host device with the computing device can be performed using the key.
  • the computing device generates the identity in response to receiving a command from the host device.
  • the command can be a secure replace command that is authenticated by the computing device.
  • the computing device After generating the identity, the computing device sends a confirmation message to the host device to confirm that the replacement identity was generated.
  • the replacement identity is a new UDS that is stored in non-volatile memory and replaces a previously-stored UDS (e.g., a UDS assigned by the original manufacturer of the computing device).
  • the identity is a device secret (e.g., a UDS as used in the DICE-RIoT protocol, such as for example discussed above) stored in memory of a computing device.
  • At least one value is provided by one or more PUFs of the computing device.
  • the computing device generates the device secret using a key derivative function (KDF).
  • the value(s) provided by the one or more PUFs is an input(s) to the KDF.
  • the output of the KDF provides the device secret.
  • the output of the KDF is stored in memory of the computing device as the device secret.
  • the KDF is a hash. In one example, the KDF is a message authentication code.
  • the computing device stores a secret key that is used to communicate with a host device, and the KDF is a message authentication code (MAC).
  • the at least one value provided by one or more PUFs is a first input to the MAC, and the secret key is used as a second input to the MAC.
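  • This KDF-as-MAC construction can be sketched as follows (HMAC-SHA256 is one MAC the document mentions; the function name is illustrative):

```python
import hashlib
import hmac

def generate_uds(secret_key: bytes, puf_value: bytes) -> bytes:
    # The shared secret key is the MAC key; the PUF output is the
    # data input. The result is stored as the device secret (UDS).
    return hmac.new(secret_key, puf_value, hashlib.sha256).digest()

uds = generate_uds(b"shared-secret-2013", b"raw-puf-reading")
assert len(uds) == 32   # 256-bit device secret
```

  Binding the UDS to both the PUF value and the secret key means a party that learns the raw PUF output alone still cannot reproduce the device secret.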
  • the computing device can be a flash memory device.
  • serial NOR can be used.
  • FIG. 20 shows computing device 141 as used for generating an identity (e.g., a UDS for computing device 141 ) using a physical unclonable function (PUF) 2005, according to one embodiment. More specifically, a value is provided by PUF 2005. This value is provided as an input to key derivative function (KDF) 2007.
  • the output from the KDF is stored as device secret 149.
  • device secret 149 is a UDS as used in the DICE-RIoT protocol. Similarly as described above, the UDS can be used as a basis for generating a triple for sending to host device 151. This triple includes a public key that can be used by host device 151 for secure communications with computing device 141.
  • the generation of the device secret is performed in response to receiving a command from host device 151 via a host interface 2009.
  • the command is a replace command.
  • the command is accompanied by a signature signed by the host device 151 using a secret key.
  • the confirmation message is sent to host device 151 via host interface 2009.
  • computing device 141 stores a secret key 2013.
  • the secret key 2013 can be shared with host device 151 .
  • host device 151 uses the secret key 2013 to sign a signature sent with a replace command.
  • KDF 2007 is a message authentication code.
  • the secret key 2013 is used as a key input to KDF 2007 when generating the device secret.
  • the value from PUF 2005 is used as a data input to KDF 2007.
  • a freshness mechanism is implemented in computing device 141 using a monotonic counter 2003.
  • Monotonic counter 2003 can provide values for use as a freshness in secure communications with host device 151.
  • a unique identifier (UID) 2001 is stored in memory of computing device 141.
  • UID 2001 is injected at the factory.
  • the customer uses host device 151 in its factory to request that computing device 141 self-generate a new UDS (e.g., UDS_puf).
  • This step can be done by using an authenticated command. Only a customer or host device that knows the secret key 2013 is able to perform this operation (e.g., the secret key 2013 can be more generally used to manage an authenticated command set supported by computing device 141 ).
  • After the UDS_puf is generated, it is used to replace the original (trivial) UDS.
  • the replacement happens by using an authenticated command.
  • the external host device (the customer) can read the UDS_puf.
  • the generated UDS_puf can be used to implement the DICE-RIoT protocol.
  • FDS can be calculated using UDS_puf, similarly as described above for identity component 147 and identity component 107.
  • FDS = HMAC-SHA256[UDS, SHA256(“Identity of L1”)].
  • a triple (e.g., K_L1) that includes an identifier, certificate, and key can be generated using UDS_puf, similarly as described for FIG. 1 above.
  • the host device uses the key (e.g., the K_L1 public key) for trusted communications with computing device 141.
  • the identity generation mechanism above can be automatically executed by the computing device (e.g., an application board including a processor) at first use of the application board, or in the field once a scheduled or predetermined event occurs (e.g., as scheduled/determined by the customer and stored in memory 145 as a configuration of the computing device such as an update, etc.).
  • self-identity generation is performed as follows: A configurator host device (e.g., a laptop with software) is connected to a computing device coupled to an autonomous vehicle bus (e.g., using a secure over-the-air interface, etc.). The host device uses authenticated commands to request that the computing device self-generate a UDS (e.g., UDSPUF). The authentication is based on secret key 2013 (e.g., the secret key can be injected by a manufacturer and provided to the customer with a secure infrastructure).
  • the authenticated command execution is confirmed with an authenticated response (e.g., “Confirmation” as illustrated in FIG. 20).
  • the host device is informed about the UDSPUF generated by using a secure protocol (e.g., by sending over a secure wired and/or wireless network(s) using a freshness provided by the monotonic counter 2003).
  • host interface 2009 is a command interface that supports authenticated and replay protected commands.
  • the authentication is based on a secret key (e.g., secret key 2013) and uses a MAC algorithm (e.g., HMAC).
  • an identity generation command is received from host device 151 that includes a signature based on a command opcode, command parameters, and a freshness.
  • the signature = MAC(opcode | parameters | freshness, secret key).
  • computing device 141 provides an identity generation confirmation including a signature.
  • the signature = MAC(command result | freshness, secret key).
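  • The two signature formulas above can be sketched as follows ('|' denotes concatenation; the field encodings and function names are illustrative assumptions):

```python
import hashlib
import hmac

def sign_command(opcode: bytes, params: bytes, freshness: bytes,
                 key: bytes) -> bytes:
    # signature = MAC(opcode | parameters | freshness, secret key)
    return hmac.new(key, opcode + params + freshness, hashlib.sha256).digest()

def verify_command(opcode, params, freshness, key, signature) -> bool:
    expected = sign_command(opcode, params, freshness, key)
    return hmac.compare_digest(expected, signature)

key = b"shared-secret"
sig = sign_command(b"\x01", b"replace-UDS", b"\x00\x00\x00\x07", key)
assert verify_command(b"\x01", b"replace-UDS", b"\x00\x00\x00\x07", key, sig)
# a replay with a stale freshness value fails verification
assert not verify_command(b"\x01", b"replace-UDS", b"\x00\x00\x00\x06", key, sig)
```

  Including the freshness value in the MAC input is what makes the command replay-protected: an old signature never matches a new counter value.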
  • the secret key 2013 is injected in the factory.
  • the secret key can be symmetric (e.g., based on HMAC-SHA256).
  • the secret key can use an asymmetric scheme (e.g., ECDSA).
  • the device secret 149 (e.g., UDSPUF) can be generated using various options. For example, once the generation flow is activated, by using the proper authenticated and replay protected command, the UDSPUF is generated according to a command option selected as follows:
  • the KDF function can be used as a simple HASH function (e.g., SHA256), or a MAC function (e.g., HMAC-SHA256), which uses a secret key.
  • the secret key used is the same key used to provide the authenticated commands.
  • the device secret (e.g., UDS) can be generated using one of the options as follows:
  • UDSPUF = MAC[Secret_Key, (info provided by host …)]
  • the UDSPUF can be communicated to the host device 151 after being generated.
  • the host device 151 can directly read the UDSPUF.
  • the UDSPUF can be read only a predetermined number of times (e.g., just once or a few times). In one example, the process as described for FIG. 21 below can be used. After reading the UDSPUF the predetermined number of times, the read mechanism is permanently disabled. For example, this approach can be used in a secure environment (e.g., a customer’s factory when assembling a computing system or other product using computing device 141).
  • the UDSPUF is encrypted by computing device 141 using a public key received from host device 151, and then the UDSPUF is sent to host device 151. After this procedure, the UDSPUF read mechanism is permanently disabled.
  • computing device 141 includes obfuscation processing module 2011.
  • Obfuscation processing module 2011 is an example of obfuscation processing module 630 of FIG. 15, or obfuscation processing module 630 of FIG. 19.
  • the UDSPUF is encrypted with the host public key and communicated to the host device; the host device uses its corresponding secret key to decrypt it.
  • the host public key is communicated to the computing device 141 during a specific communication setup phase.
  • the sharing of the UDSPUF can be done by a direct read operation by using an authenticated command, and after a predetermined number of reads (e.g., normally just one), such read operation is disabled forever.
  • the computing device 141 stores the UDS key in an anti-tamper area.
  • an obfuscation mechanism is used to avoid information leakage (e.g., by avoiding any chance of a hacker to guess the stored UDS key, such as by the hacker analyzing either the current or the voltage wave profile).
  • the obfuscation mechanism used can be as described above for FIGs. 15, 18, 19.
  • computing device 141 obfuscates the device secret prior to sending the encrypted device secret to the host device 151 .
  • FIG. 21 shows a system that sends an initial value provided by a monotonic counter of the system for use in determining whether tampering with the system has occurred, according to one embodiment.
  • the system is computing device 141 of FIG. 20.
  • the system uses a secret key for secure communications.
  • the secret key is generated and stored on the system (e.g., by key injection at the factory after initial assembly of a system board).
  • a monotonic counter of the system is used to provide an initial value.
  • the secret key is a UDS used with the DICE-RIoT protocol.
  • the secret key is generated in response to a command from a host device (e.g., host device 151 ).
  • the initial value is sent by electronic communication to an intended recipient of the physical system (e.g., the recipient will receive the physical system after it is physically transported to the recipient’s location).
  • the recipient (e.g., using a computing device or server of the recipient that earlier received the initial value) reads an output value from the monotonic counter.
  • the tampering is an unauthorized attempt by an intruder to access the secret key of the system during its physical transport.
  • a secret key can be generated, for instance, using a true RNG (random number generator) or a PUF, or previously injected in the system (memory) in a secure environment like a factory.
  • the generated key is associated with the initial value of the monotonic counter.
  • the initial value is used by a recipient of the system to determine whether an unauthorized attempt has been made to access the stored key.
  • a key injection process can use an output from a physical unclonable function (PUF) to generate the key for every power-on cycle of a computing device that stores the key.
  • FIG. 21 shows a system 351 that sends an initial value provided by a monotonic counter 355 for use in determining whether tampering with system 351 has occurred, according to one embodiment. For example, it can be determined whether system 351 has been tampered with by a hacker seeking unauthorized access to a stored key during physical transport of system 351 .
  • system 351 is a system board that includes non-volatile memory 306, processor(s) 304, monotonic counter(s) 355, and power supply 318.
  • Nonvolatile memory 306 is used to store the generated keys.
  • Nonvolatile memory 306 is, for example, a non-volatile memory device (e.g., 3D cross point storage).
  • Monotonic counter 355 is initialized to provide an initial value. This initial value is associated with the stored key 314.
  • the initial value is sent by a processor 304 via external interface 312 to another computing device. For example, the initial value can be sent to a server of the receiver to which system 351 will be shipped after manufacture and key injection is completed.
  • a computing device of the receiver determines the initial value. For example, the computing device can store in memory the initial value received when sent as described above. The computing device reads an output value from the monotonic counter 355. This output value is compared to the initial value to determine whether there has been tampering with the system 351.
  • the computing device of the receiver determines a number of events that have occurred that cause the monotonic counter 355 to increment.
  • the output values from monotonic counter 355 can be configured to increment on each power-on operation of system 351 (e.g., as detected by monitoring power supply 318).
  • the output values from monotonic counter 355 can be additionally and/or alternatively configured to increment on each attempt to perform a read access of the stored key 314.
  • the initial value received from the sender can be adjusted based on this number of known events. Then, the adjusted initial value is compared to the output value read from monotonic counter 355. If the values match, then no tampering has occurred. If the values do not match, the computing device determines that tampering has been detected.
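The adjustment-and-compare check above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation; the function name and event counts are assumptions:

```python
def tampering_detected(initial_value, counter_value,
                       known_power_ons, known_key_reads):
    """Return True if the monotonic counter shows unexplained increments.

    The receiver adjusts the initial value it was sent by the number of
    events it knows it caused (power-ons and key-read attempts), then
    compares the result to the value actually read from the counter.
    """
    expected = initial_value + known_power_ons + known_key_reads
    return counter_value != expected

# Example: the sender reported an initial value of 5; the receiver powered
# the system on twice and read the key once, then read 9 from the counter.
# 5 + 2 + 1 = 8, not 9, so one unexplained event occurred in transit.
assert tampering_detected(5, 9, known_power_ons=2, known_key_reads=1)
assert not tampering_detected(5, 8, known_power_ons=2, known_key_reads=1)
```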
  • processor 304 receives a communication from the computing device of the receiver that includes an indication that tampering has been detected. In response to receiving the communication, processor 304 disables at least one function of system 351. In one example, the disabled function is read access to the stored key 314.
  • system 351 can be configured so that a counter value output from monotonic counter 355 cannot exceed a predetermined maximum value. For example, when each counter value is read from monotonic counter 355, a determination is made whether the counter value exceeds a predetermined maximum value. If the counter value exceeds the predetermined maximum value, read access to stored key 314 can be permanently disabled (e.g., under control of processor 304).
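The permanent-disable behavior can be sketched as a small read-gated key store. The class, the threshold value, and the use of an exception are illustrative assumptions; the real limit would be the preset maximum agreed between sender and receiver:

```python
class KeyStore:
    """Sketch of key read access gated by a monotonic counter threshold."""

    MTC_MAX = 10  # assumed preset maximum counter value

    def __init__(self, key, counter=0):
        self._key = key
        self._counter = counter
        self._disabled = False

    def read_key(self):
        self._counter += 1              # every read attempt increments the MTC
        if self._counter > self.MTC_MAX:
            self._disabled = True       # permanently disable read access
        if self._disabled:
            raise PermissionError("read access to stored key disabled")
        return self._key
```

With this sketch, the first ten reads succeed and every later attempt fails, mirroring the "cannot exceed a predetermined maximum value" behavior described above.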
  • system 351 is embodied on a semiconductor die. In another embodiment, system 351 is formed on a system board. An application is stored in system memory 353 and executed by processor 304. Execution of the application occurs after power-on of the system 351. For example, the receiver of the system 351 can execute the application after determining that no tampering has occurred with the system.
  • key 314 is generated using an output value from one or more physical unclonable functions (PUFs).
  • the keys are generated for each power-on cycle of system 351.
  • key 314 is a UDS generated in response to a replace command from a host device received via external interface 312.
  • system 351 is a controller that stores key 314.
  • the external interface 312 is used to send an initial value from monotonic counter 355 to an external nonvolatile memory device (not shown) on the same system board as the controller.
  • the external non-volatile memory device determines whether tampering has occurred by reading an output value from monotonic counter 355 and comparing the output value to the initial value received from system 351.
  • system memory 353 includes volatile memory 308 and/or non-volatile memory 306.
  • Cryptographic module 316 is used to perform cryptographic operations for secure communications over external interface 312 using keys 314 (e.g., symmetric keys).
  • a secure communication channel is set up by key sharing between the actors that participate in the communication.
  • Components that are used in a trusted platform module board often do not have sufficient processing capability to implement schemes such as, for example, public key cryptography.
  • one or more secret keys are shared between a device/board manufacturer and the OEM/final customers. Also, keys can be shared as needed between different devices in the same board in the field. One or more monotonic counters are used inside the secure device, as was discussed above.
  • the monotonic counters (MTCs) are configured to increment the output value any time a power-on of the system/device occurs and any time the stored secret key is read (or an attempt is made to read it).
  • each MTCk can be public and shared with the customer (e.g., as MTCk).
  • the command sequence can be public (e.g., the strength of the method is not based on secrecy of read protocol).
  • some keys can be read together, and they only need a single MTC.
  • the MTCk values may be different for each MTC or associated key.
  • the receiver customer/user powers on the system 351.
  • tampering has occurred. For example, an unauthorized person powered up the device and attempted to read the stored keys. If tampering has occurred, the device is discarded and the indication of tampering is communicated to the sender (e.g., the transfer company) to avoid further technical security problems.
  • multiple read operations are performed by the receiver of the system 351.
  • the device leaves the factory and arrives at the customer/OEM.
  • the customer reads one or more keys, and the related MTCk are incremented by one. If the incremented MTCj exceeds the value preset (MTCj_MAX) by the silicon factory (sender), reading of the key (Secret_Keyj) is permanently disabled.
  • the MTCj_MAX is a predetermined maximum value agreed upon by the sender and customer for each MTC. This methodology allows one or more key-read attempts before or after the MTC value is checked. This permits, for example, discovery of any unauthorized access, and at the same time ensures that the OEM/customer has a few opportunities to read the keys before reading is disabled.
  • if one or more MTCs have an unexpected value (e.g., the initial value and read value do not match after adjustments for the number of known read and power-on operations), the device is discarded.
  • the related MTCk are cleaned up and can be re-used for other purposes (e.g., cryptographic processing functions, etc., as configured and desired by the system designer).
  • device/board usage is monitored.
  • Such MTC counters are incremented at each power-up of the device.
  • Each counter can be initialized to zero (or to another initial value, if desired).
  • the receiver end user can use the received MTCk values to obtain various information regarding system 351 such as, for example: detection of unauthorized use of component/board, determining that a component has been desoldered and used outside the board to hack it, power cycle count to implement a fee mechanism based on a customer’s services, device board warranty policies, and power loss.
  • implementing the fee mechanism involves associating a value to a specific usage. An application can be provided to a customer, and the customer must pay a fee to unlock the specific usage.
  • a secret key is shared between different devices on the same system board (e.g., shared in the field when being used by an end user).
  • the key is shared by exchange between components on the same board (e.g., between processors).
  • each of one or more monotonic counters is initialized by setting its initial value to zero.
  • the MTCk is a counter associated with a specific key, indicated by the number k.
  • Each k indicates one of the counter numbers, which corresponds to the number of internal keys stored in the device. The value of each counter is stored in a non-volatile memory area of the device.
  • a power-on (sometimes also referred to as a power-up) of the device is detected.
  • the device includes internal circuitry to measure a value of the power supply. When the power supply exceeds a certain threshold, the circuitry triggers an internal signal to indicate the presence (detection) of the power-on event. This signal can cause incrementing of the monotonic counters.
  • an attempt to read the secret key is detected.
  • the counters (MTCk) are incremented each time that the power is detected (as mentioned above), and further each time that a command sequence to read the key is recognized by the device interface. Knowing the initial MTCk values, when the shipping of the device is done, permits the final receiver (e.g., end user/customer) of the device to know the device (and counter) status. So, if during transit, there was an attempt to power-on and/or read the device, this generates a variation in the counter value stored in each MTCk.
  • a command sequence is used to read the key from the device.
  • the device has an external interface that accepts commands from other computing components or devices. The key is available for reading until a maximum counter value is reached.
  • the device has a sequence at the interface: Command (e.g., 0x9b) + Argument (0x34) + signature. The device understands that the key is to be read and provided at the output.
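The example interface sequence above (one command byte, one argument byte, then a signature) can be sketched as a parser. The frame layout and function name are illustrative assumptions based only on the example bytes given; real devices may use a different framing:

```python
READ_KEY_CMD = 0x9B   # example command byte from the description
READ_KEY_ARG = 0x34   # example argument byte from the description

def parse_read_key_sequence(frame: bytes):
    """Split an interface frame into (command, argument, signature).

    Recognizes the example read-key sequence: command 0x9B, argument
    0x34, followed by a signature of arbitrary length.
    """
    if len(frame) < 2:
        raise ValueError("frame too short")
    command, argument, signature = frame[0], frame[1], frame[2:]
    if (command, argument) != (READ_KEY_CMD, READ_KEY_ARG):
        raise ValueError("not a read-key sequence")
    return command, argument, signature
```

When the device interface recognizes this sequence, it understands that the key is to be read and provided at the output (and, per the description above, the associated MTC increments).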
  • the values MTC0, MTC1, MTC2 are sent/transmitted to the customer as a set of initial values.
  • the customer uses a command sequence to read the values.
  • a predetermined maximum counter value is preset.
  • the MTC associated with a key increments on power-up and on an attempt to read the key. Each time that one of the two events happens, the MTC increments.
  • the monotonic counters are cleaned up after final reading of the stored keys (and optionally can be re-used for other processing on the device).
  • the system designer can configure this as desired.
  • Final reading is, for example, when the purpose of the MTC counting, as a detector of a malicious event, is no longer needed. This releases the counters as resources, which can be used for other purposes, or the counters can remain to count.
  • the monotonic counter can be used to get various types of information such as a component being de-soldered, implementing a fee mechanism, implementing a warranty, etc.
  • the counters can be used to monitor different types of occurrences because the incrementing of the counter value can be externally triggered.
  • a multiple key read option is provided.
  • the maximum thresholds are set to allow the read of each key by the receiver of the component and/or the final user.
  • the multiple read option allows changing the threshold according to the maximum number of attempts to read a particular key (which may differ for each key).
  • various types of devices can use the above methods.
  • a CPU or MCU typically has an internal hardware security module that is not accessible from outside.
  • the CPU/MCU benefits from the distribution/sharing of the key to an authorized entity, device, or component. This sharing allows reading of the key (e.g., this sharing corresponds to the programming of a value in the MTCk threshold).
  • the above methods are used for customer firmware injection.
  • the above methods allow storing critical content inside a device (e.g., application firmware) and implementing movement of the device in an un-secured environment (without compromising the integrity of the firmware/device).
  • for firmware injection, an unexpected change of the MTCk counter value during transport is used as an index to indicate that firmware compromise has occurred.
  • FIG. 22 shows a method for generating an identity for a computing device using a physical unclonable function (PUF), according to one embodiment.
  • the method of FIG. 22 can be implemented in the system of FIG. 20.
  • the method of FIG. 22 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
  • the method of FIG. 22 is performed at least in part by processor 143 of FIG. 20.
  • At block 2211, at least one value is provided by at least one physical unclonable function (PUF).
  • PUF 2005 provides a value as an output.
  • a device secret is generated using a key derivative function (KDF).
  • the at least one value provided by the at least one PUF is used as an input to the KDF.
  • the output from PUF 2005 is used as an input to KDF 2007.
  • the generated device secret is stored.
  • KDF 2007 generates an output that is stored as device secret 149.
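The KDF step can be sketched with HMAC-SHA256 as the key derivative function. HMAC-SHA256 is one possible MAC-based instantiation consistent with the embodiments below (PUF value as one input, a stored secret key as another, plus optional additional inputs such as a freshness or a hashed user pattern); it is not identified in the disclosure as the specific KDF used:

```python
import hashlib
import hmac

def derive_device_secret(puf_value: bytes, secret_key: bytes,
                         extra_inputs: bytes = b"") -> bytes:
    """Derive a device secret (e.g., a UDS) from a PUF output.

    The stored secret key keys the MAC; the PUF value, concatenated
    with any additional inputs, forms the message.
    """
    return hmac.new(secret_key, puf_value + extra_inputs,
                    hashlib.sha256).digest()

# The same PUF value and key always yield the same secret; changing
# either input changes the derived secret.
s1 = derive_device_secret(b"puf-output", b"device-key")
s2 = derive_device_secret(b"puf-output", b"device-key")
assert s1 == s2 and len(s1) == 32
assert s1 != derive_device_secret(b"other-puf", b"device-key")
```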
  • a method comprises: generating, by a computing device, a device secret (e.g., a UDS stored as device secret 149), the generating comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, using a key derivative function (e.g., KDF 2007), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and storing, in memory of the computing device, the generated device secret.
  • storing the generated device secret comprises replacing a previously-stored device secret in the memory with the generated device secret.
  • the KDF is a hash, or a message authentication code.
  • the method further comprises storing a secret key used to communicate with a host device, wherein the KDF is a message authentication code (MAC).
  • the at least one value provided by the at least one PUF is a first input to the MAC, and the secret key is a second input to the MAC.
  • the device secret is generated in response to an event, and the event is receiving, by the computing device, of a command from a host device.
  • the method further comprises receiving a host public key from the host device, encrypting the generated device secret using the host public key, and sending the encrypted device secret to the host device.
  • the method further comprises after sending the encrypted device secret to the host device, permanently disabling read access to the device secret in the memory.
  • the method further comprises obfuscating the device secret prior to sending the encrypted device secret to the host device.
  • the method further comprises storing, by the computing device in memory, a secret key (e.g., secret key 2013), wherein the command is authenticated, by the computing device, using a message authentication code (MAC).
  • the host device sends a signature for authenticating the command, the signature is generated using the MAC, and a freshness, generated by the host device, is used as an input to the MAC.
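Command authentication with a MAC and a host-generated freshness can be sketched as follows. HMAC-SHA256 and the function names are illustrative assumptions; the disclosure specifies only that a shared secret key and a freshness feed the MAC:

```python
import hashlib
import hmac

def sign_command(secret_key: bytes, command: bytes,
                 freshness: bytes) -> bytes:
    """Host side: sign a command with the shared secret key and a
    freshness value (e.g., a nonce or counter) to prevent replay."""
    return hmac.new(secret_key, freshness + command,
                    hashlib.sha256).digest()

def authenticate_command(secret_key: bytes, command: bytes,
                         freshness: bytes, signature: bytes) -> bool:
    """Device side: recompute the MAC and compare in constant time."""
    expected = hmac.new(secret_key, freshness + command,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

A replayed signature fails because the freshness value differs on the next exchange, even though the command bytes are identical.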
  • the freshness is an additional input to the KDF.
  • the host device provides a user pattern with the command, and wherein a hash of the user pattern is an additional input to the KDF.
  • the method further comprises authenticating, by the computing device, the command prior to generating the device secret.
  • the method further comprises storing a unique identifier of the computing device, wherein the unique identifier is an additional input to the KDF.
  • the device secret is generated in response to an event, and the event is detection, by the computing device, of usage of a computing system.
  • usage is execution of an application by the computing system. In one embodiment, the usage is initiation of a boot loading process.
  • the device secret is generated in response to an event, and the event is a time-scheduled event.
  • a system comprises: at least one processor; and memory containing instructions configured to instruct the at least one processor to: generate a device secret, the generating comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, using a key derivative function (KDF), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and store, in memory of the computing device, the generated device secret.
  • the instructions are further configured to instruct the at least one processor to: receive a replace command from a host device; and send, to the host device, a public identifier generated using the generated device secret; wherein the device secret is generated in response to receiving the replace command; wherein storing the generated device secret comprises replacing a previously-stored device secret with the generated device secret.
  • a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to at least: provide, by at least one physical unclonable function (PUF), at least one value; generate, using a key derivative function (KDF), a device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and store, in memory, the generated device secret.
  • a non-transitory computer storage medium can be used to store instructions of the firmware 104, or to store instructions for processor 143 or processing device 111.
  • When the instructions are executed by, for example, the controller 107 of the memory device 103 or computing device 603, the instructions cause the controller 107 to perform any of the methods discussed above.
  • the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using Application- Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA).
  • Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
  • At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor or microcontroller, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.”
  • the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
  • a tangible, non-transitory computer storage medium can be used to store software and data which, when executed by a data processing system, causes the system to perform various methods.
  • the executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
  • the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to- peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine- readable medium in their entirety at a particular instance of time.
  • Examples of computer-readable storage media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROM)), among others.
  • the instructions may be embodied in a transitory medium, such as electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.
  • a transitory medium is typically used to transmit instructions, but not viewed as capable of storing the instructions.
  • hardwired circuitry may be used in combination with software instructions to implement the techniques.
  • the techniques are neither limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)

Abstract

A method includes: generating, by a computing device, a device secret, the generating comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, using a key derivative function (KDF), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and storing, in memory of the computing device, the generated device secret.

Description

GENERATING AN IDENTITY FOR A COMPUTING DEVICE
USING A PHYSICAL UNCLONABLE FUNCTION
RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Pat. App. Ser. No. 16/363,204, filed Mar. 25, 2019 and entitled “GENERATING AN IDENTITY FOR A COMPUTING DEVICE USING A PHYSICAL UNCLONABLE FUNCTION,” the entire disclosure of which is hereby incorporated herein by reference.
[0002] This application is related to U.S. Non-Provisional Application Serial No. 15/970,660, filed 05/03/2018, entitled “KEY GENERATION AND SECURE STORAGE IN A NOISY ENVIRONMENT,” by Pisasale et al., the entire contents of which application is incorporated by reference as if fully set forth herein.
[0003] This application is related to U.S. Non-Provisional Application Serial No. 15/853,498, filed 12/22/2017, entitled “PHYSICAL UNCLONABLE FUNCTION USING MESSAGE AUTHENTICATION CODE,” by Mondello et al., the entire contents of which application is incorporated by reference as if fully set forth herein.
[0004] This application is related to U.S. Non-Provisional Application Serial No. 15/965,731, filed 27-APR-2018, entitled “SECURE DISTRIBUTION OF SECRET KEY USING A MONOTONIC COUNTER,” by Mondello et al., the entire contents of which application is incorporated by reference as if fully set forth herein.
FIELD OF THE TECHNOLOGY
[0005] At least some embodiments disclosed herein relate to identity for computing devices in general, and more particularly, but not limited to, generating an identity for a computing device using a physical unclonable function.
BACKGROUND
[0006] A physical unclonable function (PUF) provides, for example, a digital value that can serve as a unique identity for a semiconductor device, such as a
microprocessor. PUFs are based, for example, on physical variations which occur naturally during semiconductor manufacturing, and which permit differentiating between otherwise identical semiconductor chips. [0007] PUFs are typically used in cryptography. A PUF can be, for example, a physical entity that is embodied in a physical structure. PUFs are often
implemented in integrated circuits, and are typically used in applications with high security requirements. For example, PUFs can be used as a unique and
untamperable device identifier. PUFs can also be used for secure key generation, and as a source of randomness.
[0008] In one example related to device identification, the Microsoft® Azure® IoT platform is a set of cloud services provided by Microsoft. The Azure® IoT platform supports Device Identity Composition Engine (DICE) and many different kinds of Hardware Security Modules (HSMs). DICE is an upcoming standard at Trusted Computing Group (TCG) for device identification and attestation which enables manufacturers to use silicon gates to create device identification based in hardware. HSMs are used to secure device identities and provide advanced functionality such as hardware-based device attestation and zero touch provisioning.
[0009] DICE offers a scalable security framework that uses an HSM footprint to anchor trust for use in building security solutions like authentication, secure boot, and remote attestation. DICE is useful for the current environment of constrained computing that characterizes IoT devices, and provides an alternative to more traditional security framework standards like the Trusted Computing Group’s (TCG) Trusted Platform Module (TPM). The Azure® IoT platform has HSM support for DICE in HSMs from some silicon vendors.
[0010] In one example related to trust services, the Robust Internet-of-Things (RIoT) is an architecture for providing trust services to computing devices. The trust services include device identity, attestation, and data integrity. The RIoT architecture can be used to remotely re-establish trust in devices that have been compromised by malware. Also, RIoT services can be provided at low cost on even very small devices.
[0011] Improving security techniques have created a need for more frequent software updates to products in the field. However, these updates must be administered and verified without human involvement. RIoT can be used to address these technical problems.
[0012] RIoT provides a foundation for cryptographic operations and key management for many security scenarios. Authentication, integrity verification, and data protection require cryptographic keys to encrypt and decrypt, as well as mechanisms to hash and sign data. Most internet-connected devices also use cryptography to secure communication with other devices.
[0013] The cryptographic services provided by RIoT include device identity, data protection, and attestation. Regarding device identity, devices typically authenticate themselves by proving possession of a cryptographic key. If the key associated with a device is extracted and cloned, then the device can be impersonated.
[0014] Regarding data protection, devices typically use cryptography to encrypt and integrity protect locally stored data. If the cryptographic keys are only accessible to authorized code, then unauthorized software is not able to decrypt or modify the data.
[0015] Regarding attestation, devices sometimes need to report code they are running and their security configuration. For example, attestation is used to prove that a device is running up-to-date code.
[0016] If keys are managed in software alone, then bugs in software components can result in key compromise. For software-only systems, the primary way to restore trust following a key compromise is to install updated software and provision new keys for the device. This is time consuming for server and mobile devices, and not possible when devices are physically inaccessible.
[0017] Some approaches to secure remote re-provisioning use hardware-based security. Software-level attacks can allow hackers to use hardware-protected keys but not extract them, so hardware-protected keys are a useful building block for secure re-provisioning of compromised systems. The Trusted Platform Module, or TPM, is an example of a security module that provides hardware protection for keys, and also allows the device to report (attest to) the software it is running. Thus, a compromised TPM-equipped device can be securely issued new keys, and can provide attestation reports.
[0018] TPMs are widely available on computing platforms (e.g., using SoC-integrated and processor-mode-isolated firmware TPMs). However, TPMs are often impractical. For example, a small IoT device is not able to support a TPM without a substantial increase in cost and power needs.
[0019] RIoT can be used to provide device security for small computing devices, but it can also be applied to any processor or computer system. If software components outside of the RIoT core are compromised, then RIoT provides for secure patching and re-provisioning. RIoT also uses a different approach to cryptographic key protection. The most-protected cryptographic keys used by the RIoT framework are only available briefly during boot.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
[0021] FIG. 1 shows a host device that verifies the identity of a computing device, according to one embodiment.
[0022] FIG. 2 shows an example computing system having an identity component and a verification component, according to one embodiment.
[0023] FIG. 3 shows an example computing device of a vehicle, according to one embodiment.
[0024] FIG. 4 shows an example host device communicating with an example computing device of a vehicle, according to one embodiment.
[0025] FIG. 5A shows an application board that generates an identifier, certificate, and key for a host device, according to one embodiment.
[0026] FIG. 5B shows an example computing system booting in stages using layers, according to one embodiment.
[0027] FIG. 6 shows an example computing device generating an identifier, certificate, and key using asymmetric generators, according to one embodiment.
[0028] FIG. 7 shows a verification component that verifies the identity of a computing device using decryption operations, according to one embodiment.
[0029] FIG. 8 shows a block diagram of an example process to verify a certificate, according to one embodiment.
[0030] FIG. 9 shows a method to verify an identity of a computing device using an identifier, certificate, and a key, according to one embodiment.
[0031] FIG. 10 shows a system for generating a unique key from an output of a message authentication code (MAC) that receives an input from a physical unclonable function (PUF) device, according to one embodiment.
[0032] FIG. 11 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices selected by a selector module, according to one embodiment.
[0033] FIG. 12 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like NONCE, time-stamp, etc.), according to one embodiment.
[0034] FIG. 13 shows a method to generate an output from a MAC that uses one or more input values provided from one or more PUFs, according to one embodiment.
[0035] FIG. 14 shows a system for generating a root key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like NONCE, time-stamp, etc.), and that adds an additional MAC to generate a session key, according to one embodiment.
[0036] FIG. 15 shows a computing device for storing an obfuscated key in non-volatile memory, according to one embodiment.
[0037] FIG. 16 shows an example of an intermediate key generated during an obfuscation process, according to one embodiment.
[0038] FIG. 17 shows an example of another intermediate key generated during the obfuscation process of FIG. 16, according to one embodiment.
[0039] FIG. 18 shows a method for generating and storing an obfuscated key in a non-volatile memory, according to one embodiment.
[0040] FIG. 19 shows a computing device for generating an initial key based on key injection, obfuscating the initial key, and storing the obfuscated key in non-volatile memory, according to one embodiment.
[0041] FIG. 20 shows a computing device for generating an identity using a physical unclonable function (PUF), according to one embodiment.
[0042] FIG. 21 shows a system that sends an initial value provided by a monotonic counter of the system for use in determining whether tampering with the system has occurred, according to one embodiment.
[0043] FIG. 22 shows a method for generating an identity for a computing device using a physical unclonable function (PUF), according to one embodiment.
DETAILED DESCRIPTION
[0044] At least some embodiments herein relate to verification of identity for one or more computing devices. In various embodiments, a host device verifies the identity of a computing device by sending a message to the computing device. The computing device uses the message to generate an identifier, a certificate, and a key, which are sent to the host device. The host device uses the generated identifier, certificate, and key to verify the identity of the computing device.
[0045] Other embodiments relate to generating an identity for a computing device using a physical unclonable function (PUF). In various embodiments described below, prior to the host device verifying the identity as described above, the computing device above can generate its self-identity using at least one PUF.
Various embodiments regarding generating an identity using one or more PUFs are described in the section below titled “Generating an Identity for a Computing Device Using a PUF”.
[0046] In some examples related to verification of identity, the computing device can be a flash memory device. In some examples, flash memory is leveraged to add a strong level of security capability in a computing system (e.g., an application controller of an autonomous vehicle).
[0047] Flash memory is used in numerous computer systems. Various types of flash memory exist today, including serial NOR, parallel NOR, serial NAND, parallel NAND, e.MMC, UFS, etc. These sockets are used in most embedded systems across various industries and applications.
[0048] For example, serial NOR is used in a wide array of applications like medical devices, factory automation boards, automotive ECUs, smart meters, and internet gateways. Given the diversity of chipset architectures (processors, controllers, or SoCs), operating systems, and supply chains used across these applications, flash memory is a common-denominator building block in these systems.
[0049] Computer system resilience today is typically characterized by the location of roots of trust integrated into devices and leveraged by the solution for the security functions they provide. For more information on roots of trust, see the definition created by the National Institute of Standards and Technology (NIST) in Special Publication 800-164. Existing industry uses varied implementations of roots of trust at the system level, using a mix of hardware and software capabilities, resulting in the technical problems of fragmentation of approaches and a confusing level of security. This perplexing array of options also suffers from the key limitation of how to defend the non-volatile memory where critical code and data is stored.
[0050] Existing approaches rely on the processor and other secure elements like hardware security modules (HSMs) to offer critical security services to their systems. This has created a security gap at the lowest levels of boot in many systems where discrete flash memory components store system-critical code and data. The flash has become the target for many hackers to create Advanced Persistent Threats (APTs) that can mask themselves from higher levels of code and resist removal. In many of these cases, flash memory is re-imaged or rewritten with new malicious code, which undermines the integrity of that device.
[0051] Various embodiments of the present disclosure related to verification of identity provide a technological solution to the above technical problems. In some embodiments, a computing device integrates hardware-based roots of trust into a flash memory device, enabling strong cryptographic identity and health management for IoT devices. By moving essential security primitives in-memory, it becomes simpler to protect the integrity of code and data housed within the memory itself. This approach can significantly enhance system level security while minimizing the complexity and cost of implementations.
[0052] In one embodiment, a new IoT device management capability leverages flash memory by enabling device onboarding and management by the Microsoft® Azure® IoT cloud using flash memory and associated software. In one example, the solutions provide a cryptographic identity that becomes the basis for critical device provisioning services (e.g., the Azure IoT Hub Device Provisioning Service (DPS)). In one example, this DPS along with the enabled memory can enable zero-touch provisioning of devices to the correct IoT hub as well as other services.
[0053] In some embodiments, to implement the above capability, the Device Identity Composition Engine (DICE) is used (DICE is an upcoming standard from the Trusted Computing Group (TCG)). In one example, the enabled memory permits only trusted hardware to gain access to the Microsoft Azure IoT cloud. In one example, the health and identity of an IoT device is verified in memory where critical code is typically stored. The unique identity of each IoT device can now offer end-to-end device integrity at a new level, starting at the boot process. This can enable additional functionality like hardware-based device attestation and provisioning as well as administrative remediation of the device if necessary.
[0054] In one embodiment, a method includes: receiving, by a computing device (e.g., a serial NOR flash memory device), a message from a host device (e.g., a CPU, GPU, FPGA, or an application controller of a vehicle); generating, by the computing device, an identifier (e.g., a public identifier IDu public), a certificate (e.g., IDu certificate), and a key (e.g., Ku public), wherein the identifier is associated with an identity of the computing device, and the certificate is generated using the message; and sending, by the computing device, the identifier, the certificate, and the key to the host device, wherein the host device is configured to verify the identity of the computing device using the identifier, the certificate, and the key.
[0055] In some embodiments, the computing device above (e.g., a flash memory device) integrates DICE-RIoT functionality, which is used to generate the identifier, certificate, and key described above and used by the host device to verify the identity of the computing device. In one example, the computing device stores a device secret that acts as a primitive key on which the sequence of identification steps between layers of the DICE-RIoT protocol is based. In one example, layers L0 and L1 of the DICE-RIoT functionality are implemented in the computing device using hardware and/or software. In one example, layer L0 is implemented solely in hardware.
[0056] FIG. 1 shows a host device 151 that verifies the identity of a computing device 141 , according to one embodiment. Host device 151 sends a message to the computing device 141. In one embodiment, host device 151 includes a freshness mechanism (not shown) that generates a freshness for use in sending messages to the computing device 141 to avoid replay attacks. In one example, each message sent to the computing device 141 includes a freshness generated by a monotonic counter.
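The replay-protection role of the freshness described above can be sketched as follows. This is a minimal illustration only: the shared secret, message format, counter width, and class names are assumptions made for the sketch, not part of this disclosure.

```python
import hmac
import hashlib

# Placeholder shared secret for the sketch (not from this disclosure).
SHARED_SECRET = b"example-shared-secret"

class Host:
    """Tags each outgoing message with a monotonic counter value (freshness)."""
    def __init__(self):
        self._counter = 0  # monotonic counter, never decreases

    def make_message(self, payload: bytes):
        self._counter += 1
        tag = hmac.new(SHARED_SECRET,
                       payload + self._counter.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        return payload, self._counter, tag

class Device:
    """Accepts a message only if its tag is valid and its freshness is new."""
    def __init__(self):
        self._last_seen = 0

    def accept(self, payload: bytes, freshness: int, tag: bytes) -> bool:
        expected = hmac.new(SHARED_SECRET,
                            payload + freshness.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        # Reject a bad tag, or a counter value already used (replay).
        if not hmac.compare_digest(tag, expected) or freshness <= self._last_seen:
            return False
        self._last_seen = freshness
        return True

host, device = Host(), Device()
msg = host.make_message(b"verify-identity")
assert device.accept(*msg) is True    # first delivery accepted
assert device.accept(*msg) is False   # replayed message rejected
```

Because the counter only moves forward, a captured message cannot be replayed later: its freshness value is no longer greater than the device's last-seen value.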
[0057] In one example, the message is an empty string, a conventional known string (e.g., alphanumeric string known to the manufacturer or operator of host device 151 ), or can be another value (e.g., an identity value assigned to the computing device). In one example, the message is a unique identity of the device (UID).
[0058] In response to receiving the message, computing device 141 generates an identifier, a certificate, and a key. The identifier is associated with an identity of the computing device 141. Computing device 141 includes one or more processors 143 that control the operation of identity component 147 and/or other functions of computing device 141.
[0059] The identifier, the certificate, and the key are generated by identity component 147 and are based on device secret 149. In one example, device secret 149 is a unique device secret (UDS) stored in memory of computing device 141. In one example, identity component 147 uses the UDS as a primitive key for implementation of the DICE-RIoT protocol. The identifier, certificate, and key are outputs from layer L1 of the DICE-RIoT protocol (see, e.g., FIG. 6). In one embodiment, the identity of layer L1 corresponds to the identity of computing device 141 itself, the manufacturer of computing device 141, the manufacturer of a thing that includes computing device 141 as a component, and/or an application or other software stored in memory of computing device 141. In one example, the application identity (e.g., an ID number) is for a mobile phone, a TV, an STB, etc., for which a unique combination of characters and numbers is used to identify the thing.
[0060] In one example, the identity of layer L1 is an ASCII string. For example, the identity can be a manufacturer name concatenated with a thing name (e.g., LG | TV_model_123_year_2018, etc.). In one example, the identity can be represented in hexadecimal form (e.g., 53 61 6D 73 75 6E 67 20 7C 20 54 56 5F 6D 6F 64 65 6C 5F 31 32 33 5F 79 65 61 72 5F 32 30 31 38).
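The hexadecimal form above is simply the ASCII encoding of a "manufacturer | thing" identity string, which can be reproduced as follows (the specific string used here is inferred by decoding the hex bytes of the example, and is illustrative only):

```python
# A layer L1 identity is an ASCII string; its hexadecimal form is the
# ASCII encoding of that string, byte by byte.
identity = "Samsung | TV_model_123_year_2018"
hex_form = identity.encode("ascii").hex(" ").upper()
print(hex_form)
# 53 61 6D 73 75 6E 67 20 7C 20 54 56 5F 6D 6F 64 65 6C 5F 31 32 33 5F 79 65 61 72 5F 32 30 31 38
```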
[0061] In one embodiment, a manufacturer can use a UDS for a class or set of items that are being produced. In other embodiments, each item can have its own unique UDS. For example, the UDS for a TV can be UDS = 0x12234...4444, and the UDS for a laptop can be UDS = 0xaabb...00322.
[0062] In one embodiment, the device secret 149 is a secret key stored by computing device 141 in memory 145. Identity component 147 uses the secret key as an input to a message authentication code (MAC) to generate a derived secret.
In one example, the derived secret is the fused derived secret (FDS) of the DICE-RIoT protocol.
[0063] In one example, memory 145 includes read-only memory (ROM) that stores initial boot code for booting computing device 141. The FDS is a key provided to the initial boot code by processor 143 during a booting operation. In one example, the ROM corresponds to layer L0 of the DICE-RIoT protocol.
[0064] Host device 151 uses the identifier, certificate, and key as inputs to a verification component 153, which verifies the identity of the computing device 141.
In one embodiment, verification component 153 performs at least one decryption operation using the identifier to provide a result. The result is compared to the key to determine whether the identity of the computing device 141 is valid. If so, host device 151 performs further communications with computing device 141 using the key received from computing device 141. For example, once host device 151 verifies the “triple” (the identifier, certificate, and key), the key can be used to attest any other information exchanged between computing device 141 and host device 151.
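The check on the triple can be sketched as follows. Note the simplification: this stand-in uses symmetric HMAC binding and assumes (hypothetically) that the verifier knows the device secret, whereas the verification described above uses decryption with the received key material. All function names and derivation labels are illustrative.

```python
import hmac
import hashlib

def make_triple(device_secret: bytes, host_message: bytes):
    """Device side: derive an identifier, a key, and a certificate that
    binds the identifier and key to the host's message."""
    identifier = hashlib.sha256(device_secret + b"|id").digest()
    key = hashlib.sha256(device_secret + b"|key").digest()
    certificate = hmac.new(device_secret,
                           identifier + key + host_message,
                           hashlib.sha256).digest()
    return identifier, certificate, key

def verify_triple(device_secret: bytes, host_message: bytes,
                  identifier: bytes, certificate: bytes, key: bytes) -> bool:
    """Host side: recompute the binding and compare it to the received
    certificate; a match confirms the triple came from a holder of the secret."""
    expected = hmac.new(device_secret,
                        identifier + key + host_message,
                        hashlib.sha256).digest()
    return hmac.compare_digest(certificate, expected)
```

Once the triple verifies, the returned key can serve to attest subsequent messages, mirroring the use of the received key described above.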
[0065] In one embodiment, a digital identification is assigned to numerous “things” (e.g., as per the Internet of Things). In one example, the thing is a physical object such as a vehicle or a physical item present inside the vehicle. In one example, the thing is a person or animal. For example, each person or animal can be assigned a unique digital identifier.
[0066] In some cases, manufacturers of products desire that each product can be proved as being genuine. Presently, this problem is solved by buying things only from a trusted seller, or buying things from others with some kind of legal certificate that ensures the thing purchased is genuine. However, in the case of theft of a thing, if the thing does not have an electronic identity, it is difficult to block or localize the thing so that the thing is not used improperly. In one example, localization is based on identity when the thing tries to interact with public infrastructures. In one example, blocking is based on the inability to prove the identity of a thing that wants to use a public infrastructure.
[0067] In one embodiment, computing device 141 implements the DICE-RIoT protocol using identity component 147 in order to associate unique signatures to a chain of trust corresponding to computing device 141. Computing device 141 establishes layers L0 and L1. The chain of trust is continued by host device 151, which establishes layers L2 and beyond. In one example, a unique identifier can be assigned to every object, person, and animal in any defined environment (e.g., a trust zone defined by geographic parameters).
[0068] In one embodiment, computing device 141 is a component in the thing that is desired to be assigned an identity. For example, the thing can be an autonomous vehicle including computing device 141. For example, computing device 141 can be flash memory that is used by an application controller of the vehicle.
[0069] When the computing device 141 is manufactured, the manufacturer can inject a UDS into memory 145. In one example, the UDS can be agreed to and shared with a customer that will perform additional manufacturing operations using computing device 141. In another example, the UDS can be generated randomly by the original manufacturer and then communicated to the customer using a secure infrastructure (e.g., over a network such as the internet).
[0070] In one example, the customer can be a manufacturer of a vehicle that incorporates computing device 141. In many cases, the vehicle manufacturer desires to change the UDS so that it is unknown to the seller of computing device 141. In such cases, the customer can replace the UDS using an authenticated replace command that is provided by host device 151 to computing device 141.
[0071] In some embodiments, the customer can inject customer immutable information into memory 145 of computing device 141. In one example, the immutable info is used to generate a unique FDS, and is not solely used as a differentiator. The customer immutable information is used to differentiate various objects that are manufactured by the customer. For example, customer immutable information can be a combination of letters and/or numbers to define primitive information (e.g., a combination of some or all of the following information: date, time, lot position, wafer position, x, y location in a wafer, etc.).
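One possible composition of the customer immutable information from primitive fields such as those listed above can be sketched as follows. The field names, values, and separator are assumptions made for this illustration, not a format defined by this disclosure.

```python
# Hypothetical primitive fields of the kind enumerated above
# (date, time, lot position, wafer position, x/y location in the wafer).
fields = {
    "date": "2020-03-04",
    "time": "10:15:00",
    "lot": "LOT1234",      # lot position
    "wafer": "W07",        # wafer position
    "xy": "x012y034",      # x, y location in the wafer
}

# Concatenate the fields into one immutable-information string that
# differentiates objects manufactured by the customer.
immutable_info = "|".join(f"{k}={v}" for k, v in fields.items())
print(immutable_info)
# date=2020-03-04|time=10:15:00|lot=LOT1234|wafer=W07|xy=x012y034
```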
[0072] For example, in many cases, the immutable information also includes data from cryptographic feature configuration performed by a user (e.g., a customer who receives a device from a manufacturer). This configuration or setting can be done only by using authenticated commands (commands that need the knowledge of a key to be executed). The user has knowledge of the key (e.g., based on being provided the key over a secure infrastructure from the manufacturer). The immutable information represents a form of cryptographic identity of a computing device, which is different from the unique ID (UID) of the device. In one example, the inclusion of the cryptographic configuration in the immutable set of information provides the user with a tool useful to self-customize the immutable information.
[0073] In one embodiment, computing device 141 includes a freshness mechanism that generates a freshness. The freshness can be provided with the identifier, certificate, and key when sent to host device 151. The freshness can also be used with other communications with host device 151.
[0074] In one embodiment, computing device 141 is a component on an application board. Another component (not shown) on the application board can verify the identity of computing device 141 using knowledge of device secret 149 (e.g., knowledge of an injected UDS). The component requests that computing device 141 generate an output using a message authentication code in order to prove possession of the UDS. For example, the message authentication code can be as follows: HMAC (UDS, “application board message | freshness”)
[0075] In another embodiment, the FDS can also be used as criteria to prove the possession of the device (e.g., the knowledge of the secret key(s)). The FDS is derived from the UDS in this way: FDS = HMAC-SHA256 [UDS, SHA256(“Identity of L1”)]
So, the message authentication code can be as follows: HMAC (FDS, “application board message | freshness”)
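The two-step derivation above can be transcribed directly using standard HMAC-SHA256 primitives. The UDS value, identity string, message, and freshness value below are placeholders chosen for the sketch, not values from this disclosure.

```python
import hmac
import hashlib

# Placeholder 256-bit unique device secret (UDS).
uds = bytes.fromhex("12234444" * 8)

# FDS = HMAC-SHA256 [UDS, SHA256("Identity of L1")]
l1_identity_digest = hashlib.sha256(b"Identity of L1").digest()
fds = hmac.new(uds, l1_identity_digest, hashlib.sha256).digest()

# Proof of possession: HMAC (FDS, "application board message | freshness")
message = b"application board message | 42"
proof = hmac.new(fds, message, hashlib.sha256).digest()

print(len(fds), len(proof))
# 32 32
```

Because the FDS depends on both the UDS and the layer L1 identity, changing either input yields a different FDS, and hence a different proof over the same message.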
[0076] FIG. 2 shows an example computing system having an identity component 107 and a verification component 109, according to one embodiment. A host system 101 communicates over a bus 103 with a memory system 105. A processing device 111 of memory system 105 has read/write access to memory regions 111, 113, ..., 119 of non-volatile memory 121. In one example, host system 101 also reads data from and writes data to volatile memory 123. In one example, identity component 107 supports layers L0 and L1 of the DICE-RIoT protocol. In one example, non-volatile memory 121 stores boot code.
[0077] Verification component 109 is used to verify an identity of memory system 105. Verification component 109 uses a triple including an identifier, certificate, and key generated by identity component 107 in response to receiving a host message from host system 101, for example as described above.
[0078] Identity component 107 is an example of identity component 147 of FIG. 1. Verification component 109 is an example of verification component 153 of FIG. 1.
[0079] Memory system 105 includes key storage 157 and key generators 159.
In one example, key storage 157 can store root keys, session keys, a UDS (DICE-RIoT), and/or other keys used for cryptographic operations by memory system 105.
[0080] In one example, key generators 159 generate a public key sent to host system 101 for use in verification by verification component 109. The public key is sent as part of a triple that also includes an identifier and certificate, as described above.
[0081] Memory system 105 includes a freshness generator 155. In one example, freshness generator 155 can be used for authenticated commands. In one example, multiple freshness generators 155 can be used. In one example, freshness generator 155 is available for use by host system 101.
[0082] In one example, the processing device 111 and the memory regions 111, 113, ..., 119 are on the same chip or die. In some embodiments, the memory regions store data used by the host system 101 and/or the processing device 111 during machine learning processing or other run-time data generated by software process(es) executing on host system 101 or on processing device 111.
[0083] The computing system can include a write component in the memory system 105 that selects a memory region 111 (e.g., a recording segment of flash memory) for recording new data from host system 101. The computing system 100 can further include a write component in the host system 101 that coordinates with the write component 107 in the memory system 105 to at least facilitate selection of the memory region 111.
[0084] In one example, volatile memory 123 is used as system memory for a processing device (not shown) of host system 101. In one embodiment, a process of host system 101 selects memory regions for writing data. In one example, the host system 101 can select a memory region based in part on data from sensors and/or software processes executing on an autonomous vehicle. In one example, the foregoing data is provided by the host system 101 to processing device 111, which selects the memory region.
[0085] In some embodiments, host system 101 or processing device 111 includes at least a portion of the identity component 107 and/or verification component 109. In other embodiments, or in combination, the processing device 111 and/or a processing device in the host system 101 includes at least a portion of the identity component 107 and/or verification component 109. For example, processing device 111 and/or a processing device of the host system 101 can include logic circuitry implementing the identity component 107 and/or verification component 109. For example, a controller or processing device (e.g., a CPU, FPGA, or GPU) of the host system 101 can be configured to execute instructions stored in memory for performing the operations of the identity component 107 and/or verification component 109 described herein.
[0086] In some embodiments, the identity component 107 is implemented in an integrated circuit chip disposed in the memory system 105. In other embodiments, the verification component 109 in the host system 101 is part of an operating system of the host system 101 , a device driver, or an application.
[0087] An example of memory system 105 is a memory module that is connected to a central processing unit (CPU) via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc. In some embodiments, the memory system can be a hybrid memory/storage system that provides both memory functions and storage functions. In general, a host system can utilize a memory system that includes one or more memory regions. The host system can provide data to be stored at the memory system and can request data to be retrieved from the memory system. In one example, a host can access various types of memory, including volatile and non-volatile memory.
The host system 101 can be a computing device such as a controller in a vehicle, a network server, a mobile device, a cellular telephone, an embedded system (e.g., an embedded system having a system-on-chip (SoC) and internal or external memory), or any computing device that includes a memory and a processing device. The host system 101 can include or be coupled to the memory system 105 so that the host system 101 can read data from or write data to the memory system 105. The host system 101 can be coupled to the memory system 105 via a physical host interface. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening
components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc. The physical host interface can be used to transmit data between the host system 101 and the memory system 105. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory system 105 and the host system 101 .
[0089] FIG. 2 illustrates a memory system 105 as an example. In general, the host system 101 can access multiple memory systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.
[0090] The host system 101 can include a processing device and a controller.
The processing device of the host system 101 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc. In some instances, the controller of the host system can be referred to as a memory controller, a memory management unit, and/or an initiator.
In one example, the controller controls the communications over bus 103 between the host system 101 and the memory system 105. These communications include sending of a host message for verifying identity of memory system 105 as described above.
[0091] A controller of the host system 101 can communicate with a controller of the memory system 105 to perform operations such as reading data, writing data, or erasing data at the memory regions of non-volatile memory 121. In some instances, the controller is integrated within the same package of the processing device 111.
In other instances, the controller is separate from the package of the processing device 111. The controller and/or the processing device can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof. The controller and/or the processing device can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
[0092] In one embodiment, the memory regions 111, 113, ..., 119 can include any combination of different types of non-volatile memory components.
Furthermore, the memory cells of the memory regions can be grouped as memory pages or data blocks that can refer to a unit used to store data. In some
embodiments, the volatile memory 123 can include, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM).
[0093] In one embodiment, one or more controllers of the memory system 105 can communicate with the memory regions 111, 113, ..., 119 to perform operations such as reading data, writing data, or erasing data. Each controller can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. Each controller can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
The controller(s) can include a processing device (processor) configured to execute instructions stored in local memory. In one example, local memory of the controller includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory system 105, including handling communications between the memory system 105 and the host system 101 . In some embodiments, the local memory can include memory registers storing memory pointers, fetched data, etc. The local memory can also include read-only memory (ROM) for storing micro-code.
[0094] In general, controller(s) of memory system 105 can receive commands or operations from the host system 101 and/or processing device 1 1 1 and can convert the commands or operations into instructions or appropriate commands to achieve selection of a memory region based on data write counters for the memory regions. The controller can also be responsible for other operations such as wear-leveling, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory regions. The controller can further include host interface circuitry to communicate with the host system 101 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access one or more of the memory regions as well as convert responses associated with the memory regions into information for the host system 101 .
[0095] The memory system 105 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory system 105 can include a cache or buffer (e.g., DRAM or SRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from one or more controllers and decode the address to access the memory regions.
[0096] In some embodiments, a controller in the host system 101 or memory system 105, and/or the processing device 111 includes at least a portion of the identity component 107 and/or verification component 109. For example, the controller and/or the processing device 111 can include logic circuitry implementing the identity component 107 and/or verification component 109. For example, a processing device (processor) can be configured to execute instructions stored in memory for performing operations that provide read/write access to memory regions for the identity component 107 as described herein. In some embodiments, the verification component 109 is part of an operating system, a device driver, or an application.
[0097] FIG. 3 shows an example computing device of a vehicle 100, according to one embodiment. For example, the vehicle 100 can be an autonomous vehicle, a non-autonomous vehicle, an emergency vehicle, a service vehicle, or the like.
[0098] The vehicle 100 includes a vehicle computing device 110, such as an on-board computer. Vehicle computing device 110 is an example of host device 151 of FIG. 1. In another example, vehicle computing device 110 is an example of host system 101 of FIG. 2, and memory 160 is an example of memory system 105.
[0099] The vehicle computing device 110 includes a processor 120 coupled to a vehicular communication component 130, such as a reader, writer, and/or other computing device capable of performing the functions described below, that is coupled to (or includes) an antenna 140. The vehicular communication component 130 includes a processor 150 coupled to a memory 160, such as a non-volatile flash memory, although embodiments are not limited to this kind of memory device.
[00100] In one example, the memory 160 is adapted to store all the information related to the vehicle (e.g., driver, passengers, and carried goods) in such a way that the vehicle 100 is able to provide this information when approaching a check point by using a communication interface (for example the so-called DICE-RloT protocol), as described below.
[00101] In one example, the vehicle information (such as vehicle ID/plate number) is already stored in the vehicle memory 160, and the vehicle 100 is able to identify, for example through the communication component 130 and by using a known DICE-RloT protocol or a similar protocol, the electronic ID of the passengers and/or the IDs of the carried luggage, goods and the like, and then to store this information in the memory 160. In one example, electronic IDs, transported luggage and goods containers are equipped with wireless transponders, NFC, Bluetooth, RFID, touchless sensors, magnetic bars, and the like, and the communication component 130 can use readers and/or electromagnetic field to acquire the needed info from such remote sources.
[00102] In one example, all the passenger IDs and/or the IDs of the carried luggage, goods and the like are equipped with electronic devices capable of exchanging data with a communication component. Those electronic devices may be active elements, in the sense that they are supplied by their own electric power, or passive elements, which are activated and powered by an external electric supply source that provides the required electric supply only when the electronic device is in its proximity.
[00103] Rental vehicles or autonomous vehicles can use readers and/or electromagnetic fields to acquire information inside or in the proximity of the vehicle or, as an alternative, may receive information even from remote sources, for instance when the driver of a rental vehicle is already known to the rental system because of a previous reservation. A further check may be performed in real time when the driver arrives to pick up the vehicle.
[00104] Similarly, all the information about the transported luggage and goods (and also about the passengers) carried by the vehicle 100 may be maintained so as to be always up to date. To do so, the electronic IDs of the passengers and/or the IDs of the carried luggage and goods are updated in real time via the wireless transponders associated with the luggage and goods or owned by the passengers (not shown).
[00105] In one example, the communication between the vehicular communication component 130 and the proximity sources (e.g., the goods transponders and the like) occurs via the DICE-RIoT protocol.
[00106] In one example, the vehicle computing device 110 can control operational parameters of the vehicle 100, such as steering and speed. For example, a controller (not shown) can be coupled to a steering control system 170 and a speed control system 180. Further, the vehicle computing device 110 can be coupled to an information system 190. Information system 190 can be configured to display a message, such as route information or a check point security message, and can display visual warnings and/or output audible warnings. The communication component 130 can receive information from additional computing devices, such as from an external computing device (not shown).
[00107] FIG. 4 shows an example system 390 having a host device 350
communicating with an example computing device of a vehicle 300, according to one embodiment. The computing device includes a passive communication component 310, such as a short-range communication device (e.g., an NFC tag). The
communication component 310 can be in the vehicle 300, which can be configured as shown in FIG. 3 for the vehicle 100 and include the components of vehicle 100 in addition to the communication component 310, which can be configured as the vehicular communication component 130. The communication component 310 includes a chip 320 (e.g., implementing a CPU or application controller for vehicle 300) having a non-volatile storage component 330 that stores information about the vehicle 300 (such as vehicle ID, driver/passenger information, carried goods information, etc.). The communication component 310 can include an antenna 340.
[00108] The host device 350 is, for example, an active communications device (e.g., that includes a power supply), which can receive information from the communication component 310 and/or transmit information thereto. In some examples, the host device 350 can include a reader (e.g., an NFC reader), such as a toll reader, or other components. The host device 350 can be an external device arranged (e.g., embedded) in proximity of a check point (e.g., at the boundary of a trust zone) or in general in proximity of limited access areas. In some embodiments, the host device 350 can also be carried by a policeman for use as a portable device.
[00109] The host device 350 can include a processor 360, a memory 370, such as a non-volatile memory, and an antenna 380. The memory 370 can include an NFC protocol that allows the host device 350 to communicate with the communication component 310. For example, the host device 350 and the communication component 310 can communicate using the NFC protocol, for example at about 13.56 MHz and according to the ISO/IEC 18000-3 international standard. Other approaches that use RFID tags can be used.
[00110] The host device 350 can also communicate with a server or other computing device (e.g., communicate over a wireless network with a central operation center). For example, the host device 350 can be wirelessly coupled or hardwired to the server or a communication center. In some examples, the host device 350 can communicate with the operation center via Wi-Fi or over the Internet. The host device 350 can energize the communication component 310 when the vehicle 300 brings antenna 340 within a communication distance of antenna 380. In some examples, the host device 350 can receive real-time information from the operation center and can transmit that information to vehicle 300. In some embodiments, the communication component 310 can have its own battery.
[00111] In one embodiment, the host device 350 is adapted to read/send information from/to the vehicle 300, which is equipped with the communication component 310 (e.g., an active device) configured to allow information exchange.
[00112] Referring again to FIG. 3, the vehicular communication component 130 of the vehicle 100 can be active internally to pick up in real time pertinent information concerning the passenger IDs and the transported luggage and/or goods (e.g., when these are equipped with the corresponding wireless communication components discussed with respect to FIG. 4 above). The vehicle’s computing device may detect information within a range of a few meters (e.g., 2-3 meters), so that all data corresponding to passengers, luggage and goods may be acquired. In one example, this occurs when the vehicle approaches an external communication component (e.g., a server or other computing device acting as a host device) within a particular proximity so that communication can begin and/or become strengthened. The communication distance is, for example, 2-3 meters.
[00113] In one embodiment, the vehicular communication component 130 can encrypt data when communicating to external entities and/or with internal entities.
In some cases, data concerning transported luggage, goods or even passengers may be confidential or include confidential information (e.g., the health status of a passenger, confidential documents, or a dangerous material). In such a case, it is desired that the information and data stored in the memory portion associated with the vehicle computing device be kept as encrypted data.
[00114] In various embodiments discussed below, a method for encrypted communication between the internal vehicle computing device and the external entity (e.g., a server acting as a host device) is discussed. In one example, this method may be applied even between the internal vehicle computing device and the electronic components associated with passengers, luggage and goods boarded on the vehicle.
[00115] In one example, the vehicular communication component 130 sends a vehicular public key to the external communication component (e.g., acting as a host device 151), and the external communication component sends an external public key to the vehicular communication component 130. These public keys (vehicular and external) can be used to encrypt data sent to each respective communication component, to verify an identity of each, and to exchange confirmations and other information. As an example, as described further below, the vehicular communication component 130 can encrypt data using the received external public key and send the encrypted data to the external communication component.
Likewise, the external communication component can encrypt data using the received vehicular public key and send the encrypted data to the vehicular communication component 130. Data sent by the vehicle 100 can include car information, passenger information, goods information, and the like. The information can optionally be sent with a digital signature to verify an identity of the vehicle 100. Moreover, information can be provided to the vehicle 100 and displayed on a dashboard of the vehicle 100 or sent to an email of a computing device (e.g., a user device or central server that monitors vehicles) associated with the vehicle 100. The vehicle can be recognized based on an identification of the vehicle, a VIN number, etc., along with a vehicular digital signature.
[00116] In one example, data exchanged between the vehicle and the external entity can include a freshness value used by the recipient to detect replay. As an example, data sent by the vehicle to the external entity to indicate a same instruction can be altered at each particular time frame, or after a particular amount of data has been sent. This can prevent a hacker from intercepting confidential information contained in previously sent data and sending the same data again to produce the same outcome. Because the data is slightly altered each time while still indicating the same instruction, a hacker who sends the identical information at a later point in time will not cause the instruction to be carried out, since the recipient expects freshly altered data to carry out the same instruction.
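The freshness mechanism described above can be sketched as follows. The session key, the 4-byte counter width, and the function names are illustrative assumptions for this sketch, not part of the disclosure; a monotonic counter is appended to the payload and the whole message is authenticated so that a replayed message carries a stale counter and is rejected.

```python
import hashlib
import hmac

# Hypothetical shared session key -- an illustrative placeholder only.
SESSION_KEY = b"example-session-key"

def make_message(payload: bytes, counter: int) -> tuple:
    """Append a monotonic freshness counter and MAC the whole message."""
    msg = payload + counter.to_bytes(4, "big")
    tag = hmac.new(SESSION_KEY, msg, hashlib.sha256).digest()
    return msg, tag

def accept_message(msg: bytes, tag: bytes, last_seen: int) -> bool:
    """Accept only if the MAC checks out and the counter has advanced."""
    counter = int.from_bytes(msg[-4:], "big")
    expected = hmac.new(SESSION_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and counter > last_seen
```

A replayed message fails the `counter > last_seen` check even though its MAC is still valid, which is the outcome the paragraph above describes.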
[00117] The exchange of data between the vehicle 100 and an external entity (e.g., a computing system or device) (not shown) can be performed using a number of encryption and/or decryption methods as described below. The securing of the data can ensure that unauthorized activity is prevented from interfering with the operation of the vehicle 100 and the external entity.
[00118] FIG. 5A shows an application board that generates a triple including an identifier, certificate, and key that is sent to a host device, according to one embodiment. The host device uses the triple to verify an identity of the application board. The application board is an example of computing device 141 of FIG. 1.
The host device is an example of host device 151 of FIG. 1.
[00119] In one embodiment, the application board and the host include
communication components that perform encryption and/or decryption operations for communications (e.g., on information and data) using a device identification composition engine (DICE)-robust internet of things (RIoT) protocol. In one example, the DICE-RIoT protocol is applied to communication between the vehicular communication component and an external communication component, as well as to communication performed internally to the vehicle environment between the vehicle communication component and the various wireless electronic devices that are associated with each of the passenger IDs, the luggage, the goods and the like.
[00120] FIG. 5B shows an example computing system that boots in stages using layers, according to one embodiment. The system includes an external
communication component 430’ and a vehicular communication component 430” in accordance with an embodiment of the present disclosure. As the vehicle comes near or into the proximity of the external entity, the associated vehicular communication component 430” of the vehicle can exchange data with the external entity as described above, for example using a sensor (e.g., a radio frequency identification (RFID) sensor or the like).
[00121] In other embodiments, the component 430’ can be an application board located in a vehicle, and the component 430” can be a host device also located in the vehicle that uses the DICE-RloT protocol to verify an identity of component 430’ (for example, as discussed with respect to FIG. 1 above).
[00122] In one embodiment, the DICE-RIoT protocol is used by a computing device to boot in stages using layers, with each layer authenticating and loading a subsequent layer and providing increasingly sophisticated runtime services at each layer. A layer can thus be served by a prior layer and serve a subsequent layer, thereby creating an interconnected web of layers that builds upon lower layers and serves higher-order layers. Alternatively, other protocols can be used instead of the DICE-RIoT protocol.
[00123] In one example implementation of the communication protocol, security of the communication protocol is based on a secret value, which is a device secret (e.g., a UDS), that is set during manufacture (or also later). The device secret UDS exists within the device on which it was provisioned (e.g., stored as device secret 149 of FIG. 1).
[00124] The device secret UDS is accessible to the first stage ROM-based boot loader at boot time. The system then provides a mechanism rendering the device secret inaccessible until the next boot cycle, and only the boot loader (e.g., the boot layer) can ever access the device secret UDS. Thus, in this approach, the boot is layered in a specific architecture that begins with the device secret UDS.
[00125] As is illustrated in FIG. 5B, Layer 0 (L0) and Layer 1 (L1) are within the external communication component 430’. Layer 0 L0 can provide a fuse derived secret (FDS) key to Layer 1 L1. The FDS key can be based on the identity of code in Layer 1 L1 and other security-relevant data. A particular protocol (such as the robust internet of things (RIoT) core protocol) can use the FDS to validate the code of Layer 1 L1 that it loads. In an example, the particular protocol can include a device identification composition engine (DICE) and/or the RIoT core protocol. As an example, the FDS can include a Layer 1 L1 firmware image itself, a manifest that cryptographically identifies authorized Layer 1 L1 firmware, a firmware version number of signed firmware in the context of a secure boot implementation, and/or security-critical configuration settings for the device. The device secret UDS can be used to create the FDS, and is stored in the memory of the external communication component. Thus, Layer 0 L0 never reveals the actual device secret UDS; it provides only a derived key (e.g., the FDS key) to the next layer in the boot chain.
[00126] The external communication component 430’ is adapted to transmit data, as illustrated by arrow 410’, to the vehicular communication component 430”. The transmitted data can include an external identification that is public, a certificate (e.g., an external identification certificate), and/or an external public key, as will be illustrated in connection with FIG. 6. Layer 2 L2 of the vehicular communication component 430” can receive the transmitted data and process the data in operations of the operating system (OS), for example in a first application App1 and a second application App2.
[00127] Likewise, the vehicular communication component 430” can transmit data, as illustrated by arrow 410”, including a vehicular identification that is public, a certificate (e.g., a vehicular identification certificate), and/or a vehicular public key.
As an example, after the authentication (e.g., after verifying the certificate), the vehicular communication component 430” can send a vehicle identification number, VIN, for further authentication, identification, and/or verification of the vehicle.
[00128] As shown in FIG. 5B and FIG. 6, in an example operation, the external communication component 430’ can read the device secret UDS, hash an identity of Layer 1 L1, and perform the following calculation:
[00129] FDS = KDF [UDS, Hash ("immutable information")]
[00130] where KDF is a cryptographic one-way key derivation function (e.g., HMAC-SHA256). In the above calculation, Hash can be any cryptographic hash function, such as SHA256, MD5, SHA3, etc.
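The FDS derivation above can be sketched directly with the Python standard library. The UDS value and the identity string below are made-up placeholders; on a real device the UDS is provisioned at manufacture and never leaves Layer 0.

```python
import hashlib
import hmac

def compute_fds(uds: bytes, layer1_identity: bytes) -> bytes:
    """FDS = HMAC-SHA256(UDS, SHA256('immutable information'))."""
    measurement = hashlib.sha256(layer1_identity).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()

# Placeholder UDS -- a real device secret is provisioned at manufacture.
uds = bytes.fromhex("00112233445566778899aabbccddeeff")
fds = compute_fds(uds, b"Layer 1 firmware image")
```

Because the KDF is one-way, the FDS reveals nothing about the UDS, and any change to the Layer 1 identity (e.g., tampered firmware) yields an unrelated FDS.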
[00131] In at least one example, the vehicle can communicate using either an anonymous login or an authenticated login. The authenticated login can allow the vehicle to obtain additional information that may not be accessible when
communicating in an anonymous mode. In at least one example, the authentication can include providing the vehicular identification number VIN and/or authentication information, such as an exchange of public keys, as will be described below. In either of the anonymous and authenticated modes, the external entity (e.g., a check point police at a boundary of a trust zone) can communicate with the vehicle to provide the external public key associated with the external entity to the vehicle.
[00132] FIG. 6 shows an example computing device generating an identifier, certificate, and key using asymmetric generators, according to one embodiment. In one embodiment, the computing device implements a process to determine parameters (e.g., within Layer L1 of an external device, or Layer L1 of an internal computing device in alternative embodiments).
[00133] In one embodiment, the parameters are determined, including the external public identification, the external certificate, and the external public key, which are then sent (as indicated by arrow 510’) to Layer 2 L2 of the vehicular communication component (e.g., reference 430” in FIG. 5B). Arrows 510’ and 510” of FIG. 6
correspond to arrows 410’ and 410”, respectively, of FIG. 5B. Also, the layers in FIG. 6 correspond to the layers of FIG. 5B.
[00134] In another embodiment, a message (“Host Message”) from the host device is merged with the external public key by pattern (data) merging 531 to provide merged data for encryption. The merged data is an input to encryptor 530. In one example, the host message is concatenated with the external public key. The generated parameters include a triple that is sent to a host device and used to verify an identity of a computing device. For example, the external public identification, the external certificate, and the external public key are used by a verification component of the host device to verify the identity. In one example, the host device is host device 151 of FIG. 1.
[00135] As shown in FIG. 6, the FDS from Layer 0 L0 is sent to Layer 1 L1 and used by an asymmetric ID generator 520 to generate a public identification, IDLk public, and a private identification, IDLk private. In the abbreviation "IDLk public", the "Lk" indicates a generic Layer k (in this example, Layer 1 L1), and "public" indicates that the identification is openly shared. The public identification IDLk public is illustrated as shared by the arrow extending to the right and outside of Layer 1 L1 of the external communication component. The generated private identification IDLk private is used as a key input into an encryptor 530. The encryptor 530 can be, for example, any processor, computing device, etc. used to encrypt data.
[00136] Layer 1 L1 of the external communication component can include an asymmetric key generator 540. In at least one example, a random number generator (RND) can optionally input a random number into the asymmetric key generator 540. The asymmetric key generator 540 can generate a public key, KLk public (referred to as an external public key), and a private key, KLk private (referred to as an external private key), associated with an external communication component such as the external communication component 430’ in FIG. 5B.
[00137] The external public key KLk public can be an input (as "data") into the encryptor 530. As mentioned above, in some embodiments, a host message previously received from the host device as part of an identity verification process is merged with KLk public to provide merged data as the input data to encryptor 530.
[00138] The encryptor 530 can generate a result K’ using the inputs of the external private identification IDLk private and the external public key KLk public. The external private key KLk private and the result K’ can be input into an additional encryptor 550, resulting in output K”. The output K” is the external certificate, IDL1 certificate, transmitted to Layer 2 L2 (or alternatively transmitted to a host device that verifies identity). The external certificate IDL1 certificate can provide an ability to verify and/or authenticate the origin of data sent from a device. As an example, data sent from the external communication component can be associated with the identity of the external communication component by verifying the certificate, as will be described further in association with FIG. 7. Further, the external public key KL1 public can be transmitted to Layer 2 L2. Therefore, the public identification IDL1 public, the certificate IDL1 certificate, and the external public key KL1 public of the external communication component can be transmitted to Layer 2 L2 of the vehicular communication component.
[00139] FIG. 7 shows a verification component that verifies the identity of a computing device using decryption operations, according to one embodiment. The verification component includes decryptors 730, 750. The verification component implements a process to verify a certificate in accordance with an embodiment of the present disclosure.
[00140] In the illustrated example of FIG. 7, a public key KL1 public, a certificate IDL1 certificate, and a public identification IDL1 public are provided from the external communication component (e.g., from Layer 1 L1 of the external communication component 430’ in FIG. 5B).
[00141] The data of the certificate IDL1 certificate and the external public key KL1 public can be used as inputs into decryptor 730. The decryptor 730 can be any processor, computing device, etc. used to decrypt data. The result of the decryption of the certificate IDL1 certificate and the external public key KL1 public can be used as an input into decryptor 750 along with the public identification IDL1 public, resulting in an output. The external public key KL1 public and the output from the decryptor 750 can indicate, as illustrated at block 760, whether the certificate is verified, resulting in a yes or no as an output. Private keys are associated with single layers, and a specific certificate can only be generated by a specific layer.
[00142] In response to the certificate being verified (e.g., after the authentication), data received from the device being verified can be accepted, decrypted, and/or processed. In response to the certificate not being verified, data received from the device being verified can be discarded, removed, and/or ignored. In this way, unauthorized devices sending nefarious data can be detected and avoided. As an example, a hacker sending data to be processed can be identified and the hacking data not processed.
[00143] In an alternative embodiment, the public key KL1 public, a certificate IDL1 certificate, and a public identification IDL1 public are provided from computing device 141 of FIG. 1, or from memory system 105 of FIG. 2. This triple is generated by computing device 141 in response to receiving a host message from the host device. Prior to providing IDL1 certificate as an input to decryptor 730, the IDL1 certificate and a message from the host device (“host message”) are merged by pattern (data) merging 731. In one example, the merging is a concatenation of data. The merged data is provided as the input to decryptor 730. The verification process then proceeds otherwise as described above.
[00144] FIG. 8 shows a block diagram of an example process to verify a certificate, according to one embodiment. In the case where a device is sending data that may be verified in order to avoid subsequent repudiation, a signature can be generated and sent with the data. As an example, a first device may make a request of a second device and once the second device performs the request, the first device may indicate that the first device never made such a request. An anti-repudiation approach, such as using a signature, can avoid repudiation by the first device and ensure that the second device can perform the requested task without subsequent difficulty.
[00145] A vehicle computing device 810” (e.g., vehicle computing device 110 in FIG. 3 or computing device 141 of FIG. 1) can send data Dat” to an external computing device 810’ (or to any other computing device in general). The vehicle computing device 810” can generate a signature Sk using the vehicular private key KLk private. The signature Sk can be transmitted to the external computing device 810’. The external computing device 810’ can verify the signature using the data Dat” and the previously received public key KLk public (e.g., the vehicular public key). In this way, signature verification operates by using a private key to generate the signature and a public key to check it. A unique signature for each device can thus remain private to the device sending the signature, while the receiving device is able to check the signature for verification. This is in contrast to encryption/decryption of the data, which is encrypted by the sending device using the public key of the receiving device and decrypted by the receiving device using the private key of the receiver. In at least one example, the vehicle can verify the digital signature by using an internal cryptography process (e.g., the Elliptic Curve Digital Signature Algorithm (ECDSA) or a similar process).
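The sign-with-private-key / verify-with-public-key flow above can be illustrated with textbook RSA over tiny fixed primes. This is a toy stand-in for the ECDSA process named in the text, not a usable implementation: the primes, exponents, and hash reduction are illustrative assumptions only.

```python
import hashlib

# Textbook RSA with toy primes p=61, q=53: n=3233, e=17, d=2753.
# Illustration only -- real systems use ECDSA or full-size RSA with padding.
N, E, D = 3233, 17, 2753

def sign(private_d: int, data: bytes) -> int:
    """The sender signs a (reduced) hash of the data with its private key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(digest, private_d, N)

def verify(public_e: int, data: bytes, signature: int) -> bool:
    """The receiver checks the signature with the sender's public key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, public_e, N) == digest
```

The private exponent never leaves the signer, yet any holder of the public exponent can check the signature, which is the asymmetry the paragraph describes.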
[00146] Due to the exchange and verification of the certificates and of the public keys, the devices are able to communicate in a secure way with each other. When a vehicle approaches an external entity (e.g., a trust zone boundary, a border security entity or, generally, an electronically-controlled host device), the respective communication devices (which have the capability shown in FIG. 7 of verifying the respective certificate) exchange the certificates and communicate to each other.
After the authentication (e.g. after receiving/verifying from the external entity the certificate and the public key), the vehicle is thus able to communicate all the needed information related thereto and stored in the memory thereof, such as plate number/ID, VIN, insurance number, driver info (e.g., IDs, eventual permission for border transition), passenger info, transported goods info and the like. Then, after checking the received info, the external entity communicates to the vehicle the result of the transition request, this info being possibly encrypted using the public key of the receiver. The exchanged messages/info can be encrypted/decrypted using the above-described DICE-RloT protocol. In some embodiments, the so-called immutable info (such as plate number/ID, VIN, insurance number) is not encrypted, while other info is encrypted. In other words, in the exchanged message, there can be non-encrypted data as well as encrypted data: the info can thus be encrypted or not, or mixed. The correctness of the message is then ensured by using the certificate/public key to validate that the content of the message is valid.
[00147] FIG. 9 shows a method to verify an identity of a computing device using an identifier, certificate, and a key, according to one embodiment. For example, the method of FIG. 9 can be implemented in the system of FIGs. 1-7.
[00148] The method of FIG. 9 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 9 is performed at least in part by the identity component 147 and verification component 153 of FIG. 1.
[00149] Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various
embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
[00150] At block 921, a message is received from a host device. For example, computing device 141 receives a message (e.g., “host message” or “host message | freshness”) from host device 151.
[00151] At block 923, an identifier, a certificate, and a key (e.g., a public key KL1 public) are generated. The identifier is associated with an identity of a computing device. The certificate is generated using the message (e.g., “host message”) from the host device. In one embodiment, the message is merged with the public key prior to encryption. This encryption uses the private identifier IDL1 private as a key. The private identifier IDL1 private is associated with the public identifier IDL1 public (e.g., an associated pair that is generated by asymmetric ID generator 520).
[00152] In one example, identity component 147 generates the identifier, certificate, and key to provide a triple. In one example, the triple is generated based on the DICE-RIoT protocol. In one example, the triple is generated as illustrated in FIG. 6.
[00153] In one example, using the DICE-RIoT protocol, each layer (Lk) provides to the next layer (Lk+1) a set of keys and certificates, and each certificate can be verified by the receiving layer. The fuse derived secret FDS is calculated as follows: FDS = HMAC-SHA256 [ UDS, SHA256(“Identity of L1”) ]
[00154] In one example, Layer 1 L1 in a DICE-RIoT architecture generates the certificate using a host message sent by the host device. Layer 1 calculates two associated key pairs as follows:
(IDLk public, IDLk private) and (KLk public, KLk private)
[00155] Layer 1 also calculates two signatures as follows:
K’ = encrypt (IDLk private, KLk public | host message)
K” = encrypt (KLk private, K’)
[00156] From the above processing, layer 1 provides a triple as follows:
KL1 = { IDL1 public, IDL1 certificate, KL1 public }
More generally, each layer provides a triple as follows:
KLk = {set of keys and certificate} for each k = 1:N
Using the respective triple, each layer is able to prove its identity to the next layer.
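The two-stage encryption above can be sketched with a toy encryptor. Everything here is an illustrative stand-in: the "key pairs" are symmetric (public and private halves are equal) and the encryptor is an XOR stream that inverts itself, which mimics the encrypt/decrypt symmetry the protocol relies on without implementing real asymmetric cryptography.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode, used here as a toy stream cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xcrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR encryptor: applying it twice with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def make_triple(fds: bytes, host_message: bytes):
    """Derive toy Layer 1 key pairs from the FDS and build the triple."""
    id_pub = id_priv = hashlib.sha256(b"ID" + fds).digest()   # toy pair
    kl_pub = kl_priv = hashlib.sha256(b"KL" + fds).digest()   # toy pair
    k1 = xcrypt(id_priv, kl_pub + host_message)   # K'
    certificate = xcrypt(kl_priv, k1)             # K'' = IDL1 certificate
    return id_pub, certificate, kl_pub
```

The returned triple {IDL1 public, IDL1 certificate, KL1 public} is what the layer would transmit; the private halves never leave `make_triple`.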
[00157] In one example, layer 2 corresponds to application firmware, and subsequent layers correspond to an operating system and/or applications of the host device.
[00158] At block 925, the generated identifier, certificate, and key are sent to the host device. The host device verifies the identity of the computing device using the identifier, certificate, and key. In one example, host device 151 receives the identifier, certificate, and key from computing device 141. Host device 151 uses verification component 153 to verify the identity of the computing device.
[00159] In one example, verification component 153 performs decryption operations as part of the verification process. The decryption includes merging the message from the host with the certificate prior to decryption using the key received from computing device 141. In one example, verification of the identity of the computing device is performed as illustrated in FIG. 7.
[00160] In one example, the decryption operations are performed as follows:
Decrypt (IDL1 certificate) using KL1 public to provide K’
Decrypt K’ using IDL1 public to provide Result
The Result is compared to KL1 public. If the Result matches KL1 public, then the identity is verified. In one example, an application board identity is verified.
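These two decryption steps can be sketched end to end with the same kind of toy stand-in used throughout this description (symmetric "key pairs" and a self-inverting XOR encryptor, so none of this is real asymmetric cryptography). The verifier undoes the two encryption stages and compares the recovered data against the KL1 public and host message it already holds.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xcrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR encryptor/decryptor (self-inverting with the same key)."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def verify_triple(id_pub: bytes, certificate: bytes, kl_pub: bytes,
                  host_message: bytes) -> bool:
    """Mirror of decryptors 730/750 and the comparison at block 760."""
    k1 = xcrypt(kl_pub, certificate)        # decryptor 730 recovers K'
    result = xcrypt(id_pub, k1)             # decryptor 750 recovers Result
    return result == kl_pub + host_message  # Result carries KL public | message

# Build a triple the way the sending layer would, to have something to verify.
id_pub = id_priv = hashlib.sha256(b"ID" + b"fds-bytes").digest()
kl_pub = kl_priv = hashlib.sha256(b"KL" + b"fds-bytes").digest()
certificate = xcrypt(kl_priv, xcrypt(id_priv, kl_pub + b"hello"))
```

A certificate built against a different host message fails the comparison, which is how the merged host message gives the exchange replay protection.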
[00161] In one embodiment, an identity of a human or animal is proven.
Verification of the identity of a person is performed similarly to the verification of the identity of a computing device 141 described above. In one example, computing device 141 is integrated into a passport for a person. A public administration department of a country that has issued the passport can use a UDS that is specific for a class of document (e.g., driver license, passport, ID card, etc.). For example, for the Italy, Sicily, Messina, Passport Office, the UDS = 0x12234...4444. For the Germany, Bavaria, Munich, Passport Office, the UDS = 0xaabb...00322.
[00162] In one example regarding a passport, the identity of L1 is an ASCII string as follows:
Country | Document type | etc. (e.g., "Italy, Sicily, Messina, Passport Office")
The "granularity" of the assignation can be determined by the public administration of each country.
[00163] Various embodiments of the method of FIG. 9 provide various advantages. For example, a thing can be identified and certified as being produced by a specific factory without using a third-party key infrastructure (e.g., PKI = public key infrastructure). Man-in-the-middle attacks by malicious actors or hackers are prevented because the method is replay protected. The method is usable for mass production of things.
[00164] Further, the customer UDS is protected at a hardware level (e.g., layer 0 is inaccessible external to the component). The UDS cannot be read by anyone, but it can be replaced (e.g., only the customer can do this by using a secure protocol). Examples of secure protocols include security protocols based on authenticated, replay-protected commands and/or on secret sharing algorithms like Diffie-Hellman (e.g., ECDH, elliptic curve Diffie-Hellman). In one example, the UDS is communicated to the customer (not the end user) by using secure infrastructure. The UDS is customizable by the customer.
[00165] In addition, the component recognition can work in the absence of an internet or other network connection. Also, the method can be used to readily check the identity of things, animals, and humans at a trust zone boundary (e.g., a country border, internal check point, etc.).
[00166] In one example, knowledge of the UDS permits the host device to securely replace the UDS. For example, replacement can be done if: the host desires to change the identity of a thing, or the host desires that the thing’s identity is unknown to anyone else (including the original manufacturer).
[00167] In another example, a replace command is used by the host device. For example, the host device can send a replace UDS command to the computing device. The replace command includes the existing UDS and the new UDS to be attributed to the computing device. In one example, the replace command has a field including a hash value as follows: hash (existing UDS | new UDS).
[00168] In another example, an authenticated, replay-protected command is used that has a field as follows: Replace_command | freshness | signature
where signature = MAC [secret key, Replace_command | freshness | hash (existing UDS | new UDS)]
The secret key is an additional key present on the device and is the key used for authenticated commands. For example, the secret key can be a session key as described below (see, e.g., FIG. 12).
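For illustration, the authenticated, replay-protected field above can be sketched in Python. All values below (the secret key, the command encoding, the freshness value, and both UDS values) are hypothetical placeholders, and HMAC-SHA256 is assumed as the MAC; the patent does not fix these choices.

```python
import hashlib
import hmac

# Hypothetical values for illustration only.
secret_key = b"\x5a" * 32            # e.g., a session key (see FIG. 12)
replace_command = b"REPLACE_UDS"     # hypothetical command encoding
freshness = (42).to_bytes(4, "big")  # e.g., a monotonic counter value
existing_uds = b"\x12\x23\x44\x44"   # placeholder device secrets
new_uds = b"\xaa\xbb\x03\x22"

# hash (existing UDS | new UDS)
uds_hash = hashlib.sha256(existing_uds + new_uds).digest()

# signature = MAC [secret key, Replace_command | freshness | hash (...)]
signature = hmac.new(secret_key,
                     replace_command + freshness + uds_hash,
                     hashlib.sha256).digest()

# Field sent to the device: Replace_command | freshness | signature
packet = replace_command + freshness + signature
```

Because the signature covers the freshness value, a captured packet replayed later (with a stale counter) fails verification on the device.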
[00169] In one embodiment, a method comprises: receiving, by a computing device (e.g., computing device 141), a message from a host device (e.g., host device 151); generating, by the computing device, an identifier, a certificate, and a key, wherein the identifier is associated with an identity of the computing device, and the certificate is generated using the message; and sending, by the computing device, the identifier, the certificate, and the key to the host device, wherein the host device is configured to verify the identity of the computing device using the identifier, the certificate, and the key.
[00170] In one embodiment, verifying the identity of the computing device comprises concatenating the message and the certificate to provide first data.
[00171] In one embodiment, verifying the identity of the computing device further comprises decrypting the first data using the key to provide second data.
[00172] In one embodiment, verifying the identity of the computing device further comprises decrypting the second data using the identifier to provide a result, and comparing the result to the key.
[00173] In one embodiment, the identifier is a public identifier, and the computing device stores a secret key, the method further comprising: using the secret key as an input to a message authentication code to generate a derived secret; wherein the public identifier is generated using the derived secret as an input to an asymmetric generator.
[00174] In one embodiment, the identifier is a first public identifier, and the computing device stores a first device secret used to generate the first public identifier, the method further comprising: receiving a replace command from the host device; in response to receiving the replace command, replacing the first device secret with a second device secret; and sending, to the host device, a second public identifier generated using the second device secret.
[00175] In one embodiment, the key is a public key, and generating the certificate includes concatenating the message with the public key to provide a data input for encryption.
[00176] In one embodiment, the identifier is a public identifier, and a first asymmetric generator generates the public identifier and a private identifier as an associated pair; the key is a public key, and a second asymmetric generator generates the public key and a private key as an associated pair; and generating the certificate comprises: concatenating the message with the public key to provide first data; encrypting the first data using the private identifier to provide second data; and encrypting the second data using the private key to provide the certificate.
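The generate-then-verify flow of this embodiment can be sketched as follows. This is a toy illustration only: XOR with a hash-derived keystream stands in for asymmetric encryption, and the "private" and "public" halves of each pair are identical here, which is not true of real asymmetric cryptography. The sketch shows only the order of the concatenation, encryption, and decryption steps.

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    # Expand a seed into n keystream bytes (toy construction).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_crypt(data: bytes, key: bytes) -> bytes:
    # XOR stand-in for encrypt/decrypt; applying it twice with the
    # "paired" key recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Toy "key pairs" (identical halves; a real device uses asymmetric pairs).
id_private = id_public = b"identifier-pair-seed"
k_private = k_public = b"key-pair-seed"

message = b"host message with freshness"

# Device side: certificate generation per the embodiment above.
first = message + k_public                  # message | public key
second = toy_crypt(first, id_private)       # encrypt with private identifier
certificate = toy_crypt(second, k_private)  # encrypt with private key

# Host side: decrypt in reverse order, then compare to the received key.
step1 = toy_crypt(certificate, k_public)
recovered = toy_crypt(step1, id_public)
assert recovered == message + k_public      # identity verified
```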
[00177] In one embodiment, the key is a public key, the method further comprising generating a random number as an input to an asymmetric key generator, wherein the public key and an associated private key are generated using the asymmetric key generator.
[00178] In one embodiment, the random number is generated using a physical unclonable function (PUF).
[00179] In one embodiment, a system comprises: at least one processor; and memory containing instructions configured to instruct the at least one processor to: send a message to a computing device; receive, from the computing device, an identifier, a certificate, and a key, wherein the identifier is associated with an identity of the computing device, and the certificate is generated by the computing device using the message; and verify the identity of the computing device using the identifier, the certificate, and the key.
[00180] In one embodiment, verifying the identity of the computing device comprises: concatenating the message and the certificate to provide first data; decrypting the first data using the key to provide second data; decrypting the second data using the identifier to provide a result; and comparing the result to the key.
[00181] In one embodiment, the identifier is a first public identifier, the computing device stores a first device secret used to generate the first public identifier, and the instructions are further configured to instruct the at least one processor to: send a replace command to the computing device, the replace command to cause the computing device to replace the first device secret with a second device secret, and receive, from the computing device, a second public identifier generated using the second device secret.
[00182] In one embodiment, the computing device is configured to use the second device secret as an input to a message authentication code that provides a derived secret, and to generate the second public identifier using the derived secret.
[00183] In one embodiment, the replace command includes a field having a value based on the first device secret.
[00184] In one embodiment, the system further comprises a freshness mechanism configured to generate a freshness, wherein the message sent to the computing device includes the freshness.
[00185] In one embodiment, the identity of the computing device includes an alphanumeric string.
[00186] In one embodiment, a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to at least: receive a message from a host device; generate an identifier, a certificate, and a key, wherein the identifier corresponds to an identity of the computing device, and the certificate is generated using the message; and send the identifier, the certificate, and the key to the host device for use in verifying the identity of the computing device.
[00187] In one embodiment, the identifier is a public identifier associated with a private identifier, the key is a public key associated with a private key, and generating the certificate comprises: concatenating the message with the public key to provide first data; encrypting the first data using the private identifier to provide second data; and encrypting the second data using the private key to provide the certificate.
[00188] In one embodiment, verifying the identity of the computing device comprises performing a decryption operation using the identifier to provide a result, and comparing the result to the key.
Generating Values Using Physical Unclonable Function
[00189] At least some embodiments disclosed below provide an improved architecture for generating values using a physical unclonable function (PUF). In some embodiments, the PUF value can itself be used as a device secret, or used to generate a device secret. In one example, the PUF value is used as a unique device secret (UDS) for use with the DICE-RIoT protocol as described above (e.g., see FIG. 5A and FIG. 5B). In one example, a value generated by a PUF is used as an input to a message authentication code (MAC). The output from the MAC is used as the UDS.
[00190] In some embodiments, the PUF value, or a value generated from the PUF value, can be used as a random number (e.g., a device-specific random number). In one example, the random number (e.g., RND) is used as an input when generating the associated public key and private key via the asymmetric key generator described above (e.g., see FIG. 6).
[00191] In general, the architecture below generates an output by feeding inputs provided from one or more PUFs into a message authentication code (MAC). The output from the MAC provides the improved PUF (e.g., the UDS above).
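A minimal sketch of this construction, assuming HMAC-SHA256 as the MAC and placeholder values for the PUF reading and MAC key (neither is specified by the text):

```python
import hmac
import hashlib

# Hypothetical 256-bit PUF reading; a real value comes from on-die
# physical phenomena and differs per chip.
puf_value = bytes.fromhex("ab" * 32)

# Feed the PUF value through the MAC; the 256-bit output serves as
# the improved PUF value (e.g., the UDS above).
uds = hmac.new(b"\x00" * 32, puf_value, hashlib.sha256).digest()
assert len(uds) == 32  # 256 bits, directly usable as a key
```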
[00192] In general, semiconductor chip manufacturers face the problem of key injection, which is the programming of a unique secret key for each chip or die, for example, provided from a semiconductor wafer. It is desired that key injection be performed in a secure environment to avoid leaking or disclosing the secret keys injected into the chips. It is also desired to ensure that the key cannot be hacked or read back after production of the chip. In some cases, for example, key injection procedures are certified or executed by a third-party infrastructure.
[00193] Chip manufacturers desire to reduce the production cost of chips that include cryptographic capabilities. Chip manufacturers also desire to simplify production flows while maintaining a consistent level of security performance of the manufactured chips. However, key injection is one of the more expensive production steps.
[00194] Chip manufacturers also face the problem of improving the uniformity of PUFs when used as pseudo-random number generators. In some cases, this problem may include a cross-correlation between dice because of the phenomena on which a seed value provided by the PUF is based.
[00195] A PUF is based on unpredictable physical phenomena such as, for example, on-chip parasitic effects, on-chip path delays, etc., which are unique for each die. These phenomena are used, for example, to provide a seed value for a pseudo-random number generator.
[00196] Two different chips selected from the production line must have different PUF values. The PUF value generated in each chip must not change during the life of the device. If two chips have similar keys (e.g., there is a low Hamming distance between them), it may be possible to use a key of one chip to guess the key of another chip (e.g., a preimage attack).
[00197] Using the improved PUF architecture described below can provide a solution to one or more of the above problems by providing output values suitable for providing the function of a PUF on each chip or die. The improved PUF architecture below uses a PUF, which enables each chip or die to automatically generate a unique secure key at each power-up of the chip or die. The secure key does not need to be stored in a non-volatile memory, which might be hacked or otherwise compromised.
[00198] The improved PUF architecture further uses a MAC to generate the improved PUF output (e.g., a unique key) for use by, for example, cryptographic functions or processes that are integrated into the semiconductor chip. The use of the MAC can, for example, increase the Hamming distance between keys generated on different chips.
[00199] In at least some embodiments disclosed herein, an improved PUF architecture using the output from a MAC is provided as a way to generate seed or other values. Thus, the improved PUF architecture provides, for example, a way to perform key injection that reduces cost of manufacture, and that improves reliability and/or uniformity of PUF operation on the final chip.
[00200] In one embodiment, a method includes: providing, by at least one PUF, at least one value; and generating, based on a MAC, a first output, wherein the MAC uses the at least one value provided by the at least one PUF as an input for generating the first output.
[00201] In one embodiment, a system includes: at least one PUF device; a message authentication code (MAC) module configured to receive a first input based on at least one value provided by the at least one PUF device; at least one processor; and memory containing instructions configured to instruct the at least one processor to generate, based on the first input, a first output from the MAC module. In various embodiments, the MAC module can be implemented using hardware and/or software.
[00202] In one embodiment, the system further includes a selector module that is used to select one or more of the PUF devices for use in providing values to the MAC module. For example, values provided from several PUF devices can be linked and provided as an input to the MAC module. In various embodiments, the selector module can be implemented using hardware and/or software.
[00203] FIG. 10 shows a system for generating a unique key 125 from an output of a message authentication code (MAC) 123 that receives an input from a physical unclonable function (PUF) device 121, according to one embodiment. The system provides a PUF architecture 111 used to generate the unique key 125 (or other value) from an output of message authentication code (MAC) module 123. The MAC module 123 receives an input value obtained from the physical unclonable function (PUF) device 121.
[00204] The PUF device 121 in FIG. 10 can be, for example, any one of various different, known types of PUFs. The MAC module 123 provides, for example, a one-way function such as SHA1, SHA2, MD5, CRC, TIGER, etc.
[00205] The architecture 111 can, for example, improve the Hamming distance of the PUF values or codes generated between chips. The MAC functions are unpredictable (e.g., input sequences with just a single bit difference provided to the MAC function produce two completely different output results). Thus, the input to the MAC function cannot be recognized or determined with knowledge of only the output. The architecture 111 also can, for example, improve the uniformity of the PUF as a pseudo-random number generator.
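The unpredictability claim can be illustrated with SHA-256 (related to the one-way functions listed above): two inputs differing in a single bit produce outputs separated by a large Hamming distance, around half of the 256 output bits on average.

```python
import hashlib

# Two 32-byte inputs differing in a single bit...
a = hashlib.sha256(b"\x00" * 32).digest()
b = hashlib.sha256(b"\x01" + b"\x00" * 31).digest()

# ...yield digests whose Hamming distance is large, so the input
# cannot be inferred from small variations in the output.
hamming = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
```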
[00206] In one example, the value generated by the PUF architecture 111 (e.g., unique key 125 or another value) may be a number having N bits, where N depends on a cryptographic algorithm implemented on a chip (e.g., memory device 103 or another device) that includes the PUF architecture 111. In one example, the chip implements a cryptographic function that uses HMAC-SHA256, in which case the output from MAC module 123 has a size N of 256 bits. The use of the output from the MAC module 123 provides a message length for the output value that is suitable for use as a key (without needing further compression or padding).
[00207] The PUF architecture 111 is implemented in a device such as the illustrated memory device 103, or can be implemented in other types of computing devices such as, for example, integrated circuits implemented in a number of semiconductor chips provided by a wafer manufacturing production line.
[00208] In one embodiment, the MAC module 123 cooperates with and/or is integrated into or as part of cryptographic module 127, which can, for example, provide cryptographic functions for memory device 103. For example, the output of the MAC module 123 can be suitable to be used as a key due to the MAC being used by the memory device 103 for other cryptographic purposes.
[00209] The operation of the PUF architecture 111, the cryptographic module 127, and/or other functions of the memory device 103 can be controlled by a controller 107. The controller 107 can include, for example, one or more microprocessors.
[00210] In FIG. 10, a host 101 can communicate with the memory device 103 via a communication channel. The host 101 can be a computer having one or more Central Processing Units (CPUs) to which computer peripheral devices, such as the memory device 103, may be attached via an interconnect, such as a computer bus (e.g., Peripheral Component Interconnect (PCI), PCI extended (PCI-X), PCI Express (PCIe)), a communication port, and/or a computer network.
[00211] In one embodiment, unique key 125 is used as a UDS to provide an identity for memory device 103. Controller 107 implements layer 0 (L0) and layer 1 (L1) in a DICE-RIoT architecture. In response to receiving a host message from host 101 via host interface 105, cryptographic module 127 performs processing to generate a triple, as described above. Host 101 uses the triple to verify the identity of memory device 103. Memory device 103 is an example of computing device 141.
[00212] For purposes of exemplary illustration, it can be noted that there typically are two technical problems. A first problem is to prove the identity of the board to the host. This problem can be handled by the use of the public triple and asymmetric cryptography, as discussed above, for example, for DICE-RIoT. This approach is secure and elegant, but in some cases may be too expensive or time consuming to be used directly by a circuit board itself. A second problem is to prove to the board the identity of the memory on the board (e.g., to avoid unauthorized memory replacement); this is performed, for example, after each power-up. The second problem could be solved using the public triple and asymmetric cryptography above. However, a lighter security mechanism based simply on a MAC function is often sufficient for handling the second problem.
[00213] The memory device 103 can be used to store data for the host 101, for example, in the non-volatile storage media 109. Examples of memory devices in general include hard disk drives (HDDs), solid state drives (SSDs), flash memory, dynamic random-access memory, magnetic tapes, network attached storage devices, etc. The memory device 103 has a host interface 105 that implements communications with the host 101 using the communication channel. For example, the communication channel between the host 101 and the memory device 103 is a Peripheral Component Interconnect Express (PCI Express or PCIe) bus in one embodiment; and the host 101 and the memory device 103 communicate with each other using the NVMe protocol (Non-Volatile Memory Host Controller Interface Specification (NVMHCI), also known as NVM Express (NVMe)).
[00214] In some implementations, the communication channel between the host 101 and the memory device 103 includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communications network, or a broadband high-speed always-connected wireless communication connection (e.g., a current or future generation of mobile network link); and the host 101 and the memory device 103 can be configured to communicate with each other using data storage management and usage commands similar to those in the NVMe protocol.
[00215] The controller 107 can run firmware 104 to perform operations responsive to the communications from the host 101, and/or other operations. Firmware in general is a type of computer program that provides control, monitoring and data manipulation of engineered computing devices. In FIG. 10, the firmware 104 controls the operations of the controller 107 in operating the memory device 103, such as the operation of the PUF architecture 111, as further discussed below.
[00216] The memory device 103 has non-volatile storage media 109, such as magnetic material coated on rigid disks, and/or memory cells in an integrated circuit. The storage media 109 is non-volatile in that no power is required to maintain the data/information stored in the non-volatile storage media 109, which data/information can be retrieved after the non-volatile storage media 109 is powered off and then powered on again. The memory cells may be implemented using various memory/storage technologies, such as NAND gate based flash memory, phase-change memory (PCM), magnetic memory (MRAM), resistive random-access memory, and 3D XPoint, such that the storage media 109 is non-volatile and can retain data stored therein without power for days, months, and/or years.
[00217] The memory device 103 includes volatile Dynamic Random-Access Memory (DRAM) 106 for the storage of run-time data and instructions used by the controller 107 to improve the computation performance of the controller 107 and/or provide buffers for data transferred between the host 101 and the non-volatile storage media 109. DRAM 106 is volatile in that it requires power to maintain the data/information stored therein, which data/information is lost immediately or rapidly when the power is interrupted.
[00218] Volatile DRAM 106 typically has less latency than non-volatile storage media 109, but loses its data quickly when power is removed. Thus, it is advantageous to use the volatile DRAM 106 to temporarily store instructions and data used by the controller 107 in its current computing task to improve performance. In some instances, the volatile DRAM 106 is replaced with volatile Static Random-Access Memory (SRAM) that uses less power than DRAM in some applications. When the non-volatile storage media 109 has data access performance (e.g., in latency, read/write speed) comparable to volatile DRAM 106, the volatile DRAM 106 can be eliminated; and the controller 107 can perform computing by operating on the non-volatile storage media 109 for instructions and data instead of operating on the volatile DRAM 106.
[00219] For example, cross point storage and memory devices (e.g., 3D XPoint memory) have data access performance comparable to volatile DRAM 106. A cross point memory device uses transistor-less memory elements, each of which has a memory cell and a selector that are stacked together as a column. Memory element columns are connected via two perpendicular layers of wires, where one layer is above the memory element columns and the other layer is below the memory element columns. Each memory element can be individually selected at a cross point of one wire on each of the two layers. Cross point memory devices are fast and non-volatile and can be used as a unified memory pool for processing and storage.
[00220] In some instances, the controller 107 has in-processor cache memory with data access performance that is better than the volatile DRAM 106 and/or the non-volatile storage media 109. Thus, parts of instructions and data used in the current computing task are cached in the in-processor cache memory of the controller 107 during the computing operations of the controller 107. In some instances, the controller 107 has multiple processors, each having its own in-processor cache memory.
[00221] Optionally, the controller 107 performs data intensive, in-memory processing using data and/or instructions organized in the memory device 103. For example, in response to a request from the host 101, the controller 107 performs a real-time analysis of a set of data stored in the memory device 103 and communicates a reduced data set to the host 101 as a response. For example, in some applications, the memory device 103 is connected to real-time sensors to store sensor inputs; and the processors of the controller 107 are configured to perform machine learning and/or pattern recognition based on the sensor inputs to support an artificial intelligence (AI) system that is implemented at least in part via the memory device 103 and/or the host 101.
[00222] In some implementations, the processors of the controller 107 are integrated with memory (e.g., 106 or 109) in computer chip fabrication to enable processing in memory and thus overcome the von Neumann bottleneck that limits computing performance as a result of a limit in throughput caused by latency in data moves between a processor and memory configured separately according to the von Neumann architecture. The integration of processing and memory increases processing speed and memory transfer rate, and decreases latency and power usage.
[00223] The memory device 103 can be used in various computing systems, such as a cloud computing system, an edge computing system, a fog computing system, and/or a standalone computer. In a cloud computing system, remote computer servers are connected in a network to store, manage, and process data. An edge computing system optimizes cloud computing by performing data processing at the edge of the computer network that is close to the data source and thus reduces data communications with a centralized server and/or data storage. A fog computing system uses one or more end-user devices or near-user edge devices to store data and thus reduces or eliminates the need to store the data in a centralized data warehouse.
[00224] At least some embodiments disclosed herein can be implemented using computer instructions executed by the controller 107, such as the firmware 104. In some instances, hardware circuits can be used to implement at least some of the functions of the firmware 104. The firmware 104 can be initially stored in the non-volatile storage media 109, or another non-volatile device, and loaded into the volatile DRAM 106 and/or the in-processor cache memory for execution by the controller 107.
[00225] For example, the firmware 104 can be configured to use the techniques discussed below in operating the PUF architecture. However, the techniques discussed below are not limited to being used in the computer system of FIG. 10 and/or the examples discussed above.
[00226] In some implementations, the output of the MAC module 123 can be used to provide, for example, a root key or a seed value. In other implementations, the output can be used to generate one or more session keys.
[00227] In one embodiment, the output from the MAC module 123 can be transmitted to another computing device. For example, the unique key 125 can be transmitted via host interface 105 to host 101.
[00228] FIG. 11 shows a system for generating unique key 125 from an output of MAC 123, which receives inputs from one or more PUF devices selected by a selector module 204, according to one embodiment. The system generates the unique key 125 from an output of the MAC module 123 using a PUF architecture similar to architecture 111 of FIG. 10, but including multiple PUF devices 202 and selector module 204. The MAC module 123 receives inputs from one or more PUF devices 202 selected by the selector module 204. In one example, PUF devices 202 include PUF device 121.
[00229] The PUF devices 202 can be, for example, identical or different (e.g., based on different random physical phenomena). In one embodiment, selector module 204 acts as an intelligent PUF selection block or circuit to select one or more of PUF devices 202 from which to obtain values to provide as inputs to the MAC module 123.
[00230] In one embodiment, the selector module 204 bases the selection of the PUF devices 202 at least in part on results from testing the PUF devices 202. For example, the selector module 204 can test the repeatability of each PUF device 202. If any PUF device 202 fails testing, then the selector module 204 excludes the failing device from providing an input value to the MAC module 123. In one example, the failing device can be excluded temporarily or indefinitely.
[00231] In some implementations, the selector module 204 permits testing the PUF functionality of each chip during production and/or during use in the field (e.g., by checking the repeatability of the value provided by each PUF device 202). If two or more values provided by a given PUF device are different, then the PUF device is determined to be failing and is excluded from use as an input to the MAC module 123.
[00232] In one embodiment, the selector module 204 is used to concurrently use multiple PUF devices 202 as sources for calculating an improved PUF output from the MAC module 123. For example, the selector module 204 can link a value from a first PUF device with a value from a second PUF device to provide as an input to the MAC module 123. In some implementations, this architecture permits obtaining a robust PUF output due to its dependence on several different physical phenomena.
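The selector-plus-MAC path above can be sketched as follows, assuming HMAC-SHA256 as the MAC and hypothetical two-read repeatability data for each PUF source (the concrete selection policy and readings are illustrative, not taken from the text):

```python
import hmac
import hashlib

def select_stable(readings):
    # Repeatability test: keep only PUF sources whose two readings match;
    # a source with differing readings is excluded, as the selector does.
    return [pair[0] for pair in readings if pair[0] == pair[1]]

def improved_puf(values, mac_key=b"\x00" * 32):
    # Link (concatenate) the selected values and feed them through the MAC.
    return hmac.new(mac_key, b"".join(values), hashlib.sha256).digest()

# Hypothetical readings from three PUF devices (two reads each).
readings = [
    (b"\xaa" * 16, b"\xaa" * 16),  # stable
    (b"\x11" * 16, b"\x12" * 16),  # not repeatable -> excluded
    (b"\x77" * 16, b"\x77" * 16),  # stable
]
unique_key = improved_puf(select_stable(readings))
```

Linking several stable sources makes the output depend on multiple independent physical phenomena, as noted above.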
[00233] FIG. 12 shows a system for generating a unique key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter 302 (and/or an input from another freshness mechanism like a NONCE, time-stamp, etc.), according to one embodiment. The system generates the unique key 125 from an output of the MAC module 123. The PUF architecture illustrated in FIG. 12 is similar to the PUF architecture illustrated in FIG. 11, except that a monotonic counter 302 is included to provide values to selector module 204. In various embodiments, the monotonic counter 302 can be implemented using hardware and/or software.
[00234] The MAC module 123 receives inputs from one or more PUF devices 202 and an input from the monotonic counter 302. In one example, values obtained from the PUF devices 202 and the monotonic counter 302 are linked and then provided as an input to the MAC module 123. In some implementations, the monotonic counter 302 is a non-volatile counter that only increments its value when requested. In some embodiments, the monotonic counter 302 is incremented after each power-up cycle of a chip.
[00235] In some implementations, the PUF architecture of FIG. 12 can be used to provide a way to securely share keys between a semiconductor chip and other components in an application, such as for example a public key mechanism.
[00236] In some implementations, the monotonic counter 302 is incremented before each calculation of a PUF, which ensures that the input of the MAC module 123 is different at each cycle, and thus the output (and/or pattern of output) provided is different. In some examples, this approach can be used to generate a session key, where each session key is different.
[00237] In some embodiments, the selector module 204 can selectively include or exclude the monotonic counter 302 (or other freshness mechanism like NONCE, timestamp) from providing a counter value as an input to the MAC module 123.
[00238] In some embodiments, the monotonic counter 302 is also used by cryptographic module 127. In some embodiments, a PUF architecture that includes the monotonic counter can be used as a session key generator to guarantee a different key at each cycle. In some implementations, the generated session key is protected in this way:
Session key = MAC [one or more PUFs | MTC or other freshness]
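This session-key construction can be sketched as follows, with a hypothetical PUF value and HMAC-SHA256 assumed as the MAC:

```python
import hmac
import hashlib

puf_value = b"\xc3" * 32  # hypothetical stable PUF reading

def session_key(mtc: int) -> bytes:
    # Session key = MAC [one or more PUFs | MTC]; incrementing the
    # counter changes the MAC input, and hence the key, each cycle.
    return hmac.new(b"\x00" * 32,
                    puf_value + mtc.to_bytes(4, "big"),
                    hashlib.sha256).digest()

k1, k2 = session_key(1), session_key(2)
assert k1 != k2  # a fresh key per counter increment
```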
[00239] In other embodiments, a mechanism is used as follows:
Session key = MACkey_based [ Root_Key, MTC or other freshness mechanism]
Where: Root_Key = an output value provided from the MAC module 123 above, or any other kind of key that is present on the chip.
The MACkey_based function above is, for example, a MAC algorithm based on a secret key. For example, there can be two types of MAC algorithm in cryptography:
1. An algorithm based on a secret key, for example the HMAC family (HMAC-SHA256 is key-based).
2. An algorithm that is not based on a secret key, for example SHA256 (SHA stand-alone is not key-based).
It should be noted that a key-based MAC can be transformed into a MAC that is not key-based by setting the key to a known value (e.g., 0x000... or 0xFFFF..., etc.).
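The distinction above can be sketched as follows (a minimal Python illustration; the message value is an assumption): HMAC-SHA256 is key-based, SHA-256 stand-alone is not, and fixing the HMAC key to a known constant yields an effectively keyless construction.

```python
import hashlib
import hmac

def mac_key_based(key: bytes, message: bytes) -> bytes:
    # Type 1: a MAC algorithm based on a secret key (HMAC family).
    return hmac.new(key, message, hashlib.sha256).digest()

def mac_not_key_based(message: bytes) -> bytes:
    # Type 2: not based on a secret key (SHA-256 stand-alone).
    return hashlib.sha256(message).digest()

# A key-based MAC becomes effectively non-key-based when its key is
# set to a publicly known value (e.g., all zeros or all ones):
KNOWN_KEY = b"\x00" * 32
tag = mac_key_based(KNOWN_KEY, b"example message")
```

Anyone who knows the fixed key can recompute `tag`, so the construction no longer provides the authentication property of a secret-key MAC.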
[00240] FIG. 13 shows a method to generate an output from a MAC that uses one or more input values provided from one or more PUFs, according to one
embodiment. For example, the method of FIG. 13 can be implemented in the memory device 103 of FIG. 10.
[00241] The method of FIG. 13 includes, at block 411, providing one or more values by at least one PUF (e.g., providing values from one or more of PUF devices 202).
[00242] At block 413, repeatability of one or more of the PUFs can be tested, for example as was described above. This testing is optional.
[00243] At block 415, if testing has been performed at block 413, and it has been determined that a PUF device fails the testing, then the failing PUF device is excluded from providing an input to the MAC. This excluding may be performed, for example, by selector module 204, as was discussed above.
[00244] At block 417, a value is provided from a monotonic counter (e.g., monotonic counter 302). The use of the monotonic counter in the PUF architecture is optional.
[00245] At block 419, an output is generated from the MAC, which uses one or more values provided by the PUFs (and optionally at least one value from the monotonic counter) as inputs to the MAC.
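The blocks of FIG. 13 can be sketched as follows (a minimal Python sketch under assumed names: PUF devices are modeled as callables returning bytes, repeatability is tested by comparing two successive reads, and SHA-256 stands in for the MAC module):

```python
import hashlib

def generate_output(pufs, counter_value=None):
    selected = []
    for puf in pufs:
        # Blocks 413/415: test repeatability by comparing two reads;
        # a PUF whose reads disagree is excluded from the MAC input.
        if puf() == puf():
            selected.append(puf())
    # Blocks 417/419: optionally link a monotonic counter value with
    # the PUF values, then use the result as the MAC input.
    data = b"".join(selected)
    if counter_value is not None:
        data += counter_value.to_bytes(8, "big")
    return hashlib.sha256(data).digest()

stable_puf = lambda: b"\xa5" * 16            # repeatable device
unstable = iter(range(100))
noisy_puf = lambda: bytes([next(unstable)])  # fails repeatability testing

out = generate_output([stable_puf, noisy_puf], counter_value=1)
```

Because `noisy_puf` returns a different value on each read, it is excluded, and the output depends only on the stable PUF and the counter value.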
[00246] Various other embodiments are now described below for a method implemented in a computing device that includes: providing, by at least one physical unclonable function (PUF), at least one value; and generating, based on a message authentication code (MAC), a first output, wherein the MAC uses the at least one value provided by the at least one PUF as an input for generating the first output.
[00247] In one embodiment, the computing device is a first computing device, and the method further comprises transmitting the first output to a second computing device, wherein the first output is a unique identifier of the first computing device.
[00248] In one embodiment, providing the at least one value comprises selecting a first value from a first PUF and selecting a second value from a second PUF.
[00249] In one embodiment, the method further comprises: providing a value from a monotonic counter; wherein generating the first output further comprises using the value from the monotonic counter as an additional input to the MAC for generating the first output.
[00250] In one embodiment, the method further comprises: generating a plurality of session keys based on respective outputs provided by the MAC, wherein the monotonic counter provides values used as inputs to the MAC; and incrementing the monotonic counter after generating each of the session keys.
[00251] In one embodiment, the method further comprises: testing repeatability of a first PUF of the at least one PUF; and based on determining that the first PUF fails the testing, excluding the first PUF from providing any input to the MAC when generating the first output.
[00252] In one embodiment, the testing comprises comparing two or more values provided by the first PUF.
[00253] In one embodiment, the computing device is a memory device, and the memory device comprises a non-volatile storage media configured to store an output value generated using the MAC.
[00254] In one embodiment, the method further comprises performing, by at least one processor, at least one cryptographic function, wherein performing the at least one cryptographic function comprises using an output value generated using the MAC.
[00255] In one embodiment, a non-transitory computer storage medium stores instructions which, when executed on a memory device (e.g., the memory device 103), cause the memory device to perform a method, the method comprising:
providing, by at least one physical unclonable function (PUF), at least one value; and generating, based on a message authentication code (MAC), a first output, wherein the MAC uses the at least one value provided by the at least one PUF as an input for generating the first output.
[00256] In various other embodiments described below, the method of FIG. 4 can be performed on a system that includes: at least one physical unclonable function (PUF) device; a message authentication code (MAC) module configured to receive a first input based on at least one value provided by the at least one PUF device; at least one processor; and memory containing instructions configured to instruct the at least one processor to generate, based on the first input, a first output from the MAC module.
[00257] In one embodiment, the MAC module includes a circuit. In one
embodiment, the first output from the MAC module is a key that identifies a die. In one embodiment, the first output from the MAC module is a root key, and the instructions are further configured to instruct the at least one processor to generate a session key using an output from the MAC module.
[00258] In one embodiment, the system is part of a semiconductor chip (e.g., one chip of several chips obtained from a semiconductor wafer), the first output from the MAC module is a unique value that identifies the chip, and the instructions are further configured to instruct the at least one processor to transmit the unique value to a computing device.
[00259] In one embodiment, the at least one PUF device comprises a plurality of PUF devices (e.g., PUF devices 202), and the system further comprises a selector module configured to select the at least one PUF device that provides the at least one value.
[00260] In one embodiment, the selector module is further configured to generate the first input for the MAC module by linking a first value from a first PUF device and a second value from a second PUF device.
[00261] In one embodiment, the system further comprises a monotonic counter configured to provide a counter value, and the instructions are further configured to instruct the at least one processor to generate the first input by linking the counter value with the at least one value provided by the at least one PUF device.
[00262] In one embodiment, the system further comprises a selector module configured to select the at least one PUF device that provides the at least one value, wherein linking the counter value with the at least one value provided by the at least one PUF device is performed by the selector module.
[00263] In one embodiment, the monotonic counter is further configured to increment, after generating the first input, the counter value to provide an
incremented value; and the instructions are further configured to instruct the at least one processor to generate, based on the incremented value and at least one new value provided by the at least one PUF device, a second output from the MAC module.
[00264] FIG. 14 shows a system for generating a root key from an output of a MAC that receives inputs from one or more PUF devices and an input from a monotonic counter (and/or an input from another freshness mechanism like NONCE, time- stamp, etc.), and that adds an additional MAC to generate a session key, according to one embodiment.
[00265] In one embodiment, the system generates the root key from an output of a MAC that receives inputs from one or more PUF devices 202 and an input from a monotonic counter 302 (and/or an input from another freshness mechanism like NONCE, time-stamp, etc.), and that adds an additional MAC module 504 to generate a session key using a root key input, according to one embodiment. In this embodiment, MAC module 123 provides root key 502 as the output from MAC module 123. Root key 502 is an input to the MAC module 504, which can use a MAC function such as Session key = MACkey_based [ Root_Key, MTC or other freshness mechanism], which was described above. The root key input in this key- based function can be root key 502, as illustrated.
[00266] Additionally, in one embodiment, monotonic counter 302 can provide an input to the MAC module 504. In other embodiments, a different monotonic counter or other value from the chip can be provided as an input to MAC module 504 instead of using monotonic counter 302. In some cases, the monotonic counter 302 provides a counter value to MAC module 504, but not to selector module 204. In other cases, the counter value can be provided to both MAC modules, or excluded from both modules.
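The two-stage scheme of FIG. 14 can be sketched as follows (names are illustrative assumptions; SHA-256 stands in for the first MAC module and HMAC-SHA256 for the key-based MAC module 504):

```python
import hashlib
import hmac

def derive_root_key(puf_values: bytes) -> bytes:
    # First MAC module: derives the root key from PUF device values.
    return hashlib.sha256(puf_values).digest()

def derive_session_key(root_key: bytes, counter: int) -> bytes:
    # Second, key-based MAC module:
    # Session key = MACkey_based[Root_Key, MTC or other freshness]
    return hmac.new(root_key, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

root = derive_root_key(b"\x3c" * 32)   # PUF values assumed
s1 = derive_session_key(root, 1)
s2 = derive_session_key(root, 2)       # counter incremented
```

Because the monotonic counter value differs at each cycle, each derived session key differs, while the root key never leaves the device.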
Key Generation and Secure Storage
[00267] As mentioned above, PUFs can be used for secure key generation.
Various embodiments discussed below relate to generating an initial key using at least one PUF, applying processing to increase obfuscation of the initial key, and storing the final obfuscated key in a non-volatile memory. The final obfuscated key and/or an intermediate key used to generate the final obfuscated key can be shared with another computing device and used for secure communication with the other computing device (e.g., messaging using symmetric cryptography based on a shared key). In some embodiments, the secure key generation is done for computing devices to be used in automotive applications (e.g., a controller in an autonomous vehicle).
[00268] In alternative embodiments, the initial key is generated in other ways that do not require using the at least one PUF above. In one embodiment, the initial key can be generated by using an injected key. For example, the initial key is present in a chip due to being injected in a factory or other secure environment. In this case, the applying processing to increase obfuscation of the initial key is performed by applying obfuscation processing to the injected key.
[00269] The automotive environment presents the technical problem of introducing “noise” during the key generation phase. Various embodiments below provide a technological solution to this problem by using a methodology that diminishes or avoids key variation due to this induced noise by storing an obfuscated key inside a non-volatile memory area.
[00270] The automotive environment can affect key generation in various ways.
For example, engine power-on can cause a drop in application power to a computing device resulting in a key being generated in the wrong manner. Temperature extremes can also affect the circuit that generates the key. Other sources such as magnetic fields from power lines can cause inter-symbol interference or crosstalk, making a host not recognize the device.
[00271] In contrast, if the key is generated in a safe environment and is stored in memory, it will be immune from noise. A safe environment can be, for example, directly mounted in a car, in a test environment, or in a factory (e.g., that
manufactures the computing device generating the key) depending on the strategy used to propagate the key between end users/customers of the computing device product.
[00272] In one example, ADAS or other computing systems as used in vehicles are subject to power supply variations. This can occur, for example, when turning on the vehicle, braking, powering the engine, etc.
[00273] Various embodiments to generate and store a key as discussed below provide the advantages of being substantially independent from external factors (e.g., power supply variations, temperature and other external sources of noise). Another advantage in some embodiments is that for every cycle, for example, the generation of the key vector is the same.
[00274] When storing the key, another advantage provided in some embodiments is that the key is substantially immune against hardware attack (e.g., that hackers might put in place). For example, one such attack is monitoring of the power-on current of a device so as to associate current variation to bits associated with the key. Other attacks can use, for example, voltage measurements (e.g., a Vdd supply voltage). Some attacks can use, for example, temperature variations to interfere with operation of a device.
[00275] In some embodiments, the initial key can be generated using the approaches and/or architectures as described above for FIGs. 10-14. For example, a PUF is used to generate the key for every power-on cycle of the computing device that is storing the key. In alternative embodiments, other approaches can be used to generate the initial key.
[00276] In one exemplary approach, as discussed earlier above, key injection uses at least one PUF and a MAC algorithm (e.g., SHA256) to generate a key for a device that is significantly different from other devices (e.g., from adjacent die located on a wafer). The MAC cryptography algorithm provides the benefit of increasing the entropy of the bits generated by the PUF.
[00277] In one embodiment, the generated key (e.g., the initial key as provided from a PUF and then a MAC algorithm) is stored in a non-volatile area of the device after pre-processing is performed on the key in order to diminish or avoid hacker attacks, and also to improve reliability of the stored key. In one embodiment, after the key is stored, the circuit generating the key can be disabled. The pre-processing is generally referred to herein as obfuscation processing. In one example, circuitry and/or other logic is used to implement the obfuscation processing on the device. In one example, the stored key can be read by the device because the key is independent from the external source of noise. An internal mechanism is used to read any data of the device.
[00278] In various embodiments, storing the key as described herein increases the margin against noise. Also, this makes it difficult for a hacker to read the stored key, for example, using a power monitoring or other hacking method.
[00279] At least some embodiments herein use a PUF and an encryption algorithm (e.g., HMAC-SHA256) to make the key generation independent from external factors such as temperature or voltage that may otherwise cause the key to be different from one power-on of the device to the next power-on. If this occurs, it can be a problem for a host to be able to exchange messages with the device. Various embodiments make the key generation more robust by placing the stored key in memory such that it is not impacted by external factors.
[00280] In one embodiment, the key is generated once on a device and stored in non-volatile memory of the device. In one example, the key can be generated using the content of an SRAM before a reset is applied to the SRAM. The key, which is a function of the PUF, is generated using the pseudo-random value output from the PUF. The content of the SRAM is read before a reset of the appliance or other device. The key can also be re-generated at other times through a command sequence, as may be desired. In one example, the generated key is used as a UDS in the DICE-RIoT protocol, as described above. In one example, the command sequence uses a replace command to replace a previously-generated UDS with a new UDS, as described above.
[00281] In one embodiment, the key generation is independent of the cryptography implemented by the device. The generated key is shared with a host. This embodiment stores the key and/or reads the key in the device in a way that avoids an attacker guessing the key and using it internally, such as for example by analyzing the shape of the current that the device absorbs during key usage.
[00282] In addition, for example, in asymmetric cryptography the generated key becomes the variable password that is the secret key of the system. The key is not shared with others. For public key cryptography, the key is used to generate a corresponding public key.
[00283] In various embodiments, an initial key is generated using an injected key or using one or more PUFs (e.g., to provide an initial key PUF0). The initial key is then subjected to one or more steps of obfuscation processing to provide intermediate keys (e.g., PUF1, PUF2, ..., PUF5) such as described below. The output (e.g., PUF5) from this processing is an obfuscated key that is stored in non-volatile memory of the device. When using an injected key, obfuscation processing is applied to the injected key similarly as described below for the non-limiting example of PUF0.
[00284] In one embodiment, as mentioned above, a mechanism is used as follows for the case of an initial injected key:
Session key = MACkey_based [ Root_Key, MTC or other freshness mechanism]
Where: Root_Key = any other kind of key that is present on the chip (e.g., the key can be an initial key injected in the chip in a factory or other secure environment)
[00285] In one embodiment, on first power-up of a device, a special sequence wakes up at least one circuit (e.g., a read circuit) of the device and verifies that the circuit(s) is executing properly. The device then generates an initial key PUF0, as mentioned above. This key can be stored or further processed to make it more robust for secure storage, as described below.
[00286] An intermediate key, PUF1, is generated by concatenating PUF0 with a predetermined bit sequence (e.g., a sequence known by others).
In one embodiment, PUF1 is used to verify the ability of the device to correctly read the key and to ensure that noise, such as fluctuations in the power supply, is not affecting the generated key.
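This first step can be sketched as follows (a minimal Python sketch; the pattern value, pattern length, and bit-string representation are assumptions):

```python
# PUF1 = PUF0 || predetermined pattern. The fixed, publicly known 0/1
# pattern lets the device later verify that its read circuitry is
# discriminating bits correctly.
KNOWN_PATTERN = "0101010101010101"  # 16-bit fixed pattern (assumed)

def make_puf1(puf0_bits: str) -> str:
    return puf0_bits + KNOWN_PATTERN

puf0 = "1101001011100001"  # illustrative PUF0 value
puf1 = make_puf1(puf0)
```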
[00287] A next intermediate key, PUF2, is generated. PUF1 is interleaved with an inverted bit pattern (e.g., formed by inverting the bits of PUF1, and sometimes referred to herein as PUF1 bar) to generate PUF2.
[00288] In one embodiment, PUF2 has an equal number of 0 and 1 bits. This makes the shape of the device current substantially the same for any key (e.g., any key stored on the device). This reduces the possibility of an attacker guessing the key value by looking at the shape of the device current when the key is being read by the device.
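A sketch of this interleaving step (the exact interleaving rule is device-specific; strict bit-by-bit alternation is an assumption here):

```python
def invert_bits(bits: str) -> str:
    # "PUF1 bar": each bit of PUF1 inverted.
    return "".join("1" if b == "0" else "0" for b in bits)

def make_puf2(puf1_bits: str) -> str:
    # Interleave PUF1 with PUF1 bar; every second bit of the result is
    # an inverted bit, so 0s and 1s are balanced for any key value.
    return "".join(a + b for a, b in zip(puf1_bits, invert_bits(puf1_bits)))

puf2 = make_puf2("1101")  # -> "10100110"
```

Because each original bit is paired with its inverse, the 0/1 balance holds regardless of the PUF1 value, which is what flattens the current profile seen by a power-monitoring attacker.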
[00289] A next intermediate key, PUF3, is generated. The bits of PUF2 are interleaved with pseudo-random bits to form PUF3. This further helps to obfuscate the key. In one embodiment, the pseudo-random bits are derived from PUF1 or PUF2 by using a hash function. For example, these derived bits are added to PUF2 to form PUF3.
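This step can be sketched as follows (the stuffing rule and the use of SHA-256 as the hash deriving the pseudo-random bits are assumptions):

```python
import hashlib

def make_puf3(puf2_bits: str) -> str:
    # Derive pseudo-random bits from PUF2 via a hash function, then
    # interleave them with the bits of PUF2 (here, one pseudo-random
    # bit is stuffed after every two key bits).
    digest = hashlib.sha256(puf2_bits.encode()).digest()
    prng_bits = "".join(f"{byte:08b}" for byte in digest)
    out = []
    for i, bit in enumerate(puf2_bits):
        out.append(bit)
        if i % 2 == 1:
            out.append(prng_bits[i // 2])
    return "".join(out)

puf3 = make_puf3("10100110")
```

The device can always recover PUF2 by stripping every third bit, since the stuffing positions are fixed and known internally.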
[00290] A next intermediate key, PUF4, is generated. Error Correction Codes (ECCs) are generated by the internal circuitry of the device (e.g., during
programming). The bits of the ECC are added to PUF3 to generate PUF4. In one embodiment, the ECC bits help guard against the effects of non-volatile memory (e.g., NVRAM) aging that can be caused by, for example, device endurance limits, X-rays and particles. Non-volatile memory aging can also be caused, for example, by an increase in the number of electrons in the NV cell which can cause bits to flip.
[00291] A next intermediate key, PUF5, is generated. PUF5 is a concatenation of several copies of PUF4. Having the redundancy of multiple PUF4 copies present in PUF5 further increases robustness by increasing the likelihood of being able to correctly read the key at a later time. In one embodiment, several copies of PUF5 are stored in various regions of non-volatile memory storage to further increase robustness. For example, even if PUF5 is corrupted in one of the regions, PUF5 can be read from another of the regions, and thus the correct key can be extracted.
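The two final steps can be sketched as follows (a single parity bit stands in for a real ECC, and the copy count is an assumption):

```python
def make_puf4(puf3_bits: str) -> str:
    # PUF4 = PUF3 || ECC bits. A parity bit is a placeholder here for
    # the ECC generated by the device's internal circuitry.
    parity = str(puf3_bits.count("1") % 2)
    return puf3_bits + parity

def make_puf5(puf4_bits: str, copies: int = 3) -> str:
    # PUF5 = PUF4 || PUF4 || ... || PUF4 (redundant copies increase
    # the likelihood of correctly reading the key later).
    return puf4_bits * copies

puf4 = make_puf4("101001101011")
puf5 = make_puf5(puf4)
```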
[00292] In one embodiment, PUF1 or PUF3 is the key that is shared with a host for symmetric cryptography, or used to generate a public key for asymmetric
cryptography. In one embodiment, PUF4 and PUF5 are not shared with end users or a host.
[00293] The above approach is modular in that PUF2, PUF3, PUF4 and/or PUF5 are not required for generating an obfuscated key. Instead, in various embodiments, one or more of the foregoing obfuscation steps can be applied to the initial key, and further the ordering can be varied. For example, the number of obfuscation steps can be decreased for a system that is known not to have Vdd voltage supply drops.
[00294] In one embodiment, when storing the obfuscated key, the bit patterns are physically spread around the non-volatile storage media (e.g., in different rows and words). This permits the device to read the bits at the same time and protects against multi-bit errors.
[00295] FIG. 15 shows a computing device 603 for storing an obfuscated key 635 in non-volatile memory (e.g., non-volatile storage media 109), according to one embodiment. Computing device 603 is an example of computing device 141 of FIG. 1. In one example, the obfuscated key is used as a UDS. For example, the obfuscation adds entropy to the bits of the key to avoid a possible attempt by a hacker to understand the value of the key. The device is always able to extract the key by removing the added bits used as obfuscation. In one example, a common hacker attack consists of guessing the secret key generated/elaborated inside the device by processing, with statistical tools, the current profile absorbed by the device in some particular timeframe. The obfuscation considerably mitigates this problem.
[00296] An initial key 625 is generated based on a value provided by at least one physical unclonable function device 121. The obfuscated key 635 is generated based on initial key 625. After being generated, the obfuscated key 635 is stored in non-volatile storage media 109.
[00297] In one embodiment, a message authentication code (MAC) 123 uses the value from PUF device 121 as an input and provides the initial key 625 as an output. In one embodiment, obfuscation processing module 630 is used to perform processing on initial key 625 in order to provide obfuscated key 635 (e.g., PUF5), for example as was discussed above.
[00298] In one embodiment, the obfuscated key 635 is securely distributed to another computing device as described in related U.S. Non-Provisional Application Serial No. 15/965,731, filed 27-APR-2018, entitled “SECURE DISTRIBUTION OF SECRET KEY USING A MONOTONIC COUNTER,” by Mondello et al., the entire contents of which application are incorporated by reference as if fully set forth herein. In other embodiments, initial key 625 and/or any one or more of the intermediate keys from the obfuscation processing described herein can be securely distributed in the same or a similar manner. Optionally, an end user/customer uses the foregoing approach to read the value of an initial key (e.g., PUF0), an intermediate key, and/or a final obfuscated key (e.g., PUF5). For example, the end user can verify the proper execution of the internal generation of the key by the device, and/or monitor the statistical quality of the key generation.
[00299] FIG. 16 shows an example of an intermediate key (PUF2) generated during an obfuscation process by obfuscation processing module 630, according to one embodiment. As mentioned above, the bits of PUF1 are inverted to provide inverted bits 702. Bits 702 are interleaved with the bits of PUF1 as illustrated. For example, every second bit in the illustrated key is an interleaved inverted bit 702.
[00300] FIG. 17 shows an example of another intermediate key (PUF3) generated during the obfuscation process of FIG. 16 (PUF3 is based on PUF2 in this example), according to one embodiment. As mentioned above, the bits of PUF2 are further interleaved with pseudo-random bits 802. As illustrated, bits 802 are interleaved with PUF2. For example, every third bit in the illustrated key is an interleaved pseudo-random bit 802.
[00301] FIG. 18 shows a method for generating and storing an obfuscated key (e.g., obfuscated key 635) in a non-volatile memory (e.g., non-volatile storage media 109), according to one embodiment. In one example, memory system 105 of FIG. 2 stores the obfuscated key in non-volatile memory 121.
[00302] In block 911, an initial key is generated based on a value provided by at least one physical unclonable function (PUF).
[00303] In other embodiments, in block 911, the initial key is generated by key injection. For example, the initial key can simply be a value injected into a chip during manufacture.
[00304] In block 913, an obfuscated key is generated based on the initial key. For example, the generated obfuscated key is PUF3 or PUF5.
[00305] In block 915, the obfuscated key is stored in a non-volatile memory of a computing device. For example, the obfuscated key is stored in NAND flash memory or an EEPROM.
[00306] In one embodiment, a method includes: generating an initial key using key injection; generating an obfuscated key based on the initial key; and storing the obfuscated key in non-volatile memory. For example, the initial key can be the key injected during a key injection process at the time of manufacture.
[00307] In one embodiment, a method comprises: generating an initial key provided by key injection or based on a value provided by at least one physical unclonable function (PUF); generating an obfuscated key based on the initial key; and storing the obfuscated key in a non-volatile memory of the computing device.
[00308] In one embodiment, generating the initial key comprises using the value from the PUF (or, for example, another value on the chip) as an input to a message authentication code (MAC) to generate the initial key.
[00309] In one embodiment, the obfuscated key is stored in the non-volatile memory outside of user-addressable memory space.
[00310] In one embodiment, generating the obfuscated key comprises
concatenating the initial key with a predetermined pattern of bits.
[00311] In one embodiment, concatenating the initial key with the predetermined pattern of bits provides a first key (e.g., PUF1 ); and generating the obfuscated key further comprises interleaving the first key with an inverted bit pattern, wherein the inverted bit pattern is provided by inverting bits of the first key.
[00312] In one embodiment, interleaving the first key with the inverted bit pattern provides a second key (e.g., PUF2); and generating the obfuscated key further comprises interleaving the second key with pseudo-random bits.
[00313] In one embodiment, the method further comprises deriving the pseudo random bits from the first key or the second key using a hash function.
[00314] In one embodiment, interleaving the second key with pseudo-random bits provides a third key (e.g., PUF3); and generating the obfuscated key further comprises concatenating the third key with error correction code bits.
[00315] In one embodiment, the computing device is a first computing device, the method further comprising sharing at least one of the initial key, the first key, or the third key with a second computing device, and receiving messages from the second computing device encrypted using the shared at least one of the initial key, the first key, or the third key.
[00316] In one embodiment, concatenating the third key with error correction code bits provides a fourth key (e.g., PUF4); and generating the obfuscated key further comprises concatenating the fourth key with one or more copies of the fourth key.
[00317] In one embodiment, concatenating the fourth key with one or more copies of the fourth key provides a fifth key (e.g., PUF5); and storing the obfuscated key comprises storing a first copy of the fifth key on at least one of a different row or block of the non-volatile memory than a row or block on which a second copy of the fifth key is stored.
[00318] In one embodiment, a system comprises: at least one physical unclonable function (PUF) device (e.g., PUF device 121 ) configured to provide a first value; a non-volatile memory (e.g., non-volatile storage media 109) configured to store an obfuscated key (e.g., key 635); at least one processor; and memory containing instructions configured to instruct the at least one processor to: generate an initial key based on the first value provided by the at least one PUF device;
generate the obfuscated key based on the initial key; and store the obfuscated key in the non-volatile memory.
[00319] In one embodiment, the system further comprises a message
authentication code (MAC) module (e.g., MAC 123) configured to receive values provided by the at least one PUF device, wherein generating the initial key comprises using the first value as an input to the MAC module to generate the initial key.
[00320] In one embodiment, generating the obfuscated key comprises at least one of: concatenating a key with a predetermined pattern of bits; interleaving a first key with an inverted bit pattern of the first key; interleaving a key with pseudo-random bits; concatenating a key with error correction code bits; or concatenating a second key with one or more copies of the second key.
[00321] In one embodiment, the stored obfuscated key has an equal number of zero bits and one bits.
[00322] In one embodiment, generating the obfuscated key comprises
concatenating the initial key with a first pattern of bits.
[00323] In one embodiment, concatenating the initial key with the first pattern of bits provides a first key; and generating the obfuscated key further comprises interleaving the first key with a second pattern of bits.
[00324] In one embodiment, generating the obfuscated key further comprises interleaving a key with pseudo-random bits.
[00325] In one embodiment, generating the obfuscated key further comprises concatenating a key with error correction code bits.
[00326] In one embodiment, a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to perform a method, the method comprising: generating an initial key using at least one physical unclonable function (PUF); generating an obfuscated key based on the initial key; and storing the obfuscated key in non-volatile memory.
[00327] FIG. 19 shows computing device 1003 used for generating initial key 625 based on key injection 1010, obfuscating the initial key, and storing the obfuscated key in non-volatile memory, according to one embodiment.
[00328] In one embodiment, the initial key 625 is generated by using the injected key 1010. For example, initial key 625 is present in a chip by being injected in a factory or other secure environment during manufacture, or other assembly or testing. In one example, the initial key 625 is used as an initial UDS for computing device 1003. The obfuscation can also be applied to the UDS. The UDS is the secret from which the DICE-RIoT protocol starts the secure generation of keys and certificates. The applying of processing to increase obfuscation of the initial key is performed by applying obfuscation processing (via module 630) to the injected key (e.g., the value from key injection 1010). In other embodiments, obfuscation processing can be applied to any other value that may be stored or otherwise present on a chip or die.
Variations of Key Generation and Secure Storage
[00329] Various additional non-limiting embodiments are now described below. In one embodiment, after (or during) first power up of a system board, a special sequence is activated to turn on the device containing a cryptographic engine (e.g., cryptographic module 127). The sequence further wakes up the internal PUF and verifies its functionality; the PUF then generates an initial value PUF0, for instance as described above. The PUF0 value is processed by an on-chip algorithm (e.g., by obfuscation processing module 630) and written in a special region of a non-volatile array (out of the user-addressable space). In alternative embodiments, instead of the PUF0 value, an injected key is processed by the on-chip algorithm similarly as described below to provide an obfuscated key for storage.
[00330] In one embodiment, obfuscation processing is performed to prevent Vdd (voltage) and/or temperature fault hacker attacks. This processing includes concatenating PUF0 with a well-known pattern (e.g., one that contains a fixed number of 0/1 bits). These bits make it possible, each time the PUF value is internally read during the life of the device (e.g., chip), to determine whether the read circuitry is able to properly discriminate 0/1 bits. For example, PUF1 = PUF0 || 0101...01.
[00331] Next, the result of the above processing (e.g., PUF1) is further augmented with dummy bits (e.g., to avoid Icc hacker analysis). Specifically, for example, the bits of PUF1 are interleaved with an inverted version of PUF1 (i.e., PUF1bar, which is formed by inverting each bit of PUF1). For example, PUF2 = PUF1 interleaved with PUF1bar.
[00332] In one embodiment, the rule of interleaving depends on the kind of column decoder (e.g., of a non-volatile array) that is present on the chip/device. The device ensures that at each read of the PUF value (from the non-volatile array), the read circuitry processes (in a single shot) the same number of bits from PUF1 and PUF1bar. This ensures reading the same number of bits at values of 0 and 1, which provides a regular shape in the supply current (Idd).
[00333] Next, the bits of PUF2 are further interleaved with pseudo-random bits to form PUF3. In one example, the interleaving depends on the non-volatile array column decoder structure. In one embodiment, the output contains the same number of PUF2 bits, stuffed with a certain number of pseudo-random bits (e.g., in order to obfuscate any residual correlation that may be present in the PUF2 pattern).
[00334] In one embodiment, the pseudo-random bits can be derived from PUF1 or PUF2 by using a hash function. Other alternative approaches can also be used.
[00335] In one embodiment, optionally, to reduce or prevent bit loss due to non-volatile memory aging, the bits of PUF3 are concatenated with error correction code (ECC) bits to form PUF4. In one embodiment, the bits of PUF4 are optionally replicated one or more times (which also extends ECC capabilities). For example, the foregoing may be implemented on a NAND memory. In one example, PUF5 = PUF4 || PUF4 || ... || PUF4.
[00336] In one embodiment, the value of PUF5 can be written two or more times on different rows and/or blocks of a non-volatile memory array.
[00337] As a result of the above obfuscation processing, for example, once the final PUF value is written into a non-volatile array block, the value can be used with diminished or no concern about key reliability (e.g., due to noise, or charge loss), or any attempt to infer its value by Idd analysis or forcing its value by Vdd fault attack.
[00338] In one embodiment, once obfuscation processing is completed, the PUF circuitry can be disabled. In one embodiment, after disablement, the PUF device can provide values used internally on a device for other purposes (e.g., using a standard read operation inside the non-volatile array).
[00339] In one embodiment, key bits are differentiated from random bits when extracting a key from PUF3. For example, internal logic of a device storing a key is aware of the position and method required to return from PUF5 to a prior or original PUF (e.g., PUF3).
[00340] In one embodiment, the bit positions of key bits are known by the device extracting the key. For example, the internal logic of the device can receive one of the intermediate PUFs or the final key PUF5, depending on design choice. Then, applying the operation(s) in the reverse order will obtain the original PUF. For example, the processing steps from PUF1 to PUF5 are executed to store the obfuscated PUF in a manner such that a hacker would have to both read the content (e.g., key bits) and know the operation(s) that were applied in order to get back to and determine the original key.
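As a minimal illustration, the forward obfuscation chain (PUF0 through PUF5) and its reversal described above can be sketched as follows. This is a hedged sketch, not the patent's implementation: bit strings are modeled as Python text strings of '0'/'1', the marker length, interleaving rule, replica count, and all function names are assumptions, and the ECC step (PUF3 to PUF4) is omitted for brevity.

```python
import hashlib

MARKER = "01" * 8          # well-known alternating 0/1 pattern (PUF1 step)
REPLICAS = 3               # number of stored copies (PUF5 step); illustrative

def interleave(a: str, b: str) -> str:
    # Alternate bits of a and b: a0 b0 a1 b1 ...
    return "".join(x + y for x, y in zip(a, b))

def deinterleave(s: str) -> str:
    # Recover the first operand of interleave() (even positions).
    return s[0::2]

def pseudo_random_bits(seed: str, n: int) -> str:
    # Derive pseudo-random bits from an earlier PUF value via a hash,
    # as suggested in paragraph [00334] above (sketch: SHA-256 of the seed).
    digest = hashlib.sha256(seed.encode()).digest()
    return "".join(f"{byte:08b}" for byte in digest)[:n]

def obfuscate(puf0: str) -> str:
    puf1 = puf0 + MARKER                                 # concatenate marker
    puf1_bar = "".join("1" if b == "0" else "0" for b in puf1)
    puf2 = interleave(puf1, puf1_bar)                    # balanced 0/1 -> flat Idd
    puf3 = interleave(puf2, pseudo_random_bits(puf1, len(puf2)))
    return puf3 * REPLICAS                               # replicate (PUF5); ECC omitted

def deobfuscate(puf5: str) -> str:
    puf3 = puf5[: len(puf5) // REPLICAS]                 # take the first replica
    puf2 = deinterleave(puf3)                            # drop pseudo-random bits
    puf1 = deinterleave(puf2)                            # drop inverted copy
    assert puf1.endswith(MARKER)                         # read-circuitry sanity check
    return puf1[: -len(MARKER)]                          # recover original PUF0
```

Reversing the operations in the opposite order, as the paragraph above describes, recovers the original value only if the marker, interleaving rule, and replica count are known.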
Generating an Identity for a Computing Device Using a PUF
[00341] Various embodiments related to generating an identity for a computing device using a physical unclonable function (PUF) are now described below. The generality of the following description is not limited by the various embodiments described above.
[00342] In prior approaches, it is necessary for a manufacturer of a computing device to share one or more secret keys with a customer that purchases the computing device in order to establish an identity for the computing device.
However, the sharing of secret keys causes technical problems due to the need for a cumbersome, complex, and expensive secure channel and infrastructure with the customer for sharing the keys. Further, personnel services are required to implement the key sharing. Moreover, the foregoing security needs increase the risk that security measures will be compromised by a hacker or other unauthorized person.
[00343] Various embodiments of the present disclosure discussed below provide a technological solution to the above technical problems. In various embodiments, a computing device generates an identity using one or more PUFs. In one example, the identity is a UDS.
[00344] In one embodiment, the generation of identity for the computing device is assigned in an automatic way (e.g., based upon a scheduled time or occurrence of a predetermined event, in response to which a computing device will self-generate a UDS using a PUF). By assigning identity using a PUF, the complexity and expense of identity assignment can be reduced.
[00345] After the computing device generates the identity, it can be used to generate a triple of an identifier, a certificate, and a key. In one embodiment, the triple is generated in response to receiving a message from a host device. The host device can use the generated identifier, certificate, and key to verify the identity of the computing device. After the identity is verified, further secure communications by the host device with the computing device can be performed using the key.
[00346] In some embodiments, the computing device generates the identity in response to receiving a command from the host device. For example, the command can be a secure replace command that is authenticated by the computing device. After generating the identity, the computing device sends a confirmation message to the host device to confirm that the replacement identity was generated.
In one example, the replacement identity is a new UDS that is stored in non-volatile memory and replaces a previously-stored UDS (e.g., a UDS assigned by the original manufacturer of the computing device).
[00347] In one embodiment, the identity is a device secret (e.g., a UDS as used in the DICE-RIoT protocol, such as for example discussed above) stored in memory of a computing device. At least one value is provided by one or more PUFs of the computing device. The computing device generates the device secret using a key derivative function (KDF). The value(s) provided by the one or more PUFs are input(s) to the KDF. The output of the KDF provides the device secret and is stored in memory of the computing device as the device secret.
[00348] In one example, the KDF is a hash. In one example, the KDF is a message authentication code.
[00349] In one embodiment, the computing device stores a secret key that is used to communicate with a host device, and the KDF is a message authentication code (MAC). The at least one value provided by one or more PUFs is a first input to the MAC, and the secret key is used as a second input to the MAC.
[00350] In some examples, the computing device can be a flash memory device. For example, serial NOR can be used.
[00351] FIG. 20 shows computing device 141 as used for generating an identity (e.g., a UDS for computing device 141) using a physical unclonable function (PUF) 2005, according to one embodiment. More specifically, a value is provided by PUF 2005. This value is provided as an input to key derivative function (KDF) 2007.
The output from the KDF is stored as device secret 149.
[00352] In one embodiment, device secret 149 is a UDS as used in the DICE-RIoT protocol. Similarly as described above, the UDS can be used as a basis for generating a triple for sending to host device 151. This triple includes a public key that can be used by host device 151 for secure communications with computing device 141.
[00353] In one embodiment, the generation of the device secret is performed in response to receiving a command from host device 151 via a host interface 2009. In one example, the command is a replace command. In one example, the command is accompanied by a signature signed by the host device 151 using a secret key. After generating the device secret, a confirmation message is sent to host device 151 via host interface 2009.
[00354] In one embodiment, computing device 141 stores a secret key 2013. For example, the secret key 2013 can be shared with host device 151. In one example, host device 151 uses the secret key 2013 to sign a signature sent with a replace command.
[00355] In one embodiment, KDF 2007 is a message authentication code. The secret key 2013 is used as a key input to KDF 2007 when generating the device secret. The value from PUF 2005 is used as a data input to KDF 2007.
[00356] A freshness mechanism is implemented in computing device 141 using a monotonic counter 2003. Monotonic counter 2003 can provide values for use as a freshness in secure communications with host device 151.
[00357] A unique identifier (UID) 2001 is stored in memory of computing device 141. For example, UID 2001 is injected at the factory.
[00358] In one embodiment, computing device 141 is delivered to a customer with a well-known UDS (e.g., a trivial UDS = 0x00000...000 or similar). The customer uses host device 151 in its factory to request that computing device 141 self-generate a new UDS (e.g., UDS_puf). This step can be done by using an authenticated command. Only a customer or host device that knows the secret key 2013 is able to perform this operation (e.g., the secret key 2013 can be more generally used to manage an authenticated command set supported by computing device 141).
[00359] Once the UDS_puf is generated, it is used to replace the original (trivial) UDS. The replacement happens by using an authenticated command. The external host device (the customer) can read the UDS_puf.
[00360] The generated UDS_puf can be used to implement the DICE-RIoT protocol. For example, FDS can be calculated using UDS_puf, similarly as described above for identity component 147 and identity component 107. In one example, FDS = HMAC-SHA256 [UDS, SHA256("Identity of L1")].
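The FDS calculation in the example above can be sketched with standard library primitives; the UDS byte value used here is an illustrative assumption, not a real secret:

```python
import hashlib
import hmac

# FDS = HMAC-SHA256 [UDS, SHA256("Identity of L1")], per the example above.
uds_puf = bytes.fromhex("aa" * 32)                  # hypothetical 256-bit UDS
layer_info = hashlib.sha256(b"Identity of L1").digest()
fds = hmac.new(uds_puf, layer_info, hashlib.sha256).digest()  # 32-byte FDS
```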
[00361] In addition, a triple (e.g., KL1) that includes an identifier, certificate, and key can be generated using UDS_puf, similarly as described for FIG. 1 above. The host device uses the key (e.g., KL1public) for trusted communications with computing device 141.
[00362] In one embodiment, the identity generation mechanism above can be automatically executed by the computing device (e.g., an application board including a processor) at first use of the application board, or in the field once a scheduled or predetermined event occurs (e.g., as scheduled/determined by the customer and stored in memory 145 as a configuration of the computing device such as an update, etc.).
[00363] In one embodiment, self-identity generation is performed as follows: A configurator host device (e.g., a laptop with software) is connected to a computing device coupled to an autonomous vehicle bus (e.g., using a secure over-the-air interface, etc.). The host device uses authenticated commands to request that the computing device self-generate a UDS (e.g., UDSPUF). The authentication is based on secret key 2013 (e.g., the secret key can be injected by a manufacturer and provided to the customer with a secure infrastructure).
[00364] The authenticated command execution is confirmed with an authenticated response (e.g., "Confirmation" as illustrated in FIG. 20). At the end of UDSPUF generation, the host device is informed about the UDSPUF generated by using a secure protocol (e.g., by sending over a secure wired and/or wireless network(s) using a freshness provided by the monotonic counter 2003).
[00365] In one embodiment, host interface 2009 is a command interface that supports authenticated and replay protected commands. In one embodiment, the authentication is based on a secret key (e.g., secret key 2013) and uses a MAC algorithm (e.g., HMAC). In one example, an identity generation command is received from host device 151 that includes a signature based on a command opcode, command parameters, and a freshness. In one example, the signature = MAC (opcode | parameters | freshness, secret key).
[00366] In one embodiment, computing device 141 provides an identity generation confirmation including a signature. In one example, the signature = MAC (command result | freshness, secret key).
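The two signatures above (command and confirmation) can be sketched as follows, assuming HMAC-SHA256 as the MAC, plain byte concatenation of the fields, and a 4-byte big-endian freshness encoding; the opcode and argument bytes reuse the 0x9b/0x34 example given later in this document, and the key value is illustrative:

```python
import hashlib
import hmac

SECRET_KEY = b"\x01" * 32        # injected shared secret (illustrative value)

def sign_command(opcode: bytes, params: bytes, freshness: int) -> bytes:
    # signature = MAC(opcode | parameters | freshness, secret key)
    msg = opcode + params + freshness.to_bytes(4, "big")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()

def sign_confirmation(result: bytes, freshness: int) -> bytes:
    # signature = MAC(command result | freshness, secret key)
    msg = result + freshness.to_bytes(4, "big")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()

# Host signs an identity-generation command; the device recomputes the MAC
# over the same fields and compares in constant time to authenticate it.
sig = sign_command(b"\x9b", b"\x34", freshness=7)
ok = hmac.compare_digest(sig, sign_command(b"\x9b", b"\x34", 7))
```

Including the monotonic-counter freshness in the MAC input is what makes the command replay protected: re-sending an old signature with a stale freshness fails verification.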
[00367] In one embodiment, the secret key 2013 is injected in the factory. In one example, the secret key can be symmetric (e.g., based on HMAC-SHA256). In another example, the secret key can use an asymmetric scheme (e.g., ECDSA).
[00368] The device secret 149 (e.g., UDSPUF) can be generated using various options. For example, once the generation flow is activated, by using the proper authenticated and replay protected command, the UDSPUF is generated according to a command option selected as follows:
[00369] Option #1: the at least one value from a PUF pattern (sometimes referred to as "PUF RAW", and also sometimes denoted as UDSRAW) is read and provided to KDF 2007 (key derivative function) (e.g., a SHA256). The result of this process is the final key. So: UDSPUF = KDF (UDSRAW)
[00370] Option #2: the PUF RAW pattern (UDSRAW) is read and the UDSPUF is calculated as: UDSPUF = KDF (UDSRAW | UID), where UID is a public unique ID assigned to all the devices (in some cases, the UID can also be assigned to non-secure devices). In one embodiment, UID 2001 is used as the input to the KDF.
[00371] Option #3: the PUF RAW pattern (UDSRAW) is read and the UDSPUF is calculated as: UDSPUF = KDF (UDSRAW | UID | HASH (user pattern)), where the user pattern is provided by the host in the command payload that requests self-identity generation.
[00372] Option #4: the PUF RAW pattern (UDSRAW) is read and the UDSPUF is calculated as: UDSPUF = KDF (UDSRAW | UID | HASH (user pattern) | freshness), where the freshness is, for example, the freshness provided in the command payload.
[00373] More generally, the UDS is calculated as UDSPUF = KDF [(info provided by the host device | info present in the device)]. The KDF function can be a simple HASH function (e.g., SHA256) or a MAC function (e.g., HMAC-SHA256), which uses a secret key.
[00374] In one example, when a MAC function is used as the KDF, the secret key used is the same key used to provide the authenticated commands. The device secret (e.g., UDS) can be generated using one of the options as follows:
[00375] Option #5: UDSPUF = MAC [Secret_Key, (UDSRAW)]
[00376] Option #6: UDSPUF = MAC [Secret_Key, (UDSRAW | UID)]
[00377] Option #7: UDSPUF = MAC [Secret_Key, (UDSRAW | UID | HASH (user pattern))]
[00378] Option #8: UDSPUF = MAC [Secret_Key, (UDSRAW | UID | HASH (user pattern) | freshness)]
[00379] More generally, UDSPUF = MAC [Secret_Key, (info provided by the host | info present in the device)]
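Options #1 through #8 above can be sketched as follows, taking SHA-256 as the hash-style KDF and HMAC-SHA256 as the MAC-style KDF, and treating the "|" in the formulas as byte concatenation; every input value shown is an illustrative assumption:

```python
import hashlib
import hmac

uds_raw = b"\x5a" * 32            # raw PUF pattern (illustrative)
uid = b"\x00\x11\x22\x33"         # public unique ID (illustrative)
user_pattern = b"customer-pattern"
freshness = (42).to_bytes(4, "big")
secret_key = b"\x01" * 32         # same key used for authenticated commands

def kdf_hash(data: bytes) -> bytes:            # hash form: Options #1-#4
    return hashlib.sha256(data).digest()

def kdf_mac(key: bytes, data: bytes) -> bytes:  # MAC form: Options #5-#8
    return hmac.new(key, data, hashlib.sha256).digest()

# Option #1: UDSPUF = KDF(UDSRAW)
uds_opt1 = kdf_hash(uds_raw)
# Option #3: UDSPUF = KDF(UDSRAW | UID | HASH(user pattern))
uds_opt3 = kdf_hash(uds_raw + uid + hashlib.sha256(user_pattern).digest())
# Option #8: UDSPUF = MAC[Secret_Key, (UDSRAW | UID | HASH(user pattern) | freshness)]
uds_opt8 = kdf_mac(secret_key,
                   uds_raw + uid + hashlib.sha256(user_pattern).digest()
                   + freshness)
```

The remaining options follow the same pattern, simply adding or dropping the UID, hashed user pattern, and freshness fields from the concatenated input.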
[00380] As mentioned above, the UDSPUF can be communicated to the host device 151 after being generated. In one embodiment, the host device 151 can directly read the UDSPUF. The UDSPUF can be read only a predetermined number of times (e.g., just once or a few times). In one example, the process as described for FIG. 21 below can be used. After reading the UDSPUF the predetermined number of times, the read mechanism is permanently disabled. For example, this approach can be used in a secure environment (e.g., a customer's factory when assembling a computing system or other product using computing device 141).
[00381] In one example, the UDSPUF is encrypted by computing device 141 using a public key received from host device 151, and then the UDSPUF is sent to host device 151. After this procedure, the UDSPUF read mechanism is permanently disabled.
[00382] In another embodiment, computing device 141 includes obfuscation processing module 2011. Obfuscation processing module 2011 is an example of obfuscation processing module 630 of FIG. 15, or obfuscation processing module 630 of FIG. 19.
[00383] In one embodiment, the UDSPUF is encrypted with the host public key and communicated to the host device; the host device uses its corresponding secret key to decrypt it. The host public key is communicated to the computing device 141 during a specific communication setup phase. In one embodiment, the sharing of the UDSPUF can be done by a direct read operation using an authenticated command, and after a predetermined number of reads (e.g., normally just one), such read operation is disabled forever. The computing device 141 stores the UDS key in an anti-tamper area. Each time the UDS key is used, including for internal use by the computing device, an obfuscation mechanism is applied to avoid information leakage (e.g., to prevent a hacker from guessing the stored UDS key by analyzing the current or voltage waveform). For example, the obfuscation mechanism used can be as described above for FIGs. 15, 18, and 19.
[00384] In one embodiment, computing device 141 obfuscates the device secret prior to sending the encrypted device secret to the host device 151.

[00385] FIG. 21 shows a system that sends an initial value provided by a monotonic counter of the system for use in determining whether tampering with the system has occurred, according to one embodiment. In one example, the system is computing device 141 of FIG. 20.
[00386] In some embodiments, the system uses a secret key for secure communication with other devices (e.g., host device 151). The secret key is generated and stored on the system (e.g., by key injection at the factory after initial assembly of a system board). A monotonic counter of the system is used to provide an initial value.
[00387] In another embodiment, the secret key is a UDS used with the DICE-RIoT protocol. The secret key is generated in response to a command from a host device (e.g., host device 151).
[00388] In one embodiment, the initial value is sent by electronic communication to an intended recipient of the physical system (e.g., the recipient will receive the physical system after it is physically transported to the recipient’s location). After receiving the system, the recipient reads an output value from the monotonic counter. The recipient (e.g., using a computing device or server of the recipient that earlier received the initial value) compares the initial value and the output value to determine whether tampering has occurred with the system. In one example, the tampering is an unauthorized attempt by an intruder to access the secret key of the system during its physical transport.
[00389] In one embodiment, a secret key can be generated, for instance, using a true RNG (random number generator) or a PUF, or previously injected in the system (memory) in a secure environment like a factory.
[00390] In one embodiment, the generated key is associated with the initial value of the monotonic counter. The initial value is used by a recipient of the system to determine whether an unauthorized attempt has been made to access the stored key. In one embodiment, a key injection process can use an output from a physical unclonable function (PUF) to generate the key for every power-on cycle of a computing device that stores the key.
[00391] More specifically, FIG. 21 shows a system 351 that sends an initial value provided by a monotonic counter 355 for use in determining whether tampering with system 351 has occurred, according to one embodiment. For example, it can be determined whether system 351 has been tampered with by a hacker seeking unauthorized access to a stored key during physical transport of system 351. In one example, system 351 is a system board that includes non-volatile memory 306, processor(s) 304, monotonic counter(s) 355, and power supply 318.
[00392] One or more keys 314 are generated under control of processor 304. Non-volatile memory 306 is used to store the generated keys. Non-volatile memory 306 is, for example, a non-volatile memory device (e.g., 3D cross point storage).
[00393] Monotonic counter 355 is initialized to provide an initial value. This initial value is associated with the stored key 314. The initial value is sent by a processor 304 via external interface 312 to another computing device. For example, the initial value can be sent to a server of the receiver to which system 351 will be shipped after manufacture and key injection is completed.
[00394] When system 351 is received by the receiver, a computing device of the receiver determines the initial value. For example, the computing device can store in memory the initial value received when sent as described above. The computing device reads an output value from the monotonic counter 355. This output value is compared to the initial value to determine whether there has been tampering with the system 351.
[00395] In one embodiment, the computing device of the receiver determines a number of events that have occurred that cause the monotonic counter 355 to increment. For example, the output values from monotonic counter 355 can be configured to increment on each power-on operation of system 351 (e.g., as detected by monitoring power supply 318). The output values from monotonic counter 355 can be additionally and/or alternatively configured to increment on each attempt to perform a read access of the stored key 314.
[00396] By keeping track, for example, of the number of known events that cause the monotonic counter to increment, the initial value received from the sender can be adjusted based on this number of known events. Then, the adjusted initial value is compared to the output value read from monotonic counter 355. If the values match, then no tampering has occurred. If the values do not match, the computing device determines that tampering has been detected.
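The comparison described above can be sketched as a small helper; the function name and the plain-integer model of the counter are assumptions made for illustration:

```python
# Tamper check: the receiver adjusts the sender's initial counter value by
# the number of known (authorized) events, then compares with the value
# actually read back from the monotonic counter.

def tampering_detected(initial: int, known_events: int, read_back: int) -> bool:
    return read_back != initial + known_events

# Sender shipped the device with MTC = 20; the receiver performed one
# power-on (which increments the counter) before reading it back.
assert tampering_detected(20, 1, 21) is False  # counts match: no tampering
assert tampering_detected(20, 1, 23) is True   # extra increments: tampering
```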
[00397] In response to determining that tampering has been detected, one or more functions of system 351 can be disabled. In one embodiment, processor 304 receives a communication from the computing device of the receiver that includes an indication that tampering has been detected. In response to receiving the communication, processor 304 disables at least one function of system 351. In one example, the function disabled is read access to the stored key 314.
[00398] In one embodiment, system 351 can be configured so that a counter value output from monotonic counter 355 cannot exceed a predetermined maximum value. For example, when each counter value is read from monotonic counter 355, a determination is made whether the counter value exceeds a predetermined maximum value. If the counter value exceeds the predetermined maximum value, read access to stored key 314 can be permanently disabled (e.g., under control of processor 304).
[00399] In one embodiment, system 351 is embodied on a semiconductor die. In another embodiment, system 351 is formed on a system board. An application is stored in system memory 353 and executed by processor 304. Execution of the application occurs after power-on of the system 351. For example, the receiver of the system 351 can execute the application after determining that no tampering has occurred with the system.
[00400] In one embodiment, key 314 is generated using an output value from one or more physical unclonable functions (PUFs). For example, the keys are generated for each power-on cycle of system 351 . In another example, key 314 is a UDS generated in response to a replace command from a host device received via external interface 312.
[00401] In one embodiment, system 351 is a controller that stores key 314. The external interface 312 is used to send an initial value from monotonic counter 355 to an external non-volatile memory device (not shown) on the same system board as the controller. The external non-volatile memory device determines whether tampering has occurred by reading an output value from monotonic counter 355 and comparing the output value to the initial value received from system 351.
[00402] In one embodiment, system memory 353 includes volatile memory 308 and/or non-volatile memory 306. Cryptographic module 316 is used to perform cryptographic operations for secure communications over external interface 312 using keys 314 (e.g., symmetric keys).
[00403] Secret key sharing is now described in various further embodiments. A secure communication channel is setup by key sharing between the actors that participate in the communication. Components that are used in a trusted platform module board often do not have sufficient processing capability to implement schemes such as, for example, public key cryptography.
[00404] In one embodiment, one or more secret keys are shared between a device/board manufacturer and the OEM/final customers. Also, keys can be shared as needed between different devices in the same board in the field. One or more monotonic counters are used inside the secure device, as was discussed above.
[00405] In one example, before a device leaves the factory or the manufacturer testing line, one or more secret keys are injected inside the device (e.g., one or more secret keys Secret_Keyk with k = 1, ..., N), depending on the device capability and user needs.
[00406] One or more monotonic counters are initialized (N (N > 1) different MTCs, with MTCk = 0), depending on the device capability and user needs. The monotonic counters (MTCs) are configured to increment the output value any time power-on of the system/device occurs and any time the stored secret key is read (or an attempt is made to read it).
[00407] The value of each MTCk can be public and shared with the customer (e.g., as MTCk). A command sequence (e.g., as arbitrarily determined by the system designer) is used to read the key from the device, and the command sequence can be public (e.g., the strength of the method is not based on secrecy of read protocol).
[00408] In one embodiment, it is not required to use a dedicated MTC for each secret key. This can vary depending on the type of security service being used. For example, some keys can be read together, and then they only need a single MTC.
[00409] In one embodiment, after arrival of the system at the receiver (e.g., customer/user) location, several operations are performed. A computing device of the receiver first determines the initial value(s) of the monotonic counter(s) 355. For example, the values of MTCk = (MTC0)k are retrieved (e.g., the initial values written and sent at the moment the device leaves the factory). The MTCk values may be different for each MTC or associated key.
[00410] Then, the receiver customer/user powers on system 351. The first addressed operation is the read of the MTCk values. If each MTCk value matches the expected (received) initial value [e.g., MTCk = (MTC0 + 1)k, accounting for the receiver's own power-on], then the device is determined as not having been powered on during physical shipping or other transfer, and the system 351 is considered authentic (and not tampered with) by the end user/customer.
[00411] If one or more values do not match, then tampering has occurred. For example, an unauthorized person powered up the device and attempted to read the stored keys. If tampering has occurred, the device is discarded and the indication of tampering is communicated to the sender (e.g., the transfer company) to avoid further technical security problems.
[00412] In another embodiment, multiple read operations are performed by the receiver of the system 351. For example, the device leaves the factory and arrives at the customer/OEM. The customer reads one or more keys and the related MTCk are incremented by one. If the incremented MTCj exceeds the value (MTCj_MAX) preset by the silicon factory (sender), the read of the key (Secret_Keyj) is permanently disabled.
[00413] In this example, MTCj_MAX is a predetermined maximum value agreed upon by the sender and customer for each MTC. This methodology allows one or more key read attempts before or after the MTC value is checked. This permits, for example, discovery of any unauthorized access while at the same time ensuring that the OEM/customer has a few opportunities to read the keys before disablement.
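A hypothetical sketch of this read-limit mechanism follows; the class name and the in-memory counter stand in for the device's non-volatile MTC and its control logic, and the specific values are illustrative:

```python
# Each key read increments the key's counter; once the incremented counter
# exceeds the agreed maximum (MTCj_MAX), the read is permanently disabled.

class KeyedCounter:
    def __init__(self, key: bytes, start: int, max_value: int):
        self.key = key
        self.mtc = start             # counter value at delivery
        self.max_value = max_value   # MTCj_MAX agreed by sender and customer
        self.disabled = False

    def read_key(self) -> bytes:
        if self.disabled:
            raise PermissionError("key read permanently disabled")
        self.mtc += 1                # every read attempt increments the MTC
        if self.mtc > self.max_value:
            self.disabled = True     # one-way switch: never re-enabled
            raise PermissionError("key read permanently disabled")
        return self.key

# Device delivered with MTC = 25 and MTCj_MAX = 27: two reads remain.
kc = KeyedCounter(b"secret", start=25, max_value=27)
kc.read_key()                        # MTC -> 26, allowed
kc.read_key()                        # MTC -> 27, allowed
```

A third call would push the counter past the maximum and permanently disable the read, mirroring the disablement behavior described in the paragraph above.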
[00414] If, during the read process, one or more MTCs has an unexpected value (e.g., the initial value and read value do not match after adjusting for the number of known read and power-on operations), the device is discarded. In one embodiment, once all the keys are read, the related MTCk are cleaned up and can be re-used for other purposes (e.g., cryptographic processing functions, etc., as configured and desired by the system designer).
[00415] In another example, device/board usage is monitored. This example uses monotonic counters that are already present on a secure device (e.g., present for use by cryptographic module 316). Before the device leaves the silicon factory, the following is performed on the testing line: initialize M (M > 1) different MTCs, with MTCk = 0.
Such MTC counters are incremented at each power-up of the device. Each counter can be initialized to zero (or to another initial value, if desired).
[00416] The receiver end user can use the received MTCk values to obtain various information regarding system 351 such as, for example: detection of unauthorized use of a component/board, determining that a component has been desoldered and used outside the board to hack it, a power cycle count to implement a fee mechanism based on a customer's services, device board warranty policies, and power loss.

[00417] In one embodiment, implementing the fee mechanism involves associating a value with a specific usage. An application can be provided to a customer, and the customer must pay a fee to unlock the specific usage.
[00418] In one embodiment, a secret key is shared between different devices on the same system board (e.g., shared in the field when being used by an end user). The key is shared by exchange between components on the same board (e.g., between processors).
[00419] In one embodiment, each of one or more monotonic counters (MTCk) is initialized by setting its initial value to zero. In one example, when setting MTCk = 0, the MTCk is a counter associated with a specific key, indicated by the number k. MTCk = 0 indicates that the initial value of the counter is zero. Each k indicates one of the counter numbers; the number of counters corresponds to the number of internal keys stored in the device. The value of each counter is stored in a non-volatile memory area of the device.
[00420] In one embodiment, a power-on (sometimes also referred to as a power-up) of the device is detected. The device includes internal circuitry to measure a value of the power supply. When the power supply exceeds a certain threshold, the circuitry triggers an internal signal to indicate the presence (detection) of the power-on event. This signal can cause incrementing of the monotonic counters.
[00421] In one embodiment, an attempt to read the secret key is detected. The counters (MTCk) are incremented each time that power-on is detected (as mentioned above), and further each time that a command sequence to read the key is recognized by the device interface. Knowing the initial MTCk values when the device is shipped permits the final receiver (e.g., end user/customer) of the device to know the device (and counter) status. So, if there was an attempt during transit to power on and/or read the device, this generates a variation in the counter value stored in each MTCk.
[00422] In one embodiment, a command sequence is used to read the key from the device. For example, the device has an external interface that accepts commands from other computing components or devices. The key is available for reading until a maximum counter value is reached. In one example, the device accepts a sequence at the interface: Command (e.g., 0x9b) + Argument (0x34) + signature. The device then understands that the key is to be read and provided at the output.
[00423] In one embodiment, after testing of the device (e.g., at the initial factory), the MTCk for each key will have an initial value from initialization. So, for example, if k = 3, then: MTC0 = 20d, MTC1 = 25d, MTC2 = 10d. The values MTC0, MTC1, MTC2 are sent/transmitted to the customer as a set of initial values. When the device is received by the customer, the customer uses a command sequence to read the values. The customer (e.g., using a computing device) then determines whether the device was compromised during shipping. In one example, the monotonic counter values are read using the command sequence described above.
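The customer-side check in this example reduces to comparing the shipped initial values with the values read back (a minimal sketch in Python; reading the counters over the device interface is abstracted away):

```python
def device_compromised(shipped: dict, read_back: dict) -> bool:
    """Return True if any counter changed between shipment and receipt,
    indicating a power-on or key-read attempt during transit."""
    return any(read_back[k] != shipped[k] for k in shipped)


# Initial values transmitted by the factory (k = 3 keys, decimal).
shipped = {0: 20, 1: 25, 2: 10}

# Values the customer reads back with the command sequence.
untouched = {0: 20, 1: 25, 2: 10}   # no events during transit
tampered = {0: 22, 1: 25, 2: 10}    # extra power-on events recorded on MTC0
```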
[00424] In one embodiment, a predetermined maximum counter value is preset. For example, the maximum value can be selected based on customer procedures. For example, assume that the customer wants to read the key 10 times. Also, assume that there are two counters: MTC0, incremented only on power-on, and MTC1, incremented only by the key read command procedure. After the factory testing, based on monitoring the MTC0 and MTC1 values, it is found that MTC0 = 30 and MTC1 = 25. In one example, the internal threshold is then set at 40 for MTC0 and at 35 for MTC1.
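The threshold check itself is a simple comparison (illustrative sketch; in a real device the comparison and the refusal to output the key are enforced internally):

```python
def key_readable(mtc_value: int, threshold: int) -> bool:
    # The key is output only while the counter is below its preset maximum.
    return mtc_value < threshold


# From the example above: MTC1 = 25 after factory testing, threshold 35,
# leaving the customer 10 key-read attempts.
reads_remaining = 35 - 25
assert reads_remaining == 10
assert key_readable(25, 35)      # first customer read allowed
assert not key_readable(35, 35)  # attempts beyond the threshold are refused
```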
[00425] In one embodiment, the MTC associated with a key increments both on power-up and on an attempt to read the key. Each time that one of the two events happens, the MTC increments.
[00426] In one embodiment, the monotonic counters are cleaned up after the final reading of the stored keys (and optionally can be re-used for other processing on the device). The system designer can configure this as desired. The final reading occurs, for example, when the MTC counting is no longer needed as a detector of malicious events. This releases the counters as resources that can be used for other purposes, or the counters can continue to count.
[00427] In one embodiment, the monotonic counter can be used to get various types of information such as a component being de-soldered, implementing a fee mechanism, implementing a warranty, etc. The counters can be used to monitor different types of occurrences because the incrementing of the counter value can be externally triggered.
[00428] In one embodiment, a multiple key read option is provided. In one example, for initialization of the MTC values (for MTCk associated with keys), the maximum thresholds are set to allow the read of each key by the receiver of the component and/or the final user. The multiple read option allows changing the threshold according to the maximum number of attempts to read a particular key (which may differ for each key).
[00429] In one embodiment, various types of devices can use the above methods. For example, a CPU or MCU typically has an internal hardware security module that is not accessible by outside access. In one example, there is a need to have the same keys (e.g., when a symmetric key approach is used) stored in devices or components in order to operate correctly. In such a case, the CPU/MCU benefits from the distribution/sharing of the key to an authorized entity, device, or component. This sharing allows reading of the key (e.g., this sharing corresponds to the programming of a value in the MTCk threshold).
[00430] In one embodiment, the above methods are used for customer firmware injection. For example, the above methods allow storing critical content (e.g., application firmware) inside a device and moving the device through an unsecured environment without compromising the integrity of the firmware/device. In one example, for firmware injection, an unexpected change of the MTCk counter value during transport is used as an indicator that the firmware may have been compromised.
[00431] FIG. 22 shows a method for generating an identity for a computing device using a physical unclonable function (PUF), according to one embodiment. For example, the method of FIG. 22 can be implemented in the system of FIG. 20.
[00432] The method of FIG. 22 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 22 is performed at least in part by processor 143 of FIG. 20.
[00433] Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various
embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
[00434] At block 2211, at least one value is provided by at least one physical unclonable function (PUF). For example, PUF 2005 provides a value as an output.
[00435] At block 2213, a device secret is generated using a key derivative function (KDF). The at least one value provided by the at least one PUF is used as an input to the KDF. For example, the output from PUF 2005 is used as an input to KDF 2007.
[00436] At block 2215, the generated device secret is stored. For example, KDF 2007 generates an output that is stored as device secret 149.
[00437] In one embodiment, a method comprises: generating, by a computing device, a device secret (e.g., a UDS stored as device secret 149), the generating comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, using a key derivative function (e.g., KDF 2007), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and storing, in memory of the computing device, the generated device secret.
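As a concrete illustration of blocks 2211–2215, when the KDF is instantiated as a plain hash, the derivation can be sketched with SHA-256 (Python `hashlib`; the PUF output is simulated here with a fixed byte string, since a real value would come from device physics):

```python
import hashlib

# Simulated PUF output (a real PUF value is device-unique and not chosen).
puf_value = bytes.fromhex("3a914c7be2d005f1aa0b46c2917ce4d0")


def derive_device_secret(puf_value: bytes) -> bytes:
    # KDF instantiated as a hash: device secret = SHA-256(PUF value).
    return hashlib.sha256(puf_value).digest()


device_secret = derive_device_secret(puf_value)
assert len(device_secret) == 32  # 256-bit device secret, stored in memory
```

Because the same PUF value always yields the same output, the device secret is repeatable on the device without the PUF value ever leaving it.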
[00438] In one embodiment, storing the generated device secret comprises replacing a previously-stored device secret in the memory with the generated device secret.
[00439] In one embodiment, the KDF is a hash or a message authentication code.
[00440] In one embodiment, the method further comprises storing a secret key used to communicate with a host device, wherein the KDF is a message
authentication code (MAC), the at least one value provided by the at least one PUF is a first input to the MAC, and the secret key is a second input to the MAC.
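When the KDF is instead a MAC, the stored secret key keys the computation and the PUF value is the message. A sketch using HMAC-SHA256 from Python's standard library (the key and PUF bytes are illustrative):

```python
import hashlib
import hmac

secret_key = bytes.fromhex("00112233445566778899aabbccddeeff")  # shared with host
puf_value = bytes.fromhex("3a914c7be2d005f1aa0b46c2917ce4d0")   # simulated PUF output

# KDF instantiated as a MAC:
#   device secret = HMAC-SHA256(secret_key, PUF value),
# with the PUF value as the first input and the secret key as the second.
device_secret = hmac.new(secret_key, puf_value, hashlib.sha256).digest()
assert len(device_secret) == 32
```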
[00441] In one embodiment, the device secret is generated in response to an event, and the event is receiving, by the computing device, of a command from a host device.
[00442] In one embodiment, the method further comprises receiving a host public key from the host device, encrypting the generated device secret using the host public key, and sending the encrypted device secret to the host device.
[00443] In one embodiment, the method further comprises after sending the encrypted device secret to the host device, permanently disabling read access to the device secret in the memory.
[00444] In one embodiment, the method further comprises obfuscating the device secret prior to sending the encrypted device secret to the host device.
[00445] In one embodiment, the method further comprises storing, by the computing device in memory, a secret key (e.g., secret key 2013), wherein the command is authenticated, by the computing device, using a message
authentication code (MAC), and the secret key is used as an input to the MAC.
[00446] In one embodiment, the host device sends a signature for authenticating the command, the signature is generated using the MAC, and a freshness, generated by the host device, is used as an input to the MAC.
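The command authentication in paragraphs [00445] and [00446] can be sketched as follows (illustrative Python; the command bytes, key value, and freshness encoding are assumptions, not from the specification):

```python
import hashlib
import hmac

secret_key = bytes.fromhex("00112233445566778899aabbccddeeff")


def sign_command(command: bytes, freshness: bytes) -> bytes:
    # Host side: signature = MAC(secret key, command || freshness).
    return hmac.new(secret_key, command + freshness, hashlib.sha256).digest()


def verify_command(command: bytes, freshness: bytes, signature: bytes) -> bool:
    # Device side: recompute the MAC and compare in constant time.
    expected = hmac.new(secret_key, command + freshness, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


cmd = bytes([0x9B, 0x34])           # command + argument, as in the earlier example
freshness = (1).to_bytes(8, "big")  # host-generated freshness (e.g., a counter)
sig = sign_command(cmd, freshness)
assert verify_command(cmd, freshness, sig)                    # genuine command accepted
assert not verify_command(cmd, (2).to_bytes(8, "big"), sig)   # stale freshness rejected
```

Because the freshness changes per command, a captured signature cannot be replayed later.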
[00447] In one embodiment, the freshness is an additional input to the KDF.
[00448] In one embodiment, the host device provides a user pattern with the command, and wherein a hash of the user pattern is an additional input to the KDF.
[00449] In one embodiment, the method further comprises authenticating, by the computing device, the command prior to generating the device secret.
[00450] In one embodiment, the method further comprises storing a unique identifier of the computing device, wherein the unique identifier is an additional input to the KDF.
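The additional KDF inputs described in paragraphs [00447], [00448], and [00450] can be combined in a single hash-based derivation (one possible construction, sketched in Python; the patent does not fix how the inputs are concatenated, and all byte values here are illustrative):

```python
import hashlib

puf_value = bytes.fromhex("3a914c7be2d005f1aa0b46c2917ce4d0")  # simulated PUF output
freshness = (1).to_bytes(8, "big")                             # host-generated freshness
user_pattern = b"host-supplied pattern"                        # pattern sent with the command
unique_id = bytes.fromhex("a1b2c3d4e5f60718")                  # device unique identifier

# Device secret = KDF(PUF value, freshness, hash(user pattern), unique ID).
device_secret = hashlib.sha256(
    puf_value
    + freshness
    + hashlib.sha256(user_pattern).digest()  # hash of the user pattern, per [00448]
    + unique_id
).digest()
assert len(device_secret) == 32
```

Including the freshness makes the derived secret differ on each authenticated command, which supports replacing a previously-stored device secret with a new one.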
[00451] In one embodiment, the device secret is generated in response to an event, and the event is detection, by the computing device, of usage of a computing system.
[00452] In one embodiment, usage is execution of an application by the computing system. In one embodiment, the usage is initiation of a boot loading process.
[00453] In one embodiment, the device secret is generated in response to an event, and the event is a time-scheduled event.
[00454] In one embodiment, a system comprises: at least one processor; and
[00455] memory containing instructions configured to instruct the at least one processor to: generate a device secret, the generating comprising: providing, by at least one physical unclonable function (PUF), at least one value; and generating, using a key derivative function (KDF), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and store, in memory of the computing device, the generated device secret.
[00456] In one embodiment, the instructions are further configured to instruct the at least one processor to: receive a replace command from a host device; and send, to the host device, a public identifier generated using the generated device secret; wherein the device secret is generated in response to receiving the replace command; wherein storing the generated device secret comprises replacing a previously-stored device secret with the generated device secret.

[00457] In one embodiment, a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to at least: provide, by at least one physical unclonable function (PUF), at least one value; generate, using a key derivative function (KDF), a device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and store, in memory, the generated device secret.
Closing
[00458] A non-transitory computer storage medium can be used to store instructions of the firmware 104, or to store instructions for processor 143 or processing device 111. When the instructions are executed by, for example, the controller 107 of the memory device 103 or computing device 603, the instructions cause the controller 107 to perform any of the methods discussed above.
[00459] In this description, various functions and operations may be described as being performed by or caused by computer instructions to simplify description.
However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors, such as a microprocessor. Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA).
Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
[00460] While some embodiments can be implemented in fully-functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
[00461] At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor or microcontroller, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
[00462] Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
[00463] A tangible, non-transitory computer storage medium can be used to store software and data which, when executed by a data processing system, causes the system to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time.
[00464] Examples of computer-readable storage media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., Compact Disk Read-Only Memory (CD ROM),
Digital Versatile Disks (DVDs), etc.), among others. The instructions may be embodied in a transitory medium, such as electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. A transitory medium is typically used to transmit instructions, but not viewed as capable of storing the instructions.
[00465] In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
[00466] Although some of the drawings illustrate a number of operations in a particular order, operations that are not order dependent may be reordered and other operations may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
[00467] The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.
[00468] In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims

What is claimed is:
1. A method comprising:
generating, by a computing device, a device secret, the generating
comprising:
providing, by at least one physical unclonable function (PUF), at least one value; and
generating, using a key derivative function (KDF), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and
storing, in memory of the computing device, the generated device secret.
2. The method of claim 1, wherein storing the generated device secret
comprises replacing a previously-stored device secret in the memory with the generated device secret.
3. The method of claim 1, wherein the KDF is a hash, or a message
authentication code.
4. The method of claim 1, further comprising storing a secret key used to
communicate with a host device, wherein the KDF is a message
authentication code (MAC), the at least one value provided by the at least one PUF is a first input to the MAC, and the secret key is a second input to the MAC.
5. The method of claim 1, wherein the device secret is generated in response to an event, and the event is receiving, by the computing device, of a command from a host device.
6. The method of claim 5, further comprising receiving a host public key from the host device, encrypting the generated device secret using the host public key, and sending the encrypted device secret to the host device.
7. The method of claim 6, further comprising after sending the encrypted device secret to the host device, permanently disabling read access to the device secret in the memory.
8. The method of claim 6, further comprising obfuscating the device secret when reading the device secret from the memory of the computing device.
9. The method of claim 5, further comprising storing, by the computing device in memory, a secret key, wherein the command is authenticated, by the computing device, using a message authentication code (MAC), and the secret key is used as an input to the MAC.
10. The method of claim 9, wherein the host device sends a signature for
authenticating the command, the signature is generated using the MAC, and a freshness, generated by the host device, is used as an input to the MAC.
11. The method of claim 10, wherein the freshness is an additional input to the KDF.
12. The method of claim 5, wherein the host device provides a user pattern with the command, and wherein a hash of the user pattern is an additional input to the KDF.
13. The method of claim 5, further comprising authenticating, by the computing device, the command prior to generating the device secret.
14. The method of claim 1, further comprising storing a unique identifier of the computing device, wherein the unique identifier is an additional input to the KDF.
15. The method of claim 1, wherein the device secret is generated in response to an event, and the event is detection, by the computing device, of usage of a computing system.
16. The method of claim 15, wherein the usage is execution of an application by the computing system.
17. The method of claim 1, wherein the device secret is generated in response to an event, and the event is a time-scheduled event.
18. A system comprising:
at least one processor; and
memory containing instructions configured to instruct the at least one
processor to:
generate a device secret, the generating comprising:
providing, by at least one physical unclonable function (PUF), at least one value; and
generating, using a key derivative function (KDF), the device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and
store, in memory of the computing device, the generated device secret.
19. The system of claim 18, wherein the instructions are further configured to instruct the at least one processor to:
receive a replace command from a host device; and
send, to the host device, a public identifier generated using the generated device secret;
wherein the device secret is generated in response to receiving the replace command;
wherein storing the generated device secret comprises replacing a previously-stored device secret with the generated device secret.
20. A non-transitory computer storage medium storing instructions which, when executed on a computing device, cause the computing device to at least: provide, by at least one physical unclonable function (PUF), at least one
value;
generate, using a key derivative function (KDF), a device secret, wherein the at least one value provided by the at least one PUF is an input to the KDF; and
store, in memory, the generated device secret.
PCT/US2020/020906 2019-03-25 2020-03-04 Generating an identity for a computing device using a physical unclonable function WO2020197722A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202080024834.XA CN113632417A (en) 2019-03-25 2020-03-04 Generating an identity of a computing device using a physical unclonable function
JP2021557290A JP2022527757A (en) 2019-03-25 2020-03-04 Generating the ID of a computing device using a physical duplication difficulty function
KR1020217034176A KR20210131444A (en) 2019-03-25 2020-03-04 Identity creation for computing devices using physical copy protection
EP20778971.0A EP3949257A4 (en) 2019-03-25 2020-03-04 Generating an identity for a computing device using a physical unclonable function

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/363,204 US11218330B2 (en) 2019-03-25 2019-03-25 Generating an identity for a computing device using a physical unclonable function
US16/363,204 2019-03-25

Publications (1)

Publication Number Publication Date
WO2020197722A1 true WO2020197722A1 (en) 2020-10-01


Country Status (6)

Country Link
US (2) US11218330B2 (en)
EP (1) EP3949257A4 (en)
JP (1) JP2022527757A (en)
KR (1) KR20210131444A (en)
CN (1) CN113632417A (en)
WO (1) WO2020197722A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11233650B2 (en) 2019-03-25 2022-01-25 Micron Technology, Inc. Verifying identity of a vehicle entering a trust zone
US11361660B2 (en) 2019-03-25 2022-06-14 Micron Technology, Inc. Verifying identity of an emergency vehicle during operation
US11323275B2 (en) 2019-03-25 2022-05-03 Micron Technology, Inc. Verification of identity using a secret key
US11387983B2 (en) * 2019-03-25 2022-07-12 Micron Technology, Inc. Secure medical apparatus communication
US11018861B2 (en) * 2019-04-17 2021-05-25 Piston Vault Pte. Ltd. System and method for storage and management of confidential information
US11218307B1 (en) * 2019-04-24 2022-01-04 Wells Fargo Bank, N.A. Systems and methods for generation of the last obfuscated secret using a seed
US11520709B2 (en) 2020-01-15 2022-12-06 International Business Machines Corporation Memory based encryption using an encryption key based on a physical address
US11763008B2 (en) * 2020-01-15 2023-09-19 International Business Machines Corporation Encrypting data using an encryption path and a bypass path
US11343139B2 (en) * 2020-03-23 2022-05-24 Microsoft Technology Licensing, Llc Device provisioning using a supplemental cryptographic identity
US11444771B2 (en) * 2020-09-08 2022-09-13 Micron Technology, Inc. Leveraging a trusted party third-party HSM and database to securely share a key
US12113895B2 (en) * 2020-12-11 2024-10-08 PUFsecurity Corporation Key management system providing secure management of cryptographic keys, and methods of operating the same
CN112243011A (en) * 2020-12-18 2021-01-19 东方微电科技(武汉)有限公司 Signature verification method, system, electronic equipment and storage medium
US11784827B2 (en) * 2021-03-09 2023-10-10 Micron Technology, Inc. In-memory signing of messages with a personal identifier
US11650914B2 (en) 2021-08-05 2023-05-16 SK Hynix Inc. System and method for identification of memory device based on physical unclonable function
US11741224B2 (en) * 2021-09-20 2023-08-29 Intel Corporation Attestation with a quantified trusted computing base
US20220029838A1 (en) * 2021-09-22 2022-01-27 Intel Corporation Method, System and Apparatus for Protection of Multi-Die Structures
CN116451188B (en) * 2023-06-16 2023-08-29 无锡沐创集成电路设计有限公司 Software program operation safety protection method, system and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130142329A1 (en) * 2011-12-02 2013-06-06 Cisco Technology, Inc. Utilizing physically unclonable functions to derive device specific keying material for protection of information
US20140032933A1 (en) * 2012-07-24 2014-01-30 Ned M. Smith Providing access to encrypted data
US20140189890A1 (en) * 2012-12-28 2014-07-03 Patrick Koeberl Device authentication using a physically unclonable functions based key generation system
CN105337725A (en) * 2014-08-08 2016-02-17 中国科学院数据与通信保护研究教育中心 Key management device and key management method
WO2017194335A2 (en) * 2016-05-09 2017-11-16 Intrinsic Id B.V. Programming device arranged to obtain and store a random bit string in a memory device

Family Cites Families (136)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6119105A (en) 1996-06-17 2000-09-12 Verifone, Inc. System, method and article of manufacture for initiation of software distribution from a point of certificate creation utilizing an extensible, flexible architecture
AUPP752398A0 (en) * 1998-12-04 1999-01-07 Collins, Lyal Sidney Secure multi-point data transfer system
US7155590B2 (en) * 2000-04-11 2006-12-26 Mathis Richard M Method and apparatus for computer memory protection and verification
US20050268099A1 (en) * 2000-08-22 2005-12-01 Dallas Semiconductor Corporation Security device and method
KR100427323B1 (en) 2001-08-31 2004-04-14 현대자동차주식회사 Garage door auto open and closed controlling device and method thereof
US20030147534A1 (en) 2002-02-06 2003-08-07 Ablay Sewim F. Method and apparatus for in-vehicle device authentication and secure data delivery in a distributed vehicle network
US7840803B2 (en) 2002-04-16 2010-11-23 Massachusetts Institute Of Technology Authentication of integrated circuits
US7600114B2 (en) 2002-06-28 2009-10-06 Temic Automotive Of North America, Inc. Method and system for vehicle authentication of another vehicle
US6977580B2 (en) 2002-09-26 2005-12-20 International Business Machines Corporation Apparatus, system and method of securing perimeters of security zones from suspect vehicles
US7165181B2 (en) * 2002-11-27 2007-01-16 Intel Corporation System and method for establishing trust without revealing identity
JP4621200B2 (en) 2004-04-15 2011-01-26 パナソニック株式会社 Communication apparatus, communication system, and authentication method
WO2005104431A1 (en) * 2004-04-21 2005-11-03 Matsushita Electric Industrial Co., Ltd. Content providing system, information processing device, and memory card
US7573373B2 (en) 2004-08-25 2009-08-11 Hap Nguyen Anti-carjacking apparatus, systems, and methods for hi-speed pursuit avoidance and occupant safety
US7525435B2 (en) 2005-08-02 2009-04-28 Performance Partners, Llc Method, apparatus, and system for securing areas of use of vehicles
US7613891B2 (en) * 2006-05-04 2009-11-03 Intel Corporation Methods and apparatus for providing a read access control system associated with a flash device
US20100138652A1 (en) 2006-07-07 2010-06-03 Rotem Sela Content control method using certificate revocation lists
US9794247B2 (en) 2006-08-22 2017-10-17 Stmicroelectronics, Inc. Method to prevent cloning of electronic components using public key infrastructure secure hardware device
KR100823738B1 (en) 2006-09-29 2008-04-21 한국전자통신연구원 Method for integrity attestation of a computing platform hiding its configuration information
US7613915B2 (en) * 2006-11-09 2009-11-03 BroadOn Communications Corp Method for programming on-chip non-volatile memory in a secure processor, and a device so programmed
US9830637B2 (en) 2007-02-23 2017-11-28 Epona Llc System and method for processing vehicle transactions
EP2003813B1 (en) 2007-06-15 2009-03-18 NTT DoCoMo, Inc. Method and Apparatus for Authentication
US8539596B2 (en) * 2008-06-24 2013-09-17 Cisco Technology Inc. Security within integrated circuits
US8761390B2 (en) 2008-06-30 2014-06-24 Gm Global Technology Operations Production of cryptographic keys for an embedded processing device
US8484486B2 (en) * 2008-08-06 2013-07-09 Silver Spring Networks, Inc. Integrated cryptographic security module for a network node
US8699714B2 (en) 2008-11-17 2014-04-15 Intrinsic Id B.V. Distributed PUF
JP5390844B2 (en) * 2008-12-05 2014-01-15 パナソニック株式会社 Key distribution system and key distribution method
TWM356972U (en) 2008-12-17 2009-05-11 Univ Kun Shan Portable storage device with local and remote identity recognition function
FR2941343B1 (en) 2009-01-20 2011-04-08 Groupe Des Ecoles De Telecommunications Get Ecole Nat Superieure Des Telecommunications Enst CIRCUIT OF CRYPTOGRAPHY, PROTECTS IN PARTICULAR AGAINST ATTACKS BY OBSERVATION OF LEAKS OF INFORMATION BY THEIR ENCRYPTION.
US8499154B2 (en) 2009-01-27 2013-07-30 GM Global Technology Operations LLC System and method for establishing a secure connection with a mobile device
US8184812B2 (en) 2009-06-03 2012-05-22 Freescale Semiconductor, Inc. Secure computing device with monotonic counter and method therefor
DE102010011022A1 (en) * 2010-03-11 2012-02-16 Siemens Aktiengesellschaft Method for secure unidirectional transmission of signals
JP5612514B2 (en) * 2010-03-24 2014-10-22 パナソニック株式会社 Nonvolatile memory controller and nonvolatile storage device
US8499155B2 (en) 2010-03-24 2013-07-30 GM Global Technology Operations LLC Adaptive certificate distribution mechanism in vehicular networks using variable inter-certificate refresh period
US8667265B1 (en) 2010-07-28 2014-03-04 Sandia Corporation Hardware device binding and mutual authentication
US20120038489A1 (en) 2010-08-12 2012-02-16 Goldshmidt Ehud System and method for spontaneous p2p communication between identified vehicles
JP2012118805A (en) * 2010-12-01 2012-06-21 Sony Corp Information processing apparatus, removable storage device, information processing method and information processing system
US8526606B2 (en) 2010-12-20 2013-09-03 GM Global Technology Operations LLC On-demand secure key generation in a vehicle-to-vehicle communication network
US9467293B1 (en) * 2010-12-22 2016-10-11 Emc Corporation Generating authentication codes associated with devices
EP2479731B1 (en) 2011-01-18 2015-09-23 Alcatel Lucent User/vehicle-ID associating access rights and privileges
US20120183135A1 (en) 2011-01-19 2012-07-19 Verayo, Inc. Reliable puf value generation by pattern matching
KR101881167B1 (en) 2011-06-13 2018-07-23 주식회사 케이티 Car control system
JP5770026B2 (en) * 2011-06-20 2015-08-26 ルネサスエレクトロニクス株式会社 Semiconductor device
US8924737B2 (en) 2011-08-25 2014-12-30 Microsoft Corporation Digital signing authority dependent platform secret
JP5710460B2 (en) 2011-12-16 2015-04-30 株式会社東芝 Encryption key generation apparatus and program
US9774581B2 (en) 2012-01-20 2017-09-26 Interdigital Patent Holdings, Inc. Identity management with local functionality
DE102012201164B4 (en) 2012-01-26 2017-12-07 Infineon Technologies Ag Device and method for generating a message authentication code
US8750502B2 (en) 2012-03-22 2014-06-10 Purdue Research Foundation System on chip and method for cryptography using a physically unclonable function
US9591484B2 (en) 2012-04-20 2017-03-07 T-Mobile Usa, Inc. Secure environment for subscriber device
US20140006777A1 (en) * 2012-06-29 2014-01-02 Oslsoft, Inc. Establishing Secure Communication Between Networks
US8525169B1 (en) * 2012-08-10 2013-09-03 International Business Machines Corporation Reliable physical unclonable function for device authentication
US9742563B2 (en) 2012-09-28 2017-08-22 Intel Corporation Secure provisioning of secret keys during integrated circuit manufacturing
EP2904732B1 (en) 2012-10-04 2018-11-28 Intrinsic ID B.V. System for generating a cryptographic key from a memory used as a physically unclonable function
JP5967822B2 (en) * 2012-10-12 2016-08-10 ルネサスエレクトロニクス株式会社 In-vehicle communication system and apparatus
JP5939126B2 (en) 2012-10-17 2016-06-22 株式会社デンソー In-vehicle device and vehicle antitheft system
US8885819B2 (en) 2012-12-27 2014-11-11 Intel Corporation Fuse attestation to secure the provisioning of secret keys during integrated circuit manufacturing
JP2014158105A (en) 2013-02-14 2014-08-28 Panasonic Corp Terminal device
US20140245010A1 (en) * 2013-02-25 2014-08-28 Kabushiki Kaisha Toshiba Device and authentication method therefor
DE102013203415B4 (en) * 2013-02-28 2016-02-11 Siemens Aktiengesellschaft Creating a derived key from a cryptographic key using a physically unclonable function
US9367701B2 (en) 2013-03-08 2016-06-14 Robert Bosch Gmbh Systems and methods for maintaining integrity and secrecy in untrusted computing platforms
US9858208B2 (en) 2013-03-21 2018-01-02 International Business Machines Corporation System for securing contents of removable memory
US9906372B2 (en) * 2013-06-03 2018-02-27 Intel Deutschland Gmbh Authentication devices, key generator devices, methods for controlling an authentication device, and methods for controlling a key generator
US9769658B2 (en) 2013-06-23 2017-09-19 Shlomi Dolev Certificating vehicle public key with vehicle attributes
WO2015001600A1 (en) * 2013-07-01 2015-01-08 三菱電機株式会社 Equipment authentication system, manufacturer key generation device, equipment key generation device, production equipment, cooperative authentication device, equipment playback key generation device, equipment authentication method, and equipment authentication program
KR101521412B1 (en) 2013-07-11 2015-05-19 가톨릭관동대학교산학협력단 Protocol Management System for Aggregating Massages based on certification
US20150256522A1 (en) 2013-09-16 2015-09-10 Clutch Authentication Systems, Llc System and method for communication over color encoded light patterns
US9992031B2 (en) 2013-09-27 2018-06-05 Intel Corporation Dark bits to reduce physically unclonable function error rates
EP3056394B1 (en) * 2013-10-08 2022-11-30 ICTK Holdings Co., Ltd. Vehicle security network device and design method therefor
US9225530B2 (en) * 2013-10-21 2015-12-29 Microsoft Technology Licensing, Llc Secure crypto-processor certification
FR3013138B1 (en) 2013-11-12 2015-10-30 Morpho Method and system for controlling access to or exit from a zone
DE102013227087A1 (en) * 2013-12-23 2015-06-25 Siemens Aktiengesellschaft Secured provision of a key
WO2015106248A1 (en) * 2014-01-13 2015-07-16 Visa International Service Association Efficient methods for protecting identity in authenticated transmissions
CN104901931B (en) 2014-03-05 2018-10-12 财团法人工业技术研究院 certificate management method and device
US9147075B1 (en) 2014-03-20 2015-09-29 Juniper Networks, Inc. Apparatus and method for securely logging boot-tampering actions
CN106575324A (en) 2014-04-09 2017-04-19 有限公司Ictk Authentication apparatus and method
GB2530028B8 (en) * 2014-09-08 2021-08-04 Advanced Risc Mach Ltd Registry apparatus, agent device, application providing apparatus and corresponding methods
EP3207539B1 (en) * 2014-10-13 2021-03-17 Intrinsic ID B.V. Cryptographic device comprising a physical unclonable function
US9935937B1 (en) 2014-11-05 2018-04-03 Amazon Technologies, Inc. Implementing network security policies using TPM-based credentials
EP3605943B1 (en) 2014-11-13 2021-02-17 Panasonic Intellectual Property Corporation of America Key management method, vehicle-mounted network system, and key management device
US9584329B1 (en) 2014-11-25 2017-02-28 Xilinx, Inc. Physically unclonable function and helper data indicating unstable bits
US9740863B2 (en) 2014-11-25 2017-08-22 Intel Corporation Protecting a secure boot process against side channel attacks
US9569601B2 (en) 2015-05-19 2017-02-14 Anvaya Solutions, Inc. System and method for authenticating and enabling functioning of a manufactured electronic device
US10057224B2 (en) * 2015-08-04 2018-08-21 Rubicon Labs, Inc. System and method for initializing a shared secret system
US9604651B1 (en) 2015-08-05 2017-03-28 Sprint Communications Company L.P. Vehicle telematics unit communication authorization and authentication and communication service provisioning
CN107924645B (en) * 2015-08-06 2021-06-25 本质Id有限责任公司 Encryption device with physical unclonable function
US10402792B2 (en) 2015-08-13 2019-09-03 The Toronto-Dominion Bank Systems and method for tracking enterprise events using hybrid public-private blockchain ledgers
US9917687B2 (en) 2015-10-12 2018-03-13 Microsoft Technology Licensing, Llc Migrating secrets using hardware roots of trust for devices
JP6951329B2 (en) 2015-10-14 2021-10-20 ケンブリッジ ブロックチェーン,エルエルシー Systems and methods for managing digital identities
DE102015220224A1 (en) 2015-10-16 2017-04-20 Volkswagen Aktiengesellschaft Method for protected communication of a vehicle
DE102015220227A1 (en) 2015-10-16 2017-04-20 Volkswagen Aktiengesellschaft Method and system for asymmetric key derivation
CN108352984B (en) * 2015-11-05 2021-06-01 三菱电机株式会社 Security device and security method
KR101782483B1 (en) 2015-12-03 2017-10-23 현대오토에버 주식회사 Method and apparatus for generating certificate of vehicle in vehicular ad-hoc network
JP5991561B2 (en) 2015-12-25 2016-09-14 パナソニックIpマネジメント株式会社 Wireless device
JP6684690B2 (en) 2016-01-08 2020-04-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Fraud detection method, monitoring electronic control unit and in-vehicle network system
US10484351B2 (en) 2016-01-28 2019-11-19 Etas Embedded Systems Canada Inc. System and method for certificate selection in vehicle-to-vehicle applications to enhance privacy
DE102016205198A1 (en) * 2016-03-30 2017-10-05 Siemens Aktiengesellschaft Demonstrate the authenticity of a device by means of a credential
US20200177398A1 (en) 2016-06-17 2020-06-04 Kddi Corporation System, certification authority, vehicle-mounted computer, vehicle, public key certificate issuance method, and program
KR102562786B1 (en) 2016-07-07 2023-08-03 엘지이노텍 주식회사 Driver assistance apparatus and parking control system comprising same
KR102598613B1 (en) 2016-07-21 2023-11-07 삼성전자주식회사 System and method for providing vehicle information based on personal certification and vehicle certification
US10390221B2 (en) 2016-07-25 2019-08-20 Ford Global Technologies, Llc Private vehicle-to-vehicle communication
US20180060813A1 (en) 2016-08-23 2018-03-01 Ford Global Technologies, Llc Autonomous delivery vehicle system
JP6479724B2 (en) * 2016-09-02 2019-03-06 日本電信電話株式会社 Secret key synchronization system, user terminal, and secret key synchronization method
US10397215B2 (en) * 2016-09-27 2019-08-27 Visa International Service Association Secure element installation and provisioning
US10594702B2 (en) * 2016-12-16 2020-03-17 ULedger, Inc. Electronic interaction authentication and verification, and related systems, devices, and methods
EP3934203A1 (en) 2016-12-30 2022-01-05 INTEL Corporation Decentralized data storage and processing for iot devices
PH12017000044B1 (en) 2017-02-13 2018-08-20 Samsung Electronics Co Ltd Vehicle parking area access management system and method
DE112017007002T5 (en) 2017-03-03 2020-01-02 Ford Global Technologies, Llc Car Parking Control
US11341251B2 (en) * 2017-04-19 2022-05-24 Quintessencelabs Pty Ltd. Encryption enabling storage systems
US10984136B2 (en) 2017-04-21 2021-04-20 Micron Technology, Inc. Secure memory device with unique identifier for authentication
US10783600B2 (en) 2017-05-25 2020-09-22 GM Global Technology Operations LLC Method and system using a blockchain database for data exchange between vehicles and entities
JP6754325B2 (en) 2017-06-20 2020-09-09 国立大学法人東海国立大学機構 Authentication method for in-vehicle authentication system, in-vehicle authentication device, computer program and communication device
US20190027044A1 (en) 2017-07-19 2019-01-24 Aptiv Technologies Limited Automated secured-area access system for an automated vehicle
JP6773617B2 (en) 2017-08-21 2020-10-21 株式会社東芝 Update controller, software update system and update control method
GB2566265B (en) * 2017-09-01 2020-05-13 Trustonic Ltd Post-manufacture generation of device certificate and private key for public key infrastructure
JP6903529B2 (en) * 2017-09-11 2021-07-14 株式会社東芝 Information processing equipment, information processing methods and programs
US11140141B2 (en) 2017-09-18 2021-10-05 Fiske Software Llc Multiparty key exchange
JP6929181B2 (en) * 2017-09-27 2021-09-01 キヤノン株式会社 Devices and their control methods and programs
US10783729B2 (en) 2017-10-11 2020-09-22 Marc Chelnik Vehicle parking authorization assurance system
US10536279B2 (en) 2017-10-22 2020-01-14 Lg Electronics, Inc. Cryptographic methods and systems for managing digital certificates
US10812257B2 (en) 2017-11-13 2020-10-20 Volkswagen Ag Systems and methods for a cryptographically guaranteed vehicle identity
US11323249B2 (en) 2017-12-20 2022-05-03 Lg Electronics, Inc. Cryptographic methods and systems for authentication in connected vehicle systems and for other uses
US10867058B2 (en) * 2017-12-29 2020-12-15 Niall Joseph Duffy Method and system for protecting secure computer systems from insider threats
US11011056B2 (en) 2018-01-29 2021-05-18 Fujitsu Limited Fragmentation-aware intelligent autonomous intersection management using a space-time resource model
CN108683491B (en) * 2018-03-19 2021-02-05 中山大学 Information hiding method based on encryption and natural language generation
US10917237B2 (en) * 2018-04-16 2021-02-09 Microsoft Technology Licensing, Llc Attestable and destructible device identity
US10778661B2 (en) 2018-04-27 2020-09-15 Micron Technology, Inc. Secure distribution of secret key using a monotonic counter
US10742406B2 (en) 2018-05-03 2020-08-11 Micron Technology, Inc. Key generation and secure storage in a noisy environment
CN109525396B (en) * 2018-09-30 2021-02-23 华为技术有限公司 Method and device for processing identity key and server
US10326797B1 (en) * 2018-10-03 2019-06-18 Clover Network, Inc Provisioning a secure connection using a pre-shared key
WO2020074933A1 (en) 2018-10-12 2020-04-16 Micron Technology, Inc. Method and apparatus to recognize transported passengers and goods
WO2020074934A1 (en) 2018-10-12 2020-04-16 Micron Technology , Inc. Improved vehicle communication
US10868667B2 (en) 2018-11-06 2020-12-15 GM Global Technology Operations LLC Blockchain enhanced V2X communication system and method
KR20200091689A (en) 2019-01-23 2020-07-31 한국전자통신연구원 Security management system for vehicle communication and operating method thereof, message processing method of vehicle communication service providing system having the same
US11271755B2 (en) 2019-03-25 2022-03-08 Micron Technology, Inc. Verifying vehicular identity
US11361660B2 (en) 2019-03-25 2022-06-14 Micron Technology, Inc. Verifying identity of an emergency vehicle during operation
US11233650B2 (en) 2019-03-25 2022-01-25 Micron Technology, Inc. Verifying identity of a vehicle entering a trust zone
US11323275B2 (en) 2019-03-25 2022-05-03 Micron Technology, Inc. Verification of identity using a secret key

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130142329A1 (en) * 2011-12-02 2013-06-06 Cisco Technology, Inc. Utilizing physically unclonable functions to derive device specific keying material for protection of information
US20140032933A1 (en) * 2012-07-24 2014-01-30 Ned M. Smith Providing access to encrypted data
US20140189890A1 (en) * 2012-12-28 2014-07-03 Patrick Koeberl Device authentication using a physically unclonable functions based key generation system
CN105337725A (en) * 2014-08-08 2016-02-17 中国科学院数据与通信保护研究教育中心 Key management device and key management method
WO2017194335A2 (en) * 2016-05-09 2017-11-16 Intrinsic Id B.V. Programming device arranged to obtain and store a random bit string in a memory device

Also Published As

Publication number Publication date
KR20210131444A (en) 2021-11-02
EP3949257A1 (en) 2022-02-09
US20220078035A1 (en) 2022-03-10
US11218330B2 (en) 2022-01-04
EP3949257A4 (en) 2022-12-21
JP2022527757A (en) 2022-06-06
CN113632417A (en) 2021-11-09
US20200313911A1 (en) 2020-10-01

Similar Documents

Publication Publication Date Title
US11218330B2 (en) Generating an identity for a computing device using a physical unclonable function
US11962701B2 (en) Verifying identity of a vehicle entering a trust zone
US11323275B2 (en) Verification of identity using a secret key
US11361660B2 (en) Verifying identity of an emergency vehicle during operation
CN112042151B (en) Secure distribution of secret keys using monotonic counters
US9886596B1 (en) Systems and methods for secure processing with embedded cryptographic unit
US20130086385A1 (en) System and Method for Providing Hardware-Based Security
KR20210132721A (en) Secure communication when accessing the network
EP2575068A1 (en) System and method for providing hardware-based security
CN115037492A (en) Online security services based on security features implemented in memory devices
US12088581B2 (en) Track activities of components in endpoints having secure memory devices via identity validation
CN115037494A (en) Cloud service login without pre-customization of endpoints
CN115021949A (en) Method and system for identification management of endpoints having memory devices protected for reliable authentication
CN115037493A (en) Monitoring integrity of endpoints with secure memory devices for identity authentication
CN115021950A (en) Online service store for endpoints

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20778971

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021557290

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217034176

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020778971

Country of ref document: EP

Effective date: 20211025