CN116305136A - Source audit trail for micro-service architecture - Google Patents


Info

Publication number
CN116305136A
CN116305136A (application CN202211588006.6A)
Authority
CN
China
Prior art keywords
service
micro-service
source metadata
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202211588006.6A
Other languages
Chinese (zh)
Inventor
R. Poornachandran
V. Zimmer
S. Banik
M. Carranza
K. A. Doshi
F. Guim Bernat
K. Kumar
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Application filed by Intel Corp
Publication of CN116305136A

Classifications

    • H04L 67/51: Discovery or management of network services, e.g. service location protocol [SLP] or web services
    • H04L 41/5006: Creating or negotiating SLA contracts, guarantees or penalties
    • G06F 21/57: Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F 21/52: Monitoring users, programs or devices to maintain the integrity of platforms during program execution, e.g. stack integrity; preventing unwanted data erasure; buffer overflow
    • G06F 21/56: Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F 8/60: Software deployment
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • H04L 41/5009: Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L 67/104: Peer-to-peer [P2P] networks
    • H04L 67/562: Brokering proxy services
    • H04L 9/3278: Verifying identity or authority using challenge-response with physically unclonable functions [PUF]
    • G06F 2009/45587: Isolation or security of virtual machine instances
    • G06F 2009/45591: Monitoring or debugging support
    • H04L 67/34: Network arrangements or protocols involving the movement of software or configuration parameters
    • H04L 9/50: Cryptographic mechanisms using hash chains, e.g. blockchains or hash trees


Abstract

Source audit trails for micro-service architectures are described herein. An apparatus for facilitating source audit trails for a micro-service architecture is disclosed. The apparatus includes one or more processors to: obtain, by a micro-service of a service hosted in a data center, provisioning credentials for the micro-service based on a validation protocol; generate source metadata for a task performed by the micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource used to perform the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; encrypt the source metadata with the provisioning credentials of the micro-service; and record the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resource performing the task and the micro-service.
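The flow recited in the abstract (obtain a credential via the validation protocol, generate source metadata, seal it with the credential, append it to a local blockchain) can be sketched in Python. This is a minimal illustration, not the patented mechanism: the function names and record fields are assumptions, an HMAC stands in for encryption with the provisioning credential, and a plain hash chain stands in for the local blockchain.

```python
import hashlib
import hmac
import json
import time

def obtain_credential(microservice_id: str) -> bytes:
    # Stand-in for the validation protocol that provisions a credential;
    # a real system would attest to a verifier and receive a key, not
    # derive one locally like this.
    return hashlib.sha256(f"credential:{microservice_id}".encode()).digest()

def generate_source_metadata(microservice_id, hw_state, sw_state, sidecar_state):
    # The ingredients the abstract names: identity, resource operational
    # state, and sidecar state during the task (field names assumed).
    return {
        "microservice": microservice_id,
        "hw_state": hw_state,
        "sw_state": sw_state,
        "sidecar_state": sidecar_state,
        "timestamp": time.time(),
    }

def seal_metadata(credential: bytes, metadata: dict) -> str:
    # Stand-in for encrypting with the provisioning credential: the HMAC
    # binds the record to the credential without revealing it.
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(credential, payload, hashlib.sha256).hexdigest()

def append_block(chain: list, sealed: str) -> str:
    # The "local blockchain" modeled as an append-only hash chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block_hash = hashlib.sha256((prev + sealed).encode()).hexdigest()
    chain.append({"prev": prev, "sealed": sealed, "hash": block_hash})
    return block_hash
```

An auditor can later recompute each block hash from its predecessor to detect any tampering with the recorded source metadata.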

Description

Source audit trail for micro-service architecture
Technical Field
Embodiments relate generally to data processing and, more particularly, to source audit trails for micro-service architecture.
Background
Data centers typically utilize a micro-service architecture to provide network infrastructure services. A micro-service architecture arranges an application as a collection of loosely coupled micro-services: processes that communicate over a network using technology-agnostic protocols to accomplish a goal. In some cases, the micro-services are deployed on a containerization platform that provides containerized workloads and/or services. A container orchestration platform may utilize a service mesh to manage the large volume of network-based inter-process communication between micro-services. The service mesh is a dedicated software infrastructure layer for micro-services that includes elements for making communication between micro-services fast, reliable, and secure. The service mesh provides capabilities including service discovery, load balancing, encryption, observability, traceability, and authentication and authorization. The micro-service deployment model provided by the service mesh has become increasingly flexible, allowing micro-services to be scaled up and down.
In a service mesh environment, a typical worker node in a computing cluster may handle hundreds of container workloads simultaneously. These worker nodes may also have statically attached, dedicated hardware accelerators optimized for compute-intensive tasks. For example, one class of hardware accelerator may be optimized to run cryptographic and compression algorithms efficiently, while another may be optimized for machine learning acceleration. Such hardware accelerators may be provided as a form of disaggregated computing, in which a workload is distributed across separate computing resources, such as CPUs, GPUs, and hardware accelerators (including field programmable gate arrays (FPGAs)), that are connected via a network rather than residing on the same platform connected via physical links such as peripheral component interconnect express (PCIe). Disaggregated computing improves resource utilization and reduces cost of ownership by enabling more efficient use of available resources. It can also aggregate a large number of hardware accelerators for large computations, making computing more efficient and performant.
The micro-service deployment model provided by the service mesh has become increasingly flexible, allowing micro-services to be scaled up and down. With this increasing deployment flexibility, and with the micro-service architecture's transition toward disaggregated computing resources, the micro-services of a service can be deployed across many heterogeneous hardware devices. It is therefore increasingly difficult to provide secure audit trails that confirm provenance and maintain security and confidentiality in a service mesh.
Drawings
So that the manner in which the above-recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate typical embodiments and are therefore not to be considered limiting of scope. The figures are not drawn to scale. In general, the same reference numerals are used throughout the drawings and the accompanying written description to refer to the same or like parts.
FIG. 1 illustrates a data center system that provides source audit trails for micro-service architectures according to an implementation herein.
FIG. 2 illustrates a block diagram of components of a computing platform in a data center system according to an implementation herein.
FIG. 3A is a block diagram of a service platform implementing source audit trail for micro-service architecture according to an implementation herein.
FIG. 3B illustrates an example data center hosting multiple server racks connected via a network according to an implementation herein.
FIG. 4A is a block diagram depicting a blockchain system for secure source metadata tagging and tracking via blockchains according to implementations herein.
FIG. 4B is a diagram illustrating an operational schematic for source audit trail for a micro-service architecture according to an implementation herein.
FIG. 5A is a flow chart illustrating an embodiment of a method for facilitating source audit trails for a micro-service architecture.
FIG. 5B is a flow chart illustrating an embodiment of a method for implementing evaluation of service policies using source audit trails for micro-service architecture.
FIG. 6 is a schematic diagram of an illustrative electronic computing device enabling source audit trails for a micro-service architecture, according to some embodiments.
Detailed Description
Implementations of the present disclosure describe source audit trails for micro-service architectures.
Cloud service providers (CSPs) are deploying solutions in data centers in which the processing of a workload is distributed across various computing resources, such as central processing units (CPUs), graphics processing units (GPUs), and/or hardware accelerators (including, but not limited to, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), cryptographic accelerators, compression accelerators, etc.). Traditionally, these computing resources have run on the same platform, connected via physical communication links such as peripheral component interconnect express (PCIe).
However, disaggregated computing is increasingly common in data centers. With disaggregated computing, CSPs deploy solutions in which the processing of a workload is distributed across disaggregated computing resources (such as CPUs, GPUs, and hardware accelerators, including FPGAs, ASICs, etc.) that are connected via a network rather than residing on the same platform connected via physical links such as PCIe. Disaggregated computing improves resource utilization and reduces cost of ownership by using available resources more efficiently. It can also aggregate a large number of hardware accelerators for large computations, making computing more efficient and performant.
A hardware accelerator (also referred to herein as a hardware accelerator resource, hardware accelerator device, accelerator resource, accelerator device, and/or extension resource) as discussed herein may refer to any of a special-purpose central processing unit (CPU), a graphics processing unit (GPU), a general-purpose GPU (GPGPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an inference accelerator, a cryptographic accelerator, a compression accelerator, another special-purpose hardware accelerator, and the like.
Furthermore, the data centers used by CSPs to deploy service meshes typically utilize a micro-service architecture to provide the network infrastructure services of the service mesh. The micro-service architecture arranges an application as a collection of loosely coupled micro-services, which may be processes that communicate over a network using technology-agnostic protocols to accomplish a goal. In some cases, the micro-services are deployed on a containerization platform that provides containerized workloads and/or services. In some examples, a service may be a large service comprising hundreds of micro-services working in concert, or it may be a modest single service. A workload may refer to anything that consumes resources (such as computing power) while running on the cloud. In some embodiments, an application, service, or micro-service may be referred to as a workload, meaning that the workload can move between different cloud platforms, or from on-premises to cloud and back, without dependencies or friction.
The container orchestration platform may utilize a service mesh to manage the large volume of network-based inter-process communication between micro-services. The service mesh is a dedicated software infrastructure layer for micro-services that includes elements for making communication between micro-services fast, reliable, and secure. The service mesh provides capabilities including service discovery, load balancing, encryption, observability, traceability, and authentication and authorization.
As previously mentioned, the micro-service deployment model provided by the service mesh has become increasingly flexible, allowing micro-services to be scaled up and down. With the increasing flexibility of micro-service deployment and the micro-service architecture's transition toward disaggregated computing resources, the micro-services of a service can be deployed across many heterogeneous hardware devices (e.g., intellectual property (IP) cores or blocks, heterogeneous processing units (XPUs)). It is therefore increasingly difficult to provide secure audit trails that confirm provenance and maintain security and confidentiality in a service mesh.
This is particularly evident in conventional systems, which lack the ability to provide robust and accurate source audit trails. As used herein, "source" (i.e., provenance) may refer to the chronology of the ownership, custody, or location of an object. Similarly, an audit trail refers to a series of records of computer events pertaining to an operating system, an application, or user activity. A computer system may have several audit trails, each devoted to a particular type of activity, and auditing is the inspection and analysis of management, operational, and technical controls. Conventional service meshes deploying a micro-service architecture lack the ability to provide metadata, transmitted with computing communications (e.g., remote procedure call (RPC) inter-process communications), that enables secure audit trails for provenance, IP confidentiality, micropayments, and the like. Furthermore, conventional systems lack appropriate revocation, on-boarding, and off-boarding based on source reputation scores.
Implementations of the present disclosure address the above technical deficiencies by providing source audit trails for micro-service architectures. In implementations herein, techniques are provided for generating source metadata for transactions performed by the micro-services of a service managed by a service mesh. Interchangeable computing cores (including computing nodes and/or different blocks on different XPUs) may run multiple micro-services, and security audit trails generated from the source metadata may be used to monitor and track those micro-services. The source metadata may be generated from telemetry metadata collected for the transactions of a micro-service, covering data at rest, in motion, and in computation across heterogeneous XPU blocks (including software stacks). In an example embodiment, the source metadata may be homomorphically encrypted and tracked via a blockchain.
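The telemetry-to-metadata step described above can be sketched as folding raw telemetry samples, tagged as at-rest, in-motion, or in-compute, into a single source-metadata record. The schema below is hypothetical; the disclosure does not fix field names or a sample format.

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class SourceMetadata:
    # Illustrative fields only; not the patent's schema.
    microservice_id: str
    task_id: str
    data_at_rest: dict = field(default_factory=dict)     # e.g., storage digests
    data_in_motion: dict = field(default_factory=dict)   # e.g., RPC endpoints, bytes moved
    data_in_compute: dict = field(default_factory=dict)  # e.g., XPU block utilization
    timestamp: float = field(default_factory=time.time)

def from_telemetry(microservice_id: str, task_id: str, samples: list) -> dict:
    """Fold raw telemetry samples into one source-metadata record.

    Each sample is assumed to look like
    {"kind": "rest" | "motion" | "compute", "name": ..., "value": ...}.
    """
    meta = SourceMetadata(microservice_id, task_id)
    for s in samples:
        bucket = {
            "rest": meta.data_at_rest,
            "motion": meta.data_in_motion,
            "compute": meta.data_in_compute,
        }[s["kind"]]
        bucket[s["name"]] = s["value"]
    return asdict(meta)
```

The resulting dictionary is what would then be encrypted and appended to the local blockchain of source metadata.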
In implementations herein, a controller may enforce one or more provisioning policies for a micro-service during runtime of the service, based on the generated source metadata, while an evaluator checks the controller using security audit trails generated from that metadata. The controller and evaluator may be part of a trusted execution environment (TEE) to provide secure and confidential computing. For example, the evaluator may check any hardware or software instance proposed by the controller to determine whether the proposed instance satisfies the provisioning policy for the micro-service. The evaluator may also use evaluation metrics to give the controller feedback on whether it properly enforced the provisioning policy, based on the audit trail generated from the source metadata.
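An evaluator check of the kind described above might look like the following sketch: given a provisioning policy and the audit records derived from source metadata, decide whether the controller's proposed hardware instance is acceptable. The policy fields, record shape, and function names are assumptions for illustration, not the patented mechanism.

```python
from dataclasses import dataclass

@dataclass
class ProvisioningPolicy:
    # Hypothetical policy fields.
    allowed_hw: set               # hardware instances the micro-service may use
    required_sidecar_state: str   # sidecar state every recorded task must show

def evaluate_proposal(policy: ProvisioningPolicy, proposed_hw: str,
                      audit_records: list) -> bool:
    """Evaluator-style check: does the controller's proposed hardware
    satisfy the policy, judged against the source-metadata audit trail?"""
    if proposed_hw not in policy.allowed_hw:
        return False
    # Every recorded task on that hardware must show a healthy sidecar.
    for rec in audit_records:
        if rec["hw"] == proposed_hw and rec["sidecar_state"] != policy.required_sidecar_state:
            return False
    return True
```

In a TEE-hosted deployment, both the policy and the audit records would be verified inside the trusted boundary before this check runs.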
Implementations of the present disclosure provide technical advantages over the conventional methods described above. One technical advantage is improved source tracking in a micro-service architecture. For example, implementations herein provide source tracking that enables auditability, secure revocation, XPU and micro-service on-board/off-board management, improved IP confidentiality, and the like. Furthermore, implementations herein provide a fine-grained, policy-configurable control mechanism over how source data is tracked across the heterogeneous components of a service (e.g., hardware components, software components, micro-services). This helps address trade-offs between performance and security.
FIG. 1 illustrates a data center system 100 that provides source audit trails for micro-service architectures according to an implementation herein. The data center system 100 illustrates an example data center (e.g., hosted by a cloud service provider (CSP)) that provides a variety of XPUs (heterogeneous processing units) for processing tasks, where the XPUs may include one or more of: a central processing unit (CPU) 115, a graphics processing unit (GPU) 135 (including a general-purpose GPU (GPGPU)), and other processing units (e.g., an inference accelerator 145, a cryptographic accelerator 155, a programmable or fixed-function FPGA 164, an application-specific integrated circuit (ASIC) 166, a compression accelerator, etc.). The data center may also provide storage units for data storage tasks, such as a solid state drive (SSD) 125. The XPUs and/or storage units may be hosted on corresponding servers or racks, such as CPU 115 hosted on application server (app server) 110, SSD 125 hosted on storage rack 120, GPU 135 hosted on GPU rack 130, inference accelerator 145 hosted on inference accelerator server 140, cryptographic accelerator 155 hosted on cryptographic accelerator rack 150, and general-purpose accelerators 162, 164, 166 hosted on accelerator rack 160.
The data center of the system 100 provides various offloads for its hosted processing components 115, 125, 135, 145, 155, 162, 164, 166 using, for example, infrastructure processing units (IPUs) 105 attached directly to the respective host processing components. Although the IPU 105 is discussed for purposes of example, other programmable network devices (such as DPUs or SmartNICs) may be used interchangeably with the IPU 105 herein. The offloads provided may be networking, storage, security, and so on. This allows the processing components 115, 125, 135, 145, 155, 162, 164, 166 to run without a hypervisor, and provides the CSP with the ability to rent out hosts in the data center to security-minded customers, or to avoid cross-talk and other problems associated with multi-tenant hosts.
The IPU 105 may provide a control point in the data center for security, acceleration, telemetry, and service orchestration on behalf of the data center operator, such as a cloud service provider (CSP). The IPU 105 architecture may build on existing smart network interface card (SmartNIC) features and form part of controlling security and data acceleration within and across distributed platforms. It is a security domain controlled by the CSP for managing the platform, providing services to tenants, and securing access to the data center network. By offloading host services, providing reliable transport, and optimizing data replication, the IPU 105 improves the performance and predictability of distributed runtimes and enables scaling to multi-terabit throughput.
In recent years, the IPU 105 has grown in complexity, beginning as a basic NIC whose only aim was to get packets into and out of the host. With the addition of networking software offloads, the NIC evolved into a SmartNIC capable of offloading functions such as vSwitch, virtio-net, AVF, etc. Remote disaggregated storage architectures provide a further evolution, in which compute and storage are no longer co-located; instead, large compute clusters connect to large storage clusters over the network, made possible by increasing network speeds and evolving protocols. One advantage that remote disaggregated storage provides over directly attached storage is that compute and storage can be developed and updated at different cadences. The amount of storage attached to a compute node is no longer limited by the physical addition or removal of hard drives; storage can instead be hot-plugged as a physical function (PF) on a PCIe switch. Technologies such as Smart End Point enable IPUs to have firmware-controlled switches, so the PCIe switch itself is not constrained by a particular hardware implementation.
As discussed above, embodiments herein provide source audit trails for micro-service architectures. In one implementation, the data center system 100 includes one or more resources that can implement a service management component 170 to provide the source audit trail for the micro-service architecture. For illustrative purposes, the service management component 170 is shown in the CPU 115 and the GPU 135 of the data center system 100. However, according to implementations herein, the service management component 170 may operate in one or more of the various other disaggregated resources of the data center system 100. As such, the resources of the data center system 100 may be located in different platforms connected via a network (not shown) in the data center system 100. In some implementations, software and/or middleware may cause the resources of the data center system 100 to logically appear to be in the same platform. Furthermore, transport protocols implemented in software and/or hardware (e.g., a network interface card (NIC)) may make remote resources logically appear to be local resources as well.
Further details of the service management component 170 implementing source audit trails for micro-service architecture are described below with reference to fig. 2-6.
FIG. 2 illustrates a block diagram of components of a computing platform in a data center system according to an implementation herein. In the depicted embodiment, platforms 202A, 202B, and 202C (collectively referred to herein as platforms 202) and data center management platform 206 are interconnected via network 208. In other embodiments, the computer system may include any suitable number (i.e., one or more) of platforms. In some embodiments (e.g., when the computer system comprises a single platform), all or part of the data center management platform 206 may be included on the platform 202.
Platform 202 may include platform resources 210 having one or more processing resources 212 (e.g., XPU including CPU, GPU, FPGA, ASIC, other hardware accelerators), memory 214 (which may include any number of different modules), chipset 216, communication interface device(s) 218, and any other suitable hardware and/or software to execute hypervisor 213 or other operating systems capable of executing workloads associated with applications running on platform 202.
In some embodiments, platform 202 may serve as a host platform for one or more guest systems 222 invoking these applications. Platform 202A may represent any suitable computing environment, such as a high-performance computing environment, a data center, a communication service provider infrastructure (e.g., one or more portions of an evolved packet core), an in-memory computing environment, a computing system of a vehicle (e.g., an automobile or airplane), an internet of things (Internet of Things, ioT) environment, an industrial control system, other computing environments, or a combination thereof.
Each platform 202 may include platform resources 210. The platform resources 210 may include, among other logic enabling the functionality of the platform 202, one or more processing resources 212 (such as CPUs, GPUs, FPGAs, other hardware accelerators, etc.), memory 214, one or more chipsets 216, and communication interface device(s) 218. Although three platforms are illustrated, computer platform 202A may be interconnected with any suitable number of platforms. In various embodiments, a platform 202 may reside on a circuit board that is installed in a chassis, rack, or other suitable structure comprising multiple platforms coupled together through network 208 (which may include, for example, a rack or backplane switch).
Where the processing resources 212 include CPUs, each may include any suitable number of processor cores and supporting logic (e.g., uncores). The cores may be coupled to each other, to the memory 214, to at least one chipset 216, and/or to the communication interface device 218 through one or more controllers residing on the processing resources 212 (e.g., a CPU) and/or the chipset 216. In some embodiments, the processing resources 212 are embodied within a socket that is permanently or removably coupled to platform 202A. Platform 202 may include any suitable number of processing resources 212.
Memory 214 may include any form of volatile or non-volatile memory, including, but not limited to, magnetic media (e.g., one or more tape drives), optical media, random access memory (random access memory, RAM), read-only memory (ROM), flash memory, removable media, or any other suitable local or remote memory component or components. The memory 214 may be used for short, medium, and/or long term storage of the platform 202A. Memory 214 may store any suitable data or information utilized by platform resource 210, including software embedded in a computer-readable medium and/or encoded logic (e.g., firmware) incorporated in hardware or otherwise stored. Memory 214 may store data used by cores of processing resources 212. In some embodiments, memory 214 may also include storage for instructions executable by processing resources 212 (e.g., cores of a CPU) or other processing elements (e.g., logic residing on chipset 216) to provide functionality associated with management component 226 or other components of platform resource 210.
Platform 202 may also include one or more chipsets 216, the one or more chipsets 216 including any suitable logic for supporting operations of processing resources 212. In various embodiments, chipset 216 may reside on the same die or package as processing resources 212, or on one or more different dies or packages. Each chipset may support any suitable number of processing resources 212. The chipset 216 may also include one or more controllers to couple other components of the platform resource 210 (e.g., the communication interface 228 or the memory 214) to the one or more processing resources 212.
In the depicted embodiment, each chipset 216 also includes a management component 226. Management component 226 may include any suitable logic for supporting the operation of chipset 216. In a particular embodiment, management component 226 may collect real-time telemetry data from the chipset 216, the processing resources 212 and/or memory 214 managed by the chipset 216, other components of platform resources 210, and/or various connections between components of platform resources 210.
The chipsets 216 also each include a communication interface 228. Communication interface 228 may be used for the communication of signaling and/or data between chipset 216 and one or more I/O devices, one or more networks 208, and/or one or more devices coupled to network 208 (e.g., data center management platform 206). For example, communication interface 228 may be used to send and receive network traffic, such as data packets. In particular embodiments, communication interface 228 includes one or more physical network interface controllers (network interface controller, NIC) (also referred to as network interface cards or network adapters). A NIC may include electronic circuitry for communicating using any suitable physical layer and data link layer standard, such as Ethernet (e.g., as defined by an IEEE 802.3 standard), Fibre Channel, InfiniBand, Wi-Fi, or another suitable standard. A NIC may include one or more physical ports that may be coupled to a cable (e.g., an Ethernet cable). A NIC may enable communication between any suitable element of chipset 216 (e.g., management component 226) and another device coupled to network 208. In various embodiments, a NIC may be integrated with the chipset 216 (i.e., may be on the same integrated circuit or circuit board as the rest of the chipset logic) or may be on a different integrated circuit or circuit board that is electromechanically coupled to the chipset.
The platform resources 210 may include additional communication interface device(s) 218. Similar to communication interface 228, communication interface device(s) 218 may be used for the communication of signaling and/or data between platform resources 210 and one or more networks 208 and one or more devices coupled to the networks 208. For example, communication interface device(s) 218 may be used to send and receive network traffic, such as data packets. In particular embodiments, communication interface device(s) 218 include one or more physical NICs. These NICs may enable communication between any suitable element of platform resources 210 (e.g., processing resources 212 or memory 214) and another device coupled to network 208 (e.g., an element of another platform coupled to network 208 through one or more networks, or a remote computing device).
Platform resource 210 may receive and execute any suitable type of workload. The workload may include any request to utilize one or more resources of the platform resources 210, such as one or more cores or associated logic. For example, the workload may include: a request to instantiate a software component, such as an I/O device driver 224 or guest system 222; processing requests for network packets received from micro service containers 232A, 232B (collectively referred to herein as micro service containers 232) or devices external to platform 202A, such as network nodes coupled to network 208; executing a request for a process or thread associated with guest system 222, an application running on platform 202A, hypervisor 213, or other operating system running on platform 202A; or other suitable processing request.
In contrast to a virtual machine, which simulates a computer system with its own dedicated hardware, a micro-service container 232 may refer to a standard unit of software that packages up code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. A container image is a lightweight, standalone, executable software package that includes the components for running an application: code, runtime, system tools, system libraries, and settings. The container 232 is a form of operating system (OS) virtualization, in which features of the OS are used to isolate processes and to control the amount of CPU, memory, and disk that those processes can access.
When container 232 is implemented, hypervisor 213 may also be referred to as a container runtime. Although implementations herein discuss virtualization of micro-service functions via containers, in some implementations, virtual machines may be hosted by hypervisor 213 and used to host micro-services and/or other components of services provided by applications.
The hypervisor 213 (also referred to as a virtual machine monitor (virtual machine monitor, VMM)) may include logic for creating and running guest systems 222. The hypervisor 213 may present guest operating systems run by virtual machines with a virtual operating platform (i.e., the virtual machines appear, from their own perspective, to run on separate physical nodes even though they are actually consolidated onto a single hardware platform) and manage the execution of the guest operating systems by platform resources 210. Services of the hypervisor 213 may be provided by virtualizing in software, or through hardware-assisted resources that utilize minimal software intervention, or both. Multiple instances of a variety of guest operating systems may be managed by the hypervisor 213. Each platform 202 may have a separate instantiation of a hypervisor 213.
In implementations herein, the hypervisor 213 can also be implemented as a container runtime environment capable of building and containerizing applications.
The hypervisor 213 may be a native or bare-metal hypervisor that runs directly on platform resources 210 to control the platform logic and manage the guest operating systems. Alternatively, the hypervisor 213 may be a hosted hypervisor that runs on, and abstracts the guest operating systems from, a host operating system. The hypervisor 213 may include a virtual switch 238 that may provide virtual switching and/or routing functions to the virtual machines of guest systems 222.
Virtual switch 238 may include software elements that execute using components of platform resource 210. In various embodiments, the hypervisor 213 may communicate with any suitable entity (e.g., SDN controller) that may cause the hypervisor 213 to reconfigure parameters of the virtual switch 238 in response to changing conditions in the platform 202 (e.g., addition or deletion of micro-service containers 232 or optimization that may enhance performance of the platform).
The elements of platform resources 210 may be coupled together in any suitable manner. For example, a bus may couple any of the components together. A bus may include any known interconnect, such as a multi-drop bus, a mesh interconnect, a ring interconnect, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, or a Gunning transceiver logic (GTL) bus, to name a few.
The elements of computer platform 202A may be coupled together in any suitable manner, such as through one or more networks 208. Network 208 may be any suitable network or combination of one or more networks operating using one or more suitable networking protocols. A network may represent a series of nodes, points, and interconnected communication paths for receiving and transmitting packets of information that propagate through a communication system. For example, the network may include one or more firewalls, routers, switches, security devices, anti-virus servers, or other useful network devices.
In implementations herein, one or more of the processing resources 212 and/or the micro-service containers 232 may provide a service management component (not shown), such as the service management component 170 described with respect to fig. 1. Further details of how processing resources 212 and/or micro-service containers 232 implement service management components to provide source audit trails for micro-service architecture are described below with respect to fig. 3A-6.
Fig. 3A is a block diagram of a service platform 300 implementing source audit trails for micro-service architectures according to implementations herein. In one implementation, service platform 300 is the same as platform 202 of data center system 200 described with reference to FIG. 2. In some implementations, the service platform 300 may be hosted in a data center that may or may not utilize disaggregated computing. Embodiments herein are not limited to implementation in a disaggregated computing environment and may be deployed across a wide range of different data center environments. The disaggregated computing data center system 200 of fig. 2 is provided as one example implementation of the service platform 300 and is not intended to limit embodiments herein.
In one implementation, the service platform 300 may host services implemented by one or more micro-service containers 320A, 320B (collectively referred to herein as micro-service containers 320). The micro-service container 320 may be identical to the micro-service container 232 described with reference to fig. 2. The services can be orchestrated and managed using the service management component 340. Service management component 340 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware.
Service platform 300 may act as a hosting platform for services, implementing deployed service micro-services as one or more micro-service containers 320 that invoke the functionality of the services. Service platform 300 may represent any suitable computing environment, such as a high performance computing environment, a data center, a communication service provider infrastructure (e.g., one or more portions of an evolved packet core), an in-memory computing environment, a computing system of a vehicle (e.g., an automobile or aircraft), an internet of things (IoT) environment, an industrial control system, other computing environments, or a combination thereof. In implementations herein, the container 320 may be implemented using hardware circuitry (such as one or more of a CPU, GPU, hardware accelerator, etc.). In one embodiment, container 320 may be implemented using platform 202 described with reference to FIG. 2.
The micro-service container 320 may include logic for implementing the functionality of the micro-services 325A, 325B (collectively referred to herein as micro-services 325) and the sidecars 330A, 330B (collectively referred to herein as sidecars 330). The sidecar 330 may be a container that runs on the same pod as the micro-service 325. As depicted herein, the sidecar 330 is shown as part of the micro-service container 320, but in some implementations, the sidecar 330 may be implemented as a container that is functionally separate from the micro-service 325.
The local facilitator 310 is connected to the sidecar 330 and can operate in the privileged space of the micro-service container 320. In one implementation, local facilitator 310 is a privileged daemon (daemon) that has access to low-level information. For example, the local facilitator 310 has access to low-level software telemetry and hardware data (such as registries).
In implementations herein, the sidecar 330 may include one or more components to support source audit trails for the micro-service architecture. These components may include telemetry data ingestion components 332A, 332B (collectively referred to herein as telemetry data ingestion components 332) and collected data 334A, 334B (collectively referred to as data stores of collected data 334). The telemetry data ingestion component 332 may further include metadata components 336A, 336B (collectively referred to herein as metadata components 336) to collect metadata associated with tasks performed by the micro-services 325. Metadata collected by metadata component 336 may track the source of hardware device(s) (and/or IP blocks) and micro-services operating on the tasks.
In implementations herein, there may be multiple micro-services running in a data center that are running tasks capable of generating source metadata (such as the source metadata collected by metadata component 336). Fig. 3B illustrates an example data center 380 hosting a plurality of server racks 385 connected via a network 382 according to implementations herein. In one implementation, the service platform 300 of fig. 3A is hosted by data center 380. Each server rack 385 may host a plurality of computing nodes 390. A computing node 390 may be one or more processing resources, such as an XPU including CPUs, GPUs, hardware accelerators, and so on. In one implementation, the computing nodes 390 may include any of the processing components 115, 125, 135, 145, 155, 162, 164, 166 hosted in the data center system described with respect to fig. 1. In one implementation, service platform 300 deploys one or more micro-services in data center 380. As shown in fig. 3B, micro-services MS1 391, MS2 392, MS3 393, MS4 394, and MS5 395 are deployed on, and operate on, computing nodes 390 in the data center 380. In one implementation, the MSs 391-395 may be associated with more than one service hosted in data center 380. In one implementation, each MS 391-395 may perform one or more tasks that may have associated source metadata generated for the task. The generation, collection, and processing of such source metadata is described in further detail below with reference to fig. 3A.
Referring back to fig. 3A, the service platform 300 further includes a service management component 340. The service management component 340 and its underlying subcomponents may be implemented using hardware circuitry (such as one or more of a CPU, GPU, hardware accelerator, etc.). In one embodiment, service management component 340 may be implemented using platform 202 described with reference to FIG. 2. More generally, the example service management component 340 can be implemented in hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, service management component 340 may be implemented by one or more analog or digital circuits, logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU), digital signal processor(s) (DSP), application Specific Integrated Circuit (ASIC), programmable logic device(s) (programmable logic device, PLD), and/or field programmable logic device(s) (field programmable logic device, FPLD).
In one implementation, the service management component 340 operates to control management and/or orchestration of the resources (such as micro-services) of a service of a service mesh hosted by a data center (such as data center system 100 of fig. 1). The service management component 340 may be located on the same node as, or on a different node from, the micro-service containers 320 in the service platform 300.
The service management component 340 can include one or more components to support source audit trails for a micro-service architecture. These components may include a controller 350, an evaluator 360, and a metadata repository 370. In implementations herein, the controller 350 may host a source metadata manager 352, an XPU manager 354, and a blockchain integration manager 356. The evaluator 360 may host a metadata enforcement manager 362 and a rewards feedback manager 364.
In implementations herein, the service management component 340, including the controller 350, evaluator 360, and metadata repository 370, may operate as part of a Trusted Execution Environment (TEE) (not shown) generated by the underlying computing system(s) hosting the controller 350, evaluator 360, and metadata repository 370. In some implementations, a subset of the service management components 340 may operate as part of a TEE. Hardware support of the underlying computing system(s) may be used to authenticate the TEE and protect the TEE from unauthorized access. Illustratively, the TEE may be embodied as one or more secure enclaves established using Intel SGX technology. The TEE may also include or otherwise interface with one or more drivers, libraries, or other components of the underlying computing system(s) to provide an interface to one or more other XPUs.
In implementations herein, the micro-service container 320 and the service management component 340 provide source audit trails for micro-service architectures. In one implementation, the sidecar 330 for each micro-service container 320 includes a telemetry data ingestion component 332 that receives telemetry data of the service platform 300 associated with the micro-service 325. Telemetry data may include telemetry data and logs from the lower-level layers of the architecture (e.g., privileged space) as well as from the application (micro-service 325) itself (e.g., user space). The collected data 334 is a data store that maintains the telemetry data about the micro-service 325. The metadata component 336 can generate and collect source metadata associated with tasks performed by the micro-services 325. The source metadata may also be stored in the collected data 334.
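The sidecar flow above can be sketched in code. This is a non-limiting illustrative sketch only; the class and field names are hypothetical and are not drawn from the patent.

```python
import time
from collections import defaultdict

class TelemetryIngestion:
    """Hypothetical sidecar component: ingests telemetry for one micro-service
    and derives per-task source metadata (cf. telemetry data ingestion
    component 332 and metadata component 336)."""

    def __init__(self, microservice_id):
        self.microservice_id = microservice_id
        self.collected_data = defaultdict(list)  # cf. collected data 334

    def ingest(self, layer, record):
        # Telemetry may arrive from privileged space (low-level HW/SW data via
        # the local facilitator) or from user space (application logs).
        self.collected_data[layer].append(record)

    def source_metadata_for_task(self, task_id, xpu_id):
        # Assemble the source metadata tracked for one task/transaction.
        return {
            "task_id": task_id,
            "microservice_id": self.microservice_id,
            "xpu_id": xpu_id,
            "timestamp": time.time(),
            "telemetry": {k: list(v) for k, v in self.collected_data.items()},
        }

ingest = TelemetryIngestion("MS1")
ingest.ingest("privileged", {"register_dump": "0x1f"})
ingest.ingest("user", {"log": "task started"})
meta = ingest.source_metadata_for_task("task-42", "XPU-1")
```

In this sketch, the metadata records both the micro-service identity and the hosting XPU, which is the pairing the audit trail tracks.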
At the service management component 340, the controller 350 can manage source metadata collection and utilization for the service. In one implementation, the source metadata manager 352 may cause the metadata component 336 to collect and store source metadata for the micro-services 325. For example, the source metadata manager 352 may cause platform credentials to be securely provisioned to the micro-services 325 deployed for the service. Such platform credentials may be created during manufacture of the micro-service 325 (e.g., a physical unclonable function (physical unclonable function, PUF)) or during platform preparation. The platform credentials may be specific to the underlying processing resources (e.g., an XPU such as a CPU, GPU, hardware accelerator, etc.) that host and execute the micro-service. The platform credentials may be securely provisioned via the TEE. In some implementations, the platform credentials may be provisioned by a baseboard management controller (baseboard management controller, BMC) and/or by a remote or local administrator.
Based on the provisioned credentials, during a discovery phase of the micro-service 325, the micro-service 325 may perform a validation protocol with the controller 350. Once the micro-service 325 is successfully validated, it is allowed to participate in the source audit trails (e.g., tracking and service exposure) provided via source metadata.
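One way such a validation protocol could work is a simple challenge-response over the provisioned platform credential. The sketch below is a hypothetical illustration (the patent does not specify the protocol); an HMAC over a controller-issued nonce stands in for whatever proof-of-possession scheme is actually used.

```python
import hashlib
import hmac
import secrets

# Hypothetical platform credential, provisioned at manufacture (e.g.,
# PUF-derived) or platform preparation, and shared securely via the TEE/BMC.
PLATFORM_CREDENTIAL = secrets.token_bytes(32)

def controller_challenge():
    # Controller 350 issues a fresh nonce during the discovery phase.
    return secrets.token_bytes(16)

def microservice_response(credential, nonce):
    # Micro-service 325 proves possession of the credential.
    return hmac.new(credential, nonce, hashlib.sha256).digest()

def controller_verify(credential, nonce, response):
    expected = hmac.new(credential, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = controller_challenge()
resp = microservice_response(PLATFORM_CREDENTIAL, nonce)
validated = controller_verify(PLATFORM_CREDENTIAL, nonce, resp)
# Only a validated micro-service is admitted to the source audit trail.
```

`hmac.compare_digest` is used for the comparison so verification time does not leak information about the expected value.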
The source metadata manager 352 may orchestrate the source metadata collection process for the services and their deployment of the micro-services 325 by indicating when and/or how the micro-services 325 should collect source metadata corresponding to tasks performed by the micro-services. For example, the source metadata manager 352 instructs the micro-service 325 to collect source metadata for all transactions (e.g., tasks) for the micro-service, for the determined time frames, for the determined time snapshots, for transactions occurring in the determined geographic locations, for the determined transaction types, and so forth. Thus, source metadata tracking is a policy configurable option that may be controlled by source metadata manager 352 in implementations herein.
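The policy-configurable options named above can be modeled as a small policy object. This is an illustrative sketch under the assumption that the policy reduces to simple predicate checks; the knob names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CollectionPolicy:
    """Hypothetical source metadata collection policy, mirroring the options
    the source metadata manager 352 may configure: all transactions, a time
    frame, geographic locations, or transaction types."""
    all_transactions: bool = False
    time_window: tuple = None               # (start, end) in epoch seconds
    geo_locations: set = field(default_factory=set)
    transaction_types: set = field(default_factory=set)

    def should_collect(self, txn):
        # Collect when any configured criterion matches the transaction.
        if self.all_transactions:
            return True
        if self.time_window and self.time_window[0] <= txn["ts"] <= self.time_window[1]:
            return True
        if txn.get("geo") in self.geo_locations:
            return True
        return txn.get("type") in self.transaction_types

policy = CollectionPolicy(transaction_types={"inference"})
collect = policy.should_collect({"ts": 0, "type": "inference"})   # matches
skip = policy.should_collect({"ts": 0, "type": "logging"})        # no match
```

The source metadata manager would push such a policy down to the sidecars, making collection a configuration decision rather than a code change.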
In response to being notified to perform source metadata collection for the micro-service 325, the sidecar 330 may utilize the metadata component 336 to obtain telemetry metadata associated with a transaction. Such telemetry metadata may include an identification of the micro-service 325 that processed the transaction, communications from the sidecar 330, XPU compute utilization data, and/or XPU compute characteristics data. In one implementation, the sidecar 330 may handle these telemetry metadata elements differently (e.g., encrypting them using secure hardware (HW) features). The telemetry metadata may then travel with inter-process communication packets (e.g., RPC or gRPC communications) from the micro-service 325.
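Piggy-backing the telemetry metadata on an inter-process call can be sketched as a small envelope attached to the outgoing message. This is a hypothetical illustration: the field names are invented, and a keyed hash stands in for the secure-hardware protection the text mentions.

```python
import hashlib
import json

def attach_source_metadata(payload, microservice_id, xpu_stats, secret=b"hw-key"):
    """Hypothetical sketch: wrap an outgoing call's payload with the telemetry
    metadata elements named above (micro-service identity, XPU compute
    utilization and characteristics), integrity-tagged with a keyed hash."""
    metadata = {
        "origin": microservice_id,
        "xpu_utilization": xpu_stats["utilization"],
        "xpu_characteristics": xpu_stats["characteristics"],
    }
    canonical = json.dumps(metadata, sort_keys=True).encode()
    tag = hashlib.sha256(secret + canonical).hexdigest()
    return {"payload": payload, "source_metadata": metadata, "tag": tag}

msg = attach_source_metadata(
    {"op": "resize"}, "MS2",
    {"utilization": 0.3, "characteristics": "GPU"})
```

In a real gRPC deployment this envelope would more naturally travel as call metadata (headers) added by the sidecar, so the application payload stays untouched.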
In some implementations, the telemetry metadata that travels with the RPC or gRPC calls may also be split and mixed (e.g., n-forked, combined, etc.). As such, support for structure creation and interpretation may be a service provided by the infrastructure to run across heterogeneous components (software, SoCs, devices).
The source metadata manager 352 may then intercept the telemetry metadata and generate a source metadata structure for each transaction of the micro-service 325. The source metadata structure may begin with the underlying platform credentials provisioned for the micro-service and build on top of those platform credentials using the telemetry metadata collected by the metadata component 336. An example source metadata structure is as follows:
[The example source metadata structure is published as an image (Figure SMS_1) in the original document.]
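Since the structure itself is rendered as an image in the publication, the following is a hypothetical reconstruction in code of a layered source metadata structure as described in the surrounding text: a base of provisioned platform credentials, with one layer appended per micro-service/XPU that handled the transaction. All field names are assumptions, not from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceMetadataLayer:
    # One layer per micro-service/XPU that operated on the transaction.
    microservice_id: str
    xpu_id: str
    telemetry: dict

@dataclass
class SourceMetadata:
    # Begins with the provisioned platform credential and builds upward
    # with telemetry metadata collected by the metadata component.
    platform_credential: str
    transaction_id: str
    layers: List[SourceMetadataLayer] = field(default_factory=list)

    def add_layer(self, layer: SourceMetadataLayer):
        self.layers.append(layer)

sm = SourceMetadata(platform_credential="cred-xpu1", transaction_id="txn-7")
sm.add_layer(SourceMetadataLayer("MS1", "XPU-1", {"utilization": 0.3}))
```

Each additional hop in the service mesh would append another layer, yielding the per-transaction provenance chain the audit trail consumes.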
In implementations herein, the source metadata manager 352 may protect the source metadata by applying homomorphic encryption. In one implementation, the source metadata manager 352 applies additive homomorphic encryption, which allows each layer to add its source data without seeing the previous layers' data. In some implementations, the source metadata manager 352 may then hash the encrypted source metadata with a platform ID (such as the provisioned platform credential).
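The additive property can be demonstrated with a toy Paillier cryptosystem, a standard additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so each layer can contribute to an encrypted value without decrypting earlier contributions. This is an illustrative sketch only (tiny primes, no padding), not the scheme the patent uses.

```python
import hashlib
import math
import random

def keygen(p=17, q=19):
    # Toy Paillier key generation with fixed small primes; g = n + 1.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid precisely because g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

def homomorphic_add(pub, c1, c2):
    # Ciphertext product decrypts to the plaintext sum (mod n).
    (n,) = pub
    return (c1 * c2) % (n * n)

pub, priv = keygen()
c = homomorphic_add(pub, encrypt(pub, 12), encrypt(pub, 30))
total = decrypt(priv, c)          # 12 + 30 = 42, computed under encryption

# The encrypted source metadata may then be hashed with a platform ID.
digest = hashlib.sha256(b"platform-cred-xpu1" + str(c).encode()).hexdigest()
```

A production deployment would use a vetted homomorphic encryption library with cryptographically sized parameters; the structure of the layer-by-layer accumulation is the same.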
In implementations herein, the generated source metadata may be tracked via a distributed ledger (such as a blockchain). For example, the controller 350 may utilize the blockchain integration manager 356 to record the source metadata in a blockchain available via a public ledger. The blockchain integration manager 356 may cause the source metadata to be recorded in a blockchain maintained for each hardware device hosting a micro-service 325. For example, if micro-service 325A is hosted by local XPU 1, the source metadata generated by the source metadata manager 352 from the telemetry metadata of transactions performed by micro-service 325A may be stored to local XPU 1 blockchain copy 372, which may be a data store. Similarly, if micro-service 325B is hosted by local XPU 2, the source metadata generated by the source metadata manager 352 from the telemetry metadata of transactions performed by micro-service 325B may be stored to local XPU 2 blockchain copy 374, which may be a data store. Multiple local XPU blockchain copies may be maintained, one for each XPU in the service platform 300 (e.g., up to local XPU N blockchain copy 376). Secure source metadata tagging and tracking via blockchain is discussed in more detail with respect to fig. 4A.
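A per-XPU ledger copy can be sketched as an append-only hash chain, which is the essential tamper-evidence property a blockchain record provides here. The class and field names below are hypothetical illustrations, not the patent's design.

```python
import hashlib
import json

class BlockchainCopy:
    """Hypothetical per-XPU ledger (cf. local XPU blockchain copies 372-376):
    an append-only hash chain of recorded source metadata entries."""

    def __init__(self, xpu_id):
        self.xpu_id = xpu_id
        self.blocks = [{"prev": "0" * 64, "data": f"genesis:{xpu_id}"}]

    def _hash(self, block):
        # Deterministic block hash via canonical JSON encoding.
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def record(self, source_metadata):
        # Append a block linked to the hash of the previous block.
        self.blocks.append({"prev": self._hash(self.blocks[-1]),
                            "data": source_metadata})

    def verify(self):
        # Audit: every block must reference the hash of its predecessor.
        return all(self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

ledger = BlockchainCopy("XPU-1")
ledger.record({"microservice": "MS1", "txn": "txn-7"})
ok = ledger.verify()
```

Because each block commits to its predecessor, altering any recorded source metadata breaks the chain for every later block, which is what lets a trust agent audit the copies independently.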
Fig. 4A is a block diagram depicting a blockchain system 400 for secure source metadata tagging and tracking via blockchains in accordance with implementations herein. In one implementation, the blockchain system 400 may be implemented using the service platform 300 described with reference to fig. 3A.
In one implementation, a plurality of source metadata 405 may be generated by a plurality of micro-services S1, S2, through Sn operating on a plurality of XPUs: XPU 1, XPU 2, through XPU n. For example, micro-service S1 may operate on XPU 1 through XPU n and generate source metadata 405, referred to as peer-to-peer resource management metadata (P2PRM), shown as P2PRMs1. The source metadata 405 is generated on each XPU for S1 and is encrypted using the platform credential (k) provisioned for each XPU. As such, the source metadata for S1 includes [P2PRMs1]k1, [P2PRMs1]k2 through [P2PRMs1]kn. Similarly, source metadata 405 is generated by S2 through Sn and is similarly depicted in fig. 4A. The source metadata 405 generated by each micro-service S1-Sn at each XPU 1-n is stored and recorded in the blockchain copy 372, 374, 376 maintained for each of XPU 1, XPU 2 412, through XPU n 414 (which may be the same as the local XPU blockchain copies 372-376 described with respect to FIG. 3A). The local blockchain copies 372, 374, 376 may then be made available through a public ledger to the trust agent 420. As such, in implementations herein, a decentralized audit may be performed on the source metadata 405.
Referring back to fig. 3A, in implementations herein, the generation and enablement of secure "metadata" traveling with, for example, the gRPC allows tracking which IP block/micro-services operate on what tasks, particularly when heterogeneous multiple services work in concert with different/competing providers. In one implementation, the XPU manager 354 of the controller 350 may provide such traceability in terms of loading and unloading of micro-services for services and revocation management.
For example, the tracking or security metadata may have some validation capability in addition to being secure. For example, if a service obtains a tracking of the execution of micro-services using multiple RPC (e.g., gRPC) calls crossing multiple IPs, each IP can potentially add secure and signed telemetry data. Each portion of the signature can potentially be used by the SW stack to validate that the tracking was generated by trusted parties. Hence, this approach facilitates determining that the data was not maliciously generated. In addition to mapping tasks and services to vendor information based on runtime observations, implementations herein can also utilize such information to understand a value metric normalized to utilization on a per-IP-block basis. For example: IP block X: utilization: 30%; missed latency SLA: 40% of the time. IP block Y: utilization: 90%; missed latency SLA: 5%.
The evaluator 360 of the service management component 340 can be employed in implementations herein to oversee secure source metadata regarding provisioning policies of a service and take any responsive action accordingly. In one implementation, metadata enforcement manager 362 can monitor source metadata generated by controller 350 to determine whether controller 350 is accurately enforcing policies provisioned for a service (and its micro-services). Policies provisioned for a service may include a service level agreement (service level agreement, SLA) for the service including quality of service (quality of service, qoS) metrics for the service (and its micro-services) and a Service Level Objective (SLO) for the service (and its micro-services). For example, the policy may indicate a level of confidentiality that should be maintained for the data, a class of micro-services that may work together, a network protocol to be used, a processing device specification to be used, and so on.
The metadata enforcement manager 362 may access the source metadata via the metadata repository 370 and/or the local blockchain copies 372-376. In some implementations, the metadata enforcement manager 362 can decrypt at least a portion of the source metadata as part of the access. The metadata enforcement manager 362 can analyze the source metadata with respect to the policies provisioned for the micro-services to identify whether any violations of the policies exist for the micro-services. For example, when the XPU manager 354 loads a new micro-service or unloads an existing micro-service, the metadata enforcement manager 362 may examine the audit trail generated from the recorded source metadata to determine whether such actions of the XPU manager 354 conform to the provisioned policies for the micro-service. The evaluation performed by the metadata enforcement manager 362 may determine whether any revocation management indicated by the provisioned policies for the micro-services was performed when services were loaded or unloaded.
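A policy check over an audit trail can be sketched as a scan of recorded entries against the provisioned constraints. The policy fields and entry fields below are illustrative assumptions; the text names confidentiality levels, micro-service classes, network protocols, and device specifications as examples of what such policies may cover.

```python
# Hypothetical provisioned policy, cf. the SLA/QoS/SLO examples above.
POLICY = {
    "allowed_microservice_classes": {"vision", "storage"},
    "required_protocol": "grpc-tls",
}

def find_violations(audit_trail, policy):
    """Scan audit-trail entries (built from recorded source metadata) and
    report each entry that breaks a provisioned constraint."""
    violations = []
    for entry in audit_trail:
        if entry["class"] not in policy["allowed_microservice_classes"]:
            violations.append((entry["txn"], "disallowed micro-service class"))
        if entry["protocol"] != policy["required_protocol"]:
            violations.append((entry["txn"], "wrong network protocol"))
    return violations

trail = [
    {"txn": "t1", "class": "vision", "protocol": "grpc-tls"},
    {"txn": "t2", "class": "crypto-mining", "protocol": "grpc"},
]
violations = find_violations(trail, POLICY)
```

Here transaction t2 trips both checks, which is the kind of finding the enforcement manager would feed into the reward computation.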
The rewards feedback manager 364 may generate evaluation metrics based on whether violations of one or more policies are identified. Additionally, machine learning (ML)-based techniques may be applied to drive future improvements based on the rewards. These evaluation metrics may act as a reward function that encourages positive behavior from the controller 350 and discourages negative behavior from the controller 350. As such, implementations herein provide a method of checks and balances between the controller 350 and the evaluator 360.
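One simple form such a reward function could take is a linear score over audited transactions, rewarding violation-free behavior and penalizing each violation found. This is purely an illustrative assumption; the patent does not specify the metric, and the coefficients here are arbitrary.

```python
def evaluation_metric(transactions_audited, violations_found,
                      reward_per_clean=1.0, penalty_per_violation=5.0):
    """Hypothetical reward function for the rewards feedback manager:
    positive credit for each cleanly audited transaction, a larger
    penalty for each policy violation the evaluator identified."""
    clean = transactions_audited - violations_found
    return clean * reward_per_clean - violations_found * penalty_per_violation

# 100 audited transactions with 2 violations: 98 * 1.0 - 2 * 5.0 = 88.0
score = evaluation_metric(transactions_audited=100, violations_found=2)
```

Such a scalar score is also a natural fit for the ML-based improvement loop mentioned above, since it can serve directly as a training reward.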
Fig. 4B is a diagram illustrating an operational diagram 430 for source audit trail for a micro-service architecture according to an implementation herein. In one implementation, the service management component 340 described with reference to FIG. 3A implements the operational schematic 430.
Operational schematic 430 includes a TEE 435 hosting a controller 440 and an evaluator 450. In one implementation, the controller 440 may be the same as the controller 350 described with respect to fig. 3A, and the evaluator 450 may be the same as the evaluator 360 described with respect to fig. 3A. The controller 440 may include source metadata 442 and an XPU manager 444, which may be the same as the source metadata manager 352 and the XPU manager 354 of fig. 3A, respectively. The evaluator 450 may include a security audit trail 452 and evaluation metrics 454.
In implementations herein, an interchangeable compute kernel 460 may include different blocks of computing nodes and/or different XPUs running multiple micro-services, such as MS1-MS5 391-395. In one implementation, MS1-MS5 391-395 are the same as the MS1-MS5 391-395 described with reference to fig. 3B. These micro-services MS1-MS5 391-395 may be part of an XPU interdependence flow graph and SLA model architecture 470 that is monitored using a security audit trail 475 with source metadata, as described herein.
Using the techniques described above with respect to figs. 3A-4A, controller 440 may generate source metadata 442 based on telemetry metadata stored in metadata topology archive 490. In implementations herein, source metadata 442 may be homomorphically encrypted and tracked via a blockchain as described above. XPU manager 444 may enforce one or more provisioning policies (e.g., stored in archive 480) for micro-services MS1-MS5 391-395 during runtime of the service based on the generated source metadata 442.
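Additive homomorphic encryption, as mentioned above, allows encrypted metadata values to be combined without decryption. The toy Paillier-style sketch below uses deliberately small primes and is not secure; it only illustrates the additive property (ciphertext multiplication adds plaintexts), and none of its names or parameters come from this disclosure.

```python
# Minimal Paillier-style additive homomorphic encryption sketch.
# Toy key sizes for illustration only -- NOT cryptographically secure.
import random
from math import gcd

def keygen(p=10007, q=10009):              # small twin primes, illustration only
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                   # valid because g = n + 1 below
    return (n, n + 1), (lam, mu)           # public (n, g), private (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n      # the Paillier L-function
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 42
```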
The evaluator 450 checks the controller 440 using the security audit trail 452 generated from the source metadata 442. For example, evaluator 450 may examine any hardware or software instance proposed by XPU manager 444 to determine whether such a proposed instance satisfies the provisioning policies for the micro-service. The evaluator 450 may utilize the evaluation metrics 454 to provide feedback to the controller 440 as to whether the controller 440 properly enforced the provisioning policies, based on the evaluated security audit trail 452 generated from the source metadata 442.
Embodiments may be provided, for example, as a computer program product that may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines (such as a computer, a network of computers, or other electronic devices), may cause the one or more machines to perform operations in accordance with embodiments described herein. The machine-readable medium may include, but is not limited to: floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs, RAMs, EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
Furthermore, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection).
Throughout this document, the term "user" may be referred to interchangeably as "viewer," "observer," "presenter," "individual," "end user," and the like. It should be noted that throughout this document, terms such as "graphics domain" may be referenced interchangeably with "graphics processing unit," "graphics processor," or simply "GPU," and similarly, "CPU domain" or "host domain" may be referenced interchangeably with "computer processing unit," "application processor," or simply "CPU."
It is noted that terms like "node," "computing node," "server device," "cloud computer," "cloud server computer," "machine," "host," "device," "computing device," "computer," "computing system," and the like may be used interchangeably throughout this document. It should be further noted that terms like "application," "software application," "program," "software program," "package," "software package," and the like may be used interchangeably throughout this document. Also, terms such as "job," "input," "request," "message," and the like may be used interchangeably throughout this document.
FIG. 5A is a flow chart illustrating an embodiment of a method 500 for facilitating source audit trails for a micro-service architecture. The method 500 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc.), software (such as instructions run on a processing device), or a combination thereof. More specifically, the method 500 may be implemented as one or more modules employing a set of logic instructions stored in a machine or computer readable storage medium (also referred to herein as a non-transitory computer readable storage medium) such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as PLA, FPGA, CPLD, etc., in fixed function logic hardware using circuit technology such as ASIC, CMOS, or TTL technology, for example, etc., or in any combination thereof.
For simplicity and clarity of presentation, the processes of method 500 are illustrated in linear order; however, it is contemplated that any number of them may be executed in parallel, asynchronously, or in a different order. Further, for brevity, clarity, and ease of understanding, many of the components and processes described with reference to figs. 1-4 may not be repeated or discussed below. In one implementation, a data center system implementing a sidecar in a micro-service container, such as a processing device executing the service management component 340 of the service platform 300 of fig. 3A (which may operate in a TEE), may perform the method 500.
The example process of the method 500 of fig. 5A begins at block 510, where the processing device may obtain, via a micro-service of a service hosted in a data center that includes the processing device, provisioning credentials for the micro-service based on a validation protocol. At block 520, the processing device may generate source metadata for a task performed by the micro-service. In one implementation, the source metadata may include information such as an identification of the micro-service, an operational state of hardware resources and/or software resources used to perform the micro-service and the task, and an operational state of a sidecar of the micro-service during the task.
Subsequently, at block 530, the processing device may encrypt the source metadata with the provisioning credentials for the micro-service using additive homomorphic encryption. Then, at block 540, the processing device may record the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resources performing the task and the micro-service. Finally, at block 550, the processing device may make the local blockchain of source metadata available in the trust agent along with other blockchains of source metadata for the service.
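Blocks 520-550 can be sketched as appending (already-encrypted) metadata records to a hash-chained log. The class and field names below are assumptions for illustration; a production system would use an actual distributed ledger rather than this single-process sketch.

```python
# Illustrative hash-chained log of encrypted source-metadata records.
# Names and fields are assumptions, not structures from this disclosure.
import hashlib
import json
import time

class SourceMetadataChain:
    def __init__(self):
        self.blocks = []

    def record(self, encrypted_metadata: str):
        """Append an encrypted metadata record, linked to the previous block."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev_hash, "data": encrypted_metadata, "ts": time.time()}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)

    def verify(self) -> bool:
        """Recompute each block hash and check the prev-hash linkage."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: b[k] for k in ("prev", "data", "ts")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if b["prev"] != prev or b["hash"] != expected:
                return False
            prev = b["hash"]
        return True

chain = SourceMetadataChain()
chain.record("<ciphertext for MS1 task metadata>")
chain.record("<ciphertext for MS2 task metadata>")
assert chain.verify()
```

Tampering with any recorded entry breaks the hash linkage, which is what lets the evaluator trust the audit trail it reads.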
FIG. 5B is a flow chart illustrating an embodiment of a method 560 for implementing evaluation of service policies using source audit trails for micro-service architecture. The method 560 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, etc.), software (such as instructions run on a processing device), or a combination thereof. More specifically, the method 560 may be implemented as one or more modules employing a set of logic instructions stored in a machine or computer readable storage medium (also referred to herein as a non-transitory computer readable storage medium) such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as PLA, FPGA, CPLD, etc., in fixed function logic hardware using circuit technology such as ASIC, CMOS, or TTL technology, for example, etc., or in any combination thereof.
For simplicity and clarity of presentation, the processes of method 560 are illustrated in linear order; however, it is contemplated that any number of them may be executed in parallel, asynchronously, or in a different order. Further, for brevity, clarity, and ease of understanding, many of the components and processes described with reference to figs. 1-4 may not be repeated or discussed below. In one implementation, a data center system implementing a sidecar in a micro-service container, such as a processing device executing the service management component 340 of the service platform 300 of fig. 3A (which may operate in a TEE), may perform the method 560.
The example process of method 560 of fig. 5B begins at block 565, where the processing device may access a log of source metadata for a service. In one implementation, the log of source metadata may include source metadata generated by micro-services of the service, where the source metadata is homomorphically encrypted and recorded in a blockchain. At block 570, the processing device may decrypt at least a portion of the source metadata.
Subsequently, at block 575, the processing device may access one or more policies established for the service, the one or more policies including a service level agreement (SLA) for the service, including quality of service (QoS) metrics and service level objectives (SLOs). Then, at block 580, the processing device may analyze the portion of the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies. Finally, at block 585, the processing device may generate an evaluation metric based on whether violations of the one or more policies are identified.
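Blocks 575-585 can be sketched as a policy check over decrypted metadata records that yields an evaluation metric. The SLA fields (`slo_latency_ms`, `allowed_qos`) and the compliance-fraction metric are illustrative assumptions, not definitions from this disclosure.

```python
def evaluate(records, sla):
    """records: decrypted source-metadata dicts; sla: policy thresholds."""
    violations = [r for r in records
                  if r["latency_ms"] > sla["slo_latency_ms"]
                  or r["qos_class"] not in sla["allowed_qos"]]
    # Evaluation metric (block 585): fraction of policy-compliant records.
    metric = (1.0 - len(violations) / len(records)) if records else 1.0
    return metric, violations

sla = {"slo_latency_ms": 100, "allowed_qos": {"gold", "silver"}}
records = [{"latency_ms": 40, "qos_class": "gold"},     # compliant
           {"latency_ms": 250, "qos_class": "gold"}]    # violates the SLO
metric, bad = evaluate(records, sla)
assert metric == 0.5 and len(bad) == 1
```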
FIG. 6 is a schematic diagram of an illustrative electronic computing device 600 that enables a source audit trail for micro-service architectures, according to some embodiments. In some embodiments, computing device 600 includes one or more processors 610 including one or more processor cores 618 that include a service management component (SMC) 615, such as the service management components 170, 340 described with respect to figs. 1 and 3A. In some embodiments, the one or more processor cores 618 establish a TEE for hosting the SMC 615. In some embodiments, computing device 600 includes a hardware accelerator 668 that includes a service management component 682, such as the service management components 170, 340 described with respect to figs. 1 and 3A. In some embodiments, hardware accelerator 668 establishes a TEE for hosting service management component 682. In some embodiments, the computing device provides a source audit trail for the micro-service architecture, as provided in figs. 1-5B.
Computing device 600 may additionally include one or more of the following: cache 662, graphics processing unit (GPU) 612 (which may be a hardware accelerator in some implementations), wireless input/output (I/O) interface 620, wired I/O interface 630, system memory 640 (e.g., memory circuitry), power management circuitry 650, non-transitory storage device 660, and network interface 670 for connecting to network 672. The following discussion provides a brief, general description of the components that form the illustrative computing device 600. By way of non-limiting example, computing device 600 may comprise a desktop computing device, a blade server device, a workstation, or a similar device or system.
In an embodiment, processor core 618 is capable of executing machine-readable instruction set 614, reading data and/or instruction set 614 from one or more storage devices 660, and writing data to one or more storage devices 660. Those skilled in the relevant art will appreciate that the illustrated embodiments, as well as other embodiments, may be implemented with other processor-based device configurations, including portable or handheld electronic devices, e.g., smartphones, portable computers, wearable computers, consumer electronics, personal computers ("PCs"), network PCs, minicomputers, server blades, mainframe computers, and the like.
Processor core 618 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements disposed partially or fully in a PC, server, or other computing system capable of executing processor-readable instructions.
Computing device 600 includes a bus or similar communication link 616 that communicatively couples various system components including a processor core 618, a cache 662, graphics processor circuitry 612, one or more wireless I/O interfaces 620, one or more wired I/O interfaces 630, one or more storage devices 660, and/or one or more network interfaces 670, and facilitates the exchange of information and/or data among the various system components. Computing device 600 may be referred to herein in the singular, but this is not intended to limit embodiments to a single computing device 600 as in some embodiments there may be more than one computing device 600 incorporating, including, or containing any number of communicatively coupled, collocated, or remotely networked circuits or devices.
Processor core 618 may include any number, type, or combination of currently available or future developed devices capable of executing a set of machine-readable instructions.
Processor core 618 may include, or be coupled to, any currently available or future developed single-core or multi-core processor or microprocessor, such as: one or more systems on a chip (SOCs); a central processing unit (CPU); a digital signal processor (DSP); a graphics processing unit (GPU); an application-specific integrated circuit (ASIC); a programmable logic unit; a field-programmable gate array (FPGA); or the like. Unless described otherwise, the construction and operation of the various blocks shown in fig. 6 are of conventional design. Consequently, such blocks are not described in further detail herein, as they will be understood by those skilled in the relevant art. The bus 616 that interconnects at least some of the components of the computing device 600 may employ any currently available or future developed serial or parallel bus structure or architecture.
The system memory 640 may include read-only memory ("ROM") 642 and random access memory ("RAM") 646. Portions of ROM 642 may be used to store or otherwise retain a basic input/output system ("BIOS") 644. BIOS 644 provides basic functionality to computing device 600, for example by causing processor core 618 to load and/or execute one or more sets of machine-readable instructions 614. In an embodiment, at least some of the one or more sets of machine-readable instructions 614 cause at least a portion of the processor core 618 to provide, create, generate, transform, and/or function as a dedicated, specific, special-purpose machine, such as a word processor, digital image acquisition machine, media player, gaming system, communication device, smart phone, or the like.
Computing device 600 may include at least one wireless input/output (I/O) interface 620. The at least one wireless I/O interface 620 may be communicatively coupled to one or more physical output devices 622 (haptic devices, video displays, audio output devices, hard-copy output devices, etc.). The at least one wireless I/O interface 620 may be communicatively coupled to one or more physical input devices 624 (pointing devices, touch screens, keyboards, haptic devices, etc.). The at least one wireless I/O interface 620 may include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to, near field communication (NFC) and the like.
Computing device 600 may include one or more wired input/output (I/O) interfaces 630. At least one wired I/O interface 630 may be communicatively coupled to one or more physical output devices 622 (haptic devices, video displays, audio output devices, hard-copy output devices, etc.). At least one wired I/O interface 630 may be communicatively coupled to one or more physical input devices 624 (pointing devices, touch screens, keyboards, haptic devices, etc.). The wired I/O interface 630 may include any currently available or future developed I/O interface. Example wired I/O interfaces include, but are not limited to: Universal Serial Bus (USB), IEEE 1394 ("FireWire"), and the like.
Computing device 600 may include one or more communicatively coupled non-transitory data storage devices 660. The data storage devices 660 may include one or more hard disk drives (HDDs) and/or one or more solid-state drives (SSDs). The one or more data storage devices 660 may include any currently available or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 660 may include, but are not limited to, any currently available or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some implementations, the one or more data storage devices 660 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash memory storage units, or similar apparatuses or devices capable of being communicatively coupled to and decoupled from the computing device 600.
The one or more data storage devices 660 may include interfaces or controllers (not shown) that communicatively couple the respective storage device or system to the bus 616. The one or more data storage devices 660 may store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logic structures, and/or other data useful to the processor core 618 and/or graphics processor circuitry 612, or to one or more applications executed by the processor core 618 and/or graphics processor circuitry 612. In some examples, the one or more data storage devices 660 may be communicatively coupled to the processor core 618 via, for example, the bus 616, one or more wired communication interfaces 630 (e.g., Universal Serial Bus or USB), one or more wireless communication interfaces 620 (e.g., near field communication or NFC), and/or one or more network interfaces 670 (IEEE 802.3 or Ethernet, IEEE 802.11, etc.).
The processor-readable instruction set 614 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 640. Such instruction set 614 may be transferred in whole or in part from one or more data storage devices 660. The instruction set 614 may be loaded, stored, or otherwise retained in whole or in part in the system memory 640 during execution by the processor core 618 and/or the graphics processor circuitry 612.
Computing device 600 can include power management circuitry 650, which controls one or more operational aspects of energy storage device 652. In embodiments, energy storage device 652 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, energy storage device 652 may include one or more supercapacitors or ultracapacitors. In embodiments, power management circuitry 650 may alter, adjust, or control the flow of energy from an external power source 654 to energy storage device 652 and/or to computing device 600. The power source 654 may include, but is not limited to, a solar power system, a commercial power grid, a portable generator, an external energy storage device, or any combination thereof.
For convenience, processor core 618, graphics processor circuitry 612, wireless I/O interface 620, wired I/O interface 630, storage 660, and network interface 670 are illustrated as communicatively coupled to each other via bus 616, thereby providing connectivity between the components described above. In alternative embodiments, the components described above may be communicatively coupled differently than illustrated in fig. 6. For example, one or more of the components described above may be directly coupled to other components, or may be coupled to each other via one or more intermediate components (not shown). In another example, one or more of the components described above may be integrated into processor core 618 and/or graphics processor circuitry 612. In some embodiments, all or part of bus 616 may be omitted and the components directly coupled to one another using suitable wired or wireless connections.
The following examples relate to further embodiments. Example 1 is an apparatus to facilitate a source audit trail for a micro-service architecture. The apparatus of example 1 comprises one or more processors to: obtain, via a micro-service of a service hosted in a data center, provisioning credentials for the micro-service based on a validation protocol; for a task performed by the micro-service, generate source metadata for the task, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; encrypt the source metadata with the provisioning credentials for the micro-service; and record the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resources performing the task and the micro-service.
In example 2, the subject matter of example 1 can optionally include wherein the one or more processors further make the local blockchain of source metadata available in the trust agent with other blockchains of source metadata for the service. In example 3, the subject matter of any of examples 1-2 can optionally include wherein the source metadata includes a data structure including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task with respect to provisioning credentials for the micro-service.
In example 4, the subject matter of any of examples 1-3 can optionally include, wherein the provisioning credential includes a platform credential for a hardware device to perform the task performed by the micro-service. In example 5, the subject matter of any of examples 1-4 can optionally include, wherein the platform credential includes a physical unclonable function of the hardware device. In example 6, the subject matter of any of examples 1-5 can optionally include wherein the one or more processors provide a Trusted Execution Environment (TEE) for a controller of the device to generate and encrypt the source metadata.
In example 7, the subject matter of any of examples 1-6 can optionally include, wherein the one or more processors are further to: obtaining source metadata for the micro-service from the local blockchain; accessing one or more policies established for the micro-service; analyzing the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and generating one or more evaluation metrics based on whether violations of the one or more policies are identified. In example 8, the subject matter of any of examples 1-7 can optionally include, wherein the one or more policies include a Service Level Agreement (SLA) for the micro-service, the SLA including one or more of a quality of service (QoS) metric and a Service Level Objective (SLO) for the micro-service.
In example 9, the subject matter of any of examples 1-8 can optionally include wherein the one or more processors are further to decrypt source metadata obtained from the local blockchain. In example 10, the subject matter of any of examples 1-9 can optionally include wherein the one or more processors to encrypt the source metadata comprises the one or more processors to apply additive homomorphic encryption to encrypt the source metadata.
Example 11 is a non-transitory computer-readable storage medium to facilitate source audit trails for a micro-service architecture. The non-transitory computer-readable storage medium of example 11 having stored thereon executable computer program instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: obtaining provisioning credentials for a micro service based on a validation protocol by the micro service of the service hosted in a data center comprising one or more processors; for a task performed by the micro-service, generating source metadata for the task, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; encrypting source metadata with provisioning credentials for the micro-service; and recording the encrypted source metadata in a local blockchain of source metadata maintained for hardware resources performing tasks and micro-services.
In example 12, the subject matter of example 11 can optionally include wherein the operations further comprise making the local blockchain of source metadata available in the trust agent with other blockchains of source metadata for the service. In example 13, the subject matter of any of examples 11-12 can optionally include wherein the source metadata includes a data structure including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task with respect to provisioning credentials for the micro-service.
In example 14, the subject matter of any of examples 11-13 may optionally include, wherein the operations further comprise providing a Trusted Execution Environment (TEE) for a controller of the device to generate the source metadata and encrypting the source metadata. In example 15, the subject matter of examples 11-14 can optionally include, wherein the operations further comprise: obtaining source metadata for the micro-service from the local blockchain; accessing one or more policies established for the micro-service; analyzing the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and generating one or more evaluation metrics based on whether violations of the one or more policies are identified.
Example 16 is a method to facilitate source audit trails for a micro-service architecture. The method of example 16 may include obtaining, by one or more processors of a micro-service hosting a service, provisioning credentials for the micro-service based on a validation protocol; for a task performed by the micro-service, generating source metadata for the task, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; encrypting source metadata with provisioning credentials for the micro-service; and recording the encrypted source metadata in a local blockchain of source metadata maintained for hardware resources performing tasks and micro-services.
In example 17, the subject matter of example 16 can optionally include further comprising making the local blockchain of source metadata available in the trust agent with other blockchains of source metadata for the service. In example 18, the subject matter of any of examples 16-17 can optionally include, wherein the source metadata includes a data structure including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task with respect to provisioning credentials for the micro-service.
In example 19, the subject matter of any of examples 16-18 can optionally include further comprising providing a Trusted Execution Environment (TEE) for a controller of the device to generate source metadata and encrypting the source metadata. In example 20, the subject matter of examples 16-19 can optionally include further comprising: obtaining source metadata for the micro-service from the local blockchain; accessing one or more policies established for the micro-service; analyzing the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and generating one or more evaluation metrics based on whether violations of the one or more policies are identified.
Example 21 is a system to facilitate source audit trails for a micro-service architecture. The system of example 21 may optionally include: a memory for storing data blocks; and a processor communicatively coupled to the memory for: obtaining provisioning credentials for a micro service based on a validation protocol by the micro service of the service hosted in a data center comprising a processor; generating source metadata for a task performed by a micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of hardware resources or software resources for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; encrypting the source metadata with provisioning credentials for the micro-service; and recording the encrypted source metadata in a local blockchain of source metadata maintained for hardware resources performing the tasks and the micro-services.
In example 22, the subject matter of example 21 can optionally include wherein the one or more processors further make the local blockchain of source metadata available in the trust agent with other blockchains of source metadata for the service. In example 23, the subject matter of any of examples 21-22 can optionally include, wherein the source metadata includes a data structure including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task with respect to provisioning credentials for the micro-service.
In example 24, the subject matter of any of examples 21-23 may optionally include, wherein the provisioning credential includes a platform credential for a hardware device to perform the task performed by the micro-service. In example 25, the subject matter of any of examples 21-24 can optionally include, wherein the platform credential includes a physical unclonable function of the hardware device. In example 26, the subject matter of any of examples 21-25 can optionally include wherein the one or more processors provide a Trusted Execution Environment (TEE) for a controller of the device to generate and encrypt source metadata.
In example 27, the subject matter of any of examples 21-26 can optionally include, wherein the one or more processors are further to: obtaining source metadata for the micro-service from the local blockchain; accessing one or more policies established for the micro-service; analyzing the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and generating one or more evaluation metrics based on whether violations of the one or more policies are identified. In example 28, the subject matter of any of examples 21-27 can optionally include, wherein the one or more policies include a Service Level Agreement (SLA) for the micro-service, the Service Level Agreement (SLA) including one or more of a quality of service (QoS) metric and a Service Level Objective (SLO) for the micro-service.
In example 29, the subject matter of any of examples 21-28 can optionally include wherein the one or more processors are further to decrypt source metadata obtained from the local blockchain. In example 30, the subject matter of any of examples 21-29 can optionally include, wherein the one or more processors to encrypt the source metadata includes the one or more processors to apply additive homomorphic encryption to encrypt the source metadata.
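Example 30 mentions additive homomorphic encryption, which lets an auditor combine encrypted values without decrypting them. A toy Paillier cryptosystem illustrates the additive property; the primes here are tiny and the code is for illustration only, never for real use.

```python
import math
import random

# Toy Paillier parameters (tiny primes; illustrative only, NOT secure).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = (L(g^lam mod n^2))^-1 mod n, where L(u) = (u - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)


def encrypt(m: int) -> int:
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n


# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = 41, 1
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
```

The design appeal for an audit trail is that aggregate statistics over encrypted source metadata can be computed by an untrusted party that never sees the plaintext.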
Example 31 is an apparatus to facilitate a source audit trail for a micro-service architecture, comprising means for obtaining, via one or more processors hosting a micro-service of a service, provisioning credentials for the micro-service based on a validation protocol; means for generating source metadata for a task performed by the micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task; means for encrypting the source metadata with the provisioning credentials for the micro-service; and means for recording the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resource performing the task and the micro-service. In example 32, the subject matter of example 31 may optionally include the apparatus being further configured to perform the method of any of examples 17-20.
Example 33 is at least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method of any of examples 16-20. Example 34 is an apparatus to facilitate source audit trails for a micro-service architecture, the apparatus configured to perform the method of any of examples 16-20. Example 35 is an apparatus to facilitate a source audit trail for a micro-service architecture, the apparatus comprising means for performing the method of any of examples 16-20. The details in the examples may be used anywhere in one or more embodiments.
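The examples repeatedly recite recording encrypted source metadata in a "local blockchain." A minimal hash-chained append-only log conveys the idea; this is an illustrative stand-in, not the implementation the examples require.

```python
import hashlib
import json
import time


class LocalChain:
    """Minimal hash-chained log of encrypted source metadata (illustrative only)."""

    def __init__(self):
        self.blocks = []

    def append(self, encrypted_payload: bytes) -> dict:
        """Append one encrypted record, linking it to the previous block's hash."""
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"index": len(self.blocks), "prev": prev,
                 "payload": encrypted_payload.hex(), "ts": time.time()}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every link; returns False if any block was tampered with."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev"] != prev or hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True


chain = LocalChain()
chain.append(b"\x01encrypted-metadata-task-1")
chain.append(b"\x02encrypted-metadata-task-2")
assert chain.verify()
```

The chained hashes are what make the trail auditable: altering any recorded block invalidates every later link, so a trust proxy comparing local chains can detect tampering.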
The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Those skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the features as set forth in the appended claims.

Claims (25)

1. An apparatus, the apparatus comprising:
one or more processors configured to:
obtain, by a micro-service of a service hosted in a data center, provisioning credentials for the micro-service based on a validation protocol;
generate source metadata for a task performed by the micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task;
encrypt the source metadata with the provisioning credential for the micro-service; and
record the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resource performing the task and the micro-service.
2. The apparatus of claim 1, wherein the one or more processors are further to make the local blockchain of source metadata available in a trust proxy with other blockchains of source metadata for the service.
3. The apparatus of any of claims 1-2, wherein the source metadata comprises a data structure including the identification of the micro-service, an operational state of at least one of the hardware resource or the software resource for performing the micro-service and the task, and an operational state of the sidecar of the micro-service with respect to the provisioning credential for the micro-service during the task.
4. The apparatus of any of claims 1-3, wherein the provisioning credential comprises a platform credential for a hardware device to perform the task performed by the micro-service.
5. The apparatus of any of claims 1-4, wherein the platform credential comprises a physically unclonable function of the hardware device.
6. The apparatus of any of claims 1-5, wherein the one or more processors provide a trusted execution environment (TEE) for a controller of the device to generate and encrypt the source metadata.
7. The apparatus of any one of claims 1-6, wherein the one or more processors are further to:
obtain the source metadata for the micro-service from the local blockchain;
access one or more policies established for the micro-service;
analyze the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and
generate one or more evaluation metrics based on whether violations of the one or more policies are identified.
8. The apparatus of any of claims 1-7, wherein the one or more policies comprise a service level agreement (SLA) for the micro-service, the service level agreement (SLA) comprising one or more of a quality of service (QoS) metric and a service level objective (SLO) for the micro-service.
9. The apparatus of any one of claims 1-8, wherein the one or more processors are further to decrypt the source metadata obtained from the local blockchain.
10. The apparatus of any of claims 1-9, wherein the one or more processors to encrypt the source metadata comprise the one or more processors to apply additive homomorphic encryption to encrypt the source metadata.
11. A method, the method comprising:
obtaining, by one or more processors hosting a micro-service of a service, provisioning credentials for the micro-service based on a validation protocol;
generating source metadata for a task performed by the micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task;
encrypting the source metadata with the provisioning credential for the micro-service; and
recording the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resource performing the task and the micro-service.
12. The method of claim 11, further comprising making the local blockchain of source metadata available in a trust proxy with other blockchains of source metadata for the service.
13. The method of any of claims 11-12, wherein the source metadata includes a data structure including the identification of the micro-service, an operational state of at least one of the hardware resource or the software resource for performing the micro-service and the task, and an operational state of the sidecar of the micro-service with respect to the provisioning credential for the micro-service during the task.
14. The method of any of claims 11-13, further comprising providing a trusted execution environment (TEE) for a controller of the device to generate and encrypt the source metadata.
15. The method of any of claims 11-14, further comprising:
obtaining the source metadata for the micro-service from the local blockchain;
accessing one or more policies established for the micro-service;
analyzing the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and
generating one or more evaluation metrics based on whether violations of the one or more policies are identified.
16. A system to facilitate source audit trails for a micro-service architecture, the system comprising:
a memory for storing blocks of data; and
a processor communicatively coupled to the memory for:
obtaining, by a micro-service of a service hosted in a data center comprising the processor, provisioning credentials for the micro-service based on a validation protocol;
generating source metadata for a task performed by the micro-service, the source metadata including an identification of the micro-service, an operational state of at least one of a hardware resource or a software resource for performing the micro-service and the task, and an operational state of a sidecar of the micro-service during the task;
encrypting the source metadata with the provisioning credential for the micro-service; and
recording the encrypted source metadata in a local blockchain of source metadata maintained for the hardware resource performing the task and the micro-service.
17. The system of claim 16, wherein the processor is further to make the local blockchain of source metadata available in a trust proxy with other blockchains of source metadata for the service.
18. The system of any of claims 16-17, wherein the source metadata includes a data structure including the identification of the micro-service, an operational state of at least one of the hardware resource or the software resource for performing the micro-service and the task, and an operational state of the sidecar of the micro-service with respect to the provisioning credentials for the micro-service during the task.
19. The system of any of claims 16-18, wherein the provisioning credential comprises a platform credential for a hardware device to perform the task performed by the micro-service.
20. The system of any of claims 16-19, wherein the platform credential comprises a physically unclonable function of the hardware device.
21. The system of any of claims 16-20, wherein the processor provides a trusted execution environment (TEE) for a controller of the system to generate and encrypt the source metadata.
22. The system of any one of claims 16-21, wherein the processor is further to: obtain the source metadata for the micro-service from the local blockchain; access one or more policies established for the micro-service; analyze the source metadata with respect to the one or more policies to identify whether there is a violation of the one or more policies; and generate one or more evaluation metrics based on whether violations of the one or more policies are identified.
23. The system of any of claims 16-22, wherein the one or more policies include a service level agreement (SLA) for the micro-service, the service level agreement (SLA) including one or more of a quality of service (QoS) metric and a service level objective (SLO) for the micro-service.
24. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any one of claims 11-15.
25. An apparatus for facilitating source audit trails for a micro-service architecture, comprising means for performing the method of any of claims 11-15.
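Read together, method claims 11-15 describe a pipeline: obtain a provisioning credential via a validation protocol, generate source metadata for a task, encrypt the metadata with the credential, and record the ciphertext in a local blockchain. A compact end-to-end sketch follows; the credential derivation, the keystream cipher, and all names are toy stand-ins for illustration, not the claimed implementation.

```python
import hashlib
import hmac
import json
import time


def obtain_credential(microservice_id: str) -> bytes:
    # Stand-in for a provisioning credential obtained via a validation protocol.
    return hashlib.sha256(b"attested:" + microservice_id.encode()).digest()


def generate_metadata(microservice_id: str, task_id: str) -> bytes:
    meta = {"microservice": microservice_id, "task": task_id,
            "hw_state": "ok", "sidecar_state": "ok", "ts": time.time()}
    return json.dumps(meta, sort_keys=True).encode()


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher keyed by the credential; NOT for real use.
    XOR is its own inverse, so the same call encrypts and decrypts."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


def record(chain: list, ciphertext: bytes) -> None:
    """Append the ciphertext to a hash-linked local chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"prev": prev, "payload": ciphertext.hex()}
    entry["hash"] = hashlib.sha256((prev + entry["payload"]).encode()).hexdigest()
    chain.append(entry)


chain = []
key = obtain_credential("svc-a")          # step 1: provisioning credential
pt = generate_metadata("svc-a", "t-1")    # step 2: source metadata for the task
ct = xor_crypt(key, pt)                   # step 3: encrypt with the credential
record(chain, ct)                         # step 4: record in the local blockchain
```

Decryption in this sketch is simply `xor_crypt(key, ct)`, so only a holder of the provisioning credential can read the recorded metadata.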
CN202211588006.6A 2021-12-21 2022-12-09 Source audit trail for micro-service architecture Pending CN116305136A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/557,604 US11570264B1 (en) 2021-12-21 2021-12-21 Provenance audit trails for microservices architectures
US17/557,604 2021-12-21

Publications (1)

Publication Number Publication Date
CN116305136A true CN116305136A (en) 2023-06-23

Family

ID=84332310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211588006.6A Pending CN116305136A (en) 2021-12-21 2022-12-09 Source audit trail for micro-service architecture

Country Status (3)

Country Link
US (3) US11570264B1 (en)
EP (1) EP4202739A1 (en)
CN (1) CN116305136A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200374127A1 (en) * 2019-05-21 2020-11-26 The University Of Akron Blockchain-powered cloud management system
US20230108209A1 (en) * 2021-10-05 2023-04-06 International Business Machines Corporation Managing workload in a service mesh
US11972007B2 (en) 2021-12-09 2024-04-30 Cisco Technology, Inc. Enforcing location-based data privacy rules across networked workloads
US11960607B2 (en) * 2021-12-09 2024-04-16 Cisco Technology, Inc. Achieving minimum trustworthiness in distributed workloads
US11570264B1 (en) 2021-12-21 2023-01-31 Intel Corporation Provenance audit trails for microservices architectures

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10484341B1 (en) * 2017-04-27 2019-11-19 EMC IP Holding Company LLC Distributed ledger for multi-cloud operational state
US10581621B2 (en) * 2017-05-18 2020-03-03 International Business Machines Corporation Enhanced chaincode analytics provenance in a blockchain
US11570271B2 (en) * 2019-04-10 2023-01-31 Cisco Technology, Inc. Differentiated smart sidecars in a service mesh
US11388054B2 (en) * 2019-04-30 2022-07-12 Intel Corporation Modular I/O configurations for edge computing using disaggregated chiplets
US11082525B2 (en) * 2019-05-17 2021-08-03 Intel Corporation Technologies for managing sensor and telemetry data on an edge networking platform
US20210117249A1 (en) * 2020-10-03 2021-04-22 Intel Corporation Infrastructure processing unit
US20220121461A1 (en) * 2020-10-20 2022-04-21 Sri International Sound and clear provenance tracking for microservice deployments
US11570264B1 (en) 2021-12-21 2023-01-31 Intel Corporation Provenance audit trails for microservices architectures

Also Published As

Publication number Publication date
US11570264B1 (en) 2023-01-31
US11792280B2 (en) 2023-10-17
US20230412699A1 (en) 2023-12-21
US20230199077A1 (en) 2023-06-22
EP4202739A1 (en) 2023-06-28


Legal Events

Date Code Title Description
PB01 Publication