US20210266368A1 - Disaggregated & Distributed Composable Infrastructure - Google Patents


Info

Publication number
US20210266368A1
Authority
US
United States
Prior art keywords
network
network resources
identified
resources
requested
Prior art date
Legal status
Pending
Application number
US17/184,879
Inventor
Kevin M. McBride
James E. Sutherland
Frank Moss
Brent Smith
Charles Stallings
Mitch Mollard
William O'Brien, JR.
Current Assignee
Level 3 Communications LLC
Original Assignee
Level 3 Communications LLC
Priority date
Filing date
Publication date
Application filed by Level 3 Communications, LLC
Priority to US17/184,879
Assigned to LEVEL 3 COMMUNICATIONS, LLC. Assignment of assignors interest (see document for details). Assignors: SUTHERLAND, JAMES E.; MOSS, FRANK; MCBRIDE, KEVIN M.; MOLLARD, MITCH; O'BRIEN, WILLIAM, JR.; SMITH, BRENT; STALLINGS, CHARLES
Assigned to LEVEL 3 COMMUNICATIONS, LLC. Assignment of assignors interest (see document for details). Assignors: MOLLARD, MITCH
Publication of US20210266368A1

Classifications

    • H04L67/16
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/50: Network services
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/14: Session management
    • H04L67/141: Setup of application sessions
    • H04L67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services

Definitions

  • the present disclosure relates, in general, to methods, systems, and apparatuses for implementing disaggregated composable infrastructure, and, more particularly, to methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure.
  • a customer might provide a request for network services from a set list of network services, which might include, among other things, information regarding one or more of specific hardware, specific hardware type, specific location, and/or specific network for providing network services, or the like.
  • the customer might select the particular hardware, hardware type, location, and/or network based on stated or estimated performance metrics for these components or generic versions of these components, but might not convey the customer's specific desired performance parameters.
  • the service provider then allocates network resources based on the selected one or more of specific hardware, specific hardware type, specific location, or specific network for providing network services, as indicated in the request.
  • conventional network resource allocation systems typically utilize either specialized or all-purpose network devices that are expensive or that contain network resources that are not used to full potential (i.e., with wasted potential). Such conventional network resource allocation systems also do not simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, much less configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • FIG. 1 is a schematic diagram illustrating a system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 2 is a schematic diagram illustrating another system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 3 is a schematic diagram illustrating yet another system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIGS. 4A-4C are schematic diagrams illustrating various non-limiting examples of implementing intent-based service configuration, service conformance, and/or service auditing that may be applicable to implementing intent-based disaggregated and distributed composable infrastructure, in accordance to various embodiments.
  • FIGS. 5A-5D are flow diagrams illustrating a method for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 6 is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments.
  • FIG. 7 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
  • Various embodiments provide tools and techniques for implementing disaggregated composable infrastructure, and, more particularly, to methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure.
  • a computing system might receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the computing system might identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system might establish one or more transport links (e.g., optical transport links, network transport links, or wired transport links, and/or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources.
  • establishing the one or more transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more transport links between the disaggregated and distributed identified two or more network resources.
  • the computing system might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system might allocate the identified two or more network resources for providing the requested network services.
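  • As a non-limiting illustration of the overall flow just summarized (receive an intent-based request, identify disaggregated resources, establish transport links, configure the resources, and allocate them), the following Python sketch models the steps at a very high level. All identifiers (ServiceRequest, NetworkResource, identify_resources, orchestrate) are hypothetical and are not part of this disclosure; a real implementation would drive actual transport and device configuration rather than printing placeholders.
```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceRequest:
    """Intent-based request: what the service must achieve, not which hardware to use."""
    characteristics: Dict[str, object]      # e.g., {"excluded_location": "region-x"}
    performance: Dict[str, float]           # e.g., {"max_latency_ms": 5.0}

@dataclass
class NetworkResource:
    resource_id: str
    network: str
    metrics: Dict[str, float] = field(default_factory=dict)

def identify_resources(request: ServiceRequest,
                       inventory: List[NetworkResource]) -> List[NetworkResource]:
    """Keep resources whose measured metrics satisfy every requested maximum."""
    return [r for r in inventory
            if all(r.metrics.get(k, float("inf")) <= v
                   for k, v in request.performance.items())]

def orchestrate(request: ServiceRequest,
                inventory: List[NetworkResource]) -> List[NetworkResource]:
    resources = identify_resources(request, inventory)
    # Establish transport links (e.g., light steered optical transport) between the
    # disaggregated resources, then configure and allocate them as one composed unit.
    for i, a in enumerate(resources):
        for b in resources[i + 1:]:
            print(f"establish transport link {a.resource_id} <-> {b.resource_id}")
    for r in resources:
        print(f"configure {r.resource_id} to simulate zero/near-zero latency and distance")
    print(f"allocate {[r.resource_id for r in resources]} to the requested service")
    return resources

if __name__ == "__main__":
    inventory = [NetworkResource("gpu-denver-01", "net-135a", {"max_latency_ms": 3.0}),
                 NetworkResource("nvme-dallas-07", "net-140b", {"max_latency_ms": 4.5})]
    orchestrate(ServiceRequest({"region": "US"}, {"max_latency_ms": 5.0}), inventory)
```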
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources.
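  • A minimal sketch of how a controller might combine these mechanisms is shown below; the function name, parameters, and thresholds are invented for illustration only and assume that a re-timer addresses latency skew, a re-driver or repeater extends reach, and a flexible buffer absorbs the residual difference.
```python
def plan_compensation(measured_latency_us: float, target_latency_us: float,
                      link_length_km: float, signal_budget_km: float) -> dict:
    """Decide which mechanisms to apply on a link so the composed resources behave
    as if co-located: a re-timer plus flexible buffer for latency skew, and a
    re-driver/repeater when the physical reach exceeds the signal budget."""
    plan = {"re_timer": False, "re_driver": False, "buffer_us": 0.0}
    if measured_latency_us > target_latency_us:
        plan["re_timer"] = True                                       # re-align bit timing
        plan["buffer_us"] = measured_latency_us - target_latency_us   # skew the buffer absorbs
    if link_length_km > signal_budget_km:
        plan["re_driver"] = True                                      # regenerate the signal
    return plan

# Example: a 120 km link with 600 us measured latency against a 10 us target.
print(plan_compensation(600.0, 10.0, 120.0, 80.0))
# -> {'re_timer': True, 're_driver': True, 'buffer_us': 590.0}
```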
  • the computing system might map a plurality of network resources within the two or more first networks.
  • identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices, and/or the like.
  • the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • the various embodiments utilize two or more generic or single-purpose network devices in place of specialized or all-purpose network devices, thereby reducing the cost of network resources and thus the cost of allocating network resources, while avoiding wasted potential or unused portions of the network resources when allocating said resources to customers.
  • the various embodiments also simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, and also configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • Various embodiments described herein, while embodying (in some cases) software products, computer-performed methods, and/or computer systems, represent tangible, concrete improvements to existing technological areas, including, without limitation, network configuration technology, network resource allocation technology, and/or the like.
  • certain embodiments can improve the functioning of a computer or network system itself (e.g., computing devices or systems that form parts of the network, computing devices or systems, network elements or the like for performing the functionalities described below, etc.), for example, by receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establishing, with the computing system, one or more transport links (e.g., optical transport links, network transport links, or wired transport links, and/or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services; and allocating, with the computing system, the identified two or more network resources for providing the requested network services; and/or the like.
  • to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve specific novel functionality (e.g., steps or operations), such as, establishing, with a computing system, one or more transport links between identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on desired characteristics and performance parameters for requested network services; and allocating, with the computing system, the identified two or more network resources for providing the requested network services, and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations.
  • a method might comprise receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establishing, with the computing system, one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocating, with the computing system, the identified two or more network resources for providing the requested network services.
  • the computing system might comprise one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the one or more transport links comprise at least one of one or more optical transport links, one or more network transport links, one or more wired transport links, or one or more wireless transport links, and/or the like.
  • deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links may comprise performing one of: comparing, with the computing system, system clocks each associated with each of the identified two or more network resources, and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing, with the computing system, two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
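  • The system-clock branch of this derivation can be illustrated with a short sketch that expresses the distributable synchronization state as per-resource clock offsets. This is a simplified, hypothetical example: it ignores the propagation delay of gathering remote clock samples (which protocols such as NTP or PTP compensate for) and does not attempt to model the Qbit multi-state comparison.
```python
import time
from statistics import mean
from typing import Dict

def derive_sync_state(clock_readings: Dict[str, float]) -> Dict[str, float]:
    """Express the distributable synchronization state as a per-resource clock
    offset (seconds) relative to the group mean; a positive offset means that
    resource's clock runs ahead of the others."""
    reference = mean(clock_readings.values())
    return {resource: reading - reference for resource, reading in clock_readings.items()}

# Example: clock samples gathered (ideally at the same instant) from two resources.
now = time.time()
print(derive_sync_state({"gpu-denver-01": now + 0.0004, "nvme-dallas-07": now - 0.0004}))
# -> offsets of roughly +400 us and -400 us
```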
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources, based at least in part on the derived distributable synchronization state.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources, based at least in part on the derived distributable synchronization state.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources, based at least in part on the derived distributable synchronization state.
  • establishing the one or more transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more transport links between the disaggregated and distributed identified two or more network resources.
  • the method might further comprise mapping, with the computing system, a plurality of network resources within the two or more first networks.
  • identifying the two or more network resources might comprise identifying, with the computing system, the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • At least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems, one or more machine learning systems, one or more cloud systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • the identified two or more network resources might comprise peripheral component interconnect (“PCI”) -based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices, and/or the like.
  • the identified two or more network resources might comprise two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • the desired characteristics might comprise at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • the desired performance parameters might comprise at least one of a maximum latency, a maximum jitter, a maximum packet loss, a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
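  • A hypothetical sketch of such an intent-based request, carrying only desired characteristics and performance parameters and deliberately omitting any hardware, hardware type, location, or network selection, might look as follows (field names are illustrative, not normative):
```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DesiredCharacteristics:
    proximate_to_customer: bool = False
    required_locations: List[str] = field(default_factory=list)   # route traffic through these
    excluded_locations: List[str] = field(default_factory=list)   # never route through these
    excluded_resource_types: List[str] = field(default_factory=list)
    required_resource_types: List[str] = field(default_factory=list)

@dataclass
class DesiredPerformance:
    max_latency_ms: Optional[float] = None
    max_jitter_ms: Optional[float] = None
    max_packet_loss_pct: Optional[float] = None
    max_hops: Optional[int] = None
    sla_profile: Optional[str] = None

@dataclass
class IntentRequest:
    customer_id: str
    characteristics: DesiredCharacteristics
    performance: DesiredPerformance
    # Deliberately no fields for specific hardware, hardware type, location, or network.

request = IntentRequest(
    customer_id="cust-115",
    characteristics=DesiredCharacteristics(excluded_locations=["region-x"]),
    performance=DesiredPerformance(max_latency_ms=5.0, max_jitter_ms=1.0, max_hops=6),
)
```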
  • allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
  • providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
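  • The following sketch shows one hypothetical way such a burst could be expressed as an HTTP POST of a VNF descriptor to an NFV entity; the endpoint URL, path, and descriptor fields are invented for illustration and do not correspond to any particular NFV orchestration API.
```python
import json
from urllib import request as urllib_request

def burst_vnf(nfv_endpoint: str, vnf_descriptor: dict) -> int:
    """POST a VNF descriptor to an NFV entity so it can instantiate the function
    near the allocated resources; returns the HTTP status code."""
    body = json.dumps(vnf_descriptor).encode("utf-8")
    req = urllib_request.Request(nfv_endpoint, data=body,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib_request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage (endpoint and descriptor fields are illustrative only):
descriptor = {"vnf_type": "virtual-firewall", "target_network": "net-135a",
              "performance": {"max_latency_ms": 5.0}}
# burst_vnf("https://nfv.example.net/api/v1/vnfs", descriptor)
```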
  • the method might further comprise determining, with an audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters. In some cases, determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit.
  • determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified two or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified two or more network resources with the desired performance parameters; determining characteristics of each of the identified two or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified two or more network resources with the desired characteristics.
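  • The comparison performed by the audit engine can be sketched as follows; metric names, limits, and characteristics are illustrative placeholders, and a real audit engine would additionally apply the predetermined thresholds and schedule the checks periodically or on request.
```python
def audit_resource(measured: dict, desired_perf: dict,
                   actual_chars: dict, desired_chars: dict) -> dict:
    """Compare measured performance metrics against desired maxima, and actual
    characteristics against desired characteristics; report every violation."""
    violations = []
    for metric, limit in desired_perf.items():
        value = measured.get(metric)
        if value is None or value > limit:
            violations.append(f"{metric}: measured {value}, limit {limit}")
    for key, wanted in desired_chars.items():
        if actual_chars.get(key) != wanted:
            violations.append(f"{key}: is {actual_chars.get(key)!r}, wanted {wanted!r}")
    return {"conformant": not violations, "violations": violations}

print(audit_resource(
    measured={"max_latency_ms": 6.2, "max_jitter_ms": 0.8},
    desired_perf={"max_latency_ms": 5.0, "max_jitter_ms": 1.0},
    actual_chars={"location": "region-y"},
    desired_chars={"location": "region-y"},
))
# -> non-conformant: measured latency 6.2 ms exceeds the 5.0 ms limit
```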
  • an apparatus might comprise at least one processor and a non-transitory computer readable medium communicatively coupled to the at least one processor.
  • the non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to: receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocate the identified two or more network resources for providing the requested network services.
  • a system might comprise a computing system, which might comprise at least one first processor and a first non-transitory computer readable medium communicatively coupled to the at least one first processor.
  • the first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocate the identified two or more network resources for providing the requested network services.
  • the computing system might comprise one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • FIGS. 1-7 illustrate some of the features of the method, system, and apparatus for implementing disaggregated composable infrastructure, and, more particularly, to methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure, as referred to above.
  • the methods, systems, and apparatuses illustrated by FIGS. 1-7 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments.
  • the description of the illustrated methods, systems, and apparatuses shown in FIGS. 1-7 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
  • FIG. 1 is a schematic diagram illustrating a system 100 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • system 100 might comprise a computing system 105 in service provider network 110 .
  • the computing system 105 might include, but is not limited to, one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the computing system 105 might receive (via one or more of wired connection, wireless connection, optical transport links, and/or electrical connection, or the like (collectively, “network connectivity” or the like)) a request for network services from a customer 115 , via one or more user devices 120 a - 120 n (collectively, “user devices 120 ”), via access network 125 .
  • the one or more user devices 120 might include, without limitation, at least one of a smart phone, a mobile phone, a tablet computer, a laptop computer, a desktop computer, and/or the like.
  • the request for network services might include desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the desired performance parameters might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • the desired characteristics might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • System 100 might further comprise network resources 130 that may be disposed within, and/or communicatively coupled to, networks 135 a - 135 n (collectively, “networks 135 ” or the like) and/or networks 140 a - 140 n (collectively, “networks 140 ” or the like).
  • the computing system 105 might analyze first metadata regarding resource attributes and characteristics of a plurality of unassigned network resources to identify one or more network resources 130 among the plurality of unassigned network resources for providing the requested network services, the first metadata having been striped to entries of the plurality of unassigned network resources in a resource database, which might include, without limitation, resource inventory database 145 , intent metadata database 150 , data lake 170 , and/or the like.
  • the computing system 105 might allocate at least one identified network resource 130 among the identified one or more network resources 130 for providing the requested network services.
  • the computing system 105 might stripe the entry corresponding to the at least one allocated network resource 130 with second metadata indicative of the desired characteristics and performance parameters as comprised in the request for network services.
  • striping the entry with the second metadata might comprise striping the entry in the resource inventory database 145 .
  • striping the entry with the second metadata might comprise striping or adding an entry in the intent metadata inventory 150 , which might be part of resource inventory database 145 or might be physically separate (or logically partitioned) from the resource inventory database 145 , or the like.
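  • A simplified, hypothetical sketch of striping an inventory entry with such intent metadata (whether the entry lives in the resource inventory database 145 or in a separate intent metadata inventory 150) might look like this:
```python
resource_inventory = {
    "gpu-denver-01": {"type": "GPU", "network": "net-135a", "intent_metadata": []},
    "nvme-dallas-07": {"type": "storage", "network": "net-140b", "intent_metadata": []},
}

def stripe_intent_metadata(inventory: dict, resource_id: str, intent: dict) -> None:
    """Append the customer's desired characteristics and performance parameters
    ("second metadata") to the inventory entry of the allocated resource."""
    inventory[resource_id]["intent_metadata"].append(intent)

stripe_intent_metadata(resource_inventory, "gpu-denver-01",
                       {"customer": "cust-115", "max_latency_ms": 5.0,
                        "excluded_locations": ["region-x"]})
```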
  • the first metadata might be analyzed after being received by the computing system in response to one of a pull data distribution instruction, a push data distribution instruction, or a hybrid push-pull data distribution instruction, and/or the like.
  • the computing system 105 might update an active inventory database 155 with such information—in some cases, by adding an entry in the active inventory database 155 with information indicating that the at least one identified network resource 130 has been allocated to provide particular requested network service(s) to customer 115 .
  • the computing system 105 might stripe the added entry in the active inventory database 155 with a copy of the second metadata indicative of the desired characteristics and performance parameters as comprised in the request for network services.
  • the resource inventory database 145 might store an equipment record that lists every piece of inventory that is accessible by the computing system 105 (either already allocated for fulfillment of network services to existing customers or available for allocation for fulfillment of new network services to existing or new customers).
  • the active inventory database 155 might store a circuit record listing the active inventory that is being used for fulfilling network services.
  • the data lake 170 might store a customer record that lists the service record of the customer, and/or the like.
  • system 100 might further comprise quality of service test and validate server or audit engine 160 , which performs measurement and/or collection of network performance metrics for at least one of the one or more network resources 130 and/or the one or more networks 135 and/or 140 , and/or which performs auditing to determine whether each of the identified one or more network resources 130 conforms with the desired characteristics and performance parameters.
  • network performance metrics might include, without limitation, at least one of quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, or network usage trend data, and/or the like.
  • network performance metrics might include, but are not limited to, one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like, which are described in greater detail in the '244 and '884 applications, which have already been incorporated herein by reference in their entirety.
  • the operations associated with metadata striping and allocation (or re-allocation) of network resources are described in greater detail in the '095, '244, and '884 applications, which have already been incorporated herein by reference in their entirety.
  • computing system 105 might allocate one or more network resources 130 from one or more first networks 135 a - 135 n of a first set of networks 135 and/or from one or more second networks 140 a - 140 n of a second set of networks 140 for providing the requested network services, based at least in part on the desired performance parameters and/or based at least in part on a determination that the one or more first networks are capable of providing network resources each having the desired performance parameters.
  • the determination that the one or more first networks are capable of providing network resources each having the desired performance parameters is based on one or more network performance metrics of the one or more first networks at the time that the request for network services from the customer is received.
  • System 100 might further comprise one or more databases, including, but not limited to, a platform resource database 165 a, a service usage database 165 b, a topology and reference database 165 c, a QoS measurement database 165 d, and/or the like.
  • the platform resource database 165 a might collect and store data related or pertaining to platform resource data and metrics, or the like
  • the service usage database 165 b might collect and store data related or pertaining to service usage data or service profile data
  • the topology and reference database 165 c might collect and store data related or pertaining to topology and reference data.
  • the QoS measurement database 165 d might collect and store QoS data, network performance metrics, and/or results of the QoS test and validate process.
  • Data stored in at least one of the platform resource database 165 a, the service usage database 165 b, the topology and reference database 165 c, and/or the QoS measurement database 165 d, and/or the like, may be collected in data lake 170 , and the collective data or selected data from the data lake 170 may be used to perform optimization of network resource allocation (both physical and/or virtual) using the computing system 105 (and, in some cases, using an orchestration optimization engine (e.g., orchestration optimization engine 275 of FIG. 2 of the '244 and '884 applications), or the like).
  • determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine 160 , whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit.
  • determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified one or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified one or more network resources with the desired performance parameters; determining characteristics of each of the identified one or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified one or more network resources with the desired characteristics.
  • the computing system 105 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resource among the identified one or more network resources for providing the requested network services.
  • the computing system 105 might perform one of reconfiguring the at least one identified network resource or reallocating at least one other identified network resource, based on a determination that the measured one or more network performance metrics of each of the identified one or more network resources fail to match the desired performance parameters within third predetermined thresholds or based on a determination that the determined characteristics of each of the identified one or more network resources fail to match the desired characteristics within fourth predetermined thresholds.
  • intent might further include, without limitation, path intent, location intent, performance intent, time intent, and/or the like.
  • Path intent might include a requirement that network traffic must be routed through a first particular geophysical location (e.g., a continent, a country, a region, a state, a province, a city, a town, a mountain range, etc.) and/or a requirement that network traffic must not be routed through a second particular geophysical location, or the like.
  • a service commission engine might either add (and/or mark as required) all paths through the first particular geophysical location and all network resources that indicate that they are located in the first particular geophysical location, or remove (and/or mark as excluded) all paths through the second particular geophysical location and all network resources that indicate that they are located in the second particular geophysical location.
  • the service commission engine might use the required or non-excluded paths and network resources to identify which paths and network resources to allocate to fulfill requested network services.
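  • A minimal sketch of this path-intent filtering, with hypothetical location names and a simple list-of-locations path model, is shown below:
```python
from typing import List, Optional

def filter_paths_by_intent(paths: List[List[str]],
                           required_location: Optional[str] = None,
                           excluded_location: Optional[str] = None) -> List[List[str]]:
    """Keep only paths compatible with the path intent: if a location is required,
    every surviving path must traverse it; if a location is excluded, any path
    traversing it is dropped."""
    kept = []
    for path in paths:                       # each path is an ordered list of locations
        if excluded_location and excluded_location in path:
            continue
        if required_location and required_location not in path:
            continue
        kept.append(path)
    return kept

paths = [["denver", "chicago", "nyc"],
         ["denver", "dallas", "atlanta", "nyc"],
         ["denver", "region-x", "nyc"]]
print(filter_paths_by_intent(paths, excluded_location="region-x"))
# -> only the two paths that avoid region-x remain
```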
  • the active inventory might be marked so that any fix or repair action is also restricted and so that policy audits might be implemented to ensure that no violations of path intent actually occur.
  • Location intent, for instance, might include a requirement that network resources that are used for fulfilling the requested network services are located in specific geographical locations (which are more specific compared to the general geophysical locations described above). In such cases, the inventory is required to include the metadata for the intent so that the service engine can perform the filtering and selection. Monitoring and/or restricting assets being reassigned may be performed using location intent policy markings (or metadata) on the service.
  • Performance intent might include a requirement that the requested services satisfy particular performance parameters or metrics—which might include, without limitation, maximum latency or delay, maximum jitter, maximum packet loss, maximum number of hops, minimum bandwidth, nodal connectivity, minimum amount of compute resources for each allocated network resource, minimum amount of storage resources for each allocated network resource, minimum memory capacity for each allocated network resource, fastest possible path, and/or the like.
  • the service conformance engine might use the performance metrics (as measured by one or more nodes in the network, which in some cases might include the allocated network resource itself, or the like) between points (or network nodes) for filtering the compliant inventory options, and/or might propose higher levels of service to satisfy the customer and/or cost level alignment, or the like.
  • Time intent might include a requirement that the requested services take into account conditions related to time of day (e.g., morning, noon, afternoon, evening, night, etc.), special days (e.g., holidays, snow days, storm days, etc.), weeks of the year (e.g., around holidays, etc.), etc., based at least in part on baseline or normality analyses of average or typical conditions.
  • an SS7 advanced intelligence framework (which might have a local number portability dip to get instructions from an external advanced intelligence function) can be adapted with intent-based orchestration (as described herein) by putting a trigger (e.g., an external data dip, or the like) on the orchestrator between the requesting device or node (where the intent and intent criteria might be sent) and the source of the external function, which might scrape the inventory database to make its instructions and/or solution sets for the fulfillment engine, then stripe metadata, and/or return that to the normal fulfillment engine.
  • the computing system 105 might receive, over a network (e.g., at least one of service provider network 110 , access network 125 , one or more first networks 135 a - 135 n, and/or one or more second networks 140 a - 140 n, or the like), a request for network services from a customer (e.g., customer 115 , or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the computing system 105 might identify two or more network resources (e.g., network resources 130 , or the like) from two or more first networks (e.g., network 135 and/or network 140 , or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system 105 might establish one or more optical transport links (e.g., optical transport 175 , or the like; depicted in FIG. 1 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources.
  • establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 175 ) between the disaggregated and distributed identified two or more network resources 130 .
  • although FIG. 1 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like).
  • the computing system 105 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • the computing system 105 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state.
  • the computing system 105 might allocate the identified two or more network resources for providing the requested network services.
  • the computing system 105 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resource among the identified one or more network resources for providing the requested network services; and/or the like.
  • deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like might comprise the computing system 105 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
  • the timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment.
  • Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings.
  • plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals.
  • Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary).
  • isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations.
  • isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted.
  • comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex time equipment buffers and the network(s) to speed up or slow down the flow of traffic.
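  • Q-bit multi-state comparison cannot meaningfully be reduced to a few lines of classical code; the sketch below is only a classical stand-in for the comparator idea, comparing an averaged remote timing estimate against a local oscillator and translating the frequency error into a flexible-buffer adjustment. All names and numbers are illustrative assumptions.
```python
from typing import List

def buffer_adjustment(timing_samples: List[float], local_oscillator_hz: float,
                      nominal_hz: float, line_rate_bps: float) -> float:
    """Compare an averaged remote timing estimate against the local oscillator and
    translate the frequency error into a flexible-buffer fill adjustment (bits per
    second of traffic to absorb or release) so the buffer can speed up or slow down
    the flow."""
    remote_hz = sum(timing_samples) / len(timing_samples)
    fractional_error = (remote_hz - local_oscillator_hz) / nominal_hz
    return fractional_error * line_rate_bps   # positive: the buffer must absorb extra bits

# Example: remote timing runs ~1 ppm fast relative to the local oscillator on a 10 Gb/s line.
print(buffer_adjustment([10_000_010.0, 10_000_010.0], 10_000_000.0, 10_000_000.0, 10e9))
# -> about 10000.0 bits/s to absorb
```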
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 185 , or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 130 , based at least in part on the derived distributable synchronization state.
  • in the case of quantum timing, such simulation may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 190 , or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 130 , based at least in part on the derived distributable synchronization state.
  • in the case of quantum timing, such simulation may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 1 ) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 130 , based at least in part on the derived distributable synchronization state.
  • in the case of quantum timing, such simulation may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
  • the computing system 105 might map a plurality of network resources within the two or more first networks (e.g., networks 135 and/or 140 ).
  • identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • At least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180 , or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • the one or more AI systems may also be used to assist in assigning resources and/or managing the intent-based curation or composability process.
  • the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”) -based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices, and/or the like.
  • the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • two or more tiny servers or server blades might be curated or composed to function as, and simulate, a single large server, or the like.
  • allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
  • providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • the various embodiments provide disaggregated and distributed composable infrastructure.
  • the various embodiments also add a layer of composability by using different AI systems to treat certain data with priority and/or by using curation or composability (which might include, without limitation, geo composability, resource composability, network composability, and/or the like) based at least in part on path intent, location intent, performance intent, time intent, and/or the like (collectively referred to as “intent-based curation or composability” or the like).
  • the various embodiments utilize the composability or orchestration to enable dynamic allocation or composability of compute and/or network resources.
  • the various embodiments further utilize two or more generic or single-purpose network devices in place of specialized or all-purpose network devices, thereby reducing the cost of network resources and thus the cost of allocating network resources, while avoiding wasted potential or unused portions of the network resources when allocating said resources to customers.
  • the various embodiments also simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, and also configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • FIG. 2 is a schematic diagram illustrating another system 200 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • system 200 might comprise a main hub 205 , first through N th ring hubs 210 a - 210 n (collectively, “ring hubs 210 ” or the like), first through N th remote hubs 215 a - 215 n (collectively, “remote hubs 215 ” or the like), a plurality of universal customer premises equipment (“UCPEs”) 220 or 220 a - 220 n that are located at corresponding customer premises 225 or 225 a - 225 n, a plurality of network resources 230 , computing system 235 , host or main 240 , and optical transport or optical transport links 245 .
  • While FIG. 2 depicts a particular example of the configuration or arrangement of the main hub 205 , the ring hubs 210 , the remote hubs 215 , and the UCPEs 220 in customer premises 225 , the various embodiments are not so limited, and the configuration or arrangement may be any suitable configuration or arrangement of the main hub 205 , the ring hubs 210 , the remote hubs 215 , and the UCPEs 220 in customer premises 225 , and/or the like.
  • the main hub 205 might communicatively couple to the ring hubs 210 a - 210 n in a ring configuration in which the main hub 205 might communicatively couple directly or indirectly to the first ring hub 210 a, which might communicatively couple directly or indirectly to the second ring hub 210 b, which might communicatively couple directly or indirectly to the next ring hub and so on until the N th ring hub 210 n, which might in turn communicatively couple back to the main hub 205 , where the main hub 205 might be located in a geographic location that is different from the geographic location of each of the ring hubs 210 a - 210 n, each of which is in turn located in a geographic location that is different from the geographic location of each of the other ring hubs 210 a - 210 n.
  • Each ring hub 210 might be communicatively coupled (in a hub and spoke configuration, or the like) to a plurality of UCPEs 220 , each of which might be located at a customer premises 225 among a plurality of customer premises 225 .
  • customer premises 225 might include, without limitation, customer residences, multi-dwelling units (“MDUs”), commercial customer premises, industrial customer premises, and/or the like, within one or more blocks of customer premises (e.g., residential neighborhoods, university/college campuses, office blocks, industrial parks, mixed-use zoning areas, and/or the like), in which roadways and/or pathways might be adjacent to each of the customer premises.
  • the main hub 205 might communicatively couple to the remote hubs 215 a - 215 n in a hub and spoke configuration in which the main hub 205 might communicatively couple directly or indirectly to each of the first through N th remote hubs 215 a - 215 n, where the main hub 205 might be located in a geographic location that is different from the geographic location of each of the remote hubs 215 a - 215 n, each of which is in turn located in a geographic location that is different from the geographic location of each of the other remote hubs 215 a - 215 n.
  • Each remote hub 215 might be communicatively coupled (in a hub and spoke configuration, or the like) to a plurality of UCPEs 220 , each of which might be located at a customer premises 225 among a plurality of customer premises 225 .
  • the main hub 205 and/or the network resources 230 disposed on the main hub 205 might communicatively couple to the ring hubs 210 a - 210 n in the ring configuration via optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like), and, in some cases, each ring hub 210 and/or the network resources 230 disposed on each ring hub 210 might communicatively couple (in a hub and spoke configuration, or the like) to the UCPEs 220 located at corresponding customer premises 225 via corresponding optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like).
  • the main hub 205 and/or the network resources 230 disposed on the main hub 205 might communicatively couple to the remote hubs 215 a - 215 n in the hub and spoke configuration via optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like), and, in some cases, each remote hub 215 and/or the network resources 230 disposed on each remote hub 215 might communicatively couple (in a hub and spoke configuration, or the like) to the UCPEs 220 located at corresponding customer premises 225 via corresponding optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like).
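  • For concreteness only, the ring and hub-and-spoke arrangements described above might be modeled as a simple adjacency map, as in the following sketch (the hub and UCPE labels are invented placeholders, not the reference numerals of FIG. 2):

        def build_topology(ring_hubs, remote_hubs, ucpes_per_hub):
            """Build an adjacency map: the main hub sits on a ring with the
            ring hubs, has spokes to the remote hubs, and every hub fans out
            to the UCPEs it serves."""
            links = {"main": set()}
            # Ring: main -> ring hub 1 -> ... -> ring hub N -> back to main
            if ring_hubs:
                ring = ["main"] + list(ring_hubs)
                for a, b in zip(ring, ring[1:] + ["main"]):
                    links.setdefault(a, set()).add(b)
                    links.setdefault(b, set()).add(a)
            # Hub and spoke: main couples directly to each remote hub
            for hub in remote_hubs:
                links["main"].add(hub)
                links.setdefault(hub, set()).add("main")
            # Each hub couples to its UCPEs (hub and spoke at the edge)
            for hub, ucpes in ucpes_per_hub.items():
                for ucpe in ucpes:
                    links.setdefault(hub, set()).add(ucpe)
                    links.setdefault(ucpe, set()).add(hub)
            return links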
  • the computing system 235 might receive, over a network (e.g., at least one of service provider network 110 , access network 125 , one or more first networks 135 a - 135 n, and/or one or more second networks 140 a - 140 n of FIG. 1 , or the like), a request for network services from a customer (e.g., customer 115 of FIG. 1 , or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the computing system 235 might identify two or more network resources (e.g., network resources 230 , or the like) from two or more first networks (e.g., network 135 and/or network 140 of FIG. 1 , or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system 235 might establish one or more optical transport links (e.g., optical transport 245 , or the like; depicted in FIG. 2 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources.
  • establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 245 ) between the disaggregated and distributed identified two or more network resources 230 .
  • While FIG. 2 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like).
  • the computing system 235 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • the desired performance parameters might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • the desired characteristics might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
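  • To illustrate how such an intent-only request might be expressed in practice, the following Python sketch shows one possible shape; every field name and value is an invented example, and notably no hardware, hardware type, location, or network is named:

        request = {
            "customer_id": "cust-0042",                      # hypothetical customer
            "performance_parameters": {
                "max_latency_ms": 5,
                "max_jitter_ms": 1,
                "max_packet_loss_pct": 0.01,
                "max_hops": 4,
            },
            "desired_characteristics": {
                "proximate_to_customer": True,
                "avoid_regions": ["region-x"],               # routing exclusion
                "exclude_resource_types": ["magnetic_storage"],
                "goals": ["lowest_delay", "proximity"],      # single or multi-goal intent
            },
        }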
  • the computing system 235 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state.
  • the computing system 235 might allocate the identified two or more network resources for providing the requested network services.
  • deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like might comprise the computing system 235 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
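  • A minimal sketch of the clock-comparison variant, assuming each identified resource can report its system clock on request (the read_clock() hook and the returned fields are hypothetical):

        import time

        def derive_sync_state(resources):
            """Compare per-resource system clocks against a local reference and
            derive a distributable synchronization state from the differences."""
            reference = time.time()
            offsets = {}
            for resource in resources:
                # read_clock() stands in for however a resource exposes its clock
                offsets[resource.resource_id] = resource.read_clock() - reference
            max_skew = max(offsets.values()) - min(offsets.values()) if offsets else 0.0
            return {"reference": reference, "offsets": offsets, "max_skew": max_skew}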
  • timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment.
  • Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings.
  • plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals.
  • Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary).
  • isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations.
  • isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted.
  • comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex time equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 185 , or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 230 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 190 , or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 230 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 2 ) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 230 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
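  • One way to picture the flexible-capacity buffer is as a per-path delay equalizer that pads faster paths up to the slowest one, so that the composed resources appear equidistant; this is only an illustrative sketch driven by the derived synchronization state, not the claimed re-timer/re-driver design:

        def equalize_delays(path_delays_ns):
            """Given measured one-way delays per path (in nanoseconds), compute
            how much buffering to add to each path so all paths appear equal."""
            if not path_delays_ns:
                return {}
            slowest = max(path_delays_ns.values())
            # Pad every faster path up to the slowest, so the composed
            # resources behave as if they shared a single virtual rack.
            return {path: slowest - delay for path, delay in path_delays_ns.items()}

        # Example: equalize_delays({"A": 150, "B": 120}) -> {"A": 0, "B": 30}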
  • the computing system 235 might map a plurality of network resources within the two or more first networks.
  • identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180 of FIG. 1 , or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like.
  • allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
  • providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • FIG. 3 is a schematic diagram illustrating yet another system 300 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • system 300 might comprise a main hub 305 , one or more remote hubs 310 a - 310 n (collectively, “remote hubs 310 ” or the like), a plurality of universal customer premises equipment (“UCPEs”) 315 or 315 a - 315 n that are located at corresponding customer premises 320 or 320 a - 320 n, a computing system 325 , a plurality of network resources 330 , and optical transport or optical transport links 335 (depicted in FIG. 3 as long-dash lines, or the like).
  • While FIG. 3 depicts a particular example of the configuration or arrangement of the main hub 305 , the remote hubs 310 , and the UCPEs 315 in customer premises 320 , the various embodiments are not so limited, and the configuration or arrangement may be as shown and described in FIG. 2 , or may be any suitable configuration or arrangement of the main hub 305 , the remote hubs 310 , and the UCPEs 315 in customer premises 320 , and/or the like.
  • the main hub 305 might communicatively couple directly or indirectly to the remote hubs 310 a - 310 n (either in the ring configuration and/or the spoke and hub configuration as shown in FIG. 2 ), each of which might communicatively couple directly or indirectly to a plurality of UCPEs 315 , each of which might be located at a customer premises 320 among a plurality of customer premises 320 .
  • customer premises 320 might include, without limitation, customer residences, multi-dwelling units (“MDUs”), commercial customer premises, industrial customer premises, and/or the like, within one or more blocks of customer premises (e.g., residential neighborhoods, university/college campuses, office blocks, industrial parks, mixed-use zoning areas, and/or the like), in which roadways and/or pathways might be adjacent to each of the customer premises.
  • the main hub 305 might communicatively couple to the remote hubs 310 a - 310 n , in which the main hub 305 might communicatively couple directly or indirectly (in either a ring configuration (as by the ring hubs 210 a - 210 n , or the like) or a hub and spoke configuration, as shown in FIG. 2 ) to each of the remote hubs 310 a - 310 n .
  • each remote hub 310 might be communicatively coupled (in a ring configuration or in a hub and spoke configuration, or the like) to a plurality of UCPEs 315 , each of which might be located at a customer premises 320 among a plurality of customer premises 320 .
  • At least one of the network resources 330 disposed in the main hub 305 , the network resources 330 disposed in the remote hub 310 a, the network resources 330 disposed in the remote hub 310 b, and/or the like might comprise a plurality of network resource units 330 a mounted in a plurality of equipment racks or ports 340 .
  • the UCPE 315 a might comprise network resources 330 including, but not limited to, two or more network resource units 330 a.
  • the network resources 330 might include, without limitation, one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like.
  • the computing system 325 might receive, over a network (e.g., at least one of service provider network 110 , access network 125 , one or more first networks 135 a - 135 n, and/or one or more second networks 140 a - 140 n of FIG. 1 , or the like), a request for network services from a customer (e.g., customer 115 of FIG. 1 , or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the computing system 325 might identify two or more network resources (e.g., network resources 330 , or the like) from two or more first networks (e.g., network 135 and/or network 140 of FIG. 1 , or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system 325 might establish one or more optical transport links (e.g., optical transport 335 , or the like; depicted in FIG. 3 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources.
  • establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 335 ) between the disaggregated and distributed identified two or more network resources 330 .
  • While FIG. 3 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like).
  • the computing system 325 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • the desired performance parameters might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • the desired characteristics might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • the computing system 325 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state.
  • the computing system 325 might allocate the identified two or more network resources for providing the requested network services.
  • deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like might comprise the computing system 325 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
  • timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment.
  • Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings.
  • plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals.
  • Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary).
  • isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations.
  • isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted.
  • comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex time equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 345 , or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 330 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 350 , or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 330 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 3 ) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 330 , based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
  • the computing system 325 might map a plurality of network resources within the two or more first networks.
  • identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180 of FIG. 1 , or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like.
  • allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
  • providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • identifying the two or more network resources capable of providing the requested network services might comprise identifying a first generic or single-purpose network device 330 b in a first slot or first rack 340 among the network resources 330 disposed at the main hub 305 , identifying a second generic or single-purpose network device 330 c in a second slot or second rack 340 among the network resources 330 disposed at the main hub 305 , identifying a third generic or single-purpose network device 330 d in a second slot or second rack 340 among the network resources 330 disposed at the first remote hub 310 a , identifying a fourth generic or single-purpose network device 330 e in an N th slot or N th rack 340 among the network resources 330 disposed at the second remote hub 310 b , and identifying a fifth generic or single-purpose network device 330 f among the network resources 330 disposed at the UCPE 315 a of customer premises 320 a , or the like.
  • the computing system 325 might utilize the re-timer 345 functionality to simulate zero latency or near-zero latency between the identified two or more network resources and/or utilize re-driver (or repeater) 350 functionality to simulate zero distance or near-zero distance between the identified two or more network resources, and/or the like, over the optical transport links 335 , resulting effectively in the first through fifth generic or single-purpose network devices 330 b - 330 f being configured as if they were contained within a virtual slot or virtual rack 340 ′ (such operation being shown at the distal end of arrow 355 in FIG. 3 ).
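  • As an illustrative sketch only, the composition of the identified devices into a single virtual slot or rack might be summarized as follows; the class and field names are invented and do not reflect any particular implementation:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Device:
            device_id: str
            hub: str      # e.g., "main", "remote-1", "ucpe-a" (hypothetical labels)
            slot: int

        @dataclass
        class VirtualRack:
            devices: List[Device] = field(default_factory=list)

            def compose(self, identified: List[Device]) -> None:
                """Treat disaggregated, distributed devices as one combined,
                integrated resource (the virtual rack)."""
                self.devices.extend(identified)

            def members(self) -> List[str]:
                return [d.device_id for d in self.devices]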
  • FIGS. 4A-4C are schematic diagrams illustrating various non-limiting examples 400 , 400 ′, and 400 ′′ of implementing intent-based service configuration, service conformance, and/or service auditing that may be applicable to implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • a plurality of nodes 405 might include, without limitation, node A 405 a, node B 405 b, node C 405 c, node D 405 d, node E 405 e, node F 405 f, node G 405 g, node H 405 h, and/or the like.
  • the system might further comprise ME node 410 .
  • the system might further comprise paths A through K, with path A between node A 405 a and node B 405 b, path B between node B 405 b and node C 405 c, path C between node C 405 c and node D 405 d, path D between node D 405 d and node E 405 e, path E between node E 405 e and node F 405 f, path F between node F 405 f and node G 405 g, path G between node G 405 g and node H 405 h, path H between node H 405 h and node A 405 a, path J between node H 405 h and node C 405 c, path K between node A 405 a and node E 405 e, and/or the like.
  • the system might further comprise a path between the ME node 410 and one of the nodes 405 (e.g., node E 405 e, or the like).
  • each node 405 might be a network resource or might include a network resource(s), or the like.
  • the intent framework might require a named goal that includes standardized criteria that may be a relationship between two items.
  • the named goal (or intent) might include, without limitation, lowest delay (where the criteria might be delay), least number of hops (where the criteria might be hops), or proximity to me (in this case, the ME node 410 ; where the criteria might be geographical proximity, geophysical proximity, distance, etc.).
  • two or more goals (or intents) might be combined.
  • the criteria might be added or striped via metadata into the inventory database (e.g., databases 145 , 150 , 155 , and/or 170 of FIG. 1 , or the like) and might be used for node and/or resource selection or deselection.
  • prioritization striping might be applied for consideration by the fulfillment engine, possibly along with selection or deselection criteria.
  • the inventory database might be augmented with tables that correlate with the “intent” criteria (such as shown in the delay table in FIG. 4A ).
  • the table might include the intent (in this case, delay, represented by the letter “D”), the path (e.g., paths A through K, or the like), and the delay (in this case, delay in milliseconds, or the like).
  • the computing system might utilize re-timer and/or re-driver or repeater functionalities that zero out latency between two or more nodes 405 a - 405 h and/or that simulate zero or near-zero distance between two or more nodes 405 a - 405 h despite the actual physical or geographic distances, respectively (as depicted in the table by the “re-timed delay” being set to, or measured or estimated at, 150 ns, 120 ns, 80 ns, 40 ns, 70 ns, 100 ns, 130 ns, 80 ns, 320 ns, and 550 ns along paths A-K, respectively, between two or more nodes 405 a - 405 h ).
  • Although the re-timed delay in FIG. 4A is shown in nanoseconds, the various embodiments are not so limited, and the re-timed delay may be in microseconds or milliseconds, or, in some cases, may be tunable as desired (e.g., with a tunable re-timed delay of between about 500 nanoseconds and about 1 microsecond, or the like).
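  • For illustration, the delay table and a “lowest delay” intent might be represented as in the following sketch, where the per-path numbers simply echo the re-timed delays quoted above and the helper returns the path that best satisfies the intent:

        # Re-timed delay per path, in nanoseconds (values as quoted above).
        RETIMED_DELAY_NS = {
            "A": 150, "B": 120, "C": 80, "D": 40, "E": 70,
            "F": 100, "G": 130, "H": 80, "J": 320, "K": 550,
        }

        def lowest_delay_path(delay_table):
            """Fulfill a 'lowest delay' intent by picking the path with minimum delay."""
            return min(delay_table, key=delay_table.get)

        # Example: lowest_delay_path(RETIMED_DELAY_NS) -> "D"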
  • intent-based service configuration might include, without limitation, exclusion intent, inclusion intent, and goal-oriented intent, or the like.
  • the exclusion intent (as indicated at block 420 ) might refer to intent or requirement not to fulfill network service using the indicated types of resources (in this case, resources 435 within a set of resources 430 ), or the like.
  • the inclusion intent (as indicated at block 425 ) might refer to intent or requirement to fulfill service using the indicated types of resources (in this case, resources 440 within the set of resources 430 ), or the like.
  • the exclusion and inclusion intents might modify the pool of resources that the fulfillment process might pick from by removing (i.e., excluding) or limiting (i.e., including) the resources that can be assigned to fulfill the service. Once this process is completed, the normal fulfillment process continues.
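  • A sketch of how exclusion and inclusion intents might prune the candidate pool before the normal fulfillment process continues (the resource “type” labels are invented):

        def apply_intents(pool, excluded_types=(), included_types=None):
            """Exclusion removes the indicated resource types; inclusion limits
            the pool to the indicated types. The surviving pool feeds the
            normal fulfillment process."""
            survivors = [r for r in pool if r["type"] not in excluded_types]
            if included_types is not None:
                survivors = [r for r in survivors if r["type"] in included_types]
            return survivors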
  • the goal-oriented intent might include a single goal (as indicated at block 445 ) or a multi-goal (as indicated at block 450 ).
  • the single goal might, for instance, provide a “priority” to the resources that are assigned within that service class.
  • the single goal might include a priority to require low delay, for instance.
  • the multi-goal might, for example, provide matrix priorities to the resource pool assignment based on a fast matrix recursion process, or the like.
  • the user might apply one or more goals to the engine, which then performs a single or matrix recursion to identify the best resources to meet the intent, and either passes a candidate list to the fulfillment engine or stripes the inventory for the specific choice being made. Subsequently, fulfillment might continue.
  • the set of resources 430 ′ might include resources 1 through 7 455 .
  • a single goal might provide, for instance, priority to resource 1 that is assigned within that service class (as depicted by the arrow between block 445 and resource 1 in FIG. 4C ), or the like.
  • a multi-goal might provide matrix priorities to the resource pool (including, without limitation, resources 2-4, or the like) that are assigned based on a fast matrix recursion process (as depicted by the arrows between block 450 and resources 2-4 in FIG. 4C ), or the like.
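  • As an illustrative stand-in for the single-goal prioritization and the fast matrix recursion mentioned above, a weighted scoring pass over the resource pool might look like the following; the metric names and weights are hypothetical:

        def rank_by_goals(pool, goal_weights):
            """Score each resource against one or more goals and rank the pool;
            a single goal is simply the case of one nonzero weight."""
            def score(resource):
                # Lower metric values are better, so negate before summing.
                return sum(-weight * resource["metrics"].get(goal, 0.0)
                           for goal, weight in goal_weights.items())
            return sorted(pool, key=score, reverse=True)

        # Example: rank_by_goals(pool, {"delay_ms": 0.7, "distance_km": 0.3})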
  • FIGS. 5A-5D are flow diagrams illustrating a method 500 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • While the systems, examples, or embodiments 100 , 200 , 300 , 400 , 400 ′, and 400 ′′ of FIGS. 1, 2, 3, 4A, 4B, and 4C , respectively (or components thereof), can operate according to the method 500 illustrated by FIG. 5 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100 , 200 , 300 , 400 , 400 ′, and 400 ′′ of FIGS. 1, 2, 3, 4A, 4B, and 4C can each also operate according to other modes of operation and/or perform other suitable procedures.
  • Method 500, at block 505 , might comprise receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • Method 500 might comprise mapping, with the computing system, a plurality of network resources within the two or more first networks.
  • Method 500 might further comprise, at block 515 , identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services (and, in some cases, based at least in part on the mapping of the plurality of network resources).
  • Method 500 might further comprise establishing, with the computing system, one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources (block 520 ).
  • the one or more transport links might comprise at least one of one or more optical transport links, one or more network transport links, one or more wired transport links, or one or more wireless transport links, and/or the like (collectively, “network connectivity” or the like).
  • method 500 might comprise deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like.
  • Method 500, at block 530 , might comprise configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state.
  • Method 500 might comprise, at block 535 , allocating, with the computing system, the identified two or more network resources for providing the requested network services.
  • Method 500 might comprise determining, with an audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters. Based on a determination that at least one identified network resource among the identified two or more network resources fails to conform with the desired performance parameters within first predetermined thresholds or based on a determination that the determined characteristics of the at least one identified network resource fail to conform with the desired characteristics within second predetermined thresholds, method 500 might further comprise one of: reconfiguring, with the computing system, the at least one identified network resource to provide the desired characteristics and performance parameters (optional block 545 ); or reallocating, with the computing system, at least one other identified network resource among the identified two or more network resources for providing the requested network services (optional block 550 ).
  • deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like may comprise comparing, with the computing system, system clocks each associated with each of the identified two or more network resources (block 525 a ); and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the system clocks (block 525 b ).
  • deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like may comprise comparing, with the computing system, two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources (block 525 c ); and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system (block 525 d ).
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources (optional block 555 ), based at least in part on the derived distributable synchronization state.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources (optional block 560 ), based at least in part on the derived distributable synchronization state.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources (optional block 565 ), based at least in part on the derived distributable synchronization state.
  • determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit (optional block 570 ).
  • determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified two or more network resources (optional block 575 ); comparing, with the audit engine, the measured one or more network performance metrics of each of the identified two or more network resources with the desired performance parameters (optional block 580 ); determining characteristics of each of the identified two or more network resources (optional block 585 ); and comparing, with the audit engine, the determined characteristics of each of the identified two or more network resources with the desired characteristics (optional block 590 ).
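  • A minimal sketch of the audit-and-remediation flow described above, assuming the measurement, reconfiguration, and reallocation hooks are supplied by the audit engine and computing system (all three callables are hypothetical placeholders):

        def audit_and_remediate(resources, desired, measure, reconfigure, reallocate):
            """Measure each allocated resource, compare it against the desired
            characteristics/parameters, and reconfigure or reallocate on failure."""
            for resource in resources:
                metrics = measure(resource)                 # e.g., latency, jitter
                conforms = all(metrics[key] <= desired[key] for key in desired)
                if conforms:
                    continue
                # First try to bring the resource back into conformance ...
                if not reconfigure(resource, desired):
                    # ... otherwise allocate a different resource for the service.
                    reallocate(resource, desired)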
  • FIG. 6 is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments.
  • FIG. 6 provides a schematic illustration of one embodiment of a computer system 600 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., computing systems 105 , 235 , and 325 , user devices 120 a - 120 n, network resources 130 , quality of service (“QoS”) test and validate server and/or audit engine 160 , main hub 205 and 305 , ring hubs 210 a - 210 n, remote hubs 215 a - 215 n, 310 a, and 310 b, universal customer premises equipment (“UCPEs”) 220 and 315 a, host/main 240 , etc.), as described above.
  • FIG. 6 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate.
  • FIG. 6 therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • The computer or hardware system 600, which might represent an embodiment of the computer or hardware system (i.e., computing systems 105 , 235 , and 325 , user devices 120 a - 120 n , network resources 130 , QoS test and validate server and/or audit engine 160 , main hub 205 and 305 , ring hubs 210 a - 210 n , remote hubs 215 a - 215 n , 310 a, and 310 b, UCPEs 220 and 315 a, host/main 240 , etc.) described above with respect to FIGS. 1-5 , is shown comprising hardware elements that can be electrically coupled via a bus 605 (or may otherwise be in communication, as appropriate).
  • the hardware elements may include one or more processors 610 , including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 615 , which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 620 , which can include, without limitation, a display device, a printer, and/or the like.
  • the computer or hardware system 600 may further include (and/or be in communication with) one or more storage devices 625 , which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like.
  • Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
  • the computer or hardware system 600 might also include a communications subsystem 630 , which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 630 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein.
  • the computer or hardware system 600 will further comprise a working memory 635 , which can include a RAM or ROM device, as described above.
  • the computer or hardware system 600 also may comprise software elements, shown as being currently located within the working memory 635 , including an operating system 640 , device drivers, executable libraries, and/or other code, such as one or more application programs 645 , which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
  • one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
  • a set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 625 described above.
  • the storage medium might be incorporated within a computer system, such as the system 600 .
  • the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer or hardware system 600 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 600 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • some embodiments may employ a computer or hardware system (such as the computer or hardware system 600 ) to perform methods in accordance with various embodiments of the invention.
  • some or all of the procedures of such methods are performed by the computer or hardware system 600 in response to processor 610 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 640 and/or other code, such as an application program 645 ) contained in the working memory 635 .
  • Such instructions may be read into the working memory 635 from another computer readable medium, such as one or more of the storage device(s) 625 .
  • execution of the sequences of instructions contained in the working memory 635 might cause the processor(s) 610 to perform one or more procedures of the methods described herein.
  • The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion.
  • various computer readable media might be involved in providing instructions/code to processor(s) 610 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer readable medium is a non-transitory, physical, and/or tangible storage medium.
  • a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like.
  • Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 625 .
  • Volatile media includes, without limitation, dynamic memory, such as the working memory 635 .
  • a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 605 , as well as the various components of the communication subsystem 630 (and/or the media by which the communications subsystem 630 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 610 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 600 .
  • These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 630 (and/or components thereof) generally will receive the signals, and the bus 605 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 635 , from which the processor(s) 610 retrieves and executes the instructions.
  • the instructions received by the working memory 635 may optionally be stored on a storage device 625 either before or after execution by the processor(s) 610 .
  • FIG. 7 illustrates a schematic diagram of a system 700 that can be used in accordance with one set of embodiments.
  • the system 700 can include one or more user computers, user devices, or customer devices 705 .
  • a user computer, user device, or customer device 705 can be a general purpose personal computer (including, merely by way of example, desktop computers, tablet computers, laptop computers, handheld computers, and the like, running any appropriate operating system, several of which are available from vendors such as Apple, Microsoft Corp., and the like), cloud computing devices, a server(s), and/or a workstation computer(s) running any of a variety of commercially-available UNIX™ or UNIX-like operating systems.
  • a user computer, user device, or customer device 705 can also have any of a variety of applications, including one or more applications configured to perform methods provided by various embodiments (as described above, for example), as well as one or more office applications, database client and/or server applications, and/or web browser applications.
  • a user computer, user device, or customer device 705 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network (e.g., the network(s) 710 described below) and/or of displaying and navigating web pages or other types of electronic documents.
  • Although the exemplary system 700 is shown with two user computers, user devices, or customer devices 705 , any number of user computers, user devices, or customer devices can be supported.
  • Certain embodiments operate in a networked environment, which can include a network(s) 710 .
  • the network(s) 710 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available (and/or free or proprietary) protocols, including, without limitation, TCP/IP, SNA™, IPX™, AppleTalk™, and the like.
  • the network(s) 710 (similar to network(s) 110 , 125 , 135 a - 135 n, and 140 a - 140 n of FIG. 1 , or the like) can each include a local area network ("LAN"); a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); a public switched telephone network ("PSTN"); a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • the network might include an access network of the service provider (e.g., an Internet service provider (“ISP”)).
  • the network might include a core network of the service provider, and/or the Internet.
  • Embodiments can also include one or more server computers 715 .
  • Each of the server computers 715 may be configured with an operating system, including, without limitation, any of those discussed above, as well as any commercially (or freely) available server operating systems.
  • Each of the servers 715 may also be running one or more applications, which can be configured to provide services to one or more clients 705 and/or other servers 715 .
  • one of the servers 715 might be a data server, a web server, a cloud computing device(s), or the like, as described above.
  • the data server might include (or be in communication with) a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 705 .
  • the web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like.
  • the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 705 to perform methods of the invention.
  • the server computers 715 might include one or more application servers, which can be configured with one or more applications accessible by a client running on one or more of the client computers 705 and/or other servers 715 .
  • the server(s) 715 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 705 and/or other servers 715 , including, without limitation, web applications (which might, in some cases, be configured to perform methods provided by various embodiments).
  • a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™, or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming and/or scripting languages.
  • the application server(s) can also include database servers, including, without limitation, those commercially available from Oracle™, Microsoft™, Sybase™, IBM™, and the like, which can process requests from clients (including, depending on the configuration, dedicated database clients, API clients, web browsers, etc.) running on a user computer, user device, or customer device 705 and/or another server 715 .
  • an application server can perform one or more of the processes for implementing disaggregated composable infrastructure, and, more particularly, for implementing intent-based disaggregated and distributed composable infrastructure, as described in detail above.
  • Data provided by an application server may be formatted as one or more web pages (comprising HTML, JavaScript, etc., for example) and/or may be forwarded to a user computer 705 via a web server (as described above, for example).
  • a web server might receive web page requests and/or input data from a user computer 705 and/or forward the web page requests and/or input data to an application server.
  • a web server may be integrated with an application server.
  • one or more servers 715 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement various disclosed methods, incorporated by an application running on a user computer 705 and/or another server 715 .
  • a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer, user device, or customer device 705 and/or server 715 .
  • the system can include one or more databases 720 a - 720 n (collectively, “databases 720 ”).
  • The location of each of the databases 720 is discretionary: merely by way of example, a database 720 a might reside on a storage medium local to (and/or resident in) a server 715 a (and/or a user computer, user device, or customer device 705 ).
  • a database 720 n can be remote from any or all of the computers 705 , 715 , so long as it can be in communication (e.g., via the network 710 ) with one or more of these.
  • a database 720 can reside in a storage-area network (“SAN”) familiar to those skilled in the art. (Likewise, any necessary files for performing the functions attributed to the computers 705 , 715 can be stored locally on the respective computer and/or remotely, as appropriate.)
  • the database 720 can be a relational database, such as an Oracle database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. The database might be controlled and/or maintained by a database server, as described above, for example.
  • system 700 might further comprise computing system 725 (similar to computing systems 105 of FIG. 1 , or the like), quality of service ("QoS") test and validate server or audit engine 730 (similar to QoS test and validate server or audit engine 160 of FIG. 1 , or the like), one or more network resources 735 (similar to network resources 130 of FIG. 1 , or the like), resource inventory database 740 (similar to resource inventory databases 145 , 215 , and 305 of FIGS. 1-3 , or the like), intent metadata database 745 (similar to intent metadata databases 150 and 220 of FIGS. 1 and 2 , or the like), and active inventory database 750 (similar to active inventory databases 155 , 235 , and 320 of FIGS. 1-3 , or the like).
  • computing system 725 might receive a request for network services from a customer (e.g., from user device 705 a or 705 b (which might correspond to user devices 120 a - 120 n of FIG. 1 , or the like)).
  • the request for network services might comprise desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, or specific network for providing the requested network services.
  • the computing system 725 might analyze first metadata regarding resource attributes and characteristics of a plurality of unassigned network resources to identify one or more network resources among the plurality of unassigned network resources for providing the requested network services, the first metadata having been striped to entries of the plurality of unassigned network resources in a resource database (e.g., resource inventory database 740 , or the like). Based on the analysis, the computing system 725 might allocate at least one identified network resource among the identified one or more network resources for providing the requested network services.
  • the computing system 725 might update a service database by adding or updating an entry in the service database (e.g., resource inventory database 740 or intent metadata database 745 , or the like) with information indicating that the at least one identified network resource has been allocated for providing the requested network services, and might stripe the entry with second metadata (in some cases, in resource inventory database 740 , intent metadata database 745 , or active inventory database 750 , or the like) indicative of the desired characteristics and performance parameters as comprised in the request for network services.
  • the desired performance parameters might include, without limitation, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, and/or the like.
  • the desired characteristics might include, but are not limited to, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
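For illustration only, the following minimal Python sketch shows one hypothetical way such an intent-based request (desired characteristics plus desired performance parameters, with no specific hardware, hardware type, location, or network identified) might be represented; the class and field names are assumptions of this sketch, not part of the disclosed system.

    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class PerformanceParameters:
        # Desired performance parameters (any may be omitted by the customer).
        max_latency_ms: Optional[float] = None
        max_jitter_ms: Optional[float] = None
        max_packet_loss_pct: Optional[float] = None
        max_hops: Optional[int] = None


    @dataclass
    class DesiredCharacteristics:
        # Intent-level characteristics; no hardware, hardware type, location,
        # or network is named by the customer.
        proximate_to_customer: bool = False
        required_geo_locations: List[str] = field(default_factory=list)
        excluded_geo_locations: List[str] = field(default_factory=list)
        required_resource_types: List[str] = field(default_factory=list)
        excluded_resource_types: List[str] = field(default_factory=list)
        goals: List[str] = field(default_factory=list)  # single- or multi-goal intent


    @dataclass
    class NetworkServiceRequest:
        customer_id: str
        characteristics: DesiredCharacteristics
        parameters: PerformanceParameters


    # Example intent: low-latency service kept within one region.
    request = NetworkServiceRequest(
        customer_id="cust-001",
        characteristics=DesiredCharacteristics(
            proximate_to_customer=True,
            required_geo_locations=["us-west"],
            goals=["minimize-latency"],
        ),
        parameters=PerformanceParameters(max_latency_ms=5.0, max_jitter_ms=1.0),
    )
    print(request)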
  • the audit engine 730 might determine whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters. In some instances, determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit.
  • determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified one or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified one or more network resources with the desired performance parameters; determining characteristics of each of the identified one or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified one or more network resources with the desired characteristics.
  • the computing system 725 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resources among the identified one or more network resources for providing the requested network services.
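As a hedged illustration of the audit flow described above, the short Python sketch below checks each allocated resource's measured metrics against the desired performance parameters and flags non-conforming resources for reconfiguration or reallocation; the function and metric names are hypothetical.

    def audit_resources(allocated_resources, desired_limits, measure_metrics):
        """Return the resources that do not conform with the desired limits.

        allocated_resources: iterable of resource identifiers.
        desired_limits: mapping such as {"latency_ms": 5.0, "jitter_ms": 1.0}.
        measure_metrics: callable returning measured values for one resource.
        """
        non_conforming = []
        for resource in allocated_resources:
            measured = measure_metrics(resource)  # e.g. {"latency_ms": 3.2, ...}
            # A resource conforms only if every measured metric is within limit.
            if any(measured.get(metric, float("inf")) > limit
                   for metric, limit in desired_limits.items()):
                non_conforming.append(resource)
        return non_conforming


    # Toy usage with stubbed measurements standing in for live telemetry.
    stub_measurements = {
        "resource-1": {"latency_ms": 3.0, "jitter_ms": 0.4},
        "resource-2": {"latency_ms": 9.5, "jitter_ms": 0.8},
    }
    to_fix = audit_resources(["resource-1", "resource-2"],
                             {"latency_ms": 5.0, "jitter_ms": 1.0},
                             stub_measurements.get)
    print(to_fix)  # ['resource-2'] would be reconfigured or reallocated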
  • the computing system 725 might receive, over a network (e.g., at least one of service provider network(s) 710 , or the like), a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • the computing system 725 might identify two or more network resources (e.g., network resources 735 a - 735 n, or the like) from two or more first networks (e.g., network(s) 710 , or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services.
  • the computing system 725 might establish one or more optical transport links (e.g., optical transport 755 , or the like; depicted in FIG. 7 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources.
  • establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 755 ) between the disaggregated and distributed identified two or more network resources 735 a - 735 n.
  • the computing system 725 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • the computing system 725 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state.
  • the computing system 725 might allocate the identified two or more network resources for providing the requested network services.
  • the computing system 725 might map a plurality of network resources within the two or more first networks (e.g., network(s) 710 , or the like).
  • identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • At least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 760 , or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
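The following Python sketch illustrates, under assumed inventory and field names, how a mapping of disaggregated resources across two or more first networks might be filtered against the desired characteristics and performance parameters before transport links are established; it is a simplification for illustration, not the disclosed AI/SDN-based identification.

    # Hypothetical mapped inventory of disaggregated resources across networks.
    inventory = [
        {"id": "gpu-a", "network": "net-1", "type": "GPU", "latency_ms": 2.1},
        {"id": "nic-b", "network": "net-2", "type": "NIC", "latency_ms": 4.0},
        {"id": "nvme-c", "network": "net-3", "type": "NVMe", "latency_ms": 11.0},
    ]


    def identify_resources(mapped_inventory, required_types, max_latency_ms):
        # Keep only resources of the required types that meet the latency bound;
        # the survivors may come from two or more different first networks.
        return [r for r in mapped_inventory
                if r["type"] in required_types and r["latency_ms"] <= max_latency_ms]


    identified = identify_resources(inventory, {"GPU", "NIC"}, max_latency_ms=5.0)

    # Transport links (e.g., optical) would then be established pairwise between
    # the identified, disaggregated, and distributed resources.
    links = [(a["id"], b["id"])
             for i, a in enumerate(identified) for b in identified[i + 1:]]
    print(identified)
    print(links)  # [('gpu-a', 'nic-b')]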
  • deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like might comprise the computing system 725 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
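A minimal sketch of the clock-comparison branch follows, assuming hypothetical per-resource clock readings in nanoseconds; it derives a distributable synchronization state as per-resource offsets from a common reference, and stands in for (rather than implements) the quantum timing alternative.

    def derive_sync_state(clock_readings_ns):
        """clock_readings_ns: mapping of resource id -> local clock reading (ns)."""
        reference = min(clock_readings_ns.values())
        # The synchronization state records each resource's offset from the
        # earliest observed clock; it can then be distributed to re-timers,
        # re-drivers, or buffers along the transport links.
        return {rid: reading - reference
                for rid, reading in clock_readings_ns.items()}


    offsets = derive_sync_state({"gpu-a": 1_000_250, "nic-b": 1_000_000,
                                 "nvme-c": 1_000_900})
    print(offsets)  # {'gpu-a': 250, 'nic-b': 0, 'nvme-c': 900}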
  • In this way, the timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment.
  • Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings.
  • plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals.
  • Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary).
  • isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations.
  • isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted.
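To make the idea concrete, the sketch below models isochronous burst transmission as a data stream interrupted at controlled, equal intervals; the burst size and interval are illustrative assumptions.

    import time


    def isochronous_bursts(data, burst_size, interval_s):
        """Yield fixed-size bursts of `data`, pausing a fixed interval between
        bursts so that corresponding transitions occur at equal time intervals."""
        for start in range(0, len(data), burst_size):
            yield data[start:start + burst_size]
            time.sleep(interval_s)  # controlled interruption of the stream


    for burst in isochronous_bursts(b"disaggregated-composable-infrastructure",
                                    burst_size=8, interval_s=0.01):
        print(burst)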
  • comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex the timing of equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 765 , or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 735 a - 735 n, based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer.
  • simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 770 , or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 735 a - 735 n, based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater.
  • simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 7 ) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 735 a - 735 n, based at least in part on the derived distributable synchronization state.
  • where quantum timing is used, such timing may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
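The following sketch, with assumed names and a simplified tick-based model, shows how a buffer with flexible capacity might delay the faster of two disaggregated resources so that they appear to have zero or near-zero latency or distance between them.

    from collections import deque


    class FlexBuffer:
        """Delay line whose depth (in ticks) can be flexed up or down."""

        def __init__(self, depth_ticks):
            self.queue = deque([None] * depth_ticks)

        def flex(self, new_depth):
            # Grow by adding empty slots (slowing traffic down), shrink by
            # dropping the oldest slots (speeding traffic up).
            while len(self.queue) < new_depth:
                self.queue.appendleft(None)
            while len(self.queue) > new_depth:
                self.queue.popleft()

        def tick(self, item):
            # One item in, one item out per tick; the depth is the number of
            # ticks of delay the buffer introduces.
            self.queue.append(item)
            return self.queue.popleft()


    # Delay the "faster" resource by 3 ticks (its offset from the derived
    # synchronization state) so it aligns with the slower resource.
    buffer = FlexBuffer(depth_ticks=3)
    for i in range(6):
        print(buffer.tick(f"frame-{i}"))  # first three outputs are None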
  • the identified two or more network resources might include, without limitation, peripheral component interconnect ("PCI")-based network cards each comprising one or more network interface cards ("NICs"), one or more smart NICs, one or more graphics processing units ("GPUs"), or one or more storage devices (e.g., non-volatile memory ("NVM") devices, NVM express ("NVMe") devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like.
  • allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
  • providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
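As a non-authoritative illustration of bursting VNFs over an API to NFV entities, the Python sketch below builds (but does not send) a hypothetical HTTP request per NFV endpoint; the endpoint URL, payload shape, and use of HTTP are assumptions of this sketch rather than details of the disclosed API.

    import json
    from urllib import request as urlrequest


    def burst_vnfs(nfv_endpoint, vnf_descriptors, performance_parameters):
        """Prepare one burst request carrying VNF descriptors to an NFV entity."""
        payload = json.dumps({
            "vnfs": vnf_descriptors,               # e.g. [{"name": "vFirewall"}]
            "parameters": performance_parameters,  # desired performance parameters
        }).encode("utf-8")
        return urlrequest.Request(nfv_endpoint, data=payload,
                                  headers={"Content-Type": "application/json"},
                                  method="POST")


    prepared = burst_vnfs("https://nfv.example.net/api/v1/burst",
                          [{"name": "vFirewall"}, {"name": "vLoadBalancer"}],
                          {"max_latency_ms": 5.0})
    print(prepared.full_url)
    print(prepared.data)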

Abstract

Novel tools and techniques are provided for implementing intent-based disaggregated and distributed composable infrastructure. In some embodiments, a computing system might receive, over a network, a request for network services from a customer, the request comprising desired characteristics and performance parameters, without specific information regarding any of hardware, hardware type, location, or network for providing the requested services. The computing system might identify network resources based at least in part on the desired characteristics and performance parameters, might establish transport links between the identified two or more network resources (which may be disaggregated and distributed), might configure (in some cases, based on derived distributable synchronization state(s)) at least one of the identified network resources to simulate zero (or near-zero) latency and/or to simulate zero (or near-zero) distance between the identified network resources, and might allocate the identified two or more network resources for providing the requested network services.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application claims priority to U.S. Patent Application Ser. No. 62/981,308 (the “'308 Application”) filed Feb. 25, 2020 by Kevin M. McBride et al. (attorney docket no. 1562-US-P1), entitled, “Disaggregated & Distributed Composable Infrastructure,” and U.S. Patent Application Ser. No. 63/142,109 (the “'109 Application”) filed Jan. 27, 2021 by Kevin M. McBride et al. (attorney docket no. 1562-US-P2), entitled, “Disaggregated & Distributed Composable Infrastructure.” This application is also related to U.S. patent application Ser. No. ______ (the “'______ Application”) filed Feb. ______, 2021 by Kevin M. McBride et al. (attorney docket no. 1562-US-U2), entitled, “Disaggregated & Distributed Composable Infrastructure,” which claims priority to each of the '308 and '109 Applications, the disclosure of each of which is incorporated herein by reference in its entirety for all purposes.
  • The respective disclosures of these applications/patents (which this document refers to collectively as the “Related Applications”) are incorporated herein by reference in their entirety for all purposes.
  • COPYRIGHT STATEMENT
  • A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
  • FIELD
  • The present disclosure relates, in general, to methods, systems, and apparatuses for implementing disaggregated composable infrastructure, and, more particularly, to methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure.
  • BACKGROUND
  • In typical network resource allocation schemes, a customer might provide a request for network services from a set list of network services, which might include, among other things, information regarding one or more of specific hardware, specific hardware type, specific location, and/or specific network for providing network services, or the like. The customer might select the particular hardware, hardware type, location, and/or network based on stated or estimated performance metrics for these components or generic versions of these components, but might not convey the customer's specific desired performance parameters. The service provider then allocates network resources based on the selected one or more of specific hardware, specific hardware type, specific location, or specific network for providing network services, as indicated in the request.
  • Such specific requests, however, do not necessarily provide the service provider with the intent or expectations of the customer. Accordingly, the service provider will likely make network resource reallocation decisions based on what is best for the network from the perspective of the service provider, but not necessarily what is best for the customer. Importantly, these conventional systems do not utilize metadata in resource inventory databases for implementing intent-based service configuration, service conformance, and/or service auditing.
  • Further, conventional network resource allocation systems typically utilize either specialized or all-purpose network devices that are expensive or that contain network resources that are not used to their full potential (i.e., with wasted potential). Such conventional network resource allocation systems also do not simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, much less configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • Hence, there is a need for more robust and scalable solutions for implementing disaggregated composable infrastructure, and, more particularly, for methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A further understanding of the nature and advantages of particular embodiments may be realized by reference to the remaining portions of the specification and the drawings, in which like reference numerals are used to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
  • FIG. 1 is a schematic diagram illustrating a system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 2 is a schematic diagram illustrating another system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 3 is a schematic diagram illustrating yet another system for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIGS. 4A-4C are schematic diagrams illustrating various non-limiting examples of implementing intent-based service configuration, service conformance, and/or service auditing that may be applicable to implementing intent-based disaggregated and distributed composable infrastructure, in accordance to various embodiments.
  • FIGS. 5A-5D are flow diagrams illustrating a method for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • FIG. 6 is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments.
  • FIG. 7 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.
  • DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS Overview
  • Various embodiments provide tools and techniques for implementing disaggregated composable infrastructure, and, more particularly, methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure.
  • In various embodiments, a computing system might receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. The computing system might identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system might establish one or more transport links (e.g., optical transport links, network transport links, or wired transport links, and/or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources. According to some embodiments, establishing the one or more transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more transport links between the disaggregated and distributed identified two or more network resources.
  • The computing system might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system might allocate the identified two or more network resources for providing the requested network services.
  • In some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources.
  • According to some embodiments, the computing system might map a plurality of network resources within the two or more first networks. In some cases, identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
  • In some embodiments, the identified two or more network resources might include, without limitation, peripheral component interconnect ("PCI")-based network cards each comprising one or more network interface cards ("NICs"), one or more smart NICs, one or more graphics processing units ("GPUs"), or one or more storage devices, and/or the like. Alternatively, or additionally, the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • By utilizing two or more generic or single-purpose network devices in place of specialized or all-purpose network devices, the various embodiments reduce the cost of network resources, and thus the cost of allocating network resources, while avoiding wasted potential or unused portions of the network resources when allocating said resources to customers. The various embodiments also simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, and also configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • These and other aspects of the intent-based disaggregated and distributed composable infrastructure are described in greater detail with respect to the figures.
  • The following detailed description illustrates a few exemplary embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention.
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these specific details. In other instances, certain structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features.
  • Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.
  • Various embodiments described herein, while embodying (in some cases) software products, computer-performed methods, and/or computer systems, represent tangible, concrete improvements to existing technological areas, including, without limitation, network configuration technology, network resource allocation technology, and/or the like. In other aspects, certain embodiments can improve the functioning of a computer or network system itself (e.g., computing devices or systems that form parts of the network, computing devices or systems, network elements or the like for performing the functionalities described below, etc.), for example, by receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establishing, with the computing system, one or more transport links (e.g., optical transport links, network transport links, wired transport links, or wireless transport links, and/or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services; and allocating, with the computing system, the identified two or more network resources for providing the requested network services; and/or the like.
  • In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve specific novel functionality (e.g., steps or operations), such as, establishing, with a computing system, one or more transport links between identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on desired characteristics and performance parameters for requested network services; and allocating, with the computing system, the identified two or more network resources for providing the requested network services, and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, ability to improve network functions, network resource allocation and utilization, and/or the like, in various embodiments based on the intent-driven requests for network resources used to fulfill network service requests by customers, which may be observed or measured by customers and/or service providers.
  • In an aspect, a method might comprise receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establishing, with the computing system, one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocating, with the computing system, the identified two or more network resources for providing the requested network services.
  • In some embodiments, the computing system might comprise one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like. In some cases, the one or more transport links comprise at least one of one or more optical transport links, one or more network transport links, one or more wired transport links, or one or more wireless transport links, and/or the like.
  • Merely by way of example, in some cases, deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links may comprise performing one of: comparing, with the computing system, system clocks each associated with each of the identified two or more network resources, and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing, with the computing system, two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
  • According to some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources, based at least in part on the derived distributable synchronization state. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources, based at least in part on the derived distributable synchronization state. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources, based at least in part on the derived distributable synchronization state.
  • In some embodiments, establishing the one or more transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more transport links between the disaggregated and distributed identified two or more network resources.
  • According to some embodiments, the method might further comprise mapping, with the computing system, a plurality of network resources within the two or more first networks. In some instances, identifying the two or more network resources might comprise identifying, with the computing system, the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources. In some cases, at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence ("AI") systems, one or more machine learning systems, one or more cloud systems, or one or more software defined network ("SDN") systems, and/or the like.
  • In some embodiments, the identified two or more network resources might comprise peripheral component interconnect ("PCI")-based network cards each comprising one or more network interface cards ("NICs"), one or more smart NICs, one or more graphics processing units ("GPUs"), or one or more storage devices, and/or the like. Alternatively, or additionally, the identified two or more network resources might comprise two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • According to some embodiments, the desired characteristics might comprise at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • In some embodiments, the desired performance parameters might comprise at least one of a maximum latency, a maximum jitter, a maximum packet loss, a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • According to some embodiments, allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters. In some instances, providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • In some embodiments, the method might further comprise determining, with an audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters. In some cases, determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit. Alternatively, or additionally, determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified two or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified two or more network resources with the desired performance parameters; determining characteristics of each of the identified two or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified two or more network resources with the desired characteristics.
  • In another aspect, an apparatus might comprise at least one processor and a non-transitory computer readable medium communicatively coupled to the at least one processor. The non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to: receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocate the identified two or more network resources for providing the requested network services.
  • In yet another aspect, a system might comprise a computing system, which might comprise at least one first processor and a first non-transitory computer readable medium communicatively coupled to the at least one first processor. The first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services; identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services; establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources; derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links; configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state; and allocate the identified two or more network resources for providing the requested network services.
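  • Purely as a non-limiting illustration of the ordered operations recited above (receive, identify, establish transport links, derive synchronization state, configure, and allocate), the following Python skeleton stubs out each step; every function body, name, and data shape here is a placeholder assumption, not the claimed implementation.

```python
# Hypothetical orchestration skeleton mirroring the ordered steps above; every
# function body is a placeholder stub, not the claimed implementation.
def fulfill_request(request):
    desired = request["desired"]                          # characteristics and performance parameters only
    resources = identify_resources(desired)               # identify two or more network resources
    links = establish_transport_links(resources)          # establish transport links between them
    sync_state = derive_sync_state(resources, links)      # derive distributable synchronization state
    configure_simulation(resources, desired, sync_state)  # simulate zero/near-zero latency or distance
    return allocate(resources)                            # allocate for the requested services

def identify_resources(desired):
    return [r for r in INVENTORY if r["capabilities"] >= set(desired["capabilities"])]

def establish_transport_links(resources):
    return [(a["id"], b["id"]) for a, b in zip(resources, resources[1:])]

def derive_sync_state(resources, links):
    clocks = [r["clock_offset_us"] for r in resources]
    return {"max_skew_us": max(clocks) - min(clocks)}

def configure_simulation(resources, desired, sync_state):
    for r in resources:
        r["buffer_us"] = sync_state["max_skew_us"]        # pad buffers to mask clock skew

def allocate(resources):
    return [r["id"] for r in resources]

INVENTORY = [
    {"id": "gpu-1", "capabilities": {"gpu", "compute"}, "clock_offset_us": 12},
    {"id": "nvme-3", "capabilities": {"storage", "compute"}, "clock_offset_us": 9},
]
print(fulfill_request({"desired": {"capabilities": ["compute"]}}))  # ['gpu-1', 'nvme-3']
```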
  • According to some embodiments, the computing system might comprise one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above described features.
  • Specific Exemplary Embodiments
  • We now turn to the embodiments as illustrated by the drawings. FIGS. 1-7 illustrate some of the features of the method, system, and apparatus for implementing disaggregated composable infrastructure, and, more particularly, methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure, as referred to above. The methods, systems, and apparatuses illustrated by FIGS. 1-7 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown in FIGS. 1-7 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
  • With reference to the figures, FIG. 1 is a schematic diagram illustrating a system 100 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • In the non-limiting embodiment of FIG. 1, system 100 might comprise a computing system 105 in service provider network 110. In some embodiments, the computing system 105 might include, but is not limited to, one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like. The computing system 105 might receive (via one or more of wired connection, wireless connection, optical transport links, and/or electrical connection, or the like (collectively, “network connectivity” or the like)) a request for network services from a customer 115, via one or more user devices 120 a-120 n (collectively, “user devices 120”), via access network 125. The one or more user devices 120 might include, without limitation, at least one of a smart phone, a mobile phone, a tablet computer, a laptop computer, a desktop computer, and/or the like. The request for network services might include desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services.
  • The desired performance parameters, in some embodiments, might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”), or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • The desired characteristics, according to some embodiments, might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • System 100 might further comprise network resources 130 that may be disposed in, and/or communicatively coupled to, networks 135 a-135 n (collectively, “networks 135” or the like) and/or networks 140 a-140 n (collectively, “networks 140” or the like). In some embodiments, the computing system 105 might analyze first metadata regarding resource attributes and characteristics of a plurality of unassigned network resources to identify one or more network resources 130 among the plurality of unassigned network resources for providing the requested network services, the first metadata having been striped to entries of the plurality of unassigned network resources in a resource database, which might include, without limitation, resource inventory database 145, intent metadata database 150, data lake 170, and/or the like. Based on the analysis, the computing system 105 might allocate at least one identified network resource 130 among the identified one or more network resources 130 for providing the requested network services. The computing system 105 might stripe the entry with second metadata indicative of the desired characteristics and performance parameters as comprised in the request for network services. In some cases, striping the entry with the second metadata might comprise striping the entry in the resource inventory database 145. Alternatively, striping the entry with the second metadata might comprise striping or adding an entry in the intent metadata database 150, which might be part of resource inventory database 145 or might be physically separate (or logically partitioned) from the resource inventory database 145, or the like. In some cases, the first metadata might be analyzed after being received by the computing system in response to one of a pull data distribution instruction, a push data distribution instruction, or a hybrid push-pull data distribution instruction, and/or the like.
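  • The following non-limiting Python sketch illustrates, under assumed data shapes, how first metadata on unassigned inventory entries might be analyzed to identify a resource and how the allocated entry might then be striped with second metadata copied from the request; the in-memory dictionary and field names stand in for the resource inventory and intent metadata databases and are assumptions made solely for this example.

```python
# Hypothetical sketch: analyze first metadata on unassigned entries, allocate a
# matching resource, and stripe the entry with second (intent) metadata. The
# schema below is an illustrative assumption, not the inventory of any embodiment.
resource_inventory = {
    "res-42": {"assigned": False,
               "first_metadata": {"type": "smart-nic", "region": "us-west"},
               "intent_metadata": []},
    "res-43": {"assigned": False,
               "first_metadata": {"type": "gpu", "region": "eu-central"},
               "intent_metadata": []},
}

def identify_and_allocate(desired):
    for rid, entry in resource_inventory.items():
        first = entry["first_metadata"]
        if (not entry["assigned"]
                and first["type"] == desired["type"]
                and first["region"] == desired["region"]):
            entry["assigned"] = True
            # Stripe the entry with second metadata copied from the request.
            entry["intent_metadata"].append(dict(desired))
            return rid
    return None

print(identify_and_allocate({"type": "gpu", "region": "eu-central", "max_latency_ms": 5}))
```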
  • Once the at least one identified network resource 130 has been allocated or assigned, the computing system 105 might update an active inventory database 155 with such information—in some cases, by adding an entry in the active inventory database 155 with information indicating that the at least one identified network resource 130 has been allocated to provide particular requested network service(s) to customer 115. In some embodiments, the computing system 105 might stripe the added entry in the active inventory database 155 with a copy of the second metadata indicative of the desired characteristics and performance parameters as comprised in the request for network services. In some instances, the resource inventory database 145 might store an equipment record that lists every piece of inventory that is accessible by the computing system 105 (either already allocated for fulfillment of network services to existing customers or available for allocation for fulfillment of new network services to existing or new customers). The active inventory database 155 might store a circuit record listing the active inventory that is being used for fulfilling network services. The data lake 170 might store a customer record that lists the service record of the customer, and/or the like.
  • According to some embodiments, system 100 might further comprise quality of service test and validate server or audit engine 160, which performs measurement and/or collection of network performance metrics for at least one of the one or more network resources 130 and/or the one or more networks 135 and/or 140, and/or which performs auditing to determine whether each of the identified one or more network resources 130 conforms with the desired characteristics and performance parameters. In some cases, network performance metrics might include, without limitation, at least one of quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, or network usage trend data, and/or the like. Alternatively, or additionally, network performance metrics might include, but are not limited to, one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like, which are described in greater detail in the '244 and '884 applications, which have already been incorporated herein by reference in their entirety. The operations associated with metadata striping and allocation (or re-allocation) of network resources are described in greater detail in the '095, '244, and '884 applications, which have already been incorporated herein by reference in their entirety.
  • In some embodiments, computing system 105 might allocate one or more network resources 130 from one or more first networks 135 a-135 n of a first set of networks 135 and/or from one or more second networks 140 a-140 n of a second set of networks 140 for providing the requested network services, based at least in part on the desired performance parameters and/or based at least in part on a determination that the one or more first networks are capable of providing network resources each having the desired performance parameters. According to some embodiments, the determination that the one or more first networks are capable of providing network resources each having the desired performance parameters is based on one or more network performance metrics of the one or more first networks at the time that the request for network services from a customer is received.
  • System 100 might further comprise one or more databases, including, but not limited to, a platform resource database 165 a, a service usage database 165 b, a topology and reference database 165 c, a QoS measurement database 165 d, and/or the like. The platform resource database 165 a might collect and store data related or pertaining to platform resource data and metrics, or the like, while the service usage database 165 b might collect and store data related or pertaining to service usage data or service profile data, and the topology and reference database 165 c might collect and store data related or pertaining to topology and reference data. The QoS measurement database 165 d might collect and store QoS data, network performance metrics, and/or results of the QoS test and validate process. Data stored in at least one of the platform resource database 165 a, the service usage database 165 b, the topology and reference database 165 c, the QoS measurement database 165 d, and/or the like, are collected in data lake 170, and the collective data or selected data from the data lake 170 are used to perform optimization of network resource allocation (both physical and/or virtual) using the computing system 105 (and, in some cases, using an orchestration optimization engine (e.g., orchestration optimization engine 275 of FIG. 2 of the '244 and '884 applications), or the like).
  • In some embodiments, determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine 160, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit. Alternatively, or additionally, determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified one or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified one or more network resources with the desired performance parameters; determining characteristics of each of the identified one or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified one or more network resources with the desired characteristics.
  • Based on a determination that at least one identified network resource among the identified one or more network resources fails to conform with the desired performance parameters within first predetermined thresholds or based on a determination that the determined characteristics of the at least one identified network resource fail to conform with the desired characteristics within second predetermined thresholds, the computing system 105 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resource among the identified one or more network resources for providing the requested network services. In some cases, the computing system 105 might perform one of reconfiguring the at least one identified network resource or reallocating at least one other identified network resource, based on a determination that the measured one or more network performance metrics of each of the identified one or more network resources fail to match the desired performance parameters within third predetermined thresholds or based on a determination that the determined characteristics of each of the identified one or more network resources fail to match the desired characteristics within fourth predetermined thresholds.
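  • As a hypothetical, non-limiting sketch of the remediation decision described above, the following Python fragment checks conformance against predetermined thresholds and then chooses between reconfiguring the resource and reallocating a replacement; the threshold values, the reconfigurable flag, and the data shapes are assumptions for illustration only.

```python
# Hypothetical sketch of the threshold-based decision: conform, reconfigure, or
# reallocate. Threshold values and data shapes are illustrative assumptions.
def remediate(resource, measured, desired, metric_threshold=0.1, char_threshold=0):
    """Return 'conforms', a reconfigure action, or a reallocate action."""
    metric_gap = measured["latency_ms"] - desired["max_latency_ms"]
    chars_mismatch = sum(1 for k, v in desired["characteristics"].items()
                         if resource["characteristics"].get(k) != v)
    if (metric_gap <= desired["max_latency_ms"] * metric_threshold
            and chars_mismatch <= char_threshold):
        return "conforms"
    if resource.get("reconfigurable", False):
        return f"reconfigure {resource['id']}"
    return f"reallocate replacement for {resource['id']}"

res = {"id": "res-42", "characteristics": {"region": "us-west"}, "reconfigurable": True}
print(remediate(res, {"latency_ms": 7.0},
                {"max_latency_ms": 5.0, "characteristics": {"region": "us-west"}}))
# -> 'reconfigure res-42'
```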
  • In some aspects, intent might further include, without limitation, path intent, location intent, performance intent, time intent, and/or the like. Path intent, for example, might include a requirement that network traffic must be routed through a first particular geophysical location (e.g., a continent, a country, a region, a state, a province, a city, a town, a mountain range, etc.) and/or a requirement that network traffic must not be routed through a second particular geophysical location, or the like. In such cases, a service commission engine might either add (and/or mark as required) all paths through the first particular geophysical location and all network resources that indicate that they are located in the first particular geophysical location, or remove (and/or mark as excluded) all paths through the second particular geophysical location and all network resources that indicate that they are located in the second particular geophysical location. The service commission engine might use the required or non-excluded paths and network resources to identify which paths and network resources to allocate to fulfill requested network services. In some embodiments, the active inventory might be marked so that any fix or repair action is also restricted and that policy audits might be implemented to ensure no violations of path intent actually occur.
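  • By way of a non-limiting illustration of path-intent filtering, the following Python sketch keeps paths that traverse a required geophysical location and drops paths that traverse an excluded one; the path and hop structures are assumed solely for this example.

```python
# Hypothetical sketch of path-intent filtering: require or exclude paths based on
# the geophysical locations of their hops. Path shapes are illustrative assumptions.
def filter_paths(paths, must_route_through=None, must_avoid=None):
    selected = []
    for path in paths:
        locations = {hop["location"] for hop in path["hops"]}
        if must_avoid and must_avoid in locations:
            continue                       # mark as excluded
        if must_route_through and must_route_through not in locations:
            continue                       # does not satisfy the required location
        selected.append(path["id"])        # required / non-excluded path
    return selected

paths = [
    {"id": "p1", "hops": [{"location": "US"}, {"location": "CA"}]},
    {"id": "p2", "hops": [{"location": "US"}, {"location": "MX"}]},
]
print(filter_paths(paths, must_route_through="US", must_avoid="MX"))  # ['p1']
```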
  • Location intent, for instance, might include a requirement that network resources that are used for fulfilling the requested network services are located in specific geographical locations (which are more specific compared to the general geophysical locations described above). In such cases, the inventory is required to include the metadata for the intent, after which the service engine can perform the filtering and selection. Monitoring and/or restricting assets being reassigned may be performed using location intent policy markings (or metadata) on the service.
  • Performance intent, for example, might include a requirement that the requested services satisfy particular performance parameters or metrics—which might include, without limitation, maximum latency or delay, maximum jitter, maximum packet loss, maximum number of hops, minimum bandwidth, nodal connectivity, minimum amount of compute resources for each allocated network resource, minimum amount of storage resources for each allocated network resource, minimum memory capacity for each allocated network resource, fastest possible path, and/or the like. In such cases, the service conformance engine might use the performance metrics (as measured by one or more nodes in the network, which in some cases might include the allocated network resource itself, or the like) between points (or network nodes) for filtering the compliant inventory options, and/or might propose higher levels of service to satisfy the customer and/or cost level alignment, or the like. Once the assignment portion of the engine has been performed, the active inventory might be marked with the appropriate performance intent policy.
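  • As a hedged, non-limiting illustration of performance-intent filtering, the following Python sketch filters candidate inventory options by measured metrics between network nodes and orders the compliant options; the metric names and option shapes are assumptions for illustration only.

```python
# Hypothetical sketch of performance-intent filtering of compliant inventory options;
# metric names and the option shape are illustrative assumptions.
def compliant_options(options, intent):
    def ok(o):
        return (o["latency_ms"] <= intent.get("max_latency_ms", float("inf"))
                and o["bandwidth_mbps"] >= intent.get("min_bandwidth_mbps", 0)
                and o["hops"] <= intent.get("max_hops", float("inf")))
    # Prefer the fastest compliant option; proposing a higher service tier when
    # nothing complies is not shown here.
    return sorted((o for o in options if ok(o)), key=lambda o: o["latency_ms"])

options = [
    {"id": "blade-1", "latency_ms": 4.0, "bandwidth_mbps": 900, "hops": 3},
    {"id": "blade-2", "latency_ms": 9.0, "bandwidth_mbps": 1200, "hops": 2},
]
print(compliant_options(options, {"max_latency_ms": 5, "min_bandwidth_mbps": 500}))
```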
  • Time intent, for instance, might include a requirement that the requested services take into account conditions related to time of day (e.g., morning, noon, afternoon, evening, night, etc.), special days (e.g., holidays, snow days, storm days, etc.), weeks of the year (e.g., around holidays, etc.), etc., based at least in part on baseline or normality analyses of average or typical conditions.
  • In some embodiments, an SS7 advanced intelligence framework (which might have a local number portability dip to get instructions from an external advanced intelligence function) can be adapted with intent-based orchestration (as described herein) by putting a trigger (e.g., an external data dip, or the like) on the orchestrator between the requesting device or node (where the intent and intent criteria might be sent) and the source of the external function, which might scrape the inventory database to make its instructions and/or solution sets for the fulfillment engine and then stripe metadata, and/or return that to the normal fulfillment engine.
  • Alternatively, or additionally, according to some embodiments, the computing system 105 might receive, over a network (e.g., at least one of service provider network 110, access network 125, one or more first networks 135 a-135 n, and/or one or more second networks 140 a-140 n, or the like), a request for network services from a customer (e.g., customer 115, or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. The computing system 105 might identify two or more network resources (e.g., network resources 130, or the like) from two or more first networks (e.g., network 135 and/or network 140, or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system 105 might establish one or more optical transport links (e.g., optical transport 175, or the like; depicted in FIG. 1 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources. According to some embodiments, establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 175) between the disaggregated and distributed identified two or more network resources 130. Although FIG. 1 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like). In some embodiments, the computing system 105 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • The computing system 105 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state. The computing system 105 might allocate the identified two or more network resources for providing the requested network services. In some cases, based on a determination that a resource or parameter is not available or based on a determination that no resources or parameters are available to meet an intent (based on a customer-desired requirement or the like), the computing system 105 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resource among the identified two or more network resources for providing the requested network services; and/or the like.
  • According to some embodiments, deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like, might comprise the computing system 105 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
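  • Purely as a non-limiting sketch of the first option above (comparing system clocks), the following Python fragment derives a simple distributable synchronization state from per-resource clock readings; the microsecond units, the choice of reference clock, and the data shapes are illustrative assumptions.

```python
# Hypothetical sketch: derive a distributable synchronization state from the
# differences between system clocks of the identified resources. Units and data
# shapes are illustrative assumptions.
def derive_sync_state(clock_readings_us):
    """clock_readings_us maps resource id to its system-clock reading (microseconds)."""
    reference = min(clock_readings_us.values())
    offsets = {rid: t - reference for rid, t in clock_readings_us.items()}
    return {"offsets_us": offsets, "max_skew_us": max(offsets.values())}

print(derive_sync_state({"res-a": 1_000_012, "res-b": 1_000_003, "res-c": 1_000_008}))
# {'offsets_us': {'res-a': 9, 'res-b': 0, 'res-c': 5}, 'max_skew_us': 9}
```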
  • With respect to the latter set of embodiments, timing source and propagation are no longer predicated on dedicated links, or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment. Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings. Here, plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals. Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary). In some embodiments, isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations. In some instances, where the information-bearer channel rate is higher than either the input data signaling rate or the output data signaling rate, isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted. In some cases, comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex the timing of equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • In some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 185, or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 130, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 190, or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 130, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 1) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 130, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
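  • As a hypothetical, non-limiting sketch of the flexible-buffer option, the following Python fragment pads each resource's buffer in proportion to its synchronization offset so that the disaggregated resources appear co-located; the line rate, unit conventions, and function names are assumptions for illustration only.

```python
# Hypothetical sketch: size each resource's flexible buffer from its synchronization
# offset so the combined resources behave as if zero/near-zero latency separated them.
# The line rate and unit conventions are illustrative assumptions.
def flex_buffers(offsets_us, line_rate_mbps=10_000):
    """Return per-resource buffer depth (bits) that masks each resource's offset."""
    max_offset = max(offsets_us.values())
    # 1 Mbps is equivalent to 1 bit per microsecond, so microseconds * Mbps = bits.
    return {rid: int((max_offset - off) * line_rate_mbps)
            for rid, off in offsets_us.items()}

print(flex_buffers({"res-a": 9, "res-b": 0, "res-c": 5}))
# {'res-a': 0, 'res-b': 90000, 'res-c': 40000}
```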
  • According to some embodiments, the computing system 105 might map a plurality of network resources within the two or more first networks. In some cases, identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources. In some instances, at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180, or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like. In some cases, the one or more AI systems may also be used to assist in assigning resources and/or managing the intent-based curation or composability process.
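  • By way of a non-limiting illustration only, the following Python sketch maps resources across first networks and then identifies candidate resources from that map; the network and resource structures and the selection rule are assumptions made solely for this example.

```python
# Hypothetical sketch: map resources across the first networks, then identify
# candidates from the map. Data shapes and the selection rule are assumptions.
from collections import defaultdict

def map_resources(networks):
    resource_map = defaultdict(list)
    for net in networks:
        for res in net["resources"]:
            resource_map[res["type"]].append((net["id"], res["id"]))
    return resource_map

def identify_from_map(resource_map, desired_types, count=2):
    candidates = [entry for t in desired_types for entry in resource_map.get(t, [])]
    return candidates[:count]

networks = [
    {"id": "net-135a", "resources": [{"id": "gpu-1", "type": "gpu"}]},
    {"id": "net-140b", "resources": [{"id": "nvme-3", "type": "storage"}]},
]
print(identify_from_map(map_resources(networks), ["gpu", "storage"]))
```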
  • In some embodiments, the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards, each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices, and/or the like. Alternatively, or additionally, the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices. In some non-limiting examples, two or more tiny servers or server blades might be curated or composed to function as, and simulate, a single large server, or the like.
  • According to some embodiments, allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters. In some instances, providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • In some aspects, the various embodiments provide disaggregated and distributed composable infrastructure. The various embodiments also add a layer of composability by using different AI systems to treat certain data with priority and/or by using curation or composability (which might include, without limitation, geo composability, resource composability, network composability, and/or the like) based at least in part on path intent, location intent, performance intent, time intent, and/or the like (collectively referred to as “intent-based curation or composability” or the like). The various embodiments utilize the composability or orchestration to enable dynamic allocation or composability of compute and/or network resources. The various embodiments further utilize two or more generic or single-purpose network devices in place of specialized or all-purpose network devices, and as such reduce the cost of network resources and thus the cost of allocating network resources, while avoiding wasted potential or unused portions of the network resources when allocating said resources to customers. The various embodiments also simulate zero latency or near-zero latency between two or more network resources or simulate zero distance or near-zero distance between the two or more network resources while utilizing optical transport, and also configure the two or more network resources as a combined or integrated network resource despite the two or more network resources being disaggregated and distributed network resources.
  • These and other functions of the system 100 (and its components) are described in greater detail below with respect to FIGS. 2-5.
  • FIG. 2 is a schematic diagram illustrating another system 200 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • In the non-limiting embodiment of FIG. 2, system 200 might comprise a main hub 205, first through Nth ring hubs 210 a-210 n (collectively, “ring hubs 210” or the like), first through Nth remote hubs 215 a-215 n (collectively, “remote hubs 215” or the like), a plurality of universal customer premises equipment (“UCPEs”) 220 or 220 a-220 n that are located at corresponding customer premises 225 or 225 a-225 n, a plurality of network resources 230, computing system 235, host or main 240, and optical transport or optical transport links 245. Although FIG. 2 depicts a particular example of the configuration or arrangement of the main hub 205, the ring hubs 210, the remote hubs 215, and the UCPEs 220 in customer premises 225, the various embodiments are not so limited, and the configuration or arrangement may be any suitable configuration or arrangement of the main hub 205, the ring hubs 210, the remote hubs 215, and the UCPEs 220 in customer premises 225, and/or the like.
  • In some embodiments, the main hub 205 might communicatively couple to the ring hubs 210 a-210 n in a ring configuration in which the main hub 205 might communicatively couple directly or indirectly to the first ring hub 210 a, which might communicatively couple directly or indirectly to the second ring hub 210 b, which might communicatively couple directly or indirectly to the next ring hub and so on until the Nth ring hub 210 n, which might in turn communicatively couple back to the main hub 205, where the main hub 205 might be located in a geographic location that is different from the geographic location of each of the ring hubs 210 a-210 n, each of which is in turn located in a geographic location that is different from the geographic location of each of the other ring hubs 210 a-210 n. Each ring hub 210 might be communicatively coupled (in a hub and spoke configuration, or the like) to a plurality of UCPEs 220, each of which might be located at a customer premises 225 among a plurality of customer premises 225. In some instances, customer premises 225 might include, without limitation, customer residences, multi-dwelling units (“MDUs”), commercial customer premises, industrial customer premises, and/or the like, within one or more blocks of customer premises (e.g., residential neighborhoods, university/college campuses, office blocks, industrial parks, mixed-use zoning areas, and/or the like), in which roadways and/or pathways might be adjacent to each of the customer premises.
  • According to some embodiments, the main hub 205 might communicatively couple to the remote hubs 215 a-215 n in a hub and spoke configuration in which the main hub 205 might communicatively couple directly or indirectly to each of the first through Nth remote hubs 215 a-215 n, where the main hub 205 might be located in a geographic location that is different from the geographic location of each of the remote hubs 215 a-215 n, each of which is in turn located in a geographic location that is different from the geographic location of each of the other remote hubs 215 a-215 n. Each remote hub 215 might be communicatively coupled (in a hub and spoke configuration, or the like) to a plurality of UCPEs 220, each of which might be located at a customer premises 225 among a plurality of customer premises 225.
  • In some embodiments, the main hub 205 and/or the network resources 230 disposed on the main hub 205 might communicatively couple to the ring hubs 210 a-210 n in the ring configuration via optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like), and, in some cases, each ring hub 210 and/or the network resources 230 disposed on each ring hub 210 might communicatively couple (in a hub and spoke configuration, or the like) to the UCPEs 220 located at corresponding customer premises 225 via corresponding optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like).
  • According to some embodiments, the main hub 205 and/or the network resources 230 disposed on the main hub 205 might communicatively couple to the remote hubs 215 a-215 n in the hub and spoke configuration via optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like), and, in some cases, each remote hub 215 and/or the network resources 230 disposed on each remote hub 215 might communicatively couple (in a hub and spoke configuration, or the like) to the UCPEs 220 located at corresponding customer premises 225 via corresponding optical transport or optical transport links 245 (depicted in FIG. 2 as long-dash lines, or the like).
  • In operation, the computing system 235 might receive, over a network (e.g., at least one of service provider network 110, access network 125, one or more first networks 135 a-135 n, and/or one or more second networks 140 a-140 n of FIG. 1, or the like), a request for network services from a customer (e.g., customer 115 of FIG. 1, or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. The computing system 235 might identify two or more network resources (e.g., network resources 230, or the like) from two or more first networks (e.g., network 135 and/or network 140 of FIG. 1, or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system 235 might establish one or more optical transport links (e.g., optical transport 245, or the like; depicted in FIG. 2 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources. According to some embodiments, establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 245) between the disaggregated and distributed identified two or more network resources 230. Although FIG. 2 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like). In some embodiments, the computing system 235 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • The desired performance parameters, in some embodiments, might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”), or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • The desired characteristics, according to some embodiments, might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • The computing system 235 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state. The computing system 235 might allocate the identified two or more network resources for providing the requested network services.
  • According to some embodiments, deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like, might comprise the computing system 235 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
  • With respect to the latter set of embodiments, timing source and propagation are no longer predicated on dedicated links, or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment. Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings. Here, plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals. Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary). In some embodiments, isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations. In some instances, where the information-bearer channel rate is higher than either the input data signaling rate or the output data signaling rate, isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted. In some cases, comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex the timing of equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • In some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 185 of FIG. 1, or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 230, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 190 of FIG. 1, or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 230, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 2) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 230, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
  • According to some embodiments, the computing system 235 might map a plurality of network resources within the two or more first networks. In some cases, identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources. In some instances, at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180 of FIG. 1, or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • In some embodiments, the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards, each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like. Alternatively, or additionally, the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • According to some embodiments, allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters. In some instances, providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
  • FIG. 3 is a schematic diagram illustrating yet another system 300 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • In the non-limiting embodiment of FIG. 3, system 300 might comprise a main hub 305, one or more remote hubs 310 a-310 n (collectively, “remote hubs 310” or the like), a plurality of universal customer premises equipment (“UCPEs”) 315 or 315 a-315 n that are located at corresponding customer premises 320 or 320 a-320 n, a computing system 325, a plurality of network resources 330, and optical transport or optical transport links 335 (depicted in FIG. 3 as long-dash lines, or the like). Although FIG. 3 depicts a particular example of the configuration or arrangement of the main hub 305, the remote hubs 310, and the UCPEs 315 in customer premises 320, the various embodiments are not so limited, and the configuration or arrangement may be as shown and described in FIG. 2, or may be any suitable configuration or arrangement of the main hub 305, the remote hubs 310, and the UCPEs 315 in customer premises 320, and/or the like.
  • In some embodiments, the main hub 305 might communicatively couple directly or indirectly to the remote hubs 310 a-310 n (either in the ring configuration and/or the hub and spoke configuration as shown in FIG. 2), each of which might communicatively couple directly or indirectly to a plurality of UCPEs 315, each of which might be located at a customer premises 320 among a plurality of customer premises 320. In some instances, customer premises 320 might include, without limitation, customer residences, multi-dwelling units (“MDUs”), commercial customer premises, industrial customer premises, and/or the like, within one or more blocks of customer premises (e.g., residential neighborhoods, university/college campuses, office blocks, industrial parks, mixed-use zoning areas, and/or the like), in which roadways and/or pathways might be adjacent to each of the customer premises.
  • According to some embodiments, the main hub 305 might communicatively couple to the remote hubs 310 a-310 n in which the main hub 305 might communicatively couple directly or indirectly (in either a ring configuration (as with the ring hubs 210 a-210 n of FIG. 2, or the like) or a hub and spoke configuration as shown in FIG. 2, or the like) to each of the first through Nth remote hubs 310 a-310 n, where the main hub 305 might be located in a geographic location that is different from the geographic location of each of the remote hubs 310 a-310 n, each of which is in turn located in a geographic location that is different from the geographic location of each of the other remote hubs 310 a-310 n. Each remote hub 310 might be communicatively coupled (in a ring configuration or in a hub and spoke configuration, or the like) to a plurality of UCPEs 315, each of which might be located at a customer premises 320 among a plurality of customer premises 320.
  • Merely by way of example, in some cases, at least one of the network resources 330 disposed in the main hub 305, the network resources 330 disposed in the remote hub 310 a, the network resources 330 disposed in the remote hub 310 b, and/or the like, might comprise a plurality of network resource units 330 a mounted in a plurality of equipment racks or ports 340. In some instances, the UCPE 315 a might comprise network resources 330 including, but not limited to, two or more network resource units 330 a. In some embodiments, the network resources 330 might include, without limitation, one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like.
  • In operation, the computing system 325 might receive, over a network (e.g., at least one of service provider network 110, access network 125, one or more first networks 135 a-135 n, and/or one or more second networks 140 a-140 n of FIG. 1, or the like), a request for network services from a customer (e.g., customer 115 of FIG. 1, or the like), the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. The computing system 325 might identify two or more network resources (e.g., network resources 330, or the like) from two or more first networks (e.g., network 135 and/or network 140 of FIG. 1, or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system 325 might establish one or more optical transport links (e.g., optical transport 335, or the like; depicted in FIG. 3 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources. According to some embodiments, establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 335) between the disaggregated and distributed identified two or more network resources 330. Although FIG. 3 shows the use of optical transport links, the various embodiments are not so limited, and other transport links or other forms of network connectivity may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like). In some embodiments, the computing system 325 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • The desired performance parameters, in some embodiments, might include, but are not limited to, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”), or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”), and/or the like.
  • The desired characteristics, according to some embodiments, might include, without limitation, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
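  • Merely as a non-limiting, hypothetical illustration of how such an intent-level request (carrying desired characteristics and performance parameters but no hardware, hardware-type, location, or network specifics) might be represented in software, the following Python sketch is offered; all class and field names are illustrative assumptions and not part of any particular embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerformanceParameters:
    # Upper bounds the requested service must satisfy; units are illustrative.
    max_latency_ms: Optional[float] = None
    max_jitter_ms: Optional[float] = None
    max_packet_loss_pct: Optional[float] = None
    max_hops: Optional[int] = None

@dataclass
class DesiredCharacteristics:
    # Geophysical and compositional constraints expressed as intent, not hardware.
    proximate_to_customer: bool = False
    required_regions: List[str] = field(default_factory=list)
    excluded_regions: List[str] = field(default_factory=list)
    excluded_resource_types: List[str] = field(default_factory=list)
    included_resource_types: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)  # e.g., ["lowest_delay"]

@dataclass
class NetworkServiceRequest:
    customer_id: str
    characteristics: DesiredCharacteristics
    performance: PerformanceParameters
```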
  • The computing system 325 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state. The computing system 325 might allocate the identified two or more network resources for providing the requested network services.
  • According to some embodiments, deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like, might comprise the computing system 325 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
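  • Merely by way of illustration of the former (clock-comparison) option, a minimal sketch is shown below, assuming each identified resource exposes a readable system clock sampled at roughly the same instant; the function name, nanosecond units, and use of a median reference are assumptions made only for illustration.

```python
from statistics import median
from typing import Dict

def derive_sync_state(clock_readings_ns: Dict[str, int]) -> Dict[str, float]:
    """Derive a distributable synchronization state from per-resource clock readings.

    clock_readings_ns maps a resource identifier to its system clock, sampled in
    nanoseconds at (approximately) the same instant. The returned offsets express
    how far each resource deviates from a common reference, so the state can be
    distributed across the resources, backplane, or transport links.
    """
    reference = median(clock_readings_ns.values())  # robust common reference
    return {resource: reading - reference
            for resource, reading in clock_readings_ns.items()}

# Example: three disaggregated resources with slightly skewed clocks.
offsets = derive_sync_state({"nic-330b": 1_000_050, "gpu-330c": 1_000_000, "nvme-330d": 999_970})
```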
  • With respect to the latter set of embodiments, timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment. Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings. Here, plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals. Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary). In some embodiments, isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations. In some instances, where the information-bearer channel rate is higher than either the input data signaling rate or the output data signaling rate, isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted. In some cases, comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex time equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • In some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 345, or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 330, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 350, or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 330, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 3) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 330, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
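  • As a simplified, non-limiting way to picture the flexible-buffer (or re-timer/re-driver) behavior, the sketch below pads each path's delay up to that of the slowest path so the disaggregated resources appear equidistant; the delay values and function name are assumptions used only for illustration.

```python
from typing import Dict

def equalization_padding_ns(path_delays_ns: Dict[str, float]) -> Dict[str, float]:
    """Compute per-path buffer padding that makes all paths appear to have the
    same delay (that of the slowest path), simulating zero relative latency
    and zero relative distance between disaggregated resources."""
    worst = max(path_delays_ns.values())
    return {path: worst - delay for path, delay in path_delays_ns.items()}

# Example: three optical transport links with different fiber lengths.
padding = equalization_padding_ns({"A": 150.0, "B": 120.0, "C": 80.0})
# padding == {"A": 0.0, "B": 30.0, "C": 70.0}
```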
  • According to some embodiments, the computing system 325 might map a plurality of network resources within the two or more first networks. In some cases, identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources. In some instances, at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 180 of FIG. 1, or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • In some embodiments, the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like. Alternatively, or additionally, the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices. In some non-limiting examples, two or more tiny servers or server blades might be curated or composed to function as, and simulate, a single large server, or the like.
  • According to some embodiments, allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters. In some instances, providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
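  • Purely as a hypothetical sketch of what “bursting” a VNF descriptor to an NFV entity over an API might look like (the endpoint, payload fields, and function are illustrative assumptions, not an actual interface of any embodiment):

```python
import json
from urllib import request

def burst_vnf(nfv_endpoint: str, vnf_descriptor: dict) -> int:
    """POST a VNF descriptor to a hypothetical NFV entity at a first network,
    granting it access over the transport link to the composed resources.
    Returns the HTTP status code for simplicity."""
    body = json.dumps(vnf_descriptor).encode("utf-8")
    req = request.Request(
        nfv_endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status

# Example (hypothetical endpoint and descriptor):
# burst_vnf("https://nfv.example.net/vnfs", {"name": "vFirewall", "max_latency_ms": 5})
```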
  • In some embodiments, such as shown in the non-limiting example of FIG. 3, identifying the two or more network resources capable of providing the requested network services might comprise identifying a first generic or single-purpose network device 330 b in a first slot or first rack 340 among the network resources 330 disposed at the main hub 305, identifying a second generic or single-purpose network device 330 c in a second slot or second rack 340 among the network resources 330 disposed at the main hub 305, identifying a third generic or single-purpose network device 330 d in a second slot or second rack 340 among the network resources 330 disposed at the first remote hub 310 a, identifying a fourth generic or single-purpose network device 330 e in an Nth slot or Nth rack 340 among the network resources 330 disposed at the second remote hub 310 b, and identifying a fifth generic or single-purpose network device 330 f among the network resources 330 disposed at the UCPE 315 a of customer premises 320 a, or the like. The computing system 325 might utilize the re-timer 345 functionality to simulate zero latency or near-zero latency between the identified two or more network resources and/or utilize re-driver (or repeater) 350 functionality to simulate zero distance or near-zero distance between the identified two or more network resources, and/or the like, over the optical transport links 335, resulting effectively in the first through fifth generic or single-purpose network devices 330 b-330 f being configured as if they were contained within a virtual slot or virtual rack 340′ (such operation being shown at the distal end of arrow 355 in FIG. 3).
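  • To make the composition step concrete in a non-limiting way, the following sketch groups the separately identified devices 330 b-330 f into a single logical “virtual rack” record; the data structure and labels are assumptions used only for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ComposedDevice:
    device_id: str      # e.g., "330b"
    hub: str            # e.g., "main", "remote-a", "remote-b", "ucpe"
    physical_slot: str  # slot/rack where the device actually resides

@dataclass
class VirtualRack:
    rack_id: str
    members: List[ComposedDevice]  # devices behave as if co-located in one rack

virtual_rack = VirtualRack(
    rack_id="340-prime",
    members=[
        ComposedDevice("330b", "main", "slot-1"),
        ComposedDevice("330c", "main", "slot-2"),
        ComposedDevice("330d", "remote-a", "slot-2"),
        ComposedDevice("330e", "remote-b", "slot-N"),
        ComposedDevice("330f", "ucpe", "slot-1"),
    ],
)
```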
  • FIGS. 4A-4C (collectively, “FIG. 4”) are schematic diagrams illustrating various non-limiting examples 400, 400′, and 400″ of implementing intent-based service configuration, service conformance, and/or service auditing that may be applicable to implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • In the non-limiting example 400 of FIG. 4A, a plurality of nodes 405 might include, without limitation, node A 405 a, node B 405 b, node C 405 c, node D 405 d, node E 405 e, node F 405 f, node G 405 g, node H 405 h, and/or the like. The system might further comprise ME node 410. The system might further comprise paths A through K, with path A between node A 405 a and node B 405 b, path B between node B 405 b and node C 405 c, path C between node C 405 c and node D 405 d, path D between node D 405 d and node E 405 e, path E between node E 405 e and node F 405 f, path F between node F 405 f and node G 405 g, path G between node G 405 g and node H 405 h, path H between node H 405 h and node A 405 a, path J between node H 405 h and node C 405 c, path K between node A 405 a and node E 405 e, and/or the like. The system might further comprise a path between the ME node 410 and one of the nodes 405 (e.g., node E 405 e, or the like). Here, each node 405 might be a network resource or might include a network resource(s), or the like.
  • Here, the intent framework might require a named goal that includes standardized criteria that may be a relationship between two items. For example, the named goal (or intent) might include, without limitation, lowest delay (where the criteria might be delay), least number of hops (where the criteria might be hops), or proximity to me (in this case, the ME node 410; where the criteria might be geographical proximity, geophysical proximity, distance, etc.). In some embodiments, two or more goals (or intents) might be combined. In all cases, the criteria might be added or striped via metadata into the inventory database (e.g., databases 145, 150, 155, and/or 170 of FIG. 1, or the like) and might be used for node and/or resource selection or deselection. In goal-oriented implementations, prioritization striping might be applied for the fulfillment engine to consider, possibly along with selection or deselection criteria.
  • In some cases, where goal-oriented intent is established, the inventory database might be augmented with tables that correlate with the “intent” criteria (such as shown in the delay table in FIG. 4A). For instance, the table might include the intent (in this case, delay represented by the letter “D”), the path (e.g., paths A through K, or the like), and the delay (in this case, delay in milliseconds, or the like). Using the optical transport as shown and described with respect to FIGS. 1-3, as well as the re-timer and/or re-driver (or repeater) functionalities that zero out latency between two or more nodes 405 a-405 h and/or that simulate zero or near-zero distance between two or more nodes 405 a-405 h despite the actual physical or geographic distances, the delay along each path may be reduced to a re-timed delay (as depicted in the table by the “re-timed delay” being set to, or measured or estimated at, 150 ns, 120 ns, 80 ns, 40 ns, 70 ns, 100 ns, 130 ns, 80 ns, 320 ns, and 550 ns along paths A-K, respectively, between two or more nodes 405 a-405 h). Although the re-timed delay in FIG. 4A is shown in nanoseconds, the various embodiments are not so limited, and the re-timed delay may be in microseconds or milliseconds, or, in some cases, may be tunable as desired (e.g., with a tunable re-timed delay of between about 500 nanoseconds and about 1 microsecond, or the like). For example, in the case of digital signal processors (“DSPs”) on peripheral component interconnect (“PCI”) cards or DSPs on graphics processing units (“GPUs”), or the like, tunable re-timed delays may be implemented.
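  • For instance, a single-goal “lowest delay” intent over such an augmented inventory might be approximated by sorting candidate paths on the re-timed delay column; the values below mirror the illustrative table, and the selection logic is a sketch, not the fulfillment engine itself.

```python
# Re-timed delay per path (nanoseconds), as in the illustrative delay table.
retimed_delay_ns = {
    "A": 150, "B": 120, "C": 80, "D": 40, "E": 70,
    "F": 100, "G": 130, "H": 80, "J": 320, "K": 550,
}

def select_paths_by_lowest_delay(delays_ns: dict, count: int = 3) -> list:
    """Return the `count` candidate paths with the smallest re-timed delay,
    i.e., a single-goal 'lowest delay' prioritization over the inventory."""
    return sorted(delays_ns, key=delays_ns.get)[:count]

best = select_paths_by_lowest_delay(retimed_delay_ns)  # ['D', 'E', 'C']
```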
  • With reference to the non-limiting example 400′ of FIG. 4B, intent-based service configuration (at block 415) might include, without limitation, exclusion intent, inclusion intent, and goal-oriented intent, or the like. In some embodiments, the exclusion intent (as indicated at block 420) might refer to intent or requirement not to fulfill network service using the indicated types of resources (in this case, resources 435 within a set of resources 430), while the inclusion intent (as indicated at block 425) might refer to intent or requirement to fulfill service using the indicated types of resources (in this case, resources 440 within the set of resources 430), or the like.
  • Here, the exclusion and inclusion intents might modify the pool of resources that the fulfillment process might pick from by removing (i.e., excluding) or limiting (i.e., including) the resources that can be assigned to fulfill the service. Once this process is completed, the normal fulfillment process continues.
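  • A minimal sketch of how exclusion and inclusion intents might prune the candidate pool before the normal fulfillment process continues is shown below; the resource “type” labels are illustrative assumptions.

```python
from typing import Iterable, List, Optional, Set

def apply_intents(pool: Iterable[dict],
                  exclude_types: Optional[Set[str]] = None,
                  include_types: Optional[Set[str]] = None) -> List[dict]:
    """Exclusion intent removes resources of the indicated types; inclusion
    intent limits the pool to the indicated types. The surviving pool is then
    handed to the normal fulfillment process."""
    exclude_types = exclude_types or set()
    candidates = [r for r in pool if r["type"] not in exclude_types]
    if include_types:
        candidates = [r for r in candidates if r["type"] in include_types]
    return candidates

pool = [{"id": "r1", "type": "gpu"}, {"id": "r2", "type": "nvme"}, {"id": "r3", "type": "nic"}]
apply_intents(pool, exclude_types={"nic"})            # drops r3
apply_intents(pool, include_types={"gpu", "nvme"})    # keeps r1 and r2
```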
  • Referring to the non-limiting example 400″ of FIG. 4C, according to some embodiments, the goal-oriented intent might include a single goal (as indicated at block 445) or a multi-goal (as indicated at block 450). In some cases, the single goal might, for instance, provide a “priority” to the resources that are assigned within that service class. For example, the single goal might include a priority to require low delay. In some instances, the multi-goal might, for example, provide matrix priorities to the resource pool assignment based on a fast matrix recursion process, or the like. In some embodiments, with goal-oriented intent, the user might apply one or more goals to the engine that then performs a single or matrix recursion to identify the best resources to meet the intent, and either passes a candidate list to the fulfillment engineer or stripes the inventory for the specific choice being made. Subsequently, fulfillment might continue.
  • In some cases, the set of resources 430′ (as shown in FIG. 4C) might include resources 1 through 7 455. In one example, a single goal might provide, for instance, priority to resource 1 that is assigned within that service class (as depicted by the arrow between block 445 and resource 1 in FIG. 4C), or the like. In another example, a multi-goal might provide matrix priorities to the resource pool (including, without limitation, resources 2-4, or the like) that are assigned based on a fast matrix recursion process (as depicted by the arrows between block 450 and resources 2-4 in FIG. 4C), or the like.
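  • One simplified, non-limiting way to picture multi-goal prioritization is as a weighted scoring over per-criterion metrics for each resource, an intentionally reduced stand-in for the fast matrix recursion process described above; the weights and metric names are assumptions for illustration.

```python
from typing import Dict, List

def rank_resources(resources: Dict[str, Dict[str, float]],
                   weights: Dict[str, float]) -> List[str]:
    """Score each resource as a weighted sum over goal criteria (lower is better)
    and return resource identifiers ordered from best to worst."""
    def score(metrics: Dict[str, float]) -> float:
        return sum(weights.get(criterion, 0.0) * value
                   for criterion, value in metrics.items())
    return sorted(resources, key=lambda rid: score(resources[rid]))

resources = {
    "resource-2": {"delay_ns": 80, "hops": 2},
    "resource-3": {"delay_ns": 40, "hops": 4},
    "resource-4": {"delay_ns": 120, "hops": 1},
}
ranking = rank_resources(resources, weights={"delay_ns": 1.0, "hops": 10.0})
```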
  • FIGS. 5A-5D (collectively, “FIG. 5”) are flow diagrams illustrating a method 500 for implementing intent-based disaggregated and distributed composable infrastructure, in accordance with various embodiments.
  • While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method 500 illustrated by FIG. 5 can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments 100, 200, 300, 400, 400′, and 400″ of FIGS. 1, 2, 3, 4A, 4B, and 4C, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments 100, 200, 300, 400, 400′, and 400″ of FIGS. 1, 2, 3, 4A, 4B, and 4C, respectively (or components thereof), can operate according to the method 500 illustrated by FIG. 5 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 200, 300, 400, 400′, and 400″ of FIGS. 1, 2, 3, 4A, 4B, and 4C can each also operate according to other modes of operation and/or perform other suitable procedures.
  • In the non-limiting embodiment of FIG. 5A, method 500, at block 505, might comprise receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. At optional block 510, method 500 might comprise mapping, with the computing system, a plurality of network resources within the two or more first networks.
  • Method 500 might further comprise, at block 515, identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services (and, in some cases, based at least in part on the mapping of the plurality of network resources). Method 500 might further comprise establishing, with the computing system, one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources (block 520). In some cases, the one or more transport links might comprise at least one of one or more optical transport links, one or more network transport links, one or more wired transport links, or one or more wireless transport links, and/or the like (collectively, “network connectivity” or the like).
  • At block 525, method 500 might comprise deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like. Method 500, at block 530, might comprise configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state. Method 500 might comprise, at block 535, allocating, with the computing system, the identified two or more network resources for providing the requested network services.
  • Method 500, at optional block 540, might comprise determining, with an audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters. Based on a determination that at least one identified network resource among the identified two or more network resources fails to conform with the desired performance parameters within first predetermined thresholds or based on a determination that the determined characteristics of the at least one identified network resource fail to conform with the desired characteristics within second predetermined thresholds, method 500 might further comprise one of: reconfiguring, with the computing system, the at least one identified network resource to provide the desired characteristics and performance parameters (optional block 545); or reallocating, with the computing system, at least one other identified network resource among the identified two or more network resources for providing the requested network services (optional block 550).
  • Turning to FIG. 5B, deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like (at block 525) may comprise comparing, with the computing system, system clocks each associated with each of the identified two or more network resources (block 525 a); and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the system clocks (block 525 b). Alternatively, deriving, with the computing system, distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like (at block 525) may comprise comparing, with the computing system, two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources (block 525 c); and deriving, with the computing system, the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system (block 525 d).
  • With reference to FIG. 5C, simulating zero latency or near-zero latency between the identified two or more network resources (at block 530 a) might comprise using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources (optional block 555), based at least in part on the derived distributable synchronization state. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources (at block 530 b) might comprise using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources (optional block 560), based at least in part on the derived distributable synchronization state. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources (at block 530 c) might comprise utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources (optional block 565), based at least in part on the derived distributable synchronization state.
  • Referring to FIG. 5D, determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters (at block 540) might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit (optional block 570). Alternatively, or additionally, determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters (at block 540) might comprise determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified two or more network resources (optional block 575); comparing, with the audit engine, the measured one or more network performance metrics of each of the identified two or more network resources with the desired performance parameters (optional block 580); determining characteristics of each of the identified two or more network resources (optional block 585); and comparing, with the audit engine, the determined characteristics of each of the identified two or more network resources with the desired characteristics (optional block 590).
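  • A hedged, non-limiting sketch of the audit comparison at optional blocks 575-590 is shown below, assuming measured metrics and desired parameters share the same keys and that lower measured values are better; the tolerance parameter stands in for the “predetermined thresholds.”

```python
from typing import Dict

def conforms(measured: Dict[str, float],
             desired_max: Dict[str, float],
             threshold_pct: float = 0.0) -> bool:
    """Return True when every measured metric stays within its desired maximum,
    allowing an optional percentage tolerance (the 'predetermined threshold')."""
    for metric, limit in desired_max.items():
        allowed = limit * (1.0 + threshold_pct / 100.0)
        if measured.get(metric, float("inf")) > allowed:
            return False
    return True

# If the check fails, the orchestrator would reconfigure the resource or
# reallocate another identified resource, per optional blocks 545/550.
ok = conforms({"latency_ms": 4.8, "jitter_ms": 0.4},
              {"latency_ms": 5.0, "jitter_ms": 0.5},
              threshold_pct=5.0)
```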
  • Exemplary System and Hardware Implementation
  • FIG. 6 is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments. FIG. 6 provides a schematic illustration of one embodiment of a computer system 600 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., computing systems 105, 235, and 325, user devices 120 a-120 n, network resources 130, quality of service (“QoS”) test and validate server and/or audit engine 160, main hub 205 and 305, ring hubs 210 a-210 n, remote hubs 215 a-215 n, 310 a, and 310 b, universal customer premises equipment (“UCPEs”) 220 and 315 a, host/main 240, etc.), as described above. It should be noted that FIG. 6 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. FIG. 6, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • The computer or hardware system 600—which might represent an embodiment of the computer or hardware system (i.e., computing systems 105, 235, and 325, user devices 120 a-120 n, network resources 130, QoS test and validate server and/or audit engine 160, main hub 205 and 305, ring hubs 210 a-210 n, remote hubs 215 a-215 n, 310 a, and 310 b, UCPEs 220 and 315 a, host/main 240, etc.), described above with respect to FIGS. 1-5—is shown comprising hardware elements that can be electrically coupled via a bus 605 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 610, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 615, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 620, which can include, without limitation, a display device, a printer, and/or the like.
  • The computer or hardware system 600 may further include (and/or be in communication with) one or more storage devices 625, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
  • The computer or hardware system 600 might also include a communications subsystem 630, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 630 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 600 will further comprise a working memory 635, which can include a RAM or ROM device, as described above.
  • The computer or hardware system 600 also may comprise software elements, shown as being currently located within the working memory 635, including an operating system 640, device drivers, executable libraries, and/or other code, such as one or more application programs 645, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
  • A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 625 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 600. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system 600 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 600 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
  • As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 600) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 600 in response to processor 610 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 640 and/or other code, such as an application program 645) contained in the working memory 635. Such instructions may be read into the working memory 635 from another computer readable medium, such as one or more of the storage device(s) 625. Merely by way of example, execution of the sequences of instructions contained in the working memory 635 might cause the processor(s) 610 to perform one or more procedures of the methods described herein.
  • The terms “machine readable medium” and “computer readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer or hardware system 600, various computer readable media might be involved in providing instructions/code to processor(s) 610 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 625. Volatile media includes, without limitation, dynamic memory, such as the working memory 635. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 605, as well as the various components of the communication subsystem 630 (and/or the media by which the communications subsystem 630 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 610 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 600. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • The communications subsystem 630 (and/or components thereof) generally will receive the signals, and the bus 605 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 635, from which the processor(s) 610 retrieves and executes the instructions. The instructions received by the working memory 635 may optionally be stored on a storage device 625 either before or after execution by the processor(s) 610.
  • As noted above, a set of embodiments comprises methods and systems for implementing disaggregated composable infrastructure, and, more particularly, to methods, systems, and apparatuses for implementing intent-based disaggregated and distributed composable infrastructure. FIG. 7 illustrates a schematic diagram of a system 700 that can be used in accordance with one set of embodiments. The system 700 can include one or more user computers, user devices, or customer devices 705. A user computer, user device, or customer device 705 can be a general purpose personal computer (including, merely by way of example, desktop computers, tablet computers, laptop computers, handheld computers, and the like, running any appropriate operating system, several of which are available from vendors such as Apple, Microsoft Corp., and the like), cloud computing devices, a server(s), and/or a workstation computer(s) running any of a variety of commercially-available UNIX™ or UNIX-like operating systems. A user computer, user device, or customer device 705 can also have any of a variety of applications, including one or more applications configured to perform methods provided by various embodiments (as described above, for example), as well as one or more office applications, database client and/or server applications, and/or web browser applications. Alternatively, a user computer, user device, or customer device 705 can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network (e.g., the network(s) 710 described below) and/or of displaying and navigating web pages or other types of electronic documents. Although the exemplary system 700 is shown with two user computers, user devices, or customer devices 705, any number of user computers, user devices, or customer devices can be supported.
  • Certain embodiments operate in a networked environment, which can include a network(s) 710. The network(s) 710 can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available (and/or free or proprietary) protocols, including, without limitation, TCP/IP, SNA™, IPX™, AppleTalk™, and the like. Merely by way of example, the network(s) 710 (similar to network(s) 110, 125, 135 a-135 n, and 140 a-140 n of FIG. 1, or the like) can each include a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network might include an access network of the service provider (e.g., an Internet service provider (“ISP”)). In another embodiment, the network might include a core network of the service provider, and/or the Internet.
  • Embodiments can also include one or more server computers 715. Each of the server computers 715 may be configured with an operating system, including, without limitation, any of those discussed above, as well as any commercially (or freely) available server operating systems. Each of the servers 715 may also be running one or more applications, which can be configured to provide services to one or more clients 705 and/or other servers 715.
  • Merely by way of example, one of the servers 715 might be a data server, a web server, a cloud computing device(s), or the like, as described above. The data server might include (or be in communication with) a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers 705. The web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like. In some embodiments of the invention, the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers 705 to perform methods of the invention.
  • The server computers 715, in some embodiments, might include one or more application servers, which can be configured with one or more applications accessible by a client running on one or more of the client computers 705 and/or other servers 715. Merely by way of example, the server(s) 715 can be one or more general purpose computers capable of executing programs or scripts in response to the user computers 705 and/or other servers 715, including, without limitation, web applications (which might, in some cases, be configured to perform methods provided by various embodiments). Merely by way of example, a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™, or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming and/or scripting languages. The application server(s) can also include database servers, including, without limitation, those commercially available from Oracle™, Microsoft™, Sybase™, IBM™, and the like, which can process requests from clients (including, depending on the configuration, dedicated database clients, API clients, web browsers, etc.) running on a user computer, user device, or customer device 705 and/or another server 715. In some embodiments, an application server can perform one or more of the processes for implementing disaggregated composable infrastructure, and, more particularly, for implementing intent-based disaggregated and distributed composable infrastructure, as described in detail above. Data provided by an application server may be formatted as one or more web pages (comprising HTML, JavaScript, etc., for example) and/or may be forwarded to a user computer 705 via a web server (as described above, for example). Similarly, a web server might receive web page requests and/or input data from a user computer 705 and/or forward the web page requests and/or input data to an application server. In some cases, a web server may be integrated with an application server.
  • In accordance with further embodiments, one or more servers 715 can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement various disclosed methods, incorporated by an application running on a user computer 705 and/or another server 715. Alternatively, as those skilled in the art will appreciate, a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer, user device, or customer device 705 and/or server 715.
  • It should be noted that the functions described with respect to various servers herein (e.g., application server, database server, web server, file server, etc.) can be performed by a single server and/or a plurality of specialized servers, depending on implementation-specific needs and parameters.
  • In certain embodiments, the system can include one or more databases 720 a-720 n (collectively, “databases 720”). The location of each of the databases 720 is discretionary: merely by way of example, a database 720 a might reside on a storage medium local to (and/or resident in) a server 715 a (and/or a user computer, user device, or customer device 705). Alternatively, a database 720 n can be remote from any or all of the computers 705, 715, so long as it can be in communication (e.g., via the network 710) with one or more of these. In a particular set of embodiments, a database 720 can reside in a storage-area network (“SAN”) familiar to those skilled in the art. (Likewise, any necessary files for performing the functions attributed to the computers 705, 715 can be stored locally on the respective computer and/or remotely, as appropriate.) In one set of embodiments, the database 720 can be a relational database, such as an Oracle database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. The database might be controlled and/or maintained by a database server, as described above, for example.
  • According to some embodiments, system 700 might further comprise computing system 725 (similar to computing systems 105 of FIG. 1, or the like), quality of service (“QoS”) test and validate server or audit engine 730 (similar to QoS test and validate server or audit engine 160 of FIG. 1, or the like), one or more network resources 735 (similar to network resources 130 of FIG. 1, or the like), resource inventory database 740 (similar to resource inventory databases 145, 215, and 305 of FIGS. 1-3, or the like), intent metadata database 745 (similar to resource inventory databases 150 and 220 of FIGS. 1 and 2, or the like), and active inventory database 750 (similar to resource inventory databases 155, 235, and 320 of FIGS. 1-3, or the like).
  • In operation, computing system 725 might receive a request for network services from a customer (e.g., from user device 705 a or 705 b (which might correspond to user devices 120 a-120 n of FIG. 1, or the like)). The request for network services might comprise desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, or specific network for providing the requested network services.
  • The computing system 725 might analyze first metadata regarding resource attributes and characteristics of a plurality of unassigned network resources to identify one or more network resources among the plurality of unassigned network resources for providing the requested network services, the first metadata having been striped to entries of the plurality of unassigned network resources in a resource database (e.g., resource inventory database 740, or the like). Based on the analysis, the computing system 725 might allocate at least one identified network resource among the identified one or more network resources for providing the requested network services.
  • The computing system 725 might update a service database by adding or updating an entry in the service database (e.g., resource inventory database 740 or intent metadata database 745, or the like) with information indicating that the at least one identified network resource have been allocated for providing the requested network services, and might stripe the entry with second metadata (in some cases, in resource inventory database 740, intent metadata database 745, or active inventory database 750, or the like) indicative of the desired characteristics and performance parameters as comprised in the request for network services.
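  • Merely as a minimal, hypothetical sketch of allocating an identified resource and “striping” intent metadata onto the corresponding entries (the in-memory dictionaries, schema, and field names are assumptions used only to make the bookkeeping concrete):

```python
from typing import Dict

# Hypothetical in-memory stand-ins for the resource and service databases.
resource_db: Dict[str, dict] = {"r-101": {"type": "gpu", "allocated": False}}
service_db: Dict[str, dict] = {}

def allocate_and_stripe(service_id: str, resource_id: str, intent_metadata: dict) -> None:
    """Mark the identified resource as allocated, record the service entry, and
    stripe the entry with metadata describing the requested characteristics
    and performance parameters."""
    resource_db[resource_id]["allocated"] = True
    service_db[service_id] = {
        "resources": [resource_id],
        "intent_metadata": intent_metadata,  # striped metadata
    }

allocate_and_stripe("svc-1", "r-101", {"max_latency_ms": 5, "goal": "lowest_delay"})
```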
  • According to some embodiments, the desired performance parameters might include, without limitation, at least one of a maximum latency, a maximum jitter, a maximum packet loss, or a maximum number of hops, and/or the like. In some embodiments, the desired characteristics might include, but are not limited to, at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer, and/or the like.
  • Merely by way of example, in some cases, the audit engine 730 might determine whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters. In some instances, determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit. Alternatively, determining whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters might comprise determining, with the audit engine, whether each of the identified one or more network resources conforms with the desired characteristics and performance parameters, by: measuring one or more network performance metrics of each of the identified one or more network resources; comparing, with the audit engine, the measured one or more network performance metrics of each of the identified one or more network resources with the desired performance parameters; determining characteristics of each of the identified one or more network resources; and comparing, with the audit engine, the determined characteristics of each of the identified one or more network resources with the desired characteristics. Based on a determination that at least one identified network resource among the identified one or more network resources fails to conform with the desired performance parameters within first predetermined thresholds or based on a determination that the determined characteristics of the at least one identified network resource fail to conform with the desired characteristics within second predetermined thresholds, the computing system 725 might perform one of: reconfiguring the at least one identified network resource to provide the desired characteristics and performance parameters; or reallocating at least one other identified network resource among the identified one or more network resources for providing the requested network services.
  • Alternatively, or additionally, according to some embodiments, the computing system 725 might receive, over a network (e.g., at least one of service provider network(s) 710, or the like), a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services. The computing system 725 might identify two or more network resources (e.g., network resources 735 a-735 n, or the like) from two or more first networks (e.g., network(s) 710, or the like) capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services. The computing system 725 might establish one or more optical transport links (e.g., optical transport 755, or the like; depicted in FIG. 7 as long-dash lines, or the like) between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources. According to some embodiments, establishing the one or more optical transport links between the disaggregated and distributed identified two or more network resources might comprise utilizing light steered transport to establish the one or more optical transport links (e.g., optical transport 755) between the disaggregated and distributed identified two or more network resources 735 a-735 n. Although FIG. 7 shows the use of optical transport links, the various embodiments are not so limited, and other transport links may be used (e.g., network transport links, wired transport links, or wireless transport links, and/or the like). In some embodiments, the computing system 725 might derive distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links.
  • The computing system 725 might configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the derived distributable synchronization state. The computing system 725 might allocate the identified two or more network resources for providing the requested network services.
  • In some embodiments, the computing system 725 might map a plurality of network resources within the two or more first networks (e.g., network(s) 710, or the like). In some cases, identifying the two or more network resources might comprise identifying the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources. In some instances, at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources might be performed using at least one of one or more artificial intelligence (“AI”) systems (e.g., AI system 760, or the like), one or more machine learning systems, or one or more software defined network (“SDN”) systems, and/or the like.
  • Merely by way of example, in some cases, deriving the distributable synchronization state across at least one of the identified two or more network resources, a backplane of one or more of the two or more first networks, or the one or more transport links, and/or the like, might comprise the computing system 725 performing one of: comparing system clocks each associated with each of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the system clocks; or comparing two or more Qbit multi-states of two or more quantum timing systems associated with at least two of the identified two or more network resources, and deriving the distributable synchronization state based on any differences in the comparison of the two or more Qbit multi-states of each quantum timing system.
  • With respect to the latter set of embodiments, timing source and propagation are no longer predicated on dedicated links or on existing atomic structure, while still allowing for interfacing with atomic-based sources utilizing legacy network timing alignment. Quantum-based timing or quantum timing leverages the multi-state ability of multiple Q-bits to provide plesiochronous as well as isochronous timings. Here, plesiochronous timing may refer to almost, but not quite, perfectly synchronized events, systems, or signals, with significant instants occurring at nominally the same rate across plesiochronous events, systems, or signals. Isochronous timing may refer to events, systems, or signals in which any two corresponding transitions occur at regular or equal time intervals (i.e., where the time interval separating any two corresponding transitions is equal to the unit interval (or a multiple thereof), where phase may be arbitrary and may vary). In some embodiments, isochronous burst transmission may be implemented, where such transmission is capable of ordering traffic with or without the use of dedicated timing distribution facilities between devices or between geographic locations. In some instances, where the information-bearer channel rate is higher than either the input data signaling rate or the output data signaling rate, isochronous burst transmission may be performed by interrupting, at controlled intervals, the data stream being transmitted. In some cases, comparator software running with (or on) the compute structure may be used to compare two or more Q-bit multi-states with a local oscillator to derive distributable synchronization across a backplane of a network(s) and/or across optical transmission networks, or the like. Accordingly, quantum timing may allow for distributed timing as well as the ability to flex time equipment buffers and the network(s) to speed up or slow down the flow of traffic.
  • According to some embodiments, simulating zero latency or near-zero latency between the identified two or more network resources might comprise using a re-timer (e.g., re-timer 765, or the like) to simulate zero latency or near-zero latency between the identified two or more network resources 735 a-735 n, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-timer. Alternatively, or additionally, simulating zero distance or near-zero distance between the identified two or more network resources might comprise using a re-driver or a repeater (e.g., re-driver 770, or the like) to simulate zero distance or near-zero distance between the identified two or more network resources 735 a-735 n, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the re-driver or repeater. Alternatively, or additionally, simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources might comprise utilizing a buffer (not shown in FIG. 7) with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources 735 a-735 n, based at least in part on the derived distributable synchronization state. In the case that quantum timing is implemented, such may be implemented using a quantum timing system(s) disposed on (or communicatively coupled to) the buffer.
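As one way to picture the flexible-capacity buffer, the sketch below holds each resource's traffic just long enough to cancel its synchronization offset so that the resources appear co-located; the FlexBuffer class and its methods are hypothetical, not part of the specification:

```python
# Hypothetical sketch of a flexible ("flex") buffer: frames from each resource are
# held until the slowest resource's offset has elapsed, so traffic from all
# resources emerges aligned, simulating near-zero relative latency/distance.
import heapq

class FlexBuffer:
    def __init__(self, sync_state):
        # sync_state: resource id -> offset (seconds), e.g. from derive_sync_state()
        self.sync_state = sync_state
        self._queue = []

    def enqueue(self, resource_id, arrival_time, frame):
        max_offset = max(self.sync_state.values())
        # Delay each frame by the gap between this resource and the slowest one.
        release_time = arrival_time + (max_offset - self.sync_state[resource_id])
        heapq.heappush(self._queue, (release_time, resource_id, frame))

    def release_ready(self, now):
        released = []
        while self._queue and self._queue[0][0] <= now:
            released.append(heapq.heappop(self._queue))
        return released

buf = FlexBuffer({"nic-1": 0.000025, "gpu-2": 0.0})
buf.enqueue("gpu-2", arrival_time=0.0, frame=b"A")
buf.enqueue("nic-1", arrival_time=0.0, frame=b"B")
print(buf.release_ready(now=0.000030))  # both frames emerge together, aligned
```

The same buffering idea could sit behind a re-timer or re-driver, with the quantum timing system supplying the offsets instead of compared system clocks.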
  • In some embodiments, the identified two or more network resources might include, without limitation, peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices (e.g., non-volatile memory (“NVM”) devices, NVM express (“NVMe”) devices, optical storage devices, magnetic storage devices, and/or the like), and/or the like. Alternatively, or additionally, the identified two or more network resources might include, but are not limited to, two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
  • According to some embodiments, allocating the two or more network resources from the two or more first networks for providing the requested network services might comprise providing the two or more first networks with access over the one or more optical transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters. In some instances, providing access to the one or more VNFs might comprise bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
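A hedged sketch of such VNF bursting over an API is shown below; the orchestrator endpoint, payload fields, and burst_vnf helper are placeholders rather than an API defined by the specification (the HTTP call uses the widely available requests library):

```python
# Hypothetical sketch: "bursting" a VNF to an NFV entity at one of the first
# networks via an orchestration API. Endpoint, fields, and token are illustrative.
import requests

def burst_vnf(orchestrator_url, nfv_entity_id, vnf_descriptor, token):
    response = requests.post(
        f"{orchestrator_url}/nfv-entities/{nfv_entity_id}/vnfs",
        json={
            "descriptor": vnf_descriptor,  # VNF image/flavor to instantiate
            "performance": {"max_latency_ms": 5, "min_bandwidth_mbps": 1000},
        },
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g., identifiers of the instantiated VNF(s)

# Example use (placeholder values):
# burst_vnf("https://orchestrator.example", "nfv-east-1", "vfw-small", "TOKEN")
```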
  • These and other functions of the system 700 (and its components) are described in greater detail above with respect to FIGS. 1-5.
  • While certain features and aspects have been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware and/or software configuration. Similarly, while certain functionality is ascribed to certain system components, unless the context dictates otherwise, this functionality can be distributed among various other system components in accordance with the several embodiments.
  • Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with—or without—certain features for ease of description and to illustrate exemplary aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several exemplary embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims (20)

What is claimed is:
1. A method, comprising:
receiving, with a computing system over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services;
identifying, with the computing system, two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services;
establishing, with the computing system, one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources;
configuring, with the computing system, at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services; and
allocating, with the computing system, the identified two or more network resources for providing the requested network services.
2. The method of claim 1, wherein the computing system comprises one of a path computation engine, a data flow manager, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system.
3. The method of claim 1, wherein the one or more transport links comprise at least one of one or more optical transport links, one or more network transport links, or one or more wired transport links.
4. The method of claim 1, wherein simulating zero latency or near-zero latency between the identified two or more network resources comprises using a re-timer to simulate zero latency or near-zero latency between the identified two or more network resources.
5. The method of claim 1, wherein simulating zero distance or near-zero distance between the identified two or more network resources comprises using a re-driver or a repeater to simulate zero distance or near-zero distance between the identified two or more network resources.
6. The method of claim 1, wherein simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources comprises utilizing a buffer with flexible buffer capacity to simulate zero latency or near-zero latency between the identified two or more network resources or to simulate zero distance or near-zero distance between the identified two or more network resources.
7. The method of claim 1, wherein establishing the one or more transport links between the disaggregated and distributed identified two or more network resources comprises utilizing light steered transport to establish the one or more transport links between the disaggregated and distributed identified two or more network resources.
8. The method of claim 1, further comprising:
mapping, with the computing system, a plurality of network resources within the two or more first networks;
wherein identifying the two or more network resources comprises identifying, with the computing system, the two or more network resources from the two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services and based at least in part on the mapping of the plurality of network resources.
9. The method of claim 8, wherein at least one of identifying the two or more network resources, mapping the plurality of network resources, or configuring the at least one network resource of the identified two or more network resources is performed using at least one of one or more artificial intelligence (“AI”) systems, one or more machine learning systems, or one or more software defined network (“SDN”) systems.
10. The method of claim 1, wherein the identified two or more network resources comprise peripheral component interconnect (“PCI”)-based network cards each comprising one or more network interface cards (“NICs”), one or more smart NICs, one or more graphics processing units (“GPUs”), or one or more storage devices.
11. The method of claim 1, wherein the identified two or more network resources comprise two or more generic or single-purpose network devices in place of specialized or all-purpose network devices.
12. The method of claim 1, wherein the desired characteristics comprise at least one of requirement for network equipment to be geophysically proximate to the customer, requirement for network equipment to be located within a geophysical location, requirement to avoid routing network traffic through a geophysical location, requirement to route network traffic through a geophysical location, requirement to exclude a first type of network resources from fulfillment of the requested network services, requirement to include a second type of network resources for fulfillment of the requested network services, requirement to fulfill the requested network services based on a single goal indicated by the customer, or requirement to fulfill the requested network services based on multi-goals indicated by the customer.
13. The method of claim 1, wherein the desired performance parameters comprise at least one of a maximum latency, a maximum jitter, a maximum packet loss, a maximum number of hops, performance parameters defined in a service level agreement (“SLA”) associated with the customer or performance parameters defined in terms of natural resource usage, quality of service (“QoS”) measurement data, platform resource data and metrics, service usage data, topology and reference data, historical network data, network usage trend data, or one or more of information regarding at least one of latency, jitter, bandwidth, packet loss, nodal connectivity, compute resources, storage resources, memory capacity, routing, operations support systems (“OSS”), or business support systems (“BSS”) or information regarding at least one of fault, configuration, accounting, performance, or security (“FCAPS”).
14. The method of claim 1, wherein allocating the two or more network resources from the two or more first networks for providing the requested network services comprises providing the two or more first networks with access over the one or more transport links to one or more virtual network functions (“VNFs”) for use by the customer, the one or more VNFs providing the two or more network resources having the desired performance parameters.
15. The method of claim 14, wherein providing access to the one or more VNFs comprises bursting, using an application programming interface (“API”), one or more VNFs to one or more network functions virtualization (“NFV”) entities at the two or more first networks.
16. The method of claim 1, further comprising:
determining, with an audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters.
17. The method of claim 16, wherein determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters comprises determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters on a periodic basis or in response to a request to perform an audit.
18. The method of claim 16, wherein determining whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters comprises determining, with the audit engine, whether each of the identified two or more network resources conforms with the desired characteristics and performance parameters, by:
measuring one or more network performance metrics of each of the identified two or more network resources;
comparing, with the audit engine, the measured one or more network performance metrics of each of the identified two or more network resources with the desired performance parameters;
determining characteristics of each of the identified two or more network resources; and
comparing, with the audit engine, the determined characteristics of each of the identified two or more network resources with the desired characteristics.
19. An apparatus, comprising:
at least one processor; and
a non-transitory computer readable medium communicatively coupled to the at least one processor, the non-transitory computer readable medium having stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the apparatus to:
receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services;
identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services;
establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources;
configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services; and
allocate the identified two or more network resources for providing the requested network services.
20. A system, comprising:
a computing system, comprising:
at least one first processor; and
a first non-transitory computer readable medium communicatively coupled to the at least one first processor, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to:
receive, over a network, a request for network services from a customer, the request for network services comprising desired characteristics and performance parameters for the requested network services, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing the requested network services;
identify two or more network resources from two or more first networks capable of providing the requested network services, based at least in part on the desired characteristics and performance parameters for the requested network services;
establish one or more transport links between the identified two or more network resources, the identified two or more network resources being disaggregated and distributed network resources;
configure at least one network resource of the identified two or more network resources to perform at least one of simulating zero latency or near-zero latency between the identified two or more network resources or simulating zero distance or near-zero distance between the identified two or more network resources, based at least in part on the desired characteristics and performance parameters for the requested network services; and
allocate the identified two or more network resources for providing the requested network services.
US17/184,879 2020-02-25 2021-02-25 Disaggregated & Distributed Composable Infrastructure Pending US20210266368A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/184,879 US20210266368A1 (en) 2020-02-25 2021-02-25 Disaggregated & Distributed Composable Infrastructure

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202062981308P 2020-02-25 2020-02-25
US202163142109P 2021-01-27 2021-01-27
US17/184,879 US20210266368A1 (en) 2020-02-25 2021-02-25 Disaggregated & Distributed Composable Infrastructure

Publications (1)

Publication Number Publication Date
US20210266368A1 true US20210266368A1 (en) 2021-08-26

Family

ID=77366583

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/184,884 Active US11425224B2 (en) 2020-02-25 2021-02-25 Disaggregated and distributed composable infrastructure
US17/184,879 Pending US20210266368A1 (en) 2020-02-25 2021-02-25 Disaggregated & Distributed Composable Infrastructure
US17/891,775 Pending US20220417345A1 (en) 2020-02-25 2022-08-19 Disaggregated & distributed composable infrastructure

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/184,884 Active US11425224B2 (en) 2020-02-25 2021-02-25 Disaggregated and distributed composable infrastructure

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/891,775 Pending US20220417345A1 (en) 2020-02-25 2022-08-19 Disaggregated & distributed composable infrastructure

Country Status (1)

Country Link
US (3) US11425224B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11425224B2 (en) * 2020-02-25 2022-08-23 Level 3 Communications, Llc Disaggregated and distributed composable infrastructure
US11218594B1 (en) * 2020-08-11 2022-01-04 Genesys Telecommunications Laboratories, Inc. System and method for creating bots for automating first party touchpoints

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6799202B1 (en) * 1999-12-16 2004-09-28 Hachiro Kawaii Federated operating system for a server
US6944187B1 (en) * 2000-08-09 2005-09-13 Alcatel Canada Inc. Feature implementation in a real time stamp distribution system
US7007106B1 (en) * 2001-05-22 2006-02-28 Rockwell Automation Technologies, Inc. Protocol and method for multi-chassis configurable time synchronization
US9998320B2 (en) * 2014-04-03 2018-06-12 Centurylink Intellectual Property Llc Customer environment network functions virtualization (NFV)

Patent Citations (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6577418B1 (en) * 1999-11-04 2003-06-10 International Business Machines Corporation Optical internet protocol switch and method therefor
US20050021632A1 (en) * 2003-06-18 2005-01-27 Rachlin Elliott H. Method and apparatus for disambiguating transmit-by-exception telemetry from a multi-path, multi-tier network
US7424225B1 (en) * 2003-11-17 2008-09-09 Bbn Technologies Corp. Systems and methods for implementing contention-based optical channel access
US7522628B1 (en) * 2003-11-17 2009-04-21 Bbn Technologies Corp. Systems and methods for implementing coordinated optical channel access
US20090214216A1 (en) * 2007-11-30 2009-08-27 Miniscalco William J Space-time division multiple-access laser communications system
US20110002429A1 (en) * 2008-02-29 2011-01-06 Audinate Pty Ltd Network devices, methods and/or systems for use in a media network
US20100299366A1 (en) * 2009-05-20 2010-11-25 Sap Ag Systems and Methods for Generating Cloud Computing Landscapes
US20110158658A1 (en) * 2009-12-08 2011-06-30 Vello Systems, Inc. Optical Subchannel-Based Cyclical Filter Architecture
US20120257661A1 (en) * 2011-04-06 2012-10-11 Murphy John J Shielding flaw detection and measurement in quadrature amplitude modulated cable telecommunications environment
US20130273839A1 (en) * 2012-04-11 2013-10-17 The Boeing Company Method and Apparatus for Providing a Communications Pathway with High Reliability
US20140059226A1 (en) * 2012-08-21 2014-02-27 Rackspace Us, Inc. Multi-Level Cloud Computing System
US20140270749A1 (en) * 2013-03-15 2014-09-18 Raytheon Company Free-space optical network with agile beam-based protection switching
US20160072580A1 (en) * 2013-03-25 2016-03-10 Nokia Technologies Oy Optical link establishment
US20150207586A1 (en) * 2014-01-17 2015-07-23 Telefonaktiebolaget L M Ericsson (Publ) System and methods for optical lambda flow steering
US20150244458A1 (en) * 2014-02-25 2015-08-27 Google Inc. Optical Communication Terminal
US20150310898A1 (en) * 2014-04-23 2015-10-29 Diablo Technologies Inc. System and method for providing a configurable timing control for a memory system
US20150317169A1 (en) * 2014-05-04 2015-11-05 Midfin Systems Inc. Constructing and operating high-performance unified compute infrastructure across geo-distributed datacenters
US20150381426A1 (en) * 2014-06-30 2015-12-31 Emc Corporation Dynamically composed compute nodes comprising disaggregated components
US20160173964A1 (en) * 2014-12-11 2016-06-16 Alcatel-Lucent Usa Inc. Hybrid optical switch for software-defined networking
US10069693B1 (en) * 2014-12-11 2018-09-04 Amazon Technologies, Inc. Distributed resource allocation
US9838119B1 (en) * 2015-01-29 2017-12-05 Google Llc Automatically steered optical wireless communication for mobile devices
US20170093750A1 (en) * 2015-09-28 2017-03-30 Centurylink Intellectual Property Llc Intent-Based Services Orchestration
US20180123974A1 (en) * 2015-09-28 2018-05-03 Centurylink Intellectual Property Llc Intent-Based Services Orchestration
US20190230047A1 (en) * 2015-09-28 2019-07-25 Centurylink Intellectual Property Llc Intent-Based Services Orchestration
US20170195230A1 (en) * 2015-12-31 2017-07-06 William Carson McCormick Methods and systems for transport sdn traffic engineering using dual variables
US20190124669A1 (en) * 2016-02-03 2019-04-25 Zte Corporation Resource application and allocation method, ue, network control unit, and storage medium
US20170264981A1 (en) * 2016-03-09 2017-09-14 ADVA Optical Networking Sp. z o.o. Method and apparatus for performing an automatic bandwidth management in a communication network
US20170359735A1 (en) * 2016-06-14 2017-12-14 Hughes Network Systems, Llc Automated network diagnostic techniques
US20180198738A1 (en) * 2017-01-10 2018-07-12 Netspeed Systems, Inc. Buffer Sizing of a NoC Through Machine Learning
US20190163030A1 (en) * 2017-11-24 2019-05-30 Tesat-Spacecom Gmbh & Co. Kg Beam Orientation In Unidirectional Optical Communication Systems
US20190303759A1 (en) * 2018-03-27 2019-10-03 Nvidia Corporation Training, testing, and verifying autonomous machines using simulated environments
US10915361B1 (en) * 2018-04-30 2021-02-09 Amazon Technologies, Inc. Dynamic capacity buffers
US20200028927A1 (en) * 2018-07-19 2020-01-23 Verizon Digital Media Services Inc. Hybrid pull and push based streaming
US20200169857A1 (en) * 2018-11-28 2020-05-28 Verizon Patent And Licensing Inc. Method and system for intelligent routing for mobile edge computing
US20200394183A1 (en) * 2019-06-12 2020-12-17 Subramanya R. Jois System and method of executing, confirming and storing a transaction in a serverless decentralized node network
US20200413107A1 (en) * 2019-06-27 2020-12-31 Infrared5, Inc. Systems and methods for extraterrestrial streaming
US20210266376A1 (en) * 2020-02-25 2021-08-26 Level 3 Communications, Llc Disaggregated & Distributed Composable Infrastructure
US20210266236A1 (en) * 2020-02-25 2021-08-26 Level 3 Communications, Llc Intent-Based Multi-Tiered Orchestration and Automation
US20210274512A1 (en) * 2020-02-28 2021-09-02 At&T Intellectual Property I, L.P. Recalibrating resource profiles for network slices in a 5g or other next generation wireless network
US20220067333A1 (en) * 2020-08-31 2022-03-03 Teledyne Lecroy, Inc. Method and apparatus for simultaneous protocol and physical layer testing
US20220070255A1 (en) * 2020-09-01 2022-03-03 International Business Machines Corporation Data transmission routing based on replication path capability
US11258515B1 (en) * 2020-11-20 2022-02-22 Lockheed Martin Corporation Laser communication link ranging and timing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220197681A1 (en) * 2020-12-22 2022-06-23 Reliance Jio Infocomm Usa, Inc. Intelligent data plane acceleration by offloading to distributed smart network interfaces
US11645104B2 (en) * 2020-12-22 2023-05-09 Reliance Jio Infocomm Usa, Inc. Intelligent data plane acceleration by offloading to distributed smart network interfaces
US20230251893A1 (en) * 2020-12-22 2023-08-10 Reliance Jio Infocomm Usa, Inc. Intelligent data plane acceleration by offloading to distributed smart network interfaces

Also Published As

Publication number Publication date
US11425224B2 (en) 2022-08-23
US20220417345A1 (en) 2022-12-29
US20210266376A1 (en) 2021-08-26

Similar Documents

Publication Publication Date Title
US10673777B2 (en) Intent-based services orchestration
US11425224B2 (en) Disaggregated and distributed composable infrastructure
US11442764B2 (en) Optimizing the deployment of virtual resources and automating post-deployment actions in a cloud environment
US10558483B2 (en) Optimal dynamic placement of virtual machines in geographically distributed cloud data centers
US11637790B2 (en) Intent-based orchestration using network parsimony trees
US20150113144A1 (en) Virtual resource placement for cloud-based applications and solutions
US20180276109A1 (en) Distributed system test device
US10250488B2 (en) Link aggregation management with respect to a shared pool of configurable computing resources
US9191330B2 (en) Path selection for network service requests
US11722371B2 (en) Utilizing unstructured data in self-organized networks
US10862822B2 (en) Intent-based service configuration, service conformance, and service auditing
US10171349B2 (en) Packet forwarding for quality of service delivery
CA3208382A1 (en) Network capacity planning systems and methods
JP2022546672A (en) Distributed system deployment
US9912563B2 (en) Traffic engineering of cloud services
US11210156B1 (en) Intelligent distributed tracing
US11778053B1 (en) Fault-tolerant function placement for edge computing
US11973842B2 (en) Service status prediction based transaction failure avoidance
US20230409419A1 (en) Techniques for controlling log rate using policy

Legal Events

Date Code Title Description
AS Assignment

Owner name: LEVEL 3 COMMUNICATIONS, LLC, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCBRIDE, KEVIN M.;SUTHERLAND, JAMES E.;MOSS, FRANK;AND OTHERS;SIGNING DATES FROM 20210216 TO 20210224;REEL/FRAME:055416/0253

AS Assignment

Owner name: LEVEL 3 COMMUNICATIONS, LLC, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOLLARD, MITCH;REEL/FRAME:055595/0736

Effective date: 20210216

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED