US12224911B2 - Enhanced network automation - Google Patents
Enhanced network automation
- Publication number
- US12224911B2 (application US18/455,409)
- Authority
- US
- United States
- Prior art keywords
- communication network
- performance
- devices
- topology
- network devices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/50—Testing arrangements
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
Definitions
- Embodiments of the present invention generally relate to systems and methods for automating changes to and testing of communications networks.
- Some communications networks use a single-vendor, homogeneous edge computing infrastructure in which adding or modifying a network device stack may require physically moving or re-cabling network and compute hardware. Configuring and deploying network devices may require connecting devices of one vendor with network devices of another vendor and performing various tests, but the physical connecting and configuring of the devices may be time-consuming and inefficient.
- a method may include identifying templates defining respective communication network topologies defining network devices, connections between the network devices, roles associated with the network devices, and performance tests for the communication network topologies.
- the method may include selecting a first template of the templates.
- the method may include instantiating, based on the selection of the first template, an instance associated with generating a first communication network topology by establishing first connections between first network devices based on the first communication network topology and first roles associated with first network devices of the first communication network topology.
- the method may include generating performance test results for the first communication network topology based on performance of first performance tests defined by the first template, wherein first test thresholds of the first performance tests are based on a machine learning model trained based on the communication network topologies and the performance tests.
- the method may include modifying, using the machine learning model, the first test thresholds based on the performance test results.
- a system may include one or more devices with processors and memory to identify templates defining respective communication network topologies defining network devices, connections between the network devices, roles associated with the network devices, and performance tests for the communication network topologies.
- the system may select a first template of the templates.
- the system may instantiate, based on the selection of the first template, an instance associated with generating a first communication network topology by establishing first connections between first network devices based on the first communication network topology and first roles associated with first network devices of the first communication network topology.
- the system may generate performance test results for the first communication network topology based on performance of first performance tests defined by the first template, wherein first test thresholds of the first performance tests are based on a machine learning model trained based on the communication network topologies and the performance tests.
- the system may modify, using the machine learning model, the first test thresholds based on the performance test results.
- a device may include one or more processors and memory to identify templates defining respective communication network topologies defining network devices, connections between the network devices, roles associated with the network devices, and performance tests for the communication network topologies.
- the device may select a first template of the templates.
- the device may instantiate, based on the selection of the first template, an instance associated with generating a first communication network topology by establishing first connections between first network devices based on the first communication network topology and first roles associated with first network devices of the first communication network topology.
- the device may generate performance test results for the first communication network topology based on performance of first performance tests defined by the first template, wherein first test thresholds of the first performance tests are based on a machine learning model trained based on the communication network topologies and the performance tests.
- the device may modify, using the machine learning model, the first test thresholds based on the performance test results.
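The method, system, and device recited above share one flow: identify templates, select one, instantiate a topology from it, run the template's performance tests against ML-derived thresholds, and refine the thresholds from the results. A minimal Python sketch of that data model follows; all names and values are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class PerformanceTest:
    name: str
    threshold: float  # baseline set by the ML model (assumed scalar for the sketch)

@dataclass
class Template:
    name: str
    devices: list      # device hostnames
    roles: dict        # hostname -> role (e.g., "spine", "host-leaf")
    connections: list  # (hostname_a, hostname_b) pairs
    tests: list        # PerformanceTest instances

def instantiate(template):
    """Build a topology instance: roles and connections come from the template."""
    return {
        "topology": template.name,
        "roles": dict(template.roles),
        "links": list(template.connections),
    }

def run_tests(template, measure):
    """Return pass/fail results; measure(test_name) yields an observed metric."""
    return {t.name: measure(t.name) <= t.threshold for t in template.tests}

# Example: a two-device mixed-vendor template (hypothetical values).
tpl = Template(
    name="leaf-spine-mixed",
    devices=["spine1", "leaf1"],
    roles={"spine1": "spine", "leaf1": "host-leaf"},
    connections=[("spine1", "leaf1")],
    tests=[PerformanceTest("latency_ms", threshold=5.0)],
)
instance = instantiate(tpl)
results = run_tests(tpl, measure=lambda name: 3.2)  # observed 3.2 ms latency
```

Selecting a different template would swap in a different set of devices, roles, connections, and tests without the user specifying each item individually.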
- FIG. 1 illustrates an example process for selecting and generating a communications network topology in accordance with one embodiment.
- FIG. 2 illustrates an example user interface for selecting and generating a communications network topology in accordance with one embodiment.
- FIG. 3 illustrates an example user interface for testing a communications network topology in accordance with one embodiment.
- FIG. 4 illustrates an example user interface for testing a communications network topology in accordance with one embodiment.
- FIG. 5 is a flow chart of an example process for automating changes to and testing of communications networks in accordance with one embodiment.
- FIG. 6 is a diagram illustrating an example of a computing system that may be used in implementing embodiments of the present disclosure.
- aspects of the present disclosure involve systems, methods, and the like, for automating changes to and testing of communications networks.
- Telecommunications networks may provide many services to customers or devices connected to the network, including transmission of communications between network devices, network services, remote computing environments, cloud services (such as storage services, networking services, compute services, etc.), and the like.
- Such telecommunications networks generally include interconnected devices and/or components that are configured to communicate with each other and/or customer devices to provide access to the available services from the network.
- configuration of devices and interconnections of the network requires a network engineer to remotely or locally access network components and manually configure settings, ports, operating systems, and the like to enable the network to provide services to customers.
- the configuration of network devices may involve multiple network administrators and other groups manually configuring the devices.
- configuration of some network services may require a first network administrator to log into one or more components associated with the services separately after installation of the components into the network and provide one or more inputs via a workstation or other computing device to configure the components according to a service plan as a single step in the overall service configuration process. The first network administrator may then notify another group or administrator of the completion of a step in the configuration process so that the next step in the process may be executed by the second network administrator, and so on.
- Such a process can be time consuming, require steps or acts from multiple network groups, and includes multiple potential points of delay or errors that must be identified and corrected before the network service is available to the customer.
- to deploy or modify a network topology (e.g., a full or mixed network device stack), the user may need to physically move and re-cable the network hardware (e.g., connect devices), update firmware and device configurations, and test the performance of the topology.
- for example, a network topology may require manually plugging devices of one vendor into devices of another vendor to test the devices and the performance of the topology. Such connecting and testing may be inefficient.
- One aspect of the present disclosure is to automate the hardware infrastructure, including dedicated connection switching hardware, allowing for more efficient moving of network hardware topologies without having to physically move or re-cable network and compute hardware.
- a user may be able to generate a set of network topologies based on existing set-ups, and then test the topologies and make adjustments to their connections and configurations to ensure quality of service compliance.
- One aspect of the present disclosure is to automate network topology changes.
- users may be able to select a template to implement as a topology.
- the templates also may define tests to be performed, layers, host leafs, optical switches, interface connections, and the like for various topologies that, when selected, may result in the automatic configuration and testing of a topology.
- a topology may have a stack that includes only devices of a single vendor, but a user may want to implement the same topology with devices of a different vendor.
- the templates may allow for testing the topology and its configurations with the devices of the different vendor (e.g., as the topology and its configuration for one set of network devices may not perform exactly the same for a different set of network devices). Rather than having to select and add individual devices and determine and implement their configurations, selection of a template allows a user to select a pre-defined topology configuration and set of tests for the topology.
- One aspect of the present disclosure is to automate firmware versions and configuration changes. Another aspect of the present disclosure is to automate test cases, including functional tests, performance tests, and the like.
- the templates may define the test criteria for different topologies and device types.
- Another aspect of the disclosure is to automate and store the baseline results of topology tests. Network equipment, servers, services running on servers, and the like, all may be tested.
- Machine learning models may set performance baselines based on test cases and performance results, allowing for determinations of system performance at a current time compared to past performance, reasons for differences, and recommendations for adjustments. For example, when a generated topology meets performance criteria defined by the templates, the topology may be implemented. However, when a generated topology does not meet performance criteria defined by the templates, the machine learning models may generate recommendations for adjusting connections and/or configurations. The machine learning models may adjust performance baselines based on performance results so that performance testing results are more likely to be indicative of performance once a topology is implemented. For example, the machine learning models may set a performance baseline, compare the performance results to the performance baseline, and adjust the performance baseline based on how much the performance results vary from the performance baseline.
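The baseline-adjustment behavior described above (move the baseline based on how much results vary from it) could be realized with something as simple as an exponential moving average. The following sketch is an assumption about one way to do this; the smoothing factor is an assumed value, not one from the patent.

```python
def adjust_baseline(baseline, observed, alpha=0.2):
    """Move the baseline toward the observed result; alpha controls how fast
    the baseline tracks new measurements (an assumed smoothing factor)."""
    return (1 - alpha) * baseline + alpha * observed

baseline = 5.0                    # e.g., a latency baseline in ms
for observed in (6.0, 6.0, 6.0):  # results consistently above the baseline
    baseline = adjust_baseline(baseline, observed)
# the baseline drifts toward 6.0: 5.2, then 5.36, then 5.488
```

With this kind of rule, a baseline that repeatedly disagrees with real performance results converges toward them, so later test outcomes are more likely to be indicative of post-deployment performance.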
- One aspect of the disclosure is to generate a zero-touch process in which selecting a topology generates an instance that may define the specific connections between devices (e.g., how each interface of a device connects to ports of the devices).
- U.S. Patent Application Pub. No. 2022/0038340 to Dreyer et al. is hereby incorporated by reference in its entirety.
- One aspect of the disclosure is to provide a user interface with which to select spines, core layers, host leafs, switches, and the like to include in a network topology.
- the devices may be from a single vendor or multiple vendors (e.g., mixed).
- the user interface may allow users to add network devices to a topology and configure a device map file (e.g., a .csv file) based on templates for existing topologies.
- a template may define map file attributes that a user may accept or modify, including a device hostname, a device identifier, a user identifier (e.g., list of registered users), and the like.
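A device map file of this kind might look like the following; the column names mirror the attributes listed above (hostname, device identifier, user identifier) but are otherwise assumptions, as is the parsing helper.

```python
import csv
import io

# Hypothetical map-file contents; the patent specifies a .csv file but not
# its exact layout, so these columns and values are illustrative only.
MAP_CSV = """hostname,device_id,user_id
leaf1.example.net,dev-001,alice
spine1.example.net,dev-002,bob
"""

def load_device_map(text):
    """Parse the map file into one dict per device row."""
    return list(csv.DictReader(io.StringIO(text)))

devices = load_device_map(MAP_CSV)
```

A user interface could pre-populate such rows from a template and let the user accept or modify each attribute before the topology is instantiated.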
- the user interface may present the hostname of any network device in a topology, its operating system, management address, validation status, connections, performance metrics, and the like.
- the user interface may present any network device's interfaces and their hostnames, circuit identifiers, and the like.
- the user interface may allow a user to select configurations to apply, such as available port configurations, bare metal server chassis nodes, storage grid administrative nodes, unused channelized ports, and the like.
- FIG. 1 illustrates an example process 100 for selecting and generating a communications network topology in accordance with one embodiment.
- a device 102 may present a user interface that allows a user to select from among multiple network devices (e.g., device 1, . . . , device N as shown) from multiple vendors/manufacturers to be included in a network topology.
- the devices in the network topology may include spines, core devices, host leaf devices, switches, and the like.
- the user may select which devices may be used for the different types of devices (e.g., device roles) in the network topology.
- the network topology may be based on a template of multiple templates that may be generated (e.g., based on currently used network topologies, commonly used network topologies, preferred network topologies, etc.).
- the templates may define the devices and their roles, connections, configurations, tests and test parameters/criteria, and the like.
- the device 102 may instantiate an instance that automates the generation of the topology, including the device arrangement, connections, configurations, and the like, and that performs testing on the topology to determine whether the selected devices, roles, and configuration of the topology satisfy performance criteria.
- the baselines (e.g., thresholds) for the testing parameters/criteria may be set based on a machine learning model (e.g., the ML model 611 of FIG. 6 ), which may be trained based on test cases and performance results, and may update the baselines based on actual performance test results for a topology.
- a topology defined by a template may include devices all of one manufacturer/vendor, or of mixed manufacturers/vendors.
- the topology generated by the user may include devices of one or more different manufacturers/vendors, so the performance results may not be the same, as not all devices use the same configurations or connect in the same manner.
- an existing network topology may be moved without physically moving or re-cabling the network and compute hardware.
- the template may allow for topology changes, which may result in new templates (e.g., defining new topologies and/or performance test criteria/parameters).
- the device 102 may present results of the performance tests (e.g., as shown in FIGS. 3 and 4 ).
- Other outputs of the machine learning may include indications of performance at a current time, comparison of performance results with previous performance results, explanations for performance results, and recommendations (e.g., regarding device types, roles, connections, configurations, and the like) to improve performance of the topology.
- the automated system may allow for switching between topologies and testing instances.
- when a topology uses devices all of one manufacturer and a user wants to test a topology with devices all of another manufacturer, the user may want to validate the first topology to identify any performance issues.
- a user generating a topology of devices of one manufacturer may test a topology using devices of another manufacturer without having to physically connect and configure the devices.
- FIG. 2 illustrates an example user interface 200 for selecting and generating a communications network topology in accordance with one embodiment.
- the device 102 shows an interface 200 that allows a user to select from among multiple network devices.
- rather than requiring the intermediate step of selecting the available devices (e.g., device 1, . . . , device N) individually, the template may define existing topologies from which a user may select, and the user may add, delete, or change which devices are used in the topology, their connections with one another, their configurations, and their testing.
- FIG. 3 illustrates an example user interface 300 for testing a communications network topology in accordance with one embodiment.
- the device 102 shows the user interface 300 , which may present the devices' hostnames, operating systems (OS), management addresses, validation status, certificate of authenticity (COA), original equipment manufacturer (OEM) activation (OA), and edit/delete options for the particular device hostname.
- the user interface 300 also may allow a user to select other information, such as connections and detected issues (e.g., from testing). In this manner, the user interface 300 may allow a user to see the progress and test results of performance tests of a topology.
- FIG. 4 illustrates an example user interface 400 for testing a communications network topology in accordance with one embodiment.
- the device 102 shows the user interface 400 , which may present the devices' interfaces, hostname interfaces, aggregate number, circuit identifier, aggregate circuit identifier, and edit/delete options for the particular device interfaces.
- FIG. 5 is a flow chart of an example process 500 for automating changes to and testing of communications networks in accordance with one embodiment.
- a device may identify templates, which the device may have generated based on existing topologies.
- the templates may define topologies, include network devices, their roles and connections, their configurations, and their test parameters/criteria.
- the topologies may be previously generated and tested, and may be available for implementation and/or modification. For example, a user may select a template to modify or move the topology of the template, to test a new topology, or to connect to a topology of another template.
- the device may select one of the templates (e.g., using one of the user interfaces of FIGS. 1 and 2 ).
- the selection of the template may include selecting devices for the various roles of the template, including spines, core layers, switches, host leaf devices, and the like.
- the selection may include a full stack from a single manufacturer or a mixed stack.
- the device may instantiate, based on the selected template, an instance for generating a network topology.
- the instantiation may automate the creation of the topology, including its connections and configurations as defined by the selected topology.
- the templates also may define tests to be performed, layers, host leafs, optical switches, interface connections, and the like for various topologies that, when selected, may result in the automatic configuration and testing of a topology.
- a topology may have a stack that includes only devices of a single vendor, but a user may want to implement the same topology with devices of a different vendor.
- the templates may allow for testing the topology and its configurations with the devices of the different vendor (e.g., as the topology and its configuration for one set of network devices may not perform exactly the same for a different set of network devices).
- the instantiation may automate the generation of the topology of the template, including the device arrangement, connections, configurations, and the like, and may perform testing on the topology to determine whether the selected devices, roles, and configuration of the topology satisfy performance criteria.
- the instantiation may provide a zero-touch provisioning of the components of the topology and the connections, port assignments, and the like.
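Zero-touch provisioning of connections and port assignments could be sketched as deriving explicit interface-to-interface assignments from a template's link list, so no one has to cable or number ports by hand. The sequential numbering scheme below is an assumption for illustration, not the patent's scheme.

```python
from itertools import count

def assign_ports(connections):
    """Give each device a fresh port number per link and return explicit
    (device, port)-to-(device, port) assignments for every connection."""
    next_port = {}
    links = []
    for a, b in connections:
        pa = next_port.setdefault(a, count(1))  # per-device port counter
        pb = next_port.setdefault(b, count(1))
        links.append(((a, next(pa)), (b, next(pb))))
    return links

links = assign_ports([("spine1", "leaf1"), ("spine1", "leaf2")])
# [(('spine1', 1), ('leaf1', 1)), (('spine1', 2), ('leaf2', 1))]
```

An instance built this way carries everything the configuration step needs: which interface of which device connects to which port of which other device.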
- the device may access and configure the network devices of the topology.
- the device may generate performance test results for the topology based on the performance of tests defined by a machine learning model.
- the tests may be included in the template for a given topology, and based on the baselines set by the machine learning model, which may learn the baselines based on test cases and performance results of other topologies.
- the device may modify test thresholds of the performance tests in the template based on the performance test results.
- the machine learning model may use the test results as feedback to determine whether a topology performed as expected. When test results deviate from expected values (e.g., the baselines), the machine learning model may update the baselines for a given topology.
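The feedback step described above can be sketched as: compare a result to its baseline and update the baseline only when the deviation exceeds a tolerance. Both the 10% tolerance and the move-halfway update rule below are assumptions for the sketch.

```python
def update_if_deviant(baseline, observed, tolerance=0.1):
    """Keep the baseline when the result is close to it; otherwise move it
    halfway toward the observed result (tolerance and update rule are
    assumed values, not from the patent)."""
    if abs(observed - baseline) / baseline > tolerance:
        return (baseline + observed) / 2
    return baseline

unchanged = update_if_deviant(5.0, 5.2)  # within tolerance: baseline kept
adjusted = update_if_deviant(5.0, 6.0)   # 20% deviation: baseline moved
```

Gating updates on deviation keeps a stable baseline from chasing noise while still tracking genuine shifts in a topology's performance.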
- FIG. 6 is a block diagram illustrating an example of a computing device or computer system 600 which may be used in implementing the embodiments of the components of the network disclosed above.
- the computing system 600 of FIG. 6 may represent at least a portion of the device 102 shown in FIG. 1 and/or a device/system remote from the device 102 (not shown, e.g., a cloud-based system).
- the computer system (system) includes one or more processors 602 - 606 , one or more topology devices 609 , and an ML model 611 .
- Processors 602 - 606 may include one or more internal levels of cache (not shown) and a bus controller 622 or bus interface unit to direct interaction with the processor bus 612 .
- Processor bus 612 , also known as the host bus or the front side bus, may be used to couple the processors 602 - 606 with the system interface 624 .
- System interface 624 may be connected to the processor bus 612 to interface other components of the system 600 with the processor bus 612 .
- system interface 624 may include a memory controller 618 for interfacing a main memory 616 with the processor bus 612 .
- the main memory 616 typically includes one or more memory cards and a control circuit (not shown).
- System interface 624 may also include an input/output (I/O) interface 620 to interface one or more I/O bridges 625 or I/O devices with the processor bus 612 .
- I/O controllers and/or I/O devices may be connected with the I/O bus 626 , such as I/O controller 628 and I/O device 630 , as illustrated.
- I/O device 630 may also include an input device (not shown), such as an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processors 602 - 606 .
- I/O device 630 may also include a cursor control device, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processors 602 - 606 and for controlling cursor movement on the display device.
- System 600 may include a dynamic storage device, referred to as main memory 616 , or a random access memory (RAM) or other computer-readable devices coupled to the processor bus 612 for storing information and instructions to be executed by the processors 602 - 606 .
- Main memory 616 also may be used for storing temporary variables or other intermediate information during execution of instructions by the processors 602 - 606 .
- System 600 may include a read only memory (ROM) and/or other static storage device coupled to the processor bus 612 for storing static information and instructions for the processors 602 - 606 .
- FIG. 6 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
- the above techniques may be performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 616 . These instructions may be read into main memory 616 from another machine-readable medium, such as a storage device. Execution of the sequences of instructions contained in main memory 616 may cause processors 602 - 606 to perform the process steps described herein. In alternative embodiments, circuitry may be used in place of or in combination with the software instructions. Thus, embodiments of the present disclosure may include both hardware and software components.
- a machine readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
- Such media may take the form of, but are not limited to, non-volatile media and volatile media and may include removable data storage media, non-removable data storage media, and/or external storage devices made available via a wired or wireless network architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components.
- removable data storage media include Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc Read-Only Memory (DVD-ROM), magneto-optical disks, flash drives, and the like.
- examples of non-removable data storage media include internal magnetic hard disks, solid-state drives (SSDs), and the like.
- the one or more memory devices 606 may include volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and/or non-volatile memory (e.g., read-only memory (ROM), flash memory, etc.).
- Machine-readable media may include any tangible non-transitory medium that is capable of storing or encoding instructions to perform any one or more of the operations of the present disclosure for execution by a machine or that is capable of storing or encoding data structures and/or modules utilized by or associated with such instructions.
- Machine-readable media may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more executable instructions or data structures.
- Embodiments of the present disclosure include various steps, which are described in this specification. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software and/or firmware.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Description
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18/455,409 US12224911B2 (en) | 2022-08-25 | 2023-08-24 | Enhanced network automation |
| US19/047,103 US20250184226A1 (en) | 2022-08-25 | 2025-02-06 | Enhanced network automation |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263373547P | 2022-08-25 | 2022-08-25 | |
| US18/455,409 US12224911B2 (en) | 2022-08-25 | 2023-08-24 | Enhanced network automation |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US19/047,103 Continuation US20250184226A1 (en) | 2022-08-25 | 2025-02-06 | Enhanced network automation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20240073101A1 (en) | 2024-02-29 |
| US12224911B2 (en) | 2025-02-11 |
Family
ID=89995120
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/455,409 Active US12224911B2 (en) | 2022-08-25 | 2023-08-24 | Enhanced network automation |
| US19/047,103 Pending US20250184226A1 (en) | 2022-08-25 | 2025-02-06 | Enhanced network automation |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US19/047,103 Pending US20250184226A1 (en) | 2022-08-25 | 2025-02-06 | Enhanced network automation |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US12224911B2 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20250193691A1 (en) * | 2023-12-08 | 2025-06-12 | Schneider Electric Buildings Americas, Inc. | Tool to guide installers in setting up connected devices (IoT)/systems for efficient and optimal network creation/utilization and prevention of network-related issues in the future |
| CN120238442B (en) * | 2025-05-29 | 2025-08-01 | 中国电子科技集团公司第三十研究所 | A test network configuration method and system suitable for large-scale network test beds |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080222065A1 (en) * | 2007-03-05 | 2008-09-11 | Sharkbait Enterprises Llc | Learning and analysis systems and methods |
| US20110243072A1 (en) * | 2010-03-30 | 2011-10-06 | Omar Hassan M | System For and Method of Dynamic Home Agent Allocation |
| US20150372873A1 (en) * | 2014-06-19 | 2015-12-24 | Palo Alto Research Center Incorporated | Method and apparatus for deploying a minimal-cost ccn topology |
| US20190007277A1 (en) * | 2017-06-30 | 2019-01-03 | Infinera Corporation | Large network simulator for device characterization |
| US10397273B1 (en) * | 2017-08-03 | 2019-08-27 | Amazon Technologies, Inc. | Threat intelligence system |
| US20220239564A1 (en) * | 2021-01-22 | 2022-07-28 | Huawei Technologies Co., Ltd. | Risk map for communication networks |
- 2023-08-24: US18/455,409 filed; granted as US12224911B2 (Active)
- 2025-02-06: US19/047,103 filed; published as US20250184226A1 (Pending)
Also Published As
| Publication number | Publication date |
|---|---|
| US20250184226A1 (en) | 2025-06-05 |
| US20240073101A1 (en) | 2024-02-29 |
Similar Documents
| Publication | Title |
|---|---|
| US20250184226A1 (en) | Enhanced network automation |
| US9703660B2 (en) | Testing a virtualized network function in a network |
| US12137028B2 (en) | Edge compute environment configuration tool for a communications network |
| US7184942B2 (en) | Verifying the configuration of a virtual network |
| CN102595184B (en) | Intelligent television automation test method and system |
| US20220206868A1 (en) | Edge compute environment configuration tool |
| CN105122772A (en) | Exchange server state and client information via headers for request management and load balancing |
| WO2017036330A1 (en) | Service configuration method and device for network cutover |
| EP4207702A1 (en) | Dynamic prediction of system resource requirement of network software in a live network using data driven models |
| CN112350879B (en) | Data communication equipment test management method, device, system and storage medium |
| CN118679724A (en) | Network topology map for properly configuring clustered networks |
| CN104363122A (en) | Pre-configuration method and system of network element |
| CN114356673A (en) | Mainboard testing method and device |
| CN115484164A (en) | Method and system for deploying a production system in a virtualized environment |
| US20040264382A1 (en) | Verification of connections between devices in a network |
| US20180081716A1 (en) | Outcome-based job rescheduling in software configuration automation |
| CN113612644B (en) | Dynamic simulation method and system for network element of transmission network |
| CN119363422A (en) | A method for launching a security service product based on zero trust |
| US12323299B2 (en) | Edge compute environment automatic server configuration tool |
| CN111769992B (en) | Network data management method, cloud platform and storage medium |
| CN113347046A (en) | Network access method and device |
| JP2016133885A (en) | Virtual apparatus testing device, virtual apparatus testing method, and virtual apparatus testing program |
| CN115225548B (en) | Method for evaluating cloud desktop bearing capacity by server and storage medium |
| CN114363226B (en) | Automatic testing method and system for equipment in complex network scene based on virtualization |
| CN113315647B (en) | Network simulation method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: EX PARTE QUAYLE ACTION MAILED |
| | AS | Assignment | Owner name: LEVEL 3 COMMUNICATIONS, LLC, COLORADO. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DREYER, BRYAN;NAULT, JASON;ROEMHILD, WILLIAM;AND OTHERS;SIGNING DATES FROM 20230815 TO 20230919;REEL/FRAME:068833/0627 |
| | AS | Assignment | Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, MINNESOTA. Free format text: NOTICE OF GRANT OF SECURITY INTEREST IN INTELLECTUAL PROPERTY (SECOND LIEN);ASSIGNORS:LEVEL 3 COMMUNICATIONS, LLC;GLOBAL CROSSING TELECOMMUNICATIONS, INC;REEL/FRAME:069295/0749. Effective date: 20241031. Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, MINNESOTA. Free format text: NOTICE OF GRANT OF SECURITY INTEREST IN INTELLECTUAL PROPERTY (FIRST LIEN);ASSIGNORS:LEVEL 3 COMMUNICATIONS, LLC;GLOBAL CROSSING TELECOMMUNICATIONS, INC.;REEL/FRAME:069295/0858. Effective date: 20241031 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO EX PARTE QUAYLE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | ZAAB | Notice of allowance mailed | Free format text: ORIGINAL CODE: MN/=. |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |