WO2022058935A1 - Systèmes et procédés d'optimisation de bande passante basés sur l'intelligence artificielle - Google Patents

Systèmes et procédés d'optimisation de bande passante basés sur l'intelligence artificielle

Info

Publication number
WO2022058935A1
Authority
WO
WIPO (PCT)
Prior art keywords
request
ports
communication
network
client
Prior art date
Application number
PCT/IB2021/058471
Other languages
English (en)
Inventor
Daniel CORREDOR PORTILLA
David CORREDOR PORTILLA
Rafi FARAH CARBONELL
Original Assignee
Itics S.A.S.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Itics S.A.S. filed Critical Itics S.A.S.
Publication of WO2022058935A1 publication Critical patent/WO2022058935A1/fr


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/04Arrangements for maintaining operational condition

Definitions

  • the present disclosure relates generally to systems and methods for bandwidth optimization and, more particularly, to systems and methods for optimizing a network’s bandwidth based on artificial intelligence (AI) and/or machine learning (ML) methods that enhance communication pathways and compression tools.
  • AI artificial intelligence
  • ML machine learning
  • congested networks, i.e., networks without sufficient bandwidth to support demanded communications
  • Congested networks result in poor communications, poor user experience, and — in certain cases — the inability to support certain applications.
  • congested networks are incapable of effectively supporting video streaming or video call applications.
  • These types of applications, which require real-time communication and data processing, have been widely adopted and demand strong bandwidth capacity and reliable networks.
  • the increased number of network users in combination with new applications that consume higher bandwidth (or demand a more stable bandwidth) result in connectivity issues that undermine online services and/or user experience.
  • One aspect of the present disclosure is directed to a system for bandwidth optimization.
  • the system includes at least one processor and at least one memory device including instructions that when executed configure the at least one processor to perform operations.
  • the operations include identifying ports and networked devices in a communications network and configuring communication sensors at the ports and collecting packets received or transmitted in the ports for a threshold time period to generate a training dataset.
  • the operations also include training, using the training dataset: a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices (the request categories being based on packet headers), a second machine-learning model that correlates the request categories with one or more compression tools for compression of data files, and a third machine-learning model that correlates the request categories with predicted accessed information, the predicted accessed information including client-requested data.
  • a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices (the request categories being based on packet headers)
  • a second machine-learning model that correlates the request categories with one or more compression tools for compression of data files
  • a third machine-learning model that correlates the request categories with predicted accessed information, the predicted accessed information including client-requested data.
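  • As a non-limiting sketch of the training step above, the Python fragment below categorizes packets by header fields and builds a minimal first model correlating each request category with its best-observed pathway. The header schema, category names, and latency-based selection rule are illustrative assumptions standing in for a full ML model, not details from the disclosure.

```python
from collections import defaultdict

def categorize(packet_header):
    """Assign a request category from packet-header fields (hypothetical schema)."""
    if packet_header.get("protocol") == "UDP":
        return "udp-traffic"
    port = packet_header.get("dst_port")
    if port in (80, 443):
        return "web-request"
    if port == 554:  # RTSP
        return "video-stream"
    return "generic-tcp"

def train_pathway_model(training_packets):
    """First model: map each request category to the pathway with the lowest
    observed latency in the training data (a lookup table standing in for a
    trained ML model)."""
    best = defaultdict(lambda: (None, float("inf")))
    for pkt in training_packets:
        cat = categorize(pkt["header"])
        if pkt["latency_ms"] < best[cat][1]:
            best[cat] = (pkt["pathway"], pkt["latency_ms"])
    return {cat: path for cat, (path, _) in best.items()}
```
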
  • Another aspect of the present disclosure is directed to a computer-implemented method for bandwidth optimization.
  • the method includes identifying ports and networked devices in a communications network and configuring communication sensors at each of the identified ports and collecting packets received or transmitted in the ports for a threshold time period to generate a training dataset.
  • the method also includes training, using the training dataset: a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices (the request categories being based on packet headers), a second machine-learning model that correlates the request categories with one or more compression tools for compression of data files, and a third machine-learning model that correlates the request categories with predicted accessed information (the predicted accessed information including client-requested data).
  • Yet another aspect of the present disclosure is directed to a computer-implemented system including one or more processors and one or more storage devices storing instructions that, when executed, configure the one or more processors to perform operations.
  • the operations include identifying ports and networked devices in a communications network and configuring communication sensors at each of the identified ports and collecting packets received or transmitted in the ports for a threshold time period to generate a training dataset.
  • the operations also include training, using the training dataset: (1) a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices, (2) a second machine-learning model that correlates the request categories with one or more compression tools, and (3) a third machine-learning model that correlates the request categories with predicted accessed information.
  • the operations may additionally include receiving a client request through at least one of the ports, assigning one or more categories, from the request categories, to the client request based on characteristics of the client request, and employing (1) the first machine-learning model to identify a communication pathway for the client request based on the assigned one or more categories, (2) the second machine-learning model to select a compression tool for the client request, and (3) the third machine-learning model to identify frequently accessed data for the client request based on the assigned one or more categories.
  • the operations also include storing the frequently accessed data in a fast response module coupled to one or more of the ports, configuring the compression tool to compress communications associated with the request, and configuring routers in the network according to the communication pathway.
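  • The request-handling flow above can be sketched as a dispatcher that consults three trained lookup tables (standing in for the three models) to decide the pathway, compression tool, and prefetch keys for a categorized request; the structure, names, and defaults are illustrative assumptions.

```python
def handle_request(request_category, pathway_model, compression_model, prefetch_model):
    """Return the network actions for one categorized client request:
    the pathway to configure on routers, the compression tool to apply,
    and the data keys to stage in the fast response module."""
    return {
        "pathway": pathway_model.get(request_category, "default-route"),
        "compression": compression_model.get(request_category, "none"),
        "prefetch_keys": prefetch_model.get(request_category, []),
    }
```
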
  • Another aspect of the present disclosure is directed to a system for optimized wireless communication including one or more memory devices storing instructions and one or more processors coupled to the one or more memory devices and configured to execute the instructions to perform operations.
  • the operations include connecting to a plurality of wireless communication nodes through at least one of Simple Network Management Protocol (SNMP) or Secure Shell (SSH) and identifying locations of the communication nodes and collecting a time series of packets received or transmitted in the communication nodes for a threshold time period.
  • the operations also include retrieving environmental data from a database, generating a training dataset by combining and correlating the environmental data and the time series of packets based on the locations, and training a machine-learning model, using the training dataset, to predict communication anomalies based on environmental conditions at the communication nodes.
  • SNMP Simple Network Management Protocol
  • SSH Secure Shell
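  • A minimal sketch of the dataset-generation step above: joining a per-node packet time series with environmental readings on (location, hour) and labeling anomalies. The field names and the 50% anomaly threshold are assumptions; the SNMP/SSH collection and the model training itself are outside this fragment.

```python
def build_training_rows(packet_series, environmental_data):
    """Correlate per-node packet samples with environmental readings by
    (location, hour); mark samples well below expected volume as anomalies."""
    env_index = {(e["location"], e["hour"]): e for e in environmental_data}
    rows = []
    for sample in packet_series:
        env = env_index.get((sample["location"], sample["hour"]))
        if env is None:
            continue  # drop samples with no matching environmental record
        rows.append({
            "packets": sample["packets"],
            "temperature_c": env["temperature_c"],
            "rain_mm": env["rain_mm"],
            "anomaly": sample["packets"] < 0.5 * sample["expected_packets"],
        })
    return rows
```
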
  • FIG. 1 is a block diagram of an exemplary system, consistent with disclosed embodiments.
  • FIG. 2 is a block diagram of an exemplary network analyzer/optimizer, consistent with disclosed embodiments.
  • FIG. 3 is a block diagram of an exemplary fast response storage, consistent with disclosed embodiments.
  • FIG. 4 is a block diagram of an exemplary compressor/sorter, consistent with disclosed embodiments.
  • FIG. 5 is a block diagram of an exemplary security processor/filter, consistent with disclosed embodiments.
  • FIG. 6 is a block diagram of an exemplary database, consistent with disclosed embodiments.
  • FIG. 7 is a block diagram of an exemplary client device, consistent with disclosed embodiments.
  • FIG. 8 is a diagram of an exemplary communication network, consistent with disclosed embodiments.
  • FIG. 9 is a block diagram of an exemplary wireless communication network, consistent with disclosed embodiments.
  • FIG. 10 is a block diagram of an exemplary AI system for network bandwidth optimization, consistent with disclosed embodiments.
  • FIG. 11 is a block diagram of an exemplary threat management system implemented with a Berkeley Internet Name Domain (BIND) server, consistent with disclosed embodiments.
  • BIND Berkeley Internet Name Domain
  • FIG. 12 is a block diagram of an exemplary threat management system implemented with a cloud server, consistent with disclosed embodiments.
  • FIG. 13 is a flow chart illustrating an exemplary process for optimized network configuration, consistent with disclosed embodiments.
  • FIG. 14 is a flow chart illustrating an exemplary process for storing client requests and training datasets, consistent with disclosed embodiments.
  • FIG. 15 is a flow chart illustrating an exemplary process for selecting a data compression tool for optimized bandwidth, consistent with disclosed embodiments.
  • FIG. 16 is a flow chart illustrating an exemplary process for network optimization based on network scenarios, consistent with disclosed embodiments.
  • FIG. 17 is a flow chart illustrating an exemplary process for programming network devices with optimized bandwidth, consistent with disclosed embodiments.
  • FIG. 18 is a flow chart illustrating an exemplary process for diagnosing a network and creating network sensors, consistent with disclosed embodiments.
  • FIG. 19 is a flow chart illustrating an exemplary threat management process, consistent with disclosed embodiments.
  • FIG. 20 is a flow chart illustrating an exemplary process for configuring a wireless network with optimized bandwidth, consistent with disclosed embodiments.
  • FIG. 21 is a flow chart illustrating an exemplary process for training machine-learning models based on collected packets received in network ports, consistent with disclosed embodiments.
  • FIG. 22 is a flow chart illustrating an exemplary process for handling a client request associated with a threat level, consistent with disclosed embodiments.
  • FIG. 23 is a flow chart for updating machine-learning models based on power anomalies, consistent with disclosed embodiments.
  • FIG. 24 is a flow chart illustrating an exemplary process for generating a predictive machine-learning model based on bandwidth and environmental data, consistent with disclosed embodiments.
  • FIG. 25 is a flow chart illustrating an exemplary process for the selection of nodes and communication parameters in a wireless network, consistent with disclosed embodiments.
  • FIG. 26 is a flow chart illustrating an exemplary process for generating a machine-learning optimization model, in accordance with disclosed embodiments.
  • FIG. 27 is a bar plot illustrating exemplary bandwidth optimization results, in accordance with disclosed embodiments.
  • the disclosure is generally directed to systems and methods for optimizing the bandwidth of a network using machine-learning (ML) models that identify optimized network configurations.
  • Disclosed systems and methods improve the technical field of bandwidth optimization by customizing network configurations for specific scenarios and/or based on categories of client requests.
  • Some embodiments of the disclosed systems and methods may monitor a network for a period of time to create a training dataset used to develop predictive ML models that identify optimized pathways and/or tools for handling client requests.
  • disclosed systems and methods employ ML methods to categorize client requests (e.g., based on packet header characteristics) and configure network devices to respond to client requests using optimized configurations.
  • disclosed systems and methods may also generate ML models that identify frequently accessed data — or network resources — to improve response time and bandwidth utilization.
  • disclosed systems and methods can improve bandwidth and network management by providing tools for identifying networked devices and automatically connecting them through application programming interfaces (APIs).
  • APIs application programming interfaces
  • Current networks are frequently complex because they include multiple devices from different manufacturers, communicating through different protocols, and having different operating systems. Monitoring communications in such networks can be cumbersome because it is challenging to identify, access, and program the multiple devices in the system. The lack of uniformity creates a technical challenge that limits effective monitoring of communications in the network.
  • Disclosed systems and methods address these issues with tools and methods for automatically identifying devices in the network and programming them to act as sensors that monitor the network communications. For example, disclosed systems and methods enable simple discovery of networked devices and their programming through APIs or standardized firmware.
  • Disclosed systems and methods can also improve fields of bandwidth optimization and network management using ML models that diagnose and correct problems in different network layers. For example, disclosed systems and methods may allow diagnosis of faulty devices at the physical layer and determine inappropriate device configurations. Further, disclosed systems and methods may also provide tools to determine performance of internet, transport, and application layers in a network. Thus, in some embodiments, disclosed systems and methods improve the bandwidth of a network by adapting the physical layer to enhance device operation (e.g., by tuning wireless communication parameters or adjusting power consumption) in addition to determining optimized communication pathways or transport tools.
  • disclosed systems and methods may also improve network security and reliability.
  • some embodiments of the disclosed systems and methods provide tools for monitoring traffic in a network and use ML models that identify threats to be isolated or filtered in the network.
  • disclosed systems and methods can enhance network reliability through ML models that predict device and bandwidth utilization based on environmental conditions.
  • certain embodiments of the disclosed systems and methods may allow tailoring device configurations based on environmental conditions or predicted demand loads.
  • Disclosed systems and methods may also leverage AI methods to develop communication protocols tailored for the specific network. For example, in addition to collecting real communications, disclosed systems and methods allow simulation of different network configurations and creation of case scenarios. Simulated and collected data may be used for tailoring network protocols to have optimized configurations for individual networks. In such embodiments, disclosed systems and methods may enable automatic programming of nodes and networked devices based on the tailored configurations to enhance bandwidth.
  • FIG. 1 is a block diagram of an exemplary system 100, consistent with disclosed embodiments.
  • System 100 includes a bandwidth (BW) optimization system 105 which includes a network analyzer/optimizer (NAO) 110, a fast response storage (FRP) 120, a compressor/sorter (CS) 130, and a security processor/filter (SPF) 140.
  • BW bandwidth
  • System 100 additionally includes online resources 190, client devices 150, computing clusters 160, and databases 180.
  • components of system 100 may be interconnected via a network 170. However, in other embodiments components of system 100 may be connected directly with each other, without network 170.
  • Online resources 190 may include one or more servers or storage services provided by an entity such as a provider of website hosting, networking, cloud, or backup services.
  • online resources 190 may be associated with hosting services, navigation data centers, and/or servers that store web pages for places such as entertainment venues and/or offices.
  • service providers such as financial service providers.
  • online resources 190 may be associated with a messaging service, such as, for example, Apple Push Notification Service, Azure Mobile Services, or Google Cloud Messaging.
  • online resources 190 may handle the delivery of messages and notifications related to functions of the disclosed embodiments, such as image compression, notification of map changes via alerts, and/or completion messages and notifications.
  • Client devices 150 include one or more computing devices configured to perform one or more operations consistent with disclosed embodiments.
  • client devices 150 may include a desktop computer, a laptop computer, a server, a mobile device (e.g., tablet, smart phone, etc.), a gaming device, a wearable computing device, or other type of computing device.
  • Client devices 150 may include one or more processors configured to execute software instructions stored in memory, such as memory included in client devices 150, to perform operations to implement the functions described below.
  • Client devices 150 may include software that when executed by a processor performs Internet-related communication, such as TCP/IP, and content display processes. For instance, client devices 150 may execute browser software that generates and displays graphical user interfaces (GUIs) displaying BW metrics.
  • GUIs graphical user interfaces
  • Client devices 150 execute applications that allow client devices 150 to communicate with components over network 170 and generate and display content in GUIs via display devices included in client devices 150.
  • the disclosed embodiments are not limited to any particular configuration of client devices 150.
  • a client device 150 may be a mobile device that stores and executes mobile applications to perform operations that provide functions offered by BW optimization system 105 and/or online resources 190.
  • Client devices 150 are further described in connection with FIG. 7.
  • Computing clusters 160 may include one or more computing devices in communication with other elements of system 100.
  • computing clusters 160 may consist of a group of processors in communication with each other through fast local area networks.
  • computing clusters 160 may consist of an array of graphical processing units configured to work in parallel as a graphics processing unit (GPU) cluster.
  • GPU graphics processing unit
  • computing clusters 160 may include heterogeneous or homogeneous hardware.
  • computing clusters 160 may include GPU drivers for the various types of GPUs present in cluster nodes, a Clustering Application Programming Interface (API), such as the Message Passing Interface (MPI), and a VirtualCL (VCL) cluster platform such as a wrapper for OpenCLTM that allows applications to transparently utilize multiple OpenCL devices in a cluster.
  • API Clustering Application Programming Interface
  • VCL VirtualCL
  • computing clusters 160 may operate with distcc (a program to distribute builds of C, C++, Objective C or Objective C++ code across several machines on a network to speed up building), MPICH (a standard for message-passing for distributed-memory applications used in parallel computing), Linux Virtual ServerTM, Linux-HATM, or other director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes.
  • computing clusters 160 may be configured to generate ML models. For example, in some embodiments BW optimization system 105 may send training datasets (including sample communication packets) to computing clusters 160 to generate ML models.
  • Databases 180 consist of one or more computing devices configured with appropriate software to perform operations consistent with providing data to elements of system 100.
  • databases 180 may provide data related to communication trends or samples to BW optimization system 105, NAO 110, or SPF 140.
  • Databases 180 may include, for example, OracleTM databases, SybaseTM databases, or other relational databases or non-relational databases, such as HadoopTM sequence files, HBaseTM, or CassandraTM.
  • Databases 180 may include computing components (e.g., database management system, database server, etc.) configured to receive and process requests for data stored in memory devices of the database(s) and to provide data from the database(s).
  • databases 180 are shown separately, in some embodiments databases 180 may be included in or otherwise related to one or more of BW optimization system 105, NAO 110, FRP 120, and/or online resources 190.
  • Databases 180 can be configured to collect and/or maintain network data, client request categorization, and/or network trends. Databases 180 may provide historic communication data to BW optimization system 105, NAO 110, and/or client devices 150. Thus, databases 180 may collect data from a variety of sources, including, for instance, online resources 190. Databases 180 are further described below in connection with FIG. 6.
  • NAO 110 may include one or more computing devices configured to set up devices in a network to act as communication sensors and collect communication packets in the network to generate training datasets. Additionally, NAO 110 may be configured to analyze client requests (e.g., to identify a category associated with the client request) and implement network configuration changes based on ML analysis. In some embodiments, NAO 110 may also monitor communication ports to identify anomalies. For example, NAO 110 may monitor ports of networked devices to track variables like transfer rates or bit errors. Moreover, NAO 110 may implement processes for discovery and configuration of networked devices. For instance, NAO 110 may be configured to periodically broadcast signals in network 170 to discover networked devices and configure discovered devices through API interfaces.
  • NAO 110 is part of BW optimization system 105 (as shown in FIG. 1).
  • NAO 110 may be a logic partition in a server hosting BW optimization system 105.
  • NAO 110 may be implemented with specialized software for optimization tasks.
  • NAO 110 may be a specially programmed computer dedicated to monitor communications in ports. NAO 110 is further described in connection with FIG. 2.
  • FRP 120 may include one or more memory devices storing information used for responding to client requests.
  • FRP 120 may include cache memory devices storing temporary copies of files permanently stored in databases 180.
  • FRP 120 may store copies of certain records in databases 180 and/or data being collected/analyzed by NAO 110.
  • FRP 120 may be configured to buffer certain data that is being exchanged with one of client devices 150 (e.g., buffering video streams). In such embodiments, FRP 120 may shorten the time required to retrieve data from online resources 190.
  • FRP 120 may be configured by NAO 110 based on ML models that indicate frequently accessed data.
  • FRP 120 may be configured as a dynamic memory that stores content based on the status of a network in order to maximize bandwidth.
  • FRP 120 is part of BW optimization system 105.
  • FRP 120 may be within BW optimization system 105 and be part of the memory resources in a server implementing BW optimization system 105.
  • FRP 120 may be separated from BW optimization system 105 and configured to provide faster responses to client requests.
  • FRP 120 may be configured as an edge server closer to client requests to be able to fulfill data requests faster.
  • content stored in FRP 120 may be dynamically updated based on ML predictive models that determine data that will likely be used during a client interaction based on client request categories.
  • FRP 120 is further described in connection with FIG. 3.
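  • To illustrate how a fast response store might hold content selected by a predictive model, here is a hedged sketch of a bounded cache preloaded from predicted keys. The LRU eviction policy and the interface names are assumptions; the disclosure only requires that content be dynamic and prediction-driven.

```python
from collections import OrderedDict

class FastResponseStore:
    """Bounded key-value store preloaded from a predictive model's output."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def preload(self, predicted_keys, fetch):
        """Stage data the model predicts will be requested, up to capacity."""
        for key in predicted_keys[: self.capacity]:
            self.put(key, fetch(key))

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None
```
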
  • CS 130 includes computing devices or storage systems that are configured to perform data compression/decompression operations to transmit or receive data.
  • CS 130 may perform operations for packet header compression, payload compression, and/or storage compression.
  • CS 130 may perform operations to compress video files that will be transmitted to one or more client devices 150.
  • CS 130 may perform operations to reduce the file size of video feeds using video codecs to store the separate but complementary data streams as one combined package.
  • CS 130 may use lossless or lossy video compression methods.
  • CS 130 may use compression techniques used in video coding standards such as discrete cosine transform (DCT) and motion compensation (MC). Additionally, or alternatively, CS 130 may perform compression of images using PNG images or JPEG images. In such embodiments, CS 130 may use compression tools like Run-length encoding (RLE), Huffman, or LZ77, or use a combination of these compression tools.
  • RLE Run-length encoding
  • Huffman Huffman
  • LZ77 LZ77
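  • Among the named tools, run-length encoding is simple enough to sketch directly; the (count, byte) pair format below is one common variant of RLE, not necessarily the one CS 130 would use.

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode as (count, byte) pairs; runs are capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Invert rle_encode by expanding each (count, byte) pair."""
    out = bytearray()
    for i in range(0, len(encoded), 2):
        out += bytes([encoded[i + 1]]) * encoded[i]
    return bytes(out)
```
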
  • SPF 140 may include servers and/or other computing devices to perform security and filtering operations, which may include identifying and controlling network threats.
  • SPF 140 may include one or more security processors with dedicated hardware for carrying out cryptographic operations.
  • SPF 140 may be configured as an interfacing server (e.g., a local or cloud security server) that routes communications to and from client devices 150 after applying scanning algorithms and security tests.
  • SPF 140 may be reconfigured periodically, or in real-time, to adjust the evaluation of threat levels based on former attacks (or attempted attacks) in network 170.
  • SPF 140 may be configured as a domain name system (DNS) such as Berkeley Internet Name Domain (BIND) to interface with outside traffic.
  • DNS domain name system
  • BIND Berkeley Internet Name Domain
  • SPF 140 may also be configured to intercept specific communication ports or grant exceptions to specific ports.
  • SPF 140 may be configured to monitor traffic in specific TCP ports or to not monitor traffic in specific ports. SPF 140 is further discussed in connection with FIG. 5.
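  • The port-filtering rule above can be sketched as a small predicate; the parameter names and the precedence rule (exceptions override any monitoring list) are assumptions for illustration.

```python
def should_monitor(port, monitored_ports=None, exempt_ports=frozenset()):
    """Decide whether SPF-style monitoring applies to a TCP port.
    Exempt ports are never monitored; otherwise monitor everything
    (monitored_ports=None) or only the explicitly listed ports."""
    if port in exempt_ports:
        return False
    return monitored_ports is None or port in monitored_ports
```
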
  • Network 170 may be any type of network configured to provide communications between components of system 100.
  • network 170 may be any type of network (including infrastructure) that provides communications, exchanges information, and/or facilitates the exchange of information, such as the Internet, a Local Area Network, near field communication (NFC), radio wave, optical code scanner, or other suitable connection(s) that enables the sending and receiving of information between the components of system 100.
  • NFC Near field communication
  • one or more components of system 100 may communicate directly through a dedicated communication link(s).
  • network 170 may include a network of networks, coordinating communication through several networks.
  • FIG. 2 shows a block diagram of an exemplary implementation of Network Analyzer/Optimizer (NAO) 110, consistent with disclosed embodiments.
  • NAO 110 includes a communication device 210, a NAO database 220, and one or more NAO processors 230.
  • NAO database 220 includes NAO programs 222 and ML training data 224.
  • NAO processors 230 include a crawler 232, an API configurator 234, a Robotics Process Automation (RPA) module 236, and a machine-learning (ML) optimizer 238.
  • RPA Robotics Process Automation
  • ML machine-learning
  • NAO 110 may take the form of a server, a general-purpose computer, a mainframe computer, or any combination of these components.
  • NAO 110 may be a virtual machine.
  • operations and functions described for NAO 110 may be implemented by client devices 150 and processing units in client devices 150. Other implementations consistent with disclosed embodiments are possible as well.
  • Communication device 210 is configured to communicate with one or more databases, such as databases 180 (FIG. 1), either directly or via network 170.
  • communication device 210 may be configured to receive sample packets and bandwidth information in network 170. NAO 110 may use this information to identify networked devices in a network, collect training datasets for optimization, and/or identify methods to program networked devices with customized configurations.
  • communication device 210 may be configured to communicate with other components as well, including, for example, online resources 190.
  • Communication device 210 may include, for example, one or more digital and/or analog devices that allow communication device 210 to communicate with and/or detect other components, such as a network controller and/or wireless adaptor for communicating over the Internet. Other implementations consistent with disclosed embodiments are possible as well.
  • NAO database 220 may be implemented with one or more storage devices configured to store instructions used by NAO processor(s) 230 to perform functions related to disclosed embodiments.
  • NAO database 220 may store software instructions to perform operations when executed by NAO processor(s) 230.
  • the disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks.
  • NAO database 220 may include a single program that performs the functions of NAO 110, in other embodiments NAO database 220 may include multiple programs.
  • NAO database 220 may also store programs 222.
  • Programs 222 may include instructions for training ML models discussed in connection with FIG. 26.
  • Further, programs 222 may include instructions for configuring networked devices based on a categorization of client requests to optimize network bandwidth.
  • programs 222 may include instructions to configure network 170 to efficiently respond to client requests based on a categorization of the client request as one or more of a video request, an audio request, TCP traffic, or UDP traffic.
  • NAO database 220 includes ML training data 224, which stores training datasets for generating machine-learning models.
  • NAO 110 may monitor a network for a threshold time (e.g., between a few hours and a year) and store the sampled data in ML training data 224.
  • This sampled data may include traffic data and exchanged information in ports of network 170.
  • the training data stored in ML training data 224 may later be used for training models that optimize communication pathways, the selection of data compression tools, and/or the information that is stored in cache or fast response memories.
  • NAO database 220 stores sets of instructions for carrying out processes of concurrent optimization of network configurations.
  • NAO database 220 may be configured to continually collect and update network data to patch or update ML optimization models.
  • NAO processors 230 may include one or more known processing devices, such as, but not limited to, microprocessors from the Pentium™ or Xeon™ family manufactured by Intel™, the Turion™ family manufactured by AMD™, or any of various processors from other manufacturers. However, in other embodiments, NAO processors 230 may be a plurality of devices coupled and configured to perform functions consistent with the disclosure.
  • NAO processors 230 execute software or firmware to perform functions associated with each component of NAO processors 230.
  • each component of NAO processors 230 is an independent device.
  • each component is a hardware device configured to specifically process data or perform operations associated with modeling traffic behavior, generating identification models, and/or handling large data sets.
  • ML optimizer 238 may be a field-programmable gate array (FPGA)
  • RPA module 236 may be a graphics processing unit (GPU)
  • API configurator 234 may be a central processing unit (CPU).
  • Other hardware combinations are also possible.
  • combinations of hardware and software may be used to implement NAO processors 230.
  • Crawler 232 includes computing components configured to identify networked devices active in a network.
  • crawler 232 may include so-called spiders or spiderbots that systematically browse a network to identify and index networked devices.
  • Crawler 232 allows NAO 110 to update network topologies or identify active networked devices.
  • Crawler 232 operates based on broadcasting UDP signals and indexing responses to the broadcast.
  • Crawler 232 may validate hyperlinks, HTML code, or use web scraping.
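One way the broadcast-and-index behavior could work is sketched below with two UDP sockets on localhost, one standing in for the crawler and one for a networked device; the probe and reply strings are hypothetical:

```python
import socket

# A "networked device" socket on localhost stands in for a router port.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.bind(("127.0.0.1", 0))
device.settimeout(1.0)
dev_addr = device.getsockname()

crawler = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
crawler.settimeout(1.0)

index = {}                                  # discovered-device index
crawler.sendto(b"NAO-DISCOVER", dev_addr)   # probe a candidate address

probe, probe_src = device.recvfrom(1024)    # the device hears the probe
if probe == b"NAO-DISCOVER":
    device.sendto(b"DEVICE switch-806a", probe_src)  # and identifies itself

reply, sender = crawler.recvfrom(1024)
index[sender] = reply.decode().split(" ", 1)[1]
print(sorted(index.values()))               # -> ['switch-806a']

crawler.close()
device.close()
```

A real crawler would send the probe to a broadcast address and index every reply received before the timeout, rather than probing a single known address.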
  • API configurator 234 includes hardware and/or software configured to couple NAO 110 with networked devices through web APIs and/or by identifying devices’ firmware.
  • API configurator 234 may be configured to connect with one or more of the devices identified (e.g., by crawler 232 in a network) using REST API methods identified for the specific device.
  • API configurator 234 uses ML methods to correlate device information and automate configuration of APIs and API gateways.
  • API configurator 234 may employ convolutional neural networks (CNNs) or Random Forests to identify patterns of access in a device and automate API configurations.
  • RPA module 236 may be configured to enable automated programming of networked devices.
  • RPA module 236 may generate and transmit instructions to networked devices so they become network sensors that report traffic data to NAO 110. For example, RPA module 236 may generate and transmit instructions for setting up sensors that monitor traffic in a network leveraging existing networked devices. In such embodiments, RPA module 236 configures networked devices (through API calls or requests) to generate a plurality of sensors in ports of the network. In some embodiments, RPA module 236 can be used to automate workflow or infrastructure and can program bots that interact with applications, websites, or user portals.
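A minimal sketch of the kind of API payload such automated programming might transmit follows; the endpoint and field names are assumptions for illustration, not taken from the disclosure:

```python
import json

def build_sensor_config(device_id, ports, report_url):
    """Sketch of the payload an RPA module might push through an API call
    to turn a device's ports into traffic sensors (hypothetical schema)."""
    return {
        "device": device_id,
        "sensors": [
            {"port": p, "metrics": ["bytes", "packets"], "report_to": report_url}
            for p in ports
        ],
    }

payload = build_sensor_config("firewall-802a", [80, 443],
                              "https://nao.example/ingest")
print(json.dumps(payload, indent=2))
```

In practice the payload would be sent with an authenticated REST call to the device identified by the crawler, and the device would begin reporting traffic to NAO 110.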
  • ML optimizer 238 includes computing components configured to use data in NAO database 220 to generate machine-learning (ML) models that predict interactions between servers and clients in a network and determine the best or most reliable communication pathways or tools.
  • ML optimizer 238 may use ML training data 224 to generate predictive models that identify optimized pathways, tools, or memory configurations.
  • ML optimizer 238 includes programs (e.g., scripts, functions, algorithms) to train, implement, store, receive, retrieve, and/or transmit one or more ML models.
  • the ML models may include a neural network model, an attention network model, a generative adversarial model (GAN), a recurrent neural network (RNN) model, a deep learning model (e.g., a long short-term memory (LSTM) model), a random forest model, a convolutional neural network (CNN) model, an RNN-CNN model, an LSTM-CNN model, a temporal-CNN model, a support vector machine (SVM) model, a density-based spatial clustering of applications with noise (DBSCAN) model, a k-means clustering model, a distribution-based clustering model, a k-medoids model, a natural-language model, and/or another machine-learning model.
  • models may include an ensemble model (i.e., a model having a plurality of models).
  • ML optimizer 238 is configured to terminate training when a training criterion is satisfied.
  • A training criterion may include a number of epochs, a training time, a performance metric (e.g., an estimate of accuracy in reproducing test data), or the like.
  • ML optimizer 238 may be configured to adjust model parameters during training. Model parameters may include weights, coefficients, offsets, or the like. Training may be supervised or unsupervised.
  • ML optimizer 238 is configured to train ML models by optimizing model parameters and/or hyperparameters (i.e., hyperparameter tuning) using an optimization technique, consistent with disclosed embodiments.
  • Hyperparameters include training hyperparameters, which may affect how training of a model occurs, or architectural hyperparameters, which may affect the structure of a model.
  • An optimization technique may include a grid search, a random search, a Gaussian process, a Bayesian process, a Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a derivative-based search, a stochastic hill-climb, a neighborhood search, an adaptive random search, or the like.
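As one concrete instance of the listed techniques, a random search over a small hyperparameter space can be sketched as follows; the objective function is a toy stand-in for a validation loss, not part of the disclosure:

```python
import random

def random_search(objective, space, n_trials=50, seed=7):
    """Toy random search over a discrete hyperparameter space: sample
    candidate settings and keep the one with the lowest objective value."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Stand-in objective: pretend validation loss depends on two hyperparameters
# and is minimized at learning_rate=0.01, layers=2.
space = {"learning_rate": [0.1, 0.01, 0.001], "layers": [1, 2, 3, 4]}
loss = lambda p: abs(p["learning_rate"] - 0.01) + abs(p["layers"] - 2)
best, score = random_search(loss, space, n_trials=200)
print(best)
```

A grid search would instead enumerate all combinations exhaustively; random search trades completeness for a fixed trial budget, which matters when each trial is a full model-training run.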
  • ML trainer 646 may be configured to optimize statistical models using known optimization techniques.
  • NAO 110 may be implemented in hardware, software, or a combination of both, as will be apparent to those skilled in the art.
  • While one or more components of NAO 110 may be implemented as computer processing instructions embodied in computer software in NAO database 220, some or all of the functionality of NAO 110 may be implemented in dedicated hardware.
  • groups of GPUs and/or FPGAs may be used to quickly analyze data in NAO processors 230.
  • FRP 120 includes FRP processor(s) 340, an FRP memory 350, and a communication device 360.
  • FRP memory 350 includes FRP programs 352 and FRP cached data 354.
  • Communication device 360 may be configured like communication device 210 (FIG. 2) but be specifically configured to communicate data with client devices 150.
  • FRP processors 340 may be embodied as a processor similar to NAO processors 230.
  • FRP processors 340 includes logic partitions or assigned hardware for a predictive ML module 344, a router configurator 346, and a pattern identifier 348.
  • Predictive ML module 344 is configured to execute ML models to predict data that will be requested or accessed based on client request categories. For example, predictive ML module 344 can run operations to guess data or files that will be requested by a client using branch predictors. In some embodiments, predictive ML module 344 is configured to dynamically store data in FRP cached data 354 so it can be accessed quickly by clients. Additionally, or alternatively, predictive ML module 344 may select data to feed a buffer in FRP memory 350. For example, when a client request is associated with a data stream, predictive ML module 344 may organize stream chains based on ML models like the ones described in connection with FIG. 2.
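A simple first-order predictor illustrates the mechanics of guessing the next requested file and warming the cache; the module described above would use trained ML models, but the prefetch flow is similar (names below are illustrative):

```python
from collections import Counter, defaultdict

class PrefetchPredictor:
    """After a client fetches file A, prefetch the file most often
    requested right after A in past sessions (a simple stand-in for the
    learned prediction described in the text)."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.cache = {}              # stands in for FRP cached data 354

    def observe(self, sequence):
        """Record consecutive-request pairs from a historical session."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def prefetch_after(self, just_requested, fetch):
        counts = self.transitions.get(just_requested)
        if not counts:
            return None
        prediction = counts.most_common(1)[0][0]
        self.cache[prediction] = fetch(prediction)   # warm the cache
        return prediction

p = PrefetchPredictor()
p.observe(["index.html", "app.js", "logo.png"])
p.observe(["index.html", "app.js", "data.json"])
predicted = p.prefetch_after("index.html",
                             fetch=lambda name: f"<bytes of {name}>")
print(predicted)   # -> app.js
```

When the client's next request arrives, a cache hit on the prefetched file avoids a round trip to the origin database.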
  • Router configurator 346 is configured to set up routers for delivery of content from FRP 120.
  • router configurator 346 may configure one or multiple nodes to deliver content stored in FRP memory 350 by setting up an interior gateway protocol (IGP).
  • router configurator 346 may configure a Content Delivery Network (CDN) that configures a network of web caches with the purpose of offering responses to the client more efficiently.
  • router configurator 346 sets up virtual servers to isolate resources for handling a client request and balance the traffic load destined to the virtual server through Server Load Balancer (SLB) tools.
  • Pattern identifier 348 can be configured to extract features from training datasets and/or historic records of network communications.
  • features are extracted from a dataset by applying pre-trained CNNs.
  • pre-trained networks such as Inception-v3, AlexNet, or TensorFlow may automatically extract features from a training dataset.
  • pattern identifier 348 may import layers of a pre-trained convolutional network, determine features described in a target layer of the pre-trained convolutional network, and initialize a multiclass fitting model using the features in the target layer and images received for extraction.
  • other deep learning models such as Fast R-CNN can be used for automatic feature extraction.
  • processes such as histogram of oriented gradients (HOG), speeded-up robust features (SURF), local binary patterns (LBP), color histogram, or Haar wavelets may also be used to extract features from a received image.
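Of the listed methods, local binary patterns (LBP) are simple enough to sketch directly: each interior pixel is encoded by comparing its eight neighbors to it, and the feature vector is the histogram of the resulting 8-bit codes:

```python
def lbp_histogram(img):
    """Minimal local binary pattern sketch on a 2-D grayscale grid: each
    interior pixel gets an 8-bit code (one bit per neighbor >= center),
    and the feature vector is the 256-bin histogram of those codes."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # clockwise neighbor offsets starting at the top-left neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
hist = lbp_histogram(img)
print(hist[0])   # -> 1: the lone interior pixel is a local maximum (code 0)
```

Library implementations add rotation-invariant and uniform-pattern variants, but the histogram-of-codes idea is the same.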
  • FRP memory 350 includes one or more storage devices configured to store instructions used by FRP processors 340 to perform operations related to disclosed embodiments.
  • FRP memory 350 may store software instructions as FRP programs 352 that perform operations when executed by FRP processor 340.
  • FRP memory 350 also includes FRP cached data 354.
  • FRP cached data 354 may include copies of data in databases, such as databases 180 (FIG. 1), and/or requests being processed by NAO 110.
  • FRP cached data 354 can also cache copies of datafiles that are frequently requested by client devices 150 and transmit those copies to requesting client devices 150. In such embodiments, FRP cached data 354 can shorten the time required to retrieve data from FRP memory 350 or databases 180 for large files or files that are accessed frequently.
  • FRP memory 350 stores sets of instructions for carrying out processes to handle client requests, further described below in connection with FIG. 16.
  • FRP 120 may be implemented in hardware, software, or a combination of both, as will be apparent to those skilled in the art.
  • While one or more components of FRP 120 may be implemented as computer processing instructions embodied in computer software, some or all of the functionality of FRP 120 may be implemented in dedicated hardware.
  • FIG. 4 is a block diagram of an exemplary implementation of compressor/sorter (CS) 130, consistent with disclosed embodiments.
  • CS 130 includes a CS processor(s) 430, a CS database 420, and a communication device 410.
  • Communication device 410 may be configured like communication device 210 (FIG. 2) but be specifically configured to communicate compressed data or files.
  • CS processors 430 may be embodied as a processor similar to NAO processors 230.
  • CS processors 430 includes a category identifier 432, an ML compression selector 434, and a compression engine 436.
  • Category identifier 432 is configured to categorize a communication into one of several predetermined groups based on the behavior of the exchanges or the type of exchanged files. For example, category identifier 432 may categorize communications based on the frequency of exchanges between transmitter and receiver to help CS 130 identify a compression tool with the highest efficiency for the identified category of communication. Alternatively, or additionally, category identifier 432 may categorize communications based on whether the transmitted files are video, audio, VoIP, or streamed data. In some embodiments, category identifier 432 determines a category for an exchange based on flags, numbers, and/or options in packet headers.
  • ML compression selector 434 is configured to employ ML models to identify compression tools that result in the lowest bandwidth consumption.
  • ML compression selector 434 may employ data collected by NAO 110 to train ML models, such as CNNs, random forests, and/or regressions, that determine a compression tool that is most likely to result in the lowest bandwidth consumption.
  • ML compression selector 434 may correlate compression tools with different communication categories and the resulting bandwidth consumption.
  • Compression engine 436 is configured to implement compression and decompression processes on data files that are transmitted or received in network 170.
  • Compression engine 436 may compress data using a selected compression technique or selected compression tools.
  • compression engine 436 may use a compression scheme such as MPEG-2 AAC, the ATRAC and ATRAC3 compression technologies, AC-3 algorithms, or any other suitable compression technique.
  • Compression engine 436 may also perform processes to reverse compression. For instance, compression engine 436 may perform decompression processes for both lossless and/or lossy decompression.
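An empirical stand-in for the selection logic can be sketched with Python's standard-library codecs: try each candidate on a sample and keep the smallest output. A trained model, as described for ML compression selector 434, would instead predict the choice from the communication category without trying every codec:

```python
import bz2
import lzma
import zlib

def pick_compressor(payload):
    """Compress a sample with each available codec and return the name of
    the one yielding the smallest output, plus all measured sizes."""
    candidates = {"zlib": zlib.compress, "bz2": bz2.compress,
                  "lzma": lzma.compress}
    sizes = {name: len(fn(payload)) for name, fn in candidates.items()}
    best = min(sizes, key=sizes.get)
    return best, sizes

sample = b"GET /stream HTTP/1.1\r\n" * 200   # highly repetitive traffic sample
best, sizes = pick_compressor(sample)
print(best, sizes[best] < len(sample))
```

Each codec is lossless, so the corresponding decompressor (`zlib.decompress`, etc.) recovers the original bytes exactly, matching the reverse-compression processes described above.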
  • CS database 420 includes one or more storage devices configured to store instructions used by CS processors 430 to perform operations related to disclosed embodiments.
  • CS database 420 stores programs 422 with instructions to configure CS processors 430 to perform operations related to compressing, decompressing, or identifying compression tools.
  • CS database 420 also includes compression tools 424 with tools available in CS 130 for compressing or decompressing data exchanged in network 170.
  • Compression tools 424 may aggregate available compression methods to employ in network devices of network 170.
  • compression tools 424 may include compression software like Winzip, WinRAR, 7-Zip, Zip Archiver, and/or PeaZip, among other compression tools.
  • compression tools 424 may include EXE packers and/or archivers.
  • compression tools 424 include compression and encryption tools for digital data transmission.
  • compression tools 424 may include tools for encrypting data using distributed source principles.
  • CS database 420 stores sets of instructions for carrying out processes to handle client requests, further described below in connection with FIG. 15.
  • CS 130 may be implemented in hardware, software, or a combination of both, as will be apparent to those skilled in the art.
  • While one or more components of CS 130 may be implemented as computer processing instructions embodied in computer software, some or all of the functionality of CS 130 may be implemented in dedicated hardware.
  • FIG. 5 is a block diagram of an exemplary implementation of security processor/filter (SPF) 140, consistent with disclosed embodiments.
  • SPF 140 includes SPF processor(s) 540, SPF memory 550, and a communication device 560.
  • SPF processors 540 may be embodied as a processor similar to NAO processors 230.
  • SPF processors 540 include hardware and/or logical partitions for a filter 542, an encryption module 544, a request bouncer 546, and a virus scanner 548.
  • Filter 542 is configured to capture or retain data packets that meet conditions defined by SPF 140.
  • filter 542 may allow intercepting or creating copies of packets based on protocols, presence of fields, values of fields, and comparison between fields.
  • Filter 542 may also be configured to capture packets based on packet behavior based on, for example, frequency of interactions or scope of interactions within a threshold time.
  • filter 542 may include web filters or “content control software” to, for example, limit interactions by configuring DNS servers and/or limit the location of acceptable client requests. In such embodiments, filter 542 may provide filters based on URLs.
  • Encryption module 544 is configured to perform encryption operations on data structures generated by NAO 110 and/or documents or transmitted by FRP 120. Encryption module 544 may perform encryption operations by using a key and then transmit the encrypted information over an untrusted network. Encryption module 544 may employ two types of encryption, symmetric and asymmetric. In symmetric encryption, encryption module 544 may use the same data key to encrypt and decrypt the information. Various types of symmetric encryption which are known in the art include the Data Encryption Standard (DES), the Improved DES (IDES), and/or the RC-5 algorithm.
  • encryption module 544 may use asymmetric encryption, in which a first key is used to encrypt the information and a second key is used to decrypt the information.
  • the first key is a public key which may be widely known and the second key is a private key which is known only to authorized clients.
  • encryption module 544 uses asymmetric encryption methods including the Diffie-Hellman algorithm.
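The Diffie-Hellman key agreement mentioned above can be sketched in a few lines; the 61-bit Mersenne prime is for illustration only, as real deployments use much larger (2048-bit-plus) groups or elliptic curves:

```python
import secrets

# Toy Diffie-Hellman exchange: two parties derive the same shared secret
# after exchanging only public values over an untrusted network.
p = 2**61 - 1   # a known (Mersenne) prime modulus; illustrative size only
g = 2           # generator

a_secret = secrets.randbelow(p - 2) + 1   # client's private key
b_secret = secrets.randbelow(p - 2) + 1   # server's private key

A = pow(g, a_secret, p)   # public values sent over the network
B = pow(g, b_secret, p)

key_client = pow(B, a_secret, p)   # g^(ab) mod p, computed by the client
key_server = pow(A, b_secret, p)   # g^(ab) mod p, computed by the server
print(key_client == key_server)    # -> True
```

The shared value would then seed a symmetric cipher for the bulk of the traffic, which is why the text pairs symmetric and asymmetric encryption.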
  • Request bouncer 546 is configured to identify client requests with a high threat level and isolate or reroute them outside a protected area. For example, request bouncer 546 may associate addresses or locations with potential threats and assign a high threat level.
  • request bouncer 546 may decline to process the request or route it to a sandboxed server or processor (e.g., a secure enclave) that can process the request without exposing other elements in the network.
  • request bouncer 546 may generate and enforce sandboxed environments to test suspicious client requests and/or detect behaviors of unknown threats.
  • request bouncer 546 may generate and manage virtual machine environments with multiple operating systems where client requests are partially processed and monitored to identify probes of unique files, apps, and software, and firewall behavior.
  • request bouncer 546 designs virtual environments that mimic certain protected domains to catch threats or assign a threat level.
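The threat-level assignment and rerouting performed by request bouncer 546 can be sketched as follows; the address prefixes, labels, and route names are hypothetical stand-ins:

```python
def route_request(request, blocklisted_prefixes=("203.0.113.",)):
    """Assign a threat level from the request's source address and route
    high-threat requests to a sandbox instead of the protected servers.
    (203.0.113.0/24 is a documentation-only range, used here as a toy
    blocklist.)"""
    src = request["src_ip"]
    threat = "high" if any(src.startswith(p) for p in blocklisted_prefixes) \
        else "low"
    destination = "sandbox" if threat == "high" else "protected"
    return {"threat": threat, "route": destination}

print(route_request({"src_ip": "203.0.113.7"}))   # sent to the sandbox
print(route_request({"src_ip": "198.51.100.9"}))  # sent to protected servers
```

A production bouncer would combine many signals (geolocation, request behavior, reputation feeds) rather than a static prefix list, and the sandbox route would lead to the isolated virtual environments described above.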
  • Virus scanner 548 includes anti-virus scanning capabilities and is configured to identify malicious software and/or applications. Virus scanner 548 may be adapted for scanning for known types of security events in the form of malicious programs such as viruses, worms, and Trojan horses. Alternatively, or additionally, virus scanner 548 may be adapted for content scanning to enforce an organization's operational policies. For example, virus scanner 548 can be configured to detect harassing or pornographic content, junk e-mails, misinformation (virus hoaxes), or the like. In some embodiments, virus scanner 548 may employ signature files and other related control information that may be stored on a non-volatile solid state memory (i.e. FLASH RAM).
  • SPF memory 550 includes one or more storage devices configured to store instructions used by SPF processors 540 to perform operations related to disclosed embodiments.
  • SPF memory 550 stores SPF programs 552 with instructions to configure SPF processors 540 to perform operations related to assigning threat levels, identifying suspicious client requests, and isolating them.
  • SPF memory 550 also includes security data 554, which stores identified threats and/or training data for ML models that help identify malicious requests.
  • security data 554 can store historic data of threats or suspicious activity.
  • security data 554 may store data to generate an assignment of threat level and information to route client requests to BIND servers for security screening.
  • SPF memory 550 stores sets of instructions for carrying out processes to handle client requests, further described below in connection with FIG. 22.
  • SPF 140 may be implemented in hardware, software, or a combination of both, as will be apparent to those skilled in the art.
  • While one or more components of SPF 140 may be implemented as computer processing instructions embodied in computer software, some or all of the functionality of SPF 140 may be implemented in dedicated hardware.
  • Databases 180 includes a communication device 602, one or more database processors 604, and a database memory 610 including one or more database programs 612 and data 614.
  • databases 180 take the form of servers, general-purpose computers, mainframe computers, or any combination of these components. Other implementations consistent with disclosed embodiments are possible as well.
  • Communication device 602 is configured to communicate with one or more components of system 100, such as online resources 190, BW optimization system 105, NAO 110, FRP 120, and/or client devices 150.
  • communication device 602 may be configured to provide sample packets, data to handle client requests, or bandwidth metrics from network 170 to NAO 110.
  • Communication device 602 may be configured to communicate with other components as well, including, for example, SPF 140 (FIG. 5).
  • Communication device 602 may take any of the forms described above for communication device 210 (FIG. 2).
  • Database processors 604 and database memory 610 may take any of the forms described above for NAO processors 230 and NAO database 220 respectively (FIG. 2).
  • the components of databases 180 may be implemented in hardware, software, or a combination of both hardware and software.
  • While one or more components of databases 180 may be implemented as computer processing instruction modules, all or a portion of the functionality of databases 180 may be implemented instead in dedicated electronics hardware.
  • Data 614 is data associated with traffic reported by port sensors (i.e., sensors programmed at ports of networked devices) configured in ports of routing devices in network 170.
  • Data 614 may include, for example, information relating to client requests, data transmitted in response to client requests, and network metrics during exchanges with clients.
  • Data 614 may also include training datasets for training ML models related to the selection of compression tools, optimized communication pathways, and data that is being transmitted to clients in response to their requests.
  • Client device 150 includes one or more processors 702, one or more input/output (I/O) devices 704, one or more memories 710, and an imaging device such as a camera 720.
  • client device 150 takes the form of a mobile computing device such as a smartphone or tablet, a general-purpose computer, or any combination of these components. Further, client device 150 may take the form of a wearable device, such as smart glasses or a smartwatch.
  • client device 150 may be configured as a particular apparatus, embedded systems, dedicated circuit, and the like based on the storage, execution, and/or implementation of the software instructions that perform one or more operations consistent with the disclosed embodiments.
  • client device 150 includes a web browser or similar application that can access websites consistent with disclosed embodiments.
  • Processor 702 may include one or more known processing devices, such as mobile device microprocessors manufactured by Intel™, NVIDIA™, or various processors from other manufacturers. The disclosed embodiments are not limited to any specific type of processor configured in client device 150.
  • Memory 710 includes one or more storage devices configured to store instructions used by processor 702 to perform functions related to disclosed embodiments.
  • memory 710 may be configured with one or more software instructions, such as programs 712 that perform operations when executed by processor 702.
  • the disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks.
  • memory 710 may include a single program 712 that performs the functions of the client device 150, or program 712 may include multiple programs.
  • Memory 710 also stores data 716 that is used by one or more programs in BW optimization system 105.
  • Memory 710 includes instructions to transmit requests for data from elements of system 100.
  • memory 710 may store instructions that configure processor 702 to transmit client requests for streamed data or files from servers coupled to network 170.
  • memory 710 stores HTTP cookies (also called web cookie, Internet cookie, browser cookie, or simply cookie).
  • cookies may include a small piece of data that helps NAO 110, or other element of BW optimization system 105, to evaluate how efficiently a client request was addressed and BW consumption associated with the client request.
  • Monitoring application 714 configures processor(s) 702 to communicate with BW optimization system 105 or self-identify poor bandwidths to communicate them to NAO 110. For instance, through monitoring application 714, client devices 150 may communicate feedback to update BW optimization system 105. In such embodiments, monitoring application 714 may generate HTTP requests or other TCP/IP packets directed to BW optimization system 105 with user-collected information or requests.
  • I/O devices 704 include one or more devices configured to allow data to be received and/or transmitted by client devices 150 and to allow client devices 150 to communicate with other machines and devices, such as other components of system 100.
  • I/O devices 704 may include a screen for displaying optical payment methods such as Quick Response Codes (QR) or providing information to the user.
  • I/O devices 704 may also include components for Near Field Communication (NFC) communication.
  • I/O devices 704 may also include one or more digital and/or analog devices that allow a user to interact with client devices 150 such as a touch-sensitive area, buttons, or microphones.
  • I/O devices 704 may also include one or more accelerometers to detect the orientation and inertia of client device 150.
  • client device 150 may be implemented in hardware, software, or a combination of both hardware and software.
  • FIG. 8 is a diagram of an exemplary communication network 800, consistent with disclosed embodiments.
  • network 170 may be implemented with the topology shown in FIG. 8.
  • elements of network 800 may connect to network 170.
  • network 800 includes Firewall/Routers 802.
  • the firewall/routers 802 may include multiple firewall/routers, like firewall/router 802(a) and firewall/router 802(z).
  • Firewall/Routers 802 include networking devices that forward data packets between computer networks. Firewall/Routers 802 monitor and control incoming and outgoing network traffic based on predetermined security rules and establish barriers between networks.
  • firewall/routers 802 can include both physical and virtual ports, serial ports, parallel ports, and wireless ports, which may be monitored or controlled by BW optimization system 105 (FIG. 1).
  • BW optimization system 105 can generate port sensors in ports of firewall/routers 802.
  • firewall/routers 802 may be configured remotely using, for example, APIs to setup networking conditions or configurations — such as opening ports, routing data to specific nodes, creating communication scenarios, or programming port sensors.
  • Network 800 also includes switches 806.
  • Switches 806, like firewall/routers 802, may include multiple switches 806 including switch 806(a) and switch 806(z).
  • Switches 806 include networking hardware that connects devices on a computer network by using packet switching to receive and forward data to the destination device.
  • Switches 806 may have multiport network bridges to forward data at the data link layer. Some of switches 806 may also forward data at the network layer by additionally incorporating routing functionality. Similar to firewall/routers 802, switches 806 may be programmed by BW optimization system 105 to route data according to ML optimization models and/or report sample packets for collection of training datasets.
  • Network 800 also includes WiFi access points 808.
  • Network 800 includes multiple WiFi access points 808, like WiFi access point 808(a) and WiFi access point 808(z), interfacing firewall/routers 802 and/or switches 806 with other elements of network 800 that access network 170 wirelessly.
  • WiFi access points 808 may also be programmable to report traffic data and/or sample packets to BW optimization system 105. Further, WiFi access points 808 may be configured to report data at ports in different layers and report energy consumption.
  • servers 804 connect to network 800 through switches 806 or firewall/routers 802.
  • Servers 804 may include server 804(a) and server 804(z). Each one of servers 804 may be assigned for specific tasks. For example, while server 804(a) may be assigned to handle data streaming, server 804(z) may be configured to handle security analysis.
  • servers 804 are configurable to monitor traffic in each one of their ports, both physical and virtual ports, and report data to BW optimization system 105.
  • servers 804 are configurable to report networking metrics like perceived speed of connection, usage rate, and/or traffic congestion.
  • servers 804 are employed to perform ML analysis or training.
  • server 804(a) may be configured to employ ML data to train predictive models. Additionally, in some embodiments servers 804 are configured to store data that is accessed by client devices 150. For example, servers 804 may perform one or more functions of FRP 120 (FIG. 3).
  • Radio towers 810 may include cell sites, cell towers, or cellular bases. Additionally, or alternatively, radio towers 810 may include HF/shortwave antennas or other types of antennas. Further, radio towers 810 may also include 5G antennas. In some embodiments, as further discussed in connection with FIG. 9, radio towers 810 are configurable through remote applications and may include backup systems. Moreover, radio towers 810 may be configurable to report traffic, network metrics (such as upload and download speeds), and sample packets. Further, radio towers 810 are configurable to route packets along a specific communication path according to communication scenarios. In some embodiments, radio towers 810 are configured to transmit data in multiple frequencies, different modulations, and predetermined power. Moreover, radio towers 810 may also be configurable to be turned off remotely.
  • Network 800 may also include IP telephones 812 and networked printers 814.
  • IP telephones 812 and networked printers 814 may be configured to report traffic data to BW optimization system 105 and/or perceived network conditions to inform training datasets for ML optimization.
  • network 800 may also include multiple client devices 150, including client devices 150(a)-150(z).
  • client devices 150 include monitoring applications that transmit information to BW optimization system 105 that informs the status of the network.
  • client devices 150 may be programmed through cookies to send network information to BW optimization system 105 periodically or in response to specific events.
  • BW optimization system 105 configures port sensors at ports of client devices 150.
  • FIG. 9 is a block diagram of an exemplary wireless network 900, consistent with disclosed embodiments.
  • Wireless network 900 may be complementary to network 800 (FIG. 8).
  • network 900 may service similar client devices 150 through an alternative medium.
  • network 900 includes BW optimization system 105 interfacing other elements of network 900 with network 170.
  • network 900 may couple elements of the network in star connections without requiring an interface to network 170.
  • Network 900 includes a plurality of nodes 902 that connect directly or indirectly with BW optimization system 105, network 170, and/or gateways 920.
  • the nodes 902 may be distributed throughout a geography to reach users in different locations, as is done in cellular networks.
  • Network 900 may include multiple nodes 902 including node 902(a), node 902(b), node 902(c), node 902(y), and node 902(z).
  • Gateways 920 connect one or more devices in network 900.
  • Gateways 920 may include multiple gateways like gateway 920(a) and gateway 920(z).
  • Gateways 920 route packets from a wireless LAN to another network, such as a wired or wireless WAN.
  • Gateways 920 may be implemented as software or hardware or a combination of both.
  • Gateways 920 combine the functions of a wireless access point and a router, and often provide firewall functions as well. Further, gateways 920 may provide network address translation (NAT) functionality and dynamic host configuration protocol (DHCP) functionality to assign IP addresses.
  • In some embodiments, gateways 920 are configured to report sample packets and traffic information to BW optimization system 105 to generate training datasets for training and validation of ML models.
  • Additionally, BW optimization system 105 may program port sensors at ports of gateways 920.
  • Further, gateways 920 may be configured remotely to, for example, assign communication nodes, select specific compression tools, and store information that is frequently accessed by client devices 150.
  • In some embodiments, each one of nodes 902 includes at least one of antennas 904, a processing computer 906, and sensors 910.
  • Each of nodes 902(a)-902(z) includes a corresponding antenna 904(a)-904(z).
  • Antennas 904 may include any antenna for telecommunication including short wave, radio, long wave, or satellite communications, among others.
  • Each of nodes 902(a)-902(z) may include a corresponding processing computer 906(a)-906(z), which may include computers, mainframes, and/or servers that control communications in the nodes 902.
  • For example, processing computers 906 may process received information and transmit responses to client devices 150.
  • Further, processing computers 906 may be configured to store information that is frequently requested by client devices 150 accessing network 900 through a respective node.
  • For example, processing computers 906 may be configured to operate as FRP 120 (FIG. 3) by storing data that BW optimization system 105 identifies as frequently accessed by users of one of nodes 902.
  • Additionally, processing computers 906 may include backup systems configured to turn off based on environmental conditions.
  • Processing computers 906 control node parameters 908, where each of nodes 902(a)-902(z) includes a corresponding node parameters 908(a)-908(z). For example, processing computers 906 may determine the frequency, modulation, and cached data that each of nodes 902 will be using. In some embodiments, processing computers 906 adjust the selected frequency and power of transmission. Processing computers 906 may also select different modulation methods based on optimized protocols or processes, as identified by BW optimization system 105. In some embodiments, processing computers 906 are configured remotely to, also remotely, adjust node parameters 908.
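The remote adjustment of node parameters 908 (frequency, modulation, and transmission power) might be sketched as follows. The class, default values, and allowed modulation set are all invented for illustration; they are not taken from the disclosure:

```python
# Hypothetical node-parameter store; field names and values are illustrative.
DEFAULT_PARAMS = {"frequency_mhz": 2412.0, "modulation": "QPSK", "power_dbm": 20.0}

class NodeController:
    """Minimal stand-in for a processing computer 906 managing node parameters 908."""
    ALLOWED_MODULATIONS = {"BPSK", "QPSK", "16QAM", "64QAM"}

    def __init__(self, node_id, params=None):
        self.node_id = node_id
        self.params = dict(params or DEFAULT_PARAMS)

    def adjust(self, **changes):
        """Apply a remote parameter update, validating each field before use."""
        for key, value in changes.items():
            if key not in self.params:
                raise KeyError(f"unknown parameter: {key}")
            if key == "modulation" and value not in self.ALLOWED_MODULATIONS:
                raise ValueError(f"unsupported modulation: {value}")
            self.params[key] = value
        return dict(self.params)

# A peer node (or BW optimization system 105) adjusting node 902(a) remotely.
node_a = NodeController("902(a)")
updated = node_a.adjust(modulation="64QAM", power_dbm=17.0)
```

The validation step mirrors the idea that remotely applied configurations should stay within known-good ranges before a node commits them.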
  • In some embodiments, one of the nodes of network 900 may configure the parameters of another node (e.g., node 902(a)) by communicating with processing computer 906(a) to adjust node parameters 908(a).
  • In some embodiments, processing computers 906 are configured to periodically report node parameters 908 to BW optimization system 105 along with traffic data, sample packets, and/or network metrics.
  • Each of nodes 902(a)-902(z) also includes corresponding sensors 910(a)-910(z).
  • Sensors 910 include environmental sensors and sensors capable of measuring location, position, rain, noise, gas levels, electromagnetism, and other atmospheric conditions at the node. Further, sensors 910 may also measure and report temperature, humidity, light, noise, air pressure, and air quality. Additionally, sensors 910 may include temperature sensors, humidity sensors, or electromagnetic noise sensors. Sensors 910 may be configured to report measured data to BW optimization system 105, through processing computer 906 or directly through network 900.
  • Network 900 may also include multiple client devices 150, including client devices 150(a)-150(z).
  • In some embodiments, client devices 150 include monitoring applications that transmit information to BW optimization system 105 that informs the status of the network.
  • Additionally, client devices 150 may be programmed through cookies to send network information to BW optimization system 105 periodically or in response to specific events.
  • FIG. 10 is a block diagram of an exemplary Al system 1000 for network bandwidth optimization, consistent with disclosed embodiments.
  • Al system 1000 shows BW optimization system 105 creating an interface between network 170 with databases and processing systems that generate and evaluate ML models for BW optimization. In some embodiments, however, one or more elements of Al system 1000 are part of BW optimization system 105.
  • Al system 1000 includes one or more databases (DB) & network attached storages (NAS) 1012 which are configured to store, categorize, label, and/or provide data to BW optimization system 105 during network configurations, ML training and validation, and handling client requests.
  • Al system 1000 also includes a Core Al and ML module 1010.
  • Module 1010 includes hardware or software that coordinates network configurations based on ML models to optimize BW utilization.
  • Module 1010 may be part of BW optimization system 105, e.g., as a logical partition of BW optimization system 105 or a virtual machine running in BW optimization system 105.
  • Module 1010 may configure communication pathways to handle client requests, select and configure compression tools for communications, and identify data for storage arrangement of data for clients.
  • In some embodiments, module 1010 is coupled to client devices 150.
  • For example, client devices 150 may send a request to BW optimization system 105 through module 1010, which acts as a DNS server.
  • Module 1010 would then analyze the client request to assign it to a category that allows determining optimization parameters.
  • In some embodiments, module 1010 is configured to employ clustering algorithms using headers or payloads in client requests to determine a client request category.
  • The client request category may be, for example, a request for video, a request for audio, webpage navigation, or a form request. Additionally, or alternatively, the request categories may include a denial of service category.
  • In some embodiments, machine-learning models employed and/or trained by BW optimization system 105 configure routers to route client requests associated with the denial of service category to a secure networked server, as further discussed in connection with FIGs. 11 and 12.
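Header-based categorization of client requests, including the denial of service category, could be sketched as below. This is a rule-based approximation of the clustering described above; the header names ("accept", "method"), the request-rate threshold, and the category labels are all assumptions made for illustration:

```python
from collections import Counter

CATEGORIES = ("video", "audio", "webpage", "form", "denial_of_service")

def categorize(request, recent_sources):
    """Assign a request category from headers, flagging bursty sources as DoS."""
    # A source issuing an outsized share of recent requests is treated as a
    # potential denial-of-service and can be routed to a secure networked server.
    if recent_sources[request["source"]] > 100:
        return "denial_of_service"
    accept = request.get("accept", "")
    if accept.startswith("video/"):
        return "video"
    if accept.startswith("audio/"):
        return "audio"
    if request.get("method") == "POST":
        return "form"
    return "webpage"

recent = Counter({"10.0.0.5": 3, "10.0.0.9": 250})
cat_video = categorize({"source": "10.0.0.5", "accept": "video/mp4"}, recent)
cat_dos = categorize({"source": "10.0.0.9", "accept": "text/html"}, recent)
```

A production version would replace these hand-written rules with the clustering model over header/payload features that the description contemplates; the routing decision downstream would be the same.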
  • Module 1010 is connected to DB & NAS 1012(b), which is assigned for software optimization.
  • Module 1010 is coupled with a ML software (SW) optimization module 1020, which generates models to predict best SW configuration for networked devices based on client request categories or scenarios.
  • For example, module 1020 may identify protocols or software tools that result in better transfer rates than others.
  • Additionally, module 1020 may identify compression tools that have the highest performance given a client request category.
  • SW optimization module 1020 may also include interfacing modules to program networked devices through APIs and/or by installing selected firmware.
  • Core Al and ML module 1010 is also connected to Al transfer modules 1022 and compression tools 1024.
  • Al transfer modules 1022 determine optimized transmission parameters for packets (e.g., such as TTL or payload size) based on historic communications.
  • Compression tools 1024 include tools and processing capability as described for CS 130 (FIG. 4).
  • In some embodiments, module 1010 is coupled to a faster response and cached data module 1026.
  • Module 1026 includes data for transmission to client devices 150.
  • In some embodiments, module 1026 is implemented as, and performs the functions described for, FRP 120 (FIG. 3).
  • In system 1000, in response to a client request, module 1010 identifies a request category and uses models and/or information to configure hardware and software with optimized parameters to maximize BW during communication exchanges. This information may then be transmitted to BW optimization system 105, which stores the optimized configurations for future communications and interfaces with network 170. Accordingly, system 1000 can improve the overall BW performance of the network by optimizing both the HW and SW configuration of networked devices in a network.
  • FIG. 11 is a block diagram of an exemplary threat management system 1100 implemented with a BIND server, consistent with disclosed embodiments.
  • In some embodiments, data from network 170 that is incoming to a protected client network 1104 is first filtered by a filter 1102, which is configured to route unexpected or unfamiliar data packets to a security enclave 1106.
  • For example, filter 1102 may implement whitelist/blacklist filtering of packets through port-based filtering rules.
  • In some embodiments, filter 1102 selects packets based on their headers or location to route them either directly to client network 1104 or to security enclave 1106.
  • In some embodiments, filter 1102 monitors all ports in a network that receive external data, for example through port sensors configured by NAO 110 (FIG. 2).
  • Filter 1102 may then reroute traffic based on collected data or a threat analysis. Moreover, filter 1102 may employ compression tools and/or compression engine 436 (FIG. 4) when transmitting data to client network 1104 or secure enclave 1106.
  • In some embodiments, filter 1102 routes the packet to secure enclave 1106, where the packet is analyzed by ML threat analyzer 1110.
  • ML threat analyzer 1110 employs ML methods to associate packets with a threat level.
  • For example, ML threat analyzer 1110 can employ training datasets to correlate the likelihood of malicious behavior of requests based on previous requests and/or their configuration.
  • Additionally, secure enclave 1106 may use a BIND server 1112 to identify and remove malware and adware that may be in the external communications.
  • BIND server 1112 may also screen for phishing domains.
  • In some embodiments, secure enclave 1106 may also include a DB & NAS 1114, external to client network 1104, that is configured to store previous analyses, identified threats, or communications that should be whitelisted, and to provide this information to filter 1102.
  • BIND server 1112 establishes encrypted connections 1120 with client network 1104, enabling secure transmission of cleared communications or packets to network 1104, which expedites processing of data requests after they have been screened through secure enclave 1106.
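The FIG. 11 flow (familiar traffic forwarded directly, everything else diverted to the enclave and scored) might be sketched as follows. The whitelist, feature weights, and threshold are invented stand-ins for ML threat analyzer 1110, not the disclosed model:

```python
# Toy routing sketch for the filter/enclave pipeline; all values illustrative.
WHITELIST_PORTS = {443, 22}

def threat_score(packet):
    """Assumed feature weighting standing in for ML threat analyzer 1110."""
    score = 0.0
    if packet.get("port") not in WHITELIST_PORTS:
        score += 0.4                     # unusual destination port
    if packet.get("payload_len", 0) > 10_000:
        score += 0.3                     # oversized payload
    if not packet.get("known_source", False):
        score += 0.3                     # source never seen before
    return score

def route(packet, threshold=0.5):
    """Return the destination chosen by the filter/enclave pipeline."""
    if packet.get("known_source") and packet.get("port") in WHITELIST_PORTS:
        return "client_network"          # familiar traffic goes straight through
    if threat_score(packet) >= threshold:
        return "quarantine"              # held for BIND-server style screening
    return "secure_enclave_cleared"      # screened, then relayed over encrypted link

dest_ok = route({"port": 443, "known_source": True, "payload_len": 512})
dest_bad = route({"port": 6667, "known_source": False, "payload_len": 20_000})
```

In the disclosed system the hand-tuned `threat_score` would be replaced by a trained model over historical request data, but the routing skeleton stays the same.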
  • FIG. 12 is a block diagram of an exemplary threat management system 1200 implemented with a cloud server, consistent with disclosed embodiments.
  • In system 1200, a cloud server 1210 interfaces a client network 1230 and network 170.
  • Cloud server 1210 may be configured to implement certain policies, technologies, applications, and controls to protect, analyze, and filter data, applications, and/or services before reaching client network 1230.
  • Such configuration allows preventive measures to avoid or minimize the impact of cyber-attacks in client network 1230. For example, by isolating security and filtering functions outside any element of the client domain, it is possible to shield client devices in client network 1230.
  • In some embodiments, cloud server 1210 runs multiple operating systems to test client packets under different environments to identify potential threats.
  • Cloud server 1210 may employ an array of ML algorithms to identify and handle threats.
  • In some embodiments, cloud server 1210 is coupled to a security engine 1204 that analyzes incoming traffic using ML methods that correlate incoming traffic with a threat level.
  • In some embodiments, security engine 1204 is implemented as SBF 140 (FIG. 5), including the virus scanning and filtering modules described in connection with FIG. 5.
  • Further, security engine 1204 is coupled to an ML security trainer 1202 that trains ML models, such as random forests or CNNs, based on data that has been collected and curated in a DB & NAS 1214 that is also coupled to cloud server 1210.
  • For example, traffic directed to client network 1230 may be copied in DB & NAS 1214, which can later be accessed by ML trainer 1202 to generate models that correlate incoming traffic with threat levels based on the historic behavior of similar client requests or network 170 communications.
  • Cloud server 1210 may create an encrypted link 1220 with client network 1230 and transmit or relay data or request incoming from network 170.
  • Client network 1230 is coupled to a local security server 1232 that performs additional analysis of incoming traffic (e.g., to enforce local rules controlled by a local administrator) and a networked device 1234 that may route communications.
  • In some embodiments, ports of networked device 1234 may be configured to report sample traffic packets and/or traffic metrics to BW optimization system 105, which may be part of client network 1230 or, in some embodiments, may be external and run in cloud server 1210.
  • Client network 1230, networked device 1234, and local security server 1232 are coupled to client devices 150.
  • In some embodiments, client devices 150 report bandwidth utilization and perceived traffic speed to client network 1230 and/or cloud server 1210.
  • This client-generated information is stored in DB & NAS 1214 for future analysis and to generate ML models that improve network BW (e.g., ML trainer 1202 may use client-generated data as part of the ML models).
  • FIG. 13 is a flow chart illustrating an exemplary process 1300 for optimized network configuration, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 may perform process 1300.
  • For example, BW optimization system 105 may perform process 1300.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1300.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1300, or parts of process 1300.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1300.
  • For example, module 1010 (FIG. 10) may perform process 1300.
  • In some embodiments, BW optimization system 105 is connected to a target network.
  • For example, BW optimization system 105 may get connected to a server in network 800, a node in network 900, and/or a switch or other networked device.
  • In step 1304, BW optimization system 105 detects ports and network devices in the target network. For example, using UDP broadcasting signals, BW optimization system 105 may identify devices that are active in the network and both physical and virtual ports that are being actively employed for communications.
  • In step 1306, BW optimization system 105 transmits information discovered or collected in step 1304 to a network analyzer for recognition.
  • For example, BW optimization system 105 may transmit network information to NAO 110 to analyze the network configuration in step 1306.
  • Further, NAO 110 may be configured to access the discovered networked devices through APIs that allow BW optimization system 105 to program networked devices to report port traffic for diagnosis and optimization.
  • In step 1308, BW optimization system 105 performs an optimization of network parameters based on client workflow. For example, as further described in connection with FIGs. 14-16, BW optimization system 105 may optimize communication pathways, the selection of compression tools, and the use of cache memory based on client request categories and typical response flows. In some embodiments, optimizations in step 1308 include using ML models that correlate bandwidth performance with the configuration of devices at the physical layer. In step 1310, BW optimization system 105 stores and/or implements the optimized configuration of the target network.
  • In step 1312, BW optimization system 105 performs a diagnostic of network behavior to identify logic and/or hardware errors. For example, BW optimization system 105 may determine whether one or more of the networked devices detected in step 1304 is consuming more or less power than expected. Alternatively, or additionally, BW optimization system 105 may determine port utilization or speeds (using, for example, sensors programmed in step 1306) to identify anomalies or inconsistencies. In step 1314, BW optimization system 105 determines errors, and in step 1316 BW optimization system 105 registers the network (and identified errors) in a database. This network behavior may be used as training datasets for generating ML models that facilitate determining optimized communication pathways to resolve client requests.
  • Step 1316 initiates an optimization cycle using ML models that adjust network parameters to improve BW utilization and/or network control.
  • BW optimization system 105 performs ML optimization of network control in step 1318.
  • BW optimization system 105 employs ML models trained by ML optimizer 238 (FIG. 2) to identify optimized configurations and efficient communication pathways.
  • Further, BW optimization system 105 captures communication metrics (e.g., bandwidth utilization, average transmission, latency, number and percentage of out-of-order packets, and/or TCP retransmits).
  • In some embodiments, the communication metrics may be stored or registered as training data in step 1316, and this information may then inform ML models with additional training data to be implemented in step 1318, completing one optimization cycle.
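The out-of-order and retransmit metrics named above can be computed directly from a packet log. A minimal sketch, assuming arrival-order sequence numbers and a retransmit count are available from the port sensors:

```python
def communication_metrics(seen_sequence, retransmitted):
    """Compute out-of-order percentage and retransmit rate from a packet log.

    seen_sequence: packet sequence numbers in arrival order.
    retransmitted: count of packets that were sent more than once.
    """
    total = len(seen_sequence)
    # A packet is counted out of order when its sequence number is lower
    # than that of the packet that arrived just before it.
    out_of_order = sum(
        1 for prev, cur in zip(seen_sequence, seen_sequence[1:]) if cur < prev
    )
    return {
        "packets": total,
        "out_of_order_pct": 100.0 * out_of_order / total if total else 0.0,
        "retransmit_rate": retransmitted / total if total else 0.0,
    }

# Packets 3 and 6 arrive late in this trace.
metrics = communication_metrics([1, 2, 4, 3, 5, 7, 6, 8], retransmitted=2)
```

These per-cycle metrics are exactly the kind of entries that could be registered as training data for the next optimization cycle.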
  • In step 1322, BW optimization system 105 generates a first level of intelligent network configuration through a Core Al & ML configuration.
  • For example, module 1010 performs operations to configure network elements based on a categorization of client requests.
  • Responses to client requests may also be configured through second-level processes in step 1324, which may include performing a compression Al to select optimized compression algorithms or tools (in step 1326) and a storage Al to identify frequently used data to be stored in edge or cache memories (in step 1328).
  • Second-level processes of step 1324 also include a packet transfer Al that identifies optimized communication pathways (in step 1330), hardware automated programming to configure the hardware of networked devices (in step 1332), and network identification processes to identify anomalies and/or deficient hardware in a network (in step 1334).
  • FIG. 14 is a flow chart illustrating an exemplary process 1400 for storing client requests and training datasets, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 perform process 1400.
  • For example, BW optimization system 105 may perform process 1400.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1400.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1400, or parts of process 1400.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1400.
  • For example, module 1026 may perform process 1400.
  • In step 1402, BW optimization system 105 receives a client request.
  • BW optimization system 105 may receive a request for data or a service from one or more client devices 150.
  • BW optimization system 105 may receive a copy of a client request received by one of servers in a network.
  • In step 1404, BW optimization system 105 assigns a request category to the request received in step 1402 based on packet headers, source, and/or payload. For example, BW optimization system 105 may assign a category to the request based on option selections in packet headers associated with the request or the type of content that is being requested. In some embodiments, BW optimization system 105 may create categories for video requests, audio requests, and/or torrent requests.
  • In step 1406, BW optimization system 105 determines whether a previous request shares the same category and/or is similar. For example, BW optimization system 105 may compare the request of step 1402 and the category assigned at step 1404 to determine if similar requests and categories have been handled before. If BW optimization system 105 determines that previous requests share the same category and/or are similar, BW optimization system 105 continues to step 1410 and provides the assignment information to a Core Al & ML, such as module 1010 (FIG. 10), to respond to the request with an optimized configuration. However, if BW optimization system 105 determines that no previous request shares the same category and/or is similar, BW optimization system 105 continues to step 1412. Further, concurrently with the determinations of step 1406, BW optimization system 105 may store the request in a database in step 1408 for future training of ML models.
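The branch just described (reuse a known configuration for a seen category, otherwise label and store the request) can be sketched as a small lookup. The class name and return labels are hypothetical:

```python
# Sketch of the step 1406 branch: if a similar request category was handled
# before, hand off to the Core AI & ML with the stored record; otherwise
# label and store the request for later ML training (step 1412).
class RequestHistory:
    def __init__(self):
        self.by_category = {}   # category -> stored record for reuse

    def handle(self, request_id, category):
        if category in self.by_category:
            # Step 1410: category seen before, reuse the stored record.
            return ("core_ai_ml", self.by_category[category])
        # Step 1412: first sighting; record it for future training datasets.
        self.by_category[category] = {"request_id": request_id, "config": None}
        return ("label_and_store", None)

history = RequestHistory()
first = history.handle("req-1", "video")    # new category -> stored
second = history.handle("req-2", "video")   # seen category -> reused
```

In practice the stored record would hold the optimized configuration produced when the first request of that category was resolved.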
  • In step 1412, BW optimization system 105 labels and stores information of the client request and assignment in a database.
  • For example, BW optimization system 105 may store client requests in a database and include metadata with the assigned category, time stamps, and sourcing information.
  • In step 1414, BW optimization system 105 identifies network and browsing patterns associated with the client request and estimates performance statistics.
  • BW optimization system 105 may employ pattern analysis techniques to analyze exchanges between networked devices.
  • BW optimization system 105 may measure network performance associated with the client request. For example, BW optimization system 105 may determine bit error rates, BW utilization, and/or repeated communications associated with handling the request.
  • BW optimization system 105 updates a database and a training dataset with the client request, the observed network metrics, and statistical characterizations of step 1414. For instance, BW optimization system 105 may generate one or more entries in a database to include information about the client request and the network performance when responding to the client request. As BW optimization system 105 produces the analysis, BW optimization system 105 may store the analysis in a database or training dataset in step 1418.
  • BW optimization system 105 provides the updated training dataset and observed metrics to a core Al & ML module, like module 1010. Alternatively, or additionally, BW optimization system 105 may provide the training dataset to NAO 110 (e.g., to ML training data 224) to generate additional ML models that optimize network configurations.
  • FIG. 15 is a flow chart illustrating an exemplary process 1500 for selecting a data compression tool for optimized bandwidth, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 perform process 1500.
  • For example, BW optimization system 105 may perform process 1500.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1500.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1500, or parts of process 1500.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1500.
  • For example, compression tools 1024 may perform process 1500.
  • BW optimization system 105 receives a client request or a copy of a client request.
  • BW optimization system 105 employs ML models to identify compression tool(s) based on the client request. For example, BW optimization system 105 may correlate the client request with a category and identify a compression tool with the lowest BW utilization based on ML compression selector 434 (FIG. 4). Concurrently, BW optimization system 105 queries a database to identify compression tools available for the client request in step 1506.
  • In step 1508, BW optimization system 105 determines whether there are any previous compressed responses to the client request. For example, when streaming data, multiple clients may request similar files. In such embodiments, it may be possible to recycle compressed folders for transmission, minimizing response latency. If BW optimization system 105 identifies previous compressed responses to the client request (step 1508: Yes), BW optimization system 105 provides the compressed response and/or the compression tool to a core Al & ML module, like module 1010 (FIG. 10), in step 1512. However, if BW optimization system 105 does not identify previous compressed responses to the client request (step 1508: No), BW optimization system 105 continues to step 1514. Further, concurrently with the determinations in step 1508, BW optimization system 105 queries a NAS and/or Web cache in step 1510 to identify previous responses to client requests.
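Recycling compressed responses can be sketched with a simple keyed cache. Here `zlib` stands in for whichever compression tool the selector would pick; the cache key scheme is an assumption:

```python
import zlib

# Sketch of the step 1508 reuse path: identical payloads requested by multiple
# clients are compressed once and then served from cache on later requests.
_compressed_cache = {}

def compressed_response(request_key, payload: bytes):
    """Return (compressed_bytes, cache_hit) for a client request."""
    if request_key in _compressed_cache:
        return _compressed_cache[request_key], True    # recycle prior work
    blob = zlib.compress(payload, 6)                   # level 6: speed/size balance
    _compressed_cache[request_key] = blob
    return blob, False

body = b"stream-chunk " * 1000
blob1, hit1 = compressed_response("video/episode-1/chunk-7", body)
blob2, hit2 = compressed_response("video/episode-1/chunk-7", body)
```

The second call skips compression entirely, which is the latency saving the description attributes to recycling compressed folders.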
  • In step 1514, BW optimization system 105 provides shortlisted compression tools to the Core Al & ML. For example, based on results from ML compression selector 434 (FIG. 4), BW optimization system 105 may provide shortlisted compression tools to NAO 110 or Core Al & ML module 1010. In response, in step 1516, BW optimization system 105 receives compression parameters that are used to select a compression tool for client requests in step 1518.
  • In step 1520, BW optimization system 105 updates a database with a new or modified entry associating the client request with a selected compression tool.
  • In step 1522, BW optimization system 105 stores the compression information for the client request in a database.
  • FIG. 16 is a flow chart illustrating an exemplary process 1600 for network optimization based on network scenarios, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 perform process 1600.
  • For example, BW optimization system 105 may perform process 1600.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1600.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1600, or parts of process 1600.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1600.
  • For example, module 1014 and module 1016 may perform process 1600.
  • BW optimization system 105 receives a client request.
  • In step 1604, BW optimization system 105 evaluates performance of HW, physical, and/or virtual ports based on objective metrics, such as manufacturer metrics or consensus metrics for similar networks. For example, in step 1604 BW optimization system 105 may identify networked devices and evaluate their performance based on their transfer rates, utilization rates, and power consumption.
  • BW optimization system 105 employs a model (such as a random forest or a CNN) to define a communication pathway for the client request. For example, BW optimization system 105 may select nodes or routers that more efficiently handle the client requests.
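As a concrete stand-in for the model-driven pathway choice, the selection of efficient nodes and routers can be illustrated with a shortest-path search over per-link costs (for example, the inverse of observed transfer rates). The topology and cost values below are invented; the disclosure's actual selection would come from the trained model:

```python
import heapq

def best_path(links, source, target):
    """Dijkstra over a dict {node: {neighbor: cost}}; returns (cost, path)."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in links.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Illustrative link costs, e.g. derived from measured transfer rates.
links = {
    "client": {"node_a": 1.0, "node_b": 4.0},
    "node_a": {"router": 2.0},
    "node_b": {"router": 1.0},
    "router": {"server": 1.0},
}
cost, path = best_path(links, "client", "server")
```

An ML-driven variant would keep this search but let the model supply or reweight the link costs per request category instead of using static measurements.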
  • BW optimization system 105 employs a model to select a compression tool, or compression tools, for handling the client request. For example, BW optimization system 105 may use ML compression selector 434 to identify one or more of compression tools 424 (FIG. 4) that can be used to handle the client request.
  • BW optimization system 105 employs a model to select or identify predicted frequently accessed data for the client request. For instance, BW optimization system 105 may identify data that is frequently accessed by similar client requests based on predictive ML module 344 and store the selected information in FRP cached data 354 (FIG. 3).
  • BW optimization system 105 optimizes the balance of compression, storage, and pathway for the client request. For example, based on the models, BW optimization system 105 may select a combination of compression tool, storage at the cache memories, and communication pathways that would likely result in the lowest BW consumption to handle the client request. Further, in step 1614, BW optimization system 105 performs network analysis to optimize selected hardware, compression tools, and stored data.
  • In step 1616, BW optimization system 105 generates a network scenario based on client data and the selected compression, storage, and communication pathway. For example, BW optimization system 105 generates a scenario in a database that correlates the client request (or request category) with a selected combination of compression, storage, and pathway selection.
  • BW optimization system 105 updates a training dataset to include the network scenario of step 1616.
  • Further, BW optimization system 105 writes in the database the network scenario and/or a modified training dataset.
  • For example, BW optimization system 105 may store a plurality of scenarios for each of the request categories, the plurality of scenarios including a network status, a default compression tool, default cached data, and a default communication path.
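The scenario record just described (network status, default compression tool, default cached data, default communication path, keyed by request category) might take a shape like the following. Field names mirror the description; the concrete values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkScenario:
    """Hypothetical shape of a stored network scenario (step 1616)."""
    request_category: str
    network_status: str
    default_compression_tool: str
    default_cached_data: list = field(default_factory=list)
    default_communication_path: list = field(default_factory=list)

scenarios = {}

def register_scenario(scenario: NetworkScenario):
    """Index scenarios by request category so later requests can reuse them."""
    scenarios.setdefault(scenario.request_category, []).append(scenario)

register_scenario(NetworkScenario(
    request_category="video",
    network_status="congested",
    default_compression_tool="h264-recompress",
    default_cached_data=["popular-episode-chunks"],
    default_communication_path=["client", "node_a", "router", "server"],
))
```

Keeping multiple scenarios per category (e.g. one per network status) lets the system pick a default configuration as soon as a request is categorized, before any per-request optimization runs.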
  • FIG. 17 is a flow chart illustrating an exemplary process 1700 for programming network devices with optimized bandwidth, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 perform process 1700.
  • For example, BW optimization system 105 may perform process 1700.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1700.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1700, or parts of process 1700.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1700.
  • For example, module 1018 may perform process 1700.
  • In step 1702, BW optimization system 105 identifies repeated tasks or communications associated with a client request. For example, BW optimization system 105 may identify repeated communications, queries, route calculations, or compressions performed by networked devices when responding to a client request.
  • In step 1704, BW optimization system 105 automates tasks in network devices based on the identified tasks or communications. For example, employing RPA module 236, BW optimization system 105 may automate devices for repeated operations.
  • In step 1706, BW optimization system 105 verifies variables for the network environment and networked devices. For example, in some embodiments, BW optimization system 105 may verify the automated configuration resulting from the automation of step 1704 to confirm it corresponds with ranges provided by manufacturers or client policies. In step 1708, BW optimization system 105 stores communication metrics and verified variables in a database to enhance a training dataset, such as ML training data 224 (FIG. 2). In step 1710, BW optimization system 105 writes in the database the communication metrics and verified variables to include them in a training dataset.
  • In step 1712, BW optimization system 105 compares observed metrics with target and/or optimized metrics. For example, BW optimization system 105 compares network metrics (such as latency, BW utilization, bit error, etc.) with target metrics or the best previous metrics with an optimized configuration. In step 1714, BW optimization system 105 determines whether the observed metrics correspond to target and/or optimized metrics. If BW optimization system 105 determines that the observed metrics do not correspond to target and/or optimized metrics (step 1714: No), BW optimization system 105 continues to step 1716 and generates alerts and/or notifications of the unachieved metrics.
  • If BW optimization system 105 determines that the observed metrics correspond to target and/or optimized metrics (step 1714: Yes), BW optimization system 105 continues monitoring the network performance with the optimized configuration in step 1718 and re-initiates monitoring operations to identify repeated tasks or communications in step 1702.
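The compare-and-alert branch above could be sketched as a single check function. The 10% tolerance and the lower-is-better assumption for every metric are illustrative choices, not disclosed values:

```python
def check_metrics(observed, targets, tolerance=0.10):
    """Compare observed metrics to targets (steps 1712/1714); return alerts.

    A metric passes if it is within `tolerance` (fractional) of its target;
    lower is assumed better for every metric here (latency, error rate, ...).
    """
    alerts = []
    for name, target in targets.items():
        value = observed.get(name)
        if value is None or value > target * (1 + tolerance):
            # Step 1716: observed metric missed its target; raise an alert.
            alerts.append(f"{name}: observed {value}, target {target}")
    return alerts

targets = {"latency_ms": 30.0, "bit_error_rate": 1e-6}
alerts_bad = check_metrics({"latency_ms": 45.0, "bit_error_rate": 5e-7}, targets)
alerts_ok = check_metrics({"latency_ms": 29.0, "bit_error_rate": 9e-7}, targets)
```

An empty alert list corresponds to the "step 1714: Yes" branch, where monitoring simply continues under the optimized configuration.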
  • FIG. 18 is a flow chart illustrating an exemplary process 1800 for diagnosing a network and creating network sensors, consistent with disclosed embodiments.
  • In some embodiments, elements of system 100 perform process 1800.
  • For example, BW optimization system 105 may perform process 1800.
  • In other embodiments, other elements of system 100 may perform one or more steps of process 1800.
  • For instance, databases 180, computing clusters 160, or online resources 190 may perform process 1800, or parts of process 1800.
  • Additionally, or alternatively, system 1000, or parts of system 1000, may perform process 1800.
  • For example, module 1014 and module 1016 may perform process 1800.
  • In step 1802, BW optimization system 105 identifies HW, ports, and/or networked devices. For example, using crawler 232, NAO 110 (FIG. 2) identifies networked devices, ports used during communication, and HW capabilities or configurations.
  • BW optimization system 105 installs firmware for controlling ports and/or networked devices.
  • BW optimization system 105 may install Customizable FOSS firmware, such as OpenWrt , or Early power-boosting firmware, such as HyperWRT.
  • BW optimization system 105 may install firmware including modifying chipset software.
  • BW optimization system 105 may verify or identify firmware versions using “/componentsvers” command lines or device manager operations.
  • BW optimization system 105 updates databases to include the identified devices and ports and writes them to a database in step 1808.
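The device-and-port inventory update of step 1808 can be sketched as a simple merge of newly discovered ports into a per-device record before the record is written to a database. This is a minimal Python illustration; the function and data layout are assumptions, not part of the disclosed embodiments:

```python
# Hypothetical sketch of step 1808: merging newly discovered devices and
# their ports into an inventory keyed by device address.

def update_inventory(inventory, device, ports):
    """Add newly discovered ports for a device to the inventory."""
    inventory.setdefault(device, set()).update(ports)
    return inventory

inventory = {}
update_inventory(inventory, "192.168.1.1", {22, 80})
update_inventory(inventory, "192.168.1.1", {443})  # later discovery pass
update_inventory(inventory, "192.168.1.7", {161})  # SNMP port
```

In practice the inventory would be persisted to a database such as databases 180 rather than held in memory.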
  • BW optimization system 105 recognizes networked devices and identifies associated web interfaces and/or APIs.
  • API configurator 234 (FIG. 2) may identify APIs that enable programming or monitoring networked devices identified in step 1802.
  • BW optimization system 105 communicates with networked devices through web interfaces and/or APIs.
  • BW optimization system 105 may communicate with routers and servers in the network through API calls or requests.
  • APIs used in step 1812 may include REST APIs exposed by an Admin Node Manager that provide a mechanism for routing API requests intended for API servers and for configuring the networked devices.
  • BW optimization system 105 may identify operating systems of the networked devices, install firmware of the networked devices on the at least one processor, configure testing routines under Simple Network Management Protocol (SNMP), and open web interfaces for internet-coupled networked devices.
  • BW optimization system 105 creates network sensors at ports and/or networked devices. For example, based on firmware or API configuration options, BW optimization system 105 may program ports of networked devices in a network, such as network 800 or network 900, to report traffic metrics, send sample packets, and/or reroute traffic to specific nodes as determined by BW optimization system 105.
  • sensors configured in step 1814 may be embedded in a network device (e.g., sensors can be inserted in network devices) and be configured to identify a network port failure.
  • the network sensors may include an independent electrical stability sensor that identifies and/or reports electrical failures.
  • the sensors of step 1814 may also include meteorological sensors that identify or categorize wireless communication conditions. Further, in step 1814 BW optimization system 105 may determine hardware performance for each one of the ports by identifying a port performance and comparing the port performance with manufacturer standards.
  • In step 1816, BW optimization system 105 initiates monitoring and packet collection for training ML models.
  • the training data set is updated in step 1818 and in step 1820 information is updated and restored in a database (e.g., the training dataset is stored in a database such as databases 180).
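The hardware-performance check of step 1814, which compares a port's measured performance with manufacturer standards, could be approximated as follows. The tolerance value and function name are illustrative assumptions, not values from the disclosure:

```python
def port_health(observed_mbps, rated_mbps, tolerance=0.2):
    """Flag a port whose measured throughput falls more than
    `tolerance` below the manufacturer's rated throughput."""
    return "ok" if observed_mbps >= rated_mbps * (1 - tolerance) else "degraded"

# A gigabit port delivering 950 Mbps is within tolerance; 700 Mbps is not.
statuses = [port_health(950, 1000), port_health(700, 1000)]
```

Ports flagged as degraded could then be excluded from optimized communication pathways or reported through the alerting steps described above.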
  • FIG. 19 is a flow chart illustrating an exemplary threat management process 1900, consistent with disclosed embodiments.
  • elements of system 100 perform process 1900.
  • BW optimization system 105 may perform process 1900.
  • SPF 140 (FIG. 4) may perform one or more steps of process 1900.
  • other elements of system 100 may perform one or more steps of process 1900.
  • computing clusters 160 or online resources 190 may perform process 1900, or parts of process 1900.
  • system 1100 and system 1200, or parts of these systems may perform process 1900.
  • filter 1102 or secure enclave 1106 may perform process 1900.
  • BW optimization system 105 receives configuration parameters for local, local bind, and/or cloud servers. For example, BW optimization system 105 may receive client network policies, rules for whitelist or blacklist addresses or sources, and virus scanning methods.
  • BW optimization system 105 establishes encrypted connections between client network, local BIND, and/or cloud server. For example, BW optimization system 105 may establish encrypted connections 1120 with client network 1104 (FIG. 11).
  • BW optimization system 105 initiates training of ML models based on stored client requests. For example, BW optimization system 105 may initiate training of ML threat analyzer 1110 (FIG. 11) to identify threats based on previous attacks or suspicious client requests.
  • BW optimization system 105 receives a client request and validates the request format. For example, filter 1102 may receive a client request through network 170 that is routed to secure enclave 1106 when the sender or the request category is not whitelisted. Additionally, or alternatively, in step 1908 BW optimization system 105 may perform operations to validate the client request, such as verifying checksums and/or correlating payload characteristics with header information.
  • BW optimization system 105 determines whether the client request matches previous requests or categories. To do so, BW optimization system 105 queries a database to identify previous client categories. If BW optimization system 105 determines that the client request matches previous requests or categories (step 1912: Yes), BW optimization system 105 continues to step 1916 and grants access to the client network through secure encrypted communications. For example, filter 1102 may grant access to client network 1104 without screening through secure enclave 1106 (FIG. 11). However, if BW optimization system 105 determines that the client request does not match previous requests or categories (step 1912: No), BW optimization system 105 continues to step 1918.
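The branching in step 1912 can be sketched as a lookup against previously seen request categories; known categories bypass the secure enclave, unknown ones are sent to threat analysis. A minimal sketch, with illustrative names:

```python
def route_request(category, known_categories):
    """Step 1912 sketch: previously seen categories are granted direct
    access (step 1916); unknown ones go to threat analysis (step 1918)."""
    return "grant_access" if category in known_categories else "threat_analysis"

known = {"dns_lookup", "web_browsing"}
```

The `known` set stands in for the database of previous client categories queried in step 1912.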
  • BW optimization system 105 performs a threat analysis based on ML models to detect malicious client requests. For example, as further described above in connection with FIGs. 11 and 12, BIND server 1112 or cloud server 1210 can test a client request under different operating systems, evaluate their behavior, and/or scan for viruses. Additionally, or alternatively, in step 1918 BW optimization system 105 may use SPF 140 to perform security analysis of client requests. In step 1920, BW optimization system 105 validates threat analysis outcomes of step 1918 based on previously stored analysis in the BIND. For example, BW optimization system 105 may compare parameters like source of request and/or client network policies to validate the threat assessment of step 1918.
  • BW optimization system 105 determines whether the client request is above a threat threshold. For example, BW optimization system 105 may determine whether the threat level identified in step 1918 is accepted by the client network policies. If BW optimization system 105 determines that the client request is not above a threat threshold (step 1922: No), BW optimization system 105 continues to step 1924 and updates databases to include the client request in training datasets for positive requests, which do not require additional screening at, for example, the filter 1102 stage. However, if BW optimization system 105 determines that the client request is above a threat threshold (step 1922: Yes), BW optimization system 105 continues to step 1926.
  • BW optimization system 105 updates databases to include client request in training dataset for negative requests, which require additional screening or should be declined (e.g., by request bouncer 546 (FIG. 5)).
  • BW optimization system 105 transmits error or decline messages to client devices 150 and/or administrators of the client network.
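Steps 1922 through 1928 amount to a threshold decision that also feeds the two training datasets (positive and negative requests). A minimal sketch, assuming a numeric threat score and policy threshold (both illustrative, not from the disclosure):

```python
def handle_threat_result(score, threshold, positives, negatives, request):
    """Steps 1922-1928 sketch: requests at or below the client policy
    threshold join the positive training set and are accepted; others
    join the negative training set and are declined."""
    if score > threshold:
        negatives.append(request)
        return "declined"
    positives.append(request)
    return "accepted"

positives, negatives = [], []
```

Declined requests would additionally trigger the error or decline messages of step 1928.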
  • FIG. 20 is a flow chart illustrating an exemplary process 2000 for configuring a wireless network with optimized bandwidth, consistent with disclosed embodiments.
  • elements of system 100 perform process 2000.
  • BW optimization system 105 may perform process 2000.
  • other elements of system 100 may perform one or more steps of process 2000.
  • computing clusters 160 or online resources 190 may perform process 2000, or parts of process 2000.
  • system 1000, or parts of system 1000 may perform process 2000.
  • module 1010 may perform process 2000.
  • BW optimization system 105 establishes logic connections with nodes including servers and/or radios through SNMP and/or SSH.
  • BW optimization system 105 may connect with nodes 902 (FIG. 9) using SNMP.
  • BW optimization system 105 installs and executes processing, memory, and GPS scripts to map nodes and collect packets.
  • BW optimization system 105 may query locations of sensors 910 to associate nodes 902 with locations, program processing computers 906 to report traffic data, and install reporting programs that monitor traffic in ports of devices in nodes 902 (FIG. 9).
  • BW optimization system 105 opens communication ports with monitoring server and/or testing server.
  • BW optimization system 105 may open communication ports and set up network sensors in nodes 902. Further, BW optimization system 105 may set up networked devices to send sample packets and/or traffic metrics through API methods, as further discussed in connection with FIG. 18. In step 2008, BW optimization system 105 opens ports for streamed data exchanges with a database including environmental data and/or with environmental sensors (like sensors 910).
  • BW optimization system 105 performs optimization operations based on testing packets (transmitted by networked devices configured in step 2006) and/or statistical analysis.
  • the optimization scenarios may get stored in a database in step 2030, in which BW optimization system 105 updates databases to include training datasets based on optimized network and the optimized parameters.
  • BW optimization system 105 trains ML models that predict network behavior based on environmental conditions and a communication or client request category.
  • ML optimizer 238 may identify configurations for nodes 902 that maximize transfer rates in network 900 based on environmental conditions and the type of transmitted data (e.g., between video and audio).
  • BW optimization system 105 configures and automates nodes in a network for optimized communication paths and tuned node parameters. Based on ML models of step 2012, BW optimization system 105 may tune node parameters 908 in nodes 902 and identify optimized communication pathways to deliver data to client devices 150. For example, the communication pathways may specify hardware, database sectors, and one or more of the ports.
  • BW optimization system 105 validates communication scenarios through simulation and/or testing packets. To avoid risky configurations, these simulated scenarios are validated in step 2016. Additionally, BW optimization system 105 tests communication scenarios under simulated extreme environmental conditions. For example, BW optimization system 105 may subject the optimized conditions configured in step 2014 to extreme environmental conditions such as thunderstorms or tornados. Further, BW optimization system 105 may simulate a range of environmental parameters to determine packet losses, latency, and power configurations of the selected nodes. This information may give BW optimization system 105 training data to, for example, predict when one or more nodes 902 may get damaged due to environmental conditions, so it can turn them off or modify the node configuration to maintain network communications despite adverse environmental conditions.
  • BW optimization system 105 analyzes network behavior for testing scenarios. For example, BW optimization system 105 performs statistical and/or comparative analysis of the results from simulated scenarios to determine if the simulated scenarios would result in improved bandwidth utilization and/or better transfer rates. This information may be used for future ML training and, therefore, BW optimization system 105 stores the simulation results in a database in step 2030.
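The environmental sweep of steps 2016-2018 can be sketched as evaluating a loss model over a grid of conditions. The model below is a deliberately simple toy (the coefficients and thresholds are assumptions, not values from the disclosure):

```python
def simulate_packet_loss(humidity, temperature, base_loss=0.001):
    """Toy loss model: loss grows with humidity above 80% RH and with
    temperature above 40 C. Coefficients are illustrative only."""
    loss = base_loss
    if humidity > 80:
        loss += (humidity - 80) * 0.002
    if temperature > 40:
        loss += (temperature - 40) * 0.001
    return round(loss, 4)

# Sweep a small grid of conditions, as in the simulated scenarios above.
grid = [(h, t, simulate_packet_loss(h, t)) for h in (50, 90) for t in (25, 45)]
```

Scenario results like `grid` would then be stored (step 2030) as training data for predicting node damage under adverse conditions.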
  • FIG. 21 is a flow chart illustrating an exemplary process 2100 for training machine-learning models based on collected packets received in network ports, consistent with disclosed embodiments.
  • elements of system 100 may perform process 2100.
  • BW optimization system 105 may perform process 2100.
  • other elements of system 100 may perform one or more steps of process 2100.
  • computing clusters 160 or online resources 190 may perform process 2100, or parts of process 2100.
  • system 1000, or parts of system 1000, may perform process 2100.
  • modules 1014 and/or 1020 may perform process 2100.
  • BW optimization system 105 identifies ports and network devices in a communication network and configures communication sensors at each one of the identified ports. For example, BW optimization system 105 may follow process 1800 (FIG. 18) to identify networked devices and communicate with networked devices to create network sensors at ports and/or networked devices. In some embodiments, NAO 110 may employ crawler 232 and API configurator 234 (FIG. 2) to discover and configure the networked devices. Additionally, or alternatively, in step 2102 BW optimization system 105 may configure port sensors to monitor traffic in the network and collect training datasets.
  • BW optimization system 105 collects packets received or transmitted in the ports for a threshold time to generate a training data set. For example, in step 2104, BW optimization system 105 may normalize, tag, and store sample packets, communications, and/or network metrics for developing a training dataset that can be used for ML training and ML-driven optimizations. In some embodiments, BW optimization system 105 may collect the information in databases 180 (FIG. 6) and/or ML training data 224 (FIG. 2).
  • BW optimization system 105 determines whether sufficient packets have been collected for a training dataset. For example, BW optimization system 105 may evaluate the number of samples or the time period of data collection to determine if there are enough packets for a training dataset. In some embodiments, BW optimization system 105 may aim to collect packets for at least one year. In other embodiments, BW optimization system 105 may target a minimum number of sample packets with a predetermined variability. If BW optimization system 105 determines that sufficient packets have not been collected (step 2106: No), BW optimization system 105 returns to step 2104 and continues collecting packets for the training dataset. However, if BW optimization system 105 determines that sufficient packets have been collected (step 2106: Yes), BW optimization system 105 continues to step 2108.
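The sufficiency check of step 2106 combines a sample-count criterion with a variability criterion. A minimal sketch, where distinct request categories stand in for "predetermined variability" (an assumption for illustration):

```python
def enough_training_data(packets, min_samples, min_categories):
    """Step 2106 sketch: require both a minimum sample count and a
    minimum variability (here, distinct request categories)."""
    categories = {p["category"] for p in packets}
    return len(packets) >= min_samples and len(categories) >= min_categories

packets = [{"category": "video"}, {"category": "audio"}, {"category": "video"}]
```

When the check fails, collection simply continues (the loop back to step 2104).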
  • BW optimization system 105 trains a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices. For example, as further discussed in connection with FIG. 10, AI transfer modules 1022 may generate ML models that identify communication pathways associated with higher transfer rates, lower latency, and/or lower bit errors. For example, in step 2108 BW optimization system 105 may train a model that identifies which firewall/routers 802 and switches 806 in network 800 (FIG. 8) should be used based on different categories of client requests. In some embodiments, the first machine-learning model may optimize hardware physical routes.
  • BW optimization system 105 may make determinations on hardware routes to handle data according to environment variables, such as the type of data format. Further, BW optimization system 105 may identify a type of data and employ the first machine-learning model to route the packets through a specific hardware route. In such embodiments, BW optimization system 105 can categorize packets or traffic into heavy BW usage and low BW usage. Traffic categorized as heavy BW usage (e.g., FTP video exchange) could be routed through routes that may be longer but have greater information transfer capacity. Traffic categorized as light BW usage (e.g., messaging systems) can be routed through short routes at the hardware level that will have lower latencies.
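The heavy/light routing policy described above can be sketched as a simple dispatch from traffic class to route. The route names and classes are illustrative assumptions, not identifiers from the disclosure:

```python
def choose_route(traffic_class):
    """Routing policy sketch: heavy-BW traffic (e.g., FTP video
    exchange) takes longer routes with greater transfer capacity;
    light traffic (e.g., messaging) takes short, low-latency routes."""
    routes = {"heavy": "high_capacity_route", "light": "low_latency_route"}
    return routes.get(traffic_class, "default_route")
```

In the disclosed system this mapping would be learned by the first machine-learning model rather than hard-coded.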
  • BW optimization system 105 trains a second machine-learning model that correlates the request categories with one or more compression tools. For example, BW optimization system 105 may correlate the category of the client request, using category identifier 432, and employ ML compression selector 434 to identify compression tools 424 (FIG. 4) that have the highest compression rates and/or are associated with the best network metrics. In some embodiments, BW optimization system 105 trains an ML model using ML training data 224 to identify compression tools and/or algorithms. In some embodiments, the tool trained in step 2110 selects one or more of compression tools 424 and/or configurations of compression engine 436. Further, in certain embodiments, the tool trained in step 2110 causes the machine-learning model to generate a packet prioritization for transmission of compressed files.
  • BW optimization system 105 trains a third machine-learning model that correlates the request categories with predicted accessed information.
  • BW optimization system 105 may employ predictive ML module 344 and pattern identifier 348 (FIG. 3) to predict data that is frequently accessed by clients based on a client request categorization.
  • the third machine-learning model may identify files or content that is likely to be requested by a user based on the initial interactions and place that content in FRP memory 350 and/or FRP cached data 354.
  • the third machine-learning model may identify navigation patterns for different client requests and store compressed information in FRP memory 350 and/or FRP cached data 354 that is likely to be accessed during exchanges of the client request.
  • BW optimization system 105 may train the third machine-learning model based on statistical analysis of accessed information (e.g., identify frequency and mode of accessed content) and/or based on best practices according to server policies.
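The statistical flavor of the third model (identifying frequency and mode of accessed content) can be sketched with a simple frequency ranking. Function and file names are illustrative assumptions:

```python
from collections import Counter

def frequently_accessed(access_log, top_n=2):
    """Rank content by access frequency so the most-requested items can
    be placed in fast-response memory (e.g., FRP memory 350)."""
    return [item for item, _ in Counter(access_log).most_common(top_n)]

log = ["index.html", "video.mp4", "index.html",
       "style.css", "index.html", "video.mp4"]
```

The top-ranked items would be compressed and cached (FRP cached data 354) ahead of the client's subsequent requests.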
  • BW optimization system 105 configures ports and network routers based on outputs of the first, second, and third machine-learning models. For example, through API configurator 234, BW optimization system 105 may program devices in a network according to results of the first, second, and third ML models. Additionally, or alternatively, in step 2114 BW optimization system 105 may identify hardware configurations that optimize transmission rates based on specific client requests. Moreover, in step 2114 BW optimization system 105 may include client requests, frequently accessed data, selected compression tools, and the communication pathways in a training dataset.
  • FIG. 22 is a flow chart illustrating an exemplary process 2200 for handling a client request associated with a threat level, consistent with disclosed embodiments.
  • elements of system 100 perform process 2200.
  • BW optimization system 105 may perform process 2200.
  • other elements of system 100 may perform one or more steps of process 2200.
  • computing clusters 160 or online resources 190 may perform process 2200, or parts of process 2200.
  • system 1000, or parts of system 1000 may perform process 2200.
  • process 2200 may follow process 2100 (FIG. 21). In other embodiments, process 2200 may be independent from process 2100.
  • BW optimization system 105 receives a client request that is detected in one of the ports being monitored. For example, BW optimization system 105 may identify that one of the ports of networked devices, as configured in step 2102, received a client request.
  • the ports being monitored may include physical or virtual ports, and may also include TCP or UDP ports.
  • BW optimization system 105 assigns one or more categories, from request categories, to the client request based on the client request characteristics. In some embodiments, the categories assigned in step 2204 may be based on previous interactions with client devices. For example, categories may be based on a type of compression, accessed storage unit, traffic demands, and/or number of exchanges, among other parameters from previous communications.
  • BW optimization system 105 may generate optimization scenarios independently for each of the categories. Alternatively, or additionally, the categories of step 2204 may be based on repeated tasks. For example, BW optimization system 105 may assign categories to tasks that are performed above a threshold number of times (e.g., if a task is repeated 100 times, BW optimization system 105 may create a category for the client request that solicited the task). The categories used to categorize client requests may be transmitted to a master algorithm, which stores categories and responses. Moreover, in some embodiments BW optimization system 105 may create new categories, and corresponding responses, according to continuous learning of different requests over time.
  • In step 2206, BW optimization system 105 determines whether the assigned request category is above a threat level.
  • If BW optimization system 105 determines that the assigned request category is not above a threat level (step 2206: No), BW optimization system 105 determines that the client request can be transmitted and processed in a client network and continues to step 2208. However, if BW optimization system 105 determines that the assigned request category is above a threat level (step 2206: Yes), BW optimization system 105 determines that the client request may be malicious and continues to step 2220.
  • BW optimization system 105 identifies, using an ML model, frequently accessed data for the client request based on the assigned one or more categories. For example, BW optimization system 105 may employ the third ML model of step 2112 to identify frequently accessed data.
  • BW optimization system 105 employs ML models to select a compression tool for the client request. For example, BW optimization system 105 may employ the second ML model of step 2110 to select a compression tool.
  • BW optimization system 105 identifies, using an ML model, a communication pathway for the client request based on the assigned one or more categories. For example, BW optimization system 105 may employ first ML model of step 2108 to identify a communication pathway for the client request that would likely result in the highest transfer rates.
  • BW optimization system 105 stores the frequently accessed data in a fast response module coupled to one or more of the ports. For example, based on results from the third ML model, BW optimization system 105 may store frequently accessed data in FRP memory 350 and FRP cached data 354.
  • BW optimization system 105 configures the compression tool to compress communications associated with the request. For example, BW optimization system 105 may use one or more compression tools as identified by second ML model of step 2110 and engage elements of CS 130, such as compression engine 436 to configure the compression tool to compress communications.
  • BW optimization system 105 configures routers in the network according to the communication pathway that was identified by the first ML model of step 2108.
  • If BW optimization system 105 determines that the assigned request category is above a threat level (step 2206: Yes), BW optimization system 105 continues to step 2220 and routes the request to a security processor coupled with a BIND server, such as BIND server 1112 (FIG. 11), to perform scanning and testing as in a security enclave, such as security enclave 1106.
  • BW optimization system 105 receives a filtered request from the BIND server, the filtered request including a validation indicator.
  • BIND server 1112 may transmit a filtered request (stripped of potential malware or adware) to a client network.
  • the filtered request may include signature scripts and/or certifications to minimize potential attacks that bypass security enclaves and appear to have been filtered.
  • BW optimization system 105 transmits the filtered request to one or more of the networked elements based on an approved validation indicator.
  • the validation indicator may include digital certificates, a public key certificate, identity certificate, and/or any electronic document used to prove the ownership of a public key. Alternatively, or additionally, the validation indicator may also include a private key.
  • BW optimization system 105 generates error or decline messages based on a disapproved validation indicator.
  • FIG. 23 is a flow chart of a process 2300 for updating machine-learning models based on power anomalies, consistent with disclosed embodiments.
  • elements of system 100 perform process 2300.
  • BW optimization system 105 may perform process 2300.
  • other elements of system 100 may perform one or more steps of process 2300.
  • computing clusters 160 or online resources 190 may perform process 2300, or parts of process 2300.
  • system 1000, or parts of system 1000, may perform process 2300.
  • BW optimization system 105 initiates network analysis. For example, NAO 110 may perform discovery of networked devices and setup port sensors throughout the networked devices.
  • BW optimization system 105 accesses a database with firmware administrator credentials to access logs in the ports and the networked devices. For example, BW optimization system 105 may retrieve firmware or credentials from databases 180 (FIG. 1).
  • BW optimization system 105 identifies a web-based application programming interface (API) for each one of the ports and connects processors with the ports through API methods. For example, using API configurator 234 (FIG. 2), BW optimization system 105 may identify and establish connections through web APIs.
  • BW optimization system 105 configures communication sensors to collect power consumption metrics of networked devices during a threshold time.
  • BW optimization system 105 may configure sensors in nodes 902 (FIG. 9) or devices in network 800 (FIG. 8) to report power consumption and/or digital traffic metrics.
  • BW optimization system 105 identifies anomalies in the power consumption. Anomalies may include lower power consumption than expected, based on manufacturer indications, or higher power consumption than expected.
  • BW optimization system 105 may configure sensors, such as port sensors, to collect power consumption metrics of networked devices during the threshold time, identify anomalies in power consumption, and update the machine-learning models to include exceptions based on the anomalies.
  • BW optimization system 105 updates machine-learning models to include exceptions based on the anomalies.
  • BW optimization system 105 may patch ML models generated in process 2100 by creating exceptions that address power anomalies and/or remove faulty devices, ports, or nodes, from optimized communication paths.
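The anomaly detection described for process 2300 amounts to comparing observed power consumption against manufacturer indications in both directions (lower or higher than expected). A minimal sketch; the tolerance and names are illustrative assumptions:

```python
def power_anomaly(observed_watts, rated_watts, tolerance=0.15):
    """Flag consumption deviating from the manufacturer's rated value
    by more than `tolerance` in either direction."""
    if observed_watts < rated_watts * (1 - tolerance):
        return "under"
    if observed_watts > rated_watts * (1 + tolerance):
        return "over"
    return "normal"
```

Ports or nodes flagged "under" or "over" would become the exceptions patched into the ML models, or be removed from optimized communication paths.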
  • FIG. 24 is a flow chart illustrating an exemplary process 2400 for generating a predictive machine-learning model based on bandwidth and environmental data, consistent with disclosed embodiments.
  • elements of system 100 perform process 2400.
  • BW optimization system 105 may perform process 2400.
  • other elements of system 100 may perform one or more steps of process 2400.
  • computing clusters 160 or online resources 190 may perform process 2400, or parts of process 2400.
  • system 1000, or parts of system 1000 may perform process 2400.
  • BW optimization system 105 connects to a plurality of wireless communication nodes through at least one of Simple Network Management Protocol (SNMP) or Secure Shell (SSH).
  • BW optimization system 105 may couple to nodes 902, which may be implemented with microwave radios, using SSH.
  • BW optimization system 105 identifies locations of the communications nodes.
  • BW optimization system 105 may identify the location of communication nodes based on sensors 910 and/or records and communications of nodes 902.
  • BW optimization system 105 may establish remote on/off power control in step 2402.
  • BW optimization system 105 collects time series of packets received or transmitted in the communication nodes for a threshold time.
  • BW optimization system 105 may configure nodes 902 to transmit sample packets and/or traffic metrics to collect time series of sample packets that could be used for building a training dataset.
  • BW optimization system 105 retrieves historic environmental information from a database. Additionally, or alternatively, BW optimization system 105 may collect environmental data from sensors in nodes 902, such as sensors 910.
  • BW optimization system 105 generates a training dataset by combining and correlating the environmental data and the time series based on the locations. For example, BW optimization system 105 may collect training datasets in ML training data 224 (FIG. 2) that correlates environmental data and time series of the sample packets collected in step 2406.
  • BW optimization system 105 trains a machine-learning model, using the training data set of step 2410, to predict communication anomalies based on environmental conditions at the communication nodes. For example, BW optimization system 105 may train a random forest that correlates a plurality of environmental data and a plurality of nodes with expected communication anomalies.
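The dataset generation of step 2410 (combining and correlating environmental data with packet time series based on location) can be sketched as a join keyed on location and time. Field names are assumptions for illustration only:

```python
def build_training_rows(time_series, environment):
    """Step 2410 sketch: join per-node packet metrics with environmental
    readings on (location, hour) to form training rows."""
    env = {(e["location"], e["hour"]): e["humidity"] for e in environment}
    return [{"loss": s["loss"], "humidity": env[(s["location"], s["hour"])]}
            for s in time_series if (s["location"], s["hour"]) in env]

ts = [{"location": "A", "hour": 1, "loss": 0.01},
      {"location": "B", "hour": 1, "loss": 0.02}]
env = [{"location": "A", "hour": 1, "humidity": 70}]
```

Rows produced this way would feed the random-forest training of step 2412.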
  • FIG. 25 is a flow chart illustrating an exemplary process 2500 for the selection of nodes and communication parameters in a wireless network, consistent with disclosed embodiments.
  • elements of system 100 perform process 2500.
  • BW optimization system 105 may perform process 2500.
  • other elements of system 100 may perform one or more steps of process 2500.
  • computing clusters 160 or online resources 190 may perform process 2500, or parts of process 2500.
  • system 1000, or parts of system 1000 may perform process 2500.
  • process 2500 follows process 2400. In other embodiments, process 2500 is independent from process 2400.
  • BW optimization system 105 receives a client transmission request.
  • BW optimization system 105 may receive a request for video, audio, or file transfer from one or more of client devices 150.
  • BW optimization system 105 queries environmental information from a database.
  • BW optimization system 105 may query databases 180, or other database with environmental information.
  • BW optimization system 105 collects data from environmental sensors.
  • BW optimization system 105 may collect data from sensors 910 (FIG. 9).
  • BW optimization system 105 simulates a range of environmental parameters to determine packet losses, latency, and power configuration of the selected nodes. For example, BW optimization system 105 simulates ranges of humidity, electrostatic contamination, and temperature to simulate potential packet losses with respect to different node parameters 908.
  • BW optimization system 105 selects nodes, from the communication nodes, to establish a communication pathway. For example, BW optimization system 105 selects a network of nodes 902 that has the highest probability of resolving a client request with the lowest BW consumption and highest transfer rates.
  • BW optimization system 105 selects communication parameters for the selected nodes for handling the client transmission request.
  • BW optimization system 105 selects node parameters 908 that maximize transfer rates and BW.
  • BW optimization system 105 configures communication parameters such as frequency, communication power, aggregation, modulation, and cache storage. Further, in step 2512 BW optimization system 105 may aggregate, to a training dataset, the selected nodes, the selected communication parameters, and the current environmental conditions when determining the second set of communication metrics are above the threshold.
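The node selection described above (lowest BW consumption with the highest transfer rate) can be sketched as a constrained maximization over candidate configurations. The data layout is an assumption for illustration:

```python
def select_node_config(candidates, bw_budget):
    """Steps 2510-2512 sketch: among candidate node configurations that
    fit the BW budget, pick the one with the highest transfer rate."""
    feasible = [c for c in candidates if c["bw"] <= bw_budget]
    return max(feasible, key=lambda c: c["rate"]) if feasible else None

candidates = [{"bw": 5, "rate": 100}, {"bw": 20, "rate": 300}]
```

The selected configuration, together with the environmental conditions at selection time, would be aggregated into the training dataset as described in step 2512.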
  • FIG. 26 is a flow chart illustrating an exemplary process 2600 for generating a machine-learning optimization model, in accordance with disclosed embodiments.
  • Process 2600 is executed by BW optimization system 105.
  • NAO 110 may perform steps in process 2600.
  • Process 2600 may be performed by computer clusters 160.
  • Process 2600 may be performed by multiple elements of system 100 or one or more elements of system 1000 (such as module 1010).
  • The description of process 2600 below illustrates an embodiment in which BW optimization system 105 performs steps of process 2600.
  • Other elements of system 100, or elements of system 1000, may also be configurable to perform one or more of the steps in process 2600.
  • BW optimization system 105 determines a training dataset and a validation dataset. For example, BW optimization system 105 may partition packet data collected through sensors in a network into training and validation portions. BW optimization system 105 receives data including records of packets, network metrics, client requests, and/or reports. In some embodiments, the packets include metadata describing attributes of the record and an associated property. BW optimization system 105 divides the records and generates two groups, one to train the machine-learning model and a second to validate the model.
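The dataset partition described above can be sketched as a simple shuffled split. The 80/20 ratio and the record fields are illustrative choices, not specified by the source:

```python
import random

def split_dataset(records, train_fraction=0.8, seed=42):
    # Shuffle a copy of the collected records deterministically, then cut
    # them into a training portion and a validation portion.
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical packet records standing in for sensor-collected data.
records = [{"packet_id": i, "latency_ms": i % 7} for i in range(100)]
train, val = split_dataset(records)
```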
  • BW optimization system 105 generates an input array based on features of the training dataset. For example, BW optimization system 105 may generate a variable including feature information of communication packets, a network topology, and/or records in the training dataset.
  • BW optimization system 105 generates output vectors based on metadata of the training dataset. For example, based on the sample communication packets in the training dataset, BW optimization system 105 may generate a desired output vector making a prediction of, for example, the best-performing data compression tool, fastest communication pathway, and/or frequently accessed information.
  • BW optimization system 105 determines sample hyperparameters and activation functions to initialize the model to be created. For example, BW optimization system 105 may select initial hyperparameters such as the number of layers and nodes, and determine whether the network will be fully or partially connected. In addition, in step 2608 BW optimization system 105 may determine the dimensionality of the network and/or determine stacks of receptive field networks. Moreover, in step 2608 BW optimization system 105 may also associate the model with one or more activation functions. For example, BW optimization system 105 may associate the model with one or more sigmoidal functions. In step 2610 BW optimization system 105 initializes weights for synapses in the network.
  • BW optimization system 105 inputs a validation dataset in the model. For example, BW optimization system 105 may apply the input array based on features of the training dataset of step 2604 to calculate an estimated output and a cost function in step 2614.
  • BW optimization system 105 determines whether the cost function is below a threshold of required accuracy, which may be specified by the user. If BW optimization system 105 determines that the cost function is not below the threshold and the required accuracy has not been achieved (step 2620: No), BW optimization system 105 continues to step 2622 and modifies model parameters.
  • BW optimization system 105 may determine a gradient to modify weights in synapses or modify the activation functions in the different nodes. However, if the cost function is below the threshold (step 2620: Yes), BW optimization system 105 accepts and communicates the model in step 2624.
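The training loop of steps 2610-2624 (initialize weights, compute a cost function, compare it against a required-accuracy threshold, and modify parameters via a gradient) can be illustrated with a single sigmoid neuron standing in for the network described in the text. All names, the learning rate, and the threshold are hypothetical:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, threshold=0.05, lr=0.5, max_epochs=5000, seed=0):
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # step 2610: initialize weights
    for _ in range(max_epochs):
        cost, gw, gb = 0.0, 0.0, 0.0
        for x, y in samples:
            out = sigmoid(w * x + b)                # estimated output
            cost += (out - y) ** 2 / len(samples)   # step 2614: cost function
            grad = 2 * (out - y) * out * (1 - out) / len(samples)
            gw += grad * x
            gb += grad
        if cost < threshold:                        # step 2620: accept the model
            return w, b, cost
        w -= lr * gw                                # step 2622: modify parameters
        b -= lr * gb                                #            via the gradient
    return w, b, cost

# Learn a simple decision: output 0 for negative inputs, 1 for positive.
samples = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b, cost = train(samples)
```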
  • FIG. 27 is a bar plot 2700 illustrating exemplary bandwidth optimization results, in accordance with disclosed embodiments. Bar plot 2700 compares transfer rates measured for a test network before optimization (native legend) and after optimization with the disclosed systems and methods (ML optimized legend).
  • Bar plot 2700 compares transfer rates of native and ML optimized networks for test communications between servers and clients using both TCP and UDP. Comparisons 2702 and 2704 show a large improvement in TCP bandwidth when using ML optimized networks. Comparisons 2706 and 2708 show similar bandwidth improvements for UDP communications. Bar plot 2700 also compares transfer rates for test files. It shows in comparison 2710 that ML optimized networks have faster transfer rates than native networks.
  • Bar plot 2700 shows comparative transfer rates for different applications like 4K video and webcam visualization.
  • Comparison 2712 shows improvements in transfer rates of 4K video in ML optimized networks and comparison 2714 shows greater transfer rates during webcam visualization when using ML optimized networks.
  • Bar plot 2700 also shows ML optimized networks have an improved response to saturation scenarios and, thus, are more reliable.
  • Comparisons 2716 and 2718 show improvements in the number of connected clients per second in networks that have been ML optimized. The ability to handle more clients per second allows networks to manage BW availability and improve the network’s reliability.
  • The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable media or computer-readable storage devices.
  • The computer-readable medium may be the storage unit or the memory module having the computer instructions stored thereon, as disclosed.
  • The computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A bandwidth optimization system comprising at least one processor and at least one memory device including instructions that, when executed, configure the processor to perform operations. The operations may comprise identifying ports and networked devices in a communication network, configuring communication sensors at the ports, and collecting packets received or transmitted at the ports during a threshold time period in order to generate a training dataset. The operations may also comprise training, using the training dataset: a first machine-learning model that correlates request categories with optimized communication pathways between the ports and the networked devices (the request categories being based on packet headers), a second machine-learning model that correlates the request categories with one or more compression tools for compressing data files, and a third machine-learning model that correlates the request categories with predicted accessed information, the predicted access information comprising data requested by the client.
PCT/IB2021/058471 2020-09-17 2021-09-16 Systèmes et procédés d'optimisation de bande passante basés sur l'intelligence artificielle WO2022058935A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063079575P 2020-09-17 2020-09-17
US63/079,575 2020-09-17

Publications (1)

Publication Number Publication Date
WO2022058935A1 true WO2022058935A1 (fr) 2022-03-24

Family

ID=80776731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/058471 WO2022058935A1 (fr) 2020-09-17 2021-09-16 Systèmes et procédés d'optimisation de bande passante basés sur l'intelligence artificielle

Country Status (1)

Country Link
WO (1) WO2022058935A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190260204A1 (en) * 2018-02-17 2019-08-22 Electro Industries/Gauge Tech Devices, systems and methods for the collection of meter data in a common, globally accessible, group of servers, to provide simpler configuration, collection, viewing, and analysis of the meter data
US20200021134A1 (en) * 2018-07-16 2020-01-16 Cable Television Laboratories, Inc. System and method for distributed, secure, power grid data collection, consensual voting analysis, and situational awareness and anomaly detection
US20200272625A1 (en) * 2019-02-22 2020-08-27 National Geographic Society Platform and method for evaluating, exploring, monitoring and predicting the status of regions of the planet through time
US20200287791A1 (en) * 2013-10-21 2020-09-10 Vmware, Inc. System and method for observing and controlling a programmable network using cross network learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RAOUF BOUTABA;MOHAMMADA. SALAHUDDIN;NOURA LIMAM;SARA AYOUBI;NASHID SHAHRIAR;FELIPE ESTRADA-SOLANO;OSCARM. CAICEDO: "A comprehensive survey on machine learning for networking: evolution, applications and research opportunities", JOURNAL OF INTERNET SERVICES AND APPLICATIONS, vol. 9, no. 1, 21 June 2018 (2018-06-21), London, UK , pages 1 - 99, XP021257840, ISSN: 1867-4828, DOI: 10.1186/s13174-018-0087-2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117395162A (zh) * 2023-12-12 2024-01-12 中孚信息股份有限公司 利用加密流量识别操作系统的方法、系统、设备及介质
CN117395162B (zh) * 2023-12-12 2024-02-23 中孚信息股份有限公司 利用加密流量识别操作系统的方法、系统、设备及介质

Similar Documents

Publication Publication Date Title
US11431550B2 (en) System and method for network incident remediation recommendations
US11632392B1 (en) Distributed malware detection system and submission workflow thereof
US9954901B2 (en) Service delivery controller for learning network security services
US10594713B2 (en) Systems and methods for secure propagation of statistical models within threat intelligence communities
Valdovinos et al. Emerging DDoS attack detection and mitigation strategies in software-defined networks: Taxonomy, challenges and future directions
US20200137115A1 (en) Smart and selective mirroring to enable seamless data collection for analytics
CN111953641A (zh) 未知网络流量的分类
US20200137094A1 (en) Behavioral profiling of service access using intent to access in discovery protocols
US7966391B2 (en) Systems, apparatus and methods for managing networking devices
US10686807B2 (en) Intrusion detection system
Ridwan et al. Applications of machine learning in networking: a survey of current issues and future challenges
US20220109685A1 (en) Network device identification via similarity of operation and auto-labeling
US11863391B2 (en) Distributed telemetry and policy gateway in the cloud for remote devices
US11438376B2 (en) Problematic autonomous system routing detection
Naseer et al. Configanator: A Data-driven Approach to Improving {CDN} Performance.
US11582294B2 (en) Highly scalable RESTful framework
Cherian et al. Secure SDN–IoT framework for DDoS attack detection using deep learning and counter based approach
US20210037061A1 (en) Managing machine learned security for computer program products
WO2022058935A1 (fr) Systèmes et procédés d'optimisation de bande passante basés sur l'intelligence artificielle
El Rajab et al. Zero-touch networks: Towards next-generation network automation
US11502908B1 (en) Geo tagging for advanced analytics and policy enforcement on remote devices
US20230283621A1 (en) Systems, Methods, and Media for Distributed Network Monitoring Using Local Monitoring Devices
US11785022B2 (en) Building a Machine Learning model without compromising data privacy
US11362881B2 (en) Distributed system for self updating agents and provides security
Anbarsu et al. Software-Defined Networking for the Internet of Things: Securing home networks using SDN

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868849

Country of ref document: EP

Kind code of ref document: A1