WO2018071298A1 - Networking as a service - Google Patents

Networking as a service

Info

Publication number
WO2018071298A1
WO2018071298A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
network
nodes
signal
information
Application number
PCT/US2017/055526
Other languages
French (fr)
Inventor
Xinyu Li
Original Assignee
Xinyu Li
Application filed by Xinyu Li
Publication of WO2018071298A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A system for training a network includes a server configured to send to a network node a first signal that conveys at least one instruction to adapt at least one parameter associated with the network, and a network node configured to receive and process the first signal and generate therefrom a control signal based at least in part on the at least one instruction. The network node is further configured to transmit the control signal to one or more other nodes, receive from the one or more other nodes a second signal that conveys network performance information, and transmit the second signal to the server.

Description

Networking as a Service
CROSS REFERENCE TO RELATED APPLICATIONS
The present application is related to U.S. Patent Application Serial No. 62/406,265, filed October 10, 2016.
BACKGROUND OF THE INVENTION
1. Field of Invention
[0001] The present invention relates to the field of cloud-assisted networking for sensors and actuators.
2. Description of Related Art
[0002] Existing IoT platforms require each and every Thing to communicate through a more advanced and more complex mechanism, which translates into higher cost, higher power consumption (on the Things' side), and higher technical complexity in modifying the Things in order to integrate them into an existing IoT platform.
[0003] There are billions of operational data sources and sinks, or "Things", already installed and deployed in field operations. Minimizing the cost and complexity of modifying/adapting these Things, and in many usage scenarios enabling battery-powered communication, is a key factor in deciding to adopt a cloud platform and transformation. In fixed or relatively confined areas of field operation, requiring each and every data source/sink to add the complexity of communicating directly to the cloud, and to learn/adapt to speak the cloud's language, is a waste of operational technical transformation investment. Each field operation data source/sink having its own cloud connection also results in inefficient use of cloud connection resources.
[0004] Based on the foregoing, there is a need in the art for a network system for sensors as data sources and actuators as data sinks, to facilitate information and intelligence applications of monitoring, tracking, execution, control, management, and predictive and condition-based maintenance and analytics. Ideally, the local network, with the assistance of the cloud, can speak the native language of low cost, low complexity, and low power consumption wireless/wired communication either already supported by or easily added on to the objects being serviced.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] For a more complete understanding of the present invention, the objects and advantages thereof, reference is now made to the ensuing descriptions taken in connection with the accompanying drawings briefly described as follows.
[0006] FIG. 1 is a logical diagram of the SANAS system, according to an embodiment of the present invention;
[0007] FIG. 2 is a logical diagram of the field-servicing zone, according to an embodiment of the present invention;
[0008] FIG. 3 is a process diagram of the cloud service helping a user in customizing a node production process, according to an embodiment of the present invention;
[0009] FIG. 4 is a process diagram of the activation and functioning of a Prime/Configurational zone, according to an embodiment of the present invention;
[0010] FIG. 5 is a process diagram of the establishment and evolvement/growing of an Operational/Field-Servicing zone, according to an embodiment of the present invention;
[0011] FIG. 6 is a connection diagram of the root node, and relay nodes to a depth of 2, according to an embodiment of the present invention; and
[0012] FIG. 7 is a functional diagram of a hardware embodying the system, according to an embodiment of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0013] Preferred embodiments of the present invention and their advantages may be understood by referring to FIGS. 1-7, wherein like reference numerals refer to like elements.
[0014] The Sensor and Actuator Networking as a Service (hereinafter "SANAS") provides a cloud-assisted service to map field operational data sources and sinks to cloud-based, user-access-controlled information resources. Field operational data sources include, but are not limited to: sensors, ID tags, health loggers, alarm statuses, operator inputs, and outputs from telemetry devices. Field operational data sinks include, but are not limited to: actuators, switches, signs, displays, and inputs to control devices. Field operational data sources and sinks can be considered equivalent to the data points of a "Thing" in the sense of the Internet of Things (IoT).
[0015] With reference to FIG. 1, the cloud 5 is in communication with an information technology domain 10 containing information resources and applications of data sharing, machine learning, and artificial intelligence through one or more Application Program Interfaces (APIs). A plurality of field-servicing zones 15 are in communication with the cloud 5, wherein information about the operational technology domain is sent, and includes digitized sensor information.
[0016] With reference to FIG. 2, the field-servicing zone 15 is shown having a plurality of nodes 20, each in communication with one or more sensors 25 and/or broadcasters 27 and/or actuators 28. The nodes 20 are in communication with one another through a hierarchical tree structure to facilitate routing to/from nodes and corresponding sensors 25 and/or broadcasters 27 and/or actuators 28. A root node 30 has a direct connection to the cloud 5. There is one and only one root node in each zone, be it the Prime/Configurational zone or the field-servicing Operational zone.
[0017] With reference to FIG. 3, the user registers an account at step 100, and the cloud confirms registration is complete with the cluster 35 and zone 15. At step 110 the user then configures and orders zone 15 node hardware, and the order is confirmed at step 115. In step 117, the node is produced and programmed with configuration and security information on the node production line 40. In step 120, the cluster is added and configured by the user, and the cloud sends a confirmation at step 125. In step 127, the node is produced and programmed as an operational node with configuration and security information within the hardware on the node production line 40. In step 130, the node is added and configured by the user, and the cloud sends a confirmation at step 135. In step 137, the node is produced and programmed as an operational node with configuration and security information on the node production line 40. In step 140, more nodes are added to the zone 15, and this may be continued a number of times. In step 145, the orders are filled and the cloud receives a confirmation of the orders filled and shipped, and in step 150 the notification of orders shipped reaches the user.
[0018] With reference to FIG. 4, a process diagram of the prime/configurational zone is depicted. The user powers the node on in step 160, and the node connects to the cloud, authenticates, and completes the connection in step 170. In step 175, the user receives a confirmation of connection on the admin interface. In step 180, an additional node is powered on and makes an authentication request, which is confirmed in step 185 by the cloud. The node is then activated and gives a confirmation to the user through the admin interface at step 190.
[0019] With reference to FIG. 5, a process diagram for the operational/field-servicing zone is depicted. The authentication process is the same as in FIG. 4 when the unit is powered on. In an embodiment, the BLE ADV provides a docking request in step 200, and in step 205 a docking link is established with mutual challenge/response. In step 210 the node is updated at the cloud with an LLNID authentication request. In step 215 the admission confirmation is sent, and the node information is updated through the admin interface in step 220.
[0020] With reference to FIG. 6, a hierarchy of nodes is shown, wherein the root node 300 is connected through a Wi-Fi docking link 305 to a relay node 310 with relay depth of one, and this relay node 310 is connected through a Wi-Fi docking link 315 and a BLE docking link 316 to relay nodes 320 with a relay depth of two. In order to create the Wi-Fi docking link 305, a requestor uplink port 312 of relay node 310 connects with an acceptor downlink docking port 302 of the root node 300. In order to establish the BLE docking link 316, a requestor uplink port 322 of relay node 320 connects with an acceptor downlink docking port 313 of the relay node 310.
[0021] With reference to FIG. 7, a logical diagram of an embodiment of a node is shown, wherein the node has a main MCU module 400 with internal flash memory, and is connected to a Wi-Fi transmitter/receiver. The main MCU module 400 is in communication with the MCU module 410, which has a BLE transmitter/receiver and internal flash memory. Modules 400 and 410 are powered by a DC power source 412. Module 400 transmits control and data messages to the module 410 through control/data links 415.
[0022] SANAS provides the cloud-based platform service to facilitate information and intelligence applications of monitoring, tracking, execution, control, management, predictive and condition-based maintenance, analytics, information sharing, business intelligence, machine learning, advanced automation, and artificial intelligence. The platform service is not limited to using the information resources mapped from the field operational data sources and sinks. Information resources can be directly fed into the SANAS platform via the End-to-End-IP (E2EIP) access cloud API. A SANAS customer/partner's information resources from other internal and external platforms may be integrated into the platform service through cloud-side APIs as well. Meanwhile, information resources can be configured by the customer/partner to have their data exposed and/or forwarded to other platforms.
[0023] The SANAS service differs from existing IoT cloud platforms in that it approaches the operational technology domain service execution from the "Things'" perspective, rather than from the Cloud perspective. In other words, SANAS brings the Cloud to the Things, including and especially for existing Things deployed without Internet connection capabilities.
[0024] SANAS sets out to bring the Cloud to the Things, i.e., the field terminal objects being serviced. From the Things' point of view, they are able to communicate with the Cloud as if they were communicating with a traditional peer device that they are designed to talk to at minimum cost, power consumption, and complexity, e.g. a PC/laptop, a SmartPhone/Pad, or other specialized equipment with the compatible communication protocols implemented locally. With the help of field-servicing zones, SANAS speaks the native languages of the field terminal objects, or "Things", being serviced, rather than asking them to learn and speak a new language.
[0025] SANAS provides the field services with the highest efficiency in Internet connection bandwidth usage and latency management, recognizing that terminal objects are typically scattered in the operational facility of field service. The SANAS service executes the field operation interfacing with zones, or local networks of service nodes, rather than a single gateway. A field service zone aggregates data and segregates commands, from/for all the terminal objects being serviced, at the packet level, balancing packet size and inter-packet timing intervals to maximize bandwidth usage and minimize connection keep-alive cost, which in turn reduces overall and average latencies.
[0026] SANAS further lowers the terminal objects' technical complexity requirements by having its field-servicing nodes work together with the cloud side that hosts the profile and application layer logic customization, extension, and upgrades. Such customization, extension, and upgrades can be pushed to the field service zone for local execution; they can also be executed from cloud at the profile or the application layer.
[0027] A user of SANAS service is typically the owner of the account registered with SANAS service by the operational partner who contracts to use SANAS service along with field-servicing zones. The business entity represented by the user can manage its field terminal objects in the information technology domain, and benefit from ever-advancing information and intelligence applications being employed on the information resources including but not limited to those reliably mapped from the operational technology domain by the SANAS service.
[0028] The SANAS cloud service can be set up to run either on a general Infrastructure as a Service (IaaS) cloud environment (in a public, managed, private, or mixed cloud configuration), or on a customer/partner private cloud environment. This architecture does not limit the cloud configuration selection.
[0029] A field-servicing zone refers to the local network formed by N (1 <= N <= 65535) pieces of customer on-premise equipment, or field-servicing nodes of the SANAS service, through multi-level docking over Bluetooth Low Energy (BLE) links and Wi-Fi links.
[0030] A field-servicing zone handles the field interfacing part of mapping the operational domain data points of physical objects, such as sensors, actuators, and broadcasters, under its management, influence, support, and care, to information resources of a logical zone in the digital/virtual world sustained/braced by the SANAS service, which provides cloud API access to a collection of device and service provisioning and management services, information and intelligence application services, application tools/frameworks, as well as extended third-party cloud services.
[0031] Peripheral devices are devices such as configurable Bluetooth Low Energy (BLE) broadcasting beacons, BLE-based long-term temperature monitors, Wi-Fi smart lighting controllers, or the PLC module of a Computer Numeric Control (CNC) machinery station, that can communicate with a field-servicing node in the field-servicing zone local network over wireless connectivity protocols such as BLE, Wi-Fi, or 802.15.4, or over a wired connection such as RS232, RS485, or General Purpose Input/Output (GPIO). These peripheral devices are typically equipped with sensors or actuators and implement the actual source/sink terminal functions of an information resource serviced with SANAS. They are usually supplied by third-party hardware vendors, and implement standard wireless and/or wired connectivity protocols and, optionally, proprietary extensions of third parties.
[0032] Node refers to the field-servicing node working together with other such nodes to form a zone. A node supports the following two functional roles. The base role node is also called "root node", which has the ability to connect to SANAS cloud service over the backbone network and act as the access portal for the field-servicing zones. A base role node provides ports for relay role nodes to dock to.
[0033] Relay role node is also known as a "relay node." A relay node that has no child docking nodes is also called a leaf node. The relay node defines the uplink and downlink docking support from a node. In the uplink direction, a relay role node docks to the base role node or other relay role node(s) in the same field-servicing zone, with compatible docking capabilities. A relay role node also provides ports for other relay role nodes to dock to.
[0034] A node in a field-servicing zone executes the field services such as operational maintenance and monitoring, sensor data aggregation, and conveyance of actuator commands, over local wireless connectivity protocols such as BLE, 802.15.4, and Wi-Fi, or wired connections such as RS232, RS485, or GPIOs. The capabilities of a node in this regard are referred to as its Field Operation Liaison Capabilities (FOLC). Note that a node can be configured to allow its BLE broadcasting (sending ADV) and observing (scanning for ADV) capabilities to be utilized as a customer/partner's field operational function, without involving a wireless or wired connection per se with a separate terminal device. Such capabilities are also considered a type of the node's FOLC.
[0035] Prime zones are different from Operational (field-servicing) zones. An Operational field-servicing zone is a regular zone that is planned and configured, with node hardware provisioned, ordered, delivered, activated, and deployed to service field operations. A prime zone, on the other hand, consists of just the root node, and is set up to work with the SANAS cloud side to effect online provisioning and configuration settings pertaining to a user's Operational field-servicing zone deployment. The Internet connection of a prime zone or an Operational field-servicing zone is selected per the user's preference. Such connection options include, but are not limited to, a Wi-Fi AP/router with Internet access, a wired LAN switch/router connection with Internet access, Cellular (2.5G, 3G, 4G, 5G, etc.) data network access, fiber optics, and even a PPP tunnel or LPWAN links such as LoRa, SIGFOX, NB-IoT, or LTE-M.
[0036] A cluster is the logical collection of several zones that belong to the same SANAS user.
[0037] The prime cluster is the collection of all of the user's prime zones. There is one and only one prime cluster per user. In contrast, an operational cluster shall be set up to organize the operational field-servicing zones servicing the same category of field operation terminal objects and functions, even, and in particular, when the operational field-servicing zones are geographically separated. The user can assign the account's operational field-servicing zones into one or more operational clusters once logged into the SANAS administration/control panel Web interface, or programmatically by calling SANAS cloud function APIs. All the operational field-servicing zones in the same Operational cluster manage the authentication, encryption, and related updates pertaining to the collective operational data with coordination by the SANAS cloud application server.
[0038] When a user successfully completes the initial registration to the SANAS service, the prime cluster is created and assigned to the user. Meanwhile, the very first prime zone is also created, and assigned to the prime cluster. The user can add more prime zones to the prime cluster if desired.
[0039] Before provisioning for field-servicing zones and their nodes, the user is required to have at least one prime zone configured, and the order of its root node hardware placed. A default Operational cluster is automatically created by the SANAS cloud service for the user when this step is completed.
[0040] Then, even while waiting for its root node to arrive in delivery, the user can add and provision more clusters and zones online. All zones are added with clusters specified. Zones can be moved from one cluster to another after being added, configured, and even while in operation. However, operational field-servicing zones cannot be moved to the prime cluster, and prime zones cannot be moved to an operational cluster.
[0041] Nodes are ordered with a cluster ascription specified, requiring a zone ascription only if a node is configured to be the base role at ordering time.
[0042] When the user receives a node that was ordered specifically for use in a prime zone, she just needs to power it on with the corresponding already-configured Internet connection within reach, which automatically activates and puts in service a prime zone, if the Internet connection information was provided at the time of placing its order. Otherwise, such node hardware could be placed in the coverage of an existing and active prime zone, owned by the same user, to get activated and populated with the configuration and settings needed for its function. Even if such a prime zone does not exist yet, a user can still use a device, such as a SmartPhone or a laptop computer, with a Wi-Fi STA interface and enough I/O capability, to supply the Internet connection information, using a built-in HTTP server programmed into and supported by the node hardware; or, alternatively, the user could use a device, such as a SmartPhone or a laptop computer, with BLE central capability and a software application with enough I/O capability running on it, to supply the Internet connection information, using a GATT attribute write.
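As an illustration only, the following Python sketch shows the HTTP variant of supplying the Internet connection information. The Soft-AP address, endpoint path, and JSON field names are all assumptions made for illustration; the specification states only that the node hosts a built-in HTTP server for this purpose.
```python
import json
import urllib.request

# Assumed Soft-AP address and endpoint; the specification names neither.
NODE_SETUP_URL = "http://192.168.4.1/provision"

def provision_node(ssid: str, passphrase: str) -> int:
    """Send Wi-Fi credentials to an unconfigured node's built-in HTTP server."""
    payload = json.dumps({"wifi_ssid": ssid, "wifi_passphrase": passphrase}).encode()
    req = urllib.request.Request(
        NODE_SETUP_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # the node is then expected to join the network

# Example: provision_node("FactoryFloorAP", "s3cret")
```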
[0043] For successive activation of other nodes intended as root nodes, place the node in the coverage of a prime zone and power it up. It will start BLE ADV indicating that it is waiting to be activated for its target cluster, and its intended zone name if specified at order time. The prime zone detects the activation request and populates the node with security-related information, Internet connection configuration, zone name, and other zone information. After a root node is activated, it is ready to be deployed for forming a zone.
[0044] Forming and sustaining a field-servicing zone uses dynamic docking. A field-servicing zone always comes to service with its root node hardware powered on first. The root node, once powered up, connects to the SANAS cloud service, with authentication done in the process. If, for whatever reason, it fails to connect to the SANAS cloud service, it shall go through a retry process, starting by using the current Internet connection for a certain amount of time and determining whether the remote service has become unavailable or the Internet connection itself is broken, and, if the latter, rotating through alternative Internet connection options, such as other Wi-Fi APs on the optional list, a cellular connection if available, or wired LAN if one exists.
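A minimal sketch of this retry policy, assuming placeholder callbacks cloud_connect (attempts an authenticated connection to the cloud service) and link_up (tests whether the underlying Internet connection is alive); the callback names and grace period are illustrative, not details from the specification.
```python
import time

def connect_with_retry(connections, cloud_connect, link_up,
                       grace_seconds=60, poll_seconds=5):
    """Stay on the current Internet connection for a grace period; if the
    link itself is broken (rather than the remote service being briefly
    unavailable), rotate to the next option: another Wi-Fi AP, cellular,
    or wired LAN. Retries indefinitely, since the zone cannot operate
    without its root node connected."""
    index = 0
    while True:
        conn = connections[index % len(connections)]
        deadline = time.monotonic() + grace_seconds
        while time.monotonic() < deadline:
            if cloud_connect(conn):
                return conn            # authenticated session established
            if not link_up(conn):      # the connection itself is broken
                break                  # rotate to the next option immediately
            time.sleep(poll_seconds)   # service unreachable; keep trying here
        index += 1
```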
[0045] Once the connection to SANAS cloud service is established, the root node shall check with the cloud side to see if a node admitting window (NAW) should be activated and for how long. A NAW could also be activated from a trigger originated or routed through the SANAS cloud side, after a field-servicing zone is already activated and in service.
[0046] When a field-servicing zone has an active NAW, all the nodes already connected as part of the zone, including the root node, shall run a BLE scanning process for the NAW's remaining duration. Only when encrypted BLE ADV content consisting of service data and/or manufacturer data is received and its digital signature verified by an already-connected node, with RSSI (received signal strength indicator) above the cloud-configured threshold, shall that node activate a downlink docking port to start the docking admission process specifically for the requesting node, with its identity determined in the aforementioned scanning and ADV content verification step.
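A minimal sketch of this admission gate. The specification calls for digital-signature verification of the ADV content and a cloud-configured RSSI threshold; here an HMAC over the payload stands in for the signature scheme, and the threshold value is illustrative.
```python
import hashlib
import hmac

RSSI_THRESHOLD_DBM = -75  # cloud-configured in practice; value is illustrative

def should_open_docking_port(adv_payload: bytes, adv_signature: bytes,
                             rssi_dbm: int, zone_key: bytes) -> bool:
    """Open a downlink docking port only for a verified, sufficiently
    strong BLE ADV received during an active node admitting window."""
    if rssi_dbm < RSSI_THRESHOLD_DBM:
        return False  # too weak/far away; ignore the request
    expected = hmac.new(zone_key, adv_payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, adv_signature)
```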
[0047] First of all, the acceptor shall start to broadcast encrypted and signed BLE ADV content to indicate (a) that it is admitting a specific node, and (b) whether the allowed and preferred docking interface is BLE or WiFi, along with the corresponding WiFi AP SSID if WiFi is preferred. In addition, the BLE ADV could include assisting information such as the acceptor's Relay depth (how many Relay links/hops there are from the Root node) and its remaining docking acceptance capacity indicator. If the preferred docking port is over Wi-Fi, the acceptor shall activate a network interface in AP or Soft-AP mode, but with SSID broadcasting disabled.
[0048] If the requesting node indicates that it does support docking over BLE, by using connectable ADV in the first place, and the acceptor prefers docking over BLE, the acceptor shall try to initiate a BLE connection to the requesting node, though it needs to be ready to fall back to the aforementioned procedure for docking over WiFi, if that is allowed even though not preferred, in case the requesting node assesses that the situation has changed and it is now better not to dock over BLE. An accepting node employs a preconfigured algorithm to determine whether to proactively initiate a BLE connection, based on information including, but not limited to, its remaining docking acceptance capacity and how far away it estimates the requesting node to be.
[0049] The requesting node might receive "accepting" BLE ADV from multiple nodes around it. The most proactive acceptor that decides to initiate a BLE connection for the docking will most likely become the winner of the race to accept the docking. When no node around is willing to proactively make a BLE connection, the decision becomes the requestor's, to choose amongst the multiple nodes that are indicating willingness to accept. The requestor shall employ the preconfigured algorithm to determine which acceptor to choose, based on information including, but not limited to, the remaining docking acceptance capacity of each potential acceptor and how far away it estimates each of them to be. The requestor then initiates a connection to the WiFi AP interface of the chosen acceptor using its uplink WiFi STA interface.
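A minimal sketch of the requestor-side choice, assuming each candidate acceptor's ADV has been decoded into (node id, remaining docking capacity, RSSI, relay depth). The weights, and the use of RSSI as the distance estimate, are assumptions; the specification says only that a preconfigured algorithm weighs capacity and estimated distance.
```python
def choose_acceptor(candidates):
    """candidates: list of (node_id, remaining_capacity, rssi_dbm, relay_depth)
    tuples decoded from "accepting" BLE ADV. Higher capacity, stronger
    signal (i.e., closer), and shallower relay depth all score higher."""
    def score(candidate):
        _, capacity, rssi_dbm, depth = candidate
        return 2.0 * capacity + 0.5 * rssi_dbm - 1.0 * depth
    return max(candidates, key=score)[0]

# Example: choose_acceptor([("A", 3, -60, 1), ("B", 5, -80, 2)]) returns "A"
# under these weights: B's extra capacity does not offset its weaker signal.
```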
[0050] Secondly, if the initial BLE ADV content from the requesting node contains a valid session token that was previously granted and is still valid, the admission shall be expedited; otherwise, a challenge-and-response procedure shall be employed for the requesting node and the admitting node to authenticate each other, following the establishment of a BLE central-peripheral connection or a WiFi AP-STA connection.
[0051] After a node is admitted, the root node shall be notified and in response assign it a Logical Local Node ID (LLNID) and a session token.
[0052] Field operation liaison interfacing with terminal objects takes place by the following process. An active node is instructed by the SANAS service to start/stop actively seeking communication channel establishment with field operation terminal objects, over wireless and/or wired connections and protocols. Such communication is typically one node to multiple field operation terminal objects. When the communication channel with a terminal object is set up, a message exchange is conducted with the cloud side to get the terminal object's cloud-assigned ID and status updated. When required, an authentication process is executed by the field operation liaison node working together with cloud-side customizable profile and application layer logic. The field operation liaison node maintains a table of terminal objects' cloud-assigned IDs vs. their native communication channel "local handles." This table is used by the field operation liaison node to encode/decode message headers for data, events, and commands from/to the terminal objects.
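A minimal sketch of the liaison node's mapping table, assuming a 4-byte cloud-assigned ID prefixed to the payload as the message header; the field width and header layout are assumptions, not details from the specification.
```python
class LiaisonTable:
    """Bidirectional map between terminal objects' cloud-assigned IDs and
    their native communication channel "local handles" (e.g., a BLE
    connection handle or a serial port descriptor)."""
    def __init__(self):
        self._handle_by_cloud_id = {}
        self._cloud_id_by_handle = {}

    def bind(self, cloud_id: int, local_handle: int) -> None:
        self._handle_by_cloud_id[cloud_id] = local_handle
        self._cloud_id_by_handle[local_handle] = cloud_id

    def encode_uplink(self, local_handle: int, payload: bytes) -> bytes:
        """Tag outbound data/events with the terminal's cloud-assigned ID."""
        cloud_id = self._cloud_id_by_handle[local_handle]
        return cloud_id.to_bytes(4, "big") + payload

    def decode_downlink(self, message: bytes):
        """Resolve an inbound command to the terminal's local handle."""
        cloud_id = int.from_bytes(message[:4], "big")
        return self._handle_by_cloud_id[cloud_id], message[4:]
```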
[0053] A field-servicing zone can be configured to accommodate automatic roaming of terminal objects having liaison relationship with its own nodes, and even those terminal objects having liaison relationship with nodes in another field-servicing zone, as long as the field-servicing zones are ascribed to the same Operational cluster.
[0054] For terminal objects roaming within the same field-servicing zone, disconnection from the current liaison node triggers the field-servicing zone's active seeking of communication channel establishment with the disconnected target terminal objects. For terminal objects roaming across different field-servicing zones, disconnection from the current liaison node additionally triggers the active seeking of communication channel establishment with the terminal objects by those field-servicing zones ascribed to the same Operational cluster and configured to support cross-field-servicing-zone terminal object roaming.
[0055] Messages flow in a field-servicing zone among the root node and the relay nodes. The root node allocates and assigns LLNIDs using a preconfigured algorithm that allows the Relay path between the root node and the assignee to be derived from the LLNID itself, or with the assistance of an associated but separate Relay Path ID (RPID) that is constructed when the node admission message is passed from the acceptor to the root node. When a separate RPID is used, it is updated by the root node to its corresponding cloud side representation.
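As an illustration of an LLNID scheme from which the Relay path is derivable, the sketch below packs a 4-bit child index per relay level into a 16-bit ID (up to 15 children per node, 4 levels deep). The field widths, depth limit, and reservation of 0 for the root are all assumptions; the specification requires only that the path be derivable from the LLNID, or carried in a separate RPID.
```python
BITS_PER_LEVEL = 4   # up to 15 children per node (child index 0 encodes as 1)
MAX_DEPTH = 4        # 4 levels fit a 16-bit LLNID; widths are illustrative

def allocate_llnid(path):
    """path: the child index chosen at each hop from the root.
    LLNID 0 is implicitly reserved for the root node itself."""
    llnid = 0
    for level, child_index in enumerate(path):
        llnid |= (child_index + 1) << (BITS_PER_LEVEL * level)
    return llnid

def derive_path(llnid):
    """Recover the Relay path from the LLNID alone."""
    path = []
    for level in range(MAX_DEPTH):
        field = (llnid >> (BITS_PER_LEVEL * level)) & ((1 << BITS_PER_LEVEL) - 1)
        if field == 0:
            break
        path.append(field - 1)
    return path

assert derive_path(allocate_llnid([2, 0, 1])) == [2, 0, 1]
```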
[0056] When messages of data and events flow from the node at the edge of field operation interfacing, to the Root node, the LLNID of the node is added as part of message header, along with a timestamp and the terminal object's cloud assigned ID to tag the originating source, timing, and field operation liaison. In this direction of message flow, each node is always sending/forwarding the message to its one and only "parent" node.
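A minimal sketch of this uplink tagging, assuming a 16-bit LLNID, a 32-bit cloud-assigned ID, and a millisecond timestamp packed big-endian ahead of the payload; the field widths and ordering are assumptions.
```python
import struct
import time

def build_uplink_message(llnid: int, cloud_assigned_id: int,
                         payload: bytes) -> bytes:
    """Prefix an edge node's data/event payload with the header fields
    named in [0056]: originating LLNID, the terminal object's
    cloud-assigned ID, and a timestamp."""
    header = struct.pack(">HIQ", llnid, cloud_assigned_id,
                         int(time.time() * 1000))  # ms-resolution timestamp
    return header + payload
```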
[0057] When messages of data and events/commands flow from the cloud side, through the root node, to a field operation terminal object, the cloud side shall have provided the field operation liaison's LLNID, the RPID if one exists, and the terminal object's cloud-assigned ID. If it exists, the RPID is populated in the header of downlink messages to assist each node in the Relay path in determining which "child" node to forward the message to, until the message reaches the parent of the field operation liaison node. If the RPID is not used, the LLNID itself shall be enough for each node in the Relay path to determine which "child" node to route/forward the message to. Special LLNID values are reserved for broadcasting purposes, and for reference to the root node.
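Under the illustrative LLNID scheme sketched after [0055], the per-hop downlink decision reduces to extracting one bit field, as below; this is an assumption-laden sketch, not the specification's algorithm.
```python
def next_child(llnid: int, my_depth: int, bits_per_level: int = 4):
    """A node at relay depth my_depth (root = 0) forwarding a downlink
    message toward the liaison node with the given LLNID: the next hop's
    child index is the my_depth-th bit field of that LLNID. Returns None
    when this node is the liaison itself and should deliver locally."""
    field = (llnid >> (bits_per_level * my_depth)) & ((1 << bits_per_level) - 1)
    return field - 1 if field else None
```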
[0058] Continued zone structure optimization of a field-servicing zone takes place by the following process. In steady state, all nodes shall periodically broadcast their presence while also scanning for their peer nodes' health summaries. Each node shall also aggregate the observed information of its descendent nodes and send it to the root node. Such information is used by each node to determine whether its parent and child nodes are still within coverage and approximately how far away they are. In addition, non-broadcast operational messages flowing through a node are monitored and sliding-window averaged for each of its downlink child nodes.
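The per-child sliding-window averaging could be as simple as the following sketch; the window size is an arbitrary assumption.

```python
from collections import deque

class ChildLinkMonitor:
    """Tracks non-broadcast operational traffic forwarded through this
    node, sliding-window averaged per downlink child node."""

    def __init__(self, window: int = 32):  # assumed window size
        self._window = window
        self._samples: dict[int, deque] = {}

    def record(self, child_llnid: int, message_bytes: int) -> None:
        q = self._samples.setdefault(child_llnid,
                                     deque(maxlen=self._window))
        q.append(message_bytes)

    def average(self, child_llnid: int) -> float:
        q = self._samples.get(child_llnid)
        return sum(q) / len(q) if q else 0.0
```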
[0059] A sum of the Relay depths of all descendent nodes, weighted by distance and bandwidth usage, is used together with other factors, including remaining bandwidth, by a Relay node and eventually the root node to make decisions on local and/or global zone structure optimization. Once a "directed structure move" of a specific node is decided on, both the node's "current parent" node and its upcoming "new parent" node are notified, including of the preferred new docking link type. With such explicit instructions, the three parties go through a fast-track docking link switch: the existing link is put on hold first, then torn down if the switch succeeds, or resumed if the switch fails.
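One plausible form of the weighted-sum decision metric is sketched below; the weights, field names, and the "lower is better" convention are all assumptions layered on top of what the paragraph states.

```python
from dataclasses import dataclass

@dataclass
class DescendantStats:
    relay_depth: int        # hops below the evaluating node
    approx_distance: float  # estimated from periodic presence broadcasts
    bandwidth_usage: float  # sliding-window averaged operational traffic

def zone_structure_cost(descendants: list[DescendantStats],
                        w_dist: float = 1.0,
                        w_usage: float = 1.0) -> float:
    """Score a candidate zone structure: each descendant's Relay depth,
    weighted by distance and bandwidth usage. A "directed structure
    move" would be chosen so as to lower this cost."""
    return sum(
        d.relay_depth * (w_dist * d.approx_distance +
                         w_usage * d.bandwidth_usage)
        for d in descendants
    )
```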
[0060] When a node fails in a field-servicing zone, its uplink docking "parent" node detects the drop of the docking link and reports the event to the root node. The root node reports the status change to the SANAS cloud side, inherently propagating the status change for all descendent nodes of the failed node. Meanwhile, this triggers the activation of an NAW in the field-servicing zone.
[0061] The direct "child" nodes of the failed node shall start new docking request procedure immediately after detecting that their parent node is no longer connected, aiming for nearby nodes that are still connected to the field-servicing zone and can complete the admission of these nodes. They can choose to set a timer for being re-admitted into the field-servicing zone, during which time they put their descendent nodes on hold. If this timer expires before the re-admission, they shall broadcast to all the descendent nodes to disconnect from their uplink and downlink docking nodes; otherwise, the re-admission response message from the root node shall contain the information for them to update all the descendent nodes' LLNID.
[0062] The SANAS cloud generates an alert and notifies the owner/user of the field-servicing zone of the failure, and provides continued updates on which node failed first and whether the affected descendent nodes have been re-admitted into the field-servicing zone. The expectation is that a user will locate the failed node and try to bring it back to operational status, if the failure was not the result of an intentional power-down of the node.
[0063] When a node moves within a field-servicing zone out of the coverage range of its parent node or its child node, it triggers the link drop of the uplink docking and/or the downlink docking. Whenever the node detects that its uplink docking has dropped, it shall treat the situation as if it were the direct "child" node of a failed node, as described in the above section, and execute the same procedure. If a downlink docking has dropped but the uplink docking remains accessible, the node shall report the event to the root node as if the direct child on that downlink docking had failed, as described in the above section. In both the node roaming case and the node failure case, the nodes that have momentarily lost connection to the field-servicing zone broadcast with BLE ADV following the detection/notification of the uplink docking link drop. This aids the node's parent and child nodes in determining whether this is a roaming case or a failure case.
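The roaming-versus-failure discrimination via BLE ADV could be reduced to a recency check, as sketched below; the observation window is an assumed value.

```python
import time

def classify_link_drop(last_ble_adv_ts: float,
                       window_s: float = 10.0) -> str:  # window assumed
    """A node that merely roamed keeps broadcasting BLE ADV after its
    uplink drops; a failed node goes silent. Neighbouring nodes classify
    the drop by whether an ADV was heard within a recent window."""
    return "roaming" if time.time() - last_ble_adv_ts <= window_s else "failure"
```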
[0064] When a node needs to be moved from one field-servicing zone to another, as long as the two field-servicing zones are ascribed to the same Operational cluster, the procedure is simple: power down the node, move it to the target position, and power it up. The just-moved node and the new admitting field-servicing zone are able to authenticate each other because they ascribe to the same Operational cluster. The node admission procedure described in the above sections is directly applicable to the new field-servicing zone. The old field-servicing zone goes through the procedure of recovering from a failed node, as if the node that has left had failed. On the SANAS cloud side, the node that has moved logs a state transition from "down in the old field-servicing zone" to "up in the new field-servicing zone." If the old field-servicing zone has trouble admitting or keeping the descendent nodes of the moved node, the user will still be notified of the failure situation.
[0065] The invention has been described herein using specific embodiments for the purposes of illustration only. It will be readily apparent to one of ordinary skill in the art, however, that the principles of the invention can be embodied in other ways. Therefore, the invention should not be regarded as being limited in scope to the specific embodiments disclosed herein, but instead as being fully commensurate in scope with the following claims.

Claims

What is claimed is:
1. A first node, comprising: a receiver configured to receive a first signal that conveys first information from a server, the first information including at least one instruction to adapt at least one parameter associated with a first network, the receiver further configured to receive a second signal that conveys performance information of the first network; a processor coupled to the receiver and configured to process the first signal to generate therefrom a control signal based at least in part on the at least one instruction; and a transmitter coupled to the processor and configured to transmit the control signal to at least one second node, the processor further configured to transmit the second signal to the server.
2. The first node of claim 1, wherein the first node is a node in the first network, the at least one second node comprises a plurality of second nodes arranged in a layered structure of the first network to communicate with each other and with a plurality of serviceable objects via a plurality of possible communication modes, and the server resides in the cloud, the at least one instruction having been generated based at least in part on the performance information of the first network.
3. The first node of claim 2, the at least one parameter comprising at least one of a bandwidth allocation to at least one of the plurality of second nodes, a bandwidth allocation to at least one layer of the first network, a maximum number of layers of the first network, and a communication mode of the plurality of communication modes to be used by at least one of the plurality of second nodes.
4. The first node of claim 2, wherein the plurality of field serviceable objects comprises peripheral devices including at least one of a BLE broadcast beacon, a temperature monitor, a sensor, and an actuator.
5. The first node of claim 2, wherein the received performance information of the first network includes status information about at least one of the plurality of second nodes, the status information indicating a failure of the at least one second node, the status information conveying a configuration of second nodes, from among the plurality of second nodes of the first network, that were previously coupled by uplinks to the at least one second node having a failure status and have been subsequently coupled by uplinks to other second nodes.
6. A system for training a first network, comprising: a server configured to send a first signal that conveys first information, the first information including at least one instruction to adapt at least one parameter associated with the first network; a first node configured to receive and process the first signal to generate therefrom a control signal based at least in part on the at least one instruction, and to transmit the control signal to at least one second node, the first node further configured to receive from the at least one second node a second signal that conveys performance information of the first network, and to transmit the second signal to the server.
7. The system of claim 6, wherein the at least one second node comprises a plurality of second nodes arranged in a layered structure of the first network to communicate with each other and with a plurality of serviceable objects via a plurality of possible communication modes, and the server resides in the cloud and is configured to generate the at least one instruction based at least in part on the performance information of the first network.
8. The system of claim 7, the at least one parameter comprising at least one of a bandwidth allocation to at least one of the plurality of second nodes, a bandwidth allocation to at least one layer of the first network, a maximum number of layers of the first network, and a communication mode of the plurality of communication modes to be used by at least one of the plurality of second nodes.
9. The system of claim 7, wherein the plurality of field serviceable objects comprises peripheral devices including at least one of a BLE broadcast beacon, a temperature monitor, a sensor, and an actuator.
10. The system of claim 7, wherein the received performance information of the first network includes status information about at least one of the plurality of second nodes, the status information indicating a failure of the at least one second node, the status information conveying a configuration of second nodes, from among the plurality of second nodes of the first network, that were previously coupled by uplinks to the at least one second node having a failure status and have been subsequently coupled by uplinks to other second nodes.
11. The system of claim 7, wherein the received performance information of the first network includes status information about at least one third node that seeks to be added to the first network, the status information conveying an identity and a preferred communication mode of the at least one third node.
12. The system of claim 11, wherein the first information further includes a confirmation and a duration of a node admitting window during which the first node and the plurality of second nodes are required to scan for a signal transmitted by the at least one third node, and the plurality of possible communication modes includes at least WiFi and Bluetooth Low Energy (BLE).
13. The system of claim 7, wherein the first node is configured to analyze the first performance information of the first network and generate resultant second performance information of the first network.
14. A method of training a first network, comprising: receiving, at a first node, from a server a first signal that conveys first information, the first information including at least one instruction to adapt at least one parameter associated with the first network; generating from the first signal a control signal based at least in part on the at least one instruction; transmitting the control signal to at least one second node; receiving from the at least one second node a second signal that conveys performance information of the first network; and transmitting the second signal to the server.
15. The method of claim 14, wherein the transmitting the control signal comprises transmitting the control signal to a plurality of second nodes arranged in a layered structure of the first network to communicate with each other and with a plurality of serviceable objects via a plurality of possible communication modes, and the receiving from a server comprises receiving from a server that resides in the cloud a first signal that conveys first information, the first information including at least one instruction to adapt at least one parameter associated with the first network, the at least one instruction based at least in part on the performance information of the first network.
16. The method of claim 15, wherein the receiving from a server comprises receiving from a server that resides in the cloud a first signal that conveys first information, the first information including at least one instruction to adapt at least one parameter associated with the first network, the at least one parameter comprising at least one of a bandwidth allocation to at least one of the plurality of second nodes, a bandwidth allocation to at least one layer of the first network, a maximum number of layers of the first network, and a communication mode of the plurality of communication modes to be used by at least one of the plurality of second nodes.
17. The method of claim 15, wherein the transmitting the control signal comprises transmitting the control signal to a plurality of second nodes arranged in a layered structure of the first network to communicate with each other and with a plurality of serviceable objects including at least one of a BLE broadcast beacon, a temperature monitor, a sensor, and an actuator.
18. The method of claim 15, wherein the receiving from the at least one second node a second signal that conveys performance information of the first network includes receiving status information about at least one of the plurality of second nodes, the status information indicating a failure of the at least one second node, the status information conveying a configuration of second nodes, from among the plurality of second nodes of the first network, that were previously coupled by uplinks to the at least one second node having a failure status and have been subsequently coupled by uplinks to other second nodes.
19. The method of claim 15, wherein the receiving from the at least one second node a second signal that conveys performance information of the first network includes receiving status information about at least one third node that seeks to be added to the first network, the status information conveying an identity and a preferred communication mode of the at least one third node.
20. The method of claim 15, further comprising applying machine learning to the first signal, the control signal, and the second signal to train the first network.
21. The method of claim 20, wherein the applying machine learning comprises training and compressing decision-making models for the first network to periodically update inferences associated with the first signal, the control signal, and the second signal.