WO2019018409A1 - Systems and methods for distributing partial data to subnetworks - Google Patents
- Publication number
- WO2019018409A1 (PCT/US2018/042508)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- central repository
- data set
- subnetwork
- node
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Definitions
- the present disclosure relates to computer systems and methods for replicating a portion of a data set to a local repository.
- the present disclosure pertains to computer systems and methods for replicating a portion of a data set to a local repository associated with a subnetwork, the data set being stored on a central repository and associated with one or more subnetworks and the portion of the data set being associated with the subnetwork.
- a device associated with a subnetwork may include one or more processors configured to obtain a portion of a data set from a central repository.
- the data set may be associated with one or more subnetworks, and the portion of the data set may be associated with the subnetwork.
- the one or more processors may be further configured to obtain a request for data originating from a node in the subnetwork.
- the requested data may include at least one of (i) the portion of the data set, and (ii) data generated based on the portion of the data set, and the request may be destined for the central repository.
- the one or more processors may be configured to determine whether the central repository is unavailable to provide the requested data, and provide the requested data to the node after the central repository is determined as being unavailable.
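The device behavior described in the preceding bullets can be sketched as follows (a minimal illustration; the class and field names are hypothetical, not from the disclosure):

```python
class SubnetworkDevice:
    """Hypothetical sketch of a device that serves a replicated portion of a
    central data set to local nodes when the central repository is unreachable."""

    def __init__(self, replicated_portion, central_is_available):
        self.portion = replicated_portion                  # data replicated from the central repository
        self.central_is_available = central_is_available   # availability probe, e.g. a health check

    def handle_request(self, key):
        # Requests are destined for the central repository; the device only
        # answers from its local copy after determining the central one is unavailable.
        if self.central_is_available():
            return ("forwarded-to-central", key)
        return ("served-locally", self.portion.get(key))

device = SubnetworkDevice({"node-112": {"ip": "10.0.0.12"}}, central_is_available=lambda: False)
print(device.handle_request("node-112"))  # ('served-locally', {'ip': '10.0.0.12'})
```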
- FIG. 1 illustrates an example of a system in accordance with the disclosed embodiments.
- FIG. 2 illustrates another example of a system in accordance with the disclosed embodiments.
- FIG. 3 illustrates an example of a system deployed in an internet-of-things (IoT) system in accordance with the disclosed embodiments.
- FIG. 4 illustrates an example of a system deployed in an oil rig in accordance with the disclosed embodiments.
- Embodiments are described more fully below with reference to the accompanying drawings, which form a part hereof, and which show specific exemplary embodiments. However, embodiments may be implemented in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope. Embodiments may be practiced as methods, systems or devices. Accordingly, embodiments may take the form of an entirely hardware implementation, an entirely software implementation or an implementation combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
- First subnetwork 110 may include a first node 112, a second node 114, and a third node 116. First subnetwork 110 may further include a gateway 118 connecting first node 112, second node 114, and third node 116 to each other and to gateway 140.
- Second subnetwork 120 may include a fourth node 122, a third subnetwork 150, and a gateway 128. Gateway 128 may connect fourth node 122 and third subnetwork 150 to each other and to gateway 140.
- Third subnetwork 150 may include a fifth node 124 and a gateway 126 that connects fifth node 124 to gateway 128 (e.g., using a network link 146).
- a "subnetwork" may be any logical grouping of nodes in a network.
- a subnetwork may include nodes that are grouped based on the nodes' type, geographical location, ownership, performance, cost (e.g., cost of ownership/use), and/or whether the nodes implement certain communication protocols/standards.
- a subnetwork may include nodes designated by a system administrator of system 100.
- a subnetwork may include nodes selected by an algorithm.
- a single node may be associated with a plurality of subnetworks.
- a subnetwork may be a part of another subnetwork.
- nodes in a subnetwork may communicate with each other using a first communication protocol and/or standard (e.g., Ethernet), and nodes in another subnetwork may communicate with each other using a second communication protocol and/or standard (e.g., Fiber-optic Communications).
- nodes in the two subnetworks may communicate with each other via one or more gateways.
- the gateways, as a collective, may be capable of communicating using at least the first and second communication protocols and/or standards.
- a "gateway" may be a node that connects nodes on a subnetwork to a node outside the subnetwork.
- central repository 130 may have access to at least one data set 135.
- a data set may be any collection of data.
- a data set 135 may be a collection of data for a particular system or application.
- a data set 135 may include a collection of identity data (e.g., public keys associated with nodes and/or users) used for an authentication subsystem of system 100.
- a data set 135 may include a collection of blacklists and whitelists (e.g., identifying nodes that are prohibited/allowed to communicate) used for a distributed denial-of-service (DDOS) attack prevention subsystem of system 100.
- at least a portion of data set 135 may be stored on central repository 130.
- at least a portion of data set 135 may be stored on a data store external to, and accessible by, central repository 130.
- the nodes may use the portions of data set 135 including identity data to authenticate nodes and/or users.
- the nodes may use the portions of data set 135 including blacklists and whitelists to implement a network filter for preventing and mitigating a DDOS attack.
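As a minimal illustration of such a blacklist/whitelist network filter (the admission policy below is an assumption for illustration, not taken from the disclosure):

```python
def make_packet_filter(blacklist, whitelist):
    """Build an admission filter from replicated blacklists/whitelists.
    Policy (an assumption for illustration): blacklisted nodes are always
    dropped; if a non-empty whitelist exists, only whitelisted nodes pass."""
    def admit(node_id):
        if node_id in blacklist:
            return False
        if whitelist:
            return node_id in whitelist
        return True
    return admit

admit = make_packet_filter(blacklist={"attacker-7"}, whitelist={"node-112", "node-114"})
print(admit("node-112"), admit("attacker-7"), admit("unknown-node"))  # True False False
```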
- central repository 130 may also be considered a node.
- central repository 130 may be, for example, a physical device and/or software executing on a personal computer, an internet-of-things device/hub, virtual machine, server, printer, gateway, router, switch, smartphone/cellular phone, smart watch, or tablet.
- central repository 130 may be implemented on gateway 140.
- central repository 130 may include one or more database servers.
- at least some functions of central repository 130 may be implemented on a cloud platform, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
- central repository 130 may include a server and a data store.
- the server may obtain data from the data store and provide the obtained data to various nodes.
- the server may obtain data from the data store, generate new data based on the obtained data, and provide the generated data to various nodes.
- FIG. 2 illustrates another example of system 200 in accordance with the disclosed embodiments.
- System 200 is similar to system 100 of FIG. 1, except that FIG. 2 further illustrates data accessible by various nodes in system 200.
- data set 135 of central repository 130 is shown to include data for various nodes in system 200.
- data set 135 may include, for example, data for first node 112, data for second node 114, data for third node 116, data for fourth node 122, and data for fifth node 124.
- data set 135 may include data for a plurality of nodes.
- data set 135 may include data that is intended to be used by two or more nodes in system 200.
- the phrase "data for a node” may refer to any data that may be used by the node.
- first node 112 may obtain data for first node 112, and first node 112 may perform an action based on the obtained data for first node 112.
- the phrase "data for a node” may refer to any data that may be used to generate new data that may be used by the node.
- a node (e.g., central repository 130) may generate such new data.
- first node 112 may obtain the generated data and perform an action based on the obtained data.
- central repository 130 may be unavailable for some of the nodes in system 200 to access. More particularly, central repository 130 may be inaccessible and/or undesirable to be accessed by one or more nodes in system 200.
- network link 142 may experience an outage during a scheduled maintenance of network equipment.
- network link 144 may be a satellite communication link that may be expensive to use during peak hours.
- network link 146 may be a wireless network link connecting a portable device (e.g., fifth node 124 and gateway 126) located in an underground tunnel to gateway 128.
- central repository 130 may cease to operate, for example, due to a malicious attack (e.g., a distributed denial-of-service attack) or other technical issues.
- data set 135 on central repository 130 may be replicated to local repositories (e.g., local repository 220, local repository 230, and local repository 232) when central repository 130 is available to be accessed by the local repositories (e.g., during off-peak hours or when central repository 130 is operating normally).
- the local repositories may be configured to perform at least some of the functions of central repository 130 for the nodes in the same subnetwork using the replicated version of data set 135 stored locally.
- data on central repository 130 may be replicated to local repository 220 on gateway 118.
- local repository 220 may provide first node 112, second node 114, and third node 116 with the replicated data stored in local repository 220.
- local repository 220 may generate new data based on local repository 220's replicated data and provide the newly generated data to first node 112, second node 114, and third node 116.
- the process used by local repository 220 to generate the new data may be the same, or substantially the same, as the process that would have been used by central repository 130 to generate the new data based on central repository 130's data.
- a local repository may store the replicated data internally or on a data store accessible by the local repository.
- portions of data set 135 are selectively replicated to various local repositories.
- a portion of data set 135 that is associated with a subnetwork may be replicated to a local repository associated with the same subnetwork.
- a portion of data set 135 associated with first subnetwork 110 (i.e., data for first node 112, data for second node 114, and data for third node 116) may be selectively replicated to local repository 220 on gateway 118.
- a portion of data set 135 associated with second subnetwork 120 may be selectively replicated to local repository 230 on gateway 128, and a portion of data set 135 associated with third subnetwork 150 may be selectively replicated to local repository 232.
- the gateways including the local repositories, or the local repositories themselves may perform the functions of central repository 130 using the replicated data stored in the local repositories, for example, after determining that central repository 130 is unavailable.
- nodes with access to the local repositories may continue operating as if central repository 130 is continuously available to the nodes.
- central repository 130 may initiate the process to replicate the portions of data set 135 to the local repositories. That is, the portions of data set 135 are "pushed" to the local repositories.
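The selective "push" replication described above might look like the following sketch (the data layout and names are hypothetical, not from the disclosure):

```python
# Hypothetical data set: each record notes which subnetwork it is associated with.
data_set = {
    "node-112": {"subnet": "subnet-110", "data": "a"},
    "node-114": {"subnet": "subnet-110", "data": "b"},
    "node-122": {"subnet": "subnet-120", "data": "c"},
}

def push_portions(data_set, local_repositories):
    """The central repository 'pushes' to each local repository only the
    portion of the data set associated with that repository's subnetwork."""
    for subnet, repo in local_repositories.items():
        portion = {k: v for k, v in data_set.items() if v["subnet"] == subnet}
        repo.update(portion)  # in practice this would travel over a trusted channel

repo_110, repo_120 = {}, {}
push_portions(data_set, {"subnet-110": repo_110, "subnet-120": repo_120})
print(sorted(repo_110), sorted(repo_120))  # ['node-112', 'node-114'] ['node-122']
```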
- central repository 130 may dynamically assign nodes
- data set 135 may be altered by one or more users, administrators, or other nodes.
- data set 135 may include sensor readings from various nodes, and central repository 130 may receive an updated sensor reading from some of the nodes.
- data set 135 may be changed by one or more users via a user interface connected to central repository 130.
- an administrator may directly modify data set 135 stored on central repository 130.
- central repository 130 may provide updated portions of data set 135 to the local repositories.
- central repository 130 may initiate the process to provide the updated portions of data set 135 to the local repositories. That is, central repository 130 "pushes" the updated portions of data set 135 to the local repositories.
- portions of data set 135 may be provided to local repositories using one or more trusted communications.
- a trusted communication is a communication where the recipient may verify the identity of the sender.
- a portion of data set 135 may be signed (i.e., a signature may be generated) using one or more private keys, and the generated signature may be provided to the local repository.
- the local repository, prior to accepting the provided portion of data set 135, may verify the signature using one or more corresponding public keys.
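The sign-then-verify exchange can be illustrated with Python's standard library. The disclosure describes private/public key pairs; the symmetric HMAC below is only a stand-in for that signature scheme, used so the sketch stays self-contained:

```python
import hashlib
import hmac
import json

def sign_portion(portion, key):
    # Serialize deterministically, then sign the payload.
    payload = json.dumps(portion, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, signature

def accept_portion(payload, signature, key):
    # The local repository verifies the signature before accepting the data.
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None  # reject: sender identity could not be verified
    return json.loads(payload)

key = b"shared-secret"  # stand-in for the key material described in the disclosure
payload, sig = sign_portion({"node-112": "pubkey-A"}, key)
print(accept_portion(payload, sig, key))    # {'node-112': 'pubkey-A'}
print(accept_portion(payload, "bad", key))  # None
```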
- portions of data set 135 may be provided to local repositories using encrypted communications.
- local repositories may provide the requested data to the nodes such that the nodes may process the data in the same, or substantially the same, manner as the data that was provided by central repository 130.
- the data provided by local repositories may be indistinguishable from the data provided by central repository 130.
- the data provided by the local repositories may be in the same format, or in a substantially the same format, as the data provided by central repository 130.
- the data provided by local repositories may be signed using a private key associated with central repository 130.
- the data provided by local repositories may be signed using a private key shared with central repository 130 or derived from a private key accessible by central repository 130.
- local repositories, after determining that central repository 130 is unavailable, may prevent the request from reaching central repository 130.
- local repositories may be implemented on a plurality of nodes.
- local repositories may be implemented on a plurality of gateway devices on the same subnetwork.
- each node in the plurality of nodes may have its own copy of the replicated portion of data set 135.
- the replicated portion of data set 135 may be distributed among the plurality of nodes.
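One way the replicated portion could be distributed among a plurality of nodes is simple hash partitioning (an assumption for illustration; the disclosure does not specify a partitioning scheme):

```python
import zlib

def distribute_portion(portion, repo_names):
    """Spread a replicated portion across several repository nodes by hashing
    each record key to one repository (deterministic, so lookups can apply the
    same rule to find the owning node)."""
    shards = {name: {} for name in repo_names}
    for key, value in portion.items():
        owner = repo_names[zlib.crc32(key.encode()) % len(repo_names)]
        shards[owner][key] = value
    return shards

shards = distribute_portion({"node-112": 1, "node-114": 2, "node-116": 3}, ["gw-a", "gw-b"])
print(sum(len(s) for s in shards.values()))  # 3: every record lands on exactly one repository
```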
- local repositories may be implemented on edge nodes (e.g., first node 112, second node 114, and third node 116).
- a local repository associated with a subnetwork may be made accessible and/or included in any node that can be accessed by nodes in the same subnetwork.
- local repository 220 may be made accessible and/or included in third node 116.
- local repository 232 may be made accessible and/or included in fourth node 122.
- a local repository may also be accessible by, and/or included in, gateway 140.
- Such a local repository may store, for example, data for nodes in first subnetwork 110, second subnetwork 120, and third subnetwork 150.
- a portion of data set 135 may be stored in multiple local repositories.
- replicated data for fifth node 124 may be included in both local repository 232 and local repository 230.
- Storing a portion of data set 135 on multiple local repositories may provide additional redundancy, for example, when both central repository 130 and one of the local repositories become unavailable.
- replicating portions of data set 135 to local repositories may provide numerous benefits for various types of systems.
- performance of a node may be improved because data needed by the node may be obtained from a local repository which may be accessed with less latency.
- performance may be further improved by including a local repository close to an edge node (e.g., at a local gateway) and/or in the edge node itself.
- FIG. 3 illustrates an example of a system 300 in accordance with the disclosed embodiments.
- System 300 is similar to systems 100 and 200 of FIGS. 1 and 2, except system 300 is deployed as an Internet-of-Things (IoT) system.
- first subnetwork 110 includes all nodes in a home 310 and gateway 118.
- first node 112 may be a smart refrigerator
- second node 114 may be a smart thermometer
- third node 116 may be a smartphone.
- Gateway 118 may be located near or inside home 310.
- gateway 118 may be a personal Wi-Fi hotspot installed in home 310.
- FIG. 3 further illustrates second subnetwork 120 that includes all nodes in an office building 320, gateway 128, and gateway 126.
- Nodes in office building 320 may include, for example, phones, printers, scanners, fax machines, computers, routers, switches, servers, and smartphones.
- third subnetwork 150 is a part of second subnetwork 120.
- Third subnetwork 150 may include all devices in a portion of office building 320 (e.g., the bottom half of office building 320).
- nodes may request various types of data from central repository 130.
- such data may include attributes of various nodes in system 300. Attributes may include, for example, capabilities of various nodes, such as whether a node has a certain type of sensor and/or implements a protocol. In another example, attributes may include the last-known status of various nodes (e.g., the last sensor reading by a node, whether a node is active, and/or the last-known user of a node). In yet another example, attributes may include identifier(s) associated with a node, such as the node's IP address or MAC address.
- attributes of the nodes in a subnetwork may be selectively replicated to a local repository in the same subnetwork.
- attributes of first node 112, second node 114, and third node 116 may be replicated to local repository 220 in gateway 118.
- attributes for all nodes in office building 320 may be replicated to local repository 230
- attributes for nodes in the bottom half of office building 320 may be replicated to local repository 232.
- local repository 230 may not store attributes for nodes in the bottom half of office building 320 to avoid redundancy. Accordingly, even when central repository 130 is unavailable to the nodes in a subnetwork, the nodes may still request attributes of the nodes in the same subnetwork from central repository 130, and subsequently receive the requested data from the local repository in the same subnetwork.
- a node may request data generated based on the data stored in central repository 130, and after determining that central repository 130 is unavailable, a local repository may intercept the request, generate the requested data based on the replicated data stored in the local repository, and provide the generated data to the node.
- a computer in office building 320 may request a list of printers with a particular set of attributes.
- gateway 128 or local repository 230 may perform a query on the replicated data stored on local repository 230 to generate the requested list and provide the generated list to the requesting computer.
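Such a locally answered attribute query might be sketched as follows (the attribute names and values are hypothetical):

```python
# Hypothetical replicated attributes for nodes in the office building.
replicated_attributes = {
    "printer-1": {"type": "printer", "color": True, "floor": 2},
    "printer-2": {"type": "printer", "color": False, "floor": 3},
    "scanner-1": {"type": "scanner", "color": True, "floor": 2},
}

def query_nodes(attributes, **wanted):
    """Answer an attribute query from the local replica, the way the gateway
    would when the central repository is unreachable."""
    return sorted(
        node for node, attrs in attributes.items()
        if all(attrs.get(k) == v for k, v in wanted.items())
    )

print(query_nodes(replicated_attributes, type="printer", color=True))  # ['printer-1']
```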
- an administrator may add, change, or remove attributes for nodes in system 300 by changing the data stored on central repository 130.
- an interface may be available to provide an administrator with options to add, change, or remove the attribute data on central repository 130.
- the attributes may change in response to alteration of a node's configuration and/or removal/addition of a node.
- a node's network configuration may change causing the node's IP address to change.
- a new node may be added or an existing node may be removed, requiring the attributes for the node to be added or removed.
- first node 112 may be an internet-of-things (IoT) sensor.
- second node 114 may be an IoT hub for obtaining and processing the sensor readings from IoT sensor 112.
- Both IoT sensor 112 and IoT hub 114 may be connected to an on-site gateway 118.
- gateway 118 may be connected to a remote gateway 140 and a cloud platform 130 via a satellite 420 and satellite links 425.
- system 400 may further include tens of thousands of nodes (e.g., additional IoT sensors and hubs), some of which may be deployed on another oil rig and some of which may be deployed on oil rig 410.
- central repository 130 may contain data for tens of thousands of nodes.
- verifying that the sensor readings are indeed from an authorized sensor may enable system 400 to prevent and/or mitigate malicious attacks on system 400, such as an attack spoofing IoT sensor 112 in an attempt to inject false sensor readings into system 400.
- IoT hub 114 may attempt to verify IoT sensor 112's signature before processing the received sensor readings.
- IoT hub 114 may verify IoT sensor 112's signature by obtaining and using IoT sensor 112's public key.
- public keys associated with the nodes in system 400 are centrally stored on central repository 130.
- system 400 may include tens of thousands of nodes, and thus, storing copies of all public keys locally (e.g., on each of the nodes or gateways) may not be feasible technically or economically.
- IoT hub 114 may request and obtain IoT sensor 112's public key from central repository 130.
- oil rig 410 may not have a continuous connection to central repository 130.
- satellite links 425 may not be available during storms or on cloudy days. Consequently, in these situations, IoT hub 114 may not be able to verify that the sensor readings are indeed from IoT sensor 112 unless an alternative data source for the public keys is available to IoT hub 114.
- a subset of the public keys stored on central repository 130 may be replicated to local repositories (e.g., on-site gateway 118). Further, the local repositories may intercept the request for the public keys destined for central repository 130 and provide the requested public keys to IoT hub 114.
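The interception-and-fallback for public-key requests might be sketched as follows (all names are illustrative, not from the disclosure):

```python
class OnSiteGateway:
    """Hypothetical gateway that intercepts public-key requests destined for
    the central repository and, when the satellite link is down, answers from
    a locally replicated subset of the keys."""

    def __init__(self, replicated_keys, link_up):
        self.replicated_keys = replicated_keys
        self.link_up = link_up  # callable: is the satellite link currently usable?

    def get_public_key(self, node_id, central_lookup):
        if self.link_up():
            return central_lookup(node_id)        # normal path via the central repository
        return self.replicated_keys.get(node_id)  # fallback to the replicated subset

gateway = OnSiteGateway({"iot-sensor-112": "pubkey-112"}, link_up=lambda: False)
print(gateway.get_public_key("iot-sensor-112", central_lookup=lambda n: "from-central"))
# prints 'pubkey-112' even though the central repository is unreachable
```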
- IoT hub 114 may verify IoT sensor 112's signature by requesting another node (e.g., central repository 130) to verify the signature. For example, IoT hub 114 may attempt to provide the obtained sensor readings and IoT device 112's signature to central repository 130. If central repository 130 is available, central repository 130 may verify the signature using IoT sensor 112's public key stored on central repository 130 and respond to IoT hub 114 with a communication indicative of whether the signature is valid or not.
- on-site gateway 118 may intercept the sensor readings and IoT sensor 112's signature, verify IoT sensor 112's signature using a replicated version of IoT sensor 112's public key, and respond to IoT hub 114 with a communication indicative of whether the signature is valid or not. Thus, even when central repository 130 is unavailable, trusted communications between the nodes in oil rig 410 may be possible.
- FIG. 5 illustrates a process 500 in accordance with the disclosed embodiments.
- central repository 130 may provide the identified portion of the data set 135 to a local repository associated with the subnetwork.
- central repository 130 may initiate a process to replicate the identified portion of data set 135 to a local repository associated with the subnetwork.
- the local repository associated with the subnetwork may include, for example, a gateway connected to at least one node in the subnetwork.
- the local repository may be implemented on an edge node in the subnetwork.
- the local repository may obtain the portion of the data set 135 provided by central repository 130.
- the local repository may store the obtained portion of the data set 135 on a data store within the local repository and/or on a data store accessible by the local repository.
- the local repository may obtain the updates to the identified portion of the data set 135. After obtaining the updates, the local repository may apply the updates to the portion of the data set 135 on the local repository.
- the local repository may obtain the request for data originating from the node in the subnetwork.
- the local repository may intercept the request for data destined for central repository 130.
- the local repository may prevent the request from reaching central repository 130.
- the local repository may provide the requested data to the node after the central repository is determined as being unavailable.
- the node may obtain the requested data.
- the node may process the requested data. In some embodiments, the node may perform an action based on the requested data.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3070412A CA3070412A1 (en) | 2017-07-17 | 2018-07-17 | Systems and methods for distributing partial data to subnetworks |
AU2018302104A AU2018302104A1 (en) | 2017-07-17 | 2018-07-17 | Systems and methods for distributing partial data to subnetworks |
AU2023203129A AU2023203129B2 (en) | 2017-07-17 | 2023-05-18 | Systems and methods for distributing partial data to subnetworks |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/652,089 | 2017-07-17 | ||
US15/652,089 US10958725B2 (en) | 2016-05-05 | 2017-07-17 | Systems and methods for distributing partial data to subnetworks |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019018409A1 true WO2019018409A1 (en) | 2019-01-24 |
Family
ID=65016272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/042508 WO2019018409A1 (en) | 2017-07-17 | 2018-07-17 | Systems and methods for distributing partial data to subnetworks |
Country Status (3)
Country | Link |
---|---|
AU (2) | AU2018302104A1 (en) |
CA (1) | CA3070412A1 (en) |
WO (1) | WO2019018409A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070198437A1 (en) * | 2005-12-01 | 2007-08-23 | Firestar Software, Inc. | System and method for exchanging information among exchange applications |
US20120197911A1 (en) * | 2011-01-28 | 2012-08-02 | Cisco Technology, Inc. | Searching Sensor Data |
2018
- 2018-07-17 CA CA3070412A patent/CA3070412A1/en active Pending
- 2018-07-17 WO PCT/US2018/042508 patent/WO2019018409A1/en active Application Filing
- 2018-07-17 AU AU2018302104A patent/AU2018302104A1/en not_active Abandoned

2023
- 2023-05-18 AU AU2023203129A patent/AU2023203129B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CA3070412A1 (en) | 2019-01-24 |
AU2023203129B2 (en) | 2024-04-04 |
AU2018302104A1 (en) | 2020-03-05 |
AU2023203129A1 (en) | 2023-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220046088A1 (en) | Systems and methods for distributing partial data to subnetworks | |
CN107690800B (en) | Managing dynamic IP address allocation | |
US20230035336A1 (en) | Systems and methods for mitigating and/or preventing distributed denial-of-service attacks | |
US10397273B1 (en) | Threat intelligence system | |
US11652793B2 (en) | Dynamic firewall configuration | |
EP3399716B1 (en) | Network security threat intelligence sharing | |
KR101322947B1 (en) | Distributed caching of files in a network | |
US20210273977A1 (en) | Control access to domains, servers, and content | |
US10887333B1 (en) | Multi-tenant threat intelligence service | |
US11050787B1 (en) | Adaptive configuration and deployment of honeypots in virtual networks | |
EP3238096B1 (en) | System and method for discovering a lan synchronization candidate for a synchronized content management system | |
US20180069787A1 (en) | Exposing a subset of hosts on an overlay network to components external to the overlay network without exposing another subset of hosts on the overlay network | |
CN105340240A (en) | Methods and systems for shared file storage | |
KR20150036597A (en) | Method and apparatus for determining virtual machine drifting | |
US11792194B2 (en) | Microsegmentation for serverless computing | |
US11418458B2 (en) | Systems and methods of creating and operating a cloudless infrastructure of computing devices | |
US11381446B2 (en) | Automatic segment naming in microsegmentation | |
US20220201041A1 (en) | Administrative policy override in microsegmentation | |
JP6540063B2 (en) | Communication information control apparatus, relay system, communication information control method, and communication information control program | |
JP6712744B2 (en) | Network system, cache method, cache program, management device, management method and management program | |
CN103916489A (en) | Method and system for resolving single-domain-name multi-IP domain name | |
KR101703491B1 (en) | Method for providing security service in cloud system and the cloud system thereof | |
AU2023203129B2 (en) | Systems and methods for distributing partial data to subnetworks | |
US10146953B1 (en) | System and method for physical data packets isolation for different tenants in a multi-tenant protection storage environment | |
AU2018304187B2 (en) | Systems and methods for mitigating and/or preventing distributed denial-of-service attacks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18834527; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 3070412; Country of ref document: CA |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018302104; Country of ref document: AU; Date of ref document: 20180717; Kind code of ref document: A |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18834527; Country of ref document: EP; Kind code of ref document: A1 |