EP3117394B1 - Distributed smart grid processing - Google Patents
Distributed smart grid processing
- Publication number
- EP3117394B1 (application EP15761205.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- time series
- data values
- data
- node
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J13/00—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
- H02J13/00006—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment
- H02J13/00016—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment using a wired telecommunication network or a data transmission bus
- H02J13/00017—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment using a wired telecommunication network or a data transmission bus using optical fiber
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R21/00—Arrangements for measuring electric power or power factor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R25/00—Arrangements for measuring phase angle between a voltage and a current or between voltages or currents
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J13/00—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
- H02J13/00006—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment
- H02J13/00022—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment using wireless data transmission
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J13/00—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
- H02J13/00006—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment
- H02J13/00022—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment using wireless data transmission
- H02J13/00024—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by information or instructions transport means between the monitoring, controlling or managing units and monitored, controlled or operated power network element or electrical equipment using wireless data transmission by means of mobile telephony
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J13/00—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
- H02J13/00032—Systems characterised by the controlled or operated power network elements or equipment, the power network elements or equipment not otherwise provided for
- H02J13/00034—Systems characterised by the controlled or operated power network elements or equipment, the power network elements or equipment not otherwise provided for the elements or equipment being or involving an electric power substation
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0888—Throughput
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/125—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/51—Discovery or management thereof, e.g. service location protocol [SLP] or web services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/28—Timers or timing mechanisms used in protocols
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J2203/00—Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
- H02J2203/20—Simulating, e g planning, reliability check, modelling or computer assisted design [CAD]
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/007—Arrangements for selectively connecting the load or loads to one or several among a plurality of power lines or power sources
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/04—Circuit arrangements for ac mains or ac distribution networks for connecting networks of the same frequency but supplied from different sources
- H02J3/06—Controlling transfer of power between connected networks; Controlling sharing of load between connected networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/04—Processing captured monitoring data, e.g. for logfile generation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02E—REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
- Y02E60/00—Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
- Y04S40/12—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment
- Y04S40/124—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment using wired telecommunication networks or data transmission busses
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
- Y04S40/12—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment
- Y04S40/126—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment using wireless data transmission
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
- Y04S40/18—Network protocols supporting networked applications, e.g. including control of end-device applications over a network
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
- Y04S40/20—Information technology specific aspects, e.g. CAD, simulation, modelling, system security
Definitions
- Embodiments of the present invention relate generally to network architecture and semantics for distributed processing on a data pipeline, and, more specifically, to distributed smart grid processing.
- a conventional electricity distribution infrastructure typically includes a plurality of energy consumers, such as houses, businesses, and so forth, coupled to a grid of intermediate distribution entities, such as transformers, feeders, substations, etc.
- the grid of distribution entities draws power from upstream power plants and distributes that power to the downstream consumers.
- the consumers, as well as the intermediate distribution entities sometimes include "smart" meters and other monitoring hardware coupled together to form a mesh network.
- the smart meters and other measurement and control devices collect data that reflects the operating state of the grid, as well as consumption and utilization of the grid, and then report the collected data, via the mesh network, to a centralized grid management facility, often referred to as the "back office."
- Such a configuration is commonly known as a "smart grid."
- One embodiment of the present invention sets forth a computer-implemented method for identifying events associated with a network environment, as set forth in claim 16.
- the invention is defined by the features of the independent claims 1, 11 and 16. Preferred embodiments are defined in the dependent claims.
- At least one advantage of the unique architecture described above is that various nodes within a network can interoperate to identify a greater range of trends and events occurring within the utility network compared to traditional approaches.
- FIG. 1 illustrates a utility network 100 configured to implement an infrastructure for distributing electricity, according to one embodiment of the present invention.
- utility network 100 includes consumers 110, transformers 120, feeders 130, substations 140, and a back office 150, coupled together in a sequence.
- Substations 140(1) through 140(T) are configured to draw power from one or more power plants 160 and to distribute that power to feeders 130(1) through 130(S).
- Feeders 130, in turn, distribute that power to transformers 120(1) through 120(R).
- Transformers 120 step down the high-voltage power transported by feeders 130 to low-voltage power, and then transmit the low-voltage power to consumers 110(1) through 110(Q).
- Consumers 110 include houses, businesses, and other consumers of power.
- Each of consumers 110, transformers 120, feeders 130, and substations 140 may include one or more instances of a node.
- a "node” refers to a computing device that is coupled to an element of utility network 100 and includes a sensor array and a wireless transceiver. An exemplary node is described below in conjunction with Figure 3 . Each such node is configured to monitor operating conditions associated with a specific portion of the utility network 100.
- consumer 110(1) could include a node configured to monitor a number of kilowatt-hours consumed by consumer 110(1).
- transformer 120(R-1) could include a node configured to monitor voltage levels or temperature at transformer 120(R-1).
- feeder 130(S) could include one or more nodes configured to monitor humidity percentages or wind velocities at various locations associated with feeder 130(S).
- the nodes within utility network 100 may be smart meters, Internet of Things (IoT) devices configured to stream data, or other computing devices.
- the nodes within utility network 100 may be configured to record physical quantities associated with power distribution and consumption along utility network 100, record physical quantities associated with the environment where utility network 100 resides, record quality of service data, or record any other technically feasible type of data.
- the discovery protocol may also be implemented to determine the hopping sequences of adjacent nodes, i.e., the sequence of channels across which nodes periodically receive payload data.
- a "channel" may correspond to a particular range of frequencies.
- Each intermediate node 230 may be configured to forward the payload data based on the destination address.
- the payload data may include a header field configured to include at least one switch label to define a predetermined path from source node 210 to destination node 212.
- a forwarding database may be maintained by each intermediate node 230 that indicates which of communication links 232 should be used and in what priority to transmit the payload data for delivery to destination node 212.
- the forwarding database may represent multiple paths to the destination address, and each of the multiple paths may include one or more cost values. Any technically feasible type of cost value may characterize a link or a path within network system 200.
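- A minimal sketch of how such a forwarding database might be represented and queried, assuming a hypothetical table keyed by destination address with per-path cost values (the class and attribute names below are illustrative and do not come from the patent):
```python
from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    """One candidate path toward a destination address (illustrative only)."""
    next_hop: str      # neighboring node to transmit to, e.g. "230-2"
    cost: float        # any technically feasible cost value (hop count, latency, link quality)

@dataclass
class ForwardingDatabase:
    # destination address -> candidate paths, kept sorted so the lowest cost is tried first
    entries: dict = field(default_factory=dict)

    def add_path(self, destination: str, next_hop: str, cost: float) -> None:
        self.entries.setdefault(destination, []).append(ForwardingEntry(next_hop, cost))
        self.entries[destination].sort(key=lambda entry: entry.cost)

    def next_hops(self, destination: str) -> list:
        """Return candidate next hops in priority order for forwarding payload data."""
        return [entry.next_hop for entry in self.entries.get(destination, [])]

# Usage: an intermediate node 230 forwarding payload data toward destination node 212.
forwarding_db = ForwardingDatabase()
forwarding_db.add_path("212", next_hop="230-5", cost=3.0)   # primary path
forwarding_db.add_path("212", next_hop="230-4", cost=5.0)   # secondary path
print(forwarding_db.next_hops("212"))                       # ['230-5', '230-4']
```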
- each node within wireless mesh network 202 implements substantially identical functionality and each node may act as a source node, destination node or intermediate node.
- access point 250 is configured to communicate with at least one node within wireless mesh network 202, such as intermediate node 230-4. Communication may include transmission of payload data, timing data, or any other technically relevant data between access point 250 and the at least one node within wireless mesh network 202. For example, a communication link may be established between access point 250 and intermediate node 230-4 to facilitate transmission of payload data between wireless mesh network 202 and network 252.
- Network 252 is coupled to server machine 254 via a communications link.
- Access point 250 is coupled to network 252, which may comprise any wired, optical, wireless, or hybrid network configured to transmit payload data between access point 250 and server machine 254.
- Server machine 254 may execute an application to collect, process, and report those measurements and any other computed values.
- server machine 254 queries nodes 230 within wireless mesh network 202 for certain data. Each queried node replies with the requested data, such as consumption data, system status, health data, and so forth.
- each node within wireless mesh network 202 autonomously reports certain data, which is collected by server machine 254 as the data becomes available via autonomous reporting.
- Persons skilled in the art will recognize that the techniques described herein are applicable to any technically feasible type of network, beyond utility networks.
- server machine 254 is configured to establish and maintain the aforementioned stream network that operates above wireless mesh network 202. More specifically, server machine 254 configures the nodes 230 within wireless mesh network 202 to implement "stream functions" in order to generate data streams and process real-time data.
- a stream function may be any technically feasible algorithm or computational programming function for processing and/or monitoring real-time data.
- a data stream represents real-time data that is generated by execution of a stream function.
- the stream network generally includes the various data streams and the paths through mesh network 202 followed by those data streams. The stream network is described in greater detail below in conjunction with Figures 5-15 .
- server machine 254 may interact with distributed processing cloud 260 to perform some or all of the stream network configuration and stream function execution.
- Distributed processing cloud 260 may be a private or a public distributed processing cloud, and may include a combination of different processing clouds.
- Distributed processing cloud 260 may define a configurable data processing pipeline that effects a logical data network path above the physical node paths within mesh network 202.
- FIG. 3 illustrates a network interface 300 configured to implement multichannel operation, according to one embodiment of the present invention.
- Each node 210, 212, 230 within wireless mesh network 202 of Figures 2A-2B includes at least one instance of network interface 300.
- Network interface 300 may include, without limitation, a microprocessor unit (MPU) 310, a digital signal processor (DSP) 314, digital to analog converters (DACs) 320 and 321, analog to digital converters (ADCs) 322 and 323, analog mixers 324, 325, 326, and 327, a phase shifter 332, an oscillator 330, a power amplifier (PA) 342, a low noise amplifier (LNA) 340, an antenna switch 344, and an antenna 346.
- a memory 312 may be coupled to MPU 310 for local program and data storage.
- a memory 316 may be coupled to DSP 314 for local program and data storage.
- Memory 312 and/or memory 316 may be used to store data structures such as, e.g., a forwarding database, and/or routing tables that include primary and secondary path information, path cost values, and so forth.
- MPU 310 implements procedures for processing IP packets transmitted or received as payload data by network interface 300.
- the procedures for processing the IP packets may include, without limitation, wireless routing, encryption, authentication, protocol translation, and routing between and among different wireless and wired network ports.
- MPU 310 implements the techniques performed by the node when MPU 310 executes firmware and/or software programs stored in memory within network interface 300.
- DSP 314 is coupled to DAC 320 and DAC 321. Each DAC 320, 321 is configured to convert a stream of outbound digital values into a corresponding analog signal. The outbound digital values are computed by the signal processing procedures for modulating one or more channels. DSP 314 is also coupled to ADC 322 and ADC 323. Each of ADCs 322 and 323 is configured to sample and quantize an analog signal to generate a stream of inbound digital values. The inbound digital values are processed by the signal processing procedures to demodulate and extract payload data from the inbound digital values.
- network interface 300 represents just one possible network interface that may be implemented within wireless mesh network 202 shown in Figures 2A-2B , and that any other technically feasible device for transmitting and receiving data may be incorporated within any of the nodes within wireless mesh network 202.
- server machine 254 of Figures 2A-2B configures and manages the operation of each node 230 where network interface 300 resides.
- FIG. 4A illustrates server machine 254 that is coupled to wireless mesh network 202 of Figure 2, according to one embodiment of the present invention.
- server machine 254 includes processing unit 400, input/output (I/O) devices 410, and memory unit 420, coupled together.
- Memory unit 420 includes stream network engine 422, stream network data 424, stream software developer kit (SvDK) 426, and database 428.
- Processing unit 400 may be any technically feasible hardware unit or collection of units configured to process data, including a central processing unit (CPU), a graphics processing unit (GPU), a parallel processing unit (PPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or any combination thereof.
- Processing unit 400 is configured to perform I/O operations via I/O devices 410, as well as to read data from and write data to memory unit 420.
- processing unit 400 is configured to execute program code included in stream network engine 422 and SvDK 426, generate and/or modify stream network data 424, and read from and/or write to database 428.
- I/O devices 410 may include devices configured to receive input, such as, e.g., a keyboard, a mouse, a digital versatile disc (DVD) tray, and so forth. I/O devices 410 may also include devices configured to generate output, such as, e.g., a display device, a speaker, a printer, and so forth. I/O devices 410 may further include devices configured to both receive input and generate output, such as a touchscreen, a data port, and so forth. I/O devices generally provide connectivity to the Internet, and, specifically, to wireless mesh network 202.
- Memory unit 420 may be any technically feasible unit configured to store data, including a hard disk, random access memory (RAM), etc.
- the stored data may include structured data sets, program code, software applications, and so forth.
- Stream network engine 422 is a software application that may be executed by processing unit 400 to establish and maintain the stream network discussed above in conjunction with Figures 1-4 , and, further, below in conjunction with Figures 5-15 . In doing so, stream network engine 422 configures nodes 230 within mesh network 202 to execute stream functions included within stream network data 424.
- the stream functions generally reflect various operations that can be performed by a node 230 in order to process time series data collected by that node.
- stream actors may encapsulate a sequence of one or more stream functions.
- the functionality of stream network engine 422 is performed within distributed processing cloud 260 of Figures 2A-2B .
- server machine 254 executes stream network engine 422 to configure distributed processing cloud 260 to manage nodes 230 and/or execute the stream functions described above.
- SvDK 426 is a software application that, when executed by processing unit 400, provides utility customers with a template-based composition wizard/application that allows the creation of stream functions.
- SvDK 426 generates a graphical user interface (GUI) that supports drag-and-drop construction of stream functions and/or node monitoring rules, among other possibilities.
- SvDK 426 may be implemented as a server configured to provide access to the aforementioned GUI, among other possibilities.
- SvDK 426 is configured to expose to the customer various abstractions of underlying libraries that encapsulate various application programming interface (API) calls. These abstract libraries enable the customer to generate complex stream functions and stream services that are implemented by complex underlying code, yet require no actual coding on the part of the customer.
- FIG. 4B illustrates a GUI 430 that may be used to generate a data stream, according to one embodiment of the present invention.
- GUI 430 includes various GUI elements for making different selections and providing various inputs associated with a data stream, including customer selector 432, input selector 434, device ID input 436, name input 438, attributes selector 440, interval input 442, and options buttons 444.
- a user of SvDK 426 may interact with GUI 430 in order to define a new data stream service.
- the user selects the customer they represent via customer selector 432, and then identifies, via input selector 434, the specific inputs from which the data stream should receive data.
- Those inputs could be derived from specific devices, including other nodes 230; from non-utility network data sources such as Facebook®, Twitter®, or NOAA climate data; or from abstract data sources such as previously created and computed streams.
- the user may also enter a specific device ID via device ID input 436.
- the user may then provide a name via name input 438 and select the particular function or functions that should be executed on the source data via attributes selector 440.
- Interval input 442 allows the user to adjust the frequency with which elements of the data stream are generated.
- Options buttons 444 allow various other options to be selected.
- Once the user has made the various selections and provided the various inputs via GUI 430, the user may submit the data stream configuration defined by those selections and inputs to server machine 254.
- server machine 254 then configures distributed processing cloud 260, nodes 230, and so forth, to generate that data stream.
- SvDK 426 may include and/or generate server-side code that executes on processing unit 400 as well as client-side code that executes on a remote computing device or sensory / measurement device associated with a utility customer, as well as code that executes on distributed processing cloud 260.
- SvDK 426 may be a web application that provides users with access to a library of function calls for performing data processing on time series data, including raw time series data generated by a node 230 as well as aggregated data stream time series data received from other nodes. The user may generate a data stream by assembling various function calls via the GUI described above in any desired fashion to process the time series data.
- The library of function calls and other data used by SvDK 426 may be stored in a local database 428, among other places. Those function calls generally encapsulate specific programmatic operations, including database operations and data processing algorithms, without requiring that the user write actual code. Generally, SvDK 426 allows utility customers to customize a specific portion of the stream network that operates in conjunction with mesh network 202. The stream network discussed thus far is described in greater detail below in conjunction with Figure 5.
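- As a rough illustration of the kind of configuration such a submission might carry, the sketch below assembles a hypothetical data-stream definition from the fields shown in GUI 430; the field names and the submit_stream_definition helper are assumptions for illustration, not an actual SvDK API:
```python
# Hypothetical stream definition mirroring the GUI 430 fields; all names are illustrative.
stream_definition = {
    "customer": "Example Utility Co.",             # customer selector 432
    "inputs": ["node-230-2", "noaa-climate-feed"], # input selector 434 (devices or abstract sources)
    "device_id": "METER-000123",                   # device ID input 436
    "name": "transformer_voltage_avg",             # name input 438
    "attributes": ["voltage", "running_average"],  # attributes selector 440 (functions to apply)
    "interval_seconds": 60,                        # interval input 442
    "options": {"publish": True},                  # options buttons 444
}

def submit_stream_definition(definition: dict) -> None:
    """Placeholder for the submission step: server machine 254 would use this
    configuration to set up distributed processing cloud 260 and nodes 230."""
    print("submitting data stream definition:", definition["name"])

submit_stream_definition(stream_definition)
```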
- FIG. 5 illustrates a stream network 500 configured to operate in conjunction with mesh network 202 of Figure 2, according to one embodiment of the present invention.
- stream network 500 operates above mesh network 202 of Figure 2 in an overall network architecture.
- nodes 230 of mesh network 202 execute stream functions 510 in order to generate data streams 520.
- node 230-1 executes stream functions 510-1 to generate data stream 520-1
- node 230-2 executes stream function 510-2 to generate data streams 520-2 and 520-3
- node 230-3 executes stream functions 510-3 to generate data stream 520-4
- node 230-4 executes stream functions 510-4 to generate data streams 520-5 and 520-6
- node 230-5 executes stream functions 510-5 to generate data streams 520-7 and 520-8
- node 230-6 executes stream functions 510-6 to generate data stream 520-9.
- Each data stream 520 includes a time series of data elements, where each data element includes a data value and a corresponding timestamp indicating a time when the data value was computed, recorded, or generated.
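- A data element of this kind can be sketched as a simple value/timestamp pair; the names below are illustrative only:
```python
import time
from typing import NamedTuple

class DataElement(NamedTuple):
    value: float        # e.g., a voltage, current, or consumption reading
    timestamp: float    # time when the data value was computed, recorded, or generated

def make_element(value: float) -> DataElement:
    return DataElement(value=value, timestamp=time.time())

# A data stream 520 is then a time-ordered sequence of such elements.
data_stream = [make_element(v) for v in (120.1, 119.8, 120.4)]
```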
- a given node 230 may execute one or more stream functions 510 to process raw time series data generated by that node 230.
- a stream function 510 may be a Boolean operation, such as, e.g., a comparison, or a more complex, higher-level function, such as a correlation operation.
- the raw time series data processed by stream functions generally includes various types of sensor data, such as voltage data, current measurements, temperature readings, and other types of environmental and/or non-environmental information.
- the raw time series data may also include sensor data reflective of the operating conditions of node 230. Further, the raw time series data may include network status information, traffic measurements, and so forth.
- each node 230 is configured to access time series data that is derived from various social media outlets, such as Twitter® or Facebook®, among other possibilities. Node 230 could, for example, retrieve tweets in real-time (or near real-time) via an API provided by Twitter®. Node 230 is configured to process the raw time series data to generate one or more data streams 520, and to then transmit the generated data stream(s) 520 to neighboring nodes. Data streams generated by processing raw time series data may be referred to herein as "native data streams."
- a given node 230 may also execute one or more stream functions 510 to process data streams 520 received from neighboring nodes 230.
- a received data stream 520 could be generated by an upstream node 230 based on raw time series data recorded by that node, or generated based on other data streams 520 received by that upstream node. Similar to above, node 230 is configured to process received data streams 520 to generate additional data streams 520, and to then transmit these data stream(s) 520 to neighboring nodes.
- Data streams generated by processing other data streams may be referred to herein as "abstract data streams."
- Upon generating a data stream 520, node 230 is configured to transmit the data stream 520 to back office 150 and/or distributed processing cloud 260, as mentioned.
- Back office 150 collects data streams 520 from nodes 230 within wireless mesh network 202 and may then perform various additional processing operations with those data streams 520 to identify network events associated with utility network 100 and/or wireless mesh network 202 as well as consumption data.
- server machine 254 may characterize time series data associated with nodes 230, including raw time series data and received data streams, and then identify network events associated with abnormal patterns within that time series data. Those network events may include voltage sags/swells, downed power lines, appliance malfunctions, potential fires, and fraud, among others.
- Server machine 254 may also process time series data to identify expected or normal patterns, including consumption data, quality of service data, etc. Server machine 254 may then analyze this data to compute load predictions, demand estimations, and so forth. In doing so, server machine 254 may rely on data from abstract data sources, such as Twitter® or Facebook®, to identify possible surges in electricity usage. Server machine 254 may then provide advanced notification to a utility company.
- a given node 230 could be configured to participate in identifying voltage swells (or sags) by executing a stream function that generates a running average of voltage levels associated with the node 230.
- when the running average indicates a potential swell or sag, node 230 could alert server machine 254.
- Server machine 254 could then identify that a voltage swell (or sag) is occurring in the region where the node resides and notify the utility provider.
- Server machine 254 could also identify voltage swells or sags by correlating multiple alerts received from multiple nodes 230 residing within the same region.
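- A minimal sketch of the node-side stream function described above (the running-average swell/sag check), assuming a fixed-window running average, an assumed +/- 5% band around nominal voltage, and a placeholder alert_server callback in place of the actual reporting path to server machine 254:
```python
from collections import deque
from typing import Optional

class VoltageSwellSagDetector:
    """Maintains a running average of voltage samples and flags swells or sags."""

    def __init__(self, window: int = 60, nominal: float = 120.0, tolerance: float = 0.05):
        self.samples = deque(maxlen=window)
        self.nominal = nominal
        self.tolerance = tolerance   # assumed +/- 5% band, for illustration only

    def update(self, voltage: float) -> Optional[str]:
        self.samples.append(voltage)
        running_avg = sum(self.samples) / len(self.samples)
        if running_avg > self.nominal * (1 + self.tolerance):
            return "swell"
        if running_avg < self.nominal * (1 - self.tolerance):
            return "sag"
        return None

def alert_server(kind: str, node_id: str) -> None:
    # Placeholder for the node-to-server notification described above.
    print(f"node {node_id}: possible voltage {kind} in this region")

detector = VoltageSwellSagDetector()
for sample in (126.5, 127.0, 126.8):   # sustained high readings
    kind = detector.update(sample)
    if kind:
        alert_server(kind, node_id="230-1")
```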
- a node 230 may combine data associated with other devices or data streams to draw insights that reflect consumption, service quality and usage, possible causes of deviations from expected values, as well as bill forecasts.
- a given node 230 could be configured to execute a stream function that generates a running average of voltage load associated with a transformer to which the node 230 is coupled. When the running average exceeds a threshold level for some period of time, the node 230 could notify server machine 254 that a fire may be imminent.
- the node 230 could also compute the threshold value dynamically by executing a stream function on time series data that reflects ambient temperature associated with the node 230. The node 230 could then adjust the threshold based on the type of transformer, e.g., by executing a stream function to parse nameplate data associated with that transformer and then generate a nominal load value for that particular type of transformer.
- the node 230 could also receive the threshold value from server machine 254.
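- One way the dynamically computed threshold described above might be derived is sketched below; the nameplate fields, the derating rule (1% per degree C above 30 C), and the default rating are assumptions made for illustration:
```python
def nominal_load_from_nameplate(nameplate: dict) -> float:
    """Illustrative stand-in for parsing nameplate data to obtain a nominal load (kVA)."""
    return float(nameplate.get("rated_kva", 25.0))

def dynamic_load_threshold(nameplate: dict, ambient_temp_c: float) -> float:
    """Derate the nominal load as ambient temperature rises (assumed 1% per deg C above 30 C)."""
    nominal = nominal_load_from_nameplate(nameplate)
    derating = max(0.0, ambient_temp_c - 30.0) * 0.01
    return nominal * (1.0 - min(derating, 0.5))

# Example: a 25 kVA transformer on a 40 C day gets a lower alarm threshold (22.5 kVA).
threshold_kva = dynamic_load_threshold({"rated_kva": 25, "type": "pole-mount"}, ambient_temp_c=40.0)
print(f"fire-risk alert threshold: {threshold_kva:.1f} kVA")
```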
- a given node 230 could be configured to participate in identifying usage fraud or theft by executing a stream function to characterize usage patterns associated with a consumer to which the node 230 is coupled and then identify patterns commonly associated with fraud.
- upon identifying such a pattern, the node 230 could notify server machine 254.
- Such a pattern could be abnormally high consumption compared to prior usage patterns of neighboring consumers, or divergence between measured load at a transformer coupling a set of meters together and total consumed power at those meters, among other possibilities.
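- The divergence check mentioned above can be sketched as a comparison between the measured load at the transformer and the summed consumption of the meters it serves; the 8% loss tolerance is an assumed figure:
```python
def divergence_suggests_theft(transformer_kwh: float,
                              meter_kwh_readings: list,
                              loss_tolerance: float = 0.08) -> bool:
    """Flag when metered consumption falls short of delivered energy by more than
    expected technical losses (the 8% default is an illustrative assumption)."""
    if transformer_kwh <= 0:
        return False
    shortfall = (transformer_kwh - sum(meter_kwh_readings)) / transformer_kwh
    return shortfall > loss_tolerance

# Example: 100 kWh measured at the transformer, but the meters only account for 85 kWh.
if divergence_suggests_theft(100.0, [30.0, 28.0, 27.0]):
    print("notify server machine 254: possible usage fraud or theft")
```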
- stream functions designed for performing computations related to one consumable utility may also be applicable to any other consumable utility.
- the fraud detection techniques outlined above may be applied to identify loss in the context of water consumption.
- SvDK 426 of Figures 4A-4B is configured to allow stream functions generated for one utility to be applied to performing analogous computations with another utility.
- a given node 230 may identify network events based on parsing data streams collected from a social media outlet (such as the Twitter® API, among others). For example, a data stream gathered from a social media outlet could reflect descriptions of downed power lines, fallen trees, and other events that may impact the functionality of wireless mesh network 202 and utility network 100. Node 230 could execute a stream function to search that data stream for specific references to such events. Users that contribute to the social media outlet mentioned above would generally create the descriptions included in the data stream in the form of posts, tweets, etc. Node 230 could assign a credibility factor or confidence value to each user in order to validate those descriptions. In this fashion, node 230, and stream network 500 as a whole, may incorporate qualitative data provided by human beings with some level of confidence.
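- A sketch of that kind of stream function, assuming hypothetical post records with author and text fields and a locally maintained credibility table (no actual social-media API calls are made here):
```python
EVENT_KEYWORDS = ("downed power line", "power line down", "fallen tree", "transformer fire")

# Hypothetical per-user credibility factors maintained by the node (0.0 - 1.0).
CREDIBILITY = {"@line_crew_42": 0.9, "@random_user": 0.3}
REPORTING_THRESHOLD = 0.5   # assumed confidence level required to report an event

def score_post(post: dict, default_credibility: float = 0.2) -> float:
    """Return a confidence value for a social-media post that may describe a grid event."""
    text = post["text"].lower()
    if not any(keyword in text for keyword in EVENT_KEYWORDS):
        return 0.0
    return CREDIBILITY.get(post["author"], default_credibility)

posts = [
    {"author": "@line_crew_42", "text": "Downed power line on Oak St, crews dispatched"},
    {"author": "@random_user", "text": "nice weather today"},
]
for post in posts:
    confidence = score_post(post)
    if confidence >= REPORTING_THRESHOLD:
        print("report possible network event with confidence", confidence)
```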
- stream network 500 may be configured to perform a wide variety of distributed processing operations to identify events occurring within underlying networks, including wireless mesh network 202 and utility network 100. Stream network 500 may also be configured to perform general processing operations (i.e., beyond event identification).
- server machine 254 may implement a map-reduce type functionality by mapping stream functions to nodes, and then reducing data streams generated by execution of the mapped stream functions by collecting and processing those data streams. In this fashion, server machine 254 is capable of configuring stream network 500 to operate as a generic, distributed computing system. Portions of this distributed computing system may execute on a cloud-based infrastructure in addition to executing on nodes 230.
- server machine 254 may configure stream network 500 to implement any technically feasible form of distributed processing, beyond map-reduce.
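- A toy illustration of that map-reduce pattern, with the nodes simulated in-process; the node set, the mapped stream function, and the reduction step are all illustrative assumptions:
```python
# "Map" step: the same stream function (an hourly average, here) is assigned to several nodes.
def hourly_average(raw_readings: list) -> float:
    return sum(raw_readings) / len(raw_readings)

node_readings = {
    "230-1": [1.1, 1.3, 1.2],
    "230-2": [0.9, 1.0, 1.1],
    "230-3": [2.2, 2.1, 2.3],
}
mapped_streams = {node: hourly_average(readings) for node, readings in node_readings.items()}

# "Reduce" step: server machine 254 collects the resulting data streams and aggregates them.
total_load = sum(mapped_streams.values())
peak_node = max(mapped_streams, key=mapped_streams.get)
print(f"aggregate load {total_load:.2f}, peak at node {peak_node}")
```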
- stream network 500 reflects a distributed computing system that combines the processing, extrapolation, interpolation, and analysis of data streams using real-time and historical streams via in-line and parallel batch processing.
- server machine 254 and/or distributed processing cloud 260 are configured to orchestrate the distribution of processing tasks and/or data storage across the various nodes 230 within stream network 500 in a centralized manner. In doing so, server machine 254 and/or distributed processing cloud 260 may assign specific processing operations to different nodes, allocate particular amounts of data storage to different nodes, and generally dictate some or all configuration operations to those nodes.
- nodes 230 perform a self-orchestration procedure that occurs in a relatively distributed fashion, i.e. without the involvement of a centralized unit such as server machine 254 or distributed processing cloud 260.
- each node 230 may execute a stream function in order to negotiate processing and/or data storage responsibilities with neighboring nodes.
- Nodes 230 may perform such negotiations in order to optimize energy usage, processing throughput, bandwidth, data rates, etc.
- nodes 230 could negotiate a distribution of processing tasks that leverages the processing capabilities of solar powered nodes during daylight hours, and then redistributes those operations to nodes powered by utility network 100 during non-daylight hours.
- a group of nodes 230 could negotiate coordinated communications using a specific data rate to optimize power consumption.
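- The daylight-based redistribution described above might look like the sketch below, which reassigns processing tasks based on each node's power source and the hour of day; the node attributes and the scheduling rule are assumptions:
```python
def assign_tasks(nodes: dict, tasks: list, hour: int) -> dict:
    """Prefer solar-powered nodes during daylight hours (assumed to be 8:00-18:00),
    otherwise fall back to nodes powered by utility network 100."""
    daylight = 8 <= hour < 18
    preferred = [node for node, power in nodes.items()
                 if (power == "solar") == daylight] or list(nodes)
    assignment = {}
    for index, task in enumerate(tasks):
        assignment.setdefault(preferred[index % len(preferred)], []).append(task)
    return assignment

nodes = {"230-1": "solar", "230-2": "mains", "230-3": "solar"}
print(assign_tasks(nodes, ["correlate", "average", "compress"], hour=13))  # solar nodes do the work
print(assign_tasks(nodes, ["correlate", "average", "compress"], hour=22))  # mains-powered node takes over
```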
- server machine 254 and/or distributed processing cloud 260 may assume direct control over nodes 230, thereby causing nodes 230 to transition from self-orchestration to centralized orchestration.
- Nodes 230 may initiate specific actions based on the execution of one or more stream functions 510. For example, a given node 230 could execute a stream function 510 that compares temperature and humidity values to threshold temperature and humidity values. The node 230 could then determine that both temperature and humidity have exceeded the respective threshold values for a specific amount of time, and then determine that mold growth is likely at the location occupied by the node. The node 230 could then take specific steps to counteract such growth, including activating a ventilation device, or simply notifying back office 150. Generally, each node 230 is configured to both process and respond to recorded time series data, received data streams, and generated data streams and to generate insights and/or alerts based on such monitoring.
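- A sketch of the dwell-time check in that example, with assumed temperature and humidity thresholds and a placeholder action in place of actually activating a ventilation device:
```python
from typing import Optional

class MoldRiskMonitor:
    """Tracks how long temperature AND humidity stay above their thresholds."""

    def __init__(self, temp_c_limit: float = 27.0, humidity_limit: float = 80.0,
                 dwell_seconds: float = 6 * 3600):
        self.temp_c_limit = temp_c_limit        # assumed thresholds, for illustration only
        self.humidity_limit = humidity_limit
        self.dwell_seconds = dwell_seconds
        self.exceeded_since: Optional[float] = None

    def update(self, timestamp: float, temp_c: float, humidity_pct: float) -> bool:
        if temp_c > self.temp_c_limit and humidity_pct > self.humidity_limit:
            if self.exceeded_since is None:
                self.exceeded_since = timestamp
            return timestamp - self.exceeded_since >= self.dwell_seconds
        self.exceeded_since = None
        return False

monitor = MoldRiskMonitor()
for t in range(0, 8 * 3600, 3600):              # hourly samples over 8 hours
    if monitor.update(t, temp_c=29.0, humidity_pct=85.0):
        print("mold growth likely: activate ventilation / notify back office 150")
```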
- a given node 230 may receive control parameters 530 from server machine 254 that influence the execution of those stream functions 510.
- node 230-1 could receive control parameters 530-1 that reflect an average expected voltage load at node 230-1.
- Node 230-1 could record the actual voltage load, compare that recorded value to control parameters 530-1, and then perform a specific action based on the result, such as, e.g., report to back office 150 a binary value indicating whether the average expected voltage load was exceeded, among other possibilities.
- one of stream functions 510-1 executed by node 230-1 would reflect the comparison operation between actual and expected voltage loads.
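- That comparison might be sketched as follows, with the control parameter value, the averaging window, and the reporting helper all assumed for illustration:
```python
def exceeds_expected_load(recorded_loads: list, expected_avg_load: float) -> int:
    """Compare the recorded average voltage load against control parameters 530-1 and
    return a binary value (1 = expected average exceeded, 0 = not exceeded)."""
    actual_avg = sum(recorded_loads) / len(recorded_loads)
    return int(actual_avg > expected_avg_load)

def report_to_back_office(node_id: str, flag: int) -> None:
    # Placeholder for the report described above.
    print(f"node {node_id} -> back office 150: exceeded={flag}")

report_to_back_office("230-1", exceeds_expected_load([5.1, 5.4, 5.2], expected_avg_load=5.0))
```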
- server machine 254 may configure nodes 230 to operate according to a policy that indicates guidelines for interacting with the nodes of other networks.
- Each node 230 configured according to the policy may share network resources, route packets, and generally interoperate with those other nodes based on the policy.
- node 230 could be configured according to a policy that indicates that 40% of traffic received from a network adjacent to the wireless mesh network 202 should be accepted and routed across wireless mesh network 202 on behalf of the adjacent network.
- node 230 could be configured according to another policy that indicates that traffic from a first adjacent network should be routed according to a first set of guidelines, while traffic associated with a second adjacent network should be routed according to a second set of guidelines.
- node 230 could be configured according to a policy that specifies how traffic received from one adjacent network should be routed across wireless mesh network 202 in order to reach another adjacent network.
- the technique described herein allows new nodes 230 to be added to wireless mesh network 202 and then configured according to the same policy or policies already associated with other pre-existing nodes 230 in the wireless mesh network 202.
- this technique allows wireless mesh network 202 to operate in a relatively consistent manner across nodes 230 without requiring continuous querying of server machine 254 with regard to routing decisions. Instead, nodes 230 need only operate according to the configured policy.
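- One way a node might apply such a policy when deciding whether to accept and route foreign traffic is sketched below; the policy structure and the probabilistic admission rule are assumptions for illustration:
```python
import random

# Hypothetical per-adjacent-network policy: the fraction of received traffic to accept and route.
POLICY = {
    "adjacent-net-A": {"accept_fraction": 0.40},   # e.g., the 40% example above
    "adjacent-net-B": {"accept_fraction": 0.10},
}

def accept_foreign_packet(source_network: str) -> bool:
    """Admit packets from an adjacent network according to the configured policy,
    without querying server machine 254 for each routing decision."""
    policy = POLICY.get(source_network)
    if policy is None:
        return False
    return random.random() < policy["accept_fraction"]

accepted = sum(accept_foreign_packet("adjacent-net-A") for _ in range(1000))
print(f"accepted roughly {accepted} of 1000 packets from adjacent-net-A")
```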
- FIG. 6 illustrates a system 600 configured to implement the stream network 500 of Figure 5 , according to one embodiment of the present invention.
- system 600 includes an exemplary portion of utility network 100, including consumer 110, transformer 120, and substation 140.
- Consumer 110 is coupled to node 230-1
- transformer 120 is coupled to node 230-2
- substation 140 is coupled to node 230-3.
- Nodes 230-1 through 230-3 form a portion of wireless mesh network 202 of Figure 2 .
- Each of the nodes 230 within system 600 is coupled to data ingestion cloud 610.
- Data ingestion cloud 610 includes an intake cloud 612 and a formatting cloud 614.
- Data ingestion cloud 610 is coupled to distributed processing cloud 620 and real-time processing cloud 630.
- Distributed processing cloud 620 and real-time processing cloud 630 are coupled to one another, and also both coupled to operations center 640 and customer devices 650.
- Nodes 230 are configured to implement stream network 500 shown in Figure 5 by collecting time series data, processing that data via the execution of stream functions, and then transmitting that data to data ingestion cloud 610.
- Data ingestion cloud 610 includes cloud-based computing devices configured to implement intake cloud 612 and formatting cloud 614.
- Intake cloud 612 receives stream data from nodes 230 and routes that data to formatting cloud 614. Formatting cloud 614 then formats that stream data to generate data streams.
- intake cloud 612 executes on a public cloud infrastructure, such as, e.g., Amazon Web Services (AWS), while formatting cloud 614 executes on a private cloud.
- intake cloud 612 and formatting cloud 614 within data ingestion cloud 610 may be distributed across one or more processing clouds in any technically feasible fashion.
- An exemplary collection of software modules configured to implement intake cloud 612 is described in greater detail below in conjunction with Figure 7 .
- An exemplary collection of software modules configured to implement formatting cloud 614 is described in greater detail below in conjunction with Figure 8 .
- Data ingestion cloud 610 generates data streams and then transmits those streams to distributed processing cloud 620 and real-time processing cloud 630.
- Distributed processing cloud 620 includes cloud-based computing devices configured to (i) archive historical data associated with data streams in a searchable database and (ii) perform batch processing on that historical data via a distributed compute architecture.
- An exemplary collection of software modules configured to implement distributed processing cloud 620 is described in greater detail below in conjunction with Figure 9 .
- Real-time processing cloud 630 includes cloud-based computing devices configured to process data streams in real time. In doing so, real-time processing cloud 630 may monitor data streams and determine whether various conditions have been met and, if so, issue alerts in response. Real-time processing cloud 630 may also publish specific data streams to particular subscribers. An exemplary collection of software modules configured to implement real-time processing cloud 630 is described in greater detail below in conjunction with Figure 10 .
- distributed processing cloud 620 and real-time processing cloud 630 are configured to interoperate with one another in order to process data streams on behalf of customers of utility network 100. In doing so, processing occurring on one of the aforementioned processing clouds can trigger processing on the other processing cloud, and vice versa.
- real-time processing cloud 630 could be configured to process a data stream of voltage values updated every few seconds and monitor that stream for a voltage spike of a threshold magnitude. The occurrence of the voltage spike would trigger real-time processing cloud 630 to initiate an operation on distributed processing cloud 620 that involves the processing of historical stream data spanning a longer time scale, such as months or years.
- Distributed processing cloud 620 could, in response to real-time processing cloud 630, retrieve historical voltage values associated with a range of different times, and then attempt to identify a trend in those values, such as, for example, previous voltage spikes having the threshold magnitude. Based on the identified trend, distributed processing cloud 620 could then predict future voltage spikes associated with the data stream. Distributed processing cloud 620 could then notify subscribers of the data stream when voltage spikes are expected. In this fashion, real-time analysis of stream data can trigger historical analysis of stream data when specific conditions are met. This type of coordination between distributed processing cloud 620 and real-time processing cloud 630 is described in greater detail below in conjunction with Figure 14.
- distributed processing cloud 620 could be configured to process a data stream of voltage values across a longer time scale, such as months or years. Distributed processing cloud 620 could then identify a trend in that data stream, and then configure real-time processing cloud 630 to specifically monitor that data stream for specific events indicated by the trend, such as sags, swells, and so forth. In this fashion, historical analysis of stream data can be used to initiate in-depth, real-time analysis. This type of coordination between distributed processing cloud 620 and real-time processing cloud 630 is described in greater detail below in conjunction with Figure 15.
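- The sketch below illustrates, under stated assumptions, the first of these coordination patterns: a real-time check that, upon detecting a voltage spike of a threshold magnitude, triggers a historical analysis that estimates how often such spikes recur. The threshold value and the recurrence estimate are illustrative stand-ins for whatever analysis the processing clouds actually perform.

    case class Sample(timestamp: Long, volts: Double)

    object CloudCoordinationSketch {
      val SpikeThresholdVolts = 260.0  // assumed illustrative threshold

      // Real-time side: detect a spike in the newest sample.
      def isSpike(sample: Sample): Boolean = sample.volts >= SpikeThresholdVolts

      // Distributed side: estimate the average interval between historical spikes.
      def estimateRecurrenceMs(history: Seq[Sample]): Option[Long] = {
        val spikeTimes = history.filter(isSpike).map(_.timestamp).sorted
        if (spikeTimes.size < 2) None
        else Some((spikeTimes.last - spikeTimes.head) / (spikeTimes.size - 1))
      }

      // Trigger: when the real-time side sees a spike, run the historical analysis.
      def onNewSample(sample: Sample, history: Seq[Sample]): Option[Long] =
        if (isSpike(sample)) estimateRecurrenceMs(history) else None
    }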
- operations center 640 configures distributed processing cloud 620 and real-time processing cloud 630 to operate in conjunction with one another in either or both of the aforementioned fashions.
- Operations center 640 is, more generally, the governing body of stream network 500 and wireless mesh network 202.
- Operations center 640 may be a control room, a datacenter, and so forth.
- Server machine 254 resides within operations center 640 and may perform some or all of the functionality of operations center 640 discussed herein.
- Operations center 640 configures nodes 230 to implement stream network 500 and, additionally, configures data ingestion cloud 610, distributed processing cloud 620, and real-time processing cloud 630. In doing so, operations center 640 may program firmware within each node 230 to execute specific stream functions.
- Operations center 640 may also instantiate instances of virtual computing devices in order to create the various processing clouds shown.
- Operations center 640 also provides a visualization service that customers may interact with in order to visualize data streams.
- An exemplary collection of software modules configured to execute within operations center 640 is described in greater detail below in conjunction with Figure 11 .
- Customer devices 650 represent computing devices associated with customers of utility network 100.
- a customer device 650 may be any technically feasible form of computing device or platform, including a desktop computer, mobile computer, and so forth.
- a customer may use a customer device 650 to access a web-based portal that allows the customer to subscribe to, generate, and visualize data streams.
- An exemplary collection of software modules configured to implement a customer device 650 is described in greater detail below in conjunction with Figure 12.
- Figure 7 illustrates exemplary software modules that are implemented in conjunction with the intake cloud of Figure 6, according to one embodiment of the present invention.
- intake cloud 612 includes a TIBCO® module 700, a utility IQ (UIQ) module 702, a sensor IQ (SIQ) module 704, adapters 706, data encryption 708, a Java Messaging Service (JMS) mule 710, and a dual port firewall (FW) 712.
- TIBCO® 700 is a software bus that receives raw data 750 from nodes 230, including time series data and associated time stamps.
- UIQ 702 is a low-frequency interface configured to pull time series data from TIBCO® 700.
- SIQ 704 then collects time series data from UIQ 702 with relatively high frequency for storage in a queue.
- Adapters 706 include various software adapters that allow the various modules described herein to communicate with one another.
- Data encryption 708 is configured to encrypt the time series data that is queued by SIQ 704.
- JMS mule 710 provides a data transport service to move encrypted time series data from data encryption 708 to dual port FW 712. Encrypted time series data 760 exits intake cloud 612 and is then formatted within formatting cloud 614, as described in greater detail below.
- Figure 8 illustrates exemplary software modules that are implemented in conjunction with the formatting cloud of Figure 6, according to one embodiment of the present invention.
- formatting cloud 614 includes SilverSpring Networks (SSN) Agent 800, web service (WS) and representational state transfer (REST) APIs 802, cloud physical interface (PIF) 804, which includes mapping registry 806 and XML-to-JSON converter 808, data decryption 810, data anonymizer 812, data compression 814, rabbitMQ (RMQ) 816, and time series database (TSDB) 818.
- SSN agent 800 is a software controller for managing formatting cloud 614.
- WS and REST APIs 802 provide a set of uniform resource identifiers (URIs) for performing various operations with formatting cloud 614.
- Cloud PIF 804 receives XML data and converts that data to JSON using mapping registry 806 and XML-to-JSON converter 808.
- Data decryption 810 decrypts the encrypted time series data received from intake cloud 612.
- Data anonymizer 812 obfuscates certain identifying information from the decrypted data.
- Data compression 814 compresses the decrypted, anonymous data and then stores that data in RMQ 816.
- TSDB 818 may also be used to store that data. TSDB 818 may be omitted in some embodiments.
- the decrypted anonymous data may then exit formatting cloud 614 as stream data 850.
- Stream data 850 may be consumed by distributed processing cloud 620 or real-time processing cloud 630.
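- A minimal sketch of the anonymize-and-format step described above is shown below in plain Scala; the record fields, the SHA-256 hashing used to obfuscate the device identifier, and the JSON layout are assumptions for illustration and do not reflect the actual formatting cloud schema.

    import java.security.MessageDigest

    case class MeterReading(deviceId: String, timestamp: Long, value: Double)

    object FormattingSketch {
      // One-way hash used here to stand in for the data anonymizer 812.
      private def sha256Hex(s: String): String =
        MessageDigest.getInstance("SHA-256")
          .digest(s.getBytes("UTF-8"))
          .map(b => f"${b & 0xff}%02x")
          .mkString

      // Replace the identifying device ID with its hash and emit a JSON string.
      def anonymizeToJson(r: MeterReading): String = {
        val anonId = sha256Hex(r.deviceId)
        s"""{"device":"$anonId","timestamp":${r.timestamp},"value":${r.value}}"""
      }
    }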
- Figure 9 illustrates exemplary software modules that are implemented in conjunction with the distributed processing cloud of Figure 6, according to one embodiment of the present invention.
- distributed processing cloud 620 includes a master node 900 and various slave nodes 902(1) through 902(N).
- Distributed processing cloud 620 also includes a data archive 904.
- Master node 900 and slave nodes 902 are configured to perform distributed, parallel processing operations on stream data. Master node 900 and slave nodes 902 may form a Hadoop processing environment or any other type of distributed processing cluster or architecture.
- Data archive 904 includes historical stream data collected from data ingestion cloud 610 over long time scales. Master node 900 and slave nodes 902 may perform a variety of different processing tasks with the data stored in data archive 904. Data archive 904 may also be exposed to customers, and may be searchable via customer devices 650.
- Figure 10 illustrates exemplary software modules that are implemented in conjunction with the real-time processing cloud of Figure 6, according to one embodiment of the present invention.
- real-time processing cloud 630 includes a core stream pipeline 1000 that includes REST API 1002, Kafka queues 1004, stream computation engine 1006, stream functions 1008, Scala stream actors 1010, Postgres stream meta data 1012, and Cassandra stream data 1014.
- REST API 1002 provides various URIs for configuring data streams.
- Kafka queues 1004 are configured to queue stream data, including JSON objects and generic messages.
- Stream computation engine 1006 performs operations with data streams by executing Scala stream actors 1010.
- Each Scala stream actor 1010 is a software construct configured to call one or more stream functions 1008.
- a stream function 1008 may be any operation that can be performed with stream data.
- a stream function 1008 could be a sum function, a minimum function, a maximum function, an average function, a function that indicates the fraction of elements in an array that meet a given condition, an interpolation function, a custom function, and so forth.
- Stream functions 1008 may also have configurable parameters, such as, for example, a configurable window within which to calculate an average value, among other parameters relevant to time series computations.
- the Scala stream actor 1010 that calls the stream function 1008 may configure these parameters, among other possibilities.
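- The sketch below gives plain-Scala stand-ins for a few of the stream functions 1008 named above, operating over a configurable time window; the window length is the kind of parameter a calling stream actor might configure. The types and helper names are assumptions for illustration.

    case class Element(timestamp: Long, value: Double)

    object StreamFunctionSketches {
      // Keep only the elements that fall inside the most recent window of windowMs milliseconds.
      def window(elems: Seq[Element], windowMs: Long, now: Long): Seq[Element] =
        elems.filter(e => now - e.timestamp <= windowMs)

      def sum(elems: Seq[Element]): Double = elems.map(_.value).sum

      def average(elems: Seq[Element]): Option[Double] =
        if (elems.isEmpty) None else Some(sum(elems) / elems.size)

      def minimum(elems: Seq[Element]): Option[Double] =
        if (elems.isEmpty) None else Some(elems.map(_.value).min)

      def maximum(elems: Seq[Element]): Option[Double] =
        if (elems.isEmpty) None else Some(elems.map(_.value).max)

      // Fraction of elements whose value satisfies a given condition.
      def fractionWhere(elems: Seq[Element])(cond: Double => Boolean): Double =
        if (elems.isEmpty) 0.0 else elems.count(e => cond(e.value)).toDouble / elems.size
    }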
- Each Scala stream actor 1010 may execute a sequence of stream functions and, potentially, call other stream actors. For example, a first Scala stream actor 1010 could execute a series of stream functions, and upon completion of those functions, call a second Scala stream actor 1010 that would execute a different series of stream functions.
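- The chaining just described might look roughly like the following sketch. For clarity the example models a stream actor as a plain Scala class that applies a sequence of stream functions and optionally hands its result to a next actor; the real Scala stream actors 1010 are a richer construct, so this is an illustration of the call pattern only.

    // A simplified stand-in for a Scala stream actor 1010: apply a sequence of stream
    // functions to a window of values, then optionally forward the result to another actor.
    class SketchStreamActor(functions: Seq[Seq[Double] => Seq[Double]],
                            next: Option[SketchStreamActor] = None) {
      def process(values: Seq[Double]): Seq[Double] = {
        val result = functions.foldLeft(values)((acc, f) => f(acc))
        next match {
          case Some(actor) => actor.process(result)
          case None        => result
        }
      }
    }

    object ActorChainExample {
      // Second actor: keep only values above an illustrative threshold.
      val second = new SketchStreamActor(Seq(vs => vs.filter(_ > 120.0)))

      // First actor: smooth with a three-point moving average, then call the second actor.
      val first = new SketchStreamActor(
        Seq(vs => if (vs.size < 3) vs else vs.sliding(3).map(w => w.sum / w.size).toSeq),
        next = Some(second))
    }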
- Scala stream actors 1010 are generally implemented in the Scala programming language, although any technically feasible programming language may also suffice.
- core stream pipeline 1000 may execute a multitude of Scala stream actors 1010 in parallel with one another to perform real-time processing on data streams. Customers may configure Scala stream actors 1010 directly, or may simply subscribe to data streams generated by those actors.
- real-time processing cloud 630 may push specific Scala stream actors 1010 out to nodes 230 in order to configure those nodes to perform "edge processing" on wireless mesh network 202 and stream network 500. With this approach, individual nodes can be configured to perform sequences of specific stream functions.
- Core stream pipeline 1000 is configured by operations center 640, described in greater detail below in conjunction with Figure 11 .
- Figure 11 illustrates exemplary software modules that are implemented in conjunction with the operations center of Figure 6 , according to one embodiment of the present invention.
- operations center 640 includes visualization engine 1100, REST APIs 1108, and SvDK 1116.
- Visualization engine 1100 includes merged views 1102, stream computation 1104, and stream alert monitoring 1106.
- REST APIs 1108 include controllers 1110, models and views 1112, and configuration and logs 1114.
- SvDK includes services 1118, devices 1120, and discoveries 1122.
- Visualization engine 1100 provides a back end for generating visualizations of data streams for customers. Customers may access visualization engine 1100 via customer devices 650, as described in greater detail below in conjunction with Figure 12.
- Merged views 1102 generate views of real-time data and batch (or historical) data.
- Stream computation 1104 allows new streams to be registered with visualization engine 1100.
- Stream alert monitoring 1106 monitors data streams and generates alerts when certain conditions are met or specific events occur.
- REST APIs 1108 allow customers to perform various actions, including subscribing to streams, generating alerts, and so forth. Controllers 1110 include logic for performing these actions, while models and views 1112 provide data models and templates for viewing data acquired via REST APIs 1108. Configuration and logs 1114 include configuration data and log files.
- SvDK 1116 is similar to SvDK 426 described above in conjunction with Figure 4A , and includes a specification of various services 1118 a customer may subscribe to, a set of devices 1120 associated with those services, and discoveries 1122 that reflect communication between those devices.
- Operations center 640 as a whole manages the operation of system 600, including the various processing clouds and stream network 500. Operations center 640 also provides the back end processing needed to provide customer devices 650 with access to stream data. An exemplary customer device 650 configured to access operations center 640 is described in greater detail below in conjunction with Figure 12.
- Figure 12 illustrates exemplary software modules that are implemented in conjunction with the customer devices of Figure 6, according to one embodiment of the present invention.
- customer device 650 includes a portal 1200 that includes a real-time cloud processing interface 1202 and a distributed cloud processing interface 1204.
- Portal 1200 may be a web browser configured to access any of the URIs provided by REST APIs 1108 shown in Figure 11 .
- Real-time cloud processing interface 1202 provides customers with access to data streams processed by real-time processing cloud 630.
- Distributed processing cloud interface 1204 provides customers with the ability to run queries against data archive 904 within distributed processing cloud 620.
- each of data ingestion cloud 610, distributed processing cloud 620, real-time processing cloud 630, operations center 640, and customer devices 650 includes a plurality of computing devices configured to execute software modules such as those described herein.
- Those software modules may be implemented via any technically feasible set of programming languages, beyond those explicitly mentioned above.
- Figures 13-15 describe techniques for configuring the processing clouds described herein, as well as data processing strategies implemented by those clouds.
- Figure 13 is a flow diagram of method steps for configuring one or more processing clouds to implement a stream network, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-12, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
- a method 1300 begins at step 1302, where server machine 254 within operations center 640 configures nodes 230 within wireless mesh network 202 to collect time series data.
- the time series data includes a sequence of data values and corresponding timestamps indicating times when each data value was collected.
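- A minimal sketch of that representation is shown below; the field and type names are illustrative assumptions rather than the actual data model used by nodes 230.

    case class TimeSeriesPoint(timestamp: Long, value: Double)

    case class TimeSeries(sourceNodeId: String, points: Vector[TimeSeriesPoint]) {
      // Append a newly collected data value together with the time at which it was collected.
      def append(timestamp: Long, value: Double): TimeSeries =
        copy(points = points :+ TimeSeriesPoint(timestamp, value))
    }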
- server machine 254 configures data ingestion cloud 610 to receive the time series data and to then format that data in order to generate a data stream.
- Server machine 254 may configure intake cloud 612 to execute on a first cloud computing environment and to receive the time series data, and then configure formatting cloud 614 to execute on a second cloud computing environment and to format the time series data.
- server machine 254 may configure both intake cloud 612 and formatting cloud 614 to execute within the same cloud computing environment.
- server machine 254 configures real-time processing cloud 630 to process the stream data generated by data ingestion cloud 610 in real time. In doing so, server machine 254 may configure one or more instances of virtual computing devices to execute a core stream pipeline such as that shown in Figure 10 .
- server machine 254 configures distributed processing cloud 620 to collect and process historical stream data, and to perform data queries in response to commands issued by customer devices 650. In doing so, server machine 254 may cause distributed processing cloud 620 to accumulate stream data over long periods of time from data ingestion cloud 610, and to store that accumulated stream data within data archive 904. Server machine 254 may also configure master node 900 and slave nodes 902 to perform distributed processing of the data stored in data archive 904.
- server machine 254 generates data visualizations for customer devices 650 based on real-time stream data processed by real-time processing cloud 630 and historical data processed by distributed processing cloud 620.
- server machine 254 implements a web service that responds to requests from customer devices 650 to generate visualizations.
- server machine 254 may further configure distributed processing cloud 620 and real-time processing cloud 630 to interact with one another when certain conditions are met, as described in greater detail below in conjunction with Figures 14-15 .
- Figure 14 is a flow diagram of method steps for triggering distributed processing of stream-based data, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-12 , persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
- a method 1400 begins at step 1402, where server machine 254 causes real-time processing cloud 630 to generate an alert when a condition is met, based on the processing of stream data, for transmission to distributed processing cloud 620.
- the data stream could reflect a time series of temperature values recorded by a node, and the condition could be the temperature values falling beneath a certain temperature threshold.
- server machine 254 causes distributed processing cloud 620 to receive the alert from real-time processing cloud 630, and, in response, to analyze historical data associated with the data stream to identify a trend.
- server machine 254 could cause distributed processing cloud 620 to analyze historical temperature values gathered by the node and to identify trends in that historical data.
- the trend could indicate a seasonal variation in temperature, or the onset of inclement weather.
- server machine 254 notifies a customer who subscribes to the data stream of the trend that has been identified. In doing so, server machine 254 may indicate to the customer predicted values of the data stream determined based on the trend. For example, if the historical analysis indicated that the temperature change resulted from seasonal temperature variations, then server machine 254 could predict future temperature changes based on those observed during previous years.
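- As a hedged illustration of the historical step, the sketch below fits a simple least-squares line to archived values and uses it to predict a future value; this stands in for whatever trend model distributed processing cloud 620 actually applies, and the names are illustrative.

    case class Point(timestamp: Long, value: Double)

    object TrendAnalysisSketch {
      // Ordinary least-squares fit; returns (slope, intercept), or None when degenerate.
      def linearFit(points: Seq[Point]): Option[(Double, Double)] = {
        val n = points.size
        if (n < 2) return None
        val xs = points.map(_.timestamp.toDouble)
        val ys = points.map(_.value)
        val xMean = xs.sum / n
        val yMean = ys.sum / n
        val denom = xs.map(x => (x - xMean) * (x - xMean)).sum
        if (denom == 0.0) None
        else {
          val slope = xs.zip(ys).map { case (x, y) => (x - xMean) * (y - yMean) }.sum / denom
          Some((slope, yMean - slope * xMean))
        }
      }

      // Predicted data-stream value at a future time, given a fitted trend.
      def predict(fit: (Double, Double), futureTimestamp: Long): Double =
        fit._1 * futureTimestamp + fit._2
    }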
- processing that occurs within real-time processing cloud 630 may trigger a different type of processing on distributed processing cloud 620.
- Distributed processing cloud 620 may also trigger processing on real-time processing cloud 630, as described in greater detail below in conjunction with Figure 15.
- Figure 15 is a flow diagram of method steps for triggering real-time processing of stream-based data, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-12 , persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.
- a method 1500 begins at step 1502, where server machine 254 causes distributed processing cloud 620 to perform a historical analysis on a first data stream to identify a trend for use in configuring real-time processing cloud 630.
- the trend could be a periodic repetition of specific data values, or a predictable change in data values such as a gradual increase or decline in those data values.
- distributed processing cloud 620 notifies real-time processing cloud 630 of that trend.
- server machine 254 causes real-time processing cloud 630 to monitor the first data stream, in real time, to determine the degree to which that data stream complies with the identified trend. For example, if distributed processing cloud 620 determines that data values associated with the first data stream are steadily increasing over time, then real-time processing cloud 630 could determine whether those data values continue to increase as new data values become available.
- server machine 254 notifies a customer who subscribes to the first data stream of the degree to which the first data stream complies with the trend.
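- The sketch below illustrates one simple way the real-time check could be expressed for the "steadily increasing" trend mentioned above: it reports the fraction of consecutive new samples that remain non-decreasing. The measure is an assumption chosen for the example, not the system's actual compliance metric.

    object TrendComplianceSketch {
      // Fraction of consecutive pairs in the recent window that are non-decreasing;
      // 1.0 means the window fully complies with an increasing trend.
      def complianceWithIncrease(recentValues: Seq[Double]): Double = {
        val pairs = recentValues.zip(recentValues.drop(1))
        if (pairs.isEmpty) 1.0
        else pairs.count { case (a, b) => b >= a }.toDouble / pairs.size
      }
    }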
- This approach may be applied to detect a variety of different types of trends, including those associated with fraud and other forms of non-technical loss.
- Distributed processing cloud 620 may periodically analyze some or all of the stream data stored in data archive 904, and, in response to that analysis, configure real-time processing cloud 630 to specifically monitor certain data streams for which trends have been detected.
- nodes within a wireless mesh network are configured to monitor time series data associated with a utility network (or any other device network), including voltage fluctuations, current levels, temperature data, humidity measurements, and other observable physical quantities.
- a server coupled to the wireless mesh network configures a data ingestion cloud to receive and process the time series data to generate data streams.
- the server also configures a distributed processing cloud to perform historical analysis on data streams, and a real-time processing cloud to perform real-time analysis on data streams.
- the distributed processing cloud and the real-time processing cloud may interoperate with one another in response to processing the data streams.
- the techniques described herein allow the delivery of "data-as-a-service" (DaaS) that represents an interface between the traditional software-as-a-service (SaaS) and platform-as-a-service (PaaS) approaches.
- One advantage of the unique architecture described above is that the real-time processing cloud and the distributed processing cloud can interoperate to identify a greater range of events occurring within the utility network compared to traditional approaches. In addition, those different processing clouds provide customers with greater visibility into the types of events occurring within the utility network.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Description
- Embodiments of the present invention relate generally to network architecture and semantics for distributed processing on a data pipeline, and, more specifically, to distributed smart grid processing.
- A conventional electricity distribution infrastructure typically includes a plurality of energy consumers, such as houses, businesses, and so forth, coupled to a grid of intermediate distribution entities, such as transformers, feeders, substations, etc. The grid of distribution entities draws power from upstream power plants and distributes that power to the downstream consumers. In a modern electricity distribution infrastructure, the consumers, as well as the intermediate distribution entities, sometimes include "smart" meters and other monitoring hardware coupled together to form a mesh network. The smart meters and other measurement and control devices collect data that reflects the operating state of the grid, as well as consumption and utilization of the grid, and then report the collected data, via the mesh network, to a centralized grid management facility, often referred to as the "back office." Such a configuration is commonly known as a "smart grid."
- In a conventional smart grid, the back office receives a multitude of real-time data from the various smart meters and processes that data to identify specific operating conditions associated with the grid. Those conditions may include electrical events, such as sags or swells, as well as physical events, such as downed power lines or overloaded transformers, among other possibilities. The back office usually includes centralized processing hardware, such as a server room or datacenter, configured to process the smart meter data.
US 2013/0229947 A1 discloses a system, a method and program for detecting anomalous events in a utility network, in which a communication device detects whether anomalous events occur with respect to one node in a utility network based on threshold operating information and situational operating information. A communication device receives operation data from nodes in the network and determines whether the operation data from a node constitutes an anomalous event based on a comparison of the received operation data with (i) the threshold operating information defined for the node and (ii) the situational information. The communication device outputs notification of any determined anomalous event. In the wording of the claims, US 2013/0229947 A1 does not disclose that data is acquired at one computing cloud, the data is then transmitted to a different computing cloud, and then in response to receiving the data at the different computing cloud, the data is processed and analysed at the different computing cloud. - One problem with the approach described above is that, with the expansion of smart grid infrastructure, the amount of data that must be transmitted to the back office for processing is growing quickly. Consequently, the network across which the smart meters transmit data can quickly become over-burdened with traffic and, therefore, suffer from throughput and latency issues. In addition, the processing hardware implemented by the back office may quickly become too slow, and therefore obsolete, as the amount of data that must be processed continues to grow in response to increased demand. As a general matter, the infrastructure required to transport and process data generated by a smart grid cannot scale nearly as quickly as the amount of data that is generated by the smart grid system.
- As the foregoing illustrates, what is needed in the art is a more effective approach for transporting and processing data within large-scale network architectures.
- One embodiment of the present invention sets forth a computer-implemented method for identifying events associated with a network environment, as set forth in claim 16.
The invention is defined by the features of the independent claims 1, 11 and 16. Preferred embodiments are defined in the dependent claims. - At least one advantage of the unique architecture described above is that various nodes within a network can interoperate to identify a greater range of trends and events occurring within the utility network compared to traditional approaches.
- So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
- Figure 1 illustrates a utility network configured to implement an infrastructure for distributing electricity, according to one embodiment of the present invention;
- Figure 2A illustrates a mesh network that operates in conjunction with the utility network of Figure 1, according to one embodiment of the present invention;
- Figure 2B illustrates the mesh network of Figure 2A coupled to a machine-to-machine (M2M) network, according to one embodiment of the present invention;
- Figure 3 illustrates a network interface configured to implement multichannel operation, according to one embodiment of the present invention;
- Figure 4A illustrates a server machine coupled to the mesh network of Figure 2, according to one embodiment of the present invention;
- Figure 4B illustrates a graphical user interface that may be used to define and generate one or more data streams, according to one embodiment of the present invention;
- Figure 5 illustrates a stream network configured to operate in conjunction with the mesh network of Figure 2, according to one embodiment of the present invention;
- Figure 6 illustrates a system configured to implement the stream network of Figure 5, according to one embodiment of the present invention;
- Figure 7 illustrates exemplary software modules that are implemented in conjunction with the intake cloud of Figure 6, according to one embodiment of the present invention;
- Figure 8 illustrates exemplary software modules that are implemented in conjunction with the formatting cloud of Figure 6, according to one embodiment of the present invention;
- Figure 9 illustrates exemplary software modules that are implemented in conjunction with the distributed processing cloud of Figure 6, according to one embodiment of the present invention;
- Figure 10 illustrates exemplary software modules that are implemented in conjunction with the real-time processing cloud of Figure 6, according to one embodiment of the present invention;
- Figure 11 illustrates exemplary software modules that are implemented in conjunction with the operations center of Figure 6, according to one embodiment of the present invention;
- Figure 12 illustrates exemplary software modules that are implemented in conjunction with the customer devices of Figure 6, according to one embodiment of the present invention;
- Figure 13 is a flow diagram of method steps for configuring one or more processing clouds to implement a stream network, according to one embodiment of the present invention;
- Figure 14 is a flow diagram of method steps for triggering distributed processing of stream-based data, according to one embodiment of the present invention; and
- Figure 15 is a flow diagram of method steps for triggering real-time processing of stream-based data, according to one embodiment of the present invention.
- In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
- In the following disclosure, a multi-layered network architecture is described that includes a utility network, illustrated in
Figure 1, a wireless mesh network, illustrated in Figures 2A-2B, and a stream network, illustrated in Figure 5. The utility network includes hardware configured to transport and distribute electricity. The wireless mesh network includes hardware nodes residing within elements of that utility network, where those nodes are configured to execute firmware and/or software to (i) monitor the utility network and (ii) establish and maintain the wireless mesh network. In addition, the nodes are also configured to execute firmware and/or software to generate the stream network. The stream network includes time series data that is generated and processed by the nodes, and shared between nodes via the wireless mesh network. The stream network operates above the wireless mesh network, which, in turn, operates above the electricity distribution layer. -
Figure 1 illustrates a utility network 100 configured to implement an infrastructure for distributing electricity, according to one embodiment of the present invention. As shown, utility network 100 includes consumers 110, transformers 120, feeders 130, substations 140, and a back office 150, coupled together in a sequence. Substations 140(1) through 140(T) are configured to draw power from one or more power plants 160 and to distribute that power to feeders 130(1) through 130(S). Feeders 130, in turn, distribute that power to transformers 120(1) through 120(R). Transformers 120 step down high-voltage power transported by feeders 130 to low-voltage power, and then transmit the low-voltage power to consumers 110(1) through 110(Q). Consumers 110 include houses, businesses, and other consumers of power. - Each of
consumers 110, transformers 120, feeders 130, and substations 140 may include one or more instances of a node. In the context of this disclosure, a "node" refers to a computing device that is coupled to an element of utility network 100 and includes a sensor array and a wireless transceiver. An exemplary node is described below in conjunction with Figure 3. Each such node is configured to monitor operating conditions associated with a specific portion of the utility network 100. For example, consumer 110(1) could include a node configured to monitor a number of kilowatt-hours consumed by consumer 110(1). In another example, transformer 120(R-1) could include a node configured to monitor voltage levels or temperature at transformer 120(R-1). In yet another example, feeder 130(S) could include one or more nodes configured to monitor humidity percentages or wind velocities at various locations associated with feeder 130(S). As a general matter, the nodes within utility network 100 may be smart meters, Internet of Things (IoT) devices configured to stream data, or other computing devices. The nodes within utility network 100 may be configured to record physical quantities associated with power distribution and consumption along utility network 100, record physical quantities associated with the environment where utility network 100 resides, record quality of service data, or record any other technically feasible type of data. - The nodes residing within
utility network 100 are configured to communicate with one another to form an interconnected wireless mesh network. An exemplary wireless mesh network is described in greater detail below in conjunction withFigures 2A-2B .Back office 150 is coupled to this wireless mesh network and configured to coordinate the overall operation of the network and, in some cases, the corresponding nodes. In doing so,back office 150 may configure nodes to record specific data and to establish communication with neighboring nodes. In addition,back office 150 may program the nodes to execute "stream functions" to process incoming time series data, thereby generating data streams. In one embodiment, this configuration is performed in a distributed processing cloud. The incoming time series data could include raw data recorded at the node, or data streams received from neighboring nodes.Back office 150 may collect the generated data streams, and, by processing those streams, identify various events occurring withinutility network 100.Back office 150 may then take specific actions in response to those identified events. In some embodiments, the node management functionality discussed above is performed by a separate "operations center," discussed in greater detail below in conjunction withFigure 6 . In other embodiments, any of the aforementioned management functionality may also occur within a cloud-based processing environment. -
Figure 2A illustrates a mesh network that operates in conjunction withutility network 100 ofFigure 1 , according to one embodiment of the present invention. As shown, anetwork system 200 includes awireless mesh network 202, which may include asource node 210,intermediate nodes 230 anddestination node 212.Source node 210 is able to communicate with certainintermediate nodes 230 via communication links 232.Intermediate nodes 230 communicate amongst themselves via communication links 232.Intermediate nodes 230 communicate withdestination node 212 via communication links 232. In one embodiment, each ofnodes 230 may communicate with other nodes using a specific set of frequencies, and may respond to queries from other (non-node) devices using a different set of frequencies.Network system 200 may also include anaccess point 250, anetwork 252, aserver machine 254, and arouter 256.Network 252 andserver machine 254 may be coupled to a distributedprocessing cloud 260, which generally resides outside ofnetwork system 200. As mentioned above in conjunction withFigure 1 , a given node 230 (or asource node 210 or a destination node 212) may reside within any of the elements ofutility network 100, includingconsumers 110,transformers 120, and so forth. Thevarious nodes 230 shown inFigure 2A may also be coupled to various loT devices, as described in greater detail below in conjunction withFigure 2B . -
Figure 2B illustrates the mesh network of Figure 2A coupled to a machine-to-machine (M2M) network, according to one embodiment of the present invention. As shown, devices 240 are coupled to one another and to various nodes 230 via connections 244, thereby forming M2M network 244. Each of devices 240 may be any technically feasible IoT device, including a smart appliance, a smart traffic light, or any other device configured to perform wireless communications. Devices 240 may communicate with one another directly via specific connections 242, or communicate with one another indirectly by way of nodes 230. Each device 240 may gather various types of data and communicate that data to one or more nodes 230. The data gathered by a given device 240 generally includes real-time data such as, e.g., a sequence of recorded values and timestamps indicating when each value was recorded. - Referring generally to
Figures 2A-2B, a discovery protocol may be implemented to determine node adjacency to one or more adjacent nodes. For example, intermediate node 230-2 may execute the discovery protocol to determine that nodes 210, 230-4, and 230-5 are adjacent to node 230-2. Furthermore, this node adjacency indicates that communication links 232-2, 232-5, and 232-6 may be established with nodes 210, 230-4, and 230-5, respectively. Any technically feasible discovery protocol, including one related to IoT and/or M2M principles, may be implemented.
source node 210 and at least oneintermediate node 230,source node 210 may generate payload data for delivery todestination node 212, assuming a path is available. The payload data may comprise an Internet protocol (IP) packet, an Ethernet frame, or any other technically feasible unit of data. Similarly, any technically feasible addressing and forwarding techniques may be implemented to facilitate delivery of the payload data fromsource node 210 todestination node 212. For example, the payload data may include a header field configured to include a destination address, such as an IP address or Ethernet media access control (MAC) address. - Each
intermediate node 230 may be configured to forward the payload data based on the destination address. Alternatively, the payload data may include a header field configured to include at least one switch label to define a predetermined path fromsource node 210 todestination node 212. A forwarding database may be maintained by eachintermediate node 230 that indicates which of communication links 232 should be used and in what priority to transmit the payload data for delivery todestination node 212. The forwarding database may represent multiple paths to the destination address, and each of the multiple paths may include one or more cost values. Any technically feasible type of cost value may characterize a link or a path withinnetwork system 200. In one embodiment, each node withinwireless mesh network 202 implements substantially identical functionality and each node may act as a source node, destination node or intermediate node. - In
network system 200,access point 250 is configured to communicate with at least one node withinwireless mesh network 202, such as intermediate node 230-4. Communication may include transmission of payload data, timing data, or any other technically relevant data betweenaccess point 250 and the at least one node withinwireless mesh network 202. For example, a communication link may be established betweenaccess point 250 and intermediate node 230-4 to facilitate transmission of payload data betweenwireless mesh network 202 andnetwork 252.Network 252 is coupled toserver machine 254 via a communications link.Access point 250 is coupled tonetwork 252, which may comprise any wired, optical, wireless, or hybrid network configured to transmit payload data betweenaccess point 250 andserver machine 254. - In one embodiment,
server machine 254 represents a destination for payload data originating withinwireless mesh network 202 and a source of payload data destined for one or more nodes withinwireless mesh network 202.Server machine 254 generally resides within an operations center or other cloud-based environment configured to managewireless mesh network 202. For example,server machine 254 could be implemented by a datacenter that includes a number of different computing devices networked together. In one embodiment,server machine 254 executes an application for interacting with nodes withinwireless mesh network 202. For example, nodes withinwireless mesh network 202 may perform measurements to generate data that reflects operating conditions ofutility network 100 ofFigure 1 , including, e.g., power consumption data, among other measurements.Server machine 254 may execute an application to collect, process, and report those measurements and any other computed values. In one embodiment,server machine 254queries nodes 230 withinwireless mesh network 202 for certain data. Each queried node replies with the requested data, such as consumption data, system status, health data, and so forth. In an alternative embodiment, each node withinwireless mesh network 202 autonomously reports certain data, which is collected byserver machine 254 as the data becomes available via autonomous reporting. Persons skilled in the art will recognize that the techniques described herein are applicable to any technically feasible type of network, beyond utility networks. - As described in greater detail below in conjunction with
Figures 4-15 ,server machine 254 is configured to establish and maintain the aforementioned stream network that operates abovewireless mesh network 202. More specifically,server machine 254 configures thenodes 230 withinwireless mesh network 202 to implement "stream functions" in order to generate data streams and process real-time data. A stream function may be any technically feasible algorithm or computational programming function for processing and/or monitoring real-time data. A data stream represents real-time data that is generated by execution of a stream function. The stream network generally includes the various data streams and the paths throughmesh network 202 followed by those data streams. The stream network is described in greater detail below in conjunction withFigures 5-15 . - In one embodiment,
server machine 254 may interact with distributed processing cloud 260 to perform some or all of the stream network configuration and stream function execution. Distributed processing cloud 260 may be a private or a public distributed processing cloud, and may include a combination of different processing clouds. Distributed processing cloud 260 may define a configurable data processing pipeline that affects a logical data network path above the physical node paths within mesh network 202.
network system 200. For example, communications between twonodes 230 or between anode 230 and thecorresponding access point 250 may be via a radio-frequency local-area network (RF LAN), while communications betweenmultiple access points 250 and the network may be via a WAN such as a general packet radio service (GPRS). As mentioned above, eachnode 230 withinwireless mesh network 202 includes a network interface that enables the node to communicate wirelessly with other nodes. An exemplary network interface is described below in conjunction withFigure 3 . -
Figure 3 illustrates anetwork interface 300 configured to implement multichannel operation, according to one embodiment of the present invention. Eachnode wireless mesh network 202 ofFigures 2A-2B includes at least one instance ofnetwork interface 300.Network interface 300 may include, without limitation, a microprocessor unit (MPU) 310, a digital signal processor (DSP) 314, digital to analog converters (DACs) 320 and 321, analog to digital converters (ADCs) 322 and 323,analog mixers phase shifter 332, anoscillator 330, a power amplifier (PA) 342, a low noise amplifier (LNA) 340, anantenna switch 344, and anantenna 346. A memory 312 may be coupled toMPU 310 for local program and data storage. Similarly, amemory 316 may be coupled toDSP 314 for local program and data storage. Memory 312 and/ormemory 316 may be used to store data structures such as, e.g., a forwarding database, and/or routing tables that include primary and secondary path information, path cost values, and so forth. - In one embodiment,
MPU 310 implements procedures for processing IP packets transmitted or received as payload data bynetwork interface 300. The procedures for processing the IP packets may include, without limitation, wireless routing, encryption, authentication, protocol translation, and routing between and among different wireless and wired network ports. In one embodiment,MPU 310 implements the techniques performed by the node whenMPU 310 executes firmware and/or software programs stored in memory withinnetwork interface 300. -
DSP 314 is coupled to DAC 320 and DAC 321, and is also coupled to ADC 322 and ADC 323. Network interface 300 represents just one possible network interface that may be implemented within wireless mesh network 202 shown in Figures 2A-2B, and any other technically feasible device for transmitting and receiving data may be incorporated within any of the nodes within wireless mesh network 202. As a general matter, server machine 254 of Figures 2A-2B configures and manages the operation of each node 230 where network interface 300 resides. -
Figure 4A illustrates server machine 254 that is coupled to wireless mesh network 202 of Figure 2, according to one embodiment of the present invention. As shown, server machine 254 includes processing unit 400, input/output (I/O) devices 410, and memory unit 420, coupled together. Memory unit 420 includes stream network engine 422, stream network data 424, stream software developer kit (SvDK) 426, and database 428. -
Processing unit 400 may be any technically feasible hardware unit or collection of units configured to process data, including a central processing unit (CPU), a graphics processing unit (GPU), a parallel processing unit (PPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or any combination thereof. Processing unit 400 is configured to perform I/O operations via I/O devices 410, as well as to read data from and write data to memory unit 420. In particular, processing unit 400 is configured to execute program code included in stream network engine 422 and SvDK 426, generate and/or modify stream network data 424, and read from and/or write to database 428. - I/
O devices 410 may include devices configured to receive input, such as, e.g., a keyboard, a mouse, a digital versatile disc (DVD) tray, and so forth. I/O devices 410 may also include devices configured to generate output, such as, e.g., a display device, a speaker, a printer, and so forth. I/O devices 410 may further include devices configured to both receive input and generate output, such as a touchscreen, a data port, and so forth. I/O devices generally provide connectivity to the Internet, and, specifically, towireless mesh network 202. -
Memory unit 420 may be any technically feasible unit configured to store data, including a hard disk, random access memory (RAM), etc. The stored data may include structured data sets, program code, software applications, and so forth.Stream network engine 422 is a software application that may be executed by processingunit 400 to establish and maintain the stream network discussed above in conjunction withFigures 1-4 , and, further, below in conjunction withFigures 5-15 . In doing so,stream network engine 422 configuresnodes 230 withinmesh network 202 to execute stream functions included withinstream network data 424. The stream functions generally reflect various operations that can be performed by anode 230 in order to process time series data collected by that node. As described in greater detail below in conjunction withFigures 6 and11 , "stream actors" may encapsulate a sequence of one or more stream functions. In one embodiment, the functionality ofstream network engine 422 is performed within distributedprocessing cloud 260 ofFigures 2A-2B . In another embodiment,server machine 254 executesstream network engine 422 to configure distributedprocessing cloud 260 to managenodes 230 and/or execute the stream functions described above. -
SvDK 426 is a software application that, when executed by processingunit 400, provides a template-based composition wizard / application to utility customers that allows creation of stream functions.SvDK 426 generates a graphical user interface (GUI) that supports drag-and-drop construction of stream functions and/or node monitoring rules, among other possibilities.SvDK 426 may be implemented as a server configured to provide access to the aforementioned GUI, among other possibilities.SvDK 426 is configured to expose to the customer various abstractions of underlying libraries that encapsulate various application programming interface (API) calls. These abstract libraries enable the customer to generate complex stream functions and stream services that are implemented by complex underlying code, yet require no actual coding on the part of the customer.SvDK 426 enables the customer to generate stream functions and stream services from scratch or based on other stream functions and/or stream services. An exemplary GUI that may be generated bySvDK 426 is described below inFigure 4B . -
Figure 4B illustrates aGUI 430 that may be used to generate a data stream, according to one embodiment of the present invention. As shown,GUI 430 includes various GUI elements for making different selections and providing various inputs associated with a data stream, includingcustomer selector 432,input selector 434,device ID input 436,name input 438, attributesselector 440,interval input 442, andoptions buttons 444. A user ofSvDK 426 may interact withGUI 430 in order to define a new data stream service. - In practice, the user selects the customer they represent via
customer selector 432, and then identifies, viainput selector 434, the specific inputs from which the data stream should receive data. Those inputs could be derived from specific devices, includingother nodes 230, or non-utility network data sources such as Facebook® or Twitter®, NOAA climate data, as well as abstract data sources such as previously created and computed streams. The user may also enter a specific device ID viadevice ID input 436. The user may then provide a name vianame input 438 and select the particular function or functions that should be executed on the source data viaattributes selector 440.Interval selector 442 allows the user to adjust the frequency with which elements of the data stream are generated.Options buttons 444 allow various other options to be selected. Once the user has configuredGUI 430 to include various selections and inputs, the user may submit the data stream configuration defined by those selections and inputs toserver machine 254. In response,server machine 254 then configures distributedprocessing cloud 260,nodes 230, and so forth, to generate that data stream. - Referring back now to
Figure 4A ,SvDK 426 may include and/or generate server-side code that executes onprocessing unit 400 as well as client-side code that executes on a remote computing device or sensory / measurement device associated with a utility customer, as well as code that executes on distributedprocessing cloud 260. In one embodiment, as mentioned above,SvDK 426 may be a web application that provides users with access to a library of function calls for performing data processing on time series data, including raw time series data generated by anode 230 as well as aggregated data stream time series data received from other nodes. The user may generate a data stream by assembling various function calls via the GUI described above in any desired fashion to process the time series data. The library of function calls and other data used bySvDK 426 may be stored in alocal database 428, among other places. Those function calls generally encapsulate specific programmatic operations, including database operations and data processing algorithms, without requiring that the user write actual code. Generally,SvDK 426 allows utility customers to customize a specific portion of the stream network that operates in conjunction withmesh network 202. The stream network discussed thus far is described in greater detail below in conjunction withFigure 5 . -
- Figure 5 illustrates a stream network 500 configured to operate in conjunction with mesh network 202 of Figure 2, according to one embodiment of the present invention. Again, as illustrated in greater detail below, stream network 500 operates above mesh network 202 of Figure 2 in an overall network architecture. As shown, nodes 230 of mesh network 202 execute stream functions 510 in order to generate data streams 520. - Specifically, node 230-1 executes stream functions 510-1 to generate data stream 520-1, node 230-2 executes stream function 510-2 to generate data streams 520-2 and 520-3, node 230-3 executes stream functions 510-3 to generate data stream 520-4, node 230-4 executes stream functions 510-4 to generate data streams 520-5 and 520-6, node 230-5 executes stream functions 510-5 to generate data streams 520-7 and 520-8, and node 230-6 executes stream functions 510-6 to generate data stream 520-9. Each data stream 520 includes a time series of data elements, where each data element includes a data value and a corresponding timestamp indicating a time when the data value was computed, recorded, or generated.
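- A data element of the kind described above can be represented simply as a (value, timestamp) pair; the sketch below (illustrative Python, not the patented implementation; the scaling function and sample values are assumptions) shows a trivial stream function turning raw readings into such a time series:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class DataElement:
    value: float        # the computed or recorded data value
    timestamp: float    # seconds since the epoch when the value was produced

def scale_stream(raw: Iterable[DataElement], factor: float) -> Iterator[DataElement]:
    """A trivial stream function: scale each raw reading, preserving its timestamp."""
    for element in raw:
        yield DataElement(value=element.value * factor, timestamp=element.timestamp)

raw_readings = [DataElement(2.0, 1000.0), DataElement(2.5, 1010.0)]
for out in scale_stream(raw_readings, factor=10.0):
    print(out)
```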
- A given
node 230 may execute one or more stream functions 510 to process raw time series data generated by that node 230. A stream function 510 may be a Boolean operation, such as, e.g., a comparison, or a more complex, higher-level function, such as a correlation operation. The raw time series data processed by stream functions generally includes various types of sensor data, such as voltage data, current measurements, temperature readings, and other types of environmental and/or non-environmental information. The raw time series data may also include sensor data reflective of the operating conditions of node 230. Further, the raw time series data may include network status information, traffic measurements, and so forth. In one embodiment, each node 230 is configured to access time series data that is derived from various social media outlets, such as Twitter® or Facebook®, among other possibilities. Node 230 could, for example, retrieve tweets in real-time (or near real-time) via an API provided by Twitter®. Node 230 is configured to process the raw time series data to generate one or more data streams 520, and to then transmit the generated data stream(s) 520 to neighboring nodes. Data streams generated by processing raw time series data may be referred to herein as "native data streams." - A given
node 230 may also execute one or more stream functions 510 to process data streams 520 received from neighboring nodes 230. A received data stream 520 could be generated by an upstream node 230 based on raw time series data recorded by that node, or generated based on other data streams 520 received by that upstream node. Similar to above, node 230 is configured to process received data streams 520 to generate additional data streams 520, and to then transmit these data stream(s) 520 to neighboring nodes. Data streams generated by processing other data streams may be referred to herein as "abstract data streams."
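- The distinction between native and abstract data streams can be illustrated with a short sketch (hypothetical Python; the time-alignment rule and sample values are assumptions) in which a node derives a new stream by combining two streams received from neighboring nodes:

```python
from typing import Dict, List, Tuple

# Each received stream is a list of (timestamp, value) pairs.
Stream = List[Tuple[int, float]]

def combine_streams(a: Stream, b: Stream) -> Stream:
    """Produce an abstract data stream by summing values that share a timestamp."""
    values_b: Dict[int, float] = dict(b)
    return [(ts, va + values_b[ts]) for ts, va in a if ts in values_b]

upstream_1 = [(0, 1.2), (60, 1.4), (120, 1.1)]
upstream_2 = [(0, 0.8), (60, 0.9), (180, 1.0)]
print(combine_streams(upstream_1, upstream_2))   # values summed at the shared timestamps 0 and 60
```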
- Upon generating a data stream 520, node 230 is configured to transmit the data stream 520 to back office 150 and/or distributed processing cloud 260, as mentioned. Back office 150 collects data streams 520 from nodes 230 within wireless mesh network 202 and may then perform various additional processing operations with those data streams 520 to identify network events associated with utility network 100 and/or wireless mesh network 202, as well as consumption data. In doing so, server machine 254 may characterize time series data associated with nodes 230, including raw time series data and received data streams, and then identify network events associated with abnormal patterns within that time series data. Those network events may include voltage sags/swells, downed power lines, appliance malfunctions, potential fires, and fraud, among others. Server machine 254 may also process time series data to identify expected or normal patterns, including consumption data, quality of service data, etc. Server machine 254 may then analyze this data to compute load predictions, demand estimations, and so forth. In doing so, server machine 254 may rely on data from abstract data sources, such as Twitter® or Facebook®, to identify possible surges in electricity usage. Server machine 254 may then provide advanced notification to a utility company. - For example, a given
node 230 could be configured to participate in identifying voltage swells (or sags) by executing a stream function that generates a running average of voltage levels associated with the node 230. When the voltage level at a given point in time exceeds (or falls below) the running average by a threshold amount, node 230 could alert server machine 254. Server machine 254 could then identify that a voltage swell (or sag) is occurring in the region where the node resides and notify the utility provider. Server machine 254 could also identify voltage swells or sags by correlating multiple alerts received from multiple nodes 230 residing within the same region. In general, a node 230 may combine data associated with other devices or data streams to draw insights that reflect consumption, service quality and usage, possible causes of deviations from expected values, as well as bill forecasts.
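- A minimal sketch of the running-average comparison described in this example might look as follows (illustrative Python; the window size, threshold, and sample voltages are arbitrary assumptions, not values taken from the disclosure):

```python
from collections import deque
from typing import Optional

class SwellSagDetector:
    """Compare each new voltage reading against a running average of recent readings."""

    def __init__(self, window: int = 96, threshold_volts: float = 6.0):
        self.samples = deque(maxlen=window)   # recent readings used for the running average
        self.threshold = threshold_volts

    def update(self, voltage: float) -> Optional[str]:
        alert = None
        if self.samples:
            average = sum(self.samples) / len(self.samples)
            if voltage > average + self.threshold:
                alert = "swell"
            elif voltage < average - self.threshold:
                alert = "sag"
        self.samples.append(voltage)
        return alert

detector = SwellSagDetector(window=4, threshold_volts=5.0)
for v in [120.0, 120.5, 119.8, 120.2, 127.5]:
    event = detector.update(v)
    if event:
        print(f"{event} detected at {v} V")   # the node would alert the server at this point
```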
- In another example, a given node 230 could be configured to execute a stream function that generates a running average of voltage load associated with a transformer to which the node 230 is coupled. When the running average exceeds a threshold level for some period of time, the node 230 could notify server machine 254 that a fire may be imminent. The node 230 could also compute the threshold value dynamically by executing a stream function on time series data that reflects ambient temperature associated with the node 230. The node 230 could then adjust the threshold based on the type of transformer, e.g., by executing a stream function to parse nameplate data associated with that transformer and then generate a nominal load value for that particular type of transformer. The node 230 could also receive the threshold value from server machine 254. - In yet another example, a given
node 230 could be configured to participate in identifying usage fraud or theft by executing a stream function to characterize usage patterns associated with a consumer to which the node 230 is coupled and then identify patterns commonly associated with fraud. When a usage pattern commonly associated with fraud is detected, the node 230 could notify server machine 254. Such a pattern could be abnormally high consumption compared to the prior usage patterns of neighboring consumers, or a divergence between the measured load at a transformer coupling a set of meters together and the total consumed power reported at those meters, among other possibilities.
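- A sketch of the transformer-versus-meters comparison could look like the following (illustrative Python; the tolerance value and the kWh figures are assumptions):

```python
from typing import List

def non_technical_loss_suspected(transformer_kwh: float,
                                 meter_kwh_readings: List[float],
                                 tolerance: float = 0.05) -> bool:
    """Return True when the load measured at the transformer exceeds the total
    consumption reported at the downstream meters by more than `tolerance` (fractional)."""
    billed_total = sum(meter_kwh_readings)
    if transformer_kwh <= 0:
        return False
    divergence = (transformer_kwh - billed_total) / transformer_kwh
    return divergence > tolerance

# Transformer delivered 102 kWh over the interval, but the meters only account for 90 kWh.
print(non_technical_loss_suspected(102.0, [30.0, 28.0, 32.0]))   # True -> notify the server
```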
- Persons skilled in the art will recognize that stream functions designed for performing computations related to one consumable utility may also be applicable to other consumable utilities. For example, the fraud detection techniques outlined above may be applied to identify loss in the context of water consumption. SvDK 426 of Figures 4A-4B is configured to allow stream functions generated for one utility to be applied to performing analogous computations with another utility. - A given
node 230 may identify network events based on parsing data streams collected from a social media outlet (such as the Twitter® API, among others). For example, a data stream gathered from a social media outlet could reflect descriptions of downed power lines, fallen trees, and other events that may impact the functionality of wireless mesh network 202 and utility network 100. Node 230 could execute a stream function to search that data stream for specific references to such events. Users that contribute to the social media outlet mentioned above would generally create the descriptions included in the data stream in the form of posts, tweets, etc. Node 230 could assign a credibility factor or confidence value to each user in order to validate those descriptions. In this fashion, node 230, and stream network 500 as a whole, may incorporate qualitative data provided by human beings with some level of confidence.
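- The keyword-plus-credibility idea can be sketched as follows (illustrative Python; the keywords, credibility scores, threshold, and example posts are assumptions, and no real social media API is called):

```python
KEYWORDS = ("downed power line", "fallen tree", "transformer fire")

def weighted_event_score(posts, credibility, threshold=1.0):
    """Sum the credibility of users whose posts mention a monitored event;
    report the event when the accumulated confidence passes the threshold."""
    score = 0.0
    for user, text in posts:
        if any(keyword in text.lower() for keyword in KEYWORDS):
            score += credibility.get(user, 0.1)   # unknown users get a low default weight
    return score >= threshold, score

posts = [
    ("alice", "Huge storm here, downed power line on Elm St"),
    ("bob", "nice weather today"),
    ("carol", "Fallen tree took out the wires near the substation"),
]
credibility = {"alice": 0.75, "carol": 0.5}
print(weighted_event_score(posts, credibility))   # (True, 1.25) -> candidate network event
```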
- Generally, stream network 500 may be configured to perform a wide variety of distributed processing operations to identify events occurring within underlying networks, including wireless mesh network 202 and utility network 100. Stream network 500 may also be configured to perform general processing operations (i.e., beyond event identification). In one embodiment, server machine 254 may implement a map-reduce type functionality by mapping stream functions to nodes, and then reducing the data streams generated by execution of the mapped stream functions by collecting and processing those data streams. In this fashion, server machine 254 is capable of configuring stream network 500 to operate as a generic, distributed computing system. Portions of this distributed computing system may execute on a cloud-based infrastructure in addition to executing on nodes 230. Persons skilled in the art will recognize that server machine 254 may configure stream network 500 to implement any technically feasible form of distributed processing, beyond map-reduce. Generally, stream network 500 reflects a distributed computing system that combines the processing, extrapolation, interpolation, and analysis of data streams using real-time and historical streams via in-line and parallel batch processing.
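- As a loose illustration of the map-reduce style configuration described above (illustrative Python; the per-node readings and the reduce step are assumptions), server-side logic might map a stream function onto per-node readings and then reduce the results:

```python
# Readings reported by three nodes over the same interval (node id -> raw values).
node_readings = {
    "node-230-1": [120.1, 120.3, 119.9],
    "node-230-2": [121.0, 121.4, 121.2],
    "node-230-3": [118.8, 119.0, 118.9],
}

def mapped_stream_function(values):
    """The function 'mapped' to each node: report that node's average reading."""
    return sum(values) / len(values)

def reduce_step(per_node_results):
    """The 'reduce': collapse per-node results into one network-wide figure."""
    return sum(per_node_results.values()) / len(per_node_results)

per_node = {node: mapped_stream_function(v) for node, v in node_readings.items()}
print(per_node)                 # intermediate results collected from the stream network
print(reduce_step(per_node))    # network-wide average assembled at the server
```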
- In one embodiment, server machine 254 and/or distributed processing cloud 260 are configured to orchestrate the distribution of processing tasks and/or data storage across the various nodes 230 within stream network 500 in a centralized manner. In doing so, server machine 254 and/or distributed processing cloud 260 may assign specific processing operations to different nodes, allocate particular amounts of data storage to different nodes, and generally dictate some or all configuration operations to those nodes. - In another embodiment,
nodes 230 perform a self-orchestration procedure that occurs in a relatively distributed fashion, i.e., without the involvement of a centralized unit such as server machine 254 or distributed processing cloud 260. In doing so, each node 230 may execute a stream function in order to negotiate processing and/or data storage responsibilities with neighboring nodes. Nodes 230 may perform such negotiations in order to optimize energy usage, processing throughput, bandwidth, data rates, etc. For example, nodes 230 could negotiate a distribution of processing tasks that leverages the processing capabilities of solar-powered nodes during daylight hours, and then redistributes those operations to nodes powered by utility network 100 during non-daylight hours. In another example, a group of nodes 230 could negotiate coordinated communications using a specific data rate to optimize power consumption. At any given time, server machine 254 and/or distributed processing cloud 260 may assume direct control over nodes 230, thereby causing nodes 230 to transition from self-orchestration to centralized orchestration. -
Nodes 230 may initiate specific actions based on the execution of one or more stream functions 510. For example, a given node 230 could execute a stream function 510 that compares temperature and humidity values to threshold temperature and humidity values. The node 230 could then determine that both temperature and humidity have exceeded the respective threshold values for a specific amount of time, and then determine that mold growth is likely at the location occupied by the node. The node 230 could then take specific steps to counteract such growth, including activating a ventilation device, or simply notifying back office 150. Generally, each node 230 is configured to both process and respond to recorded time series data, received data streams, and generated data streams, and to generate insights and/or alerts based on such monitoring.
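- The mold-risk example can be sketched as a stream function that requires both readings to stay above their thresholds for a sustained number of consecutive samples (illustrative Python; the thresholds, run length, and readings are assumptions):

```python
def mold_risk(samples, temp_limit=25.0, humidity_limit=70.0, required_run=3):
    """samples: iterable of (temperature_c, relative_humidity_pct) pairs.
    Return True once both limits are exceeded for `required_run` consecutive samples."""
    run = 0
    for temperature, humidity in samples:
        if temperature > temp_limit and humidity > humidity_limit:
            run += 1
            if run >= required_run:
                return True
        else:
            run = 0
    return False

readings = [(24.0, 65.0), (26.0, 72.0), (26.5, 75.0), (27.0, 78.0)]
if mold_risk(readings):
    print("mold growth likely: activate ventilation or notify the back office")
```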
- When executing a stream function 510, a given node 230 may receive control parameters 530 from server machine 254 that influence the execution of those stream functions 510. For example, node 230-1 could receive control parameters 530-1 that reflect an average expected voltage load at node 230-1. Node 230-1 could record the actual voltage load, compare that recorded value to control parameters 530-1, and then perform a specific action based on the result, such as reporting to back office 150 a binary value indicating whether the average expected voltage load was exceeded, among other possibilities. In the above example, one of stream functions 510-1 executed by node 230-1 would reflect the comparison operation between actual and expected voltage loads. - In one embodiment,
server machine 254 may configure nodes 230 to operate according to a policy that indicates guidelines for interacting with the nodes of other networks. Each node 230 configured according to the policy may share network resources, route packets, and generally interoperate with those other nodes based on the policy. For example, a node 230 could be configured according to a policy indicating that 40% of traffic received from a network adjacent to wireless mesh network 202 should be accepted and routed across wireless mesh network 202 on behalf of the adjacent network. In another example, a node 230 could be configured according to another policy indicating that traffic from a first adjacent network should be routed according to a first set of guidelines, while traffic associated with a second adjacent network should be routed according to a second set of guidelines. In yet another example, a node 230 could be configured according to a policy that specifies how traffic received from one adjacent network should be routed across wireless mesh network 202 in order to reach another adjacent network. The technique described herein allows new nodes 230 to be added to wireless mesh network 202 and then configured according to the same policy or policies already associated with other pre-existing nodes 230 in wireless mesh network 202. In addition, this technique allows wireless mesh network 202 to operate in a relatively consistent manner across nodes 230 without requiring continuous querying of server machine 254 with regard to routing decisions. Instead, nodes 230 need only operate according to the configured policy.
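- A very small sketch of the 40% acceptance policy is shown below (illustrative Python; the policy representation and the deterministic counter used for selecting packets are assumptions):

```python
class TransitPolicy:
    """Accept a configured fraction of packets arriving from a named adjacent network."""

    def __init__(self, accept_fraction: float):
        self.accept_fraction = accept_fraction
        self.seen = 0
        self.accepted = 0

    def should_route(self) -> bool:
        self.seen += 1
        # Accept whenever doing so keeps the accepted share at or below the target fraction.
        if (self.accepted + 1) / self.seen <= self.accept_fraction:
            self.accepted += 1
            return True
        return False

policies = {"adjacent-net-A": TransitPolicy(0.4)}   # carry 40% of network A's traffic

decisions = [policies["adjacent-net-A"].should_route() for _ in range(10)]
print(decisions.count(True), "of", len(decisions), "packets routed across the mesh")
```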
- Persons skilled in the art will understand that the techniques described thus far may be implemented in any technically feasible architecture, including public and/or private cloud-based implementations, centralized or decentralized implementations, and so forth. One exemplary implementation of the aforementioned techniques is described in greater detail below in conjunction with Figures 6-15. -
Figure 6 illustrates a system 600 configured to implement the stream network 500 of Figure 5, according to one embodiment of the present invention. As shown, system 600 includes an exemplary portion of utility network 100, including consumer 110, transformer 120, and substation 140. Consumer 110 is coupled to node 230-1, transformer 120 is coupled to node 230-2, and substation 140 is coupled to node 230-3. Nodes 230-1 through 230-3 form a portion of wireless mesh network 202 of Figure 2. Each of the nodes 230 within system 600 is coupled to data ingestion cloud 610. Data ingestion cloud 610 includes an intake cloud 612 and a formatting cloud 614. Data ingestion cloud 610 is coupled to distributed processing cloud 620 and real-time processing cloud 630. Distributed processing cloud 620 and real-time processing cloud 630 are coupled to one another, and also both coupled to operations center 640 and customer devices 650. -
Nodes 230 are configured to implement stream network 500 shown in Figure 5 by collecting time series data, processing that data via the execution of stream functions, and then transmitting that data to data ingestion cloud 610. Data ingestion cloud 610 includes cloud-based computing devices configured to implement intake cloud 612 and formatting cloud 614. Intake cloud 612 receives stream data from nodes 230 and routes that data to formatting cloud 614. Formatting cloud 614 then formats that stream data to generate data streams. In one embodiment, intake cloud 612 executes on a public cloud infrastructure, such as, e.g., Amazon Web Services (AWS), while formatting cloud 614 executes on a private cloud. Generally, intake cloud 612 and formatting cloud 614 within data ingestion cloud 610 may be distributed across one or more processing clouds in any technically feasible fashion. An exemplary collection of software modules configured to implement intake cloud 612 is described in greater detail below in conjunction with Figure 7. An exemplary collection of software modules configured to implement formatting cloud 614 is described in greater detail below in conjunction with Figure 8. -
Data ingestion cloud 610 generates data streams and then transmits those streams to distributed processing cloud 620 and real-time processing cloud 630. Distributed processing cloud 620 includes cloud-based computing devices configured to (i) archive historical data associated with data streams in a searchable database and (ii) perform batch processing on that historical data via a distributed compute architecture. An exemplary collection of software modules configured to implement distributed processing cloud 620 is described in greater detail below in conjunction with Figure 9.
- Real-time processing cloud 630 includes cloud-based computing devices configured to process data streams in real time. In doing so, real-time processing cloud 630 may monitor data streams and determine whether various conditions have been met and, if so, issue alerts in response. Real-time processing cloud 630 may also publish specific data streams to particular subscribers. An exemplary collection of software modules configured to implement real-time processing cloud 630 is described in greater detail below in conjunction with Figure 10.
- In one embodiment, distributed processing cloud 620 and real-time processing cloud 630 are configured to interoperate with one another in order to process data streams on behalf of customers of utility network 100. In doing so, processing occurring on one of the aforementioned processing clouds can trigger processing on the other processing cloud, and vice versa.
- For example, real-time processing cloud 630 could be configured to process a data stream of voltage values updated every few seconds and monitor that stream for a voltage spike of a threshold magnitude. The occurrence of the voltage spike would trigger real-time processing cloud 630 to initiate an operation on distributed processing cloud 620 that involves the processing of historical stream data spanning a longer time scale, such as months or years. Distributed processing cloud 620 could, in response to real-time processing cloud 630, retrieve historical voltage values associated with a range of different times and then attempt to identify a trend in those values, such as, for example, previous voltage spikes of the threshold magnitude. Based on the identified trend, distributed processing cloud 620 could then predict future voltage spikes associated with the data stream. Distributed processing cloud 620 could then notify subscribers to the data stream when voltage spikes are expected. In this fashion, real-time analysis of stream data can trigger historical analysis of stream data when specific conditions are met. This type of coordination between distributed processing cloud 620 and real-time processing cloud 630 is described in greater detail below in conjunction with Figure 14.
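- The interplay in this example can be sketched as a callback from the real-time path into a batch-style historical check (illustrative Python; the archived spike counts, threshold, and trend test are assumptions):

```python
HISTORY = [1, 0, 2, 1, 3, 2, 4]   # count of past spikes per month (hypothetical archive data)

def historical_trend(spike_counts):
    """Batch-style check: report whether archived spike counts are trending upward."""
    first_half = spike_counts[: len(spike_counts) // 2]
    second_half = spike_counts[len(spike_counts) // 2 :]
    rising = sum(second_half) / len(second_half) > sum(first_half) / len(first_half)
    return "spike frequency increasing" if rising else "no clear trend"

def realtime_monitor(voltages, threshold, on_spike):
    """Real-time path: when a spike crosses the threshold, trigger the batch analysis."""
    for v in voltages:
        if v >= threshold:
            return on_spike(HISTORY)
    return "no spike observed"

print(realtime_monitor([120.2, 120.4, 131.0], threshold=130.0, on_spike=historical_trend))
```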
- In another example, distributed processing cloud 620 could be configured to process a data stream of voltage values across a longer time scale, such as months or years. Distributed processing cloud 620 could then identify a trend in that data stream and then configure real-time processing cloud 630 to specifically monitor that data stream for specific events indicated by the trend, such as sags, swells, and so forth. In this fashion, historical analysis of stream data can be used to initiate in-depth, real-time analysis. This type of coordination between distributed processing cloud 620 and real-time processing cloud 630 is described in greater detail below in conjunction with Figure 15. - As a general matter,
operations center 640 configures distributed processing cloud 620 and real-time processing cloud 630 to operate in conjunction with one another in either or both of the aforementioned fashions. -
Operations center 640 is, more generally, the governing body of stream network 500 and wireless mesh network 202. Operations center 640 may be a control room, a datacenter, and so forth. Server machine 254 resides within operations center 640 and may perform some or all of the functionality of operations center 640 discussed herein. Operations center 640 configures nodes 230 to implement stream network 500 and, additionally, configures data ingestion cloud 610, distributed processing cloud 620, and real-time processing cloud 630. In doing so, operations center 640 may program firmware within each node 230 to execute specific stream functions. Operations center 640 may also instantiate instances of virtual computing devices in order to create the various processing clouds shown. Operations center 640 also provides a visualization service that customers may interact with in order to visualize data streams. An exemplary collection of software modules configured to execute within operations center 640 is described in greater detail below in conjunction with Figure 11. -
Customer devices 650 represent computing devices associated with customers of utility network 100. A customer device 650 may be any technically feasible form of computing device or platform, including a desktop computer, mobile computer, and so forth. A customer may use a customer device 650 to access a web-based portal that allows the customer to subscribe to, generate, and visualize data streams. An exemplary collection of software modules configured to implement a customer device 650 is described in greater detail below in conjunction with Figure 12. -
Figure 7 illustrates exemplary software modules that are implemented in conjunction with the intake cloud of Figure 6, according to one embodiment of the present invention. As shown, intake cloud 612 includes a TIBCO® module 700, a utility IQ (UIQ) module 702, a sensor IQ (SIQ) module 704, adapters 706, data encryption 708, a Java Messaging Service (JMS) mule 710, and a dual port firewall (FW) 712. - Persons skilled in the art will understand that some of the various software modules shown in
Figure 7 are commonly associated with specific vendors and may represent specific brands. However, these specific modules are provided to illustrate one possible implementation of the general functionality associated with intake cloud 612, and are not meant to limit the scope of the present invention to the particular vendors / brands shown. In addition, although specific data types may be shown, such as, e.g., JSON, among others, the techniques described herein are not limited to those specific data types, and may be practiced with any technically feasible type of data. -
TIBCO® 700 is a software bus that receives raw data 750 from nodes 230, including time series data and associated time stamps. UIQ 702 is a low-frequency interface configured to pull time series data from TIBCO® 700. SIQ 704 then collects time series data from UIQ 702 with relatively high frequency for storage in a queue. Adapters 706 include various software adapters that allow the various modules described herein to communicate with one another. Data encryption 708 is configured to encrypt the time series data that is queued by SIQ 704. JMS mule 710 provides a data transport service to move encrypted time series data from data encryption 708 to dual port FW 712. Encrypted time series data 760 exits intake cloud 612 and is then formatted within formatting cloud 614, as described in greater detail below. -
Figure 8 illustrates exemplary software modules that are implemented in conjunction with the formatting cloud of Figure 6, according to one embodiment of the present invention. As shown, formatting cloud 614 includes SilverSpring Networks (SSN) agent 800, web service (WS) and representational state transfer (REST) APIs 802, cloud physical interface (PIF) 804, which includes mapping registry 806 and XML→JSON 808, data decryption 810, data anonymizer 812, data compression 814, RabbitMQ (RMQ) 816, and time series database (TSDB) 818. - Persons skilled in the art will understand that some of the software modules shown in
Figure 8 are commonly associated with specific vendors and may represent specific brands. However, these specific modules are provided to illustrate one possible implementation of the general functionality associated withformatting cloud 614, and are not meant to limit the scope of the present invention to the particular vendors / brands shown. In addition, although specific data types may be shown, such as, e.g., JSON, among others, the techniques described herein are not limited to those specific data types, and may be practiced with any technically feasible type of data. -
SSN agent 800 is a software controller for managing formatting cloud 614. WS and REST APIs 802 provide a set of uniform resource identifiers (URIs) for performing various operations with formatting cloud 614. Cloud PIF 804 receives XML data and converts that data to JSON using mapping registry 806 and XML→JSON 808. Data decryption 810 decrypts the encrypted time series data received from intake cloud 612. Data anonymizer 812 obfuscates certain identifying information from the decrypted data. Data compression 814 compresses the decrypted, anonymous data and then stores that data in RMQ 816. In one embodiment, TSDB 818 may also be used to store that data. TSDB 818 may be omitted in some embodiments. The decrypted anonymous data may then exit formatting cloud 614 as stream data 850. Stream data 850 may be consumed by distributed processing cloud 620 or real-time processing cloud 630. -
Figure 9 illustrates exemplary software modules that are implemented in conjunction with the distributed processing cloud of Figure 6, according to one embodiment of the present invention. As shown, distributed processing cloud 620 includes a master node 900 and various nodes 902(1) through 902(N). Distributed processing cloud 620 also includes a data archive 904. - Persons skilled in the art will understand that some of the various software modules shown in
Figure 9 are commonly associated with specific vendors and may represent specific brands. However, these specific modules are provided to illustrate one possible implementation of the general functionality associated with distributedprocessing cloud 620, and are not meant to limit the scope of the present invention to the particular vendors / brands shown. In addition, although specific data types may be shown, such as, e.g., JSON, among others, the techniques described herein are not limited to those specific data types, and may be practiced with any technically feasible type of data. -
Master node 900 and slave nodes 902 are configured to perform distributed, parallel processing operations on stream data. Master node 900 and slave nodes 902 may form a Hadoop processing environment or any other type of distributed processing cluster or architecture. Data archive 904 includes historical stream data collected from data ingestion cloud 610 over long time scales. Master node 900 and slave nodes 902 may perform a variety of different processing tasks with the data stored in data archive 904. Data archive 904 may also be exposed to customers, and may be searchable via customer devices 650. -
Figure 10 illustrates exemplary software modules that are implemented in conjunction with the real-time processing cloud of Figure 6, according to one embodiment of the present invention. As shown, real-time processing cloud 630 includes a core stream pipeline 1000 that includes REST API 1002, Kafka queues 1004, stream computation engine 1006, stream functions 1008, Scala stream actors 1010, Postgres stream metadata 1012, and Cassandra stream data 1014. - Persons skilled in the art will understand that some of the various software modules shown in
Figure 10 are commonly associated with specific vendors and may represent specific brands. However, these specific modules are provided to illustrate one possible implementation of the general functionality associated with real-time processing cloud 630, and are not meant to limit the scope of the present invention to the particular vendors / brands shown. In addition, although specific data types may be shown, such as, e.g., JSON, among others, the techniques described herein are not limited to those specific data types, and may be practiced with any technically feasible type of data. -
REST API 1002 provides various URIs for configuring data streams. Kafka queues 1004 are configured to queue stream data, including JSON objects and generic messages. Stream computation engine 1006 performs operations with data streams by executing Scala stream actors 1010. Each Scala stream actor 1010 is a software construct configured to call one or more stream functions 1008. A stream function 1008 may be any operation that can be performed with stream data. For example, a stream function 1008 could be a sum function, a minimum function, a maximum function, an average function, a function that indicates the fraction of elements in an array that meet a given condition, an interpolation function, a customer-defined function, and so forth. Stream functions 1008 may also have configurable parameters, such as, for example, a configurable window within which to calculate an average value, among other parameters relevant to time series computations. The Scala stream actor 1010 that calls the stream function 1008 may configure these parameters, among other possibilities.
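- The disclosure implements these constructs as Scala stream actors; purely as an illustration of a stream function with a configurable window parameter applied by an actor-like wrapper, a sketch in Python might look like this (the window length, the wrapper class, and the readings are assumptions):

```python
def windowed_average(values, window=4):
    """Stream function with a configurable parameter: average over a sliding window."""
    if len(values) < window:
        return None                      # not enough data yet for this window size
    return sum(values[-window:]) / window

class StreamActor:
    """Minimal stand-in for an actor that applies one configured stream function."""

    def __init__(self, function, **params):
        self.function = function
        self.params = params
        self.buffer = []

    def on_element(self, value):
        self.buffer.append(value)
        return self.function(self.buffer, **self.params)

actor = StreamActor(windowed_average, window=3)
for reading in [10.0, 11.0, 12.0, 13.0]:
    print(actor.on_element(reading))     # None, None, 11.0, 12.0
```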
- Each Scala stream actor 1010 may execute a sequence of stream functions and, potentially, call other stream actors. For example, a first Scala stream actor 1010 could execute a series of stream functions and, upon completion of those functions, call a second Scala stream actor 1010 that would execute a different series of stream functions. Scala stream actors 1010 are generally implemented in the Scala programming language, although any technically feasible programming language may also suffice. - During stream processing,
core stream pipeline 1000 may execute a multitude of Scala stream actors 1010 in parallel with one another to perform real-time processing on data streams. Customers may configure Scala stream actors 1010 directly, or may simply subscribe to data streams generated by those actors. In various embodiments, real-time processing cloud 630 may push specific Scala stream actors 1010 out to nodes 230 in order to configure those nodes to perform "edge processing" on wireless mesh network 202 and stream network 500. With this approach, individual nodes can be configured to perform sequences of specific stream functions. Core stream pipeline 1000 is configured by operations center 640, described in greater detail below in conjunction with Figure 11. -
Figure 11 illustrates exemplary software modules that are implemented in conjunction with the operations center of Figure 6, according to one embodiment of the present invention. As shown, operations center 640 includes visualization engine 1100, REST APIs 1108, and SvDK 1116. Visualization engine 1100 includes merged views 1102, stream computation 1104, and stream alert monitoring 1106. REST APIs 1108 include controllers 1110, models and views 1112, and configuration and logs 1114. SvDK 1116 includes services 1118, devices 1120, and discoveries 1122. -
Visualization engine 1100 provides a back end for generating visualizations of data streams for customers. Customers may access visualization engine 1100 via customer devices 650, as described in greater detail below in conjunction with Figure 12. Merged views 1102 generate views of real-time data and batch (or historical) data. Stream computation 1104 allows new streams to be registered with visualization engine 1100. Stream alert monitoring 1106 monitors data streams and generates alerts when certain conditions are met or specific events occur. -
REST APIs 1108 allow customers to perform various actions, including subscribing to streams, generating alerts, and so forth. Controllers 1110 include logic for performing these actions, while models and views 1112 provide data models and templates for viewing data acquired via REST APIs 1108. Configuration and logs 1114 include configuration data and log files. -
SvDK 1116 is similar to SvDK 426 described above in conjunction with Figure 4A, and includes a specification of the various services 1118 to which a customer may subscribe, a set of devices 1120 associated with those services, and discoveries 1122 that reflect communication between those devices. -
Operations center 640 as a whole manages the operation of system 600, including the various processing clouds and stream network 500. Operations center 640 also provides the back end processing needed to provide customer devices 650 with access to stream data. An exemplary customer device 650 configured to access operations center 640 is described in greater detail below in conjunction with Figure 12. -
Figure 12 illustrates exemplary software modules that are implemented in conjunction with the customer devices of Figure 6, according to one embodiment of the present invention. As shown, customer device 650 includes a portal 1200 that includes a real-time cloud processing interface 1202 and a distributed cloud processing interface 1204. Portal 1200 may be a web browser configured to access any of the URIs provided by REST APIs 1108 shown in Figure 11. Real-time cloud processing interface 1202 provides customers with access to data streams processed by real-time processing cloud 630. Distributed cloud processing interface 1204 provides customers with the ability to run queries against data archive 904 within distributed processing cloud 620. - Referring generally to
Figures 7-12, persons skilled in the art will understand that the various exemplary software modules shown represent executable program code that, when executed by a processing unit, causes the processing unit to perform the various functionality described above. Generally, each of data ingestion cloud 610, distributed processing cloud 620, real-time processing cloud 630, operations center 640, and customer devices 650 includes a plurality of computing devices configured to execute software modules such as those described herein. Those software modules may be implemented via any technically feasible set of programming languages, beyond those explicitly mentioned above. Figures 13-15 describe techniques for configuring the processing clouds described herein, as well as data processing strategies implemented by those clouds. -
Figure 13 is a flow diagram of method steps for configuring one or more processing clouds to implement a stream network, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-12, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. - A
method 1300 begins at step 1302, where server machine 254 within operations center 640 configures nodes 230 within wireless mesh network 202 to collect time series data. The time series data includes a sequence of data values and corresponding timestamps indicating times when each data value was collected. - At
step 1304,server machine 254 configuresdata ingestion cloud 610 to receive the time series data and to then format that data in order to generate a data stream.Server machine 254 may configureintake cloud 612 to execute on a first cloud computing environment and to receive the time series data, and then configure formattingcloud 614 to execute on a second cloud computing environment and to format the time series data. Alternatively,server machine 254 may configure bothintake cloud 612 andformatting cloud 614 to execute within the same cloud computing environment. - At
step 1306,server machine 254 configures real-time processing cloud 630 to process the stream data generated bydata ingestion cloud 610 in real time. In doing so,server machine 254 may configure one or more instances of virtual computing devices to execute a core stream pipeline such as that shown inFigure 10 . - At
step 1308,server machine 254 configures distributedprocessing cloud 620 to collect and process historical stream data, and to perform data queries in response to commands issued bycustomer devices 650. In doing so,server machine 254 may cause distributed processing cloud to accumulate stream data over long periods of time fromdata ingestion cloud 610, and to store that accumulated stream data withindata archive 904.Server machine 254 may also configuremaster node 900 andslave nodes 902 to perform distributed processing of the data stored in data archive 904. - At
step 1310,server machine 254 generates data visualizations forcustomer devices 650 based on real-time stream data processed by real-time processing cloud 630 and historical data processed by distributedprocessing cloud 620. In one embodiment,server machine 254 implements a web service that responds to requests fromcustomer devices 650 to generate visualizations. - Once
server machine 254 has completed the above configuration steps,server machine 254 may further configure distributedprocessing cloud 620 and real-time processing cloud 630 to interact with one another when certain conditions are met, as described in greater detail below in conjunction withFigures 14-15 . -
Figure 14 is a flow diagram of method steps for triggering distributed processing of stream-based data, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-12, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. - As shown, a method 1400 begins at
step 1402, whereserver machine 254 causes real-time processing cloud 630 to generate an alert when a condition is met, based on the processing of stream data, for transmission to distributedprocessing cloud 620. For example, the data stream could reflect a time series of temperature values recorded by a node, and the condition could be the temperature values falling beneath a certain temperature threshold. - At
step 1404,server machine 254 causes distributedprocessing cloud 620 to receive the alert from real-time processing cloud 630, and, in response, to analyze historical data associated with the data stream to identify a trend. Returning to the example above,server machine 254 could cause distributedprocessing cloud 620 to analyze historical temperature values gathered by the node and to identify trends in that historical data. In this example, the trend could indicate a seasonal variation in temperature, or the onset of inclement weather. - At
step 1406, server machine 254 notifies a customer who subscribes to the data stream of the trend that has been identified. In doing so, server machine 254 may indicate to the customer predicted values of the data stream determined based on the trend. For example, if the historical analysis indicated that the temperature change resulted from seasonal temperature variations, then server machine 254 could predict future temperature changes based on those observed during previous years.
- With this approach, processing that occurs within real-time processing cloud 630 may trigger a different type of processing on distributed processing cloud 620. Distributed processing cloud 620 may also trigger processing on real-time processing cloud 630, as described in greater detail below in conjunction with Figure 15. -
Figure 15 is a flow diagram of method steps for triggering real-time processing of stream-based data, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems ofFigures 1-12 , persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. - As shown, a
method 1500 begins atstep 1502, whereserver machine 254 causes distributedprocessing cloud 620 to perform a historical analysis on a first data stream to identify a trend for use in configuring real-time processing cloud 630. The trend could be a periodic repetition of specific data values, or a predictable change in data values such as a gradual increase or decline in those data values. When a trend is identified, distributedprocessing cloud 620 notifies real-time processing cloud 630 of that trend. - At
step 1504,server machine 254 causes real-time processing cloud 630 to monitor the first data stream, in real time, to determine the degree to which that data stream complies with the identified trend. For example, if distributedprocessing cloud 620 determines that data values associated with the first data stream are steadily increasing over time, then real-time processing cloud 630 could determine whether those data values continue to increase as new data values become available. - At
step 1506,server machine 254 notifies a customer who subscribes to the first data stream of the degree to which the first data stream complies with the trend. This approach may be applied to detect a variety of different types of trends, including those associated with fraud and other forms of non-technical loss. Distributedprocessing cloud 620 may periodically analyze some or all of the stream data stored in data archive 904, and, in response to that analysis, configure real-time processing cloud 630 to specifically monitor certain data streams for which trends have been detected. - Persons skilled in the art will recognize that the
methods 1400 and 1500 described above may be performed in conjunction with one another. For example, real-time processing cloud 630 could trigger processing on distributed processing cloud 620, and distributed processing cloud 620 could, in parallel, trigger processing on real-time processing cloud 630. - In sum, nodes within a wireless mesh network are configured to monitor time series data associated with a utility network (or any other device network), including voltage fluctuations, current levels, temperature data, humidity measurements, and other observable physical quantities. A server coupled to the wireless mesh network configures a data ingestion cloud to receive and process the time series data to generate data streams. The server also configures a distributed processing cloud to perform historical analysis on data streams, and a real-time processing cloud to perform real-time analysis on data streams. The distributed processing cloud and the real-time processing cloud may interoperate with one another in response to processing the data streams. The techniques described herein allow the delivery of "data-as-a-service" (DaaS) that represents an interface between the traditional software-as-a-service (SaaS) and platform-as-a-service (PaaS) approaches.
- One advantage of the unique architecture described above is that the real-time processing cloud and the distributed processing cloud can interoperate to identify a greater range of events occurring within the utility network compared to traditional approaches. In addition, those different processing clouds provide customers with greater visibility into the types of events occurring within the utility network.
- The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed.
- Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable processors.
- Embodiments of the disclosure may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in "the cloud," without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources.
- Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g., an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In the context of the present disclosure, a user may access applications (e.g., video processing and/or speech analysis applications) or related data available in the cloud.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
- While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (16)
- A non-transitory computer-readable medium storing program instructions that, when executed by a processor (310), cause the processor to identify events associated with a network environment, characterized by performing the steps of: obtaining, at one or more first computing devices implemented as part of a first computing cloud (620,630), a first time series of data values from a first node (230) in the network (100); obtaining, at the one or more first computing devices, a second time series of data values from a second node (230) in the network (100); transmitting, via the one or more first computing devices, the first time series of data values and the second time series of data values to one or more second computing devices implemented as part of a second computing cloud (620,630), wherein the second computing cloud (620,630) is different than the first computing cloud (620,630); in response to receiving the first time series of data values and the second time series of data values; processing, by the one or more second computing devices, the first time series of data values and the second time series of data values to identify a first data trend; and based on the first data trend, identifying, by the one or more second computing devices, a first network event associated with a first region of the network environment, wherein the first node (230) and the second node (230) reside within the first region.
- The non-transitory computer-readable medium of claim 1, further comprising the steps of:configuring the first node (230) to:collect first raw sensor data to generate the first time series of data values, orprocess a third time series of data values received from a third node (230) in the network environment to generate the first time series of data values; andconfiguring the second node (230) to:collect second raw sensor data to generate the second time series of data values, orprocess a fourth time series of data values received from a fourth node (230) in the network environment to generate the second time series of data values.
- The non-transitory computer-readable medium of claim 1, further comprising the steps of:configuring the first node (230) to process first textual data acquired from a first application programming interface, named "API" in the following, associated with a social media outlet to generate the first time series of data values; andconfiguring the second node (230) to process second textual data acquired from the first API to generate the second time series of data values.
- The non-transitory computer-readable medium of claim 3, further comprising the steps of:extracting a first portion of text from the first textual data that references the first network event; andextracting a second portion of text from the second textual data that references the first network event.
- The non-transitory computer-readable medium of claim 4, wherein the step of processing the first time series of data values and the second time series of data values comprises determining that both the first portion of text and the second portion of text reference the first network event.
- The non-transitory computer-readable medium of claim 1, further comprising the steps of:configuring the first node (230) to measure a first consumption level associated with a first utility network component to generate the first time series of data values; andconfiguring the second node (230) to measure an aggregated consumption level associated with a plurality of second utility network components coupled to the first utility network component to generate the second time series of data values.
- The non-transitory computer-readable medium of claim 6, wherein the step of processing the first time series of data values and the second time series of data values comprises determining that the first consumption level is substantially different than the aggregate consumption level, and wherein the step of identifying the first network event comprises determining that a non-technical loss has occurred within the first region.
- The non-transitory computer-readable medium of claim 1, further comprising the steps of:configuring the first node (230) to:measure a first voltage level associated with a first utility network component, andcompare the first voltage level to a running average of the first voltage level to generate the first time series of data values; andconfiguring the second node (230) to:measure a second voltage level associated with a second utility network component, andcompare the second voltage level to a running average of the second voltage level to generate the second time series of data values.
- The non-transitory computer-readable medium of claim 8, wherein processing the first time series of data values and the second time series of data values comprises:determining, based on the first time series of data values, that the first voltage level substantially diverges from the running average of the first voltage level; anddetermining, based on the second time series of data values, that the second voltage level substantially diverges from the running average of the second voltage level.
- The non-transitory computer-readable medium of claim 9, wherein the step of identifying the first network (100) event comprises determining that a sag or swell has occurred within a portion of a utility network associated with the first region.
- A system for identifying events associated with a network environment, characterized by comprising:a first computing cloud (620,630) comprising a first computing device, the first computing device including:a first memory configured to store a first program code,a first processor (310) configured to execute the first program code to:obtain a first time series of data values from a first upstream node (230) in the network (100);obtain a second time series of data values from a second upstream node (230) in the network (100);transmit, the first time series of data values and the second time series of data values to a second computing device implemented as part of a second computing cloud (620,630), wherein the second computing cloud (620,630) is different than the first computing cloud (620,630); andthe second computing cloud (620,630); comprising the second computing device,
the second computing device including:a second memory configured to store second program code,a second processor (310) configured to execute the second program code to:in response to receiving the first time series of data values and the second time series of data valuesprocess the first time series of data values and the second time series of data values to identify a first data trend; andbased on the first data trend, identify a first network event associated with a first region of the network environment, wherein the node (230), the first upstream node (230), and the second upstream node (230) reside within the first region. - The system of claim 11, further comprising:the first upstream node (230) comprising a processor (310), configured to execute program code to:collect first raw sensor data derived from an underlying network (100) to generate the first time series of data values, orprocess a third time series of data values received from a third node (230) in an overarching network (100) to generate the first time series of data values; andthe second upstream node (230) comprising a processor (310), configured to execute program code to:collect second raw sensor data derived from an underlying network (100) to generate the second time series of data values, orprocess a fourth time series of data values received from a fourth node (230) in an overarching network (100) to generate the second time series of data values.
- The system of claim 11,
  wherein the first upstream node (230) is further configured to:
    measure a first demand level associated with an upstream component within a utility distribution infrastructure to generate the first time series of data values, and
  wherein the second upstream node (230) is configured to:
    measure an aggregated demand level across a plurality of downstream components in the utility distribution infrastructure coupled downstream of the upstream component to generate the second time series of data values.
- The system of claim 13, wherein the second computing device is configured to process the first time series of data values and the second time series of data values by determining that the first demand level is substantially different than the aggregated demand level, and wherein the step of identifying the first network event comprises determining that a non-technical loss of electricity or water has occurred within the first region.
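The demand mismatch described in the two preceding claims, an upstream demand measurement that substantially exceeds the aggregated demand of the downstream components it feeds, lends itself to a one-function illustration. The tolerance value and the function name are assumptions; this is not the claimed detection logic itself.

```python
# Illustrative only; the 10% tolerance is an assumption.
def non_technical_loss_suspected(upstream_demand, downstream_demands, tolerance=0.10):
    """Return True when the demand measured at the upstream component
    substantially exceeds the aggregated demand reported by the downstream
    components coupled below it, the mismatch the claim associates with a
    non-technical loss of electricity or water in the first region."""
    aggregated = sum(downstream_demands)
    if aggregated == 0:
        return upstream_demand > 0
    return (upstream_demand - aggregated) / aggregated > tolerance
```

For example, `non_technical_loss_suspected(120.0, [25.0, 30.0, 28.0, 22.0])` returns `True`, since the upstream component delivers roughly 14% more than its downstream components report consuming.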
- The non-transitory computer-readable medium of claim 1, further comprising the steps of:
  configuring the first upstream node (230) to:
    measure a first voltage level associated with a first utility network component, and
    compare the first voltage level to a running average of the first voltage level to generate the first time series of data values; and
  configuring the second upstream node (230) to:
    measure a second voltage level associated with a second utility network component, and
    compare the second voltage level to a running average of the second voltage level to generate the second time series of data values,
  wherein the second computing device processes the first time series of data values and the second time series of data values by:
    determining, based on the first time series of data values, that the first voltage level substantially diverges from the running average of the first voltage level; and
    determining, based on the second time series of data values, that the second voltage level substantially diverges from the running average of the second voltage level.
- A computer-implemented method for identifying events associated with a network environment, the method being characterized by comprising:
  obtaining, at one or more first computing devices implemented as part of a first computing cloud (620,630), a first time series of data values from a first node (230) in the network (100);
  obtaining, at the one or more first computing devices, a second time series of data values from a second node (230) in the network (100);
  transmitting, via the one or more first computing devices, the first time series of data values and the second time series of data values to one or more second computing devices implemented as part of a second computing cloud (620,630), wherein the second computing cloud (620,630) is different than the first computing cloud (620,630);
  in response to receiving the first time series of data values and the second time series of data values, processing, by the one or more second computing devices, the first time series of data values and the second time series of data values to identify a first data trend and to determine that a condition is met for issuing an alert for each of the first time series of data values and the second time series of data values; and
  based on the first data trend and on a correlation of the alerts for the first time series of data values and the second time series of data values, identifying, by the one or more second computing devices, a first network event associated with a first region of the network environment, wherein the first node (230) and the second node (230) reside within the first region.
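The alert-correlation step of the method claim above, issuing an alert per time series and identifying an event only when the alerts coincide, can be sketched as below. The per-sample threshold test and the minimum-overlap count are assumptions used for illustration, not the claimed correlation criterion.

```python
# Sketch of per-series alerting followed by cross-series correlation;
# threshold and min_overlap are illustrative assumptions.
def per_series_alerts(series, threshold):
    """Sample indices at which a single time series meets its alert condition."""
    return {i for i, value in enumerate(series) if abs(value) > threshold}

def correlated_regional_event(series_a, series_b, threshold=0.05, min_overlap=3):
    """Identify a first network event when the alerts raised independently for
    the two series from nodes in the same region coincide at enough samples."""
    overlap = per_series_alerts(series_a, threshold) & per_series_alerts(series_b, threshold)
    return len(overlap) >= min_overlap
```

If both nodes report divergence values that cross the threshold during the same interval, the overlap grows and an event is declared for the region containing both nodes; isolated single-node alerts are ignored.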
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18205149.0A EP3467985A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461950425P | 2014-03-10 | 2014-03-10 | |
US201462045423P | 2014-09-03 | 2014-09-03 | |
US201462094907P | 2014-12-19 | 2014-12-19 | |
PCT/US2015/019733 WO2015138468A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18205149.0A Division EP3467985A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
EP18205149.0A Division-Into EP3467985A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3117394A1 EP3117394A1 (en) | 2017-01-18 |
EP3117394A4 EP3117394A4 (en) | 2017-08-09 |
EP3117394B1 true EP3117394B1 (en) | 2019-01-09 |
Family
ID=54017112
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18205149.0A Withdrawn EP3467985A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
EP15761724.2A Active EP3117225B1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
EP15761205.2A Active EP3117394B1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
EP23161189.8A Pending EP4215925A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18205149.0A Withdrawn EP3467985A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
EP15761724.2A Active EP3117225B1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP23161189.8A Pending EP4215925A1 (en) | 2014-03-10 | 2015-03-10 | Distributed smart grid processing |
Country Status (6)
Country | Link |
---|---|
US (5) | US9791485B2 (en) |
EP (4) | EP3467985A1 (en) |
CN (3) | CN106464714B (en) |
AU (4) | AU2015229578A1 (en) |
ES (1) | ES2947227T3 (en) |
WO (2) | WO2015138468A1 (en) |
Families Citing this family (80)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8078332B2 (en) | 2007-07-26 | 2011-12-13 | Areva T & D, Inc. | Methods for managing high or low voltage conditions from selected areas of a power system of a utility company |
US9558250B2 (en) | 2010-07-02 | 2017-01-31 | Alstom Technology Ltd. | System tools for evaluating operational and financial performance from dispatchers using after the fact analysis |
US9727828B2 (en) | 2010-07-02 | 2017-08-08 | Alstom Technology Ltd. | Method for evaluating operational and financial performance for dispatchers using after the fact analysis |
US8538593B2 (en) | 2010-07-02 | 2013-09-17 | Alstom Grid Inc. | Method for integrating individual load forecasts into a composite load forecast to present a comprehensive synchronized and harmonized load forecast |
US8972070B2 (en) | 2010-07-02 | 2015-03-03 | Alstom Grid Inc. | Multi-interval dispatch system tools for enabling dispatchers in power grid control centers to manage changes |
US10989977B2 (en) | 2011-03-16 | 2021-04-27 | View, Inc. | Onboard controller for multistate windows |
US11153333B1 (en) * | 2018-03-07 | 2021-10-19 | Amdocs Development Limited | System, method, and computer program for mitigating an attack on a network by effecting false alarms |
US11079417B2 (en) | 2014-02-25 | 2021-08-03 | Itron, Inc. | Detection of electric power diversion |
US10571493B2 (en) | 2014-02-25 | 2020-02-25 | Itron, Inc. | Smart grid topology estimator |
JP6134437B2 (en) * | 2014-03-18 | 2017-05-24 | 株式会社日立製作所 | Data transfer monitoring system, data transfer monitoring method, and base system |
US10859887B2 (en) | 2015-09-18 | 2020-12-08 | View, Inc. | Power distribution networks for electrochromic devices |
US10365532B2 (en) * | 2015-09-18 | 2019-07-30 | View, Inc. | Power distribution networks for electrochromic devices |
CN104092481B (en) * | 2014-07-17 | 2016-04-20 | 江苏林洋能源股份有限公司 | A kind of by voltage characteristic differentiation Tai Qu and phase method for distinguishing |
US9835662B2 (en) * | 2014-12-02 | 2017-12-05 | Itron, Inc. | Electrical network topology determination |
US11172273B2 (en) | 2015-08-10 | 2021-11-09 | Delta Energy & Communications, Inc. | Transformer monitor, communications and data collection device |
WO2017027682A1 (en) | 2015-08-11 | 2017-02-16 | Delta Energy & Communications, Inc. | Enhanced reality system for visualizing, evaluating, diagnosing, optimizing and servicing smart grids and incorporated components |
WO2017041093A1 (en) | 2015-09-03 | 2017-03-09 | Delta Energy & Communications, Inc. | System and method for determination and remediation of energy diversion in a smart grid network |
EP3350651B1 (en) | 2015-09-18 | 2023-07-26 | View, Inc. | Power distribution networks for electrochromic devices |
US11384596B2 (en) | 2015-09-18 | 2022-07-12 | View, Inc. | Trunk line window controllers |
US11196621B2 (en) * | 2015-10-02 | 2021-12-07 | Delta Energy & Communications, Inc. | Supplemental and alternative digital data delivery and receipt mesh network realized through the placement of enhanced transformer mounted monitoring devices |
US10419540B2 (en) * | 2015-10-05 | 2019-09-17 | Microsoft Technology Licensing, Llc | Architecture for internet of things |
WO2017070648A1 (en) | 2015-10-22 | 2017-04-27 | Delta Energy & Communications, Inc. | Augmentation, expansion and self-healing of a geographically distributed mesh network using unmanned aerial vehicle technology |
WO2017070646A1 (en) | 2015-10-22 | 2017-04-27 | Delta Energy & Communications, Inc. | Data transfer facilitation across a distributed mesh network using light and optical based technology |
US10108480B2 (en) * | 2015-11-06 | 2018-10-23 | HomeAway.com, Inc. | Data stream processor and method to counteract anomalies in data streams transiting a distributed computing system |
US10419401B2 (en) | 2016-01-08 | 2019-09-17 | Capital One Services, Llc | Methods and systems for securing data in the public cloud |
CA3054546C (en) | 2016-02-24 | 2022-10-11 | Delta Energy & Communications, Inc. | Distributed 802.11s mesh network using transformer module hardware for the capture and transmission of data |
FR3048536A1 (en) * | 2016-03-01 | 2017-09-08 | Atos Worldgrid | USE OF AN INTELLIGENT KNOB IN AN INTELLIGENT AND UNIVERSAL SYSTEM OF SUPERVISION OF INDUSTRIAL PROCESSES |
US10935188B2 (en) | 2016-03-04 | 2021-03-02 | Aclara Technologies Llc | Systems and methods for reporting pipeline pressures |
US10291494B2 (en) | 2016-04-20 | 2019-05-14 | Cisco Technology, Inc. | Distributing data analytics in a hierarchical network based on computational complexity |
CN109416379B (en) * | 2016-04-22 | 2021-01-29 | 戴普赛斯股份公司 | Method for determining mutual inductance voltage sensitivity coefficient among multiple measurement nodes of power network |
US10430251B2 (en) * | 2016-05-16 | 2019-10-01 | Dell Products L.P. | Systems and methods for load balancing based on thermal parameters |
US10320689B2 (en) * | 2016-05-24 | 2019-06-11 | International Business Machines Corporation | Managing data traffic according to data stream analysis |
US10387198B2 (en) | 2016-08-11 | 2019-08-20 | Rescale, Inc. | Integrated multi-provider compute platform |
US10652633B2 (en) | 2016-08-15 | 2020-05-12 | Delta Energy & Communications, Inc. | Integrated solutions of Internet of Things and smart grid network pertaining to communication, data and asset serialization, and data modeling algorithms |
CN109716322B (en) | 2016-09-15 | 2023-03-21 | 甲骨文国际公司 | Complex event handling for micro-batch streaming |
US11977549B2 (en) | 2016-09-15 | 2024-05-07 | Oracle International Corporation | Clustering event processing engines |
US10627882B2 (en) | 2017-02-15 | 2020-04-21 | Dell Products, L.P. | Safeguard and recovery of internet of things (IoT) devices from power anomalies |
WO2018152249A1 (en) | 2017-02-16 | 2018-08-23 | View, Inc. | Solar power dynamic glass for heating and cooling buildings |
WO2018169430A1 (en) | 2017-03-17 | 2018-09-20 | Oracle International Corporation | Integrating logic in micro batch based event processing systems |
WO2018169429A1 (en) * | 2017-03-17 | 2018-09-20 | Oracle International Corporation | Framework for the deployment of event-based applications |
CN107194976B (en) * | 2017-03-31 | 2021-11-12 | 上海浩远智能科技有限公司 | Temperature cloud picture processing method and device |
US11161307B1 (en) | 2017-07-07 | 2021-11-02 | Kemeera Inc. | Data aggregation and analytics for digital manufacturing |
KR101936942B1 (en) * | 2017-08-28 | 2019-04-09 | 에스케이텔레콤 주식회사 | Distributed computing acceleration platform and distributed computing acceleration platform control method |
US10418811B2 (en) * | 2017-09-12 | 2019-09-17 | Sas Institute Inc. | Electric power grid supply and load prediction using cleansed time series data |
US10331490B2 (en) * | 2017-11-16 | 2019-06-25 | Sas Institute Inc. | Scalable cloud-based time series analysis |
US10503498B2 (en) * | 2017-11-16 | 2019-12-10 | Sas Institute Inc. | Scalable cloud-based time series analysis |
US12095794B1 (en) * | 2017-11-27 | 2024-09-17 | Lacework, Inc. | Universal cloud data ingestion for stream processing |
EP3493349A1 (en) * | 2017-12-01 | 2019-06-05 | Telefonica Innovacion Alpha S.L | Electrical energy optimal routing method between a source node and a destination node of a peer-to-peer energy network and system |
CN108400586A (en) * | 2018-02-12 | 2018-08-14 | 国网山东省电力公司莱芜供电公司 | A kind of distributed fault self-healing method suitable for active power distribution network |
EP4328750A3 (en) * | 2018-03-01 | 2024-03-20 | V2Com S.A. | System and method for secure distributed processing across networks of heterogeneous processing nodes |
CN110297625B (en) * | 2018-03-22 | 2023-08-08 | 阿里巴巴集团控股有限公司 | Application processing method and device |
US10594441B2 (en) * | 2018-04-23 | 2020-03-17 | Landis+Gyr Innovations, Inc. | Gap data collection for low energy devices |
US10237338B1 (en) * | 2018-04-27 | 2019-03-19 | Landis+Gyr Innovations, Inc. | Dynamically distributing processing among nodes in a wireless mesh network |
US10560313B2 (en) | 2018-06-26 | 2020-02-11 | Sas Institute Inc. | Pipeline system for time-series data forecasting |
US10685283B2 (en) | 2018-06-26 | 2020-06-16 | Sas Institute Inc. | Demand classification based pipeline system for time-series data forecasting |
CN113039521B (en) * | 2018-09-10 | 2024-07-02 | 阿韦瓦软件有限责任公司 | State edge module server system and method |
EP3847548A4 (en) * | 2018-09-10 | 2022-06-01 | AVEVA Software, LLC | Edge hmi module server system and method |
US20210376853A1 (en) * | 2018-11-02 | 2021-12-02 | Indian Institute Of Technology Delhi | Multivariate data compression system and method thereof |
US11212172B2 (en) * | 2018-12-31 | 2021-12-28 | Itron, Inc. | Techniques for dynamically modifying operational behavior of network devices in a wireless network |
US10785127B1 (en) | 2019-04-05 | 2020-09-22 | Nokia Solutions And Networks Oy | Supporting services in distributed networks |
CN111800443B (en) * | 2019-04-08 | 2022-04-29 | 阿里巴巴集团控股有限公司 | Data processing system and method, device and electronic equipment |
CN111797062B (en) * | 2019-04-09 | 2023-10-27 | 华为云计算技术有限公司 | Data processing method, device and distributed database system |
US11411598B2 (en) | 2019-05-29 | 2022-08-09 | Itron Global Sarl | Electrical phase computation using RF media |
US20210105625A1 (en) * | 2019-10-07 | 2021-04-08 | Instant! Communications LLC | Dynamic radio access mesh network |
CN111082418B (en) * | 2019-12-02 | 2021-06-04 | 国网宁夏电力有限公司经济技术研究院 | System and method for analyzing topological relation of power distribution network equipment |
WO2021113680A1 (en) * | 2019-12-05 | 2021-06-10 | Aclara Technologies Llc | Wireless synchronized measurements in power distribution networks |
TW202206925A (en) | 2020-03-26 | 2022-02-16 | 美商視野公司 | Access and messaging in a multi client network |
US11631493B2 (en) | 2020-05-27 | 2023-04-18 | View Operating Corporation | Systems and methods for managing building wellness |
CN111784026B (en) * | 2020-05-28 | 2022-08-23 | 国网信通亿力科技有限责任公司 | Cloud-side cooperative sensing-based all-dimensional physical examination system for electrical equipment of transformer substation |
CN112564292A (en) * | 2020-06-14 | 2021-03-26 | 石霜霜 | Data management method and system based on edge computing and cloud computing |
US11363094B2 (en) * | 2020-07-20 | 2022-06-14 | International Business Machines Corporation | Efficient data processing in a mesh network of computing devices |
CN111970342B (en) * | 2020-08-03 | 2024-01-30 | 江苏方天电力技术有限公司 | Edge computing system of heterogeneous network |
CN112492669A (en) * | 2020-11-06 | 2021-03-12 | 国网江苏省电力有限公司电力科学研究院 | Wireless communication method and system for node equipment of power transmission and transformation equipment internet of things |
US20220190641A1 (en) * | 2020-12-15 | 2022-06-16 | Landis+Gyr Innovations, Inc. | Adaptive metering in a smart grid |
US20220276894A1 (en) * | 2021-02-26 | 2022-09-01 | TurbineOne, Inc. | Resource-sharing mesh-networked mobile nodes |
CN114968032B (en) * | 2021-04-27 | 2024-02-02 | 广州地铁集团有限公司 | Policy arrangement processing method, device, equipment, system and storage medium |
CN113315172B (en) * | 2021-05-21 | 2022-09-20 | 华中科技大学 | Distributed source load data scheduling system of electric heating comprehensive energy |
US20230261776A1 (en) * | 2022-01-14 | 2023-08-17 | TurbineOne, Inc. | Lightweight Node Synchronization Protocol for Ad-Hoc Peer-To-Peer Networking of On-Body Combat Systems |
GB2618315B (en) * | 2022-04-26 | 2024-07-24 | Kraken Tech Limited | Systems for and methods of operational metering for a distributed energy system |
US20230388895A1 (en) * | 2022-05-24 | 2023-11-30 | Gigaband IP, LLC | Methods and systems for providing wireless broadband using a local mesh network |
Family Cites Families (118)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US872711A (en) * | 1907-01-28 | 1907-12-03 | Edwin M Case | Attachment for ink-bottles. |
US7284769B2 (en) | 1995-06-07 | 2007-10-23 | Automotive Technologies International, Inc. | Method and apparatus for sensing a vehicle crash |
US20020016639A1 (en) | 1996-10-01 | 2002-02-07 | Intelihome, Inc., Texas Corporation | Method and apparatus for improved building automation |
US7487112B2 (en) | 2000-06-29 | 2009-02-03 | Barnes Jr Melvin L | System, method, and computer program product for providing location based services and mobile e-commerce |
JP3730486B2 (en) | 2000-07-14 | 2006-01-05 | 株式会社東芝 | Weather radar |
US20030126276A1 (en) | 2002-01-02 | 2003-07-03 | Kime Gregory C. | Automated content integrity validation for streaming data |
US6930596B2 (en) | 2002-07-19 | 2005-08-16 | Ut-Battelle | System for detection of hazardous events |
US8161152B2 (en) * | 2003-03-18 | 2012-04-17 | Renesys Corporation | Methods and systems for monitoring network routing |
US7630336B2 (en) * | 2004-10-27 | 2009-12-08 | Honeywell International Inc. | Event-based formalism for data management in a wireless sensor network |
US7180300B2 (en) * | 2004-12-10 | 2007-02-20 | General Electric Company | System and method of locating ground fault in electrical power distribution system |
US10645347B2 (en) | 2013-08-09 | 2020-05-05 | Icn Acquisition, Llc | System, method and apparatus for remote monitoring |
US7609158B2 (en) * | 2006-10-26 | 2009-10-27 | Cooper Technologies Company | Electrical power system control communications network |
US7840377B2 (en) * | 2006-12-12 | 2010-11-23 | International Business Machines Corporation | Detecting trends in real time analytics |
US7853417B2 (en) | 2007-01-30 | 2010-12-14 | Silver Spring Networks, Inc. | Methods and system for utility network outage detection |
US7853545B2 (en) | 2007-02-26 | 2010-12-14 | International Business Machines Corporation | Preserving privacy of one-dimensional data streams using dynamic correlations |
US7472590B2 (en) * | 2007-04-30 | 2009-01-06 | Hunter Solheim | Autonomous continuous atmospheric present weather, nowcasting, and forecasting system |
US7657648B2 (en) * | 2007-06-21 | 2010-02-02 | Microsoft Corporation | Hybrid tree/mesh overlay for data delivery |
US7969155B2 (en) * | 2007-07-03 | 2011-06-28 | Thomas & Betts International, Inc. | Directional fault current indicator |
US8595642B1 (en) | 2007-10-04 | 2013-11-26 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
US8005956B2 (en) * | 2008-01-22 | 2011-08-23 | Raytheon Company | System for allocating resources in a distributed computing system |
US20110004446A1 (en) * | 2008-12-15 | 2011-01-06 | Accenture Global Services Gmbh | Intelligent network |
US8059541B2 (en) * | 2008-05-22 | 2011-11-15 | Microsoft Corporation | End-host based network management system |
US20130262035A1 (en) * | 2012-03-28 | 2013-10-03 | Michael Charles Mills | Updating rollup streams in response to time series of measurement data |
US8271974B2 (en) * | 2008-10-08 | 2012-09-18 | Kaavo Inc. | Cloud computing lifecycle management for N-tier applications |
CN106131178A (en) | 2008-12-05 | 2016-11-16 | 社会传播公司 | Real-time kernel |
US8274895B2 (en) * | 2009-01-26 | 2012-09-25 | Telefonaktiebolaget L M Ericsson (Publ) | Dynamic management of network flows |
US20100198655A1 (en) | 2009-02-04 | 2010-08-05 | Google Inc. | Advertising triggers based on internet trends |
CN102449575B (en) * | 2009-03-25 | 2016-06-08 | 惠普开发有限公司 | Power distributing unit-device is correlated with |
US9218000B2 (en) | 2009-04-01 | 2015-12-22 | Honeywell International Inc. | System and method for cloud computing |
US9059917B2 (en) * | 2009-05-04 | 2015-06-16 | France Telecom | Technique for processing flows in a communications network |
KR101698224B1 (en) | 2009-05-08 | 2017-01-19 | 액센츄어 글로벌 서비시즈 리미티드 | Building energy consumption analysis system |
US9606520B2 (en) * | 2009-06-22 | 2017-03-28 | Johnson Controls Technology Company | Automated fault detection and diagnostics in a building management system |
US9977045B2 (en) | 2009-07-29 | 2018-05-22 | Michigan Aerospace Corporation | Atmospheric measurement system |
GB0913691D0 (en) | 2009-08-05 | 2009-09-16 | Ceravision Ltd | Light source |
US10110440B2 (en) | 2009-09-30 | 2018-10-23 | Red Hat, Inc. | Detecting network conditions based on derivatives of event trending |
US8417938B1 (en) * | 2009-10-16 | 2013-04-09 | Verizon Patent And Licensing Inc. | Environment preserving cloud migration and management |
US8743198B2 (en) * | 2009-12-30 | 2014-06-03 | Infosys Limited | Method and system for real time detection of conference room occupancy |
US9129086B2 (en) | 2010-03-04 | 2015-09-08 | International Business Machines Corporation | Providing security services within a cloud computing environment |
US8630283B1 (en) * | 2010-03-05 | 2014-01-14 | Sprint Communications Company L.P. | System and method for applications based on voice over internet protocol (VoIP) Communications |
WO2011123104A1 (en) | 2010-03-31 | 2011-10-06 | Hewlett-Packard Development Company, L.P. | Cloud anomaly detection using normalization, binning and entropy determination |
US20110298301A1 (en) | 2010-04-20 | 2011-12-08 | Equal Networks, Inc. | Apparatus, system, and method having a wi-fi compatible alternating current (ac) power circuit module |
US8504689B2 (en) | 2010-05-28 | 2013-08-06 | Red Hat, Inc. | Methods and systems for cloud deployment analysis featuring relative cloud resource importance |
US8555163B2 (en) * | 2010-06-09 | 2013-10-08 | Microsoft Corporation | Smooth streaming client component |
CN103380423B (en) | 2010-07-09 | 2016-01-27 | 道富公司 | For the system and method for private cloud computing |
US8423637B2 (en) | 2010-08-06 | 2013-04-16 | Silver Spring Networks, Inc. | System, method and program for detecting anomalous events in a utility network |
WO2012031165A2 (en) * | 2010-09-02 | 2012-03-08 | Zaretsky, Howard | System and method of cost oriented software profiling |
WO2012034273A1 (en) | 2010-09-15 | 2012-03-22 | Empire Technology Development Llc | Task assignment in cloud computing environment |
US8478800B1 (en) | 2010-09-27 | 2013-07-02 | Amazon Technologies, Inc. | Log streaming facilities for computing applications |
US9329908B2 (en) * | 2010-09-29 | 2016-05-03 | International Business Machines Corporation | Proactive identification of hotspots in a cloud computing environment |
US8612615B2 (en) | 2010-11-23 | 2013-12-17 | Red Hat, Inc. | Systems and methods for identifying usage histories for producing optimized cloud utilization |
US8713147B2 (en) | 2010-11-24 | 2014-04-29 | Red Hat, Inc. | Matching a usage history to a new cloud |
JP2012113670A (en) | 2010-11-29 | 2012-06-14 | Renesas Electronics Corp | Smart meter and meter reading system |
KR20120066180A (en) | 2010-12-14 | 2012-06-22 | 한국전자통신연구원 | System for providing semantic home network management, cloud inference apparatus for semantic home network management, semantic home network, semantic home network gateway |
US8863256B1 (en) * | 2011-01-14 | 2014-10-14 | Cisco Technology, Inc. | System and method for enabling secure transactions using flexible identity management in a vehicular environment |
US9275093B2 (en) * | 2011-01-28 | 2016-03-01 | Cisco Technology, Inc. | Indexing sensor data |
US8774975B2 (en) | 2011-02-08 | 2014-07-08 | Avista Corporation | Outage management algorithm |
US9419928B2 (en) | 2011-03-11 | 2016-08-16 | James Robert Miner | Systems and methods for message collection |
US20120239468A1 (en) | 2011-03-18 | 2012-09-20 | Ramana Yerneni | High-performance supply forecasting using override rules in display advertising systems |
US8856321B2 (en) | 2011-03-31 | 2014-10-07 | International Business Machines Corporation | System to improve operation of a data center with heterogeneous computing clouds |
US20120290651A1 (en) * | 2011-05-13 | 2012-11-15 | Onzo Limited | Nodal data processing system and method |
US9223632B2 (en) | 2011-05-20 | 2015-12-29 | Microsoft Technology Licensing, Llc | Cross-cloud management and troubleshooting |
US9450454B2 (en) | 2011-05-31 | 2016-09-20 | Cisco Technology, Inc. | Distributed intelligence architecture with dynamic reverse/forward clouding |
US20120310435A1 (en) * | 2011-05-31 | 2012-12-06 | Cisco Technology, Inc. | Control command disaggregation and distribution within a utility grid |
US8880925B2 (en) | 2011-06-30 | 2014-11-04 | Intel Corporation | Techniques for utilizing energy usage information |
US20130013284A1 (en) | 2011-07-05 | 2013-01-10 | Haiqin Wang | System and methods for modeling and analyzing quality of service characteristics of federated cloud services in an open eco-system |
US9118219B2 (en) | 2011-07-07 | 2015-08-25 | Landis+Gyr Innovations, Inc. | Methods and systems for determining an association between nodes and phases via a smart grid |
JP2013033375A (en) | 2011-08-02 | 2013-02-14 | Sony Corp | Information processing apparatus, information processing method, and program |
US8789157B2 (en) | 2011-09-06 | 2014-07-22 | Ebay Inc. | Hybrid cloud identity mapping infrastructure |
US8612599B2 (en) | 2011-09-07 | 2013-12-17 | Accenture Global Services Limited | Cloud service monitoring system |
US20130091266A1 (en) | 2011-10-05 | 2013-04-11 | Ajit Bhave | System for organizing and fast searching of massive amounts of data |
US8547036B2 (en) * | 2011-11-20 | 2013-10-01 | Available For Licensing | Solid state light system with broadband optical communication capability |
US8826277B2 (en) * | 2011-11-29 | 2014-09-02 | International Business Machines Corporation | Cloud provisioning accelerator |
EP2786270A1 (en) | 2011-11-30 | 2014-10-08 | The University of Surrey | System, process and method for the detection of common content in multiple documents in an electronic system |
DE102011122807B3 (en) | 2011-12-31 | 2013-04-18 | Elwe Technik Gmbh | Self-activating adaptive network and method for registering weak electromagnetic signals, in particular Spherics burst signals |
GB2498708B (en) | 2012-01-17 | 2020-02-12 | Secure Cloudlink Ltd | Security management for cloud services |
US9294552B2 (en) | 2012-01-27 | 2016-03-22 | MicroTechnologies LLC | Cloud computing appliance that accesses a private cloud and a public cloud and an associated method of use |
US9967159B2 (en) | 2012-01-31 | 2018-05-08 | Infosys Limited | Systems and methods for providing decision time brokerage in a hybrid cloud ecosystem |
US8553965B2 (en) | 2012-02-14 | 2013-10-08 | TerraRecon, Inc. | Cloud-based medical image processing system with anonymous data upload and download |
US9547509B2 (en) * | 2012-02-23 | 2017-01-17 | Samsung Electronics Co., Ltd. | System and method for information acquisition of wireless sensor network data as cloud based service |
US9479592B2 (en) | 2012-03-30 | 2016-10-25 | Intel Corporation | Remote management for a computing device |
US9027141B2 (en) | 2012-04-12 | 2015-05-05 | Netflix, Inc. | Method and system for improving security and reliability in a networked application environment |
US9319372B2 (en) | 2012-04-13 | 2016-04-19 | RTReporter BV | Social feed trend visualization |
US8862727B2 (en) | 2012-05-14 | 2014-10-14 | International Business Machines Corporation | Problem determination and diagnosis in shared dynamic clouds |
US20130325924A1 (en) | 2012-06-05 | 2013-12-05 | Mehran Moshfeghi | Method and system for server-assisted remote probing and data collection in a cloud computing environment |
US20140012574A1 (en) | 2012-06-21 | 2014-01-09 | Maluuba Inc. | Interactive timeline for presenting and organizing tasks |
US9214836B2 (en) | 2012-07-05 | 2015-12-15 | Silver Spring Networks, Inc. | Power grid topology discovery via time correlation of passive measurement events |
US9436687B2 (en) * | 2012-07-09 | 2016-09-06 | Facebook, Inc. | Acquiring structured user data using composer interface having input fields corresponding to acquired structured data |
US9253054B2 (en) | 2012-08-09 | 2016-02-02 | Rockwell Automation Technologies, Inc. | Remote industrial monitoring and analytics using a cloud infrastructure |
US9569804B2 (en) | 2012-08-27 | 2017-02-14 | Gridium, Inc. | Systems and methods for energy consumption and energy demand management |
US9288123B1 (en) * | 2012-08-31 | 2016-03-15 | Sprinklr, Inc. | Method and system for temporal correlation of social signals |
EP3441896B1 (en) * | 2012-09-14 | 2021-04-21 | InteraXon Inc. | Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data |
US9689710B2 (en) * | 2012-09-21 | 2017-06-27 | Silver Spring Networks, Inc. | Power outage notification and determination |
US8659302B1 (en) | 2012-09-21 | 2014-02-25 | Nest Labs, Inc. | Monitoring and recoverable protection of thermostat switching circuitry |
US9264478B2 (en) | 2012-10-30 | 2016-02-16 | Microsoft Technology Licensing, Llc | Home cloud with virtualized input and output roaming over network |
WO2014078585A2 (en) | 2012-11-14 | 2014-05-22 | University Of Virginia Patent Foundation | Methods, systems and computer readable media for detecting command injection attacks |
US9734220B2 (en) * | 2012-12-04 | 2017-08-15 | Planet Os Inc. | Spatio-temporal data processing systems and methods |
US9674211B2 (en) | 2013-01-30 | 2017-06-06 | Skyhigh Networks, Inc. | Cloud service usage risk assessment using darknet intelligence |
US9558220B2 (en) * | 2013-03-04 | 2017-01-31 | Fisher-Rosemount Systems, Inc. | Big data in process control systems |
US9519019B2 (en) * | 2013-04-11 | 2016-12-13 | Ge Aviation Systems Llc | Method for detecting or predicting an electrical fault |
US10205640B2 (en) | 2013-04-11 | 2019-02-12 | Oracle International Corporation | Seasonal trending, forecasting, anomaly detection, and endpoint prediction of java heap usage |
US9438648B2 (en) | 2013-05-09 | 2016-09-06 | Rockwell Automation Technologies, Inc. | Industrial data analytics in a cloud platform |
US20140337274A1 (en) | 2013-05-10 | 2014-11-13 | Random Logics Llc | System and method for analyzing big data in a network environment |
US20160125083A1 (en) * | 2013-06-07 | 2016-05-05 | Zhicheng Dou | Information sensors for sensing web dynamics |
US20160239264A1 (en) | 2013-06-10 | 2016-08-18 | Ge Intelligent Platforms, Inc. | Re-streaming time series data for historical data analysis |
US20140366155A1 (en) | 2013-06-11 | 2014-12-11 | Cisco Technology, Inc. | Method and system of providing storage services in multiple public clouds |
US8706798B1 (en) * | 2013-06-28 | 2014-04-22 | Pepperdata, Inc. | Systems, methods, and devices for dynamic resource monitoring and allocation in a cluster system |
US20150019301A1 (en) * | 2013-07-12 | 2015-01-15 | Xerox Corporation | System and method for cloud capability estimation for user application in black-box environments using benchmark-based approximation |
US20150032464A1 (en) | 2013-07-26 | 2015-01-29 | General Electric Company | Integrating theranostics into a continuum of care |
US20160216698A1 (en) | 2013-07-26 | 2016-07-28 | Empire Technology Development Llc | Control of electric power consumption |
US10212207B2 (en) | 2013-08-21 | 2019-02-19 | At&T Intellectual Property I, L.P. | Method and apparatus for accessing devices and services |
US20160239756A1 (en) * | 2013-10-10 | 2016-08-18 | Ge Intelligent Platforms, Inc. | Correlation and annotation of time series data sequences to extracted or existing discrete data |
US10404525B2 (en) | 2013-10-18 | 2019-09-03 | Telefonaktiebolaget Lm Ericsson (Publ) | Classification of detected network anomalies using additional data |
US9836502B2 (en) | 2014-01-30 | 2017-12-05 | Splunk Inc. | Panel templates for visualization of data within an interactive dashboard |
US10037128B2 (en) | 2014-02-04 | 2018-07-31 | Falkonry, Inc. | Operating behavior classification interface |
US9436721B2 (en) * | 2014-02-28 | 2016-09-06 | International Business Machines Corporation | Optimization of mixed database workload scheduling and concurrency control by mining data dependency relationships via lock tracking |
US9923767B2 (en) * | 2014-04-15 | 2018-03-20 | Splunk Inc. | Dynamic configuration of remote capture agents for network data capture |
US10567557B2 (en) * | 2014-10-31 | 2020-02-18 | Splunk Inc. | Automatically adjusting timestamps from remote systems based on time zone differences |
WO2016091278A1 (en) | 2014-12-08 | 2016-06-16 | Nec Europe Ltd. | Method and system for filtering data series |
- 2014
- 2014-12-30 US US14/586,536 patent/US9791485B2/en active Active
- 2015
- 2015-03-10 AU AU2015229578A patent/AU2015229578A1/en not_active Abandoned
- 2015-03-10 US US14/644,003 patent/US10598709B2/en active Active
- 2015-03-10 EP EP18205149.0A patent/EP3467985A1/en not_active Withdrawn
- 2015-03-10 EP EP15761724.2A patent/EP3117225B1/en active Active
- 2015-03-10 AU AU2015229599A patent/AU2015229599A1/en not_active Abandoned
- 2015-03-10 ES ES15761724T patent/ES2947227T3/en active Active
- 2015-03-10 US US14/643,978 patent/US10809288B2/en active Active
- 2015-03-10 CN CN201580024041.7A patent/CN106464714B/en active Active
- 2015-03-10 WO PCT/US2015/019733 patent/WO2015138468A1/en active Application Filing
- 2015-03-10 WO PCT/US2015/019703 patent/WO2015138447A1/en active Application Filing
- 2015-03-10 CN CN202010196015.5A patent/CN111526035B/en active Active
- 2015-03-10 CN CN201580024037.0A patent/CN106461708B/en active Active
- 2015-03-10 US US14/643,993 patent/US10151782B2/en active Active
- 2015-03-10 EP EP15761205.2A patent/EP3117394B1/en active Active
- 2015-03-10 EP EP23161189.8A patent/EP4215925A1/en active Pending
- 2015-03-10 US US14/643,985 patent/US10962578B2/en active Active
- 2020
- 2020-01-24 AU AU2020200520A patent/AU2020200520B2/en active Active
- 2021
- 2021-10-20 AU AU2021254571A patent/AU2021254571B2/en active Active
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3117394B1 (en) | Distributed smart grid processing | |
JP7057796B2 (en) | Limitations of customer-oriented networks in distributed systems | |
US10191529B2 (en) | Real-time data management for a power grid | |
JP2020167723A (en) | Centralized network configuration in distributed system | |
US8984169B2 (en) | Data collecting device, computer readable medium, and data collecting system | |
US20210176192A1 (en) | Hierarchical capacity management in a virtualization environment | |
CN113344737A (en) | Device control method, device, electronic device and computer readable medium | |
CN112436951B (en) | Method and device for predicting flow path | |
CN114896296A (en) | Cloud service resource configuration method and device, electronic equipment and computer readable medium | |
CN103368862B (en) | Load balance dispatching method and load balance dispatching device | |
CN113852570B (en) | Recommended node bandwidth generation method, recommended node bandwidth generation device, recommended node bandwidth generation equipment and computer readable medium | |
US9088530B2 (en) | Maximizing bottleneck link utilization under constraint of minimizing queuing delay for targeted delay-sensitive traffic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20160927 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: FLAMMER III, GEORGE P. Inventor name: SUM, CHARLES P. |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: SILVER SPRING NETWORKS, INC. |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602015023243 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: G06Q0050000000 Ipc: H02J0003000000 |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20170710 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 29/06 20060101ALI20170704BHEP Ipc: H04L 12/24 20060101ALI20170704BHEP Ipc: H02J 3/06 20060101ALI20170704BHEP Ipc: H02J 3/00 20060101AFI20170704BHEP Ipc: H04L 12/26 20060101ALI20170704BHEP Ipc: H02J 13/00 20060101ALI20170704BHEP Ipc: G01R 25/00 20060101ALI20170704BHEP Ipc: H04L 29/08 20060101ALI20170704BHEP Ipc: G01R 21/00 20060101ALI20170704BHEP Ipc: G06Q 50/00 20120101ALI20170704BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20180709 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
GRAR | Information related to intention to grant a patent recorded |
Free format text: ORIGINAL CODE: EPIDOSNIGR71 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
INTC | Intention to grant announced (deleted) | ||
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: FLAMMER III, GEORGE H. Inventor name: SUM, CHARLES P. |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
INTG | Intention to grant announced |
Effective date: 20181203 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: AT Ref legal event code: REF Ref document number: 1088531 Country of ref document: AT Kind code of ref document: T Effective date: 20190115 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602015023243 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20190109 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1088531 Country of ref document: AT Kind code of ref document: T Effective date: 20190109 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190509 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190409 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190509 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190409 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190410 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602015023243 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190310 Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20190331 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 602015023243 Country of ref document: DE Owner name: ITRON NETWORKED SOLUTIONS, INC. (N.D.GES.D. ST, US Free format text: FORMER OWNER: SILVER SPRING NETWORKS, INC., SAN JOSE, CALIF., US |
|
26N | No opposition filed |
Effective date: 20191010 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190310 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190331 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190331 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190331 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190310 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20150310 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190109 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230519 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20231229 Year of fee payment: 10 Ref country code: GB Payment date: 20240108 Year of fee payment: 10 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20240103 Year of fee payment: 10 |