WO2021033110A1 - System and method for programming devices - Google Patents

System and method for programming devices

Info

Publication number
WO2021033110A1
Authority
WO
WIPO (PCT)
Prior art keywords
software
container
microcontroller
devices
pipeline
Prior art date
Application number
PCT/IB2020/057689
Other languages
French (fr)
Inventor
Thomas Yates
David RAUSCHENBACH
Michael Gray
Original Assignee
Nubix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubix, Inc. filed Critical Nubix, Inc.
Priority to EP20854309.0A priority Critical patent/EP4014113A4/en
Publication of WO2021033110A1 publication Critical patent/WO2021033110A1/en
Priority to US17/673,732 priority patent/US11874692B2/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/61 Installation

Definitions

  • This invention relates to a system and method of programming electronic devices, including the programming of microcontrollers, microprocessors and other Internet of Things (IoT) devices.
  • Edge devices such as Internet of Things (IoT) devices may include sensors, gauges, input devices (e.g., switches, rotary encoders, buttons, etc.), actuators, and other types of devices that may collect real-time, real-world data relating to their environment.
  • edge devices may include sensors that measure temperature (e.g., Internet-controlled thermostats), pressure, acceleration, sound, optical signals (e.g., security cameras), humidity, gravity, geographic location, health parameters, system failures, and other types of parameters.
  • the devices may include microcontrollers, microprocessors, field programmable gate arrays (FPGAs), and other types of processors programmed to control the device, to acquire the sensed data. In present systems, these devices output the sensed data for processing, typically to a centralized data center or cloud server.
  • the cloud server may then process the data and/or perform analytics upon the collection of data to discover and interpret meaningful patterns in the data, and to apply those patterns towards effective decision making. Decisions made at the server level regarding the data may be communicated back to the edge device (or to a different device) and implemented as new instructions for the device. This processing of the data at a centralized cloud server may be referred to as cloud computing.
  • The amount of time it may take to send the data from the device to the cloud for processing and then back to the device may be referred to as latency.
  • In some scenarios the latency may be insignificant (e.g., when adjusting the temperature of a household over the Internet using an IoT thermostat), while in other scenarios latency may be catastrophic (e.g., when processing data within anti-collision systems).
  • Latency is not the only problem associated with moving data off edge devices.
  • Edge devices may collect vast amounts of data that may require excessive bandwidth utilization to upload. For example, an oil field may generate petabytes of data per day, which is far too much data for the remote, cellular gateways to handle both technically and financially.
  • the data may be stored locally on recordable media and then physically shipped to a centralized location for analysis (sometimes taking weeks or even months to receive the results).
  • edge devices may suffer from unreliable or intermittent connectivity, making it difficult to upload collected data on a consistent basis.
  • For example, a refrigerated truck transporting temperature-sensitive cargo (e.g., salmon roe from Alaska to Los Angeles, California) may have only intermittent connectivity while in transit.
  • It is desirable and an object hereof to perform processing at the edge (i.e., on edge devices). Accordingly, it is desirable and an object hereof to program edge devices to perform data processing locally at the point of data retrieval in order to avoid the latency associated with cloud computing, excessive bandwidth requirements, and/or unreliable connectivity.
  • a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
  • One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
  • One general aspect includes a method including: (a) embedding a container runtime on a microcontroller.
  • the method also includes (b) embedding a first at least one container on the microcontroller, the first at least one container including a first at least one software pipeline, where the container runtime provides an execution environment for the first at least one container on the microcontroller.
  • the method may also include (c) embedding a second at least one container on the microcontroller, the second at least one container including a second at least one software pipeline, where the second at least one software pipeline is distinct from the first at least one software pipeline, and where the container runtime provides an execution environment for the second at least one container on the microcontroller.
  • Implementations may include one or more of the following features, alone or in various combination(s):
  • the method further including, after the embedding in (b), the first at least one software pipeline is run on the microcontroller.
  • the method further including, after the embedding in (c), the second at least one software pipeline is run on the microcontroller.
  • the method where the device is an Internet-of-things (IoT) device.
  • the method where the microcontroller is associated with a device, and where the one or more sensors are on or co-located with the device.
  • the method where the microcontroller is associated with a device, and where the device receives inputs from one or more devices, including human interface devices, and where the one or more inputs are on and/or co-located with the device.
  • the method where the software pipelines include an application for an already-programmed device.
  • the method where (i) the first at least one software pipeline, and/or (ii) the second at least one software pipeline change or augment at least one functionality of the microcontroller.
  • the method where the microcontroller includes hardware including at least one processor and a memory.
  • the method where the first at least one container includes first one or more library routines and/or functions needed by the first at least one software pipeline.
  • the method where the first one or more library routines and/or functions include all routines and/or functions needed by the first at least one software pipeline.
  • the method further including repeating acts (a) and (b) on multiple microcontrollers.
  • Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • Another general aspect includes a method including: (a) providing at least one container runtime, where the at least one container runtime includes a first particular container runtime for a first type of device.
  • the method also includes (b) creating a first at least one software pipeline.
  • the method also includes (c) on a first device of the first type, embedding the first particular container runtime.
  • the method may also include (d) on the first device, embedding a first container including the first at least one software pipeline, where the first particular container runtime provides an execution environment for the first container on devices of the first type.
  • the method may also include (e) creating a second at least one software pipeline, distinct from the first at least one software pipeline.
  • the method may also include (f) embedding, on the first device, a second container including the second at least one software pipeline, where the first particular container runtime provides an execution environment for the second container on devices of the first type.
  • the method where the at least one container runtime includes a plurality of container runtimes, including a container runtime for each of a plurality of distinct types of devices.
  • the method where the plurality of distinct types of devices include a plurality of internet-of-things (IoT) devices.
  • the method may also include where the running of the second at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
  • the method where the first device is an internet-of-things (IoT) device.
  • the method where the plurality of distinct types of devices include devices with distinct processors and/or with distinct types of processors.
  • the method may also include where the at least one container runtime includes a second particular container runtime for a second type of device, distinct from the first type of device.
  • the method may also include (c2) on a second device of the second type, embedding the second particular container runtime.
  • the method may also include (d2) on the second device, embedding a first container including the first at least one software pipeline, where the second particular container runtime provides an execution environment for the first container on devices of the second type.
  • the method may also include (f2) embedding, on the second device, the second container including the second at least one software pipeline, where the second particular container runtime provides an execution environment for the second container on devices of the second type.
  • the method where the software pipelines include at least one mechanism for maintaining data on the first device when the first device is disconnected from other devices.
  • Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • a method comprising:
  • P12 The method of any of embodiments P10 to P11, wherein the embedding in (A) and/or (B) occurs after the device has been deployed.
  • P13 The method of any of embodiments P10 to P12, wherein the embedding in (A) and/or (B) occurs before the device has been deployed.
  • P20 The method of embodiment(s) P19, wherein the one or more inputs are on and/or co-located with the device.
  • P27 The method of any of the preceding embodiments, wherein the software pipelines comprise an application for an already-programmed device.
  • P28 The method of any of the preceding embodiments P2-P27, wherein the previous programming comprises firmware and/or scripts from at least one prior installation.
  • The method of any of the preceding embodiments, wherein said microcontroller comprises hardware including at least one processor and a memory.
  • a method comprising:
  • (A) providing at least one container runtime, wherein said at least one container runtime includes a first particular container runtime for a first type of device;
  • (B) creating a first at least one software pipeline;
  • P41 The method of embodiment(s) P40, wherein said at least one container runtime comprises a plurality of container runtimes, including a container runtime for each of a plurality of distinct types of devices.
  • P42 The method of embodiment(s) P41, wherein said plurality of distinct types of devices comprise a plurality of Internet-of-things (IoT) devices.
  • P43 The method of embodiments P41 or P42, wherein said plurality of distinct types of devices include devices with distinct processors and/or with distinct types of processors.
  • P44 The method of any of embodiments P41- P43, wherein said at least one container runtime includes a second particular container runtime for a second type of device, distinct from said first type of device, the method further comprising: (C2) on a second device of said second type, embedding the second particular container runtime; and (D2) on said second device, embedding a first container comprising the first at least one software pipeline, wherein said second particular container runtime provides an execution environment for said first container on devices of said second type.
  • P45 The method of embodiment(s) P44, further comprising: (F2) embedding, on said second device, said second container comprising the second at least one software pipeline, wherein said second particular container runtime provides an execution environment for said second container on devices of said second type.
  • P46 The method of any of embodiments P40- P45, wherein said embedding of the first particular container runtime in (C) does not replace previous programming in said first device.
  • P49 The method of one of embodiments P40- P48, wherein the second at least one software pipeline is run on the first device.
  • P50 The method of one of embodiments P40- P49, wherein running of the first at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime, and wherein the running of the second at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
  • P51 The method of any of embodiments P40 to P50, wherein the embedding in (D) and/or (F) is controlled, at least in part, by the first particular container runtime.
  • P52 The method of any of embodiments P40 to P51, wherein the first device is an Internet-of-things (IoT) device.
  • P53 The method of any of embodiments P40-P52, wherein the software pipelines obtain sensor data from one or more sensors.
  • P54 The method of embodiment(s) P53, wherein the one or more sensors and/or the one or more input devices, and/or the one or more actuators are on or co-located with the first device.
  • P55 The method of any of the preceding embodiments P40-P54, wherein the first at least one software pipeline and/or the second at least one software pipeline controls one or more actuators.
  • P56 The method of embodiment(s) P55, wherein the one or more actuators are on and/or co-located with the first device.
  • P57 The method of any of the preceding embodiments P40-P56, wherein the software pipelines run when said microcontroller is connected to at least one other device.
  • P58 The method of any of embodiments P40-P57, wherein the software pipelines run when said first device is disconnected from other devices.
  • P59 The method of any of embodiments P40- P58, wherein the software pipelines include at least one mechanism for maintaining data on said first device when said first device is disconnected from other devices.
  • P61 The method of any one embodiments P40-P60, wherein the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
  • P62 The method of any of embodiments P40-P61, wherein the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
  • C63 A computer-readable medium with one or more computer programs stored therein that, when executed by one or more processors of a device, cause the one or more processors to perform the operations of the method of any one of embodiment(s) / aspect(s) P1-P62.
  • FIG. 1 depicts aspects of a device programming system according to exemplary embodiments hereof;
  • FIG. 2 depicts aspects of a software code structure according to exemplary embodiments hereof;
  • FIGS. 3A-3D are screenshots showing aspects of a code development tool GUI according to exemplary embodiments hereof;
  • FIG. 4 is a flowchart showing aspects of an exemplary workflow according to exemplary embodiments hereof;
  • FIGS. 5A-5M are screenshots showing aspects of a code development GUI according to exemplary embodiments hereof;
  • FIG. 6 is a flowchart showing aspects of an exemplary workflow according to exemplary embodiments hereof;
  • FIGS. 7A-7D are screenshots showing aspects of an application development GUI according to exemplary embodiments hereof;
  • FIGS. 8A-8C depict aspects of devices according to exemplary embodiments hereof;
  • FIGS. 9A-9C depict aspects of code deployment topologies according to exemplary embodiments hereof;
  • FIGS. 10A-10F depict aspects of use cases of a device programming system according to exemplary embodiments hereof.
  • FIG. 11 is a logical block diagram depicting aspects of a computer system.
  • AMQP means Advanced Message Queuing Protocol
  • API means application program (or programming) interface
  • GUI means graphical user interface
  • I2C means Inter-Integrated Circuit
  • IoT means Internet of Things
  • IP means Internet Protocol
  • LAN means local area network
  • Lua is a lightweight, multi-paradigm programming language designed primarily for embedded use in applications
  • ML means machine learning
  • OS means operating system
  • RDD means Resilient Distributed Dataset
  • SPI means Serial Peripheral Interface
  • TSDB means time series database
  • UI means user interface
  • USB means Universal Serial Bus
  • WAN means Wide Area Network
  • Compiling is the general term for taking source code written in one language and transforming it into another. Transpiling refers to taking source code written in one language and transforming it into another language that has a similar level of abstraction.
  • the term “mechanism,” as used herein, refers to any device(s), process(es), service(s), or combination thereof.
  • a mechanism may be implemented in hardware, software, firmware, using a special-purpose device, or any combination thereof.
  • a mechanism may be mechanical or electrical or a combination thereof.
  • a mechanism may be integrated into a single device or it may be distributed over multiple devices. The various components of a mechanism may be co-located or distributed. The mechanism may be formed from other mechanisms.
  • the term “mechanism” may thus be considered shorthand for the term device(s) and/or process(es) and/or service(s).
  • Edge devices such as Internet of Things (IoT) devices may include sensors, gauges and other types of devices that may collect real-time real-world data relating to their environment.
  • edge devices may include sensors that measure temperature, pressure, acceleration, sound, optical inputs, humidity, gravity, geographic location, health parameters, system failures, and other types of parameters.
  • the devices may include microcontrollers, microprocessors, field programmable gate arrays (FPGAs) and other types of processors programmed to control the device, to acquire the sensed data and to output the data for processing, typically to a centralized data center or cloud server.
  • Data collected by edge devices (e.g., IoT devices and the like) may be processed remotely (using so-called "cloud" computing) and/or on the edge devices themselves.
  • Cloud computing requires uploading data to a remote platform to be processed, with the results of such processing then distributed as needed.
  • Cloud computing introduces potentially undesirable latency. Latency may be generally defined as the total amount of time it takes to upload and process data and to distribute the results. Latency may be problematic, especially when real-time results are required or desired. Cloud computing may also require excessive bandwidth (e.g., when uploading large amounts of collected data), and may suffer from unreliable connectivity.
  • Edge computing processes the collected data locally (e.g., at the location of the edge device where the data are generated and/or created and/or retrieved).
  • the edge device itself may be programmed with the applications necessary to both collect and process the data. This allows the edge device to make decisions without the latency associated with cloud computing, without utilizing excessive bandwidth and without suffering from unreliable connectivity.
  • a system 100 may include an orchestration hub 102, container runtimes 200 deployable onto edge devices 300, and code 400 (also referred to as container code 400 or container 400) that the container runtimes 200 may enable the edge devices 300 to run.
  • the container code 400 may program the edge devices 300 to perform various / new functionalities.
  • FIG. 1 depicts the details of two edge devices 300, 300’ for demonstrational purposes. Other edge devices 300 are shown without detail. Those of ordinary skill in the art will understand, upon reading this description, that the system 100 may be configured with a multitude of edge devices 300 simultaneously and in other configurations, as will be described in other sections.
  • An edge device 300 may be any kind of device, including an IoT device.
  • An edge device 300 is a computer system in the sense that it provides a general-purpose programming system (at least one processor and memory), albeit generally with very limited processing and storage and function specific capabilities.
  • the edge devices 300 may be standalone devices or they may be incorporated into other devices, appliances and/or systems.
  • the multiple edge devices 300 need not be homogenous.
  • the orchestration hub 102 comprises a computer system (e.g., as described below) and may include a centralized server and/or cloud platform that may include tools for developing, compiling, testing, building, releasing, deploying, distributing and/or updating the code 400 to the devices 300.
  • the orchestration hub 102 may provide one or more user interfaces (UIs) including at least one graphical user interface (GUI) 500 with which users may interact while using various of the hub’s tools.
  • the orchestration hub 102 may also provide and deploy associated container runtimes 200 to the edge devices 300 that may include any and/or all software components required to provide communication and processing capabilities to the devices 300 to run the code 400.
  • the container runtimes 200 may generally manage the running of the code 400 on that device. For example, the container runtimes 200 may run the code 400, pause and/or terminate the code 400, set activation times for the code 400, update the code 400, send data produced by the code 400 to the orchestration hub 102 or elsewhere, and perform other operations regarding the code 400 on the edge devices 300.
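  • By way of a non-limiting illustration only, the following Lua sketch suggests how a container runtime might expose such lifecycle operations; the names and structure here are hypothetical and are not the actual runtime API of the system 100.
      -- Minimal sketch (hypothetical API, single-shot scheduling) of a runtime that
      -- deploys, runs, pauses, terminates, and forwards data from container code.
      local Runtime = {}
      Runtime.__index = Runtime

      function Runtime.new()
        return setmetatable({ containers = {} }, Runtime)
      end

      function Runtime:deploy(name, pipeline)
        -- Register container code; 'pipeline' is a function producing one record.
        self.containers[name] = { pipeline = pipeline, state = "stopped", outbox = {} }
      end

      function Runtime:run(name)
        local c = self.containers[name]
        c.state = "running"
        table.insert(c.outbox, c.pipeline())   -- a real runtime would schedule this
      end

      function Runtime:pause(name) self.containers[name].state = "paused" end
      function Runtime:terminate(name) self.containers[name] = nil end

      function Runtime:flush(name, publish)
        -- Forward data produced by the container (e.g., to the orchestration hub).
        local c = self.containers[name]
        for _, record in ipairs(c.outbox) do publish(record) end
        c.outbox = {}
      end

      -- Usage
      local rt = Runtime.new()
      rt:deploy("temp-pipeline", function() return { metric = "temp_c", value = 21.5 } end)
      rt:run("temp-pipeline")
      rt:flush("temp-pipeline", function(r) print(r.metric, r.value) end)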
  • the code 400 may correspond to an application providing certain functionality. When deployed on an edge device 300, the code 400 effectively becomes an edge-based application.
  • Different container runtimes 200 may be designed and made available for different edge devices 300, depending, e.g., on the edge device’s operating platform.
  • the system 100 may provide cross-platform Linux container runtimes 200 for running on Linux-based and/or Unix-based systems.
  • the system 100 may provide container runtimes 200 that may include platform-specific firmware for running on edge devices 300 with other specific types of platforms.
  • the term container runtime refers to both the cross-platform container runtime 200 and/or the platform- specific firmware 200 that may function as a container runtime. Further details of the container runtimes 200 are provided elsewhere herein.
  • An orchestration hub 102 may be integrated with other orchestration products to maintain any vendor-preferred orchestration mechanisms.
  • the container runtimes 200 and/or the code 400 may be deployed and distributed to the edge devices 300 (e.g., via RPM, as firmware, etc.) using local connections (e.g., USB, serial cable, etc.) as depicted by arrow B1, through one or more networks 104 (e.g., the Internet, LAN, WAN, cellular, and/or any other types of networks) as depicted by arrow B1', or by other installation techniques.
  • a device 300 may also include an operating system 302 and one or more previous versions of code (e.g., previous applications) that may, e.g., have been loaded onto the device 300 prior to the engagement of the system 100. These previous versions are denoted previous code 304. As should be appreciated, the previous code 304 may have different functionality from the code 400. In some exemplary embodiments hereof, the container runtimes 200 and the code 400 may interact with the device’s operating system (OS) 302, although they preferably do not interfere with the device’s OS 302 or previous code 304.
  • the system 100 may delete, remove or otherwise render inoperable the device’s OS 302 and/or the device’s previous code 304, and replace it with a container runtime 200 and/or the code 400.
  • the device 300 includes an OS 302 and previous code 304, whereas the device 300’ has no OS or previous code.
  • the orchestration hub 102 and the container runtimes 200 may form an edge-native system 100 for developing, deploying, and managing edge-based applications 400.
  • the code 400 may be developed and distributed as one or more software pipelines 402.
  • Each software pipeline 402 may include a chain of processing elements (e.g., processes, threads, co-routines, functions, scripts, algorithms, etc.) that may be arranged so that the output of one element may be the input to the next. These sequential elements may be referred to as stages 404-1, 404-2, ... 404-n (individually and collectively 404). In this way, the pipelines 402 may describe the flow of data from one stage 404-j to another stage 404-k. In some implementations, the flow of data may be linear and one-directional (e.g., upstream) from one stage 404-i to the next (404-i+1). In other implementations the pipeline 402 may include some flow of data in different directions (e.g., downstream), often referred to as a return channel or backchannel, or the pipeline 402 may be fully bi-directional.
  • Each stage 404 may include software that may execute a particular functionality.
  • a pipeline 402 may include one or more beginning stages 404-b, one or more intermediate stages 404-i, and one or more end stages 404-e.
  • data may be acquired by the beginning stage(s) 404-b, processed in the intermediate stage(s) 404-i, and saved or published in the end stage(s) 404-e.
  • Beginning stage(s) 404-b may include one or more emitter stages 404 that may receive input, e.g., from a pin, from a bus connected to sensors emitting data, a timer or other types of emitter stages 404.
  • Intermediate stage(s) 404-i may include middleware such as transforms, filters, down-sampling and/or ML algorithms such as k-means clustering or analytics scripts.
  • Intermediate stage(s) 404-i may include emitter stages 404 and/or collector stages 404, depending on their configuration within the pipeline 402.
  • End stage(s) 404-e may include collector stages 404 that may publish processed data to output pins or busses, and/or store the output data to a destination such as a time-series database or to the cloud.
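  • As a non-limiting illustration of the emitter / transform / collector pattern described above, the following Lua sketch wires three hypothetical stages so that each stage's output feeds the next (the sensor reading is simulated):
      -- Illustrative emitter -> transform -> collector chain; the stage bodies are
      -- stand-ins, not stages shipped with the described system.
      local function emitter()              -- beginning stage: acquire a reading
        return { ts = os.time(), value = math.random() * 100 }
      end

      local function condition(reading)     -- intermediate stage: round the value
        reading.value = math.floor(reading.value + 0.5)
        return reading
      end

      local function collector(reading)     -- end stage: publish or store the result
        print(string.format("ts=%d value=%d", reading.ts, reading.value))
      end

      -- Each stage's output is the next stage's input.
      collector(condition(emitter()))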
  • Lua is a lightweight, multi-paradigm programming language designed primarily for embedded use in applications and is particularly suitable for edge device (IoT) applications.
  • a system provides for the development, testing, distribution, management and administration of machine learning (ML) algorithms, data processing directives and/or other types of software to edge devices.
  • data processing directives and/or other types of software developed by the system for the edge devices may be referred to as code.
  • a system may be used to perform one or more of the following operations, alone or in combination, and without limitation: 1. Develop new code to run on edge devices;
  • the platform may also be used to administer access rights, roles and privileges, and other functionalities.
  • capabilities of pipelines 402 may include some or all of the following (without limitation):
  • Spark Streaming may use standard Hadoop protocols to load static files (e.g., Resilient Distributed Datasets (RDDs)) before starting its control loop.
  • the container runtime 200 may be a WebHDFS endpoint, which may offer generic hierarchical file storage.
  • the system 100 may also support ML tools and standards such as, PMML, ONNX, TensorFlow models, Caffe2 Model Zoo, Torch, Core ML, MLeap, and others.
  • Scripts may include custom application logic but may not rely on the
  • Scripts may be written in the Lua programming language (or other software languages) and may load a wide variety of Lua libraries for mathematical and other operations.
  • the system 100 may also be integrated with LuaRocks (or other third-party sources), a library manager for Lua modules, and may include libraries published in LuaRocks (or other third-party sources).
  • Durable Ring Buffer: A durable subscription interface may allow the container runtime 200 to reliably publish data in an intermittently connected environment.
  • the container runtime 200 may collect metrics at a specified interval, store them in a store-and-forward ring buffer or “capped collection” or the like, and later forward the data.
  • Delivery of the queued messages may use a reliable 2-phase commit handshake similar to AMQP (Advanced Message Queuing Protocol) or Kafka. If delivery of the data does not occur within a defined window, the data records may be recycled.
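  • The following Lua sketch illustrates, under assumptions, a store-and-forward ring buffer of the kind described: records are kept in a capped collection, recycled when the buffer is full, and only discarded once delivery is acknowledged. It is a simplified model, not the system's actual implementation.
      -- Capped, store-and-forward ring buffer (assumed behaviour).
      local RingBuffer = {}
      RingBuffer.__index = RingBuffer

      function RingBuffer.new(capacity)
        return setmetatable({ capacity = capacity, items = {}, head = 1 }, RingBuffer)
      end

      function RingBuffer:push(record)
        if #self.items < self.capacity then
          table.insert(self.items, record)
        else
          self.items[self.head] = record            -- overwrite the oldest record
          self.head = self.head % self.capacity + 1
        end
      end

      function RingBuffer:forward(deliver)
        -- Keep any record whose delivery is not acknowledged (2-phase style).
        local kept = {}
        for _, record in ipairs(self.items) do
          if not deliver(record) then table.insert(kept, record) end
        end
        self.items, self.head = kept, 1
      end

      -- Usage: buffer metrics while disconnected, forward once a link is available.
      local buf = RingBuffer.new(3)
      for i = 1, 5 do buf:push({ seq = i, temp_c = 20 + i }) end
      buf:forward(function(r) print("sent", r.seq); return true end)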
  • the orchestration hub 102 may provide the tools necessary to develop the software pipelines 402.
  • the orchestration hub 102 may package the pipeline(s) 402 into software containers 406 (e.g., micro-containers) that may include the pipelines 402 and their dependencies, such as runtimes, system tools, system libraries, settings and other elements.
  • the containers 406 may include a containerized operating system or they may run on the device’s operating system 302. In this way, the containers 406 may include everything that the pipelines 402 may require to run on the devices 300, regardless of the device’s infrastructure or computing environment.
  • the containers 406 may isolate their software from the device’s environment to ensure that the pipelines 402 may run uniformly and reliably on the devices 300.
  • the containers 406 may be cross-platform Linux containers for running on Linux-based and/or Unix-based systems, or platform-specific firmware for running on devices 300 with other specific types of platforms.
  • the pipelines 402 are preferably cross-platform. When used with a cross-platform Linux container runtime 200 and/or container 406, no transpiling may be necessary. When used with a platform-specific firmware container runtime 200, the pipelines 402 may be transpiled to match the agent/OS.
  • the orchestration hub 102 may also provide tools to test the pipelines 402
  • the orchestration hub 102 may include a simulation module that may provide line-by-line execution of the pipelines 402, as well as a visual representation of the variables defined by the code. Debugging the code through the simulation software may allow for early detection and the remedying of problems if the pipelines 402 do not behave as expected.
  • Other testing methodologies may include the use of a single-board computer (e.g., a Raspberry Pi) as will be described herein.
  • the system 100 may delete, erase or otherwise render inoperable any or all of the pre-existing OS 302 and/or previous applications 304 on the device 300.
  • the system 100 may leave some or all of the devices’ OSs 302 and/or the previous code 304 intact.
  • the orchestration hub 102 may then distribute the containers 406 (and their included pipelines 402) to the edge devices 300 along with corresponding container runtime(s) 200.
  • the container runtimes 200 may then run and generally manage the containers 406 and their associated pipelines 402 on the edge devices 300. As the pipelines 402 may run to collect and process data, the container runtimes 200 may provide the data to the orchestration hub 102 and/or to other destinations as desired. One or more container runtimes 200 managing one or more containers 406 (that may each include one or more pipelines 402) may be included onto any edge device 300.
  • the orchestration hub 102 may provide the tools to manage the ecosystem of containers 406 to ensure that deployed containers 406 are running correctly and performing their desired functionalities.
  • the orchestration hub 102 may also assign IP addresses (or other types of communication mechanisms) to the containers 406 as necessary (e.g., to allow the containers 406 to communicate with one another).
  • the orchestration hub 102 may include a variety of software units that may be used to perform the operations of the orchestration hub 102. Access to these units may be provided through the hub GUI 500 as shown, e.g., in FIG. 3A.
  • the orchestration hub 102 may include an applications unit 502 that may be used to develop the pipelines 402.
  • the hub may also include an administration unit and other units and elements as necessary to fulfill its functionalities as will be described herein.
  • the orchestration hub 102 may include tools to develop new pipelines 402 for particular devices 300, tools to modify or otherwise convert non-edge software (e.g., cloud computing applications) to run on particular devices 300, and other software development tools.
  • the system 100 may integrate with other systems (e.g., Amazon IoT, AWS lambda pipelines, etc.) to receive the non-edge code and to convert it (e.g., miniaturize it) for use with the system 100 and the edge devices 300.
  • the applications unit 502 may include at least some of the following modules:
  • a pipeline development module 504 is provided.
  • the pipelines 402 may be developed using the pipeline development module 504.
  • the pipeline development module 504 may include a shared stage pallet 512, a pipeline layout pane 514 and a stage information pane 516.
  • available stages 404 may be chosen from the shared stage pallet 512 and moved into the pipeline layout pane 514.
  • the stages 404 may be linked to form a pipeline 402.
  • Each stage 404 may be highlighted in the layout pane 514 to show its associated details in the stage information pane 516.
  • Stages 404 in the shared stage pallet 512 may be divided into categories such as the following:
  • Control stages 404-CTL may control devices such as a GPIO Pin or an LED matrix driver.
  • Data acquisition stages 404-DA may include drivers for devices such as sensors, inputs, actuators, and buses (e.g. I2C, I2S, SPI, etc.).
  • Middleware stages 404-MID may include script editors (e.g., a Lua script editor) that may allow for the creation of custom scripts (e.g., Lua scripts).
  • Middleware / Analytics stages 404-MA may include a Spark Streaming utility that may process results using a Spark Streaming control loop (e.g., via the native Stuart resource).
  • Middleware / Signal Conditioning stages 404-SC may include scripts that perform particular transforms or other calculations on the pipeline data (e.g., a cumulative moving average calculation; a minimal sketch of such a stage follows this list).
  • DB stage(s) 404-DB may enable the pipeline data to be stored to and/or read from sources such as databases (e.g., Time Series, Relational, Graph), durable ring buffers, etc.
  • Remote Storage stages 404-RS may enable the pipeline data to be stored to and/or read from remote storage resources such as databases (e.g., Time Series, Relational, Graph), message queues (e.g., MQTT, Kafka) and other data sources.
  • These stages 404 (e.g., 404-CTL, 404-DA, 404-MW, 404-MA, 404-SC, 404-DB, and 404-RS) may correspond to the stages 404 of FIG. 2.
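  • As a non-limiting illustration of a signal-conditioning stage such as the cumulative moving average mentioned above, the following Lua sketch (hypothetical, not a shipped stage) conditions a stream of metric values:
      -- Running cumulative moving average over incoming metric values.
      local count, cma = 0, 0

      local function cumulative_moving_average(value)
        count = count + 1
        cma = cma + (value - cma) / count
        return cma
      end

      for _, v in ipairs({ 10, 12, 11, 13 }) do
        print(string.format("value=%d cma=%.2f", v, cumulative_moving_average(v)))
      end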
  • the stages 404 may include programming that may trigger external actions.
  • the stages 404 may include software-based triggers through APIs and messaging systems that may trigger external software applications to perform other functionalities.
  • the stages 404 may include programming that may include hardware triggers (e.g., that may send values over a pin) to trigger an external system.
  • the stages 404 may include software modules 408 and/or software packages 410.
  • a software module 408 may include a script (e.g., a Lua script) that may perform a particular functionality (e.g., that may include a driver for a particular device 300).
  • the modules 408 may be associated with one or more stages 404 and may be preloaded and referenced by a stage 404 using a function call within the stage (e.g., require('your.module.name')).
  • the packages 410 may include bundles of modules 408 that may perform the functionalities of the combined bundled modules 408.
  • the packages 410 may be associated with one or more stages 404 and may be preloaded and referenced by a stage 404 using a function call (e.g., require('your.package.name')) within the stage 404.
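  • The following Lua sketch illustrates how a preloaded module might be referenced from a stage script via require('your.module.name'); the module body is a stand-in supplied here only so the sketch is self-contained.
      -- In the described system the runtime would preload modules bundled with the
      -- container 406; here a stub loader is registered for illustration.
      package.preload["your.module.name"] = function()
        return {
          celsius_to_fahrenheit = function(c) return c * 9 / 5 + 32 end,
        }
      end

      -- Stage script referencing the preloaded module:
      local mod = require("your.module.name")
      print(mod.celsius_to_fahrenheit(21.5))   --> 70.7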
  • Information regarding the modules 408 and the packages 410 may be available in the library module 506.
  • modules 408 may be added to the stage 404 by highlighting the stage 404 in the layout pane 514 and then using a module drop-down menu 518 in the informational pane 516 (as shown in FIG. 3B).
  • packages 410 may be added to a highlighted stage 404 via a package drop-down menu 520 (as shown in FIG. 3C).
  • modules 408 and packages 410 may be added to the stages 404 using different and/or other functionalities of the orchestration hub 102.
  • Table I shows a variety of exemplary shared stages 404 and their associated functionalities that the system 100 may include. It is understood that this list is not all-inclusive and that the system 100 may include some or all of these shared stages 404 as well as other shared stages 404 not listed. The scope of the system 100 is not limited in any way by the shared stages 404 that the system 100 may include or the pipelines 402 that may be developed or otherwise provided.
  • An informational pane 516 may include information regarding a highlighted stage 404.
  • the informational pane 516 may include (without limitation):
  • The modules 408 that the stage 404 may include. This may include the drop-down menu 518 that may allow for additional modules 408 to be added to the stage 404.
  • The packages 410 that the stage 404 may include. This may include the drop-down menu 520 that may allow for additional packages 410 to be added to the stage 404.
  • Other packages 410 from other sources that the stage 404 may include (e.g., LuaRocks packages). This may include a drop-down 522 that may allow for the additional packages 410 to be added to the stage 404.
  • the script that the stage may include.
  • the code shown in this pane may be editable to allow a user to make changes to the code as required (e.g., when a new module 408 and/or package 410 may be added to the stage 404 and the corresponding require() function calls may be added to the script as necessary).
  • the stage 404 may be linked to other stages 404 in the layout pane 514 by using the stage linking drop-down menu 524 (as shown in FIG. 3D).
  • the system 100 may use the pipeline metadata to determine which pipeline stages may include compatible inputs and/or outputs, and the drop-down menu 524 may present possible linkages between the highlighted stage 404 and other available linkable stages 404 within the layout 514.
  • the stages 404 may be linked and a linkage arrow 526 may appear in the layout pane 514 between the newly linked stages 404.
  • the linkage drop-down 524 may also be used to unlink currently linked stages 404 and/or to delete stages 404 as desired.
  • the system 100 may also include an automatic stage linking mechanism that may suggest logical linkages between stages 404 in the layout pane 514, and that may automatically link stages 404 as it may suggest. If this functionality is enabled and the user does not wish the stages 404 to be linked as suggested by the system 100, the user may use the linkage drop-down 524 to remove the automatic links.
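  • The following Lua sketch illustrates, under assumed metadata fields, how per-stage input/output metadata could be used to decide which stages are linkable, as described above; it is not the system's actual metadata schema.
      -- Per-stage input/output metadata (assumed field names) used to decide
      -- which stages may be linked in the layout pane.
      local stages = {
        sgp30   = { outputs = "air_quality_sample" },
        average = { inputs = "air_quality_sample", outputs = "number" },
        ht16k33 = { inputs = "number" },
      }

      local function linkable(from, to)
        return stages[from].outputs ~= nil
           and stages[from].outputs == stages[to].inputs
      end

      print(linkable("sgp30", "average"))   --> true
      print(linkable("sgp30", "ht16k33"))   --> false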
  • the system 100 may also provide the tools to create pipelines 402 as pipeline stages 404 (that is, a stage 404 may include a sub-pipeline within the stage 404).
  • the system 100 may provide the tools to add multiple metrics into a single pipeline 402, to send a metric to two pipeline stages 404 (e.g., EWMA, SMA) at the same time, and to run two separate pipelines 402 off the same metric.
  • FIGS. 7-20 Further aspects of the pipeline development module 504 will now be highlighted through the description of an example pipeline development workflow as shown in FIGS. 7-20.
  • This sample workflow (FIG. 4) may develop a pipeline 402 that may take readings from an air quality sensor and an accelerometer in parallel, process the data from each device and display the results via an LED readout. It is understood that this sample workflow and resulting pipeline 402 are meant for demonstration purposes and do not limit the scope of the system 100 in any way.
  • a new pipeline may be created by choosing “New pipeline” in the add dropdown 526 (FIG. 5A). This may open a new pipeline setup dialog 528 (FIG. 5B) wherein the new pipeline 402 may be given a name and associated with a user and/or an application.
  • the shared stage pane 512 and an empty stage layout pane 514 may be loaded (FIG. 5C).
  • the device driver stage 404 for the air quality sensor may be chosen from the shared stage pallet 512, after which an icon representing the stage 404-1 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5D).
  • this information may include the name of the stage 404-1 (SGP30), the sample period for the driver (1 second), the software packages to include (sgp30 (0.1.0-1)) and the software script.
  • the software script may include the function call "require 'sgp30'" to call the package sgp30.
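  • As a hedged illustration only (the specification does not reproduce the stage script), an emitter stage built around the sgp30 package might resemble the following Lua sketch; a stub package is preloaded so the sketch runs standalone.
      -- Hypothetical SGP30 emitter stage; the real 'sgp30' package ships with the
      -- described system, so a stub is preloaded purely for self-containment.
      package.preload["sgp30"] = function()
        return { read = function() return { co2_ppm = 400, tvoc_ppb = 12 } end }
      end

      local sgp30 = require 'sgp30'      -- corresponds to the stage's require call

      local function emit()              -- sampled once per configured period
        return sgp30.read()
      end

      local sample = emit()
      print(sample.co2_ppm, sample.tvoc_ppb)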
  • the device driver stage 404 for the accelerometer may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-2 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5E).
  • this information may include the name of the stage (ADXL345), the bandwidth rate (102 HZ), the range (2 G), the sample period (1 minute), the software packages to include (adxl345 (0.1.0-1)) and the software script.
  • the software script may include the function call "require 'adxl345'" to call the package adxl345.
  • a script stage 404 (e.g., a Lua script) may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-3 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5F).
  • the informational pane 516 may include a default baseline script into which the necessary code may be added to form the desired script for the stage 404-3.
  • the other fields such as the name of the script may also be blank.
  • In step 608, the desired code may be added to the script within the editable script field in the informational pane 516 and the script may be named.
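  • The custom script added in step 608 is not reproduced in the specification; as a purely hypothetical stand-in, a processing stage for the SGP30 readings might classify the CO2 level before handing it to the display stage, as in the following Lua sketch.
      -- Hypothetical stand-in for the custom processing script.
      local function classify_air_quality(sample)
        if sample.co2_ppm < 800 then
          return { label = "GOOD", co2_ppm = sample.co2_ppm }
        elseif sample.co2_ppm < 1500 then
          return { label = "FAIR", co2_ppm = sample.co2_ppm }
        else
          return { label = "POOR", co2_ppm = sample.co2_ppm }
        end
      end

      local result = classify_air_quality({ co2_ppm = 950, tvoc_ppb = 12 })
      print(result.label, result.co2_ppm)   --> FAIR  950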
  • the stage 404-3 may be linked to another stage in the pipeline 402 (e.g., to stage 404-1) by using the action required drop-down menu 530 (FIG. 5G).
  • the stages 404-1 and 404-3 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-1, 404-3.
  • a script stage 404 (e.g., a Lua script) may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-4 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5H).
  • the informational pane 516 may include a default baseline script into which the necessary code may be added to form the desired script for the stage 404-4.
  • other fields (such as the name of the script) may also be validated and/or prepopulated, or the fields may be blank and editable.
  • In step 612, the desired code may be added to the script within the editable script field in the informational pane 516 and the script may be named.
  • the system 100 may have linked the new stage 404-4 to stage 404-3, and the new stage, being the script to process data from the accelerometer driver stage 404-2, may therefore need to be unlinked (using the dropdown 524) as shown in FIG. 5I.
  • the stage 404-4 may then be linked to the stage 404-2 using the action required dropdown 530 (FIG. 5I).
  • the stages 404-4 and 404-3 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-4, 404-3 (FIG. 5J)
  • the device driver stage 404 for the LED matrix readout may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-5 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5K).
  • this information may include the name of the stage (HT16K33), the software packages to include (ht16k33 (0.1.0-1)) and the software script.
  • the software script may include the function call "require 'ht16k33'" to call the package ht16k33.
  • the system 100 may have properly automatically linked the stage 404-3 to the new stage 404-5.
  • because the stage 404-5 may also receive data from the stage 404-4, these two stages may also need to be linked.
  • the stage 404-4 may be highlighted and the dropdown menu 524 may be used to link it to stage 404-5 (step 616 and FIG. 5L).
  • the stages 404-4 and 404-5 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-4, 404-5. This may result in the final pipeline 402 as shown in FIG. 5M.
  • the exemplary pipeline 402 shown in FIG. 5M may collect data from an air quality detector (e.g., an SGP30 detector) and an accelerometer (e.g., an ADXL345 accelerometer), process the data from the air quality detector using a first script (e.g., a custom Lua script), process the data from the accelerometer using a second script (e.g., a custom Lua script), and display the processed data from both scripts on an 8x8 LED matrix (e.g., an HT16K33).
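  • For illustration, the final pipeline of FIG. 5M can be summarized as the following Lua topology table (structure only; the stage names are shorthand and the table format is not part of the described system):
      -- Two driver stages feed two script stages, which both feed the LED stage.
      local pipeline = {
        stages = { "sgp30", "adxl345", "air_script", "accel_script", "ht16k33" },
        links  = {
          { from = "sgp30",        to = "air_script"   },
          { from = "adxl345",      to = "accel_script" },
          { from = "air_script",   to = "ht16k33"      },
          { from = "accel_script", to = "ht16k33"      },
        },
      }

      for _, link in ipairs(pipeline.links) do
        print(link.from .. " -> " .. link.to)
      end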
  • the pipeline(s) 402 may be packaged into edge applications 412 by bundling their source code, assets, embedded microservices and other elements together and containerizing them (placing them into microcontainers 406 and/or firmware 406).
  • the containers 406 may then be transferred to the edge devices 300 and run.
  • In a standard cloud deployment workflow, the build is attached to its config (e.g., backing services, credentials to external services, per-deploy values, etc.).
  • the “release” then includes both the build and its config and is ready for execution in the execution environment (e.g., on the cloud platform).
  • the released application may then be “run” on the cloud platform.
  • Because the system 100 may be used to develop and deploy edge applications 412 to run on edge devices 300 (and not to run on cloud platforms), this standard workflow may not apply. Instead, because the application 412 may be edge-native, the application’s config may not merely be attached to the build; the config must be embedded with the application 412.
  • the system’s workflow may include the releasing of the application, followed by the building of the application, followed by the running of the application (in this order).
  • the file may be trimmed (before being deployed) by removing the routines in the file that are not in and/or needed by the pipelines being used.
  • a distributed container only contains routines/functions that may be used/needed, and does not contain routines/functions that are not going to be used or needed.
  • In some cases the pruning process may fail to remove some functions or routines that are not in the pipelines being used. However, it is preferable that the pruning process removes most (if not all) of the functions/routines that will not be used.
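  • The following Lua sketch models the pruning idea in simplified form (the actual build operates on the compiled artifact, not a Lua table): only routines referenced by the deployed pipelines are kept.
      -- Keep only the library routines referenced by the deployed pipelines.
      local library = {
        moving_average = function(values) return values end,
        fft            = function(values) return values end,
        kmeans         = function(values) return values end,
      }

      local used_by_pipelines = { "moving_average" }

      local function prune(lib, used)
        local kept = {}
        for _, name in ipairs(used) do kept[name] = lib[name] end
        return kept
      end

      local trimmed = prune(library, used_by_pipelines)
      for name in pairs(trimmed) do print("kept:", name) end   --> kept: moving_average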
  • pipelines 402 may be developed as described in other sections of this specification.
  • a new application 412 may be added by selecting the add (+) button and choosing “Add new application”.
  • the new application dialog 532 (FIG. 7A) may appear that may allow the user to name the new application 412 and set the owner.
  • the dialog 532 may also provide a number of available services that the user may add to the application. These services may include (without limitation), TSDB Edge, microcontainers, Stuart Accelerator, pipelines and other services. The user may choose the services to include by checking the appropriate checkbox(es).
  • the dialog 532 may also provide an editable description field into which the user may add a description of the application 412.
  • In step 704, the user may click on the Save button and the new application 412 may be created and saved by the system 100.
  • This may launch the application information dialog 534 (FIG. 7B) that may include an overview tab 536, a resources tab 538, a settings tab 540, a releases tab 542 and other tabs relating to the chosen application 412.
  • the user may simply click on the applications module 508 link (FIG. 3) and choose the desired application 412 from the list of available applications 412.
  • the overview tab 536 may display information such as the name of the application 412, the owner of the application 412, the services, the latest release, the pipelines 402 included in the application 412 as well as other information.
  • the user may choose the resources tab 538 to view and/or add resources to the application 412.
  • the loaded resources may be listed in the tab 538.
  • the user may click on the “Find more add-ons” button 542 and choose new add-ons from the list. For example, resources such as microcontainers 406, pipelines 402, the Stuart accelerator, TSDB Edge, and other resources may be added. Once added, the resources will be displayed in the dialog 538.
  • the settings tab 540 may include the name of the application 412, the pipelines 402 associated with the application 412, and an editable description of the application 412.
  • the releases tab 542 may show the current release information for the chosen edge application 412.
  • the dialog may show the release version (e.g., v1), the release date and other information.
  • new releases of the application 412 may be created by selecting the “Create Release” button 544.
  • the system 100 may perform a snapshotting of the application’s resources and embed the config with the pipelines 402.
  • The release may then be built into a container/firmware 406 with the system 100 translating the pipelines 402 and scripts into a high-level programming language (e.g., C), compiling the file and generating the firmware binary file.
  • a user may run the microcontainer runtime 406 on a single-board computer (e.g., a Raspberry Pi), download the application 412 from the orchestration hub 102 and run the application.
  • the build may then be transferred to an edge device 300 using a local connection
  • the container runtime 200 and build may run on less than 10 MB of memory and consume less than 25 MB of hard disk space (e.g., in Linux environments). In other exemplary embodiments, the container runtime 200 and the build may run on less than 32 K of RAM (e.g., in platform-specific embedded systems).
  • the system 100 may deploy containerized pipelines 402 and associated container runtimes 200 to edge devices 300 in various stages of their lifecycles, such as devices 300 in the following categories (without limitation):
  • Edge devices 300 that have not yet been programmed (e.g., new devices 300);
  • Edge devices 300 that have been programmed (e.g., devices that include previous programming 302, 304);
  • the devices 300 may be stand-alone devices 300 or devices as a part of a product or system.
  • a containerized pipeline 402 and its associated container runtime 200 may be embedded into a device 300 that may include an operating system (OS) 302 but that may not include other applications.
  • the OS 302 may be referred to as previous programming.
  • the device 300 may or may not yet be deployed into the field.
  • the embedded containerized pipeline 402 may interface and run directly on the OS 302.
  • the container runtime 200 may interface with the pipeline 402 and the OS 302 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described herein.
  • a containerized pipeline 402 and its associated container runtime 200 may be embedded into a device 300 that may include an operating system (OS) 302 and previous applications 304.
  • the OS 302 and the previous applications 304 may be referred to as previous programming.
  • the device 300 may or may not yet be deployed into the field.
  • the embedded containerized pipeline 402 may interface and run directly on the OS 302.
  • the agent 200 may interface with the pipeline 402 and the OS 302 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described in other sections.
  • the containerized pipeline 402 may be installed and/or updated whenever required by the system 100 without affecting the previous functionality of the device 300 and without requiring the updating and/or modification of the device’s previous programming (e.g., the OS 302 and/or the previous applications 304).
  • the containerized pipeline 402 may be updated with new ML or AI models (and new pipelines 402) as they may become available from the orchestration hub 102 and that once installed and/or updated (as frequently as necessary) may run alongside the previous applications 304 that may have shipped with the device 300.
  • the system 100 may delete, erase, overwrite, or otherwise render inoperable any or all of the pre-existing OS 302 and/or previous applications 304 on the device 300.
  • the system 100 may then embed the containerized pipelines 402 and the associated container runtimes 200 into the device 300.
  • the result is shown in FIG. 8C.
  • the containers 406 may include a containerized operating system and everything that the pipelines 402 may require to run on the devices 300.
  • the container runtime 200 may interface with the pipeline 402 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described herein.
• an application release "BL v2" may be running on the device 300 from flash, and the system 100 may download a release (e.g., "BL v3") which may contain an extra pipeline 402, an updated model text file or other new element.
  • the new release (“v3”) may then be stored to any available storage on the device 300 (e.g., Flash or MicroSD), and the new microcontainers 406 on the storage device may augment the microcontainers 406 already embedded.
  • the OS 302 and/or the previous programming 304 may not require any updating and/or other modification and may not be adversely affected by the agent 200 and/or the pipelines 402 (original or new release).
  • the device 300 may be over provisioned with additional containers 406 and/or additional associated agents 200.
  • additional containers 406 may not necessarily include pipelines 402 upon deployment into the device 300, but instead may be placeholders for future pipelines 402 (i.e., future releases to add new functionalities to the device 300) to be realized and implemented at later dates.
  • the corresponding pipeline 402 may be developed, containerized and augmented into the awaiting container 406 already installed on the device 300. This may allow developers to later monetize the devices 300 by adding or adjusting the device workloads as the needs evolve.
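• Purely as an illustration of the over-provisioning idea above (not the claimed implementation), a device might keep a fixed table of container slots in which unoccupied entries act as placeholders that a later release fills in; the slot table, field names and sizes below are assumptions for this sketch only.

    /* Illustrative sketch of over-provisioned container slots; the slot
     * table, field names and sizes are assumptions for this example. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_CONTAINERS 4

    typedef struct {
        bool   occupied;                 /* false => placeholder slot      */
        char   release[16];              /* e.g. "BL v2"                   */
        void (*pipeline_entry)(void);    /* entry point once containerized */
    } container_slot_t;

    static container_slot_t slots[MAX_CONTAINERS];  /* empty at deployment */

    /* Augment an awaiting placeholder with a newly delivered pipeline. */
    static bool install_into_placeholder(const char *release,
                                         void (*entry)(void))
    {
        for (size_t i = 0; i < MAX_CONTAINERS; i++) {
            if (!slots[i].occupied) {
                slots[i].occupied = true;
                strncpy(slots[i].release, release,
                        sizeof slots[i].release - 1);
                slots[i].release[sizeof slots[i].release - 1] = '\0';
                slots[i].pipeline_entry = entry;
                return true;
            }
        }
        return false;   /* no free placeholder left */
    }

    static void demo_pipeline(void) { /* new workload delivered later */ }

    int main(void)
    {
        return install_into_placeholder("BL v3", demo_pipeline) ? 0 : 1;
    }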
  • the pipelines 402 need not rely on a connection to the orchestration hub 102 or other devices for execution and may run untethered in disconnected environments.
• the system 100 may include different container runtime 200 deployment topologies such as a 1:1 agent 200 to edge device 300 deployment, a 1:N agent 200 to edge device 300 deployment, a hybrid deployment and other types of deployment topologies.
• the orchestration hub 102 may download the software agents 200 to the devices 300 via a local connection (e.g., USB or serial cable) as represented by arrows B1, B2, ... Bn, via a network 104 (e.g., the Internet) as represented by download lines B1', B2', ... Bn', or by other methods.
• the agent 200 may establish communication protocols within the device 300 such that the orchestration hub 102 may then communicate with the software agents 200 as represented by communication lines A1, A2, ... An.
  • the software agents 200 may receive containerized pipelines 402 from the orchestration hub 102 and enable the edge devices 300 to run the pipelines 402.
• the software agents 200 may also upload data (e.g., results from edge device data processing) from the device 300 to the orchestration hub 102 as represented by communication lines A1, A2, ... An.
• the agent 200 may also direct the edge device 300-1 to communicate with other edge devices 300-2, ..., 300-n as represented by communication lines C1-2 and C1-n. This may allow the agent 200 to collect data from locked-down devices 300 (devices 300 onto which the agents 200 cannot be directly installed) using remote communication protocols (such as SNMP), and to then run analytics or data processing on the collected data on the device 300 hosting the agent 200, as sketched below.
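• The sketch below illustrates this gather-remotely, analyze-locally pattern in C. The function poll_remote_device() is a hypothetical stand-in for whatever remote protocol query (e.g., an SNMP GET) the agent would actually issue; it is not an API of the system described here.

    /* Sketch: an agent on one device gathers readings from locked-down
     * neighbours over a remote protocol and runs the analytics locally.
     * poll_remote_device() is a hypothetical stand-in, not a real API. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_REMOTE 3

    /* Hypothetical: returns true and a reading for the given device. */
    static bool poll_remote_device(int address, double *reading)
    {
        *reading = 10.0 + address;   /* canned value for the sketch */
        return true;
    }

    /* Local analytic run on the device hosting the agent (here: a mean). */
    static double local_mean(const double *v, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += v[i];
        return n ? sum / (double)n : 0.0;
    }

    int main(void)
    {
        double readings[NUM_REMOTE];
        size_t got = 0;
        for (int addr = 0; addr < NUM_REMOTE; addr++) {
            double r;
            if (poll_remote_device(addr, &r))
                readings[got++] = r;
        }
        /* This insight would then be uploaded to the orchestration hub. */
        printf("local insight: mean = %.2f\n", local_mean(readings, got));
        return 0;
    }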
• some agents 200 may communicate only with the orchestration hub 102 (e.g., agent 200-1), while other agents 200 may communicate with the orchestration hub 102 and with other edge devices 300 (e.g., agent 200-2 may communicate with the orchestration hub 102 and with edge devices 300-3, 300-n).
• the system 100 may include data acquisition capabilities including ADLink Data River, Sonim, Influx, MQTT, Redis, SNMP, CPU, ModBus, CAN, I2C, SPI, OPC-UA, SCADA, CoAP, Kafka and others.
• the administration unit 504 may include at least some of the following modules:
  • the system 100 was used to provide real-time edge analytics during the process of directional drilling and geo-steering at oil and gas drilling facilities.
  • Geo-steering is the process of adjusting the borehole position (inclination and azimuth angles) in real-time during the drilling of the borehole in order to reach one or more geological targets. These adjustments may be based on geological information gathered from drill-head sensors (edge devices 300) while drilling.
  • the drill-head sensors 300 may include, without limitation one or more of: weight on bit (WOB) sensor(s), differential pressure sensor(s), shock sensor(s), vibration sensor(s), torque sensor(s), temperature sensor(s), accelerometer (inclinometer) sensor(s), magnetometer (azimuth) sensor(s), gamma ray sensor(s), and/or others.
  • logging while drilling (LWD) and measurement while drilling (MWD) data may be collected downhole by the sensors 300, converted into amplitude- and/or frequency-modulated pulses and transmitted up through the mud column by a downhole mud pulser.
• the baud rates may be very slow (e.g., 100 bits/sec).
  • the mud-pulse transmissions may include a great deal of latency, especially with increased well depth.
  • the data received topside may include raw data plotted on an X-Y and/or polar coordinate system or otherwise.
  • the same telemetry system may be used to transmit signal commands from the surface to the downhole sensors.
  • the raw data may be displayed for the drilling engineer to monitor in real time, looking for anomalies that may suggest a problem or that may provide insight to the drilling effectiveness.
  • the recognition of problems and/or interpretation of the raw data to discover patterns may be subjective and heavily reliant on the experience of the engineer, and as such, may be difficult to adequately perform in real time.
  • a typical procedure includes calculating the best path prior to the drilling using limited information, performing the drilling using these calculations, and then sending the data to a centralized data center after the drilling is complete to determine the success and effectiveness of the work. This process may typically take weeks if not months, thereby only confirming the status of the borehole long after the fact.
• drill-head sensors (edge devices 300) at a drilling location were retrofitted with software agents 200 and containerized software pipelines 402 (as shown, e.g., in FIG. 10B).
  • the pipelines 402 programmed the devices to apply transforms and machine learning algorithms to the raw data to facilitate a better understanding of the data in real time and to allow for adjustments to be made to the drilling path based on the data.
  • the software pipelines 402 processed the data into meaningful parameters such as mechanical specific energy (MSE).
  • MSE may reflect the energy required to remove a unit volume of rock.
  • the objective may be to minimize the MSE and to maximize the rate of penetration (ROP) of the drill-head.
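• A minimal sketch of the MSE calculation is shown below, using the commonly cited field-unit form of Teale's equation; the input values, units and constant are assumptions for illustration and are not necessarily the exact transform deployed in the pipelines 402.

    /* Mechanical specific energy, field-unit form of Teale's equation:
     * MSE [psi] = WOB/A + 120*pi*N*T/(A*ROP). Inputs are assumptions. */
    #include <stdio.h>

    #define PI 3.14159265358979323846

    /* wob: weight on bit [lbf], area: bit area [in^2], rpm: rotary speed
     * [rev/min], torque: [ft-lbf], rop: rate of penetration [ft/hr]. */
    static double mse_psi(double wob, double area, double rpm,
                          double torque, double rop)
    {
        return wob / area + (120.0 * PI * rpm * torque) / (area * rop);
    }

    int main(void)
    {
        /* Example numbers chosen only to exercise the formula. */
        printf("MSE = %.0f psi\n",
               mse_psi(25000.0, 61.0, 120.0, 8000.0, 60.0));
        return 0;
    }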
  • the software pipelines 402 applied machine learning (ML) algorithms and analytics to the data to discover meaningful patterns and to facilitate best path predictions for the drill-head based on the real time data received from the devices 300.
  • the edge analytics also facilitated the optimization of the drilling rates and the minimization of the drilling hazards.
• the software pipelines 402 applied smoothing to the data to reduce the noise (e.g., noise introduced by the attenuation through the mud column or from other sources), thus making the data streams easier to view and understand; one possible smoothing step is sketched below.
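• The following sketch shows a simple exponential moving average as one possible smoothing filter; the filter choice and the smoothing factor alpha are assumptions for illustration, since the actual filters used by the pipelines 402 are not specified here.

    /* Simple exponential moving average (EMA) smoother; alpha and the
     * sample values are assumptions for this sketch. */
    #include <stdio.h>

    typedef struct {
        double alpha;    /* smoothing factor in (0,1]; smaller = smoother */
        double value;    /* current smoothed estimate                     */
        int    primed;   /* 0 until the first sample arrives              */
    } ema_t;

    static double ema_update(ema_t *f, double sample)
    {
        if (!f->primed) {
            f->value = sample;
            f->primed = 1;
        } else {
            f->value += f->alpha * (sample - f->value);
        }
        return f->value;
    }

    int main(void)
    {
        ema_t f = { .alpha = 0.2, .value = 0.0, .primed = 0 };
        const double noisy[] = { 10.0, 13.0, 9.5, 11.2, 15.0, 10.8 };
        for (int i = 0; i < 6; i++)
            printf("smoothed: %.2f\n", ema_update(&f, noisy[i]));
        return 0;
    }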
  • the software pipelines 402 correlated the data across the different sensors and different feeds.
• Well construction includes the insertion of the casing, securing the casing with cement, perforation, as well as other initiatives (as depicted, e.g., in FIG. 10C).
  • the casings must be cemented correctly within the well, and once secured, must be monitored during the fracking operations to ensure their continued integrity.
  • the system 100 was used to provide real-time edge analytics during well construction and use.
  • Sensors such as one or more treating pressure sensor(s), pump rate sensor(s), temperature sensor(s), sand concentration sensor(s), and other types of sensors 300 were deployed within the well to monitor the pouring of the casing cement.
• the sensors 300 were retrofitted with software agents 200 and containerized software pipelines 402 that programmed the devices 300 to calculate real-time effective pressure and to apply machine learning (ML) algorithms to the collected data to develop pump rate models, sand concentration models and other types of models to predict adjustments to be made during the process.
  • the ML algorithms were also used to predict the performance of the casings when the fracking moved forward.
• the software pipelines 402 correlated the data with litho-facies map data to better understand the sedimentation processes and their deposits in the area.
  • the implementation of the system 100 in this use case resulted in an improved extraction rate of 30%-40%.
  • Well completion is the process of finalizing a well for production (or injection).
  • the system 100 was used to provide real-time edge analytics during the well completion process.
  • Downhole pressure, temperature, proppant concentration, chemical concentration and other types of gauges may be secured to the outside of the tubing string to collect data and to send it topside electrically, via fiber optics or through acoustic signals in the tubing wall.
  • the pressure and temperature sensors were retrofitted with software agents 200 and containerized software pipelines 402.
  • the pipelines 402 programmed the devices 300 to apply transforms and machine learning algorithms to the raw data to facilitate a better understanding of the data in real time and to allow for adjustments to be made to the completion in-process based on the data.
  • Nolte plots are used to interpret net-pressure behavior in the well to determine estimates of fracture growth patterns, where net pressure is the pressure in the fracture minus the in-situ stress.
• the software pipelines 402 processed the data collected by the edge devices 300 into meaningful parameters such as net pressure and Nolte plots (as shown, e.g., in FIG. 10E; a minimal net-pressure sketch is also given below). The processed data were then transmitted topside and made immediately available to the drilling engineers. The engineers were then able to make real-time adjustments to the completion processes as the data was retrieved. This optimized the completion and allowed the engineers to avert critical problems, such as when the fracture height may be increasing too rapidly (referred to as Mode IV), in which case the fracture treatment was flushed out or terminated.
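• The sketch below computes net pressure (pressure in the fracture minus the in-situ, or closure, stress) and the log-log slope of net pressure versus time, which is the quantity typically read from a Nolte plot. The pressure values, times and slope estimator are assumptions for illustration only.

    /* Net pressure and a two-point log-log slope for Nolte-type
     * interpretation; all numeric inputs are assumptions for the sketch. */
    #include <math.h>
    #include <stdio.h>

    static double net_pressure(double frac_pressure_psi,
                               double insitu_stress_psi)
    {
        return frac_pressure_psi - insitu_stress_psi;
    }

    /* Log-log slope between two (time, net pressure) samples. */
    static double loglog_slope(double t1, double p1, double t2, double p2)
    {
        return (log10(p2) - log10(p1)) / (log10(t2) - log10(t1));
    }

    int main(void)
    {
        double insitu = 6500.0;               /* psi, assumed             */
        double t[2]   = { 10.0, 20.0 };       /* minutes into the stage   */
        double bh[2]  = { 7100.0, 7240.0 };   /* treating pressures, psi  */
        double p1 = net_pressure(bh[0], insitu);
        double p2 = net_pressure(bh[1], insitu);
        printf("net pressures: %.0f, %.0f psi; log-log slope: %.2f\n",
               p1, p2, loglog_slope(t[0], p1, t[1], p2));
        return 0;
    }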
  • the software pipelines 402 applied machine learning (ML) algorithms and analytics to the Nolte plot data to provide predictions of problematic pressures within the fractures given the real-time conditions of the well.
• the software pipelines 402 calculated real-time changes in the injection rates, the proppant concentrations, fluid viscosities and other parameters to facilitate the optimization of the stimulation effectiveness.
[0184] In another example, the software pipelines 402 applied smoothing to the data to reduce the noise introduced by the attenuation through the mud column, thus making the data streams easier to view and understand.
  • the software pipelines 402 correlated the data across the different sensors and different feeds.
  • the implementation of the system 100 in this use case resulted in improved well productivity, a significant increase in hydrocarbon production and a 100% savings on the overall completion processes.
  • the system 100 was used to provide real-time edge analytics during pump operations including artificial lift.
  • Artificial lift is the process used to increase the pressure within the reservoir to extract the oil when the natural drive energy is not strong enough to push the oil to the surface.
  • Overall pump operations include methodologies to keep pumps active and productive, and to minimize downtime (as shown, e.g., in FIG. 10F).
  • the pressure, temperature, flow, suction, level and other sensors were retrofitted with software agents 200 and software pipelines 402.
  • the pipelines 402 programmed the devices 300 to apply machine learning (ML) algorithms to the raw data to predict when artificial lift may be required, and what level of lift may be needed. This optimized the pump’s productivity and yield.
• the devices 300 were also programmed to apply machine learning (ML) algorithms to predict maintenance requirements for the pumps prior to pump failure, thus avoiding broken wells. This implementation significantly reduced pump downtimes, thus increasing pump production, yield and revenue.
[0191] Once the ML models were optimized for a smaller group of pumps, the models were pushed to all of the pumps across the oil field, thus lowering the aggregate maintenance costs while significantly improving production volumes and overall revenue.
  • the edge devices were programmed by the system 100 to calculate a variety of ML models that were then used to predict behavior of the various parameters of interest during the well’s processes described.
  • the predictions from each ML model were compared to actual results upon completion of the processes to determine which ML models were the most accurate in predicting the well’s behavior.
• k-means, LVQ, SVM, CNN, ARIMA, RNN-LSTM and other ML models may be implemented by the system 100 to find patterns in the data and to create insights and/or predictions. Once the comparisons have been made, different pieces of data from the different models may be combined into one or more insights that may best represent the behavior of the well's processes. Using the system 100 integrated into the edge devices 300 within the well, these calculations may happen as frequently as desired (e.g., every one second or faster) to fine-tune the ML models in real time; a minimal model-comparison sketch is shown below.
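• The following sketch scores two candidate models against actual outcomes with a root-mean-square error and keeps the better one; the metric, the data and the model labels are assumptions chosen only to illustrate the comparison step described above.

    /* Compare candidate model predictions against actual results using
     * RMSE; the data and labels are assumptions for this sketch. */
    #include <math.h>
    #include <stdio.h>

    static double rmse(const double *pred, const double *actual, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double e = pred[i] - actual[i];
            sum += e * e;
        }
        return sqrt(sum / (double)n);
    }

    int main(void)
    {
        const double actual[]  = { 1.0, 2.0, 3.0, 4.0 };
        const double model_a[] = { 1.1, 1.9, 3.2, 3.8 };  /* e.g. ARIMA    */
        const double model_b[] = { 0.7, 2.5, 2.6, 4.6 };  /* e.g. RNN-LSTM */
        double ea = rmse(model_a, actual, 4);
        double eb = rmse(model_b, actual, 4);
        printf("best model: %s (RMSE %.3f vs %.3f)\n",
               ea <= eb ? "A" : "B",
               ea <= eb ? ea : eb,
               ea <= eb ? eb : ea);
        return 0;
    }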
  • the resulting insights may improve the well’s efficiency and productivity and may allow for better automation of complex processes.
• Programs that implement such methods may be stored and transmitted using a variety of media (e.g., computer-readable media) in a number of manners.
  • Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments.
  • various combinations of hardware and software may be used instead of software only.
  • FIG. 11 is a schematic diagram of a computer system 1100 upon which embodiments of the present disclosure may be implemented and carried out.
• the computer system 1100 includes a bus 1102, processor(s) 1104, main memory 1106, read-only memory 1108, removable storage media 1110, mass storage 1112 and communication port(s) 1114.
  • Communication port(s) 1114 may be connected to one or more networks (not shown) by way of which the computer system 1100 may receive and/or transmit data.
  • a “processor” means one or more microprocessors, central processing units (CPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture.
• An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.
  • Processor(s) 1104 can be any known processor. Typically Intel x86 processors are used for cloud and gateways, ARM A-class processors may be used for gateways and larger IoT devices, and ARM M-class may be used for IoT devices.
• Communication port(s) 1114 can be an Ethernet port, a Gigabit port using copper or fiber, a USB port, or the like. Communication port(s) 1114 may be chosen depending on the network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any other network to which the computer system 1100 connects.
  • Main memory 1106 can be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art.
  • Mass storage 1112 can be used to store information and instructions. For example, hard disk drives, an optical disc, an array of disks such as Redundant Array of Independent Disks (RAID), or any other mass storage devices may be used.
  • Bus 1102 communicatively couples processor(s) 1104 with the other memory, storage and communications blocks.
• Bus 1102 can be any bus, including an I2C (Inter-Integrated Circuit) bus, an SPI (Serial Peripheral Interface) bus, a PCI/PCI-X bus, a SCSI bus, a Modbus bus, a Controller Area Network (CAN) bus, a Universal Serial Bus (USB) based system bus, or another bus depending on the storage devices used, and the like.
• I2C busses are frequently used for sensors, and SPI busses are used for some sensors and often for memory; a minimal user-space I2C read is sketched below.
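• The sketch below shows a minimal Linux user-space I2C read of the kind referred to above. The bus device path (/dev/i2c-1) and the 7-bit sensor address (0x48) are assumptions for this example and will differ per board and per sensor.

    /* Minimal Linux user-space I2C read; bus path and sensor address are
     * assumptions for this sketch. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) {
            perror("open /dev/i2c-1");
            return 1;
        }
        if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {      /* select the sensor */
            perror("ioctl I2C_SLAVE");
            close(fd);
            return 1;
        }
        uint8_t buf[2];
        if (read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("read");
            close(fd);
            return 1;
        }
        printf("raw sensor bytes: 0x%02x 0x%02x\n", buf[0], buf[1]);
        close(fd);
        return 0;
    }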
  • Removable storage media 1110 can be any kind of external storage, including non-volatile memory cards (such as a microSD card or the like), hard-drives, floppy drives, USB drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Versatile Disk - Read Only Memory (DVD-ROM), etc.
  • Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process.
• the term "machine-readable medium" refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device.
  • Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-volatile media include, for example, optical or magnetic disks and other persistent memory.
  • Volatile media include dynamic random-access memory, which typically constitutes the main memory of the computer.
  • Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • the machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions.
  • embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., modem or network connection).
  • data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.
  • a computer-readable medium can store (in any appropriate format) those program elements which are appropriate to perform the methods.
• main memory 1106 is encoded with application(s) 1122 that support(s) the functionality as discussed herein (the application(s) 1122 may be an application(s) that provides some or all of the functionality of the services / mechanisms described herein).
• Application(s) 1122 (and/or other resources as described herein) can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that supports processing functionality according to different embodiments described herein.
  • processor(s) 1104 accesses main memory 1106 via the use of bus 1102 in order to launch, run, execute, interpret or otherwise perform the logic instructions of the application(s) 1122.
  • Execution of application(s) 1122 produces processing functionality of the service related to the application(s).
  • the process(es) 1124 represent one or more portions of the application(s) 1122 performing within or upon the processor(s) 1104 in the computer system 1100.
  • the application 1122 itself (i.e., the un-executed or non-performing logic instructions and/or data).
  • the application 1122 may be stored on a computer readable medium (e.g., a repository) such as a disk or in an optical medium.
• the application 1122 can also be stored in a memory type system such as in firmware, read only memory (ROM), or, as in this example, as executable code within the main memory 1106 (e.g., within Random Access Memory or RAM).
  • application(s) 1122 may also be stored in removable storage media 1110, read-only memory 1108, and/or mass storage device 1112.
  • the computer system 1100 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.
  • embodiments of the present invention include various steps or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware.
  • the term “module” refers to a self-contained functional component, which can include hardware, software, firmware or any combination thereof.
  • an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.
  • Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.
  • process may operate without any user intervention.
• process includes some human intervention (e.g., a step is performed by or with the assistance of a human).
  • the phrase “at least some” means “one or more,” and includes the case of only one.
  • the phrase “at least some ABCs” means “one or more ABCs”, and includes the case of only one ABC.
  • portion means some or all. So, for example, “A portion of X” may include some of “X” or all of “X”. In the context of a conversation, the term “portion” means some or all of the conversation.
• the phrase “using” means “using at least,” and is not exclusive. Thus, e.g., the phrase “using X” means “using at least X.” Unless specifically stated by use of the word “only”, the phrase “using X” does not mean “using only X.”
  • the phrase “based on” means “based in part on” or “based, at least in part, on,” and is not exclusive.
  • the phrase “based on factor X” means “based in part on factor X” or “based, at least in part, on factor X.” Unless specifically stated by use of the word “only”, the phrase “based on X” does not mean “based only on X.”
  • the phrase “distinct” means “at least partially distinct.” Unless specifically stated, distinct does not mean fully distinct. Thus, e.g, the phrase, “X is distinct from Y” means that “X is at least partially distinct from Y,” and does not mean that “X is fully distinct from Y.” Thus, as used herein, including in the claims, the phrase “X is distinct from Y” means that X differs from Y in at least some way.
  • the phrase “multiple ABCs,” means “two or more ABCs,” and includes “two ABCs.”
  • the phrase “multiple PQRs,” means “two or more PQRs,” and includes “two PQRs.”
  • the present invention also covers the exact terms, features, values and ranges, etc. in case these terms, features, values and ranges etc. are used in conjunction with terms such as about, around, generally, substantially, essentially, at least etc. (i.e., "about 3” or “approximately 3” shall also cover exactly 3 or “substantially constant” shall also cover exactly constant).

Abstract

A method includes: (A) providing at least one container runtime, wherein said at least one container runtime includes a first particular container runtime for a first type of device; (B) creating a first at least one software pipeline; (C) on a first device of said first type, embedding the first particular container runtime; (D) on said first device, embedding a first container comprising the first at least one software pipeline, wherein said first particular container runtime provides an execution environment for said first container on devices of said first type; (E) creating a second at least one software pipeline, distinct from the first at least one software pipeline; and (F) embedding, on said first device, a second container comprising the second at least one software pipeline, wherein said first particular container runtime provides an execution environment for said second container on devices of said first type.

Description

SYSTEM AND METHOD FOR PROGRAMMING DEVICES
Copyright Statement
[0001] This patent document contains material subject to copyright protection. The copyright owner has no objection to the reproduction of this patent document or any related materials in the files of the United States Patent and Trademark Office, but otherwise reserves all copyrights whatsoever.
Related Application
[0002] This application claims the benefit of U.S. Provisional patent application
No. 62/887,972, filed August 16, 2019, titled "System and Method For Programming Devices," the entire contents of which are hereby fully incorporated herein by reference for all purposes.
Field of the Invention
[0003] This invention relates to a system and method of programming electronic devices, including the programming of microcontrollers, microprocessors and other Internet of Things (IoT) devices.
Background
[0004] Edge devices such as Internet of Things (IoT) devices may include sensors, gauges, input devices (e.g., switches, rotary encoders, buttons, etc.), actuators, and other types of devices that may collect real-time, real-world data relating to their environment. For example, edge devices may include sensors that measure temperature (e.g., Internet controlled thermostats), pressure, acceleration, sound, optical signals (e.g, security cameras), humidity, gravity, geographic location, health parameters, system failures, and other types of parameters. The devices may include microcontrollers, microprocessors, field programmable gate arrays (FPGAs), and other types of processors programmed to control the device, to acquire the sensed data. In present systems, these devices output the sensed data for processing, typically to a centralized data center or cloud server.
[0005] The cloud server may then process the data and/or perform analytics upon the collection of data to discover and interpret meaningful patterns in the data, and to apply those patterns towards effective decision making. Decisions made at the server level regarding the data may be communicated back to the edge device (or to a different device) and implemented as new instructions for the device. This processing of the data at a centralized cloud server may be referred to as cloud computing.
[0006] The amount of time it may take to send the data from the device to the cloud for processing and then back to the device may be referred to as latency. In some scenarios, the latency may be insignificant (e.g., when adjusting the temperature of a household over the Internet using an IoT thermostat) while in other scenarios latency may be catastrophic (e.g., when processing data within anti-collision systems).
[0007] Latency is not the only problem associated with moving data off edge devices.
Edge devices may collect vast amounts of data that may require excessive bandwidth utilization to upload. For example, an oil field may generate petabytes of data per day, which is far too much data for the remote, cellular gateways to handle both technically and financially.
Presently, instead of uploading this data to a cloud server in real time, the data may be stored locally on recordable media and then physically shipped to a centralized location for analysis (sometimes taking weeks or even months to receive the results).
[0008] Furthermore, some edge devices may suffer from unreliable or intermittent connectivity, making it difficult to upload collected data on a consistent basis. For example, a refrigerated truck transporting temperature-sensitive cargo (e.g., salmon roe from Alaska to Los Angeles, California) may require real time monitoring and/or analysis of the cargo during the trip but may travel through areas with no connectivity making it impossible to do so via cloud computing.
[0009] Additionally, in a highly-regulated world, data may not be able to be moved in order to maintain compliance with legal requirements such as GDPR (General Data Protection Regulation) and privacy legislation.
[0010] It is desirable and an object hereof to perform processing at the edge (i.e., on edge devices). Accordingly, it is desirable and an object hereof to program edge devices to perform data processing locally at the point of the data retrieval in order to avoid the latency associated with cloud computing, excessive bandwidth requirements, and/or unreliable connectivity.
Summary
[0011] The present invention is specified in the claims as well as in the below description. Preferred embodiments are particularly specified in the dependent claims and the description of various embodiments.
[0012] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
[0013] One general aspect includes a method including: (a) embedding a container runtime on a microcontroller. The method also includes (b) embedding a first at least one container on the microcontroller, the first at least one container including a first at least one software pipeline, where the container runtime provides an execution environment for the first at least one container on the microcontroller. The method may also include (c) embedding a second at least one container on the microcontroller, the second at least one container including a second at least one software pipeline, where the second at least one software pipeline is distinct from the first at least one software pipeline, and where the container runtime provides an execution environment for the second at least one container on the microcontroller.
[0014] Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0015] Implementations may include one or more of the following features, alone or in various combination(s):
• The method where the container runtime replaces previous programming on the microcontroller.
• The method where the previous programming includes an operating system (OS) for the microcontroller.
• The method where the previous programming includes at least one application distinct from the OS.
• The method where the previous programming includes firmware and/or scripts from at least one prior installation.
• The method further including, after the embedding in (b), the first at least one software pipeline is run on the microcontroller.
• The method further including, after the embedding in (c), the second at least one software pipeline is run on the microcontroller.
• The method where running of the first at least one software pipeline on the microcontroller is controlled, at least in part, by the container runtime.
• The method where running of the second at least one software pipeline on the microcontroller is controlled, at least in part, by the container runtime.
• The method where the embedding in (b) and/or (c) is controlled, at least in part, by the container runtime.
• The method where the microcontroller is associated with a device.
• The method where the device is an Internet-of-things (IoT) device.
• The method where the embedding in (a) and/or (b) occurs after the device has been deployed.
• The method where the embedding in (a) and/or (b) occurs before the device has been deployed.
• The method where the embedding in (a) occurs before the device has been deployed and where the embedding in (b) occurs after the device has been deployed.
• The method where the device is provisioned with multiple containers.
• The method where at least one of the multiple containers is a placeholder for a future pipeline.
• The method where the software pipelines obtain sensor data from one or more sensors.
• The method where the software pipelines accept inputs from other devices, including human interface devices.
• The method where the software pipelines send data and/or control signals to one or more actuators.
• The method where the microcontroller is associated with a device, and where the one or more sensors are on or co-located with the device.
• The method where the software pipelines control one or more actuators.
• The method where the microcontroller is associated with a device, and where the software pipelines control one or more actuators, and where the one or more actuators are on or co-located with the device.
• The method where the microcontroller is associated with a device, and where the device receives inputs from one or more devices, including human interface devices, and where the one or more input devices are on and/or co-located with the device.
• The method where the software pipelines run when the microcontroller is disconnected from other devices.
• The method where the software pipelines run when the microcontroller is connected to at least one other device.
• The method where the software pipelines include at least one mechanism for maintaining data on the microcontroller when the microcontroller is disconnected from other devices.
• The method where the software pipelines include at least one mechanism for transmitting data from the microcontroller.
• The method where the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
• The method where the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
• The method where the software pipelines include an application for an already-programmed device.
• The method where (i) the first at least one software pipeline, and/or (ii) the second at least one software pipeline change or augment at least one functionality of the microcontroller.
• The method where the microcontroller includes hardware including at least one processor and a memory.
• The method where the first at least one container includes first one or more library routines and/or functions needed by the first at least one software pipeline.
• The method where the first one or more library routines and/or functions include all routines and/or functions needed by the first at least one software pipeline.
• The method where the first one or more library routines and/or functions were determined based on routines and/or functions in the first at least one software pipeline.
• The method where the first one or more library routines and/or functions were determined from at least one library and exclude routines and/or functions from the library that are not in the first at least one software pipeline.
• The method further including repeating acts (a) and (b) on multiple microcontrollers.
• The method where the multiple microcontrollers are homogeneous.
• The method where the multiple microcontrollers are heterogeneous.
[0016] Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0017] Another general aspect includes a method including: (a) providing at least one container runtime, where the at least one container runtime includes a first particular container runtime for a first type of device. The method also includes (b) creating a first at least one software pipeline. The method also includes (c) on a first device of the first type, embedding the first particular container runtime. The method may also include (d) on the first device, embedding a first container including the first at least one software pipeline, where the first particular container runtime provides an execution environment for the first container on devices of the first type. The method may also include (e) creating a second at least one software pipeline, distinct from the first at least one software pipeline. The method may also include (f) embedding, on the first device, a second container including the second at least one software pipeline, where the first particular container runtime provides an execution environment for the second container on devices of the first type.
[0018] Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0019] Implementations may include one or more of the following features, alone and/or in various combination(s):
• The method where the at least one container runtime includes a plurality of container runtimes, including a container runtime for each of a plurality of distinct types of devices.
• The method where the plurality of distinct types of devices include a plurality of internet-of-things (IoT) devices.
• The method where the second at least one software pipeline is run on the first device.
• The method where running of the first at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
• The method may also include where the running of the second at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
• The method where the embedding in (d) and/or (f) is controlled, at least in part, by the first particular container runtime.
• The method where the first device is an internet-of-things (IoT) device.
• The method where the plurality of distinct types of devices include devices with distinct processors and/or with distinct types of processors.
• The method may also include where the at least one container runtime includes a second particular container runtime for a second type of device, distinct from the first type of device.
• The method may also include (c2) on a second device of the second type, embedding the second particular container runtime.
• The method may also include (d2) on the second device, embedding a first container including the first at least one software pipeline, where the second particular container runtime provides an execution environment for the first container on devices of the second type.
• The method may also include (f2) embedding, on the second device, the second container including the second at least one software pipeline, where the second particular container runtime provides an execution environment for the second container on devices of the second type.
• The method where the embedding of the first particular container runtime in (c) does not replace previous programming in the first device.
• The method where the embedding of the second container in (c) replaces the first container on the first device.
• The method where the first at least one software pipeline is run on the first device.
• The method where the software pipelines obtain sensor data from one or more sensors.
• The method where the one or more sensors are on or co-located with the first device.
• The method where the first at least one software pipeline controls one or more actuators that are on or co-located with the first device.
• The method where the software pipelines run when the first device is disconnected from other devices.
• The method where the software pipelines accept inputs from other devices, including human interface devices.
• The method where the one or more input devices are on or co-located with the first device.
• The method where the software pipelines send data and/or signals to one or more actuators that are on or co-located with the first device.
• The method where the software pipelines run when the first device is connected to at least one other device.
• The method where the software pipelines include at least one mechanism for maintaining data on the first device when the first device is disconnected from other devices.
• The method where the software pipelines include at least one mechanism for transmitting data from the first device.
• The method where the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
• The method where the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
[0020] Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0021] Below is a list of process (or method) embodiments. Those will be indicated with a letter “P”. Whenever such embodiments are referred to, this will be done by referring to “P” embodiments.
P1. A method comprising:
(A) embedding a container runtime on a microcontroller; (B) embedding a first at least one container on said microcontroller, said first at least one container comprising a first at least one software pipeline, wherein said container runtime provides an execution environment for said first at least one container on said microcontroller; and then
(C) embedding a second at least one container on the microcontroller, said second at least one container comprising a second at least one software pipeline, wherein said second at least one software pipeline is distinct from the first at least one software pipeline, and wherein said container runtime provides an execution environment for said second at least one container on said microcontroller.
P2. The method of embodiment(s) P1, wherein said container runtime replaces previous programming on said microcontroller.
P3. The method of embodiments P1 or P2, wherein the previous programming comprises an operating system (OS) for the microcontroller.
P4. The method of embodiment(s) P3, wherein said previous programming comprises at least one application distinct from the OS.
P5. The method of any of the preceding embodiments, further comprising: after said embedding in (B), the first at least one software pipeline is run on the microcontroller.
P6. The method of any of the preceding embodiments, further comprising: after said embedding in (C), the second at least one software pipeline is run on the microcontroller.
P7. The method of any of the preceding embodiments, wherein running of the first at least one software pipeline on said microcontroller is controlled, at least in part, by the container runtime.
P8. The method of any of the preceding embodiments, wherein running of the second at least one software pipeline on said microcontroller is controlled, at least in part, by the container runtime.
P9. The method of any of the preceding embodiments, wherein the embedding in (B) and/or (C) is controlled, at least in part, by the container runtime.
P10. The method of any of the preceding embodiments, wherein the microcontroller is associated with a device.
P11. The method of embodiment(s) P10, wherein the device is an Internet-of-things (IoT) device.
P12. The method of any of embodiments P10 to P11, wherein the embedding in (A) and/or (B) occurs after the device has been deployed.
P13. The method of any of embodiments P10 to P12, wherein the embedding in (A) and/or (B) occurs before the device has been deployed.
P14. The method of embodiment(s) P10, wherein the embedding in (A) occurs before the device has been deployed and wherein the embedding in (B) occurs after the device has been deployed.
P15. The method of any of the preceding embodiments, wherein the software pipelines obtain sensor data from one or more sensors.
P16. The method of embodiment(s) P15, wherein the microcontroller is associated with a device.
P17. The method of any of the preceding embodiments, wherein the software pipelines control one or more actuators.
P18. The method of embodiment(s) P17, wherein the one or more actuators are on and/or co-located with the device.
P19. The method of any of the preceding embodiments, wherein the device accepts inputs from other devices, including human interface devices.
P20. The method of embodiment(s) P19, wherein the one or more inputs are on and/or co-located with the device.
P21. The method of any of the preceding embodiments, wherein the software pipelines run when said microcontroller is connected to at least one other device.
P22. The method of any of the preceding embodiments, wherein the software pipelines run when said microcontroller is disconnected from other devices.
P23. The method of any of the preceding embodiments, wherein the software pipelines include at least one mechanism for maintaining data on said microcontroller when said microcontroller is disconnected from other devices.
P24. The method of any of the preceding embodiments, wherein the software pipelines include at least one mechanism for transmitting data from said microcontroller.
P25. The method of any of the preceding embodiments, wherein the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
P26. The method of any of the preceding embodiments, wherein the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
P27. The method of any of the preceding embodiments, wherein the software pipelines comprise an application for an already-programmed device.
P28. The method of any of the preceding embodiments P2-P27, wherein the previous programming comprises firmware and/or scripts from at least one prior installation.
P29. The method of any of the preceding embodiments, wherein (i) said first at least one software pipeline, and/or (ii) said second at least one software pipeline change or augment at least one functionality of said microcontroller.
P30. The method of any of the preceding embodiments, wherein said microcontroller comprises hardware including at least one processor and a memory.
P31. The method of any of embodiments P10- P30, wherein the device is provisioned with multiple containers.
P32. The method of embodiment(s) P31, wherein at least one of the multiple containers is a placeholder for a future pipeline.
P33. The method of any of the preceding embodiments, wherein the first at least one container comprises first one or more library routines and/or functions potentially used or needed by the first at least one software pipeline.
P34. The method of embodiment(s) P33, wherein the first one or more library routines and/or functions include all routines and/or functions needed by the first at least one software pipeline.
P35. The method of embodiments P33 or P34, wherein the first one or more library routines and/or functions were determined based on routines and/or functions in the first at least one software pipeline.
P36. The method of any of embodiments P33-P35, wherein the first one or more library routines and/or functions was determined from at least one library and excludes routines and/or functions from said library that are not in the first at least one software pipeline.
P37. The method of any of the preceding embodiments, further comprising: repeating acts (A) and (B) on multiple microcontrollers.
P38. The method of embodiment(s) P37, wherein the multiple microcontrollers are homogeneous.
P39. The method of embodiment(s) P37, wherein the multiple microcontrollers are heterogeneous.
P40. A method comprising:
(A) providing at least one container runtime, wherein said at least one container runtime includes a first particular container runtime for a first type of device; (B) creating a first at least one software pipeline;
(C) on a first device of said first type, embedding the first particular container runtime;
(D) on said first device, embedding a first container comprising the first at least one software pipeline, wherein said first particular container runtime provides an execution environment for said first container on devices of said first type;
(E) creating a second at least one software pipeline, distinct from the first at least one software pipeline; and
(F) embedding, on said first device, a second container comprising the second at least one software pipeline, wherein said first particular container runtime provides an execution environment for said second container on devices of said first type.
P41. The method of embodiment(s) P40, wherein said at least one container runtime comprises a plurality of container runtimes, including a container runtime for each of a plurality of distinct types of devices.
P42. The method of embodiment(s) P41, wherein said plurality of distinct types of devices comprise a plurality of Internet-of-things (IoT) devices.
P43. The method of embodiments P41 or P42, wherein said plurality of distinct types of devices include devices with distinct processors and/or with distinct types of processors.
P44. The method of any of embodiments P41-P43, wherein said at least one container runtime includes a second particular container runtime for a second type of device, distinct from said first type of device, the method further comprising:
(C2) on a second device of said second type, embedding the second particular container runtime; and
(D2) on said second device, embedding a first container comprising the first at least one software pipeline, wherein said second particular container runtime provides an execution environment for said first container on devices of said second type.
P45. The method of embodiment(s) P44, further comprising:
(F2) embedding, on said second device, said second container comprising the second at least one software pipeline, wherein said second particular container runtime provides an execution environment for said second container on devices of said second type.
P46. The method of any of embodiments P40-P45, wherein said embedding of the first particular container runtime in (C) does not replace previous programming in said first device.
P47. The method of any of embodiments P40- P45, wherein said embedding of the second container in (C) replaces the first container on the first device.
P48. The method of any of embodiments P40- P47, wherein the first at least one software pipeline is run on the first device.
P49. The method of one of embodiments P40-P48, wherein the second at least one software pipeline is run on the first device.
P50. The method of one of embodiments P40-P49, wherein running of the first at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime, and wherein the running of the second at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
P51. The method of any of embodiments P40 to P50, wherein the embedding in (D) and/or (F) is controlled, at least in part, by the first particular container runtime.
P52. The method of any of embodiments P40 to P51, wherein the first device is an Internet-of-things (IoT) device.
P53. The method of any of embodiments P40-P52, wherein the software pipelines obtain sensor data from one or more sensors.
P54. The method of embodiment(s) P53, wherein the one or more sensors and/or the one or more input devices, and/or the one or more actuators are on or co-located with the first device.
P55. The method of any of the preceding embodiments P40-P54, wherein the first at least one software pipeline and/or the second at least one software pipeline controls one or more actuators.
P56. The method of embodiment(s) P55, wherein the one or more actuators are on and/or co-located with the first device.
P57. The method of any of the preceding embodiments P40-P56, wherein the software pipelines run when said first device is connected to at least one other device.
P58. The method of any of embodiments P40-P57, wherein the software pipelines run when said first device is disconnected from other devices.
P59. The method of any of embodiments P40-P58, wherein the software pipelines include at least one mechanism for maintaining data on said first device when said first device is disconnected from other devices.
P60. The method of any of embodiments P40-P59, wherein the software pipelines include at least one mechanism for transmitting data from said first device.
P61. The method of any one of embodiments P40-P60, wherein the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
P62. The method of any of embodiments P40-P61, wherein the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
[0022] Below is a list of computer-readable medium embodiments. Those will be indicated with a letter “C”. Whenever such embodiments are referred to, this will be done by referring to “C” embodiments.
C63. A computer-readable medium with one or more computer programs stored therein that, when executed by one or more processors of a device, cause the one or more processors to perform the operations of the method of any one of embodiment(s) / aspect(s) P1-P62.
C64. The computer-readable medium of embodiment(s) / aspect(s) C63, wherein the medium is non-transitory.
[0023] The above features, along with additional details of the invention, are described further in the examples herein, which are intended to further illustrate the invention but are not intended to limit its scope in any way.
Brief Description of the Drawings
[0024] Objects, features, and characteristics of the present invention as well as the methods of operation and functions of the related elements of structure, and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification.
[0025] FIG. 1 depicts aspects of a device programming system according to exemplary embodiments hereof;
[0026] FIG. 2 depicts aspects of a software code structure according to exemplary embodiments hereof; [0027] FIGS. 3A-3D are screenshots showing aspects of a code development tool GUI according to exemplary embodiments hereof;
[0028] FIG. 4 is a flowchart showing aspects of an exemplary workflow according to exemplary embodiments hereof;
[0029] FIGS. 5A-5M are screenshots showing aspects of a code development GUI according to exemplary embodiments hereof;
[0030] FIG. 6 is a flowchart showing aspects of an exemplary workflow according to exemplary embodiments hereof;
[0031] FIGS. 7A-7D are screenshots showing aspects of an application development
GUI according to exemplary embodiments hereof;
[0032] FIGS. 8A-8C depict aspects of devices according to exemplary embodiments hereof;
[0033] FIGS. 9A-9C depict aspects of code deployment topologies according to exemplary embodiments hereof;
[0034] FIGS. 10A-10F depict aspects of use cases of a device programming system according to exemplary embodiments hereof; and
[0035] FIG. 11 is a logical block diagram depicting aspects of a computer system.
Detailed Description of the Presently Preferred Exemplary Embodiments
Glossary and abbreviations
[0036] As used herein, the following terms have the following meanings unless specifically stated otherwise:
[0037] AMQP means Advanced Message Queuing Protocol;
[0038] API means application program (or programming) interface;
[0039] GUI means graphical user interface;
[0040] I²C (or I2C) means Inter-Integrated Circuit;
[0041] IoT means Internet of Things;
[0042] IP means Internet Protocol;
[0043] LAN means local area network;
[0044] Lua is a lightweight, multi-paradigm programming language designed primarily for embedded use in applications;
[0045] ML means machine learning;
[0046] OS means operating system;
[0047] RDD means Resilient Distributed Dataset;
[0048] SPI means Serial Peripheral Interface;
[0049] TSDB means time series database; [0050] UI means user interface;
[0051] USB means Universal Serial Bus;
[0052] WAN means Wide Area Network;
[0053] Compiling is the general term for taking source code written in one language and transforming it into another. Transpiling refers to taking source code written in one language and transforming it into another language that has a similar level of abstraction.
[0054] The term “mechanism,” as used herein, refers to any device(s), process(es), service(s), or combination thereof. A mechanism may be implemented in hardware, software, firmware, using a special-purpose device, or any combination thereof. A mechanism may be mechanical or electrical or a combination thereof. A mechanism may be integrated into a single device or it may be distributed over multiple devices. The various components of a mechanism may be co-located or distributed. The mechanism may be formed from other mechanisms. In general, as used herein, the term “mechanism” may thus be considered shorthand for the term device(s) and/or process(es) and/or service(s).
Description
[0055] Edge devices such as Internet of Things (IoT) devices may include sensors, gauges and other types of devices that may collect real-time real-world data relating to their environment. For example, edge devices may include sensors that measure temperature, pressure, acceleration, sound, optical inputs, humidity, gravity, geographic location, health parameters, system failures, and other types of parameters. The devices may include microcontrollers, microprocessors, field programmable gate arrays (FPGAs) and other types of processors programmed to control the device, to acquire the sensed data and to output the data for processing, typically to a centralized data center or cloud server.
[0056] Data collected by edge devices (e.g., IoT devices and the like) may be processed remotely (using so-called “cloud” computing) and/or on the edge devices themselves.
[0057] Cloud computing requires uploading data to a remote platform to be processed, with the results of such processing then distributed as needed. Cloud computing includes potentially undesirable latency. Latency may be generally defined as the total amount of time it takes to upload and process data and to distribute the results. Latency may be problematic, especially when real-time results are required or desired. Cloud computing may also require excessive bandwidth (e.g., when uploading large amounts of collected data), and may suffer from unreliable connectivity.
[0058] Edge computing, on the other hand, processes the collected data locally (e.g, at the location of the edge device where the data are generated and/or created and/or retrieved). In this case, the edge device itself may be programmed with the applications necessary to both collect and process the data. This allows the edge device to make decisions without the latency associated with cloud computing, without utilizing excessive bandwidth and without suffering from unreliable connectivity.
I. The System
[0059] As shown in FIG. 1, a system 100 according to exemplary embodiments hereof may include an orchestration hub 102, container runtimes 200 deployable onto edge devices 300, and code 400 (also referred to as container code 400 or container 400) that the container runtimes 200 may enable the edge devices 300 to run. The container code 400 may program the edge devices 300 to perform various / new functionalities. FIG. 1 depicts the details of two edge devices 300, 300’ for demonstrational purposes. Other edge devices 300 are shown without detail. Those of ordinary skill in the art will understand, upon reading this description, that the system 100 may be configured with a multitude of edge devices 300 simultaneously and in other configurations, as will be described in other sections.
[0060] An edge device 300 may be any kind of device, including an IoT device. An edge device 300 is a computer system in the sense that it provides a general-purpose programming system (at least one processor and memory), albeit generally with very limited processing and storage and function specific capabilities.
[0061] The edge devices 300 may be standalone devices or they may be incorporated into other devices, appliances and/or systems. The multiple edge devices 300 need not be homogenous.
[0062] The orchestration hub 102 comprises a computer system (e.g., as described below) and may include a centralized server and/or cloud platform that may include tools for developing, compiling, testing, building, releasing, deploying, distributing and/or updating the code 400 to the devices 300. The orchestration hub 102 may provide one or more user interfaces (UIs) including at least one graphical user interface (GUI) 500 with which users may interact while using various of the hub’s tools.
[0063] The orchestration hub 102 may also provide and deploy associated container runtimes 200 to the edge devices 300 that may include any and/or all software components required to provide communication and processing capabilities to the devices 300 to run the code 400. Once the code 400 is on a device 300 (e.g., is embedded onto a device 300), the container runtimes 200 may generally manage the running of the code 400 on that device. For example, the container runtimes 200 may run the code 400, pause and/or terminate the code 400, set activation times for the code 400, update the code 400, send data produced by the code 400 to the orchestration hub 102 or elsewhere, and perform other operations regarding the code 400 on the edge devices 300. The code 400 may correspond to an application providing certain functionality. When deployed on an edge device 300, the code 400 effectively becomes an edge- based application.
[0064] Different container runtimes 200 may be designed and made available for different edge devices 300, depending, e.g., on the edge device’s operating platform. For example, the system 100 may provide cross-platform Linux container runtimes 200 for running on Linux-based and/or Unix-based systems. In another example, the system 100 may provide container runtimes 200 that may include platform-specific firmware for running on edge devices 300 with other specific types of platforms. For the purposes of this description, the term container runtime refers to both the cross-platform container runtime 200 and/or the platform- specific firmware 200 that may function as a container runtime. Further details of the container runtimes 200 are provided elsewhere herein.
[0065] An orchestration hub 102 may be integrated with other orchestration products to maintain any vendor-preferred orchestration mechanisms.
[0066] The container runtimes 200 and/or the code 400 may be deployed and distributed to the edge devices 300 (e.g., via RPM, as firmware, etc.) using local connections (e.g., USB, serial cable, etc.) as depicted by arrow B1, through one or more networks 104 (e.g., the Internet, LAN, WAN, cellular, and/or any other types of networks) as depicted by arrow B1’, or by other installation techniques. It may sometimes be necessary first to install the container runtimes 200 onto the devices 300 in order to use such container runtimes to establish a network communication (e.g., an IP address or node) between the devices 300 and the orchestration hub 102 (as depicted by arrow A1) and then to install the code 400 using that connection.
[0067] A device 300 may also include an operating system 302 and one or more previous versions of code (e.g., previous applications) that may, e.g., have been loaded onto the device 300 prior to the engagement of the system 100. These previous versions are denoted previous code 304. As should be appreciated, the previous code 304 may have different functionality from the code 400. In some exemplary embodiments hereof, the container runtimes 200 and the code 400 may interact with the device’s operating system (OS) 302, although they preferably do not interfere with the device’s OS 302 or previous code 304. In other exemplary embodiments hereof, the system 100 may delete, remove or otherwise render inoperable the device’s OS 302 and/or the device’s previous code 304, and replace it with a container runtime 200 and/or the code 400. For example, as shown in FIG. 1, the device 300 includes an OS 302 and previous code 304, whereas the device 300’ has no OS or previous code. [0068] Together, the orchestration hub 102 and the container runtimes 200 may form an edge-native system 100 for developing, deploying, and managing edge-based applications 400. [0069] In exemplary embodiments hereof, e.g., as shown in FIG. 2, the code 400 may be developed and distributed as one or more software pipelines 402. Each software pipeline 402 may include a chain of processing elements (e.g., processes, threads, co-routines, functions, scripts, algorithms, etc.) that may be arranged so that the output of one element may be the input to the next. These sequential elements may be referred to as stages 404-1, 404-2, ... 404-n (individually and collectively 404). In this way, the pipelines 402 may describe the flow of data from one stage 404-j to another stage 404-k. In some implementations, the flow of data may be linear and one-directional (e.g., upstream) from one stage 404-j to the next (404-(j+1)). In other implementations the pipeline 402 may include some flow of data in different directions (e.g., downstream), often referred to as a return channel or backchannel, or the pipeline 402 may be fully bi-directional.
[0070] Each stage 404 may include software that may execute a particular functionality.
Generally, a pipeline 402 may include one or more beginning stages 404-b, one or more intermediate stages 404-i, and one or more end stages 404-e. For example, data may be acquired by the beginning stage(s) 404-b, processed in the intermediate stage(s) 404-i, and saved or published in the end stage(s) 404-e.
[0071] Beginning stage(s) 404-b may include one or more emitter stages 404 that may receive input, e.g., from a pin, from a bus connected to sensors emitting data, a timer or other types of emitter stages 404. Intermediate stage(s) 404-i may include middleware such as transforms, filters, down-sampling and/or ML algorithms such as k-means clustering or analytics scripts. Intermediate stage(s) 404-i may include emitter stages 404 and/or collector stages 404, depending on their configuration within the pipeline 402. End stage(s) 404-e may include collector stages 404 that may publish processed data to output pins or busses, and/or store the output data to a destination such as a time-series database or to the cloud.
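By way of a non-limiting illustration, the following minimal Lua sketch models a pipeline 402 as a chain of stage functions in which the output of one stage is the input to the next; the stage bodies and the run_once() helper are hypothetical and do not reflect the system's actual API.

local function emitter()                 -- beginning stage 404-b: acquire a sample
  return { celsius = 21.7, ts = os.time() }
end

local function transform(sample)         -- intermediate stage 404-i: process the sample
  sample.fahrenheit = sample.celsius * 9 / 5 + 32
  return sample
end

local function collector(sample)         -- end stage 404-e: publish or store the result
  print(string.format("%d,%.1f", sample.ts, sample.fahrenheit))
end

local pipeline = { emitter, transform, collector }

local function run_once(stages)          -- hypothetical driver: each stage feeds the next
  local data
  for _, stage in ipairs(stages) do
    data = stage(data)
  end
end

run_once(pipeline)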
[0072] While software applications may be written in any computer language(s) or combinations thereof, preferred implementations may use the Lua programming language. Lua is a lightweight, multi-paradigm programming language designed primarily for embedded use in applications and is particularly suitable for edge device (IoT) applications.
[0073] In general, a system according to exemplary embodiments hereof provides for the development, testing, distribution, management and administration of machine learning (ML) algorithms, data processing directives and/or other types of software to edge devices. For the purposes of this specification, the ML algorithms, data processing directives and/or other types of software developed by the system for the edge devices may be referred to as code.
[0074] A system may be used to perform one or more of the following operations, alone or in combination, and without limitation: 1. Develop new code to run on edge devices;
2. Modify existing code to run on edge devices;
3. Test the code created in (1) and/or (2);
4. Provide support elements necessary for the code to run on edge devices;
5. Prepare the code for deployment onto edge devices;
6. Prepare the edge devices ( e.g ., delete and/or replace any existing code from the device);
7. Distribute the code created in (1) and/or (2) to edge devices;
8. Communicate with the deployed code;
Collect data outputs from the code running on edge devices;
Update the code as required;
Generally manage the running of the code;
9. Provide a centralized platform that may administer and manage the distributed software ecosystem. The platform may also be used to administer access rights, roles and privileges, and other functionalities.
[0075] In some exemplary embodiments hereof, capabilities of pipelines 402 may include some or all of the following (without limitation):
[0076] Spark Streaming: Spark Streaming jobs (e.g., ML, analytics, etc.) may use standard Hadoop protocols to load static files before starting their control loop. A simplified version of RDDs (Resilient Distributed Datasets) may be available, so developers may have access to the same functions at the edge as in the cloud. The container runtime 200 may be a WebHDFS endpoint, which may offer generic hierarchical file storage.
[0077] ML tools and Standards: The system 100 may also support ML tools and standards such as PMML, ONNX, TensorFlow models, Caffe2 Model Zoo, Torch, Core ML, MLeap, and others.
[0078] Scripts: Scripts may include custom application logic but may not rely on the Spark Streaming semantics. Scripts may be written in the Lua programming language (or other software languages) and may load a wide variety of Lua libraries for mathematical and other operations. The system 100 may also be integrated with LuaRocks (or other third-party sources), a library manager for Lua modules, and may include libraries published in LuaRocks (or other third-party sources).
[0079] Durable Ring Buffer: A durable subscription interface may allow the container runtime 200 to reliably publish data in an intermittently connected environment. The container runtime 200 may collect metrics at a specified interval, store them in a store-and-forward ring buffer or “capped collection” or the like, and later forward the data. Delivery of the queued messages may use a reliable 2-phase commit handshake similar to AMQP (Advanced Message Queuing Protocol) or Kafka. If delivery of the data does not occur within a defined window, the data records may be recycled.
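A minimal Lua sketch of such a store-and-forward ring buffer is shown below for illustration only; the fixed capacity, the push()/forward() names and the caller-supplied send() callback are assumptions rather than the container runtime's actual interface.

local RingBuffer = {}
RingBuffer.__index = RingBuffer

function RingBuffer.new(capacity)
  return setmetatable({ items = {}, head = 1, count = 0, capacity = capacity }, RingBuffer)
end

function RingBuffer:push(record)
  if self.count == self.capacity then
    -- full: recycle the oldest record, as described for data past its window
    self.head = self.head % self.capacity + 1
    self.count = self.count - 1
  end
  local slot = (self.head + self.count - 1) % self.capacity + 1
  self.items[slot] = record
  self.count = self.count + 1
end

function RingBuffer:forward(send)
  -- deliver oldest-first; a record is only discarded after send() acknowledges it
  while self.count > 0 do
    if not send(self.items[self.head]) then return end   -- still offline: keep the data
    self.items[self.head] = nil
    self.head = self.head % self.capacity + 1
    self.count = self.count - 1
  end
end

-- usage: buffer metrics while disconnected, forward them when a link is available
local buf = RingBuffer.new(3)
buf:push({ t = 1, temp = 20 }); buf:push({ t = 2, temp = 21 })
buf:forward(function(record) print(record.t, record.temp) return true end)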
[0080] A person of ordinary skill in the art will understand that the example pipeline capabilities described herein are meant for demonstration and that the pipelines 402 may include any other capabilities, and that the scope of the system 100 is not limited in any way by the capabilities of the pipelines 402 that it may include.
[0081] As described herein, the orchestration hub 102 may provide the tools necessary to develop the software pipelines 402.
[0082] Once the pipelines 402 are developed, the orchestration hub 102 may package the pipeline(s) 402 into software containers 406 ( e.g ., micro-containers) that may include the pipelines 402 and their dependencies, such as runtimes, system tools, system libraries, settings and other elements. The containers 406 may include a containerized operating system or they may run on the device’s operating system 302. In this way, the containers 406 may include everything that the pipelines 402 may require to run on the devices 300, regardless of the device’s infrastructure or computing environment. The containers 406 may isolate their software from the device’s environment to ensure that the pipelines 402 may run uniformly and reliably on the devices 300.
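Purely as an illustration of what a container 406 bundles together, the following hypothetical Lua table sketches a container manifest; the field names and values are assumptions and do not reflect the system's actual packaging format.

-- hypothetical container manifest expressed as a Lua table (illustrative only)
local container = {
  name         = "air-quality-demo",
  pipelines    = { "air-quality-pipeline" },      -- pipelines 402 packaged in the container
  dependencies = {                                 -- packages the stage scripts require
    "sgp30 (0.1.0-1)",
    "ht16k33 (0.1.0-1)",
  },
  settings     = { sample_period_seconds = 1 },    -- per-deploy configuration embedded with the build
  runtime      = "cross-platform-linux",           -- or platform-specific firmware
}
return container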
[0083] Similar to the container runtimes 200, the containers 406 may be cross-platform
Linux containers for running on Linux-based and/or Unix-based systems, or platform-specific firmware for running on devices 300 with other specific types of platforms.
[0084] The pipelines 402 are preferably cross-platform. When used with a cross-platform Linux container runtime 200 and/or container 406, no transpiling may be necessary. When used with a platform-specific firmware container runtime 200, the pipelines 402 may be transpiled to match the agent/OS.
[0085] The orchestration hub 102 may also provide tools to test the pipelines 402
(individually and/or containerized). For example, the orchestration hub 102 may include a simulation module that may provide line-by-line execution of the pipelines 402, as well as a visual representation of the variables defined by the code. Debugging the code through the simulation software may allow for early detection and the remedying of problems if the pipelines 402 do not behave as expected. Other testing methodologies may include the use of a single-board computer (e.g., a Raspberry Pi) as will be described herein.
[0086] In some exemplary embodiments hereof, prior to distributing the containers 406 and the container runtimes 200 to devices 300 that may include pre-existing operating systems (OSs) 302 and/or previous applications 304, the system 100 may delete, erase or otherwise render inoperable any or all of the pre-existing OS 302 and/or previous applications 304 on the device 300. In other exemplary embodiments hereof, the system 100 may leave some or all of the devices OSs 302 and/or the previous code 304 intact. In either case, the orchestration hub 102 may then distribute the containers 406 (and their included pipelines 402) to the edge devices 300 along with corresponding container runtime(s) 200. The container runtimes 200 may then run and generally manage the containers 406 and their associated pipelines 402 on the edge devices 300. As the pipelines 402 may run to collect and process data, the container runtimes 200 may provide the data to the orchestration hub 102 and/or to other destinations as desired. One or more container runtimes 200 managing one or more containers 406 (that may each include one or more pipelines 402) may be included onto any edge device 300.
[0087] The orchestration hub 102 may provide the tools to manage the ecosystem of containers 406 to ensure that deployed containers 406 are running correctly and performing their desired functionalities. The orchestration hub 102 may also assign IP addresses (or other types of communication mechanisms) to the containers 406 as necessary ( e.g ., to allow the containers 406 to communicate with one another).
II. Pipeline Development
[0088] The orchestration hub 102 may include a variety of software units that may be used to perform the operations of the orchestration hub 102. Access to these units may be provided through the hub GUI 500 as shown, e.g., in FIG. 3A. In exemplary embodiments hereof, the orchestration hub 102 may include an applications unit 502 that may be used to develop the pipelines 402. The hub may also include an administration unit and other units and elements as necessary to fulfill its functionalities as will be described herein.
[0089] In some exemplary embodiments hereof, the orchestration hub 102 may include tools to develop new pipelines 402 for particular devices 300, tools to modify or otherwise convert non-edge software (e.g., cloud computing applications) to run on particular devices 300, and other software development tools. In the case of converting non-edge code to system pipelines 402, the system 100 may integrate with other systems (e.g, Amazon IoT, AWS lambda pipelines, etc.) to receive the non-edge code and to convert it (e.g, miniaturize it) for use with the system 100 and the edge devices 300.
[0090] In exemplary embodiments hereof, the applications unit 502 may include at least some of the following modules:
1. A pipeline development module 504;
2. A libraries module 506;
3. An applications module 508; and
4. A storage mounting module 510. [0091] In exemplary embodiments hereof, the pipelines 402 may be developed using the pipeline development module 504. Once opened (e.g., by selecting the Pipeline link in the left toolbar), the pipeline development module 504 may include a shared stage pallet 512, a pipeline layout pane 514 and a stage information pane 516. In general, available stages 404 may be chosen from the shared stage pallet 512 and moved into the pipeline layout pane 514. The stages 404 may be linked to form a pipeline 402. Each stage 404 may be highlighted in the layout pane 514 to show its associated details in the stage information pane 516.
[0092] Stages 404 in the shared stage pallet 512 may be divided into categories such as
Control, Data Acquisition, Middleware, Middleware / Analytics, Middleware / Signal Conditioning, TSDB Edge, Remote Storage and other categories. Control stages 404-CTL may control devices such as a GPIO Pin or an LED matrix driver. Data acquisition stages 404-DA may include drivers for devices such as sensors, inputs, actuators, and buses (e.g., I2C, I2S, SPI, etc.). Middleware stages 404-MID may include script editors (e.g., Lua script editor) that may allow for the creation of custom scripts (e.g., Lua scripts). Middleware / Analytics stages 404-MA may include a Spark Streaming utility that may process results using a Spark Streaming control loop (e.g., via the native Stuart resource). Middleware / Signal Conditioning stages 404-SC may include scripts that perform particular transforms or other calculations on the pipeline data (e.g., a cumulative moving average calculation). DB stage(s) 404-DB may enable the pipeline data to be stored to and/or read from sources such as databases (e.g., Time Series, Relational, Graph), durable ring buffers, etc. Remote Storage stages 404-RS may enable the pipeline data to be stored to and/or read from remote storage resources such as databases (e.g., Time Series, Relational, Graph), message queues (e.g., MQTT, Kafka) and other data sources. [0093] Those of skill in the art will understand, upon reading this description, that the various stages 404 (e.g., 404-CTL, 404-DA, 404-MID, 404-MA, 404-SC, 404-DB, and 404-RS) provided by the pallet 512 may correspond to stages 404-i of FIG. 2.
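As a non-limiting illustration of a Middleware / Signal Conditioning stage 404-SC, the following minimal Lua sketch computes a cumulative moving average over the samples it receives; the stage contract (a function called once per sample) is an assumption for illustration.

local n, mean = 0, 0

-- called once per incoming sample; returns the cumulative moving average so far
local function cumulative_moving_average(sample)
  n = n + 1
  mean = mean + (sample - mean) / n
  return mean
end

-- example: feed a few readings through the stage
for _, reading in ipairs({ 10, 12, 11, 13 }) do
  print(cumulative_moving_average(reading))
end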
[0094] Those of skill in the art will also understand, upon reading this description, that the categories and example functionalities listed above are merely exemplary and are meant for demonstration, and that the shared stage pallet 512 may include any of the stages and/or types of stages 404 listed and/or any other stages and/or types of stages 404.
[0095] In exemplary embodiments hereof, the stages 404 (and the pipelines 402) may include programming that may trigger external actions. For example, the stages 404 may include software-based triggers through APIs and messaging systems that may trigger external software applications to perform other functionalities. In addition, the stages 404 may include programming that may include hardware triggers (e.g. , that may send values over a pin) to trigger an external system. [0096] The stages 404 may include software modules 408 and/or software packages 410
(bundles of software modules 408) that the system 100 may provide, that may be custom developed, that may be received from other sources, or any combination thereof. A software module 408 may include a script (e.g., a Lua script) that may perform a particular functionality (e.g., that may include a driver for a particular device 300). The modules 408 may be associated with one or more stages 404 and may be preloaded and referenced by a stage 404 using a function call within the stage (e.g., require(‘your.module.name’)).
[0097] The packages 410 may include bundles of modules 408 that may perform the functionalities of the combined bundled modules 408. The packages 410 may be associated with one or more stages 404 and may be preloaded and referenced by a stage 404 using a function call (e.g., require(‘your.package.name’)) within the stage 404. Information regarding the modules 408 and the packages 410 may be available in the libraries module 506.
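By way of illustration only, the following Lua sketch shows one way a preloaded module 408 could be registered and then referenced from a stage script with the require(‘your.module.name’) call described above; the read() function and the placeholder module name are hypothetical.

-- register a module under the placeholder name used in the specification's example call
package.preload["your.module.name"] = function()
  local M = {}
  function M.read()
    return 42               -- stand-in for a device-specific reading
  end
  return M
end

-- inside a stage script: reference the preloaded module
local sensor = require("your.module.name")
print(sensor.read())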
[0098] It is understood that the scope of the system 100 is not limited in any way by the modules 408 and/or the packages 410 that the system 100 may include.
[0099] In exemplary embodiments hereof, modules 408 may be added to the stage 404 by highlighting the stage 404 in the layout pane 514 and then using a module drop-down menu 518 in the informational pane 516 (as shown in FIG. 3B). In addition, packages 410 may be added to a highlighted stage 404 via a package drop-down menu 520 (as shown in FIG. 3C).
It is understood that in some embodiments, modules 408 and packages 410 may be added to the stages 404 using different and/or other functionalities of the orchestration hub 102.
[0100] Table I (in Appendix I hereto) shows a variety of exemplary shared stages 404 and their associated functionalities that the system 100 may include. It is understood that this list is not all-inclusive and that the system 100 may include some or all of these shared stages 404 as well as other shared stages 404 not listed. The scope of the system 100 is not limited in any way by the shared stages 404 that the system 100 may include or the pipelines 402 that may be developed or otherwise provided.
[0101] An informational pane 516 may include information regarding a highlighted stage 404. For example, the informational pane 516 may include (without limitation):
1. The name of the stage 404;
2. The modules 408 that the stage 404 may include. This may include the drop-down menu 518 that may allow for additional modules 408 to be added to the stage 404.
3. The packages 410 that the stage 404 may include. This may include the drop down menu 520 that may allow for additional packages 410 to be added to the stage 404. 4. Other packages 410 from other sources that the stage 404 may include ( e.g ., LuaRocks packages). This may include a drop-down 522 that may allow for the additional packages 410 to be added to the stage 404.
5. The script that the stage may include. The code shown in this pane may be editable to allow a user to make changes to the code as required (e.g., when a new module 408 and/or package 410 may be added to the stage 404 and the corresponding require() function calls may be added to the script as necessary).
6. Other types of information particular to the highlighted stage such as bandwidth rate, range, sample period and other information. Some or all of this information may be editable.
[0102] Once a stage 404 has been chosen and properly configured, the stage 404 may be linked to other stages 404 in the layout pane 514 by using the stage linking drop-down menu 524 (as shown in FIG. 3D). The system 100 may use the pipeline metadata to determine which pipeline stages may include compatible inputs and/or outputs, and the drop-down menu 524 may present possible linkages between the highlighted stage 404 and other available linkable stages 404 within the layout 514. By choosing a linkage in the drop-down menu 524, the stages 404 may be linked and a linkage arrow 526 may appear in the layout pane 514 between the newly linked stages 404. The linkage drop-down 524 may also be used to unlink currently linked stages 404 and/or to delete stages 404 as desired.
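The following hypothetical Lua sketch illustrates the general idea of using stage metadata to determine compatible linkages; the metadata fields (inputs/outputs) and the type names are assumptions and are not the system's actual pipeline metadata.

-- a stage can be linked downstream of another if some declared output type
-- of the upstream stage matches a declared input type of the downstream stage
local function compatible(upstream, downstream)
  for _, out_type in ipairs(upstream.outputs or {}) do
    for _, in_type in ipairs(downstream.inputs or {}) do
      if out_type == in_type then return true end
    end
  end
  return false
end

local sensor_stage = { name = "SGP30",      outputs = { "air_quality_sample" } }
local script_stage = { name = "Lua script", inputs  = { "air_quality_sample" },
                       outputs = { "display_frame" } }

print(compatible(sensor_stage, script_stage))   --> true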
[0103] The system 100 may also include an automatic stage linking mechanism that may suggest logical linkages between stages 404 in the layout pane 514, and that may automatically link stages 404 as it may suggest. If this functionality is enabled and the user does not wish the stages 404 to be linked as suggested by the system 100, the user may use the linkage drop-down 524 to remove the automatic links.
[0104] In exemplary embodiments hereof, the system 100 may also provide the tools to create pipelines 402 as pipeline stages 404 (that is, a stage 404 may include a sub-pipeline within the stage 404). In addition, the system 100 may provide the tools to add multiple metrics into a single pipeline 402, to send a metric to two pipeline stages 404 (e.g., EWMA, SMA) at the same time, and to run two separate pipelines 402 off the same metric.
III. Sample Pipeline Development Workflow
[0105] Further aspects of the pipeline development module 504 will now be highlighted through the description of an example pipeline development workflow as shown in FIGS. 4 and 5A-5M. This sample workflow (FIG. 4) may develop a pipeline 402 that may take readings from an air quality sensor and an accelerometer in parallel, process the data from each device and display the results via an LED readout. It is understood that this sample workflow and resulting pipeline 402 are meant for demonstration purposes and do not limit the scope of the system 100 in any way.
[0106] In step 600, a new pipeline may be created by choosing “New pipeline” in the add dropdown 526 (FIG. 5A). This may open a new pipeline setup dialog 528 (FIG. 5B) wherein the new pipeline 402 may be given a name and associated with a user and/or an application.
Once named and saved (step 602), the shared stage pane 512 and an empty stage layout pane 514 may be loaded (FIG. 5C).
[0107] In step 604, the device driver stage 404 for the air quality sensor may be chosen from the shared stage pallet 512, after which an icon representing the stage 404-1 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5D). In this example, this information may include the name of the stage 404-1 (SGP30), the sample period for the driver (1 second), the software packages to include (sgp30 (0.1.0-1)) and the software script. Note that the software script may include the function call “require ‘sgp30’” to call the package sgp30.
[0108] In step 606, the device driver stage 404 for the accelerometer may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-2 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5E). In this example, this information may include the name of the stage (ADXL345), the bandwidth rate (102 HZ), the range (2 G), the sample period (1 minute), the software packages to include (adxl345 (0.1.0-1)) and the software script. Note that the software script may include the function call “require ‘adxl345’”.
[0109] In step 606, a script stage 404 (e.g., Lua script) may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-3 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5F). In this example, the informational pane may include a default baseline script into which the necessary code may be added to form the desired script for the stage 404-3. The other fields such as the name of the script may also be blank.
[0110] In step 608, the desired code may be added to the script within the editable script field in the informational pane 516 and the script may be named. In addition, the stage 404-3 may be linked to another stage in the pipeline 402 (e.g., to stage 404-1) by using the action required drop-down menu 530 (FIG. 5G). The stages 404-1 and 404-3 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-1, 404-3. [0111] In step 610, a script stage 404 (e.g., Lua script) may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-4 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5H). In this example, the informational pane may include a default baseline script into which the necessary code may be added to form the desired script for the stage 404-4. In some embodiments, other fields (such as the name of the script) may also be validated and/or prepopulated, or the fields may be blank and editable.
[0112] In step 612, the desired code may be added to the script within the editable script field in the informational pane 516 and the script may be named. Note that the system 100 may have linked the new stage 404-4 to stage 404-3, and that the new stage, being the script to process data from the accelerometer driver stage 404-2, may therefore need to be unlinked (using the dropdown 524) as shown in FIG. 5I. The stage 404-4 may then be linked to the stage 404-2 using the action required dropdown 530 (FIG. 5I). The stages 404-4 and 404-2 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-4, 404-2 (FIG. 5J).
[0113] In step 614, the device driver stage 404 for the LED matrix readout may be chosen from the shared stage pallet 512, after which the icon representing the stage 404-5 may appear in the layout pane 514 and the stage’s information may appear in the informational pane 516 (FIG. 5K). In this example, this information may include the name of the stage (HT16K33), the software packages to include (ht16k33 (0.1.0-1)) and the software script. Note that the software script may include the function call “require ‘ht16k33’” to call the package ht16k33.
[0114] As shown in FIG. 5K, the system 100 may have properly automatically linked the stage 404-3 to the new stage 404-5. However, since the stage 404-5 may also receive data from the stage 404-4, these two stages may also need to be linked. To accomplish this, the stage 404- 4 may be highlighted and the dropdown menu 524 may be used to link it to stage 404-5 (step 616 and FIG. 5L). The stages 404-4 and 404-5 may then be linked and a linkage arrow 526 in the layout pane 514 may appear between the linked stages 404-4, 404-5. This may result in the final pipeline 402 as shown in FIG. 5M.
[0115] In summary, the exemplary pipeline 402 shown in FIG. 5M may collect data from an air quality detector ( e.g ., a SGP30 detector) and an accelerometer (e.g, an ADXL345 accelerometer), process the data from the air quality detector using a first script (e.g, a custom Lua script), process the data from the accelerometer using a second script (e.g, a custom Lua script), and display the processed data from both scripts on an 8x8 LED matrix (e.g, an HT16K33).
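For illustration only, the exemplary pipeline of FIG. 5M could be approximated by the following minimal Lua sketch; the driver and display functions are placeholders and do not reflect the actual sgp30, adxl345, or ht16k33 package APIs.

local function read_air_quality()                       -- stage 404-1 (air quality driver)
  return { co2 = 400, tvoc = 12 }
end

local function read_accel()                             -- stage 404-2 (accelerometer driver)
  return { x = 0.1, y = 0.0, z = 9.8 }
end

local function process_air(sample)                      -- stage 404-3 (first script)
  return "CO2 " .. sample.co2
end

local function process_accel(sample)                    -- stage 404-4 (second script)
  return string.format("g %.1f", math.sqrt(sample.x^2 + sample.y^2 + sample.z^2))
end

local function led_display(lines)                       -- stage 404-5 (LED matrix collector)
  for _, line in ipairs(lines) do print(line) end       -- stand-in for the 8x8 LED readout
end

-- two parallel branches converge on the display stage
led_display({ process_air(read_air_quality()), process_accel(read_accel()) })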
IV. Edge Applications
[0116] Once one or more pipelines 402 may be developed, the pipeline(s) 402 may be packaged into edge applications 412 by bundling their source code, assets, embedded microservices and other elements together and containerizing them (placing them into microcontainers 406 and/or firmware 406). The containers 406 may then be transferred to the edge devices 300 and run.
[0117] As is known in the art, applications built to run on cloud platforms (i.e., not edge devices 300) may be first “built”, then “released” and then “run” (in this specific order).
[0118] When an application is “built” for a cloud platform, the code ( e.g ., the code repository) is converted, along with the code’s dependencies, into an executable bundle sometimes known as a “build”.
[0119] When an application is “released” for a cloud platform, the build is attached to its config (e.g., backing services, credentials to external services, per-deploy values, etc.). The “release” then includes both the build and its config and is ready for execution in the execution environment (e.g, on the cloud platform).
[0120] The released application may then be “run” on the cloud platform.
[0121] However, because the system 100 may be used to develop and deploy edge applications 412 to run on edge devices 300 (and not to run on cloud platforms), this standard workflow may not apply. Instead, because the application 412 may be edge-native, the application’s config may not merely be attached to the build, but instead, the config must be embedded with the application 412.
[0122] Accordingly, in some exemplary embodiments hereof, the system’s workflow may include the releasing of the application, followed by the building of the application, followed by the running of the application (in this order).
AMALGAMATION and PRUNING
[0123] Applications (e.g., comprising pipelines as developed above) typically need one or more libraries or library routines to support their execution. In the case, e.g., of edge or IoT devices, the devices will likely not have the ability to obtain any library or routine dynamically. Accordingly, it is desirable for software containers (comprising one or more pipelines) to include all libraries or library routines referenced / used in the pipelines.
[0124] In an amalgamation process, when software is being built, the system obtains all of the different libraries that are or may be needed in the pipelines being used. This may result in one single standalone file that may be deployed.
[0125] In a pruning process, the file may be trimmed (before being deployed) based on the routines in the file that are not in and/or needed by the pipelines being used.
[0126] Preferably a distributed container only contains routines/functions that may be used/needed, and does not contain routines/functions that are not going to be used or needed. As should be appreciated, however, the pruning process may omit (or fail to remove) some functions or routines that are not in the pipelines being used. However, it is preferable that the pruning process removes most (if not all) of the functions/routines that will not be used.
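A minimal Lua sketch of the amalgamation and pruning idea is shown below for illustration; detecting references by a plain-text search of the pipeline source is an assumption, not the system's actual build procedure.

-- amalgamated routines gathered from all libraries the pipelines might need
local library = {
  mean   = "function mean(t) --[[...]] end",
  stddev = "function stddev(t) --[[...]] end",
  fft    = "function fft(t) --[[...]] end",
}

-- source of the pipeline scripts being deployed
local pipeline_source = [[
  local m = mean(samples)
  publish(m)
]]

-- pruning: keep only routines that are actually referenced
local pruned = {}
for name, source in pairs(library) do
  if pipeline_source:find(name, 1, true) then
    pruned[name] = source
  end
end

for name in pairs(pruned) do print("kept: " .. name) end   --> kept: mean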
[0127] In exemplary embodiments hereof, this may result in the example workflow described below and as shown in FIGS. 6 and 7A-7D. [0128] First, in step 700, pipelines 402 may be developed as described in other sections of this specification.
[0129] Next, in step 702, a new application 412 may be added by selecting the add (+) button and choosing “Add new application”. After this, the new application dialog 532 (FIG. 7A) may appear that may allow the user to name the new application 412 and set the owner. The dialog 532 may also provide a number of available services that the user may add to the application. These services may include (without limitation), TSDB Edge, microcontainers, Stuart Accelerator, pipelines and other services. The user may choose the services to include by checking the appropriate checkbox(es). The dialog 532 may also provide an editable description field into which the user may add a description of the application 412. [0130] In step 704, the user may click on the Save button and the new application 412 may be created and saved by the system 100. This may launch the application information dialog 534 (FIG. 7B) that may include an overview tab 536, a resources tab 538, a settings tab 540, a releases tab 542 and other tabs relating to the chosen application 412. To open this dialog 534 for an existing application 412, the user may simply click on the applications module 508 link (FIG. 3) and choose the desired application 412 from the list of available applications 412.
[0131] The overview tab 536 (FIG. 7B) may display information such as the name of the application 412, the owner of the application 412, the services, the latest release, the pipelines 402 included in the application 412 as well as other information. [0132] In step 706 (FIG. 7C), the user may choose the resources tab 538 to view and/or add resources to the application 412. The loaded resources may be listed in the tab 538. To add additional resources, the user may click on the “Find more add-ons” button 542 and choose new add-ons from the list. For example, resources such as microcontainers 406, pipelines 402, the Stuart accelerator, TSDB Edge, and other resources may be added. Once added, the resources will be displayed in the dialog 538.
[0133] The settings tab 540 may include the name of the application 412, the pipelines 402 associated with the application 412, and an editable description of the application 412.
[0134] The releases tab 542 (FIG. 7D) may show the current release information for the chosen edge application 412. For example, the dialog may show the release version ( e.g ., vl), the release date and other information. In step 708, new releases of the application 412 may be created by selecting the “Create Release” button 544.
[0135] Once an application 412 may be released, the system 100 may perform a snapshotting of the application’s resources and embed the config with the pipelines 402. [0136] The release may then be built into a container/firmware 406 with the system 100 translating the pipelines 402 and scripts into a high-level programming language ( e.g ., C), compiling the file and generating the firmware binary file.
[0137] To test the application 412, a user may run the microcontainer runtime 406 on a single-board computer (e.g., a Raspberry Pi), download the application 412 from the orchestration hub 102 and run the application.
[0138] The build may then be transferred to an edge device 300 using a local connection
(e.g., USB or serial cable), a firmware distribution cloud service (e.g, the orchestration hub 102, the Pelion Device Management service by ARM, and/or other services that the system 100 may be integrated with) or by other installation methods, and run on the device 300. [0139] In some exemplary embodiments hereof, the container runtime 200 and build may run on less than 10 MB of memory and consume less than 25 MB of hard disk space (e.g, in Linux environments). In other exemplary embodiments, the container runtime 200 and the build may run on less than 32 K of RAM (e.g, in platform-specific embedded systems).
V. Deployment Topologies
[0140] In some exemplary embodiments hereof, the system 100 may deploy containerized pipelines 402 and associated container runtimes 200 to edge devices 300 in various stages of their lifecycles, such as devices 300 in the following categories (without limitation):
1. Edge devices 300 that have not yet been programmed (e.g, new devices 300);
2. Edge devices that have been programmed (e.g, devices that include previous programming 302, 304);
3. Other categories of devices and any combination thereof.
[0141] The devices 300 may be stand-alone devices 300 or devices as a part of a product or system.
[0142] In exemplary embodiments hereof, e.g., as shown in FIG. 8A, a containerized pipeline 402 and its associated container runtime 200 may be embedded into a device 300 that may include an operating system (OS) 302 but that may not include other applications. The OS 302 may be referred to as previous programming. The device 300 may or may not yet be deployed into the field. In this case, the embedded containerized pipeline 402 may interface and run directly on the OS 302. The container runtime 200 may interface with the pipeline 402 and the OS 302 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described herein.
[0143] In exemplary embodiments hereof, e.g., as shown in FIG. 8B, a containerized pipeline 402 and its associated container runtime 200 may be embedded into a device 300 that may include an operating system (OS) 302 and previous applications 304. The OS 302 and the previous applications 304 may be referred to as previous programming. The device 300 may or may not yet be deployed into the field. In this case, the embedded containerized pipeline 402 may interface and run directly on the OS 302. The agent 200 may interface with the pipeline 402 and the OS 302 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described in other sections.
[0144] However, it may be preferable that neither the containerized pipeline 402 nor the container runtime 200 interfere or otherwise affect the previous programming (the previous application(s) 304 and/or the OS 302). In this way, the containerized pipeline 402 may be installed and/or updated whenever required by the system 100 without affecting the previous functionality of the device 300 and without requiring the updating and/or modification of the
OS 302 and/or the previous application(s) 304. For example, the containerized pipeline 402 may be updated with new ML or AI models (and new pipelines 402) as they may become available from the orchestration hub 102 and that once installed and/or updated (as frequently as necessary) may run alongside the previous applications 304 that may have shipped with the device 300.
[0145] In exemplary embodiments hereof, prior to embedding the containers 406 and the container runtimes 200 into a device 300 that may include a pre-existing operating system (OS) 302 and/or previous applications 304, the system 100 may delete, erase, overwrite, or otherwise render inoperable any or all of the pre-existing OS 302 and/or previous applications 304 on the device 300. The system 100 may then embed the containerized pipelines 402 and the associated container runtimes 200 into the device 300. The result is shown in FIG. 8C. The containers 406 may include a containerized operating system and everything that the pipelines 402 may require to run on the devices 300. The container runtime 200 may interface with the pipeline 402 to run the pipeline 402 on the device 300 and to perform other operations regarding the pipeline 402 as described herein.
[0146] In any of these cases (FIG. 8A, FIG. 8B and/or FIG. 8C), once the container runtime 200 and the microcontainer 406 are installed on a particular device, future microcontainer updates (releases) may be installed on the device without replacement or modification of the container runtime 200. That is, once the container runtime and microcontainer are installed, future microcontainer updates (releases) may be downloaded without the container runtime and/or the pre-existing microcontainer having to be upgraded and/or modified.
[0147] For example, an application release "BL v2" may be running on the device 300 from flash, and the system 100 may download a release ( e.g ., " BL v3") which may contain an extra pipeline 402, an updated model text file or other new element. The new release (“v3”) may then be stored to any available storage on the device 300 (e.g., Flash or MicroSD), and the new microcontainers 406 on the storage device may augment the microcontainers 406 already embedded. During this process, the OS 302 and/or the previous programming 304 may not require any updating and/or other modification and may not be adversely affected by the agent 200 and/or the pipelines 402 (original or new release).
[0148] In addition, in either case (FIG. 8A and/or FIG. 8B), the device 300 may be over provisioned with additional containers 406 and/or additional associated agents 200. These additional containers 406 may not necessarily include pipelines 402 upon deployment into the device 300, but instead may be placeholders for future pipelines 402 (i.e., future releases to add new functionalities to the device 300) to be realized and implemented at later dates. When a new functionality for the devices 300 is determined, the corresponding pipeline 402 may be developed, containerized and augmented into the awaiting container 406 already installed on the device 300. This may allow developers to later monetize the devices 300 by adding or adjusting the device workloads as the needs evolve.
[0149] In all cases, the pipelines 402 need not rely on a connection to the orchestration hub 102 or other devices for execution and may run untethered in disconnected environments. [0150] In some exemplary embodiments hereof, the system 100 may include different container runtime 200 deployment topologies such as a 1:1 agent 200 to edge device 300 deployment, a 1:N agent 200 to edge device 300 deployment, a hybrid deployment and other types of deployment topologies.
[0151] Using a 1:1 agent 200 to edge device 300 deployment topology (as shown, e.g., in FIG. 9A), the orchestration hub 102 may download the software agents 200 to the devices 300 via a local connection (e.g., USB or serial cable) as represented by arrows B1, B2, ... Bn, a network 104 (e.g., the Internet) as represented by download lines B1’, B2’, ... Bn’ or by other methods. The agent 200 may establish communication protocols within the device 300 such that the orchestration hub 102 may then communicate with the software agents 200 as represented by communication lines A1, A2, ... An. The software agents 200 may receive containerized pipelines 402 from the orchestration hub 102 and enable the edge devices 300 to run the pipelines 402. The software agents 200 may also upload data (e.g., results from edge device data processing) from the device 300 to the orchestration hub 102 as represented by communication lines A1, A2, ... An.
[0152] Using a 1:N agent 200 to edge device 300 deployment topology (as shown, e.g., in FIG. 9B), the agent 200 may also direct the edge device 300-1 to communicate with other edge devices 300-2, 300-n as represented by communication lines C1-2 and C1-n. This may allow the agent 200 to collect data from locked-down devices 300 (devices 300 that the agents 200 cannot be directly installed onto) using remote communication protocols (such as SNMP), and to then run analytics or data processing on the collected data on the device 300 hosting the agent 200.
[0153] Using a hybrid deployment topology (as shown, e.g., in FIG. 9C), some agents 200 may only communicate with the orchestration hub 102 (e.g., agent 200-1) and other agents 200 may communicate with the orchestration hub 102 and with other edge devices 300 (e.g, agent 200-2 may communicate with the orchestration hub 102 and with edge devices 300- 3, 300-n).
[0154] In any of the exemplary embodiments herein, the system 100 may include data acquisition capabilities including ADLink Data River, Sonim, Influx, MQTT, Redis, SNMP, CPU, ModBus, CAN, I2C, SPI, OPC-UA, SCADA, CoAP, Kafka and others.
VI. Administration
[0155] In exemplary embodiments hereof, the administration unit 504 may include at least some of the following modules:
1. One or more container runtime modules;
2. A runtimes streams module;
3. A users module;
4. An organizations module;
5. An API keys module; and
6. A sessions module.
VII. Example Use Cases
Use Case: Oil and Gas - Directional Drilling and Geo-steering
[0156] In one example implementation, the system 100 was used to provide real-time edge analytics during the process of directional drilling and geo-steering at oil and gas drilling facilities.
[0157] Geo-steering is the process of adjusting the borehole position (inclination and azimuth angles) in real-time during the drilling of the borehole in order to reach one or more geological targets. These adjustments may be based on geological information gathered from drill-head sensors (edge devices 300) while drilling. [0158] The drill-head sensors 300 may include, without limitation one or more of: weight on bit (WOB) sensor(s), differential pressure sensor(s), shock sensor(s), vibration sensor(s), torque sensor(s), temperature sensor(s), accelerometer (inclinometer) sensor(s), magnetometer (azimuth) sensor(s), gamma ray sensor(s), and/or others.
[0159] Prior to the implementation of the system 100, logging while drilling (LWD) and measurement while drilling (MWD) data may be collected downhole by the sensors 300, converted into amplitude- and/or frequency-modulated pulses and transmitted up through the mud column by a downhole mud pulser. Depending on the ground composition and the technology used to transmit and receive the pulses, the baud rates may be very slow (e.g., 100 bits/sec). In addition, the mud-pulse transmissions may include a great deal of latency, especially with increased well depth. These limitations may limit the data that may be transmitted and, in many cases, the data may not be received topside until the drill head has already been removed from the bore (thus limiting the usefulness of the data).
[0160] As represented in FIG. 10A, the data received topside may include raw data plotted on an X-Y and/or polar coordinate system or otherwise. The same telemetry system may be used to transmit signal commands from the surface to the downhole sensors.
[0161] Once topside, the raw data may be displayed for the drilling engineer to monitor in real time, looking for anomalies that may suggest a problem or that may provide insight to the drilling effectiveness. However, the recognition of problems and/or interpretation of the raw data to discover patterns may be subjective and heavily reliant on the experience of the engineer, and as such, may be difficult to adequately perform in real time. A typical procedure includes calculating the best path prior to the drilling using limited information, performing the drilling using these calculations, and then sending the data to a centralized data center after the drilling is complete to determine the success and effectiveness of the work. This process may typically take weeks if not months, thereby only confirming the status of the borehole long after the fact. [0162] In one exemplary implementation of the system 100, drill-head sensors (edge devices 300) at a drilling location were retrofitted with software agents 200 and containerized software pipelines 402 (as shown, e.g., in FIG. 10B). The pipelines 402 programmed the devices to apply transforms and machine learning algorithms to the raw data to facilitate a better understanding of the data in real time and to allow for adjustments to be made to the drilling path based on the data.
[0163] Implementation of the system 100 caused no interruption of existing well completion processes.
[0164] In one example, the software pipelines 402 processed the data into meaningful parameters such as mechanical specific energy (MSE). As is known in the art, MSE may reflect the energy required to remove a unit volume of rock. For optimum drilling efficiency, the objective may be to minimize the MSE and to maximize the rate of penetration (ROP) of the drill-head. Once the MSE data were made available in real time, the drillers were able to optimize the MSE by adjusting the weight on bit (WOB), torque, ROP, and drill bit revolutions per minute.
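For illustration only, a processing stage of this kind might compute MSE as in the following Lua sketch; Teale's commonly used form of the MSE equation is assumed here, and the numeric inputs are illustrative values rather than field data.

-- MSE (psi) = WOB/A + (120 * pi * RPM * Torque) / (A * ROP)
-- with WOB in lbf, bit area A in in^2, torque in ft-lbf, RPM in rev/min, ROP in ft/hr
local function mechanical_specific_energy(wob, bit_area, torque, rpm, rop)
  return wob / bit_area + (120 * math.pi * rpm * torque) / (bit_area * rop)
end

-- illustrative values only, not field data
print(mechanical_specific_energy(25000, 117.9, 8000, 120, 60))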
[0165] In another example, the software pipelines 402 applied machine learning (ML) algorithms and analytics to the data to discover meaningful patterns and to facilitate best path predictions for the drill-head based on the real time data received from the devices 300. The edge analytics also facilitated the optimization of the drilling rates and the minimization of the drilling hazards.
[0166] In another example, the software pipelines 402 applied smoothing to the data to reduce the noise ( e.g ., noise introduced by the attenuation through the mud column or from other sources) thus making the data streams easier to view and understand.
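As a non-limiting illustration, a smoothing stage could apply an exponentially weighted moving average (EWMA, mentioned earlier as an example stage type) as in the following Lua sketch; the smoothing factor is arbitrary and the stage contract is assumed.

local alpha, smoothed = 0.2, nil      -- smoothing factor chosen arbitrarily

local function smooth(sample)
  if smoothed == nil then
    smoothed = sample                  -- seed with the first raw reading
  else
    smoothed = alpha * sample + (1 - alpha) * smoothed
  end
  return smoothed
end

-- example: a noisy spike at 30.0 is damped in the smoothed stream
for _, raw in ipairs({ 10.0, 14.0, 9.5, 30.0, 11.0 }) do
  io.write(string.format("%.2f ", smooth(raw)))
end
print()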
[0167] In another example, the software pipelines 402 correlated the data across the different sensors and different feeds.
[0168] The implementation of the system 100 in this use case resulted in improved efficiency across multiple variables. For example, the time to drill the wells was reduced by half (50% reduction in time). In addition, 42% fewer drilling rigs were able to drill 30% more wells in the same amount of time, resulting in a 46% reduction in wellsite personnel (massive cost savings).
Use Case: Oil and Gas - Well Construction and Integrity
[0169] Well construction includes the insertion of the casing, securing the casing with cement, perforation, as well as other initiatives (as depicted, e.g., in FIG. 10C). The casings must be cemented correctly within the well, and once secured, must be monitored during the fracking operations to ensure their continued integrity.
[0170] In one example implementation, the system 100 was used to provide real-time edge analytics during well construction and use.
[0171] Sensors (edge devices 300) such as one or more treating pressure sensor(s), pump rate sensor(s), temperature sensor(s), sand concentration sensor(s), and other types of sensors 300 were deployed within the well to monitor the pouring of the casing cement. In one exemplary implementation of the system 100, the sensors 300 were retrofitted with software agents 200 and containerized software pipelines 402 that programmed the devices 300 to calculate real-time effective pressure and to apply machine learning (ML) algorithms to the collected data to develop pump rate models, sand concentration models and other types of models to predict adjustments to be made during the process. The ML algorithms were also used to predict the performance of the casings when the fracking moved forward.
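The description above does not define how the effective pressure was calculated. Purely as an illustrative sketch, a pipeline stage could estimate the pressure acting downhole from surface readings using the standard 0.052 psi/(lb/gal·ft) hydrostatic conversion; the function names and example values below are assumptions for this sketch.

```python
def hydrostatic_pressure_psi(fluid_density_ppg, true_vertical_depth_ft):
    """Hydrostatic pressure (psi) of a fluid column, using the standard
    oilfield factor of 0.052 psi per (lb/gal * ft)."""
    return 0.052 * fluid_density_ppg * true_vertical_depth_ft

def effective_bottomhole_pressure_psi(surface_pressure_psi, fluid_density_ppg,
                                      true_vertical_depth_ft, friction_loss_psi):
    """Surface treating pressure plus hydrostatic head minus friction losses."""
    return (surface_pressure_psi
            + hydrostatic_pressure_psi(fluid_density_ppg, true_vertical_depth_ft)
            - friction_loss_psi)

# Hypothetical job: 3,500 psi at surface, 15.8 ppg slurry, 9,000 ft TVD, 600 psi friction
print(round(effective_bottomhole_pressure_psi(3_500, 15.8, 9_000, 600)), "psi")
```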
[0172] Implementation of the system 100 caused no interruption of existing well construction processes.
[0173] In another example, the software pipelines 402 applied smoothing to the data to reduce the noise, thus making the data streams easier to view and understand.
[0174] In another example, the software pipelines 402 correlated the data with litho-facies map data to better understand the sedimentation processes and their deposits in the area.
[0175] The implementation of the system 100 in this use case resulted in an improved extraction rate of 30%-40%.
Use Case: Oil and Gas - Completion Process
[0176] Well completion is the process of finalizing a well for production (or injection).
This may involve preparing the bottom of the borehole, installing the production tubing and the associated downhole tools, perforation, stimulation and other activities (as shown, e.g., in FIG. 10D).
[0177] In one example implementation, the system 100 was used to provide real-time edge analytics during the well completion process.
[0178] Downhole pressure, temperature, proppant concentration, chemical concentration and other types of gauges (edge devices 300) may be secured to the outside of the tubing string to collect data and to send it topside electrically, via fiber optics or through acoustic signals in the tubing wall.
[0179] In one exemplary implementation of the system 100, the pressure and temperature sensors (edge devices 300) were retrofitted with software agents 200 and containerized software pipelines 402. The pipelines 402 programmed the devices 300 to apply transforms and machine learning algorithms to the raw data to facilitate a better understanding of the data in real time and to allow for adjustments to be made to the completion in-process based on the data.
[0180] Implementation of the system 100 caused no interruption of existing well completion processes.
[0181] Nolte plots are used to interpret net-pressure behavior in the well to determine estimates of fracture growth patterns, where net pressure is the pressure in the fracture minus the in-situ stress. In one example, the software pipelines 402 processed the data collected by the edge devices 300 into meaningful parameters such as net pressure and Nolte plots (as shown, e.g., in FIG. 10E). The processed data were then transmitted topside and made immediately available to the drilling engineers. The engineers were then able to make real-time adjustments to the completion processes as the data was retrieved. This optimized the completion and allowed the engineers to avert critical problems such as when the fracture height may be increasing too rapidly (referred to as Mode IV) in which case the fracture treatment was flushed out or terminated.
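Purely as an illustrative sketch, a pipeline stage could derive net pressure and a Nolte-style log-log slope as shown below; the closure stress, sample data and the use of a sustained negative slope as a rapid-height-growth warning are assumptions for the example, not parameters taken from the implementation above.

```python
import math

IN_SITU_STRESS_PSI = 6_500.0   # hypothetical closure stress for the treated interval

def net_pressure(bottomhole_pressure_psi):
    """Net pressure: the pressure in the fracture minus the in-situ stress."""
    return bottomhole_pressure_psi - IN_SITU_STRESS_PSI

def loglog_slope(times_min, net_pressures_psi):
    """Least-squares slope of log(net pressure) versus log(time)."""
    xs = [math.log10(t) for t in times_min]
    ys = [math.log10(p) for p in net_pressures_psi]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    return numerator / denominator

times = [10, 20, 30, 40]                      # minutes of pumping (hypothetical)
bottomhole = [7_400, 7_300, 7_150, 7_000]     # psi (hypothetical)
slope = loglog_slope(times, [net_pressure(p) for p in bottomhole])
if slope < 0:
    print(f"log-log slope {slope:.2f}: possible rapid height growth, review treatment")
```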
[0182] In another example, the software pipelines 402 applied machine learning (ML) algorithms and analytics to the Nolte plot data to provide predictions of problematic pressures within the fractures given the real-time conditions of the well.
[0183] In another example, the software pipelines 402 calculated real-time changes in the injection rates, the proppant concentrations, fluid viscosities and other parameters to facilitate the optimization of the stimulation effectiveness.
[0184] In another example, the software pipelines 402 applied smoothing to the data to reduce the noise introduced by the attenuation through the mud column, thus making the data streams easier to view and understand.
[0185] In another example, the software pipelines 402 correlated the data across the different sensors and different feeds.
[0186] The implementation of the system 100 in this use case resulted in improved well productivity, a significant increase in hydrocarbon production and a 100% savings on the overall completion processes.
Use Case: Oil and Gas - Artificial Lift and Pump Operations
[0187] In one example implementation, the system 100 was used to provide real-time edge analytics during pump operations including artificial lift.
[0188] Artificial lift is the process used to increase the pressure within the reservoir to extract the oil when the natural drive energy is not strong enough to push the oil to the surface. Overall pump operations include methodologies to keep pumps active and productive, and to minimize downtime (as shown, e.g., in FIG. 10F).
[0189] In one exemplary implementation of the system 100, the pressure, temperature, flow, suction, level and other sensors (edge devices 300) were retrofitted with software agents 200 and software pipelines 402. The pipelines 402 programmed the devices 300 to apply machine learning (ML) algorithms to the raw data to predict when artificial lift may be required, and what level of lift may be needed. This optimized the pump's productivity and yield.
[0190] The devices 300 were also programmed to apply machine learning (ML) algorithms to predict maintenance requirements for the pumps prior to pump failure, thus avoiding broken wells. This implementation significantly reduced pump downtimes, thus increasing pump production, yield and revenue.
[0191] Once the ML models were optimized for a smaller group of pumps, the models were pushed to all of the pumps across the oil field, thus lowering the aggregate maintenance costs while significantly improving production volumes and overall revenue.
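As a deliberately simplified sketch of the kind of on-device check such a pipeline might run, the example below flags readings that drift outside a rolling statistical baseline; the class name, window size and threshold are illustrative assumptions, and a z-score test stands in for the ML maintenance models described above.

```python
from collections import deque
import statistics

class PumpAnomalyDetector:
    """Flag pump readings that drift outside a rolling statistical baseline."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)   # recent readings used as the baseline
        self.threshold = threshold            # z-score above which we alert

    def update(self, reading):
        alert = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            alert = abs(reading - mean) / stdev > self.threshold
        self.history.append(reading)
        return alert

detector = PumpAnomalyDetector()
for vibration in [0.8, 0.9, 0.85, 0.82, 0.88, 0.84, 0.86, 0.83, 0.87, 0.85, 2.4]:
    if detector.update(vibration):
        print(f"vibration {vibration}: schedule a maintenance check")
```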
Use Case: Oil and Gas - Building and Using ML Models
[0192] In at least some of the use cases presented, the edge devices were programmed by the system 100 to calculate a variety of ML models that were then used to predict behavior of the various parameters of interest during the well’s processes described. In exemplary embodiments hereof, the predictions from each ML model were compared to actual results upon completion of the processes to determine which ML models were the most accurate in predicting the well’s behavior.
[0193] For example, k-means, LVQ, SVM, CNN, ARIMA, RNN-LSTM and other ML models may be implemented by the system 100 to find patterns in the data and to create insights and/or predictions. Once the comparisons are made, different pieces of data from the different models may be combined into one or more insights that may best represent the behavior of the well's processes. Using the system 100 integrated into the edge devices 300 within the well, these calculations may happen as frequently as desired (e.g., every second or faster) to fine-tune the ML models in real time.
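For illustration only, the comparison and combination step could be sketched as an inverse-error weighting of candidate model outputs against the observed results; the model names, data and weighting scheme below are assumptions for the example rather than the method used by the system 100.

```python
def mean_absolute_error(predicted, actual):
    """Average absolute gap between a model's predictions and the observed values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def blend_weights_by_accuracy(model_predictions, actual):
    """Give each model a weight proportional to the inverse of its past error."""
    errors = {name: mean_absolute_error(preds, actual)
              for name, preds in model_predictions.items()}
    inverse = {name: 1.0 / (err + 1e-9) for name, err in errors.items()}
    total = sum(inverse.values())
    return {name: value / total for name, value in inverse.items()}

observed = [910, 905, 890, 880]               # hypothetical measured values
candidates = {
    "arima":    [912, 903, 893, 879],         # hypothetical model outputs
    "rnn_lstm": [905, 900, 884, 876],
    "svm":      [930, 925, 915, 900],
}
print(blend_weights_by_accuracy(candidates, observed))
```

The resulting weights could then be used to combine the next round of predictions, favoring whichever models have tracked the well's actual behavior most closely.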
[0194] The resulting insights may improve the well’s efficiency and productivity and may allow for better automation of complex processes.
VIII. Computing
[0195] The functionalities, applications, services, mechanisms, operations, and acts shown and described above are implemented, at least in part, by software running on one or more computers (e.g., the orchestration hub 102).
[0196] Programs that implement such methods (as well as other types of data) may be stored and transmitted using a variety of media (e.g., computer-readable media) in a number of manners. Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments. Thus, various combinations of hardware and software may be used instead of software only.
[0197] One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that the various processes described herein may be implemented by, e.g., appropriately programmed computers, special purpose computers and computing devices. One or more such computers or computing devices may be referred to as a computer system.
[0198] FIG. 11 is a schematic diagram of a computer system 1100 upon which embodiments of the present disclosure may be implemented and carried out.
[0199] According to the present example, the computer system 1100 includes a bus 1102
(i.e., interconnect), one or more processors 1104, a main memory 1106, read-only memory 1108, removable storage media 1110, mass storage 1112, and one or more communications ports 1114. Communication port(s) 1114 may be connected to one or more networks (not shown) by way of which the computer system 1100 may receive and/or transmit data.
[0200] As used herein, a “processor” means one or more microprocessors, central processing units (CPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture. An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.
[0201] Processor(s) 1104 can be any known processor. Typically, Intel x86 processors are used for cloud and gateways, ARM A-class processors may be used for gateways and larger IoT devices, and ARM M-class processors may be used for IoT devices. Communications port(s) 1114 can be any of an Ethernet port, a Gigabit port using copper or fiber, or a USB port, and the like. Communications port(s) 1114 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system 1100 connects. The computer system 1100 may be in communication with peripheral devices (e.g., display screen 1116, input device(s) 1118) via Input / Output (I/O) port 1120.
[0202] Main memory 1106 can be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art. Read-only memory (ROM) 1108 can be any static storage device(s) such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor(s) 1104. Mass storage 1112 can be used to store information and instructions. For example, hard disk drives, an optical disc, an array of disks such as Redundant Array of Independent Disks (RAID), or any other mass storage devices may be used.
[0203] Bus 1102 communicatively couples processor(s) 1104 with the other memory, storage and communications blocks. Bus 1102 can be any bus, including an I2C (Inter-Integrated Circuit) bus, an SPI (Serial Peripheral Interface) bus, a PCI/PCI-X bus, a SCSI bus, a Modbus bus, a Controller Area Network (CAN) bus, a Universal Serial Bus (USB) based system bus, or another bus type, depending on the storage devices used, and the like.
[0204] I2C busses are frequently used for sensors, and SPI busses are used for some sensors and often for memory.
[0205] Removable storage media 1110 can be any kind of external storage, including non-volatile memory cards (such as a microSD card or the like), hard-drives, floppy drives, USB drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Versatile Disk - Read Only Memory (DVD-ROM), etc.
[0206] Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. As used herein, the term “machine-readable medium” refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random-access memory, which typically constitutes the main memory of the computer. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.
[0207] The machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions. Moreover, embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., modem or network connection).
[0208] Various forms of computer readable media may be involved in carrying data (e.g., sequences of instructions) to a processor. For example, data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.
[0209] A computer-readable medium can store (in any appropriate format) those program elements which are appropriate to perform the methods.
[0210] As shown, main memory 1106 is encoded with application(s) 1122 that support(s) the functionality as discussed herein (the application(s) 1122 may be an application(s) that provides some or all of the functionality of the services / mechanisms described herein). Application(s) 1122 (and/or other resources as described herein) can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that supports processing functionality according to different embodiments described herein.
[0211] During operation of one embodiment, processor(s) 1104 accesses main memory 1106 via the use of bus 1102 in order to launch, run, execute, interpret or otherwise perform the logic instructions of the application(s) 1122. Execution of application(s) 1122 produces processing functionality of the service related to the application(s). In other words, the process(es) 1124 represent one or more portions of the application(s) 1122 performing within or upon the processor(s) 1104 in the computer system 1100.
[0212] It should be noted that, in addition to the process(es) 1124 that carries (carry) out operations as discussed herein, other embodiments herein include the application 1122 itself (i.e., the un-executed or non-performing logic instructions and/or data). The application 1122 may be stored on a computer readable medium (e.g., a repository) such as a disk or in an optical medium. According to other embodiments, the application 1122 can also be stored in a memory type system such as in firmware, read only memory (ROM), or, as in this example, as executable code within the main memory 1106 (e.g., within Random Access Memory or RAM). For example, application(s) 1122 may also be stored in removable storage media 1110, read-only memory 1108, and/or mass storage device 1112.
[0213] Those of ordinary skill in the art will understand that the computer system 1100 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.
[0214] As discussed herein, embodiments of the present invention include various steps or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware. The term “module” refers to a self-contained functional component, which can include hardware, software, firmware or any combination thereof.
[0215] One of ordinary skill in the art will readily appreciate and understand, upon reading this description, that embodiments of an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.
[0216] Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.
Conclusion
[0217] Where a process is described herein, those of ordinary skill in the art will appreciate that the process may operate without any user intervention. In another embodiment, the process includes some human intervention (e.g., a step is performed by or with the assistance of a human).
[0218] As used herein, including in the claims, the phrase “at least some” means “one or more,” and includes the case of only one. Thus, e.g., the phrase “at least some ABCs” means “one or more ABCs”, and includes the case of only one ABC.
[0219] As used herein, including in the claims, the term “at least one” should be understood as meaning “one or more”, and therefore includes both embodiments that include one or multiple components. Furthermore, dependent claims that refer to independent claims that describe features with “at least one” have the same meaning, both when the feature is referred to as “the” and “the at least one”.
[0220] As used in this description, the term “portion” means some or all. So, for example, “A portion of X” may include some of “X” or all of “X”. In the context of a conversation, the term “portion” means some or all of the conversation.
[0221] As used herein, including in the claims, the phrase “using” means “using at least,” and is not exclusive. Thus, e.g., the phrase “using X” means “using at least X.” Unless specifically stated by use of the word “only”, the phrase “using X” does not mean “using only X.”
[0222] As used herein, including in the claims, the phrase “based on” means “based in part on” or “based, at least in part, on,” and is not exclusive. Thus, e.g., the phrase “based on factor X” means “based in part on factor X” or “based, at least in part, on factor X.” Unless specifically stated by use of the word “only”, the phrase “based on X” does not mean “based only on X.”
[0223] In general, as used herein, including in the claims, unless the word “only” is specifically used in a phrase, it should not be read into that phrase.
[0224] As used herein, including in the claims, the phrase “distinct” means “at least partially distinct.” Unless specifically stated, distinct does not mean fully distinct. Thus, e.g., the phrase “X is distinct from Y” means that “X is at least partially distinct from Y,” and does not mean that “X is fully distinct from Y.” Thus, as used herein, including in the claims, the phrase “X is distinct from Y” means that X differs from Y in at least some way.
[0225] It should be appreciated that the words “first,” “second,” and so on, in the description and claims, are used to distinguish or identify, and not to show a serial or numerical limitation. Similarly, letter labels (e.g., “(A)”, “(B)”, “(C)”, and so on, or “(a)”, “(b)”, and so on) and/or numbers (e.g., “(i)”, “(ii)”, and so on) are used to assist in readability and to help distinguish and/or identify, and are not intended to be otherwise limiting or to impose or imply any serial or numerical limitations or orderings. Similarly, words such as “particular,”
“specific,” “certain,” and “given,” in the description and claims, if used, are to distinguish or identify, and are not intended to be otherwise limiting.
[0226] As used herein, including in the claims, the terms “multiple” and “plurality” mean
“two or more,” and include the case of “two.” Thus, e.g., the phrase “multiple ABCs,” means “two or more ABCs,” and includes “two ABCs.” Similarly, e.g., the phrase “multiple PQRs,” means “two or more PQRs,” and includes “two PQRs.”
[0227] The present invention also covers the exact terms, features, values and ranges, etc. in case these terms, features, values and ranges etc. are used in conjunction with terms such as about, around, generally, substantially, essentially, at least etc. (i.e., "about 3" or “approximately 3” shall also cover exactly 3 or "substantially constant" shall also cover exactly constant).
[0228] As used herein, including in the claims, singular forms of terms are to be construed as also including the plural form and vice versa, unless the context indicates otherwise. Thus, it should be noted that as used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0229] Throughout the description and claims, the terms “comprise”, “including”,
“having”, and “contain” and their variations should be understood as meaning “including but not limited to”, and are not intended to exclude other components unless specifically so stated.
[0230] It will be appreciated that variations to the embodiments of the invention can be made while still falling within the scope of the invention. Alternative features serving the same, equivalent or similar purpose can replace features disclosed in the specification, unless stated otherwise. Thus, unless stated otherwise, each feature disclosed represents one example of a generic series of equivalent or similar features.
[0232] Use of exemplary language, such as “for instance”, “such as”, “for example”
(“e.g.,”) and the like, is merely intended to better illustrate the invention and does not indicate a limitation on the scope of the invention unless specifically so claimed.
[0233] While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Appendix I
TABLE I: Example shared stages

Claims

What is claimed:
1. A method comprising:
(A) embedding a container runtime on a microcontroller;
(B) embedding a first at least one container on said microcontroller, said first at least one container comprising a first at least one software pipeline, wherein said container runtime provides an execution environment for said first at least one container on said microcontroller; and then
(C) embedding a second at least one container on the microcontroller, said second at least one container comprising a second at least one software pipeline, wherein said second at least one software pipeline is distinct from the first at least one software pipeline, and wherein said container runtime provides an execution environment for said second at least one container on said microcontroller.
2. The method of claim 1, wherein said container runtime replaces previous programming on said microcontroller.
3. The method of claim 1, wherein the previous programming comprises an operating system (OS) for the microcontroller.
4. The method of claim 3, wherein said previous programming comprises at least one application distinct from the OS.
5. The method of claim 1, further comprising: after said embedding in (B), the first at least one software pipeline is run on the microcontroller.
6. The method of claim 1, further comprising: after said embedding in (C), the second at least one software pipeline is run on the microcontroller.
7. The method of claim 1, wherein running of the first at least one software pipeline on said microcontroller is controlled, at least in part, by the container runtime.
8. The method of claim 1, wherein running of the second at least one software pipeline on said microcontroller is controlled, at least in part, by the container runtime.
9. The method of claim 1, wherein the embedding in (B) and/or (C) is controlled, at least in part, by the container runtime.
10. The method of claim 1, wherein the microcontroller is associated with a device.
11. The method of claim 10, wherein the device is an Internet-of-things (IoT) device.
12. The method of claim 10, wherein the embedding in (A) and/or (B) occurs after the device has been deployed.
13. The method of claim 10, wherein the embedding in (A) and/or (B) occurs before the device has been deployed.
14. The method of claim 10, wherein the embedding in (A) occurs before the device has been deployed and wherein the embedding in (B) occurs after the device has been deployed.
15. The method of claim 1, wherein the software pipelines obtain sensor data from one or more sensors.
16. The method of claim 15, wherein the microcontroller is associated with a device, and wherein the one or more sensors are on and/or co-located with the device.
17. The method of claim 1, wherein the microcontroller controls one or more actuators.
18. The method of claim 17, wherein the one or more actuators are on and/or co-located with the microcontroller.
19. The method of claim 1, where the software pipelines obtain inputs from one or more input devices.
20. The method of claim 19, wherein the one or more input devices include one or more human interface devices.
21. The method of claim 19, wherein the microcontroller is associated with a device, and wherein one or more input devices are on and/or co-located with the device.
22. The method of claim 1, wherein the software pipelines run when said microcontroller is connected to at least one other device.
23. The method of claim 1, wherein the software pipelines run when said microcontroller is disconnected from other devices.
24. The method of claim 1, wherein the software pipelines include at least one mechanism for maintaining data on said microcontroller when said microcontroller is disconnected from other devices.
25. The method of claim 1, wherein the software pipelines include at least one mechanism for transmitting data from said microcontroller.
26. The method of claim 1, wherein the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
27. The method of claim 1, wherein the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
28. The method of claim 1, wherein the software pipelines comprise an application for an already-programmed device.
29. The method of claim 2, wherein the previous programming comprises firmware and/or scripts from at least one prior installation.
30. The method of claim 1, wherein (i) said first at least one software pipeline, and/or (ii) said second at least one software pipeline change or augment at least one functionality of said microcontroller.
31. The method of claim 1, wherein said microcontroller comprises hardware including at least one processor and a memory.
32. The method of claim 10, wherein the device is provisioned with multiple containers.
33. The method of claim 32, wherein at least one of the multiple containers is a placeholder for a future pipeline.
34. The method of claim 1, wherein the first at least one container comprises first one or more library routines and/or functions potentially used or needed by the first at least one software pipeline.
35. The method of claim 34, wherein the first one or more library routines and/or functions include all routines and/or functions needed by the first at least one software pipeline.
36. The method of claim 34, wherein the first one or more library routines and/or functions were determined based on routines and/or functions in the first at least one software pipeline.
37. The method of claim 34, wherein the first one or more library routines and/or functions were determined from at least one library and exclude routines and/or functions from said library that are not in the first at least one software pipeline.
38. The method of claim 1, further comprising: repeating acts (A) and (B) on multiple microcontrollers.
39. The method of claim 38, wherein the multiple microcontrollers are homogeneous.
40. The method of claim 38, wherein the multiple microcontrollers are heterogeneous.
41. A method comprising:
(A) providing at least one container runtime, wherein said at least one container runtime includes a first particular container runtime for a first type of device;
(B) creating a first at least one software pipeline;
(C) on a first device of said first type, embedding the first particular container runtime;
(D) on said first device, embedding a first container comprising the first at least one software pipeline, wherein said first particular container runtime provides an execution environment for said first container on devices of said first type;
(E) creating a second at least one software pipeline, distinct from the first at least one software pipeline; and
(F) embedding, on said first device, a second container comprising the second at least one software pipeline, wherein said first particular container runtime provides an execution environment for said second container on devices of said first type.
42. The method of claim 41, wherein said at least one container runtime comprises a plurality of container runtimes, including a container runtime for each of a plurality of distinct types of devices.
43. The method of claim 42, wherein said plurality of distinct types of devices comprise a plurality of Internet-of-things (IoT) devices.
44. The method of claim 42, wherein said plurality of distinct types of devices include devices with distinct processors and/or with distinct types of processors.
45. The method of claim 42, wherein said at least one container runtime includes a second particular container runtime for a second type of device, distinct from said first type of device, the method further comprising:
(C2) on a second device of said second type, embedding the second particular container runtime; and
(D2) on said second device, embedding a first container comprising the first at least one software pipeline, wherein said second particular container runtime provides an execution environment for said first container on devices of said second type.
46. The method of claim 45, further comprising:
(F2) embedding, on said second device, said second container comprising the second at least one software pipeline, wherein said second particular container runtime provides an execution environment for said second container on devices of said second type.
47. The method of claim 41, wherein said embedding of the first particular container runtime in (C) does not replace previous programming in said first device.
48. The method of claim 41, wherein said embedding of the second container in (F) replaces the first container on the first device.
49. The method of claim 41, wherein the first at least one software pipeline is run on the first device.
50. The method of claim 41, wherein the second at least one software pipeline is run on the first device.
51. The method of claim 41, wherein running of the first at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime, and wherein the running of the second at least one software pipeline on the first device is controlled, at least in part, by the first particular container runtime.
52. The method of claim 41, wherein the embedding in (D) and/or (F) is controlled, at least in part, by the first particular container runtime.
53. The method of claim 41, wherein the first device is an Internet-of-things (IoT) device.
54. The method of claim 41, wherein the software pipelines obtain sensor data from one or more sensors.
55. The method of claim 54, wherein the one or more sensors are on and/or co-located with the first device.
56. The method of claim 41, wherein the software pipelines control one or more actuators.
57. The method of claim 56, wherein the one or more actuators are on and/or co-located with the device.
58. The method of claim 41, where the software pipelines obtain inputs from one or more input devices.
59. The method of claim 58, wherein the one or more input devices include one or more human interface devices.
60. The method of claim 58, wherein one or more input devices are on and/or co-located with the device.
61. The method of claim 41, wherein the software pipelines run when said first device is connected to at least one other device.
62. The method of claim 41, wherein the software pipelines run when said first device is disconnected from other devices.
63. The method of claim 41, wherein the software pipelines include at least one mechanism for maintaining data on said first device when said first device is disconnected from other devices.
64. The method of claim 41, wherein the software pipelines include at least one mechanism for transmitting data from said first device.
65. The method of claim 41, wherein the software pipelines include at least one mechanism for obtaining data from at least one other microcontroller.
66. The method of claim 41, wherein the software pipelines include at least one mechanism for providing data to at least one other microcontroller.
67. A computer-readable medium with one or more computer programs stored therein that, when executed by one or more processors of a device, cause the one or more processors to perform the method of claim 1.
68. The computer-readable medium of claim 67, wherein the computer- readable medium is non-transitory.
PCT/IB2020/057689 2019-08-16 2020-08-14 System and method for programming devices WO2021033110A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20854309.0A EP4014113A4 (en) 2019-08-16 2020-08-14 System and method for programming devices
US17/673,732 US11874692B2 (en) 2019-08-16 2022-02-16 Method for deploying containerized security technologies on embedded devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962887972P 2019-08-16 2019-08-16
US62/887,972 2019-08-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/209,100 Continuation-In-Part US11875167B2 (en) 2019-08-16 2021-03-22 Method for deploying containerized protocols on very small devices

Publications (1)

Publication Number Publication Date
WO2021033110A1 true WO2021033110A1 (en) 2021-02-25

Family

ID=74660680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/057689 WO2021033110A1 (en) 2019-08-16 2020-08-14 System and method for programming devices

Country Status (2)

Country Link
EP (1) EP4014113A4 (en)
WO (1) WO2021033110A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113759815A (en) * 2021-08-03 2021-12-07 北京工业职业技术学院 IOTPLC processing platform of interconnected factory based on edge calculation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140359552A1 (en) * 2011-09-19 2014-12-04 Tata Consultancy Services Limited Computer Platform for Development and Deployment of Sensor Data Based Applications and Services
US20170060574A1 (en) * 2015-08-27 2017-03-02 FogHorn Systems, Inc. Edge Intelligence Platform, and Internet of Things Sensor Streams System
US20170083386A1 (en) * 2015-09-17 2017-03-23 Salesforce.Com, Inc. PROCESSING EVENTS GENERATED BY INTERNET OF THINGS (IoT)
US20180295485A1 (en) * 2017-04-07 2018-10-11 Telia Company Ab METHODS AND APPARATUSES FOR PROVIDING A SERVICE TO AN IoT DEVICE
WO2018208409A1 (en) * 2017-05-09 2018-11-15 Microsoft Technology Licensing, Llc Cloud management of low-resource devices via an intermediary device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10579403B2 (en) * 2015-06-29 2020-03-03 Vmware, Inc. Policy based provisioning of containers
US20170060609A1 (en) * 2015-08-28 2017-03-02 International Business Machines Corporation Managing a shared pool of configurable computing resources which has a set of containers
US9817648B2 (en) * 2016-01-15 2017-11-14 Google Inc. Application containers with dynamic sub-package loading
US10348831B2 (en) * 2016-04-27 2019-07-09 Unisys Corporation Method and system for containerized internet of things (IoT) devices
US10650157B2 (en) * 2017-04-30 2020-05-12 Microsoft Technology Licensing, Llc Securing virtual execution environments


Also Published As

Publication number Publication date
EP4014113A1 (en) 2022-06-22
EP4014113A4 (en) 2023-08-16

Similar Documents

Publication Publication Date Title
US9952852B2 (en) Automated deployment and servicing of distributed applications
US10755006B2 (en) Cloud-based reservoir simulation environment
US7895220B2 (en) Middleware method and apparatus and program storage device adapted for linking data sources to software applications
US8429671B2 (en) Integrated workflow builder for disparate computer programs
US20150317184A1 (en) Systems and Methods For Processing Drilling Data
EP2575089A1 (en) Customizable user interface for real-time oilfield data visualization
US20020083182A1 (en) Real-time streamed data download system and method
US11578568B2 (en) Well management on cloud computing system
JP2008536210A (en) Module application for mobile data systems
US9708897B2 (en) Oilfield application framework
US20210026030A1 (en) Geologic formation operations framework
CN110825647A (en) Test method for automatically testing logical equipment interface
EP2851853A1 (en) Project data management
US8483852B2 (en) Representing geological objects specified through time in a spatial geology modeling framework
US8942960B2 (en) Scenario analyzer plug-in framework
US20210073697A1 (en) Consolidating oil field project data for project tracking
US20220103499A1 (en) Notification and task management system
US20190265375A1 (en) Cloud Framework System
WO2021033110A1 (en) System and method for programming devices
WO2018102732A1 (en) Coupled reservoir-geomechanical models using compaction tables
Tavallali et al. Optimal drilling planning by considering the subsurface dynamics—combing the flexibilities of modeling and a reservoir simulator
US20240086600A1 (en) Petro-Technical Global Fluid Identity Repository
WO2022139836A1 (en) Geological property modeling with neural network representations
US20230342690A1 (en) Presentation of automated petrotechnical data management in a cloud computing environment
US11416276B2 (en) Automated image creation and package management for exploration and production cloud-based applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20854309

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020854309

Country of ref document: EP

Effective date: 20220316