EP3759600A1 - System and method for secure distributed processing across networks of heterogeneous processing nodes - Google Patents

System and method for secure distributed processing across networks of heterogeneous processing nodes

Info

Publication number
EP3759600A1
Authority
EP
European Patent Office
Prior art keywords
thread
command center
member device
network interface
job bundle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19711473.9A
Other languages
German (de)
English (en)
Inventor
Guilherme Spina
Leonardo De Moura Rocha Lima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
V2com SA
Original Assignee
V2com SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by V2com SA filed Critical V2com SA
Priority to EP23203903.2A priority Critical patent/EP4328750A3/fr
Publication of EP3759600A1 publication Critical patent/EP3759600A1/fr
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities

Definitions

  • This disclosure relates generally to distributed computing and the Internet of Things (IoT). More specifically, this disclosure relates to systems and methods for secure distributed processing across networks of heterogeneous processing nodes.
  • the controllers used in certain IoT home automation systems may, during the course of their intended operation, spend the bulk of their time monitoring and collecting temperatures inside and outside of a house. While monitoring and recording temperature data is typically not computationally expensive and, as such, does not require a powerful processor, an IoT controller may nonetheless possess substantial processing and memory resources to support occasional, computationally demanding operations, such as voice recognition.
  • a modern home may have multiple devices having similar sensor technology and processing capabilities as the home automation controller described above. Taken together, these devices can potentially be used to collect a wealth of highly granular data as to their environment (for example, block by block temperature data), as well as unused processing capacity to analyze and process such data. Beyond the possibilities for data science and mining data on massive scales, the ability to orchestrate and collectivize the processing power of a cloud or "fog" of small, networked processors presents new opportunities to extend the useful life of legacy devices, such as older personal computers, tablets, and phones, whose processing and data collection resources, while perhaps no longer suitable to support applications on subsequent generations of the same devices, could still be usefully applied as part of a larger processing network. Further applications of secure distributed processing across networks of heterogeneous processing nodes include, without limitation, providing CPU power to support proof-of-work based systems for verifying transactions recorded in a distributed ledger.
  • Embodiments as disclosed and claimed herein address these technical challenges by providing systems and methods for secure distributed processing across networks of heterogeneous processing nodes.
  • This disclosure provides systems and methods for secure distributed processing across networks of heterogeneous processing nodes.
  • a method for distributed processing includes receiving a job bundle at a command center, wherein the command center includes a processor, a network interface, and a memory. Further, the method includes determining a value of a dimension of the job bundle, determining, based on a predetermined rule applied to the determined value of the dimension of the job bundle, an aggregate processing cost for the job bundle. Further operations include identifying one or more available member devices communicatively connected to the command center via the network interface and splitting the job bundle into one or more threads based on at least one of the determined value of the dimension, the aggregate processing cost or the available member devices. Additionally, the method includes the operations of apportioning a thread of the one or more threads to a member device and transmitting, via the network interface, the apportioned thread to a secure processing environment of the member device.
  • a method for distributed processing includes receiving, via a network interface, at a member device comprising a processor and a memory, a thread from a command center, receiving from the command center, via the network interface, a control parameter for the thread and processing the thread based on the control parameter in a secure processing environment of the member device.
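The two methods above describe complementary halves of one flow: the command center costs a job bundle, splits it into threads, and apportions the threads to member devices, which then process them. A minimal sketch of the command-center half follows; the data shapes, the cost rule, and the round-robin split are illustrative assumptions, not details taken from this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class JobBundle:
    payload: list      # units of work; its length is one "dimension" of the bundle
    deadline_s: int    # another dimension: a specified deadline for completion

@dataclass
class MemberDevice:
    device_id: str
    capacity: int                                # abstract processing-cost units
    threads: list = field(default_factory=list)  # threads apportioned to this device

def aggregate_cost(bundle, cost_per_unit=2):
    # Predetermined rule applied to the data-volume dimension of the bundle.
    return len(bundle.payload) * cost_per_unit

def split_and_apportion(bundle, members):
    """Split the job bundle into threads and apportion one per available member."""
    available = [m for m in members if m.capacity > 0]
    n = min(len(available), max(1, len(bundle.payload)))
    threads = [bundle.payload[i::n] for i in range(n)]  # round-robin split
    for thread, member in zip(threads, available):
        member.threads.append(thread)  # stands in for transmitting to the device
    return threads

members = [MemberDevice("node-a", 10), MemberDevice("node-b", 10)]
bundle = JobBundle(payload=list(range(6)), deadline_s=60)
cost = aggregate_cost(bundle)
threads = split_and_apportion(bundle, members)
```

Here each member device receives every n-th unit of the payload; a real orchestrator would also weight the split by device capability and availability.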
  • a non-transitory computer-readable medium contains program code, which when executed by a processor, causes a command center to receive a job bundle at the command center, the command center comprising the processor, a network interface, and a memory.
  • the non-transitory computer-readable medium further contains program code, which when executed by the processor, causes the command center to determine a value of a dimension of the job bundle, determine, based on a predetermined rule applied to the determined value of the dimension of the job bundle, an aggregate processing cost for the job bundle, identify one or more available member devices communicatively connected to the command center via the network interface, split the job bundle into one or more threads based on at least one of the determined value of the dimension, the aggregate processing cost or the available member devices, apportion a thread of the one or more threads to a member device and transmit, via the network interface, the apportioned thread to a secure processing environment of the member device.
  • a command center comprises a processor, a network interface, and a memory containing instructions, which when executed by the processor, cause the command center to receive a job bundle.
  • the instructions when executed by the processor, further cause the command center to determine a value of a dimension of the job bundle, determine, based on a predetermined rule applied to the determined value of the dimension of the job bundle, an aggregate processing cost for the job bundle, identify one or more available member devices communicatively connected to the command center via the network interface, split the job bundle into one or more threads based on at least one of the determined value of the dimension, the aggregate processing cost or the available member devices, apportion a thread of the one or more threads to a member device, and transmit, via the network interface, the apportioned thread to a secure processing environment of the member device.
  • a member device comprises a network interface, a processor and a memory containing instructions which, when executed by the processor, cause the member device to receive, via the network interface, a thread from a command center, receive from the command center, a control parameter for the thread, and process the thread based on the control parameter in a secure processing environment of the member device.
  • Couple and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another.
  • the term "or" is inclusive, meaning and/or.
  • controller means any device, system or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware.
  • any particular controller may be centralized or distributed, whether locally or remotely.
  • the phrase "at least one of," when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed.
  • "at least one of: A, B, and C" includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
  • various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium.
  • application and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code.
  • computer readable program code includes any type of computer code, including source code, object code, and executable code.
  • the phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
  • A "non-transitory" computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals.
  • a non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
  • FIGURE 1 illustrates an example of a network context including a command center and heterogeneous processing nodes according to certain embodiments of this disclosure
  • FIGURE 2 illustrates an example of a command center according to certain embodiments of this disclosure
  • FIGURE 3 illustrates an example of a member device according to certain embodiments of this disclosure
  • FIGURE 4 illustrates layers of an I/O stack implemented in member devices according to certain embodiments of this disclosure
  • FIGURE 5 illustrates operations of a method for secure distributed processing across networks of heterogeneous processing nodes at a command center according to certain embodiments of this disclosure.
  • FIGURE 6 illustrates operations of a method for secure distributed processing across networks of heterogeneous processing nodes at a member device according to certain embodiments of this disclosure.
  • FIGURES 1 through 6, discussed below, and the various embodiments used to describe the principles of this disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of this disclosure may be implemented in any suitably arranged wireless communication system.
  • FIGURE 1 illustrates an example of a network context 100 including a command center and heterogeneous processing nodes according to certain embodiments of this disclosure.
  • network context 100 includes a command center 105, one or more machines 110a, 110b, 110c and 110d providing an interface layer, one or more networks 115, and heterogeneous processing nodes 120a, 120b, 120c, 120d and 120e.
  • command center 105 is a management server, such as a Hewlett-Packard ProLiant server embodied on a single server rack.
  • command center 105 includes program code, which when executed by one of the cores of command center 105's processor, causes it to provide an interface layer for connecting through a network 115 (such as the internet) with processing nodes 120a-e.
  • multiple architectures for implementing command center 105 are possible and within the scope of this disclosure.
  • command center 105 is a single server, and machines 110a-d comprise virtual machines executing on the server.
  • command center 105 may comprise multiple physical servers, with each of machines 110a-d implemented on its own server. According to still other embodiments, command center 105 and machines 110a-d are implemented in the cloud, across a variety of machines. Numerous embodiments, wherein a command center comprises a network interface and processing and memory resources for implementing an appropriate interface layer and performing operations of the disclosed methods, are possible and within the scope of this disclosure.
  • network 115 is a wired network connecting command center 105 to each of heterogeneous processing nodes 120a, 120c and 120d.
  • network 115 is a wireless network, such as a Wi-Fi or 3G wireless network.
  • Network 115 hosts communications using contextually appropriate network protocols, including without limitation, HTTP, Aeron, Message Queueing Telemetry Transport (MQTT), nanoIP, ROLL (Routing Over Low power and Lossy networks), uIP and UDP. Other communication protocols are possible and within the scope of the present disclosure.
  • one or more networks of heterogeneous processing nodes 120a-120e connect to command center 105 through network 115.
  • a processing node may connect directly to network 115, as shown in the non-limiting example of FIGURE 1 by processing node 120a.
  • a processing node may connect indirectly to network 115 through another processing node, as shown in the non-limiting example of FIGURE 1 by processing node 120e.
  • a processing node may connect both directly and indirectly to network 115, as shown in the non-limiting example of FIGURE 1 by processing node 120c.
  • processing nodes connected, either directly or indirectly, to command center 105 through network 115 comprise "member devices" of a processing network under the control of command center 105.
  • the member devices at each of processing nodes 120a-120e are, at a minimum, heterogeneous with regard to their processing and job handling capacities.
  • Facets of heterogeneity between the member devices at processing nodes 120a-120e include, without limitation, processing speed, available RAM, power requirements, available storage, types of network connectivity (for example, Wi-Fi connectivity, 3G connectivity, GigaBit Ethernet), available sensors (for example, microphones, thermometers, cameras, barometers), processor functionality (for example, floating point processing capability, capacity for true random number generation), and number of processor cores.
  • member devices may be classified by the command center based on one or more of these attributes.
  • a device having resources analogous to those of certain generations of the Raspberry Pi would, for the purposes of the command center's job orchestration logic, be classified as a "10X" device.
  • a device having resources analogous to certain generations of an Amazon Echo may, for the purposes of the command center's job orchestration logic, be classified as a "100X" device.
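A toy version of such capability-based classification might look like the following; the thresholds are invented for illustration and are not taken from this disclosure:

```python
def classify_device(ram_mb, cpu_mhz):
    """Classify a member device for job orchestration (hypothetical thresholds)."""
    if ram_mb >= 1024 and cpu_mhz >= 1000:
        return "100X"   # e.g., a device with Amazon Echo-class resources
    return "10X"        # e.g., a device with Raspberry Pi-class resources
```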
  • the heterogeneity of the member devices at the processing nodes includes additional dimensions, such as their availability to take on work at different times, and the processing resources available at different times.
  • certain embodiments as disclosed and claimed can operate in contexts in which the underlying capacities of the available processing devices are more fluid and can change when, for example, devices are added or subtracted to the network, or devices are required to devote computing resources to tasks other than those orchestrated by the command center.
  • FIGURE 1 illustrates one example of a network context including a command center and heterogeneous processing nodes
  • a command center and processing node may be embodied on the same device.
  • FIGURE 2 illustrates an example of a command center 200 according to certain embodiments of this disclosure.
  • command center 200 is implemented on a server and comprises a memory 205, a processor 210, a job orchestrator 215 and a network interface 230. Additionally, command center 200 is configured to implement an interface layer 220 comprising one or more instances, shown as 225a-225d, of application programming interfaces (APIs) through which command center 200 interacts with networked member devices (for example, the member devices at processing nodes 120a-120e in FIGURE 1).
  • memory 205 comprises a non-transitory memory containing program code, which, when executed by processor 210, causes the command center to receive a job bundle via network interface 230, split the job bundle into threads according to one or more determined dimensions of the job bundle, apportion the threads to available member devices, and transmit each apportioned thread to a secure processing environment on each member device.
  • memory 205 comprises libraries of rules and of capability information for member devices acting as processing nodes.
  • memory 205 includes a library of rules mapping dimensions of job bundles (for example, a specified deadline for completion of a processing task; a networking requirement (e.g., is the member device required to send and receive data over a 3G network?); a volume of data to be processed; a parameter of data to be collected, such as a number of sensors to be used or a time interval over which data is to be collected; availability of parallel processing; load sharing constraints; and processor or sensor requirements (e.g., clock speed, or whether a floating point processor or true random number generation is required)) to processing cost values.
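Such a rule library can be pictured as a mapping from each dimension of a job bundle to a cost contribution, with the aggregate processing cost as their sum. The dimension names, rule functions, and weights below are hypothetical sketches, not values from this disclosure:

```python
# Hypothetical rule library: each entry maps one dimension of a job bundle
# to a processing-cost contribution.
RULES = {
    "data_volume_mb": lambda v: v * 0.5,            # cost grows with data volume
    "deadline_s":     lambda v: 100.0 / v,          # tighter deadlines cost more
    "needs_3g":       lambda v: 10.0 if v else 0.0, # networking requirement surcharge
}

def aggregate_processing_cost(dimensions):
    # Sum the contributions of every dimension the rule library knows about.
    return sum(RULES[name](value) for name, value in dimensions.items() if name in RULES)
```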
  • memory 205 may contain one or more libraries of data specifying the processing capabilities of member devices.
  • libraries of member device capabilities may be maintained at a high level of granularity, with data for each member device, including, without limitation, information as to the device’s specific processor and memory resources.
  • the library of device capability information consists of a dictionary of device identifiers and a classification (e.g., "10X" or "100X") determined for the device. Numerous variations are possible and within the scope of this disclosure.
  • processor 210 is a central processing unit (CPU) chip provided within a server.
  • command center 200 may be implemented across multiple servers, or as part of a cloud computing system.
  • processor 210 consists of multiple separate processors operating across a dynamically changeable cast of server machines.
  • command center 200 may be implemented on a virtual machine, and processor 210 is a virtual CPU consisting of an assignment of processor resources of the physical machine(s) implementing the virtual machine.
  • command center 200 includes a job orchestrator 215.
  • job orchestrator 215 receives, through network interface 230, information regarding the availability and usage of member devices at processing nodes. Job orchestrator 215 further receives, from processor 210, information regarding the currently apportioned threads and received job bundles. In some embodiments, the information regarding the currently apportioned threads and received job bundles may be further processed and re-expressed as one or more metrics of system load at the node and command center levels. Further, in some embodiments, member devices at processing nodes may periodically send updates to the command center regarding their current processing load and/or job status information.
  • job orchestrator 215 operates to apportion threads between member devices based in part on current usage data received from member devices and expected usage based on information regarding queued job bundles received from memory 205 and processor 210. According to certain embodiments, job orchestrator 215 dynamically apportions job bundles to available member devices according to the devices’ availability and capabilities.
  • job orchestrator 215 implements predetermined rules applied to, for example, the determined parameters of the received job bundles and stored information regarding the capabilities of member devices at processing nodes, to apportion threads in a way that optimizes the value of one or more metrics of processing performance.
  • Metrics of processing performance whose values may be optimized at job orchestrator 215 include, without limitation, job bundle execution time, electricity consumption at the processing node, redundancy among processing nodes (to increase the success of probabilistic execution of threads), and utilization of processing nodes (for example, it may be desirable to apportion the threads of a job bundle to a minimum number of processing nodes, in order to hold other nodes in reserve for other job bundles).
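As a concrete sketch of one such predetermined rule, the greedy apportioner below concentrates threads onto as few nodes as possible, holding the remaining nodes in reserve (one of the metrics named above). The data shapes, (thread_id, cost) pairs and a node-to-spare-capacity map, are assumptions made for illustration:

```python
def apportion(threads, nodes):
    """Greedily assign threads to nodes, filling high-capacity nodes first.

    threads: list of (thread_id, cost) pairs
    nodes:   dict mapping node_id -> spare processing capacity (mutated in place)
    """
    assignment = {}
    order = sorted(nodes, key=nodes.get, reverse=True)  # largest capacity first
    for thread_id, cost in sorted(threads, key=lambda t: -t[1]):
        for node_id in order:
            if nodes[node_id] >= cost:
                assignment[thread_id] = node_id
                nodes[node_id] -= cost
                break
    return assignment
```

Because the highest-capacity node is tried first for every thread, other nodes are only touched once it fills up.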
  • interface layer 220 includes network interface 230 and instances 225a-225d of application programming interfaces (APIs) through which command center 200 interacts with networked member devices (for example, the member devices at processing nodes 120a-120e in FIGURE 1).
  • network interface 230 operates to interconnect command center 200 with one or more networks (for example, network 115 in FIGURE 1).
  • Network interface 230 may, depending on embodiments, have a network address expressed as a node ID, a port number or an IP address.
  • network interface 230 is implemented as hardware, such as by a network interface card (NIC).
  • network interface 230 may be implemented as software, such as by an instance of the java.net.NetworkInterface class.
  • network interface 230 supports communications over multiple protocols, such as TCP/IP as well as wireless protocols, such as 3G or Bluetooth.
  • interface layer 220 contains one or more application programming interfaces (APIs) 225a-225d for establishing communications between command center 200 and member devices at processing nodes.
  • each of APIs 225a-225d may designate portions of memory 205 as reserved for inputs and outputs of job bundles which have been apportioned and sent out for processing.
  • each API of APIs 225a-225d maps a set of preset processing functions (for example, to perform a hash function) associated with the performance of a thread to be processed.
  • the preset processing functions may be expressed in an object oriented format, comprising methods and attributes of APIs 225a-225d.
  • the preset functions may map to API objects of an IoT specific operating system, such as the Conera platform developed by V2COM, Inc.
  • the preset processing functions may be mapped to device or operating system specific commands.
  • the API may map the command center level function "hash" to the Java method "hashCode."
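The mapping from command-center-level preset functions to device-specific calls can be sketched as a simple dispatch table. The table below is hypothetical, with Python's built-in `hash` standing in for a device-specific method such as Java's `hashCode`:

```python
# Hypothetical dispatch table from preset function names to device-level calls.
PRESET_FUNCTIONS = {
    "hash": hash,        # stands in for a device-specific hash, e.g. Java's hashCode
    "upper": str.upper,  # a second preset, mapped to a native string operation
}

def invoke_preset(name, argument):
    """Resolve a command-center preset function and invoke the mapped call."""
    try:
        return PRESET_FUNCTIONS[name](argument)
    except KeyError:
        raise ValueError(f"preset function {name!r} is not mapped on this device")
```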
  • each API of APIs 225a-225d may map preset functions to command sets appropriate to the processing power of a particular category of member device (for example, the "10X" and "100X" devices discussed above).
  • there may only be a single API which maps preset functions to commands across the full spectrum of operating systems and processing capabilities found in member devices at the processing nodes.
  • individual APIs or sets of APIs among APIs 225a-225d may provide mappings of command center level preset functions to different data models, in order to support communications across a wide range of networking contexts and capabilities of member devices.
  • APIs 225a & 225b may provide representational state transfer (REST) APIs associated with a JavaScript Object Notation (JSON) data model
  • APIs 225c & 225d may provide simple object access protocol (SOAP) APIs associated with an extensible markup language (XML) data model.
  • FIGURE 3 illustrates an example of a member device 300 according to certain embodiments of this disclosure.
  • member device 300 is a "smart" version of another device, such as a household appliance or a controller of other household appliances.
  • member device 300 is a smartphone or a tablet computer.
  • member device 300 is a networked meter, or a development board attached to a point of presence (for example, a streetlight, a switch, a traffic light or a utility pole) of a public utility.
  • member device 300 can be a gateway device, such as an internet of things (IoT) gateway device. Examples of gateway devices which can function as a member device 300 include, without limitation, the Neuron C Smart Concentrator and the Dendrion Power Smart Submetering Unit by V2COM, Inc. Numerous embodiments are possible and within the scope of this disclosure.
  • member device 300 includes a processor 305.
  • processor 305 is a multi-function processor (as opposed to, for example, a function-specific ASIC) with one or more cores, which is capable of executing program code stored in a non-transitory memory 310.
  • processor 305 includes, or is coupled with, a clock 307, the speed of which provides one measure of processor 305's processing capability.
  • processor 305 has features such as, for example and without limitation, floating point processing, true random number generation, or a smart cache, not found in the processor of other member devices.
  • memory 310 contains program code which, when executed by processor 305, causes the processor to perform the functions of the member device 300 of which it is a part.
  • memory 310 contains instructions, which when executed by processor 305, cause it to perform functions, such as reading and writing temperature data to memory 310, controlling the operation of input/output devices 320 to send control signals for devices within the home (for example, smart lightbulbs, electronically controlled heaters and air conditioners, and electronically controlled window shades).
  • regions of memory 310 can be allocated for data storage and storing additional program code, including one or more applications 315.
  • applications 315 comprise program code or software which can be written to memory 310 and read and executed by processor 305.
  • applications 315 include applications associated with the core function of the member device (for example, home automation applications, such as an application for automated control of internet connected lightbulbs), and virtual machines hosted on member device 300.
  • member device 300 has input/output devices 320.
  • input/output devices 320 include a display (such as an LED display), a keyboard and a mouse.
  • I/O devices 320 are a set of input/output pins connected to pins of a processor or controller, such as processor 305.
  • member device 300 has an operating system (OS) 325, which supports member device 300’s basic functions, including, without limitation, executing applications 315, receiving data from sensors 330, and sending and receiving data via network interface 325.
  • OS 325 may be a general purpose operating system, such as Android or iOS.
  • OS 325 may be a proprietary operating system such as Fire OS.
  • OS 325 is an IoT-oriented operating system, such as Conera by V2COM, Inc.
  • member device 300 includes sensors 330.
  • sensors 330 include, without limitation, cameras, microphones, thermometers, barometric sensors, light meters, energy monitors, water meters, and rain sensors.
  • network interface 325 operates to interconnect member device 300 with one or more networks (for example, network 115 in FIGURE 1).
  • Network interface 325 may, depending on embodiments, have a network address expressed as a node ID, a port number or an IP address.
  • network interface 325 is implemented as hardware, such as by a network interface card (NIC).
  • network interface 325 may be implemented as software, such as by an instance of the java.net.NetworkInterface class.
  • network interface 325 supports communications over multiple protocols, such as TCP/IP as well as wireless protocols, such as 3G or Bluetooth.
  • FIGURE 4 illustrates layers of an input/output (“I/O”) stack implemented in member devices according to certain embodiments of this disclosure.
  • two distinct member devices 400 and 405 are shown.
  • the processing architecture of each of member devices 400 and 405 is depicted as a hierarchy of abstraction layers, sometimes referred to collectively as an "I/O stack."
  • hardware layers 410 and 415 are the foundational level of the I/O stacks of member devices 400 and 405.
  • each of hardware layers 410 and 415 comprises the processor and memory of the member device (for example, processor 305 and memory 310 shown in FIGURE 3), as well as the componentry of the member device under the direct control of a processor of the member device (for example, sensors 330 and network interface 325 shown in FIGURE 3).
  • operating system, or OS, layers 420 and 425 comprise the next level of the I/O stacks of member devices 400 and 405.
  • OS layers 420 and 425 correspond to the functionalities provided by the operating systems (for example, OS 325 shown in FIGURE 3) of each of member devices 400 and 405. Further, in some embodiments the OS on each member device is the native, or originally provided, OS of the member device. For example, if member device 400 is an Apple® smartphone, operating system 420 is, according to some embodiments, the "iOS" operating system.
  • operating system 425 is, according to some embodiments, the“Fire” operating system.
  • the operating system at layers 420 and 425 is a specially adapted operating system, which replaces or augments the member devices' native operating systems, such as where the member devices' native operating systems only support low-level communications (such as at the physical layer in the OSI model). Examples of suitable operating systems include, without limitation, the Conera IoT OS developed by V2COM, Inc.
  • a replacement operating system may enable the member device to communicate at a higher level within the OSI model and transmit data in formats beyond raw bit streams.
  • virtual machine layers 430 and 435 comprise the next layer in the I/O stacks of member devices 400 and 405.
  • virtual machine layers 430 and 435 are implemented as Java Virtual Machines, Azure Virtual Machines or as components of a Kubernetes system.
  • each of virtual machine layers 430 and 435 operates to provide a secure container separating data, memory and processing resources for performing secure distributed processing across heterogeneous processing nodes.
  • each of virtual machine layers 430 and 435 separates the data and execution environment for processing apportioned threads from processes associated with the member device’s native, or originally intended, functionality.
  • member device 400 is a network-connected home automation controller
  • processes associated with the member device's functionality as a home automation controller, such as an interface for a web-based control application, cannot access data associated with an apportioned thread which is being processed at the member device.
  • frameworks 440 and 445 comprise the next layer in the I/O stacks of member devices 400 and 405. According to some embodiments, frameworks 440 and 445 provide, at a logical level, the point of access for networked communications between member devices and between member devices and a command center
  • framework layer 440 of member device 400 is shown as connected to framework layer 445 of member device 405.
  • each of framework layers 440 and 445 may implement system specific protocols or use commands (such as the preset functions implemented in the interface layer (for example, interface layer 220 shown in FIGURE 2).
  • the system specific protocols and commands may define separate data and control channels for communications between heterogeneous nodes and the command center of the distributed processing network.
  • an apportioned thread may be transmitted in part or in whole to a member device using a control channel provided at one of framework layers 440 and 445.
  • an apportioned thread may be transmitted in part or in whole to a member device using a data channel provided at one of framework layers 440 and 445.
  • a member device transmits a thread for which processing has been completed to a command center via the data channel or the control channel, or a combination thereof.
  • frameworks 440 and 445 further enforce the security of the data and processes performed within virtual machine layers 430 and 435 and above. For example, by only responding to commands and inputs expressed according to the preset commands of a command center, frameworks 440 and 445 may be less susceptible to spoofing attacks or requests expressed according to the command set of operating systems 420 and 425.
  • secure processing environments (“SPEs”) 450 and 455 comprise the next layer in the I/O stacks of member devices 400 and 405.
  • each of secure processing environments 450 and 455 provides, within each of virtual machines 430 and 435, an analogue to the trusted execution environment, or "secure world," provided by certain physical processors.
  • threads 460, 465, 470 and 475 apportioned and transmitted by a command center are processed within each of SPEs 450 and 455.
  • multiple job threads are executed within a single SPE provided in the member device.
  • each of the threads within an SPE are threads of different job bundles received and apportioned by a command center.
  • a separate SPE is provided by the virtual machine and framework layers for each thread. According to such embodiments, proprietary data associated with one thread is not available to any other threads, thereby enhancing security.
  • variations on the embodiments shown in FIGURE 4 are possible and contemplated as part of this disclosure.
  • a member device such as member device 400, may implement multiple virtual machines. Further, each virtual machine may have its own secure processing environment. Additionally, according to other embodiments, multiple layers may be combined or implemented as part of a single abstraction layer. For example, in certain implementations utilizing the Conera operating system by V2COM, the Conera OS provides the framework layer and operating system layers.
  • FIGURE 5 illustrates a method 500 for secure distributed processing across networks of heterogeneous processing nodes at a command center, according to certain embodiments of this disclosure.
  • job bundles are received at a command center, split into threads, and the threads are apportioned to member devices.
  • the threads are apportioned according to predetermined rules for optimizing the value of one or more metrics of processing performance, such as the time required to process a job bundle, or in cases where probabilistic execution is implemented, the overall probability of successful processing.
  • method 500 comprises operation 505, wherein a job bundle is received at a command center.
  • the job bundle is a request for a processing task to be performed by a set of networked member devices, and is received via an interface provided by the command center.
  • the interface for receiving the job bundle may be a web interface, wherein the parameters are set by a visitor to a website. In other cases, the interface may receive job bundles from another machine or process without human interaction.
  • a job bundle is received as a set of data upon which one or more processing operations (for example, hashing, verification, or optical character recognition) are to be performed, along with a set of processing parameters (for example, a sensor requirement for member devices, a deadline for the processing task, or a failure or interrupt handling parameter).
  • a job bundle may contain executable code, and may also be received as an object comprising methods and attributes.
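As a concrete illustration of the shape such an object might take, the following sketch models a job bundle as data plus processing parameters. All field names here are assumptions for illustration only, not identifiers from this disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class JobBundle:
    """Illustrative job bundle: data to be processed plus processing
    parameters. Field names are hypothetical, not from the disclosure."""
    data: bytes                              # payload to be processed
    operations: List[str]                    # e.g. ["hash", "ocr"]
    sensor_requirement: Optional[str] = None # e.g. "temperature"
    deadline_epoch: Optional[float] = None   # deadline for the processing task
    on_failure: str = "retry"                # failure/interrupt handling parameter
```

A bundle with only data and operations falls back on default handling parameters, mirroring the idea that some parameters are optional.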
  • the job bundle is received via a network interface of the command center (for example, network interface 230 shown in FIGURE 2) via a network (such as network 115), as a request for a data collection and processing subject to certain job parameters.
  • Examples of such embodiments include, without limitation, an ongoing request to collect and analyze temperature data collected across a set of member devices (for example, gateways, meters or controllers located on houses or points of presence of public utilities) covering a particular geographic area.
  • the job bundle is received in a structured format, with data to be processed conforming to requirements (such as data to be analyzed being received in .SQL format) enforced by the interface.
  • data to be processed is unstructured (such as columns of a columnar database) and only the parameters defining the processing to be performed on the data conform to formats expected by the command center. Numerous variations are possible and within the intended scope of this disclosure.
  • the job bundle received at the command center at operation 505 may have a payload developed and expressed in terms of the methods of the APIs (for example, APIs 225a-225d shown in FIGURE 2).
  • the job bundle may be developed based on the APIs provided with the Conera IoT OS.
  • the job bundle received at the command center may further comprise information defining shared objects and common memory areas within the job bundle. For example, certain processing operations may require memory resources beyond those provided at any one processing node. In such cases, the command center may make some of its own memory available as a shared memory resource for processing the job bundle. In some embodiments, the memory made available as a shared resource may be the "input" or "output" memory provided by each of the APIs of the command center.
  • a job bundle may be initially structured (or split by job orchestrator at operation 525) in a way that permits probabilistic execution and optimization of a value of one or more metrics of processing performance.
  • probabilistic execution encompasses the concept of assigning certain constituent threads of the job bundle to multiple processing nodes, in the expectation that one or more processing nodes will fail or be unable to execute its assigned thread.
  • the number of processing nodes to which a thread has been assigned corresponds to an overall probability of the thread’s successful execution by at least one processing node.
  • threads of a job bundle which may be assigned for probabilistic execution may be initially expressed in the job bundle with reference to idempotent methods of the command center APIs.
  • idempotent methods encompass operations which, for a given input, provide the same output, regardless of the time or place (e.g., different processing nodes) of execution.
  • threads of a job bundle expressed with reference to idempotent methods are apportioned to multiple processing nodes for concurrent, probabilistic execution. In the non-limiting example of FIGURE 5, all of the job threads, including the job bundle itself, are cancellable.
  • only threads assigned for probabilistic execution are each individually cancellable, and can be cancelled in response to a signal from the command center that the thread has been successfully executed at a different node, or in response to a condition at the processing node (for example, a resource intensive task associated with the node’s native function, such as speech recognition).
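The redundancy arithmetic behind probabilistic execution can be sketched as follows, assuming (purely for illustration) that each node succeeds independently with the same probability p:

```python
def success_probability(p: float, n: int) -> float:
    """Probability that at least one of n independent nodes, each
    succeeding with probability p, executes the thread successfully."""
    return 1.0 - (1.0 - p) ** n

def nodes_needed(p: float, target: float) -> int:
    """Smallest number of concurrent assignments whose combined
    success probability meets the target."""
    n = 1
    while success_probability(p, n) < target:
        n += 1
    return n
```

With p = 0.5, four concurrent assignments are needed before the combined success probability reaches 0.9, since 1 - 0.5^4 = 0.9375; this is the sense in which the number of assigned nodes "corresponds to" an overall probability of successful execution.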
  • method 500 proceeds to operation 510, wherein the command center determines one or more values of a dimension of the received job bundle.
  • the determined dimensions of a received job bundle include the amount of computational horsepower (expressed, for example, as a number of FLOPS) required to process the job bundle, the memory requirements associated with processing the job bundle, and the number of member devices involved (for example, a minimum number of member devices required for successful probabilistic execution, or alternatively, a number of member devices which need to be kept free to process or receive other job bundles).
  • dimensions of a job bundle include, without limitation, memory usage associated with processing the job bundle, load sharing constraints between available member devices, processor requirements (for example, whether a floating point processor or true random number generation is required) associated with processing the job bundle, and sensor requirements associated with processing the job bundle.
  • Additional dimensions of a job bundle determined at operation 510 include further analysis of the memory requirements associated with processing the bundle, such as the relative amounts of volatile and permanent memory required at one or more processing nodes. Still further dimensions of the job bundle which can be determined at operation 510 include a determination as to whether the execution memory for the job bundle can be shared between multiple processing nodes, and rules for sharing memory resources between processing nodes.
  • certain embodiments according to this disclosure achieve a degree of biomimicry in their operation, in that, just as the human brain consolidates memories in particular regions of the brain during rapid eye movement ("REM") sleep, systems according to this disclosure may likewise consolidate the operational memory for executing the job bundle.
  • the determination of a dimension of a job bundle may be performed by analyzing metadata provided as a part of a request from an interface where the job bundle was initially received by the command center. For example, in submitting the job for processing, a client of the interface may also specify dimensions of the job bundle and values thereof. Alternatively, determination of a value of a dimension of a job bundle may be performed by a rules engine at the command center, which applies one or more predetermined rules to analyze the data and instructions of the job bundle.
  • method 500 then proceeds to operation 515, wherein the command center determines an aggregate processing cost for processing the job bundle.
  • the aggregate processing cost is determined by a static analysis, performed by applying one or more predetermined rules to the one or more values of dimensions of the job bundle (for example, dimensions of the job bundle determined at operation 510). For example, if, at operation 510, the number of member devices required to process a task and a classification of the capabilities of the member devices (for example, "10X," "100X" devices, etc.) are dimensions of the job bundle, then in the case where the command center determines that at least five (5) "10X" member devices are required to process the job, a predetermined rule specifying the number of devices required multiplied by the classification of the devices is applied. In this non-limiting example, the job bundle is determined to have a cost of fifty (50) units.
  • the aggregate processing cost determined at operation 515 may serve as the basis of a monetary cost to be charged to the entity submitting the job request.
  • the determination of an aggregate processing cost at operation 515 provides a gating function, excluding fraudulent job bundles or job bundles which do not conform to expected values of aggregated processing costs or expected values of specified dimensions.
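The fifty-unit example above can be expressed as a predetermined rule in a few lines; the multiplier table is an assumption inferred from the "10X"/"100X" labels, and the gating range is illustrative:

```python
# Hypothetical capability-class multipliers inferred from the
# "10X" / "100X" labels in the example above.
CLASS_MULTIPLIER = {"10X": 10, "100X": 100}

def aggregate_cost(devices_required: int, device_class: str) -> int:
    """Predetermined rule: number of devices required multiplied by the
    classification multiplier of those devices."""
    return devices_required * CLASS_MULTIPLIER[device_class]

def passes_gate(cost: int, expected_min: int, expected_max: int) -> bool:
    """Gating function: reject job bundles whose aggregate cost falls
    outside the expected range (e.g., fraudulent or malformed bundles)."""
    return expected_min <= cost <= expected_max
```

Five "10X" devices yield a cost of 50 units, matching the worked example; a bundle whose cost falls far outside the expected band would be excluded by the gate.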
  • method 500 then proceeds to operation 520, wherein the command center identifies one or more available member devices communicatively connected to the command center via the network interface.
  • in the non-limiting example of FIGURE 5, member devices communicatively connected to the command center via the network transmit "heartbeat" messages at predetermined intervals, informing the command center of their connectivity and availability. Based on the received "heartbeat" messages, a register of job bundles currently being processed, and information as to job bundles in a processing queue, the command center is able to determine which member devices are available. According to other embodiments, the command center performs an identification of available devices based on member devices satisfying defined constraints of the job bundle (such as processing capabilities).
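A minimal sketch of such an availability register follows, assuming a fixed heartbeat interval and a missed-beat tolerance; both values, and all names, are illustrative:

```python
import time

HEARTBEAT_INTERVAL = 30.0        # seconds between expected heartbeats (assumed)
MISSED_BEATS_BEFORE_OFFLINE = 3  # tolerance before a device is deemed unavailable

class AvailabilityRegister:
    """Tracks member-device heartbeats and which devices are busy
    processing threads of job bundles."""
    def __init__(self):
        self.last_seen = {}  # device_id -> timestamp of last heartbeat
        self.busy = set()    # device_ids currently processing a thread

    def heartbeat(self, device_id, now=None):
        self.last_seen[device_id] = now if now is not None else time.time()

    def available(self, now=None):
        """Devices with a recent heartbeat and no thread in progress."""
        now = now if now is not None else time.time()
        cutoff = HEARTBEAT_INTERVAL * MISSED_BEATS_BEFORE_OFFLINE
        return [d for d, t in self.last_seen.items()
                if now - t <= cutoff and d not in self.busy]
```

A device drops out of the available list either when its heartbeats go stale or while the register shows it occupied with a thread.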
  • method 500 then proceeds to operation 525, wherein the control center splits the job bundle into threads for processing by available member devices.
  • a job bundle is split into threads based on the determined value of one or more dimensions of the job bundle. For example, if a job bundle requires processing a total of 10 gigabytes of data, and each of the available member devices has only 1 GB of available storage, the command center may split the job bundle into ten or more threads, based on data volume and memory resources of the available devices as determined dimensions of the job bundle.
  • a job bundle is split into threads based on the determined aggregate processing cost of the job bundle or according to the member devices available to process the job.
  • the command center splits the job bundle into threads based on other factors, such as the deadline for completing the task.
  • the command center also generates a set of commands for the member devices.
  • the commands for the member devices may be expressed in terms of the preset functions specified by an API of the command center (for example, APIs 225a-225d shown in FIGURE 2), or preset functions utilized by a framework (for example, framework layers 440 and 445 shown in FIGURE 4) on member devices processing the job thread.
  • splitting the job bundle into threads at operation 525 can, according to certain embodiments of this disclosure, include encrypting sensitive or user-specific data to be distributed to member devices as part of a thread for processing. According to such embodiments, the data of the entity submitting the job is secured against possible thefts or hacking of member devices processing a thread of the job bundle.
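The 10 GB / 1 GB example above amounts to chunking the job's data volume against per-device memory; a sketch, with byte-range threads as an assumed representation:

```python
import math

def split_job(total_bytes: int, device_capacity_bytes: int):
    """Split a job's data volume into per-thread byte ranges no larger
    than a device's available memory; the 10 GB / 1 GB example above
    yields ten threads."""
    n_threads = math.ceil(total_bytes / device_capacity_bytes)
    chunk = math.ceil(total_bytes / n_threads)
    ranges = []
    start = 0
    while start < total_bytes:
        end = min(start + chunk, total_bytes)
        ranges.append((start, end))
        start = end
    return ranges

GB = 10**9  # decimal gigabyte, for the worked example
```

Each returned (start, end) range would become one thread's share of the data; uneven totals simply produce a shorter final chunk.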
  • each of the determined threads is apportioned to an available member device.
  • such apportionment comprises: a.) specifically assigning each of the determined threads to a member device; and b.) creating or updating a registry or data structure recording the apportionment of each thread of a job bundle to a specific member device.
  • this data structure is maintained at the command center.
  • the data structure recording the assignment or apportionment of job threads to member devices may be maintained across multiple machines as a safeguard against failure or loss of a copy of the registry at the command center.
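Steps a.) and b.) above can be sketched as a small registry keyed by job and thread identifiers; the snapshot method stands in for the replication to other machines described here (all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ApportionmentRegistry:
    """Records which member device each thread of a job bundle was
    apportioned to; copies can be held on multiple machines as a
    safeguard against loss of the command center's copy."""
    assignments: dict = field(default_factory=dict)  # (job_id, thread_id) -> device_id

    def apportion(self, job_id: str, thread_id: str, device_id: str):
        self.assignments[(job_id, thread_id)] = device_id

    def device_for(self, job_id: str, thread_id: str):
        return self.assignments.get((job_id, thread_id))

    def snapshot(self) -> dict:
        """Point-in-time copy suitable for replication elsewhere."""
        return dict(self.assignments)
```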
  • the command center transmits each of the apportioned threads to a secure processing environment (for example, SPEs 450 and 455 shown in FIGURE 4) of member devices comprising processing nodes of a distributed network.
  • the apportioned threads are transmitted via a network interface of the command center, to a network, and then to a framework on each of the member devices.
  • the apportioned threads are transmitted through a control channel provided by the API.
  • certain embodiments transmit one or more control parameters, such as a deadline for completing the processing task or failure/interrupt parameters corresponding to each thread of the apportioned threads.
  • the command center also transmits a copy of the command center’s public key for encryption, so that, upon completing processing a thread, the member devices can encrypt the results of their processing before transmitting the completed thread back to the command center.
  • threads for execution may be separately transmitted and triggered for execution.
  • the command center may transmit the threads to the member devices through a control channel provided by APIs at the control center.
  • the threads may then be received at the member devices and pre-loaded into the secure processing environments of the member devices.
  • upon satisfaction of a predetermined condition (for example, receiving confirmation that a threshold number of threads have been successfully received and loaded in member devices' SPEs),
  • the command center may then issue a call stack, triggering execution of each of the received threads.
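The preload-then-trigger sequence described above can be sketched as follows; the threshold condition, and the trigger returning the list of threads to execute, are illustrative simplifications of the call stack the command center issues:

```python
class TwoPhaseDispatcher:
    """Pre-load threads into member-device SPEs, then trigger execution
    only once a threshold number of load confirmations has arrived."""
    def __init__(self, thread_ids, threshold: int):
        self.pending = set(thread_ids)  # transmitted, not yet confirmed loaded
        self.loaded = set()             # confirmed loaded into an SPE
        self.threshold = threshold

    def confirm_loaded(self, thread_id) -> bool:
        """Record a member device's confirmation; returns readiness."""
        if thread_id in self.pending:
            self.pending.discard(thread_id)
            self.loaded.add(thread_id)
        return self.ready()

    def ready(self) -> bool:
        return len(self.loaded) >= self.threshold

    def trigger(self):
        """Issue the execution trigger for every pre-loaded thread once
        the predetermined condition holds."""
        if not self.ready():
            raise RuntimeError("threshold of loaded threads not yet met")
        return sorted(self.loaded)
```

Separating transmission from triggering lets the command center start all threads nearly simultaneously, rather than as each happens to arrive.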
  • FIGURE 6 illustrates operations of a method 600 for secure distributed processing across networks of heterogeneous processing nodes at a member device according to certain embodiments of this disclosure.
  • method 600 begins at operation 605, wherein a member device receives an apportioned thread from a command center via a network.
  • a member device may receive an apportioned thread from a command center indirectly, such as via a connection with another member device (for example, the connection between frameworks 440 and 445 shown in FIGURE 4).
  • each member device may only receive a single thread of a job bundle from the command center, or member devices may receive multiple threads of a job bundle from the command center.
  • each member device receives one or more control parameters associated with processing the thread from a command center.
  • the control parameter may specify at least one of a deadline for completion of the apportioned thread, failure/interrupt instructions for processing the apportioned thread, or encryption/decryption instructions for transmitting the thread to the command center upon completion.
  • the member device processes the apportioned thread in the secure processing environment of the member device according to the control parameter received from the command center. Further, at operation 620, the member device provides the processed thread to the command center via a network. According to certain embodiments, as part of transmitting the completed job thread back to the command center, the member device may sign or encrypt the completed thread using the public key of the command center, or attach some other indicium of trustworthiness demonstrating to the command center that the processed thread was received from the member device to which it was assigned.
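One way to sketch such an indicium of trustworthiness follows. The disclosure describes signing or encrypting with the command center's public key; since the Python standard library has no public-key primitives, an HMAC over an assumed per-device shared key stands in here, and all names are illustrative:

```python
import hashlib
import hmac
import json

def package_completed_thread(result: dict, device_id: str, device_key: bytes) -> dict:
    """Member-device side: attach an authenticity tag to a processed
    thread before returning it to the command center. HMAC substitutes
    for the public-key signing described in the disclosure."""
    payload = json.dumps(result, sort_keys=True).encode()
    tag = hmac.new(device_key, device_id.encode() + payload,
                   hashlib.sha256).hexdigest()
    return {"device_id": device_id, "result": result, "tag": tag}

def verify_completed_thread(message: dict, device_key: bytes) -> bool:
    """Command-center side: confirm the processed thread came from the
    member device to which it was assigned, unmodified in transit."""
    payload = json.dumps(message["result"], sort_keys=True).encode()
    expected = hmac.new(device_key, message["device_id"].encode() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

Tampering with either the result or the claimed device identity invalidates the tag, which is the property the command center relies on when accepting completed threads.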
  • the security of the system is backstopped by the security in the top layer of the I/O stack (for example, the I/O stack shown in FIGURE 4), that is, by the application-layer security provided through the secure processing environment of the member device.
  • the security of the system for distributed processing is enforced because the secure processing environment of each member device is inaccessible to other processes executing on the member device.
  • the security of the system may be further enforced by encrypting content entering and exiting the SPE, and the SPE only accepting content using the command set of the command center’s APIs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Abstract

A distributed processing method includes receiving a job bundle at a command center comprising a processor, a network interface and a memory. The method includes determining a value of a dimension of the job bundle, determining, based on a predetermined rule applied to the determined value of the dimension of the job bundle, an aggregate processing cost for the job bundle, and identifying one or more available member devices communicatively connected to the command center via the network interface. Further, the method includes splitting the job bundle into one or more threads based on at least one of the determined value of the dimension, the aggregate processing cost, or the available member devices, apportioning a thread of the one or more threads to a member device, and transmitting, via the network interface, the apportioned thread to a secure processing environment of the member device.
EP19711473.9A 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène Withdrawn EP3759600A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23203903.2A EP4328750A3 (fr) 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862637267P 2018-03-01 2018-03-01
PCT/US2019/019896 WO2019169035A1 (fr) 2018-03-01 2019-02-27 Système et procédé de traitement distribué sécurisé dans des réseaux de nœuds de traitement hétérogènes

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP23203903.2A Division EP4328750A3 (fr) 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène

Publications (1)

Publication Number Publication Date
EP3759600A1 true EP3759600A1 (fr) 2021-01-06

Family

ID=65802176

Family Applications (2)

Application Number Title Priority Date Filing Date
EP23203903.2A Pending EP4328750A3 (fr) 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène
EP19711473.9A Withdrawn EP3759600A1 (fr) 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP23203903.2A Pending EP4328750A3 (fr) 2018-03-01 2019-02-27 Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène

Country Status (5)

Country Link
EP (2) EP4328750A3 (fr)
CN (1) CN111936975A (fr)
BR (1) BR112020017898A2 (fr)
MX (1) MX2020009034A (fr)
WO (1) WO2019169035A1 (fr)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9524192B2 (en) * 2010-05-07 2016-12-20 Microsoft Technology Licensing, Llc Distributed workflow execution
CN106933669B (zh) * 2015-12-29 2021-01-08 伊姆西Ip控股有限责任公司 用于数据处理的装置和方法
US9935893B2 (en) * 2016-03-28 2018-04-03 The Travelers Indemnity Company Systems and methods for dynamically allocating computing tasks to computer resources in a distributed processing environment
US10877816B2 (en) * 2016-04-20 2020-12-29 Samsung Electronics Co., Ltd. Optimal task scheduler

Also Published As

Publication number Publication date
BR112020017898A2 (pt) 2020-12-22
EP4328750A2 (fr) 2024-02-28
WO2019169035A1 (fr) 2019-09-06
EP4328750A3 (fr) 2024-03-20
CN111936975A (zh) 2020-11-13
MX2020009034A (es) 2021-01-08

Similar Documents

Publication Publication Date Title
US20210224241A1 (en) Distributed Storage of Metadata For Large Binary Data
EP3798833A1 (fr) Procédés, système, articles de fabrication et appareil pour gérer des données télémétriques dans un environnement de bord
US10853142B2 (en) Stateless instance backed mobile devices
US10868866B2 (en) Cloud storage methods and systems
US10560465B2 (en) Real time anomaly detection for data streams
Morabito et al. LEGIoT: A lightweight edge gateway for the Internet of Things
US10476985B1 (en) System and method for resource management and resource allocation in a self-optimizing network of heterogeneous processing nodes
US9882985B1 (en) Data storage path optimization for internet of things computing system
US11403199B2 (en) Secure detection and correction of inefficient application configurations
CN107645483B (zh) 风险识别方法、风险识别装置、云风险识别装置及系统
EP3446440A1 (fr) Découverte de réseau multi-étapes
US20190005411A1 (en) Flat representation of machine learning model
US10432731B2 (en) Electronic device and method of controlling sensors connected through network
US10761900B1 (en) System and method for secure distributed processing across networks of heterogeneous processing nodes
EP4328750A2 (fr) Systeme et procede de traitement distribué sécurisé sur les réseaux de noeuds de traitement hétérogène
Lee et al. Enabling actionable analytics for mobile devices: performance issues of distributed analytics on Hadoop mobile clusters
EP3777047B1 (fr) Système et procédé de gestion de ressources et d'attribution de ressources dans un réseau d'auto-optimisation de noeuds de traitement hétérogènes
Kim et al. Optimizing Logging and Monitoring in Heterogeneous Cloud Environments for IoT and Edge Applications
Μαλλιά-Στολίδου Internet of things: sensor function virtualization and end-to-end IoT cloud platforms
KR20220069806A (ko) 5g 기반 iot 환경을 위한 지능형 트러스트 인에이블러 시스템
CN117032876A (zh) 程序数据的访问方法、装置、存储介质及电子装置
Ngwenya Leveraging virtualization and cloud computing techniques for the next generation mobile communication networks

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200827

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220623

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20231018