WO2019204898A1 - Workload scheduling in a distributed computing environment based on an applied computational value - Google Patents

Workload scheduling in a distributed computing environment based on an applied computational value

Info

Publication number
WO2019204898A1
WO2019204898A1 (PCT/CA2019/000054)
Authority
WO
WIPO (PCT)
Prior art keywords
computing
computational
task
computing system
measured
Prior art date
Application number
PCT/CA2019/000054
Other languages
French (fr)
Inventor
Alexander Martin-Bale
Daniel Assouline
Original Assignee
10518590 Canada Inc.
Application filed by 10518590 Canada Inc. filed Critical 10518590 Canada Inc.
Publication of WO2019204898A1 publication Critical patent/WO2019204898A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/50 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3051 Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3058 Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3433 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment for load management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/04 Payment circuits
    • G06Q20/06 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
    • G06Q20/065 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/501 Performance criteria
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q2220/00 Business processing using cryptography

Definitions

  • The present disclosure relates generally to distributed computing, and more particularly to a system and method of scheduling computing tasks carried out by a computing system in a distributed computing environment.
  • the computing tasks in question may relate to block-chain based cryptocurrency applications, which carry out computationally intensive tasks in order to verify transactions recorded in the block-chain.
  • Distributed computing systems have been developed as an alternative to a standalone computing system.
  • Distributed systems comprise multiple interconnected autonomous computing systems, or “nodes”, in which each node performs a portion or share of an overall distributed computing task.
  • Distributed systems can be scaled up or scaled down in a manner not normally possible with a standalone computing system. For example, more computing systems can be added or ‘recruited’ to a distributed computing environment if a computing task is complex. Fewer computing systems may be needed when the computing task in question is less complex.
  • Each node may be configured differently from another node such that the corresponding computational power of one node may be different from the computational power of another node.
  • the present specification describes a system and method for scheduling computing tasks in a computing system within a distributed computing environment.
  • method to schedule computing tasks executed by a computing system in a distributed computing environment comprising: gathering system configuration data and system telemetry data of the computing system; generating a computing system profile of the computing system, the profile being computed based on the system configuration data and system telemetry data; calculating an applied monetary value of the computing system based on the system profile and marketplace valuations associated with at least one specified computational task; and selecting a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
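The method above can be illustrated with a short sketch. Everything here is a hypothetical stand-in: the field names (`hash_rate`, `cpu_utilization`), the shape of the profile, and the particular valuation formula are invented for illustration and are not taken from the specification.

```python
def build_profile(config, telemetry):
    # Base computational performance from the configuration data,
    # adjusted downward by measured utilization from the telemetry data
    base = config["hash_rate"]                  # hashes/s the hardware can deliver
    free = 1.0 - telemetry["cpu_utilization"]   # share of capacity actually available
    return {"effective_hash_rate": base * free}

def applied_value(profile, valuation):
    # Expected coins per day from the node's effective hash rate versus the
    # task's difficulty, converted to fiat at the exchange rate, net of fees
    coins_per_day = (profile["effective_hash_rate"] * 86400
                     / valuation["difficulty"]) * valuation["block_reward"]
    return coins_per_day * valuation["exchange_rate"] * (1 - valuation["fee"])

def select_task(config, telemetry, market_valuations):
    # Select the specified computational task whose applied monetary value
    # for this computing system is greatest
    profile = build_profile(config, telemetry)
    values = {t: applied_value(profile, v) for t, v in market_valuations.items()}
    return max(values, key=values.get)
```

A higher block reward on one currency can thus be outweighed by a better exchange rate or lower difficulty on another, which is the decision the claimed scheduler makes.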
  • the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply
  • BIOS Basic Input/Output System
  • the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
  • the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, application-specific integrated circuit (ASIC) devices.
  • the system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
  • system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured Swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, measured system network response time.
  • the measured system uptime is based on calculated percentage uptime for the immediately preceding 30 days from the date of measurement.
  • system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
  • the computing system profile is stored in a system profile database.
  • the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting of the computational performance based on the system telemetry data.
  • the computational performance is determined by detecting at least one hardware element associated with the computing system based on the hardware configuration data; determining, for each of the at least one hardware element, a performance potential for that hardware element; and calculating the computational performance based on the performance potential for each of the at least one hardware element.
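The per-element calculation just described can be sketched as follows. The element names and performance figures are invented for illustration and stand in for entries in the hardware profile database.

```python
# Hypothetical performance potentials (hashes/s) keyed by hardware element,
# standing in for the hardware profile database described in the text.
PERFORMANCE_POTENTIAL = {
    "cpu_i7": 5e6,        # general purpose processor
    "gpu_rx580": 3e8,     # graphics processing unit
    "asic_s9": 1.4e13,    # application-specific integrated circuit
}

def computational_performance(detected_elements):
    # Sum the stored performance potential of each detected hardware element;
    # unknown elements contribute nothing
    return sum(PERFORMANCE_POTENTIAL.get(e, 0.0) for e in detected_elements)
```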
  • the performance potential for each of the at least one hardware element is stored within a hardware profile database.
  • adjusting the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.
  • the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
  • the method further comprising updating the computing system profile based on at least one of updated system telemetry data and updated configuration data.
  • the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
  • the monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.
  • the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
  • the method further comprising receiving an updated marketplace valuation associated with the at least one computational task and updating the applied monetary value of the computing system.
  • the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task and the method further comprises: executing the first computational task when the first applied monetary value is greater than the second applied monetary value; and executing the second computational task when the second applied monetary value is greater than the first applied monetary value.
  • a system for scheduling computing tasks executed by a computing system in a distributed computing environment comprising: a plurality of computing systems, each computing system being operable to perform at least one specified computational task; a device manager operable to gather system configuration data and telemetry data of each computing system of the plurality of computing systems; a system profile database for storing a plurality of computing system profiles, the plurality of system profiles being generated based on the system configuration data and the system telemetry data of the plurality of the computing systems; a communication interface operable to receive marketplace valuations associated with at least one specified computational task; and a task manager operable to: calculate an applied monetary value of the computing system based on the computing system profile and marketplace valuations; and select a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
  • the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply
  • the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
  • the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, ASIC devices.
  • system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
  • system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured Swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, measured system network response time.
  • the measured system uptime is based on a calculated percentage uptime for the immediately preceding 30 days from the date of measurement.
  • the system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
  • the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting the computational performance based on the system telemetry data.
  • the computational performance is determined by detecting at least one hardware element associated with the computing system based on the hardware configuration data; determining, for each of the at least one hardware element, a corresponding performance potential for that hardware element; and calculating the computational performance based on the performance potential for each of the at least one hardware element.
  • system further comprising a hardware profile database for storing a performance potential for each of the at least one hardware element.
  • adjusting the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.
  • the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
  • the computing system profile is updated based on at least one of updated system telemetry data and updated configuration data gathered by the device manager.
  • the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
  • the monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.
  • the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
  • the communication interface is further operable to receive an updated marketplace valuation associated with the at least one computational task and the task manager is further operable to update the applied monetary value of the computing system.
  • the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task and the task manager is further operable to: execute the first computational task when the first applied monetary value is greater than the second applied monetary value; and execute the second computational task when the second applied monetary value is greater than the first applied monetary value.
  • the distributed computing environment comprises a plurality of network systems, each of the plurality of network systems having at least one computing system of the plurality of computing systems and being operable to send and receive electronic signals to and from any other network system of the plurality of network systems.
  • system further comprising a task supervisor operable to coordinate the execution, by the plurality of computing systems, of the at least one specified computing task.
  • Figure 1 is a system block diagram of a distributed computing system with a plurality of nodes
  • Figure 2 is a system block diagram of a node of the distributed computing system of Figure 1 ;
  • Figure 3 is a flowchart of a method to determine the applied computational value of a node in the distributed computing system of Figure 1.
  • A “node” encompasses a computing system connected to a computer network or a distributed computing network.
  • the terms “distributed computing network” and “distributed computing system” can be used interchangeably to describe an interconnected plurality of nodes operable to carry out a distributed computing task.
  • SETI@home is an Internet-based distributed computing project developed by the University of California, Berkeley and released in 1999.
  • the project allows members of the public to volunteer computing resources available in their own computers to assist with processing observational radio telescope data taken from the Arecibo radio telescope and the Green Bank Telescope to identify possible evidence of radio transmissions from extraterrestrial intelligence.
  • SETI@home sends small chunks of the data to home computers for analysis. If the project were carried out in a non-distributed manner, the computational task would be an extremely onerous one.
  • owing to their scalability and distributed nature, distributed computing systems have been structured and configured to operate as content distribution networks (CDNs) to distribute content such as images, text and live or on-demand audio and video data to large audiences spread across a large geographic area (e.g. regionally, and globally). While the recipients of such data perceive the distributed computing system as a unitary system, the geographically distributed nodes of the CDN allow the CDN to serve any requested data with minimal network delay, by transmitting the requested data using a node that is physically proximal to the requester.
  • the growth of machine learning and artificial intelligence applications also requires significant computational power to process large amounts of training data so as to suitably “train” such machine learning systems. Once these systems have been trained, such distributed computing systems may be relied upon to carry out so-called “big data” analysis tasks on large volumes of data to identify patterns and insights therefrom.
  • the verification process is generally conducted in a manner in which many networked computing systems compete with each other to be the first system to successfully verify a block of transactions for recordation into the block-chain.
  • the first computing system that successfully verifies a block of transactions is awarded a small amount of newly created digital currency called the “block reward”.
  • the verification process is sometimes referred to in the art as “mining”.
  • Multiple computing systems can be pooled together into a distributed computing system to form a “mining cluster” or “mining pool” so that the combined computational power of each computing system in the mining pool can translate to a higher likelihood of the pooled distributed computing system being the first to verify a block of transactions.
  • the applied computational value can be used to denote a calculated value, for example, in a fiat currency, of the equivalent net digital currency minable by a particular node in the distributed computing system based on current market exchange rates between the digital currency and the fiat currency, and based on the underlying mining or block reward, transaction or handling fees for conversion to a fiat currency, and the computational complexity and effort associated with mining the digital currency in question.
  • the applied computational value may also be applicable to the entire computing system or any subset of nodes thereof.
  • the applied computational value can be calculated by applying marketplace valuations of various digital currencies, their corresponding mining difficulty, mining reward, and transaction or handling fees to a calculated computational performance of a node (or the overall system or subset of nodes thereof), the computational performance of the node being determinable by considering the software and hardware resources of the node and its utilization. Scheduling of distributed computing tasks can subsequently be made based on a consideration of the relative applied computational values among various minable digital currencies as a decision-making factor.
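As a worked example of the kind of calculation described, with all numbers invented for illustration: one common way to estimate expected mining income is the node's share of total network hashing power times the daily coin issuance, converted to fiat net of fees. The specification does not fix a particular formula; this is one plausible instance.

```python
# Illustrative inputs only
node_hash_rate = 50e6        # hashes/s this node can contribute
network_hash_rate = 250e12   # total hashes/s across the currency's network
block_reward = 12.5          # coins awarded per verified block
blocks_per_day = 144         # roughly one block every ten minutes
exchange_rate = 8000.0       # fiat units per coin at the specified time
fee = 0.01                   # transaction/handling fee for conversion to fiat

# Expected coins per day = node's share of network power x daily coin issuance
coins_per_day = (node_hash_rate / network_hash_rate) * block_reward * blocks_per_day

# Applied computational value in fiat, net of conversion fees
applied_value_fiat = coins_per_day * exchange_rate * (1 - fee)
```

With these numbers the node's share is 2e-7 of the network, yielding about 3.6e-4 coins per day, or roughly 2.85 fiat units per day; repeating the calculation per currency and comparing the results is the scheduling decision the text describes.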
  • the concept of the applied computational value can be used in other contexts. For example, if a computing system has additional storage space or additional bandwidth (e.g.
  • Fig. 1 shown therein is a diagram of a distributed computing system or a distributed computing environment 100 operable to carry out distributed computing tasks.
  • the distributed computing system 100 includes a plurality of nodes. Illustrated in Fig. 1 are nodes labeled “Node-1” 130 and “Node-N” 140, corresponding to the number of nodes in the distributed computing system 100 being N.
  • the nodes of the distributed computing system 100 can be any device capable of processing data. As such, nodes belonging to the distributed computing system 100 can include, but are not limited to, desktop and laptop computers, set-top boxes, smart devices (e.g.
  • Nodes can also include mobile devices such as tablets, smartphones and smart watches. Each node can be configured to communicate with other nodes within the distributed computing system 100 over a communication network 150.
  • Fig. 1 depicts a single network 150 interconnecting the nodes of the distributed computing system 100
  • communication between nodes can take place across a number of networks, including one or a combination of public and private networks such as the Internet, a private local area network, or mobile data network provided by a cellular telephone provider.
  • a node may be a smartphone connected to the distributed computing system 100 by way of a network link that traverses a private data network managed by a mobile phone service provider (e.g. a cellular EDGE, UMTS, HSPA or LTE data network), and a public network such as the Internet.
  • a mobile phone service provider e.g. a cellular EDGE, UMTS, HSPA or LTE data network
  • public network such as the Internet.
  • the data link to the distributed computing system 100 can traverse a home or enterprise local area network (e.g. by 10/100/1000-base-T Ethernet network, or an IEEE 802.11 compliant wireless network), the network of an Internet Service Provider (ISP), and the Internet.
  • ISP Internet Service Provider
  • the distributed computing system 100 can include any number of separate network systems, each such network system being operable to
  • each network system has at least one node contained therein to form the nodes of the distributed computing network.
  • One or more of the nodes in the distributed computing system 100 can be designated as a task supervisor 110.
  • one node is designated as the task supervisor 110.
  • multiple nodes can operate together as a task supervisor 110.
  • the node that is designated as the task supervisor 110 is configured to assign computing tasks (e.g. tasks related to mining digital currency) to each of the remaining nodes and to coordinate the execution of a computationally intensive task by the nodes within the distributed computing system 100 in a distributed manner.
  • the number of nodes may vary over time so that nodes may be added or removed from the distributed computing system 100.
  • the task supervisor 110 can further be operated to keep a record of active and inactive nodes within the distributed computing system 100. Such records can be stored in the storage module 120.
  • All nodes connected to the network 150 may follow a suitable communication protocol to send and receive electronic communication between each other and the task supervisor 110.
  • Such communication can include announcing to the network 150 that the node has joined (i.e. a broadcast communication), or announcing that the node will be removed or disconnected.
  • the task supervisor 110 may divide a large computational task into smaller, more manageable tasks for distribution to the nodes within the distributed computing system 100 over the network 150 using the same communication protocol. Once the smaller computing tasks have been completed, the node that completed the smaller computing task can then return the output or result of the task back to the task supervisor 110.
  • the network 150 can include a data interface for facilitating communication with the data source.
  • the data provided by the market data source 160 can include current (e.g. real-time) and historical financial data as well as exchange rates of various digital and fiat currency pairs 160.
  • the market financial data can also include information specific to certain digital currencies such as an indication of the mining difficulty or computational complexity 161 associated with the currency and the“block reward” 162 available for verifying a block of transactions.
  • the market data source 160 may be supplied by a third party data provider such as a digital currency exchange service.
  • the market data source 160 can be configured to supply a stream of pricing data accessible by the task supervisor 110 and the nodes.
  • the data connection to the data source 160 is illustratively a low-latency data connection to enable the task supervisor 110 and the nodes to obtain the most up-to-date exchange data.
  • a storage module 120 is included.
  • the storage module 120 can be used by the task supervisor 110, the nodes or both the task supervisor 110 and nodes to store data and other information relevant to carrying out the distributed computing task.
  • the storage module 120 can be used to store system configuration data or system profiles (present and historic profile information) of each of the nodes that operates in the distributed computing system 100.
  • some storage capacity of the storage module 120 can be allocated to be the profile database 123 for storing system profiles.
  • the storage module 120 may also be used to store performance data associated with various software components and hardware components, in the device database 121.
  • the telemetry storage 124 is configurable to store telemetry or utilization data of the nodes of the distributed computing system 100.
  • a task rule set 122 can further be stored, which specifies additional conditions that determine whether or not a computing task should be scheduled based on a determination of whether execution of the computing task would affect a user’s experience of the node.
  • storing configuration and performance data in this way would enable the task supervisor 110, the node, or any interested party to determine the computational capability of a node or of the overall system, and correspondingly to determine the applied computational value of each node or of the distributed computing system 100.
  • the task supervisor 110 can store a record of completed tasks, pending tasks or both pending and completed tasks in the storage module 120.
  • the storage module 120 is configured as a standalone component, and can be implemented by a dedicated data storage device such as a network-enabled hard disk drive, a database, a cloud-based storage resource or a storage area network (SAN).
  • the storage module 120 can alternatively be integrated with the task supervisor 110 as“local” storage.
  • each of the nodes and the task supervisor 110 of the distributed computing system 100 can be said to correspond to a mining pool.
  • the combined computing power of the nodes within the mining pool can execute computing tasks to verify blocks within the block-chain and thereby gain an amount of digital currency as a reward, known as the“block reward”, as noted above.
  • the task supervisor 110 may operate an appropriate mining pool server software which communicates with each of the nodes connected to the distributed computing system 100 via network 150 as well as the digital currency network (not shown).
  • the Stratum pool server is one such item of pool server software executable by the task supervisor 110 to form a mining pool.
  • a single pool server can be configured to manage the mining of multiple digital currencies.
  • pool servers may concurrently be operating on the task supervisor 110, with the task supervisor 110 being operable from a single node or across multiple nodes, to support mining of multiple digital currencies.
  • mining for different currencies can be carried out concurrently, for example, by grouping nodes together so that one group of nodes mine a first digital currency that is different from another group of nodes tasked to mine a second digital currency. In other cases, all the nodes can be tasked to mine the same digital currency and can switch between digital currencies in a sequential manner.
  • each node in the mining pool is in electronic communication with the task supervisor 110 to receive a computing task identified by a “job ID” related to the execution of proof-of-work functions that are carried out to verify a block within the block-chain associated with a particular digital currency.
  • the concept of proof-of-work can encompass other types of proofs, including, but not limited to, proof-of-space, proof-of-stake, and proof-of-storage.
  • the computational task of each node is to carry out cryptographic hash functions specified by the task supervisor 110 to determine a unique nonce value associated with a block such that when that block is hashed with the nonce, a hash value having a specific set of leading zeros is generated.
  • the node can submit the results of the task and request additional tasks.
  • the task supervisor 110 can automatically provide a node with the next task upon receiving the results of a previous task.
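The nonce search described above amounts to the following toy loop. It is deliberately simplified: real mining protocols typically use double SHA-256 and a full difficulty target rather than a plain count of leading zero bits.

```python
import hashlib

def find_nonce(block_data: bytes, leading_zero_bits: int) -> int:
    # A hash below this target has at least the required leading zero bits
    target = 1 << (256 - leading_zero_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce would be submitted back to the supervisor
        nonce += 1
```

The expected number of attempts doubles with each additional required zero bit, which is how difficulty scales the computational effort and, through it, the applied computational value of a given node.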
  • Communication between nodes within the network 150 can be carried out in any suitable manner.
  • One technique that can be used includes transmitting mining tasks and results using encoded messages between nodes.
  • the encoded messages may contain data objects specifying the task to be carried out and data generated as a result of executing a specified distributed computing task.
  • the data objects can also be structured using known and interoperable data object formats.
  • One widely used data object format is the JavaScript Object Notation or JSON format.
  • the JSON data object format is flexible as it is both human and machine readable, suitable for network-based communication.
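As an illustrative sketch, a JSON-encoded task message of the kind described above might be constructed and parsed as follows; the field names (e.g. "job_id", "mining.notify") are hypothetical, loosely modelled on the Stratum protocol rather than specified by this disclosure:

```python
import json

# Hypothetical task message sent by the task supervisor to a node.
task_message = json.dumps({
    "id": 1,
    "method": "mining.notify",
    "params": {"job_id": "4f2a", "prev_hash": "00ab37c1", "difficulty": 16},
})

# A receiving node decodes the message and extracts the job to execute.
decoded = json.loads(task_message)
print(decoded["params"]["job_id"])  # → 4f2a
```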
  • the messages may be encoded using Extensible Markup Language (XML).
  • nodes within the distributed computing system 100 are operable to generate and process both JSON and XML-based data objects.
  • the messages may be encoded using Protocol Buffers.
  • nodes within the distributed computing system 100 are operable to generate and process any combination of these formats.
  • FIG. 2 is a generalized system block diagram of a computing system 200 operable as a node within the distributed computing system 100 of Fig. 1.
  • the computing system 200 includes a processor module 204, communication module 206, system device module 208, a task manager 210, a memory module 212 and a device manager 214.
  • a node can be any device capable of completing a computational task, including computers, smartphones and any other suitable computing device.
  • the processor module 204 can include any suitable processing device or combination of different processing devices, including but not limited to, one or more of a general purpose processor, application processor, GPU, FPGA or ASIC.
  • the processor module 204 may be configured with a general purpose processor and several GPUs, FPGAs and/or ASICs.
  • a typical home computer can be equipped with a general purpose processor or central processing unit (CPU) installed within a main system board (i.e. motherboard) for executing operating system software, device drivers and user application programs.
  • the computer can be further equipped with specialized processing hardware to reduce the processing burden of the general purpose processor.
  • a separate graphics accelerator or “graphics card” having a graphics processing unit (GPU) can be connected to the motherboard to provide visual output to the user.
  • the graphics card can be equipped with one or more such GPUs that are optimized for rendering visual output.
  • the GPU or the general purpose processor, or both the GPU and the general purpose processor can be tasked to carry out the distributed computing task.
  • the processor module 204 may be part of an integrated system such as in a system-on-chip (SOC) architecture, in which all processing devices and other system components are integrated into a single integrated circuit.
  • the system may be a dedicated processing device designed to carry out specific computationally intensive tasks such as digital currency mining and can be equipped with several GPUs, FPGAs, ASICs or combinations thereof.
  • the computing system 200 of Fig. 2 includes a communication module 206 operable to enable the computing system 200 to connect to the distributed computing system 100 and communicate with the task supervisor 110 and other nodes 130 and 140 via the communication network 150.
  • the communication module 206 is equipped with a suitable communication interface, including but not limited to serial, USB, parallel, SATA, Bluetooth, WiFi network (e.g. compliant with the IEEE 802.11 family of standards), cellular (EDGE, UMTS, HSPA, or LTE), 10/100/1000-base-T Ethernet, as well as being operable under known communication protocols including, but not limited to, TCP, UDP, HTTP and HTTPS.
  • the communication module 206 is generally equipped with suitable physical layer components (i.e. the PHY device(s)) to enable generation of communication signals compliant with the communication interface and protocol in use.
  • a smartphone may be equipped with suitable communication processors, modulators, amplifiers and antenna systems to enable wireless communication over a regional cellular network, a localized WiFi network or a personal area network.
  • the communication module 206 of a desktop or a laptop computer may be equipped with the same wireless communication hardware as a smartphone and further equipped with hardware components to provide connectivity to a wired network.
  • the communication module 206 may include a network interface card with a physical interface compatible with 10/100/1000-base-T Ethernet.
  • the communication module 206 can be used to receive digital currency mining tasks and any data associated with that task from the task supervisor 110 and send the results of the computation back to the task supervisor 110. It is also understood that the communication module 206 is also operable to allow the computing system 200 to communicate with other devices outside of the distributed computing system 100.
  • the node can establish a data connection with an application server to receive software updates for the computing system 200.
  • the node can establish a data connection with a messaging server to send and receive messages intended for a user of the computing system 200.
  • the system device module 208 corresponds to hardware components and software components of the computing system 200 that are operable to generate outputs and receive inputs.
  • the system device module 208 includes software components (e.g. operating system, device drivers, and user software) and hardware components for receiving a user input via a human interface device (HID).
  • Such interface devices include, but are not limited to, button/toggle inputs, keyboard input, mouse input, microphone input, camera input, fingerprint reader input, ID card input, touch-screen input, RFID readers, stylus inputs and the like.
  • Output feedback generatable by the system device module 208 includes, but is not limited to, display outputs, light emitting indicator outputs, vibrational outputs, and audio outputs.
  • the system device module 208 may also be used to monitor the status of the computing system 200 based on the outputs of sensors that are a part of the computing system, such as light sensors, positional sensors, and location sensors.
  • the system device module 208 can measure and track parameters related to the operation of the computing system 200 including, but not limited to, the power consumption, temperature and rotational speed of cooling fans. A warning can be generated, for example, if the system temperature is in excess of a predefined value, which can indicate overheating or other system faults.
  • the system device module 208 may also oversee power management of the node. For example, if the node is powered using a battery, the system device module 208 may include suitable power management components for charging the battery pack or for managing power consumption by adjusting the power consumption of various system devices. In the case of a smartphone or a tablet device, for example, the brightness of the display screen can be adjusted by the system device module 208 to reduce power consumption by the display system.
  • the memory module 212 is configurable for long term, short term or both long term and short term storage of data including, but not limited to, applications, application data, user data, operating system software, device drivers and the like.
  • the memory module 212 can include persistent or non-transitory memory for long-term data storage and temporary memory for short-term data storage.
  • Persistent memory may include diskettes, optical disks, tapes, hard disk drives, and solid state memory such as ROM, EEPROM and flash memory.
  • Short-term memory can include random access memory (RAM) and cache memory.
  • Data received from the task supervisor 110 may be stored in the persistent memory until it is processed by the computing system 200. During the processing operation, the data may be transferred from the persistent memory to the temporary memory for processing. The result of the processing task can be written to the persistent memory for storage prior to transmission to the task supervisor 110.
  • the device manager 214 is operable to coordinate the operation of the hardware and software components that make up the system device module 208 and monitor the node to generate a system profile 216 to indicate the system configuration and telemetry or utilization of at least one system resource of the node.
  • the node can be monitored, for example, by reading the data provided by the system device module 208 or by directly querying the devices.
  • the system configuration information may indicate the hardware configuration of the node, including the amount of memory available in the memory module 212, the type of processors available in the processor module 204, bandwidth of the communication interface 206, network information (e.g. IP address, network provider and physical location of the node), connected/attached peripherals, and other hardware resources available at the node.
  • the profile can further indicate the software configuration of the node, including the version of the operating system, virtual machine status (e.g. that the node is a virtual machine, if applicable), whether the system is a part of an enterprise network, a listing of specific software libraries (e.g. OpenGL, OpenCL, DirectX, runtime libraries etc. and their version numbers).
  • the utilization of system resources can include, but is not limited to, any one or more of measured utilization of the processor module 204 (general processor, GPU etc.), measured utilization of the memory module 212 (including measured disk utilization), measured system temperature, measured system fan speed (if applicable), measured network bandwidth usage, measured system network response time, measured power consumption and system events (e.g. startup, shutdown, kernel panic, memory fault, hardware addition/removal etc.).
  • the system profile 216 further indicates additional parameters including, but not limited to, any one or more of the device type and model number (e.g. laptop, desktop, tablet device or smartphone, and the corresponding model and make of the device can also be indicated), power status (e.g. powered by battery or AC main line), and operating state.
  • the utilization data can collectively be regarded as the telemetry data of the node.
  • a task manager 210 can be deployed to manage the execution of the tasks by the computing system 200.
  • the task manager 210 may be generally configured to calculate an applied computational value of the computing system 200 based on the system profile, the computational complexity of the underlying proof-of-work tasks, and marketplace valuation data obtainable from the market data source 160. The task manager 210 can then select whether or not to schedule the task received from the task supervisor 110 for execution by the processor module 204. Alternatively, if the task manager 210 is presented with a number of different tasks for execution, the task manager 210 may schedule the task based on the calculated applied computational value of the computing system 200 associated with the task. Details of the determination are described below.
  • the task manager 210 may be implemented in a number of ways. In some embodiments,
  • the task manager 210 may be a process executed by the processor module 204 of the computing system 200. In this case, an instance of the task manager 210 may be active in each node within the distributed computing system 100.
  • the task manager 210 can be implemented as software application executable on a computer, tablet device or smartphone.
  • the task manager 210 may also be configured to coordinate communication between the node and the distributed computing system 100.
  • the task manager 210 may access the communication module 206 to communicate with the task supervisor 110 over the network 150 to announce that the node on which the task manager 210 is operating is connected to the distributed computing system 100 and is able to receive task execution requests from the task supervisor 110.
  • the task manager 210 can also notify the task supervisor 110 whether a particular task request can be scheduled for execution or not (i.e. whether the request can be fulfilled), based on the decision making process described in greater detail below.
  • the task manager 210 may be integrated into a user application such as a news aggregation program, or an anti-virus/anti-malware software.
  • an option may be provided in the user application to enable the user to use the application on a paid subscription basis or on a revenue- supported basis.
  • the revenue-supported basis may include displaying paid advertisements to the user to earn advertising revenue.
  • the software developer may request that the user join a distributed computing system to share unused computing resources, enabling the software developer to carry out computing tasks such as mining for digital currencies to the benefit of the software developer.
  • the type of digital currencies to be mined can be determined based on the applied computational value of the node or of the entire distributed computing system 100 for a particular currency.
  • the task manager 210 may be implemented in hardware as an embedded program “hardcoded” in an FPGA or ASIC device. This alternative implementation may be useful in nodes that are designed specifically to carry out distributed computing tasks. For example, in digital currency mining, the design of dedicated ASIC mining devices or “mining rigs” can be simplified by integrating the task manager 210 as embedded hardware code since a separate general purpose processor would not be needed.
  • the task manager 210 and task supervisor 110 may be embodied in the same node within the distributed computing system 100. This configuration may be useful if many or all of the nodes within the distributed system 100 are part of the same local area network, forming a network cluster. A single, unified task supervisor 110 and task manager 210 may directly carry out determinations, for each node in the cluster, in respect of whether or not a distributed computing task should be scheduled for execution by a particular node within the cluster.
  • the device manager 214 can be operated to gather system telemetry data corresponding to the utilization of the various system resources and system status. For example, the user of a computer system may step away from the computer to run an errand. Accordingly, the system profile 216 can be updated from time to time or periodically with configuration and telemetry data to reflect the current state of the node. In one embodiment, the device manager 214 is configured to update the system profile 216 upon detecting a change in at least one system resource, such as the removal or addition of any hardware or software component. In another embodiment, the system profile 216 can be updated at a predetermined time interval. For example, in some configurations, the system profile 216 is updated every 60 seconds, once per hour or at another suitable time period.
  • the system profile 216 is continuously updated so that the task manager 210 repeatedly samples the state of the node. In some embodiments, the system profile 216 is updated upon receiving a task request from the task supervisor 110 so that the burden on the device manager 214 can be reduced.
  • system information and telemetry data stored in the system profile 216 can be collected by the device manager 214. This information may be used to determine usage habits or usage patterns of the node, indicating when processing power is available to the task supervisor 110. This “behavioural” information can also be saved to the system profile 216 to assist the task manager 210 in determining how frequently a particular node can be used to carry out the specified distributed computing task.
  • the raw system usage data can be analyzed to determine, for example, a measured percentage uptime for the node.
  • the percentage uptime for example, can be calculated for the preceding 30 days, or any suitable temporal window, from the date of measurement.
  • Other telemetry data can include the percentage or amount of time the node remains “idle” over a given period of time and available to be used to carry out computing tasks for the task supervisor 110.
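A minimal sketch of the percentage-uptime calculation described above, assuming uptime is recorded as a list of (start, end) intervals (an assumption made for the example; the disclosure does not specify a storage format):

```python
from datetime import datetime, timedelta

def percent_uptime(up_intervals, window_days=30, now=None):
    """Percentage of the trailing window during which the node was powered on.

    up_intervals: iterable of (start, end) datetime pairs when the node was up.
    """
    now = now or datetime.utcnow()
    window_start = now - timedelta(days=window_days)
    up_seconds = 0.0
    for start, end in up_intervals:
        s, e = max(start, window_start), min(end, now)  # clip to the window
        if e > s:
            up_seconds += (e - s).total_seconds()
    return 100.0 * up_seconds / timedelta(days=window_days).total_seconds()
```

For instance, a node that was up for 15 of the preceding 30 days yields a percentage uptime of 50%.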
  • FIG. 3 is a flowchart 300 of a method of scheduling specified computing tasks for execution by a node (i.e. computing system) within the distributed computing system 100.
  • the node is monitored to gather system-level information including system configuration data and telemetry or utilization data of the node.
  • one way of gathering telemetry and configuration data is by using the device manager 214.
  • Other suitable methods of gathering telemetry and configuration data can also be employed according to other techniques known in the art.
  • the monitoring process includes detecting one or more hardware components installed in the node. Each detected hardware element can be identified by the element’s manufacturer, model number, version number or build number. Corresponding software components can similarly be detected. Software components may be relevant to the determination of the applied monetary value since newer software versions may operate more efficiently relative to an earlier version and/or be able to make use of a particular piece of hardware more efficiently to provide a higher performance output.
  • the system-level configuration information and telemetry data obtained during the previous step can be saved into a system profile 216.
  • the system profile 216 specifies hardware configurations, software configurations, data and telemetry or utilization data of the node.
  • the configuration data is used to determine a performance potential of the node.
  • Each hardware and software component can be associated with a metric indicating the performance potential for that hardware or software component.
  • the performance potential can be, for instance determined by carrying out a suitable benchmarking test.
  • the benchmarking test can be carried out by the operators of the distributed computing system 100 or by a third party testing facility.
  • Hardware and software performance data can be stored as hardware profiles in the device database 121 in storage module 120, for example, for later use. The collection of performance data can be updated as new hardware and software components are made available.
  • the performance potential of the overall node can be determined based on the performance potential of each of the constituent hardware and software elements.
  • a particular node may be a desktop computer with an x86-based general purpose processor, a GeForce-based graphics card from Nvidia®, and DDR3 RAM memory, operating on a Windows® operating system.
  • the performance potential of each of the elements is aggregated to generate a performance potential of the overall node. For example, if the node is a relatively “older” computer which has a graphics card that does not include computational features, a weight value for the GPU would be 0 (zero). Likewise, some systems may have operating systems or libraries which may not support leveraging additional computing power from the GPU.
  • the GPU weighting value would be 0 also.
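The weighted aggregation described above can be sketched as follows; the component names, potentials and weights are hypothetical values chosen for illustration:

```python
# Hypothetical per-component performance potentials (hashes per second) drawn
# from a hardware profile database, and weights reflecting software support.
component_potentials = {"cpu": 2_000_000, "gpu": 25_000_000}
weights = {"cpu": 1.0, "gpu": 0.0}  # older GPU without compute support: weight 0

# Overall node potential is the weighted sum of its component potentials.
node_potential = sum(p * weights[name] for name, p in component_potentials.items())
print(node_potential)  # → 2000000.0
```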
  • the performance potential can be represented in a manner suitable for assessing performance in respect of mining digital currencies.
  • the performance potential can be represented as a hash rate, or the number of hash calculations that can be performed by the node per second.
  • the performance potential may also be expressed in other metrics, such as FLOPS (Floating Point Operations Per Second) or MIPS (Millions of Instructions Per Second).
  • the performance potential of the overall node represents an“idealized” performance of the system, in which all of the available computing resources of a node are available for executing a computing task including, but not limited to, mining digital currencies.
  • the node may not always perform at the full performance potential for various reasons relating to use and manufacturing variabilities of the hardware components.
  • as one example, if benchmarking indicates that a given CPU model attains only 62.5% of its theoretical performance, an adjustment factor of 0.625 can be applied to all models of that CPU.
  • Applying such a correction factor can account for various factors including unknown factors, which might affect devices from attaining their theoretical performance levels.
  • the user of the node may be logged on and running user-applications.
  • Various pieces of background or ad hoc software such as anti-virus software, operating system software and device drivers, or specific applications, may use up memory and processor resources.
  • the node may have a particular percentage uptime suggesting that at certain times, the node would not be available to carry out mining activities.
  • the uptime, utilization patterns etc. can be obtained from the telemetry data collected by the device manager 214.
  • the performance potential is adjusted by applying an adjustment model to adjust the performance potential to calculate a computational performance of the node that corresponds to a real-world performance metric.
  • Telemetry data specific to the hardware and/or software elements of the node is used to carry out the adjustment calculation.
  • the adjustment model is trained using system profiles corresponding to the nodes of the distributed computing system 100. For example, a linear regression model based on a distributed forest, with known system attributes as features may be used.
  • the computational performance can similarly be represented in terms of a computing hash rate. Representation of computing performance using hash rates in the digital currency context can better facilitate determination of an applied computational value associated with the node.
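A minimal sketch of the adjustment step, assuming the adjustment model reduces to a per-device benchmarking factor and a telemetry-derived availability fraction (a trained model such as the regression model mentioned above would be more involved; this decomposition is an assumption for the example):

```python
def computational_performance(potential_hashrate, device_factor, availability):
    """Adjust an idealized hash-rate potential toward a real-world figure.

    device_factor: per-device correction from benchmarking (e.g. 0.625)
    availability: fraction of time telemetry shows the node idle and available
    """
    return potential_hashrate * device_factor * availability
```

For example, a node with an idealized potential of 1000 hashes/s, a 0.625 device correction and 80% availability would be credited with 500 hashes/s of real-world computational performance.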
  • marketplace data for various digital currencies are received, for example, from the market data source 160.
  • the applied computational value is calculated using the marketplace data of various digital currencies received at step 310 and the computational performance.
  • the marketplace valuation data generally includes coin exchange rates between a particular digital currency that can be mined by the node and one or more fiat currencies, the level of difficulty in mining the digital currency, a block reward associated with the digital currency, and any transaction or handling fee associated with converting the mined digital currency to the desired fiat currency.
  • the hash rate corresponding to the computational performance of the node can be used to estimate an amount of digital currency that can potentially be generated (i.e. mined) over a given time period taking into account of information such as the block difficulty and block reward.
  • the equivalent value in a fiat currency exchangeable at a specified time can therefore be calculated based on the estimated amount of digital currency the node is capable of mining. Since the marketplace data provide valuations of various digital currencies, the applied computational valuation can therefore encompass valuations corresponding to multiple digital currencies so that a selection of a particular digital currency for mining can be carried out.
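By way of example, the estimate described above can be sketched using a Bitcoin-style difficulty scheme, in which difficulty × 2^32 approximates the expected number of hashes per block; this formula and the transaction-fee handling are assumptions for illustration, not details taken from this disclosure:

```python
def applied_value_per_day(hashrate, difficulty, block_reward, exchange_rate, fee_rate=0.0):
    """Estimated fiat value mined per day under a Bitcoin-style difficulty scheme.

    hashrate: adjusted computational performance of the node, in hashes/s
    difficulty: network difficulty (expected hashes per block ≈ difficulty * 2**32)
    exchange_rate: fiat units per coin; fee_rate: conversion/handling fee fraction
    """
    expected_coins = hashrate * 86_400 / (difficulty * 2**32) * block_reward
    return expected_coins * exchange_rate * (1.0 - fee_rate)
```

Computing this value once per candidate currency gives the per-currency valuations used in the selection step that follows.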
  • a specified computational task for execution is selected from a number of available tasks.
  • the available tasks can correspond to mining tasks associated with different currency types. Since the same node can have different applied computational values associated with different digital currencies, one manner of selecting a mining task is to select based on a digital currency associated with the highest applied computational value.
  • the applied computational value may be used to determine which nodes are assigned mining tasks. For example, nodes with a higher applied computational value may be given more mining tasks relative to nodes with a lower corresponding applied computational value. In other implementations, nodes with higher applied computational values may be given priority in respect of being given mining tasks compared to nodes with lower applied computational values.
  • the applied computational value of the node may fluctuate over time, and vary in relation to different digital currencies, in addition to variations in available computing power, for example, as a result of“real-world” usage.
  • the applied computational value of a node represents valuations of two different digital currencies
  • mining for the digital currency associated with the higher valuation may be executed. If valuations change, and the previously higher-valued digital currency becomes the lower-valued digital currency, then mining of the previously lower-valued digital currency (now the higher-valued currency) would be executed.
  • the valuation of each digital currency can be used to determine a level of priority given to the execution of mining tasks associated with the respective digital currency. Accordingly, the scheduling of specified computing tasks can be carried out by considering the applied computational value to provide an optimal return on use of the available computing resources.
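The selection rule described above (choosing the currency with the highest applied computational value) can be sketched as follows; the currency names and values are hypothetical:

```python
# Hypothetical applied computational values (fiat value per day) computed for
# one node against three candidate digital currencies.
applied_values = {"currency_a": 1.84, "currency_b": 2.31, "currency_c": 0.97}

# Schedule the mining task whose digital currency carries the highest value.
selected = max(applied_values, key=applied_values.get)
print(selected)  # → currency_b
```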

Abstract

A method and system to schedule computing tasks for execution by a computing system in a distributed computing environment are presented. The tasks are scheduled for execution by gathering system configuration data and system telemetry data of the computing system. A computing system profile of the computing system is generated based on the system configuration data and system telemetry data that were gathered to indicate a computational performance. Calculation of an applied monetary value of the computing system in association with at least one specified computing task is made based on the computational performance and marketplace valuations associated with the at least one specified computational task. A specified computational task is selected from the at least one specified computational task for execution based on the applied monetary value of the computing system.

Description

WORKLOAD SCHEDULING IN A DISTRIBUTED COMPUTING ENVIRONMENT BASED ON AN APPLIED COMPUTATIONAL VALUE
[0001] The present application claims priority from U.S. patent application Serial No.
62/663,097 filed on April 26, 2018.
FIELD OF INVENTION
[0002] The present disclosure relates generally to distributed computing, and more particularly to a system and method of scheduling computing tasks carried out by a computing system in a distributed computing environment. By way of example, the computing tasks in question may relate to block-chain based cryptocurrency applications that carry out computationally intensive tasks in order to verify transactions recorded in the block-chain.
BACKGROUND
[0003] While the computational power of modern day computing devices has generally increased over time, the computing capabilities of a single computing device can still be limited when tasked to handle applications or programs of high complexity, such as processing large amounts of data in the context of machine learning, conducting probabilistic simulations (e.g. Monte Carlo simulations), graphical renderings (e.g. Nvidia Mental Ray) and the like. As such, it may not be feasible for a single computing device to complete a desired computing task within a reasonable time frame. Complex computing tasks may be addressed by constructing larger computing devices with more system memory and more processors to obtain increased computing capacity. However, the scalability and cost of doing so may be prohibitive.
[0004] Distributed computing systems have been developed as an alternative to a standalone computing system. Distributed systems comprise multiple interconnected autonomous computing systems, or “nodes”, in which each node performs a portion or share of an overall distributed computing task. Distributed systems can be scaled up or scaled down in a manner not normally possible with a standalone computing system. For example, more computing systems can be added or ‘recruited’ to a distributed computing environment if a computing task is complex. Fewer computing systems may be needed when the computing task in question is less complex. Each node may be configured differently from another node such that the corresponding computational power of one node may be different from the computational power of another node.
SUMMARY OF THE DISCLOSURE
[0005] In general, the present specification describes a system and method for scheduling computing tasks in a computing system within a distributed computing environment.
[0006] According to a first broad aspect, there is provided a method to schedule computing tasks executed by a computing system in a distributed computing environment, comprising: gathering system configuration data and system telemetry data of the computing system; generating a computing system profile of the computing system, the profile being computed based on the system configuration data and system telemetry data; calculating an applied monetary value of the computing system based on the system profile and marketplace valuations associated with at least one specified computational task; and selecting a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
[0007] In at least one embodiment, the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply
configuration, sensor configurations, age of first installed piece of software, Basic Input/Output System (BIOS) version, operating system information, installed software libraries, virtual machine status, software versions.
[0008] In at least one embodiment, the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
[0009] In at least one embodiment, the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, application-specific integrated circuit (ASIC) devices. [0010] In at least one embodiment, the system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
[0011] In at least one embodiment, the system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured Swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, measured system network response time.
[0012] In at least one embodiment, the measured system uptime is based on calculated percentage uptime for the immediately preceding 30 days from the date of measurement.
[0013] In at least one embodiment the system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
[0014] In at least one embodiment, the computing system profile is stored in a system profile database.
[0015] In at least one embodiment, the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting the computational performance based on the system telemetry data.
[0016] In at least one embodiment, the computational performance is determined by detecting at least one hardware element associated with the computing system based on the hardware configuration data; determining, for each of the at least one hardware element, a performance potential for that hardware element; and calculating the computational performance based on the performance potential for each of the at least one hardware element.
[0017] In at least one embodiment, the performance potential for each of the at least one hardware element is stored within a hardware profile database.
[0018] In at least one embodiment, adjusting the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.

[0019] In at least one embodiment, the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
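The determination and adjustment of computational performance described in the embodiments above can be illustrated with a minimal sketch. The lookup table, telemetry fields, thresholds and figures below are all hypothetical assumptions, not taken from the application.

```python
# Illustrative sketch: each detected hardware element contributes a
# performance potential (modeled here as a dict standing in for the
# hardware profile database), which is adjusted by telemetry specific
# to that element.

HARDWARE_PROFILES = {            # stand-in for the hardware profile database
    "cpu_model_x": 50.0,         # hypothetical performance potential, MH/s
    "gpu_model_y": 400.0,
}

def adjust(potential, telemetry):
    # Simple adjustment model: scale potential by the unused fraction
    # and derate when the element runs hot (thresholds illustrative).
    adjusted = potential * (1.0 - telemetry["utilization"])
    if telemetry["temperature_c"] > 80:
        adjusted *= 0.9
    return adjusted

def computational_performance(elements, telemetry_by_element):
    # Sum the adjusted potential of every detected hardware element.
    return sum(
        adjust(HARDWARE_PROFILES[e], telemetry_by_element[e]) for e in elements
    )

perf = computational_performance(
    ["cpu_model_x", "gpu_model_y"],
    {
        "cpu_model_x": {"utilization": 0.5, "temperature_c": 60},
        "gpu_model_y": {"utilization": 0.1, "temperature_c": 85},
    },
)
print(perf)
```

In a trained embodiment, the fixed scaling rules in `adjust` would be replaced by a model fitted to profiles of other systems in the environment.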
[0020] In at least one embodiment, the method further comprises updating the computing system profile based on at least one of updated system telemetry data and updated configuration data.
[0021] In at least one embodiment, the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
[0022] In at least one embodiment, the monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.
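The dual expression of value described in the embodiments above can be sketched as follows. The revenue formula is a common first-order mining estimate assumed for illustration, and all figures are hypothetical.

```python
# Sketch of expressing the applied monetary value both as digital
# currency generatable over a predefined period and as an equivalent
# fiat amount at the current exchange rate.

def value_over_period(hash_rate, difficulty, block_reward, hours, fiat_rate):
    # First-order estimate: coins earned scale with the node's hash
    # rate divided by the task difficulty (illustrative model only).
    coins = hash_rate / difficulty * block_reward * hours
    return {"digital": coins, "fiat": coins * fiat_rate}

v = value_over_period(hash_rate=100.0, difficulty=1e6,
                      block_reward=12.5, hours=24, fiat_rate=8000.0)
print(v)
```

An updated marketplace valuation (new difficulty or exchange rate) simply re-enters this calculation to refresh the applied monetary value.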
[0023] In at least one embodiment, the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
[0024] In at least one embodiment, the method further comprises receiving an updated marketplace valuation associated with the at least one computational task and updating the applied monetary value of the computing system.
[0025] In at least one embodiment, the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task and the method further comprises: executing the first computational task when the first applied monetary value is greater than the second applied monetary value; and executing the second computational task when the second applied monetary value is greater than the first applied monetary value.
[0026] In at least one embodiment, the method further comprises coordinating execution of the at least one specified computational task by the computing system with at least one other computing system.
[0027] According to another broad aspect, there is provided a system for scheduling computing tasks executed by a computing system in a distributed computing environment, comprising: a plurality of computing systems, each computing system being operable to perform at least one specified computational task; a device manager operable to gather system configuration data and telemetry data of each computing system of the plurality of computing systems; a system profile database for storing a plurality of computing system profiles, the plurality of system profiles being generated based on the system configuration data and the system telemetry data of the plurality of the computing systems; a communication interface operable to receive marketplace valuations associated with at least one specified computational task; and a task manager operable to: calculate an applied monetary value of the computing system based on the computing system profile and marketplace valuations; and select a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
[0028] In at least one embodiment, the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply
configuration, sensor configurations, age of first installed piece of software, BIOS version, operating system information, installed software libraries, virtual machine status, software versions.
[0029] In at least one embodiment, the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
[0030] In at least one embodiment, the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, ASIC devices.
[0031] In at least one embodiment, the system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
[0032] In at least one embodiment, the system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured Swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, measured system network response time.
[0033] In at least one embodiment, the measured system uptime is based on a calculated percentage uptime for the immediately preceding 30 days from the date of measurement.

[0034] In at least one embodiment, the system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
[0035] In at least one embodiment, the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting the computational performance based on the system telemetry data.
[0036] In at least one embodiment, the computational performance is determined by detecting at least one hardware element associated with the computing system based on the hardware configuration data; determining, for each of the at least one hardware element, a corresponding performance potential for that hardware element; and calculating the computational performance based on the performance potential for each of the at least one hardware element.
[0037] In at least one embodiment, the system further comprises a hardware profile database for storing a performance potential for each of the at least one hardware element.
[0038] In at least one embodiment, adjusting the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.
[0039] In at least one embodiment, the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
[0040] In at least one embodiment, the computing system profile is updated based on at least one of updated system telemetry data and updated configuration data gathered by the device manager.
[0041] In at least one embodiment, the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
[0042] In at least one embodiment, the monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.

[0043] In at least one embodiment, the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
[0044] In at least one embodiment, the communication interface is further operable to receive an updated marketplace valuation associated with the at least one computational task and the task manager is further operable to update the applied monetary value of the computing system.
[0045] In at least one embodiment, the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task and the task manager is further operable to: execute the first computational task when the first applied monetary value is greater than the second applied monetary value; and execute the second computational task when the second applied monetary value is greater than the first applied monetary value.
[0046] In at least one embodiment, the distributed computing environment comprises a plurality of network systems, each of the plurality of network systems having at least one computing system of the plurality of computing systems and being operable to send and receive electronic signals to and from any other network system of the plurality of network systems.
[0047] In at least one embodiment, the system further comprises a task supervisor operable to coordinate the execution, by the plurality of computing systems, of the at least one specified computing task.
[0048] Additional aspects of the present invention will be apparent in view of the description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
[0049] Features and advantages of the embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the appended drawings, in which:
[0050] Figure 1 is a system block diagram of a distributed computing system with a plurality of nodes;

[0051] Figure 2 is a system block diagram of a node of the distributed computing system of Figure 1; and
[0052] Figure 3 is a flowchart of a method to determine the applied computational value of a node in the distributed computing system of Figure 1.
DETAILED DESCRIPTION
[0053] The description which follows, and the embodiments described therein, are provided by way of illustration of examples of particular embodiments of the principles of the present invention. These examples are provided for the purposes of explanation, and not limitation, of those principles and of the invention.
[0054] Throughout this specification, numerous terms and expressions are used in accordance with their ordinary meanings. Provided herein are definitions of some additional terms and expressions that are used in the description that follows. A “node” encompasses a computing system connected to a computer network or a distributed computing network. The terms “distributed computing network” and “distributed computing system” can be used interchangeably to describe an interconnected plurality of nodes operable to carry out a distributed computing task.
[0055] The benefits of distributed computing become apparent in large-scale computing projects such as SETI@home, an Internet-based distributed computing project developed by the University of California, Berkeley and released in 1999. The project allows members of the public to volunteer computing resources available in their own computers to assist with processing observational radio telescope data taken from the Arecibo radio telescope and the Green Bank Telescope to identify possible evidence of radio transmissions from extraterrestrial intelligence. Under the distributed computing structure, SETI@home sends small chunks of the data to home computers for analysis. If the project were carried out in a non-distributed manner, the computational task would be an extremely onerous one.
[0056] In other examples, owing to their scalability and distributed nature, distributed computing systems have been structured and configured to operate as content distribution networks (CDNs) to distribute content such as images, text and live or on-demand audio and video data to large audiences spread across a large geographic area (e.g. regionally, and globally). While the recipients of such data perceive the distributed computing system as a unitary system, the geographically distributed nodes of the CDN allow the CDN to serve any requested data with minimal network delay, by transmitting the requested data using a node that is physically proximal to the requester. The growth of machine learning and artificial intelligence applications also requires significant computational power to process large amounts of training data so as to suitably “train” such machine learning systems. Once these systems have been trained, such distributed computing systems may be relied upon to carry out so-called “big data” analysis tasks on large volumes of data to identify patterns and insights therefrom.
[0057] Distributed computing systems have also been deployed in block-chain based cryptocurrency applications to carry out computationally intensive tasks in order to verify transactions recorded in the block-chain. The transactions can represent instances in which digital currencies such as Bitcoin change hands. The block-chain thus operates as a public ledger in which new transactions are recorded and verified. The block-chain is usually stored in a decentralized manner, meaning that many copies of the block-chain are maintained by various computing systems in a networked environment. The lack of trust between these computing systems requires validation of a new block before the new block is accepted for recordation by each of the computing systems configured to store a copy of the block-chain, in order to avoid recording fraudulent or unauthorized transactions. The validation in question involves various computational functions, the execution of which is known in the art as “proof-of-work”.
[0058] The verification process is generally conducted in a manner in which many networked computing systems compete with each other to be the first system to successfully verify a block of transactions for recordation into the block-chain. The first computing system that successfully verifies a block of transactions is awarded a small amount of newly created digital currency called the “block reward”. As such, the verification process is sometimes referred to in the art as “mining”. Multiple computing systems can be pooled together into a distributed computing system to form a “mining cluster” or “mining pool” so that the combined computational power of each computing system in the mining pool can translate to a higher likelihood of the pooled distributed computing system being the first to verify a block of transactions.
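The incentive for pooling described above can be illustrated numerically: to a first order, a pool's chance of being first to verify a block is proportional to its share of the total network hash rate. The figures below are hypothetical.

```python
# Illustrative sketch of why nodes pool: combining hash rates raises
# the probability of being first to verify a block (first-order model).

def win_probability(pool_hash_rates, network_hash_rate):
    # Probability of winning a round, proportional to share of the
    # total network hash rate.
    return sum(pool_hash_rates) / network_hash_rate

solo = win_probability([20.0], 1000.0)                # one node mining alone
pooled = win_probability([20.0, 30.0, 50.0], 1000.0)  # same node within a pool
print(solo, pooled)
```

Pool members then typically share the block reward in proportion to contributed work, trading a larger but rarer solo reward for smaller, steadier payouts.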
[0059] Recently, there have been a multitude of digital currencies available to be mined, including Bitcoin, Ethereum, Bitcoin Cash, Dash, Ripple and Litecoin, which are varieties of digital coins with significant public recognition. Mining for digital currencies has become lucrative because of their ever-increasing perceived value. As a result, the relevant exchange rates of various digital currencies against fiat currencies have increased significantly. Certain digital currencies are perceived as more valuable than others, leading to different exchange rates between different digital currencies against a reference fiat currency (e.g. the U.S. dollar). Given that most digital currencies are generated by “mining”, it may be advantageous to spend time and energy to mine one digital currency over another if the one digital currency has a more favourable exchange rate with reference to the fiat currency relative to the other digital currency, provided that the computational complexity and effort associated with the mining tasks in question permit the selected digital currency to be mined more profitably, taking into account the more favourable exchange rate.
[0060] Disclosed herein is a system and method to schedule distributed computing tasks, such as digital currency mining tasks, based on an applied computational value associated with at least one node in the distributed computing system. For the purposes of the present disclosure, the applied computational value can be used to denote a calculated value, for example, in a fiat currency, of the equivalent net digital currency minable by a particular node in the distributed computing system based on current market exchange rates between the digital currency and the fiat currency, and based on the underlying mining or block reward, transaction or handling fees for conversion to a fiat currency, and the computational complexity and effort associated with mining the digital currency in question. The applied computational value may also be applicable to the entire computing system or any subset of nodes thereof. As will be described in greater detail below, the applied computational value can be calculated by applying marketplace valuations of various digital currencies, their corresponding mining difficulty, mining reward, and transaction or handling fees to a calculated computational performance of a node (or the overall system or subset of nodes thereof), the computational performance of the node being determinable by considering the software and hardware resources of the node and its utilization. Scheduling of distributed computing tasks can subsequently be made based on a consideration of the relative applied computational values among various minable digital currencies as a decision-making factor. The concept of the applied computational value can be used in other contexts. For example, if a computing system has additional storage space or additional bandwidth (e.g. 
network or processing bandwidth), such resources could be used to provide data storage or delivery as part of a distributed CDN, which could have an inherent value based on technical metrics such as network latency and bandwidth.

[0061] The values of digital currencies are generally acknowledged to be volatile. For example, the value of Bitcoin in late 2016 was approximately US $700. Throughout 2017, however, the value of Bitcoin climbed quickly to a peak of nearly US $20,000 in December of 2017. However, in early 2018, the value decreased to about US $10,000. Other digital currencies experience similar levels of volatility such that, at any given time, one digital currency may become more valuable, relative to a reference fiat currency, than another digital currency and vice versa. Therefore, it may be helpful, at a particular time, to determine an applied computational value of a node or of the overall distributed computing system in association with a particular digital currency, in view of the computational complexity and effort involved with that particular digital currency as compared to others, so that mining tasks can be scheduled to mine a digital currency associated with the highest applied computational value.
[0062] Referring first to Fig. 1, shown therein is a diagram of a distributed computing system or a distributed computing environment 100 operable to carry out distributed computing tasks. The distributed computing system 100 includes a plurality of nodes. Illustrated in Fig. 1 are nodes labeled “Node-1” 130 and “Node-N” 140, corresponding to the number of nodes in the distributed computing system 100 being N. The nodes of the distributed computing system 100 can be any device capable of processing data. As such, nodes belonging to the distributed computing system 100 can include, but are not limited to, desktop and laptop computers, set-top boxes, smart devices (e.g. network-enabled home appliances, and other “Internet-of-things” or IoT devices) as well as specialized processors including graphics processing units (GPUs), field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs) (e.g. dedicated digital currency miners). Nodes can also include mobile devices such as tablets, smartphones and smart watches. Each node can be configured to communicate with other nodes within the distributed computing system 100 over a communication network 150.
[0063] For clarity, while Fig. 1 depicts a single network 150 interconnecting the nodes of the distributed computing system 100, other networking configurations are possible. For example, communication between nodes can take place across a number of networks, including one or a combination of public and private networks such as the Internet, a private local area network, or a mobile data network provided by a cellular telephone provider. For instance, a node may be a smartphone connected to the distributed computing system 100 by way of a network link that traverses a private data network managed by a mobile phone service provider (e.g. a cellular EDGE, UMTS, HSPA or LTE data network), and a public network such as the Internet. Similarly, if the node is a desktop or a laptop computer, the data link to the distributed computing system 100 can traverse a home or enterprise local area network (e.g. a 10/100/1000-base-T Ethernet network, or an IEEE 802.11 compliant wireless network), the network of an Internet Service Provider (ISP), and the Internet. In other words, the distributed computing system 100 can include any number of separate network systems, each such network system being operable to
communicate (e.g. send and receive electronic signals including those corresponding to data packets or other types of signals such as diagnostic signals) with any other such network system to form the overall distributed computing environment. It would be understood that each network system has at least one node contained therein to form the nodes of the distributed computing network.
[0064] One or more of the nodes in the distributed computing system 100 can be designated as a task supervisor 110. In the embodiment of Fig. 1, one node is designated as the task supervisor 110. However, in other embodiments, multiple nodes can operate together as a task supervisor 110. The node that is designated as the task supervisor 110 is configured to assign computing tasks (e.g. tasks related to mining digital currency) to each of the remaining nodes and to coordinate the execution of a computationally intensive task by the nodes within the distributed computing system 100 in a distributed manner. Additionally, the number of nodes may vary over time so that nodes may be added to or removed from the distributed computing system 100. Accordingly, the task supervisor 110 can further be operable to keep a record of active and inactive nodes within the distributed computing system 100. Such records can be stored in the storage module 120.
[0065] All nodes connected to the network 150 may follow a suitable communication protocol to send and receive electronic communication between each other and the task supervisor 110. Such communication can include announcing to the network 150 that the node has joined (i.e. a broadcast communication), or announcing that the node will be removed or disconnected. During operation, the task supervisor 110 may divide a large computational task into smaller, more manageable tasks for distribution to the nodes within the distributed computing system 100 over the network 150 using the same communication protocol. Once the smaller computing tasks have been completed, the node that completed the smaller computing task can then return the output or result of the task back to the task supervisor 110.
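The divide-and-distribute step performed by the task supervisor, as described above, can be sketched as follows. The round-robin policy and all names are hypothetical assumptions for illustration; the application does not prescribe a particular assignment scheme.

```python
# Hypothetical sketch of the task supervisor dividing a large
# computational task into smaller jobs for distribution to nodes.

def split_task(work_items, nodes):
    # Round-robin assignment of work items to nodes (illustrative policy).
    jobs = {node: [] for node in nodes}
    for i, item in enumerate(work_items):
        jobs[nodes[i % len(nodes)]].append(item)
    return jobs

jobs = split_task(list(range(7)), ["node-1", "node-2", "node-3"])
print(jobs)
```

Each node would then execute its assigned items and return the outputs to the task supervisor, which reassembles the overall result.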
[0066] Access to a market data source 160 is provided via the network 150. The network 150 can include a data interface for facilitating communication with the data source. The data provided by the market data source 160 can include current (e.g. real-time) and historical financial data as well as exchange rates of various digital and fiat currency pairs 160. In some embodiments, the market financial data can also include information specific to certain digital currencies such as an indication of the mining difficulty or computational complexity 161 associated with the currency and the“block reward” 162 available for verifying a block of transactions. The market data source 160 may be supplied by a third party data provider such as a digital currency exchange service. The market data source 160 can be configured to supply a stream of pricing data accessible by the task supervisor 110 and the nodes. The data connection to the data source 160 is illustratively a low-latency data connection to enable the task supervisor 110 and the nodes to obtain the most up-to-date exchange data.
[0067] As previously noted, in the illustrated network 150 of Fig. 1, a storage module 120 is included. The storage module 120 can be used by the task supervisor 110, the nodes, or both the task supervisor 110 and the nodes to store data and other information relevant to carrying out the distributed computing task. For example, the storage module 120 can be used to store system configuration data or system profiles (present and historic profile information) of each of the nodes that operates in the distributed computing system 100. In the embodiment shown, some storage capacity of the storage module 120 can be allocated to be the profile database 123 for storing system profiles. The storage module 120 may also be used to store performance data associated with various software components and hardware components in the device database 121. The telemetry storage 124 is configurable to store telemetry or utilization data of the nodes of the distributed computing system 100. A task rule set 122 can further be stored, which specifies additional conditions that determine whether or not a computing task should be scheduled based on a determination of whether execution of the computing task would affect a user’s experience of the node. The storage of configuration and performance data would enable the task supervisor 110, the node or any interested party to determine the computational capability of a node or of the overall system, and to correspondingly determine the applied computational value of each node or of the distributed computing system 100. The task supervisor 110 can store a record of completed tasks, pending tasks, or both pending and completed tasks in the storage module 120.
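A task rule set of the kind described above can be sketched as a simple scheduling guard. The rule fields and thresholds below are illustrative assumptions, not values from the application.

```python
# Hypothetical sketch of a task rule set 122: decline to schedule a
# task when running it would likely affect the user's experience.

TASK_RULES = {"max_cpu_utilization": 0.6, "require_idle_minutes": 10}

def may_schedule(telemetry, rules=TASK_RULES):
    # Schedule only if the node is lightly loaded and the user has
    # been idle long enough (both thresholds are assumptions).
    return (telemetry["cpu_utilization"] <= rules["max_cpu_utilization"]
            and telemetry["idle_minutes"] >= rules["require_idle_minutes"])

print(may_schedule({"cpu_utilization": 0.2, "idle_minutes": 30}))  # idle machine
print(may_schedule({"cpu_utilization": 0.9, "idle_minutes": 0}))   # user active
```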
[0068] In the illustrated diagram of Fig. 1, the storage module 120 is configured as a standalone component, and can be implemented by a dedicated data storage device such as a network-enabled hard disk drive, a database, a cloud-based storage resource or a storage area network (SAN). The storage module 120 can alternatively be integrated with the task supervisor 110 as“local” storage.
[0069] In the case of mining digital currency based on block-chain technology, each of the nodes and the task supervisor 110 of the distributed computing system 100 can be said to correspond to a mining pool. The combined computing power of the nodes within the mining pool can execute computing tasks to verify blocks within the block-chain and thereby gain an amount of digital currency as a reward, known as the “block reward”, as noted above. The task supervisor 110 may operate appropriate mining pool server software which communicates with each of the nodes connected to the distributed computing system 100 via the network 150 as well as the digital currency network (not shown). For example, the Stratum pool server is one such item of pool server software executable by the task supervisor 110 to form a mining pool. A single pool server can be configured to manage the mining of multiple digital currencies. Alternatively, multiple instances of pool servers, each pool server being operable to manage the mining of one type of digital currency, may concurrently be operating on the task supervisor 110, with the task supervisor 110 being operable from a single node or across multiple nodes, to support mining of multiple digital currencies. In some embodiments, mining for different currencies can be carried out concurrently, for example, by grouping nodes together so that one group of nodes mines a first digital currency that is different from a second digital currency mined by another group of nodes. In other cases, all the nodes can be tasked to mine the same digital currency and can switch between digital currencies in a sequential manner.
[0070] Under operation, each node in the mining pool is in electronic communication with the task supervisor 110 to receive a computing task identified by a “job ID” related to the execution of proof-of-work functions that are carried out to verify a block within the block-chain associated with a particular digital currency. It is noted, however, that the concept of proof-of-work can encompass other types of proofs, including, but not limited to, proof-of-space, proof-of-stake, and proof-of-storage. For example, in the case of Bitcoin, the computational task of each node is to carry out cryptographic hash functions specified by the task supervisor 110 to determine a unique nonce value associated with a block such that when that block is hashed with the nonce, a hash value having a specific set of leading zeros is generated. Upon completion of this task, the node can submit the results of the task and request additional tasks. Alternatively, the task supervisor 110 can automatically provide a node with the next task upon receiving the results of a previous task.
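The nonce search described above can be illustrated with a deliberately simplified proof-of-work sketch. Real Bitcoin mining applies double SHA-256 to a structured block header and compares against a full difficulty target; the sketch below only checks for leading zero hex digits, and the block data is a placeholder.

```python
# Simplified proof-of-work: find a nonce such that hashing the block
# data together with the nonce yields a digest with a given number of
# leading zero hex digits.
import hashlib

def mine(block_data: bytes, zeros: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * zeros):
            return nonce  # this nonce "solves" the block at this difficulty
        nonce += 1

nonce = mine(b"block-header-bytes", zeros=2)
digest = hashlib.sha256(b"block-header-bytes" + str(nonce).encode()).hexdigest()
print(nonce, digest)
```

Raising `zeros` makes the search exponentially harder, which is the lever the marketplace valuation captures as mining difficulty.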
[0071] Communication between nodes within the network 150 can be carried out in any suitable manner. One technique that can be used includes transmitting mining tasks and results using encoded messages between nodes. For example, the encoded messages may contain data objects specifying the task to be carried out and data generated as a result of executing a specified distributed computing task. The data objects can also be structured using known and interoperable data object formats. One widely used data object format is the JavaScript Object Notation or JSON format. The JSON data object format is flexible as it is both human and machine readable, making it suitable for network-based communication. In other implementations, the messages may be encoded using Extensible Markup Language (XML). In some other implementations, nodes within the distributed computing system 100 are operable to generate and process both JSON and XML-based data objects. In alternate implementations, the messages may be encoded using Protocol Buffers. In yet other implementations, nodes within the distributed computing system 100 are operable to generate and process any combination of these formats.
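A JSON-encoded task exchange of the kind described above might look as follows. The field names ("job_id", "method", "params") loosely echo Stratum-style pool messages but are assumptions for illustration, not a documented wire format.

```python
# Sketch of encoding a task assignment and its result as JSON messages
# exchanged between the task supervisor and a node.
import json

task_msg = json.dumps({"job_id": "42", "method": "do_work",
                       "params": {"target": "0000ffff"}})

def handle(raw: str) -> str:
    msg = json.loads(raw)                       # decode the incoming task
    result = {"job_id": msg["job_id"], "result": "nonce-found"}
    return json.dumps(result)                   # encode the reply

reply = json.loads(handle(task_msg))
print(reply)
```

An XML or Protocol Buffers implementation would carry the same fields; only the serialization layer changes.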
[0072] Figure 2 is a generalized system block diagram of a computing system 200 operable as a node within the distributed computing system 100 of Fig. 1. The computing system 200 includes a processor module 204, communication module 206, system device module 208, a task manager 210, a data storage module 212 and a device manager 214. As noted above, a node can be any device capable of completing a computational task, including computers, smartphones and any other suitable computing device.
[0073] The processor module 204 can include any suitable processing device or combination of different processing devices, including but not limited to, one or more of a general purpose processor, application processor, GPU, FPGA or ASIC. In some embodiments, the processor module 204 may be configured with a general purpose processor and several GPUs, FPGAs and/or ASICs. For example, a typical home computer can be equipped with a general purpose processor or central processing unit (CPU) installed within a main system board (i.e. motherboard) for executing operating system software, device drivers and user application programs. The computer can be further equipped with specialized processing hardware to reduce the processing burden of the general purpose processor. For example, a separate graphics accelerator or “graphics card” having a graphics processing unit (GPU) can be connected to the motherboard to provide visual output to the user. The graphics card can be equipped with one or more such GPUs that are optimized for rendering visual output. The GPU or the general purpose processor, or both the GPU and the general purpose processor, can be tasked to carry out the distributed computing task. In other systems, the processor module 204 may be part of an integrated system such as a system-on-chip (SOC) architecture, in which all processing devices and other system components are integrated into a single integrated circuit. In yet other implementations of the computing system 200, the system may be a dedicated processing device designed to carry out specific computationally intensive tasks such as digital currency mining and can be equipped with several GPUs, FPGAs, ASICs or combinations thereof.
[0074] The computing system 200 of Fig. 2 includes a communication module 206 operable to enable the computing system 200 to connect to the distributed computing system 100 and communicate with the task supervisor 110 and other nodes 130 and 140 via the communication network 150. The communication module 206 is equipped with a suitable communication interface, including but not limited to serial, USB, parallel, SATA, Bluetooth, WiFi network (e.g. compliant with the IEEE 802.11 family of standards), cellular (EDGE, UMTS, HSPA, or LTE), 10/100/1000-base-T Ethernet, and is operable under known communication protocols including, but not limited to, TCP, UDP, HTTP and HTTPS. Accordingly, the
communication module 206 is generally equipped with suitable physical layer components (i.e. the PHY device(s)) to enable generation of communication signals compliant with the communication interface and protocol in use. For example, a smartphone may be equipped with suitable communication processors, modulators, amplifiers and antenna systems to enable wireless communication over a regional cellular network, a localized WiFi network or a personal area network. The communication module 206 of a desktop or a laptop computer may be equipped with the same wireless communication hardware as a smartphone and further equipped with hardware components to provide connectivity to a wired network. The communication module 206 may include a network interface card with a physical interface compatible with 10/100/1000-base-T Ethernet.
[0075] During use, the communication module 206 can be used to receive digital currency mining tasks and any data associated with those tasks from the task supervisor 110 and send the results of the computation back to the task supervisor 110. It is understood that the communication module 206 is also operable to allow the computing system 200 to communicate with other devices outside of the distributed computing system 100. For example, the node can establish a data connection with an application server to receive software updates for the computing system 200. In another example, the node can establish a data connection with a messaging server to send and receive messages intended for a user of the computing system 200.
[0076] The system device module 208 corresponds to hardware components and software components of the computing system 200 that are operable to generate outputs and receive inputs. The system device module 208 includes software components (e.g. operating system, device drivers, and user software) and hardware components for receiving a user input via a human interface device (HID). Such interface devices include, but are not limited to, button/toggle inputs, keyboard input, mouse input, microphone input, camera input, fingerprint reader input, ID card input, touch-screen input, RFID readers, stylus inputs and the like. Output feedback generatable by the system device module 208 includes, but is not limited to, display outputs, light emitting indicator outputs, vibrational outputs, and audio outputs. The system device module 208 may also be used to monitor the status of the computing system 200 based on the outputs of sensors that are a part of the computing system, such as light sensors, positional sensors, and location sensors.
[0077] In some embodiments, the system device module 208 can measure and track parameters related to the operation of the computing system 200 including, but not limited to, the power consumption, temperature and rotational speed of cooling fans. A warning can be generated, for example, if the system temperature is in excess of a predefined value, which can indicate overheating or other system faults. In other embodiments, the system device module 208 may also oversee power management of the node. For example, if the node is powered using a battery, the system device module 208 may include suitable power management components for charging the battery pack or managing power consumption by adjusting the power consumption of various system devices. In the case of a smartphone or a tablet device, for example, the brightness of the display screen can be adjusted by the system device module 208 to reduce power consumption by the display system.
[0078] The memory module 212 is configurable for long term, short term or both long term and short term storage of data including, but not limited to, applications, application data, user data, operating system software, device drivers and the like. The memory module 212 can include persistent or non-transitory memory for long-term data storage and temporary memory for short-term data storage. Persistent memory may include diskettes, optical disks, tapes, hard disk drives, and solid state memory such as ROM, EEPROM and flash memory. Short-term memory can include random access memory (RAM) and cache memory. Data received from the task supervisor 110 may be stored in the persistent memory until it is processed by the computing system 200. During the processing operation, the data may be transferred from the persistent memory to the temporary memory for processing. The result of the processing task can be written to the persistent memory for storage prior to transmission to the task supervisor 110.
[0079] The device manager 214 is operable to coordinate the operation of the hardware and software components that make up the system device module 208 and monitor the node to generate a system profile 216 indicating the system configuration and telemetry or utilization of at least one system resource of the node. The node can be monitored, for example, by reading the data provided by the system device module 208 or by directly querying the devices. The system configuration information may indicate the hardware configuration of the node, including the amount of memory available in the memory module 212, the type of processors available in the processor module 204, the bandwidth of the communication module 206, network information (e.g. IP address, network provider and physical location of the node), connected/attached peripherals, and other hardware resources available at the node. The profile can further indicate the software configuration of the node, including the version of the operating system, virtual machine status (e.g. that the node is a virtual machine, if applicable), whether the system is a part of an enterprise network, and a listing of specific software libraries (e.g. OpenGL, OpenCL, DirectX, runtime libraries etc. and their version numbers).
[0080] The utilization of system resources can include, but is not limited to, any one or more of measured utilization of the processor module 204 (general processor, GPU etc.), measured utilization of the memory module 212 (including measured disk utilization), measured system temperature, measured system fan speed (if applicable), measured network bandwidth usage, measured system network response time, measured power consumption and system events (e.g. startup, shutdown, kernel panic, memory fault, hardware addition/removal etc.). In some embodiments the system profile 216 further indicates additional parameters including, but not limited to, any one or more of the device type and model number (e.g. laptop, desktop, tablet device or smartphone; the corresponding make and model of the device can also be indicated), power status (e.g. powered by battery or AC main line), operating state (e.g. sleep state, hibernate state, wake state, idle state or full-screen mode), measured system uptime, measured system idle time, and user status (e.g. logged in or logged out). The utilization data can collectively be regarded as the telemetry data of the node.
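The configuration and telemetry fields described above can be pictured as a single profile record. The following is a minimal, hypothetical sketch; the field names are illustrative only and are not drawn from the specification.

```python
from dataclasses import dataclass, field

# Hypothetical system profile record combining configuration data
# (device type, processors, memory, OS) with telemetry/utilization
# data (CPU utilization, temperature, uptime, system events).
@dataclass
class SystemProfile:
    device_type: str                 # e.g. "desktop", "smartphone"
    processors: list                 # processors in the processor module
    memory_mb: int                   # available memory, in megabytes
    os_version: str                  # operating system version
    cpu_utilization: float = 0.0     # measured utilization, 0.0 to 1.0
    temperature_c: float = 0.0       # measured system temperature
    uptime_pct: float = 0.0          # measured percentage uptime
    events: list = field(default_factory=list)  # system event log

# Configuration data is gathered once; telemetry is updated over time.
profile = SystemProfile(
    device_type="desktop",
    processors=["x86 CPU", "GPU"],
    memory_mb=16384,
    os_version="10.0",
)
profile.cpu_utilization = 0.35
profile.events.append("startup")
```

A device manager would refresh the telemetry fields of such a record periodically or on detecting a change in a system resource.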
[0081] For the purposes of carrying out digital currency mining tasks, a task manager 210 can be deployed to manage the execution of the tasks by the computing system 200. In some embodiments, the task manager 210 may be generally configured to calculate an applied computational value of the computing system 200 based on the system profile, the computational complexity of the underlying proof-of-work tasks, and marketplace valuation data obtainable from the market data source 160. The task manager 210 can then decide whether or not to schedule the task received from the task supervisor 110 for execution by the processor module 204. Alternatively, if the task manager 210 is presented with a number of different tasks for execution, the task manager 210 may schedule the tasks based on the calculated applied computational value of the computing system 200 associated with each task. Details of this determination are described below.
[0082] The task manager 210 may be implemented in a number of ways. In some
embodiments, the task manager 210 may be a process executed by the processor module 204 of the computing system 200. In this case, an instance of the task manager 210 may be active in each node within the distributed computing system 100. For example, the task manager 210 can be implemented as a software application executable on a computer, tablet device or smartphone. The task manager 210 may also be configured to coordinate communication between the node and the distributed computing system 100. For example, the task manager 210 may access the communication module 206 to communicate with the task supervisor 110 over the network 150 to announce that the node on which the task manager 210 is operating is connected to the distributed computing system 100 and is able to receive task execution requests from the task supervisor 110. The task manager 210 can also notify the task supervisor 110 whether a particular task request can be scheduled for execution or not (i.e. whether the request can be fulfilled), based on the decision making process described in greater detail below.
[0083] In other embodiments, the task manager 210 may be integrated into a user application such as a news aggregation program or anti-virus/anti-malware software. To generate revenue for the software developers or to fund further development of such software, an option may be provided in the user application to enable the user to use the application on a paid subscription basis or on a revenue-supported basis. The revenue-supported basis may include displaying paid advertisements to the user to earn advertising revenue. Alternatively, the software developer may request that the user join a distributed computing system to share unused computing resources, enabling the software developer to carry out computing tasks such as mining for digital currencies to the benefit of the software developer. The type of digital currencies to be mined can be determined based on the applied computational value of the node or of the entire distributed computing system 100 for a particular currency.
[0084] In some embodiments, the task manager 210 may be implemented in hardware as an embedded program “hardcoded” in an FPGA or ASIC device. This alternative implementation may be useful in nodes that are designed specifically to carry out distributed computing tasks. For example, in digital currency mining, the design of dedicated ASIC mining devices or “mining rigs” can be simplified by integrating the task manager 210 as embedded hardware code since a separate general purpose processor would not be needed. In other embodiments, the task manager 210 and task supervisor 110 may be embodied in the same node within the distributed computing system 100. This configuration may be useful if many or all of the nodes within the distributed system 100 are part of the same local area network, forming a network cluster. A single, unified task supervisor 110 and task manager 210 may directly carry out determinations, for each node in the cluster, in respect of whether or not a distributed computing task should be scheduled for execution by a particular node within the cluster.
[0085] As the node is being used, the device manager 214 can be operated to gather system telemetry data corresponding to the utilization of the various system resources and system status. For example, the user of a computer system may step away from the computer to run an errand, leaving the node idle and its computing resources available. Accordingly, the system profile 216 can be updated from time to time or periodically with configuration and telemetry data to reflect the current state of the node. In one embodiment, the device manager 214 is configured to update the system profile 216 upon detecting a change in at least one system resource, such as the removal or addition of any hardware or software component. In another embodiment, the system profile 216 can be updated at a predetermined time interval. For example, in some configurations, the system profile 216 is updated every 60 seconds, once per hour or at another suitable time period. In another embodiment, the system profile 216 is continuously updated, with the device manager 214 operated to repeatedly sample the state of the node. In some embodiments, the system profile 216 is updated upon receiving a task request from the task supervisor 110 so that the burden on the device manager 214 can be reduced.
[0086] Over time, system information and telemetry data stored in the system profile 216 can be collected by the device manager 214. This information may be used to determine usage habits or usage patterns of the node, indicating when processing power is available to the task supervisor 110. This “behavioural” information can also be saved to the system profile 216 to assist the task manager 210 in determining how frequently a particular node can be used to carry out the specified distributed computing task. The raw system usage data can be analyzed to determine, for example, a measured percentage uptime for the node. The percentage uptime, for example, can be calculated for the preceding 30 days, or any suitable temporal window, from the date of measurement. Other telemetry data can include the percentage or amount of time the node remains “idle” over a given period of time and available to be used to carry out computing tasks for the task supervisor 110.
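The percentage-uptime measure described above reduces to a simple ratio over the chosen temporal window. A minimal sketch (the function name and interface are illustrative, not from the specification):

```python
def percentage_uptime(up_seconds, window_days=30):
    """Percentage of the preceding window (default 30 days) during
    which the node was powered on and reachable."""
    window_seconds = window_days * 24 * 3600
    return 100.0 * up_seconds / window_seconds

# A node that was powered on for 18 of the last 30 days:
uptime = percentage_uptime(18 * 24 * 3600)  # 60.0 percent
```

An analogous ratio over measured idle seconds would yield the “idle” percentage also mentioned above.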
[0087] Figure 3 is a flowchart 300 of a method of scheduling specified computing tasks for execution by a node (i.e. computing system) within the distributed computing system 100. At step 302, the node is monitored to gather system-level information including system configuration data and telemetry or utilization data of the node. As noted previously, one way of gathering telemetry and configuration data is by using the device manager 214; however, other suitable methods known in the art can also be employed. The monitoring process includes detecting one or more hardware components installed in the node. Each detected hardware element can be identified by the element’s manufacturer, a model number, version number or build number. Corresponding software components can similarly be detected. Software components may be relevant to the determination of the applied computational value since newer software versions may operate more efficiently relative to an earlier version and/or be able to make use of a particular piece of hardware more efficiently to provide a higher performance output.
[0088] At step 304, the system-level configuration information and telemetry data obtained during the previous step can be saved into a system profile 216. In some embodiments, the system profile 216 specifies hardware configurations, software configurations, and telemetry or utilization data of the node. At step 306, the configuration data is used to determine a performance potential of the node. Each hardware and software component can be associated with a metric indicating the performance potential for that hardware or software combination. The performance potential can, for instance, be determined by carrying out a suitable benchmarking test. The benchmarking test can be carried out by the operators of the distributed computing system 100 or by a third party testing facility. Hardware and software performance data can be stored as hardware profiles in the device database 121 in storage module 120, for example, for later use. The collection of performance data can be updated as new hardware and software components are made available.
[0089] The performance potential of the overall node can be determined based on the performance potential of each of the constituent hardware and software elements. For example, a particular node may be a desktop computer with an x86-based general purpose processor, a GeForce-based graphics card from Nvidia®, and DDR3 RAM memory, operating on a Windows® operating system. The performance potential of each of the elements is aggregated to generate a performance potential of the overall node. For example, if the node is a relatively “older” computer which has a graphics card that does not include computational features, a weight value for the GPU would be 0 (zero). Likewise, some systems may have operating systems or libraries which may not support leveraging additional computing power from the GPU; in this case, the GPU weighting value would also be 0. The performance potential can be represented in a manner suitable for assessing performance in respect of mining digital currencies. For example, the performance potential can be represented as a hash rate, or the number of hash calculations that can be performed by the node per second. The performance potential may also be expressed in other metrics, such as FLOPS (floating point operations per second) or MIPS (millions of instructions per second).
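The weighted aggregation described above can be sketched as a sum of benchmarked per-component hash rates, each multiplied by a weight that is zero when the node cannot use the component. The numbers and function name below are illustrative assumptions, not values from the specification.

```python
def node_performance_potential(components):
    """Aggregate the benchmarked hash rate of each hardware element,
    applying a weight of 0 to elements the node cannot use (e.g. a
    GPU without compute features, or missing compute libraries)."""
    return sum(hash_rate * weight for hash_rate, weight in components)

# (benchmarked hash rate in hashes/s, weight) per component:
components = [
    (40_000, 1.0),  # general purpose CPU, fully usable
    (90_000, 0.0),  # older GPU without compute features: weight 0
]
potential = node_performance_potential(components)  # 40,000 hashes/s
```

In practice the per-component rates would come from the hardware profiles stored in the device database 121.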
[0090] The performance potential of the overall node represents an “idealized” performance of the system, in which all of the available computing resources of a node are available for executing a computing task including, but not limited to, mining digital currencies. Under practical conditions, the node may not always perform at the full performance potential for various reasons relating to use and manufacturing variabilities of the hardware components. For example, in the context of digital currency mining, if the theoretical or estimated hash rate of a given CPU model using the Cryptonight algorithm is 40,000 hashes/s, but in real-world testing and observation a hash rate of 25,000 hashes/s is attainable for that particular model, then an adjustment factor of 0.625 can be applied to all models of that CPU. Applying such a correction factor can account for various factors, including unknown factors, which might prevent devices from attaining their theoretical performance levels. In another example, the user of the node may be logged on and running user applications. Various pieces of background or ad hoc software such as anti-virus software, operating system software and device drivers, or specific applications, may use up memory and processor resources. The node may have a particular percentage uptime suggesting that at certain times, the node would not be available to carry out mining activities. The uptime, utilization patterns etc. can be obtained from the telemetry data collected by the device manager 214. At step 308, an adjustment model is applied to the performance potential to calculate a computational performance of the node that corresponds to a real-world performance metric. Telemetry data specific to the hardware and/or software elements of the node is used to carry out the adjustment calculation.
In some embodiments, the adjustment model is trained using system profiles corresponding to the nodes of the distributed computing system 100. For example, a linear regression model based on a distributed forest, with known system attributes as features, may be used. The computational performance can similarly be represented in terms of a computing hash rate. Representing computing performance using hash rates in the digital currency context can better facilitate determination of an applied computational value associated with the node.
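The adjustment step can be sketched with the numbers from the example above: a per-model correction factor (observed versus theoretical hash rate) combined with a telemetry-derived availability fraction. This is a deliberately simple stand-in for the trained adjustment model; the function and its inputs are illustrative assumptions.

```python
def computational_performance(theoretical_hash_rate, observed_hash_rate,
                              available_fraction):
    """Adjust the idealized performance potential toward a real-world
    metric: apply a model-specific correction factor (observed /
    theoretical) and scale by the fraction of time the node's
    resources are actually available, per its telemetry."""
    correction = observed_hash_rate / theoretical_hash_rate  # e.g. 0.625
    return theoretical_hash_rate * correction * available_fraction

# CPU rated at 40,000 hashes/s, observed at 25,000 hashes/s,
# with resources available 80% of the time per telemetry:
perf = computational_performance(40_000, 25_000, 0.8)  # 20,000 hashes/s
```

A trained model would replace the two fixed factors with a prediction from the node's full system profile.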
[0091] At step 310, marketplace data for various digital currencies is received, for example, from the market data source 160. At step 312, the applied computational value is calculated using the marketplace data of the various digital currencies received at step 310 and the computational performance. Specifically, the marketplace valuation data generally includes coin exchange rates between a particular digital currency that can be mined by the node and one or more fiat currencies, the level of difficulty in mining the digital currency, a block reward associated with the digital currency, and any transaction or handling fee associated with converting the mined digital currency to the desired fiat currency. The hash rate corresponding to the computational performance of the node can be used to estimate an amount of digital currency that can potentially be generated (i.e. mined) over a given time period, taking into account information such as the block difficulty and block reward. The equivalent value in a fiat currency exchangeable at a specified time can therefore be calculated based on the estimated amount of digital currency the node is capable of mining. Since the marketplace data provides valuations of various digital currencies, the applied computational value can encompass valuations corresponding to multiple digital currencies so that a particular digital currency can be selected for mining.
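The calculation in step 312 can be sketched with a common first-order mining estimate: expected coins scale with the node's hash rate and block reward and inversely with difficulty, then convert to fiat at the exchange rate less fees. The formula and all inputs below are illustrative assumptions (difficulty is expressed here as expected hashes per block; real networks encode difficulty differently per currency).

```python
def applied_computational_value(hash_rate, block_reward, difficulty,
                                exchange_rate, fee_rate, seconds=86_400):
    """Estimate the fiat value of digital currency a node could mine
    over `seconds` (default one day), given marketplace data:
      coins = hash_rate * seconds / difficulty * block_reward
    converted to fiat and reduced by conversion/handling fees."""
    coins = hash_rate * seconds / difficulty * block_reward
    return coins * exchange_rate * (1.0 - fee_rate)

# Hypothetical marketplace data for one digital currency:
value = applied_computational_value(
    hash_rate=25_000,        # hashes/s (computational performance)
    block_reward=2.0,        # coins awarded per block
    difficulty=50e9,         # expected hashes per block
    exchange_rate=60.0,      # fiat units per coin
    fee_rate=0.01,           # 1% conversion fee
)
```

Evaluating this per currency, from the same computational performance, yields the per-currency valuations that step 314 compares.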
[0092] At step 314, a specified computational task for execution is selected from a number of available tasks. The available tasks can correspond to mining tasks associated with different currency types. Since the same node can have different applied computational values associated with different digital currencies, one manner of selecting a mining task is to select based on the digital currency associated with the highest applied computational value. In some embodiments, the applied computational value may be used to determine which nodes are assigned mining tasks. For example, nodes with a higher applied computational value may be given more mining tasks relative to nodes with a lower corresponding applied computational value. In other implementations, nodes with higher applied computational values may be given priority in respect of being given mining tasks compared to nodes with lower applied computational values.
[0093] As marketplace valuations of digital currencies change over time, updated pricing information would be received from the market data source 160 to enable recalculation of the applied computational value of the node. Accordingly, the applied computational value of the node may fluctuate over time and vary in relation to different digital currencies, in addition to variations in available computing power, for example, as a result of “real-world” usage. For example, if the applied computational value of a node represents valuations of two different digital currencies, mining for the digital currency associated with the higher valuation may be executed. If valuations change, and the previously higher-valued digital currency becomes the lower-valued digital currency, then mining of the previously lower-valued digital currency (now the higher-valued currency) would be executed. Alternatively, the valuation of each digital currency can be used to determine a level of priority given to the execution of mining tasks associated with the respective digital currency. Accordingly, the scheduling of specified computing tasks can be carried out by considering the applied computational value to provide an optimal return on use of the available computing resources.
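The highest-value selection and its response to a market update can be sketched as follows; the currency names and values are hypothetical.

```python
def select_mining_task(values_by_currency):
    """Pick the currency whose applied computational value is highest;
    re-evaluated whenever updated marketplace data arrives."""
    return max(values_by_currency, key=values_by_currency.get)

# Per-currency applied computational values (fiat units per day):
values = {"currency_a": 5.13, "currency_b": 7.40}
chosen = select_mining_task(values)       # "currency_b"

# After a marketplace update, the ranking may flip:
values["currency_a"] = 9.10
rechosen = select_mining_task(values)     # now "currency_a"
```

A priority-based scheduler would instead sort the currencies by value and allocate task execution time in that order rather than mining only the top entry.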
[0094] The examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein.
[0095] Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the invention. The scope of the claims should not be limited by the illustrative embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.

Claims

WHAT IS CLAIMED IS:
1. A method to schedule computing tasks executed by a computing system in a distributed computing environment comprising:
gathering system configuration data and system telemetry data of the computing system;
generating a computing system profile of the computing system, the profile being computed based on the system configuration data and system telemetry data;
calculating an applied monetary value of the computing system based on the system profile and marketplace valuations associated with at least one specified computational task; and
selecting a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
2. The method of claim 1, wherein the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply configuration, sensor configurations, age of first installed piece of software, Basic Input/Output System (BIOS) version, operating system information, installed software libraries, virtual machine status, software versions.
3. The method of claim 2, wherein the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
4. The method of claim 2, wherein the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, application-specific integrated circuit (ASIC) devices.
5. The method of claim 1, wherein the system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
6. The method of claim 1, wherein the system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, measured system network response time.
7. The method of claim 6, wherein the measured system uptime is based on a calculated percentage uptime for the immediately preceding 30 days from the date of measurement.
8. The method of claim 6, wherein the system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
9. The method of claim 1, wherein the computing system profile is stored in a system profile database.
10. The method of claim 1, wherein the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting the computational performance based on the system telemetry data.
11. The method of claim 10, wherein the computational performance is determined by
detecting at least one hardware element associated with the computing system based on the hardware configuration data;
determining, for each of the at least one hardware element, a performance potential for that hardware element; and
calculating the computational performance based on the performance potential for each of the at least one hardware element.
12. The method of claim 11, wherein the performance potential for each of the at least one hardware element is stored within a hardware profile database.
13. The method of claim 11, wherein the adjusting of the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.
14. The method of claim 13, wherein the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
15. The method of claim 1 further comprising updating the computing system profile based on at least one of updated system telemetry data and updated configuration data.
16. The method of claim 1, wherein the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
17. The method of claim 16, wherein the monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.
18. The method of claim 16, wherein the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
19. The method of claim 1 further comprising receiving an updated marketplace valuation associated with the at least one computational task and updating the applied monetary value of the computing system.
20. The method of claim 1, wherein the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task and the method further comprises:
executing the first computational task when the first applied monetary value is greater than the second applied monetary value; and
executing the second computational task when the second applied monetary value is greater than the first applied monetary value.
21. The method of claim 1 further comprising coordinating execution of the at least one specified computational task by the computing system with at least one other computing system.
22. A system for scheduling computing tasks executed by a computing system in a distributed computing environment comprising:
a plurality of computing systems, each computing system being operable to perform at least one specified computational task;
a device manager operable to gather system configuration data and system telemetry data of each computing system of the plurality of computing systems;
a system profile database for storing a plurality of computing system profiles, the plurality of computing system profiles being generated based on the system configuration data and the system telemetry data of the plurality of computing systems;
a communication interface operable to receive marketplace valuations associated with at least one specified computational task; and
a task manager operable to:
calculate an applied monetary value of the computing system based on the computing system profile and marketplace valuations; and
select a specified computational task in the at least one specified computational task for execution based on the applied monetary value of the computing system.
23. The system of claim 22, wherein the system configuration data comprises at least one of a description of: a speed of at least one processor, battery capacity, memory capacity, disk storage capacity, system display configuration, network interface configuration, power supply configuration, sensor configurations, age of first installed piece of software, BIOS version, operating system information, installed software libraries, virtual machine status, software versions.
24. The system of claim 23, wherein the installed software libraries comprise at least one of: CUDA libraries, OpenGL libraries and OpenCL libraries.
25. The system of claim 23, wherein the speed of the at least one processor comprises at least one of: speed of general purpose processors, graphics processing units, ASIC devices.
26. The system of claim 22, wherein the system telemetry data corresponds to the usage of the system hardware specified by the system configuration data.
27. The system of claim 22, wherein the system telemetry data comprises: system event log information, measured processor utilization, measured memory utilization, measured disk utilization, measured swap memory utilization, measured system temperature, measured supply voltages, measured fan speed, measured system uptime, measured system idle time, measured network bandwidth usage, and measured system network response time.
28. The system of claim 27, wherein the measured system uptime is based on a calculated percentage uptime for the 30 days immediately preceding the date of measurement.
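The 30-day percentage-uptime calculation of claim 28 can be sketched as below, assuming uptime is recorded as (start, end) intervals; the interval representation and clipping logic are illustrative assumptions, not details from the application.

```python
from datetime import datetime, timedelta

def percentage_uptime(up_intervals, measured_at, window_days=30):
    """Percentage uptime over the `window_days` immediately preceding
    `measured_at`, given (start, end) intervals during which the system
    was up. Intervals are clipped to the measurement window."""
    window_start = measured_at - timedelta(days=window_days)
    up_seconds = 0.0
    for start, end in up_intervals:
        clipped_start = max(start, window_start)
        clipped_end = min(end, measured_at)
        if clipped_end > clipped_start:
            up_seconds += (clipped_end - clipped_start).total_seconds()
    window_seconds = timedelta(days=window_days).total_seconds()
    return 100.0 * up_seconds / window_seconds

# Up continuously for the last 15 of the preceding 30 days -> 50% uptime.
now = datetime(2019, 4, 25)
intervals = [(now - timedelta(days=15), now)]
assert round(percentage_uptime(intervals, now), 1) == 50.0
```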
29. The system of claim 27, wherein the system event log information indicates at least one of: software update events, security update events, hardware change events, and user logon and logoff events.
30. The system of claim 22, wherein the computing system profile is computed by determining a computational performance based on the system configuration data and adjusting the computational performance based on the system telemetry data.
31. The system of claim 30, wherein the computational performance is determined by:
detecting at least one hardware element associated with the computing system based on the system configuration data;
determining, for each of the at least one hardware element, a corresponding performance potential for that hardware element; and
calculating the computational performance based on the performance potential for each of the at least one hardware element.
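The determine-then-adjust structure of claims 30, 31 and 33 can be sketched as follows. The hardware names, the GFLOPS-style potentials, and the simple utilization-scaling adjustment are all illustrative assumptions; in the claimed system the potentials come from a hardware profile database and the adjustment from a trained model.

```python
# Hypothetical performance potentials (e.g. GFLOPS) keyed by detected
# hardware element, standing in for the claimed hardware profile database.
HARDWARE_PROFILES = {"cpu-x": 50.0, "gpu-y": 900.0}

def computational_performance(detected_elements, telemetry):
    """Sum each detected element's performance potential, scaled down by
    the fraction of that element already in use -- a simple stand-in for
    the claimed telemetry-based adjustment model."""
    total = 0.0
    for element in detected_elements:
        potential = HARDWARE_PROFILES[element]
        utilization = telemetry.get(element, 0.0)  # 0.0 = idle, 1.0 = busy
        total += potential * (1.0 - utilization)
    return total

# GPU half busy: 50 * 1.0 + 900 * 0.5 = 500.0
assert computational_performance(["cpu-x", "gpu-y"], {"gpu-y": 0.5}) == 500.0
```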
32. The system of claim 31, further comprising a hardware profile database for storing a performance potential for each of the at least one hardware element.
33. The system of claim 30, wherein the adjusting the computational performance comprises applying an adjustment model to adjust the performance potential of a corresponding hardware element using telemetry data specific to that hardware element.
34. The system of claim 33, wherein the adjustment model is trained using computing system profiles of other computing systems in the distributed computing environment.
35. The system of claim 22, wherein the computing system profile is updated based on at least one of updated system telemetry data and updated system configuration data gathered by the device manager.
36. The system of claim 22, wherein the marketplace valuations specify at least one of: a computational difficulty associated with the computational task; and an exchange rate between at least one fiat currency and a digital currency generatable by executing the computational task.
37. The system of claim 36, wherein the applied monetary value of the computing system is expressed by the amount of digital currency generatable over a predefined time period and by an equivalent amount of the at least one fiat currency based on the exchange rate at a specified time.
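Claim 37's two expressions of value can be illustrated as below. The linear hash-rate yield model and all numbers are hypothetical; real mining yield depends on network difficulty, which this sketch folds into a single `coins_per_hash` parameter.

```python
def value_over_period(hash_rate, coins_per_hash, hours, fiat_per_coin):
    """Digital currency generatable over a time period, and its fiat
    equivalent at the exchange rate in force at the specified time.

    hash_rate:     hashes per second the system can sustain
    coins_per_hash: expected coins earned per hash (hypothetical yield)
    hours:         length of the predefined time period
    fiat_per_coin: exchange rate at the specified time
    """
    coins = hash_rate * 3600 * hours * coins_per_hash
    return coins, coins * fiat_per_coin

# 1 MH/s for 24 hours at a yield of 1e-12 coins/hash, coin at 5000 fiat units:
coins, fiat = value_over_period(hash_rate=1e6, coins_per_hash=1e-12,
                                hours=24, fiat_per_coin=5000.0)
assert abs(coins - 0.0864) < 1e-9
assert abs(fiat - 432.0) < 1e-6
```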
38. The system of claim 36, wherein the computational task comprises executing at least one proof-of-work function to verify a block of transactions in a blockchain.
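The proof-of-work function named in claim 38 can be illustrated with a toy double-SHA256 search. The nonce encoding, leading-zero-bits difficulty, and hash construction are simplified assumptions in the style of Bitcoin-like blockchains, not a specification from the application.

```python
import hashlib

def _double_sha256(block_header: bytes, nonce: int) -> bytes:
    """SHA256(SHA256(header || nonce)), the hash being searched over."""
    payload = block_header + nonce.to_bytes(8, "little")
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()

def proof_of_work(block_header: bytes, difficulty_bits: int):
    """Search for a nonce whose double-SHA256 digest has at least
    `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = _double_sha256(block_header, nonce)
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
        nonce += 1

def verify(block_header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap verification: one hash confirms the expensive search result."""
    digest = _double_sha256(block_header, nonce)
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Low difficulty keeps the demo fast (~2^12 hashes on average).
nonce, _ = proof_of_work(b"example block", 12)
assert verify(b"example block", nonce, 12)
```

The asymmetry shown here (an expensive search, a one-hash verification) is what makes proof-of-work usable for verifying blocks of transactions.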
39. The system of claim 22, wherein the communication interface is further operable to receive an updated marketplace valuation associated with the at least one specified computational task and the task manager is further operable to update the applied monetary value of the computing system.
40. The system of claim 22, wherein the applied monetary value of the computing system comprises a first applied monetary value associated with a first computational task and a second applied monetary value associated with a second computational task, and the task manager is further operable to:
execute the first computational task when the first applied monetary value is greater than the second applied monetary value; and
execute the second computational task when the second applied monetary value is greater than the first applied monetary value.
41. The system of claim 22, wherein the distributed computing environment comprises a plurality of network systems, each of the plurality of network systems having at least one computing system of the plurality of computing systems and being operable to send and receive electronic signals to and from any other network system of the plurality of network systems.
42. The system of claim 22, further comprising a task supervisor operable to coordinate the execution, by the plurality of computing systems, of the at least one specified computational task.
PCT/CA2019/000054 2018-04-26 2019-04-25 Workload scheduling in a distributed computing environment based on an applied computational value WO2019204898A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862663097P 2018-04-26 2018-04-26
US62/663,097 2018-04-26

Publications (1)

Publication Number Publication Date
WO2019204898A1 true WO2019204898A1 (en) 2019-10-31

Family

ID=68293394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2019/000054 WO2019204898A1 (en) 2018-04-26 2019-04-25 Workload scheduling in a distributed computing environment based on an applied computational value

Country Status (1)

Country Link
WO (1) WO2019204898A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080120129A1 (en) * 2006-05-13 2008-05-22 Michael Seubert Consistent set of interfaces derived from a business object model
US20110022520A1 (en) * 1995-02-13 2011-01-27 Intertrust Technologies Corp. Systems and Methods for Secure Transaction Management and Electronic Rights Protection

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210365939A1 (en) * 2018-07-18 2021-11-25 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for processing account of blockchain network, and storage medium
US11687942B2 (en) * 2018-07-18 2023-06-27 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for processing account of blockchain network, and storage medium
US20210064445A1 (en) * 2019-08-27 2021-03-04 Core Scientific, Inc. Harvesting remnant cycles in smart devices
CN111190735A (en) * 2019-12-30 2020-05-22 湖南大学 Linux-based on-chip CPU/GPU (Central processing Unit/graphics processing Unit) pipelined computing method and computer system
CN111190735B (en) * 2019-12-30 2024-02-23 湖南大学 On-chip CPU/GPU pipelining calculation method based on Linux and computer system
CN112035252A (en) * 2020-08-26 2020-12-04 中国建设银行股份有限公司 Task processing method, device, equipment and medium
CN112104737A (en) * 2020-09-17 2020-12-18 南方电网科学研究院有限责任公司 Calculation migration method, mobile computing equipment and edge computing equipment
CN112104737B (en) * 2020-09-17 2022-08-30 南方电网科学研究院有限责任公司 Calculation migration method, mobile computing equipment and edge computing equipment
EP4006795A1 (en) * 2020-11-27 2022-06-01 ZOE Life Technologies AG Collaborative big data analysis framework using load balancing
WO2022112539A1 (en) * 2020-11-27 2022-06-02 Zoe Life Technologies Ag Collaborative big data analysis framework using load balancing
WO2024013603A1 (en) * 2022-07-15 2024-01-18 Demaggio, Giovanni Cryptocurrency mining machine
EP4311159A1 (en) 2022-07-22 2024-01-24 Bitcorp S.r.l. Method and system for managing a cluster of processing units for the solution of a cryptographic problem

Similar Documents

Publication Publication Date Title
WO2019204898A1 (en) Workload scheduling in a distributed computing environment based on an applied computational value
JP7395558B2 (en) How to quantify the usage of heterogeneous computing resources as a single unit of measurement
JP7465939B2 (en) A Novel Non-parametric Statistical Behavioral Identification Ecosystem for Power Fraud Detection
US11226805B2 (en) Method and system for predicting upgrade completion times in hyper-converged infrastructure environments
US9319280B2 (en) Calculating the effect of an action in a network
US9436535B2 (en) Integration based anomaly detection service
US9712410B1 (en) Local metrics in a service provider environment
US20160092516A1 (en) Metric time series correlation by outlier removal based on maximum concentration interval
US20120053925A1 (en) Method and System for Computer Power and Resource Consumption Modeling
AU2017258970A1 (en) Testing and improving performance of mobile application portfolios
US11762649B2 (en) Intelligent generation and management of estimates for application of updates to a computing device
CA3106991A1 (en) Task completion using a blockchain network
US11816178B2 (en) Root cause analysis using granger causality
US20190080020A1 (en) Sequential pattern mining
Wang et al. Flint: A platform for federated learning integration
US20210158257A1 (en) Estimating a result of configuration change(s) in an enterprise
US20210035115A1 (en) Method and system for provisioning software licenses
CN116034354A (en) System and method for automated intervention
US8856634B2 (en) Compensating for gaps in workload monitoring data
US20240119364A1 (en) Automatically generating and implementing machine learning model pipelines
US20240163344A1 (en) Methods and apparatus to perform computer-based community detection in a network
US20220286361A1 (en) Methods and apparatus to perform computer-based community detection in a network
US20240160694A1 (en) Root cause analysis using granger causality
US20240154985A1 (en) Systems and methods for predicting a platform security condition
US20220416960A1 (en) Testing networked system using abnormal node failure

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19791615

Country of ref document: EP

Kind code of ref document: A1