WO2024016007A2 - System and method for dynamic modeling of computer resources - Google Patents

System and method for dynamic modeling of computer resources

Info

Publication number
WO2024016007A2
Authority
WO
WIPO (PCT)
Prior art keywords
distributed computing
computing unit
container
physical
recited
Prior art date
Application number
PCT/US2023/070293
Other languages
French (fr)
Inventor
Nitin Perumbeti
Jeff BALKANSKI
Hamza ZAMAN
Original Assignee
Crusoe Energy Systems Llc
Priority date
Filing date
Publication date
Application filed by Crusoe Energy Systems Llc
Publication of WO2024016007A2

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0631: Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06: Energy or water supply

Definitions

  • the present disclosure relates generally to systems for dynamic modeling of computer resources associated with power systems, and more particularly, for managing and controlling the operation of such resources to optimally and dynamically balance power consumption and power production.
  • U.S. Patent No. 10,862,307 discloses flare mitigation systems that employ power generation equipment to generate electricity from natural gas produced at oil wells.
  • the systems allow the generated electricity to be consumed onsite, for example, to operate computing units that perform power-intensive, distributed computing tasks to generate revenue.
  • the computing units utilized, which can include, for example, cryptocurrency miners or servers including GPUs, continue to advance in processing power and energy efficiency.
  • These continuous changes present numerous technical problems for tracking power production and consumption metrics, especially when such computing units are installed at various remote locations (e.g., oilfields) and utilize intermittent power generated by one or more power producers.
  • when a computing unit is replaced with a newer model, or is moved from one location to another, the change is much more complex than simply unplugging the computing unit and plugging a new computing unit in its place. Difficulties can arise if there is not a synergistic relationship between power generation and power consumption at a given site, which may include multiple power producers that generate the energy required for any number of power consumers (e.g., computing units). For example, if power generation modules are operated outside specific processing parameters, the equipment may be damaged. Moreover, optimizing a utility of the power consumers adds a further layer of complexity; the specific balance of power generation and consumption must thread the needle between caution and business demands.
  • the system may include: a processor; memory; and a system orchestrator stored in the memory that, when executed by the processor, causes the processor to perform operations including: generate a graphical user interface modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generate, in a database in the memory, a data record including container information of each of the physical containers and/or position information for each of the distributed computing units in communication with the network, the container information and/or position information being automatically assigned to each distributed computing unit in communication with the network based on the network address of the respective network interface; and dynamically adjust the container information and/or position information in response to entry events.
  • the techniques described herein relate to a system, wherein the container information includes information describing the geographical location of the physical container, a container identifier for the physical container, a size of the physical container, a type of the physical container and a cost of the physical container, wherein the position information includes a position of the distributed computing unit within the respective physical container, including at least one of a rack, a shelf and a slot where the distributed computing unit is positioned within the respective physical container.
  • the techniques described herein relate to a system, wherein the dynamically adjusting of the container information and/or position information includes confirming the container information and/or position information by an automatic entry event.
  • the techniques described herein relate to a system, wherein the automatic entry event includes scanning of a machine-readable representation affixed to the distributed computing unit.
  • the techniques described herein relate to a system, wherein the scanning of the machine-readable representation occurs at a repair facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the repair facility and/or a position of the distributed computing unit within the repair facility.
  • the techniques described herein relate to a system, wherein the scanning of the machine-readable representation occurs at a storage facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the storage facility and/or a position of the distributed computing unit within the storage facility.
  • the techniques described herein relate to a system, wherein the system orchestrator causes the processor to, in response to a new entry event indicating an addition of a new distributed computing unit into one of the physical containers: automatically update the database to include a new data record including an automatically generated inventory identifier.
  • the techniques described herein relate to a system, wherein the system orchestrator causes the processor to, in response to a connection of the new distributed computing unit to one of the network interfaces of one of the physical containers, automatically retrieve the preassigned unique hardware identifier from the new distributed computing unit, and store the preassigned unique hardware identifier and the geographical location of the network interface to which the new distributed computing unit is connected in the new data record.
  • the techniques described herein relate to a system, wherein the distributed computing units are a plurality of cryptocurrency miners.
  • each of the data records includes financial information related to the cryptocurrency miner.
  • the techniques described herein relate to a system, wherein the financial information includes at least one of a purchase price of the cryptocurrency miner, a depreciation of the cryptocurrency miner or a profit generated by the cryptocurrency miner.
  • each of the data records includes repair and/or maintenance history information for the cryptocurrency miner including financial costs associated with the repair and/or maintenance.
  • each of the data records includes a hash rate for the cryptocurrency miner.
  • the techniques described herein relate to a system, wherein the distributed computing units each include a plurality of graphics processing units, the data record including a number of graphics processing units for each distributed computing unit, a number of currently available graphics processing units for each distributed computing unit, and a number of currently utilized graphics processing units for each distributed computing unit.
  • the techniques described herein relate to a system, wherein each of the distributed computing units are configured for running a plurality of virtual machines, each graphics processing unit being adapted to run a single virtual machine alone or together with one or more of the other graphics processing units of the respective distributed computing unit, the data record including the one or more graphics processing units running each virtual machine.
  • the techniques described herein relate to a system, wherein the data record includes the number of graphics processing units of each distributed computing unit currently running virtual machines and an excess capacity for running further virtual machines for each distributed computing unit.
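To make the data records above concrete, here is a minimal sketch in Python of how such records might be structured; all class and field names are hypothetical illustrations, not identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ContainerRecord:
    container_id: str
    geo_location: str        # geographical location of the physical container
    size: str
    container_type: str
    cost_usd: float

@dataclass
class DcuRecord:
    inventory_id: str        # assigned by the system orchestrator
    hardware_id: str         # preassigned unique identifier, e.g., a MAC address
    container_id: str
    rack: int
    shelf: int
    slot: int
    gpu_total: int = 0
    gpu_utilized: int = 0
    vm_gpu_map: dict[str, list[int]] = field(default_factory=dict)  # vm_id -> GPU indices

    @property
    def gpu_available(self) -> int:
        # excess capacity for running further virtual machines
        return self.gpu_total - self.gpu_utilized
```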
  • the techniques described herein relate to a system, wherein the physical containers each include a plurality of racks having predefined rack positions configured for receiving the distributed computing units, each of the predefined rack positions being associated with one of the network interfaces, the memory partitioned to store the predefined rack positions and the associated network interfaces, the system orchestrator configured to automatically assign the predefined rack position associated with the network interface with which the distributed computing unit is connected to the corresponding data record.
  • the techniques described herein relate to a system, wherein the physical containers are located at different physical sites, each of the physical sites having at least one of the physical containers and at least one of the physical sites having a plurality of the physical containers, the system orchestrator being configured for causing the processor to generate a graphical user interface depicting the physical sites, the physical containers within the physical sites, the predefined rack positions within the containers, and the distributed computing units in the predefined rack positions.
  • the techniques described herein relate to a method of updating a computerized inventory of distributed computing units movable throughout a plurality of containers across a plurality of physical sites, the method including: providing a computer system including a processor, memory and a system orchestrator stored in the memory and executable by the processor to cause the processor to perform operations to update an inventory model stored in the memory, the inventory model including information modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generating, by the system orchestrator in response to a request from a user device, a new object in the memory and associating an inventory identifier unique to the system orchestrator with the new object.
  • the techniques described herein relate to a method wherein the generating of the new object in the memory and associating the inventory identifier unique to the system orchestrator with the new object is performed prior to the new distributed computing unit being connected to the respective network interface.
  • the techniques described herein relate to a method wherein the user interface configured to generate the request to produce the machine-readable representation of the inventory identifier is a user interface configured to generate the request to print the machine-readable representation of the inventory identifier.
  • the techniques described herein relate to a method wherein the new distributed computing unit connected to the respective one of the network interfaces is at a specific location on a rack in the physical container, the position of the respective network interface within the physical container including the specific location of the new distributed computing unit on the rack.
  • the techniques described herein relate to a method further including retrieving and/or transmitting, by the system orchestrator, data for displaying, on the user interface provided with the visual representation of the new object, a representation illustrating the relationship between the new distributed computing unit and the other distributed computing units in the physical container at respective locations on the rack.
  • the techniques described herein relate to a method further including automatically generating, by the system orchestrator in response to a disconnecting of one of the distributed computing units from the respective network interface and a reconnecting of the disconnected distributed computing unit with a further network interface at a further location, updated location information for the reconnected distributed computing unit based on location information associated with the further network interface.
  • the techniques described herein relate to a method wherein upon the reconnection, automatically determining, by the system orchestrator, the preassigned unique hardware identifier of the reconnected distributed computing unit and looking up an object in the inventory model associated with the preassigned unique hardware identifier to identify the reconnected distributed computing unit and associate the updated location information for the reconnected distributed computing unit with the object.
  • the techniques described herein relate to a system for dynamic modeling of computer resources including: a communication system adapted to provide a network and including a plurality of network interfaces each assigned a network address of the network; a coordinator computer in communication with the network, the coordinator computer including a coordinator processor, a coordinator memory and a system orchestrator stored in the memory and executable by the coordinator processor to cause the coordinator processor to perform operations; a set of first distributed computing units in a first physical container at a first physical site in communication with the network via the network interfaces, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; a first container controller including a first controller processor, a first controller memory and a first container orchestrator, the first container orchestrator configured to store in the first controller memory or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the first distributed computing units; and a second container controller including a second controller processor, a second controller memory and a second container orchestrator, the second container orchestrator configured to store in the second controller memory or periodically transmit data including a total computing capacity and a currently available computing capacity for each of a set of second distributed computing units in a second physical container.
  • the techniques described herein relate to a system wherein the coordinator computer is configured to receive a request from a client computer to access a virtual machine, and in response to the request, to retrieve the currently available computing capacity for each of the first and second distributed computing units, and to establish the virtual machine on the first or second distributed computing units based on the currently available computing capacity from the first and second inventory models.
  • each of the first and second distributed computing units are configured for running one or more virtual machines, each graphics processing unit being adapted to run a single virtual machine alone or together with one or more of the other graphics processing units of the respective distributed computing unit, each of the first and second distributed computing units further including an agent configured for controlling a number of virtual machines generated by the respective first or second distributed computing units, the system orchestrator configured for instructing the first or second container orchestrator to generate the virtual machine requested by the client computer, the first or second container orchestrator configured for instructing one of the agents to generate the virtual machine requested by the client computer on one or more of the graphics processing units controlled by the agent.
  • each of the agents is configured to monitor and store and/or periodically transmit metadata for the graphics processing units controlled by the agent to broadcast to the corresponding first or second container orchestrator.
  • the techniques described herein relate to a system wherein the first or second container orchestrator are configured to retrieve information regarding which of the graphics processing units of the first and second distributed computing units are mining cryptocurrency and which of the graphics processing units of the first and second distributed computing units are generating virtual machines for clients, and to transmit the retrieved information to the system orchestrator.
  • the techniques described herein relate to a system wherein the coordinator computer is configured to receive a request from a client computer to run a virtual machine, and in response to the request, to send a query via the system orchestrator to the first and second container orchestrators to retrieve an operating status of the graphics processing units of the first and second distributed computing units, and to send instructions to wipe an active virtual machine running on one or more of the graphics processing units of one of the first and second distributed computing units and to spin up the virtual machine requested by the client computer on one or more of the graphics processing units in which the active virtual machine was wiped out.
  • the techniques described herein relate to a system wherein, along with the spinning up of the virtual machine requested by the client computer on the respective one or more of the graphics processing units, the system orchestrator is configured to prompt available graphics processing units to spin up a further virtual machine running a hash algorithm to mine cryptocurrency.
  • the techniques described herein relate to a method for allocating computer resources including: receiving, by a system orchestrator connected to a plurality of distributed computing units by a network, a request from a client computer to access a virtual machine, the plurality of distributed computing units including at least a set of first distributed computing units in a first physical container at a first physical location and a set of second distributed computing units in a second physical container at the first physical location or a second physical location, each of the first and second distributed computing units including a plurality of graphics processing units, the request identifying a requested number of graphics processing units for running the requested virtual machine; identifying a distributed computing unit selected from the first and second distributed computing units with one or more available graphics processing units equal to or greater than the requested number of graphics processing units, the one or more available graphics processing units having available computing capacity for generating the requested virtual machine; instructing the one or more available graphics processing units to wipe out an entire baseload running on the one or more available graphics processing units of the selected distributed computing unit; and spinning up the requested virtual machine on the requested number of graphics processing units of the one or more available graphics processing units.
  • the techniques described herein relate to a method, wherein the one or more available graphics processing units is a number of available graphics processing units that is greater than the requested number of graphics processing units such that there are further available graphics processing units following the spinning up the requested virtual machine on the requested number of graphics processing units of the one or more available graphics processing units, the method further including spinning up a new baseload on the further available graphics processing units, the new baseload running contemporaneously with the requested virtual machine.
  • the techniques described herein relate to a method, wherein the baseload and the new baseload are each a virtual machine performing a profit generating task.
  • the techniques described herein relate to a method, wherein the profit generating task is running a hash algorithm to mine cryptocurrency.
  • the techniques described herein relate to a method, wherein the profit generating task is training machine learning models.
  • the techniques described herein relate to a method wherein the identifying of the distributed computing unit selected from the first and second distributed computing units with one or more available graphics processing units equal to or greater than the requested number of graphics processing units includes querying a first container orchestrator analyzing the first distributed computing units and a second container orchestrator analyzing the second distributed computing units to identify one of the first or second distributed computing units having available computing capacity for generating the requested virtual machine.
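A hedged sketch of the allocation flow claimed above, assuming hypothetical container-orchestrator methods (list_dcus, free_gpus, wipe_baseload, spin_up_vm, spin_up_baseload); the patent does not specify these interfaces.

```python
def allocate_vm(orchestrators, requested_gpus: int, vm_spec: dict) -> str:
    """Place a requested VM, wiping a baseload and backfilling leftover GPUs."""
    for orch in orchestrators:                 # first and second container orchestrators
        for dcu in orch.list_dcus():
            free = orch.free_gpus(dcu)         # indices of available GPUs on this DCU
            if len(free) >= requested_gpus:
                orch.wipe_baseload(dcu, free)  # wipe the entire baseload on the free GPUs
                vm_id = orch.spin_up_vm(dcu, free[:requested_gpus], vm_spec)
                leftover = free[requested_gpus:]
                if leftover:                   # backfill with a new profit-generating baseload
                    orch.spin_up_baseload(dcu, leftover)
                return vm_id
    raise RuntimeError("no distributed computing unit has sufficient available GPUs")
```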
  • Fig. 1 schematically shows components of an exemplary system 100 according to an embodiment;
  • Fig. 2 schematically shows an exemplary power production system 200 that may be utilized in the system 100 of Fig. 1;
  • Fig. 3a illustrates a method 300 of dynamically creating and updating a power consumer object in a computerized inventory corresponding to a DCU movable throughout a plurality of containers across one or more sites;
  • Fig. 3b illustrates substeps of step 308 of the method 300 in Fig. 3a;
  • Fig. 4 illustrates a method 400 for allocating computer resources of a power consumption system;
  • Fig. 5 illustrates a method 500 for dynamically determining a target power output of a power production system and balancing a power consumption system load by selecting and adjusting individual consumers such that an aggregate utility of the power consumption systems is maximized.
  • Fig. 6 shows an exemplary computing machine 600 and modules 650 according to an embodiment.

Detailed Description
  • Fig. 1 schematically shows components of an exemplary system 100 for dynamic modeling of computer resources.
  • the system 100 includes a central control system 101 in communication, via a network 134, with various subsystems that make up a particular installation or site 108.
  • a site 108 includes a power production system 200, a power consumption system 103 and a communication system 132 to provide communication with the control system 101 (e.g., via a network 134).
  • the central control system 101 is generally configured to manage (i.e., model, monitor and control) the site 108 components in order to maintain processing conditions within acceptable operational constraints. Such constraints may be determined by economic, practical, and/or safety requirements.
  • a coordinator 130 of the control system 101 may handle high-level operational control goals, low-level PID loops, communication with both local and remote operators, and communication with both local and remote systems.
  • the coordinator 130 comprises a coordinator processor 130a, a coordinator memory 130b and a system orchestrator 130c stored in the coordinator memory 130b and executable by the coordinator processor 130a to cause the coordinator processor 130a to perform operations related to managing the various components associated with each site 108.
  • the control system 101 may manage any number of sites and/or additional components that are not associated with a particular site.
  • the system 100 includes a power production system 200 associated with a site 108.
  • the power production system 200 may include any number of power producers 231 adapted to generate electrical power 205 that may be consumed by components of the power consumption system 103.
  • the power producers 231 may comprise one or more power generation modules (e.g., gensets, turbines, etc.) that generate electrical power 205 from a fuel gas (e.g., natural gas). Additionally or alternatively, energy producers such as solar panels, wind turbines, batteries, etc. may be employed.
  • a power consumption system 103 is also provided as part of the site 108.
  • the power consumption system 103 generally comprises any number of power consumers 112, 122 adapted to consume the electrical power 205 generated by the power production system 200.
  • the power consumers comprise distributed computing units (“DCUs”) 112, 122 that collectively enable a modular computing installation, for example, a data center, cryptocurrency mine or graphics computing cell.
  • Each of the DCUs 112, 122 may comprise a computing machine having one or more processors 116, 126 (e.g., CPUs, GPUs, ASICs, etc.) adapted to conduct any number of processing-, computational-, and/or graphics-intensive computing processes.
  • the DCUs may be employed for artificial intelligence (“AI”) research, training machine learning (“ML”) and other models, data analysis, server functions, storage, virtual reality (“VR”) and/or augmented reality (“AR”) applications, tasks relating to the Golem Project, and/or non-currency blockchain applications.
  • the DCUs may be employed to execute mathematical operations in relation to the mining of cryptocurrencies, such as the following hashing algorithms: SHA-256, ETHash, scrypt, CryptoNight, RIPEMD160, BLAKE256, X11, Dagger-Hashimoto, Equihash, LBRY, X13, NXT, Lyra2RE, Qubit, Skein, Groestl, BOINC, X11gost, Scrypt-jane, Quark, Keccak, Scrypt-OG, X14, Axiom, Momentum, SHA-512, Yescrypt, Scrypt-N, Cunningham, NIST5, Fresh, AES, 2Skein, Equihash, KSHAKE320, Sidechain, Lyra2RE, HybridScryptHash256, Momentum, HEFTY1, Skein-SHA2, Qubit, SpreadX11, Pluck, and/or Fugue256.
  • the DCUs 112, 122 may be housed within one or more containers, structures, or data centers 110, 120 disposed at a physical location associated with the site 108.
  • the containers 110, 120 may comprise a prefabricated housing or enclosure to contain and protect the various electronics disposed therein.
  • the enclosure may comprise a customized shipping container or other modular housing system designed for portability, durability, safety, stack-ability, ventilation, weatherproofing, dust control and operation in rugged oilfield conditions.
  • Each container 110, 120 may also include an electrical power distribution system 186 adapted to receive electrical power 205 from the power production system 200 and distribute the same to the various electrical components of the container.
  • the system 186 may comprise a series of power distribution units (“PDUs”) or power channels in communication with one or more breaker panels.
  • the containers 110, 120 may include one or more backup power systems 187 (e.g., batteries), and/or an environment control system 189.
  • each container 110, 120 (and any electronic components contained therein) are in communication with the central control system 101 via a connection to the communication system 132.
  • each container 110, 120 may include a plurality of network interfaces 136 of communication system 132 (each having a network address) and the DCUs 112, 122 may be connected to such interfaces 136 (e.g., via ethernet).
  • Each container 110, 120 may comprise a container controller 114, 124 configured to communicate with the central control system 101 and the DCUs of the respective container (discussed below).
  • a first container 110 may include a plurality of first DCUs 112 and a first container controller 114 configured for controlling the first DCUs 112.
  • a second container 120 may include a plurality of second DCUs 122 and a second container controller 124 configured for controlling the second DCUs 122.
  • the respective container controller 114, 124 may control one or more associated DCUs 112, 122 based on information received from the DCUs (or other container components) and/or according to instructions received from the central control system 101.
  • each container controller 114, 124 may include a controller processor 114a, 124a, a controller memory 114b, 124b, and a container orchestrator 114c, 124c.
  • Each container controller 114, 124 is generally configured to determine consumer information for each power consumer associated with the respective container and container information corresponding to the respective container (discussed in detail below).
  • the container controller 114, 124 may further be configured to store such information in the respective controller memory 114b, 124b and/or to periodically transmit such information to the central control system 101 (either directly or via an intermediary such as a site controller 177) such that the information may be stored in a database 130e.
  • container information and consumer information may be employed to execute a method 300 of managing a digital inventory of DCUs (e.g., 112, 122) movable throughout a plurality of containers (e.g., 110, 120) associated with a single site (e.g., 108), or even across multiple sites.
  • each container controller 114, 124 may include a control module 114d, 124d adapted to adjust operating parameters of associated power consumers (e.g., DCUs 112, 122).
  • the container controllers 114, 124 may employ the control modules 114d, 124d to balance a load of the container(s) to a target power received from the control system 101 (either directly or via an intermediary such as a power consumption system controller 177).
  • the control module 114d, 124d may then adjust operating parameters of one or more consumers in order to balance the consumers load to the target power.
  • the control module may select DCUs and/or particular DCU processors (e.g., 116, 126) for such adjustment: based on consumer information associated with each of the consumers (e.g., priority information, consumer metrics, etc.); to satisfy predetermined requirements or constraints; and/or to optimize a total utility of the consumers (e.g., revenue generation, hash power, uptime, etc.).
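One plausible way a control module could implement this selection, sketched under the assumption of a greedy, priority-ordered curtailment policy (the patent does not prescribe a specific algorithm):

```python
def balance_to_target(dcus: list[dict], target_kw: float) -> float:
    """Shed load from lowest-priority consumers until total draw <= target_kw."""
    total = sum(d["draw_kw"] for d in dcus)
    for dcu in sorted(dcus, key=lambda d: d["priority"]):  # lowest priority sheds first
        if total <= target_kw:
            break
        shed = min(dcu["draw_kw"], total - target_kw)
        dcu["draw_kw"] -= shed       # e.g., idle hashboards or lower clock/power limits
        total -= shed
    return total                     # resulting aggregate consumption
```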
  • each site 108 may comprise or communicate with a communication system 132 that provides a network 134 to which various components of the system 100 may be connected.
  • the network 134 may include wide area networks (“WAN”), local area networks (“LAN”), intranets, the Internet, wireless access networks, wired networks, mobile networks, telephone networks, optical networks, or combinations thereof.
  • the network 134 may be packet switched, circuit switched, of any topology, and may use any communication protocol. Communication links within the network 134 may involve various digital or analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth.
  • the communication system 132 may provide an internal network for a given site 108 that includes automatic load-balancing functionality.
  • the system 200 comprises one or more power producers (power generation modules 231a, 231b) in communication with a fuel gas supply 220 such that power generation modules 231a, 231b may receive a fuel gas stream 202 therefrom (e.g., natural gas).
  • the power generation modules 231a, 231b are further shown to optionally be in electrical communication with an electrical transformation module 235 such that an electrical output 203 may be transmitted from the power generation modules 231a, 231b to the electrical transformation module 235.
  • the power generation modules 231a, 231b may each comprise a generator component adapted to generate an electrical output 203 via combustion of the natural gas 202.
  • the generator component may employ either a fuel-gas-driven reciprocating engine or a fuel-gas-driven rotating turbine to combust the natural gas 202 and drive an electrical generator.
  • each power generation module 231a, 231b may be associated with various producer information, such as operational requirements, measured or determined producer metrics, and statistics determined over a time period.
  • the employed power generation modules 231a, 231b may each be specified to operate with natural gas 202 having a wide variety of properties.
  • certain modules may include generator components adapted to utilize rich natural gas or natural gas that has been processed such that it is substantially free of propane and higher hydrocarbon (C3+) components.
  • the producers may be associated with a gas consumption rate, which refers to the volume of natural gas consumed by the generator within a given time period.
  • the gas consumption rate may be determined for continuous operation of the generator at standard ambient conditions.
  • the gas consumption rate of engine-type generators may range from about 40 Mscfd to about 500 Mscfd.
  • the gas consumption rate of turbine-type generators may range from about 1 MMscfd to about 6 MMscfd.
  • the power producers may further be associated with a generated power output that refers to the electrical energy output by a given generator after efficiency losses within the generator. This property is often referred to as “real power” or “kWe.”
  • the generated power output may be provided as “continuous power,” which refers to the real power obtained from the generator when the module is operating continuously at standard ambient conditions.
  • engine-type generators may produce an electrical output ranging from about 70 kW to about 2 MW, with an associated voltage ranging from about 480 V to about 4.16 kV.
  • turbine-type generators may produce an electrical output ranging from about 2 MW to 30 MW, with an associated voltage ranging from about 4.16 kV to about 12 kV.
  • the various generator components employed in the power generation module 231 may be adapted to operate reliably in harsh conditions, and with variability in gas rates, composition and heating values.
  • the specific generators employed in each of power generation modules 231a, 231b may be selected and configured based on the specifications and availability of natural gas at a particular location.
  • each of power generation modules 231a, 231b may optionally be in further communication with a backup fuel supply 237 containing a backup fuel 208.
  • the backup fuel supply 237 may comprise a natural gas storage tank containing pressurized natural gas.
  • the backup fuel supply 237 may comprise an on-site reserve of propane. At times of low gas availability, the backup fuel 208 may be piped directly to the power generation modules 231a, 231b from the backup fuel supply 237.
  • each of the power generation modules 231a, 231b will further comprise various ancillary components (commonly referred to as the “balance of plant”). Such components may include, but are not limited to, compressors, lubrication systems, emissions control systems, catalysts, and exhaust systems.
  • the power generation modules 231a, 231b may optionally comprise integrated emissions reduction technologies, such as but not limited to, a non-selective catalytic reduction (“NSCR”) system or a selective catalytic reduction (“SCR”) system.
  • the power generation modules 231a, 231b may each comprise a housing designed to contain and protect the above-described components of the module.
  • Such housing may provide features such as, but not limited to, weatherproofing, skid or trailer mounting for portability, and sound attenuation.
  • the power generation modules 231a, 231b may each be supported by a transportable chassis, trailer, or railcar to facilitate positioning and/or repositioning of the module. More particularly, the transportable chassis, trailers, or railcars may be coupled to vehicles, such as trucks or trains, and transported over a geographic area. The generator skids can range in size from an enclosed trailer hauled behind a pickup truck, to a plurality of semi-trailer loads for the generator and its required ancillary equipment.
  • As shown, each of the power generation modules 231a, 231b can include one or more sensors 270 for measuring or determining various power producer metrics.
  • the modules can further include a respective controller 272 for transmitting producer information (e.g., metrics and statistics) to a controller (e.g., a master container controller 114, a site controller 177, or the remote control system 101).
  • controllers 272 can comprise a modbus controller such that producer metrics may be retrieved from the modbus controller at predetermined intervals, for example every 15 seconds.
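For illustration, a polling loop of this kind might look like the following sketch, assuming pymodbus 3.x; the register addresses, layout, and scaling are placeholders:

```python
import time
from pymodbus.client import ModbusTcpClient  # pymodbus 3.x assumed

def poll_producer_metrics(host: str, interval_s: int = 15):
    """Fetch producer metrics from a Modbus controller at a fixed interval."""
    client = ModbusTcpClient(host)
    client.connect()
    try:
        while True:
            # Placeholder register map: two holding registers for power and coolant temp.
            rr = client.read_holding_registers(address=0, count=2, slave=1)
            if not rr.isError():
                power_kwe, coolant_c = rr.registers
                print(f"real power: {power_kwe} kWe, coolant: {coolant_c} C")
            time.sleep(interval_s)
    finally:
        client.close()
```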
  • System 200 can further include an inlet pressure sensor 274 configured to measure the pressure of gas entering into gas supply line 220.
  • the system may include one inlet pressure sensor per power generation module 231a, 231b.
  • the electrical power production system 200 may comprise an electrical transformation module 235 in electrical communication with the power generation modules 231a, 231b.
  • the electrical power 203a, 203b generated by each of the power generation modules 231a, 231b may be transmitted through the electrical transformation module 235 such that it may be converted into an electrical flow 205 that is suitable for consumption by the power consumption system 103.
  • the electrical transformation module 235 may comprise various power conditioning equipment.
  • one or more step-down transformers may be employed to reduce the voltage of an incoming electrical flow 203a, 203b by one or more “steps down” into a secondary electrical flow 205 comprising a lower voltage.
  • a 1 MVA step-down transformer adapted to step down the voltage of an incoming electrical flow 203a, 203b having a voltage of up to about 4.16 kV.
  • the electrical transformation module 235 may convert the incoming electrical flow 203a, 203b to an output electrical flow 205 having a voltage of about 480 V or less.
  • the electrical transformation module 235 may reduce voltage in a plurality of steps.
  • the electrical transformation module may receive an incoming electrical flow 203a, 203b having a voltage of up to about 12 kV and may step down the voltage via multiple steps to a reduced-power output electrical flow 205 having a voltage of about 480 V or less.
  • the power production system may comprise a main breaker capable of cutting off all downstream electrical flows, which allows an operator to quickly depower any attached computing equipment in the case of operational work or emergency shutdown. Additionally or alternatively, component terminals may be fitted with “quick connects.”
  • each of the electrical transformation modules 235 can include one or more sensors 276 for measuring or determining various producer metrics.
  • the modules can further include a respective controller 278 for transmitting the producer metrics to a controller (e.g., container controller 114, 124 or the remote control system 101).
  • controller 278 can comprise a modbus controller such that the metrics may be fetched from the modbus controller at predetermined intervals, for example every 15 seconds. It will be appreciated that any number of power generation modules 231a, 231b and electrical transformation modules 235 may be included in the power production system 200.
  • the power generation modules 231a, 231b may be directly wired from a terminal of each of the power generation modules 231a, 231b into a primary side of the electrical transformation module 235.
  • two or more sets of power generation modules 231a, 231b and electrical transformation modules 235 may be employed, in a series configuration, to power any number of computing components.
  • a step-down transformer may not be required.
  • where the output electrical flow 203 generated by the power generation module 231 comprises a voltage compatible with components of the power consumption system 103 (e.g., up to about 480 V), such electrical output may be utilized without stepping down the voltage.
  • the electrical power production system 200 may comprise multiple power generation modules 231a, 231b connected in parallel.
  • the multiple electrical power generation modules 231a, 231b may be phase-synced such that their output electrical flows 203a, 203b may be combined without misalignment of wave frequency.
  • the multiple phase-synced electrical flows 203a, 203b may be wired into a parallel panel 260, which outputs a single down-stream flow 204 with singular voltage, frequency, current and power metrics.
  • the singular down-stream flow 204 may be wired into a primary side of an electrical transformation module 235 for voltage modulation.
  • the singular down-stream flow 204 may be transmitted to the electrical transformation module 235 such that the flow may be converted into an output electrical flow 205 that is suitable for consumption by various components of the power consumption system.
  • each of the power generation modules 231a, 23 lb and/or the parallel panel 260 may comprise a control system that allows for the module to be synchronized and paralleled with other power generation modules.
  • the control system may allow load-sharing of up to 32 power generation modules via a data link and may provide power management capabilities, such as load-dependent starting and stopping, asymmetric load-sharing, and priority selection. Such functionality may allow an operator to optimize load-sharing based on various producer metrics, for example, running hours and/or fuel consumption.
  • Figs. 3a-3b illustrate the steps of method 300, which includes a step 302 of providing a computer system in the form of coordinator computer 130, including coordinator processor 130a, coordinator memory 130b and system orchestrator 130c stored in coordinator memory 130b and executable by coordinator processor 130a to cause coordinator processor 130a to perform operations to update an inventory model stored in coordinator memory 130b and associated with an inventory system.
  • the inventory system may comprise models of various components of the system 100, such as: sites, power producers (e.g., power generation modules, electrical transformation modules, etc.) and power consumers (e.g., containers, DCUs, etc.).
  • the system may determine and store site information for each site 108.
  • site information may include: site ID, operator information, location information (e.g., address and/or coordinates), fuel gas information (e.g., current and historical heat values, volumes), network equipment information, associated power producers information, and associated power consumers information (e.g., associated containers, associated power consumers).
  • the system may monitor, determine and/or store producer information such as: producer ID, producer type, an associated site, networking information (e.g., generator modbus URL, ECU modbus URL), operations constraints and requirements, producer metrics, producer statistics, and producer controls.
  • the system may monitor and/or calculate current values for some or all of the listed power producer metrics.
  • the system may calculate producer statistics over one or more time periods by analyzing historical values of such metrics.
  • Exemplary statistics include slope and exponential moving average (EMA).
  • the system determines engine pressure slope, engine pressure EMA, coolant temperature slope and/or coolant temperature EMA. Such statistics may be determined for various time periods.
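For reference, the two statistics named above can be computed as in this short sketch (evenly spaced samples assumed):

```python
def ema(samples: list[float], alpha: float = 0.2) -> float:
    """Exponential moving average; alpha weights the most recent sample."""
    value = samples[0]
    for x in samples[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

def slope(samples: list[float]) -> float:
    """Least-squares slope per sample interval (evenly spaced samples)."""
    n = len(samples)
    if n < 2:
        return 0.0
    t_mean = (n - 1) / 2
    x_mean = sum(samples) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(samples))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den
```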
  • power producers may be associated with certain operational requirements that must be observed. Such requirements may be predetermined (e.g., based on producer type) or may be dynamically adjusted according to values of certain producer metrics (e.g., based on a current Knock Index).
  • the system may also model and manage power consumers information for any number of power consumption systems.
  • Such information may comprise: a unique ID, associated container information, and consumer information for each power consumer associated with each of the associated containers.
  • exemplary container information may include: container ID, associated site, associated power producers, container type (e.g., manufacturer, model), networking information (e.g., container modbus URL), VLANs information (e.g., main, ASIC, IoT, etc.), controller information (controller ID, controller IP address), layout information, associated DCUs, and various container metrics.
  • the embodiments may also manage power consumer information for each consumer.
  • Exemplary consumer information may include, but is not limited to: unique ID, hardware identifier, network information, associated container and location information, consumer type (e.g., manufacturer, model), processor information (e.g., type, count, temperature, etc.), fan speed, hashrate, board information (e.g., temperature), software information, uptime, financial information (e.g., mining pool, pool user), owner information, status information and/or priority information.
  • each of the consumers has a preassigned unique hardware identifier accessible to the system orchestrator 130c via the network.
  • the preassigned unique hardware identifier can be a media access control (MAC) address.
  • method 300 may further include a step 304 of generating, by system orchestrator 130c in response to a request from a user device 138 (e.g., a smartphone or computer), a new object in the memory 130b and associating a unique inventory identifier with the new object.
  • the inventory identifier is different from the MAC address and is an ID assigned by the system orchestrator 130c.
  • User device 138 can be a client computer, for example a mobile phone.
  • the generating of the new object in the memory 130b and associating the inventory identifier unique to the system orchestrator 130c with the new object may be performed prior to the new DCU 112, 122 being connected to the respective network interface 136.
  • method 300 may include a step 306 of sending, by system orchestrator 130c, a transmission that directs the user device 138 to generate a graphical user interface configured to generate a request to produce a machine-readable representation of the inventory identifier that is configured for being affixed to one of the DCUs 112, 122.
  • An intermediate system 144, for example an application server or a webserver, can receive the transmission from system orchestrator 130c and generate the graphical user interface on the user device 138.
  • the graphical user interface can include an icon that is selectable to print the machine-readable representation via a printer on the network 134.
  • the machine-readable representation can be a machine-readable code, for example a barcode or a QR code.
  • the user of the user device 138 can then affix the machine-readable code, which is for example printed on a sticker, to the new DCU 112, 122 and connect the new DCU 112, 122 to a respective one of the network interfaces 136, for example by plugging in an ethernet cable of the network 134 that is in communication with the network interface 136 into a port of the new DCU 112, 122.
  • Method 300 also includes a step 308 in which the system orchestrator 130c, in response to the new DCU 112, 122 being connected to the network interface 136, automatically retrieves the preassigned unique hardware identifier from the new DCU 112, 122 and location information for the new DCU 112, 122 and associates the preassigned unique hardware identifier and the location information with the new object in the inventory model in memory 130b.
  • each container may be associated with layout information corresponding to a plurality of racks disposed within a container.
  • Each rack may comprise a plurality of shelves, where each shelf comprises various slots into which DCUs may be installed. Accordingly, each slot represents a unique physical location that may be employed to determine the physical location of a particular DCU if such components are correlated by the system.
  • each slot may be configured to include one of the network interfaces 136 of the communication system 132, wherein each interface is assigned a unique, static network address. Accordingly, when a DCU is connected to the particular network interface 136, the DCU is automatically associated with the corresponding network address. Because the network address uniquely identifies a particular slot, in a shelf of a rack located in a container disposed at a site, the network address association allows for a physical location to be determined.
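Because each network interface's address is static and maps to exactly one slot, the location lookup can be as simple as a static table; the following sketch uses assumed names and example addresses:

```python
# Example mapping; a real deployment would load this from the partitioned memory
# described above. Addresses and identifiers here are invented for illustration.
SLOT_BY_ADDRESS = {
    "10.20.1.11": ("site-108", "container-110", 1, 2, 3),  # rack, shelf, slot
    "10.20.1.12": ("site-108", "container-110", 1, 2, 4),
}

def locate_new_dcu(network_address: str, hardware_id: str) -> dict:
    """Resolve a newly connected DCU's physical location from its interface address."""
    site, container, rack, shelf, slot = SLOT_BY_ADDRESS[network_address]
    return {
        "hardware_id": hardware_id,  # e.g., the MAC address read from the DCU
        "site": site, "container": container,
        "rack": rack, "shelf": shelf, "slot": slot,
    }
```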
  • the new DCU 112, 122 may be connected to the respective network interface 136 at a specific location on a rack in the physical container, with the position of the respective network interface within the physical container 110, 120 including the specific location of the new DCU on the rack.
  • step 308 can include a plurality of substeps, including a substep 308a of automatically, by the system orchestrator 130c and in response to the new DCU being connected to the network interface 136, retrieving the preassigned unique hardware identifier from the new DCU 112, 122 via the communication system 132.
  • a substep 308b includes automatically, by the system orchestrator 130c, determining location information including the physical container 110, 120 in which the respective network interface 136 is located and a position of the respective network interface 136 within the physical container 110, 120 based on the network address of the respective network interface 136.
  • Each of the network interfaces 136 can have a preassigned container identifier, a preassigned rack identifier, a preassigned shelf identifier and a preassigned shelf position identifier, and this information is automatically determined by the system orchestrator 130c upon the connecting of the new DCU 112, 122 to the network interface 136.
  • a substep 308c of step 308 includes automatically, by the system orchestrator 130c, directing the user device 138 to generate a visual representation of the new DCU 112, 122 via a graphical user interface displayed on user device 138.
  • Intermediate system 144 can receive a transmission from system orchestrator 130c to generate the visual representation and can generate the visual representation on the user device 138.
  • the visual representation can be a message with information describing that the new DCU 112, 122 has been added to the network 134 via connection to the specific network interface 136 and requesting the user of the user device 138 to scan the machine-readable representation of the inventory identifier that is affixed to the new DCU 112, 122 so the system orchestrator 130c can associate the preassigned unique hardware identifier with the inventory identifier and the new object.
  • the information describing the new DCU 112, 122 can include any of the consumer information listed above. This information can be pulled by the container orchestrator 114c as soon as the DCU is connected to the network interface, and further information can be monitored and updated by the container orchestrator 114c periodically pinging the DCU.
  • each DCU can include a plurality of hashboards, with each hashboard including a plurality of chips.
  • the container orchestrator 114c can periodically ping the DCU to obtain a maximum chip temperature for the chips on a hashboard, along with average temperature for each hashboard, and the hashrate of the DCU.
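A hypothetical polling routine for this telemetry might look as follows; read_boards(), hashrate(), and the field names are assumptions for illustration:

```python
def poll_dcu_telemetry(dcu) -> dict:
    """Summarize per-hashboard temperatures and the DCU's hashrate."""
    boards = dcu.read_boards()  # hypothetical: one dict of chip temps per hashboard
    return {
        "max_chip_temp_c": max(max(b["chip_temps"]) for b in boards),
        "avg_board_temp_c": [sum(b["chip_temps"]) / len(b["chip_temps"]) for b in boards],
        "hashrate_ths": dcu.hashrate(),
    }
```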
  • a substep 308d of step 308 includes automatically, by the system orchestrator 130c, associating the preassigned unique hardware identifier with the selected new object in the inventory model upon receiving an input of the machine-readable representation of the inventory identifier via the user device 138 or a separate user device.
  • the input of the machine-readable representation of the inventory identifier can be scanning the machine-readable representation of the inventory identifier via the camera of user device 138.
  • a substep 308e of step 308 includes automatically, by the system orchestrator 130c, associating the location information with the new object in the inventory model.
  • this location information can include the physical container 110, 120 in which the new DCU 112, 122 is located and the position of the respective network interface 136 within the physical container 110, 120.
  • This location information can be in the form of a preassigned container identifier, a rack identifier, a shelf identifier and a shelf position identifier.
  • Method 300 can then include a step 310 of providing a visual representation of the new object via a user interface 130d of coordinator computer 130.
  • the visual representation of the new object can be displayed on a user interface of a further computer connected to network 134.
  • the visual representation of the new object can be displayed in a visual representation of the container with visual representations of the other DCUs in the container and can be selectable to obtain information related to the new DCU that is stored in memory 130b in association with the new object.
  • the system orchestrator 130c can retrieve and/or transmit data for displaying, on the user interface provided with the visual representation of the new object, a representation illustrating the relationship between the new DCU 112, 122 and the other DCUs 112, 122 in the physical container 110, 120 at respective locations on the rack.
  • Method 300 can also include steps for updating the object for any of DCUs 112, 122 if the DCU 112, 122 is disconnected from the network interface 136.
  • Method 300 can include a step 312 of automatically generating, by the system orchestrator 130c in response to a disconnecting of one of the DCUs 112, 122 from the respective network interface 136 and a reconnecting of the disconnected DCU 112, 122 with a further network interface 136 at a further location, updated location information for the reconnected DCU 112, 122 based on location information associated with the further network interface 136.
  • the system orchestrator 130c can automatically determine the preassigned unique hardware identifier of the reconnected DCU 112, 122 and look up an object in the inventory model associated with the preassigned unique hardware identifier to identify the reconnected DCU 112, 122 and associate the updated location information for the reconnected DCU 112, 122 with the object.
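  • One way to realize this lookup-and-update behavior is sketched below; the inventory-model structure, the interface-to-location map, and all identifiers are hypothetical.

```python
# Hypothetical inventory model keyed by preassigned hardware identifier
# (e.g., a MAC address); the structure and names are illustrative only.
inventory_model = {
    "ab:cd:ef:01:23:45": {"inventory_id": "INV-0042", "location": None},
}

# Hypothetical map from each network interface's address to its fixed location.
interface_locations = {
    "10.0.1.17": {"container": "110", "rack": "R2", "shelf": "S4", "slot": "P1"},
}

def on_dcu_reconnect(network_address: str, hardware_id: str) -> dict:
    """Look up the object for the reconnected DCU by its preassigned hardware
    identifier and associate the new interface's location with it."""
    record = inventory_model[hardware_id]
    record["location"] = interface_locations[network_address]
    return record

# A DCU previously seen elsewhere reappears on interface 10.0.1.17.
print(on_dcu_reconnect("10.0.1.17", "ab:cd:ef:01:23:45"))
```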
  • System orchestrator 130c can cause processor 130a to generate a graphical user interface modeling a plurality of physical containers, including for example physical containers 110, 120, and a plurality of DCUs in communication with a network, including for example DCUs 112, 122.
  • the system orchestrator 130c can be configured for causing the processor 130a to generate a graphical user interface depicting physical site 108, the first and second physical containers 110, 120 within the first physical site 108, and positions of the DCUs 112, 122 within the first and second physical containers 110, 120.
  • the physical site 108 can be represented by a site representation.
  • the first and second physical containers 110, 120 can be represented by container representations.
  • DCUs 112, 122 can be represented by DCU representations.
  • Each representation is selectable by a user to access a corresponding data record in memory 130b generated by system orchestrator 130c.
  • the data records for containers 110, 120 include container information of each of the physical containers 110, 120.
  • the container information can include information describing the geographical location of the physical container, a container identifier for the physical container, a size of the physical container, a type of the physical container and a cost of the physical container.
  • the data records for DCUs 112, 122 include position information for each of the DCUs in communication with the network 134.
  • the position information can include a position of the distributed computing unit within the respective physical container, including at least one of a rack, a shelf and a slot where the distributed computing unit is positioned within the respective physical container.
  • the container information and/or position information are automatically assigned to each DCU 112, 122 in communication with the network 134 based on the network address of the respective network interface 136.
  • the physical containers 110, 120 can each include a plurality of racks having predefined rack positions configured for receiving the DCUs, and each of the predefined rack positions can be associated with one of the network interfaces.
  • Memory 130b can be partitioned to store the predefined rack positions and the associated network interfaces, and the system orchestrator 130c can be configured to automatically assign the predefined rack position associated with the network interface 136 with which the DCU is connected to the corresponding data record.
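  • A minimal sketch of this automatic assignment, assuming the interface-to-rack-position mapping is held in a simple dictionary, follows; the addresses and field names are illustrative.

```python
# Hypothetical partition of memory 130b: each network interface's address is
# mapped ahead of time to the predefined rack position it serves.
RACK_POSITIONS: dict[str, dict] = {
    "10.0.1.17": {"container": "110", "rack": 2, "shelf": 4, "slot": 1},
    "10.0.1.18": {"container": "110", "rack": 2, "shelf": 4, "slot": 2},
}

def assign_rack_position(data_record: dict, interface_address: str) -> dict:
    """Copy the predefined rack position for the connected interface into
    the DCU's data record; the record layout is an assumption."""
    data_record["position"] = RACK_POSITIONS[interface_address]
    return data_record

record = assign_rack_position({"hardware_id": "ab:cd:ef:01:23:45"}, "10.0.1.17")
print(record)
```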
  • the physical containers 110, 120 can be located at different physical sites and each of the physical sites can have at least one of the physical containers 110, 120 and at least one of the physical sites can have a plurality of physical containers.
  • the system orchestrator 130c can be configured for causing the processor to generate a graphical user interface depicting the physical sites, the physical containers within the physical sites, the predefined rack positions within the containers, and the DCUs in the predefined rack positions.
  • system orchestrator 130c can also cause processor 130a to automatically assign to each DCU 112, 122 an inventory identifier unique to the system orchestrator 130c, and the inventory identifier is stored in the database with the container information of the distributed computing unit and the preassigned unique hardware identifier.
  • the system orchestrator 130c causes the processor 130a to, in response to a new entry event indicating an addition of the new DCU into one of the physical containers, automatically update the database to include a new data record including an automatically generated inventory identifier unique to the system orchestrator 130c.
  • the system orchestrator 130c can cause the processor 130a to, in response to a connection of the new DCU to one of the network interfaces 136 of one of the physical containers 110, 120, automatically retrieve the preassigned unique hardware identifier from the new DCU, and store the preassigned unique hardware identifier and the geographical location of network interface 136 to which the new DCU is connected in the new data record.
  • the container information and/or position information of each DCU can be dynamically adjusted by the system orchestrator 130c in response to a disconnection of the DCU from the respective network interface 136 and a reconnection of the DCU to a different network interface in a different physical container or the same physical container. For example, if a DCU is disconnected, the graphical user interface can be automatically updated by system orchestrator 130c to remove the corresponding representation, and upon reconnection, the graphical user interface can be automatically updated by system orchestrator 130c to add the corresponding representation in the new position in the corresponding physical container.
  • the adjusted container information and/or position information can be confirmed by an automatic entry event, which can for example include scanning of the machine-readable representation affixed to the DCU via a user device.
  • the scanning of the machine-readable representation can occur at a repair facility and the container information and/or position information of the DCU is updated to indicate a geographical location of the repair facility and/or a position of the DCU within the repair facility.
  • the scanning of the machine-readable representation occurs at a storage facility and the container information and/or position information of the DCU is updated to indicate a geographical location of the storage facility and/or a position of the DCU within the storage facility.
  • the data record can include a number of GPUs for each DCU 112, 122, a number of currently available GPUs for each DCU 112, 122, and a number of currently utilized GPUs for each DCU 112, 122.
  • each GPU can run a single virtual machine alone or together with one or more of the other GPUs of the respective DCU 112, 122 and the data record can include the one or more GPUs running each virtual machine.
  • the data record can also include the number of GPUs of each DCU currently running virtual machines and an excess capacity for running further virtual machines for each DCU.
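  • An illustrative shape for such a data record is sketched below; the class and field names are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DcuRecord:
    """Illustrative per-DCU data record; all names are assumptions."""
    hardware_id: str
    total_gpus: int
    # Maps a virtual-machine ID to the indices of the GPUs running it.
    vm_gpu_assignments: dict[str, list[int]] = field(default_factory=dict)

    @property
    def utilized_gpus(self) -> int:
        # Number of GPUs currently running virtual machines.
        return sum(len(gpus) for gpus in self.vm_gpu_assignments.values())

    @property
    def available_gpus(self) -> int:
        # Excess capacity for running further virtual machines.
        return self.total_gpus - self.utilized_gpus

record = DcuRecord("ab:cd:ef:01:23:45", total_gpus=8,
                   vm_gpu_assignments={"vm-1": [0, 1, 2, 3]})
print(record.utilized_gpus, record.available_gpus)  # 4 4
```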
  • the data records can include financial information related to the cryptocurrency miner.
  • the financial information can include at least one of a purchase price of the cryptocurrency miner, a depreciation of the cryptocurrency miner or a profit generated by the cryptocurrency miner.
  • Each of the data records can also include repair and/or maintenance history information for the cryptocurrency miner including financial costs associated with the repair and/or maintenance. Further, each of the data records can include a hash rate for the cryptocurrency miner.
  • User interface 130d can display information with respect to containers 110, 120, racks, and individual DCUs. For example, if a container or rack is selected, user interface 130d can display information for each DCU within the selected rack or container.
  • User interface 130d can display information with respect to all of the DCUs together in a container 110, 120 considered as a whole.
  • the displayed metrics can include the total number of DCUs online and offline for the container, the total hashrate of all of the DCUs of the container together (current value and over time on a graph), a mining pool hashrate, the load of the generator powering the DCUs of the container, a current fuel consumption of the generator powering the DCUs of the container, the power consumption by the DCUs of the container over time shown in a graph, the gas pressure of the generator powering the DCUs of the container over time shown in a graph, and an average maximum chip temperature for the DCUs of the container over time shown in a graph.
  • User interface 130d can also display information for a plurality of containers with respect to all of the DCUs together in each container. For example, all of the containers can be viewed together on a screen, so a user can compare metrics among containers.
  • the metrics can include a power (kW) of the generator powering the container, a max power (kW) of the generator powering the container, a gas pressure (psi), a current utility in terms of percentage of maximum hashrate being utilized, a miner inventory (total # in container), the number of miners connected to the network, the number of miners that are hashing, the number of miners that are sleeping, the number of hashboards that are broken, and a currently unused mining capacity (PH/s or TH/s).
  • a user of interface 130d can for example thus identify which containers are consuming the most and least power, which containers have the highest and lowest percentages of utility, which containers have the most unhealthy miners and which containers have the most unused capacity in hashes per second.
  • first container orchestrator 114c is configured to store in the first controller memory 114b or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the first DCUs 112.
  • second container orchestrator 124c is configured to store in the second controller memory 124b or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the second DCUs 122.
  • the system orchestrator 130c is configured for communicating with the first and second container orchestrators 114c, 124c to obtain the total computing capacity and the currently available computing capacity for each of the DCUs 112, 122.
  • System 100 can be configured to communicate with client computers 138 seeking computing resources in the form of virtual machines running on DCUs 112, 122.
  • Client computers 138 can access, via a public network, a web page or application generated by an intermediate system 144, which can be a web server or application server, and submit a request to access a virtual machine generated by DCUs 112, 122.
  • the request can include a requested number of GPUs for running the virtual machine, and this request can be communicated to coordinator computer 130 to connect the client computer 138 to a respective one of the DCUs 112, 122.
  • each of DCUs 112, 122 can include multiple GPUs, and each of the first and second DCUs 112, 122 can be configured for running one or more virtual machines.
  • Each GPU can be adapted to run a single virtual machine alone or together with one or more of the other GPUs of the respective DCU 112, 122.
  • the coordinator computer 130 is configured to receive a request via intermediate system 144 from client computer 138 to access a virtual machine, and in response to the request, to retrieve the currently available computing capacity for each of the first and second DCUs 112, 122, and to establish the virtual machine on the first or second DCUs 112, 122 based on the currently available computing capacity from the first and second inventory models.
  • Coordinator computer 130 is configured to receive a request from a client computer 138 to access a virtual machine, and in response to the request, to send a query via the system orchestrator 130c to the first and second container orchestrators 114c, 124c to retrieve an operating status of the GPUs of the first and second DCUs 112, 122, and to send instructions to wipe an active virtual machine running on the respective GPUs of one of the first and second DCUs 112, 122 and to spin up the virtual machine requested by the client computer 138 on one or more of the GPUs in which the active virtual machine was wiped out.
  • if the operator of system 100 is running an active virtual machine for internal business purposes and/or for a profit generating activity on six GPUs of one of DCUs 122, and the client computer 138 requests a virtual machine that is to be run on three GPUs, the active virtual machine is wiped out, the client computer 138 is connected to the DCU 122, and the requested virtual machine is run for the client computer 138 on three of the six GPUs in which the active virtual machine was wiped out.
  • the system orchestrator 130c is configured to prompt available GPUs to spin up a further virtual machine.
  • the internal business purposes and/or profit generating activity can include running a hash algorithm to mine cryptocurrency.
  • the three available GPUs can be prompted to run a further virtual machine for internal business purposes and/or profit generating activity.
  • six of the GPUs can be running an active virtual machine mining cryptocurrency, then the mining halts to reallocate three of the six GPUs to run the requested virtual machine for the client computer and three of the six GPUs to run a further virtual machine mining cryptocurrency.
  • Each of the first and second DCUs 112, 122 can further include an agent 146 configured for controlling a number of virtual machines generated by the respective first or second DCU 112, 122.
  • the system orchestrator 130c can be configured for instructing the first or second container orchestrator 114c, 124c to generate the virtual machine requested by the client computer 138.
  • the first or second container orchestrator 114c, 124c can be configured for instructing one of the agents 146 to generate the virtual machine requested by the client computer 138 on one or more of the GPUs controlled by the agent 146.
  • Each agent 146 can be software running on the respective DCU 112, 122 that the agent 146 is controlling and monitoring.
  • Each of the agents 146 can be configured to monitor, store and/or periodically transmit metadata for the GPUs controlled by the agent 146, and to broadcast the metadata to the corresponding first or second container orchestrator 114c, 124c.
  • a DCU 112 in container 110 can include eight GPUs, with four GPUs running a first virtual machine for a first client computer 138 and four GPUs running a second virtual machine for a second client computer 138.
  • the agent 146 for the DCU 112 stores and/or periodically transmits the metadata for such GPUs to container orchestrator 114c. If the first client computer 138 disconnects from the DCU 112, the agent 146 for the DCU 112 stores and/or periodically transmits the metadata indicating that the four GPUs allocated to the first client computer 138 are no longer running the first virtual machine.
  • system orchestrator 130c can then instruct the DCU 112, via the respective container orchestrator 114c and agent 146, to run a hashing algorithm on the four GPUs that are no longer running the first virtual machine in response to the first client computer 138 disconnecting from the DCU 112, so as to mine cryptocurrency on these four GPUs.
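  • The following sketch illustrates how an agent might report GPU metadata and fall back to mining on freed GPUs; the Agent class, its state labels, and the identifiers are hypothetical.

```python
class Agent:
    """Hypothetical per-DCU agent tracking which workload each GPU runs."""

    def __init__(self, dcu_id: str, gpu_count: int):
        self.dcu_id = dcu_id
        # GPU index -> workload ("idle", "mining", or a client VM ID).
        self.gpu_state = {i: "idle" for i in range(gpu_count)}

    def metadata(self) -> dict:
        # Snapshot periodically broadcast to the container orchestrator.
        return {"dcu": self.dcu_id, "gpus": dict(self.gpu_state)}

    def on_client_disconnect(self, vm_id: str) -> None:
        freed = [i for i, w in self.gpu_state.items() if w == vm_id]
        for i in freed:
            # Fall back to the profit-generating baseload on freed GPUs.
            self.gpu_state[i] = "mining"

agent = Agent("dcu-112-01", gpu_count=8)
for i in range(4):
    agent.gpu_state[i] = "vm-client-1"   # first client's VM on four GPUs
agent.on_client_disconnect("vm-client-1")
print(agent.metadata())  # the four freed GPUs now report "mining"
```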
  • the first or second container orchestrator 114c, 124c can be configured to retrieve information regarding which of the GPUs of the first and second DCUs 112, 122 are mining cryptocurrency and which of the GPUs of the first and second DCUs 112, 122 are generating virtual machines for clients, and to transmit the retrieved information to the system orchestrator 130c.
  • a method 400 for allocating computer resources is illustrated in Fig. 4.
  • the method can include a step 402 of receiving, by the system orchestrator 130c connected to the DCUs 112, 122 by the network 134, a request from a client computer 138 to access a virtual machine.
  • each of DCUs 112, 122 includes a plurality of GPUs and the request from the client computer 138 identifies a requested number of GPUs for running the requested virtual machine.
  • Step 402 can include querying the first container orchestrator 114c, which analyzes the first DCUs 112, and the second container orchestrator 124c, which analyzes the second DCUs 122, to identify one of the DCUs 112, 122 having available computing capacity for providing the requested virtual machine.
  • Method 400 also includes a step 404 of identifying a DCU selected from DCUs 112, 122 with one or more available GPUs equal to or greater than the requested number of GPUs.
  • the one or more available GPUs have available computing capacity for generating the requested virtual machine.
  • Method 400 then includes a step 406 of instructing the selected DCU to wipe an entire baseload running on the one or more available GPUs of the selected DCU.
  • the one or more available GPUs are equal to a total number of GPUs of the selected DCU minus a number of GPUs of the selected DCU running one or more virtual machines for client computers 138. For example, if the selected DCU includes ten GPUs and eight of these ten GPUs are running one or more virtual machines for client computers 138 - e.g., three of the eight are running a virtual machine for a first client computer 138, and five of the eight are running a virtual machine for a second client computer - the DCU includes two available GPUs.
  • Method 400 next includes a step 408 of spinning up the requested virtual machine on the requested number of GPUs of the one or more available GPUs following the wipeout of the entire baseload.
  • if the client computer 138 requests only a single GPU for running the requested virtual machine and two GPUs are available, the requested virtual machine is run on one of the two available GPUs.
  • the method further includes a step 410 of spinning up a new baseload on the further available GPUs.
  • the new baseload runs contemporaneously with the requested virtual machine spun up in step 408.
  • if the client computer 138 requests only a single GPU for running the requested virtual machine and two GPUs are available, a new baseload is spun up on the other available GPU.
  • the baseload wiped out in step 406 and the new baseload spun up in step 410 are each a virtual machine performing a profit generating task.
  • the profit generating task can be running a hash algorithm to mine cryptocurrency or training machine learning models.
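  • A minimal sketch of steps 402 through 410, assuming each DCU record tracks its total, client-allocated, and baseload GPUs, is shown below; the dictionary layout and function name are illustrative.

```python
def allocate_vm(dcus: list[dict], requested_gpus: int) -> dict | None:
    """Steps 402-410 as simple state changes on hypothetical DCU records."""
    for dcu in dcus:
        # Step 404: available GPUs = total GPUs minus GPUs serving clients.
        available = dcu["total_gpus"] - dcu["client_gpus"]
        if available >= requested_gpus:
            # Step 406: wipe the entire baseload on the available GPUs.
            dcu["baseload_gpus"] = 0
            # Step 408: spin up the requested virtual machine.
            dcu["client_gpus"] += requested_gpus
            # Step 410: restart a baseload on any GPUs left over.
            dcu["baseload_gpus"] = dcu["total_gpus"] - dcu["client_gpus"]
            return dcu
    return None  # no DCU can satisfy the request

dcus = [{"id": "dcu-122-03", "total_gpus": 10, "client_gpus": 8, "baseload_gpus": 2}]
# One GPU now serves the client and one restarts the baseload.
print(allocate_vm(dcus, requested_gpus=1))
```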
  • Fig. 5 shows a method 500 of balancing load created by various components of a power consumption system with an optimal power output determined for a power production system.
  • the disclosed embodiments may include automated control devices that are configured to monitor the operation of power producers of the power production system, adjust the operation of power producers based on producer metrics and/or operational requirements, and monitor and control the operation of power consumers of the power consumption system.
  • the embodiments provide an automated method for determining a target output power for the power production system based on various metrics, statistics and/or operational requirements; determining operational parameters of consumers of the power consumption system; and adjusting the operation of one or more consumers such that the power consumption system is modified to provide a load that substantially meets the target output power.
  • the embodiments allow for an optimal output power to be provided by the power production system while balancing power demand of the power consumption system.
  • method 500 includes a first step of retrieving inputs for a plurality of metrics of system 100.
  • the metrics can advantageously include power generation metrics mentioned above, including an engine pressure, a generator output, an engine output, a coolant temperature, a percent engine load, cylinder positions of the engine, and a knock index.
  • One of power control modules 114d, 124d can retrieve sensor data for a plurality of control variables from the corresponding respective controller 272.
  • the power generation metrics can be fetched by the respective power control module 114d, 124d from controller 272 at predetermined intervals.
  • the further metrics can also include one or more transformer metrics, including a temperature of electrical transformation module 235 retrieved from sensor 276 by respective power control modules 114d, 124d at the predetermined intervals.
  • the further metrics can further include one or more site metrics, which can be the pressure of gas entering into gas supply line 220, retrieved from inlet pressure sensor 274 by respective power control modules 114d, 124d at the predetermined intervals.
  • the further metrics can also include business metrics, such as a maintenance schedule, retrieved from a business database 148 at the predetermined intervals.
  • a next step 504 is calculating a plurality of different target powers at a time t + 1 based on the power generation metrics and the further metrics.
  • Each of the different target powers is based on a different energy producer statistic derived from the energy producer metrics.
  • Step 504 can include running each of the metrics through a distinct PID controller to determine a target power for each of the metrics.
  • each power generation metric, each transformer metric and each site metric can be run through a distinct PID controller to determine a distinct target power for each distinct power generation metric, each distinct transformer metric and each distinct site metric.
  • This process can involve calculating power generation statistics, by power control modules 114d, 124d, from current and historical values of power generation metrics (i.e., over one or more time periods).
  • the system can compare each statistic to a specific threshold for the respective power generation statistic, then output an error if the respective power generation statistic breaches the threshold - i.e., is greater than a maximum threshold, or is less than a minimum threshold.
  • the error is proportional to the amount by which the respective power generation statistic breaches the threshold, and this proportionality is used to calculate the target power for the respective power generation statistic.
  • Gas pressure metrics for other power generation modules 231a, 231b on the same site can also be taken into account for determining a target power. For example, if one power generation module 231a drops below a certain pressure, the target power of another power generation module 231b can be decreased to cause a corresponding increase in the pressure of power generation module 231a.
  • a next step 506 is selecting (e.g., by a power control module 114d, 124d) a most conservative target power from the calculated different target powers. This conservative approach can advantageously prevent most generator shutdowns and maximize the uptime of DCUs 112, 122 as a whole.
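  • The following sketch, under the assumption of a textbook PID formulation with illustrative gains and setpoints (the disclosure specifies neither), shows one way steps 504 and 506 could be realized: each metric drives its own controller, and the lowest resulting target power is selected.

```python
class Pid:
    """Textbook PID controller; gains and setpoints are illustrative."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measurement: float, dt: float = 1.0) -> float:
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def target_power(current_power_kw: float, metrics: dict[str, float],
                 controllers: dict[str, "Pid"]) -> float:
    # Step 504: one target power per metric, each from its own controller.
    targets = [current_power_kw + controllers[name].step(value)
               for name, value in metrics.items()]
    # Step 506: select the most conservative (lowest) target power.
    return min(targets)

controllers = {"coolant_temp_c": Pid(2.0, 0.1, 0.0, setpoint=85.0),
               "gas_pressure_psi": Pid(5.0, 0.2, 0.0, setpoint=60.0)}
# A hot engine pulls the target below the current 400 kW load.
print(target_power(400.0, {"coolant_temp_c": 92.0, "gas_pressure_psi": 55.0},
                   controllers))
```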
  • a next step 508 is outputting, by the respective power control modules 114d, 124d, a power consumption change calculated as a function of the most conservative target power. The power consumption change may be sent from the respective power control modules 114d, 124d to the respective container orchestrator 114c, 124c.
  • the power control modules 114d, 124d can take the lowest calculated target power and translate the lowest calculated target power into consumer target powers to be sent to one of container orchestrators 114c, 124c.
  • the power control module 114d, 124d outputs the required power decrease or increase to the respective container orchestrator 114c, 124c.
  • a next step 510 is selecting at least one DCU 112, 122, based on priority or hierarchy information associated with each of the DCUs, for altering a power state thereof to achieve the power consumption change. If the power consumption change is a power decrease, DCUs 112, 122 can be selected for powering down, and thus the altering of the power state is changing the power from on to off. If the power consumption change is a power increase, DCUs 112, 122 can be selected for powering up, and thus the altering of the power state is changing the power from off to on. DCUs 112, 122 can also be provided with firmware that allows the amount of power drawn by each DCU 112, 122 to be increased or decreased within a range of non-zero to 100%.
  • the container orchestrators 114c, 124c can retrieve the hierarchy of DCUs 112, 122 under the control of the respective container orchestrator 114c, 124c (e.g., container orchestrator 114c controls DCUs 112, and container orchestrator 124c controls DCUs 122), and populate a data record with the hierarchy and the power currently being consumed by each DCU 112, 122. If, for example, the output power of the power generation module 231a or 231b needs to be decreased, the container orchestrators 114c, 124c can select the lowest priority DCUs 112, 122 that can achieve the required power reduction for powering down.
  • the container orchestrator 114c, 124c can for example start from the bottom of the hierarchy and select the lowest priority DCU for deactivation, then select the second lowest priority DCU, and this process continues until the selected DCUs together have a cumulative current power consumption that is equal to or greater than the required power reduction.
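  • A minimal sketch of this greedy walk over the hierarchy follows; the record layout and the priority convention (higher values denote higher priority) are assumptions.

```python
def select_for_powerdown(dcus: list[dict], required_reduction_kw: float) -> list[dict]:
    """Accumulate the lowest-priority DCUs until their combined current
    draw covers the required power reduction."""
    selected, covered = [], 0.0
    # Walk from the bottom of the hierarchy (lowest priority first).
    for dcu in sorted(dcus, key=lambda d: d["priority"]):
        if covered >= required_reduction_kw:
            break
        selected.append(dcu)
        covered += dcu["power_kw"]
    return selected

fleet = [{"id": "a", "priority": 1, "power_kw": 3.2},
         {"id": "b", "priority": 5, "power_kw": 3.4},
         {"id": "c", "priority": 2, "power_kw": 3.1}]
# Picks "a" then "c": together 6.3 kW, covering the 6.0 kW reduction.
print(select_for_powerdown(fleet, required_reduction_kw=6.0))
```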
  • the hierarchy can be used for selecting DCUs for activation if the most conservative target power at time t + 1 is greater than the power currently drawn by the DCUs 112, 122. Higher priority DCUs 112, 122 in the hierarchy are selected for activation before lower priority DCUs 112, 122.
  • the DCUs 112 can be prioritized by mining output, which can include selecting DCUs 112 for shutdown based on the age of the DCU 112. If the five oldest DCUs 112 have an output equal to or greater than the required power reduction, these five oldest DCUs 112 are selected for shutdown, and the remaining DCUs 112 continue to mine cryptocurrency and draw power from the respective power generation module 231a, 231b.
  • the cryptocurrency miners can be selected for being powered down before the servers.
  • the hierarchy can also change in accordance with a cloud virtual machine service offered by system 100. For example, if DCUs 112, 122 are being powered by a single power generation module 231a, and DCUs 112 includes GPUs configured to provide virtual machines for the cloud virtual machine service, then a maximum possible power draw for reserved virtual machines must be at all times preserved for DCUs 112, and only DCUs 122 can be considered for achieving the required power reduction. For example, if no customers have reserved virtual machines via the cloud virtual machine service, then the selection of DCUs 112, 122 for power changes can be done solely based on a standard hierarchy.
  • power generation module 231a has enough available power to accommodate the maximum possible power draw.
  • the power control modules 114d, 124d are further configured to translate the received target power into consumer target powers to be sent to one of container orchestrators 114c, 124c. For example, if the target power is less than the current power consumption (i.e., the current load), the power control module 114d, 124d determines the required power decrease in terms of consumers and sends the same to the respective container orchestrator 114c, 124c.
  • the container orchestrator may then output overclocking or underclocking instructions to the respective DCUs 112, 122 selected for power adjustment (e.g., based on a distributed computer hierarchy or priority information) and the required power consumption adjustment.
  • the container orchestrators 114c, 124c can retrieve the hierarchy of DCUs 112, 122 under the control of the respective container orchestrator 114c, 124c (e.g., container orchestrator 114c controls DCUs 112, and container orchestrator 124c controls DCUs 122), and populate a data record with the hierarchy and the power currently being consumed by each DCU 112, 122. If, for example, the output power of the power generation module 231a or 231b needs to be decreased, the container orchestrators 114c, 124c can then turn off the lowest priority DCUs 112, 122 that can achieve the required power reduction.
  • a single power generation module 231a or 231b can power all of DCUs 112, 122.
  • a single power control module 114d or 124d can determine the lowest calculated target power for all of the different control variables, then can send consumer target powers to both of container orchestrators 114c, 124c.
  • a next step 512 is altering the power state of the selected at least one DCU 112, 122 to achieve the power consumption change. As noted above, this can involve turning off one or more DCUs 112, 122 if the power consumption change requires a decrease in power, and can involve turning on one or more of DCUs 112, 122 if the power consumption change requires an increase in power. Further, the amount of power consumption by DCUs 112, 122 can be granularly increased or decreased without turning DCUs 112, 122 on or off. For example, the five lowest priority DCUs 112, 122 can be turned down 25% to achieve the power consumption change.
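  • A minimal sketch of such a granular turn-down is shown below; the set_power_scale() call stands in for a vendor-specific firmware command and is purely illustrative.

```python
def set_power_scale(dcu_id: str, percent: float) -> None:
    # Placeholder for a vendor-specific firmware command that scales a
    # DCU's power draw within a range of non-zero to 100%.
    print(f"{dcu_id}: power limit set to {percent:.0f}%")

def turn_down(dcu_ids: list[str], percent_reduction: float) -> None:
    """Granularly reduce power on selected DCUs without turning them off."""
    for dcu_id in dcu_ids:
        set_power_scale(dcu_id, 100.0 - percent_reduction)

# e.g., turn the five lowest-priority DCUs down by 25% each.
turn_down(["dcu-1", "dcu-2", "dcu-3", "dcu-4", "dcu-5"], percent_reduction=25.0)
```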
  • Referring to FIG. 6, a block diagram is provided illustrating an exemplary computing machine 600 and modules 650 in accordance with one or more embodiments presented herein.
  • the computing machine 600 may represent any of the various computing systems discussed herein, such as but not limited to, the DCUs 112, 122, components of control systems (FIG. 1 at 101, 114, 124), the client devices (FIG. 1 at 138) and/or the third-party systems.
  • the modules 650 may comprise one or more hardware or software elements configured to facilitate the computing machine 600 in performing the various methods and processing functions presented herein.
  • the computing machine 600 may comprise all kinds of apparatuses, devices, and machines for processing data, including but not limited to, a programmable processor, a computer, and/or multiple processors or computers. As shown, an exemplary computing machine 600 may include various internal and/or attached components, such as a processor 610, system bus 670, system memory 620, storage media 640, input/output interface 680, and network interface 660 for communicating with a network 630.
  • the computing machine 600 may be implemented as a conventional computer system, an embedded controller, a server, a laptop, a mobile device, a smartphone, a wearable device, a kiosk, customized machine, or any other hardware platform and/or combinations thereof. Moreover, a computing machine may be embedded in another device, such as but not limited to, a portable storage device. In some embodiments, the computing machine 600 may be a distributed system configured to function using multiple computing machines interconnected via a data network or system bus 670.
  • the processor 610 may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands.
  • the processor 610 may be configured to monitor and control the operation of the components in the computing machine 600.
  • the processor 610 may be a general-purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a graphics processing unit (“GPU”), a field programmable gate array (“FPGA”), a programmable logic device (“PLD”), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof.
  • the processor 610 may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, coprocessors, or any combination thereof.
  • exemplary apparatuses may comprise code that creates an execution environment for the computer program (e.g., code that constitutes one or more of processor firmware, a protocol stack, a database management system, an operating system, and a combination thereof).
  • the processor 610 and/or other components of the computing machine 600 may be a virtualized computing machine executing within one or more other computing machines.
  • the system memory 620 may include non-volatile memories such as read-only memory (“ROM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), flash memory, or any other device capable of storing program instructions or data with or without applied power.
  • the system memory 620 also may include volatile memories, such as random-access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), and synchronous dynamic random-access memory (“SDRAM”). Other types of RAM also may be used to implement the system memory.
  • the system memory 620 may be implemented using a single memory module or multiple memory modules.
  • Although the system memory is depicted as being part of the computing machine 600, one skilled in the art will recognize that the system memory may be separate from the computing machine without departing from the scope of the subject technology. It should also be appreciated that the system memory may include, or operate in conjunction with, a non-volatile storage device such as the storage media 640.
  • the storage media 640 may store one or more operating systems, application programs and program modules such as module, data, or any other information.
  • the storage media may be part of, or connected to, the computing machine 600.
  • the storage media may also be part of one or more other computing machines that are in communication with the computing machine such as servers, database servers, cloud storage, network attached storage, and so forth.
  • the modules 650 may comprise one or more hardware or software elements configured to facilitate the computing machine 600 with performing the various methods and processing functions presented herein.
  • the modules 650 may include one or more sequences of instructions stored as software or firmware in association with the system memory 620, the storage media 640, or both.
  • the storage media 640 may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor.
  • Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor.
  • Such machine or computer readable media associated with the modules may comprise a computer software product.
  • a computer software product comprising the modules may also be associated with one or more processes or methods for delivering the module to the computing machine 600 via the network, any signal-bearing medium, or any other communication or delivery technology.
  • the modules 650 may also comprise hardware circuits or information for configuring hardware circuits such as microcode or configuration information for an FPGA or other PLD.
  • the input/output (“I/O”) interface 680 may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices.
  • the I/O interface 680 may include both electrical and physical connections for operably coupling the various peripheral devices to the computing machine 600 or the processor 610.
  • the I/O interface 680 may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine, or the processor.
  • the I/O interface 680 may be configured to implement only one interface or bus technology. Alternatively, the I/O interface may be configured to implement multiple interfaces or bus technologies.
  • the I/O interface may be configured as part of, all of, or to operate in conjunction with, the system bus 670.
  • the I/O interface 680 may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine 600, or the processor 610.
  • the I/O interface 680 may couple the computing machine 600 to various input devices to receive input from a user in any form. Moreover, the I/O interface 680 may couple the computing machine 600 to various output devices such that feedback may be provided to a user via any form of sensory feedback (e.g., visual, auditory or tactile).
  • Embodiments of the subject matter described in this specification can be implemented in a computing machine 600 that includes one or more of the following components: a backend component (e.g., a data server); a middleware component (e.g., an application server); a frontend component (e.g., a client computer having a graphical user interface (“GUI”) and/or a web browser through which a user can interact with an implementation of the subject matter described in this specification); and/or combinations thereof.
  • the components of the system can be interconnected by any form or medium of digital data communication, such as but not limited to, a communication network.
  • the computing machine 600 may operate in a networked environment using logical connections through the network interface 660 to one or more other systems or computing machines across a network.
  • the processor 610 may be connected to the other elements of the computing machine 600 or the various peripherals discussed herein through the system bus 670. It should be appreciated that the system bus 670 may be within the processor, outside the processor, or both. According to some embodiments, any of the processor 610, the other elements of the computing machine 600, or the various peripherals discussed herein may be integrated into a single device such as a system on chip (“SOC”), system on package (“SOP”), or ASIC device.

Abstract

A system for dynamic modeling of computer resources includes a system orchestrator configured to generate a graphical user interface modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, with each of the physical containers housing a subset of the distributed computing units. Each of the physical containers has a plurality of network interfaces each assigned a network address of the network, and each of the distributed computing units is connected to one of the network interfaces and associated with the respective network address. The system orchestrator is also configured to automatically assign to each distributed computing unit an inventory identifier unique to the system orchestrator that is stored in a database with container information of the distributed computing unit and a preassigned unique hardware identifier; and to dynamically adjust the container information and/or position information of each distributed computing unit in response to a disconnection of the distributed computing unit from the respective network interface and a reconnection of the distributed computing unit to a different network interface in a different one of the physical containers.

Description

System and Method for Dynamic Modeling of Computer Resources
Cross-Reference to Related Applications
[0001] The present application claims benefit of U.S. provisional patent application no. 63/389,170, titled “System and Method for Dynamic Modeling of Computer Resources,” filed July 14, 2022, which is incorporated by reference herein in its entirety.
Background
[0002] The present disclosure relates generally to systems for dynamic modeling of computer resources associated with power systems, and more particularly, for managing and controlling the operation of such resources to optimally and dynamically balance power consumption and power production.
[0003] U.S. Patent No. 10,862,307, incorporated by reference herein in its entirety, discloses flare mitigation systems that employ power generation equipment to generate electricity from natural gas produced at oil wells. The systems allow the generated electricity to be consumed onsite, for example, to operate computing units that perform power-intensive, distributed computing tasks to generate revenue.
[0004] Using power generation modules to power distributed computing units presents a number of challenges with respect to portability and the constant evolution of the technology involved. The computing units utilized, which can include, for example, cryptocurrency miners or servers including GPUs, continue to advance in processing power and energy efficiency. These continuous changes present numerous technical problems for tracking power production and consumption metrics, especially when such computing units are installed at various remote locations (e.g., oilfields) and utilize intermittent power generated by one or more power producers.
[0005] As an example, if a computing unit is replaced with a newer model, or is moved from one location to another, it is much more complex than simply unplugging the computing unit and plugging a new computing unit in its place. Difficulties can arise if there is not a synergistic relationship between power generation and power consumption at a given site, which may include multiple power producers that generate the energy required for any number of power consumers (e g., computing units). For example, if power generation modules are operated outside specific processing parameters, the equipment may be damaged. Moreover, optimizing a utility of the power consumers adds a further layer of complexity; the specific balance of power generation and consumption needs to thread the needle between caution and business demands.
[0006] In order to optimally and dynamically control such systems, vast amounts of data needs to be continually generated and processed. Systems and methods need to be in place that allow each power producer (e.g., power generation modules) and each power consumer (e.g., computing unit) to be monitored, analyzed, and controlled. Such systems should allow for power production to be iteratively predicted to dynamically adjust power consumption for complex installations including multiple producers and multiple consumers. It would be beneficial if such systems were adapted to dynamically determine an optimal or target power generation for a set of producers based on vast amounts of continuously compiled power generation metrics and statistics. It would be further beneficial if such systems could dynamically balance real-time power consumption (i.e., load) according to the determined target power by adjusting individual power consumers according to priority information such that an aggregate utility (e.g., hashpower, revenue generation, etc.) of the consumers is maximized.
Summary
[0007] In accordance with the foregoing objectives and others, exemplary systems and methods are disclosed herein for dynamic modeling of computer resources. The system may include: a processor; memory; and a system orchestrator stored in the memory that, when executed by the processor, causes the processor to perform operations including: generate a graphical user interface modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generate, in a database in the memory, a data record including container information of each of the physical containers and/or position information for each of the distributed computing units in communication with the network, the container information and/or position information being automatically assigned to each distributed computing unit in communication with the network based on the network address of the respective network interface; automatically assign to each distributed computing unit an inventory identifier unique to the system orchestrator, the inventory identifier being stored in the database with the container information of the distributed computing unit and the preassigned unique hardware identifier; and dynamically adjust the container information and/or position information of each distributed computing unit in response to a disconnection of the distributed computing unit from the respective network interface and a reconnection of the distributed computing unit to a different network interface in a different one of the physical containers.
[0008] In some aspects, the techniques described herein relate to a system, wherein the container information includes information describing the geographical location of the physical container, a container identifier for the physical container, a size of the physical container, a type of the physical container and a cost of the physical container, wherein the position information includes a position of the distributed computing unit within the respective physical container, including at least one of a rack, a shelf and a slot where the distributed computing unit is positioned within the respective physical container.
[0009] In some aspects, the techniques described herein relate to a system, wherein the dynamically adjusting of the container information and/or position information includes confirming the container information and/or position information by an automatic entry event.
[0010] In some aspects, the techniques described herein relate to a system, wherein the automatic entry event includes scanning of a machine-readable representation affixed to the distributed computing unit.
[0011] In some aspects, the techniques described herein relate to a system, wherein the scanning of the machine-readable representation occurs at a repair facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the repair facility and/or a position of the distributed computing unit within the repair facility.
[0012] In some aspects, the techniques described herein relate to a system, wherein the scanning of the machine-readable representation occurs at a storage facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the storage facility and/or a position of the distributed computing unit within the storage facility.
[0013] In some aspects, the techniques described herein relate to a system, wherein the system orchestrator causes the processor to, in response to a new entry event indicating an addition of a new distributed computing unit into one of the physical containers: automatically update the database to include a new data record including an automatically generated inventory identifier.
[0014] In some aspects, the techniques described herein relate to a system, wherein the system orchestrator causes the processor to, in response to a connection of the new distributed computing unit to one of the network interfaces of one of the physical containers, automatically retrieve the preassigned unique hardware identifier from the new distributed computing unit, and store the preassigned unique hardware identifier and the geographical location of network interface to which the new distributed computing unit is connected in the new data record.
[0015] In some aspects, the techniques described herein relate to a system, wherein the distributed computing units are a plurality of cryptocurrency miners.
[0016] In some aspects, the techniques described herein relate to a system, wherein each of the data records includes financial information related to the cryptocurrency miner.
[0017] In some aspects, the techniques described herein relate to a system, wherein the financial information includes at least one of a purchase price of the cryptocurrency miner, a depreciation of the cryptocurrency miner or a profit generated by the cryptocurrency miner.
[0018] In some aspects, the techniques described herein relate to a system, wherein each of the data records includes repair and/or maintenance history information for the cryptocurrency miner including financial costs associated with the repair and/or maintenance.
[0019] In some aspects, the techniques described herein relate to a system, wherein each of the data records includes a hash rate for the cryptocurrency miner.
[0020] In some aspects, the techniques described herein relate to a system, wherein the distributed computing units each include a plurality of graphics processing units, the data record including a number of graphics processing units for each distributed computing unit, a number of currently available graphics processing units for each distributed computing unit, and a number of currently utilized graphics processing units for each distributed computing unit.
[0021] In some aspects, the techniques described herein relate to a system, wherein each of the distributed computing units is configured for running a plurality of virtual machines, each graphics processing unit being adapted to run a single virtual machine alone or together with one or more of the other graphics processing units of the respective distributed computing unit, the data record including the one or more graphics processing units running each virtual machine.
[0022] In some aspects, the techniques described herein relate to a system, wherein the data record includes the number of graphics processing units of each distributed computing unit currently running virtual machines and an excess capacity for running further virtual machines for each distributed computing unit.
[0023] In some aspects, the techniques described herein relate to a system, wherein the physical containers each include a plurality of racks having predefined rack positions configured for receiving the distributed computing units, each of the predefined rack positions being associated with one of the network interfaces, the memory partitioned to store the predefined rack positions and the associated network interfaces, the system orchestrator configured to automatically assign the predefined rack position associated with the network interface with which the distributed computing unit is connected to the corresponding data record.
[0024] In some aspects, the techniques described herein relate to a system, wherein the physical containers are located at different physical sites, each of the physical sites having at least one of the physical containers and at least one of the physical sites having a plurality of the physical containers, the system orchestrator being configured for causing the processor to generate a graphical user interface depicting the physical sites, the physical containers within the physical sites, the predefined rack positions within the containers, and the distributed computing units in the predefined rack positions.
[0025] In some aspects, the techniques described herein relate to a method of updating a computerized inventory of distributed computing units movable throughout a plurality of containers across a plurality of physical sites, the method including: providing a computer system including a processor, memory and a system orchestrator stored in the memory and executable by the processor to cause the processor to perform operations to update an inventory model stored in the memory, the inventory model including information modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generating, by the system orchestrator in response to a request for a user device, a new object in the memory and associating an inventory identifier unique to the system orchestrator with the new object; directing the user device to generate a user interface configured to generate a request to produce a machine-readable representation of the inventory identifier that is configured for being affixed to one of the distributed computing units; automatically, by the system orchestrator and in response to a new distributed computing unit being connected to a respective one of the network interfaces: retrieving the preassigned unique hardware identifier from the new distributed computing unit; determining location information including the physical container in which the respective network interface is located and a position of the respective network interface within the physical container based on the network address of the respective network interface; directing the user device to generate a visual representation of the new distributed computing unit via a graphical user interface displayed on the user device; associating the preassigned unique hardware identifier with the selected new object in the inventory model upon receiving an input of the machine-readable representation of the inventory identifier via the user device or a separate user device; and associating the location information with the new object in the inventory model; and providing a visual representation of the new object via a user interface.
[0026] In some aspects, the techniques described herein relate to a method wherein the generating of the new object in the memory and associating the inventory identifier unique to the system orchestrator with the new object is performed prior to the new distributed computing unit being connected to the respective network interface.
[0027] In some aspects, the techniques described herein relate to a method wherein the user interface configured to generate the request to produce the machine-readable representation of the inventory identifier is a user interface configured to generate the request to print the machine-readable representation of the inventory identifier.
[0028] In some aspects, the techniques described herein relate to a method wherein the new distributed computing unit connected to the respective one of the network interfaces is at a specific location on a rack in the physical container, the position of the respective network interface within the physical container including the specific location of the new distributed computing unit on the rack.
[0029] In some aspects, the techniques described herein relate to a method further including retrieving and/or transmitting, by the system orchestrator, data for displaying, on the user interface provided with the visual representation of the new object, a representation illustrating the relationship between the new distributed computing unit and the other distributed computing units in the physical container at respective locations on the rack.
[0030] In some aspects, the techniques described herein relate to a method further including automatically generating, by the system orchestrator in response to a disconnecting of one of the distributed computing units from the respective network interface and a reconnecting of the disconnected distributed computing unit with a further network interface at a further location, updated location information for the reconnected distributed computing unit based on location information associated with the further network interface.
[0031] In some aspects, the techniques described herein relate to a method wherein upon the reconnection, automatically determining, by the system orchestrator, the preassigned unique hardware identifier of the reconnected distributed computing unit and looking up an object in the inventory model associated with the preassigned unique hardware identifier to identify the reconnected distributed computing unit and associate the updated location information for the reconnected distributed computing unit with the object.
[0032] In some aspects, the techniques described herein relate to a system for dynamic modeling of computer resources including: a communication system adapted to provide a network and including a plurality of network interfaces each assigned a network address of the network; a coordinator computer in communication with the network, the coordinator computer including a coordinator processor, a coordinator memory and a system orchestrator stored in the coordinator memory and executable by the coordinator processor to cause the coordinator processor to perform operations; a set of first distributed computing units in a first physical container at a first physical site in communication with the network via the network interfaces, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; a first container controller including a first controller processor, a first controller memory and a first container orchestrator, the first container orchestrator configured to store in the first controller memory or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the first distributed computing units; and a set of second distributed computing units in a second physical container at the first physical site or a second physical site in communication with the network via the network interfaces, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; a second container controller including a second controller processor, a second controller memory and a second container orchestrator, the second container orchestrator configured to store in the second controller memory or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the second distributed computing units, the system orchestrator configured for communicating with the first and second container orchestrators to obtain the total computing capacity and the currently available computing capacity for each of the first and second distributed computing units.
[0033] In some aspects, the techniques described herein relate to a system wherein the coordinator computer is configured to receive a request from a client computer to access a virtual machine, and in response to the request, to retrieve the currently available computing capacity for each of the first and second distributed computing units, and to establish the virtual machine on the first or second distributed computing units based on the currently available computing capacity from the first and second inventory models.
[0034] In some aspects, the techniques described herein relate to a system wherein each of the first and second distributed computing units is configured for running one or more virtual machines and includes a plurality of graphics processing units, each graphics processing unit being adapted to run a single virtual machine alone or together with one or more of the other graphics processing units of the respective distributed computing unit, each of the first and second distributed computing units further including an agent configured for controlling a number of virtual machines generated by the respective first or second distributed computing units, the system orchestrator configured for instructing the first or second container orchestrator to generate the virtual machine requested by the client computer, the first or second container orchestrator configured for instructing one of the agents to generate the virtual machine requested by the client computer on one or more of the graphics processing units controlled by the agent.
[0035] In some aspects, the techniques described herein relate to a system wherein each of the agents is configured to monitor, store and/or periodically transmit metadata for the graphics processing units controlled by the agent for broadcast to the corresponding first or second container orchestrator.
[0036] In some aspects, the techniques described herein relate to a system wherein the first and second container orchestrators are configured to retrieve information regarding which of the graphics processing units of the first and second distributed computing units are mining cryptocurrency and which of the graphics processing units of the first and second distributed computing units are generating virtual machines for clients, and to transmit the retrieved information to the system orchestrator.
[0037] In some aspects, the techniques described herein relate to a system wherein the coordinator computer is configured to receive a request from a client computer to run a virtual machine, and in response to the request, to send a query via the system orchestrator to the first and second container orchestrators to retrieve an operating status of the graphics processing units of the first and second distributed computing units, and to send instructions to wipe an active virtual machine running on one or more of the graphics processing units of one of the first and second distributed computing units and to spin up the virtual machine requested by the client computer on one or more of the graphics processing units in which the active virtual machine was wiped out.
[0038] In some aspects, the techniques described herein relate to a system wherein, along with the spinning up of the virtual machine requested by the client computer on the respective one or more of the graphics processing units, the system orchestrator is configured to prompt available graphics processing units to spin up a further virtual machine running a hash algorithm to mine cryptocurrency.
[0039] In some aspects, the techniques described herein relate to a method for allocating computer resources including: receiving, by a system orchestrator connected to a plurality of distributed computing units by a network, a request from a client computer to access a virtual machine, the plurality of distributed computing units including at least a set of first distributed computing units in a first physical container at a first physical location and a set of second distributed computing units in a second physical container at the first physical location or a second physical location, each of the first and second distributed computing units including a plurality of graphics processing units, the request identifying a requested number of graphics processing units for running the requested virtual machine; identifying a distributed computing unit selected from the first and second distributed computing units with one or more available graphics processing units equal to or greater than the requested number of graphics processing units, the one or more available graphics processing units having available computing capacity for generating the requested virtual machine; instructing the one or more available graphics processing units to wipe out an entire baseload running on one or more available graphics processing units of the selected distributed computing unit, the one or more available graphics processing units being equal to a total number of graphics processing units of the selected distributed computing unit minus a number of graphics processing units of the selected distributed computing unit running one or more virtual machines for clients; and spinning up the requested virtual machine on the requested number of graphics processing units of the one or more available graphics processing units following the wipeout of the entire baseload.
[0040] In some aspects, the techniques described herein relate to a method, wherein the one or more available graphics processing units is a number of available graphics processing units that is greater than the requested number of graphics processing units such that there are further available graphics processing units following the spinning up of the requested virtual machine on the requested number of graphics processing units of the one or more available graphics processing units, the method further including spinning up a new baseload on the further available graphics processing units, the new baseload running contemporaneously with the requested virtual machine.
[0041] In some aspects, the techniques described herein relate to a method, wherein the baseload and the new baseload are each a virtual machine performing a profit generating task.
[0042] In some aspects, the techniques described herein relate to a method, wherein the profit generating task is running a hash algorithm to mine cryptocurrency.
[0043] In some aspects, the techniques described herein relate to a method, wherein the profit generating task is training machine learning models.
[0044] In some aspects, the techniques described herein relate to a method wherein the identifying of the distributed computing unit selected from the first and second distributed computing units with one or more available graphics processing units equal to or greater than the requested number of graphics processing units includes querying a first container orchestrator analyzing the first distributed computing units and a second container orchestrator analyzing the second distributed computing units to identify one of the first or second distributed computing units having available computing capacity for generating the requested virtual machine.
[0045] The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
[0046] Fig. 1 schematically shows components of an exemplary system 100 according to an embodiment;
[0047] Fig. 2 schematically shows an exemplary power production system 200 that may be utilized in the system 100 of Fig. 1;
[0048] Fig. 3a illustrates a method 300 of dynamically creating and updating a power consumer object in a computerized inventory corresponding to a DCU movable throughout a plurality of containers across one or more sites;
[0049] Fig. 3b illustrates substeps of step 308 of the method 300 in Fig. 3a;
[0050] Fig. 4 illustrates a method 400 for allocating computer resources of a power consumption system; and
[0051] Fig. 5 illustrates a method 500 for dynamically determining a target power output of a power production system and balancing a power consumption system load by selecting and adjusting individual consumers such that an aggregate utility of the power consumption systems is maximized.
[0052] Fig. 6 shows an exemplary computing machine 600 and modules 650 according to an embodiment.
Detailed Description
[0053] Fig. 1 schematically shows components of an exemplary system 100 for dynamic modeling of computer resources. As shown, the system 100 includes a central control system 101 in communication, via a network 134, with various subsystems that make up a particular installation or site 108. Generally, a site 108 includes a power production system 200, a power consumption system 103 and a communication system 132 to provide communication with the control system 101 (e.g., via a network 134).
[0054] The central control system 101 is generally configured to manage (i.e., model, monitor and control) the site 108 components in order to maintain processing conditions within acceptable operational constraints. Such constraints may be determined by economic, practical, and/or safety requirements. In certain embodiments, a coordinator 130 of the control system 101 may handle high-level operational control goals, low-level PID loops, communication with both local and remote operators, and communication with both local and remote systems.
[0055] In one embodiment, the coordinator 130 comprises a coordinator processor 130a, a coordinator memory 130b and a system orchestrator 130c stored in the coordinator memory 130b and executable by the coordinator processor 130a to cause the coordinator processor 130a to perform operations related to managing the various components associated with each site 108. Although only a single site 108 is illustrated, it will be appreciated that the control system 101 may manage any number of sites and/or additional components that are not associated with a particular site.
[0056] As shown, the system 100 includes a power production system 200 associated with a site 108. The power production system 200 may include any number of power producers 231 adapted to generate electrical power 205 that may be consumed by components of the power consumption system 103. As discussed in detail below with respect to Fig. 2, the power producers 231 may comprise one or more power generation modules (e.g., gensets, turbines, etc.) that generate electrical power 205 from a fuel gas (e.g., natural gas). Additionally or alternatively, energy producers such as solar panels, wind turbines, batteries, etc. may be employed.
[0057] A power consumption system 103 is also provided as part of the site 108. The power consumption system 103 generally comprises any number of power consumers 112, 122 adapted to consume the electrical power 205 generated by the power production system 200. Preferably, the power consumers comprise distributed computing units (“DCUs”) 112, 122 that collectively enable a modular computing installation, for example, a data center, cryptocurrency mine or graphics computing cell.
[0058] Each of the DCUs 112, 122 may comprise a computing machine having one or more processors 116, 126 (e.g., CPUs, GPUs, ASICs, etc.) adapted to conduct any number of processing-, computational-, and/or graphics-intensive computing processes. For example, the DCUs may be employed for artificial intelligence ("AI") research, training machine learning ("ML") and other models, data analysis, server functions, storage, virtual reality ("VR") and/or augmented reality ("AR") applications, tasks relating to the Golem Project, and non-currency blockchain applications.
[0059] As another example, the DCUs may be employed to execute mathematical operations in relation to the mining of cryptocurrencies, such as the following hashing algorithms: SHA-256, ETHash, scrypt, CryptoNight, RIPEMD160, BLAKE256, X11, Dagger-Hashimoto, Equihash, LBRY, X13, NXT, Lyra2RE, Qubit, Skein, Groestl, BOINC, X11gost, Scrypt-jane, Quark, Keccak, Scrypt-OG, X14, Axiom, Momentum, SHA-512, Yescrypt, Scrypt-N, Cunningham, NIST5, Fresh, AES, 2Skein, Equihash, KSHAKE320, Sidechain, Lyra2RE, HybridScryptHash256, Momentum, HEFTY1, Skein-SHA2, Qubit, SpreadX11, Pluck, and/or Fugue256.
[0060] As shown, the DCUs 112, 122 may be housed within one or more containers, structures, or data centers 110, 120 disposed at a physical location associated with the site 108. In some embodiments, the containers 110, 120 may comprise a prefabricated housing or enclosure to contain and protect the various electronics disposed therein. The enclosure may comprise a customized shipping container or other modular housing system designed for portability, durability, safety, stack-ability, ventilation, weatherproofing, dust control and operation in rugged oilfield conditions.
[0061] Each container 110, 120 may also include an electrical power distribution system 186 adapted to receive electrical power 205 from the power production system 200 and distribute the same to the various electrical components of the container. To that end, the system 186 may comprise a series of power distribution units (“PDUs”) or power channels in communication with one or more breaker panels. In some embodiments the containers 110, 120 may include one or more backup power systems 187 (e.g., batteries), and/or an environment control system 189.
[0062] As shown, the containers 110, 120 (and any electronic components contained therein) are in communication with the central control system 101 via a connection to the communication system 132. For example, each container 110, 120 may include a plurality of network interfaces 136 of communication system 132 (each having a network address) and the DCUs 112, 122 may be connected to such interfaces 136 (e.g., via ethernet).
[0063] Each container 110, 120 may comprise a container controller 114, 124 configured to communicate with the central control system 101 and the DCUs of the respective container (discussed below). For example, a first container 110 may include a plurality of first DCUs 112 and a first container controller 114 configured for controlling the first DCUs 112. And a second container 120 may include a plurality of second DCUs 122 and a second container controller 124 configured for controlling the second DCUs 122. In each case, the respective container controller 114, 124 may control one or more associated DCUs 112, 122 based on information received from the DCUs (or other container components) and/or according to instructions received from the central control system 101.
[0064] As shown, each container controller 114, 124 may include a controller processor 114a, 124a, a controller memory 114b, 124b, and a container orchestrator 114c, 124c. Each container controller 114, 124 is generally configured to determine consumer information for each power consumer associated with the respective container and container information corresponding to the respective container (discussed in detail below). The container controller 114, 124 may further be configured to store such information in the respective controller memory 114b, 124b and/or to periodically transmit such information to the central control system 101 (either directly or via an intermediary such as a site controller 177) such that the information may be stored in a database 130e.
[0065] As detailed below with respect to Figs. 3a-b, container information and consumer information may be employed to execute a method 300 of managing a digital inventory of DCUs (e.g., 112, 122) movable throughout a plurality of containers (e.g., 110, 120) associated with a single site (e.g., 108), or even across multiple sites.
[0066] In some embodiments, each container controller 114, 124 may include a control module 114d, 124d adapted to adjust operating parameters of associated power consumers (e.g., DCUs 112, 122). As detailed below with respect to Fig. 5, the container controllers 114, 124 may employ the control modules 114d, 124d to balance a load of the container(s) to a target power received from the control system 101 (either directly or via an intermediary such as a power consumption system controller 177). The control module 114d, 124d may then adjust operating parameters of one or more consumers in order to balance the consumers' load to the target power. Moreover, the control module may select DCUs and/or particular DCU processors (e.g., 116, 126) for such adjustment: based on consumer information associated with each of the consumers (e.g., priority information, consumer metrics, etc.); to satisfy predetermined requirements or constraints; and/or to optimize a total utility of the consumers (e.g., revenue generation, hash power, uptime, etc.).
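By way of a non-limiting illustration only, such a load-balancing pass might be sketched as follows. This Python sketch rests on assumptions: the Dcu record, the priority-ordered greedy strategy, and all names are hypothetical and are not drawn from the disclosure.

```python
# Illustrative sketch only: a greedy pass that trims or restores per-DCU power
# draw toward a target received from the control system. All names and the
# priority heuristic are hypothetical assumptions, not the disclosed design.
from dataclasses import dataclass

@dataclass
class Dcu:
    dcu_id: str
    priority: int          # lower value = shed first
    min_power_kw: float    # floor the unit can be throttled to
    max_power_kw: float
    current_power_kw: float

def balance_to_target(dcus: list[Dcu], target_kw: float) -> None:
    """Adjust per-DCU operating power so the aggregate approaches target_kw."""
    excess = sum(d.current_power_kw for d in dcus) - target_kw
    # Shed load from the lowest-priority consumers first.
    for d in sorted(dcus, key=lambda d: d.priority):
        if excess <= 0:
            break
        shed = min(excess, d.current_power_kw - d.min_power_kw)
        d.current_power_kw -= shed
        excess -= shed
    # If under target, restore capacity to the highest-priority consumers.
    deficit = target_kw - sum(d.current_power_kw for d in dcus)
    for d in sorted(dcus, key=lambda d: -d.priority):
        if deficit <= 0:
            break
        add = min(deficit, d.max_power_kw - d.current_power_kw)
        d.current_power_kw += add
        deficit -= add
```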
[0067] As shown in Fig. 1, each site 108 may comprise or communicate with a communication system 132 that provides a network 134 to which various components of the system 100 may be connected. The network 134 may include wide area networks ("WAN"), local area networks ("LAN"), intranets, the Internet, wireless access networks, wired networks, mobile networks, telephone networks, optical networks, or combinations thereof. The network 134 may be packet switched, circuit switched, of any topology, and may use any communication protocol. Communication links within the network 134 may involve various digital or analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth. In one embodiment, the communication system 132 may provide an internal network for a given site 108 that includes automatic load-balancing functionality.
Power Production System
[0068] Referring to Fig. 2, an exemplary power production system 200 is illustrated. As shown, the system 200 comprises one or more power producers (power generation modules 231a, 231b) in communication with a fuel gas supply 220 such that power generation modules 231a, 231b may receive a fuel gas stream 202 therefrom (e.g., natural gas). The power generation modules 231a, 231b are further shown to optionally be in electrical communication with an electrical transformation module 235 such that an electrical output 203 may be transmitted from the power generation modules 231a, 231b to the electrical transformation module 235.
[0069] In one embodiment, the power generation modules 231a, 231b may each comprise a generator component adapted to generate an electrical output 203 via combustion of the natural gas 202. Generally, the generator component may employ either a fuel-gas-driven reciprocating engine or a fuel-gas-driven rotating turbine to combust the natural gas 202 and drive an electrical generator.
[0070] As detailed below, each power generation module 231a, 231b may be associated with various producer information, such as operational requirements, measured or determined producer metrics, and statistics determined over a time period.
[0071] In one embodiment, the employed power generation modules 231a, 231b may each be specified to operate with natural gas 202 having a wide variety of properties. For example, certain modules may include generator components adapted to utilize rich natural gas or natural gas that has been processed such that it is substantially free of propane and higher hydrocarbon (C3+) components.
[0072] The producers may be associated with a gas consumption rate, which refers to the volume of natural gas consumed by the generator within a given time period. The gas consumption rate may be determined for continuous operation of the generator at standard ambient conditions. Generally, the gas consumption rate of engine-type generators may range from about 40 Mscfd to about 500 Mscfd. And the gas consumption rate of turbine-type generators may range from about 1 MMscfd to about 6 MMscfd.
[0073] The power producers may further be associated with a generated power output that refers to the electrical energy output by a given generator after efficiency losses within the generator. This property is often referred to as “real power” or “kWe.” The generated power output may be provided as “continuous power,” which refers to the real power obtained from the generator when the module is operating continuously at standard ambient conditions.
[0074] Generally, engine-type generators may produce an electrical output ranging from about 70 kW to about 2 MW, with an associated voltage ranging from about 480 V to about 4.16 kV. And turbine-type generators may produce an electrical output ranging from about 2 MW to 30 MW, with an associated voltage ranging from about 4.16 kV to about 12 kV.
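For a rough sense of scale, and using assumed figures that are not taken from the disclosure (a 1 MW engine-type module, DCUs drawing 3.5 kW each, and a 10% balance-of-plant margin), the number of DCUs a single module could support works out as:

```python
# Back-of-envelope sizing under assumed numbers (not from the disclosure).
generator_output_kw = 1000   # assumed continuous real power of one module
dcu_draw_kw = 3.5            # assumed draw of a single DCU
derating = 0.9               # assumed margin for balance-of-plant losses

supportable_dcus = int(generator_output_kw * derating / dcu_draw_kw)
print(supportable_dcus)      # -> 257 DCUs under these assumptions
```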
[0075] It will be appreciated that the various generator components employed in the power generation module 231 may be adapted to operate reliably in harsh conditions, and with variability in gas rates, composition and heating values. Moreover, it will be appreciated that the specific generators employed in each of power generation modules 231a, 231b may be selected and configured based on the specifications and availability of natural gas at a particular location.
[0076] As shown, each of power generation modules 231a, 231b may optionally be in further communication with a backup fuel supply 237 containing a backup fuel 208. In one embodiment, the backup fuel supply 237 may comprise a natural gas storage tank containing pressurized natural gas. In another embodiment, the backup fuel supply 237 may comprise an on-site reserve of propane. At times of low gas availability, the backup fuel 208 may be piped directly to the power generation modules 231a, 231b from the backup fuel supply 237.
[0077] Typically, each of the power generation modules 231a, 231b will further comprise various ancillary components (commonly referred to as the "balance of plant"). Such components may include, but are not limited to, compressors, lubrication systems, emissions control systems, catalysts, and exhaust systems. The power generation modules 231a, 231b may optionally comprise integrated emissions reduction technologies, such as but not limited to, a non-selective catalytic reduction ("NSCR") system or a selective catalytic reduction ("SCR") system.
[0078] In one embodiment, the power generation modules 231a, 231b may each comprise a housing designed to contain and protect the above-described components of the module. Such housing may provide features such as, but not limited to, weatherproofing, skid or trailer mounting for portability, and sound attenuation.
[0079] In certain embodiments, the power generation modules 231a, 231b may each be supported by a transportable chassis, trailer, or railcar to facilitate positioning and/or repositioning of the module. More particularly, the transportable chassis, trailers, or railcars may be coupled to vehicles, such as trucks or trains, and transported over a geographic area. The generator skids can range in size from an enclosed trailer hauled behind a pickup truck, to a plurality of semi-trailer loads for the generator and its required ancillary equipment.
[0080] As shown, each of the power generation modules 231a, 231b can include one or more sensors 270 for measuring or determining various power producer metrics. The modules can further include a respective controller 272 for transmitting producer information (e.g., metrics and statistics) to a controller (e.g., a master container controller 114, a site controller 117, or the remote control system 101). In certain embodiments, controllers 272 can comprise a modbus controller such that producer metrics may be retrieved from the modbus controller at predetermined intervals, for example every 15 seconds.
[0081] System 200 can further include an inlet pressure sensor 274 configured to measure the pressure of gas entering into gas supply line 220. In one embodiment, there can be a single pressure sensor 274 for an entire site, and the value measured by the single inlet pressure sensor 274 can be used in correlation with each power generation module 231a, 231b fed by gas supply line 220. In another embodiment, the system may include one inlet pressure sensor per power generation module 231a, 231b. In any event, one or more controllers (e.g., a master container controller 114, a site controller 117, or the remote control system 101) may be configured for retrieving inlet gas pressure measurements from the inlet pressure sensor(s).
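A minimal sketch of the 15-second polling described in the two preceding paragraphs appears below; read_producer_metrics is a hypothetical stand-in for whatever modbus read the controllers 272/278 actually expose, and the field names are invented for the example.

```python
import time

POLL_INTERVAL_S = 15  # the predetermined interval noted above

def read_producer_metrics(modbus_url: str) -> dict:
    """Hypothetical stand-in for a modbus register read; a real deployment
    would use a modbus client library against controller 272 or 278."""
    return {"engine_pressure_psi": 0.0,
            "coolant_temp_f": 0.0,
            "inlet_gas_pressure_psi": 0.0}

def poll_forever(modbus_url: str, history: list) -> None:
    """Append timestamped producer metrics to history every 15 seconds."""
    while True:
        history.append((time.time(), read_producer_metrics(modbus_url)))
        time.sleep(POLL_INTERVAL_S)
```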
[0082] In some embodiments, the electrical power production system 200 may comprise an electrical transformation module 235 in electrical communication with the power generation modules 231a, 231b. In such cases, the electrical power 203a, 203b generated by each of the power generation modules 231a, 231b may be transmitted through the electrical transformation module 235 such that it may be converted into an electrical flow 205 that is suitable for consumption by the power consumption system 103.
[0083] To that end, the electrical transformation module 235 may comprise various power conditioning equipment. For example, one or more step-down transformers may be employed to reduce the voltage of an incoming electrical flow 203a, 203b by one or more “steps down” into a secondary electrical flow 205 comprising a lower voltage.
[0084] In one embodiment, a 1 MVA step-down transformer may be employed, adapted to step down the voltage of an incoming electrical flow 203a, 203b having a voltage of up to about 4.16 kV. In such cases, the electrical transformation module 235 may convert the incoming electrical flow 203a, 203b to an output electrical flow 205 having a voltage of about 480 V or less.
[0085] Alternatively, when larger turbine-type power generation modules 231 are employed, the electrical transformation module 235 may reduce voltage in a plurality of steps. For example, the electrical transformation module may receive an incoming electrical flow 203a, 203b having a voltage of up to about 12 kV and may step down the voltage via multiple steps to a reduced-power output electrical flow 205 having a voltage of about 480 V or less.
[0086] In certain embodiments, the power production system may comprise a main breaker capable of cutting off all downstream electrical flows, which allows an operator to quickly depower any attached computing equipment in the case of operational work or emergency shutdown. Additionally or alternatively, component terminals may be fitted with “quick connects.”
[0087] As shown, each of the electrical transformation modules 235 can include one or more sensors 276 for measuring or determining various producer metrics. The modules can further include a respective controller 278 for transmitting the producer metrics to a controller (e.g., container controller 114, 124 or the remote control system 101). In certain embodiments, controller 278 can comprise a modbus controller such that the metrics may be fetched from the modbus controller at predetermined intervals, for example every 15 seconds. It will be appreciated that any number of power generation modules 231a, 231b and electrical transformation modules 235 may be included in the power production system 200. For example, the power generation modules 231a, 231b may be directly wired from a terminal of each of the power generation modules 231a, 231b into a primary side of the electrical transformation module 235. As another example, two or more sets of power generation modules 231a, 231b and electrical transformation modules 235 may be employed, in a series configuration, to power any number of computing components.
[0088] It will be appreciated that, in some embodiments, a step-down transformer may not be required. For example, when the output electrical flow 203 generated by the power generation module 231 comprises a voltage compatible with components of the power consumption system 103 (e.g., up to about 480 V), such electrical output may be utilized without stepping down the voltage.
[0089] In one particular embodiment, the electrical power production system 200 may comprise multiple power generation modules 231a, 231b connected in parallel. In such embodiments, the multiple electrical power generation modules 231a, 231b may be phase-synced such that their output electrical flows 203a, 203b may be combined without misalignment of wave frequency. As shown, the multiple phase-synced electrical flows 203a, 203b may be wired into a parallel panel 260, which outputs a single down-stream flow 204 with singular voltage, frequency, current and power metrics.
[0090] In one such embodiment, the singular down-stream flow 204 may be wired into a primary side of an electrical transformation module 235 for voltage modulation. For example, as discussed above, the singular down-stream flow 204 may be transmitted to the electrical transformation module 235 such that the flow may be converted into an output electrical flow 205 that is suitable for consumption by various components of the power consumption system.
[0091] Generally, each of the power generation modules 231a, 231b and/or the parallel panel 260 may comprise a control system that allows for the module to be synchronized and paralleled with other power generation modules. The control system may allow load-sharing of up to 32 power generation modules via a data link and may provide power management capabilities, such as load-dependent starting and stopping, asymmetric load-sharing, and priority selection. Such functionality may allow an operator to optimize load-sharing based on various producer metrics, for example, running hours and/or fuel consumption.
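In one illustrative reading, the load-dependent starting and stopping mentioned above could follow hysteresis thresholds like those below; the 85%/40% figures are assumptions made for the sketch, not disclosed values.

```python
# Illustrative load-dependent start/stop decision for paralleled modules.
# The 85%/40% thresholds are assumed for the sketch, not disclosed values.
def modules_to_run(total_load_kw: float, module_rating_kw: float,
                   running: int, max_modules: int = 32) -> int:
    if running == 0:
        return 1 if total_load_kw > 0 else 0
    utilization = total_load_kw / (running * module_rating_kw)
    if utilization > 0.85 and running < max_modules:
        return running + 1   # start another module before overload
    if utilization < 0.40 and running > 1:
        return running - 1   # stop one to save fuel and running hours
    return running
```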
Inventory Management
[0092] Figs. 3a-3b illustrate the steps of method 300, which includes a step 302 of providing a computer system in the form of coordinator computer 130, including coordinator processor 130a, coordinator memory 130b and system orchestrator 130c stored in coordinator memory 130b and executable by coordinator processor 130a to cause coordinator processor 130a to perform operations to update an inventory model stored in coordinator memory 130b and associated with an inventory system.
[0093] Generally, the inventory system may comprise models of various components of the system 100, such as: sites, power producers (e.g., power generation modules, electrical transformation modules, etc.) and power consumers (e.g., containers, DCUs, etc.).
[0094] In one embodiment, the system may determine and store site information for each site 108. Exemplary site information may include: site ID, operator information, location information (e.g., address and/or coordinates), fuel gas information (e.g., current and historical heat values, volumes), network equipment information, associated power producers information, and associated power consumers information (e.g., associated containers, associated power consumers).
[0095] With respect to power producers, the system may monitor, determine and/or store producer information such as: producer ID, producer type, an associated site, networking information (e.g., generator modbus URL, ECU modbus URL), operations constraints and requirements, producer metrics, producer statistics, and producer controls.
[0096] As shown in Table 1, the system may monitor and/or calculate current values for some or all of the listed power producer metrics.
Table 1: Producer Metrics
(Table 1 is presented as an image in the original publication.)
[0097] In certain embodiments, the system may calculate producer statistics over one or more time periods by analyzing historical values of such metrics. Exemplary statistics include slope and exponential moving average (EMA). In certain embodiments, the system determines engine pressure slope, engine pressure EMA, coolant temperature slope and/or coolant temperature EMA. Such statistics may be determined for various time periods.
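One conventional way to compute the EMA and slope statistics named above is sketched below; the smoothing factor and the least-squares formulation are standard choices assumed for illustration, not mandated by the disclosure.

```python
def ema(values: list[float], alpha: float = 0.2) -> float:
    """Exponential moving average of a non-empty series; alpha is an
    assumed smoothing factor."""
    result = values[0]
    for v in values[1:]:
        result = alpha * v + (1 - alpha) * result
    return result

def slope(samples: list[tuple[float, float]]) -> float:
    """Least-squares slope of (timestamp, value) pairs, e.g. engine
    pressure or coolant temperature over a time period."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den if den else 0.0
```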
[0098] As shown in Table 2, power producers may be associated with certain operational requirements that must be observed. Such requirements may be predetermined (e.g., based on producer type) or may be dynamically adjusted according to values of certain producer metrics (e.g., based on a current Knock Index).
Table 2: Producer Operational Requirements
(Table 2 is presented as an image in the original publication.)
[0099] The system may also model and manage power consumer information for any number of power consumption systems. Such information may comprise: a unique ID, associated container information, and consumer information for each power consumer associated with each of the associated containers.
[0100] Generally, exemplary container information may include: container ID, associated site, associated power producers, container type (e.g., manufacturer, model), networking information (e.g., container modbus URL), VLANs information (e.g., main, ASIC, IoT, etc.), controller information (controller ID, controller IP address), layout information, associated DCUs, and various container metrics.
[0101] The embodiments may also manage power consumer information for each consumer. Exemplary consumer information may include, but is not limited to: unique ID, hardware identifier, network information, associated container and location information, consumer type (e.g., manufacturer, model), processor information (e.g., type, count, temperature, etc.), fan speed, hashrate, board information (e.g., temperature), software information, uptime, financial information (e.g., mining pool, pool user), owner information, status information and/or priority information.
[0102] Generally, each of the consumers (e.g., DCUs 112, 122) has a preassigned unique hardware identifier accessible to the system orchestrator 130c via the network. For example, the preassigned unique hardware identifier can be a media access control (MAC) address.
[0103] In any event, method 300 may further include a step 304 of generating, by system orchestrator 130c in response to a request from a user device 138 (e.g., a smartphone or computer), a new object in the memory 130b and associating a unique inventory identifier with the new object. The inventory identifier is different from the MAC address and is an ID assigned by the system orchestrator 130c. User device 138 can be a client computer, for example a mobile phone. The generating of the new object in the memory 130b and associating the inventory identifier unique to the system orchestrator 130c with the new object may be performed prior to the new DCU 112, 122 being connected to the respective network interface 136.
[0104] Following step 304, method 300 may include a step 306 of sending, by system orchestrator 130c, a transmission that directs the user device 138 to generate a graphical user interface configured to generate a request to produce a machine-readable representation of the inventory identifier that is configured for being affixed to one of the DCUs 112, 122. An intermediate system 144, for example an application server or a webserver, can receive the transmission from system orchestrator 130c and generate the graphical user interface on the user device 138. The graphical user interface can include an icon that is selectable to print the machine-readable representation via a printer on the network 134. The machine-readable representation can be a machine-readable code, for example a barcode or a QR code.
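Steps 304 and 306 might be modeled as in the following sketch. The InventoryModel class and its methods are hypothetical stand-ins for the inventory model held in memory 130b, and the UUID choice of inventory identifier is an assumption made only for illustration.

```python
import uuid

class InventoryModel:
    """Hypothetical in-memory stand-in for the inventory model in memory 130b."""

    def __init__(self):
        self.objects = {}   # inventory identifier -> object record
        self.by_mac = {}    # hardware identifier (e.g. MAC) -> inventory identifier

    def create_object(self) -> str:
        """Step 304: create a new object and an identifier unique to the
        orchestrator (a UUID is an assumed choice)."""
        inventory_id = str(uuid.uuid4())
        self.objects[inventory_id] = {"mac": None, "location": None}
        return inventory_id

    def associate_hardware(self, inventory_id: str, mac: str) -> None:
        self.objects[inventory_id]["mac"] = mac
        self.by_mac[mac] = inventory_id

    def associate_location(self, inventory_id: str, location: dict) -> None:
        self.objects[inventory_id]["location"] = location
```

In this reading, create_object runs before the unit is racked, and the returned identifier is what gets encoded in the printed barcode or QR code.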
[0105] The user of the user device 138 can then affix the machine-readable code, which is for example printed on a sticker, to the new DCU 112, 122 and connect the new DCU 112, 122 to a respective one of the network interfaces 136, for example by plugging in an ethernet cable of the network 134 that is in communication with the network interface 136 into a port of the new DCU 112, 122.
[0106] Method 300 also includes a step 308 in which the system orchestrator 130c, in response to the new DCU 112, 122 being connected to the network interface 136, automatically retrieves the preassigned unique hardware identifier from the new DCU 112, 122 and location information for the new DCU 112, 122 and associates the preassigned unique hardware identifier and the location information with the new object in the inventory model in memory 130b.
[0107] As mentioned above, each container may be associated with layout information corresponding to a plurality of racks disposed within a container. Each rack may comprise a plurality of shelves, where each shelf comprises various slots into which DCUs may be installed. Accordingly, each slot represents a unique physical location that may be employed to determine the physical location of a particular DCU if such components are correlated by the system.
[0108] To that end, each slot may be configured to include one of the network interfaces 136 of the communication system 132, wherein each interface is assigned a unique, static network address. Accordingly, when a DCU is connected to the particular network interface 136, the DCU is automatically associated with the corresponding network address. Because the network address uniquely identifies a particular slot, in a shelf of a rack located in a container disposed at a site, the network address association allows for a physical location to be determined.
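Because each static network address identifies exactly one slot, the location determination can reduce to a table lookup, as in this illustrative fragment (the addresses and identifiers are invented for the example):

```python
# Illustrative static map keyed by network address; in practice this
# association would be provisioned when a container is commissioned.
INTERFACE_LOCATIONS = {
    "10.1.8.21": {"site": "108", "container": "110",
                  "rack": "R2", "shelf": "S3", "slot": "P7"},
    # ... one entry per network interface 136
}

def locate(network_address: str) -> dict:
    """Resolve a DCU's physical position from the interface it is plugged into."""
    return INTERFACE_LOCATIONS[network_address]
```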
[0109] As described further below, the new DCU 112, 122 may be connected to the respective network interface 136 at a specific location on a rack in the physical container, with the position of the respective network interface within the physical container 110, 120 including the specific location of the new DCU on the rack.
[0110] As shown in Fig. 3b, step 308 can include a plurality of substeps, including a substep 308a of automatically, by the system orchestrator 130c and in response to the new DCU being connected to the network interface 136, retrieving the preassigned unique hardware identifier from the new DCU 112, 122 via the communication system 132.
[0111] Next, a substep 308b includes automatically, by the system orchestrator 130c, determining location information including the physical container 110, 120 in which the respective network interface 136 is located and a position of the respective network interface 136 within the physical container 110, 120 based on the network address of the respective network interface 136. Each of the network interfaces 136 can have a preassigned container identifier, a preassigned rack identifier, a preassigned shelf identifier and a preassigned shelf position identifier, and this information is automatically determined by the system orchestrator 130c upon the connecting of the new DCU 112, 122 to the network interface 136.
[0112] A substep 308c of step 308 includes automatically, by the system orchestrator 130c, directing the user device 138 to generate a visual representation of the new DCU 112, 122 via a graphical user interface displayed on user device 138. Intermediate system 144 can receive a transmission from system orchestrator 130c to generate the visual representation and can generate the visual representation on the user device 138. The visual representation can be a message with information describing that the new DCU 112, 122 has been added to the network 134 via connection to the specific network interface 136 and requesting the user of the user device 138 to scan the machine-readable representation of the inventory identifier that is affixed to the new DCU 112, 122 so the system orchestrator 130c can associate the preassigned unique hardware identifier with the inventory identifier and the new object. The information describing the new DCU 112, 122 can include any of the consumer information listed above. This information can be pulled by the container orchestrator 114c as soon as the DCU is connected to the network interface, and further information can be monitored and updated by the container orchestrator 114c periodically pinging the DCU. For example, each DCU can include a plurality of hashboards, with each hashboard including a plurality of chips. The container orchestrator 114c can periodically ping the DCU to obtain a maximum chip temperature for the chips on a hashboard, along with an average temperature for each hashboard, and the hashrate of the DCU.
[0113] A substep 308d of step 308 includes automatically, by the system orchestrator 130c, associating the preassigned unique hardware identifier with the selected new object in the inventory model upon receiving an input of the machine-readable representation of the inventory identifier via the user device 138 or a separate user device. For example, the input of the machine-readable representation of the inventory identifier can be scanning the machine-readable representation of the inventory identifier via the camera of user device 138.
[0114] A substep 308e of step 308 includes automatically, by the system orchestrator 130c, associating the location information with the new object in the inventory model. As noted above, this location information can include the physical container 110, 120 in which the new DCU 112, 122 is located and the position of the respective network interface 136 within the physical container 110, 120. This location information can be in the form of a preassigned container identifier, a rack identifier, a shelf identifier and a shelf position identifier.
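Substeps 308a through 308e might be tied together as below, reusing the hypothetical InventoryModel and locate from the sketches above; retrieve_mac is a further hypothetical stand-in for whatever discovery the communication system 132 performs.

```python
def retrieve_mac(network_address: str) -> str:
    """Hypothetical stand-in for reading the preassigned hardware identifier
    (e.g. the MAC address) from a newly connected DCU (substep 308a)."""
    raise NotImplementedError

def on_dcu_connected(network_address: str, notify) -> None:
    """Triggered when a new DCU appears on a network interface."""
    mac = retrieve_mac(network_address)     # substep 308a
    location = locate(network_address)      # substep 308b
    notify(f"New DCU {mac} detected in container {location['container']}; "
           f"please scan the label affixed to it")  # substep 308c

def on_label_scanned(model: InventoryModel, inventory_id: str,
                     network_address: str) -> None:
    """Completes the flow once the affixed code is scanned on a user device."""
    model.associate_hardware(inventory_id, retrieve_mac(network_address))  # 308d
    model.associate_location(inventory_id, locate(network_address))        # 308e
```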
[0115] Method 300 can then include a step 310 of providing a visual representation of the new object via a user interface 130d of coordinator computer 130. In other embodiments, the visual representation of the new object can be displayed on a user interface of a further computer connected to network 134. As described below, the visual representation of the new object can be displayed in visual representation of the container with visual representation of the other DCUs in the container and can be selectable to obtain information related to the new DCU that is stored in memory 130b in association with the new object.
[0116] The system orchestrator 130c can retrieve and/or transmit data for displaying, on the user interface provided with the visual representation of the new object, a representation illustrating the relationship between the new DCU 112, 122 and the other DCUs 112, 122 in the physical container 110, 120 at respective locations on the rack.
[0117] Method 300 can also include steps for updating the object for any of DCUs 112, 122 if the DCU 112, 122 is disconnected from the network interface 136. Method 300 can include a step 312 of automatically generating, by the system orchestrator 130c in response to a disconnecting of one of the DCUs 112, 122 from the respective network interface 136 and a reconnecting of the disconnected DCU 112, 122 with a further network interface 136 at a further location, updated location information for the reconnected DCU 112, 122 based on location information associated with the further network interface 136.
[0118] Upon the reconnection, the system orchestrator 130c can automatically determine the preassigned unique hardware identifier of the reconnected DCU 112, 122 and look up an object in the inventory model associated with the preassigned unique hardware identifier to identify the reconnected DCU 112, 122 and associate the updated location information for the reconnected DCU 112, 122 with the object.
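The relocation handling of step 312 and the paragraph above then reduces to a lookup by hardware identifier, sketched here with the same hypothetical helpers used earlier:

```python
def on_dcu_reconnected(model: InventoryModel, network_address: str) -> None:
    """A unit unplugged elsewhere reappears on a different interface: find
    its object by hardware identifier and refresh its location."""
    mac = retrieve_mac(network_address)
    inventory_id = model.by_mac.get(mac)
    if inventory_id is None:
        return  # unknown hardware: fall back to the new-DCU flow above
    model.associate_location(inventory_id, locate(network_address))
```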
[0119] System orchestrator 130c can cause processor 130a to generate a graphical user interface modeling a plurality of physical containers, including for example physical containers 110, 120, and a plurality of DCUs in communication with a network, including for example DCUs 112, 122. In particular, the system orchestrator 130c can be configured for causing the processor 130a to generate a graphical user interface depicting physical site 108, the first and second physical containers 110, 120 within the first physical site 108, and positions of the DCUs 112, 122 within the first and second physical containers 110, 120. The physical site 108 can be represented by a site representation, second physical containers 110, 120 can be represented by container representations, and DCUs 112, 122 can be represented by DCU representations.
[0120] Each representation is selectable by a user to access a corresponding data record in memory 130b generated by system orchestrator 130c. The data records for containers 110, 120 include container information of each of the physical containers 110, 120. The container information can include information describing the geographical location of the physical container, a container identifier for the physical container, a size of the physical container, a type of the physical container and a cost of the physical container. The data records for DCUs 112, 122 include position information for each of the DCUs in communication with the network 134. The position information can include a position of the distributed computing unit within the respective physical container, including at least one of a rack, a shelf and a slot where the distributed computing unit is positioned within the respective physical container. The container information and/or position information are automatically assigned to each DCU 112, 122 in communication with the network 134 based on the network address of the respective network interface 136.
[0121] The physical containers 110, 120 can each include a plurality of racks having predefined rack positions configured for receiving the DCUs, and each of the predefined rack positions can be associated with one of the network interfaces. Memory 130b can be partitioned to store the predefined rack positions and the associated network interfaces, and the system orchestrator 130c can be configured to automatically assign the predefined rack position associated with the network interface 136 with which the DCU is connected to the corresponding data record.
[0122] The physical containers 110, 120 can be located at different physical sites and each of the physical sites can have at least one of the physical containers 110, 120 and at least one of the physical sites can have a plurality of physical containers. The system orchestrator 130c can be configured for causing the processor to generate a graphical user interface depicting the physical sites, the physical containers within the physical sites, the predefined rack positions within the containers, and the DCUs in the predefined rack positions.
[0123] As noted above, system orchestrator 130c can also cause processor 130a to automatically assign to each DCU 112, 122 an inventory identifier unique to the system orchestrator 130c, and the inventory identifier is stored in the database with the container information of the distributed computing unit and the preassigned unique hardware identifier. For the addition of a new DCU to system 100, the system orchestrator 130c causes the processor 130a to, in response to a new entry event indicating an addition of the new DCU into one of the physical containers, automatically update the database to include a new data record including an automatically generated inventory identifier unique to the system orchestrator 130c. Specifically, the system orchestrator 130c can cause the processor 130a to, in response to a connection of the new DCU to one of the network interfaces 136 of one of the physical containers 110, 120, automatically retrieve the preassigned unique hardware identifier from the new DCU, and store the preassigned unique hardware identifier and the geographical location of network interface 136 to which the new DCU is connected in the new data record.
[0124] The container information and/or position information of each DCU can be dynamically adjusted by the system orchestrator 130c in response to a disconnection of the DCU from the respective network interface 136 and a reconnection of the DCU to a different network interface in a different physical container or the same physical container. For example, if a DCU is disconnected, the graphical user interface can be automatically updated by system orchestrator 130c to remove the corresponding representation, and upon reconnection, the graphical user interface can be automatically updated by system orchestrator 130c to add the corresponding representation in the new position in the corresponding physical container.
[0125] After reconnection, the adjusted container information and/or position information can be confirmed by an automatic entry event, which can for example include scanning of the machine-readable representation affixed to the DCU via a user device. In some embodiments, the scanning of the machine-readable representation can occur at a repair facility and the container information and/or position information of the DCU is updated to indicate a geographical location of the repair facility and/or a position of the DCU within the repair facility. In some embodiments, the scanning of the machine-readable representation occurs at a storage facility and the container information and/or position information of the DCU is updated to indicate a geographical location of the storage facility and/or a position of the DCU within the storage facility.
[0126] In embodiments where DCUs 112, 122 each include a plurality of GPUs, the data record can include a number of GPUs for each DCU 112, 122, a number of currently available GPUs for each DCU 112, 122, and a number of currently utilized GPUs for each DCU 112, 122. During operation, each GPU can run a single virtual machine alone or together with one or more of the other GPUs of the respective DCU 112, 122 and the data record can include the one or more GPUs running each virtual machine. The data record can also include the number of GPUs of each DCU currently running virtual machines and an excess capacity for running further virtual machines for each DCU.
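A data record of the kind described in the preceding paragraph could be modeled as follows; this is an illustrative shape assumed for the sketch, not a disclosed schema.

```python
from dataclasses import dataclass, field

@dataclass
class GpuDcuRecord:
    """Illustrative per-DCU data record for GPU-bearing units."""
    inventory_id: str
    total_gpus: int
    vm_assignments: dict = field(default_factory=dict)  # vm id -> GPU indices

    @property
    def utilized_gpus(self) -> int:
        return sum(len(gpus) for gpus in self.vm_assignments.values())

    @property
    def available_gpus(self) -> int:
        # Excess capacity for running further virtual machines.
        return self.total_gpus - self.utilized_gpus
```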
[0127] In embodiments where DCUs 112, 122 are cryptocurrency miners, the data records can include financial information related to the cryptocurrency miner. The financial information can include at least one of a purchase price of the cryptocurrency miner, a depreciation of the cryptocurrency miner or a profit generated by the cryptocurrency miner. Each of the data records can also include repair and/or maintenance history information for the cryptocurrency miner including financial costs associated with the repair and/or maintenance. Further, each of the data records can include a hash rate for the cryptocurrency miner.
[0128] User interface 130d can display information with respect to containers 110, 120, racks, and individual DCUs. For example, if a container or rack is selected, user interface 130d can display information for each DCU within the rack or container.
[0129] User interface 130d can display information with respect to all of the DCUs together in a container 110, 120 considered as a whole. For example, for a specific container the displayed metrics can include the total number of DCUs online and offline for the container, the total hashrate of all of the DCUs of the container together (current value and over time on a graph), a mining pool hashrate, the load of the generator powering the DCUs of the container, a current fuel consumption of the generator powering the DCUs of the container, the power consumption by the DCUs of the container over time shown in a graph, the gas pressure of the generator powering the DCUs of the container over time shown in a graph, and an average maximum chip temperature for the DCUs of the container over time shown in a graph.
[0130] User interface 130d can also display information for a plurality of containers with respect to all of the DCUs together in each container. For example, all of the containers can be viewed together on a screen, so a user can compare metrics among containers. The metrics can include a power (kW) of the generator powering the container, a max power (kW) of the generator powering the container, a gas pressure (psi), a current utility in terms of percentage of maximum hashrate being utilized, a miner inventory (total number in the container), the number of miners connected to the network, the number of miners that are hashing, the number of miners that are sleeping, the number of hashboards that are broken, and a currently unused mining capacity (PH/s or TH/s). A user of interface 130d can thus, for example, identify which containers are consuming the most and least power, which containers have the highest and lowest percentages of utility, which containers have the most unhealthy miners and which containers have the most unused capacity in hashes per second.
[0131] With respect to computing resource allocation for DCUs 112, 122, first container orchestrator 114c is configured to store in the first controller memory 114b or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the first DCUs 112. Similarly, second container orchestrator 124c is configured to store in the second controller memory 124b or periodically transmit data including a total computing capacity and a currently available computing capacity for each of the second DCUs 122. The system orchestrator 130c is configured for communicating with the first and second container orchestrators 114c, 124c to obtain the total computing capacity and the currently available computing capacity for each of the DCUs 112, 122.
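One way the system orchestrator might poll the container orchestrators for capacity is sketched below; the interface and class names are assumptions, not the disclosed implementation.

```python
# Assumed interface for illustration: each container orchestrator reports
# (total, available) computing capacity per DCU, and the reports are merged.
class ContainerOrchestrator:
    def __init__(self, capacities):
        self._capacities = capacities  # dcu_id -> (total_gpus, available_gpus)

    def report_capacity(self):
        return dict(self._capacities)

def collect_capacity(container_orchestrators):
    """System-orchestrator-side merge of all capacity reports."""
    merged = {}
    for orchestrator in container_orchestrators:
        merged.update(orchestrator.report_capacity())
    return merged

first = ContainerOrchestrator({"dcu-112-1": (8, 3), "dcu-112-2": (8, 8)})
second = ContainerOrchestrator({"dcu-122-1": (10, 0)})
print(collect_capacity([first, second]))
```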
[0132] System 100 can be configured to communicate with client computers 138 seeking computing resources in the form of virtual machines running on DCUs 112, 122. Client computers 138 can access, via a public network, a web page or application generated by an intermediate system 144, which can be a web server or application server, and submit a request to access a virtual machine generated by DCUs 112, 122. The request can include a requested number of GPUs for running the virtual machine, and this request can be communicated to coordinator computer 130 to connect the client computer 138 to a respective one of the DCUs 112, 122.
[0133] As noted above, each of DCUs 112, 122 can include multiple GPUs, and each of the first and second DCUs 112, 122 can be configured for running one or more virtual machines. Each GPU can be adapted to run a single virtual machine alone or together with one or more of the other GPUs of the respective DCU 112, 122.
[0134] The coordinator computer 130 is configured to receive a request via intermediate system 144 from client computer 138 to access a virtual machine, and in response to the request, to retrieve the currently available computing capacity for each of the first and second DCUs 112, 122, and to establish the virtual machine on the first or second DCUs 112, 122 based on the currently available computing capacity from the first and second inventory models.
[0135] Coordinator computer 130 is configured to receive a request from a client computer 138 to access a virtual machine, and in response to the request, to send a query via the system orchestrator 130c to the first and second container orchestrators 114c, 124c to retrieve an operating status of the GPUs of the first and second DCUs 112, 122, and to send instructions to wipe an active virtual machine running on the respective GPUs of one of the first and second DCUs 112, 122 and to spin up the virtual machine requested by the client computer 138 on one or more of the GPUs on which the active virtual machine was wiped out. For example, if the operator of system 100 is running an active virtual machine for internal business purposes and/or for a profit generating activity on six GPUs of one of DCUs 122, and the client computer 138 requests a virtual machine that is to be run on three GPUs, the active virtual machine is wiped out, the client computer 138 is connected to the DCU 122, and the requested virtual machine is run for the client computer 138 on three of the six GPUs on which the active virtual machine was wiped out.
[0136] Along with the spinning up of the virtual machine requested by the client computer 138 on the respective one or more GPUs, the system orchestrator 130c is configured to prompt available GPUs to spin up a further virtual machine. The internal business purposes and/or profit generating activity can include running a hash algorithm to mine cryptocurrency. For the example in which six GPUs were running an active virtual machine that was wiped, and three of these GPUs were then used to run a requested virtual machine for a client computer 138, the three remaining available GPUs can be prompted to run a further virtual machine for internal business purposes and/or profit generating activity. Specifically, six of the GPUs can be running an active virtual machine mining cryptocurrency, then the mining halts to reallocate three of the six GPUs to run the requested virtual machine for the client computer and three of the six GPUs to run a further virtual machine mining cryptocurrency.
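The reallocation in the six-GPU example above could proceed, schematically, as in the following sketch; the function and its list-based bookkeeping are assumptions for illustration.

```python
# Sketch of the wipe-and-respin reallocation described above; the list-based
# bookkeeping and function name are illustrative assumptions.
def reallocate(baseload_gpus, requested_count):
    """Wipe the baseload VM, run the requested VM on part of the freed GPUs,
    and spin up a further baseload VM on the remainder."""
    if requested_count > len(baseload_gpus):
        raise ValueError("not enough wipeable GPUs on this DCU")
    freed = list(baseload_gpus)                  # the active baseload VM is wiped
    client_vm_gpus = freed[:requested_count]     # requested VM for the client
    new_baseload_gpus = freed[requested_count:]  # further mining VM
    return client_vm_gpus, new_baseload_gpus

# Six GPUs were mining; the client computer requests a three-GPU VM.
client, baseload = reallocate(baseload_gpus=[0, 1, 2, 3, 4, 5], requested_count=3)
print(client)    # [0, 1, 2] -> requested virtual machine
print(baseload)  # [3, 4, 5] -> further cryptocurrency-mining virtual machine
```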
[0137] Each of the first and second DCUs 112, 122 can further include an agent 146 configured for controlling a number of virtual machines generated by the respective first or second DCU 112, 122. The system orchestrator 130c can be configured for instructing the first or second container orchestrator 114c, 124c to generate the virtual machine requested by the client computer 138. The first or second container orchestrator 114c, 124c can be configured for instructing one of the agents 146 to generate the virtual machine requested by the client computer 138 on one or more of the GPUs controlled by the agent 146. Each agent 146 can be software running on the respective DCU 112, 122 that the agent 146 controls and monitors.
[0138] Each of the agents 146 can be configured to monitor, store and/or periodically transmit metadata for the GPUs controlled by the agent 146 to the corresponding first or second container orchestrator 114c, 124c. For example, if a DCU 112 in container 110 includes eight GPUs, and four GPUs are running a first virtual machine for a first client computer 138, and four GPUs are running a second virtual machine for a second client computer 138, the agent 146 for the DCU 112 stores and/or periodically transmits the metadata for such GPUs to container orchestrator 114c. If the first client computer 138 disconnects from the DCU 112, the agent 146 for the DCU 112 stores and/or periodically transmits metadata indicating that the four GPUs are no longer running the first virtual machine.
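An agent's periodic metadata broadcast might look like the following sketch; the payload shape and the use of JSON over an abstract send function are assumptions.

```python
# Illustrative agent-side metadata broadcast; the payload fields and
# transport are assumptions, not a disclosed protocol.
import json
import time

def gpu_metadata(gpu_states):
    """gpu_states maps a GPU index to a VM identifier, or None when idle."""
    return {
        "timestamp": time.time(),
        "gpus": [{"index": i, "vm": vm} for i, vm in sorted(gpu_states.items())],
    }

def broadcast(send, gpu_states):
    send(json.dumps(gpu_metadata(gpu_states)))

# Four GPUs run the first VM, four run the second; then the first client disconnects.
states = {**{i: "vm-first" for i in range(4)}, **{i: "vm-second" for i in range(4, 8)}}
broadcast(print, states)
for i in range(4):
    states[i] = None  # the first VM's GPUs are reported as idle again
broadcast(print, states)
```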
[0139] In some embodiments, system orchestrator 130c can instruct, via the respective container orchestrator 114c and agent 146, the DCU 112 to run a hashing algorithm on the four GPUs that are no longer running the first virtual machine in response to the first client computer 138 disconnecting from the DCU 112 to mine cryptocurrency on these four GPUs.
[0140] The first or second container orchestrator 114c, 124c can be configured to retrieve information regarding which of the GPUs of the first and second DCUs 112, 122 are mining cryptocurrency and which of the GPUs of the first and second DCUs 112, 122 are generating virtual machines for clients, and to transmit the retrieved information to the system orchestrator 130c.
Baseload Management
[0141] A method 400 for allocating computer resources is illustrated in Fig. 4. The method can include a step 402 of receiving, by the system orchestrator 130c connected to DCUs 112, 122 by a network, a request from a client computer 138 to access a virtual machine. In this example, each of DCUs 112, 122 includes a plurality of GPUs and the request from the client computer 138 identifies a requested number of GPUs for running the requested virtual machine. Step 402 can include querying the first container orchestrator 114c analyzing the first DCUs 112 and the second container orchestrator 124c analyzing the second DCUs 122 to identify one of the DCUs 112, 122 having available computing capacity for providing the requested virtual machine.
[0142] Method 400 also includes a step 404 of identifying a DCU selected from DCUs 112, 122 with one or more available GPUs equal to or greater than the requested number of GPUs. The one or more available GPUs have available computing capacity for generating the requested virtual machine.
[0143] Method 400 then includes a step 406 of instructing the one or more available GPUs to wipe an entire baseload running on the one or more available GPUs of the selected DCU. The one or more available GPUs are equal to a total number of GPUs of the selected DCU minus a number of GPUs of the selected DCU running one or more virtual machines for client computers 138. For example, if the selected DCU includes ten GPUs and eight of these ten GPUs are running one or more virtual machines for client computers 138 - e.g., three of the eight are running a virtual machine for a first client computer 138, and five of the eight are running a virtual machine for a second client computer - the DCU includes two available GPUs.
[0144] Method 400 next includes a step 408 of spinning up the requested virtual machine on the requested number of GPUs of the one or more available GPUs following the wipeout of the entire baseload. Continuing the directly preceding example, if the client computer 138 requests only a single GPU for running the requested virtual machine and two GPUs are available, the requested virtual machine is run on one of the two available GPUs.
[0145] If the number of available GPUs is greater than the requested number of GPUs, such that further GPUs remain available following the spinning up of the requested virtual machine on the requested number of GPUs, the method further includes a step 410 of spinning up a new baseload on the further available GPUs. The new baseload runs contemporaneously with the requested virtual machine spun up in step 408. Continuing the directly preceding example, if the client computer 138 requests only a single GPU for running the requested virtual machine and two GPUs are available, a new baseload is spun up on the other available GPU.
[0146] In some embodiments, the baseload wiped out in step 406 and the new baseload spun up in step 410 are each a virtual machine performing a profit generating task. For example, the profit generating task can be running a hash algorithm to mine cryptocurrency or training machine learning models.
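Steps 402 through 410 of method 400 can be condensed into the following sketch; the dictionary-based DCU model and the 'client'/'baseload' markers are assumptions made for illustration.

```python
# Condensed sketch of method 400 (steps 402-410); the DCU representation
# is an illustrative assumption.
def allocate(dcus, requested_gpus):
    for dcu in dcus:
        # Step 404: available GPUs = total minus GPUs serving client VMs.
        available = [g for g, vm in dcu["gpus"].items() if vm != "client"]
        if len(available) >= requested_gpus:
            for g in available:                   # step 406: wipe the
                dcu["gpus"][g] = None             # entire baseload
            for g in available[:requested_gpus]:  # step 408: spin up the
                dcu["gpus"][g] = "client"         # requested VM
            for g in available[requested_gpus:]:  # step 410: new baseload
                dcu["gpus"][g] = "baseload"       # on leftover GPUs
            return dcu
    return None  # no DCU has sufficient available capacity

# Ten GPUs: eight serve client VMs, two run baseload; a one-GPU VM is requested.
dcu = {"gpus": {**{i: "client" for i in range(8)}, 8: "baseload", 9: "baseload"}}
allocate([dcu], requested_gpus=1)
print(dcu["gpus"][8], dcu["gpus"][9])  # client baseload
```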
[0147] Fig. 5 shows a method 500 of balancing load created by various components of a power consumption system with an optimal power output determined for a power production system.
[0148] As discussed above, the disclosed embodiments may include automated control devices that are configured to monitor the operation of power producers of the power production system, adjust the operation of power producers based on producer metrics and/or operational requirements, and monitor and control the operation of power consumers of the power consumption system.
[0149] Generally, the embodiments provide an automated method for determining a target output power for the power production system based on various metrics, statistics and/or operational requirements; determining operational parameters of consumers of the power consumption system; and adjusting the operation of one or more consumers such that the power consumption system is modified to provide a load that substantially meets the target output power. In this way, the embodiments allow for an optimal output power to be provided by the power production system while balancing power demand of the power consumption system.
[0150] In a first step 502, method 500 includes retrieving inputs for a plurality of metrics of system 100. The metrics can advantageously include the power generation metrics mentioned above, including an engine pressure, a generator output, an engine output, a coolant temperature, a percent engine load, cylinder positions of the engine, and a knock index. One of power control modules 114d, 124d can retrieve sensor data for a plurality of control variables from the respective controller 272. The power generation metrics can be fetched by the respective power control module 114d, 124d from controller 272 at predetermined intervals.
[0151] The further metrics can also include one or more transformer metrics, including a temperature of electrical transformation module 235 retrieved from sensor 276 by respective power control modules 114d, 124d at the predetermined intervals.
[0152] The further metrics can further include one or more site metrics, which can be the pressure of gas entering into gas supply line 220, retrieved from inlet pressure sensor 274 by respective power control modules 114d, 124d at the predetermined intervals.

[0153] The metrics can further include one or more business metrics, such as a maintenance schedule, retrieved from a business database 148 at the predetermined intervals.
[0154] A next step 504 is calculating a plurality of different target powers at a time t + 1 based on the power generation metrics and the further metrics. Each of the different target powers is based on a different energy producer statistic derived from the energy producer metrics. Step 504 can include running each of the metrics through a distinct PID controller to determine a target power for each of the metrics. In particular, each power generation metric, each transformer metric and each site metric can be run through a distinct PID controller to determine a distinct target power for each distinct power generation metric, each distinct transformer metric and each distinct site metric.
[0155] This process can involve calculating power generation statistics, by power control modules 114d, 124d, from current and historical values of power generation metrics (i.e., over one or more time periods).
[0156] The system can compare each statistic to a specific threshold for the respective power generation statistic, and output an error if the respective power generation statistic breaches the threshold - i.e., is greater than a maximum threshold, or is less than a minimum threshold. The error is proportional to the amount by which the respective power generation statistic breaches the threshold, and this proportionality is used to calculate the target power for the respective power generation statistic.
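A single per-metric loop of step 504 might be sketched as follows; the PID gains, thresholds, and units are placeholder assumptions, not disclosed operating parameters.

```python
# Minimal per-metric PID sketch for step 504; gains and thresholds are
# placeholder assumptions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def target_power(current_power_kw, statistic, min_limit, max_limit, pid, dt=1.0):
    """The error is proportional to how far the statistic breaches either
    threshold; any breach pulls the target power downward (conservative)."""
    error = max(statistic - max_limit, min_limit - statistic, 0.0)
    return current_power_kw - pid.step(error, dt)

# A coolant temperature of 98 against a 95-degree maximum pulls power down.
coolant_pid = PID(kp=2.0, ki=0.1, kd=0.0)
print(target_power(400.0, statistic=98.0, min_limit=60.0, max_limit=95.0, pid=coolant_pid))
```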
[0157] Gas pressure metrics for other power generation modules 231a, 231b on the same site can also be taken into account for determining a target power. For example, if one power generation module 231a drops below a certain pressure, the target power of another power generation module 231b can be decreased to cause a corresponding increase in the pressure of power generation module 231a.
[0158] A next step 506 is selecting (e.g., by a power control module 114d, 124d) a most conservative target power from the calculated different target powers. This conservative approach can advantageously prevent most generator shutdowns and maximize the uptime of DCUs 112, 122 as a whole.

[0159] A next step 508 is outputting, by the respective power control modules 114d, 124d, a power consumption change calculated as a function of the most conservative target power. The power consumption change may be sent from the respective power control modules 114d, 124d to the respective container orchestrator 114c, 124c. The power control modules 114d, 124d can take the lowest calculated target power and translate it into consumer target powers to be sent to one of container orchestrators 114c, 124c. The power control module 114d, 124d outputs the required power decrease or increase to the respective container orchestrator 114c, 124c.
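Steps 506 and 508 amount to a minimum-selection and a delta computation, sketched below with illustrative figures.

```python
# Sketch of steps 506-508: pick the most conservative (lowest) target power
# and express it as a consumption change; all figures are illustrative.
def power_consumption_change(target_powers_kw, current_load_kw):
    most_conservative = min(target_powers_kw)    # step 506
    return most_conservative - current_load_kw   # step 508: negative = shed load

targets = {"engine_load": 420.0, "coolant_temp": 380.0, "gas_pressure": 405.0}
delta = power_consumption_change(targets.values(), current_load_kw=400.0)
print(delta)  # -20.0 -> the container orchestrator must shed 20 kW
```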
[0160] A next step 510 is selecting at least one DCU 112, 122, based on priority or hierarchy information associated with each of the DCUs, for altering a power state thereof to achieve the power consumption change. If the power consumption change is a power decrease, DCUs 112, 122 can be selected for powering down, and thus the altering of the power state is changing the power from on to off. If the power consumption change is a power increase, DCUs 112, 122 can be selected for powering up, and thus the altering of the power state is changing the power from off to on. DCUs 112, 122 can also be provided with firmware that allows the amount of power drawn by each DCU 112, 122 to be increased or decreased within a range of non-zero to 100%.
As noted above, the container orchestrators 114c, 124c can retrieve the hierarchy of DCUs 112, 122 under the control of the respective container orchestrator 114c, 124c (e.g., container orchestrator 114c controls DCUs 112, and container orchestrator 124c controls DCUs 122), and populate a data record with the hierarchy and the power currently being consumed by each DCU 112, 122. If, for example, the output power of the power generation module 231a or 231b needs to be decreased, the container orchestrators 114c, 124c can select the lowest priority DCUs 112, 122 that can achieve the required power reduction for powering down. In particular, the container orchestrator 114c, 124c can for example start from the bottom of the hierarchy and select the lowest priority DCU for deactivation, then select the second lowest priority DCU, and this process continues until the selected DCUs together have a cumulative current power consumption that is equal to or greater than the required power reduction. Conversely, if more than one of DCUs 112, 122 are currently turned off and not drawing power, the hierarchy can be used for selecting DCUs for activation if the most conservative target power at time t + 1 is greater than the power currently drawn by the DCUs 112, 122. Higher priority DCUs 112, 122 in the hierarchy are selected for activation before lower priority DCUs 112, 122.

[0162] For example, if the DCUs 112 are different models of cryptocurrency miners, the DCUs 112 can be prioritized by mining output, which can include selecting DCUs 112 for shutdown based on the age of the DCU 112. If the five oldest DCUs 112 have an output equal to or greater than the required power reduction, these five oldest DCUs 112 are selected for shutdown, and the remaining DCUs 112 continue to mine cryptocurrency and draw power from the respective power generation module 231a, 231b.
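The bottom-up hierarchy walk described above could be sketched as follows; the record layout and the meaning of the priority field are assumptions.

```python
# Sketch of hierarchy-based selection for a required power reduction; the
# record layout is assumed, with a lower 'priority' value meaning lower importance.
def select_for_shutdown(dcus, required_reduction_kw):
    """Walk from the bottom of the hierarchy until the selected DCUs'
    cumulative draw meets or exceeds the required reduction."""
    selected, shed = [], 0.0
    for dcu in sorted(dcus, key=lambda d: d["priority"]):
        if shed >= required_reduction_kw:
            break
        if dcu["power_kw"] > 0:  # skip DCUs that are already powered down
            selected.append(dcu["id"])
            shed += dcu["power_kw"]
    return selected, shed

dcus = [
    {"id": "miner-old-1", "priority": 1, "power_kw": 3.0},
    {"id": "miner-old-2", "priority": 2, "power_kw": 3.0},
    {"id": "gpu-server", "priority": 9, "power_kw": 6.5},
]
print(select_for_shutdown(dcus, required_reduction_kw=5.0))
# (['miner-old-1', 'miner-old-2'], 6.0): cumulative draw >= required reduction
```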
[0163] As another example, if the DCUs 122 are a mix of cryptocurrency miners and servers that include GPUs, the cryptocurrency miners can be selected for being powered down before the servers.
[0164] The hierarchy can also change in accordance with a cloud virtual machine service offered by system 100. For example, if DCUs 112, 122 are being powered by a single power generation module 231a, and DCUs 112 includes GPUs configured to provide virtual machines for the cloud virtual machine service, then a maximum possible power draw for reserved virtual machines must be at all times preserved for DCUs 112, and only DCUs 122 can be considered for achieving the required power reduction. For example, if no customers have reserved virtual machines via the cloud virtual machine service, then the selection of DCUs 112, 122 for power changes can be done solely based on a standard hierarchy. However, as soon as customers start reserving hardware in the cloud center, then possible power from power generation module 231a must be reserved to make sure that if those cloud customers fully utilize their hardware up to the maximum power draw, power generation module 231a has enough available power to accommodate the maximum possible power draw.
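The headroom constraint above reduces to a simple subtraction, sketched here with illustrative figures.

```python
# Sketch of the reserved-capacity constraint above; all figures are
# illustrative, not disclosed ratings.
def max_other_load_kw(generator_capacity_kw, reserved_vm_max_draw_kw):
    """Load left for non-reserved DCUs once the maximum possible draw of
    customer-reserved virtual machines is preserved at all times."""
    return max(0.0, generator_capacity_kw - reserved_vm_max_draw_kw)

# A 500 kW generator with 200 kW reserved for cloud customers leaves at
# most 300 kW for DCUs that may be considered for power reductions.
print(max_other_load_kw(500.0, 200.0))  # 300.0
```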
[0165] The power control modules 114d, 124d are further configured to translate the received target power into consumer target powers to be sent to one of container orchestrators 114c, 124c. For example, if the target power is less than the current power consumption (i.e., the current load), the power control module 114d, 124d determines the required power decrease in terms of consumers and sends the same to the respective container orchestrator 114c, 124c.
[0166] The container orchestrator may then output overclocking or underclocking instructions to the respective DCUs 112, 122 selected for power adjustment (e.g., based on a distributed computer hierarchy or priority information) and the required power consumption adjustment.
[0167] The container orchestrators 114c, 124c can retrieve the hierarchy of DCUs 112, 122 under the control of the respective container orchestrator 114c, 124c (e.g., container orchestrator 114c controls DCUs 112, and container orchestrator 124c controls DCUs 122), and populate a data record with the hierarchy and the power currently being consumed by each DCU 112, 122. If, for example, the output power of the power generation module 231a or 231b needs to be decreased, the container orchestrators 114c, 124c can then turn off the lowest priority DCUs 112, 122 that can achieve the required power reduction. In particular, the container orchestrator 114c, 124c can for example start from the bottom of the hierarchy and select the lowest priority DCU for deactivation, then select the second lowest priority DCU, and this process continues until the selected DCUs together have a cumulative current power consumption that is equal to or greater than the required power reduction.
[0168] It should also be noted that a single power generation module 231a or 231b can power all of DCUs 112, 122. When a single power generation module 231a or 231b powers all of DCUs 112, 122, a single power control module 114d or 124d can determine the lowest calculated target power for all of the different control variables, then can send consumer target powers to both of container orchestrators 114c, 124c.
[0169] A next step 512 is altering the power state of the selected at least one DCU 112, 122 to achieve the power consumption change. As noted above, this can involve turning off one or more DCUs 112, 122 if the power consumption change requires a decrease in power, and can involve turning on one or more of DCUs 112, 122 if the power consumption change requires an increase in power. Further, the amount of power consumption by DCUs 112, 122 can be granularly increased or decreased without turning DCUs 112, 122 on or off. For example, the five lowest priority DCUs 112, 122 can be turned down 25% to achieve the power consumption change.
Computing Machines
[0170] Referring to FIG. 6, a block diagram is provided illustrating an exemplary computing machine 600 and modules 650 in accordance with one or more embodiments presented herein. The computing machine 600 may represent any of the various computing systems discussed herein, such as but not limited to, the DCUs 112, 122, components of control systems (Fig. 1 at 101, 114, 124), the client devices (FIG. 1 at 138) and/or the third-party systems. The modules 650 may comprise one or more hardware or software elements configured to facilitate the computing machine 600 in performing the various methods and processing functions presented herein.
[0171] The computing machine 600 may comprise all kinds of apparatuses, devices, and machines for processing data, including but not limited to, a programmable processor, a computer, and/or multiple processors or computers. As shown, an exemplary computing machine 600 may include various internal and/or attached components, such as a processor 610, system bus 670, system memory 620, storage media 640, input/output interface 680, and network interface 660 for communicating with a network 630.
[0172] The computing machine 600 may be implemented as a conventional computer system, an embedded controller, a server, a laptop, a mobile device, a smartphone, a wearable device, a kiosk, customized machine, or any other hardware platform and/or combinations thereof. Moreover, a computing machine may be embedded in another device, such as but not limited to, a portable storage device. In some embodiments, the computing machine 600 may be a distributed system configured to function using multiple computing machines interconnected via a data network or system bus 670.
[0173] The processor 610 may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands. The processor 610 may be configured to monitor and control the operation of the components in the computing machine 600. The processor 610 may be a general-purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a graphics processing unit (“GPU”), a field programmable gate array (“FPGA”), a programmable logic device (“PLD”), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof. The processor 610 may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, coprocessors, or any combination thereof. In addition to hardware, exemplary apparatuses may comprise code that creates an execution environment for the computer program (e.g., code that constitutes one or more of processor firmware, a protocol stack, a database management system, an operating system, and a combination thereof). According to certain embodiments, the processor 610 and/or other components of the computing machine 600 may be a virtualized computing machine executing within one or more other computing machines.
[0174] The system memory 620 may include non-volatile memories such as read-only memory (“ROM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), flash memory, or any other device capable of storing program instructions or data with or without applied power. The system memory 620 also may include volatile memories, such as random-access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), and synchronous dynamic random-access memory (“SDRAM”). Other types of RAM also may be used to implement the system memory. The system memory 620 may be implemented using a single memory module or multiple memory modules. While the system memory is depicted as being part of the computing machine 600, one skilled in the art will recognize that the system memory may be separate from the computing machine without departing from the scope of the subject technology. It should also be appreciated that the system memory may include, or operate in conjunction with, a non-volatile storage device such as the storage media 640.
[0175] The storage media 640 may store one or more operating systems, application programs and program modules such as module, data, or any other information. The storage media may be part of, or connected to, the computing machine 600. The storage media may also be part of one or more other computing machines that are in communication with the computing machine such as servers, database servers, cloud storage, network attached storage, and so forth.
[0176] The modules 650 may comprise one or more hardware or software elements configured to facilitate the computing machine 600 with performing the various methods and processing functions presented herein. The modules 650 may include one or more sequences of instructions stored as software or firmware in association with the system memory 620, the storage media 640, or both. The storage media 640 may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor. Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor. Such machine or computer readable media associated with the modules may comprise a computer software product. It should be appreciated that a computer software product comprising the modules may also be associated with one or more processes or methods for delivering the module to the computing machine 600 via the network, any signal-bearing medium, or any other communication or delivery technology. The modules 650 may also comprise hardware circuits or information for configuring hardware circuits such as microcode or configuration information for an FPGA or other PLD.
[0177] The input/output (“I/O”) interface 680 may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices. The I/O interface 680 may include both electrical and physical connections for operably coupling the various peripheral devices to the computing machine 600 or the processor 610. The I/O interface 680 may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine, or the processor. The I/O interface 680 may be configured to implement only one interface or bus technology. Alternatively, the I/O interface may be configured to implement multiple interfaces or bus technologies. The I/O interface may be configured as part of, all of, or to operate in conjunction with, the system bus 670. The I/O interface 680 may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine 600, or the processor 610.
[0178] The I/O interface 680 may couple the computing machine 600 to various input devices to receive input from a user in any form. Moreover, the I/O interface 680 may couple the computing machine 600 to various output devices such that feedback may be provided to a user via any form of sensory feedback (e.g., visual, auditory or tactile).
[0179] Embodiments of the subject matter described in this specification can be implemented in a computing machine 600 that includes one or more of the following components: a backend component (e.g., a data server); a middleware component (e.g., an application server); a frontend component (e.g., a client computer having a graphical user interface (“GUI”) and/or a web browser through which a user can interact with an implementation of the subject matter described in this specification); and/or combinations thereof. The components of the system can be interconnected by any form or medium of digital data communication, such as but not limited to, a communication network. Accordingly, the computing machine 600 may operate in a networked environment using logical connections through the network interface 660 to one or more other systems or computing machines across a network.

[0180] The processor 610 may be connected to the other elements of the computing machine 600 or the various peripherals discussed herein through the system bus 670. It should be appreciated that the system bus 670 may be within the processor, outside the processor, or both. According to some embodiments, any of the processor 610, the other elements of the computing machine 600, or the various peripherals discussed herein may be integrated into a single device such as a system on chip (“SOC”), system on package (“SOP”), or ASIC device.
[0181] In the preceding specification, the present disclosure has been described with reference to specific exemplary embodiments and examples thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than a restrictive sense.

Claims

[0182] What is claimed is:
1. A system for dynamic modeling of computer resources comprising: a processor; memory; and a system orchestrator stored in the memory that, when executed by the processor, causes the processor to perform operations comprising: generate a graphical user interface modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generate, in a database in the memory, a data record including container information of each of the physical containers and/or position information for each of the distributed computing units in communication with the network, the container information and/or position information being automatically assigned to each distributed computing unit in communication with the network based on the network address of the respective network interface; automatically assign to each distributed computing unit an inventory identifier unique to the system orchestrator, the inventory identifier being stored in the database with the container information of the distributed computing unit and the preassigned unique hardware identifier; and dynamically adjust the container information and/or position information of each distributed computing unit in response to a disconnection of the distributed computing unit from the respective network interface and a reconnection of the distributed computing unit to a different network interface in a different one of the physical containers.
2. The system as recited in claim 1, wherein the container information includes information describing a geographical location of the physical container, a container identifier for the physical container, a size of the physical container, a type of the physical container and a cost of the physical container, wherein the position information includes a position of the distributed computing unit within the respective physical container, including at least one of a rack, a shelf and a slot where the distributed computing unit is positioned within the respective physical container.
3. The system as recited in claim 1, wherein the dynamically adjusting of the container information and/or position information includes confirming the container information and/or position information by an automatic entry event.
4. The system as recited in claim 3, wherein the automatic entry event includes scanning of a machine-readable representation affixed to the distributed computing unit.
5. The system as recited in claim 4, wherein the scanning of the machine-readable representation occurs at a repair facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the repair facility and/or a position of the distributed computing unit within the repair facility.
6. The system as recited in claim 4, wherein the scanning of the machine-readable representation occurs at a storage facility and the container information and/or position information of the distributed computing unit is updated to indicate a geographical location of the storage facility and/or a position of the distributed computing unit within the storage facility.
7. The system as recited in claim 1, wherein the system orchestrator causes the processor to, in response to a new entry event indicating an addition of a new distributed computing unit into one of the physical containers: automatically update the database to include a new data record including an automatically generated inventory identifier.
8. The system as recited in claim 7, wherein the system orchestrator causes the processor to, in response to a connection of the new distributed computing unit to one of the network interfaces of one of the physical containers, automatically retrieve the preassigned unique hardware identifier from the new distributed computing unit, and store the preassigned unique hardware identifier and a geographical location of network interface to which the new distributed computing unit is connected in the new data record.
9. The system as recited in claim 1, wherein the distributed computing units are a plurality of cryptocurrency miners.
10. The system as recited in claim 9, wherein each of the data records includes financial information related to the cryptocurrency miner.
11. The system as recited in claim 10, wherein the financial information includes at least one of a purchase price of the cryptocurrency miner, a depreciation of the cryptocurrency miner or a profit generated by the cryptocurrency miner.
12. The system as recited in claim 9, wherein each of the data records includes repair and/or maintenance history information for the cryptocurrency miner including financial costs associated with the repair and/or maintenance.
13. The system as recited in claim 9, wherein each of the data records includes a hash rate for the cryptocurrency miner.
14. The system as recited in claim 1, wherein the distributed computing units each include a plurality of graphics processing units, the data record including a number of graphics processing units for each distributed computing unit, a number of currently available graphics processing units for each distributed computing unit, and a number of currently utilized graphics processing units for each distributed computing unit.
15. The system as recited in claim 14, wherein each of the distributed computing units is configured for running a plurality of virtual machines, each graphics processing unit being adapted to run a single virtual machine alone or together with one or more of the other graphics processing units of the respective distributed computing unit, the data record including the one or more graphics processing units running each virtual machine.
16. The system as recited in claim 15, wherein the data record includes the number of graphics processing units of each distributed computing unit currently running virtual machines and an excess capacity for running further virtual machines for each distributed computing unit.
17. The system as recited in claim 1, wherein the physical containers each include a plurality of racks having predefined rack positions configured for receiving the distributed computing units, each of the predefined rack positions being associated with one of the network interfaces, the memory partitioned to store the predefined rack positions and the associated network interfaces, the system orchestrator configured to automatically assign the predefined rack position associated with the network interface with which the distributed computing unit is connected to the corresponding data record.
18. The system as recited in claim 17, wherein: the physical containers are located at different physical sites, each of the physical sites having at least one of the physical containers and at least one of the physical sites having a plurality of the physical containers; and the system orchestrator is configured for causing the processor to generate a graphical user interface depicting the physical sites, the physical containers within the physical sites, the predefined rack positions within the containers, and the distributed computing units in the predefined rack positions.
19. A method of updating a computerized inventory of distributed computing units movable throughout a plurality of containers across a plurality of physical sites, the method comprising: providing a computer system including a processor, memory and a system orchestrator stored in the memory and executable by the processor to cause the processor to perform operations to update an inventory model stored in the memory, the inventory model including information modeling a plurality of physical containers and a plurality of distributed computing units in communication with a network, each of the physical containers housing a subset of the distributed computing units, each of the distributed computing units having a preassigned unique hardware identifier accessible to the system orchestrator via the network, each of the physical containers having a plurality of network interfaces each assigned a network address of the network, each of the distributed computing units connected to one of the network interfaces and associated with the respective network address; generating, by the system orchestrator in response to a request for a user device, a new object in the memory and associating an inventory identifier unique to the system orchestrator with the new object; directing the user device to generate a user interface configured to generate a request to produce a machine-readable representation of the inventory identifier that is configured for being affixed to one of the distributed computing units; automatically, by the system orchestrator and in response to a new distributed computing unit being connected to a respective one of the network interfaces: retrieving the preassigned unique hardware identifier from the new distributed computing unit; determining location information including the physical container in which the respective network interface is located and a position of the respective network interface within the physical container based on the network address of the respective network interface; directing the user device to generate a visual representation of the new distributed computing unit via a graphical user interface displayed on the user device; associating the preassigned unique hardware identifier with the generated new object in the inventory model upon receiving an input of the machine-readable representation of the inventory identifier via the user device or a separate user device; and associating the location information with the new object in the inventory model; and providing a visual representation of the new object via a user interface.
20. The method as recited in claim 19 wherein the generating of the new object in the memory and associating the inventory identifier unique to the system orchestrator with the new object is performed prior to the new distributed computing unit being connected to the respective network interface.
21. The method as recited in claim 19 wherein the user interface configured to generate the request to produce the machine-readable representation of the inventory identifier is a user interface configured to generate the request to print the machine-readable representation of the inventory identifier.
22. The method as recited in claim 19 wherein the new distributed computing unit connected to the respective one of the network interfaces is at a specific location on a rack in the physical container, the position of the respective network interface within the physical container including the specific location of the new distributed computing unit on the rack.
23. The method as recited in claim 22 further comprising retrieving and/or transmitting, by the system orchestrator, data for displaying, on the user interface provided with the visual representation of the new object, a representation illustrating a relationship between the new distributed computing unit and the other distributed computing units in the physical container at respective locations on the rack.
24. The method as recited in claim 19 further comprising automatically generating, by the system orchestrator in response to a disconnecting of one of the distributed computing units from the respective network interface and a reconnecting of the disconnected distributed computing unit with a further network interface at a further location, updated location information for the reconnected distributed computing unit based on location information associated with the further network interface.
25. The method as recited in claim 24 wherein upon the reconnection, automatically determining, by the system orchestrator, the preassigned unique hardware identifier of the reconnected distributed computing unit and looking up an object in the inventory model associated with the preassigned unique hardware identifier to identify the reconnected distributed computing unit and associate the updated location information for the reconnected distributed computing unit with the object.
26. A method of controlling a power consumption of a plurality of computing units powered by at least one power generation module, the plurality of computing units being separated into at least two containers, each of the containers including a respective container controller adapted to control the computing units within the container, the method comprising: receiving, via a power control module, from at least one sensor of the power generation module, power generation metrics of the power generation module; calculating a plurality of different target powers at a time t + 1 based on the power generation metrics, each of the different target powers being based on a different power generation statistic derived from the power generation metrics; selecting a most conservative target power from the calculated different target powers; outputting a power consumption change calculated as a function of the most conservative target power; selecting at least one computing unit, from a hierarchy of the plurality of computing units, for altering a power state thereof to achieve the power consumption change; and altering the power state of the selected at least one computing unit to achieve the power consumption change.
27. The method as recited in claim 26 wherein each of the plurality of different target powers is calculated via a PID controller.
28. The method as recited in claim 26 wherein the computing units are cryptocurrency miners, the hierarchy involving a hash-rate efficiency, with the computing units with a lowest hash-rate efficiency being reduced in power first.
29. The method as recited in claim 26 wherein the computing units are cryptocurrency miners, the hierarchy involving ownership, with the computing units owned by the operator of the at least one power generation module being reduced in power first.
30. The method as recited in claim 26 wherein the power generation metrics include at least one of an output power of the power generation module, a pressure of the power generation module, a coolant temperature of the power generation module and a percent load of the power generation module.
31. The method as recited in claim 26 wherein one of the containers includes the computing units including GPUs and another of the containers includes the computing units in the form of cryptocurrency miners, the cryptocurrency miners being lower priority in the hierarchy than the computing units including GPUs, the selected at least one computing unit being at least one of the cryptocurrency miners.