WO2019126720A1 - System and method for optimization and load balancing of computer clusters - Google Patents
System and method for optimization and load balancing of computer clusters
- Publication number
- WO2019126720A1 (PCT/US2018/067239)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- processor
- network
- database
- operating
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
Definitions
- The disclosure relates to the field of distributed computing, and more specifically to the field of load balancing and optimization of distributed computer clusters.
- A system and methods have been devised for optimization and load balancing of computer clusters, comprising a distributed computational graph, a server architecture using multidimensional time-series databases for continuous load simulation and forecasting, a server architecture using traditional databases for discrete load simulation and forecasting, and the use of a combination of real-time data and records of previous activity for continuous and precise load forecasting for computer clusters, datacenters, or servers.
- FIG. 1 is a block diagram illustrating an exemplary hardware architecture of a computing device used in various embodiments of the invention.
- FIG. 2 is a diagram of an exemplary architecture of a system for the capture and storage of time series data from sensors with heterogeneous reporting profiles according to an embodiment of the invention.
- FIG. 3 is a diagram illustrating an exemplary hardware architecture of a distributed computational graph interacting with multiple arrangements of computer cluster components for optimization and load forecasting.
- FIG. 4 is a method diagram illustrating the primary methods for creation of functions in a data pipeline and their storage on a server, according to a preferred aspect.
- FIG. 5 is a method diagram illustrating a data pipeline acting on data and recording the result, according to a preferred aspect.
- FIG. 6 is a method diagram illustrating the steps for forecasting server load using a data pipeline stored in a multidimensional time-series database, according to a preferred aspect.
- FIG. 7 is a block diagram illustrating an exemplary hardware architecture of a computing device.
- FIG. 8 is a block diagram illustrating an exemplary logical architecture for a client device.
- FIG. 9 is a block diagram showing an exemplary architectural arrangement of clients, servers, and external services.
- FIG. 10 is another block diagram illustrating an exemplary hardware architecture of a computing device.
- Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise.
- Devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical.
- Steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step).
- The illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the aspects, and does not imply that the illustrated process is preferred.
- Steps are generally described once per aspect, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some aspects or some occurrences, or some steps may be executed more than once in a given aspect or occurrence.
- FIG. 1 is a block diagram of an exemplary architecture for a system 100 for predictive analysis of very large data sets using a distributed computational graph.
- Streaming input feeds 110 may be a variety of data sources, which may include but are not limited to the internet 111, arrays of physical sensors 112, database servers 113, electronic monitoring equipment 114, and direct human interaction ranging from a relatively small number of participants to a large crowd-sourcing campaign. Streaming data from any combination of the listed sources, and from sources not listed, may also be expected as part of the operation of the invention, as the number of streaming input sources is not limited by the design.
- All incoming streaming data may be passed through a data filter software engine 120 to remove information that has been damaged in transit, is misconfigured, or is malformed in some way that precludes use.
- Many of the filter parameters may be expected to be preset prior to operation; however, the design of the invention makes provision for the behavior of the filter software engine 120 to be changed as the progression of analysis requires, through the automation of the system sanity and retrain software engine 163, which may serve to optimize system operation and analysis function.
- The data stream may also be split into two identical substreams at the data filter software engine 120, with one substream fed into a streaming analysis pathway that includes the transformation pipeline software engine 161 of the distributed computational graph 160. The other substream may be fed to the data formalization software engine 130 as part of the batch analysis pathway.
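- As an illustrative sketch only (the function and variable names below are hypothetical and not part of the disclosure), the filter-and-split step described above might be expressed as follows:

```python
# Minimal sketch (hypothetical names): filter a raw stream and split it into
# identical substreams for the streaming and batch analysis pathways.

def is_well_formed(record: dict) -> bool:
    # Assumed filter rule: drop records that are damaged, misconfigured, or malformed.
    return isinstance(record, dict) and "timestamp" in record and "payload" in record

def filter_and_split(raw_stream, streaming_sink, batch_sink):
    """Pass each usable record to both the streaming pathway (transformation
    pipeline engine 161) and the batch pathway (data formalization engine 130)."""
    for record in raw_stream:
        if not is_well_formed(record):
            continue            # discarded by the data filter engine 120
        streaming_sink(record)  # substream 1: real-time analysis
        batch_sink(record)      # substream 2: batch analysis / storage
```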
- The data formalization engine 130 formats the data stream entering the batch analysis pathway of the invention into data records to be stored by the input event data store 140.
- The input event data store 140 can be a database of any architectural type known to those knowledgeable in the art, but based upon the quantity of data the data store engine would be expected to store and retrieve, options using highly distributed storage and map-reduce query protocols, of which Hadoop is one but not the only example, may generally be preferable to relational database schemas.
- Analysis of data from the input event data store may be performed by the batch event analysis software engine 150.
- This engine may be used to analyze the data in the input event data store for temporal information such as trends; previous occurrences of the progression of a set of events, with outcome; the occurrence of a single specific event with all events recorded before and after, whether deemed relevant at the time or not; and the presence of a particular event with all documented possible causative and remedial elements, including best-guess probability information and heuristics such as task queue consumption rates and average job runtimes in the past.
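- As one hedged illustration of such a heuristic (assumed inputs; not the patented analysis itself), a queue drain-time estimate can be derived from a historical consumption rate and an average job runtime:

```python
# Illustrative sketch: estimate how long the current task queue will take to empty,
# using the observed consumption rate over a past window, one of the temporal
# heuristics the batch event analysis engine 150 might derive from stored events.

def estimate_drain_time(queue_depth: int,
                        jobs_completed: int,
                        window_seconds: float,
                        avg_job_runtime: float) -> float:
    """Return a rough forecast, in seconds, of time to drain the queue."""
    consumption_rate = jobs_completed / window_seconds  # jobs per second
    if consumption_rate <= 0:
        # No observed throughput: fall back to average runtime per queued job.
        return queue_depth * avg_job_runtime
    return queue_depth / consumption_rate

# Example: 1,200 queued jobs, 300 completed in the last 600 s, 2.5 s average runtime.
print(estimate_drain_time(1200, 300, 600.0, 2.5))  # -> 2400.0 seconds
```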
- The search parameters used by the batch event analysis software engine 150 are preset by those conducting the analysis at the beginning of the process; however, as the search matures and results are gleaned from the streaming data during transformation pipeline software engine 161 operation, providing the system more timely event progress details, the system sanity and retrain software engine 163 may automatically update the batch analysis parameters 150. Alternately, findings outside the system may lead the authors of the analysis to tune the batch analysis parameters administratively from outside the system 170, 162, 163.
- The real-time data analysis core 160 of the invention should be considered to be made up of a transformation pipeline software engine 161, a messaging engine 162, and a system sanity and retrain software engine 163.
- The messaging engine 162 has connections from both the batch and the streaming data analysis pathways and serves as a conduit for operational as well as result information between those two parts of the invention.
- The messaging engine also receives messages from those administering analyses 180. Messages aggregated by the messaging engine 162 may then be sent to the system sanity and retrain software engine 163 as appropriate.
- This is software that may be used to monitor the progress of streaming data analysis, optimizing coordination between the streaming and batch analysis pathways by modifying or "retraining" the operation of the data filter software engine 120, the data formalization software engine 130, the batch event analysis software engine 150, and the transformation pipeline engine 161 of the streaming pathway when the specifics of the search may change due to results produced during streaming analysis.
- The system sanity and retrain engine 163 may also monitor for data searches or transformations that are processing slowly or may have hung, and for results that are outside established data stability boundaries, so that actions can be implemented to resolve the issue. While the system sanity and retrain software engine 163 may be designed to act autonomously and employs computer learning algorithms, according to some arrangements status updates, or potentially direct changes to operational parameters, may be made by administrators, according to the embodiment.
- Streaming data entering from the outside data feeds 110 through the data filter software engine 120 may be analyzed in real time within the transformation pipeline software engine 161.
- A set of functions tailored to the analysis being run is applied to the input data stream.
- Functions may be applied in a linear, directed path or in more complex configurations.
- Functions may be modified over time during an analysis by the system sanity and retrain software engine 163, and the results of the transformation pipeline, impacted by the results of batch analysis, are then output in the format stipulated by the authors of the analysis, which may be a human-readable printout, an alarm, machine-readable information destined for another system, or any of a plurality of other forms known to those in the art.
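- A minimal sketch of a linear, directed chain of transformation functions, with hypothetical stage functions standing in for the analysis-specific ones, might look like this:

```python
# Minimal sketch (hypothetical functions): a linear, directed chain of
# transformation functions applied to each record of the input stream, as in the
# transformation pipeline software engine 161.

from functools import reduce

def normalize(record):
    record["value"] = float(record.get("value", 0.0))
    return record

def enrich(record):
    record["source"] = record.get("source", "unknown")
    return record

def score(record):
    record["score"] = record["value"] * 0.5  # placeholder transformation
    return record

PIPELINE = [normalize, enrich, score]  # functions tailored to the analysis being run

def run_pipeline(stream, pipeline=PIPELINE):
    for record in stream:
        yield reduce(lambda rec, fn: fn(rec), pipeline, record)

# The pipeline list itself can be swapped or re-ordered at run time, which is one
# way a retraining component could modify the functions during an analysis.
```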
- FIG. 2 is a diagram of an exemplary architecture of a system for the capture and storage of time series data from sensors with heterogeneous reporting profiles according to an embodiment of the invention 200.
- A plurality of sensor devices 210a-n stream data to a collection device, in this case a web server acting as a network gateway 215.
- Sensors 210a-n can be of several forms, some non-exhaustive examples being: physical sensors measuring humidity, pressure, temperature, orientation, and the presence of a gas; or virtual sensors, such as programs measuring a level of network traffic, memory usage in a controller, and the number of times the word "refill" is used in a stream of email messages on a particular network segment, to name a small few of the many diverse forms known to the art.
- The sensor data is passed without transformation to the data management engine 220, where it is aggregated and organized for storage in a specific type of data store 225 designed to handle the multidimensional time series data resultant from sensor data.
- Raw sensor data can exhibit highly different delivery characteristics. Some sensor sets may deliver low to moderate volumes of data continuously.
- The data stream management engine 220 would hold incoming data in memory, keeping only the parameters, or "dimensions", from within the larger sensor stream that are pre-decided by the administrator of the study as important, with instructions to store them transmitted from the administration device 212.
- The data stream management engine 220 would then aggregate the data from multiple individual sensors and apportion that data at a predetermined interval, for example every 10 seconds, using the timestamp as the key when storing the data to a multidimensional time series data store over a single swimlane of sufficient size.
- The invention also can make use of event-based storage triggers, where a predetermined number of data receipt events, as set at the administration device 212, triggers transfer of a data block consisting of the apportioned number of events as one dimension and a number of sensor IDs as the other.
- The system time at commitment, or a timestamp that is part of the sensor data received, is used as the key for the data block value of the key-value pair.
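- A rough sketch of this interval-based commitment, assuming a simple record shape and using an ordinary dictionary to stand in for the time-series data store 225, follows:

```python
# Rough sketch (assumed record shape): aggregate readings from many sensors into
# 10-second buckets and commit each bucket to a key-value store with the interval
# timestamp as the key, as the data stream management engine 220 is described doing.

from collections import defaultdict

INTERVAL = 10            # seconds per bucket
DIMENSIONS = ("temp",)   # dimensions pre-selected at the administration device

def bucket_key(timestamp: float) -> int:
    return int(timestamp // INTERVAL) * INTERVAL

def aggregate_and_commit(readings, store: dict):
    """readings: iterable of dicts like {"sensor_id": ..., "timestamp": ..., "temp": ...}.
    store: any key-value mapping standing in for the time-series data store 225."""
    buckets = defaultdict(dict)
    for r in readings:
        kept = {d: r[d] for d in DIMENSIONS if d in r}   # keep only chosen dimensions
        buckets[bucket_key(r["timestamp"])][r["sensor_id"]] = kept
    for ts, block in buckets.items():
        store[ts] = block   # the timestamp is the key of the key-value pair
    return store
```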
- The invention can also accept a raw data stream with commitment occurring when the accumulated stream data reaches a pre-designated size set at the administration device 212.
- The embodiment of the invention can, if capture parameters are pre-set at the administration device 212, combine the data movement capacity of two or more swimlanes, the combined bandwidth being dubbed a metaswimlane, transparently to the committing process, to accommodate the influx of data in need of commitment. All sensor data, regardless of delivery circumstances, are stored in a multidimensional time series data store 225, which is designed for very low overhead, rapid data storage, and minimal maintenance needs so as not to sap resources.
- The embodiment uses a key-value pair data store, examples of which are Riak, Redis, and Berkeley DB, for their low overhead and speed, although the invention is not specifically tied to a single data store type to the exclusion of others known in the art, should another data store with better response and feature characteristics emerge. Due to factors easily surmised by those knowledgeable in the art, data store commitment reliability is dependent on data store data size under the conditions intrinsic to time series sensor data analysis. The number of data records must be kept relatively low for the herein disclosed purpose. As an example, one group of developers restricts the size of their multidimensional time series key-value pair data store to
- The archival storage is included 230. This archival storage might be locally provided by the user, might be cloud-based such as that offered by Amazon Web Services or Google, or could be any other available very large capacity storage method known to those skilled in the art.
- "data_spec" might be replaced by a list of individual sensors from a larger array of sensors, and each sensor in the list might be given a human-readable identifier in the format "sensor AS identifier". "unit" allows the researcher to assign a periodicity for the sensor data such as second (s), minute (m), or hour (h).
- One or more transformational filters, which include but are not limited to mean, median, variance, standard deviation, standard linear interpolation, or Kalman filtering and smoothing, may be applied, and the data is then formatted in one or more formats, examples of which are text, JSON, KML, GEOJSON, and TOPOJSON, among others known to the art, depending on the intended use of the data.
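- The exact query syntax is not reproduced here; the following hedged sketch only illustrates resampling one sensor's series to a chosen unit, applying a mean filter, and emitting JSON as one of the listed output formats:

```python
# Illustrative sketch (not the patent's actual query syntax): resample one sensor's
# series to a chosen unit, apply a simple transformational filter (here the mean),
# and emit the result as JSON.

import json
from statistics import mean

def summarize(series, unit_seconds: int = 60, alias: str = "sensor_1 AS intake_temp"):
    """series: list of (timestamp, value) tuples for a single sensor."""
    buckets = {}
    for ts, value in series:
        buckets.setdefault(int(ts // unit_seconds) * unit_seconds, []).append(value)
    return json.dumps({
        "data_spec": alias,                # human-readable "sensor AS identifier" form
        "unit": f"{unit_seconds}s",
        "filter": "mean",
        "points": [{"t": t, "v": mean(vs)} for t, vs in sorted(buckets.items())],
    })

print(summarize([(0, 20.1), (30, 20.5), (65, 21.0)]))
```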
- FIG. 3 is a diagram illustrating an exemplary hardware architecture of a distributed computational graph 160 interacting with multiple arrangements of computer cluster components for optimization and load forecasting.
- A distributed computational graph 160 is applied across a network 320 and connects to a server, which can contain one of two possible architectures 330, 340.
- One such server architecture 330 operates a multidimensional time-series database (MDTSDB) 331, operating with a graphstack system 332, which serve to record events occurring both on the server 330 and with sensors and devices connected to the server, of which there may be zero or several, according to a preferred aspect.
- Events in an MDTSDB 331 operating in a graphstack environment 332 are recorded as they occur, in a sequence based on the time the events occurred, and relationships between data across timespans are used in a data pipeline for load forecasting 333. It is important to note that in this configuration the load forecasting application 333 relies on input from the graphstack system 332 working with the MDTSDB 331. In such a configuration 330, a data pipeline from a distributed computational graph 160 may operate dynamically on data from the MDTSDB 331, leading to dynamic and changing results from the data pipeline and resulting in highly accurate load forecasting 333.
- An alternative device architecture 340 illustrates a second use case for the system.
- A load forecasting application 341 may be run without an MDTSDB 331 or a time series of data from server activities.
- A database 342 may exist which stores a data pipeline used in the distributed computational graph 160 for data processing, as formatted text 343.
- Databases 342 which may be used include SQL databases and NoSQL databases such as MONGODBTM, where the formatted text 343 may be relational entries in the case of SQL and other relational databases 342, or JavaScript Object Notation (JSON) stored as a document in NoSQL databases 342.
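- As a hedged sketch (hypothetical schema and field names), a pipeline definition stored as formatted text could look like the following, using SQLite purely for illustration in place of the SQL or NoSQL databases 342 mentioned above:

```python
# Hedged sketch (hypothetical schema): persist a data pipeline definition as
# formatted text. A JSON document of this shape could be stored in a NoSQL store,
# or, as below, as a row in a relational database.

import json
import sqlite3

pipeline_doc = {
    "name": "load_forecast_pipeline",
    "stages": [
        {"fn": "normalize", "args": {}},
        {"fn": "aggregate", "args": {"interval_s": 10}},
        {"fn": "forecast", "args": {"horizon_s": 300}},
    ],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pipelines (name TEXT PRIMARY KEY, definition TEXT)")
conn.execute("INSERT INTO pipelines VALUES (?, ?)",
             (pipeline_doc["name"], json.dumps(pipeline_doc)))

# A load forecasting application 341 would later read the row back and rebuild
# the pipeline from the stored stage list.
row = conn.execute("SELECT definition FROM pipelines").fetchone()
print(json.loads(row[0])["stages"][0]["fn"])  # -> normalize
```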
- FIG. 4 is a method diagram illustrating the primary methods for creation of functions in a data pipeline and their storage on a server, according to a preferred aspect.
- Data pipelines 410 as used in the distributed computational graph 160 may be written either manually by a human 411, or may be produced partially or completely procedurally 412, with the use of time-series data gathered from an MDTSDB 331. In such a configuration 330, several functions in a data pipeline for a load forecasting service 333 may be dynamic and based on MDTSDB data, representing semi-continuous data flow over time from any devices connected to the server or computer cluster and processed in the past by the distributed computational graph 160.
- A load forecasting application 341 may load a discrete amount of data from records held in a database or datastore 342 with formatted text 343, which may hold functions for the load forecasting data pipeline that are written manually by a human 411.
- The server 330, 340 or servers in a computer cluster receive these functions 420, which may be in the form of plain text or formatted text 343, or data in an MDTSDB system 331.
- FIG. 5 is a method diagram illustrating a data pipeline acting on data and recording the result, according to a preferred aspect.
- A request from a user or service is sent 510 to a server using either basic configuration outlined in FIG. 3, 330, 340. This can be for load forecasting, or for other purposes such as text analysis, image recognition, or other uses for data pipelines and distributed computational graphs 160 for computing clusters to optimize tasks across multiple devices.
- At least one server may send data through the pipeline 520, which means to refine data through a series of functions, subroutines, or other processes, the processes being defined by either automation 412 or manual input 411.
- Data proceeds through a pipeline 530 which may hold an undetermined number of functions 531, 532, 533.
- A server working in a datacenter run by GOOGLETM could, using this system, use data pipelines run by distributed computational graphs 160 and MDTSDB server architectures 330 to more efficiently predict server load for image recognition in their search engine, and simultaneously use a different pipeline 530 which may be used in the image recognition search engine itself.
- Steps in this example could consist of, but need not be limited to, transforming an image inputted by the user to a specific resolution, recognizing color densities across regions of the image, and locating images in their databases or on the internet which have similar color density regions.
- The output of the pipeline is recorded 540, possibly only in RAM to be deleted later after it is used for some other purpose, but the data may be recorded in a database as well, according to whichever server configuration is used 330, 340.
- FIG. 6 is a method diagram illustrating the steps for forecasting server load using a data pipeline stored in a multidimensional time-series database 331, according to a preferred aspect.
- A server's entirety of data 610 is used for load simulation and forecasting, which comprises data from current activities 611 as well as records 614 from previous active periods on the server or computing cluster.
- A device in such a cluster, or a lone server in some cases, may be running tasks 612, for which it is important to calculate the computing time required to accomplish them 613.
- Such tasks may be web pages loading, web apps running, interaction with game players for online video games, and more. This is made especially easy in MDTSDB configurations 330, which provide continuous time-series data on the activities of connected devices and services.
- Records of server or device activity are also accessed 614, which may be used in systems of any configuration 330, 340, for discrete load simulation and forecasting.
- The pipeline functions, which may be written manually by a human 411 or determined partially or entirely automatically based on MDTSDB architectures 330, 412, will then act on this data to simulate the expected load 620 and then take appropriate measures to optimize the tasks running and decide which tasks to delegate to other connected machines, based on this load 630.
- This optimization 630 may take the form of known and state-of-the-art algorithms using the new data provided by an MDTSDB system 330 or may consist of entirely new algorithms as they are devised by those working in the field.
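- A conceptual sketch of load simulation and delegation under assumed inputs (this is not the patented algorithm, only an illustration of combining current tasks with historical records) follows:

```python
# Conceptual sketch (assumed inputs): combine current task load with a forecast
# derived from historical activity records, then decide how much work to delegate
# to other machines in the cluster.

def forecast_load(current_tasks, historical_peaks, capacity: float):
    """current_tasks: list of estimated CPU-seconds for tasks now running.
    historical_peaks: recent peak loads (same units) from stored activity records."""
    current = sum(current_tasks)
    expected_incoming = max(historical_peaks) if historical_peaks else 0.0
    expected_total = current + expected_incoming
    overflow = max(0.0, expected_total - capacity)
    return expected_total, overflow   # overflow is the load to delegate elsewhere

expected, to_delegate = forecast_load(
    current_tasks=[30.0, 45.0, 10.0],
    historical_peaks=[60.0, 80.0, 75.0],
    capacity=120.0,
)
print(expected, to_delegate)  # -> 165.0 45.0
```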
- The techniques disclosed herein may be implemented on hardware or a combination of software and hardware. For example, they may be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specially constructed machine, on an application-specific integrated circuit ("ASIC"), or on a network interface card.
- Software/hardware hybrid implementations of at least some of the aspects disclosed herein may be implemented on a programmable network-resident machine (which should be understood to include intermittently connected network-aware machines) selectively activated or reconfigured by a computer program stored in memory.
- Such network devices may have multiple network interfaces that may be configured or designed to utilize different types of network communication protocols.
- A general architecture for some of these machines may be described herein in order to illustrate one or more exemplary means by which a given unit of functionality may be implemented.
- At least some of the features or functionalities of the various aspects disclosed herein may be implemented on one or more general-purpose computers associated with one or more networks, such as for example an end-user computer system, a client computer, a network server or other server system, a mobile computing device (e.g., tablet computing device, mobile phone, smartphone, laptop, or other appropriate computing device), a consumer electronic device, a music player, or any other suitable electronic device, router, switch, or other suitable device, or any combination thereof.
- At least some of the features or functionalities of the various aspects disclosed herein may be implemented in one or more virtualized computing environments (e.g., network computing clouds, virtual machines hosted on one or more physical computing machines, or other appropriate virtual environments).
- Referring now to FIG. 7, there is shown a block diagram depicting an exemplary computing device 10 suitable for implementing at least a portion of the features or functionalities disclosed herein.
- Computing device 10 may be, for example, any one of the computing machines listed in the previous paragraph, or indeed any other electronic device capable of executing software- or hardware-based instructions according to one or more programs stored in memory.
- Computing device 10 may be configured to communicate with a plurality of other computing devices, such as clients or servers, over communications networks such as a wide area network, a metropolitan area network, a local area network, a wireless network, the Internet, or any other network, using known protocols for such communication, whether wireless or wired.
- Computing device 10 includes one or more central processing units (CPUs) 12, one or more interfaces 15, and one or more busses 14 (such as a peripheral component interconnect (PCI) bus).
- CPU 12 may be responsible for implementing specific functions associated with the functions of a specifically configured computing device or machine.
- A computing device 10 may be configured or designed to function as a server system utilizing CPU 12, local memory 11 and/or remote memory 16, and interface(s) 15.
- CPU 12 may be caused to perform one or more of the different types of functions and/or operations under the control of software modules or components, which for example, may include an operating system and any appropriate applications software, drivers, and the like.
- CPU 12 may include one or more processors 13 such as, for example, a processor from one of the Intel, ARM, Qualcomm, and AMD families of microprocessors.
- Processors 13 may include specially designed hardware such as application-specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), field-programmable gate arrays (FPGAs), and so forth, for controlling operations of computing device 10.
- A local memory 11, such as non-volatile random access memory (RAM) and/or read-only memory (ROM), including for example one or more levels of cached memory, may also be coupled to CPU 12.
- Memory 11 may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, and the like. It should be further appreciated that CPU 12 may be one of a variety of system-on-a-chip (SOC) type hardware that may include additional hardware such as memory or graphics processing chips, such as a QUALCOMM SNAPDRAGONTM or SAMSUNG EXYNOSTM CPU as are becoming increasingly common in the art, such as for use in mobile devices or integrated devices.
- Interfaces 15 are provided as network interface cards (NICs).
- NICs control the sending and receiving of data packets over a computer network; other types of interfaces 15 may for example support other peripherals used with computing device 10.
- Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, graphics interfaces, and the like.
- Interfaces may be provided such as, for example, universal serial bus (USB), Serial, Ethernet, FIREWIRETM, THUNDERBOLTTM, PCI, parallel, radio frequency (RF), BLUETOOTHTM, near-field communications (e.g., using near-field magnetics), 802.11 (WiFi), frame relay, TCP/IP, ISDN, fast Ethernet interfaces, Gigabit Ethernet interfaces, Serial ATA (SATA) or external SATA (ESATA) interfaces, high-definition multimedia interface (HDMI), digital visual interface (DVI), analog or digital audio interfaces, asynchronous transfer mode (ATM) interfaces, high-speed serial interface (HSSI) interfaces, Point of Sale (POS) interfaces, fiber data distributed interfaces (FDDIs), and the like.
- Such interfaces 15 may include physical ports appropriate for communication with appropriate media. In some cases, they may also include an independent processor (such as a dedicated audio or video processor, as is common in the art for high-fidelity A/V hardware interfaces) and, in some instances, volatile and/or non-volatile memory (e.g., RAM).
- Although FIG. 7 illustrates one specific architecture for a computing device 10 for implementing one or more of the inventions described herein, it is by no means the only device architecture on which at least a portion of the features and techniques described herein may be implemented.
- Architectures having one or any number of processors 13 may be used, and such processors 13 may be present in a single device or distributed among any number of devices.
- In some embodiments, a single processor 13 handles communications as well as routing computations, while in other embodiments a separate dedicated communications processor may be provided.
- Features or functionalities may be implemented in a system according to the invention that includes a client device (such as a tablet device or smartphone running client software) and server systems (such as a server system described in more detail below).
- The system of the present invention may employ one or more memories or memory modules (such as, for example, remote memory block 16 and local memory 11) configured to store data, program instructions for the general-purpose network operations, or other information relating to the functionality of the embodiments described herein (or any combinations of the above).
- Program instructions may control execution of or comprise an operating system and/or one or more applications, for example.
- Memory 16 or memories 11, 16 may also be configured to store data structures, configuration data, encryption data, historical system operations information, or any other specific or generic non-program information described herein.
- Nontransitory machine-readable storage media may, for example, be configured or designed to store program instructions, state information, and the like for performing various operations described herein.
- Examples of such nontransitory machine-readable storage media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM), flash memory (as is common in mobile devices and integrated systems), solid state drives (SSD) and "hybrid SSD" storage drives that may combine physical components of solid state and hard disk drives in a single hardware device (as are becoming increasingly common in the art with regard to personal computers), memristor memory, random access memory (RAM), and the like.
- Such storage means may be integral and non-removable (such as RAM hardware modules that may be soldered onto a motherboard or otherwise integrated into an electronic device), or they may be removable, such as swappable flash memory modules (such as "thumb drives" or other removable media designed for rapidly exchanging physical storage devices), "hot-swappable" hard disk drives or solid state drives, removable optical storage discs, or other such removable media; such integral and removable storage media may be utilized interchangeably.
- Program instructions include object code, such as may be produced by a compiler; machine code, such as may be produced by an assembler or a linker; byte code, such as may be generated by, for example, a JAVATM compiler and executed using a Java virtual machine or equivalent; and files containing higher-level code that may be executed by the computer using an interpreter (for example, scripts written in Python, Perl, Ruby, Groovy, or any other scripting language).
- Systems according to the present invention may be implemented on a standalone computing system.
- Referring now to FIG. 8, there is shown a block diagram depicting a typical exemplary architecture of one or more embodiments or components thereof on a standalone computing system.
- Computing device 20 includes processors 21 that may run software that carries out one or more functions or applications of embodiments of the invention, such as for example a client application 24.
- Processors 21 may carry out computing instructions under control of an operating system 22 such as, for example, a version of the MICROSOFT WINDOWSTM operating system, APPLE OSXTM or iOSTM operating systems, some variety of the Linux operating system, the ANDROIDTM operating system, or the like.
- One or more shared services 23 may be operable in system 20 and may be useful for providing common services to client applications 24.
- Services 23 may for example be WINDOWSTM services, user-space common services in a Linux environment, or any other type of common service architecture used with operating system 21.
- Input devices 28 may be of any type suitable for receiving user input, including for example a keyboard, touchscreen, microphone (for example, for voice input), mouse, touchpad, trackball, or any combination thereof.
- Output devices 27 may be of any type suitable for providing output to one or more users, whether remote or local to system 20, and may include for example one or more screens for visual output, speakers, printers, or any combination thereof.
- Memory 25 may be random-access memory having any structure and architecture known in the art, for use by processors 21, for example to run software.
- Storage devices 26 may be any magnetic, optical, mechanical, memristor, or electrical storage device for storage of data in digital form (such as those described above, referring to FIG. 7).
- Examples of storage devices 26 include flash memory, magnetic hard drive, CD-ROM, and/or the like.
- Systems of the present invention may be implemented on a distributed computing network, such as one having any number of clients and/or servers.
- Referring now to FIG. 9, there is shown a block diagram depicting an exemplary architecture 30 for implementing at least a portion of a system according to an embodiment of the invention on a distributed computing network.
- Any number of clients 33 may be provided.
- Each client 33 may run software for implementing client-side portions of the present invention; clients may comprise a system 20 such as that illustrated in FIG. 8.
- Any number of servers 32 may be provided for handling requests received from one or more clients 33.
- Clients 33 and servers 32 may communicate with one another via one or more electronic networks 31, which may be in various embodiments any of the Internet, a wide area network, a mobile telephony network (such as CDMA or GSM cellular networks), a wireless network (such as WiFi, WiMAX, LTE, and so forth), or a local area network (or indeed any network topology known in the art; the invention does not prefer any one network topology over any other).
- Networks 31 may be implemented using any known network protocols, including for example wired and/or wireless protocols.
- Servers 32 may call external services 37 when needed to obtain additional information, or to refer to additional data concerning a particular call. Communications with external services 37 may take place, for example, via one or more networks 31.
- External services 37 may comprise web-enabled services or functionality related to or installed on the hardware device itself. For example, in an embodiment where client applications 24 are implemented on a smartphone or other electronic device, client applications 24 may obtain information stored in a server system 32 in the cloud or on an external service 37 deployed on one or more of a particular enterprise's or user's premises.
- Clients 33 or servers 32 may make use of one or more specialized services or appliances that may be deployed locally or remotely across one or more networks 31.
- One or more databases 34 may be used or referred to by one or more embodiments of the invention. It should be understood by one having ordinary skill in the art that databases 34 may be arranged in a wide variety of architectures and using a wide variety of data access and manipulation means.
- One or more databases 34 may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology such as those referred to in the art as "NoSQL" (for example, HADOOP CASSANDRATM, GOOGLE BIGTABLETM, and so forth).
- Variant database architectures such as column-oriented databases, in-memory databases, clustered databases, distributed databases, or even flat-file data repositories may be used according to the invention.
- Where the term "database" is used, any combination of known or future database technologies may be used as appropriate, unless a specific database technology or a specific arrangement of components is specified for a particular embodiment herein.
- The term "database" as used herein may refer to a physical database machine, a cluster of machines acting as a single database system, or a logical database within an overall database management system.
- Security and configuration management are common information technology (IT) and web functions, and some amount of each is generally associated with any IT or web system. It should be understood by one having ordinary skill in the art that any configuration or security subsystems known in the art now or in the future may be used in conjunction with embodiments of the invention without limitation, unless a specific security system 36 or configuration system 35 or approach is specifically required by the description of any specific embodiment.
- FIG. 10 shows an exemplary overview of a computer system 40 as may be used in any of the various locations throughout the system. It is exemplary of any computer that may execute code to process data. Various modifications and changes may be made to computer system 40 without departing from the broader scope of the system and method disclosed herein.
- Central processor unit (CPU) 41 is connected to bus 42, to which bus is also connected memory 43, nonvolatile memory 44, display 47, input/output (I/O) unit 48, and network interface card (NIC) 53.
- I/O unit 48 may, typically, be connected to keyboard 49, pointing device 50, hard disk 52, and real-time clock 51.
- NIC 53 connects to network 54, which may be the Internet or a local network, which local network may or may not have connections to the Internet.
- A power supply unit 45 is connected, in this example, to a main alternating current (AC) supply 46.
- Functionality for implementing systems or methods of the present invention may be distributed among any number of client and/or server components.
- Various software modules may be implemented for performing various functions in connection with the present invention, and such modules may be variously implemented to run on server and/or client components.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Disclosed are a system and methods for optimization and load balancing for computer clusters, comprising: a distributed computational graph; a server architecture using multidimensional time-series databases for continuous load simulation and forecasting; a server architecture using traditional databases for discrete load simulation and forecasting; and the use of a combination of real-time data and records of previous activity for continuous and precise load forecasting for computer clusters, datacenters, or servers.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/849,901 US11023284B2 (en) | 2015-10-28 | 2017-12-21 | System and method for optimization and load balancing of computer clusters |
US15/849,901 | 2017-12-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019126720A1 true WO2019126720A1 (fr) | 2019-06-27 |
Family
ID=66995176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/067239 WO2019126720A1 (fr) | 2017-12-21 | 2018-12-21 | Système et procédé d'optimisation et d'équilibrage de charge de grappes d'ordinateurs |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019126720A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111757145A (zh) * | 2020-07-31 | 2020-10-09 | 四川巧夺天工信息安全智能设备有限公司 | 一种多路负载均衡的监控视频的批量处理方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090222562A1 (en) * | 2008-03-03 | 2009-09-03 | Microsoft Corporation | Load skewing for power-aware server provisioning |
US20140156806A1 (en) * | 2012-12-04 | 2014-06-05 | Marinexplore Inc. | Spatio-temporal data processing systems and methods |
-
2018
- 2018-12-21 WO PCT/US2018/067239 patent/WO2019126720A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090222562A1 (en) * | 2008-03-03 | 2009-09-03 | Microsoft Corporation | Load skewing for power-aware server provisioning |
US20140156806A1 (en) * | 2012-12-04 | 2014-06-05 | Marinexplore Inc. | Spatio-temporal data processing systems and methods |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111757145A (zh) * | 2020-07-31 | 2020-10-09 | 四川巧夺天工信息安全智能设备有限公司 | 一种多路负载均衡的监控视频的批量处理方法 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220012103A1 (en) | System and method for optimization and load balancing of computer clusters | |
US11836533B2 (en) | Automated reconfiguration of real time data stream processing | |
US11635994B2 (en) | System and method for optimizing and load balancing of applications using distributed computer clusters | |
US10204147B2 (en) | System for capture, analysis and storage of time series data from sensors with heterogeneous report interval profiles | |
CN109074377B (zh) | 用于实时处理数据流的受管理功能执行 | |
US11568404B2 (en) | Data monetization and exchange platform | |
CN109643312B (zh) | 托管查询服务 | |
US11321085B2 (en) | Meta-indexing, search, compliance, and test framework for software development | |
US10210255B2 (en) | Distributed system for large volume deep web data extraction | |
US10467036B2 (en) | Dynamic metering adjustment for service management of computing platform | |
US8447851B1 (en) | System for monitoring elastic cloud-based computing systems as a service | |
US20210166170A1 (en) | System for fully integrated predictive decision-making and simulation | |
US9727625B2 (en) | Parallel transaction messages for database replication | |
US9633105B2 (en) | Multidimensional data representation | |
US20240250996A1 (en) | System and method for algorithm crowdsourcing, monetization, and exchange | |
US9917885B2 (en) | Managing transactional data for high use databases | |
US20240256982A1 (en) | Removing biases within a distributed model | |
US10084866B1 (en) | Function based dynamic traffic management for network services | |
EP3440569A1 (fr) | Système de capture entièrement intégrée et d'analyse d'informations commerciales aboutissant à une prise de décision et une simulation prédictives | |
US9852203B1 (en) | Asynchronous data journaling model in hybrid cloud | |
WO2019126720A1 (fr) | Système et procédé d'optimisation et d'équilibrage de charge de grappes d'ordinateurs | |
US20180181914A1 (en) | Algorithm monetization and exchange platform | |
US20230208820A1 (en) | System and methods for predictive cyber-physical resource management | |
CN114238008A (zh) | 一种数据获取方法、装置、系统、电子设备及存储介质 | |
US20180181537A1 (en) | Multitemporal data analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18892821 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18892821 Country of ref document: EP Kind code of ref document: A1 |