WO2015153699A1 - Computing systems, elements and methods for processing unstructured data - Google Patents

Computing systems, elements and methods for processing unstructured data

Info

Publication number
WO2015153699A1
Authority
WO
WIPO (PCT)
Prior art keywords
ximm
unit
cluster
processor
xockets
Prior art date
Application number
PCT/US2015/023746
Other languages
English (en)
Inventor
Stephen Belair
Parin DALAL
Original Assignee
Xockets, Llc
Priority date
Filing date
Publication date
Application filed by Xockets, Llc filed Critical Xockets, Llc
Publication of WO2015153699A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/161 Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning

Definitions

  • the present disclosure relates generally to computing systems, and more particularly to computing systems for processing unstructured data.
  • FIG. 1 is a block schematic diagram of a computing infrastructure according to an embodiment.
  • FIG. 2 is a block schematic diagram of a computing infrastructure according to another embodiment.
  • FIG. 3 is a block schematic diagram showing a resource allocation operation according to an embodiment.
  • FIG. 4 is a diagram showing cluster management in a server appliance according to an embodiment.
  • FIG. 5 is a diagram showing programs of a compute element processor according to an embodiment.
  • FIG. 6 is a diagram of a software defined infrastructure (SDI) according to an embodiment.
  • FIG. 7 is a diagram of a computing operation according to an embodiment.
  • FIG. 8 is a diagram showing a process for an SDI according to an embodiment.
  • FIG. 9 is a diagram showing a resource mapping transformation according to an embodiment.
  • FIG. 10 is a diagram showing a method according to an embodiment.
  • FIG. 11 is a diagram showing a software architecture according to an embodiment.
  • FIGS. 12A and 12B are diagrams showing computing modules according to embodiments.
  • FIG. 13 is a diagram of a server appliance according to an embodiment.
  • FIG. 14 is a diagram of a server according to an embodiment.
  • a computing infrastructure can run different distributed frameworks, for "big data" processing, as an example.
  • Such a computing infrastructure can host multiple diverse, large distributed frameworks with little change as compared to conventional systems.
  • a computing infrastructure can be conceptualized as including a cluster infrastructure, and a computational infrastructure.
  • a cluster infrastructure can manage and configure computing clusters, including but not limited to cluster resource allocation, distributed consensus/agreement, failure detection, replication, resource location, and data exchange methods.
  • a computational infrastructure can be directed to unstructured data, and can include two classes of applications: batch and streaming. Both classes of applications can apply the same types of transformations to the data sets. They can differ in the size of the data sets (batch applications like Hadoop typically handle very large data sets), but the data transformations can be similar, since the data is fundamentally unstructured and that determines the nature of the operations on the data, as loosely illustrated below.
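  • As a loose illustration (not part of the disclosure), the same word-count transformation can serve a batch collection or an incrementally arriving stream; only the data delivery differs:

```python
from collections import Counter
from typing import Iterable, Iterator

def word_count(records: Iterable[str]) -> Counter:
    """One transformation, reusable for both batch and streaming input."""
    counts: Counter = Counter()
    for record in records:
        counts.update(record.split())
    return counts

# Batch: the whole data set is available at once (Hadoop-style).
print(word_count(["big data", "unstructured data"]))

# Streaming: records arrive incrementally (Storm / Spark Streaming style),
# but the transformation applied per batch or window is the same function.
def record_stream() -> Iterator[str]:
    yield "big data"
    yield "unstructured data"

print(word_count(record_stream()))
```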
  • a computing infrastructure can include a computing "appliance" (referred to herein as a Xockets appliance) for improved processing of data.
  • Such an appliance can be integrated into a server system.
  • a Xockets appliance can be placed within the same rack as the corresponding server or, alternatively, in a different rack.
  • a computing infrastructure can accommodate different frameworks with little porting effort and ease of configuration, as compared to conventional systems. According to embodiments, allocation and use of resources for a framework can be transparent to a user.
  • a computing infrastructure can include cluster management to enable the integration of a Xockets appliance into a system having other components.
  • applications hosted by a computing system can include a cluster manager.
  • Mesos can be used in the cluster infrastructure.
  • a distributed computation application can be built on the cluster manager (such as Storm, Spark, Hadoop), that can utilize unique clusters (referred to herein as Xockets clusters) based on computing elements of the appliance.
  • a cluster manager can encapsulate the semantics of different frameworks, to enable their configuration. Xockets clusters can be divided along framework lines.
  • a cluster manager can include extensions to accommodate Xockets clusters.
  • resources provided by Xockets clusters can be described in terms of computational elements.
  • a computation element (CE) can correspond to an element within the appliance, and can include any of processor core(s), memory, programmable logic, or even predetermined fixed logic functions.
  • a computational element can include two ARM cores, a fixed amount of shared synchronous dynamic RAM (SDRAM) and one programmable logic unit, as sketched below.
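  • As a minimal sketch, a CE could be modeled as the unit of allocation; the field names and the SDRAM size are illustrative assumptions, not the disclosure's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComputationalElement:
    """Illustrative CE descriptor (field names and SDRAM size assumed)."""
    arm_cores: int = 2        # e.g., two ARM cores per CE
    sdram_mbytes: int = 512   # a fixed amount of shared SDRAM (size assumed)
    logic_units: int = 1      # one programmable logic unit

# A Xockets cluster's resources can then be offered in CE units rather
# than in processors or Gbytes of RAM.
offer = [ComputationalElement() for _ in range(16)]
print(len(offer), "CEs offered")
```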
  • a majority if not all of the computing elements can be formed on memory bus mounted modules (referred to herein as XIMMs) of the appliance.
  • computational elements can extend beyond memory bus mounted resources, and can include other elements on or accessible via the appliance, such as a unit processor (e.g., x86 processor) of the appliance and some amount of random access memory (RAM).
  • Describing resources in terms of XIMM computational elements is in contrast to conventional server approaches, which may allocate resources in terms of processors or Gbytes of RAM, metrics typical of conventional server nodes.
  • allocation of Xockets clusters can vary according to the particular framework.
  • FIG. 1 shows a framework 100 that can use resources of an appliance (i.e., Xockets clusters).
  • a framework scheduler can run on the cluster manager master 102 (e.g., Mesos Master) of the cluster.
  • a Xockets translation layer 104 can run on a host that will sit below the framework 100 and above the cluster manager 102. Resource allocations made in the framework calls into the cluster manager can pass through the Xockets translation layer 104.
  • a Xockets translation layer 104 can translate framework calls into requests relevant for a Xockets cluster 106.
  • a Xockets translation layer 104 can be relevant to a particular framework and its computational infrastructure.
  • a Xockets computational infrastructure can be particular to each distributed framework being hosted, and so the particulars of a framework's resource requirements will be understood and stored with the corresponding Xockets translation layer (104).
  • a Spark transformation on a DStream that is performing a countByWindow could require one computational element, whereas a groupByKeyAndWindow might require two computational elements, an x86 helper process and some amount of RAM depending upon window size.
  • For each Xockets cluster there can be a resource list associating the different transformations of a framework with the resources they require, as sketched below. Such a resource list is derived from the computational infrastructure of the hosted framework.
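  • As an illustrative sketch (the dict structure and values are assumptions, echoing the Spark examples above), such a resource list might look like:

```python
# Illustrative resource list for a Spark-hosted Xockets cluster: each
# framework transformation maps to the CE-denominated resources it needs.
# All values are assumptions for illustration.
SPARK_RESOURCE_LIST = {
    "countByWindow": {"compute_elements": 1},
    "groupByKeyAndWindow": {
        "compute_elements": 2,
        "x86_helper_processes": 1,
        "ram_mbytes": None,  # depends upon window size
    },
}

def resources_for(transformation: str) -> dict:
    """Look up the resources a transformation requires on a Xockets cluster."""
    return SPARK_RESOURCE_LIST[transformation]

print(resources_for("groupByKeyAndWindow"))
```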
  • a Xockets cluster 106 can include various computing elements CE0 to CEn, which can take the form of any of the various circuits described herein, or equivalents (i.e., processor cores, programmable logic, memory, and combinations thereof).
  • a Xockets cluster 106 can also include a host processor, which can be resident on the appliance housing the XIMMs which contain the computing elements (CE0 to CEn). Computing elements (CE0 to CEn) can be accessed by Xockets driver 112.
  • a framework can run on one or more Xockets appliances and one or more regular server clusters (i.e., a hybrid cluster).
  • hybrid cluster 108 can include conventional cluster elements such as processors 110-0/1 and RAM 110-2.
  • a proxy layer 214 can run above the Xockets driver 212 that can communicate with the cluster manager 202 master.
  • an appliance can reside under a top-of-the-rack (TOR) switch and can be part of a cluster that includes conventional servers from the rest of the rack, as well as additional racks, which can also contain a Xockets appliance.
  • additional policies can be implemented.
  • frameworks can be allocated resources from both Xockets appliance(s) and regular servers.
  • a local Xockets driver can be responsible for the allocation of its local XIMM resources (e.g., CEs), thus it may not be possible to remotely allocate a XIMM or its sub-components. That is, resources in a Xockets appliance can be tracked and managed by the Xockets driver running on the unit processor (e.g., x86) on the same Xockets appliance.
  • Xockets resources can continue to be offered in units of computational elements (CEs).
  • such resources may not include the number of unit (e.g., x86) processors or cores.
  • CE resources may be allocated only from the unit processor (e.g., x86) driver mastering the memory bus of the appliance (to which the XIMMs are connected).
  • FIG. 3 shows an allocation of resources operation for an arrangement like that of FIG. 2, according to an embodiment.
  • when running a cluster manager 202 master on a Xockets appliance directly, the cluster manager 202 master can pass resource allocations to a Xockets driver 212.
  • Proxy layer 214 can call into the Xockets driver 212 to allocate the physical resources of the Xockets appliance (i.e., CEs) to a framework.
  • a full cluster manager slave is not running on processor cores (e.g., ARM cores) of XIMMs. Rather, part of the cluster manager slave can run on the unit processor (x86) 416 host when the host is also the cluster manager master.
  • a cluster manager master does not communicate directly to the CEs of a Xockets appliance, all such direct communication occurring via the Xockets driver (e.g., 212). Therefore allocation requests of Xockets resources can terminate in the Xockets driver, so that it can manage the resources.
  • a cluster manager can communicate with the unit processor (x86) host in order to allocate its XIMM resources.
  • Xockets appliance host software can offer the resources of the appliance to the remote cluster manager master as a single node containing a certain number of CEs.
  • the CEs can then be resources private to a single remote node and the remote Xockets appliance(s) can look like a computational super-node.
  • resources can be allocated between Xockets nodes and regular nodes (i.e., nodes made of regular servers).
  • a default allocation policy can be for framework resources to use as many Xockets resources as are available, and to rely upon traditional resources only when there are not enough Xockets resources, as sketched below.
  • such a default policy can be overridden, allowing resources to be divided for best results.
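  • A minimal sketch of that default policy, assuming a simple CE-count model (function and field names are hypothetical):

```python
def allocate(requested: int, xockets_free: int, regular_free: int) -> dict:
    """Default policy sketch: draw on Xockets CEs first, and fall back to
    traditional server resources only when CEs run out (names assumed)."""
    from_xockets = min(requested, xockets_free)
    from_regular = requested - from_xockets
    if from_regular > regular_free:
        raise RuntimeError("insufficient cluster resources")
    return {"xockets_ces": from_xockets, "regular_nodes": from_regular}

# A framework asking for 10 units when only 6 CEs are free:
print(allocate(10, xockets_free=6, regular_free=8))
# -> {'xockets_ces': 6, 'regular_nodes': 4}
```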
  • In a Map-Reduce computation, it is very likely the Mappers or Reducers will run on a regular server processor (x86), with the Xockets resources used to ameliorate the shuffle and lighten the burden of the reduce phase, so that Xockets works cooperatively with regular server nodes.
  • the framework allocation would discriminate between regular and Xockets resources.
  • a cluster manager will not share the same Xockets cluster resources across frameworks.
  • Xockets clusters can be allocated to particular frameworks.
  • direct communication between a cluster manager master and slave computational elements will be proxied on the unit processor (x86) host if the cluster manager master is running locally.
  • a Xockets driver can control the XIMM resources (CEs) and that control plane can be conceptualized as running over the cluster manager.
  • Xockets processor (e.g., ARM) cores can run a stripped-down cluster manager slave (see FIG. 4).
  • a cluster manager layer can be used to manage control plane communication between the Xockets driver and the XIMM processors (ARMs), such as the loading, unloading and configuration of frameworks.
  • the Xockets driver can control the XIMM resources and that control plane will run over the cluster manager, where the Xockets driver is proxying the cluster manager when performing these functions.
  • a system can employ a cluster manager for Xockets clusters, not for sharing Xockets clusters across different frameworks, but for configuring and allocating Xockets nodes to particular frameworks.
  • systems can utilize Xockets appliances for processing unstructured data sets, whether in batch or streaming mode.
  • the operations on big unstructured data sets are pertinent to unstructured data and represent the transformations performed on data sets having those characteristics.
  • a computational infrastructure can include a Xockets Software Defined Infrastructure (SDI).
  • a Xockets SDI can minimize porting to the ARM cores of CEs, as well as leverage a common set of transformations across the frameworks that Xockets will support.
  • frameworks can run on unit processors (x86s) of a Xockets appliance. There can be little control plane presence on the XIMM processor (ARM) cores, even in the case the Xockets appliance operates as a cluster manager slave. As understood from above, part of the cluster manager slave can run on the unit processor (x86) while only a stripped-down part runs on the XIMM processors (ARMs). The latter part can allow the Xockets driver to control the frameworks running on XIMMs and to utilize the resources on the XIMMs for the data plane. In this way, once a XIMM cluster is configured, communication to the XIMMs can be reduced primarily to data plane communication.
  • If a framework requires more communication with the "Xockets node" (e.g., the Job Tracker communicating with the Task Tracker in Hadoop), such communication can happen on the unit processor (x86) with a logical counterpart representing the Xockets node, with the Xockets driver mediating actual communication to XIMM elements.
  • FIG. 5 is an example of processes running on a processor core of a CE (i.e., XIMM processor (ARM) core).
  • a processor core 520 can run an operating system (e.g., a version of Linux) 522-0, a user-level networking stack 522-1, a streaming infrastructure 522-2, a minimal cluster manager slave 522-3, and the relevant computation that gets assigned to that core (ARM) 522-4.
  • frameworks operating on unstructured data can be implemented as a pipelined graph constructed from transformational building blocks.
  • building blocks can be implemented by computations assigned to XIMM processor cores.
  • the distributed applications running on Xockets appliances can perform transformations such as the following on the data sets they operate on: map, reduce, partition by key, combine by key, merge, sort, filter or count. These transformations are understood to be exemplary "canonical" operations.
  • XIMM processor cores (and/or Xockets appliance computation elements) can be configured for any suitable transformation.
  • transformations can be implemented by XIMM hardware (e.g., ARM processors). Each such operation can take a function/code to implement, such as a map, reduce, combine, sort, etc.
  • a Xockets SDI can have a resource list (e.g., FIG. 6) for each type of transformation, and this will affect the cluster resource allocation.
  • These transformations can be optimally implemented on one or more computational elements of a XIMM.
  • Each of the transformations may take input parameters, such as a string to filter on, a key to combine on, etc.
  • a global framework can be configured by allocating to the XIMM cluster an amount of resources that correlates to the amount of resources in a normal cluster, and then assigning roles to different parts of XIMMs or to entire XIMMs. From this a workflow graph is constructed, defining inputs and outputs at each point in the graph.
  • FIG. 7 shows a work flow graph according to one particular embodiment.
  • Data can be streamed in from any of a variety of sources (DATA SOURCE0-2).
  • Data sources can be streaming data or batch data.
  • DATA SOURCE0 can arrive from a memory bus of a XIMM (XIMM0).
  • DATA SOURCE1 arrives over a network connection (which can be to the appliance or to the XIMM itself).
  • DATA SOURCE2 arrives from memory that is onboard the XIMM itself.
  • Various transformations can be performed by computing elements residing on XIMMs. Once one transformation is complete, the results can be transformed again in another resource. In particular embodiments, such processing can be on streams of data. A sketch of such a workflow graph follows below.
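  • As a minimal sketch (assuming a simple adjacency-list representation, with all node names hypothetical), a FIG. 7-style workflow graph could be captured as follows:

```python
# Illustrative workflow graph in the style of FIG. 7: sources feed
# transformations running on CEs, whose outputs feed further
# transformations; every node is told where its output goes.
workflow = {
    "source0": {"kind": "memory_bus",  "out": ["filter0"]},  # via XIMM0's bus
    "source1": {"kind": "network",     "out": ["filter0"]},  # network connection
    "source2": {"kind": "ximm_memory", "out": ["merge0"]},   # onboard memory
    "filter0": {"kind": "transform", "op": "filter", "out": ["merge0"]},
    "merge0":  {"kind": "transform", "op": "merge",  "out": ["sink0"]},
    "sink0":   {"kind": "output", "out": []},
}

def downstream(node: str) -> list:
    """Where a node streams its results next."""
    return workflow[node]["out"]

print(downstream("filter0"))  # -> ['merge0']
```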
  • framework requests for services can be translated into units corresponding to the Xockets architecture. Therefore, a Xockets SDI can implement the following steps: (1) Determine the types of computation being carried out by a framework. This can be reflected in the framework's configuration of a job that it will run on the cluster. This information can result in a framework's request for resources. For example, a job might result in a resource list for N nodes to implement a filter-by-key, K nodes to do a parallel join, as well as M nodes to participate in a merge. These resources are essentially listed out by their transformations, as well as how to hook them together in a work-flow graph.
  • the SDI can translate this into the resources required to implement the job on a Xockets cluster.
  • the Xockets SDI can include a correlation between fundamental transformations for a particular framework and XIMM resources.
  • a Xockets SDI can thus map transformations to XIMM resources needed.
  • any constraints that exist are applied as well (e.g., there might be a need to allocate two computational elements on the same XIMM but in different communication rings for a pipelined computation).
  • FIG. 8 is a flow diagram showing a process for an SDI 826.
  • a transformation list can be built from a framework 828. Transformations can be translated into a XIMM resource list 830. Transformations can be mapped to particular XIMM resources 832. These three steps are sketched below.
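  • A hedged sketch of the FIG. 8 flow, assuming simple dict-based data shapes and a naive in-order placement (real constraint handling is only indicated in a comment):

```python
def sdi_map(job_config: dict, resource_list: dict) -> dict:
    """Sketch of the SDI flow of FIG. 8 (data shapes assumed):
    (1) build a transformation list from the framework job,
    (2) translate it into a XIMM resource list,
    (3) map each transformation onto particular XIMM resources."""
    # (1) Transformations the job will run, e.g. ["filter", "merge"].
    transformations = job_config["transformations"]
    # (2) CE-denominated cost of each transformation.
    ximm_resources = {t: resource_list[t] for t in transformations}
    # (3) Naive in-order placement; real constraints (e.g., two CEs on the
    # same XIMM but in different communication rings) would be applied here.
    placement, next_ce = {}, 0
    for t, need in ximm_resources.items():
        placement[t] = list(range(next_ce, next_ce + need["compute_elements"]))
        next_ce += need["compute_elements"]
    return placement

job = {"transformations": ["filter", "merge"]}
costs = {"filter": {"compute_elements": 1}, "merge": {"compute_elements": 2}}
print(sdi_map(job, costs))  # -> {'filter': [0], 'merge': [1, 2]}
```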
  • FIG. 9 shows a mapping of transformations to computing elements of a Xockets node.
  • a Xockets node 908' can be conceptualized as including various CEs.
  • a Xockets node 908 can have CEs grouped and/or connected to create predetermined transforms. Connections, iterations, etc. can be made between transforms by programmed logic (PL) and/or helper processes.
  • PL programmed logic
  • FIG. 10 shows a method according to an embodiment.
  • Data packets (e.g., 1034-0 to -2) from different sessions (1040-0 to -2) can be collected.
  • packets can be collected over one or more interfaces 1042.
  • Such an action can include receiving data packets over a network connection of a server including a Xockets appliance and/or over a network connection of the Xockets appliance itself.
  • Collected packet data can be reassembled into corresponding complete values (1036, 1038, 1040). Such an action can include packet processing using server resources, including any of those shown herein. Based on characteristics of the values (e.g., 1034-0, 1034-1, 1034-2), complete values can be arranged in subsets 1046-0/1.
  • Transformations can then be made on the subsets as if they were originating from a same network session (1048, 1050).
  • Such action can include utilizing CEs of a Xockets appliance.
  • this can include streaming data through such CEs.
  • Transformed values 1056 can be emitted as packets on other network sessions 1040-x, 1040-y. An end-to-end sketch of this method follows below.
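  • A minimal end-to-end sketch of the FIG. 10 method, with reassembly and transformation heavily simplified and all function names assumed:

```python
from collections import defaultdict

def process(packets, key_of, transform):
    """Sketch of the FIG. 10 method (shapes assumed): collect packets from
    different sessions, reassemble complete values, arrange values into
    subsets by a characteristic, transform each subset as if it came from
    one network session, and return results to emit on other sessions."""
    # Collect and reassemble: join the fragments of each (session, value) id.
    fragments = defaultdict(list)
    for session, value_id, data in packets:
        fragments[(session, value_id)].append(data)
    values = [b"".join(parts) for parts in fragments.values()]
    # Arrange complete values into subsets based on a characteristic.
    subsets = defaultdict(list)
    for value in values:
        subsets[key_of(value)].append(value)
    # Transform each subset as if it originated from a single session.
    return {k: transform(vs) for k, vs in subsets.items()}

pkts = [("s0", 1, b"he"), ("s0", 1, b"llo"), ("s1", 2, b"hi")]
print(process(pkts, key_of=len, transform=sorted))
```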
  • when a system is configured for streaming data processing (e.g., Storm), it can be determined where data sources (e.g., Spouts) are, and how many of them there are.
  • If an input stream comes in from a network through a TOR, a XIMM cluster can be configured with the specified number of Spouts, all running on the unit processor (x86).
  • the Spouts can be configured to run on the storage XIMMs, wherever HDFS blocks are read.
  • Operations (e.g., Bolts) can then be configured. Each Bolt can be configured to perform its given operation with any parameters that might be necessary, and then, as part of the overall data flow graph, it will be told where to send its output, be it another computational element on the same XIMM, the IP address of another XIMM, etc.
  • a Bolt may need to be implemented that does a merge sort. This may require two pipelined computational elements on the same XIMM, but on different communication rings, as well as a certain amount of RAM (e.g., 512 Mbytes) to spill the results to, as sketched below.
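  • A sketch of how such a merge-sort Bolt's requirements and output wiring might be declared; the field names and values are assumptions for illustration, not Storm's or Xockets' actual configuration API:

```python
# Illustrative declaration of a merge-sort Bolt: two pipelined CEs on the
# same XIMM but on different communication rings, plus RAM to spill to.
merge_sort_bolt = {
    "operation": "merge_sort",
    "compute_elements": 2,
    "constraints": {"same_ximm": True, "distinct_rings": True},
    "spill_ram_mbytes": 512,
    # As part of the overall data flow graph, the Bolt is told where to
    # send its output: another CE on the same XIMM, or another XIMM's IP.
    "output": {"ximm_ip": "10.0.0.7", "compute_element": 3},
}
```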
  • FIG. 11 demonstrates the two levels that framework configuration and computations occupy, and summarizes a Xockets software architecture according to a particular embodiment.
  • FIG. 11 shows SDIs 1160, corresponding jobs 1158, a framework scheduler 1100-0 in a framework plane 1100-1, cluster managers 1102-0/1, CEs, a Xockets cluster 1108, a hybrid cluster 1108', and XIMMs 1164 of a Xockets hardware plane 1162.
  • Canonical transformations that are implemented as part of the Xockets computational infrastructure can have an implementation using the Xockets streaming architecture.
  • a streaming architecture can implement transformations on ARM cores (CEs), but in an optimal manner that reduces copies and utilizes HW logic.
  • the HW logic couples inputs and outputs and schedules data flows among or across XIMM processors (ARMs) of the same or adjacent computation elements.
  • the streaming infrastructure running on the XIMM processors has hooks to implement a computational algorithm in such a way that it is integrated into a streaming paradigm.
  • XIMMs can include special registers that accommodate and reflect input from classifiers running in the XIMM processor cores so that modifications to streams as they pass through the computational elements can provide indications to a next phase of processing of the stream.
  • an infrastructure can include Xockets hardware implemented in a Xockets appliance.
  • Xockets hardware can include XIMMs.
  • XIMMs can be modules compatible with a memory access structure, such as a memory bus.
  • XIMMs can occupy a DDR type memory slot.
  • As shown in FIGS. 12A and 12B, each XIMM can incorporate processor elements (e.g., ARM cores) 1201, memory elements 1203, and programmable logic 1205, highly interconnected with one another.
  • XIMMs can take various forms including a compute XIMM (FIG. 12A) and a Storage XIMM (FIG. 12B).
  • a Compute XIMM can have a number of cores (e.g., 24 ARM cores), programmable logic 1205 and a programmable switch 1207.
  • a Storage XIMM can include a smaller number of cores (e.g., 12 ARM cores) 1201, programmable logic 1205, a programmable switch 1207, and a relatively large amount of storage (e.g., 1.5 Tbytes of flash memory) 1203.
  • Each XIMM can also include a network connection 1209.
  • XIMMs can have a connection to a memory bus, to enable computing resources to be accessed via the memory bus.
  • a XIMM can include a DDR bus interface 1211. In particular embodiments, such an interface can be compatible with existing memory in-line module standards.
  • XIMMs 1351 can be connected together in a XIMM unit 1313 (FIG. 13).
  • XIMMs of a unit can be connected to a common memory bus (e.g., DDR bus) 1315.
  • the memory bus can be controlled by a unit processor (e.g., x86 processor) 1317 of the XIMM unit.
  • a unit processor 1317 can run a driver for accessing and configuring XIMMs over the memory bus.
  • a XIMM unit can include DIMMs connected to the same memory bus (to serve as RAM).
  • a network of XIMMs can form a XIMM cluster, whether it be storage XIMMs and compute XIMMs in some combination.
  • the network of XIMMs can occupy one or more rack units.
  • a XIMM cluster is tightly coupled, unlike conventional data center clusters.
  • XIMMs can communicate over a DDR memory bus with a hub-and-spoke model, with a Xockets Driver (e.g., an x86 based driver) being the hub.
  • Each XIMM can have a network connection that is connected to either a top of rack (TOR) switch or to other servers in the rack.
  • Such a connection can enable peer-to-peer XIMM-to-XIMM communication, not requiring a Xockets driver to facilitate the communication.
  • the XIMMs can be connected to each other or to other servers in a rack.
  • the XIMM cluster will appear to be a cluster with low and deterministic latencies, i.e., the tight coupling and deterministic HW scheduling within the XIMMs is not typical of an asynchronous distributed system.
  • FIG. 14 shows a rack arrangement with a TOR unit 1419 and network connections between various XIMM units 1451.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method can include: collecting data packets from different network sessions; reassembling the packets into complete values; placing the values into particular subsets; computing, in at least one general-purpose processor, a transformation of at least one subset executed as if the values originated from a same network session; and emitting data packets on another network session representing the transformation.
PCT/US2015/023746 2012-05-22 2015-03-31 Computing systems, elements and methods for processing unstructured data WO2015153699A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/283,287 US20170109299A1 (en) 2014-03-31 2016-09-30 Network computing elements, memory interfaces and network connections to such elements, and related systems
US15/396,318 US20170237672A1 (en) 2012-05-22 2016-12-30 Network server systems, architectures, components and related methods
US16/129,762 US11082350B2 (en) 2012-05-22 2018-09-12 Network server systems, architectures, components and related methods
US18/085,196 US20230231811A1 (en) 2012-05-22 2022-12-20 Systems, devices and methods with offload processing devices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201461973207P 2014-03-31 2014-03-31
US61/973,207 2014-03-31
US201461976471P 2014-04-07 2014-04-07
US61/976,471 2014-04-07

Related Child Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2015/023730 Continuation WO2015153693A1 (fr) 2012-05-22 2015-03-31 Interface, interface methods and systems for operating computing elements attached to a memory bus
US15/283,287 Continuation US20170109299A1 (en) 2012-05-22 2016-09-30 Network computing elements, memory interfaces and network connections to such elements, and related systems

Publications (1)

Publication Number Publication Date
WO2015153699A1 (fr) 2015-10-08

Family

ID=54241242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/023746 WO2015153699A1 (fr) 2012-05-22 2015-03-31 Computing systems, elements and methods for processing unstructured data

Country Status (1)

Country Link
WO (1) WO2015153699A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060048161A1 (en) * 2004-08-26 2006-03-02 De Rose Cesar Resource allocation method and system
US7496670B1 (en) * 1997-11-20 2009-02-24 Amdocs (Israel) Ltd. Digital asset monitoring system and method
US20130227558A1 (en) * 2012-02-29 2013-08-29 Vmware, Inc. Provisioning of distributed computing clusters
US20130318084A1 (en) * 2012-05-22 2013-11-28 Xockets IP, LLC Processing structured and unstructured data using offload processors

Similar Documents

Publication Publication Date Title
JP6653366B2 (ja) Computer cluster arrangement for processing computation tasks, and method for operating it
US11368385B1 (en) System and method for deploying, scaling and managing network endpoint groups in cloud computing environments
US10412021B2 (en) Optimizing placement of virtual machines
US20160350146A1 (en) Optimized hadoop task scheduler in an optimally placed virtualized hadoop cluster using network cost optimizations
US20200326993A1 (en) Reducing overlay network overhead across container hosts
EP3283974B1 (fr) Systèmes et procédés pour l'exécution d'unités d'exécution logicielles au moyen de processeurs logiciels
US20160359668A1 (en) Virtual machine placement optimization with generalized organizational scenarios
EP3204855A1 (fr) Attributions et/ou génération optimisées de machine virtuelle pour des tâches de réducteur
US20190007334A1 (en) Remote Hardware Acceleration
US10572421B2 (en) Topology-aware parallel reduction in an accelerator
CN113515483A (zh) A data transmission method and apparatus
Aarthee et al. Energy-aware heuristic scheduling using bin packing mapreduce scheduler for heterogeneous workloads performance in big data
WO2015153699A1 (fr) Computing systems, elements and methods for processing unstructured data
Gharehchopogh et al. Analysis of scheduling algorithms in grid computing environment
Zhang et al. Dynamic load-balanced multicast based on the Eucalyptus open-source cloud-computing system
Liu et al. Topology‐Aware Strategy for MPI‐IO Operations in Clusters
Jin et al.: Efficient Resource Disaggregation for Deep Learning Workloads
Pasricha et al. Analytical parallel approach to evaluate cluster based strassen’s matrix multiplication
US11520713B2 (en) Distributed bus arbiter for one-cycle channel selection using inter-channel ordering constraints in a disaggregated memory system
Evans Oversubscription and Your Data How User Level Scheduling Can Increase Data Flow.
WO2022074415A1 (fr) Device and method for coflow tracking and scheduling
CN113268355A (zh) Database connection method and apparatus for a distributed cluster
CN113015960A (zh) Infrastructure support in a cloud environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15774482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15774482

Country of ref document: EP

Kind code of ref document: A1