WO2022093256A1 - Provisioning of computational resources - Google Patents

Provisioning of computational resources

Info

Publication number
WO2022093256A1
Authority
WO
WIPO (PCT)
Prior art keywords
computational
transaction requests
processing
subset
computational resources
Prior art date
Application number
PCT/US2020/058108
Other languages
French (fr)
Inventor
Helen Balinsky
Josep ABAD PEIRO
Remy HUSSON
Roberto JORDANEY
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/US2020/058108 priority Critical patent/WO2022093256A1/en
Publication of WO2022093256A1 publication Critical patent/WO2022093256A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q2220/00Business processing using cryptography

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Software Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Technology Law (AREA)
  • General Engineering & Computer Science (AREA)
  • Advance Control (AREA)

Abstract

A method and apparatus for processing a set of secure ledger transaction requests are provided. The method comprises determining a partition of the set of transaction requests into a first subset and second subset and provisioning computational resources having a first computational characteristic for processing the first subset of transaction requests and computational resources having a second computational characteristic, different from the first computational characteristic, for processing the second subset of transaction requests. The partition optimizes a performance parameter for processing the transaction requests.

Description

PROVISIONING OF COMPUTATIONAL RESOURCES
BACKGROUND
[0001] Secure ledger or “blockchain” technology has become increasingly prevalent across a wide range of applications in recent years. Secure ledgers provide an immutable record of transactions between parties. Secure ledgers may provide guarantees that certain processes have executed and that tasks have been carried out according to a well-defined process or contract. These technologies are implemented in a manner which is secure-by-design, without requiring human intervention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Figure 1 is a schematic diagram showing an apparatus for processing secure ledger transactions according to an example;
[0003] Figure 2 is a schematic diagram showing a method for provisioning computational resources, according to an example;
[0004] Figure 3 is a block diagram of a method for parallel processing secure ledger transactions, according to an example;
[0005] Figure 4 is a schematic diagram showing a processor and memory, according to an example.
DETAILED DESCRIPTION
[0006] In the following description, for purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.
[0007] Distributed secure ledger technologies such as blockchains are becoming increasingly prevalent. Distributed secure ledgers suffer from endemic poor performance compared to traditional centralized transaction schemes in databases. Performance bottlenecks arise due to the linear structure of secure ledgers.
[0008] An implementation of a secure ledger comprises a collection of protocols that are executed between computing devices. A device executing a secure ledger client communicates with receiver nodes. The client software sends transaction requests to the receiver nodes where they are processed and validated.
[0009] A transaction comprises a payload that is signed with a cryptographically secure private key of the client and some additional metadata. The metadata may depend on the specific secure ledger implementation. The transaction request is processed according to the secure ledger logic at the receiver node to generate an output. A new ledger entry may be formed from a combination of the output hashed with previous secure ledger entries. In this fashion subsequent ledger entries depend on previous entries. This generates a ledger which is secure-by-design providing an immutable record of processed transactions.
[0010] Receiver nodes validate transactions, execute the secure ledger logic and write the results into persistent memory. The secure ledger logic may include a smart-contract component. If a transaction is invalid then the results may not be written to persistent memory. A transaction may be deemed invalid for different reasons. For example, a transaction may be deemed invalid because the transaction has an invalid signature or because the transaction does not conform to rules prescribed in the secure ledger execution logic.
[0011] A receiver node may have access to different kinds of computational resources. Different computational resources have different computational characteristics. For example, Graphics Processing Units (GPUs) offer considerable parallelization capabilities when compared to central processing units (CPUs). This offers the possibility of leveraging parallelization in processing transactions.
[0012] The methods and systems described herein distribute secure ledger transaction requests and other parallelizable operations among available GPUs and CPUs to maximize throughput. The methods and systems described herein may also dynamically provision available GPUs and CPUs to achieve the desired throughput when the volume of incoming transaction requests is known or estimated based on past and current trends. Further methods are described herein that develop GPU parallelization techniques for secure ledger transactions to further improve the throughput. Herein, provisioning of computational resources may refer to the allocation or selection of a group of computational resources having different computational characteristics from a collection of resources that are available for executing a computational task.
[0013] Figure 1 is a schematic diagram 100 showing an apparatus 110 for processing a set of secure ledger transaction requests 120, according to an example. The apparatus 110 receives transaction requests 120 from one or more clients (not shown in Figure 1) over a network 130. Processed transactions may be written to a secure ledger 140 accessible to the apparatus 110.
[0014] The apparatus 110 communicates with computational resources 150 having a first computational characteristic and computational resources 160 having a second computational characteristic. Herein the computational characteristic of a computational resource refers to the throughput for processing secure ledger transactions in a given time period, for example, the number of transactions processed per second. In the methods and systems described, the computational resources 150, 160 comprise CPUs and GPUs respectively. The methods are also applicable to other kinds of computational resources, such as those based on Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs) or similar.
[0015] In examples, the computational resources 150, 160 may comprise n CPUs and m GPUs respectively, where n and m are arbitrary non-negative integers. The values n and m may be fixed values or may be time-dependent values n(t), m(t). The former case describes a static computational environment. The latter case describes a dynamic computational environment, such as an elastic cloud environment in which resources may be dynamically allocated and deallocated, or a shared environment where resources may temporarily be occupied by other processes.
[0016] The apparatus 110 comprises a resource allocation module 170. The resource allocation module distributes and manages the workload between computational resources 150, 160. In particular, the resource allocation module 170 is arranged to allocate computational resources 150 to process a first subset of the set of transaction requests 120 and allocate computational resources 160 to process a second subset of the set of transaction requests 120.
[0017] According to examples, the resource allocation module 170 may allocate computational resources from the pools of computational resources 150 and 160 according to the context in which transactions are being processed. For example, different combinations of resources may be used depending on how many transactions are received at a given time or anticipated or estimated to be received. For instance, if small numbers of transactions are to be processed then it may be faster to process them on CPUs, while a large amount of transactions may be processed faster on a GPU due to parallelization. In addition, CPUs may carry out additional computation compared to GPUs. The resource allocation module 170 may factor in these kinds of differences when assigning tasks to be run on hardware devices.
[0018] In examples, each of the CPUs in the pool of resources 150 may be arranged to process r transactions during a single cycle of one of the GPUs in the pool of resources 160. A GPU with p cores processes p transactions during the same time. During one GPU cycle the maximum number of processed transactions is (n r + m p), utilizing the full capacity of the pools of resources 150, 160 comprising n CPUs and m GPUs. The optimal performance time is reached when idle time for all of the allocated CPUs and GPUs is minimized. If the number of transactions L is higher than the full system capacity (n r + m p), then, using the full system capacity, the number of full load cycles is given as:
c = ⌊ L / (n r + m p) ⌋
[0019] After these full cycles are completed, the number q of remaining transactions is less than a full load and is given by: q = L mod (n r + m p) < n r + m p.
[0020] If q = 0, then the operation is completed. If q > 0, then the resource allocation module 170 may optimize the throughput for the remaining transactions. Depending on the remaining number of operations, there are two cases. If 0 < q ≤ n r, then all remaining transactions can be processed on the CPUs within a single GPU cycle. In that case, the resource allocation module 170 may distribute the processing of the remaining transactions equally across all available n CPUs to minimize the total time. Otherwise, if n r < q < n r + m p, then at least some transactions are processed on a GPU. In that case, the time to process all L transactions is at least (c + 1) GPU cycles, since an extra GPU cycle’s worth of time is required for processing all the transactions.
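As an illustration of the cycle arithmetic above, the following Python sketch computes the number of full load cycles c, the remainder q, and the strategy for the leftover transactions. It is a minimal sketch under the stated assumptions; the function name, the return format and the example figures are not taken from the disclosure.

```python
def plan_full_cycles(L, n, r, m, p):
    """Split L transactions into full-capacity cycles plus a remainder.

    One "cycle" is the time in which each of the n CPUs processes r
    transactions and each of the m GPUs (p cores each) processes p
    transactions, so full capacity is n*r + m*p transactions per cycle.
    Illustrative sketch; names and return format are assumptions.
    """
    capacity = n * r + m * p
    full_cycles = L // capacity      # c = floor(L / (n r + m p))
    q = L % capacity                 # remaining transactions, q < capacity

    if q == 0:
        strategy = "done after the full cycles"
    elif q <= n * r:
        # Remainder fits on the CPUs within a single GPU cycle:
        # spread it evenly so every CPU finishes at about the same time.
        per_cpu = -(-q // n)         # ceiling division
        strategy = f"spread {q} transactions over {n} CPUs (~{per_cpu} each)"
    else:
        # Remainder needs at least one extra GPU cycle.
        strategy = "one extra GPU cycle for the remainder"
    return full_cycles, q, strategy


# Example: 10 CPUs at 4 tx/cycle, 2 GPUs with 512 cores each, 5000 transactions.
print(plan_full_cycles(5000, n=10, r=4, m=2, p=512))
```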
[0021] The resource allocation module 170 determines a partition of the transactions into subsets to be processed by the pools of resources 150, 160. Let N_CPU denote the number of transactions processed by the CPUs and N_GPU denote the number of transactions processed by the GPUs. The resource allocation module 170 determines a partition L = N_CPU + N_GPU such that:
max( t_CPU(N_CPU), t_GPU(N_GPU) ) is minimized, where t_CPU and t_GPU denote the processing times on the CPU pool and the GPU pool respectively.
[0022] The time t_opt = max( t_CPU(N_CPU), t_GPU(N_GPU) ) is then the minimum amount of time to process the L transactions. Intuitively, t_opt is reached when idle time is minimized for both the CPUs and the GPUs.
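The partition of paragraph [0021] can be found by a direct search, as in the sketch below. The timing model used here, with both pools measured in GPU-cycle units, the CPU pool handling n r and the GPU pool handling m p transactions per cycle, is an assumption made for illustration; the disclosure does not prescribe a particular t_CPU or t_GPU.

```python
import math

def optimal_partition(L, n, r, m, p):
    """Return (N_CPU, N_GPU, t_opt) minimising max(t_CPU(N_CPU), t_GPU(N_GPU)).

    Timing model (an assumption for illustration): times are measured in
    GPU-cycle units, the CPU pool jointly handles n*r transactions per cycle
    and the GPU pool jointly handles m*p transactions per cycle.
    """
    def t_cpu(N):                    # cycles needed by the CPU pool
        return math.ceil(N / (n * r))

    def t_gpu(N):                    # cycles needed by the GPU pool
        return math.ceil(N / (m * p))

    return min(
        ((N, L - N, max(t_cpu(N), t_gpu(L - N))) for N in range(L + 1)),
        key=lambda candidate: candidate[2],
    )


# Idle time on both pools is minimised when neither side finishes much
# earlier than the other.
print(optimal_partition(5000, n=10, r=4, m=2, p=512))
```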
[0023] According to examples, different optimization strategies may be implemented. For example, in one case a threshold T_GPU may be specified. Given a number T of transactions to be processed, the resource allocation module 170 may adopt the strategy whereby if T < T_GPU then all transactions are given to the CPUs and if T ≥ T_GPU then all transactions are given to the GPUs. This strategy is beneficial for high throughputs, where the involvement of CPUs creates computational overhead without delivering any efficiency savings.
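A minimal sketch of the all-or-nothing threshold strategy just described; the threshold value T_GPU is deployment-specific and the numbers below are illustrative only.

```python
def route_all_or_nothing(T, T_GPU):
    """All-or-nothing threshold strategy: below the threshold every
    transaction goes to the CPUs, otherwise everything goes to the GPUs.
    The threshold value is deployment-specific (assumed here)."""
    return "CPU" if T < T_GPU else "GPU"


print(route_all_or_nothing(T=120, T_GPU=1000))     # small batch -> CPU
print(route_all_or_nothing(T=50000, T_GPU=1000))   # large batch -> GPU
```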
[0024] The apparatus 110 also comprises a resource monitoring module 180 to determine a performance metric for processing transactions allocated to the computational resources 150, 160. The resource allocation module 170 may allocate more resources or de-allocate resources based on the performance metric. In some cases, the resource allocation module 170 may switch between different optimization methods which are made available to it. In some cases, this may be to ensure that the secure ledger updates are recorded at maximum or optimal performance. In other cases, the resource allocation module 170 may switch optimization methods to minimize an overall cost, for example, by processing transactions during an off-peak period, when resources cost less. The resource allocation module 170 may also adjust available resources or change optimization methods based on contextual information. In some cases, the resource allocation module 170 is to determine whether the performance metric satisfies a service level agreement (SLA) and is arranged to reallocate computational resources on the basis of the determination.
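The feedback loop between the resource monitoring module 180 and the resource allocation module 170 might look like the following sketch. The metric shape, the SLA expressed as a minimum sustained throughput, and the one-GPU-at-a-time scaling step are assumptions for illustration, not details from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMetric:
    transactions_per_second: float

def reallocate_if_needed(metric, sla_min_tps, pool, max_gpus):
    """Feedback loop between monitoring and allocation (sketch).

    `pool` is a dict such as {"cpus": 8, "gpus": 1}; the SLA is expressed
    here as a minimum sustained throughput. The metric shape and the
    one-GPU-at-a-time scaling step are illustrative assumptions.
    """
    if metric.transactions_per_second < sla_min_tps and pool["gpus"] < max_gpus:
        pool["gpus"] += 1    # provision an extra GPU to meet the SLA
    elif metric.transactions_per_second > 2 * sla_min_tps and pool["gpus"] > 0:
        pool["gpus"] -= 1    # de-provision to reduce cost, e.g. off-peak
    return pool


print(reallocate_if_needed(PerformanceMetric(800.0), sla_min_tps=1000.0,
                           pool={"cpus": 8, "gpus": 1}, max_gpus=4))
```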
[0025] Figure 2 shows a method 200 for processing a set of secure ledger transaction requests, according to an example. The method 200 may be implemented in conjunction with the other methods and systems described herein. In particular, the method 200 may be implemented on the apparatus 110 shown in Figure 1.
[0026] At block 210 a partition of the set of transaction requests into a first subset and a second subset is determined. This may be performed by the resource allocation module 170, when the method is performed on the apparatus 110. According to an example, the first or second subset may be an empty set.
[0027] At block 220, computational resources having a first characteristic are provisioned for processing the first subset of transactions and computational resources having a second computational characteristic are provisioned for processing the second subset of transactions. The partition is determined to optimize a performance parameter for processing the set of transaction requests. According to an example, the partition of the set of transactions may be determined to minimize a time, overall cost or other parameters for processing the set of transaction requests on the computational resources.
[0028] According to examples, the computational resources having the first and second computational characteristics may be CPUs and GPUs respectively, as previously described. In some cases, the method 200 comprises processing the set of transaction requests on the central processing units when the number of transaction requests is below a pre-determined number of transaction requests.
[0029] According to examples of the method 200 shown in Figure 2, the computational characteristic of a computational resource is determined on the basis of a throughput of the computational resource, where the throughput comprises a number of transaction requests processed by the computational resource in a pre-determined time period.
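A computational characteristic defined this way can be estimated empirically, for example with the sketch below, which counts how many transaction requests a resource processes inside a fixed measurement window. `process_one` stands in for the real validation routine and is an assumption of this example.

```python
import time

def measure_throughput(process_one, transactions, window_seconds=1.0):
    """Count how many transaction requests a resource processes within a
    fixed measurement window; the result characterises that resource.
    `process_one` stands in for the real validation routine (assumed)."""
    processed = 0
    deadline = time.monotonic() + window_seconds
    for tx in transactions:
        if time.monotonic() >= deadline:
            break
        process_one(tx)
        processed += 1
    return processed     # transaction requests per window_seconds


# Dummy workload standing in for signature checks.
print(measure_throughput(lambda tx: sum(tx), [list(range(100))] * 100000))
```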
[0030] In applications with high volumes of transactions, GPU parallelization capabilities may further be leveraged in the following two use cases: checking the validity of cryptographic signatures and checking that transactions satisfy smart-contract rules. These capabilities may be used in conjunction with CPU processing to efficiently process higher volumes of transactions.
[0031] According to a first example, transaction data for a subset of the transactions 120 that are to be processed by a CPU in the pool of resources 150 are loaded into the memory of a GPU in the computational resource pool 160, prior to execution of operations. Following the execution, the results are sent back to the CPU.
[0032] Alternatively, part of a secure ledger state may be retained in the memory of the GPU for a period of time, during which new transaction requests are loaded to the GPU for execution. The results are then applied to the ledger state in the GPU and are communicated back to the CPU in parallel.
[0033] Figure 3 shows a method 300 for leveraging parallelization capabilities at a receiver node, according to an example. The method 300 may be implemented in conjunction with the other methods and systems described herein. In particular, the method 300 may be implemented by the apparatus 110 in conjunction with one or more GPUs in the computational resource pool 160, shown in Figure 1.
[0034] At block 310, transactions to be validated are received by a receiver node. At block 320, the transactions are combined into a block for parallel processing. According to examples, N transactions may be combined, where N is less than the number of cores in the GPU.
[0035] At block 330, data is obtained for parallel processing the N transactions. In examples, the data D = D1 + D2 that is obtained may comprise data D1 comprising the transaction payloads for the N transactions and data D2 for batch validating the transactions. According to examples, the data D1 may comprise messages and the data D2 may comprise cryptographic signatures for the messages.
[0036] At block 340 the processing is executed on the GPU. For example, the processing may comprise batch verification of cryptographic signatures. At block 350 the output is returned to the CPU memory. The output may be processed further or written into persistent storage.
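The sketch below walks through blocks 310 to 350 in Python. A thread pool stands in for the GPU's data-parallel execution, and `verify_signature(message, signature, public_key) -> bool` is a placeholder for whatever signature scheme the ledger uses; neither is an API from the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

GPU_CORES = 1024   # stand-in for the core count of the target GPU

def batch_validate(transactions, verify_signature, batch_size=GPU_CORES - 1):
    """Blocks 310-350 as a sketch: combine up to N (< core count)
    transactions into a block, gather payloads (D1) and signatures (D2),
    verify them in parallel, and return the per-transaction results.

    A thread pool stands in for the GPU kernel launch; `verify_signature`
    (message, signature, public_key) -> bool is a placeholder, not an API
    from the disclosure.
    """
    results = []
    with ThreadPoolExecutor() as pool:                      # data parallelism
        for start in range(0, len(transactions), batch_size):
            block = transactions[start:start + batch_size]  # block 320
            d1 = [tx["message"] for tx in block]            # block 330: payloads
            d2 = [(tx["signature"], tx["public_key"]) for tx in block]
            # block 340: one verification per "core", run in parallel
            outputs = pool.map(
                lambda pair: verify_signature(pair[0], pair[1][0], pair[1][1]),
                zip(d1, d2),
            )
            results.extend(outputs)                         # block 350: back to CPU
    return results
```

In an actual deployment the per-transaction verifications would be issued as a single GPU kernel over the whole block, which is why keeping N below the core count lets every transaction map to its own core.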
[0037] The methods and systems described herein provide a higher throughput for processing transactions in a secure ledger by carefully choosing and switching between different optimization methods. The methods may further account for the context in which transactions are being processed. The methods minimize validation time for a given set of transactions whilst optimally utilizing all available system capacities. These methods include the dynamic provisioning and de-provisioning of resources to achieve a desired throughput when presented with varying requests for transactions by clients.
[0038] Further GPU optimizations are also disclosed. These optimizations exploit the parallelizability available in GPUs for high-volume transaction processing. In particular, the methods disclosed herein provide high throughput processing for signature verification and secure ledger logic execution.
[0039] The present disclosure is described with reference to flow charts and/or block diagrams of the method, devices and systems according to examples of the present disclosure. Although the flow diagrams described above show a specific order of execution, the order of execution may differ from that which is depicted. Blocks described in relation to one flow chart may be combined with those of another flow chart. In some examples, some blocks of the flow diagrams may not be necessary and/or additional blocks may be added. It shall be understood that each flow and/or block in the flow charts and/or block diagrams, as well as combinations of the flows and/or diagrams in the flow charts and/or block diagrams can be realized by machine readable instructions.
[0040] The machine-readable instructions may, for example, be executed by a general-purpose computer, a special purpose computer, an embedded processor or processors of other programmable data processing devices to realize the functions described in the description and diagrams. In particular, a processor or processing apparatus may execute the machine-readable instructions. Thus, modules of apparatus may be implemented by a processor executing machine-readable instructions stored in a memory, or a processor operating in accordance with instructions embedded in logic circuitry. The term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate set etc. The methods and modules may all be performed by a single processor or divided amongst several processors.
[0041 ] Such machine-readable instructions may also be stored in a computer readable storage that can guide the computer or other programmable data processing devices to operate in a specific mode.
[0042] For example, the instructions may be provided on a non-transitory computer readable storage medium encoded with instructions, executable by a processor. Figure 4 shows an example of a processor 410 associated with a memory 420. The memory 420 comprises computer readable instructions 430 which are executable by the processor 410.
[0043] The instructions 430 cause the processor to receive a set of transaction requests to a secure ledger, determine a partition of the set of transaction requests into a first subset and a second subset, and allocate computational resources having a first computational characteristic for processing the first subset of transaction requests and computational resources having a second computational characteristic, different from the first computational characteristic, for processing the second subset of transaction requests. The partition of the set of transaction requests is determined to maximize a throughput for processing the set of transaction requests on the computational resources.
[0044] Such machine-readable instructions may also be loaded onto a computer or other programmable data processing devices, so that the computer or other programmable data processing devices perform a series of operations to produce computer-implemented processing, thus the instructions executed on the computer or other programmable devices provide an operation for realizing functions specified by flow(s) in the flow charts and/or block(s) in the block diagrams.
[0045] Further, the teachings herein may be implemented in the form of a computer software product, the computer software product being stored in a storage medium and comprising a plurality of instructions for making a computer device implement the methods recited in the examples of the present disclosure.
[0046] While the method, apparatus and related aspects have been described with reference to certain examples, various modifications, changes, omissions, and substitutions can be made without departing from the present disclosure. In particular, a feature or block from one example may be combined with or substituted by a feature/block of another example.
[0047] The word "comprising" does not exclude the presence of elements other than those listed in a claim, "a" or "an" does not exclude a plurality, and a single processor or other unit may fulfil the functions of several units recited in the claims.
[0048] The features of any dependent claim may be combined with the features of any of the independent claims or other dependent claims.

Claims

What is claimed is:
1. A method for processing a set of secure ledger transaction requests, the method comprising: determining a partition of the set of transaction requests into a first subset and a second subset; and provisioning computational resources having a first computational characteristic for processing the first subset of transaction requests and computational resources having a second computational characteristic, different from the first computational characteristic, for processing the second subset of transaction requests, wherein the partition of the set of transaction requests is determined to optimize a performance parameter for processing the set of transaction requests on the computational resources.
2. The method of claim 1, wherein the performance parameter is a time for processing the set of transaction requests on the computational resources.
3. The method of claim 1, wherein the computational characteristic of a computational resource is determined on a basis of a throughput of the computational resource, the throughput comprising a number of transaction requests processed by the computational resource in a pre-determined time period.
4. The method of claim 1, wherein the computational resources having a first computational characteristic comprise central processing units (CPU) and the computational resources having a second computational characteristic comprise graphical processing units (GPU).
5. The method of claim 4, comprising processing the set of transaction requests on the central processing units when the number of transaction requests is below a pre-determined number of transaction requests.
6. The method of claim 4, wherein processing the second subset of transaction requests on the GPUs comprises: loading transaction data for transaction requests of the second subset of transaction requests into a memory of a GPU; and batch processing the transaction data on the GPU.
7. The method of claim 6, wherein the transaction data for each of the transaction requests comprises a cryptographic signature.
8. The method of claim 7, wherein batch processing the transaction data comprises verifying cryptographic signatures for the transaction requests in parallel.
9. The method of claim 6, wherein batch processing the transaction data comprises evaluating secure ledger execution logic for the transaction requests in parallel.
10. The method of claim 8, wherein the secure ledger execution logic comprises execution of a smart contract.
11. An apparatus for managing computational resources for processing a set of secure ledger transaction requests, the apparatus comprising: a resource allocation module to allocate computational resources having a first computational characteristic to process a first subset of the set of transaction requests; and allocate computational resources having a second computational characteristic different from the first computational characteristic to process a second subset of the set of transaction requests; and a resource monitoring module to determine a performance metric of the computational resources based on the allocation of computational resources.
12. The apparatus of claim 11, wherein the resource allocation module is to dynamically provision or de-provision computational resources to achieve a predetermined throughput.
13. The apparatus of claim 11, wherein the performance metric comprises a time to process the first subset on computational resources having the first computational characteristic and to process the second subset on computational resources having the second computational characteristic.
14. The apparatus of claim 13, wherein the resource allocation module is to determine whether the performance metric satisfies a service level agreement (SLA) and to reallocate computational resources on the basis of the determination.
15. A non-transitory computer readable medium comprising instructions which are executable by a processor, to cause the processor to:
receive a set of transaction requests to a secure ledger; determine a partition of the set of transaction requests into a first subset and a second subset; and allocate computational resources having a first computational characteristic for processing the first subset of transaction requests and computational resources having a second computational characteristic, different from the first computational characteristic, for processing the second subset of transaction requests, wherein the partition of the set of transaction requests is determined to maximize a throughput for processing the set of transaction requests on the computational resources.
PCT/US2020/058108 2020-10-30 2020-10-30 Provisioning of computational resources WO2022093256A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/058108 WO2022093256A1 (en) 2020-10-30 2020-10-30 Provisioning of computational resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/058108 WO2022093256A1 (en) 2020-10-30 2020-10-30 Provisioning of computational resources

Publications (1)

Publication Number Publication Date
WO2022093256A1 true WO2022093256A1 (en) 2022-05-05

Family

ID=81383048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/058108 WO2022093256A1 (en) 2020-10-30 2020-10-30 Provisioning of computational resources

Country Status (1)

Country Link
WO (1) WO2022093256A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180351732A1 (en) * 2017-05-31 2018-12-06 Alibaba Group Holding Limited Enhancing processing efficiency of blockchain technologies using parallel service data processing
WO2019133577A1 (en) * 2017-12-26 2019-07-04 Akamai Technologies, Inc. Concurrent transaction processing in a high performance distributed system of record
US20190287101A1 (en) * 2018-12-28 2019-09-19 Alibaba Group Holding Limited Parallel execution of transactions in a blockchain network
US10432405B1 (en) * 2018-09-05 2019-10-01 Accelor Ltd. Systems and methods for accelerating transaction verification by performing cryptographic computing tasks in parallel


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20960163

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20960163

Country of ref document: EP

Kind code of ref document: A1