WO2011096249A1 - Load control device - Google Patents
Load control device
- Publication number
- WO2011096249A1 (PCT/JP2011/050261)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- transaction
- distribution destination
- transmission queue
- upper limit
- load control
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/466—Transaction processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates to a load control device.
- Patent Document 1 describes a system comprising load information sharing means for sharing load information; load distribution means that uses the information provided by the load information sharing means to determine which task is transferred to which node and requests the task transmission/reception means to transfer the task; and task execution means that processes the task requested for execution and returns the execution result of the task to the local queuing means.
- it also describes that the load distribution means includes balance calculation means for dynamically calculating at least the amount of tasks to be executed on the node itself, and node priority assignment means for assigning a priority to each node.
- in existing load balancing methods, the distribution rules are fixed and cannot adapt dynamically; uniform computing resources are assumed, so in non-uniform environments tasks are distributed mechanically without considering the most appropriate resource; and there is no mechanism for maintaining a service level index such as turnaround time. As a result, it is difficult to follow ever-changing computing resources, and fine-grained responses cannot be made.
- an object of the present invention is to maintain a service level index, represented by the turnaround time of parallel online transaction processing, with as few computing resources and as little power consumption as possible.
- a load control device according to the present invention distributes transaction processing to a plurality of computing resources and comprises: a reception unit that receives transaction processing requests; a distribution control unit that selects a distribution destination for each received transaction and stores the transaction in a transmission queue provided for each distribution destination; a transmission unit that transmits the transaction data stored in the transmission queues to the corresponding distribution destinations; an overflow detection unit that monitors whether the amount of transactions accumulated in the transmission queue corresponding to each distribution destination exceeds an upper limit; and a redistribution unit that, when the monitoring by the overflow detection unit finds that the transaction amount exceeds the upper limit, re-selects the distribution destination of the transactions stored in excess of the upper limit.
- FIG. 1 is a diagram showing a configuration of a load control system using a load control device 10 according to the present embodiment. As shown in the figure, the load control device 10 is connected to a plurality of processing devices (computation resources) 20 via communication lines.
- the load control system is a system in which online transactions are processed in parallel by a plurality of processing devices 20 on a case-by-case basis.
- FIG. 2 is a diagram illustrating a configuration of the load control device 10 according to the present embodiment.
- the load control device 10 includes a receiving unit 101, a distribution control unit 102, a transmission queue 103, a transmission unit 104, an overflow detection unit 105, a redistribution unit 106, a timer 107, and a reception unit 108.
- the reception unit 101, the distribution control unit 102, the transmission unit 104, the overflow detection unit 105, the redistribution unit 106, the timer 107, and the reception unit 108 correspond to functions performed by a computer processor according to a program.
- the transmission queue 103 includes a storage device such as a memory and a hard disk.
- the receiving unit 101 receives a transaction processing request from a terminal connected via a communication line.
- the distribution control unit 102 refers to the distribution table, selects an appropriate distribution destination of the received transaction, and stores the transaction in the transmission queue 103 provided for each distribution destination.
- the distribution destination includes one or more processor cores (computation resources). When a plurality of cores are included, each core may be mounted on one processing device 20 or may be distributed and mounted on the plurality of processing devices 20.
- FIG. 3 shows the contents of the distribution table.
- an appropriate distribution destination for each data area is stored in advance in the distribution table.
- the data area corresponds to a set of data to be processed by a certain transaction among data stored in the database 207.
- the distribution control unit 102 selects an appropriate distribution destination from the distribution table based on which data area the received transaction is targeted for processing.
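As a minimal sketch of the distribution-table lookup described above (the table contents and all names are hypothetical, following the FIG. 3 example only loosely):

```python
# Hypothetical distribution table: data area -> pre-configured destination id.
# The real table of FIG. 3 maps each data area to its best destination.
DISTRIBUTION_TABLE = {
    "A": 1,
    "B": 2,
    "C": 3,
}

def select_destination(data_area: str) -> int:
    """Select the pre-stored distribution destination for the data area
    that the received transaction targets (distribution control unit 102)."""
    return DISTRIBUTION_TABLE[data_area]
```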
- the distribution destination may correspond to one processing device 20 or may correspond to a plurality of processing devices 20.
- the distribution table contains an availability table that represents, for each distribution destination, the degree of performance degradation caused by the power on/off state of the processing apparatus 20 or of its constituent elements, or by planned stoppages such as periodic maintenance and by failures.
- FIG. 4 shows an example of the availability table.
- the availability table is created based on the “element / distribution destination correspondence table”.
- the availability table stores the alive state of each distribution destination.
- the alive state is determined from the performance degradation caused by power-off or failure of the elements included in the distribution destination. For example, when the availability of the elements included in each distribution destination is as shown in the “element / distribution destination correspondence table”, element 1 of distribution destination 1 is usable (O) and element 2 is unusable (x), so its alive state is 50%.
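The alive-state computation can be sketched as follows; the element/destination contents below are hypothetical and mirror only the 50% example:

```python
# Hypothetical "element / distribution destination correspondence table":
# True = usable (O), False = unusable (x), as in the example above.
ELEMENTS_BY_DESTINATION = {
    1: {"element1": True, "element2": False},
    2: {"element3": True, "element4": True},
}

def alive_state(destination: int) -> float:
    """Percentage of usable elements, i.e. the destination's alive state."""
    elements = ELEMENTS_BY_DESTINATION[destination]
    return 100.0 * sum(elements.values()) / len(elements)
```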
- the availability table needs to be reviewed regularly to reflect the latest state of the processing devices.
- the transmission queue 103 is provided for each distribution destination, and the transaction data distributed to each distribution destination is sequentially stored.
- the transmission unit 104 transmits the transaction data stored in the transmission queue 103 to the corresponding distribution destination processing device 20.
- the transmission unit 104 transmits transaction data evenly to the processor cores, for example by round robin.
- the overflow detection unit 105 refers to the upper limit table and periodically monitors whether the transaction amount stored in the transmission queue 103 corresponding to each distribution destination exceeds the upper limit.
- FIG. 5 shows the contents of the upper limit table.
- the upper limit table stores in advance the upper limit value of transactions that can be accumulated in the transmission queue 103 corresponding to each distribution destination.
- the upper limit value can be, for example, the number of transactions that can be processed per fixed time.
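A minimal sketch of the overflow check against the upper-limit table (limits and names are hypothetical; the 1600 value follows the FIG. 5 example cited later in the description):

```python
# Hypothetical upper-limit table: destination -> max queued transactions.
UPPER_LIMIT = {1: 1600, 2: 1600, 3: 2000}

def overflowing_destinations(queue_lengths: dict) -> list:
    """Return the destinations whose transmission queue exceeds its upper
    limit (what the overflow detection unit 105 checks periodically)."""
    return [dest for dest, length in queue_lengths.items()
            if length > UPPER_LIMIT[dest]]
```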
- the redistribution unit 106 re-selects the distribution destination by referring to the affinity table. If a redistribution destination is itself overflowing, the transaction is assigned to the candidate with the next-highest priority; if no candidate distribution destinations remain, error processing is executed.
- FIG. 6 is a diagram showing the contents held in the affinity table.
- in the affinity table, the performance values for processing data in each data area are stored in advance for each distribution destination.
- the performance value may be, for example, the number of transaction processes per unit time or the average turnaround time per case.
- the processing performance for a specific data area is determined by the ease of access to the database 207 containing the data. In general, performance is highest when the cache control unit 206 that caches the data to be processed and the processing unit 205 that executes the transaction are mounted on the same processing device 20, and it decreases as more control units or other components intervene in the access path between the two. For each data area, the distribution destination with the highest processing performance in the affinity table is the one registered in the distribution table.
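The affinity-based re-selection performed by the redistribution unit can be sketched as below; the performance values and names are hypothetical (a low value standing in for a slow, remote access path):

```python
# Hypothetical affinity table: data area -> {destination: performance value
# (transactions per unit time)}. Higher means better data affinity.
AFFINITY = {
    "A": {1: 500, 2: 400, 3: 400, 4: 400, 5: 100},
}

def reselect_destination(data_area, overflowed):
    """Pick the best-performing destination that is not overflowing
    (redistribution unit 106); run error processing if none remain."""
    candidates = {d: perf for d, perf in AFFINITY[data_area].items()
                  if d not in overflowed}
    if not candidates:
        raise RuntimeError("no distribution destination left: error processing")
    return max(candidates, key=candidates.get)
```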
- the timer 107 determines the timing at which the overflow detection unit 105 monitors the processing amount of each processing device 20.
- as a method of monitoring by the overflow detection unit 105, there is also a method of monitoring continuously and notifying the redistribution unit 106 immediately whenever the transaction amount in the transmission queue 103 of any distribution destination exceeds its upper limit.
- in that case, however, a notification is generated every time the transaction amount overflows, which can reduce processing performance.
- the reception unit 108 receives information from each processing device 20, such as notification that its transaction processing amount has overflowed.
- FIG. 7 is a diagram illustrating a configuration of the processing apparatus 20.
- the processing device 20 includes a receiving unit 201, a receiving queue 202, a control unit 203, a transmission unit 204, a processing unit (calculation resource, processor core) 205, a cache control unit 206, and a database 207.
- the receiving unit 201 receives a transaction request from the load control device 10.
- the reception queue 202 is a storage unit that sequentially accumulates transaction requests received by the reception unit 201.
- the control unit 203 supplies the transaction request stored in the reception queue 202 to the processing unit 205.
- when the transaction processing amount overflows, the load control device 10 is notified via the transmission unit 204.
- the transmission unit 204 transmits information such as an overflow of transaction processing amount to the load control apparatus 10.
- the processing unit 205 updates the database 207 by executing a transaction.
- the processing apparatus 20 may have a single core configuration including one processing unit 205 or a multi-core configuration including a plurality of processing units 205.
- the cache control unit 206 temporarily stores the contents of the database 207. In this embodiment, the cache control unit 206 of at least one other processing device 20 can also be accessed.
- the database 207 holds data to be subjected to transaction processing.
- FIG. 8 is a flowchart of transaction distribution processing by the load control device 10.
- the load control device 10 receives a transaction processing request at the receiving unit 101 (step S11).
- the distribution control unit 102 refers to the distribution table and selects an appropriate distribution destination for the received transaction (step S12).
- the distribution control unit 102 stores the transaction processing data in the transmission queue 103 for the distribution destination selected in step S12 (step S13).
- the transmission unit 104 sequentially transmits the transaction data stored in the transmission queue 103 in this way to the corresponding distribution destination processing device 20.
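Steps S11 to S13 can be sketched as one enqueue path (a sketch; the data structures and names are assumptions, not the patent's implementation):

```python
from collections import defaultdict, deque

# One transmission queue (103) per distribution destination.
send_queues = defaultdict(deque)

def distribute(transaction, select_destination):
    """Steps S12-S13: select a destination for a received transaction and
    store it in that destination's transmission queue."""
    dest = select_destination(transaction["data_area"])  # step S12
    send_queues[dest].append(transaction)                # step S13
    return dest
```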
- FIG. 9 is a flowchart of the processing when the transaction processing amount overflows.
- when the overflow detection unit 105 detects that the amount of transactions accumulated in the transmission queue 103 for some distribution destination exceeds its upper limit (step S21: YES), it instructs the redistribution unit 106 to redistribute the transactions (step S22).
- the redistribution unit 106 refers to the affinity table again to select a destination for the overflowing transaction (step S23).
- the redistribution method will be specifically described with reference to FIGS. 3, 5, and 6.
- a transaction whose processing target is the data area A is initially distributed to the distribution destination 1 by the distribution control unit 102.
- however, when 1600 or more transactions accumulate in the transmission queue 103 of distribution destination 1, as shown in the upper-limit table of FIG. 5, the overflow detection unit 105 instructs the redistribution unit 106 to perform redistribution.
- the redistribution unit 106 refers to the affinity table in FIG. 6 and selects the distribution destination whose processing performance for data area A (here, the number of transactions processed per unit time) is the next highest after distribution destination 1.
- in the example of FIG. 6, distribution destinations 2, 3, and 4 qualify, and the redistribution unit 106 selects a redistribution destination from among them.
- the redistribution unit 106 moves the transaction data stored exceeding the upper limit to the redistribution destination transmission queue 103 (step S24).
- the transmission unit 104 sequentially transmits the transaction data stored in the transmission queue 103 in this way to the corresponding distribution destination processing device 20.
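Step S24, moving the transactions stored beyond the upper limit into the redistribution destination's queue, might look like the sketch below (which end of the queue the excess is taken from is an assumption):

```python
from collections import deque

def move_excess(queues, src, dst, upper_limit):
    """Move the entries stored beyond the upper limit from the overflowing
    queue to the redistribution destination's queue (step S24 sketch)."""
    moved = 0
    while len(queues[src]) > upper_limit:
        queues[dst].append(queues[src].pop())  # move the newest excess entry
        moved += 1
    return moved
```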
- because the cache control unit 206 of another processing device 20 can be accessed between processing devices 20, processing can be executed even if the data to be processed by a transaction is not cached in the cache control unit 206 of the distribution destination's processing device 20, by accessing the cache control unit 206 of another processing device 20 in which the data is cached.
- besides acting on instructions from the overflow detection unit 105, the redistribution unit 106 also redistributes transactions when it receives overflow information on the transaction processing amount from a processing device 20.
- since the distribution control unit 102 determines an appropriate distribution destination for each transaction and accumulates transactions in per-destination transmission queues 103, service level indices such as the turnaround time of parallel online transaction processing can be maintained with limited computing resources. Further, since the overflow detection unit 105 periodically monitors whether the transaction amount accumulated in each destination's transmission queue 103 exceeds its upper limit and, when it does, the redistribution unit 106 re-determines the distribution destination, degradation of the service level index, such as a specific processing device 20 overflowing and turnaround times lengthening, can be prevented.
- because the load control device 10 determines transaction distribution destinations and makes adjustments to prevent processing overflow before transmitting the transaction data, each processing device 20 need not provide its own overflow-adjustment function, and its computing resources can be used effectively for transaction processing.
- the load control apparatus 10 is particularly suitable for performing load control in an online transaction system using a database that holds a large amount of data.
- this is because the affinity of each computing resource for the data has the greatest effect on processing performance.
- the load control system includes a plurality of processing devices 20, and the cache control unit 206 can access the cache control unit 206 in at least one other processing device 20.
- one such system configuration is CC-NUMA (cache-coherent Non-Uniform Memory Access). CC-NUMA is a multiprocessor configuration with a memory address space common to all processors, so the processor of each node can also access the memory of other nodes. Each processor has fast local memory, and access to the local memory of other processors is slower. This configuration is therefore well suited to a load control system that, as in this embodiment, determines distribution destinations based on each computing resource's affinity for the data.
- FIG. 10 is a diagram for explaining a method of determining an allocation destination based on power consumption.
- the table of “power consumption model” indicates the power consumption during operation for each core (corresponding to the processing unit 205 of the processing device 20).
- for example, when core 1 and core 2 each operate alone, each consumes “15”, but when they operate simultaneously their total consumption is “17”. Power consumption can therefore be reduced by operating core 1 and core 2 together rather than each alone.
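The pairwise saving can be modeled as below; the numeric values come from the example above, but the table shape and names are assumptions:

```python
# Solo operating power per core, and total power for pairs that consume
# less when run simultaneously (example values: 15 + 15 alone vs 17 together).
SOLO_POWER = {1: 15, 2: 15}
PAIR_POWER = {(1, 2): 17}

def total_power(cores):
    """Total power for a set of cores, preferring the pair cost if known."""
    key = tuple(sorted(cores))
    if key in PAIR_POWER:
        return PAIR_POWER[key]
    return sum(SOLO_POWER[c] for c in key)
```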
- the “core / distribution destination correspondence table” shows the configuration of distribution destinations determined, based on the power consumption model, so as to reduce power consumption. As shown in the table, distribution destinations 1 and 2 each consist of two cores, and distribution destinations 3 to 7 each consist of one core.
- an “immediate extension model” may also be used, in which the distribution table is generated based on information about the power reduction (reduced power) obtained when each computer resource (element) in the system shown in FIG. 11 is powered off and the preparation time (recovery time) required before reuse of that resource can start.
- similarly, the distribution table may be generated based on the usage fee saved when each computing resource is returned to the resource pool and on the preparation time required before reuse of the resource can start.
- the configuration of the processing apparatus 20 is not limited to that shown in FIG. 7, and may be, for example, a configuration as shown in FIG.
- the example of FIG. 12 has a multi-core configuration in which two processing units 205 are included in the processing device 20, and two cache control units 206 and two databases 207 are also included.
- the present invention is suitable for maintaining a service level index, represented by the turnaround time of parallel transaction processing, with as few computing resources and as little power consumption as possible.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Power Sources (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
- Supply And Distribution Of Alternating Current (AREA)
Abstract
Description
From the viewpoint of reducing cost and power consumption, it is desirable to maintain service level indices such as turnaround time while efficiently using the ever-changing available computing resources and available power. To do so, each task must be dynamically assigned to an appropriate computing resource, and processing overflow caused by processing concentrating on particular resources must be prevented. To prevent overflow, the amount of tasks should be adjusted before they are sent to each computing resource, but the method of Patent Document 1 performs no such pre-transmission adjustment. Moreover, existing load balancing methods have fixed distribution rules that cannot adapt dynamically, distribute tasks mechanically without considering the most appropriate resource because they assume uniform resources even in non-uniform environments, or lack a mechanism for maintaining service level indices such as turnaround time; as a result, it is difficult to follow ever-changing computing resources, and fine-grained responses cannot be made.
FIG. 1 is a diagram showing the configuration of a load control system using the load control device 10 according to the present embodiment. As shown in the figure, the load control device 10 is connected to a plurality of processing devices (computation resources) 20 via communication lines.
The reception unit 108 receives information, such as transaction processing overflow, from each processing device 20.
The reception queue 202 is a storage unit that sequentially accumulates the transaction requests received by the reception unit 201.
The processing unit 205 executes transactions and updates the database 207. The processing device 20 may have a single-core configuration with one processing unit 205 or a multi-core configuration with a plurality of processing units 205.
The database 207 holds the data subject to transaction processing.
FIG. 8 is a flowchart of the transaction distribution processing performed by the load control device 10. First, the load control device 10 receives a transaction processing request at the reception unit 101 (step S11).
First, when the overflow detection unit 105 detects that the amount of transactions accumulated in the transmission queue 103 for some distribution destination exceeds its upper limit (step S21: YES), it instructs the redistribution unit 106 to redistribute the transactions (step S22).
The redistribution method is described concretely with reference to FIGS. 3, 5, and 6. For example, as shown in the distribution table of FIG. 3, a transaction whose processing target is data area A is initially distributed to distribution destination 1 by the distribution control unit 102. However, as shown in the upper-limit table of FIG. 5, once 1600 or more transactions have accumulated in the transmission queue 103 of distribution destination 1, the overflow detection unit 105 instructs the redistribution unit 106 to redistribute. The redistribution unit 106 refers to the affinity table of FIG. 6 and selects the distribution destination whose processing performance for data area A (here, the number of transactions processed per unit time) is the next highest after distribution destination 1. In the example of FIG. 6, distribution destinations 2, 3, and 4 qualify, so the redistribution unit 106 selects a redistribution destination from among them.
FIG. 10 is a diagram explaining how distribution destinations are determined based on power consumption. The “power consumption model” table shows the power consumption during operation of each core (corresponding to a processing unit 205 of a processing device 20). The value at the intersection of row “core N” (N = 1, ..., 9) and column “core N” is the power consumed when core N operates alone, and the value at the intersection of row “core N” and column “core M” (M = 1, ..., 9) is the total power consumed when cores N and M operate simultaneously. For example, cores 1 and 2 each consume “15” when operated alone, but their total consumption when operated simultaneously is “17”. Operating cores 1 and 2 together therefore consumes less power than operating each alone.
Claims (8)
- A load control device that distributes transaction processing to a plurality of computing resources, comprising:
a reception unit that receives transaction processing requests;
a distribution control unit that selects a distribution destination for each received transaction and stores the transaction in a transmission queue provided for each distribution destination;
a transmission unit that transmits the transaction data stored in the transmission queues to the corresponding distribution destinations;
an overflow detection unit that monitors whether the amount of transactions accumulated in the transmission queue corresponding to each distribution destination exceeds an upper limit; and
a redistribution unit that, when the monitoring by the overflow detection unit finds that the transaction amount exceeds the upper limit, re-selects the distribution destination of the transactions stored in excess of the upper limit. - The load control device according to claim 1, wherein the distribution control unit selects, based on the data area that the transaction targets for processing, the distribution destination with the highest processing performance for that data area.
- The load control device according to claim 1 or 2, wherein the redistribution unit selects, based on the data area that the transaction targets for processing, the distribution destination with the highest processing performance for that data area from among the distribution destinations whose transmission queues have not exceeded their upper limits on accumulated transactions.
- The load control device according to any one of claims 1 to 3, wherein each distribution destination is configured as a combination of the processing devices that minimizes power consumption.
- The load control device according to any one of claims 1 to 4, wherein the power on/off state of each computing resource is controlled based on the expected transaction amount, and
the distribution control unit selects the distribution destination based on the power on/off state of each computing resource. - The load control device according to any one of claims 1 to 5, further comprising a timer that determines the timing at which the overflow detection unit monitors the amount of transactions accumulated in the transmission queue corresponding to each distribution destination,
wherein the distribution control unit periodically monitors the amount of transactions accumulated in the transmission queues according to that timing. - A load control method for distributing transaction processing to a plurality of computing resources, comprising:
receiving a transaction processing request;
selecting an appropriate distribution destination for the received transaction and storing the transaction in a transmission queue provided for each distribution destination;
transmitting the transaction data stored in the transmission queues to the corresponding distribution destinations;
monitoring whether the amount of transactions accumulated in the transmission queue corresponding to each distribution destination exceeds an upper limit; and
when the monitoring finds that the transaction amount exceeds the upper limit, re-selecting the distribution destination of the transactions stored in excess of the upper limit. - A program that causes a computer
to function as a load control device that distributes transaction processing to a plurality of computing resources, the program causing the computer to function as:
a reception unit that receives transaction processing requests;
a distribution control unit that selects an appropriate distribution destination for each received transaction and stores the transaction in a transmission queue provided for each distribution destination;
a transmission unit that transmits the transaction data stored in the transmission queues to the corresponding distribution destinations;
an overflow detection unit that monitors whether the amount of transactions accumulated in the transmission queue corresponding to each distribution destination exceeds an upper limit; and
a redistribution unit that, when the monitoring by the overflow detection unit finds that the transaction amount exceeds the upper limit, re-selects the distribution destination of the transactions stored in excess of the upper limit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/576,469 US9086910B2 (en) | 2010-02-05 | 2011-01-11 | Load control device |
CN2011800079925A CN102782653A (zh) | 2010-02-05 | 2011-01-11 | 负载控制设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010023943A JP5515810B2 (ja) | 2010-02-05 | 2010-02-05 | 負荷制御装置 |
JP2010-023943 | 2010-02-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011096249A1 true WO2011096249A1 (ja) | 2011-08-11 |
Family
ID=44355259
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/050261 WO2011096249A1 (ja) | 2010-02-05 | 2011-01-11 | 負荷制御装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9086910B2 (ja) |
JP (1) | JP5515810B2 (ja) |
CN (1) | CN102782653A (ja) |
WO (1) | WO2011096249A1 (ja) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5586551B2 (ja) * | 2011-09-21 | 2014-09-10 | 東芝テック株式会社 | コンピュータシステム、及びジョブ制御方法 |
JP6246603B2 (ja) * | 2014-01-21 | 2017-12-13 | ルネサスエレクトロニクス株式会社 | タスクスケジューラ機構、オペレーティングシステム及びマルチプロセッサシステム |
CN109933415B (zh) * | 2017-12-19 | 2021-05-04 | 中国移动通信集团河北有限公司 | 数据的处理方法、装置、设备及介质 |
US11182205B2 (en) * | 2019-01-02 | 2021-11-23 | Mellanox Technologies, Ltd. | Multi-processor queuing model |
CN110049350B (zh) * | 2019-04-15 | 2022-10-11 | 深圳壹账通智能科技有限公司 | 视频转码处理方法、装置、计算机设备和存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62198949A (ja) * | 1986-02-26 | 1987-09-02 | Nec Corp | マルチプロセツサ・システムの動作制御方式 |
JPH0981399A (ja) * | 1995-09-13 | 1997-03-28 | Hitachi Ltd | バッチシステム |
JPH09212467A (ja) * | 1996-01-30 | 1997-08-15 | Fujitsu Ltd | 負荷分散制御システム |
JP2004192612A (ja) * | 2002-12-09 | 2004-07-08 | Internatl Business Mach Corp <Ibm> | 区分化されたデータ処理システムにおける電力節減 |
JP2005196262A (ja) * | 2003-12-26 | 2005-07-21 | Hitachi Ltd | 処理スケジュールの管理方法、リソース情報の作成方法、サーバ、クライアント、処理スケジュールの管理プログラム、リソース情報の作成プログラム |
JP2008015888A (ja) * | 2006-07-07 | 2008-01-24 | Hitachi Ltd | 負荷分散制御システム及び負荷分散制御方法 |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5095421A (en) * | 1989-08-17 | 1992-03-10 | International Business Machines Corporation | Transaction processing facility within an operating system environment |
JPH0887473A (ja) * | 1994-09-16 | 1996-04-02 | Toshiba Corp | データ処理装置 |
US6226377B1 (en) * | 1998-03-06 | 2001-05-01 | Avaya Technology Corp. | Prioritized transaction server allocation |
US6578068B1 (en) * | 1999-08-31 | 2003-06-10 | Accenture Llp | Load balancer in environment services patterns |
US7376693B2 (en) * | 2002-02-08 | 2008-05-20 | Jp Morgan Chase & Company | System architecture for distributed computing and method of using the system |
US7096335B2 (en) * | 2003-08-27 | 2006-08-22 | International Business Machines Corporation | Structure and method for efficient management of memory resources |
US7302450B2 (en) * | 2003-10-02 | 2007-11-27 | International Business Machines Corporation | Workload scheduler with resource optimization factoring |
US7430290B2 (en) * | 2003-10-23 | 2008-09-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Virtual queuing support system and method |
JP4265377B2 (ja) | 2003-11-12 | 2009-05-20 | 日本電気株式会社 | 負荷分散方法及び装置とシステム並びにプログラム |
WO2005083574A1 (ja) * | 2004-03-02 | 2005-09-09 | Matsushita Electric Industrial Co., Ltd. | 機器制御サーバ及び機器制御方法 |
US7606154B1 (en) * | 2004-04-01 | 2009-10-20 | Juniper Networks, Inc. | Fair bandwidth allocation based on configurable service classes |
US7712102B2 (en) * | 2004-07-30 | 2010-05-04 | Hewlett-Packard Development Company, L.P. | System and method for dynamically configuring a plurality of load balancers in response to the analyzed performance data |
US7881961B2 (en) * | 2005-02-10 | 2011-02-01 | International Business Machines Corporation | Method and system of managing a business process |
JP4378335B2 (ja) * | 2005-09-09 | 2009-12-02 | インターナショナル・ビジネス・マシーンズ・コーポレーション | ディスクへのトランザクション・データ書き込みの方式を動的に切り替える装置、切り替える方法、及び切り替えるプログラム |
US8036372B2 (en) * | 2005-11-30 | 2011-10-11 | Avaya Inc. | Methods and apparatus for dynamically reallocating a preferred request to one or more generic queues |
JP4605036B2 (ja) * | 2006-01-27 | 2011-01-05 | 日本電気株式会社 | 計算機システム、管理サーバ、計算機設定時間を低減する方法およびプログラム |
US7992151B2 (en) * | 2006-11-30 | 2011-08-02 | Intel Corporation | Methods and apparatuses for core allocations |
US8037329B2 (en) * | 2007-01-31 | 2011-10-11 | Hewlett-Packard Development Company, L.P. | Systems and methods for determining power consumption profiles for resource users and using the profiles for resource allocation |
US8201219B2 (en) * | 2007-09-24 | 2012-06-12 | Bridgewater Systems Corp. | Systems and methods for server load balancing using authentication, authorization, and accounting protocols |
US8856196B2 (en) | 2008-07-22 | 2014-10-07 | Toyota Jidosha Kabushiki Kaisha | System and method for transferring tasks in a multi-core processor based on trial execution and core node |
- 2010
  - 2010-02-05 JP JP2010023943A patent/JP5515810B2/ja active Active
- 2011
  - 2011-01-11 CN CN2011800079925A patent/CN102782653A/zh active Pending
  - 2011-01-11 WO PCT/JP2011/050261 patent/WO2011096249A1/ja active Application Filing
  - 2011-01-11 US US13/576,469 patent/US9086910B2/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62198949A (ja) * | 1986-02-26 | 1987-09-02 | Nec Corp | マルチプロセツサ・システムの動作制御方式 |
JPH0981399A (ja) * | 1995-09-13 | 1997-03-28 | Hitachi Ltd | バッチシステム |
JPH09212467A (ja) * | 1996-01-30 | 1997-08-15 | Fujitsu Ltd | 負荷分散制御システム |
JP2004192612A (ja) * | 2002-12-09 | 2004-07-08 | Internatl Business Mach Corp <Ibm> | 区分化されたデータ処理システムにおける電力節減 |
JP2005196262A (ja) * | 2003-12-26 | 2005-07-21 | Hitachi Ltd | 処理スケジュールの管理方法、リソース情報の作成方法、サーバ、クライアント、処理スケジュールの管理プログラム、リソース情報の作成プログラム |
JP2008015888A (ja) * | 2006-07-07 | 2008-01-24 | Hitachi Ltd | 負荷分散制御システム及び負荷分散制御方法 |
Also Published As
Publication number | Publication date |
---|---|
JP5515810B2 (ja) | 2014-06-11 |
JP2011164736A (ja) | 2011-08-25 |
US9086910B2 (en) | 2015-07-21 |
CN102782653A (zh) | 2012-11-14 |
US20130104131A1 (en) | 2013-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11593152B1 (en) | Application hosting in a distributed application execution system | |
KR101781063B1 (ko) | 동적 자원 관리를 위한 2단계 자원 관리 방법 및 장치 | |
US8468246B2 (en) | System and method for allocating resources in a distributed computing system | |
US7877482B1 (en) | Efficient application hosting in a distributed application execution system | |
JP4255457B2 (ja) | エラー処理方法 | |
US8671134B2 (en) | Method and system for data distribution in high performance computing cluster | |
US20080282253A1 (en) | Method of managing resources within a set of processes | |
US20090187658A1 (en) | System for Allocating Resources in a Distributed Computing System | |
KR101680109B1 (ko) | 복수 코어 장치 및 그의 로드 조정 방법 | |
US9329937B1 (en) | High availability architecture | |
US10305724B2 (en) | Distributed scheduler | |
JP2006350780A (ja) | キャッシュ割当制御方法 | |
JP5515810B2 (ja) | 負荷制御装置 | |
US9635102B2 (en) | Broker module for managing and monitoring resources between internet service providers | |
US8914582B1 (en) | Systems and methods for pinning content in cache | |
US10606650B2 (en) | Methods and nodes for scheduling data processing | |
KR20200080458A (ko) | 클라우드 멀티-클러스터 장치 | |
CN112685167A (zh) | 资源使用方法、电子设备和计算机程序产品 | |
JP6191361B2 (ja) | 情報処理システム、情報処理システムの制御方法及び制御プログラム | |
JP2013206041A (ja) | 通信システム及び負荷分散処理装置 | |
WO2009113172A1 (ja) | ジョブ割当装置、ジョブ割当装置の制御プログラム及び制御方法 | |
JP5722247B2 (ja) | 仮想サーバ管理システム | |
JP2010146382A (ja) | 負荷分散システム、負荷分散方法、および負荷分散プログラム | |
WO2015155571A1 (en) | Elasticity engine for availability management framework (amf) | |
JP5857144B2 (ja) | 仮想サーバ管理システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180007992.5 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11739598 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13576469 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11739598 Country of ref document: EP Kind code of ref document: A1 |