JP2008146503A5 - - Google Patents

Info

Publication number
JP2008146503A5
JP2008146503A5 (application JP2006335130A)
Authority
JP
Japan
Prior art keywords
task
tasks
processor
executed
executing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2006335130A
Other languages
Japanese (ja)
Other versions
JP4756553B2 (en)
JP2008146503A (en)
Filing date
Publication date
Application filed
Priority to JP2006335130A
Priority claimed from JP2006335130A
Publication of JP2008146503A
Publication of JP2008146503A5
Application granted
Publication of JP4756553B2
Legal status: Active
Anticipated expiration

Claims (8)

1. A distributed processing method for a multiprocessor system including a plurality of processors, wherein, in a multitasking environment in which the computing resources of each processor are time-shared among a plurality of tasks so that the tasks run in parallel, a pipeline processing system for executing a specific process composed of a plurality of tasks of differing loads is constructed by passing the execution result of one task to another task; a plurality of such pipeline processing systems are operated; and a task that is in an executable state, with its context saved in main memory, is assigned to and executed on a processor that is not executing any task, whereby, among the tasks of the specific process executed by the plurality of pipeline processing systems, high-load tasks whose processing time exceeds a predetermined threshold are assigned to and executed on different processors.
2. The distributed processing method according to claim 1, wherein the pipeline processing systems are operated in a number equal to an integer value not exceeding the value obtained by dividing the number of processors by the number of high-load tasks constituting the specific process.
3. The distributed processing method according to claim 1 or 2, wherein input/output channels between tasks are constructed from a configuration file that describes the input/output relationships of the data exchanged between the tasks of the specific process executed in the pipeline processing system, and stream communication between tasks is performed through those input/output channels.
4. The distributed processing method according to claim 3, further comprising dynamically modifying the configuration file by inserting a new task, in series or in parallel, into an input/output path of a task described in the configuration file.
5. The distributed processing method according to claim 3, wherein the tasks of the specific process executed in the pipeline processing system are each executed by the processor to which they are assigned, and each processor receives input from the input channel of its assigned task, processes the task, and outputs the execution result to the task's output channel.
6. An operating system that runs on a multiprocessor system including a plurality of processors and causes the multiprocessor system to realize: a function of constructing, in a multitasking environment in which the computing resources of each processor are time-shared among a plurality of tasks so that the tasks run in parallel, a pipeline processing system for executing a specific process composed of a plurality of tasks of differing loads by passing the execution result of one task to another task, and of operating a plurality of such pipeline processing systems; and a function of assigning a task that is in an executable state, with its context saved in main memory, to a processor that is not executing any task, and executing it.
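Claims 3 to 5 describe building inter-task input/output channels from a configuration file and splicing new tasks into the described paths. A minimal sketch of that idea, using a hypothetical configuration format and queue-backed channels (none of these names come from the patent):

```python
import queue

# Hypothetical configuration: each entry maps a producer task to its consumer,
# mirroring the input/output relationships the configuration file describes.
CONFIG = [
    ("decode", "filter"),
    ("filter", "encode"),
]

def build_channels(config):
    """Create one stream channel per producer->consumer edge in the config."""
    channels = {}
    for producer, consumer in config:
        channels[(producer, consumer)] = queue.Queue()
    return channels

def insert_task(config, new_task, after):
    """Dynamically splice a new task in series after an existing one (claim 4)."""
    updated = []
    for producer, consumer in config:
        if producer == after:
            updated.append((after, new_task))
            updated.append((new_task, consumer))
        else:
            updated.append((producer, consumer))
    return updated

channels = build_channels(CONFIG)
channels[("decode", "filter")].put("frame-0")  # decode's output channel
item = channels[("decode", "filter")].get()    # filter's input channel

# Splice a "denoise" task between decode and filter, then rebuild the channels.
new_config = insert_task(CONFIG, "denoise", after="decode")
# new_config: [("decode", "denoise"), ("denoise", "filter"), ("filter", "encode")]
```

In a real system each channel would be shared between two tasks running on different processors; a blocking queue per edge is one common way to realize the stream communication the claims describe.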
7. A multiprocessor system comprising a main processor for control, a plurality of sub-processors for computation each having a local memory, and a shared memory, wherein the operating system running on the plurality of sub-processors includes: a function of constructing, in a multitasking environment in which the computing resources of each sub-processor are time-shared among a plurality of tasks so that the tasks run in parallel, a pipeline processing system for executing a specific process composed of a plurality of tasks of differing loads by passing the execution result of one task to another task, and of operating a plurality of such pipeline processing systems; and a function of loading a task that is in an executable state, with its context saved in the shared memory, into the local memory of a sub-processor that is not executing any task, and executing it.
8. The multiprocessor system according to claim 7, wherein the tasks assigned to the sub-processors are executed while exchanging data with one another through communication channels, without the intervention of the main processor.
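Claim 2 bounds the number of concurrently operated pipeline instances by the integer part of (number of processors) / (number of high-load tasks). A small illustration of that computation; the helper name is ours, not the patent's:

```python
def max_pipeline_instances(num_processors, num_high_load_tasks):
    """Integer value not exceeding processors / high-load tasks (claim 2).

    Running more instances than this would force two high-load tasks to
    share a processor, defeating the distribution required by claim 1.
    """
    if num_high_load_tasks == 0:
        raise ValueError("specific process must contain at least one high-load task")
    return num_processors // num_high_load_tasks

# Example: 8 sub-processors, a specific process with 3 high-load stages.
print(max_pipeline_instances(8, 3))  # -> 2
```

With 8 processors and 3 high-load stages, 8 / 3 ≈ 2.67, so at most 2 pipeline instances run, leaving 2 processors free for the lighter stages.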
JP2006335130A 2006-12-12 2006-12-12 Distributed processing method, operating system, and multiprocessor system Active JP4756553B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006335130A JP4756553B2 (en) 2006-12-12 2006-12-12 Distributed processing method, operating system, and multiprocessor system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006335130A JP4756553B2 (en) 2006-12-12 2006-12-12 Distributed processing method, operating system, and multiprocessor system

Publications (3)

Publication Number Publication Date
JP2008146503A (en) 2008-06-26
JP2008146503A5 (en) 2010-01-21
JP4756553B2 (en) 2011-08-24

Family

ID=39606588

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006335130A Active JP4756553B2 (en) 2006-12-12 2006-12-12 Distributed processing method, operating system, and multiprocessor system

Country Status (1)

Country Link
JP (1) JP4756553B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101275698B1 (en) * 2008-11-28 2013-06-17 Shanghai Xinhao (Bravechips) Microelectronics Co., Ltd. Data processing method and device
WO2010110183A1 (en) * 2009-03-23 2010-09-30 日本電気株式会社 Distributed processing system, interface, storage device, distributed processing method, distributed processing program
JP5718558B2 (en) * 2009-09-16 2015-05-13 富士ゼロックス株式会社 Image data processing device
KR101710910B1 (en) * 2010-09-27 2017-03-13 삼성전자 주식회사 Method and apparatus for dynamic resource allocation of processing unit
JP5630396B2 (en) * 2011-07-27 2014-11-26 高田 周一 DMA controller
US20150032922A1 (en) * 2012-02-28 2015-01-29 Nec Corporation Computer system, method of processing the same, and computer readable medium
JP5887418B2 (en) * 2012-09-14 2016-03-16 株式会社日立製作所 Stream data multiplex processing method
JP2015088112A (en) 2013-11-01 2015-05-07 ソニー株式会社 Control device, processing device, and information processing method
CN112261314B (en) * 2020-09-24 2023-09-15 北京美摄网络科技有限公司 Video description data generation system, method, storage medium and equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0612392A (en) * 1992-03-19 1994-01-21 Fujitsu Ltd Method and system for decentralizing computer resource
JPH0784967A (en) * 1993-09-14 1995-03-31 Hitachi Ltd Process pipeline processing system
JP3680446B2 (en) * 1996-10-11 2005-08-10 富士ゼロックス株式会社 Pipeline control device and data processing method
JP2000353099A (en) * 1999-06-01 2000-12-19 Tektronix Inc Flow control method in active pipeline
NL1015579C1 (en) * 2000-06-30 2002-01-02 Thales Nederland Bv Method for automatically distributing program tasks over a collection of processors.
US7360219B2 (en) * 2002-12-13 2008-04-15 Hewlett-Packard Development Company, L.P. Systems and methods for facilitating fair and efficient scheduling of processes among multiple resources in a computer system
JP2006099579A (en) * 2004-09-30 2006-04-13 Toshiba Corp Information processor and information processing method
JP3964896B2 (en) * 2004-09-30 2007-08-22 株式会社東芝 Resource allocation apparatus and resource allocation method

Similar Documents

Publication Publication Date Title
JP2008146503A5 (en)
Wu et al. Flep: Enabling flexible and efficient preemption on gpus
US11163677B2 (en) Dynamically allocated thread-local storage
CN110308982B (en) Shared memory multiplexing method and device
EP2989540A2 (en) Controlling tasks performed by a computing system
Navarro et al. Strategies for maximizing utilization on multi-CPU and multi-GPU heterogeneous architectures
US10318261B2 (en) Execution of complex recursive algorithms
KR100694212B1 (en) Distribution operating system functions for increased data processing performance in a multi-processor architecture
US20150154054A1 (en) Information processing device and method for assigning task
Maroosi et al. Parallel and distributed computing models on a graphics processing unit to accelerate simulation of membrane systems
CN116414464B (en) Method and device for scheduling tasks, electronic equipment and computer readable medium
Madhu et al. Compiling HPC kernels for the REDEFINE CGRA
Grossman et al. A pluggable framework for composable HPC scheduling libraries
Buono et al. Optimizing message-passing on multicore architectures using hardware multi-threading
Gou et al. Addressing GPU on-chip shared memory bank conflicts using elastic pipeline
US8601236B2 (en) Configurable vector length computer processor
Schmaus et al. System Software for Resource Arbitration on Future Many-Architectures
Barthou et al. SPAGHETtI: Scheduling/placement approach for task-graphs on HETerogeneous architecture
Han et al. GPU-SAM: Leveraging multi-GPU split-and-merge execution for system-wide real-time support
Vert et al. Maintenance of sustainable operation of pipeline-parallel computing systems in the cloud environment
US20170330303A1 (en) Analysis system and method for reducing the control flow divergence in the Graphics Processing Units (GPUs)
Yamashita et al. Bulk execution of the dynamic programming for the optimal polygon triangulation problem on the GPU
Belviranli et al. A paradigm shift in GP-GPU computing: task based execution of applications with dynamic data dependencies
Shipman et al. Analysis of Application Sensitivity to System Performance Variability in a Dynamic Task Based Runtime.
Nguyen et al. Lu factorization: Towards hiding communication overheads with a lookahead-free algorithm