CN108874518B - Task scheduling method and terminal - Google Patents

Task scheduling method and terminal

Info

Publication number
CN108874518B
CN108874518B (Application CN201810486336.1A)
Authority
CN
China
Prior art keywords
priority
request
storage space
request packet
scheduler
Prior art date
Legal status
Active
Application number
CN201810486336.1A
Other languages
Chinese (zh)
Other versions
CN108874518A (en)
Inventor
潘仰明
吕灼恒
Current Assignee
Fujian Digital Fujian Cloud Computing Operation Co ltd
Dawning Information Industry Beijing Co Ltd
Original Assignee
Fujian Digital Fujian Cloud Computing Operation Co ltd
Dawning Information Industry Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian Digital Fujian Cloud Computing Operation Co ltd, Dawning Information Industry Beijing Co Ltd filed Critical Fujian Digital Fujian Cloud Computing Operation Co ltd
Priority to CN201810486336.1A
Publication of CN108874518A
Application granted
Publication of CN108874518B
Active (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/48 Indexing scheme relating to G06F 9/48
    • G06F 2209/484 Precedence

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The task scheduling method and terminal provided by the invention store the priority of each received computing task request, together with the corresponding request packet, in a preset storage space. The first request packet, i.e. the one corresponding to the computing task with the highest priority in the storage space, is sent to a scheduler, which sends the first serial data in the first request packet to a CPU for serial processing and the first parallel data to a GPU for parallel processing. This removes the computing-performance bottleneck of a traditional CPU-only design and improves the computing efficiency of the task. While the CPU or GPU is processing data, the scheduler sleeps, which improves the operating efficiency of the system and extends the service life of the scheduler. In addition, each scheduling round processes the request packet with the highest priority currently in the storage space, so important computing tasks are completed in time.

Description

Task scheduling method and terminal
Technical Field
The invention relates to the technical field of task scheduling, in particular to a task scheduling method and a terminal.
Background
With the rapid growth of Internet users and the rapid expansion of data volume, the demand placed on data centers for task computation is increasing quickly. The computing demand of artificial intelligence has far exceeded the capability of traditional CPU processors. To further improve the computing performance available for artificial intelligence, much related research has been done in both academia and industry, but the following shortcomings remain: because the computational load of a computing task is large, a traditional CPU alone cannot meet the requirement; and when several computing tasks exist at the same time, relatively important computing tasks may not be completed in time, causing losses for users.
Disclosure of Invention
In view of this, the present invention provides a task scheduling method and a terminal that improve the computing efficiency of computing tasks.
To achieve this purpose, the invention adopts the following technical solution:
The invention provides a task scheduling method, which comprises the following steps:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
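To make the flow of S1 to S7 concrete, the following is a minimal, hypothetical Python sketch of the scheduling loop. It is only an illustration of the described flow, not the patented implementation: the names submit, scheduler_loop, process_serial_on_cpu and process_parallel_on_gpu are assumptions introduced here, and plain Python threads stand in for the CPU and GPU processing paths.

```python
import heapq
import itertools
import threading

storage_space = []                 # preset storage space, kept as a max-priority heap
_sequence = itertools.count()      # tie-breaker so equal priorities keep arrival order

def submit(priority, serial_data, parallel_data):
    """S1-S3: store the priority and the corresponding request packet."""
    packet = {"serial": serial_data, "parallel": parallel_data}
    heapq.heappush(storage_space, (-priority, next(_sequence), packet))

def scheduler_loop(process_serial_on_cpu, process_parallel_on_gpu):
    """S4-S7: dispatch the highest-priority packet, wait for CPU and GPU, delete it, repeat."""
    while storage_space:                                    # S7: until the storage space is empty
        _, _, first = storage_space[0]                      # S4: packet with the highest priority
        cpu = threading.Thread(target=process_serial_on_cpu, args=(first["serial"],))
        gpu = threading.Thread(target=process_parallel_on_gpu, args=(first["parallel"],))
        cpu.start(); gpu.start()                            # serial data -> CPU, parallel data -> GPU
        cpu.join(); gpu.join()                              # S5: the scheduler "sleeps" until both completion messages arrive
        heapq.heappop(storage_space)                        # S6: delete the packet and its priority, then continue
```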
The invention also provides a task scheduling terminal, which comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
The beneficial effects of the above technical solution are:
The invention provides a task scheduling method and a terminal. The priority of each received computing task request and the corresponding request packet are stored in a preset storage space, and the first request packet, i.e. the one corresponding to the computing task with the highest priority in the storage space, is sent to a scheduler, so that the scheduler sends the first serial data in the first request packet to a CPU for serial processing and the first parallel data to a GPU for parallel processing. Because the serial data of the first computing task request is handled by the CPU while its parallel data is handled by the GPU, the computing-performance bottleneck of a traditional CPU-only design is removed and the computing efficiency of the task is improved. The scheduler sleeps while the CPU or GPU is processing data, which improves the operating efficiency of the system and extends the scheduler's service life. In addition, each scheduling round processes the request packet with the highest priority currently in the storage space, so important computing tasks are completed in time.
Drawings
FIG. 1 is a schematic diagram illustrating the main steps of a task scheduling method according to the present invention;
FIG. 2 is a schematic structural diagram of a task scheduling terminal according to the present invention;
The reference numbers denote:
1. memory; 2. processor.
Detailed Description
The invention is further described below with reference to the following figures and specific examples:
As shown in Fig. 1, the task scheduling method provided by the present invention includes the following steps:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
As can be seen from the above description, in the task scheduling method and terminal provided by the present invention, the priority of each received computing task request and the corresponding request packet are stored in a preset storage space, and the first request packet, i.e. the one corresponding to the computing task with the highest priority in the storage space, is sent to the scheduler, so that the scheduler sends the first serial data in the first request packet to the CPU for serial processing and the first parallel data to the GPU for parallel processing. Because the serial data of the first computing task request is handled by the CPU while its parallel data is handled by the GPU, the computing-performance bottleneck of a traditional CPU-only design is removed and the computing efficiency of the task is improved. The scheduler sleeps while the CPU or GPU is processing data, which improves the operating efficiency of the system and extends the scheduler's service life. In addition, each scheduling round processes the request packet with the highest priority currently in the storage space, so important computing tasks are completed in time.
Further, before S1, the method comprises:
starting the scheduler.
As can be seen from the above description, the scheduler is started before task scheduling begins, to ensure that computing tasks are scheduled in time.
Further, S3 specifically comprises:
obtaining a priority file for each computing task request according to the priority of that request, wherein the priority file stores the priority of the request;
associating the request packet and the priority file of each computing task request to obtain the corresponding association information, wherein the association information comprises the request packet and its priority file;
storing the association information of each computing task request in the preset storage space;
and sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
As can be seen from the above description, because the association information in the storage space is kept sorted, the entry with the highest priority can be retrieved quickly, which improves data-processing efficiency.
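A minimal sketch of this refinement of S3, assuming that a request packet and its priority file can be represented as plain Python dictionaries; the helper names build_association and store_sorted, and the dictionary keys used below, are illustrative only.

```python
def build_association(request_packet, priority):
    """Create a priority file for the request and associate it with the request packet."""
    priority_file = {"priority": priority}          # the priority file stores the request's priority
    return {"packet": request_packet, "priority_file": priority_file}

def store_sorted(storage_space, associations):
    """Store the association information, then sort it in descending order of priority."""
    storage_space.extend(associations)
    storage_space.sort(key=lambda a: a["priority_file"]["priority"], reverse=True)
    return storage_space

# After sorting, storage_space[0] is always the association with the highest priority,
# so the entry to dispatch in S4 can be fetched without searching.
```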
Further, S6 specifically comprises:
when the message that the first serial data has been processed, sent by the CPU, and the message that the first parallel data has been processed, sent by the GPU, are both received, deleting the association information of the first request packet from the storage space and then starting the scheduler again.
As can be seen from the above description, after the CPU finishes processing the first serial data and the GPU finishes processing the first parallel data, the association information corresponding to the first request packet (which contains the first request packet) is deleted from the storage space. This avoids processing the same computing task repeatedly during scheduling and speeds up retrieval of the association information that then has the highest priority in the storage space.
Further, a second request packet corresponding to a computing task request is received in real time;
a first priority corresponding to the second request packet is obtained according to the priority configuration file in the second request packet;
a first priority file is obtained according to the first priority;
the second request packet is associated with the first priority file to obtain first association information;
and the first association information is stored in the storage space.
As can be seen from the above description, request packets are received in real time, so that a newly arrived computing task with a higher priority is not left unprocessed.
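Building on the previous sketch, a newly received second request packet could be placed directly at the correct position of the descending-sorted storage space; the bisect-based insertion below is an assumed shortcut for the "store, then re-sort" behaviour described here, and the helper name receive_second_packet is hypothetical.

```python
import bisect

def receive_second_packet(storage_space, second_packet, first_priority):
    """Associate the new packet with its priority file and keep storage_space sorted, descending."""
    first_priority_file = {"priority": first_priority}
    first_association = {"packet": second_packet, "priority_file": first_priority_file}
    # bisect expects ascending keys, so insert by the negated priority.
    keys = [-a["priority_file"]["priority"] for a in storage_space]
    position = bisect.bisect_right(keys, -first_priority)
    storage_space.insert(position, first_association)
    return first_association
```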
Further, after the first association information is stored in the storage space, the method further comprises:
sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
As can be seen from the above description, this ensures that a newly received high-priority computing task request is processed in time.
Further, when a request packet corresponding to a computing task request is received, the request packet is also stored in a preset first storage space.
As can be seen from the above description, storing each received request packet in the preset first storage space provides a backup: if a computing task request fails, its request packet can be retrieved again from the first storage space, ensuring that every computing task can be processed successfully.
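One possible way to realize this backup, sketched under the assumption that every computing task request carries an identifier; the dictionary-based first_storage_space and the helper functions below are illustrative and not part of the patented terminal.

```python
first_storage_space = {}      # preset first storage space: request id -> backup copy of the packet

def backup_request(request_id, request_packet):
    """Keep a copy of every received request packet so a failed request can be replayed."""
    first_storage_space[request_id] = dict(request_packet)

def recover_request(request_id):
    """If a computing task request fails, fetch its request packet again from the backup."""
    return first_storage_space.get(request_id)
```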
As shown in Fig. 2, the present invention provides a task scheduling terminal, which includes a memory 1, a processor 2 and a computer program stored in the memory 1 and executable on the processor 2, wherein the processor 2 implements the following steps when executing the computer program:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
Further, in the task scheduling terminal, before S1 the following is performed:
starting the scheduler.
Further, in the task scheduling terminal, S3 specifically comprises:
obtaining a priority file for each computing task request according to the priority of that request, wherein the priority file stores the priority of the request;
associating the request packet and the priority file of each computing task request to obtain the corresponding association information, wherein the association information comprises the request packet and its priority file;
storing the association information of each computing task request in the preset storage space;
and sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
Further, in the task scheduling terminal, S6 specifically comprises:
when the message that the first serial data has been processed, sent by the CPU, and the message that the first parallel data has been processed, sent by the GPU, are both received, deleting the association information of the first request packet from the storage space and then starting the scheduler again.
Further, the task scheduling terminal receives a second request packet corresponding to a computing task request in real time;
obtains a first priority corresponding to the second request packet according to the priority configuration file in the second request packet;
obtains a first priority file according to the first priority;
associates the second request packet with the first priority file to obtain first association information;
and stores the first association information in the storage space.
Further, after storing the first association information in the storage space, the task scheduling terminal further sorts the association information entries in the storage space in descending order of the priorities recorded in their priority files.
Further, when the task scheduling terminal receives a request packet corresponding to a computing task request, it also stores the request packet in a preset first storage space.
Some preferred embodiments and application examples are listed below to help those skilled in the art better understand the technical content of the present invention and its contribution over the prior art:
the first preferred embodiment (or the first application embodiment) is:
The invention provides a task scheduling method, which comprises the following steps:
S0: starting a scheduler;
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
wherein each computing task corresponds to a specific priority value, i.e. a numeric value expressing its priority;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
wherein S3 specifically comprises:
obtaining a priority file for each computing task request according to the priority of that request, wherein the priority file stores the priority of the request;
associating the request packet and the priority file of each computing task request to obtain the corresponding association information, wherein the association information comprises the request packet and its priority file;
storing the association information of each computing task request in the preset storage space;
sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
wherein S6 specifically comprises:
when the message that the first serial data has been processed, sent by the CPU, and the message that the first parallel data has been processed, sent by the GPU, are both received, deleting the association information of the first request packet from the storage space and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
The second preferred embodiment (or the second application embodiment) is:
the second preferred embodiment is different from the first preferred embodiment in that the resource scheduling method receives a second request packet corresponding to a computing task request in real time, and stores/backs up the received second request packet in a preset first storage space;
configuring a file according to the priority in the second request packet to obtain a first priority corresponding to the second request packet;
obtaining a first priority file according to the first priority;
associating the second request packet with the first priority file to obtain first associated information;
storing the first association information in the storage space;
and arranging the associated information corresponding to each calculation task request in the storage space according to the descending order of the priority files in all the associated information of the storage space.
The third preferred embodiment (or the third application embodiment) is:
The invention provides a task scheduling terminal, which comprises a memory 1, a processor 2 and a computer program stored in the memory 1 and runnable on the processor 2, wherein the processor 2 implements all the steps of the first or second preferred embodiment when executing the computer program.
The present invention has been described with reference to the above embodiments and the accompanying drawings; however, the above embodiments are only examples of carrying out the present invention. It should be noted that the disclosed embodiments do not limit the scope of the invention; rather, modifications and equivalent arrangements within the spirit and scope of the claims fall within the scope of the invention.

Claims (10)

1. A task scheduling method, characterized by comprising the following steps:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
2. The task scheduling method according to claim 1, wherein before S1 the method further comprises:
starting the scheduler.
3. The task scheduling method according to claim 1, wherein S3 specifically comprises:
obtaining a priority file for each computing task request according to the priority of that request, wherein the priority file stores the priority of the request;
associating the request packet and the priority file of each computing task request to obtain the corresponding association information, wherein the association information comprises the request packet and its priority file;
storing the association information of each computing task request in the preset storage space;
and sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
4. The task scheduling method according to claim 3, wherein S6 specifically comprises:
when the message that the first serial data has been processed, sent by the CPU, and the message that the first parallel data has been processed, sent by the GPU, are both received, deleting the association information of the first request packet from the storage space and then starting the scheduler again.
5. The task scheduling method according to claim 3, wherein a second request packet corresponding to a computing task request is received in real time;
a first priority corresponding to the second request packet is obtained according to the priority configuration file in the second request packet;
a first priority file is obtained according to the first priority;
the second request packet is associated with the first priority file to obtain first association information;
and the first association information is stored in the storage space.
6. The task scheduling method according to claim 5, further comprising, after storing the first association information in the storage space:
sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
7. The task scheduling method according to claim 1, wherein when a request packet corresponding to the computing task request is received, the request packet is stored in a preset first storage space.
8. A task scheduling terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1: receiving a request packet corresponding to a computing task request, wherein the request packet comprises serial data, parallel data and a priority configuration file;
S2: determining the priority of each computing task request according to the priority configuration file;
S3: storing the priority of each computing task request and the corresponding request packet in a preset storage space;
S4: sending the first request packet, which corresponds to the computing task request with the highest priority in the storage space, to a scheduler, so that after the scheduler obtains the first serial data and the first parallel data from the first request packet, it sends the first serial data to a CPU for serial processing and the first parallel data to a GPU for parallel processing;
S5: putting the scheduler to sleep;
S6: upon receiving the message from the CPU that the first serial data has been processed and the message from the GPU that the first parallel data has been processed, deleting the first request packet and its corresponding priority from the storage space, and then starting the scheduler again;
S7: repeating S4 to S6 until the storage space is empty.
9. The task scheduling terminal according to claim 8, wherein before S1 the following is performed:
starting the scheduler.
10. The task scheduling terminal according to claim 8, wherein S3 specifically comprises:
obtaining a priority file for each computing task request according to the priority of that request, wherein the priority file stores the priority of the request;
associating the request packet and the priority file of each computing task request to obtain the corresponding association information, wherein the association information comprises the request packet and its priority file;
storing the association information of each computing task request in the preset storage space;
and sorting the association information entries in the storage space in descending order of the priorities recorded in their priority files.
CN201810486336.1A 2018-05-21 2018-05-21 Task scheduling method and terminal Active CN108874518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810486336.1A CN108874518B (en) 2018-05-21 2018-05-21 Task scheduling method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810486336.1A CN108874518B (en) 2018-05-21 2018-05-21 Task scheduling method and terminal

Publications (2)

Publication Number Publication Date
CN108874518A CN108874518A (en) 2018-11-23
CN108874518B 2021-05-11

Family

ID=64333751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810486336.1A Active CN108874518B (en) 2018-05-21 2018-05-21 Task scheduling method and terminal

Country Status (1)

Country Link
CN (1) CN108874518B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083388B (en) * 2019-04-19 2021-11-12 上海兆芯集成电路有限公司 Processing system for scheduling and access method thereof
CN110245127A (en) * 2019-06-12 2019-09-17 成都九洲电子信息系统股份有限公司 A kind of data migration method based on Row control
CN111190735B (en) * 2019-12-30 2024-02-23 湖南大学 On-chip CPU/GPU pipelining calculation method based on Linux and computer system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8503539B2 (en) * 2010-02-26 2013-08-06 Bao Tran High definition personal computer (PC) cam
CN101958808B (en) * 2010-10-18 2012-05-23 华东交通大学 Cluster task dispatching manager used for multi-grid access
CN102521050A (en) * 2011-12-02 2012-06-27 曙光信息产业(北京)有限公司 Mix scheduling method facing central processing unit (CPU) and graphic processing unit (GPU)
CN102541640B (en) * 2011-12-28 2014-10-29 厦门市美亚柏科信息股份有限公司 Cluster GPU (graphic processing unit) resource scheduling system and method
CN103279445A (en) * 2012-09-26 2013-09-04 上海中科高等研究院 Computing method and super-computing system for computing task
CN102981807B (en) * 2012-11-08 2015-06-24 北京大学 Graphics processing unit (GPU) program optimization method based on compute unified device architecture (CUDA) parallel environment
CN104317751B (en) * 2014-11-18 2017-03-01 郑州云海信息技术有限公司 Data flow processing system and its data flow processing method on a kind of GPU
CN105893263B (en) * 2016-04-25 2018-08-03 北京智能综电信息技术有限责任公司 A kind of test assignment dispatching method
CN107102894A (en) * 2017-04-07 2017-08-29 百度在线网络技术(北京)有限公司 Method for scheduling task, device and system
CN107273331A (en) * 2017-06-30 2017-10-20 山东超越数控电子有限公司 A kind of heterogeneous computing system and method based on CPU+GPU+FPGA frameworks
CN107391429A (en) * 2017-08-07 2017-11-24 胡明建 A kind of CPU+GPU+FPGA design method

Also Published As

Publication number Publication date
CN108874518A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108874518B (en) Task scheduling method and terminal
US7844853B2 (en) Methods and apparatus for restoring a node state
US7441240B2 (en) Process scheduling apparatus, process scheduling method, program for process scheduling, and storage medium recording a program for process scheduling
CN107273542B (en) High-concurrency data synchronization method and system
CN104199739B (en) A kind of speculating type Hadoop dispatching methods based on load balancing
CN109492018B (en) Self-adaptive dynamic adjustment method and device for data synchronization system
WO2018018611A1 (en) Task processing method and network card
CN103412786A (en) High performance server architecture system and data processing method thereof
Yildiz et al. Chronos: Failure-aware scheduling in shared Hadoop clusters
US11392414B2 (en) Cooperation-based node management protocol
CN104683472A (en) Data transmission method capable of supporting large data volume
CN103310460A (en) Image characteristic extraction method and system
EP3104275A1 (en) Data processing method, device and system
CN104461710A (en) Method and device for processing tasks
CN115086298A (en) File transmission method and device
CN106775975B (en) Process scheduling method and device
CN110502337B (en) Optimization system for shuffling stage in Hadoop MapReduce
CN113961341A (en) Concurrent data processing method, system, device and storage medium based on Actor model
CN109426554B (en) Timing implementation method and device for server
CN103501247A (en) Method and device for processing high concurrency request
CN110825342B (en) Memory scheduling device and system, method and apparatus for processing information
CN108958967B (en) Data processing method and server
CN113821174B (en) Storage processing method, storage processing device, network card equipment and storage medium
CN110354504B (en) Method, device, server and storage medium for obtaining matching group
CN109710390B (en) Multi-task processing method and system of single-thread processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant