CN108710535A - Task scheduling system based on an intelligent processor - Google Patents
Task scheduling system based on an intelligent processor
- Publication number
- CN108710535A CN108710535A CN201810495495.8A CN201810495495A CN108710535A CN 108710535 A CN108710535 A CN 108710535A CN 201810495495 A CN201810495495 A CN 201810495495A CN 108710535 A CN108710535 A CN 108710535A
- Authority
- CN
- China
- Prior art keywords
- task
- queue
- intelligent processor
- application
- task scheduling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45587—Isolation or security of virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Abstract
The invention discloses a task scheduling system based on an intelligent processor, comprising a user layer, a system layer, and a hardware layer. The user layer includes a user interface library; the system layer includes a communication module, a task preprocessing module, a task queue, a load balancing module, and a task scheduling module; the hardware layer includes the intelligent processor's task buffer queues, an application dictionary, a communication module, and several containers. The system reduces programming difficulty under single-machine multi-intelligent-processor conditions, simplifies application deployment, isolates applications, achieves autonomous scheduling, and maximizes intelligent-processor utilization, thereby avoiding consequences such as low system throughput, long task response times, and poor user experience.
Description
Technical field
The present invention relates to computer information processing, and more particularly to a task scheduling system based on an intelligent processor.
Background technology
Deep learning technology has made breakthrough progress in fields such as image classification, speech recognition, object detection, and machine translation. For example, in the 2012 ImageNet competition, Alex Krizhevsky et al. achieved astonishing results using convolutional neural networks; the deep learning system AlphaGo defeated the South Korean Go player Lee Sedol with a total score of 4:1; and Google's neural-network-based machine translation system greatly improved translation accuracy. Deep learning has therefore received increasing attention from both academia and industry.
However, network scale grows with task complexity, and the required amount of computation surges accordingly; for example, each forward pass of the VGG16 network requires about 31G floating-point operations. High-performance, low-power accelerators compatible with deep learning algorithms have therefore become a research focus for both scientific and commercial institutions.
Common hardware acceleration technologies today include the application-specific integrated circuit ASIC (Application Specific Integrated Circuit) — i.e., the intelligent processor — the field-programmable gate array FPGA (Field Programmable Gate Array), and the graphics processing unit GPU (Graphics Processing Unit). Compared with GPUs, intelligent processors have lower energy consumption; compared with FPGAs, intelligent processors have higher performance.
However, intelligent processors have many new characteristics — such as using different numeric precisions (e.g., 16-bit fixed point, 8-bit fixed point, 1-bit), providing higher performance, offering a lower energy-consumption ratio, and supporting only a limited number of users — which make them incompatible with most task scheduling and management software. Moreover, compared with ordinary applications, deep learning applications also have new characteristics; for example, hardware utilization rises with batch size, but so does task response time. These characteristics increase programmers' programming complexity, the difficulty of application deployment, and the difficulty of scheduling tasks so as to maximize intelligent-processor utilization.
To pursue higher performance, a single machine may use multiple GPUs or intelligent processors. In this single-machine multi-card setting, tasks have more scheduling choices, which also introduces load-balancing problems; task scheduling and management become more complex, and maximizing intelligent-processor utilization becomes harder. The eventual consequences are serious: low system throughput, long task response times, and poor user experience.
Summary of the invention
The object of the present invention is to provide a task scheduling system based on an intelligent processor that reduces users' programming complexity, maximizes intelligent-processor utilization, and improves scalability. With this system, under single-machine multi-intelligent-processor conditions, one can reduce programmers' programming difficulty, simplify application deployment, isolate applications, achieve autonomous scheduling and load balancing, and maximize intelligent-processor utilization.
The technical solution of the present invention is as follows:
A task scheduling system based on an intelligent processor comprises a user layer, a system layer, and a hardware layer. The user layer includes an easy-to-use user interface library; the system layer includes a communication module, a task preprocessing module, a task queue, a load balancing module, and a task scheduling module; the hardware layer includes the intelligent processor's task buffer queues, an application dictionary, a communication module, and several containers.
In a preferred technical solution, the user interface library consists of many APIs through which applications can easily interact with the system. The APIs provided include, but are not limited to, functions for querying applications, inserting applications, sending tasks, and deleting tasks. Communication mechanisms include, but are not limited to, sockets, shared memory, pipes, named pipes, semaphores, signals, and message queues; communication patterns include asynchronous and synchronous modes.
In a preferred technical solution, the communication module receives information sent by users, determines its type, and processes it accordingly: instructions are forwarded to each intelligent-processor node, while tasks are passed to the data preprocessing module and, after preprocessing, placed in the task queue. Communication mechanisms include, but are not limited to, sockets, shared memory, pipes, named pipes, semaphores, signals, and message queues; communication patterns include asynchronous and synchronous modes.
In a preferred technical solution, the data preprocessing module analyzes the amount of data in each task and handles the task according to the result. When the data volume in a task is small, tasks are merged, which reduces the number of scheduling operations while providing higher parallelism and maximizing intelligent-processor utilization. When the data volume in a single task is large, the task is reasonably split into multiple subtasks, which are processed in parallel on multiple intelligent processors, improving the task's response time.
In a preferred technical solution, the task queue records all tasks. Each task can be subdivided into four parts: application information, input data, output data, and task status. Task statuses include, but are not limited to, the waiting, assigned, ready, executing, and completed states.
In a preferred technical solution, the task scheduling module checks the task queue. When the task queue is non-empty, the module repeatedly takes the most urgent task from the queue, schedules it onto the most suitable intelligent processor, and updates the task's status, until all tasks in the queue have been assigned or the buffer queues (hereinafter "cache queues") of all intelligent processors are full.
In a preferred technical solution, the load balancing module periodically inspects the task queue and all buffer queues. It is woken up at fixed intervals, checks the load of each intelligent processor in the system, and dynamically adjusts loads when needed, ensuring that the load of each intelligent processor remains balanced and that intelligent-processor utilization is maximized. Implementations of this timed wakeup include, but are not limited to, clock interrupts and timers.
In a preferred technical solution, the cache queue records the tasks that the system layer has sent to the node. Each task can be subdivided into four parts: application information, input data, output data, and task status. Task statuses include, but are not limited to, the waiting, assigned, ready, executing, and completed states.
In a preferred technical solution, the application dictionary stores the mapping between applications and containers and the state of each container, and manages the containers. Container states include, but are not limited to, the cached and closed states. The application dictionary provides several container scheduling algorithms to reduce container switching overhead in different environments and to maximize intelligent-processor utilization and system throughput. Scheduling algorithms include, but are not limited to, first-come-first-served (FCFS) scheduling, least-recently-used (LRU) scheduling, and round-robin scheduling. The dictionary may be implemented as, but is not limited to, an array, a linked list, a hash table, or a tree.
In a preferred technical solution, the communication module enables the intelligent-processor node to interact with the applications in the containers and to respond according to the content received. Interaction content includes, but is not limited to, sending tasks, terminating applications, and receiving docker-container state-transition information; interaction mechanisms include, but are not limited to, sockets and shared files; interaction patterns include asynchronous and synchronous modes.
In a preferred technical solution, the containers encapsulate applications using virtualization technology, so that applications are isolated from one another, different applications can be configured with different precisions, and the system's scalability is improved. A user only needs to issue an insert-application command and supply a program that meets the specification; the system automatically generates the corresponding container on each intelligent-processor node and performs the registration, reducing the user's programming complexity. Because an application's computation inside a docker container is mainly handled by the intelligent processor, while task collection, data loading, and preprocessing are mainly handled by the CPU, docker containers are given multiple states according to application characteristics, so that the loading and parsing of one task's data can overlap with the execution of another, raising system throughput. Docker container states include, but are not limited to, the start, waiting, ready, executing, and terminated states.
Compared with the prior art, the advantages of the present invention are as follows: the task scheduling system based on an intelligent processor simplifies programming under single-machine multi-intelligent-processor conditions, simplifies application deployment, isolates applications, achieves autonomous scheduling, and maximizes intelligent-processor utilization, thereby avoiding consequences such as low system throughput, long task response times, and poor user experience.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments:
Fig. 1 is the task queue system architecture diagram of the embodiment of the present invention;
Fig. 2 is the flow chart of the send-task interface of the embodiment of the present invention;
Fig. 3 is the flow chart of the system layer's communication module of the embodiment of the present invention;
Fig. 4 is the docker state transition diagram of the embodiment of the present invention.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the attached drawings.
As shown in Fig. 1, the task queue system architecture of this embodiment is as follows. The user layer contains user programs and a user interface library composed of an insert-application interface, a query-application interface, and a send-task interface; these interfaces are implemented with synchronous sockets. The system layer contains a communication module (composed of synchronous sockets and a thread pool), a data preprocessing module, a task queue, a load balancing module, and a task scheduling module. The hardware layer contains buffer queues, an application dictionary, a communication module (based on synchronous sockets), and docker containers. Between the docker containers and the host, tasks and instructions are transmitted through the communication module, while data is transmitted through shared folders.
Fig. 2 is the flow chart of this embodiment's send-task interface. First, a connection is established with the server-side communication module through a socket interface. A query opcode and the corresponding application name are then sent, and the returned result indicates whether the application has been inserted. If it has not, an error message is printed and an error code is returned, prompting the user to insert the application. If the application has been inserted, the task is sent and a response is awaited; the return code then indicates whether the task completed normally — a success code is returned on normal completion, otherwise an error message is printed and the corresponding error code is returned.
The insert-application interface is relatively simple and comprises the following steps:
(1) establish a connection with the server-side communication module through a socket interface;
(2) send the application information;
(3) receive and check the return code: a success code is returned if the application was inserted successfully; otherwise an error message is printed and the corresponding error code is returned.
Fig. 3 is the flow chart of the system layer's communication module in this embodiment. Each time a request is received, a worker thread is requested from the thread pool to handle it, so the communication module can quickly respond to requests initiated by multiple users. Based on the incoming information, the worker thread determines whether the request is an instruction or a task: an instruction is executed directly, while a task is sent to the task preprocessing module, after which the thread waits for the task to complete and checks the return code — returning a success code on normal completion and the corresponding error code otherwise.
Table 1 shows the algorithm of the task preprocessing module in this embodiment. When a task arrives, its data volume is analyzed first. If the data volume is moderate, the task is sent directly into the task queue. If it is too large, data slicing is used to cut the task into multiple subtasks, which are then sent into the task queue. If it is small, the task queue is queried for a task that can be merged with it: if one exists, the two are merged; otherwise the task is sent into the task queue.
Table 1: Algorithm of the task preprocessing module
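The preprocessing decision above can be sketched in C++. The thresholds, the `Task` fields, and the merge-by-same-application rule are illustrative assumptions — the patent does not specify concrete values or a merge criterion:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Hypothetical task record; the patent only says a task carries data.
struct Task {
    int id;
    std::string app;
    long dataSize;   // bytes of input data
};

const long kSmall = 1 << 16;   // below this, try to merge (assumed value)
const long kLarge = 1 << 24;   // above this, slice into subtasks (assumed value)

// Slice an oversized task into roughly equal subtasks.
std::vector<Task> slice(const Task& t, int parts) {
    std::vector<Task> subs;
    for (int i = 0; i < parts; ++i)
        subs.push_back({t.id, t.app, t.dataSize / parts});
    return subs;
}

// Preprocess: slice large tasks, merge small tasks of the same
// application, and pass moderate tasks through unchanged.
void preprocess(Task t, std::deque<Task>& queue) {
    if (t.dataSize >= kLarge) {                 // too large: slice
        for (const Task& s : slice(t, 4)) queue.push_back(s);
        return;
    }
    if (t.dataSize < kSmall) {                  // small: look for a merge partner
        for (Task& q : queue) {
            if (q.app == t.app && q.dataSize < kSmall) {
                q.dataSize += t.dataSize;       // batch the two requests
                return;
            }
        }
    }
    queue.push_back(t);                         // moderate, or no partner found
}
```

Merging small requests into one batch trades a slightly longer wait for fewer scheduling operations and higher parallelism, matching the trade-off described in the background section.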
The task queue is implemented using the C++ queue class. Each task instance is represented by a five-tuple: {task ID (int), application name (String), input data path (String), output data path (String), task status (int)}. The task ID is unique, and a given task can be found by its ID. When a task completes, it is first deleted by its task ID, and its return code is then checked and returned to the user.
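A minimal C++ sketch of this five-tuple record with lookup and deletion by the unique task ID. A `std::vector` is used here because deletion by ID is required; the integer status encoding is an assumption, since the patent only names the states:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Five-tuple task record; the ID is unique and is the lookup key.
struct TaskEntry {
    int id;
    std::string app;
    std::string inputPath;
    std::string outputPath;
    int status;   // assumed: 0=waiting, 1=assigned, 2=ready, 3=executing, 4=done
};

class TaskQueue {
    std::vector<TaskEntry> q_;
public:
    void push(TaskEntry t) { q_.push_back(std::move(t)); }

    TaskEntry* find(int id) {
        auto it = std::find_if(q_.begin(), q_.end(),
                               [id](const TaskEntry& t) { return t.id == id; });
        return it == q_.end() ? nullptr : &*it;
    }

    // On completion, the entry is removed by its unique ID.
    bool erase(int id) {
        auto it = std::find_if(q_.begin(), q_.end(),
                               [id](const TaskEntry& t) { return t.id == id; });
        if (it == q_.end()) return false;
        q_.erase(it);
        return true;
    }

    size_t size() const { return q_.size(); }
};
```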
Table 2 shows the algorithm of the load balancing module in this embodiment. The load balancing module is woken up periodically. On each wakeup it first checks the task queue: if the task queue still holds unassigned tasks, the system can reach a balanced state through the task scheduling module's scheduling, so the load balancing module goes to sleep until the next wakeup. If the task queue is empty, the system cannot reach balance through the task scheduling module, so the load balancing module analyzes the system's load state and judges whether the load is balanced: if so, the module sleeps until the next wakeup; if not, it performs the load-balancing operation and then sleeps until the next wakeup.
The load-balancing operation can be divided into the following steps:
(1) take the most recently cached task from the most heavily loaded node;
(2) place the removed task into the cache of the most lightly loaded node;
(3) analyze whether the system load is balanced;
(4) if balanced, the load-balancing operation is complete; otherwise repeat from step (1).
Table 2: Algorithm of the load balancing module
Table 3 shows the task scheduling algorithm in this embodiment, which uses first-come-first-served (FCFS) scheduling. The task scheduling module first queries the task queue for unscheduled tasks; if none exist, this round of scheduling ends. If unscheduled tasks exist, the module then checks whether the buffer queues are full: if all buffer queues are full, this round of scheduling ends; if some buffer queue is not full, the task-scheduling operation is performed.
The task-scheduling operation can be divided into the following steps:
(1) take a task out of the task queue;
(2) schedule the removed task into the buffer queue of the most lightly loaded node;
(3) if there are no unscheduled tasks left in the task queue, or all buffer queues are full, this round of scheduling is complete; otherwise continue from step (1).
Table 3: Task scheduling algorithm
The buffer queue is likewise implemented using the C++ queue class, with each task instance represented by the same five-tuple: {task ID (int), application name (String), input data path (String), output data path (String), task status (int)}. Tasks in a buffer queue are backups of tasks in the task queue; by binding the buffer queue to the task statuses, task execution and task scheduling can overlap, raising system throughput.
The application dictionary is implemented using the C++ map class, with the application name as the key. Each application instance is represented by a quadruple: {application name (String), docker container name (String), docker container state (int), application port number (int)}; the port number allows the device node to interact with the application inside the docker container. When the docker container corresponding to an application to be executed is not cached, two things must be considered: the initialization overhead of the container and its application, and whether the system should cache the container. When the available system resources are insufficient to cache the container, the FCFS scheduling algorithm is used, releasing the earliest-cached docker containers until the system can cache this one.
The hardware layer's communication module is responsible for the interaction between the intelligent-processor node and the applications inside the docker containers, and makes corresponding responses according to the content received. Content actively sent includes tasks and application-termination orders; content received includes docker-container state-transition notifications and responses to port queries.
Because an application's computation inside a docker container is mainly handled by the intelligent processor, while task collection, data loading, and preprocessing are mainly handled by the CPU, docker containers are divided into five states — start, waiting, ready, executing, and terminated — so that the loading and parsing of one task's data can overlap with execution, raising system throughput. Fig. 4 is the state transition diagram of a docker container. A container is initially in the start state; after successful initialization it enters the waiting state and notifies the corresponding intelligent-processor node. In the waiting state the container accepts two kinds of orders: receiving a task, or terminating the docker container. On receiving a termination order, it exits the application and closes the docker container. On receiving a task, it loads the data and preprocesses it, enters the ready state once preprocessing is done, and notifies the corresponding intelligent-processor node. In the ready state the container waits for the intelligent-processor node to allocate processing resources; after obtaining compute resources it enters the executing state, processes the task, and finally notifies the corresponding intelligent-processor node. In the executing state the container waits for the task to complete; when it does, the container releases its compute resources, reports the task's completion to the intelligent-processor node, enters the waiting state, and notifies the corresponding intelligent-processor node.
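The five states and the transitions described above (and shown in Fig. 4) can be written down as a small transition function; the event names are invented for illustration:

```cpp
#include <cassert>

// Five container states from the embodiment.
enum class State { Start, Waiting, Ready, Executing, Terminated };
enum class Event { InitDone, TaskArrived, ResourceGranted, TaskDone, Terminate };

// Transition function following the Fig. 4 description:
// Start -> Waiting on successful init; Waiting accepts either a task
// (-> Ready after preprocessing) or a terminate order; Ready -> Executing
// once the intelligent-processor node grants compute resources;
// Executing -> Waiting when the task completes and resources are released.
State step(State s, Event e) {
    switch (s) {
        case State::Start:
            if (e == Event::InitDone) return State::Waiting;
            break;
        case State::Waiting:
            if (e == Event::TaskArrived) return State::Ready;
            if (e == Event::Terminate) return State::Terminated;
            break;
        case State::Ready:
            if (e == Event::ResourceGranted) return State::Executing;
            break;
        case State::Executing:
            if (e == Event::TaskDone) return State::Waiting;
            break;
        default:
            break;
    }
    return s;   // undefined (state, event) pair: stay in the same state
}
```

The Executing -> Waiting edge is what allows the next task's data loading (CPU work) to be in flight while the current one computes on the intelligent processor.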
The above embodiments merely illustrate the technical concept and features of the present invention; their purpose is to allow those skilled in the art to understand the content of the present invention and implement it accordingly, and they are not intended to limit the scope of protection of the present invention. All modifications made according to the spirit and essence of the main technical solution of the present invention shall be covered by the scope of protection of the present invention.
Claims (11)
1. A task scheduling system based on an intelligent processor, characterized in that it comprises a user layer, a system layer, and a hardware layer; the user layer includes a user interface library; the system layer includes a communication module, a task preprocessing module, a task queue, a load balancing module, and a task scheduling module; and the hardware layer includes the intelligent processor's task buffer queues, an application dictionary, a communication module, and several containers.
2. The task scheduling system according to claim 1, characterized in that the user layer's user interface library includes several APIs through which applications interact with the task scheduling system; the APIs include functions for querying applications, inserting applications, sending tasks, and deleting tasks; the APIs' communication mechanisms include sockets, shared memory, pipes, named pipes, semaphores, signals, and message queues; and the communication patterns include asynchronous and synchronous modes.
3. The task scheduling system according to claim 1, characterized in that the system layer's communication module receives the information sent by users, determines its type, and processes it accordingly; communication mechanisms include sockets, shared memory, pipes, named pipes, semaphores, signals, and message queues; and communication patterns include asynchronous and synchronous modes.
4. The task scheduling system according to claim 3, characterized in that the task preprocessing module analyzes the amount of data in a task and handles the task according to the result: when the data volume in a task is small, tasks are merged; when the data volume in a task is large, the task is split into multiple subtasks, which are processed in parallel on multiple intelligent processors.
5. The task scheduling system according to claim 4, characterized in that the task queue records all tasks, each task being divided into four parts: application information, input data, output data, and task status; the task statuses include the waiting, assigned, ready, executing, and completed states.
6. The task scheduling system according to claim 5, characterized in that the task scheduling module checks the task queue; when the task queue is non-empty, it repeatedly takes the most urgent task from the queue, schedules it onto the most suitable intelligent processor, and updates the task's status, until all tasks in the task queue have been assigned or the task buffer queues of all intelligent processors are full.
7. The task scheduling system according to claim 6, characterized in that the load balancing module periodically inspects the task queue and all task buffer queues; the load balancing module is woken up at fixed intervals, checks the load of each intelligent processor in the system, and dynamically adjusts the load when needed, ensuring that the load of each intelligent processor remains balanced; implementations of the module's timed wakeup include clock interrupts and timers.
8. The task scheduling system according to claim 1, characterized in that the intelligent processor's buffer queue records the tasks that the system layer has sent to the node, each task being divided into four parts: application information, input data, output data, and task status; the task statuses include the waiting, assigned, ready, executing, and completed states.
9. The task scheduling system according to claim 8, characterized in that: a dictionary is used to store the mapping between applications and containers as well as the state of each container, and to manage the containers; the container states include a cached state and a closed state; the dictionary provides multiple container scheduling algorithms, including first-in-first-out scheduling, least-recently-used scheduling, and round-robin scheduling; the dictionary may be implemented as an array, a linked list, a hash table, or a tree.
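The least-recently-used policy is one of the container scheduling algorithms claim 9 names; a minimal sketch of the application-to-container dictionary with LRU eviction follows. The capacity value and the container placeholder strings are assumptions.

```python
from collections import OrderedDict

class ContainerDirectory:
    """Dictionary mapping applications to containers; when capacity is
    reached the least-recently-used container moves to the closed state
    (one of the policies named in claim 9)."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cached = OrderedDict()       # app -> container, in LRU order
        self.closed = []                  # apps whose containers closed

    def get_container(self, app):
        if app in self.cached:
            self.cached.move_to_end(app)  # mark as recently used
            return self.cached[app]
        if len(self.cached) >= self.capacity:
            victim, _ = self.cached.popitem(last=False)  # evict LRU entry
            self.closed.append(victim)
        self.cached[app] = f"container-for-{app}"        # assumed placeholder
        return self.cached[app]
```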
10. The task scheduling system according to claim 9, characterized in that: the communication module enables the intelligent processor node to interact with the application programs in the containers, and makes corresponding responses according to the content received; the interaction content includes sending a task, terminating an application, receiving docker container state-transition information, and the like; the interaction means include sockets and shared files; the interaction patterns include an asynchronous mode and a synchronous mode.
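A synchronous socket exchange, one of the interaction means and patterns named in claim 10, can be sketched with a socket pair standing in for the node-to-container channel. The JSON message format and the "send_task" request type are assumptions; the patent does not specify a wire format.

```python
import json
import socket

def exchange(message):
    """Synchronous request/response between a node end and a container
    end of a socket pair; the container reacts to the received content."""
    node, container = socket.socketpair()
    node.sendall(json.dumps(message).encode() + b"\n")
    request = json.loads(container.makefile().readline())
    if request["type"] == "send_task":          # respond per content
        reply = {"status": "accepted", "task": request["task"]}
    else:
        reply = {"status": "ignored"}
    container.sendall(json.dumps(reply).encode() + b"\n")
    response = json.loads(node.makefile().readline())
    node.close()
    container.close()
    return response
```

An asynchronous variant would register both sockets with an event loop instead of blocking on `readline`.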
11. The task scheduling system according to claim 10, characterized in that: the containers package applications using virtualization technology, so that applications are isolated from one another and different applications can be configured with different precisions, improving the scalability of the queue; the docker container states include, but are not limited to, started, waiting, ready, executing, and terminated.
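The container lifecycle of claim 11 can be modeled as a small state machine over the five listed states. The allowed transitions below are an illustrative assumption; the claim only lists the states, not how they connect.

```python
# Container states from claim 11; transition edges are assumed.
TRANSITIONS = {
    "started":    {"waiting"},
    "waiting":    {"ready"},
    "ready":      {"executing"},
    "executing":  {"terminated", "waiting"},
    "terminated": set(),
}

def advance(state, new_state):
    """Move a container to new_state if the transition is allowed."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```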
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810495495.8A CN108710535A (en) | 2018-05-22 | 2018-05-22 | A kind of task scheduling system based on intelligent processor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108710535A true CN108710535A (en) | 2018-10-26 |
Family
ID=63868582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810495495.8A Pending CN108710535A (en) | 2018-05-22 | 2018-05-22 | A kind of task scheduling system based on intelligent processor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108710535A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101458634A (en) * | 2008-01-22 | 2009-06-17 | 中兴通讯股份有限公司 | Load equilibration scheduling method and device |
CN102567106A (en) * | 2010-12-30 | 2012-07-11 | 中国移动通信集团云南有限公司 | Task scheduling method, system and device |
CN103970612A (en) * | 2014-05-07 | 2014-08-06 | 田文洪 | Load balancing method and device based on pre-division of virtual machine |
Non-Patent Citations (3)
Title |
---|
YING MAO: "DRAPS: Dynamic and resource-aware placement scheme for docker containers in a heterogeneous cluster", 《2017 IEEE 36TH INTERNATIONAL PERFORMANCE COMPUTING AND COMMUNICATIONS CONFERENCE (IPCCC)》 * |
白伟华: "Research and Application of a Small-Granularity Application Container Model for Cloud Computing", 《China Doctoral Dissertations Full-text Database, Information Science and Technology Series》 * |
许雍祯: "Task Scheduling Based on Homogeneous Multi-core Processors", 《Computer Systems & Applications》 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110889497A (en) * | 2018-12-29 | 2020-03-17 | 中科寒武纪科技股份有限公司 | Learning task compiling method of artificial intelligence processor and related product |
CN111459981A (en) * | 2019-01-18 | 2020-07-28 | 阿里巴巴集团控股有限公司 | Query task processing method, device, server and system |
CN111459981B (en) * | 2019-01-18 | 2023-06-09 | 阿里巴巴集团控股有限公司 | Query task processing method, device, server and system |
CN110445709A (en) * | 2019-09-11 | 2019-11-12 | 成都千立网络科技有限公司 | Carry the intelligent gateway of docker application |
CN111159782B (en) * | 2019-12-03 | 2021-05-18 | 支付宝(杭州)信息技术有限公司 | Safety task processing method and electronic equipment |
CN111159782A (en) * | 2019-12-03 | 2020-05-15 | 支付宝(杭州)信息技术有限公司 | Safety task processing method and electronic equipment |
US20230066881A1 (en) * | 2019-12-31 | 2023-03-02 | Ai Speech Co., Ltd. | Information flow-based decision-making and scheduling customization method and apparatus |
CN112231080A (en) * | 2020-09-27 | 2021-01-15 | 武汉物易云通网络科技有限公司 | Task scheduling method and device based on multiple hash rings with different precisions |
CN112231079A (en) * | 2020-09-27 | 2021-01-15 | 武汉物易云通网络科技有限公司 | Task scheduling method and device based on buffer queue and Hash ring |
CN112231080B (en) * | 2020-09-27 | 2024-01-26 | 武汉物易云通网络科技有限公司 | Task scheduling method and device based on multiple hash rings with different precision |
CN112231079B (en) * | 2020-09-27 | 2024-01-26 | 武汉物易云通网络科技有限公司 | Task scheduling method and device based on buffer queue and hash ring |
CN112486598A (en) * | 2020-10-28 | 2021-03-12 | 武汉中科通达高新技术股份有限公司 | Method and system for processing picture by using pipeline technology and electronic device |
CN112579289A (en) * | 2020-12-21 | 2021-03-30 | 中电福富信息科技有限公司 | Distributed analysis engine method and device capable of achieving intelligent scheduling |
CN112579289B (en) * | 2020-12-21 | 2023-06-13 | 中电福富信息科技有限公司 | Distributed analysis engine method and device capable of being intelligently scheduled |
CN113296915A (en) * | 2021-06-18 | 2021-08-24 | 瀚云科技有限公司 | Task generation method and system based on industrial internet platform |
CN113296915B (en) * | 2021-06-18 | 2023-07-18 | 瀚云科技有限公司 | Task generation method and system based on industrial Internet platform |
CN115622988A (en) * | 2022-12-19 | 2023-01-17 | 成方金融科技有限公司 | Call response method and device of web interface, electronic equipment and storage medium |
CN115622988B (en) * | 2022-12-19 | 2023-04-28 | 成方金融科技有限公司 | Call response method and device for web interface, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108710535A (en) | A kind of task scheduling system based on intelligent processor | |
US11782870B2 (en) | Configurable heterogeneous AI processor with distributed task queues allowing parallel task execution | |
US11789895B2 (en) | On-chip heterogeneous AI processor with distributed tasks queues allowing for parallel task execution | |
US7370326B2 (en) | Prerequisite-based scheduler | |
US7137116B2 (en) | Method and system for performing a task on a computer | |
CN104123182B (en) | Based on the MapReduce task of client/server across data center scheduling system and method | |
US7246353B2 (en) | Method and system for managing the execution of threads and the processing of data | |
CN106503791A (en) | System and method for the deployment of effective neutral net | |
CN105095327A (en) | Distributed ELT system and scheduling method | |
CN110795254A (en) | Method for processing high-concurrency IO based on PHP | |
CN101833439B (en) | Parallel computing hardware structure based on separation and combination thought | |
US20160019089A1 (en) | Method and system for scheduling computing | |
Li et al. | Efficient online scheduling for coflow-aware machine learning clusters | |
Li et al. | Hone: Mitigating stragglers in distributed stream processing with tuple scheduling | |
Ashu et al. | Intelligent data compression policy for Hadoop performance optimization | |
CN109976873A (en) | The scheduling scheme acquisition methods and dispatching method of containerization distributed computing framework | |
CN113608858A (en) | MapReduce architecture-based block task execution system for data synchronization | |
CN112181689A (en) | Runtime system for efficiently scheduling GPU kernel under cloud | |
CN107277062A (en) | The method for parallel processing and device of packet | |
Salama | A swarm intelligence based model for mobile cloud computing | |
CN105610897B (en) | Calculation method based on the M/M/1 TOC service model being lined up and its service response time | |
CN106341447A (en) | Database service intelligent exchange method based on mobile terminal | |
Patil et al. | Review on a comparative study of various task scheduling algorithm in cloud computing environment | |
Heinz et al. | Supporting on-chip dynamic parallelism for task-based hardware accelerators | |
CN115720238B (en) | System and method for processing block chain request supporting high concurrency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-10-26 |