CN106776015A - Parallel program task processing method and device

Parallel program task processing method and device

Info

Publication number
CN106776015A
Authority
CN
China
Prior art keywords
task
processing core
untreated
processing
several
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611073543.1A
Other languages
Chinese (zh)
Other versions
CN106776015B (en)
Inventor
王渭巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201611073543.1A priority Critical patent/CN106776015B/en
Publication of CN106776015A publication Critical patent/CN106776015A/en
Application granted granted Critical
Publication of CN106776015B publication Critical patent/CN106776015B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Devices For Executing Special Programs (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a parallel program task processing method and device. The method includes: allocating, from the current unprocessed tasks and according to a first preset rule, one unprocessed task to each processing core; while a processing core processes its currently assigned task, selecting from the unprocessed tasks, according to the data-inheritance relationships between tasks, a task related to the one the processing core is currently processing as a successor task; when no successor task exists, allocating one unprocessed task to the processing core from the current unprocessed tasks according to a second preset rule; after the processing core finishes its currently assigned task, storing the resulting task data in the cache corresponding to that processing core and processing the successor task as the core's newly assigned task; and repeating until all tasks have been processed. In the present invention, related tasks use the same cache, so processing efficiency is high.

Description

Parallel program task processing method and device
Technical field
The present invention relates to the field of process-processing technology, and in particular to a parallel program task processing method and device.
Background technology
In current parallel program task processing, the conventional approach is ordered but otherwise random assignment of tasks. Parallel task processing under this scheme has three characteristics: first, tasks must follow an overall or partial order (essentially time order); second, tasks may have data dependences that are not known in advance; third, tasks may dynamically create subtasks, which are scheduled for future execution.
Under this scheme, because multiple processing cores work in parallel and tasks are assigned to the cores simply in time order, tasks that are related to one another are easily processed by different processing cores. A later task must then first read the data it needs from the cache of the core that processed the earlier task before it can proceed, which is inefficient (for example, if task 7 is related to task 2, processing task 7 requires first fetching task 2's data from the cache of the core that processed task 2).
Therefore, how to provide a parallel program task processing method and device with high processing efficiency is a problem that those skilled in the art currently need to solve.
Summary of the invention
An object of the present invention is to provide a parallel program task processing method and device in which a successor task that is related to an earlier task does not need to read data from the caches of other processing cores when it is processed, but instead directly uses the data in the cache corresponding to the processing core on which it runs, so that processing efficiency is high.
In order to solve the above technical problem, the invention provides a parallel program task processing method, including:
Step s101: from the current unprocessed tasks, allocating one unprocessed task to each processing core according to a first preset rule;
Step s102: while the processing core processes its currently assigned task, selecting from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task that is related to the task the processing core is currently processing, as a successor task; when no successor task exists, proceeding to step s104;
Step s103: after the processing core finishes processing its currently assigned task, storing the data of the currently assigned task in the cache corresponding to the processing core, and processing the successor task as the processing core's currently assigned task; returning to step s102;
Step s104: selecting one unprocessed task from the current unprocessed tasks according to a second preset rule and assigning it to the processing core, and returning to step s102; until all tasks have been processed.
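As a concrete illustration of steps s101–s104, the following Python sketch runs one scheduling loop per processing core over a shared pool of unprocessed tasks. It is a minimal sketch only: the Task and Scheduler classes, the placeholder work, and the use of Python threads and dictionaries as per-core caches are assumptions made for illustration, not part of the claimed device.

```python
import threading
from dataclasses import dataclass, field

@dataclass
class Task:
    seq: int                                     # time-sequence number
    parents: set = field(default_factory=set)    # seqs of tasks whose result data this task needs
    result: object = None

class Scheduler:
    def __init__(self, tasks):
        self.pending = {t.seq: t for t in tasks}  # pool of unprocessed tasks
        self.lock = threading.Lock()

    def pick_initial(self):
        # step s101: first preset rule -- here simply any pending task (a random pick also qualifies)
        with self.lock:
            if self.pending:
                return self.pending.pop(next(iter(self.pending)))
            return None

    def pick_successor(self, current):
        # step s102: prefer an unprocessed task that inherits data from the current task
        with self.lock:
            for seq, t in self.pending.items():
                if current.seq in t.parents:
                    return self.pending.pop(seq)
            return None

    def pick_by_second_rule(self):
        # step s104: second preset rule -- smallest time-sequence number first
        with self.lock:
            if self.pending:
                return self.pending.pop(min(self.pending))
            return None

    def core_loop(self, core_id, local_cache):
        task = self.pick_initial()
        while task is not None:
            successor = self.pick_successor(task)            # chosen while the task is being processed
            task.result = f"result-of-{task.seq}"            # placeholder for the real work
            local_cache[task.seq] = task.result              # step s103: keep the result in this core's cache
            task = successor or self.pick_by_second_rule()   # step s103 / s104

def run(tasks, num_cores=4):
    sched = Scheduler(tasks)
    caches = [dict() for _ in range(num_cores)]              # stand-ins for the per-core caches
    threads = [threading.Thread(target=sched.core_loop, args=(i, caches[i]))
               for i in range(num_cores)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return caches
```

Because a successor is selected for each core before it finishes its current task, a chain of related tasks stays on one core and its local cache, which is the effect the method aims at.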
Preferably, the first preset rule is a random rule.
Preferably, each task carries a time-sequence number, and the second preset rule is to select, from the current unprocessed tasks, the task with the smallest time-sequence number and assign it to the processing core.
In order to solve the above technical problem, the invention also provides a parallel program task processing device, including several processing cores and a task module;
the task module is configured to receive the current unprocessed tasks; to allocate, from the current unprocessed tasks and according to a first preset rule, one unprocessed task to each processing core; while a processing core processes its currently assigned task, to select from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task related to the task that processing core is currently processing, as a successor task; and, when no successor task exists, to select one unprocessed task from the current unprocessed tasks according to a second preset rule and assign it to the processing core for processing, until all tasks have been processed;
each processing core is configured to, after finishing its currently assigned task, store the data of the currently assigned task in its own cache, to process the successor task as its currently assigned task, and to trigger the task module to perform successor-task prediction.
Preferably, the device includes several slice modules;
each slice module includes several processing cores; several first caches, each connected one-to-one with a processing core; a second cache and a third cache, each connected to all of the processing cores; and a task unit; the task units of the slice modules together constitute the task module;
the second cache is used for data sharing among the processing cores;
the third cache is used to share the data of the processing cores in its own slice module with the other slice modules.
Preferably, the task unit is further configured to:
after a task has been assigned to a processing core in its own slice module, or a successor task has been determined for that processing core, synchronously inform the task units in the other slice modules of the current task status in its own slice module.
Preferably, the slice modules communicate data with one another through a network chip.
The invention provides a parallel program task processing method and device. While each processing core processes a task, a task related to the one currently being processed is selected as a successor task according to the data-inheritance relationships between tasks, so that each processing core can continuously process a series of related tasks. Because these tasks are processed by the same processing core, a successor task uses the same cache as the preceding task, and the data it needs does not have to be read from the caches of other processing cores; it is taken directly from the cache corresponding to the core on which the task runs. Processing efficiency is thereby improved and task processing time is reduced.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the prior art and the embodiments are briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of a parallel program task processing method provided by the present invention;
Fig. 2 is a schematic diagram of a specific embodiment of a parallel program task processing method provided by the present invention;
Fig. 3 is a schematic structural diagram of a parallel program task processing device provided by the present invention;
Fig. 4 is a schematic structural diagram of a slice module provided by the present invention.
Specific embodiment
The core of the present invention is to provide a parallel program task processing method and device in which a successor task that is related to an earlier task does not need to read data from the caches of other processing cores when it is processed, but instead directly uses the data in the cache corresponding to the processing core on which it runs, so that processing efficiency is high.
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention provides a parallel program task processing method; referring to Fig. 1, which is a flow chart of the parallel program task processing method provided by the present invention, the method includes:
Step s101: from the current unprocessed tasks, allocating one unprocessed task to each processing core according to a first preset rule;
Here, the first preset rule may be a random rule. Alternatively, since each task carries a time-sequence number indicating when it was created, the first preset rule may be to randomly select unprocessed tasks from among the tasks whose time-sequence numbers fall within a preset number of earliest positions, or to number the processing cores and then assign tasks to them one by one in order of increasing time-sequence number. Other rules may of course be used; the present invention does not limit this.
Step s102: while the processing core processes its currently assigned task, selecting from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task related to the task the processing core is currently processing, as a successor task; when no successor task exists, proceeding to step s104;
It should be understood that data inheritance here means that the later task, when it runs, needs the result data generated when the earlier task finished; the two tasks have a fixed order and a data dependence between them.
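One way to make this data-inheritance test concrete is sketched below; the reads and writes attributes and the pending_tasks iterable are hypothetical names introduced only for illustration and are not prescribed by the method.

```python
def inherits_from(task_b, task_a):
    """True if task_b consumes at least one data item produced by task_a."""
    return bool(set(task_b.reads) & set(task_a.writes))

def select_successor(current, pending_tasks):
    """Pick one unprocessed task that inherits data from the currently processed task."""
    for candidate in pending_tasks:
        if inherits_from(candidate, current):
            return candidate
    return None   # no successor exists -> fall back to step s104 (second preset rule)
```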
Step s103: after the processing core finishes processing its currently assigned task, storing the data of the currently assigned task in the cache corresponding to the processing core, and processing the successor task as the processing core's currently assigned task; returning to step s102;
Step s104: selecting one unprocessed task from the current unprocessed tasks according to a second preset rule and assigning it to the processing core, and returning to step s102; until all tasks have been processed.
Here, each task carries a time-sequence number, and the second preset rule is to select, from the current unprocessed tasks, the task with the smallest time-sequence number and assign it to the processing core. Of course, the second preset rule may also be to randomly select a task from the current unprocessed tasks and assign it to the processing core; the present invention does not limit this.
It should be understood that, when tasks are processed by the above method, a task with a later time-sequence number may already be finished while a task with a much earlier number is still running. Referring to Fig. 2, which shows a specific embodiment of a parallel program task processing method provided by the present invention, task 0 dynamically creates a subtask, which is numbered 42 in time order. As shown in the figure, because different tasks take different amounts of time to process, task 40 may have only just started executing while task 22 has not yet completed.
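Dynamically created subtasks simply join the pool of unprocessed tasks with fresh time-sequence numbers. The sketch below reuses the hypothetical Task and Scheduler names from the earlier sketch; the counter start value and the parent link are likewise assumptions for illustration.

```python
import itertools

_seq_counter = itertools.count(start=100)   # assumed source of new time-sequence numbers

def spawn_subtasks(parent, scheduler, n):
    """Dynamically create n subtasks of `parent` and add them to the unprocessed pool.

    Each subtask records the parent as a data dependency, so the core that
    processed the parent tends to pick the subtask up as a successor task.
    """
    for _ in range(n):
        sub = Task(seq=next(_seq_counter), parents={parent.seq})
        with scheduler.lock:
            scheduler.pending[sub.seq] = sub
```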
Furthermore, since multiple processing cores process tasks in parallel in the present invention, task conflicts may occur. In that case, one of the conflicting tasks can be selectively abandoned, and the successor task of the abandoned task is abandoned as well; in addition, any other task that needs the result data of the abandoned task is also affected, because it can no longer be executed. For example, if task 20 in the figure is abandoned, task 40 will also be abandoned. Which task is abandoned when a conflict occurs is not limited by the present invention.
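Abandonment therefore cascades along the data-inheritance chain. A minimal sketch of that cascade, assuming plain dictionaries for the pending pool and for a reverse dependency map, is:

```python
def abandon(task_seq, pending, dependents):
    """Discard a conflicting task together with every task that needs its result data.

    `pending` maps seq -> task; `dependents` maps seq -> set of seqs that consume its
    output. In the Fig. 2 example, abandoning task 20 would also abandon task 40.
    """
    to_drop = [task_seq]
    while to_drop:
        seq = to_drop.pop()
        pending.pop(seq, None)
        to_drop.extend(dependents.get(seq, ()))
```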
The invention provides a parallel program task processing method. While each processing core processes a task, a task related to the one currently being processed is selected as a successor task according to the data-inheritance relationships between tasks, so that each processing core can continuously process a series of related tasks. Because these tasks are processed by the same processing core, a successor task uses the same cache as the preceding task, and the data it needs does not have to be read from the caches of other processing cores; it is taken directly from the cache corresponding to the core on which the task runs, which improves processing efficiency and reduces task processing time.
The invention also provides a parallel program task processing device, including several processing cores and a task module; referring to Fig. 3, which is a schematic structural diagram of a parallel program task processing device provided by the present invention.
The task module 1 is configured to receive the current unprocessed tasks; to allocate, from the current unprocessed tasks and according to a first preset rule, one unprocessed task to each processing core 2; while a processing core 2 processes its currently assigned task, to select from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task related to the task that processing core 2 is currently processing, as a successor task; and, when no successor task exists, to select one unprocessed task from the current unprocessed tasks according to a second preset rule and assign it to the processing core 2 for processing, until all tasks have been processed.
Each processing core 2 is configured to, after finishing its currently assigned task, store the data of the currently assigned task in its own cache, to process the successor task as its currently assigned task, and to trigger the task module 1 to perform successor-task prediction.
Preferably, the parallel program task processing device includes several slice modules; referring to Fig. 4, which is a schematic structural diagram of a slice module provided by the present invention.
Each slice module includes several processing cores 2; several first caches 21, each connected one-to-one with a processing core 2; a second cache 22 and a third cache 23, each connected to all of the processing cores 2; and a task unit 11. The task units 11 of the slice modules together constitute the task module 1.
The second cache 22 is used for data sharing among the processing cores 2 within the slice module.
The third cache 23 is used to share the data of the processing cores 2 in its own slice module with the other slice modules.
Preferably, the task unit 11 is further configured to:
after a task has been assigned to a processing core 2 in its own slice module, or a successor task has been determined for that processing core 2, synchronously inform the task units 11 in the other slice modules of the task status in its own slice module.
It should be understood that the task status recorded by each task unit 11 must be kept synchronized in real time: every task unit 11 needs to know which tasks are still unfinished and which tasks have already been designated as successor tasks by other task units 11, so that it does not select them again. This avoids duplicate execution of tasks, which would lower processing efficiency and confuse the program.
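A minimal sketch of that synchronization, elaborating the TaskUnit placeholder from the previous sketch and assuming the units exchange status messages (modelled here as direct method calls rather than the network chip of the real device):

```python
class TaskUnit:
    """Per-slice task unit that mirrors its peers' task status in real time."""

    def __init__(self, slice_id):
        self.slice_id = slice_id
        self.task_status = {}        # seq -> "assigned" | "successor" | "done"
        self.peers = []              # task units of the other slice modules

    def mark_and_broadcast(self, seq, status):
        """Record a local assignment or successor decision and inform all peers."""
        self.task_status[seq] = status
        for peer in self.peers:
            peer.receive_update(self.slice_id, seq, status)

    def receive_update(self, origin_slice, seq, status):
        """Apply a peer's update so this unit never selects the same task again."""
        self.task_status[seq] = status
```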
In addition, the task unit 11 is further used to select which task to abandon when a task conflict occurs, and to handle the commit operations performed after a task is completed, and so on.
Specifically, the slice modules communicate data with one another through a network chip, and each slice module includes a routing module for network communication.
The invention provides a parallel program task processing device. While each processing core processes a task, a task related to the one currently being processed is selected as a successor task according to the data-inheritance relationships between tasks, so that each processing core can continuously process a series of related tasks. Because these tasks are processed by the same processing core, a successor task uses the same cache as the preceding task, and the data it needs does not have to be read from the caches of other processing cores; it is taken directly from the cache corresponding to the core on which the task runs, which improves processing efficiency and reduces task processing time.
It should be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A parallel program task processing method, characterized by comprising:
Step s101: from the current unprocessed tasks, allocating one unprocessed task to each processing core according to a first preset rule;
Step s102: while the processing core processes its currently assigned task, selecting from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task that is related to the task the processing core is currently processing, as a successor task; when no successor task exists, proceeding to step s104;
Step s103: after the processing core finishes processing its currently assigned task, storing the data of the currently assigned task in the cache corresponding to the processing core, and processing the successor task as the processing core's currently assigned task; returning to step s102;
Step s104: selecting one unprocessed task from the current unprocessed tasks according to a second preset rule, assigning it to the processing core, and returning to step s102; until all tasks have been processed.
2. The method according to claim 1, characterized in that the first preset rule is a random rule.
3. The method according to claim 1, characterized in that each task carries a time-sequence number, and the second preset rule is to select, from the current unprocessed tasks, the task with the smallest time-sequence number and assign it to the processing core.
4. A parallel program task processing device, characterized by comprising several processing cores and a task module;
the task module is configured to receive the current unprocessed tasks; to allocate, from the current unprocessed tasks and according to a first preset rule, one unprocessed task to each processing core; while a processing core processes its currently assigned task, to select from the unprocessed tasks, according to the data-inheritance relationships between tasks, one task related to the task the processing core is currently processing, as a successor task; and, when no successor task exists, to select one unprocessed task from the current unprocessed tasks according to a second preset rule and assign it to the processing core for processing, until all tasks have been processed;
each processing core is configured to, after finishing its currently assigned task, store the data of the currently assigned task in its own cache, to process the successor task as its currently assigned task, and to trigger the task module to perform successor-task prediction.
5. The device according to claim 4, characterized by comprising several slice modules;
each slice module includes several processing cores; several first caches, each connected one-to-one with a processing core; a second cache and a third cache, each connected to all of the processing cores; and a task unit; the task units of the slice modules together constitute the task module;
the second cache is used for data sharing among the processing cores;
the third cache is used to share the data of the processing cores in its own slice module with the other slice modules.
6. The device according to claim 5, characterized in that the task unit is further configured to:
after a task has been assigned to a processing core in its own slice module, or a successor task has been determined for the processing core, synchronously inform the task units in the other slice modules of the current task status in its own slice module.
7. The device according to claim 5, characterized in that the slice modules communicate data with one another through a network chip.
CN201611073543.1A 2016-11-29 2016-11-29 Parallel program task processing method and device Active CN106776015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611073543.1A CN106776015B (en) 2016-11-29 2016-11-29 Parallel program task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611073543.1A CN106776015B (en) 2016-11-29 2016-11-29 Parallel program task processing method and device

Publications (2)

Publication Number Publication Date
CN106776015A true CN106776015A (en) 2017-05-31
CN106776015B CN106776015B (en) 2021-02-02

Family

ID=58900522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611073543.1A Active CN106776015B (en) 2016-11-29 2016-11-29 Parallel program task processing method and device

Country Status (1)

Country Link
CN (1) CN106776015B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013110819A1 (en) * 2012-01-27 2013-08-01 Tymis Method of parallel execution of a plurality of computing tasks
CN103235742A (en) * 2013-04-07 2013-08-07 山东大学 Dependency-based parallel task grouping scheduling method on multi-core cluster server
CN103885826A (en) * 2014-03-11 2014-06-25 武汉科技大学 Real-time task scheduling implementation method of multi-core embedded system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108170526A (en) * 2017-12-06 2018-06-15 北京像素软件科技股份有限公司 Load capacity optimization method, device, server and readable storage medium storing program for executing
CN111104167A (en) * 2018-10-25 2020-05-05 杭州嘉楠耘智信息科技有限公司 Calculation result submitting method and device

Also Published As

Publication number Publication date
CN106776015B (en) 2021-02-02

Similar Documents

Publication Publication Date Title
US9298760B1 (en) Method for shard assignment in a large-scale data processing job
CN107402824A (en) A kind of method and device of data processing
CN108255958A (en) Data query method, apparatus and storage medium
EP3230861B1 (en) Technologies for fast synchronization barriers for many-core processing
CN108776897A (en) Data processing method, device, server and computer readable storage medium
CN106406987A (en) Task execution method and apparatus in cluster
CN106569891A (en) Method and device for carrying out task scheduling in storage system
CN106934027A (en) Distributed reptile realization method and system
CN109582716A (en) Data visualization treating method and apparatus
CN110399387A (en) Method and device based on table incidence relation dynamic generation query SQL
CN104484477A (en) Electronic map searching method, device and system
CN103678619B (en) Database index treating method and apparatus
CN109408689A (en) Data capture method, device, system and electronic equipment
CN106776015A (en) A kind of concurrent program task processing method and its device
CN105630419B (en) A kind of the subregion view sending method and management node of resource pool
CN107220376A (en) A kind of data query method and apparatus
CN105550220B (en) A kind of method and device of the access of heterogeneous system
CN107247695A (en) Coding rule generation method, system and storage device
CN107766503A (en) Data method for quickly querying and device based on redis
CN110532559A (en) The processing method and processing device of rule
CN107545351A (en) Method for allocating tasks and device
CN103473237A (en) Key value grouping method
CN103207907B (en) A kind of index file merges method and device
CN109165325A (en) Method, apparatus, equipment and computer readable storage medium for cutting diagram data
CN110109986A (en) Task processing method, system, server and task scheduling system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210104

Address after: Building 9, No.1, guanpu Road, Guoxiang street, Wuzhong Economic Development Zone, Wuzhong District, Suzhou City, Jiangsu Province

Applicant after: SUZHOU LANGCHAO INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: Room 1601, floor 16, 278 Xinyi Road, Zhengdong New District, Zhengzhou City, Henan Province

Applicant before: ZHENGZHOU YUNHAI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant