CN109800074A - Method, apparatus and electronic device for concurrently executing task data - Google Patents
Method, apparatus and electronic device for concurrently executing task data
- Publication number
- CN109800074A (application number CN201910136089.7A)
- Authority
- CN
- China
- Prior art keywords
- queue
- task data
- task
- lined
- pending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a method, apparatus and electronic device for concurrently executing task data, relating to the technical field of data execution. The method comprises: distributing to-be-executed task data into a first queue for queuing according to the timing of the tasks; extracting to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks; and extracting all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution. This solves the prior-art technical problem of wasting the CPU's capability to execute tasks concurrently.
Description
Technical field
The present invention relates to the technical field of data execution, and in particular to a method, apparatus and electronic device for concurrently executing task data.
Background art
With the spread of informatization, information systems carry more and more services, and those services are triggered at ever higher frequencies. At present, as the complexity of services grows, the response time of a service, and thus its performance, degrades under high-concurrency scenarios, resulting in a poor user experience. Moreover, if all asynchronously executed tasks have sequential timing dependencies, they must be executed one after another, so the concurrency capability of the central processing unit (CPU) cannot be fully utilized. The prior art therefore tends to waste the CPU's capability to execute tasks concurrently.
Summary of the invention
In view of this, an object of the present invention is to provide a method, apparatus and electronic device for concurrently executing task data, so as to solve the prior-art technical problem of wasting the CPU's capability to execute tasks concurrently.
In a first aspect, an embodiment of the present invention provides a method for concurrently executing task data, comprising:
distributing to-be-executed task data into a first queue for queuing according to the timing of the tasks;
extracting to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks; and
extracting all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, wherein extracting to-be-executed task data from the first queue into the second queue according to the content of the tasks comprises:
a task distributor extracting, according to the group ID corresponding to the task content, the to-be-executed task data from the first queue into the target sub-queue corresponding to that group ID, wherein the target sub-queue is set in the second queue.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, further comprising:
setting a maximum number of sub-queues in the second queue; and
limiting the number of sub-queues queued in the second queue according to the maximum number of sub-queues.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, further comprising:
setting a maximum task data amount for each sub-queue in the second queue; and
limiting, according to the maximum task data amount of a sub-queue, the amount of to-be-executed task data queued in each sub-queue of the second queue.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, wherein extracting to-be-executed task data from the first queue into the second queue according to the content of the tasks further comprises:
when the task data amount queued in a target sub-queue exceeds the maximum task data amount of that sub-queue, the task distributor extracting to-be-executed task data from the first queue in an adjusted order, wherein the adjusted order differs from the order in which the to-be-executed task data are queued in the first queue.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, further comprising:
setting a maximum task data amount for the first queue; and
limiting the amount of to-be-executed task data queued in the first queue according to the maximum task data amount of the first queue.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation of the first aspect, wherein extracting all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into the third queue for concurrent execution comprises:
when the task data amount queued in a sub-queue exceeds the maximum task data amount of that sub-queue, and there is no to-be-executed task data in the third queue, extracting all the to-be-executed task data in the second queue into the third queue, so that the to-be-executed task data of all sub-queues are executed concurrently, wherein the to-be-executed task data within each sub-queue are executed in the order in which they are arranged in the second queue.
In a second aspect, an embodiment of the present invention further provides an apparatus for concurrently executing task data, comprising:
a distribution module, configured to distribute to-be-executed task data into a first queue for queuing according to the timing of the tasks;
an abstraction module, configured to extract to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks; and
an extraction module, configured to extract all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution.
In a third aspect, an embodiment of the present invention further provides an electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable medium bearing non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method according to the first aspect.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects. In the method, apparatus and electronic device for concurrently executing task data provided by the embodiments of the present invention, to-be-executed task data are first distributed into a first queue for queuing according to the timing of the tasks; then, to-be-executed task data are extracted from the first queue into a second queue for queuing according to the content of the tasks; and afterwards, all the to-be-executed task data in the second queue are extracted, in the order in which they are arranged in the second queue, into a third queue for concurrent execution. By queuing tasks in different forms according to their timing and content, tasks can be executed concurrently while the execution order of tasks within a queue is guaranteed during queue task distribution and execution, thereby solving the prior-art technical problem of wasting the CPU's capability to execute tasks concurrently.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description, or be understood by practicing the invention. The objectives and other advantages of the invention are realized and attained by the structures particularly pointed out in the description and the accompanying drawings.
To make the above objects, features and advantages of the present invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 shows a flowchart of the method for concurrently executing task data according to Embodiment 1 of the present invention;
Fig. 2 shows a flowchart of the method for concurrently executing task data according to Embodiment 2 of the present invention;
Fig. 3 shows another flowchart of the method for concurrently executing task data according to Embodiment 2 of the present invention;
Fig. 4 shows a schematic structural diagram of an electronic device according to Embodiment 4 of the present invention.
Reference numerals: 4 - electronic device; 41 - memory; 42 - processor; 43 - bus; 44 - communication interface.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
At present, as the complexity of services grows, the response time of a service, and thus its performance, degrades under high-concurrency scenarios, resulting in a poor user experience. Since the resources of the hardware server environment that can be added (central processing unit, disk, memory, input/output, bandwidth) are always limited, bottlenecks are easily reached in the storage and use of data.
A common optimization for this situation is to split a complex service into two parts: the part that requires an immediate response and the part that does not. The part requiring an immediate response is executed immediately and the response result is returned synchronously; the part not requiring an immediate response is processed asynchronously as pending tasks by means of a task queue.
However, this split leaves two problems unsolved: first, if all asynchronously executed tasks have sequential timing dependencies, they must be executed one after another, so the concurrency capability of the CPU cannot be fully utilized; second, if only some of the asynchronously executed tasks have sequential dependencies, it is difficult to reasonably separate the dependent tasks from the independent ones. The prior art therefore tends to waste the CPU's capability to execute tasks concurrently.
In view of this, the method, apparatus and electronic device for concurrently executing task data provided by the embodiments of the present invention can solve the prior-art technical problem of wasting the CPU's capability to execute tasks concurrently.
To facilitate understanding of the present embodiments, the method, apparatus and electronic device for concurrently executing task data disclosed in the embodiments of the present invention are first described in detail.
Embodiment 1:
A method for concurrently executing task data provided by an embodiment of the present invention, as shown in Fig. 1, comprises:
S11: distributing to-be-executed task data into a first queue for queuing according to the timing of the tasks.
S12: extracting to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks.
S13: extracting all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution.
In this embodiment, the method for concurrently executing task data can serve as a queuing method for handling concurrent services; through the queuing algorithm of the task queues, task data can be executed concurrently.
Embodiment 2:
A method for concurrently executing task data provided by an embodiment of the present invention, as shown in Fig. 2, comprises:
S21: distributing to-be-executed task data into a first queue for queuing according to the timing of the tasks.
In practical applications, all to-be-executed task data are initially distributed, one after another according to their timing, into one large task queue (i.e., the first queue). As shown in Fig. 3, the first queue may be referred to as the waiting queue, and includes tasks such as G1-T1, G1-T2, G2-T1, G2-T2, G3-T1 and so on.
As another implementation of this embodiment, the method further comprises: first, setting a maximum task data amount for the first queue, and then limiting the amount of to-be-executed task data queued in the first queue according to that maximum. Specifically, a maximum number of tasks may be defined for the first queue holding the to-be-executed task data, with a default value of, for example, 3000. When the number of tasks in the waiting queue reaches this maximum, no further tasks of this kind are allowed to be generated.
S22: setting a maximum task data amount for each sub-queue in the second queue.
As a preferred implementation of this embodiment, a maximum number of tasks is defined for a single sub-queue in the second queue, with a default value of, for example, 20. The second queue may be referred to as the pre-queue.
S23: limiting, according to the maximum task data amount of a sub-queue, the amount of to-be-executed task data queued in each sub-queue of the second queue.
It should be noted that when the number of tasks under a given task-relation group in the second queue (i.e., the pre-queue) exceeds the maximum, no further tasks are allowed to join that pre-queue.
S24: the task distributor extracting, according to the group ID corresponding to the task content, the to-be-executed task data from the first queue into the target sub-queue corresponding to that group ID, wherein the target sub-queue is set in the second queue.
In this step, tasks are taken out of the waiting queue in timing order and placed into the pre-queue until the pre-queue is full. Specifically, as shown in Fig. 3, the task distributor places each task, according to the service-relation group ID corresponding to the task, into the pre-queue task sub-queue of the corresponding group ID, for example group G1, group G2, group G3 and so on.
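One hedged way to picture step S24: a distributor that drains the waiting queue in timing order and routes each task into the sub-queue keyed by its group ID, creating sub-queues on demand. The function name `dispatch` and the task-as-dict representation are illustrative, not from the patent:

```python
from collections import deque

def dispatch(first_queue, second_queue):
    """Drain the waiting queue in timing order, appending each task to
    the pre-queue sub-queue keyed by its group ID (created on demand)."""
    while first_queue:
        task = first_queue.popleft()
        second_queue.setdefault(task["group_id"], deque()).append(task)

first_queue = deque([
    {"group_id": "G1", "name": "G1-T1"},
    {"group_id": "G2", "name": "G2-T1"},
    {"group_id": "G1", "name": "G1-T2"},
])
second_queue = {}
dispatch(first_queue, second_queue)
# within G1's sub-queue, G1-T1 still precedes G1-T2
```

Because each sub-queue is appended to in the order tasks were taken from the waiting queue, the timing order within a group is preserved by construction.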
Preferably, when the task data amount queued in a target sub-queue exceeds the maximum task data amount of that sub-queue, the task distributor extracts to-be-executed task data from the first queue in an adjusted order, the adjusted order differing from the order in which the to-be-executed task data are queued in the first queue. In other words, when tasks are extracted from the waiting queue, out-of-order extraction is allowed: when the pre-queue of the service group to which the current task belongs is already full, a task of a later, different service group can be extracted and placed into its corresponding pre-queue.
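Under the assumption that the "adjusted order" means exactly this skip-ahead behaviour, the extraction step might be sketched as follows: the distributor scans the waiting queue in timing order and takes the first task whose target sub-queue still has room, skipping tasks whose sub-queue is full (names and the small cap are illustrative):

```python
from collections import deque

MAX_PER_SUBQUEUE = 2  # small cap for illustration; the text suggests a default of 20

def extract_next(first_queue, second_queue):
    """Pop the first task (in timing order) whose target sub-queue has
    room; skip over tasks whose sub-queue is already full."""
    for i, task in enumerate(first_queue):
        sub = second_queue.setdefault(task["group_id"], deque())
        if len(sub) < MAX_PER_SUBQUEUE:
            del first_queue[i]
            sub.append(task)
            return task
    return None  # every waiting task targets a full sub-queue

first_queue = deque([{"group_id": "G1", "name": "G1-T3"},
                     {"group_id": "G2", "name": "G2-T1"}])
second_queue = {"G1": deque([{"group_id": "G1", "name": "G1-T1"},
                             {"group_id": "G1", "name": "G1-T2"}])}
picked = extract_next(first_queue, second_queue)  # G1 is full, so G2-T1 is taken
```

The blocked task (here `G1-T3`) simply stays at the head of the waiting queue until its pre-queue drains, so intra-group order is never violated.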
Further, the method for concurrently executing task data may also comprise: setting a maximum number of sub-queues in the second queue, and limiting the number of sub-queues queued in the second queue according to that maximum. Specifically, a limit may be defined on the total number of buffer queues (i.e., pre-queues) prepared for the execution queue, with a default value of, for example, 10. As a preferred implementation of this embodiment, the pre-queues themselves are generated dynamically, as needed, per group ID, while the total number of pre-queues is fixed: once the number of pre-queues is full, no new pre-queue is generated.
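A minimal sketch of this cap, assuming the pre-queues are held in a dict keyed by group ID and that a new group is simply refused once the limit is reached (function name `get_subqueue` is hypothetical):

```python
from collections import deque

MAX_SUBQUEUES = 10  # default total pre-queue count suggested in the text

def get_subqueue(second_queue, group_id):
    """Return the sub-queue for group_id, creating it dynamically only
    while the total number of sub-queues is below the cap."""
    if group_id in second_queue:
        return second_queue[group_id]
    if len(second_queue) >= MAX_SUBQUEUES:
        return None  # cap reached: no new pre-queue is generated
    return second_queue.setdefault(group_id, deque())

second_queue = {f"G{i}": deque() for i in range(1, 11)}  # already at the cap
existing = get_subqueue(second_queue, "G1")   # existing group: returned
refused = get_subqueue(second_queue, "G99")   # new group: refused
```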
S25: when the task data amount queued in a sub-queue exceeds the maximum task data amount of that sub-queue, and there is no to-be-executed task data in the third queue, extracting all the to-be-executed task data in the second queue into the third queue, so that the to-be-executed task data of all sub-queues are executed concurrently, wherein the to-be-executed task data within each sub-queue are executed in the order in which they are arranged in the second queue.
As shown in Fig. 3, when the pre-queue is full and the tasks in the execution queue have finished executing, all the tasks in the pre-queue are extracted into the execution queue for execution, with the timing order of the two kept consistent. As a preferred embodiment, the tasks are finally placed, for unified execution, into execution queues established per service group ID. During execution, the individual queues can execute in parallel, while the tasks within one queue must be executed in timing order.
In this embodiment, the queues in which the task data are arranged consist of three parts: the waiting queue, the pre-queue and the execution queue, and during queue task distribution and execution the task distributor works across these three queues. In this embodiment, the distribution and execution of queued tasks introduce the concept of service groups, together with task distribution logic, task execution logic and task control logic. The method for concurrently executing task data can thus effectively solve the problem of queuing concurrent services.
Therefore, the method for concurrently executing task data provided by this embodiment can guarantee the execution order of tasks within a queue while improving the execution speed of queued tasks as much as possible. By setting overall enqueue thresholds, the overall performance of the system is protected. Furthermore, in this embodiment, the queues generated by the individual services are treated separately, achieving finer-grained control of the enqueue threshold corresponding to each service. When extracting tasks for execution, tasks can be fetched across groups to prevent possible blocking of task execution.
Embodiment 3:
An apparatus for concurrently executing task data provided by an embodiment of the present invention comprises: a distribution module, an abstraction module and an extraction module.
The distribution module is configured to distribute to-be-executed task data into a first queue for queuing according to the timing of the tasks. The abstraction module is configured to extract to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks.
As another implementation of this embodiment, the extraction module is configured to extract all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution.
Embodiment 4:
An electronic device provided by an embodiment of the present invention is shown in Fig. 4. The electronic device 4 comprises a memory 41 and a processor 42, the memory storing a computer program executable on the processor; when executing the computer program, the processor implements the steps of the method provided by Embodiment 1 or Embodiment 2 above.
Referring to Fig. 4, the electronic device further comprises a bus 43 and a communication interface 44; the processor 42, the communication interface 44 and the memory 41 are connected via the bus 43. The processor 42 is configured to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may comprise a high-speed random access memory (RAM) and may further comprise a non-volatile memory, for example at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 44 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network and the like.
The bus 43 may be an ISA bus, a PCI bus, an EISA bus or the like, and may be divided into an address bus, a data bus, a control bus and so on. For ease of illustration, only one double-headed arrow is shown in Fig. 4, but this does not mean that there is only one bus or one type of bus.
The memory 41 is configured to store a program, and the processor 42 executes the program after receiving an execution instruction. The method performed by the flow-defining apparatus disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 42.
The processor 42 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 42 or by instructions in the form of software. The processor 42 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP) and the like; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute each method, step and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 41, and the processor 42 reads the information in the memory 41 and completes the steps of the above method in combination with its hardware.
Embodiment 5:
An embodiment of the present invention provides a computer-readable medium bearing non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method provided by Embodiment 1 or Embodiment 2 above.
Unless specifically stated otherwise, the relative arrangement of the components and steps, the numerical expressions and the values set forth in these embodiments do not limit the scope of the present invention.
It is clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems and apparatuses described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In all the examples shown and described herein, any specific value should be interpreted as merely illustrative rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
The flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the systems, methods and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The computer-readable medium bearing non-volatile program code executable by a processor provided by the embodiments of the present invention has the same technical features as the method, apparatus and electronic device for concurrently executing task data provided by the above embodiments, and can therefore solve the same technical problems and achieve the same technical effects.
In addition, the terms "first", "second" and "third" are used for description purposes only and shall not be understood as indicating or implying relative importance.
The computer program product for carrying out the method for concurrently executing task data provided by the embodiments of the present invention comprises a computer-readable storage medium storing processor-executable non-volatile program code; the instructions contained in the program code can be used to execute the method described in the foregoing method embodiments, and for specific implementation reference may be made to the method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be realized in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
Finally, it should be noted that the embodiments described above are only specific implementations of the present invention, used to illustrate rather than limit the technical solution of the present invention, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope disclosed by the present invention, anyone familiar with this technical field can still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or replace some of the technical features with equivalents; such modifications, variations or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for concurrently executing task data, characterized by comprising:
distributing to-be-executed task data into a first queue for queuing according to the timing of the tasks;
extracting to-be-executed task data from the first queue into a second queue for queuing according to the content of the tasks; and
extracting all the to-be-executed task data in the second queue, in the order in which they are arranged in the second queue, into a third queue for concurrent execution.
2. The method for concurrently executing task data according to claim 1, characterized in that extracting to-be-executed task data from the first queue into the second queue according to the content of the tasks comprises:
a task distributor extracting, according to the group ID corresponding to the task content, the to-be-executed task data from the first queue into the target sub-queue corresponding to that group ID, wherein the target sub-queue is set in the second queue.
3. The method for concurrently executing task data according to claim 2, characterized by further comprising:
setting a maximum number of sub-queues in the second queue; and
limiting the number of sub-queues queued in the second queue according to the maximum number of sub-queues.
4. The method for concurrently executing task data according to claim 2, further comprising:
setting a maximum task data amount for each subqueue in the second queue; and
limiting the amount of pending task data queued in each subqueue of the second queue according to the maximum task data amount of the subqueue.
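Claims 3 and 4 add two capacity limits to the second queue: a cap on the number of subqueues and a cap on each subqueue's task data amount. A sketch combining both; the limit values and the reject-on-full behavior are assumptions (claim 5 instead has the distributor extract in an adjusted order when a subqueue is full):

```python
class SecondQueue:
    def __init__(self, max_subqueues, max_per_subqueue):
        self.max_subqueues = max_subqueues        # claim 3: cap on subqueue count
        self.max_per_subqueue = max_per_subqueue  # claim 4: cap per subqueue
        self.subqueues = {}

    def enqueue(self, packet_id, task):
        sub = self.subqueues.get(packet_id)
        if sub is None:
            if len(self.subqueues) >= self.max_subqueues:
                return False  # no room for another subqueue
            sub = self.subqueues[packet_id] = []
        if len(sub) >= self.max_per_subqueue:
            return False  # subqueue full; caller may adjust order (claim 5)
        sub.append(task)
        return True

q = SecondQueue(max_subqueues=2, max_per_subqueue=2)
print(q.enqueue("A", "t1"))  # True
print(q.enqueue("A", "t2"))  # True
print(q.enqueue("A", "t3"))  # False (subqueue A is full)
print(q.enqueue("B", "t4"))  # True
print(q.enqueue("C", "t5"))  # False (maximum subqueue count reached)
```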
5. The method for concurrently executing task data according to claim 4, wherein extracting pending task data from the first queue into the second queue for queuing according to the content of the tasks further comprises:
when the task data amount queued in a target subqueue is greater than the maximum task data amount of the subqueue, extracting, by the task distributor, the pending task data in the first queue according to an adjusted order, wherein the adjusted order differs from the queuing order of the pending task data in the first queue.
6. The method for concurrently executing task data according to claim 1, further comprising:
setting a maximum task data amount for the first queue; and
limiting the amount of pending task data queued in the first queue according to the maximum task data amount of the first queue.
7. The method for concurrently executing task data according to claim 4, wherein extracting all of the pending task data in the second queue, in the order in which it is arranged in the second queue, into the third queue for concurrent execution comprises:
when the task data amount queued in a subqueue is greater than the maximum task data amount of the subqueue, and no pending task data remains in the third queue, extracting all of the pending task data in the second queue into the third queue, so that the pending task data in all subqueues is executed concurrently, wherein the task data in each subqueue is executed according to the arrangement order of the second queue.
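The batch-flush condition of claim 7 (some subqueue has reached its cap, and the third queue is drained) can be sketched as follows; the function name, the list-based third queue, and the `run` callable are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def flush_to_third_queue(second_queue, third_queue, max_per_subqueue, run):
    """When some subqueue reaches its maximum task data amount and the
    third queue holds no pending tasks, move every pending task into the
    third queue and execute them all concurrently."""
    some_full = any(len(sub) >= max_per_subqueue for sub in second_queue.values())
    if some_full and not third_queue:
        for sub in second_queue.values():
            third_queue.extend(sub)  # preserves each subqueue's order
            sub.clear()
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run, third_queue))
    return None  # condition not met; nothing flushed

second = {"A": ["a1", "a2"], "B": ["b1"]}
out = flush_to_third_queue(second, [], max_per_subqueue=2, run=str.upper)
print(out)  # ['A1', 'A2', 'B1']
```

Gating the flush on an empty third queue means one batch finishes draining before the next is admitted, which bounds the number of tasks running concurrently.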
8. A device for concurrently executing task data, comprising:
a distribution module, configured to distribute pending task data into a first queue for queuing according to the timing of the tasks;
an abstraction module, configured to extract pending task data from the first queue into a second queue for queuing according to the content of the tasks; and
an extraction module, configured to extract all of the pending task data in the second queue, in the order in which it is arranged in the second queue, into a third queue for concurrent execution.
9. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to execute the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910136089.7A CN109800074A (en) | 2019-02-21 | 2019-02-21 | Task data concurrently executes method, apparatus and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109800074A true CN109800074A (en) | 2019-05-24 |
Family
ID=66561204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910136089.7A Pending CN109800074A (en) | 2019-02-21 | 2019-02-21 | Task data concurrently executes method, apparatus and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109800074A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112925621A (en) * | 2021-02-26 | 2021-06-08 | 北京百度网讯科技有限公司 | Task processing method and device, electronic equipment and storage medium |
CN113535361A (en) * | 2021-07-23 | 2021-10-22 | 百果园技术(新加坡)有限公司 | Task scheduling method, device, equipment and storage medium |
CN115629878A (en) * | 2022-10-20 | 2023-01-20 | 北京力控元通科技有限公司 | Data processing method and system based on memory exchange |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218455A (en) * | 2013-05-07 | 2013-07-24 | 中国人民解放军国防科学技术大学 | Method of high-speed concurrent processing of user requests of Key-Value database |
CN105095365A (en) * | 2015-06-26 | 2015-11-25 | 北京奇虎科技有限公司 | Information flow data processing method and device |
CN105389209A (en) * | 2015-12-25 | 2016-03-09 | 中国建设银行股份有限公司 | Asynchronous batch task processing method and system |
US20180314548A1 (en) * | 2017-04-27 | 2018-11-01 | Microsoft Technology Licensing, Llc | Work item management in content management systems |
US20190050255A1 (en) * | 2018-03-23 | 2019-02-14 | Intel Corporation | Devices, systems, and methods for lockless distributed object input/output |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190524 |