CN108572865A - Task queue processing method and apparatus - Google Patents

Task queue processing method and apparatus

Info

Publication number
CN108572865A
CN108572865A (application CN201810296570.8A)
Authority
CN
China
Prior art keywords
task
fragment
label
decision tree
leaf node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810296570.8A
Other languages
Chinese (zh)
Inventor
李正民
朱春鸽
张鸿
刘欣然
李小标
黄道超
孙发强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Rui Digital Security System Ltd By Share Ltd
National Computer Network and Information Security Management Center
Original Assignee
Tianjin Rui Digital Security System Ltd By Share Ltd
National Computer Network and Information Security Management Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Rui Digital Security System Ltd By Share Ltd and National Computer Network and Information Security Management Center
Priority to CN201810296570.8A
Publication of CN108572865A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a task queue processing method and apparatus. The method includes: setting up a cache decision tree, setting a label for each node of the cache decision tree, and setting an atomic queue for each leaf node among the nodes; and, according to the label information of a task, querying the cache decision tree level by level down to a leaf node, and performing the task access in the atomic queue corresponding to that leaf node. The invention is a queue design method that supports highly concurrent tasks: according to their labels, tasks are cached level by level through decision-tree shards, so that enqueue and dequeue operations are distributed across different queues, high-concurrency control is shared level by level, and concurrency performance is effectively improved.

Description

Task queue processing method and apparatus
Technical field
The present invention relates to the field of cloud computing technology, and more particularly to a task queue processing method and apparatus.
Background
A queue is a linear data structure: elements are inserted at one end and removed from the other. In practical business processing, many request tasks do not require real-time feedback of results; only eventual consistency must be guaranteed. To preserve the order of data processing, a queue is therefore often used to solve this class of problem.
At present, when a task queue is used, tasks that would otherwise have to execute synchronously with the main thread are first cached in the queue and then processed asynchronously; after processing succeeds, the result is returned to the user asynchronously. However, when a massive number of tasks are cached into the queue concurrently while waiting for threads to execute them, and the enqueue and dequeue operations concentrate at a single point, problems such as slow responses under high concurrency frequently appear, causing slow system response or even system crashes.
Summary of the invention
The technical problem to be solved by the present invention is to provide a task queue processing method and apparatus, so as to address the prior-art problem that, during task execution, enqueue and dequeue operations concentrate at a single point and high concurrency easily becomes slow.
To solve the above technical problem, the present invention adopts the following technical solutions:
The present invention provides a task queue processing method, including: setting up a cache decision tree, in which a label is set for each node of the cache decision tree and an atomic queue is set for each leaf node among the nodes; and, according to the label information of a task, querying the cache decision tree level by level down to a leaf node, and performing the task access in the atomic queue corresponding to that leaf node.
Setting up the cache decision tree, setting a label for each node and setting an atomic queue for each leaf node includes: partitioning the cache into multi-level cache shards to form the cache decision tree, where each upper-level shard contains multiple lower-level shards; setting a label for each shard serving as a node; and setting an atomic queue for each shard serving as a leaf node.
Querying the cache decision tree level by level according to the task's label information and performing the task access in the corresponding atomic queue includes: receiving an enqueue request for a task; parsing the label information of the task; and, according to that label information, querying and caching level by level in the cache decision tree until the task is cached into the atomic queue corresponding to a leaf node.
Alternatively, querying the cache decision tree level by level according to the task's label information and performing the task access in the corresponding atomic queue includes: receiving a dequeue request for a task; obtaining the label information in the dequeue request; querying the cache decision tree level by level down to a leaf node according to that label information; and returning a task from the atomic queue corresponding to the leaf node upward, level by level, along the query path.
The method further includes: when caching a task, after the shard corresponding to a label is found in the current query, locking the lower-level shards of that shard; and, when fetching a task, after the shard corresponding to a label is found in the current query, locking the upper-level shard of that shard.
The present invention further provides a task queue processing apparatus, including: a setup module, configured to set up a cache decision tree, set a label for each node of the cache decision tree, and set an atomic queue for each leaf node among the nodes; and an execution module, configured to query the cache decision tree level by level down to a leaf node according to the label information of a task, and perform the task access in the atomic queue corresponding to that leaf node.
The setup module is configured to: partition the cache into multi-level cache shards to form the cache decision tree, where each upper-level shard contains multiple lower-level shards; set a label for each shard serving as a node; and set an atomic queue for each shard serving as a leaf node.
The execution module is configured to: receive an enqueue request for a task; parse the label information of the task; and, according to that label information, query and cache level by level in the cache decision tree until the task is cached into the atomic queue corresponding to a leaf node.
The execution module is further configured to: receive a dequeue request; obtain the label information in the dequeue request; query the cache decision tree level by level down to a leaf node according to that label information; and return a task from the atomic queue corresponding to the leaf node upward, level by level, along the query path.
The execution module is additionally configured to: when caching a task, after the shard corresponding to a label is found in the current query, lock the lower-level shards of that shard; and, when fetching a task, after the shard corresponding to a label is found in the current query, lock the upper-level shard of that shard.
The present invention has the following beneficial effects:
The present invention is a queue design method that supports highly concurrent tasks. According to the labels of tasks, tasks are cached level by level through decision-tree shards, so that enqueue and dequeue operations are distributed across different queues, high-concurrency control is shared level by level, and concurrency performance is effectively improved.
Brief description of the drawings
Fig. 1 is a flowchart of the task queue processing method according to the first embodiment of the present invention;
Fig. 2 is a flowchart of the task queue processing method according to the second embodiment of the present invention;
Fig. 3 is a schematic diagram of the task queue processing method according to the second embodiment of the present invention;
Fig. 4 is a structural diagram of the task queue processing apparatus according to the third embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
Embodiment one
This embodiment provides a task queue processing method. Fig. 1 is a flowchart of the task queue processing method according to the first embodiment of the present invention.
Step S110: set up a cache decision tree, set a label for each node of the cache decision tree, and set an atomic queue for each leaf node among the nodes.
The cache is partitioned into multi-level cache shards, forming the cache decision tree. Each upper-level shard contains multiple lower-level shards; each shard serving as a node is assigned a label, and each shard serving as a leaf node is assigned an atomic queue. Tasks are ultimately stored in the bottom-level atomic queues.
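As an illustrative sketch only (the patent does not give a reference implementation), the shard hierarchy above can be modeled as a tree whose internal nodes are labeled shards and whose leaves each own an atomic queue; all names here (`CacheShard`, `find_leaf`) are hypothetical:

```python
from collections import deque

class CacheShard:
    """A node of the cache decision tree: a labeled cache shard.

    Leaf shards (no children) each own an atomic queue; upper-level
    shards only route lookups to their lower-level shards.
    """
    def __init__(self, label, children=None):
        self.label = label
        self.children = {c.label: c for c in (children or [])}
        # Only leaf shards carry an atomic queue.
        self.queue = deque() if not self.children else None

def find_leaf(root, labels):
    """Walk the decision tree one level per label down to a leaf shard."""
    node = root
    for label in labels:
        node = node.children[label]
    return node

# Three-level tree: region -> department -> application (leaf).
tree = CacheShard("root", [
    CacheShard("GB", [
        CacheShard("RB", [CacheShard("PB")]),
    ]),
])

leaf = find_leaf(tree, ["GB", "RB", "PB"])
leaf.queue.append("task-1")   # an enqueue lands in the leaf's atomic queue
```

The point of the structure is that an enqueue or dequeue only ever touches the shards along one root-to-leaf path, not the whole queue.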
Step S120: according to the label information of a task, query the cache decision tree level by level down to a leaf node, and perform the task access in the atomic queue corresponding to that leaf node.
For enqueue: an enqueue request for a task is received; the task's label information is parsed; according to that label information, the cache decision tree is queried and cached level by level until the task is cached into the atomic queue corresponding to a leaf node.
For dequeue: a dequeue request is received; the label information in the request is obtained; according to that label information, the cache decision tree is queried level by level down to a leaf node; a task from the corresponding atomic queue is then returned upward, level by level, along the query path.
When caching a task, after the shard corresponding to a label is found at the current level, the lower-level shards of that shard are locked; when fetching a task, after the shard corresponding to a label is found at the current level, the upper-level shard of that shard is locked. Further, once the shard corresponding to a label has been found, the shards locked earlier are unlocked first, and only then are the shards to be queried next locked.
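The "unlock what was locked before, then lock what will be queried next" rule is essentially hand-over-hand lock coupling. A minimal sketch, assuming one lock per level of sibling shards (the patent does not prescribe the lock granularity, and `Level`/`descend` are invented names):

```python
import threading
from collections import deque

class Level:
    """One group of sibling shards guarded by a single lock (a
    simplification of the embodiment, which locks the sibling shards
    being searched at the current level)."""
    def __init__(self, shards):
        self.lock = threading.Lock()
        self.shards = shards  # label -> child Level, or a leaf deque

def descend(levels, labels):
    """Walk down one level per label, holding at most one level's lock
    at a time: lock the level, pick the matching shard, release."""
    path = []
    node = levels
    for label in labels:
        node.lock.acquire()
        try:
            chosen = node.shards[label]
        finally:
            node.lock.release()   # unlock before querying the next level
        path.append(label)
        node = chosen
    return node, path

leaf_queue = deque()
tree = Level({"GB": Level({"RB": Level({"PB": leaf_queue})})})
queue, path = descend(tree, ["GB", "RB", "PB"])
queue.append("task-42")
```

Because only one level is locked at any moment, concurrent accesses to different subtrees never contend on the same lock.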
This embodiment provides a task queue design that supports high concurrency. Through a decision tree defined by labels, the highly concurrent tasks sent by multiple clients are quickly dispatched into their corresponding atomic queues, and concurrent reads and writes of the task queue are realized by means of local locking. By sharding and storing tasks level by level, the concurrency pressure is shared level by level, which greatly improves the concurrent access capability of the queue as a whole.
Embodiment two
To make the present invention easier to understand, this embodiment provides a more specific task queue processing method. Those skilled in the art will appreciate that this embodiment is merely intended to illustrate the present invention, not to limit it. For the high-concurrency problem that arises when a massive number of tasks are cached into a queue, the cache can, as in this embodiment, be sharded level by level based on task labels so that tasks are quickly dispatched into atomic queues.
Fig. 2 is a flowchart of the task queue processing method according to the second embodiment of the present invention.
Step S210: pre-define three levels of task labels.
In this embodiment, the three levels of task labels are defined as: region, department and application.
Region is the first-level label, department the second-level label, and application the third-level label. The region label indicates which region a task belongs to; the department label indicates which business or engineering department a task belongs to; the application label indicates which application program will finally execute the task.
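For illustration only, a task's three label levels could be represented as a small record; the field names below are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskLabels:
    region: str       # first-level label: which region the task belongs to
    department: str   # second-level label: which business/engineering department
    application: str  # third-level label: which application finally executes it

labels = TaskLabels(region="GB", department="RB", application="PB")
# Ordered from first level to third level, matching the query order
# used when walking the cache decision tree.
label_path = [labels.region, labels.department, labels.application]
```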
Step S220: partition the cache into three levels of cache shards, and set a label for each shard.
When the levels are set, the number of shard levels equals the number of task label levels.
Each shard at a given level contains multiple sub-shards, down to the atomic queues at the bottom.
In this embodiment, the cache is partitioned into three levels, so the whole queue structure has a three-level cache. According to the labels set for each level, the three cache levels are, in order, the region cache, the department cache and the application cache. An upper-level shard contains multiple lower-level shards: a region shard contains multiple department shards, a department shard contains multiple application shards, and finally each application shard corresponds to its own atomic queue.
Step S230: when an enqueue request for a task is received, parse the label information of the task.
When an enqueue request arrives, the task's first-level, second-level and third-level labels are parsed out, i.e. its region label, department label and application label.
Step S240: lock the first-level cache shards, search for the first-level shard matching the task's first-level label, and unlock the first-level shards once it is found.
According to the task's region label, the region shard with the same label is searched for. To prevent concurrent access during the search, only the region shards are locked, so the other cache levels are not affected.
Step S250: lock the second-level shards under the first-level shard just found, search for the second-level shard matching the task's second-level label, and unlock the locked second-level shards once it is found.
According to the task's department label, the department shard with the same label is queried, again locking the shards of the current level.
Step S260: lock the third-level shards under the second-level shard just found, search for the third-level shard matching the task's third-level label, unlock the locked third-level shards once it is found, and store the task into the atomic queue corresponding to that third-level shard.
According to the task's application label, the matching application shard is searched for; as in the preceding steps, the shards of this level are locked during the search. Finally, the task is put into the bottom-level atomic queue corresponding to the application shard found, and the locked application shards are unlocked.
Of course, if there are also fourth-level, fifth-level, ..., N-th-level labels, locking and unlocking proceed in the same way as for the first-, second- and third-level labels, until the task is stored into the atomic queue corresponding to the N-th-level shard.
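Steps S240–S260 can be sketched as one loop that, per level, locks that level's shards, selects the shard matching the level's label, unlocks, and finally appends the task to the leaf's atomic queue; as the paragraph above notes, the same loop covers N label levels. All identifiers are hypothetical:

```python
import threading
from collections import deque

def make_level(**children):
    """An internal level: its child shards plus the lock guarding them."""
    d = dict(children)
    d["_lock"] = threading.Lock()
    return d

def make_leaf():
    """A leaf shard holding the bottom-level atomic queue."""
    return {"_lock": threading.Lock(), "_queue": deque()}

def enqueue(root, labels, task):
    """Enqueue `task`, walking one shard level per label (S240-S260)."""
    node = root
    for label in labels:
        with node["_lock"]:          # lock this level's shards
            node = node[label]       # select the shard matching the label
        # leaving the `with` block is the "unlock after finding" step
    with node["_lock"]:
        node["_queue"].append(task)  # store into the bottom atomic queue

tree = make_level(GB=make_level(RB=make_level(PB=make_leaf())))
enqueue(tree, ["GB", "RB", "PB"], {"id": 1})
```

The loop works unchanged for any number of label levels, since each iteration consumes exactly one label.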
Step S270: when a dequeue request for a task is received, obtain the task's label information.
Step S280: according to the task's first-level, second-level and third-level labels, find the third-level shard level by level, and determine the atomic queue corresponding to that third-level shard.
Of course, if there are also fourth-level, fifth-level, ..., N-th-level labels, the N-th-level shard is found level by level according to the first-level, second-level, ..., N-th-level labels, and the atomic queue corresponding to that N-th-level shard is determined.
Step S290: return the task in that atomic queue upward, level by level, through the third-level, second-level and first-level shards, locking the shard the task is about to reach at each step.
According to the region label, department label and application label in the dequeue request, the corresponding atomic queue is found level by level; the task in the bottom-level atomic queue is then returned upward level by level, locking the current level at each step to prevent concurrency problems.
Of course, when the task moves on from the shard of the current level to the next shard, the shard of the current level is unlocked.
In this embodiment, if the dequeue request does not contain the label of some level, then, when the corresponding shard of that level is searched for, all shards of that level are polled, so that all shards of the level are accessed evenly.
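One hypothetical way to realize that fair-polling policy for a request that omits a label level is a round-robin cursor over the sibling shards, e.g. with `itertools.cycle` (the class name and setup are illustrative, not from the patent):

```python
import itertools

class FairSelector:
    """Round-robin over sibling shard labels for requests that omit the
    label of this level, so every shard is visited in turn."""
    def __init__(self, shard_labels):
        self._cycle = itertools.cycle(shard_labels)

    def next_shard(self):
        """Return the label of the shard to poll next."""
        return next(self._cycle)

selector = FairSelector(["RA", "RB", "RX"])
# Two full rounds over the three department shards.
picked = [selector.next_shard() for _ in range(6)]
```

A production version would need the cursor itself protected by the level's lock; this sketch shows only the selection order.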
Fig. 3 is a schematic diagram of the task queue processing method according to the second embodiment of the present invention.
Processing an enqueue request for a task whose region label is GB, department label is RB and application label is PB:
Step 1.1: lock the region shards GA, GB, ..., GM (M > 1) in the decision tree; then find, among the region shards GA, GB, ..., GM, the region shard whose label is GB, denoted GBC (the C marks it as chosen); finally release the lock on the region shards GA, GBC, ..., GM.
Step 1.2: region shard GBC stores all the department shards under it. First lock the department shards RA, RB, ..., RX (X > 1); then find, among the department shards RA, RB, ..., RX, the department shard whose label is RB, denoted RBC; finally release the lock on the department shards RA, RBC, ..., RX.
Step 1.3: department shard RBC stores all the application shards under it. First lock the application shards PA, PB, ..., PY (Y > 1); then find, among the application shards PA, PB, ..., PY, the application shard whose label is PB, denoted PBC. This shard corresponds directly to a bottom-level atomic queue; finally, store the current task into the atomic queue corresponding to PBC, and release the lock on the application shards PA, PBC, ..., PY.
Processing a dequeue request for a task whose region label is GB, department label is arbitrary, and application label is PB:
Step 2.1: first find the region shard whose label is GB; for details, compare step 1.1.
Step 2.2: region shard GBC stores all the department shards under it. First lock the department shards RA, RB, ..., RX (X > 1). Since the department label of the dequeue request is arbitrary, a selection policy may be defined according to business requirements to decide the current choice; this embodiment, for example, uses polling to ensure each department shard is selected fairly. Suppose the department shard selected is RB; denote it RBC, and release the lock on the department shards RA, RBC, ..., RX.
Step 2.3: department shard RBC stores all the application shards under it. With reference to step 1.3, find under department shard RBC the application shard whose label is PB, denoted PBC. Application shard PBC corresponds directly to an atomic queue; take a task out of that atomic queue and return it upward level by level through PBC, RBC and GBC.
When the task is returned upward level by level: first lock PA, PBC, ..., PY and return the task to PBC; then lock RA, RBC, ..., RX, return the task to RBC, and unlock PA, PBC, ..., PY; then lock GA, GBC, ..., GM, return the task to GBC, and unlock RA, RBC, ..., RX; finally, take the task out at GBC and unlock GA, GBC, ..., GM.
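Steps 2.1–2.3 can be sketched as: walk down along the labels, pop a task from the leaf's atomic queue, then hand it back up level by level, taking each level's lock while the task passes through it. The node layout and names below are illustrative, not from the patent:

```python
import threading
from collections import deque

def node(children=None, leaf=False):
    """A shard: its lock, its children, a queue if it is a leaf, and a
    slot recording the task currently passing through this level."""
    return {"lock": threading.Lock(),
            "children": children or {},
            "queue": deque() if leaf else None,
            "inflight": None}

def dequeue(root, labels):
    """Pop a task at the leaf, then return it upward along the query
    path, locking each level the task reaches on the way up."""
    path = [root]
    n = root
    for label in labels:
        n = n["children"][label]
        path.append(n)
    with path[-1]["lock"]:
        task = path[-1]["queue"].popleft()   # take the task out of the atomic queue
    # Return upward level by level (PBC -> RBC -> GBC in Fig. 3),
    # locking each level while the task is handed to it.
    for level in reversed(path[:-1]):
        level["lock"].acquire()
        level["inflight"] = task             # task now held at this level
        level["lock"].release()
    return task

tree = node({"GB": node({"RB": node({"PB": node(leaf=True)})})})
tree["children"]["GB"]["children"]["RB"]["children"]["PB"]["queue"].append("t9")
result = dequeue(tree, ["GB", "RB", "PB"])
```

As in the description, at each hop only the level currently receiving the task is locked, so other paths through the tree stay available.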
Embodiment three
This embodiment provides a task queue processing apparatus. Fig. 4 is a structural diagram of the task queue processing apparatus according to the third embodiment of the present invention.
The task queue processing apparatus includes:
a setup module 410, configured to set up a cache decision tree, set a label for each node of the cache decision tree, and set an atomic queue for each leaf node among the nodes; and
an execution module 420, configured to query the cache decision tree level by level down to a leaf node according to the label information of a task, and perform the task access in the atomic queue corresponding to that leaf node.
Optionally, the setup module 410 is configured to: partition the cache into multi-level cache shards to form the cache decision tree, where each upper-level shard contains multiple lower-level shards; set a label for each shard serving as a node; and set an atomic queue for each shard serving as a leaf node.
Optionally, the execution module 420 is configured to: receive an enqueue request for a task; parse the task's label information; and, according to that label information, query and cache level by level in the cache decision tree until the task is cached into the atomic queue corresponding to a leaf node.
Optionally, the execution module 420 is configured to: receive a dequeue request; obtain the label information in the dequeue request; query the cache decision tree level by level down to a leaf node according to that label information; and return a task from the atomic queue corresponding to the leaf node upward, level by level, along the query path.
Optionally, the execution module 420 is further configured to: when caching a task, after the shard corresponding to a label is found in the current query, lock the lower-level shards of that shard; and, when fetching a task, after the shard corresponding to a label is found in the current query, lock the upper-level shard of that shard.
The functions of the apparatus described in this embodiment have already been described in the method embodiments shown in Fig. 1 to Fig. 3; for anything not detailed here, refer to the relevant description in the preceding embodiments, which is not repeated.
In this embodiment, the queue scheduling order can be determined by customizing task labels according to business characteristics; tasks are sharded level by level through the decision tree according to their characteristics, and scheduled according to those characteristics, so that high-concurrency control is shared level by level and concurrency performance is effectively improved.
Although the preferred embodiments of the present invention have been disclosed for purposes of example, those skilled in the art will recognize that various improvements, additions and substitutions are also possible; therefore, the scope of the present invention should not be limited to the above embodiments.

Claims (10)

1. A task queue processing method, characterized by including:
setting up a cache decision tree, setting a label for each node of the cache decision tree, and setting an atomic queue for each leaf node among the nodes; and
according to the label information of a task, querying the cache decision tree level by level down to a leaf node, and performing the task access in the atomic queue corresponding to the leaf node.
2. The method according to claim 1, characterized in that setting up the cache decision tree, setting a label for each node of the cache decision tree, and setting an atomic queue for each leaf node among the nodes includes:
partitioning the cache into multi-level cache shards to form the cache decision tree, wherein each upper-level shard contains multiple lower-level shards;
setting a label for each shard serving as a node; and
setting an atomic queue for each shard serving as a leaf node.
3. The method according to claim 2, characterized in that querying the cache decision tree level by level down to a leaf node according to the label information of a task and performing the task access in the atomic queue corresponding to the leaf node includes:
receiving an enqueue request for a task;
parsing the label information of the task; and
according to the label information of the task, querying and caching level by level in the cache decision tree until the task is cached into the atomic queue corresponding to a leaf node.
4. The method according to claim 2, characterized in that querying the cache decision tree level by level down to a leaf node according to the label information of a task and performing the task access in the atomic queue corresponding to the leaf node includes:
receiving a dequeue request for a task;
obtaining the label information in the dequeue request;
querying the cache decision tree level by level down to a leaf node according to the label information; and
returning the task in the atomic queue corresponding to the leaf node upward, level by level, along the query path.
5. The method according to any one of claims 2-4, characterized in that the method further includes:
when caching a task, after the shard corresponding to a label is found in the current query, locking the lower-level shards of the shard; and
when fetching a task, after the shard corresponding to a label is found in the current query, locking the upper-level shard of the shard.
6. A task queue processing apparatus, characterized by including:
a setup module, configured to set up a cache decision tree, set a label for each node of the cache decision tree, and set an atomic queue for each leaf node among the nodes; and
an execution module, configured to query the cache decision tree level by level down to a leaf node according to the label information of a task, and perform the task access in the atomic queue corresponding to the leaf node.
7. The apparatus according to claim 6, characterized in that the setup module is configured to:
partition the cache into multi-level cache shards to form the cache decision tree, wherein each upper-level shard contains multiple lower-level shards;
set a label for each shard serving as a node; and
set an atomic queue for each shard serving as a leaf node.
8. The apparatus according to claim 7, characterized in that the execution module is configured to:
receive an enqueue request for a task;
parse the label information of the task; and
according to the label information of the task, query and cache level by level in the cache decision tree until the task is cached into the atomic queue corresponding to a leaf node.
9. The apparatus according to claim 7, characterized in that the execution module is configured to:
receive a dequeue request for a task;
obtain the label information in the dequeue request;
query the cache decision tree level by level down to a leaf node according to the label information; and
return the task in the atomic queue corresponding to the leaf node upward, level by level, along the query path.
10. The apparatus according to any one of claims 7-9, characterized in that the execution module is further configured to:
when caching a task, after the shard corresponding to a label is found in the current query, lock the lower-level shards of the shard; and
when fetching a task, after the shard corresponding to a label is found in the current query, lock the upper-level shard of the shard.
CN201810296570.8A 2018-04-04 2018-04-04 Task queue processing method and apparatus Pending CN108572865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810296570.8A CN108572865A (en) 2018-04-04 2018-04-04 Task queue processing method and apparatus


Publications (1)

Publication Number Publication Date
CN108572865A true CN108572865A (en) 2018-09-25

Family

ID=63573979


Country Status (1)

Country Link
CN (1) CN108572865A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102255788A (en) * 2010-05-19 2011-11-23 北京启明星辰信息技术股份有限公司 Message classification decision establishing system and method and message classification system and method
CN103902591A (en) * 2012-12-27 2014-07-02 中国科学院深圳先进技术研究院 Decision tree classifier establishing method and device
US8898520B1 (en) * 2012-04-19 2014-11-25 Sprint Communications Company L.P. Method of assessing restart approach to minimize recovery time
CN104657214A (en) * 2015-03-13 2015-05-27 华存数据信息技术有限公司 Multi-queue multi-priority big data task management system and method for achieving big data task management by utilizing system
CN105159783A (en) * 2015-10-09 2015-12-16 上海瀚之友信息技术服务有限公司 System task distribution method
CN107391243A (en) * 2017-06-30 2017-11-24 广东神马搜索科技有限公司 Thread task processing equipment, device and method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110825734A (en) * 2019-10-09 2020-02-21 上海交通大学 Concurrent updating method and read-write system for balance tree
CN110825734B (en) * 2019-10-09 2023-04-28 上海交通大学 Concurrent updating method of balance tree and read-write system
CN116893786A (en) * 2023-09-05 2023-10-17 苏州浪潮智能科技有限公司 Data processing method and device, electronic equipment and storage medium
CN116893786B (en) * 2023-09-05 2024-01-09 苏州浪潮智能科技有限公司 Data processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10366111B1 (en) Scalable distributed computations utilizing multiple distinct computational frameworks
CN104598563B (en) High concurrent date storage method and device
Karlsson et al. Do we need replica placement algorithms in content delivery networks
CN108924187B (en) Task processing method and device based on machine learning and terminal equipment
CN109886859A (en) Data processing method, system, electronic equipment and computer readable storage medium
US7444596B1 (en) Use of template messages to optimize a software messaging system
CN103957239B (en) DNS cache information processing method, equipment and system
CN109274730A (en) The optimization method and device that Internet of things system, MQTT message are transmitted
CN101577705A (en) Multi-core paralleled network traffic load balancing method and system
CN108572865A (en) A kind of task queue treating method and apparatus
US10776404B2 (en) Scalable distributed computations utilizing multiple distinct computational frameworks
CN106933664A (en) A kind of resource regulating method and device of Hadoop clusters
CN103294548A (en) Distributed file system based IO (input output) request dispatching method and system
Lee et al. Efficient processing of multiple continuous skyline queries over a data stream
CN106775498A (en) A kind of data cached synchronous method and system
WO2015047968A1 (en) Data caching policy in multiple tenant enterprise resource planning system
US8046780B1 (en) Efficient processing of assets with multiple data feeds
CN109376104A (en) A kind of chip and the data processing method and device based on it
Mealha et al. Data replication on the cloud/edge
CN108304253A (en) Map method for scheduling task based on cache perception and data locality
US20030225918A1 (en) Method and system for off-loading user queries to a task manager
CN103457976B (en) Data download method and system
CN109614411A (en) Date storage method, equipment and storage medium
US9703788B1 (en) Distributed metadata in a high performance computing environment
JP4718939B2 (en) Object categorization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180925