CN1145312C - Network processor thread scheduling method - Google Patents

Network processor thread scheduling method

Info

Publication number
CN1145312C
CN1145312C · CNB01125114XA · CN01125114A
Authority
CN
China
Prior art keywords
thread
external port
serve
engine
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB01125114XA
Other languages
Chinese (zh)
Other versions
CN1402471A (en)
Inventor
叶未川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB01125114XA priority Critical patent/CN1145312C/en
Publication of CN1402471A publication Critical patent/CN1402471A/en
Application granted granted Critical
Publication of CN1145312C publication Critical patent/CN1145312C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a thread scheduling method for a network processor, comprising the steps of: setting priorities for the external ports; having each thread, once idle, judge whether it and its engine have the right to serve the external ports and, if so, notify the other threads of the engine that they may not serve the external ports; having the thread select and mask the highest-priority external port; constructing a task in which the thread serves the selected port; having the thread notify the other engines and the other threads of its engine that they may serve the external ports; and releasing the mask while the task executes. The invention neither reduces the efficiency of the network processor nor leaves thread resources underused.

Description

Network processor thread scheduling method
Technical field:
The present invention relates to the field of data communication, and in particular to a thread scheduling method for a network processor used in data communication.
Background art:
Network processors are widely used in data communication today. A network processor performs many functions, such as protocol analysis, protocol processing and packet scheduling. Internally, a network processor generally comprises several independent miniature packet processors, called packet engines for short, and each packet engine can be divided into several threads that run alternately. The packet engines in a network processor are mutually independent and can run simultaneously, so they can process different data flows at the same time; the threads inside a packet engine, however, run alternately: they share the engine's common resources and cannot run simultaneously. How to coordinate the operation of the threads and engines is the key to improving the performance of a network processor.
A typical network processor serves several data ports, and the ports themselves carry no priority requirement. How, then, should idle threads and engines be assigned to serve the data ports, and how should they be assigned to achieve the best effect? Two scheduling methods exist at present; they are described in turn below.
1. Centralized scheduling method
A dedicated scheduling thread is assigned within the network processor to schedule all resources in a unified way. This scheduling thread may be located in any engine of the network processor. The scheduling proceeds as follows: the scheduling thread queries the external data ports, that is, it processes the state of the external ports to obtain the ports that need service; it then queries the network processor for other idle threads and selects the idle thread with the lowest number; finally, using the simple mapping between the port to be served and the lowest-numbered idle thread, it assigns that idle thread to handle the port.
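The following C sketch illustrates this centralized approach under assumed names and sizes (port_has_data, thread_idle, a four-port, sixteen-thread configuration); it is only a model of the scheme described above, not the patent's own implementation.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTS   4
    #define NUM_THREADS 16                   /* e.g. 4 engines x 4 threads */

    static bool port_has_data[NUM_PORTS] = { true, false, true, false };
    static bool thread_idle[NUM_THREADS];

    /* Return the lowest-numbered idle thread, or -1 if none is idle. */
    static int lowest_idle_thread(void)
    {
        for (int t = 0; t < NUM_THREADS; t++)
            if (thread_idle[t])
                return t;
        return -1;
    }

    /* One pass of the dedicated scheduling thread. */
    static void central_schedule_once(void)
    {
        for (int p = 0; p < NUM_PORTS; p++) {
            if (!port_has_data[p])
                continue;
            int t = lowest_idle_thread();
            if (t < 0)
                break;                       /* no idle thread left */
            thread_idle[t] = false;          /* thread t now serves port p */
            printf("port %d -> thread %d\n", p, t);
        }
    }

    int main(void)
    {
        for (int t = 0; t < NUM_THREADS; t++)
            thread_idle[t] = true;
        central_schedule_once();
        return 0;
    }

Because the mapping always picks the lowest-numbered idle thread, successive tasks cluster on the first engine, which is exactly the imbalance criticized below.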
The shortcomings of this method are as follows. It permanently occupies one thread, which objectively reduces the efficiency of the network processor. Moreover, the other threads of the engine hosting the scheduling thread cannot handle complicated services, because the threads inside an engine cannot run simultaneously and must alternate; when another thread handles a complicated service and occupies the engine's resources, the scheduling thread's work is delayed, scheduling efficiency drops, and the maximum utilization of resources is limited. In addition, the method finds it hard to arrange tasks reasonably: ideally tasks should be spread evenly across the engines rather than concentrated in one, because different engines can run simultaneously while the threads inside an engine must alternate, yet the simple assignment rule of centralized scheduling always gives the task to the lowest-numbered idle thread, so when the number of ports is small the engines are used unevenly and engine resources are wasted.
2. Distributed binding scheduling method
In this method the network processor has no dedicated scheduling thread; instead, a fixed thread is designated to serve a fixed corresponding port, that is, threads and ports are bound together, and each processing thread decides by itself whether to handle the single port it serves. Every processing thread therefore has its own scheduling function. For example, thread 0 serves only port 0, thread 1 serves only port 1, and so on.
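A corresponding C sketch of the binding approach, again with illustrative names only: thread i is permanently tied to port i, so ports beyond the thread count can never be served.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define NUM_PORTS   6                    /* two ports exceed the thread count */

    static bool port_has_data[NUM_PORTS] = { true, true, false, true, true, true };

    /* The only scheduling decision a bound thread ever makes. */
    static void bound_thread_poll(int thread_id)
    {
        int port = thread_id;                /* fixed one-to-one binding */
        if (port_has_data[port])
            printf("thread %d serves its bound port %d\n", thread_id, port);
    }

    int main(void)
    {
        for (int t = 0; t < NUM_THREADS; t++)
            bound_thread_poll(t);
        /* Ports 4 and 5 carry data but have no bound thread, so they starve. */
        return 0;
    }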
The shortcomings of this method are as follows. Because threads and ports are bound, the number of ports that can be served is limited by the number of threads; the number of threads is generally limited while the number of ports to serve may be large, so ports beyond the thread count may never receive service. In addition, the method cannot make full use of the thread resources: a single thread has limited processing capacity, so when the traffic on its port is heavy it cannot keep up, yet other threads remain idle. The parallel processing capability of the network processor is not exploited and thread resources are wasted.
Summary of the invention:
The purpose of the present invention is to provide a network processor thread scheduling method that neither occupies a dedicated thread (and so does not reduce the efficiency of the network processor) nor limits the number of serviceable ports to the number of threads, and that fully exploits the parallel processing capability of the network processor so as to make full use of thread resources.
To achieve the above object, the solution of the present invention is a network processor thread scheduling method comprising:
a. setting priorities for the external ports;
b. as soon as a thread becomes idle, judging whether both the thread and the engine it belongs to have the right to serve the external ports, and once both can serve them, notifying the other threads of the engine that they may not serve the external ports;
c. the thread selecting, according to the preset priorities, the external port with the highest priority and masking that port;
d. the thread constructing a task in which it serves the selected external port, and notifying the other engines and the other threads of its own engine that they may serve the external ports, whereby the designated thread of the next engine obtains the right to serve the external ports;
e. releasing the mask while the task is being executed.
The detailed process of step b comprises:
b1. the thread judging whether it may query the state of the external ports;
b2. if it may not query the state of the external ports, releasing control of the thread; if it may, notifying the other threads of the engine that they may not query the state of the external ports;
b3. the thread judging whether the engine it belongs to has the right to serve the external ports;
b4. if the engine may not serve the external ports, releasing control of the thread; if it may, reading the state of the external ports, removing the ports that are temporarily not to be served, and obtaining a list of serviceable external ports;
b5. judging whether any external port needs service; if none does, releasing control of the thread; if any does, proceeding.
In step b1, the thread judges whether it may query the state of the external ports according to the intrinsic switching relation among the threads of its engine, that is, by judging whether it is this thread's turn to serve.
In step b3, whether the engine has the right to serve the external ports is judged by checking whether a notice has been received from another thread that the next engine may serve the external ports.
Step d of the present invention comprises the following process:
d1. the thread constructing a task in which it serves the selected external port;
d2. the thread notifying the other engines that they may serve the external port;
d3. the thread dispatching the task;
d4. the thread notifying the other threads of its engine that they may query the data state of the external ports.
Between steps d2 and d3, the following process may be included: judging once more whether the thread may dispatch the task; if it may not, i.e. it cannot serve the external port, releasing control of the thread; if it may, i.e. it can serve the external port, proceeding.
Because the present invention builds the scheduling function into each thread of the network processor itself, it occupies no dedicated thread and therefore improves the efficiency of the network processor. Because threads and external ports are not bound in the present invention, any idle thread has the opportunity to serve a data port, so the number of serviceable external ports is not limited by the number of threads. In addition, the invention constrains the relation between threads and external ports through the notification mechanism between engines, the intrinsic service order of the threads inside each engine, and the masking and unmasking of external ports; when the traffic on an external port is heavy, the parallel processing capability of the network processor can be fully exploited and several threads can process the port in parallel, making full use of thread resources.
Description of drawings:
Fig. 1 is an outline flow chart of the method of the present invention.
Fig. 2 is a detailed flow chart of the method of the present invention.
Embodiment:
The outline flow chart in Fig. 1 shows the basic steps of the present invention; each step corresponds roughly to one of the subheadings below.
(1) Setting the priorities of the external ports. In general this setting is made manually. Since each thread, in the process described later, always serves the highest-priority external port first, the importance of the data carried on each port should be considered when the priorities are set, and the most important external port should be given the highest priority.
In the present invention, the order in which the engines serve the external ports is governed by the notification mechanism of the scheduling method, which is embodied in steps (2) and (4) below. The threads inside an engine also serve in a definite order, but that order is intrinsic to the network processor itself. Suppose, for example, that our network processor has four packet engines numbered 0, 1, 2 and 3, each containing four threads numbered 0, 1, 2 and 3. The ideal service order spreads the work evenly across the engines, because different engines can run simultaneously while the threads inside an engine must alternate. We therefore let thread 0 of engine 0 (written 00 for short) serve the external ports first, then thread 0 of engine 1 (written 10), then 20, then 30, followed by 01, 11, 21, 31, 02, 12, 22, 32, 03, 13, 23, 33. Suppose also that there are four external ports numbered 0, 1, 2 and 3, with port 0 given the highest priority, followed by ports 1, 2 and 3 in that order.
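The configuration of this example can be written down as the following C sketch; the array and the generated order are purely illustrative, not a prescribed data layout. Port 0 carries the highest priority, and the service order rotates across engines before advancing to the next thread index.

    #include <stdio.h>

    #define NUM_ENGINES        4
    #define THREADS_PER_ENGINE 4
    #define NUM_PORTS          4

    /* Lower value = higher priority; port 0 is the most important here. */
    static const int port_priority[NUM_PORTS] = { 0, 1, 2, 3 };

    int main(void)
    {
        printf("service order (engine,thread):");
        for (int t = 0; t < THREADS_PER_ENGINE; t++)     /* thread index changes last */
            for (int e = 0; e < NUM_ENGINES; e++)        /* engines rotate first      */
                printf(" %d%d", e, t);                   /* prints 00 10 20 30 01 ... */
        printf("\n");

        printf("port priorities (0 = highest):");
        for (int p = 0; p < NUM_PORTS; p++)
            printf(" port%d=%d", p, port_priority[p]);
        printf("\n");
        return 0;
    }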
In this step, the time from the start of task execution to the release of the external-port mask can also be preset; this part is discussed in detail in (5).
(2) As soon as a thread becomes idle, it judges whether both the thread itself and its engine have the right to serve the external ports; once both can serve them, it notifies the other threads of the engine that they may not serve the external ports.
In this process, an idle thread automatically takes on the scheduling work. Its detailed process comprises the following sub-steps (a C sketch of this logic follows the worked example below):
1) The thread judges whether it may query the state of the external ports. Concretely, it judges, according to the intrinsic switching relation among the threads of its engine, whether it is its turn to serve.
2) If it may not query the state of the external ports, control of the thread is released; if it may, it notifies the other threads of the engine that they may not query the state of the external ports.
3) The thread judges whether its engine has the right to serve the external ports. Concretely, it checks whether a notice has been received from another thread that the next engine may serve the external ports. That notice is in fact issued in step (4), described in detail later: each thread independently executes the five processes of the present invention, and the other threads issue the notice in their own step (4).
4) If the engine may not serve the external ports, control of the thread is released; if it may, the thread reads the state of the external ports, removes the ports that are temporarily not to be served, and obtains the list of serviceable external ports. Concretely, it determines which external ports currently need no service and which have been masked by other threads; masking is covered in (3) below, and the masking meant here is masking performed by other threads executing their own step (3), unrelated to this thread's own process. These ports are subtracted from the full set of external ports, and the remainder is placed in the list of serviceable external ports.
5) It is judged whether any external port needs service; if none does, control of the thread is released; if any does, the process continues. Concretely, the ports in the list of serviceable external ports are checked for pending data: a port with data needs service, and a port without data does not.
The five sub-steps above are shown clearly on the left side of Fig. 2.
Continuing the example above, at initialization we normally start thread 0 of engine 0 first and give it the right to serve the external ports. Thread 0 therefore first judges whether it may query the state of the external ports; the answer is yes, so it notifies threads 1, 2 and 3 of the engine that they may not query the port state. It then judges whether engine 0 may serve the external ports; the answer is again yes, so it reads the state of the external ports, finds that external port 1 is temporarily not to be served, removes that port, and obtains a list of serviceable external ports containing only ports 0, 2 and 3. It then judges whether any external port needs service: ports 0 and 2 have data while port 3 does not, so ports 0 and 2 need service and the following steps can proceed.
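The five sub-steps and the worked example above can be condensed into the following C sketch. The flags engine_may_query and engine_may_serve stand in for the engine's real turn-taking and notification mechanisms, which the patent does not spell out at this level, so every name here is an assumption made for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTS 4

    static bool engine_may_query = true;     /* intra-engine turn-taking (sub-step 1)  */
    static bool engine_may_serve = true;     /* notice from the previous engine (3)    */
    static bool port_masked[NUM_PORTS];      /* set by other threads in their step (3) */
    static bool port_has_data[NUM_PORTS] = { true, false, true, false };

    /* Returns the number of ports needing service (written into 'list'),
     * or -1 when the thread must release control. */
    static int idle_thread_check(int list[NUM_PORTS])
    {
        if (!engine_may_query)               /* sub-steps 1 and 2 */
            return -1;
        engine_may_query = false;            /* siblings may not query for now */

        if (!engine_may_serve)               /* sub-steps 3 and 4 */
            return -1;

        int n = 0;                           /* drop masked ports, build the list */
        for (int p = 0; p < NUM_PORTS; p++)
            if (!port_masked[p])
                list[n++] = p;

        int m = 0;                           /* sub-step 5: keep ports with data */
        for (int i = 0; i < n; i++)
            if (port_has_data[list[i]])
                list[m++] = list[i];

        return m > 0 ? m : -1;
    }

    int main(void)
    {
        port_masked[1] = true;               /* port 1 is temporarily not served */
        int list[NUM_PORTS];
        int n = idle_thread_check(list);
        if (n < 0) {
            printf("thread releases control\n");
        } else {
            printf("ports needing service:");
            for (int i = 0; i < n; i++)
                printf(" %d", list[i]);      /* prints 0 and 2, as in the example */
            printf("\n");
        }
        return 0;
    }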
(3) The thread selects, according to the preset priorities, the external port with the highest priority and masks it.
In the example above, of the ports 0 and 2 that need service, port 0 has the highest priority, so port 0 is selected for service and is then masked. Why mask it? Because some time passes between assigning a task to the port, i.e. deciding to serve it, and the task actually being executed, the port's state is not refreshed immediately; a later pass might assign another task, in other words another thread might serve the same port, and that thread would receive data whose state had not yet been refreshed, causing duplicate reception.
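Continuing the same illustrative sketch (the names port_priority and port_masked are assumptions, not the patent's data structures), step (3) reduces to picking the highest-priority entry of the list and setting its mask so that no second task is dispatched for the port before its state refreshes:

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTS 4

    static const int port_priority[NUM_PORTS] = { 0, 1, 2, 3 };  /* 0 = highest */
    static bool port_masked[NUM_PORTS];

    /* Pick the highest-priority port in 'list' (n >= 1) and mask it. */
    static int select_and_mask(const int list[], int n)
    {
        int best = list[0];
        for (int i = 1; i < n; i++)
            if (port_priority[list[i]] < port_priority[best])
                best = list[i];
        port_masked[best] = true;            /* hide the port from other threads */
        return best;
    }

    int main(void)
    {
        int serviceable[] = { 0, 2 };        /* ports 0 and 2 need service */
        int chosen = select_and_mask(serviceable, 2);
        printf("serving port %d (now masked)\n", chosen);
        return 0;
    }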
(4) The thread constructs a task in which it serves the selected external port, and notifies the other engines and the other threads of its own engine that they may serve the external ports; the designated thread of the next engine thereby obtains the right to serve the external ports.
This process is implemented as follows:
1) The thread constructs a task in which it serves the selected external port.
2) The thread notifies the other engines that they may serve the external port. This step merely grants the right to the other engines; in fact only the next engine in sequence can actually serve the external ports at this point, which is exactly what was described in sub-step 3) of step (2). Which thread within that engine obtains the right is determined by the intrinsic service order of the threads inside the engine.
The thread may then judge once more whether it can dispatch the task; if it cannot, i.e. it cannot serve the external port, control of the thread is released; if it can, i.e. it can serve the external port, the process continues. This judgement makes the result more accurate but is not strictly necessary.
3) The thread dispatches the task.
4) The thread notifies the other threads of its engine that they may query the data state of the external ports.
The middle and right parts of Fig. 2 show this process clearly.
Continuing the example, thread 0 of engine 0 constructs a task in which thread 0 serves external port 0, and then notifies the other engines that they may serve port 0; in effect this notifies engine 1 to serve port 0, where by the intrinsic order of the network processor thread 0 is the default thread to continue serving that port. The thread may then judge once more whether it can dispatch the task; this repeated check guards against the port having been masked by another thread, or service having been abandoned, during the program execution since the first check. Thread 0 then dispatches the task and notifies threads 1, 2 and 3 of engine 0 that they may query the data state of the external ports.
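Step (4) in the same sketch: build the task, pass the right to serve on to the next engine, perform the optional re-check, dispatch, and let the sibling threads query port state again. The task structure, the round-robin hand-off and the placeholder re-check are all assumptions made for the illustration, not the patent's own API.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_ENGINES 4

    static bool engine_may_serve[NUM_ENGINES] = { true, false, false, false };
    static bool engine_may_query = false;    /* was cleared in step (2) */

    struct task {
        int port;                            /* external port this task serves */
        int engine, thread;                  /* who will execute it            */
    };

    /* Placeholder for the optional re-check between d2 and d3. */
    static bool task_still_valid(int port)
    {
        (void)port;
        return true;
    }

    static void serve_port(int engine, int thread, int port)
    {
        struct task t = { port, engine, thread };            /* d1: build the task */

        engine_may_serve[engine] = false;                    /* d2: hand the right */
        engine_may_serve[(engine + 1) % NUM_ENGINES] = true; /* to the next engine */

        if (!task_still_valid(port))                         /* optional re-check  */
            return;                                          /* release control    */

        printf("engine %d thread %d dispatches task for port %d\n",
               t.engine, t.thread, t.port);                  /* d3: dispatch       */

        engine_may_query = true;                             /* d4: siblings may   */
                                                             /* query ports again  */
    }

    int main(void)
    {
        serve_port(0, 0, 0);                 /* thread 00 serves port 0 */
        return 0;
    }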
(5) The mask is released while the task is being executed, after which the designated thread of the next engine can serve this external port. During this process the mask on the external port is released at the time preset in step (1), measured from the start of task execution; that is, before the task has finished, a new thread of a new engine, namely thread 0 of engine 1, already begins to serve port 0 in parallel. In this way the resources of the network processor are utilized to the maximum and performance is greatly improved. Of course, when the new thread of the new engine begins to serve the same external port, it again goes through processes (1) to (5).
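Finally, step (5) in the sketch: the mask set in step (3) is cleared a preset interval after the task starts, before the task necessarily finishes, so the designated thread of the next engine can begin serving the same port in parallel. The tick-based timing below is purely illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTS 4

    static bool port_masked[NUM_PORTS] = { true, false, false, false };

    /* Preset in step (1): ticks from task start until the mask is released. */
    static const int unmask_delay_ticks = 3;

    static void execute_task(int port, int task_ticks)
    {
        for (int tick = 0; tick < task_ticks; tick++) {
            if (tick == unmask_delay_ticks && port_masked[port]) {
                port_masked[port] = false;   /* release the mask mid-task */
                printf("tick %d: port %d unmasked, next engine may serve it\n",
                       tick, port);
            }
            /* ... packet-processing work for this tick ... */
        }
        printf("task for port %d finished after %d ticks\n", port, task_ticks);
    }

    int main(void)
    {
        execute_task(0, 6);                  /* the task outlives its mask */
        return 0;
    }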
Because this scheduling method takes the distributed binding method as its basis but eliminates that method's unreasonable and inflexible use of resources by adopting free-form scheduling, in which any idle thread has the opportunity to serve a data port, we call it the distributed free scheduling method.

Claims (9)

1. A network processor thread scheduling method, comprising:
a. setting priorities for the external ports;
b. as soon as a thread becomes idle, judging whether both the thread and the engine it belongs to have the right to serve the external ports, and once both can serve them, notifying the other threads of the engine that they may not serve the external ports;
c. the thread selecting, according to the preset priorities, the external port with the highest priority and masking that port;
d. the thread constructing a task in which it serves the selected external port, and notifying the other engines and the other threads of its own engine that they may serve the external ports, whereby the designated thread of the next engine obtains the right to serve the external ports;
e. releasing the mask while the task is being executed.
2. The network processor thread scheduling method according to claim 1, characterized in that the detailed process of step b comprises:
b1. the thread judging whether it may query the state of the external ports;
b2. if it may not query the state of the external ports, releasing control of the thread; if it may, notifying the other threads of the engine that they may not query the state of the external ports;
b3. the thread judging whether the engine it belongs to has the right to serve the external ports;
b4. if the engine may not serve the external ports, releasing control of the thread; if it may, reading the state of the external ports, removing the ports that are temporarily not to be served, and obtaining a list of serviceable external ports;
b5. judging whether any external port needs service; if none does, releasing control of the thread; if any does, proceeding.
3. The network processor thread scheduling method according to claim 2, characterized in that in step b1 the thread judges whether it may query the state of the external ports according to the intrinsic switching relation among the threads of its engine, that is, by judging whether it is this thread's turn to serve.
4. The network processor thread scheduling method according to claim 2, characterized in that in step b3 whether the engine has the right to serve the external ports is judged by checking whether a notice has been received from another thread that the next engine may serve the external ports.
5. The network processor thread scheduling method according to claim 2, 3 or 4, characterized in that in step b4 the state of the external ports is read by determining which external ports currently need no service and which have been masked by other threads, subtracting these ports from the full set of external ports, and placing the remainder in the list of serviceable external ports.
6. The network processor thread scheduling method according to claim 2, 3 or 4, characterized in that in step b5 whether any external port needs service is judged by checking, for the ports in the list of serviceable external ports, whether data are present on those ports: a port with data needs service, and a port without data does not.
7. The network processor thread scheduling method according to claim 1, characterized in that step d comprises the following process:
d1. the thread constructing a task in which it serves the selected external port;
d2. the thread notifying the other engines that they may serve the external port;
d3. the thread dispatching the task;
d4. the thread notifying the other threads of its engine that they may query the data state of the external ports.
8. The network processor thread scheduling method according to claim 7, characterized in that between steps d2 and d3 the following process is included: judging once more whether the thread may dispatch the task; if it may not, i.e. it cannot serve the external port, releasing control of the thread; if it may, i.e. it can serve the external port, proceeding.
9. The network processor thread scheduling method according to claim 1, characterized in that the time from the start of task execution to the release of the external-port mask is preset in step a, and in step e the mask on the external port is released according to this time.
CNB01125114XA 2001-08-13 2001-08-13 network processor thread scheduling method Expired - Lifetime CN1145312C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB01125114XA CN1145312C (en) 2001-08-13 2001-08-13 network processor thread scheduling method

Publications (2)

Publication Number Publication Date
CN1402471A CN1402471A (en) 2003-03-12
CN1145312C true CN1145312C (en) 2004-04-07

Family

ID=4665891

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB01125114XA Expired - Lifetime CN1145312C (en) 2001-08-13 2001-08-13 network processor thread scheduling method

Country Status (1)

Country Link
CN (1) CN1145312C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2412761C (en) * 2004-04-02 2011-01-05 Nokia Corp Improvements in or relating to an operating system for a computing device
GB0519981D0 (en) * 2005-09-30 2005-11-09 Ignios Ltd Scheduling in a multicore architecture
CN108038072B (en) * 2017-12-28 2021-11-09 深圳Tcl数字技术有限公司 Access method of serial device, terminal device and computer readable storage medium

Also Published As

Publication number Publication date
CN1402471A (en) 2003-03-12

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20040407

DD01 Delivery of document by public notice

Addressee: The person in charge of patents, Huawei Technologies Co., Ltd.

Document name: Notice of expiration and termination of patent right