CN103517342A - Adaptive Trap message treatment method and device - Google Patents


Info

Publication number
CN103517342A
CN103517342A (application CN201210207991.1A)
Authority
CN
China
Prior art keywords
message
priority
trap message
preempted
trap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210207991.1A
Other languages
Chinese (zh)
Inventor
刘梅红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201210207991.1A
Publication of CN103517342A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Telephonic Communication Services (AREA)

Abstract

An adaptive Trap message treatment method comprises the following steps: when a trigger condition is met, messages are taken out of a Trap message queue to form a preempting priority queue and a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue; Trap messages are taken out of the preempting priority queue and each Trap message is processed as follows: if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, the position of the currently taken-out Trap message in the Trap message queue is exchanged with that of the preemptable Trap message, and the preemptable Trap message is removed from the preempted priority queue; otherwise, the position of the currently taken-out Trap message in the Trap message queue remains unchanged. The invention also provides an adaptive Trap message processing device.

Description

Adaptive Trap message treatment method and device
Technical field
The present invention relates to the field of mobile communication technology, and in particular to an adaptive message treatment method and device used when an eNodeB (Evolved Node B) reports Trap messages to an OMC (Operation and Maintenance Center).
Background art
Based on the SNMP (Simple Network Management Protocol) protocol, there are two basic communication modes between the OMC and the eNodeB: the OMC sends a request message to the eNodeB and waits for the eNodeB's response; and the eNodeB actively reports Trap messages to the OMC to notify it of events such as abnormalities and parameter modifications.
Usually, the OMC maintains a Trap message queue and processes the Trap messages reported by the eNodeB in sequence according to the FIFO (First In First Out) principle. When the network is congested, this processing mode often fails to guarantee the real-time handling of the messages.
Summary of the invention
The technical problem to be solved by the present invention is to provide an adaptive Trap message treatment method and device, so as to solve the problem that the real-time handling of messages cannot be guaranteed when the network is congested.
To solve the above problem, the present invention provides an adaptive Trap message treatment method, comprising: when a trigger condition is met, performing the following operations:
taking one or more messages out of the Trap message queue to form a preempting priority queue, and taking one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue;
taking Trap messages out of the preempting priority queue, and performing the following on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
Further, the above method may also have the following feature: each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
Further, the above method may also have the following feature: whether the currently taken-out Trap message meets the preemption condition is judged as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
Further, the above method may also have the following feature: whether a Trap message in the preempted priority queue can be preempted is judged as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted.
Further, the above method may also have the following feature: taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first.
Further, the above method may also have the following feature: exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted.
Further, the above method may also have the following feature: the trigger condition being met comprises:
the number of messages in the Trap message queue reaching a congestion control threshold; or the number of messages in the Trap message queue reaching the congestion control threshold and the control switch of the preemption algorithm being on.
The present invention also provides an adaptive Trap message processing device, comprising:
a control unit, configured to judge whether the trigger condition is met and, if it is met, to trigger the queue creating unit;
a queue creating unit, configured to, after being triggered by the control unit, take one or more messages out of the Trap message queue to form a preempting priority queue, and take one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue;
a preemption processing unit, configured to take Trap messages out of the preempting priority queue and perform the following on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
Further, the above device may also have the following feature: each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
Further, the above device may also have the following feature: the preemption processing unit judges whether the currently taken-out Trap message meets the preemption condition as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
Further, the above device may also have the following feature: the preemption processing unit judges whether a Trap message in the preempted priority queue can be preempted as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted.
Further, the above device may also have the following feature: the preemption processing unit taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first.
Further, the above device may also have the following feature: the preemption processing unit exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted.
Further, the above device may also have the following feature: the control unit judging whether the trigger condition is met comprises:
if the number of messages in the Trap message queue reaches the congestion control threshold, or the number of messages in the Trap message queue reaches the congestion control threshold and the control switch of the preemption algorithm is on, the trigger condition is met.
The present invention can be used in the following scenario: when the network environment deteriorates and network congestion occurs, in order to guarantee the real-time processing of messages and to avoid the 'starvation' of high-priority messages, preemption processing based on maintained message priorities and preemption priorities is adopted, which raises the message processing capacity of the system and effectively improves the user experience.
Brief description of the drawings
Fig. 1 is a model of adaptive Trap message processing;
Fig. 2 is the message monitoring flow in an embodiment of the present invention;
Fig. 3 is the congestion judging flow for Trap messages in an embodiment of the present invention;
Fig. 4 is the preemption algorithm processing flow for Trap messages in an embodiment of the present invention;
Fig. 5 is queue schematic diagram 1 of an application example of the present invention;
Fig. 6 is queue schematic diagram 2 of an application example of the present invention;
Fig. 7 is queue schematic diagram 3 of an application example of the present invention;
Fig. 8 is queue schematic diagram 4 of an application example of the present invention;
Fig. 9 is queue schematic diagram 5 of an application example of the present invention;
Fig. 10 is a block diagram of an adaptive Trap message processing device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with one another in any way.
In order to guarantee the real-time processing of messages in the case of network congestion, the embodiment of the present invention proposes an adaptive Trap message treatment method in which the OMC performs adaptive processing on the messages by means of priority queues, guaranteeing their real-time processing and improving the throughput and performance of system message processing. In the embodiment of the present invention, each Trap message involves the following two kinds of information:
Message priority (Priority Level): the value range is [MinPriorityLevel..MaxPriorityLevel]; when the value is a designated value, the message is processed in FIFO mode. The designated value may be MinPriorityLevel, and MinPriorityLevel may be 0.
Preemption priority: the preemption capability (Preemption Capability), whose value range is {can preempt, cannot preempt}, and the preemptable capability (Preemptable Capability), whose value range is {cannot be preempted, can be preempted}.
'Can preempt' indicates that the message is relatively important and urgent within the Trap queue and may seize the prior position of another message in the queue so as to be processed preferentially. 'Cannot preempt' indicates that the message does not need to be processed ahead of other messages and waits in the queue to be processed in the normal order.
The preemptable capability is the counterpart of the preemption capability: the preemption capability defines whether a message may go and seize the preferential processing position of another message, while the preemptable capability defines whether a message may be preempted by other messages. If it may not, the message keeps its position in the queue.
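As an illustration of these two kinds of information, the following is a minimal data-model sketch in Python; the class, field and constant names are assumptions made for this sketch rather than identifiers from the patent, and a message priority of 0 is taken as the designated value that keeps a message out of preemption handling.

```python
from dataclasses import dataclass

# Illustrative designated value: a message whose priority equals this value is
# handled by plain FIFO and takes no part in preemption (MinPriorityLevel = 0).
MIN_PRIORITY_LEVEL = 0

@dataclass
class TrapMessage:
    priority_level: int      # message priority within [MinPriorityLevel..MaxPriorityLevel]
    can_preempt: bool        # preemption capability: may seize another message's position
    can_be_preempted: bool   # preemptable capability: may lose its position to another message

    def meets_preemption_condition(self) -> bool:
        # A message may preempt only if its priority is not the designated value
        # and its preemption capability is set.
        return self.priority_level != MIN_PRIORITY_LEVEL and self.can_preempt

    def is_preemptable(self) -> bool:
        # A message may be preempted only if its priority is not the designated value
        # and its preemptable capability is set.
        return self.priority_level != MIN_PRIORITY_LEVEL and self.can_be_preempted
```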
The embodiment of the present invention provides an adaptive Trap message treatment method, comprising: when the trigger condition is met, performing the following operations:
taking one or more messages out of the Trap message queue to form a preempting priority queue, and taking one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue. Preferably, the preempted priority queue is taken from the head of the queue and the preempting priority queue is taken from the tail of the queue; other rules may also be added, for example taking messages from both ends and moving towards the middle in turn, which avoids repeatedly adjusting the same group of messages, it only being required that the preempting messages originally lie after the preempted messages. As message processing proceeds and the number of messages falls below the boundary value, adjustment is suspended; the next round again takes the messages at the head of the queue to construct the preempted priority queue.
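The construction step just described can be sketched as follows, assuming fixed-size windows taken from the head and the tail of the Trap message queue; `build_sub_queues` and its `window` parameter are hypothetical names introduced only for this illustration.

```python
def build_sub_queues(trap_queue, window=10):
    # Preempted priority queue: indices near the head of the Trap message queue
    # (these slots would be processed first and may be given up).
    preempted = list(range(min(window, len(trap_queue))))
    # Preempting priority queue: indices near the tail of the Trap message queue
    # (candidates to move forward); it must lie entirely after the preempted window.
    start = max(len(trap_queue) - window, len(preempted))
    preempting = list(range(start, len(trap_queue)))
    # Both sub-queues hold indices into the original queue, so that the later
    # "exchange of positions" is simply a swap of two queue slots.
    return preempting, preempted
```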
Taking Trap messages out of the preempting priority queue, the following is performed on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
Wherein, the trigger condition being met comprises: the number of messages in the Trap message queue reaching a congestion control threshold; or the number of messages in the Trap message queue reaching the congestion control threshold and the control switch of the preemption algorithm being on. Of course, other trigger conditions may also be set; the present invention does not limit this.
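A minimal sketch of this trigger check, assuming the congestion control threshold and the preemption switch are configuration values (the function and parameter names are illustrative):

```python
def trigger_met(trap_queue, congestion_threshold, preemption_switch_on=True):
    # Congestion is declared once the queue length reaches the threshold;
    # optionally the preemption algorithm's control switch must also be on.
    return len(trap_queue) >= congestion_threshold and preemption_switch_on
```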
Wherein, each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
Wherein, whether the currently taken-out Trap message meets the preemption condition is judged as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
Wherein, whether a Trap message in the preempted priority queue can be preempted is judged as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted. The designated priority may be 0, but may of course also be set to another value.
Wherein, taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first. Other orders may also be used; the present invention does not limit this.
Wherein, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted. If there is more than one preemptable Trap message with the lowest priority in the preempted priority queue, the one located further back may be taken first. The above exchange mode is only an example; positions may also be exchanged with other preemptable Trap messages.
In addition, the designated priority may also be left unset, in which case all messages participate in preempting and being preempted.
The adaptive processing method based on the maintained message priorities and preemption priorities described in the embodiment of the present invention is as follows.
Referring to Fig. 1, according to the processing of Trap messages and the preemption priority strategy, an adaptive Trap message processing flow is provided, comprising:
Step M101: the OMC monitors Trap messages and dispatches the detected Trap messages;
Step M102: congestion judging is performed;
Step M103: when congestion is judged to exist, the preemption algorithm is executed.
Wherein, step M101, as shown in Fig. 2, specifically comprises:
S201: the OMC starts monitoring Trap messages on the standard port 162;
S202: the OMC judges whether a Trap message is detected; if not, return to step S201, otherwise execute step S203;
S203: the Trap dispatching procedure is executed.
Wherein, step M102, as shown in Fig. 3, specifically comprises:
S301: the Trap message is inserted into the Trap message queue;
S302: judge whether the number of messages in the Trap message queue reaches the congestion control threshold; if so, execute step S303, otherwise return to step S301;
S303: execution of the preemption algorithm is started.
Fig. 4 is the flow chart of the preemption algorithm executed in step M103, comprising:
S401: the preemption algorithm starts;
S402: judge whether the control switch of the preemption algorithm is off; if so, execute step S412, otherwise execute step S403;
S403-S404: the preempted priority queue and the preempting priority queue are built;
S405: judge whether the preempting priority queue is empty; if so, go to step S412, otherwise execute step S406;
S406: take the message with the highest priority out of the preempting priority queue and judge whether its priority is 0; if so, filter this message out, keep its position in the Trap message queue unchanged and execute step S405; if its priority is non-zero, execute step S407;
S407: judge whether the preemption capability of this message is 'can preempt'; if so, execute step S408, otherwise filter this message out, keep its position in the Trap message queue unchanged and execute step S405;
S408: judge whether the preempted priority queue is empty; if so, execute step S412, otherwise execute step S409;
S409: take the message with the lowest but non-zero priority out of the preempted priority queue;
S410: judge whether the preemptable capability of this message is 'can be preempted'; if so, execute step S411, otherwise execute step S408;
S411: exchange the positions of the message currently taken out of the preempting priority queue and the message taken out of the preempted priority queue, then return to step S405;
S412: end.
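The flow of steps S401-S412 can be sketched in code as follows, reusing the illustrative `TrapMessage` and `build_sub_queues` helpers defined above (the sub-queues hold indices into the Trap message queue); this is a sketch of one possible reading of the flow chart, not the patent's reference implementation.

```python
def run_preemption_algorithm(trap_queue, window=10, switch_on=True):
    # S401-S402: do nothing if the control switch of the preemption algorithm is off.
    if not switch_on:
        return
    # S403-S404: build the preempting and preempted priority queues.
    preempting, preempted = build_sub_queues(trap_queue, window)
    # S405-S406: visit preempting candidates from highest to lowest message priority.
    preempting.sort(key=lambda i: trap_queue[i].priority_level, reverse=True)
    # S409: candidates to be preempted are considered from the lowest priority upward.
    preempted.sort(key=lambda i: trap_queue[i].priority_level)

    for i in preempting:
        msg = trap_queue[i]
        # S406-S407: skip messages with the designated priority (0) or without
        # the preemption capability; their positions stay unchanged.
        if not msg.meets_preemption_condition():
            continue
        # S408-S410: find the lowest-priority message that can still be preempted.
        victim = next((j for j in preempted if trap_queue[j].is_preemptable()), None)
        if victim is None:
            break  # S412: no preemptable message left, end the algorithm
        # S411: exchange the two messages' positions in the Trap message queue
        # and remove the preempted slot from further consideration.
        trap_queue[i], trap_queue[victim] = trap_queue[victim], trap_queue[i]
        preempted.remove(victim)
```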
Specific embodiments are given below.
Embodiment 1
The adaptive Trap message processing method in Embodiment 1 of the present invention comprises the following steps:
S501: the OMC starts monitoring Trap messages on the standard port 162;
S502: after the OMC detects a Trap message, the Trap dispatching procedure is executed;
S503: judge whether the number of messages reaches the congestion control threshold;
S504: if the number of messages reaches the congestion control threshold, execution of the preemption algorithm is started;
S505-S506: if the control switch of the preemption algorithm is on, the preempting priority queue and the preempted priority queue are built;
S507: if the preempting priority queue is empty, the algorithm falls back to the FIFO processing mode.
Embodiment 2
S601: the OMC starts monitoring Trap messages on the standard port 162;
S602: after the OMC detects a Trap message, the Trap dispatching procedure is executed;
S603: judge whether the number of messages reaches the congestion control threshold;
S604: if the number of messages reaches the congestion control threshold, execution of the preemption algorithm is started;
S605-S606: if the control switch of the preemption algorithm is on, the preempting priority queue and the preempted priority queue are built;
S607: if the preempting priority queue is not empty, the messages in the preempting priority queue are traversed;
S608-S609: if the priority of a message in the preempting priority queue is not 0 and its preemption capability is marked as 'can preempt', it may preempt messages with lower priority;
S610-S611: judge whether the preempted priority queue is empty; if it is not empty, select the message whose priority is the lowest but non-zero and which is marked as preemptable;
S612: exchange the positions of the preempting message and the preempted message.
Embodiment 3
As shown in Figs. 5-9, the preemption adjustment process is simulated with graphical example data. The following assumptions are made:
the 3 digits of each data item represent the preemption capability, the preemptable capability and the message priority respectively; for example, 105 means: can preempt other messages, cannot be preempted, message priority 5;
the first 10 and the last 10 messages of the queue are adjusted respectively; the queue subscript is taken as the original sequence number of each message, and a change of sequence number during adjustment indicates a change of position;
the adjustment process corresponds to the steps described above: the message in the preempting priority queue with the highest priority and a preemption capability of 1 is exchanged with the message in the preempted priority queue whose priority is the lowest but non-zero and whose preemptable capability is 1 (a priority of 0 means the message does not participate in the adjustment); then, in the same way, the next pair of messages satisfying the conditions is found and exchanged. When no exchangeable message satisfying the conditions remains in the preempting priority queue or in the preempted priority queue, the process stops. The process comprises:
(1) As shown in Fig. 5, the initial queues are built with 10 messages from each end. The preempting priority queue and the preempted priority queue may also contain different numbers of messages; moreover, they need not be taken from the tail and the head of the queue, as long as the preempting priority queue is located after the preempted priority queue.
In the first round, the message with the highest priority in the preempting priority queue is message No. 1, whose priority is 5 and whose preemption attribute is 1, so it meets the preemption condition; correspondingly, the message in the preempted priority queue whose priority is the lowest but non-zero and whose preemptable attribute is 1 is message No. 18. The positions of the two messages are exchanged, and the two messages are removed from the two memory queues respectively. After the first round, the messages remaining in the preempting priority queue are {2, 3, 4, 5, 6, 7, 8, 9, 10}.
(2) In the second round, as shown in Fig. 6, the messages with the highest priority in the preempting priority queue are Nos. 7 and 9, both with priority 5, but their preemption capability is 0, so they do not participate in preemption and are removed from the preempting memory queue; the next qualifying message with the highest priority is message No. 8, which is exchanged with message No. 16 in the preempted priority queue. After the second round, the messages remaining in the preempting priority queue are {2, 3, 4, 5, 6, 10}.
(3) Similarly, Fig. 7 and Fig. 8 show the third and fourth rounds of exchange; after the third round, the messages remaining in the preempting priority queue are {3, 4, 10}, and after the fourth round they are {3, 4}.
(4) The messages {3, 4} remaining after the fourth round both have priority 0, so they are all removed from the preempting priority queue; the current preempting priority queue is then empty and this adjustment process ends. Fig. 9 shows the final adjustment result.
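As a small check of the 3-digit encoding used in this example, the hypothetical `TrapMessage` class from the sketch above can decode '105' as follows (the `decode` helper is an assumption made only for this illustration):

```python
def decode(code: str) -> TrapMessage:
    # First digit: preemption capability, second digit: preemptable capability,
    # third digit: message priority; e.g. "105" = can preempt, cannot be preempted, priority 5.
    return TrapMessage(priority_level=int(code[2]),
                       can_preempt=code[0] == "1",
                       can_be_preempted=code[1] == "1")

msg = decode("105")
print(msg.meets_preemption_condition())  # True: non-zero priority and can preempt
print(msg.is_preemptable())              # False: its preemptable capability is 0
```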
Fig. 10 shows an adaptive Trap message processing device provided by an embodiment of the present invention, comprising:
a control unit, configured to judge whether the trigger condition is met and, if it is met, to trigger the queue creating unit;
a queue creating unit, configured to, after being triggered by the control unit, take one or more messages out of the Trap message queue to form a preempting priority queue, and take one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue;
a preemption processing unit, configured to take Trap messages out of the preempting priority queue and perform the following on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
Wherein, each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
Wherein, the preemption processing unit judges whether the currently taken-out Trap message meets the preemption condition as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
Wherein, the preemption processing unit judges whether a Trap message in the preempted priority queue can be preempted as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted.
Wherein, the preemption processing unit taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first.
Wherein, the preemption processing unit exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted.
Wherein, the control unit judging whether the trigger condition is met comprises:
if the number of messages in the Trap message queue reaches the congestion control threshold, or the number of messages in the Trap message queue reaches the congestion control threshold and the control switch of the preemption algorithm is on, the trigger condition is met.
In the embodiment of the present invention, the OMC replaces the FIFO message processing mode with a priority-queue-based mode and performs adaptive processing on the messages, guaranteeing their real-time processing and improving the throughput and performance of system message processing.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method may be implemented by a program instructing the related hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disc. Alternatively, all or part of the steps of the above embodiments may also be implemented with one or more integrated circuits. Correspondingly, each module/unit in the above embodiments may be implemented in the form of hardware or in the form of a software functional module. The present invention is not limited to any particular combination of hardware and software.

Claims (14)

1. An adaptive Trap message treatment method, characterized by comprising: when a trigger condition is met, performing the following operations:
taking one or more messages out of the Trap message queue to form a preempting priority queue, and taking one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue;
taking Trap messages out of the preempting priority queue, and performing the following on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
2. The method according to claim 1, characterized in that each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
3. The method according to claim 2, characterized in that whether the currently taken-out Trap message meets the preemption condition is judged as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
4. The method according to claim 2, characterized in that whether a Trap message in the preempted priority queue can be preempted is judged as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted.
5. The method according to claim 2, characterized in that taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first.
6. The method according to claim 2, characterized in that exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted.
7. The method according to any one of claims 1 to 6, characterized in that the trigger condition being met comprises:
the number of messages in the Trap message queue reaching a congestion control threshold; or the number of messages in the Trap message queue reaching the congestion control threshold and the control switch of the preemption algorithm being on.
8. An adaptive Trap message processing device, characterized by comprising:
a control unit, configured to judge whether the trigger condition is met and, if it is met, to trigger the queue creating unit;
a queue creating unit, configured to, after being triggered by the control unit, take one or more messages out of the Trap message queue to form a preempting priority queue, and take one or more messages out to form a preempted priority queue, the position of the preempted priority queue being prior to that of the preempting priority queue;
a preemption processing unit, configured to take Trap messages out of the preempting priority queue and perform the following on each Trap message:
if the currently taken-out Trap message meets the preemption condition and there is a preemptable Trap message in the preempted priority queue, exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of that preemptable Trap message, and removing the preemptable Trap message from the preempted priority queue; otherwise, keeping the position of the currently taken-out Trap message in the Trap message queue unchanged.
9. The device according to claim 8, characterized in that each Trap message corresponds to a message priority and a preemption priority, the message priority indicating the priority of the Trap message, and the preemption priority comprising a preemption capability, which indicates whether the Trap message can preempt other messages, and a preemptable capability, which indicates whether the Trap message can be preempted by other messages.
10. The device according to claim 9, characterized in that the preemption processing unit judges whether the currently taken-out Trap message meets the preemption condition as follows:
if the message priority of the currently taken-out Trap message is not the designated priority and the preemption capability in its preemption priority is marked as 'can preempt', the Trap message meets the preemption condition.
11. The device according to claim 9, characterized in that the preemption processing unit judges whether a Trap message in the preempted priority queue can be preempted as follows:
for a Trap message in the preempted priority queue, if its message priority is not the designated priority and the preemptable capability in its preemption priority is marked as 'can be preempted', that Trap message can be preempted.
12. The device according to claim 9, characterized in that the preemption processing unit taking Trap messages out of the preempting priority queue comprises:
taking Trap messages out in turn according to the message priority of each Trap message in the preempting priority queue, the Trap message with the highest priority being taken out first.
13. The device according to claim 9, characterized in that the preemption processing unit exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the preemptable Trap message in the preempted priority queue refers to:
exchanging the position of the currently taken-out Trap message in the Trap message queue with the position of the Trap message that has the lowest priority in the preempted priority queue and can be preempted.
14. The device according to any one of claims 8 to 13, characterized in that the control unit judging whether the trigger condition is met comprises:
if the number of messages in the Trap message queue reaches the congestion control threshold, or the number of messages in the Trap message queue reaches the congestion control threshold and the control switch of the preemption algorithm is on, the trigger condition is met.
CN201210207991.1A 2012-06-21 2012-06-21 Adaptive Trap message treatment method and device Pending CN103517342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210207991.1A CN103517342A (en) 2012-06-21 2012-06-21 Adaptive Trap message treatment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210207991.1A CN103517342A (en) 2012-06-21 2012-06-21 Adaptive Trap message treatment method and device

Publications (1)

Publication Number Publication Date
CN103517342A true CN103517342A (en) 2014-01-15

Family

ID=49899173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210207991.1A Pending CN103517342A (en) 2012-06-21 2012-06-21 Adaptive Trap message treatment method and device

Country Status (1)

Country Link
CN (1) CN103517342A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111726822A (en) * 2019-03-21 2020-09-29 大唐移动通信设备有限公司 Trap message processing method and data synchronization management device
CN113986484A (en) * 2021-10-12 2022-01-28 丰辰网络科技(无锡)有限公司 Task processing global scheduling method of social software
CN114640638A (en) * 2020-12-16 2022-06-17 华为技术有限公司 Message transmission method and sending terminal equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1981221A1 (en) * 2006-09-29 2008-10-15 Huawei Technologies Co., Ltd. A service restoring method and device
CN101582786A (en) * 2009-06-17 2009-11-18 中兴通讯股份有限公司 Instant handling method and device of instant messages
CN102223668A (en) * 2010-04-15 2011-10-19 中兴通讯股份有限公司 Resource seizing method for long term evolution (LTE) system during service congestion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1981221A1 (en) * 2006-09-29 2008-10-15 Huawei Technologies Co., Ltd. A service restoring method and device
CN101582786A (en) * 2009-06-17 2009-11-18 中兴通讯股份有限公司 Instant handling method and device of instant messages
CN102223668A (en) * 2010-04-15 2011-10-19 中兴通讯股份有限公司 Resource seizing method for long term evolution (LTE) system during service congestion

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111726822A (en) * 2019-03-21 2020-09-29 大唐移动通信设备有限公司 Trap message processing method and data synchronization management device
CN114640638A (en) * 2020-12-16 2022-06-17 华为技术有限公司 Message transmission method and sending terminal equipment
CN114640638B (en) * 2020-12-16 2024-05-14 华为技术有限公司 Message transmission method and transmitting terminal equipment
CN113986484A (en) * 2021-10-12 2022-01-28 丰辰网络科技(无锡)有限公司 Task processing global scheduling method of social software
CN113986484B (en) * 2021-10-12 2023-10-27 丰辰网络科技(无锡)有限公司 Task processing global scheduling method of social software


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140115

WD01 Invention patent application deemed withdrawn after publication