CN100334837C - A method for assigning path bandwidth in bearing control layer - Google Patents

A method for assigning path bandwidth in bearing control layer

Info

Publication number
CN100334837C
CN100334837C CNB2003101230996A CN200310123099A
Authority
CN
China
Prior art keywords
bandwidth
path
added overhead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2003101230996A
Other languages
Chinese (zh)
Other versions
CN1633081A (en)
Inventor
陈悦鹏
范灵源
吴登超
许波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB2003101230996A
Publication of CN1633081A
Application granted
Publication of CN100334837C
Anticipated expiration
Legal status: Expired - Fee Related (current)

Abstract

The present invention discloses a method for allocating path bandwidth in a bearer control layer. The method comprises: a. after each hop's bearer network resource manager receives a connection-resource request, it selects paths; a bandwidth value is derived from the maximum path tag stack depth (MTD) of the bearer network, the total header length of the user data, the user-requested bandwidth carried in the connection-resource request, and the maximum peak packet length (MPPL) of the user service; this bandwidth value is allocated to each selected hop path; b. after all path connections are established, each hop's bearer network resource manager derives a new bandwidth value from the relative path tag stack depth (RTD) of its hop path, the total header length of the user data, the user-requested bandwidth and the MPPL, and replaces the bandwidth previously allocated to each hop path with it. With the method of the present invention, bandwidth is allocated in a simple way, so unnecessary waste of resources is reduced and cost is lowered.

Description

A method for allocating path bandwidth in a bearer control layer
Technical field
The present invention relates to differentiated services (Diff-Serv, Differentiated Services) with an independent bearer control layer, and in particular to a method for allocating path bandwidth in the bearer control layer of such a differentiated-services network.
Background technology
With the continuous growth of the Internet, various quality of service (QoS, Quality of Service) technologies have emerged, and the Internet Engineering Task Force (IETF) has proposed many service models and mechanisms to meet QoS demands. The approach currently favored by the industry is to use the Integrated Services model (Int-Serv, Integrated Service) at the access and edge of the network and Differentiated Services (Diff-Serv, Differentiated Service) in the network core. Differentiated Services guarantees QoS only by setting priority levels; although this gives high link efficiency, its concrete effect is hard to predict. To further improve the QoS technology, the industry has therefore begun to introduce an independent bearer control layer for the backbone Differentiated Services network and to establish a dedicated QoS signaling mechanism for it. Such a network is called Differentiated Services with an independent bearer control layer.
Fig. 1 is a diagram of differentiated services with an independent bearer control layer. As shown in Fig. 1, in this model the bearer control layer 102 lies between the bearer network 103 and the service control layer 101. The Call Agent (CA, Call Agent) in the service control layer 101 is a service server, such as a softswitch, a video-on-demand (VOD) control server or a gatekeeper (GK, Gate Keeper); the CA receives call requests from user equipment and completes the call request and exchange on behalf of the user equipment. In the bearer control layer 102, the bearer network resource managers are configured with rules and the network topology and allocate resources for the clients' service bandwidth requests. Only three bearer network resource managers are drawn in the figure, namely bearer network resource managers 1, 2 and 3, but their number is not fixed; the managers pass to one another, by signaling, the clients' service bandwidth requests and results as well as the routed path information allocated for the requested service. In the bearer network 103, each bearer network resource manager manages a specific bearer network region, called the management domain of that manager: in the figure, management domain 107 of manager 1, management domain 108 of manager 2 and management domain 109 of manager 3. Management domain 107 comprises an edge router (ER, Edge Router) 110, a core router 111 and a border router (BR, Border Router) 112; the ER admits the call service flow of the user equipment into the bearer network or leads it out of the bearer network. Management domains 108 and 109 likewise comprise core routers and border routers.
In differentiated services with an independent bearer control layer, the bearer network resource manager applies for a communication path for the user's service connection and allocates bandwidth for the requested path. Many differentiated-services architectures with an independent bearer control layer include a bandwidth-allocation method, for example the bandwidth-broker model of the QoS backbone experimental network (QBone, Quality-of-Service backbone). Fig. 2 is a diagram of the QBone bandwidth-broker model; as shown in Fig. 2, bandwidth brokers 1, 2 and 3 implement exactly the function of bearer network resource managers. In this model, a bandwidth broker is responsible for handling bandwidth requests from user hosts, service servers or network maintenance staff, and derives the bandwidth to allocate with traffic-engineering statistical algorithms from the bandwidth request and the large amount of information recorded in the broker, including various configuration information, the topology of the physical network, router configuration and policy information, current resource reservation information, network occupancy status and other static or dynamic information.
The drawback of the QBone bandwidth-broker bandwidth-allocation scheme is that the computation involves a large number of parameters, the procedure is complex and the computational load is heavy, so it consumes a great deal of processor and other device resources and the cost is therefore relatively high.
In addition, there is the Rich QoS scheme proposed by NEC Corporation. Fig. 3 is a diagram of the Rich QoS scheme; as shown in Fig. 3, a QoS server 301 is the key component, accompanied by a policy server 302, a directory server 303 and a network-management monitoring server 304. In this scheme bandwidth is allocated as follows: the network-management monitoring server 304 collects the raw network topology data from the routers of the bearer network and stores the collected topology data in the directory server 303; when bandwidth has to be allocated, the policy server 302 reads the relevant data from the directory server 303 and computes the bandwidth, and the QoS server 301 reads the computed result from the policy server 302 and allocates the bandwidth. The bandwidth computation uses a traffic-engineering statistical algorithm based on Multiprotocol Label Switching (MPLS); this method derives the bandwidth to be allocated from multiple parameters such as the length of the user data packets and the round-trip time of the data.
The drawbacks of path-bandwidth allocation in the Rich QoS scheme are: the network-management traffic between the bearer control layer and the bearer network is large, the bearer control layer carries a heavy bandwidth-computation load, and too many servers are involved in the hardware, so a great deal of device resources is consumed; moreover, the bandwidth computation requires many parameters and a complex procedure with a heavy computational load, again consuming processor and other device resources, so the cost is very high; in addition, measuring the round-trip time takes time, so the real-time performance of this scheme is poor.
Summary of the invention
In view of this, the main object of the present invention is to provide a method for allocating path bandwidth in a bearer control layer, so as to simplify the bandwidth-allocation procedure, reduce unnecessary waste of resources and lower the cost.
To achieve this object, the technical solution of the present invention is realized as follows:
A method for allocating path bandwidth in a bearer control layer, characterized in that the method comprises:
a. during a resource request in the bearer control layer, after each hop's bearer network resource manager receives a connection-resource request, it selects a path according to the request; it calculates the maximum added overhead incurred when the user data traverses this hop path from the maximum path tag stack depth (MTD) of the bearer network and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this maximum added overhead from that overhead, the user-requested bandwidth carried in the connection-resource request and the maximum peak packet length (MPPL) of the user service; and it allocates to each selected hop path, as the bandwidth value, the sum of the bandwidth occupied by the maximum added overhead and the user-requested bandwidth;
b. after the source bearer network resource manager of the resource request has established connections over all the paths selected by each hop's bearer network resource manager, each hop's bearer network resource manager calculates the actual added overhead incurred when the user data traverses its hop path from the relative path tag stack depth (RTD) of that hop path and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this actual added overhead from that overhead, the user-requested bandwidth and the MPPL; and it replaces the bandwidth previously allocated to each hop path with the sum, as the bandwidth value, of the bandwidth occupied by the actual added overhead and the user-requested bandwidth.
In step a, the maximum added overhead is calculated as:
MTD × 4 × 2 + the total header length of the original user service data packet.
In step a, the bandwidth occupied by the maximum added overhead is calculated as:
the user-requested bandwidth × the maximum added overhead / MPPL.
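For illustration only, the reservation-time calculation of step a can be sketched as follows (a minimal Python sketch; the function names, and the assumption that lengths are in bytes and bandwidths in bits per second, are ours and not part of the claimed method):

    def max_added_overhead(mtd, total_header_len):
        # Maximum added overhead in bytes: up to MTD labels of 4 bytes each,
        # counted twice for the worst (fragmented) case, plus the original headers.
        return mtd * 4 * 2 + total_header_len

    def reserved_bandwidth(rb_bps, mtd, total_header_len, mppl):
        # The bandwidth occupied by the maximum added overhead, plus the
        # user-requested bandwidth RB, gives the value reserved on each hop path.
        delta_rb = rb_bps * max_added_overhead(mtd, total_header_len) / mppl
        return rb_bps + delta_rb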
After step b, the method further comprises the following step:
c. when any of the hop bearer network resource managers receives a connection-resource modification request, it calculates the actual added overhead incurred when the user data traverses its hop path from the RTD of that hop path and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this actual added overhead from that overhead, the user-requested bandwidth carried in the connection-resource modification request and the MPPL; and it replaces the bandwidth previously allocated to each hop path with the sum, as the bandwidth value, of the bandwidth occupied by the actual added overhead and the user-requested bandwidth.
The actual added overhead incurred when the user data traverses a hop path is calculated as follows: determine whether the length of the largest user-service packet exceeds the path maximum transfer unit (PMTU) of the current hop path; if it does, take RTD of the current hop path × 4 × 2 + the total header length of the original user service data packet as the actual added overhead; otherwise take RTD of the current hop path × 4 as the actual added overhead.
The length of the largest user-service packet is: the maximum peak packet length + 4 × the RTD of the hop path.
The bandwidth occupied by the actual added overhead when the user data traverses a hop path is calculated as:
the user-requested bandwidth × the actual added overhead / the maximum peak packet length.
The total header length of the original user service data packet is the sum of the header lengths of every layer traversed by the user data packet; these layer headers comprise the link-layer header and the IP header.
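Likewise for illustration, the adjustment of steps b and c can be sketched as follows, under the same naming assumptions as the previous sketch (the PMTU test follows the rule stated above):

    def actual_added_overhead(rtd, total_header_len, mppl, pmtu):
        # Largest user-service packet on this hop: MPPL plus the label stack.
        if mppl + 4 * rtd > pmtu:
            # The packet must be fragmented: the label stack appears twice and
            # a second set of link-layer and IP headers is added.
            return rtd * 4 * 2 + total_header_len
        # No fragmentation: only the label stack of this hop is added.
        return rtd * 4

    def adjusted_bandwidth(rb_bps, rtd, total_header_len, mppl, pmtu):
        # The previously reserved value is replaced by RB plus the bandwidth
        # occupied by the actual added overhead of this hop path.
        overhead = actual_added_overhead(rtd, total_header_len, mppl, pmtu)
        return rb_bps + rb_bps * overhead / mppl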
Because the method for the invention is utilized resource network carrier independent allocation path bandwidth, thereby saved device resource, and method of the present invention only just can more accurately be obtained with a spot of parameters such as message length and bandwidth request and is required to be the bandwidth of respectively jumping path allocation, greatly reduce the complexity of obtaining bandwidth, workload is little, thereby save a large amount of processor resources, greatly reduce cost; In addition, the speed ratio of the method for the invention is very fast, does not also spend the two-way time of measurement data, so real-time is fine.
Description of drawings
Fig. 1 is a diagram of differentiated services with an independent bearer control layer;
Fig. 2 is a diagram of the QBone bandwidth-broker model;
Fig. 3 is a diagram of the Rich QoS scheme;
Fig. 4 is a flowchart of completing a resource request in the bearer network;
Fig. 5 shows the common packet format of the original user service data packet;
Fig. 6 shows the user service data packet format when (MPPL + 4 × RTD) <= PMTU;
Fig. 7 shows the user service data packet format when (MPPL + 4 × RTD) > PMTU.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
When the bearer control layer selects a path for a user's service connection, it must allocate path bandwidth according to the user's resource request. The method of the present invention essentially has the bearer network resource manager allocate bandwidth to the chosen paths according to the user's packet lengths, the requested bandwidth and the path information.
In this embodiment a path is a label switched path (LSP, Label Switched Path). For the service connection requested by the user, each bearer network resource manager in the bearer control layer selects LSPs within the management domain under its jurisdiction, calculates the bandwidth resources that need to be allocated on each hop LSP, and allocates bandwidth to each hop LSP according to the result.
Fig. 4 is a flowchart of completing a resource request in the bearer network. As shown in Fig. 4, the process of requesting connection resources for an ordinary service connection, or of modifying and adjusting resources, comprises the following steps:
a. the CA sends a connection-resource request to the source bearer network resource manager, i.e. bearer network resource manager 1; the request carries the bandwidth RB applied for by the user. After receiving the connection-resource request, the source bearer network resource manager selects LSPs, reserves bandwidth on every selected LSP, and then sends the connection-resource request to the next-hop bearer network resource manager;
b. after the current bearer network resource manager receives the connection-resource request, it selects LSPs and reserves bandwidth on every selected hop LSP. If the current manager is the destination bearer network resource manager of the resource request, i.e. bearer network resource manager n, it returns a connection-resource response to the previous-hop manager and step c is executed; otherwise it sends the connection-resource request to the next-hop manager and step b is repeated;
c. the current bearer network resource manager receives the connection-resource response. If it is the source bearer network resource manager of the resource request, it establishes all the LSP connections of the service according to the LSP information in the resource response and returns a connection-resource response to the CA; otherwise it returns the connection-resource response to the previous-hop manager and step c is repeated.
After the source bearer network resource manager has established all the LSP connections of the service, the previously reserved bandwidth needs to be modified and adjusted according to the information of all the LSPs; later, when the source bearer network resource manager receives a request from the CA to modify the previously reserved bandwidth, the previously reserved bandwidth likewise needs to be adjusted according to the modification request. These two modification processes are identical, with the following steps (a sketch of the overall flow is given after the steps):
d. the source bearer network resource manager modifies its previously reserved bandwidth according to the user-requested bandwidth carried in the connection-resource request, or in the connection-resource modification request, and sends a connection-resource modification request to the next-hop bearer network resource manager;
e. after the current bearer network resource manager receives the connection-resource modification request, it modifies its previously reserved bandwidth. If it is the destination bearer network resource manager of the resource request, i.e. bearer network resource manager n, it returns a connection-resource modification response to the previous-hop manager and step f is executed; otherwise it sends the connection-resource modification request to the next-hop manager and step e is repeated;
f. the current bearer network resource manager receives the connection-resource modification response. If it is the source bearer network resource manager of the resource request, it returns a connection-resource modification response to the CA; otherwise it returns the connection-resource modification response to the previous-hop manager and step f is repeated.
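The two passes above can be illustrated with a minimal, purely sequential Python sketch. The per-hop data structure, the function names and the idea of driving every hop from one loop are assumptions made only for illustration; the bandwidth expressions anticipate the overhead formulas derived in the remainder of this section:

    from dataclasses import dataclass

    @dataclass
    class Hop:
        # Hypothetical per-hop state kept by one bearer network resource manager.
        rtd: int              # relative path tag stack depth of this hop LSP
        pmtu: int             # path maximum transfer unit of this hop LSP, bytes
        reserved_bps: float = 0.0

    def request_then_adjust(hops, rb_bps, mtd, header_len, mppl):
        # Steps a-c: the request travels from the source manager towards the
        # destination manager; each hop reserves RB plus the worst-case
        # (MTD-based) overhead, since the final LSP set is not yet known.
        for hop in hops:
            max_overhead = mtd * 4 * 2 + header_len
            hop.reserved_bps = rb_bps + rb_bps * max_overhead / mppl
        # Steps d-f: once all LSP connections are established, each hop
        # replaces its reservation with RB plus the actual (RTD-based)
        # overhead of its own LSP.
        for hop in hops:
            if mppl + 4 * hop.rtd > hop.pmtu:
                overhead = hop.rtd * 4 * 2 + header_len
            else:
                overhead = hop.rtd * 4
            hop.reserved_bps = rb_bps + rb_bps * overhead / mppl
        return [hop.reserved_bps for hop in hops]

For example, request_then_adjust([Hop(rtd=1, pmtu=1500), Hop(rtd=2, pmtu=1500)], 64000, 5, 38, 200) first reserves the MTD-based value on both hops and then shrinks each reservation to its RTD-based value.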
Throughout the process of requesting or modifying resources, the bearer network resource managers allocate bandwidth on every hop LSP of the user's service connection; the method of the present invention is exactly how to calculate the bandwidth needed on each hop LSP and to allocate bandwidth accordingly:
This embodiment takes the transmission of IP packets as an example. First, what determines the bandwidth required on each hop LSP is explained. Fig. 5 shows the common format of the original user service data packet; as shown in Fig. 5, the packet comprises a link-layer header 501, an IP header 502 and user service payload data 503. The link-layer header 501 is the header of the link layer, the IP header 502 is the header added when the packet passes through the IP protocol layer, and the user service payload data 503 is the user's service data. The bandwidth RB requested by the user is determined by the bandwidth occupied by this original user service data packet.
When the user service data is transmitted over LSPs in the bearer network, its packet format changes accordingly. Fig. 6 shows the user service data packet format in this case; as shown in Fig. 6, the user data packet comprises a link-layer header 501, an LSP label stack 601, an IP header 502 and user service payload data 503. In the bearer network every hop LSP has its own label; each hop LSP stores its own label and the labels of the preceding hop LSPs in its LSP label stack 601. The number of labels stored is expressed by the relative path tag stack depth (RTD, Relative path Tag stack Depth) of the LSP label stack 601, i.e. the number of LSP hops traversed from the initial CN for each hop LSP in the whole LSP set. For example, the RTD of the first hop LSP is 1, that of the second hop LSP is 2, that of the third hop LSP is 3, and so on. In differentiated services with an independent bearer control layer, the bearer network resource manager also has a specification attribute, the maximum path tag stack depth (MTD, Max path Tag stack Depth); MTD is the maximum number of LSP hops that a service connection is allowed to traverse in the whole LSP set, and its value can be defined according to the scale of the network.
The LSP label stack 601 is added overhead that also occupies bandwidth, so the bandwidth occupied by the LSP label stack 601 must be included when bandwidth is allocated. The number of bytes occupied by this added overhead is the length of the LSP label stack 601, which equals RTD × the number of bytes per label; since each label in the label stack of a user data packet is 4 bytes long, the length of the LSP label stack is RTD × 4.
Because bandwidth depends not only on the packet length but also on the transmission frequency of the packets, and the length of the transmitted packets varies all the time, the maximum peak packet length (MPPL, Max Peak Packet Length) is used to represent the packet that occupies the most bandwidth in the user's service connection; MPPL is the maximum length of a single data packet of the user's service connection under peak-bandwidth conditions. In addition, the largest data packet allowed to pass over one hop LSP is its maximum transfer unit (MTU, Max Transfer Unit), and among all the LSPs that the service connection may traverse between CN and CN, the smallest MTU is the path maximum transfer unit (PMTU, Path Max Transfer Unit).
Under peak conditions the length of the largest user-service packet is the original packet length plus the added overhead length, i.e. (MPPL + 4 × RTD). When (MPPL + 4 × RTD) <= PMTU, as shown in Fig. 6, the added overhead of the packet compared with the original user packet of Fig. 5 is just the LSP label stack 601; the bandwidth occupied by this added overhead must be included when the bandwidth is derived, and the added overhead is the length of the LSP label stack 601, i.e. RTD × 4.
When (MPPL + 4 × RTD) > PMTU, the current LSP does not allow the user-service packet to pass, so the packet must be fragmented: the user service payload in the packet is put into two packets, each of which can pass over the current LSP. As shown in Fig. 7, the original user packet is divided into packet 701 and packet 702. Packet 701 comprises the link-layer header 501, the LSP label stack 601, the IP header 502 and the first part 703 of the user service payload 503; packet 702 comprises the link-layer header 501, the LSP label stack 601, the IP header 502 and the remaining part 704 of the user service payload 503. Compared with the original user packet of Fig. 5, the LSP label stack 601 in packet 701 is added overhead 1, and the link-layer header 501, LSP label stack 601 and IP header 502 in packet 702 are added overhead 2; both parts must be taken into account when the bandwidth is derived. The added overhead of the whole packet is therefore: added overhead 1 + added overhead 2 = LSP label stack length × 2 + (link-layer header length + IP header length), where the LSP label stack length is RTD × the number of bytes per label, i.e. RTD × 4.
As described above, the packets transmitted in the bearer network carry more overhead than the original user packets, and this added overhead occupies part of the bandwidth; the bandwidth to be allocated for the current hop LSP is therefore obtained with formula (1):
bandwidth allocated for the current hop LSP = RB + ΔRB    (1)
In formula (1) the unit of bandwidth is bits per second (bps), RB is the user-requested bandwidth and ΔRB is the bandwidth occupied by the added overhead. Because the added overhead differs between situations, the value of ΔRB, and hence the bandwidth of the current LSP, also differs; the cases are explained below.
When bandwidth is reserved for every hop LSP in steps a and b above, the complete set of LSP connections of the service cannot yet be determined; to ensure that the bandwidth reserved for the current hop LSP is sufficient, the reserved bandwidth must at this point be derived from the maximum added overhead, i.e. the sum of added overhead 1 and added overhead 2 above, with the label stack depth of the current hop LSP taken at its maximum, MTD, as in formula (2):
ΔRB = RB × maximum added overhead / MPPL    (2)
In formula (2) the added overhead is the number of bytes occupied by the maximum added overhead, i.e. 4 × MTD × 2 + (link-layer header length + IP header length).
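A worked instance of formulas (1) and (2), using purely illustrative values (the header lengths, MTD, MPPL and RB below are assumptions, not values taken from the invention):

    # Assumed values: 18-byte link-layer header, 20-byte IP header,
    # MTD = 5, MPPL = 1500 bytes, requested bandwidth RB = 2,000,000 bps.
    total_header_len = 18 + 20                       # 38 bytes
    max_overhead = 4 * 5 * 2 + total_header_len      # 78 bytes
    delta_rb = 2_000_000 * max_overhead / 1500       # 104000.0 bps, formula (2)
    reserved = 2_000_000 + delta_rb                  # 2104000.0 bps, formula (1)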
After the whole LSP set has been established successfully, each hop's bearer network resource manager knows the RTD of every hop LSP, so a first resource modification can be made to the whole LSP set previously applied for, i.e. the bandwidth previously reserved for each hop LSP is modified and adjusted; alternatively, the bandwidth previously reserved for the LSPs may have to be adjusted for other reasons, for example when a bearer network resource manager receives a connection-resource modification request from the CA, in which case the original bandwidth must likewise be modified and adjusted. At this point, in order to derive the allocated bandwidth more accurately, the bandwidth is calculated and allocated from the accurate added overhead of the current hop LSP; two cases arise:
If (MPPL + 4 × RTD) <= PMTU, then:
ΔRB = RB × accurate added overhead / MPPL    (3)
As shown in Fig. 6, the added overhead in formula (3) is the label stack length of the current hop LSP, i.e. 4 × RTD.
If (MPPL + 4 × RTD) > PMTU, then:
ΔRB = RB × accurate added overhead / MPPL    (4)
As shown in Fig. 7, the added overhead in formula (4) is: the number of bytes occupied by the label stack of the current hop LSP × 2 + (link-layer header length + IP header length), i.e. (4 × RTD × 2 + (link-layer header length + IP header length)).
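Continuing the same illustrative numbers for the adjustment after the LSPs are established (RTD and PMTU are likewise assumed values chosen only for this example):

    # Assumed values, as in the previous sketch: 38 bytes of link-layer plus
    # IP headers, MPPL = 1500 bytes, RB = 2,000,000 bps; here RTD = 3 and
    # PMTU = 1500 bytes, so MPPL + 4 * RTD = 1512 > PMTU and the packet
    # fragments, which means formula (4) applies.
    actual_overhead = 4 * 3 * 2 + 38                 # 62 bytes
    delta_rb = 2_000_000 * actual_overhead / 1500    # about 82667 bps
    allocated = 2_000_000 + delta_rb                 # about 2082667 bps
    # Had the PMTU been 1600 bytes, formula (3) would apply instead:
    # actual_overhead = 4 * 3 = 12 bytes and delta_rb = 16000 bps.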
In this embodiment the user's net service data is at the second layer, i.e. the IP layer; if the user's net service data is carried within third-layer and higher-layer protocols, the IP header length above should be replaced with: the IP header length + the sum of the headers of the third and all higher layers.
In general, the bandwidth is derived and allocated with the method described in the embodiment above. However, the method of the present invention may also derive the bandwidth occupied by the added overhead with formula (2) only, calculate the bandwidth to allocate from it, and make no subsequent modification or adjustment of the allocated bandwidth. Although this implementation is simpler and its computational load is small, its precision is not high and it easily wastes bandwidth resources; it is suitable for services with strict timing requirements and low bandwidth requirements, but is generally not adopted.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (9)

1. A method for allocating path bandwidth in a bearer control layer, characterized in that the method comprises:
a. during a resource request in the bearer control layer, after each hop's bearer network resource manager receives a connection-resource request, it selects a path according to the request; it calculates the maximum added overhead incurred when the user data traverses this hop path from the maximum path tag stack depth (MTD) of the bearer network and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this maximum added overhead from that overhead, the user-requested bandwidth carried in the connection-resource request and the maximum peak packet length (MPPL) of the user service; and it allocates to each selected hop path, as the bandwidth value, the sum of the bandwidth occupied by the maximum added overhead and the user-requested bandwidth;
b. after the source bearer network resource manager of the resource request has established connections over all the paths selected by each hop's bearer network resource manager, each hop's bearer network resource manager calculates the actual added overhead incurred when the user data traverses its hop path from the relative path tag stack depth (RTD) of that hop path and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this actual added overhead from that overhead, the user-requested bandwidth and the MPPL; and it replaces the bandwidth previously allocated to each hop path with the sum, as the bandwidth value, of the bandwidth occupied by the actual added overhead and the user-requested bandwidth.
2. The method according to claim 1, characterized in that, in step a, the maximum added overhead is calculated as:
MTD × 4 × 2 + the total header length of the original user service data packet.
3. The method according to claim 1, characterized in that, in step a, the bandwidth occupied by the maximum added overhead is calculated as:
the user-requested bandwidth × the maximum added overhead / MPPL.
4. The method according to claim 1, characterized in that, after step b, the method further comprises the following step:
c. when any of said hop bearer network resource managers receives a connection-resource modification request, it calculates the actual added overhead incurred when the user data traverses its hop path from the RTD of that hop path and the total header length of the original user service data packet; it then calculates the bandwidth occupied by this actual added overhead from that overhead, the user-requested bandwidth carried in the connection-resource modification request and the MPPL; and it replaces the bandwidth previously allocated to each hop path with the sum, as the bandwidth value, of the bandwidth occupied by the actual added overhead and the user-requested bandwidth.
5. The method according to claim 1 or 4, characterized in that the actual added overhead incurred when the user data traverses a hop path is calculated as follows: determine whether the length of the largest user-service packet exceeds the path maximum transfer unit (PMTU) of the current hop path; if it does, take RTD of the current hop path × 4 × 2 + the total header length of the original user service data packet as said actual added overhead; otherwise take RTD of the current hop path × 4 as said actual added overhead.
6. The method according to claim 5, characterized in that the length of the largest user-service packet is: the maximum peak packet length + 4 × the RTD of the hop path.
7. The method according to claim 1 or 4, characterized in that the bandwidth occupied by the actual added overhead when the user data traverses a hop path is calculated as:
the user-requested bandwidth × the actual added overhead / the maximum peak packet length.
8. The method according to claim 1, characterized in that the total header length of the original user service data packet is the sum of the header lengths of every layer traversed by the user data packet.
9. The method according to claim 8, characterized in that each of said layer headers comprises a link-layer header and an IP header.
CNB2003101230996A 2003-12-24 2003-12-24 A method for assigning path bandwidth in bearing control layer Expired - Fee Related CN100334837C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2003101230996A CN100334837C (en) 2003-12-24 2003-12-24 A method for assigning path bandwidth in bearing control layer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2003101230996A CN100334837C (en) 2003-12-24 2003-12-24 A method for assigning path bandwidth in bearing control layer

Publications (2)

Publication Number Publication Date
CN1633081A CN1633081A (en) 2005-06-29
CN100334837C true CN100334837C (en) 2007-08-29

Family

ID=34844737

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101230996A Expired - Fee Related CN100334837C (en) 2003-12-24 2003-12-24 A method for assigning path bandwidth in bearing control layer

Country Status (1)

Country Link
CN (1) CN100334837C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468408A (en) * 2013-09-22 2015-03-25 中国电信股份有限公司 Method for adjusting dynamically service bandwidth and control center server

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100596235C (en) * 2006-12-15 2010-03-24 华为技术有限公司 Method and system for scheduling of resource based on wireless system
CN101414956B (en) * 2007-10-15 2011-08-03 华为技术有限公司 Method, system and apparatus for bandwidth request
CN108462596B (en) * 2017-02-21 2021-02-23 华为技术有限公司 SLA decomposition method, equipment and system
CN112350935B (en) * 2019-08-08 2023-03-24 中兴通讯股份有限公司 Path calculation method and device for path with stack depth constraint

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10147748A1 (en) * 2001-09-27 2003-04-17 Siemens Ag Method and device for adapting label-switched paths in packet networks
WO2003065647A2 (en) * 2002-01-30 2003-08-07 Ericsson Inc. Method and apparatus for obtaining information about paths terminating at a node

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10147748A1 (en) * 2001-09-27 2003-04-17 Siemens Ag Method and device for adapting label-switched paths in packet networks
WO2003065647A2 (en) * 2002-01-30 2003-08-07 Ericsson Inc. Method and apparatus for obtaining information about paths terminating at a node

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468408A (en) * 2013-09-22 2015-03-25 中国电信股份有限公司 Method for adjusting dynamically service bandwidth and control center server
CN104468408B (en) * 2013-09-22 2018-04-06 中国电信股份有限公司 Method for dynamically adjusting service bandwidth and control center server

Also Published As

Publication number Publication date
CN1633081A (en) 2005-06-29

Similar Documents

Publication Publication Date Title
CN100505639C (en) Method of implementing resource application for multi-service streams
US6854013B2 (en) Method and apparatus for optimizing network service
KR100853045B1 (en) Auto-ip traffic optimization in mobile telecommunications systems
CN1283079C (en) IP network service quality assurance method and system
CN101136866B (en) Integrated network communication layer service quality guaranteeing structure and operating method
US20040008688A1 (en) Business method and apparatus for path configuration in networks
US20080008091A1 (en) Qos CONTROL SYSTEM
JP2003507969A (en) Service Parameter Network Connection Method
CN1581791B (en) Method for providing reliable transmission service quality in communication network
CN100389581C (en) Method for ensuring quality of end-to-end service
CN100334837C (en) A method for assigning path bandwidth in bearing control layer
CN101123814A (en) Adjacent space multi-protocol tag switching network system and its processing method
WO2009049676A1 (en) Method and apparatus for use in a network
CN1756186B (en) Resource management realizing method
CN100589401C (en) Method for configuring path route at carrying network resource supervisor
EP1113629B1 (en) Session subscription system and method for same
CN101499970A (en) Band-width allocation method for guaranteeing QoS of customer in IP telecommunication network
CN100391154C (en) Selecting method of path in resource supervisor
CN100442703C (en) Method for transmitting service flow in supporting network
CN100355249C (en) A method for accomplishing resource request for bothway service in bearing network
CN100382540C (en) Method for realizing service connection resource management
CN100456691C (en) Method for distributing bearing net resource
Samdanis et al. Service Boost: Towards on-demand QoS enhancements for OTT apps in LTE
CN100394736C (en) Improvement of multi-medium communication reliability by logic media
CN100450049C (en) A method for implementing resource distribution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070829

Termination date: 20151224

EXPY Termination of patent right or utility model