WO2012114328A1 - System and method for active queue management per flow over a packet switched network - Google Patents

System and method for active queue management per flow over a packet switched network

Info

Publication number
WO2012114328A1
Authority
WO
WIPO (PCT)
Prior art keywords
packets
subscriber
pswtn
pfaqmf
queue
Application number
PCT/IL2012/000077
Other languages
French (fr)
Inventor
Ayal Lior
Shahar Gorodeisky
Kobi BRENER-SHEM-TOV
Original Assignee
Celtro Ltd.
Application filed by Celtro Ltd.
Publication of WO2012114328A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/19 Flow control; Congestion control at layers above the network layer
    • H04L 47/193 Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 47/50 Queue scheduling
    • H04L 47/60 Queue scheduling implementing hierarchical scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/621 Individual queue per connection or flow, e.g. per VC
    • H04L 47/6215 Individual queue per QOS, rate or priority
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H04L 47/629 Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]

Definitions

  • the embodiments presented in this disclosure generally relate to communication networks, and more particularly to per-flow-queue management of data traffic of a plurality of flows at an intermediate node of a packet switched network.
  • Hierarchical Queuing Framework is a queuing architecture, which includes a plurality of queues; each queue is used for storing incoming packets and/or frames and/or ATM cells for a specific session or group of sessions that are associated with a certain priority level. The queues can be organized in hierarchical architecture.
  • the HQF architecture complies with the topology of a network hierarchical tree. Therefore, each queue can be drained in a different rate, limited to a certain delay, dropping policy, etc..
  • the plurality of queues can be logical queues that are embedded in a single physical memory device with a plurality of pointers, each pointer can be associated with a different queue (different subscriber and/or session), for example.
  • the queuing system can control several aspects of handling packets of each session, such a queuing system can be referred as hierarchical Weighted Fair Queuing or hierarchical class based queuing.
  • the aspects can be multi level of packet scheduling; class based shaping (maximum rate, minimum rate), maximum delay, delay variation, dropping policy, for example.
  • each queue can have further child queues.
  • Each queue can be associated with packets having one or more similar attributes.
  • AQM parameters can be configured after installation of a HQF in a network by an administrator of the network, for example. Once in a while, when a change in the network occurs the HQF can be reconfigured accordingly. Routing packets between the different queues in an HQF can be done based on parsing the header of received packets and comparing fields in the header to a queue routing table.
  • the fields can include QoS indication, label, layer 4 port number, IP address, etc.
  • an access service provider, such as an operator of a public switched telephone network (PSTN), an Asymmetric Digital Subscriber Line (ADSL) service, Plain Old Telephone Service (POTS), Integrated Services Digital Network (ISDN), an Internet Service Provider (ISP), etc.
  • Each flow can be associated with a subscriber having a different priority than another flow. Further, each flow can carry data of different type of sessions. Each session can have different priority, different bandwidth requirements and different packet loss and delay requirements, etc.
  • intermediate nodes can be router, switches, gateways, base stations, etc.
  • Prior art includes dropping mechanisms for managing congestion of a plurality of queues.
  • Some prior art methods use a drop-tail mechanism, wherein a received packet is put onto a queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise, independently of the priority that the packet has.
  • Other prior art methods use active queue management (AQM), which drops packets before the congestion buffer is full.
  • An AQM system starts dropping a certain percentage of packets when the congestion buffer reaches a certain threshold.
  • An exemplary AQM system is a random early detection (RED) method.
  • a RED system monitors the average queue size and drops packets based on statistical probabilities. If the buffer is almost empty, all incoming packets are accepted. As the queue grows, the probability for dropping an incoming packet grows too. For example, when 45% of the buffer is full, then a RED system can start dropping one packet every ten packets (10% dropping ratio); when the buffer is full, the probability has reached 1 and all incoming packets are dropped.
  • RED is not sensitive to the priority of the dropped packets, a priority that can depend on the priority of the subscriber that is associated with the dropped packet or on the priority of the session carried by the dropped packet. RED techniques are well known to a person skilled in the art and will not be further described.
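  • As an illustration only (not part of the original disclosure), the RED behavior described above can be sketched as follows. Python is used purely for illustration; the 45% threshold and the 10% starting drop ratio are taken from the example above, while the linear ramp toward a drop probability of 1 at full occupancy, and the function and parameter names, are assumptions of this sketch.

        import random

        def red_drop_probability(avg_occupancy, start_threshold=0.45, start_drop=0.10):
            """Sketch of a RED-style drop curve.
            avg_occupancy: smoothed queue occupancy in the range 0.0-1.0.
            Below the threshold nothing is dropped; above it the drop probability
            ramps (here linearly, an assumption) from the starting ratio up to 1.0
            when the buffer is full."""
            if avg_occupancy < start_threshold:
                return 0.0
            if avg_occupancy >= 1.0:
                return 1.0
            span = 1.0 - start_threshold
            return start_drop + (1.0 - start_drop) * (avg_occupancy - start_threshold) / span

        def red_should_drop(avg_occupancy):
            # drop the incoming packet with the probability given by the curve
            return random.random() < red_drop_probability(avg_occupancy)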
  • WRED weighted random early detection
  • An exemplary WRED system may have several different queue thresholds. Each queue threshold is associated with a particular priority of the packet, a certain quality of service (QoS) for example. A queue may have lower thresholds for lower priority packets. A queue buildup will cause the lower priority packets to be dropped first, hence protecting the higher priority packets in the same queue.
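  • The following minimal sketch (illustrative only) shows the per-priority thresholds described above; the threshold values and the names are assumptions, not taken from the disclosure.

        # Hypothetical per-priority thresholds: lower-priority packets start being
        # dropped at lower queue occupancy, protecting higher-priority packets
        # that share the same queue.
        WRED_THRESHOLDS = {"low": 0.30, "medium": 0.55, "high": 0.80}

        def wred_drop_probability(avg_occupancy, priority):
            # drop probability grows from 0 at the priority's threshold to 1.0
            # when the shared queue is full (linear growth is an assumption)
            threshold = WRED_THRESHOLDS[priority]
            if avg_occupancy <= threshold:
                return 0.0
            return min(1.0, (avg_occupancy - threshold) / (1.0 - threshold))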
  • TCP transmission control protocol
  • a source and a destination of a TCP connection have a handshake protocol which allows the destination to inform the source that packets were received.
  • the destination can send an acknowledgement packet (ACK) upon receiving one or more packets.
  • the destination can indicate that one or more packets are missing.
  • the indication can be done by sending three consecutive ACK packets carrying the same TCP sequence number.
  • the source retransmits the required packet. Retransmission can occur also when a time out is reached at the source of the flow.
  • a TCP/IP connection has rate controller that adapts the transmitting rate of a flow according to the requests of retransmission of dropped packets, as well as by responding to the window size carried by the ACK packet.
  • an IP transport network is a best-effort network. As such, packets delivered from one endpoint to the other can be delayed, dropped or jittered. A common cellular operator sells different levels of service; accordingly, a subscriber can purchase a certain servicing policy which guarantees a dropping rate, a minimum BW, etc. Thus, a transport network in a RAN of a cellular operator requires a personalized, per-subscriber HQF that complies with the subscriber's contract.
  • a typical TCP/IP rate controller at the source of a TCP/IP flow has two modes of operation, a slow-start mode and a congestion-avoidance mode.
  • the source of a TCP flow starts the transmission of a flow in the slow-start mode.
  • in slow-start mode, the source transmits a small number of TCP segments per congestion window and waits for acknowledgement.
  • a TCP segment can be carried over a single packet.
  • the TCP source can start with transmitting two packets per congestion window, then four packets, etc.
  • the Rate is increased until the point that congestion window size (cwnd) reaches the size of the destination's advertised window or a pre-defined number of bytes (ssthresh) or until a retransmission is needed for one or more data segments.
  • the rate controller at the source, moves to the second mode, the congestion avoidance mode.
  • the rate is increased linearly.
  • the window is increased by one segment for each received acknowledgment (ACK).
  • an exemplary source of a TCP flow responds to retransmission of a single data packet by reducing the transmitting rate by a certain percentage, between 25% and 50% for example. Then, the rate is increased according to the congestion-avoidance mode of operation.
  • when a common source of a TCP flow retransmits multiple data packets within the same congestion window, or when its retransmission timer expires, it reduces the rate more drastically by returning to the slow-start mode of operation, starting again by transmitting two packets in a congestion window.
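  • A hedged sketch of the source-side rate controller described in the preceding paragraphs may help to visualize it; it is not the claimed subject matter. The 50% reduction on a single retransmission is an assumed value within the 25-50% range given above, and the growth of one segment per round trip during congestion avoidance is a simplification of the "one segment per received ACK" behavior described above.

        class TcpSourceRateController:
            """Illustrative model of the TCP source behavior described above."""

            def __init__(self, ssthresh=64, advertised_window=128):
                self.cwnd = 2                     # slow start begins with two segments per congestion window
                self.ssthresh = ssthresh          # pre-defined threshold (assumed to be in segments here)
                self.advertised_window = advertised_window
                self.slow_start = True

            def on_round_trip_acked(self):
                if self.slow_start:
                    # slow start: roughly double the window every round trip
                    self.cwnd = min(self.cwnd * 2, self.advertised_window)
                    if self.cwnd >= self.ssthresh:
                        self.slow_start = False   # switch to congestion avoidance
                else:
                    # congestion avoidance: linear increase
                    self.cwnd = min(self.cwnd + 1, self.advertised_window)

            def on_single_retransmission(self):
                # single retransmission: reduce the rate by an assumed 50%
                self.cwnd = max(2, self.cwnd // 2)
                self.slow_start = False

            def on_multiple_retransmissions_or_timeout(self):
                # drastic reduction: back to slow start with two segments
                self.ssthresh = max(2, self.cwnd // 2)
                self.cwnd = 2
                self.slow_start = True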
  • Embodiments of a novel AQM technique are disclosed.
  • An embodiment of the novel technique can have a Per-Flow-Active-Queue-Management Framework (PFAQMF) at an intermediate node that controls the dropping of packets at the congestion buffers of that node, taking into consideration the congestion window of each TCP flow and avoiding the dropping of two or more packets that belong to the same congestion window.
  • Exemplary embodiments of the novel PFAQMF could also consider the priority of the subscriber and the priority of the session before dropping a packet. Further, some embodiments of the novel PFAQMF could also scatter the dropped packets among the plurality of flows in order to fairly share the available bandwidth between the plurality of flows.
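  • One possible reading of the "no two drops in the same congestion window" idea is sketched below for illustration only: since a TCP source sends roughly one congestion window per round-trip time (RTT), spacing early drops of a flow at least one RTT apart approximates that behavior (the disclosure itself later mentions limiting dropping to one packet per RTT). The class and method names are assumptions.

        import time

        class PerFlowDropGate:
            """Illustrative sketch: allow at most one early drop per flow per RTT,
            so that no two dropped packets belong to the same congestion window."""

            def __init__(self):
                self.last_drop_time = {}   # flow_id -> time of the last early drop

            def may_drop(self, flow_id, rtt_seconds, now=None):
                now = time.monotonic() if now is None else now
                last = self.last_drop_time.get(flow_id)
                if last is not None and (now - last) < rtt_seconds:
                    return False   # a packet of this congestion window was already dropped
                return True

            def record_drop(self, flow_id, now=None):
                self.last_drop_time[flow_id] = time.monotonic() if now is None else now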
  • An exemplary novel PFAQMF can be installed in-line to the traffic at a point of concentration (POC) at an ingress edge of an access network between the Internet and the plurality of subscribers.
  • An exemplary POC can be a switch, a router, a bridge, etc.
  • An embodiment of PFAQMF can be installed in a cellular access network which is carried over a Packet Switch Transport Network (PSwTN).
  • Exemplary PSwTN can be Multi Protocol Label Switching (MPLS) over IP; MPLS- TP (transport profile); MAC-in-MAC; Ethernet; etc.
  • the PFAQMF can be installed in a cellular operator access network in the first POC between a Radio Network Controller (RNC) and the plurality of subscribers.
  • An exemplary PFAQMF may employ a Hierarchical Queuing Framework (HQF).
  • Hierarchical Queuing Framework is a queuing architecture, which includes a plurality of queues; each queue is used for storing incoming packets and/or frames and/or ATM cells and/or Protocol Data Unit (PDU) for a specific session or group of sessions that are associated with a certain priority level.
  • the queues can be organized in hierarchical architecture.
  • the HQF architecture complies with the topology of a network hierarchical tree.
  • each queue can be drained in a different rate, limited to a certain delay, dropping policy, etc.
  • the plurality of queues can be logical queues that are embedded in a single physical memory device with a plurality of pointers, each pointer can be associated with a different queue (different subscriber and/or session), for example.
  • an exemplary PFAQMF may assign a statistical drop curve to each flow.
  • An exemplary drop curve may be calculated as a function of the queue occupancy of the queue that is associated with the flow.
  • One axis, the X axis for example, can reflect the percentage of the queue occupancy.
  • the queue occupancy can be calculated by using a smoothing algorithm to overcome temporary changes of the x-axis's value.
  • An exemplary smoothing algorithm can include mathematical averaging formula.
  • the other axis, the Y axis for example, of the drop curve can define the percentage of the packets to be dropped at certain occupancy.
  • An exemplary PFAQMF may have a plurality of LUTs that represent a plurality of drop curves in order to cover the plurality of combinations of priority of subscribers and priority of flows.
  • An exemplary drop curve LUT (DCLUT) can reflect a low priority session, such as email for example, of a low priority subscriber. Such a curve may be a steep curve that reaches high percentages of dropped packets already at low occupancy. Another DCLUT can reflect a combination of a high priority session, such as progressive video for example, with a high priority subscriber. Such a curve can be a moderate curve that has low percentages of dropped packets even at higher occupancy.
  • Another exemplary PFAQMF may be configured to adapt the drop curve of a certain flow as a function of the percentage of the per-flow fair-share bandwidth value that is utilized by that flow.
  • adapting the drop curve can be implemented by using a plurality of DCLUTs per flow. Each DCLUT can be associated with a certain percentage of the utilized bandwidth. A current DCLUT can be selected based on the current percentage of the fair-share BW that is utilized. Another example may use a single DCLUT per flow and implement a horizontal transformation on the stored function. Such an embodiment may calculate a current 'x' value for the X axis of the drop curve as a function of the queue occupancy and the percentage of the fair share that is currently utilized.
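  • The sketch below illustrates one way a DCLUT and the horizontal transformation described above could look; the curve points, the scaling formula and all names are assumptions made only for illustration.

        import bisect

        class DropCurveLUT:
            """Illustrative drop-curve LUT: X is queue occupancy (0-100%),
            Y is the percentage of packets to drop at that occupancy."""

            def __init__(self, points):
                # points: list of (occupancy_percent, drop_percent) sorted by occupancy
                self.xs = [p[0] for p in points]
                self.ys = [p[1] for p in points]

            def drop_percent(self, occupancy_percent):
                i = bisect.bisect_right(self.xs, occupancy_percent) - 1
                return self.ys[max(0, i)]

        # assumed example curves: a steep curve for a low priority session of a low
        # priority subscriber, a moderate curve for a high priority combination
        STEEP_CURVE = DropCurveLUT([(0, 0), (20, 10), (40, 40), (60, 80), (80, 100)])
        MODERATE_CURVE = DropCurveLUT([(0, 0), (50, 5), (70, 15), (90, 40), (100, 100)])

        def adapted_drop_percent(curve, occupancy_percent, utilized_bw, fair_share_bw):
            # one possible 'horizontal transformation': a flow using more than its
            # fair share is treated as if its queue were fuller (assumed formula)
            ratio = utilized_bw / fair_share_bw if fair_share_bw else 1.0
            x = min(100.0, occupancy_percent * ratio)
            return curve.drop_percent(x)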
  • FIG. 1 is a simplified block diagram illustrating a snapshot of a portion of a cellular network in which an example of an embodiment of the present disclosure can be used;
  • FIG. 2 is a simplified block diagram illustrating a snapshot of the portion of the cellular network of FIG. 1 with an example of an embodiment of a Per-Flow-Active-Queue-Management Framework (PFAQMF).
  • FIG. 4 schematically illustrates a flowchart showing relevant actions that can be implemented by an input packet processor (IPP) that can be employed in an example of an embodiment of a PFAQMF;
  • FIG. 5 schematically illustrates a flowchart showing relevant actions of an example of an embodiment of a subscriber access processor (SAP) while learning the topology of the exemplary RAN 200;
  • FIG. 6 schematically illustrates a flowchart showing relevant actions of an example of an embodiment of a subscriber access processor (SAP) for monitoring a user equipment (UE) current available bandwidth (UECABW) over the RAN 200;
  • FIG. 7 schematically illustrates a flowchart showing relevant actions of another example of a SAP process for learning the topology of an MPLS transport network
  • FIG. 8 schematically illustrates a flowchart showing relevant actions of an example of a SAP process for responding to subscriber RANAP and NBAP messages; and
  • FIG. 9a, 9b and 9c schematically illustrate a flowchart showing relevant actions of an example of a subscriber's-session controller process (SSCP).
  • modules of the same or different types may be implemented by a single processor.
  • Software of a logical module may be embodied on a computer readable medium such as a read/write hard disc, CDROM, Flash memory, ROM, or other memory or storage, etc.
  • a software program may be loaded to an appropriate processor as needed.
  • the terms task, method, process, and procedure can be used interchangeably.
  • FIG. 1 illustrates a snapshot of an exemplary cellular network 100 of a cellular operator.
  • a common cellular network can comprise two types of networks, a radio access network (RAN) and a cellular operator core network (COCN) 110.
  • the RAN connects the plurality of base stations 125, via a base station controller, such as but not limited to a Radio Network Controller (RNC) 120, to the COCN 110.
  • Non-limiting examples of a base station can be a Node B (NB) 125 or an Enhanced Node B (eNode B).
  • the COCN 110 can comprise an operator's management network 114 having one or more management servers (not shown in the drawings).
  • the operator management network 114 can comprise a policy server such as a policy and charging rules function (PCRF), an AAA (Authentication, Authorization and Accounting) system, etc.
  • the core network 110 can comprise an operator IP network 112 that includes one or more servers (not shown in the drawings) that provide different IP services to the subscribers; services such as but not limited to border routers, an IP portal, content providers, etc.
  • the operator IP network 112 can be connected to the World Wide Web (WWW) 102, Content Provider IP networks 104, and/or a plurality of Intranet (not shown in the drawings).
  • the RAN connectivity between the RNC 120 and the plurality of NBs 125 can be carried over a Packet Switch Transport Network (PSwTN) 130, also referred to as a Packet Data Network 130.
  • An exemplary PSwTN 130 can be based on IP.
  • a transport network provides traffic forwarding between two or more pairs of endpoints, NBs 125 and RNC 120 for example. This forwarding is based on a protocol header field in layer 2 and layer 2.5 of the Open System Interconnection (OSI) model. In some transport network, the traffic can be carried in tunnels between the endpoints.
  • an IP transport network can represent Layer 2.5 of the OSI model.
  • the OSI layer 2.5 is considered to exist between traditional definitions of Layer 2 (Data link Layer) and Layer 3 (network layer). Layer 2.5 can be used to carry many different kinds of traffic, including IP packets as well as native Asynchronous Transfer Mode (ATM), Ethernet frames, etc..
  • exemplary PSwTN network 130 can be Multi Protocol Label Switching (MPLS) over IP; MPLS-TP (transport profile); MAC-in-MAC; Ethernet; etc.
  • MPLS network can be used as a representative term for any PSwTN 130.
  • the PSwTN network 130 can carry subscribers' packets, the PSwTN signaling & control, and the RAN Signaling & Control, for example, NB application part (NBAP), between RNC 120 and NBs 125.
  • Exemplary MPLS network 130 can comprise a plurality of points of concentration (POC 1 to POC 7). Each POC may be connected to one or more other POCs over an MPLS connection 131-139 and/or to one or more NBs 125 via an IP transport connection 122, 124 and 126, for example.
  • An exemplary POC can be a switch (for Ethernet), router, bridge, (for IP/MPLS), etc.
  • the topology of a RAN between the RNC 120 and a plurality of NBs 125 has a shape similar to a tree with a plurality of branches, wherein each junction includes a POC.
  • Each of the plurality of user equipment (UE 1 to UE 6) is connected via an RF communication link 140 to an NB 125, from the NB 125 over an IP transport connection (126, 124, 122) via one or more POCs (POCs 1-7) to the RNC 120, and from the RNC via an IP transport connection 119a through the COCN 110 to an Internet network, the World Wide Web (WWW) network 102, a content network 104 or an operator IP network 112, for example.
  • Connection 119a carries the RAN control and signaling of the RNC (RANAP), decrypted subscriber's packets.
  • RNC 120 can be connected to the PSwTN 130 via connection 119b to POC 1.
  • Connection 119b can carry the RAN control and signaling of the NB (NBAP); and encrypted subscriber's packets.
  • the signaling and controls connections can carry Radio Access Network Application Part (RANAP) messages between a cellular operator core network (COCN) 110 and the RNC 120, or carry Node B application part (NBAP) messages between the NBs and the RNC, for example.
  • PDUs of RANAP and/or NBAP can be carried over an IP network, Further in some of those embodiments a packet of the IP network can carry one or more RANAP or NBAP PDUs.
  • a reader who wishes to learn more about RANAP or NBAP is invited to read the well-known 3GPP protocols published from 1999 onward, the content of which is incorporated herein by reference.
  • cellular network 100 is shown having only six UEs (UE 1-6), three NBs 125 (NB 1-3), seven POCs (POC 1-7), and one RNC 120.
  • a person skilled in the art will appreciate that any other quantity of each component can be implemented in the access network.
  • RAN 100 is illustrated just for explanation and is not intended to limit the scope of this description.
  • the communication links between the different POCs, and between the POCs and the different NBs, may use different physical layers; each link can have a different capacity and a different available bandwidth (BW) and can use different protocols, and each POC may have a different load.
  • RAN 100 can comprise two or more RNCs 120.
  • the RNCs can communicate between themselves by using Radio Network Subsystem Application Part (RNSAP) over the IP transport network, which is carried over a PSwTN, such as but not limited to MPLS network 130 for example.
  • the old communication path between UE 2 and the RNC 120 includes NB 1, communication link 122, POC 2, communication link 131 and POC 1, and from there IP transport connection 119b to the RNC 120.
  • the new communication path between UE 2 and the RNC 120, includes NB 3, IP transport communication link 126, POC 5, IP/MPLS communication link 137, POC 4 and IP/MPLS communication link 135 to POC 1 and from there via IP transport connection 119b to RNC 120.
  • Each segment of the path can be carried on different physical line, having different BW and load, each POC can have different load and delay.
  • the IP address associated with the UE 2 in the IP transport network is changed from the IP address of NB 1 to the IP address of NB 3.
  • the port number that was associated to the subscriber's session can be changed too.
  • a subscriber's session can be tunneled between the RNC and the Node B. All of the subscriber IP packets belonging to the data session would be encapsulated in a tunnel called a RAB (Radio Access Bearer), which can use UDP/IP encapsulation of the radio protocols. Looking at a RAB packet traveling towards an NB, the destination IP address would be that of the NB and the UDP Layer 4 port would identify this UE tunnel.
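  • For illustration only, a downlink RAB packet could be mapped to a per-UE flow key as sketched below; the dictionary field names are assumptions, while the use of the destination IP address (the NB) and the destination UDP port (the UE tunnel) follows the description above.

        from collections import namedtuple

        DownlinkFlowKey = namedtuple("DownlinkFlowKey", ["nb_ip", "udp_port"])

        def classify_downlink_rab(ip_transport_header):
            """ip_transport_header is assumed to be a dict holding the parsed
            destination IP address and destination UDP port of the RAB packet."""
            return DownlinkFlowKey(nb_ip=ip_transport_header["dst_ip"],
                                   udp_port=ip_transport_header["dst_udp_port"])

        # example with assumed values:
        # key = classify_downlink_rab({"dst_ip": "10.0.1.7", "dst_udp_port": 49152})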
  • an IP transport network 130 can be a best-effort network. As such, packets delivered from one endpoint to the other can be delayed, dropped or jittered.
  • a common cellular operator sells different levels of service; accordingly, a subscriber can purchase a certain servicing policy which guarantees a dropping rate, a minimum BW, etc.
  • a transport network 130 in a RAN of a cellular operator therefore requires a personalized, per-subscriber HQF that complies with the subscriber's contract.
  • Yet some embodiments can be installed in access networks to the Internet, other than cellular access networks.
  • An example of such a network can be a Metro network.
  • Similar dropping methods would improve the experience of the subscribers of the access network.
  • information regarding an IP flow can be retrieved from the IP header and/or layer 2 header.
  • FIG. 2 illustrates an example of a Per-Flow-AQM Framework (PFAQMF) 210 that can be installed in association with RNC 120 and POC 1 of the snapshot of FIG. 1.
  • the PFAQMF 210 can be installed as a transparent bridge after the RNC 120 and POC 1, between POC 1 and the NBs 125.
  • the exemplary illustrated system in FIG. 2 controls the download traffic.
  • Other embodiments may have a slave module, which can be installed at the outputs of the egress POCs (POC 2, & POC 5, for example).
  • the slave module can manage the upload traffic in association with PFAQMF 210.
  • An embodiment of PFAQMF 210 can be installed as MPLS entity in the MPLS network 130 and may be connected to a plurality of entities in order to collect information regarding the Internet traffic to/from the plurality of UE 1-6.
  • PFAQMF 210 may obtain signaling and control that travel over the MPLS 130.
  • PFAQMF 210 can listen, via sniffer connection 219, to RANAP traffic and to decrypted subscriber's packets that travel over connection 119a.
  • via connection 214, PFAQMF 210 can collect information from the operator management network 114.
  • the communication over connection 214 can be based on Gx protocol, RADIUS protocol, or similar protocol.
  • the policy information may include the guarantied dropping rate, minimum BW, bit rate, etc.
  • exemplary PFAQMF 210 obtains MPLS packets that carry: encrypted IP packets toward the UE 1-6, NBAP PDUs toward the plurality of NBs 125, MPLS signaling and control sent from POC 1 toward the plurality of POCs, etc.
  • the obtained information after being processed can be transferred toward it destination via connections 131b, 133b,& 135b respectively.
  • PFAQMF 210 may obtain via connections 131b, 133b,& 135b, MPLS packets that carry: encrypted IP packets from the UE 1-6 toward the Internet, NBAP PDUs from the plurality of NBs 125 toward RNC 120, MPLS signaling and control packets sent from the different POCs toward POC 1, etc.
  • the obtained packets after being processed, can be transferred toward it destination via connections 131a, 133a,& 135a respectively.
  • the exemplary PFAQMF 210 that is connected to the PSwTN network, such as IP/MPLS network 130 at the egress of POC 1 can be connected to a plurality of virtual networks, each providing transport service to a specific set of communication channels and protocols, that are carried over the IP/MPLS network 130.
  • Each network can comply with a different protocol and be used for different application.
  • One of the networks can be the network that carries the signaling and control of the plurality of POCs. Connecting to this network can be used for learning the topology of the network.
  • the example of PSwTN of the FIG. 2 with the plurality of POCs can be a single routing domain, a single autonomous system (AS), for example.
  • in an embodiment of PFAQMF 210 that is configured to operate in the downstream direction of a PSwTN based on Multi Protocol Label Switching (MPLS), the different POCs that are illustrated in FIG. 2 are Label Switch Routers (LSRs) and/or Label Edge Routers (LERs) of the MPLS network 130.
  • POC 1 serves as an ingress LER
  • POC 2 serves as an egress LER for NB 1
  • POC 5 serves as an egress LER for NB 3.
  • Such an exemplary PFAQMF 210 can be adapted to listen to the communication between the ingress LER (POC 1) and the plurality LSRs (POCs 3,4) as well as the egress LERs (POCs 2 & 5) .
  • RNC 120 may send PDUs that carry signaling and control information based on sections of Iub protocol, toward the NBs 125, etc.
  • POC 1, upon receiving PDUs from the RNC 120, may process the IP transport header and, based on the IP address of the destination NB and a routing table, add an MPLS header having one or more labels and send the IP/MPLS packet toward the relevant POC (POC 2, 3 or 4), via PFAQMF 210, over one of the connections 131a, 133a & 135a toward the following POC.
  • Each label defines a next POC in the way of the IP/MPLS packet.
  • Each POC can remove, add or replace the label that is pointed to it.
  • OAM Operations, Administration, and Maintenance
  • LTM link trace message
  • MEG Maintenance Entity Group
  • MEP MEG End Point
  • MIP MEG Intermediate Point
  • the receiving side can respond with link trace reply (LTR) message that comprises the Media Access Control (MAC) address of the responder.
  • a path from a subscriber UE and the embodiment of PFAQMF 210 includes a MEP at a certain MEG level and zero or more MIPs belonging to the same MEG and having the same MEG level.
  • the second MEP resides inside the embodiment of PFAQMF 210.
  • An exemplary path can be between POC 2 and PFAQMF 210.
  • This path serves UEs attached to NB 1 and NB 2 125.
  • the MEG level of this path can be 4, and MEPs reside in POC 2 and PFAQMF 210.
  • Another exemplary path can be MEP at NB 3 125 serving UE 5, MEG level 5 can be set for this path, having a MEP in this MEG level at PFAQMF 210.
  • a MIP in MEG level 5 can be set at POC5 and POC 4 along the path, etc.
  • the PFAQMF 210 and Egress POCs (2, 5, 6, and 7) can be configured as a MEP for one or more MEG levels, while the intermediate POCs (3 and 4) can be configured as MIPs of that MEG level.
  • This initial configuration can be implemented by a network operator that defines the MEPs and/or MIPs with the appropriate MEG level designated for the topology discovery. A reader who wishes to learn more about LTM and LTR messages is invited to read ITU standard Y.1731, published in 2006 by the ITU, the content of which is incorporated herein by reference.
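  • The following is a deliberately simplified, data-structure-level illustration of what linktrace-based topology learning could yield; it does not use a real OAM stack, and the node names and MAC addresses are assumptions.

        def discover_path(responders):
            """Simulated linktrace result: a MEP sends an LTM along the path and every
            MIP (and the terminating MEP) of the same MEG level answers with an LTR
            carrying its MAC address, so the ordered list of responders describes the
            path. Here the network is represented as a list of (node_name, mac) tuples."""
            ltr_responses = []
            for hop, (name, mac) in enumerate(responders, start=1):
                ltr_responses.append({"hop": hop, "node": name, "mac": mac})
            return ltr_responses

        # example (assumed addresses): path from PFAQMF 210 toward NB 3 via POC 4 and POC 5
        # discover_path([("POC 4", "00:00:00:00:00:04"), ("POC 5", "00:00:00:00:00:05")])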
  • an embodiment of PFAQMF 210 may monitor the current available BW to the subscriber, the utilized BW, the dropping percentage, the RTT for a TCP flow, etc. Based on the processed information of the subscriber policy and the monitored information, PFAQMF 210 can manage an HQF that is connected in line to the egress of the PFAQMF 210. Such an exemplary PFAQMF 210 may define the draining rate (the shaper) of a queue allocated to a certain subscriber, avoid establishing a new IP session in case that the current available BW is below the guarantied minimum BW.
  • exemplary embodiments of the PFAQMF 210 can identify a group of the PSwTN packets that carries an entire subscriber's packet.
  • the PSwTN packets can also be referred to as the Protocol Data Units (PDUs) of the IP transport layer. Consequently, the dropping process of the PFAQMF 210 is more efficient than a dropping process that drops portions of a subscriber's IP packets, since it minimizes latency and maximizes bandwidth efficiency by not sending useless fragments of subscriber's packets.
  • An exemplary embodiment may determine, upon receiving the first PSwTN packet (the 1st PDU) that carries the beginning of a subscriber's IP packet, whether or not to drop the subscriber's IP packet.
  • the decision can be based on the subscriber's priority, the current load toward the destination NB, etc.
  • the following PDUs of the same subscriber's IP packet get the same dropping treatment until the PDU that carries the end of the subscriber's IP packet.
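  • A minimal sketch, for illustration only, of the per-PDU treatment described above: the drop decision is taken once on the PDU that carries the beginning of a subscriber's IP packet and is then applied to every following PDU of the same packet. The class and callback names are assumptions.

        class PduGroupDropper:
            """Decide once per subscriber IP packet, on its first PSwTN PDU, and give
            every following PDU of the same packet the same treatment until the PDU
            that carries the end of the packet."""

            def __init__(self, drop_decision_fn):
                self.drop_decision_fn = drop_decision_fn   # called once per subscriber IP packet
                self.dropping = {}                          # flow_id -> True while dropping the current packet

            def handle_pdu(self, flow_id, is_first_pdu, is_last_pdu):
                if is_first_pdu:
                    self.dropping[flow_id] = self.drop_decision_fn(flow_id)
                drop = self.dropping.get(flow_id, False)
                if is_last_pdu:
                    self.dropping[flow_id] = False
                return drop   # True -> drop this PDU, False -> forward it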
  • the PFAQMF 210 may limit the dropping of TCP/IP packets to one packet per congestion window. More information on embodiments of PFAQMF 210 is disclosed below in conjunction with FIGS. 3 to 9a, 9b and 9c.
  • FIG. 3 illustrates a simplified block diagrams with relevant elements of an example of PFAQMF 300.
  • PFAQMF 300 may comprise an input-packet processor (IPP) 310, a subscriber-IP-packets processor (SIPPP) 320, an active-subscriber table (AST) 335, a subscriber-access processor (SAP) 330, a plurality of egress-fix-access-network-interface cards (EFANIC) 350, and one logical HQF per each POC connected to the egress of the PFAQMF 300 (POCs 2, 3 & 4, for example).
  • each HQF may comprise a traffic manager (TM) 340 and a plurality of queue buffers 345.
  • the one or more TM 340 may communicate with each other during handover, for example.
  • a single TM 340 may control all of the HQFs.
  • An embodiment of AST 335 can be stored in read/write memory device, random access memory, etc. The AST 335 can be shared by the modules of PFAQMF 300.
  • IPP 310 may be used as a network processor between the plurality of connections that deliver a plurality of types of data, which can comply with a plurality of protocols, and the internal units of PFAQMF 300.
  • IPP 310 may process layers 1 to 3 of the OSI model. Based on the parsed information, the IPP 310 may initially classify the obtained data and accordingly may route it to an appropriate module of PFAQMF 300.
  • IPP 310 may listen, via connection 219 to the traffic over the connection 119a, and may obtain decrypted packets of subscribers IP session, as well as RANAP PDUs carried between the RNC 120 and the COCN 110. The IPP 310 may route those packets toward a queue of SAP 330. Via connection 214, the IPP 310 may listen to traffic carried over the management network 114. Such management traffic can comply with Gx protocol or RADIUS protocol and carries information regarding the subscribers' policy. The Gx or RADIUS messages can be carried by packets over an IP transport network. The information regarding the subscriber's policy can be requested by SAP 330 to be used for setting parameters that are needed for controlling the traffic toward that subscriber.
  • an embodiment of IPP 310 may obtain PSwTN packets, such as MPLS packets, that carry NBAP PDUs and IP session packets that are sent from UE 1-6 via the plurality of NBs 125 toward the relevant RNC 120 and IP servers respectively.
  • the IP packets can be transferred toward SIPPP 320 and be used for calculating the RTT parameter of a TCP/IP session, while the NBAP PDUs can be transferred toward SAP 330.
  • SIPPP 320 may receive only NBAP PDUs over those connections.
  • IPP 310 may receive, via connections 131a, 133a, & 135a (FIG. 2) PSwTN packets that carry IP packets targeted toward the UE 1-6 as well as PSwTN packets that carry NBAP data toward the plurality of NBs 125.
  • the IP packets can be transferred toward SIPPP 320, while the NBAP PDUs can be transferred toward SAP 330. More information on the operation of an exemplary IPP 310 is disclosed below in conjunction with FIG. 4.
  • SAP 330 may prepare the PFAQMF 300 to handle the IP data traffic to/from the plurality of subscribers' UEs 1-6. After installation and from time to time, when changes in the topology of the RAN occur, SAP 330 may learn the topology of the RAN and accordingly inform the TM 340 how to configure the queues buffers 345 of the HQF. During operation and even during a certain session SAP 330 may identify a change in the path to a relevant UE. For example, when a subscriber moves from one NB 125 to another. Switching from one NB 125 to another will affect the IP addresses of the packets over the PSwTN and will affect the subscriber identification.
  • the available bandwidth (BW) for the session can change while moving from one NB to another NB 125, the draining process of the queues that were associated with the subscriber while he was served by the previous NB has to be changed, etc.
  • SAP 330 has to configure dynamically the HQF allocated to that session and adapt it to the new situation associated with the new NB.
  • SAP 330 may manage and use the AST 335.
  • the Initial SREQ is transferred from the RNC 120 to an SGSN (not shown in the drawing) in the core network 110, for example.
  • SGSN stands for Serving GPRS Support Node
  • GPRS stands for General Packet Radio Service.
  • the SREQ carries the subscriber's UE International Mobile Subscriber's Identification (IMSI), and a common ID that identifies subsequent signaling messages between the UE and the core network.
  • IMSI can be given by listening to GTP-C messages.
  • IMSI is the identifier by which an entity can communicate with a mobile core entity (examples PCRF, policy server) for retrieval of the subscriber policy.
  • GTP stands for GPRS Tunneling Protocol.
  • GTP-C messages are used within the GPRS core network for signaling between Gateway GPRS Support Nodes (GGSN), Serving GPRS Support Nodes (SGSN) and RNC.
  • the response to this SREQ can comprise the common ID and an allocated Packet Temporary IMSI (P-TMSI) that will be used by the UE as long as the UE is connected to the SGSN.
  • an exemplary SAP 330 can obtain information regarding the subscriber IMSI, P-TMSI and the common ID used for signaling via communication link 219.
  • the common ID has one-to-one relationship with P-TMSI and is used in some of the signaling messages.
  • SAP 330 may obtain the Transport IP address of the servicing NB 125 and Layer 4 port allocated to the UE of the requesting subscriber.
  • the collected information can be stored in a new entry in AST 335 that is allocated to that new active subscriber.
  • SAP 330 may further listen, via connection 219, to few decrypted IP packet of the new IP session for identifying the transport layer and the application type of subscriber sessions (HTTP, FTP, etc.). In some embodiments this check can be done periodically in order to identify changes of the application that uses this connection. The identified transport layer and the application type can be added also to the allocated entry in the AST 335.
  • SAP 330 may collect, from connection 214, servicing policy information related to that subscriber. Information such as priority, guarantied dropping rate, minimum BW etc. Accordingly a DCLUT can be assigned to this session and a pointer to that DCLUT can be written in the allocated entry at AST 335.
  • SAP 330 may measure the current available bandwidth (CABW) for that session.
  • SAP 330 can calculate a current average RTT value between the PFAQMF 300 and a few common servers (not shown in the drawings) that are located over the operator IP network 112 (FIG. 2), servers such as but not limited to the operator portal, a cache, etc.
  • the CABW and the calculated RTT to common servers can be stored also at the entry of AST 335.
  • the average RTT value between the PFAQMF 300 and a few common servers can be found by using a ping procedure with each one of the common servers.
  • the ping procedure can be implemented by sending Internet Control Message Protocol (ICMP) echo request packets to each one of the common servers, waiting for an ICMP response, and measuring the time duration between transmitting and receiving the ICMP packets for each one of the common servers. This test can be run several times, and a statistical summary can be prepared. The summary can include the minimum, maximum and mean round-trip times, and sometimes the standard deviation of the mean.
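  • For illustration only, the statistical summary mentioned above can be computed as sketched below; the sample values in the comment are assumptions, and a real implementation would obtain each sample from an ICMP echo request/response exchange rather than from a list.

        import statistics

        def summarize_rtt_samples(samples_ms):
            """Build the min/max/mean/standard-deviation summary described above from
            a list of measured round-trip times (in milliseconds)."""
            return {
                "min": min(samples_ms),
                "max": max(samples_ms),
                "mean": statistics.mean(samples_ms),
                "stdev": statistics.stdev(samples_ms) if len(samples_ms) > 1 else 0.0,
            }

        # example with assumed sample values:
        # summarize_rtt_samples([12.1, 11.8, 14.0, 12.5])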
  • SAP 330 can instruct TM 340 to allocate a new queue to serve the new subscriber.
  • the new queue can be attached as a "leaf" queue to the logical queue of the NB 125 that is currently serving the relevant UE.
  • a weight value of the queues that are associated to the other UEs which are already served by that NB can be reduced in order to enable allocation of BW to the new subscriber.
  • the weight can also reflect the priority of each subscriber and/or session.
  • a Queue ID, in the HQF 345 can be allocated to that leaf queue and the Queue ID can be written in an appropriate field of the allocated entry in AST 335.
  • the queue ID can be written in an external header which is added to the PSwTN packet by the SIPP 320 and is removed by the TM 340 or EFANIC 350, for example.
  • SAP 330 can inform the SIPPP 320 about the new session and the allocated entry in the AST 335.
  • SAP 330 can identify NB handover, identify end of session and release the allocated resources for that session, etc..
  • NB handover from one Cell to another Cell can be identified by listening via connection 219 to new RAN parameters.
  • the new RAN parameters can be exchanged using protocols like NBAP, RANAP or GTP-C.
  • the SAP 330 can update the subscriber record in AST 335 with the new transport parameters. In addition, the queue assigned to the subscriber may change. More information on the operation of SAP 330 is disclosed below in conjunction with FIGS. 5, 6, 7 and 8.
  • Exemplary AST 335 can comprise a plurality of entries. Each entry in the AST 335 can be assigned to an active subscriber's UE. In some embodiments an entry can be divided into sub-entries. Each sub-entry can be assigned to a certain flow of that subscriber. Each entry or sub-entry can include a plurality of fields, as disclosed above. An entry can store a plurality of parameters in a plurality of fields, parameters such as ID values, IMSI, P-TMSI, queue ID, NB IP address, layer 4 port number, etc. Further, the entry can include policy information, the current path (the labels between POCs), an associated TM 340, shaping instructions and other relevant instructions to TM 340.
  • an entry can store the CABW, the calculated RTT, the utilized BW, the time duration from the last drop and the DCLUT or a pointer to the DCLUT, as well as other parameters that are needed for managing the traffic to and from each one of UEs (UE 1-6).
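  • The data structure below is only a sketch of how one AST entry could be laid out, based on the fields listed in the two preceding paragraphs; the field names and types are assumptions.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class AstEntry:
            """Illustrative layout of one active-subscriber table (AST) entry."""
            imsi: str = ""
            p_tmsi: str = ""
            common_id: str = ""
            nb_ip_address: str = ""
            layer4_port: int = 0
            queue_id: int = 0
            policy: dict = field(default_factory=dict)       # priority, guaranteed dropping rate, minimum BW, ...
            current_path_labels: List[int] = field(default_factory=list)
            cabw: float = 0.0             # current available bandwidth toward the UE
            rtt: Optional[float] = None   # calculated RTT (TCP flows only)
            utilized_bw: float = 0.0
            time_since_last_drop: float = 0.0
            dclut_id: Optional[int] = None  # pointer to the assigned drop-curve LUT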
  • the AST 335 can be accessed by the SIPPP 320, TM 340, SAP 330 and IPP 310 for reading or writing different parameters that are related to the relevant subscriber IP session, i.e., the relevant entry.
  • SAP 330 can allocate an entry in AST 335 to a new active subscriber and release the entry at the end of the session.
  • SIPPP 320 can manage a plurality of processes operating in parallel.
  • SIPPP 320 can comprise one or more processors running in parallel. Each processor can execute one or more threads, while each thread can be allocated per one or more UEs.
  • An exemplary thread can be initiated by SIPPP 320 after obtaining an indication from SAP 330 about a new subscriber IP session and a pointer to the assigned entry in AST 335.
  • a session table can be allocated.
  • the session table can be part of the AST 335.
  • the entry in the AST 335, which is related to that subscriber, is copied to the SIPPP 320 as part of the session table.
  • the session table can be used for monitoring the traffic of this session.
  • the session table can store information that is related to the session. Information such as but not limited to counters that count the total number of bytes that have been handled in this session. This field can be used for verifying that the subscriber remains within its quota according to his policy.
  • the session table can include the RTT value related to that session.
  • the RTT value can be calculated only for TCP/IP subscribers' sessions.
  • the RTT value related to that session can be calculated as the sum of the calculated average RTT from the PFAQMF 300 to the few common servers (as it is disclosed above in conjunction with SAP 330, for example) plus a found RTT value between the relevant UE and the PFAQMF 300.
  • Calculating the found RTT value can be done by an embodiment of SIPPP 320 after waiting for a silent interval in the traffic associated with the two ports of that session, the download and the upload traffic. After identifying such a silent interval, SIPPP 320 can wait for the first download packet and start measuring the time interval to the following upload packet. This measured time interval can be referred to as the found RTT value between the relevant UE and the PFAQMF 300.
  • the measuring process can be repeated for several consecutive silent intervals and a smoothing algorithm can be used for defining the found RTT value between the relevant UE and the PFAQMF 300.
  • An exemplary smoothing algorithm can be an average value of several measured time intervals.
  • the RTT value of a session can be calculated by measuring the time interval between two consecutive burst of download packets.
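  • A brief sketch, for illustration only, of the RTT composition described above: the session RTT is the average RTT from the PFAQMF to a few common servers plus the found RTT between the UE and the PFAQMF, the latter smoothed here by a simple average (one possible smoothing algorithm). The function name and units are assumptions.

        def session_rtt_ms(base_rtt_to_servers_ms, ue_side_samples_ms):
            """Combine the base RTT toward common servers with the smoothed found RTT
            measured between the UE and the PFAQMF after silent intervals."""
            if not ue_side_samples_ms:
                return None
            found_ue_rtt = sum(ue_side_samples_ms) / len(ue_side_samples_ms)
            return base_rtt_to_servers_ms + found_ue_rtt

        # example with assumed values:
        # session_rtt_ms(18.0, [42.0, 39.5, 44.2])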
  • An exemplary SIPPP 320 can obtain, from IPP 310, PSwTN packets, such as MPLS packets for example.
  • PSwTN packets can be referred as the protocol data unit (PDU) of the IP transport layer.
  • the PSwTN packets can carry, as payload, IP data packets of the subscriber IP session, NBAP PDUs between RNC 120 and the plurality of NBs 125, as well as signaling and control packets of the MPLS network 130.
  • An embodiment of SIPPP 320 can parse the header of the Frame Protocol (FP) and /or the header of the radio link control (RLC) protocol that follows the IP transport header of the PDU.
  • the RLC header starts in a configurable offset (the number of bytes from the beginning of the payload of the PDU).
  • SIPPP 320 can identify the first PDU that carries the beginning of a subscriber IP packet and the PDU that carries the end of the subscriber IP packet.
  • the RLC header provides information such as an indication that a new subscriber packet begins, by having the appropriate header bits set to the appropriate values.
  • the RLC sequence number can enable determination of the subscriber's original IP packet length.
  • SIPPP 320 can drop the PDUs (PSwTN packets) that carry one or more complete subscriber's IP packets.
  • PSwTN packets that were not dropped by SIPPP 320 are transferred with an indication of the Queue ID to the TM 340 that is associated to the next POC (POC 2, 3 or 4 in FIG. 2) in the path to the NB 125 that is currently serving the UE of that subscriber. More information about the operation of SIPPP 320 is disclosed below in conjunction with FIG. 9a-c.
  • One or more Traffic Managers (TMs) 340 can be associated with the PFAQMF 300.
  • TM 340 can be common HQF that is used in a terrestrial access network.
  • Exemplary HQF can be FAP 21 of Broadcom Corp. USA, or HX 330 of Xelerated Sweden, for example.
  • the hierarchy of the queues can be defined by SAP 330 after determining the configuration of RAN 200.
  • the most parent queues (the highest in the hierarchy) are the queues to POC 2,3, &4.
  • the lowest in the hierarchy, the most child queue or leaf queue is per UE queue.
  • the queue to NB 125 is one above the UE queue.
  • Each a leaf queue is associated with a Queue ID, which is used by SIPPP 320 to route a subscriber IP session packets toward the appropriate queue.
  • the control and communication with the TM 340 can be implemented via an Application Programming Interface (API) of the TM 340.
  • the controlling can include scheduling, weight, shaper, etc. per each queue.
  • the scheduler defines the maximum and minimum rate in which the queue can be drained.
  • the maximum rate can reflect the maximum BW of the communication link at the egress of the queue; the minimum rate can reflect policy limitations.
  • the shaper defines the rate limitation for draining the queue; in an exemplary embodiment, the shaper control parameter can be used to reflect the current available BW that can be allocated for draining the relevant queue.
  • the value of the shaper is smaller than the maximum value of the scheduler.
  • the value of the weight can reflect the weight of the queue compare to other queues in the same level that are connected at the egress of a higher hierarchical level queue.
  • the weight can also reflect the priority of each subscriber and/or session.
  • each queue can be associated with queue identification, a queue ID. In some embodiments in which the amount of bytes that were sent to a certain subscriber exceeded the subscriber's usage quota, then the weight factor of that subscriber can be decreased.
  • the queuing control parameters such as scheduling, weight, shaper, etc. are used for static operation.
  • those parameters can be configured once, by an operator of the network, while configuring the HQF to a certain network and remains without changes during operation.
  • in TM 340, those parameters are dynamically changed by the SAP 330 according to the current conditions over the network 200 and the currently served mobile devices.
  • when an exemplary SAP 330 determines that a change in the current available BW over a certain connection has occurred, one or more of the queue control parameters can be dynamically changed according to the changes in the available BW or topology.
  • for example, the shaper of a queue that is associated with that connection can be changed accordingly.
  • TM 340 is configured to inform SIPPP 320 with the current utilization of the queue toward each of the current active subscribers. SIPPP 320 may use this information to determine the percentages of early dropping packets to each one of the subscribers according to the DCLUT related to that subscriber. In some embodiments, when the subscriber session is carried over TCP/IP, the dropping can be limited to one packet per RTT.
  • an example of PFAQMF 210 that is installed between POC 1 and POCs 2, 3 and 4 can have three HQFs, each HQF can be associated with one of the communication links 131b, 133b and 135b, for example.
  • the configuration of the HQF that is associated with the traffic from POC1 to NB 3 and from there to UE 5 and UE 6 is further disclosed.
  • the configuration of the other HQFs can be done in a similar way.
  • An example of HQF may comprise a plurality of logical hierarchical queues.
  • the plurality of logical hierarchical queues can be implemented by a plurality of physical memories organized in hierarchical queues based on the hierarchical topology tree.
  • the plurality of logical hierarchical queues can be a plurality of virtual queues that are embedded within a single physical memory (queue buffers 345, for example). Each of those virtual queues can be controlled by an associated scheduler.
  • the plurality of schedulers can be organized according to the hierarchical tree of the topology. Each scheduler can be associated with one or more virtual queues.
  • the first level the trunk level of the tree, is the logical queue associated with link 135b. There is only one logical queue in the first level, the level of the trunk.
  • the second level at the egress of POC 4, has three attached logical queue, one is associated with communication link 137 to POC 5, one with communication link 138 to POC 6, and one logical queue is associated with link 139 to POC 7. Only one of these logical queues will be further disclosed, the rest can be configured in a similar way.
  • the 3rd level, at the egress of POC 5, has only one illustrated branch, to NB 3, attached to it. For the purpose of the description, it is assumed that three additional NBs are connected to POC 5 via other egress ports of POC 5 (not shown in the drawings); consequently, four logical queues are attached to the logical queue to POC 5.
  • the logical queue is associated with communication link 126 to NB 3.
  • the 4 th level of hierarchical queuing includes the logical queues that are attached to the logical queue to NB 3, the child queue or leaf queue is the queue that is associated with a UE. In this level two leaf queues are illustrated as attached to the logical queue to NB 3, a queue for UE 5 and a queue for UE 6.
  • first, the trunk 135b, the 1st level logical queue, the stem of the tree, can be configured.
  • the scheduler is set to a value that can be equal to the maximum allowable BW over the communication link 135b.
  • An ID can be allocated to this level 1 scheduler.
  • the 2 nd level logical queue can be configured. One per each POC that directly connects to the stem of the tree.
  • the scheduler value can be equal to the maximum allowable BW over the communication link 137, for example.
  • the weight can be equal to 33% because there are additional 2 queues at 2 nd level that are connected to the same queue in the higher level, the queue of trunk 135b. In some embodiments, some of the links may be set to a higher weight due to the fact that these links are preferred over others.
  • An ID can be allocated to this level 2 logical queue.
  • the 3rd level logical queue, the logical queue that controls the traffic over link 126 to NB 3, amongst other links, can be configured now.
  • the scheduler value can be equal to the maximum allowable BW over the communication link 126, for example.
  • the weight can be equal to 25% because there are additional three virtual queues at 3rd level that are connected to the same scheduler in level 2 (not shown in the drawings).
  • An ID can be allocated to this level 3 scheduler in this example the ID can reflect the ID value of the queue of link 137.
  • finally, the leaf level is configured; in the disclosed example, the 4th level queue to UE 6.
  • a set of queues is associated with the NB logical queue.
  • the number of queues in the set of queues can reflect the maximum number of UE that can be served by the NB.
  • a logical queue is assigned to every active user attached to this NB.
  • the scheduler of this queue can reflect the subscriber's policy that defines the maximum and the minimum allowable BW to the subscriber's UE, for example.
  • the minimum allowed BW can reflect the guaranteed BW which was promised to the subscriber and is written in the subscriber's policy.
  • the weight can be equal to 50% because there is only one additional UE currently communicating via NB 3. This weight may later be adjusted dynamically according to the specific subscriber that utilizes this specific queue, its priority, BW demand and application requirements, etc..
  • the dropping policy which is related to a certain UE, can be associated with the subscriber queue as AQM.
  • An ID can be allocated to this level 4 queue.
  • the ID value can reflect the higher queue ID, in this example the ID can reflect the ID value of the queue of link 126, which reflects the queue of the link 137, and so on.
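  • The hierarchical configuration walked through above can be summarized, for illustration only, by the sketch below. The weights (33%, 25% and 50%) follow the example above; the maximum rates, the queue-ID naming scheme and the class itself are assumptions.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class LogicalQueue:
            """Illustrative node of the hierarchical queuing tree described above."""
            queue_id: str            # reflects the chain of parent-queue IDs, as described above
            max_rate_mbps: float     # scheduler maximum (link or policy limit, assumed values)
            weight_percent: float    # weight among sibling queues at the same level
            children: List["LogicalQueue"] = field(default_factory=list)

        def build_example_tree():
            ue5 = LogicalQueue("135b/137/126/UE5", max_rate_mbps=10, weight_percent=50)
            ue6 = LogicalQueue("135b/137/126/UE6", max_rate_mbps=10, weight_percent=50)
            nb3 = LogicalQueue("135b/137/126", max_rate_mbps=100, weight_percent=25,
                               children=[ue5, ue6])
            poc5 = LogicalQueue("135b/137", max_rate_mbps=1000, weight_percent=33,
                                children=[nb3])
            trunk = LogicalQueue("135b", max_rate_mbps=10000, weight_percent=100,
                                 children=[poc5])
            return trunk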
  • dropping is handled by the SIPPP 320.
  • One or more exemplary egress-fix-access-network-interface cards (EFANIC) 350 are connected to the egress of TM 340 and obtain PSwTN packets, such as MPLS packets, that are drained from the queue buffers 345 by TM 340 according to the topology and the commands received from SAP 330. Each EFANIC 350 receives packets from TM 340 that are targeted toward one of the associated POCs, POCs 2, 3 and 4. Each EFANIC 350 processes the PSwTN packets according to the lower layers of the communication links (131a, 133a, 135a, 131b, 133b, and 135b respectively) to be transmitted to the relevant POC. In some embodiments the communication over the links to POCs 2, 3 and 4 can comply with Ethernet OAM protocols.
  • Process 400 can be implemented by an input packet processor (IPP) 310 employed in an embodiment of PFAQMF 300 (FIG. 3), for example.
  • IPP 310 may be used as an interface processor between the plurality of connections 131a,b; 133a,b; 135a,b; 219, 214 and the internal units of PFAQMF 300.
  • Those connections can deliver a plurality of types of data carried by different PSwTN packets.
  • Those packets can comply with a plurality of protocols.
  • IPP 310 may process layers 1 to 3 of the OSI model of each connection and deliver the plurality of packets to a queue of the internal processing unit of IPP 310 for further processing.
  • Process 400 can be initiated 402 after power on and may run in a loop as long as PFAQMF 300 is active.
  • IPP 310 can check 405 its queue looking for a next packet in the queue. If there is no packet in the queue, then the process can wait in a loop until a packet is obtained. If 405 there is a packet in the queue, the header of the packet can be parsed 408 according to the protocol of the relevant network: Ethernet, MPLS, IP, etc. Based on the parsed information, the IPP 310 may initially classify the obtained data and accordingly may transfer it to an appropriate module of PFAQMF 300. If 410 the packet carries RANAP or NBAP PDUs, then the packet is transferred 412 to SAP 330, to the queue of SAP RANAP process 800, which is disclosed below in FIG. 8. RANAP PDUs and NBAP PDUs are related to subscriber requests to establish or terminate an IP session, or to handover events.
  • RRO PDUs are used for learning and updating the topology to the plurality of NBs 125 and the POCs 2,3,4,5,6 and 7, for example.
  • Information such as ID information, policy information, subscriber priority, DCLUT, etc. can be obtained; this information can comply with the Gx protocol, the RADIUS protocol, or a similar protocol.
  • the Ethernet OAM frames such as delay-measurement message (DMM) and corresponding delay-measurement reply (DMR).
  • Those messages can be used by SAP 330 for calculating the UECABW. Therefore those frames can be transferred 442 to a queue associated with block 624 of the SAP BW process 600 that is disclosed below in conjunction with FIG. 6.
  • FIG. 5 schematically illustrates a flowchart showing relevant blocks of an example of a subscriber access processor (SAP) while learning the topology of the RAN 200 from the RNC 120 up to the plurality of NBs 125 and updating an NB- Topology DB (not shown in the drawings).
  • the exemplary process 500 can be initiated 502 at power on; in other embodiments process 500 can be initiated after executing process 700, which is disclosed below in conjunction with FIG. 7.
  • Process 700 is implemented for learning the topology of the POCs.
  • process 500 can be initiated each time a packet carrying an RSVP-TE RECORD object is sniffed by SAP 330 (FIG. 3) from POC 1 or one of the egress POCs (POC 2, 5, 6 and 7, for example).
  • the NB-Topology DB can comprise a plurality of entries. Each entry is associated with an NB 125. Each entry includes a plurality of fields: NB MAC; NB IP address; topology to the NB (the POCs in the path, one or more labels, IP address, MAC, and MEG level); max. BW of the communication link to its associated POC; available BW; etc.
  • SAP 330 may prompt 504 an administrator of the operator network to configure the NB-Topology DB with different parameters of each network device (NBs, POCs, RNC, etc.), parameters such as but not limited to IP address, MACs, max BW, etc. The rest of the fields can be filled by SAP 330 while running one or more of the processes disclosed below.
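One possible in-memory layout for such an NB-Topology DB entry is sketched below. The class and field names are assumptions chosen for illustration; only the listed fields themselves come from the description above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PocHop:
    """One POC on the path from POC 1 toward the NB (field names are assumed)."""
    ip: str
    mac: str
    label: Optional[int] = None
    meg_level: Optional[int] = None

@dataclass
class NbTopologyEntry:
    """One entry of the NB-Topology DB, keyed by the NB it describes."""
    nb_mac: str
    nb_ip: str
    path: List[PocHop] = field(default_factory=list)   # POCs from POC 1 toward the NB
    max_bw_bps: Optional[int] = None                    # max BW of the link to its POC
    available_bw_bps: Optional[int] = None              # updated by the BW monitoring process

# The administrator seeds the addresses; the learning processes fill the rest.
nb3 = NbTopologyEntry(nb_mac="00:11:22:33:44:03", nb_ip="10.0.3.1")
```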
  • process 500 may start 506 a loop between blocks 510 to 520. Each cycle in the loop is executed per each NB 125 that is served by the PFAQMF 210. The loop starts at block 510 for the next NB in the NB-Topology DB.
  • an LTM can be sent 512 with the MAC address of that NB.
  • SAP 330 may collect 514 one or more LTRs which were sent by that NB and by each one of the MIPs of the POCs along the path to that NB.
  • Each LTR includes parameters of the responding entity, such as but not limited to the responder's MAC, MEG level, etc.
  • the collected parameters can be stored 516 in the relevant entry of the NB-Topology DB as the updated topology information.
  • process 500 returns to block 510 and starts a cycle for the next NB 125. If 520 there is no additional NB, then process 500 can be terminated.
  • process 500 may use a POC table; the POC table can be handled by SAP 330 as is disclosed in FIG. 7.
  • the topology can be found by listening to OSPF messages that are sent to POC 1.
  • method 500 can be adapted to listen to OSPF messages instead of LTM and LTR messages. A reader who wishes to learn more about OSPF is invited to read RFC 2328, which is incorporated herein by reference.
  • FIG. 6 schematically illustrates a flowchart showing relevant actions of an example of SAP 330 (FIG. 3) for monitoring a UE current available bandwidth (UECABW) over the exemplary RAN 200.
  • a plurality of BW monitoring processes 600 can be executed in parallel on a plurality of processors. Each process can be associated with a group of active subscribers in AST 335.
  • An embodiment of process 600 can be initiated 602 after updating the NB-Topology DB, process 500 in FIG. 5, for example.
  • After initiation process 600 may run periodically in a loop, between blocks 604 to 650.
  • An exemplary time interval between two loops can be a few seconds to a few hours (one hour, thirty minutes, etc.).
  • Each cycle of the periodical loop can be executed on the entries of the AST 335 (FIG. 3).
  • the next entry in AST 335 is fetched and processed 612 for updating.
  • the considering-dropping flag field of that entry can be reset.
  • Information regarding the path from PFAQMF 210 to the relevant subscriber's UE can be obtained 612 from that entry. Among other parameters the information can include the one or more MEG levels along the path to the relevant NB 125 that currently serves the UE.
  • an internal loop is initiated from block 620 to 630. Each cycle in the loop is executed per a MEG level in the path to the relevant NB.
  • a burst of 'N' DMM messages can be sent from PFAQMF 210 toward the far end MEP of the MEG level related to the current cycle of the internal loop.
  • the priority of those DMM messages can be similar to the priority of the relevant session and/or subscriber.
  • An exemplary value of 'N' can be a configurable number in the range of 5-100 DMM messages for example.
  • Each DMM message can include a timestamp 'Tx', which can be expressed by 'M' bytes.
  • 'M' can be a configurable number between a few tens and a few hundreds of bytes.
  • An exemplary 'M' can be 100 bytes.
  • a DMM message can include a sequence number.
  • the transmitting rate of the 'N' DMM messages can be faster than the maximum transmission rate of the relevant connection, as retrieved from that entry of the AST 335. In some embodiments, the transmitting rate can be 5 times faster than the maximum rate of the connection.
  • the 'N' received DMR messages, which were routed by IPP 310 in block 442 of process 400 to SAP 330, are parsed 624 and the timestamp 'Rx' of each received DMR is retrieved. A difference DTn between the 'Rx' timestamps of each pair of consecutive DMR messages can then be calculated, yielding N-1 DTn values.
  • a representative DT value can be calculated as a statistical function of the plurality of DTn values.
  • the statistical function can be an average, a median, the maximum, etc., of the N-1 values of the calculated differences DTn.
  • the DT value can reflect the current available BW (CABW) toward that MEP. Calculating the current available BW toward that MEP can be implemented 628 by dividing the packet size by the DT value. The calculated CABW to that MEP can be stored in the relevant entry of AST 335. This value can be used to define the shaper of the queue related to that MEP in TM 340 (FIG. 3).
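A minimal sketch of the CABW estimation described in blocks 624-628 is given below. It assumes the median as the statistical function and expresses the result in bits per second; the function name and these choices are illustrative only.

```python
from statistics import median
from typing import Sequence

def cabw_from_dmr_burst(rx_timestamps: Sequence[float], packet_size_bytes: int) -> float:
    """Estimate the current available BW toward a MEP from one DMM/DMR burst.

    rx_timestamps: the 'Rx' timestamps (seconds) parsed from the N received DMR
    messages, in sequence-number order. Because the N DMM messages were sent
    faster than the link can carry them, the spacing of the replies reflects
    the draining rate of the bottleneck toward that MEP.
    """
    if len(rx_timestamps) < 2:
        raise ValueError("need at least two DMR messages")
    # DTn: differences between consecutive Rx timestamps -> N-1 values.
    dts = [b - a for a, b in zip(rx_timestamps, rx_timestamps[1:])]
    dt = median(dts)                     # median as the statistical function (mean/max also possible)
    return (packet_size_bytes * 8) / dt  # expressed here in bits per second

# Example: six replies of 100-byte DMM messages arriving 1 ms apart -> 800000.0 bit/s.
print(cabw_from_dmr_burst([0.000, 0.001, 0.002, 0.003, 0.004, 0.005], 100))
```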
  • a decision is made whether there are more MEG levels in the path. If yes, method 600 returns to block 620 and starts a next cycle for the next MEG level.
  • the CABW to the current serving NB 125 can be defined 632 as the minimum CABW of the relevant MEG levels toward that NB.
  • the CABW for the NB can be divided by the number of UEs that are currently served by that NB in order to define the UECABW of the subscriber's UE associated with that entry of the AST 335. In some embodiments, dividing the CABW among the plurality of subscribers' UEs can also be based on the priority of each subscriber and/or the session.
  • the TM 340 can be instructed 644 to set the shaper of the leaf queue, the ID queue allocated to that UE, to drain that queue at a rate that is higher than the guaranteed minimum bit rate but smaller than both the defined UECABW and the maximum guaranteed bit rate.
  • the guaranteed bit rate can be dependent on the priority of that subscriber and that session.
  • the TM 340 can be instructed 642 to set the shaper of the leaf queue, the ID queue allocated to that UE, to drain that queue at a rate that is equal to the defined UECABW.
  • the considering-dropping flag in the relevant entry of AST 335 can be set. This flag can indicate to SIPPP 320 that it should consider dropping entire packets of the subscriber IP session. Then method 600 proceeds to block 650, which is disclosed above.
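The following sketch illustrates one plausible reading of the UECABW division and the shaper/flag decision described above (blocks 642, 644 and the considering-dropping flag). All names, the weighting scheme and the choice of a mid-point rate for the block-644 case are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubscriberPolicy:
    min_guaranteed_bps: float
    max_guaranteed_bps: float
    weight: float                 # priority of the subscriber and/or the session

def ue_cabw(nb_cabw_bps: float, all_policies: List[SubscriberPolicy],
            me: SubscriberPolicy) -> float:
    """Divide the NB's CABW among its currently served UEs, weighted by priority."""
    total_weight = sum(p.weight for p in all_policies)
    return nb_cabw_bps * (me.weight / total_weight)

def shaper_and_flag(uecabw_bps: float, policy: SubscriberPolicy) -> Tuple[float, bool]:
    """Return (leaf-queue shaper rate, considering-dropping flag) for one UE.

    If the available BW can honour the guaranteed minimum, drain above the minimum
    but below both the UECABW and the guaranteed maximum (as in block 644);
    otherwise drain at the UECABW and raise the considering-dropping flag so that
    SIPPP may consider dropping whole subscriber IP packets (as in block 642).
    """
    if uecabw_bps >= policy.min_guaranteed_bps:
        cap = min(uecabw_bps, policy.max_guaranteed_bps)
        rate = (policy.min_guaranteed_bps + cap) / 2.0   # one arbitrary point inside the allowed band
        return rate, False
    return uecabw_bps, True

# Example: 8 Mbit/s of NB CABW shared by two UEs of equal weight.
gold = SubscriberPolicy(2e6, 10e6, weight=1.0)
share = ue_cabw(8e6, [gold, gold], gold)          # 4 Mbit/s
print(shaper_and_flag(share, gold))               # (3000000.0, False)
```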
  • FIG. 7 schematically illustrates a flowchart showing relevant actions of another exemplary SAP 330 (FIG. 3) process 700 for learning the topology of an exemplary MPLS transport network having a plurality of POCs.
  • Process 700 can be implemented for updating a POC Table.
  • the POC Table can be stored as a section in the NB-Topology DB.
  • the POC Table can include a plurality of entries. Each entry is associated with a path to an egress POC. Each entry includes connectivity information written in a plurality of fields.
  • Fields such as: the IP address, a label for the segment, the MAC address of each POC along the path, the POC MEG level, max BW per each segment, the IP address and MAC of egress POCs (end of tunnels), etc.
  • in some embodiments the operator configures the IP address and the MACs of each POC and the rest of the fields are filled by SAP 330; in other embodiments the operator enters all the information.
  • Process 700 can be initiated 702 at power on for learning the topology of the POCs. After initiation 702, method 700 may prompt 704 an administrator of the operator network to configure the NB-Topology DB with basic parameters. Then a POC Table can be allocated and reset, and method 700 may wait 710 for receiving, from IPP 310 (FIG. 3), one or more replies to an RRO. The RRO can be sent from POC 1 over one of the connections 131a, 133a, 135a (FIG. 2) while establishing a Label Switch Path (LSP). The reply to the RRO from an egress POC can deliver the connectivity, addressing and label information of one or more POCs along the path from POC 1 to that egress POC.
  • SAP 330 processes 712 the obtained RRO reply and retrieves connectivity information regarding each POC along the path from the egress POC back to POC 1.
  • the connectivity information can include the IP address over the PSwTN, the label, etc. of each POC along the path from POC 1 to the responding egress POC.
  • the obtained information can be stored in the POC Table.
  • the POC table can be checked and a decision is made 720 whether the POC table is complete. If not, method 700 returns to block 710 to wait for the next reply to an RRO, a reply received from another egress POC, for example.
  • method 700 can update 722 the TM 340 (FIG. 3) with the topology tree from POC 1 to the plurality of egress POCs, allowing the TM 340 to organize the hierarchy of the queues in the Queue Buffers 345 (FIG. 3) according to the hierarchy of the POCs.
  • the NB-Topology process 500 (FIG. 5) can be initiated 722 as well as the BW monitoring process 600 (FIG. 6).
  • method 700 may wait 724 for an additional reply to an RRO, which may indicate a change in the POC topology.
  • the obtained reply is parsed 726 and compared to each of the plurality of entries of the POC Table, looking for similar information or for a change in the stored information. If 730 there is a change in the information, then the POC Table is updated. Updating 732 can be done by adding a new entry in the POC Table due to a new egress POC, for example. Alternatively, one or more changes can be made in a relevant entry in the POC Table, reflecting the differences between the obtained reply and the stored POC Table.
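A small sketch of how a POC Table could be kept up to date from parsed RRO replies (blocks 726-732) is shown below; the class names and the dictionary keyed by egress POC IP address are assumptions, not the disclosed data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PocHop:
    ip: str
    mac: str = ""
    label: Optional[int] = None

@dataclass
class PocPathEntry:
    """One entry of the POC Table: the path toward one egress POC."""
    egress_ip: str
    hops: List[PocHop] = field(default_factory=list)

class PocTable:
    def __init__(self) -> None:
        self.entries: Dict[str, PocPathEntry] = {}

    def update_from_rro(self, egress_ip: str, hops: List[PocHop]) -> bool:
        """Apply a parsed RRO reply; return True if the table changed."""
        stored = self.entries.get(egress_ip)
        if stored is not None and stored.hops == hops:
            return False                                            # same information, nothing to update
        self.entries[egress_ip] = PocPathEntry(egress_ip, hops)     # new entry or changed path
        return True

table = PocTable()
print(table.update_from_rro("10.0.7.1", [PocHop("10.0.1.1", label=100), PocHop("10.0.4.1", label=140)]))  # True
```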
  • the topology of the POCs can alternatively be discovered by listening to the OSPF (Open Shortest Path First) messages sent towards POC 1, which are analyzed by an OSPF stack embedded in SAP 330 for building the topology map of the entire PSwTN. In yet another embodiment both methods can be used together.
  • FIG. 8 schematically illustrates a flowchart showing relevant actions of an example of SAP 330 (FIG. 3) for responding to RANAP or NBAP messages.
  • the RANAP messages can be sniffed via connection 219 (FIG. 2) and be transferred by IPP 310 (FIG. 3) as it is illustrated in block 412 of FIG. 4.
  • the NBAP messages can be received via the PSwTN over connections 131b, 133b, 135b (FIG. 2), for example, via the IPP 310.
  • Method 800 can handle establishing a subscriber IP session or handover process between two NodeBs.
  • method 800 checks its queue for a RANAP or NBAP message; if 810 a message exists, then method 800 proceeds to block 812. If not, method 800 waits for the next RANAP or NBAP message.
  • the message is parsed according to the message type. For a RANAP message the IMSI or TMSI fields can be retrieved.
  • the CRNC context ID, which is assigned by the RNC 120 (FIG. 2) to a subscriber's UE upon attaching to the RAN 200, is retrieved. Based on the retrieved values the AST 335 (FIG. 3) is searched 812 for an entry.
  • if the message is not a RANAP message, then it is an NBAP message that can indicate a handover, and at block 836 the found entry is fetched and parsed.
  • the Queue ID value that represents the personal queue of that subscriber, which is written in the appropriate field of the found entry of the AST, is fetched and an instruction is sent 836 to TM 340 (FIG. 3).
  • the instruction can be to drain the personal queue at the CABW of the old NB and method 800 may wait until the relevant queue is drained.
  • the entry in the AST is updated 838 with the new NB IP address, and two port numbers, one for upload and one for download, are written 838 in the entry.
  • the NB-Topology DB is searched for the entry associated with the new NB in order to retrieve routing information such as the labels.
  • the retrieved information is copied to the entry in the AST 335.
  • TM 340 is informed 838 about the new path and defines a new hierarchy of queues in the Queue Buffers 345 that fits the path to the new NB; accordingly, the queue ID points to the new path of queues.
  • the method 800 instructs the SSCP process, which handles this subscriber IP session, to execute the process of monitoring the UECABW at block 918 of FIG. 9a and process 800 returns to block 810 for handling the next RANAP or NBAP message.
  • a new RAB message represents a new tunnel between an NB 125 and RNC 120 that is established at the beginning of an IP data session or during handover by the new NB; handover, however, is handled at block 836. If 820 not, then the message is ignored and process 800 returns to block 810 looking for the next RANAP message.
  • if 820 the message is a RANAP SREQ message or an NBAP new RAB message, then the message is handled as described in the following blocks.
  • a new entry in the AST 335 is allocated 822 by SAP 330 (FIG. 3).
  • the retrieved identification information is stored 822 in the appropriate fields of the new entry.
  • the identification information depends on the type of the message. From a RANAP SREQ message the retrieved information can include IMSI, T-IMSI, subscriber IP, etc. For an NBAP new RAB message the retrieved identification information can include the NB IP address, Layer 4 port, CRNC context ID, common ID, etc. If 824 the message is RANAP, then method 800 returns to block 810 for handling the next message in the queue.
  • the NB-Topology DB is searched for retrieving routing information from PFAQMF 210 (FIG. 2) to the relevant NB.
  • the routing information can include labels of each segment in the path, IP address of the one or more POC in the path, etc.
  • the retrieved routing information can be stored in the appropriate fields of the entry in the AST 335.
  • the last measured CABW to that NB is retrieved 826 from the NB-Topology DB and a fair share of the CABW of the NB is allocated to the new session.
  • the fair share can be a function of the number of UEs that are currently served by the relevant NB and the priority of the subscriber/session (the weight of the subscriber/session).
  • SAP 330 informs SIPPP 320 about the new session and a subscriber's session control process (SSCP) is allocated and assigned to the new session.
  • the new SSCP is associated with the new entry in the AST and a pointer to the queue of the assigned SSCP is stored in the new entry in the AST 335.
  • process 800 returns to block 810 looking for the next RANAP message.
  • block 822 may include a process for measuring the RTT value from the PFAQMF 210 to several common servers (not shown in the drawings) that are located over the operator IP network 112 (FIG. 2) and/or the Internet 102 (FIG. 2), servers such as but not limited to the operator portal, a cache, etc. The average RTT value between the PFAQMF 210 and the few common servers can be found by using a Ping procedure with each one of the common servers.
  • the Ping procedure can be implemented by sending Internet Control Message Protocol (ICMP) echo request packets to each one of the common servers and waiting for an ICMP response. Measuring the time duration between transmitting the request and receiving the ICMP response for each one of the common servers delivers the RTT value to each one of those servers.
  • This test can be run several times, and a statistical summary can be prepared. The summary can include the minimum, maximum, average and mean round-trip times, and sometimes the standard deviation of the mean. One of those values can be selected as a representative value for the RTT to the Internet.
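A short sketch of such a statistical summary and the selection of a representative RTT value is given below; the function names and the inclusion of the median are illustrative assumptions.

```python
import statistics
from typing import Dict, Sequence

def rtt_summary(samples_ms: Sequence[float]) -> Dict[str, float]:
    """Summarise repeated ICMP echo round-trip measurements to the common servers."""
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "avg": statistics.mean(samples_ms),
        "median": statistics.median(samples_ms),
        "stdev": statistics.stdev(samples_ms) if len(samples_ms) > 1 else 0.0,
    }

def representative_rtt(samples_ms: Sequence[float], pick: str = "avg") -> float:
    """Select one of the summary values as the representative RTT to the Internet."""
    return rtt_summary(samples_ms)[pick]

# Example: five echo round-trips (milliseconds) toward an operator portal.
print(representative_rtt([38.2, 41.0, 39.5, 44.3, 40.1]))   # 40.62
```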
  • the RTT value can be measured by SIPPP 320 as it is disclosed below in conjunction with FIGs. 9a-c.
  • FIG. 9a,b&c illustrate a flowchart showing relevant actions of an exemplary subscriber's-session-controller process (SSCP).
  • An embodiment of SSCP 900 can be executed by SIPPP 320.
  • SIPPP 320 (FIG. 3) may handle a plurality of SSCP 900 threads running in parallel by one or more processors. Each thread can be associated with a single subscriber IP session.
  • SSCP 900 can be associated with the relevant entry in AST 335 (FIG. 3) that is assigned to the same session.
  • Each SSCP can have an input queue in which packets can wait to be processed.
  • the SSCP queue is checked looking for a packet.
  • Packets can be placed in the queue by IPP 310 as is disclosed in block 454 of FIG. 4, or by SAP 330 as is illustrated in FIG. 8 block 818, for example. If 904 a packet exists in the queue, the packet is retrieved and a decision is made 910 whether the packet carries a RANAP SRES (Service Response) message, which is sniffed from connection 219 (FIG. 2) via IPP 310 (FIG. 3).
  • If 910 the packet does not carry an SRES message, then method 900 proceeds to block 930 in FIG. 9b. If 910 the packet carries a RANAP SRES message, then the message is parsed 914 and information such as the P-TMSI is retrieved from the SRES message and stored in the appropriate field of the relevant entry in AST 335. Based on the IMSI of the relevant subscriber, information regarding the subscriber policy is obtained 916 by listening to connection 214 (FIG. 2) to the Operator management network 114. The policy information can include: APN (Access Point Name), a QoS indicator for that session, the subscriber priority, the guaranteed bit rate interval, maximum dropping percentages, the DCLUT or a pointer to that LUT, etc. Then, a timer Td can be allocated and reset for that session. The timer Td monitors the time duration from the last dropped IP packet.
  • SSCP 900 can start 918 the process of monitoring the UECABW.
  • the monitoring process can include actions similar to those illustrated in blocks 612 to 632 of FIG. 6 and disclosed above.
  • a decision is made 920 whether the UECABW is OK and fits the subscriber policy.
  • TM 340 (FIG. 3) is informed 924 about the queue ID, the path to that UE and the weight; the maximum and the minimum guaranteed bit rates can be loaded as the scheduler of that queue, and the shaper of the queue can be updated too.
  • the shaper can be the minimum between the UECABW and the maximum guaranteed bit rate, for example.
  • the weight can reflect the subscriber priority and the QoS related to the relevant session.
  • the TM 340, the AST 335, the SIPPP 320 and SSCP 900 are ready to handle packets of the relevant subscriber IP session and method 900 returns to block 904 for handling a next packet.
  • FIG. 9b illustrates relevant actions of an embodiment of a section of SSCP 900 for handling subscriber's related packets which do not carry RANAP SRES messages.
  • returning to FIG. 9a, if the PDU does not carry a RANAP SRES message, then SSCP 900 proceeds to block 930.
  • the PDU is parsed 932 and a decision is made 940 whether the PDU is an Iu Release Request or an Iu Release Command indicating the end of the Iu session. If 940 yes, then TM 340 (FIG. 3) is instructed 942 to drain the current queue of the session, the queue ID, and to release the personal queue.
  • SAP 330 is informed to release the relevant entry in the AST 335 and to release the resources of SSCP 900, and method 900 can be terminated 944.
  • the edge can be the beginning or the end of the subscriber's IP packet.
  • the decision can be made by parsing the header of the Frame Protocol (FP) and/or the header of the radio link control (RLC) protocol that follows the IP transport header of the PDU.
  • the RLC header starts at a configurable offset (the number of bytes from the beginning of the payload of the PDU).
  • SIPPP 320 can identify the first PDU that carries the beginning of a subscriber IP packet and the PDU that carries the end of the subscriber IP packet. If 948 the PSwTN packet does not carry an edge of a subscriber IP packet, then SSCP 900 may proceed to block 954.
  • the decision can be reached by listening to connection 219 (FIG. 2) for a few packets during the beginning of the session in order to determine the transport protocol of the IP session.
  • SIPPP 320 (FIG. 3) may check the upload and the download traffic of each session and decide based on the handshake traffic between both ends of the connection, verifying that the responder side responds with ACK packets, for example. If 950 the session is carried over TCP/IP, then process 900 proceeds to block 970 in FIG. 9c.
  • one or more parameters can be checked in order to determine whether to drop the packet or not.
  • the parameters can include: the considering-dropping flag field in the relevant entry of AST; the occupation of the buffer/queue, which is associated with the session; the priority of the subscriber/session, the DCLUT, the BW utilization of the session up to that IP packet, the percentages of dropped IP packets up to the current IP packet, etc.
  • the occupation of the buffer can be obtained from TM 340 (FIG. 3).
  • Different embodiments of PFAQMF may use different sets of the above parameters.
  • An embodiment of SSCP 900 may consider 952 the current percentage of dropping compared to the subscriber policy, and the utilized BW compared to the fair share among the other UEs connected to the same NB. If the current percentage of dropping is below the guaranteed percentages and the utilized BW is higher than the fair share, then the IP packet can be marked for dropping.
  • an AQM consideration may apply. Based on the current occupation of the queue of the session, the DCLUT of the subscriber/session can be observed 952 in order to obtain the required percentage of dropping. The obtained required percentage of dropping can be compared to the current percentage of dropping, and if the required percentage is bigger than the current percentage, then the IP packet can be marked for dropping. The mark for dropping can be written as a field in the associated entry of AST 335.
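The following sketch illustrates a block-952-style AQM consideration with a DCLUT represented as a list of (occupancy, required dropping percentage) points; the table format and the names are assumptions for illustration only.

```python
from bisect import bisect_right
from typing import Sequence, Tuple

def required_drop_pct(dclut: Sequence[Tuple[float, float]], occupancy_pct: float) -> float:
    """Look up the required dropping percentage for the current queue occupancy.

    dclut is a drop-curve LUT given as (occupancy %, required drop %) points sorted
    by occupancy; a steeper curve models a lower-priority subscriber/session.
    """
    keys = [occ for occ, _ in dclut]
    idx = bisect_right(keys, occupancy_pct) - 1
    return dclut[max(idx, 0)][1]

def mark_for_dropping(dclut: Sequence[Tuple[float, float]],
                      occupancy_pct: float, current_drop_pct: float) -> bool:
    """Mark the IP packet for dropping when the curve demands more dropping
    than has been applied to the session so far."""
    return required_drop_pct(dclut, occupancy_pct) > current_drop_pct

low_priority_curve = [(0, 0.0), (30, 5.0), (50, 20.0), (70, 50.0), (90, 100.0)]
print(mark_for_dropping(low_priority_curve, occupancy_pct=55, current_drop_pct=10.0))  # True
```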
  • the mark for dropping field is checked. If 954 the mark for dropping field is false, then at block 958 the PSwTN packet can be transferred toward its destination while updating the current utilized BW. If 954 the mark for dropping field is true, then the PSwTN packet can be dropped 956. In addition, if the dropped PSwTN packet carries the end of a subscriber IP packet, then the dropping percentage counter is increased 956 and method 900 can return to block 904 in FIG. 9a.
  • SSCP 900 proceeds to block 970 in FIG. 9c and the entry in the AST is further parsed 972 in order to determine whether the RTT of the entire path of that session is known. If 974 the RTT is known, then the value of Td is checked 976 and a decision is made whether Td, the time duration from the last dropped IP packet, is bigger than the value of the RTT. If 976 not, the PSwTN packet is transferred 994 toward the relevant queue at the next POC HQF. Then process 900 returns to block 904 in FIG. 9a.
  • one or more parameters can be checked in order to determine whether to drop the packet or not.
  • the parameters can include: the considering-dropping flag field in the relevant entry of AST; the occupation of the buffer/queue, which is associated with the session; the priority of the subscriber/session, the DCLUT, the BW utilization of the session up to that IP packet, the percentages of dropped IP packets up to the current IP packet, etc.
  • the occupation of the buffer can be obtained from TM 340 (FIG. 3).
  • Different embodiments of PFAQMF 300 may use different sets of the above parameters. Different methods may be used in order to decide whether to mark the packet for dropping. Some of the methods are disclosed above in conjunction with block 952 and will not be further described.
  • the mark for dropping field in the AST is checked. If 990 the mark for dropping field is false, then at block 994 the PSwTN packet can be transferred toward its destination via the relevant queue at the next POC HQF. After transferring the PSwTN packet the utilized BW can be updated. If 990 the mark for dropping field is true, then the PSwTN packet can be dropped 992. In addition, if the dropped PSwTN packet carries the end of a subscriber IP packet, then the dropping percentage counter is increased 992, the timer Td can be reset, and method 900 can return to block 904 in FIG. 9a.
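The sketch below illustrates the drop decision for a TCP-carried session, including the Td-versus-RTT spacing that keeps two drops from falling inside the same congestion window; the block numbers in the comments follow the description above, and all names are illustrative assumptions.

```python
import time
from typing import Optional, Tuple

def tcp_drop_decision(td_start: float, rtt_s: Optional[float],
                      marked_for_dropping: bool) -> Tuple[str, float]:
    """Decide the fate of one PSwTN packet of a TCP-carried subscriber IP session.

    td_start: monotonic time of the last dropped IP packet (the Td timer).
    rtt_s:    round-trip time of the entire path, if already known.
    Returns (action, new_td_start). Requiring Td to exceed the RTT before the next
    drop is what keeps two drops out of the same congestion window.
    """
    now = time.monotonic()
    if rtt_s is not None and (now - td_start) <= rtt_s:
        return "forward", td_start       # Td not yet bigger than RTT (block 976 -> 994)
    if not marked_for_dropping:
        return "forward", td_start       # mark-for-dropping field is false (block 990 -> 994)
    return "drop", now                   # drop the packet and reset Td (block 992)

# Example: a packet marked for dropping 0.5 s after the previous drop, RTT = 0.2 s.
print(tcp_drop_decision(time.monotonic() - 0.5, 0.2, True))   # ('drop', <new Td start>)
```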
  • the RTT process can find the RTT between the PFAQMF 210 (FIG. 2) and the relevant UE. After initiation, the RTT process may wait for a silent interval in the traffic associated with the two ports of that session, the download and the upload.
  • the first download packet after identifying the silent interval can start a timer to measure the time interval to the following upload packet.
  • the first upload packet can stop the timer and the measured time interval can be referred to as the found RTT value between the relevant UE and the PFAQMF 210.
  • the RTT for the entire path from the subscriber to the Internet server and back to the subscriber can be calculated as the sum of the average value of the RTT from the PFAQMF 210 to the common servers and the found RTT value.
  • the RTT for the entire path can be written in the relevant entry of the AST 335 and process 900 proceeds to block 994.
  • the RTT measuring process can be repeated for several consecutive silent intervals and a smoothing algorithm can be used for defining the found RTT value between the relevant UE and the PFAQMF 210.
  • An exemplary smoothing algorithm can be an average value of several measured time intervals.
  • the RTT value of the entire path can be calculated by measuring the time interval between two consecutive bursts of download packets.
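A minimal sketch of combining the silent-interval measurements with the common-server measurements is given below, assuming plain averaging as the smoothing algorithm; all names are illustrative.

```python
from statistics import mean
from typing import Sequence

def ue_rtt_from_silent_intervals(intervals_s: Sequence[float]) -> float:
    """Smooth several measured download-to-upload gaps (one per silent interval)
    into a single RTT estimate between the PFAQMF and the UE. Plain averaging is
    used here as the smoothing algorithm; an EWMA would work as well."""
    return mean(intervals_s)

def path_rtt(avg_server_rtt_s: float, ue_rtt_s: float) -> float:
    """RTT of the entire path subscriber <-> Internet server, as the sum of the
    PFAQMF-to-common-servers average RTT and the PFAQMF-to-UE RTT."""
    return avg_server_rtt_s + ue_rtt_s

print(path_rtt(0.040, ue_rtt_from_silent_intervals([0.110, 0.095, 0.105])))   # ~0.143 s
```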
  • each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements, or parts of the subject or subjects of the verb.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of a system and a method for controlling a Per-Flow-Active-Queue-Management Framework (PFAQMF) are disclosed. An embodiment of PFAQMF can be installed in a Packet Switched transport network (PSwTN), wherein the PFAQMF is associated with a radio-access-network (RAN) between a plurality of cellular-subscriber's equipment (CUE) and a cellular operator core network. The PFAQMF can manage per each CUE an associated logical aggregating buffer that aggregates packets that are targeted toward the CUE. The PFAQMF can monitor the CUE current available bandwidth (UECABW) per each active CUE and accordingly modify control parameters of the associated logical aggregating buffer. In some embodiments dropping subscriber's IP packets can be managed based on the UECABW, the current occupation of the associated buffer and the subscriber policy.

Description

SYSTEM AND METHOD FOR ACTIVE QUEUE MANAGEMENT PER FLOW OVER A PACKET SWITCHED NETWORK
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a Patent Cooperation Treaty application being filed in the Israeli Receiving Office, claiming the benefit of the prior filing date of the United States provisional application for patent that was filed on February 21, 2011 and assigned serial number 61/444,853, which application is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0002] The embodiments presented in this disclosure generally relate to communication networks, and more particularly to per-flow-queue management of data traffic of a plurality of flows at an intermediate node of a packet switched network.
BACKGROUND
[0003] The rapid evolution of cellular networks has resulted in the development of communication nodes that are able to transfer different types of communication sessions through cellular network. Known techniques offer different quality of service (QoS) to different subscribers and different type of sessions. The subscriber, based on their contract agreement, can have high priority, medium priority and low priority, for example. In a similar way, different type of application (communication sessions) that are transferred over an Internet Protocol (IP) network may require different QoS. For example real time audio/video communications over IP, require and/or are entitled to a high quality of service (QoS). Other exemplary communication sessions, such as surfing the web can have medium QoS requirements. Yet other exemplary communication sessions can have low QoS requirements. Examples of a communications session that have low QoS requirements include best effort QoS, emails, content downloading, etc. The above examples of communication session are non-limiting examples of a communication session that has different QoS requirements. [0004] In order to deliver different QoS to different subscribers and different type of communication sessions, operators/carriers of access networks to the Internet, Intranet, content providers, etc. employ a Hierarchical Queuing Framework (HQF). Hierarchical Queuing Framework is a queuing architecture, which includes a plurality of queues; each queue is used for storing incoming packets and/or frames and/or ATM cells for a specific session or group of sessions that are associated with a certain priority level. The queues can be organized in hierarchical architecture. In some embodiments the HQF architecture complies with the topology of a network hierarchical tree. Therefore, each queue can be drained in a different rate, limited to a certain delay, dropping policy, etc.. The plurality of queues can be logical queues that are embedded in a single physical memory device with a plurality of pointers, each pointer can be associated with a different queue (different subscriber and/or session), for example.
[0005] In order to deliver personalize QoS and session type QoS two dimensional of hierarchical queuing structure is needed. One dimension is associated with the subscriber priority, and the second dimension is associated with the session type of a certain subscriber. Further, the queuing system can control several aspects of handling packets of each session, such a queuing system can be referred as hierarchical Weighted Fair Queuing or hierarchical class based queuing. The aspects can be multi level of packet scheduling; class based shaping (maximum rate, minimum rate), maximum delay, delay variation, dropping policy, for example. Theses parameters can be referred as active queue management (AQM) parameters. In HQF each queue can have further child queues. Each queue can be associated with packets having one or more similar attributes. Attributes such as but not limited to maximum and minimum rate, maximum delay, etc.. Usually AQM parameters can be configured after installation of a HQF in a network by an administrator of the network, for example. Once in a while, when a change in the network occurs the HQF can be reconfigured accordingly. Routing packets between the different queues in an HQF can be done based on parsing the header of received packets and comparing fields in the header to a queue routing table. The fields can include QoS indication, label, layer 4 port number, IP address, etc. [0006] Common Operators of terrestrial access networks (access service provider), such as operator of public switched telephone network (PSTN), Asymmetric Digital Subscriber Line (ADSL), Plain old telephone service (POTS), Integrated Services Digital Network (ISDN), Internet Service Provider (ISP), etc. already use hierarchical queuing framework, which are class based queuing wherein the class is a function of the subscriber and/or the session type.
[0007] In addition the evolution of Broadband Internet Access over terrestrial networks as well as over wireless networks results with a network of a plurality of intermediates nodes that are able to transfer a plurality of flows from a plurality of sources to a plurality of destinations carrying a plurality of different types of communication sessions through packet switched transport network (PSwTN). Exemplary terrestrial networks are mentioned above. Exemplary wireless networks, can comprise networks such as: cellular, Wi-Max and Wi-Fi, satellite, etc.
[0008] Each flow can be associated with a subscriber having a different priority than another flow. Further, each flow can carry data of different type of sessions. Each session can have different priority, different bandwidth requirements and different packet loss and delay requirements, etc.
[0009] Along a path from a source to a destination there are several intermediate nodes that can receive flows from a plurality of sources and distribute them to different destinations. Exemplary intermediates nodes can be router, switches, gateways, base stations, etc. Usually, the heavy load and the differences between the network condition (load, available bandwidth (BW), maximum bit rate, etc) in each side of an intermediate node enforce the intermediate node to aggregate packets in one or more congestion buffers. Eventually, the congestion buffers can reach a certain threshold enforcing dropping packets from the queue.
[0010] There are several prior art dropping mechanism for managing congestion of plurality of queues. Some prior art methods use a drop-tail mechanism, wherein a received packet is put onto a queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise, independently of the priority that the packet has. Other prior art methods uses active queue management (AQM) that drops packets before the congestion buffer is full. An AQM system starts dropping a certain percentages of packets when the congestion buffer reaches a certain threshold. An exemplary AQM system is a random early detection (RED) method.
[0011] A RED system monitors the average queue size and drops packets based on statistical probabilities. If the buffer is almost empty, all incoming packets are accepted. As the queue grows, the probability for dropping an incoming packet grows too. For example, when 45% of the buffer is full, then a RED system can start dropping one packet every ten packets (10% dropping ratio); when the buffer is full, the probability has reached 1 and all incoming packets are dropped. RED is indifferent and is not sensitive to the priority of the dropped packets. Priority that can be dependent on the priority of the subscriber that is associated with the dropped packet or the priority of the session carried by the dropped packed. RED techniques are well known to a skill person in the art and will not be further described.
[0012] Yet another exemplary prior art AQM system is weighted random early detection (WRED). An exemplary WRED system may have several different queue thresholds. Each queue threshold is associated to a particular priority of the packet, a certain quality of service (QoS) for example. A queue may have lower thresholds for lower priority packet. A queue buildup will cause the lower priority packets to be dropped, hence protecting the higher priority packets in the same queue.
[0013] A significant portion of the Internet traffic complies with the transmission control protocol (TCP). TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer. A source and a destination of a TCP connection have a handshake protocol which allows the destination to inform the source that packets were received. The destination can send an acknowledgement packet (ACK) upon receiving one or more packets. In addition the destination can indicate that one or more packets are missing. The indication can be done by sending three consecutive ACK packets carrying the same TCP sequence number. In response the source retransmits the required packet. Retransmission can occur also when a timeout is reached at the source of the flow. Further, a TCP/IP connection has a rate controller that adapts the transmitting rate of a flow according to the requests for retransmission of dropped packets, as well as by responding to the window size carried by the ACK packet.
SUMMARY OF THE PRESENT DISCLOSURE
[0014] An IP transport network is a best-effort network. As such, delivery of packets from one endpoint to the other can be delayed, dropped or jittered. A common cellular operator sells different levels of service; accordingly, a subscriber can purchase a certain servicing policy which guarantees a dropping rate, a minimum BW, etc. Thus, a transport network in a RAN of a cellular operator requires a personalized, or subscriber's, HQF that complies with the subscriber contract.
[0015] Today there is no personalized HQF for a RAN of a cellular access network which is based on an IP network as the transport network for carrying subscribers' IP packets between the NBs and the RNC. Common HQFs that are used over terrestrial access networks do not fit the conditions existing in cellular access networks. In a terrestrial access network the topology remains unchanged during a certain session.
[0016] Further, a typical TCP/IP rate controller, at a source of a flow of TCP/IP connection, has two mode of operation, a slow start mode and a congestion avoidance mode. The source, of a TCP flow, starts the transmission of a flow with the slow start mode. In slow-start mode the source transmits small number of TCP segments per a congestion window and waits for acknowledgement. A TCP segment can be carried over a single packet. Usually the TCP source can start with transmitting two packets per congestion window, then four packets, etc. The Rate is increased until the point that congestion window size (cwnd) reaches the size of the destination's advertised window or a pre-defined number of bytes (ssthresh) or until a retransmission is needed for one or more data segments. At this point of the time the rate controller, at the source, moves to the second mode, the congestion avoidance mode. In the congestion avoidance mode the rate is increased linearly. The window is increased by one segment for each received acknowledgment (ACK). The increasing in the rate continues until the cwnd reaches the advertised window of the destination or a retransmission is needed or reaching a timeout.
[0017] Furthermore, an exemplary source of a TCP flow responds to retransmission of a single data packet by reducing the transmitting rate by a certain amount of percentages, it can be between 25-50%, for example. Then, the rate is increased according to the congestion avoidance mode of operation. However, when a common source, of a TCP flow, retransmits multiple data packets within the same congestion window or when its retransmission timer expire it reduces the rate more drastically, by returning to slow-start mode of operation, starting by transmitting two packets in a congestion window.
[0018] Today, AQM methods used by common intermediate nodes drop packets as a function of the number of packets that are stored in their congestion buffers, independently of whether two dropped packets belong to a certain flow and a certain congestion window. Thus, quite often the source of the TCP flow is forced to reduce its transmitting rate dramatically and to return to the slow-start mode, although in many cases dropping a single data packet and reducing the source sending rate by 50% is sufficient to eliminate the congestion.
[0019] Embodiments of a novel AQM technique are disclosed. An embodiment of the novel technique can have a Per-Flow-Active-Queue-Management Framework (PFAQMF) at an intermediate node that can control the dropping of packets at congestion buffers of the intermediate node, taking into consideration the congestion window of each TCP flow and avoiding the dropping of two or more packets that belong to the same congestion window.
[0020] Exemplary embodiments of the novel PFAQMF could also consider the priority of the subscriber and the priority of the session before dropping a packet. Further, some embodiments of the novel PFAQMF could also scatter the dropped packets among the plurality of flows in order to fairly share the available bandwidth between the plurality of flows.
[0021] An exemplary novel PFAQMF can be installed in-line to the traffic at a point of concentration (POC) at an ingress edge of an access network between the Internet and the plurality of subscribers. An exemplary POC can be a switch, a router, a bridge, etc. An embodiment of PFAQMF can be installed in a cellular access network which is carried over a Packet Switch Transport Network (PSwTN). Exemplary PSwTN can be Multi Protocol Label Switching (MPLS) over IP; MPLS- TP (transport profile); MAC-in-MAC; Ethernet; etc. In such embodiment, the PFAQMF can be installed in a cellular operator access network in the first POC between a Radio Network Controller (RNC) and the plurality of subscribers.
[0022] An exemplary PFAQMF may employ a Hierarchical Queuing Framework (HQF). Hierarchical Queuing Framework is a queuing architecture, which includes a plurality of queues; each queue is used for storing incoming packets and/or frames and/or ATM cells and/or Protocol Data Unit (PDU) for a specific session or group of sessions that are associated with a certain priority level. In the following description, the words "packet," "frame," "ATM cells" and "PDU" may be used interchangeably. The queues can be organized in hierarchical architecture. In some embodiments the HQF architecture complies with the topology of a network hierarchical tree. Therefore, each queue can be drained in a different rate, limited to a certain delay, dropping policy, etc.. The plurality of queues can be logical queues that are embedded in a single physical memory device with a plurality of pointers, each pointer can be associated with a different queue (different subscriber and/or session), for example.
[0023] In order to control the early dropping procedure of received packets an exemplary PFAQMF may assign a statistical drop curve to each flow. An exemplary drop curve may be calculated as a function of the queue occupancy of the queue that is associated with the flow. One axis, the X axis for example, can reflect the percentage of the queue occupancy. In some embodiments the queue occupancy can be calculated by using a smoothing algorithm to overcome temporary changes of the x-axis's value. An exemplary smoothing algorithm can include mathematical averaging formula. The other axis, the Y axis for example, of the drop curve can define the percentage of the packets to be dropped at certain occupancy. The drop curve can be stored in a Look-Up Table (LUT), wherein the addresses of the LUT can reflect the percentage of the queue occupancy, while the stored data in each address can reflect the percentage of the packets to be dropped at that occupancy. The drop curve can reflect the priority of the subscriber and the priority of the session that are associated with a certain flow.
[0024] An exemplary PFAQMF may have a plurality of LUTs that represent a plurality of drop curves in order to cover the plurality of combinations of priority of subscribers and priority of flows. An exemplary drop curve LUT (DCLUT) can reflect low priority session, such as email for example, with low priority subscriber. Such a curve may be a steep curve that at lower occupancy it may reach high percentages of dropping packets. While other DCLUT can reflect a combination of high priority session, such as progressive Video for example, with high priority subscriber. Such a curve can be a moderate curve that at higher occupancy it may have low percentages of dropping packets.
[0025] An exemplary PFAQMF may be configured to try to avoid dropping of two or more packets per a congestion window of a flow. The PFAQMF may be configured to calculate the round-trip time (RTT) per each flow. According to the calculated RTT the exemplary PFAQMF can limit the time interval between dropped packets of the same flow to be greater than the calculated RTT of the flow.
[0026] Another exemplary PFAQMF may be configured to adapt the drop curve of a certain flow as a function the percentages of the utilized bandwidth of the certain flow from a fair-share bandwidth value per flow. In one example, adapting the drop curve can be implemented by using a plurality of DCLUTs per flow. Each DCLUT can be associated with a certain percentages of the utilized bandwidth. A current DCLUT can be selected based on the current percentages of the utilized bandwidth from the fair-share BW. Other example may use a single DCLUT per flow and implement Horizontal Transformation on the stored function. Such an embodiment may calculate a current 'x' value for the X axis of the drop curve as a function of the queue occupancy and the percentages of the current utilized BW from the fair share.
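As a sketch of such a horizontal transformation, the effective X value fed to the drop curve could be scaled by the flow's utilization relative to its fair share; the linear scaling below is only one assumed possibility, not the disclosed formula.

```python
def effective_occupancy(occupancy_pct: float, utilization_pct_of_fair_share: float) -> float:
    """One possible horizontal transformation of the drop curve's X axis.

    A flow using more than its fair share is pushed rightwards along the curve
    (so it starts dropping earlier); a flow below its fair share is pushed left.
    The linear scaling by utilization/100 is an illustrative assumption only.
    """
    x = occupancy_pct * (utilization_pct_of_fair_share / 100.0)
    return min(max(x, 0.0), 100.0)

# A 40%-occupied queue is looked up as 60% for a flow at 150% of its fair share.
print(effective_occupancy(40.0, 150.0))   # 60.0
```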
[0027] These and other aspects of the disclosure will be apparent in view of the attached figures and detailed description. The foregoing summary is not intended to summarize each potential embodiment or every aspect of the present disclosure, and other features and advantages of the present disclosure will become apparent upon reading the following detailed description of the embodiments with the accompanying drawings and appended claims.
[0028] Furthermore, although specific embodiments are described in detail to illustrate the inventive concepts to a person skilled in the art, such embodiments are susceptible to various modifications and alternative forms. Accordingly, the figures and written description are not intended to limit the scope of the inventive concepts in any manner.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] Some examples of embodiments of the present disclosure will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:
[0030] FIG. 1 is a simplified block diagram illustrating a snapshoot of a portion of a cellular network in which an example of an embodiment of the present disclosure can be used;
[0031] FIG. 2 is a simplified block diagram illustrating a snapshoot of the portion of the cellular network of FIG. 1 with an example of an embodiment of a Per-Flow- Active-Queue-Management Framework (PFAQMF).
[0032] FIG. 3 schematically illustrate simplified block diagrams with relevant elements of an example of an embodiment of a PFAQMF that operates according to the teachings and techniques of the present disclosure;
[0033] FIG. 4 schematically illustrates a flowchart showing relevant actions that can be implemented by an input packet processor (IPP) that can be employed in an example of an embodiment of a PFAQMF;
[0034] FIG. 5 schematically illustrates a flowchart showing relevant actions of an example of an embodiment of a subscriber access processor (SAP) while learning the topology of the exemplary RAN 200;
[0035] FIG. 6 schematically illustrates a flowchart showing relevant actions of an example of an embodiment of a subscriber access processor (SAP) for monitoring a user equipment (UE) current available bandwidth (UECABW) over the RAN 200;
[0036] FIG. 7 schematically illustrates a flowchart showing relevant actions of another example of a SAP process for learning the topology of an MPLS transport network;
[0037] FIG. 8 schematically illustrates a flowchart showing relevant actions of an example of a SAP process for responding to subscriber RANAP and NBAP messages; and
[0038] FIG. 9a,b&c schematically illustrate a flowchart showing relevant actions of an example of a subscriber's-session controller process (SSCP).
DESCRIPTION OF EMBODIMENTS
[0039] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without these specific details. In other instances, structure and devices are shown in block diagram form in order to avoid obscuring the invention. References to numbers without subscripts or suffixes are understood to reference all instance of subscripts and suffixes corresponding to the referenced number.
[0040] Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in the specification to "one embodiment" or to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention, and multiple references to "one embodiment" or "an embodiment" should not be understood as necessarily all referring to the same embodiment.
[0041] Although some of the following description is written in terms that relate to software or firmware, embodiments may implement the features and functionality described herein in software, firmware, or hardware as desired, including any combination of software, firmware, and hardware. In the following description, the words "unit," "element," "module" and "logical module" may be used interchangeably. Anything designated as a unit or module may be a stand-alone unit or a specialized or integrated module. A unit or a module may be modular or have modular aspects allowing it to be easily removed and replaced with another similar unit or module. Each unit or module may be any one of, or any combination of, software, hardware, and/or firmware, ultimately resulting in one or more processors programmed to execute the functionality ascribed to the unit or module. Additionally, multiple modules of the same or different types may be implemented by a single processor. Software of a logical module may be embodied on a computer readable medium such as a read/write hard disc, CDROM, Flash memory, ROM, or other memory or storage, etc. In order to execute a certain task a software program may be loaded to an appropriate processor as needed. In the present disclosure the terms task, method, process, and procedure can be used interchangeably.
[0042] FIG. 1 illustrates snapshoot of an exemplary cellular network 100 of a cellular operator. A common cellular network can comprises two types of networks, a radio access network (RAN) and a cellular operator core network (COCN) 110. The RAN connects the plurality of base stations 125, via a base station controller, such as but not limited to a Radio Network Controller (RNC) 120, to the COCN 110. Non limited examples of a base station can be a Node B (NB) 125, an Enhanced node B (eNode B). Along the description an NB can be used as representative term of any type of base station. The COCN 110 can comprise an operator's management network 114 having one or more management servers (not shown in the drawings). The operator management network 114 can comprises a policy server such as policy and charging rules function (PCRF), an AAA system (Authentication Authorization and Accounting), etc. In addition the core network 110 can comprise an operator IP networks 112 that includes one or more servers (not shown in the drawings) that provide different IP services to the subscribers; services such as but not limited to border routers, IP portal, content provider, etc.. The operator IP network 112 can be connected to the World Wide Web (WWW) 102, Content Provider IP networks 104, and/or a plurality of Intranet (not shown in the drawings).
[0043] Today, there is a trend in cellular networks to use a packet switched network as the transport network (PSwTN) 130 for subscriber's IP communication over the RAN of a cellular operator. An exemplary PSwTN 130 can be based on IP. A transport network provides traffic forwarding between two or more pairs of endpoints, NBs 125 and RNC 120 for example. This forwarding is based on a protocol header field in layer 2 and layer 2.5 of the Open System Interconnection (OSI) model. In some transport network, the traffic can be carried in tunnels between the endpoints. Along this description an IP transport networks represent a Layer 2.5 in the OSI model layer. The OSI layer 2.5 is considered to exist between traditional definitions of Layer 2 (Data link Layer) and Layer 3 (network layer). Layer 2.5 can be used to carry many different kinds of traffic, including IP packets as well as native Asynchronous Transfer Mode (ATM), Ethernet frames, etc..
[0044] Additional trend in today RAN is that the IP transport packets, between the NBs 125 and the RNC 120, can be carried over another transport network. The IP transport network packets can be carried over tunnels of a Packet Switch Transport Network (PSwTN) 130. Exemplary PSwTN network 130 can be Multi Protocol Label Switching (MPLS) over IP; MPLS-TP (transport profile); MAC-in-MAC; Ethernet; etc. Along this description the MPLS network can be used as a representative term for any PSwTN 130. The PSwTN network 130 can carry subscribers' packets, the PSwTN signaling & control, and the RAN Signaling & Control, for example, NB application part (NBAP), between RNC 120 and NBs 125.
[0045] Exemplary MPLS network 130 can comprise a plurality of points of concentration (POC 1 to POC 7). Each POC may be connected to one or more other POCs over an MPLS connection 131-139 and/or to one or more NBs 125 via an IP transport connection 122, 124 and 126, for example. An exemplary POC can be a switch (for Ethernet), router, bridge, (for IP/MPLS), etc. Thus, the topology of a RAN between the RNC 120 and a plurality of NBs 125 has a shape similar to a tree with a plurality of branches, wherein each junction includes a POC.
[0046] Each of the plurality of user equipment (UE 1 to UE 6) is connected via an RF communication link 140 to an NB 125, from the NB 125 over an IP transport connection (126, 124, 122) via one or more POCs (POCs 1-7) to the RNC 120, and from the RNC via an IP transport connection 119a through the COCN 110 to an Internet network, the World Wide Web (WWW) network 102, a content network 104 or an operator IP network 112, for example. Connection 119a carries the RAN control and signaling of the RNC (RANAP) and decrypted subscriber packets. RNC 120 can be connected to the PSwTN 130 via connection 119b to POC 1. Connection 119b can carry the RAN control and signaling of the NB (NBAP) and encrypted subscriber packets.
[0047] The signaling and control connections can carry Radio Access Network Application Part (RANAP) messages between a cellular operator core network (COCN) 110 and the RNC 120, or carry Node B application part (NBAP) messages between the NBs and the RNC, for example. In some embodiments, PDUs of RANAP and/or NBAP can be carried over an IP network. Further, in some of those embodiments a packet of the IP network can carry one or more RANAP or NBAP PDUs. A reader who wishes to learn more about RANAP or NBAP is invited to read the well-known protocols of 3GPP published from 1999, the content of which is incorporated herein by reference. By listening to those messages an exemplary PFAQMF can obtain information regarding subscribers that wish to start a new access procedure to the network, an indication when a certain UE requests to switch from one NB 125 to another, when a new Packet Data Protocol Context (PDP Context) is opened between the UE and the COCN 110, etc.
[0048] By way of example, cellular network 100 is shown having only six UEs (UE 1-6), three NBs 125 (NB 1-3), seven POCs (POC 1-7), and one RNC 120. A person skilled in the art will appreciate that any other quantity of each component can be implemented in the access network. RAN 100 is illustrated for explanation only and is not intended to limit the scope of this description. The communication links between the different POCs, and between the POCs and the different NBs, may use different physical layers; each link can have a different capacity and different available bandwidth (BW) and use different protocols; and each POC may have a different load.
[0049] In addition, when a UE moves from one NB to another it may face different available BW and delay, in addition to changes in the IP address of the transport network. If RAN 100 is based on IP/MPLS, then one or more labels can be changed too. Some exemplary RANs 100 can comprise two or more RNCs 120. The RNCs can communicate between themselves by using the Radio Network Subsystem Application Part (RNSAP) over the IP transport network, which is carried over a PSwTN, such as but not limited to MPLS network 130, for example.
[0050] When UE 2 moves from NB 1 to NB 3, for example, the communication path is changed. The old communication path, between NB 1 and the RNC 120, includes NB 1, communication link 122, POC 2, communication link 131 and POC 1, and from there via IP transport connection 119b to RNC 120. The new communication path, between UE 2 and the RNC 120, includes NB 3, IP transport communication link 126, POC 5, IP/MPLS communication link 137, POC 4 and IP/MPLS communication link 135 to POC 1, and from there via IP transport connection 119b to RNC 120. Each segment of the path can be carried on a different physical line, having different BW and load, and each POC can have a different load and delay. The IP address associated with UE 2 in the IP transport network is changed from the IP address of NB 1 to the IP address of NB 3. The port number that was associated with the subscriber's session can be changed too. Thus, although the communication session continues smoothly while moving from one NB to the other, most of the parameters which were related to the communication session are changed. Accordingly, there is a need for an HQF controller that is able to monitor those changes and control the HQF based on those changes. A subscriber's session can be tunneled between the RNC and the NodeB. All of the subscriber IP packets belonging to the data session would be encapsulated in a tunnel called a RAB (Radio Access Bearer), which can use UDP/IP encapsulation toward the radio protocols. Looking at a RAB packet traveling toward an NB, the destination IP address would be that of the NB and the UDP Layer 4 port would identify this UE tunnel.
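By way of non-limiting illustration only, the following Python sketch outlines how the transport identifiers of a RAB tunnel described above (the serving NodeB's IP address and the UDP layer 4 port) can be represented, and how they change on handover; the class name, fields, addresses and ports are assumptions introduced for the illustration and are not part of the disclosure.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class RabFlowKey:
        nb_ip: str     # transport IP address of the serving NodeB
        udp_port: int  # layer-4 UDP port identifying this UE's RAB tunnel

    def handover(key: RabFlowKey, new_nb_ip: str, new_udp_port: int) -> RabFlowKey:
        # The subscriber session itself continues; only the transport identifiers change.
        return replace(key, nb_ip=new_nb_ip, udp_port=new_udp_port)

    # Example: UE 2 moves from NB 1 to NB 3 (addresses and ports are hypothetical).
    key_before = RabFlowKey("10.0.1.1", 5001)
    key_after = handover(key_before, "10.0.3.1", 6007)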
[0051] Further, due to the UE migrating from one NB to another, as well as due to changes in the condition of a certain path, the available BW over that path from the RNC 120 to a UE changes dynamically. Changes in a certain path can also be due to changes in the modulation over a microwave communication link between two POCs, for example. Therefore, there is a need to periodically monitor the available BW for each path that is currently active.
[0052] In addition, an IP transport network 130 can be a best-effort network. As such, packets delivered from one endpoint to the other can be delayed, dropped or jittered. A common cellular operator sells different levels of service; accordingly, a subscriber can purchase a certain servicing policy which guarantees a dropping rate, minimum BW, etc. Thus, a transport network 130 in a RAN of a cellular operator requires a personalized, per-subscriber HQF that complies with the subscriber's contract.
[0053] Last but not least, as described above, dropping two or more of a subscriber's TCP/IP packets that belong to the same congestion window can reduce the transmitting bit rate back to the slow-start mode. Thus, the novel HQF needs to control the dropping of packets at the congestion buffers of an intermediate node, taking into consideration the congestion window of each TCP flow and avoiding the dropping of two or more packets that belong to the same congestion window.
[0054] Yet some embodiments can be installed in access networks to the Internet other than cellular access networks. An example of such a network can be a Metro network. In such access networks similar dropping methods would improve the experience of the subscribers of the access network. In such an access network, information regarding an IP flow can be retrieved from the IP header and/or the layer 2 header.
[0055] Referring now to FIG. 2, which illustrates an example of a Per-Flow-AQM Framework (PFAQMF) 210 that can be installed in association with RNC 120 and POC 1 of the snapshot of FIG. 1. In one embodiment, the PFAQMF 210 can be installed as a transparent bridge after the RNC 120 and POC 1, between POC 1 and the NBs 125. The exemplary system illustrated in FIG. 2 controls the download traffic. Other embodiments (not shown in the drawings) may have a slave module, which can be installed at the outputs of the egress POCs (POC 2 and POC 5, for example). The slave module can manage the upload traffic in association with PFAQMF 210.
[0056] An embodiment of PFAQMF 210 can be installed as an MPLS entity in the MPLS network 130 and may be connected to a plurality of entities in order to collect information regarding the Internet traffic to/from the plurality of UE 1-6. As an MPLS entity, PFAQMF 210 may obtain signaling and control that travel over the MPLS network 130. In addition, PFAQMF 210 can listen, via sniffer 219, to RANAP traffic and to decrypted subscriber packets that travel over connection 119a. Via connection 214, PFAQMF 210 can collect information from the operator management network 114, such as information about the policy of the subscribers that are currently connected to the network 200. The communication over connection 214 can be based on the Gx protocol, the RADIUS protocol, or a similar protocol. The policy information may include the guaranteed dropping rate, minimum BW, bit rate, etc.
[0057] Via connections 131a, 133a, and 135a an exemplary PFAQMF 210 obtains MPLS packets that carry: encrypted IP packets toward UE 1-6, NBAP PDUs toward the plurality of NBs 125, MPLS signaling and control sent from POC 1 toward the plurality of POCs, etc. The obtained information, after being processed, can be transferred toward its destination via connections 131b, 133b, and 135b respectively. In the other direction, PFAQMF 210 may obtain, via connections 131b, 133b, and 135b, MPLS packets that carry: encrypted IP packets from UE 1-6 toward the Internet, NBAP PDUs from the plurality of NBs 125 toward RNC 120, MPLS signaling and control packets sent from the different POCs toward POC 1, etc. The obtained packets, after being processed, can be transferred toward their destination via connections 131a, 133a, and 135a respectively.
[0058] Thus, the exemplary PFAQMF 210 that is connected to the PSwTN network, such as IP/MPLS network 130, at the egress of POC 1 can be connected to a plurality of virtual networks, each providing transport service to a specific set of communication channels and protocols that are carried over the IP/MPLS network 130. Each network can comply with a different protocol and be used for a different application. One of the networks can be the network that carries the signaling and control of the plurality of POCs. Connecting to this network can be used for learning the topology of the network. The example PSwTN of FIG. 2 with the plurality of POCs can be a single routing domain, a single autonomous system (AS), for example. Therefore, a dynamic routing protocol, such as open shortest path first (OSPF), can be used for determining the routing tree of network 200 between POC 1 and the plurality of POCs 2-7. A reader who wishes to learn more about the OSPF protocol is invited to read RFC 2328, published in 1998 by the IETF (Internet Engineering Task Force), the content of which is incorporated herein by reference. In some embodiments the RSVP-TE protocol can be used for creating tunnels with BW guarantees. RSVP-TE tunnels can be used in addition to or instead of the OSPF routing protocol. A reader who wishes to learn more about the RSVP-TE protocol is invited to read RFC 3209, published in 2001 by the IETF, the content of which is incorporated herein by reference.
[0059] In an embodiment of PFAQMF 210 which is configured to operate in the downstream of a PSwTN based on Multi Protocol Label Switching (MPLS), the different POCs that are illustrated in FIG. 2 are Label Switch Routers (LSRs) and/or Label Edge Routers (LERs) of the MPLS network 130, wherein POC 1 serves as an ingress LER, POC 2 serves as an egress LER for NB 1 and POC 5 serves as an egress LER for NB 3. Such an exemplary PFAQMF 210 can be adapted to listen to the communication between the ingress LER (POC 1) and the plurality of LSRs (POCs 3, 4) as well as the egress LERs (POCs 2 and 5). Further, RNC 120 may send PDUs that carry signaling and control information based on sections of the Iub protocol toward the NBs 125, etc.
[0060] In the embodiment of FIG. 2, POC 1, upon receiving PDUs from the RNC 120, may process the IP transport header and, based on the IP address of the destination NB, may add an MPLS header having one or more labels, based on a routing table, and send the IP/MPLS packet toward the relevant POC (POC 2, 3 or 4) via one of the connections 131a, 133a, and 135a toward a following POC via PFAQMF 210. Each label defines the next POC in the path of the IP/MPLS packet. Each POC can remove, add or replace the label that points to it.
[0061] In such an embodiment the PFAQMF controller 210 can parse RESERVE messages based on the RSVP-TE protocol. Those messages can include path-request messages sent from POC 1 to the plurality of POCs. The ResvConf messages are sent from POC 1 and the RESV messages carrying labels are received from those POCs. The RSVP-TE messages can be sent from the MPLS tunnel endpoints; consequently those messages can be sent between POC 1 (ingress LER) and POC 2 and POC 5 (egress LERs). The PFAQMF controller can be adapted to parse the Sender TSpec and Receiver TSpec fields in the RSVP-TE messages that are sent from the ingress LER to the egress LERs. The TSpec fields include information regarding the flow requirements of each end of a connection.
[0062] In embodiments in which the PSwTN 130 supports Ethernet Operations, Administration, and Maintenance (OAM) processes and standards, OAM messages can be used for on-demand link trace and for estimating the available BW for links and tunnels. For example, if the RAN transport is carried over Ethernet services and the Ethernet network supports Ethernet OAM, then a link trace message (LTM) can be sent from a Maintenance Entity Group (MEG) end point (MEP) to each one of the other MEPs and/or MEG Intermediate Points (MIPs), from the PFAQMF to POCs 2, 5, 6, and 7, for example. The receiving side can respond with a link trace reply (LTR) message that comprises the Media Access Control (MAC) address of the responder. A path between a subscriber UE and the embodiment of PFAQMF 210 includes a MEP at a certain MEG level and zero or more MIPs belonging to the same MEG and having the same MEG level. The second MEP resides inside the embodiment of PFAQMF 210. An exemplary path can be between POC 2 and PFAQMF 210. This path serves UEs attached to NB 1 and NB 2 125. The MEG level of this path can be 4 and its MEPs reside in POC 2 and PFAQMF 210. Another exemplary path can have a MEP at NB 3 125 serving UE 5; MEG level 5 can be set for this path, having a MEP in this MEG level at PFAQMF 210. In addition, a MIP in MEG level 5 can be set at POC 5 and POC 4 along the path, etc.
[0063] In such an embodiment the PFAQMF 210 and the egress POCs (2, 5, 6, and 7) can be configured as MEPs for one or more MEG levels, while the intermediate POCs (3 and 4) can be configured as MIPs of that MEG level. This initial configuration can be implemented by a network operator that defines the MEPs and/or MIPs with the appropriate MEG level designated for the topology discovery. A reader who wishes to learn more about LTM and LTR messages is invited to read ITU-T Recommendation Y.1731, published in 2006 by the ITU, the content of which is incorporated herein by reference.
[0064] Per each currently connected subscriber, an embodiment of PFAQMF 210 may monitor the current available BW to the subscriber, the utilized BW, the dropping percentage, the RTT of a TCP flow, etc. Based on the processed information of the subscriber policy and the monitored information, PFAQMF 210 can manage an HQF that is connected in line to the egress of the PFAQMF 210. Such an exemplary PFAQMF 210 may define the draining rate (the shaper) of a queue allocated to a certain subscriber, or avoid establishing a new IP session in case the current available BW is below the guaranteed minimum BW.
[0065] Further, exemplary embodiments of the PFAQMF 210 can identify a group of PSwTN packets that carries an entire subscriber's packet. The PSwTN packets can also be referred to as the Protocol Data Units (PDUs) of the IP transport layer. Consequently, the dropping process of the PFAQMF 210 is more efficient than dropping portions of a subscriber's IP packets; it minimizes latency and maximizes bandwidth efficiency by not sending fragments of subscriber packets that are useless.
[0066] An exemplary embodiment may determine, upon receiving the first PSwTN packet, the 1st PDU, that carries the beginning of a subscriber's IP packet, whether to drop the subscriber's IP packet or not. The decision can be based on the subscriber's priority, the current load toward the destination NB, etc. The following PDUs of the same subscriber's IP packet get the same dropping treatment, up to and including the PDU that carries the end of the subscriber's IP packet. Furthermore, the PFAQMF 210 may limit the dropping of TCP/IP packets to one packet per congestion window. More information on embodiments of PFAQMF 210 is disclosed below in conjunction with FIGs. 3 to 9a, 9b and 9c.
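By way of non-limiting illustration only, the following Python sketch outlines the whole-packet dropping idea described above: the drop/forward decision is taken on the PDU that carries the beginning of a subscriber IP packet and is then applied to every following PDU of the same packet. The PDU flags and the decide_drop() policy function are assumptions introduced for the illustration, not part of the disclosed protocol stack.

    def handle_pdu(pdu, state, decide_drop):
        """Return True if the PDU should be forwarded, False if it should be dropped.

        'state' holds the pending decision for the subscriber IP packet that is
        currently spread over consecutive PDUs."""
        if pdu["starts_subscriber_packet"]:
            # First PDU of a new subscriber IP packet: take the AQM decision once.
            state["drop_current_packet"] = decide_drop(pdu)
        forward = not state.get("drop_current_packet", False)
        if pdu["ends_subscriber_packet"]:
            # Last PDU of the subscriber IP packet: clear the pending decision.
            state["drop_current_packet"] = False
        return forward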
[0067] FIG. 3 illustrates a simplified block diagram with relevant elements of an example of PFAQMF 300. Among other elements, an embodiment of PFAQMF 300 may comprise an input-packet processor (IPP) 310, a subscriber-IP-packets processor (SIPPP) 320, an active-subscriber table (AST) 335, a subscriber-access processor (SAP) 330, a plurality of egress-fix-access-network-interface cards (EFANIC) 350, and one logical HQF per each POC connected to the egress of the PFAQMF 300 (POCs 2, 3, and 4, for example). In one embodiment, each HQF may comprise a traffic manager (TM) 340 and a plurality of queue buffers 345. The one or more TMs 340 may communicate with each other during handover, for example. In an alternate embodiment a single TM 340 may control all of the HQFs. An embodiment of AST 335 can be stored in a read/write memory device, random access memory, etc. The AST 335 can be shared by the modules of PFAQMF 300.
[0068] An example of IPP 310 may be used as a network processor between the plurality of connections that deliver a plurality of types of data, which can comply with a plurality of protocols, and the internal units of PFAQMF 300. IPP 310 may process layers 1 to 3 of the OSI model. Based on the parsed information, the IPP 310 may initially classify the obtained data and accordingly may route it to an appropriate module of PFAQMF 300.
[0069] IPP 310 may listen, via connection 219, to the traffic over connection 119a, and may obtain decrypted packets of subscribers' IP sessions, as well as RANAP PDUs carried between the RNC 120 and the COCN 110. The IPP 310 may route those packets toward a queue of SAP 330. Via connection 214, the IPP 310 may listen to traffic carried over the management network 114. Such management traffic can comply with the Gx protocol or the RADIUS protocol and carries information regarding the subscribers' policy. The Gx or RADIUS messages can be carried by packets over an IP transport network. The information regarding the subscriber's policy can be requested by SAP 330 to be used for setting parameters that are needed for controlling the traffic toward that subscriber.
[0070] Via connections 131b, 133b, & 135b (FIG. 2) an embodiment of IPP 310 may obtain PSwTN packets, such as MPLS packets, that carry NBAP PDUs and IP session packets that are sent from UE 1-6 via the plurality of NBs 125 toward the relevant RNC 120 and IP servers respectively. The IP packets can be transferred toward SIPPP 320 and be used for calculating the RTT parameter of a TCP/IP session, while the NBAP PDUs can be transferred toward SAP 330. In some embodiments of PFAQMF 300 in which TCP/IP packets are not handled separately, SIPPP 320 may receive only NBAP PDUs over those connections.
[0071] In addition, IPP 310 may receive, via connections 131a, 133a, & 135a (FIG. 2) PSwTN packets that carry IP packets targeted toward the UE 1-6 as well as PSwTN packets that carry NBAP data toward the plurality of NBs 125. The IP packets can be transferred toward SIPPP 320, while the NBAP PDUs can be transferred toward SAP 330. More information on the operation of an exemplary IPP 310 is disclosed below in conjunction with FIG. 4.
[0072] SAP 330 may prepare the PFAQMF 300 to handle the IP data traffic to/from the plurality of subscribers' UEs 1-6. After installation, and from time to time when changes in the topology of the RAN occur, SAP 330 may learn the topology of the RAN and accordingly inform the TM 340 how to configure the queue buffers 345 of the HQF. During operation, and even during a certain session, SAP 330 may identify a change in the path to a relevant UE, for example, when a subscriber moves from one NB 125 to another. Switching from one NB 125 to another will affect the IP addresses of the packets over the PSwTN and will affect the subscriber identification.
[0073] Furthermore, the available bandwidth (BW) for the session can change while moving from one NB to another NB 125, the draining process of the queues that were associated with the subscriber while he was served by the previous NB has to be changed, etc. As a result of the mobility of a subscriber from one NB to another, SAP 330 has to configure dynamically the HQF allocated to that session and adapt it to the new situation associated with the new NB. In order to accomplish the above activities SAP 330 may manage and use the AST 335.
[0074] SAP 330 may obtain from IPP 310 the RANAP, NBAP, and PSwTN signaling and control received from the plurality of POCs 1-7 and NBs 125, and sniffed data from connections 219 and 214 (FIG. 2). By listening to the NBAP and RANAP signaling and control PDUs, SAP 330 can identify a request of a subscriber to establish an IP session. When a subscriber wishes to attach to RAN 200 for a data communication session, the subscriber's UE can send a request to open a packet PDP Context. The RNC 120 then initiates an Initial SREQ (service request), which is one type of RANAP message. The Initial SREQ is transferred from the RNC 120 to an SGSN (not shown in the drawing) in the core network 110, for example. SGSN stands for Serving GPRS Support Node, while GPRS stands for General Packet Radio Service. The SREQ carries the subscriber's UE International Mobile Subscriber Identity (IMSI), and a common ID that identifies subsequent signaling messages between the UE and the core network. In other embodiments the subscriber IMSI can be obtained by listening to GTP-C messages. The IMSI is the identifier by which an entity can communicate with a mobile core entity (a PCRF or policy server, for example) for retrieval of the subscriber policy. GTP stands for GPRS Tunneling Protocol. GTP-C messages are used within the GPRS core network for signaling between Gateway GPRS Support Nodes (GGSN), Serving GPRS Support Nodes (SGSN) and the RNC.
[0075] The response to this SREQ can comprise the common ID and an allocated Packet Temporary IMSI (P-TMSI) that will be used by the UE as long as the UE is connected to the SGSN. Thus, by listening to the Initial SREQ and responses an exemplary SAP 330 can obtain information regarding the subscriber IMSI, P-TMSI and the common ID used for signaling via communication link 219. The common ID has one-to-one relationship with P-TMSI and is used in some of the signaling messages. In addition SAP 330 may obtain the Transport IP address of the servicing NB 125 and Layer 4 port allocated to the UE of the requesting subscriber. The collected information can be stored in a new entry in AST 335 that is allocated to that new active subscriber.
[0076] In some embodiments, SAP 330 may further listen, via connection 219, to a few decrypted IP packets of the new IP session for identifying the transport layer and the application type of the subscriber's sessions (HTTP, FTP, etc.). In some embodiments this check can be done periodically in order to identify changes of the application that uses this connection. The identified transport layer and application type can also be added to the allocated entry in the AST 335. In addition, SAP 330 may collect, from connection 214, servicing policy information related to that subscriber, such as priority, guaranteed dropping rate, minimum BW, etc. Accordingly a DCLUT can be assigned to this session and a pointer to that DCLUT can be written in the allocated entry at AST 335.
[0077] After loading the allocated entry in AST 335 with the information about the session and the subscriber, SAP 330 may measure the CABW for that session. In some embodiments, SAP 330 can calculate a current average RTT value between the PFAQMF 300 and a few common servers (not shown in the drawings) that are located over the operator IP network 112 (FIG. 2), servers such as but not limited to the operator portal, a cache, etc. The CABW and the calculated RTT to the common servers can also be stored at the entry of AST 335. The average RTT value between the PFAQMF 300 and the few common servers can be found by using a Ping procedure with each one of the common servers. The Ping procedure can be implemented by sending Internet Control Message Protocol (ICMP) echo request packets to each one of the common servers and waiting for an ICMP response, and by measuring the time duration between transmitting and receiving the ICMP packets for each one of the common servers. This test can be run several times, and a statistical summary can be prepared. The summary can include the minimum, maximum, and mean round-trip times, and sometimes the standard deviation of the mean.
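By way of non-limiting illustration only, the following Python sketch shows the kind of statistical summary described above for repeated round-trip measurements toward a common server; the sample values are hypothetical and the measurement itself (the ICMP echo exchange) is assumed to be performed elsewhere.

    import statistics

    def summarize_rtt(samples_ms):
        """Return min/max/mean (and standard deviation, when possible) of RTT samples."""
        summary = {
            "min": min(samples_ms),
            "max": max(samples_ms),
            "mean": statistics.mean(samples_ms),
        }
        if len(samples_ms) > 1:
            summary["stdev"] = statistics.stdev(samples_ms)
        return summary

    # Example with hypothetical round-trip measurements, in milliseconds:
    print(summarize_rtt([12.1, 11.8, 13.4, 12.6, 12.0]))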
[0078] Based on the collected information and knowing the path of the new session, SAP 330 can instruct TM 340 to allocate a new queue to serve the new subscriber. The new queue can be attached as a "leaf" queue to the logical queue of the NB 125 that is currently serving the relevant UE. Then, a weight value of the queues that are associated with the other UEs which are already served by that NB can be reduced in order to enable allocation of BW to the new subscriber. The weight can also reflect the priority of each subscriber and/or session. A Queue ID, in the HQF 345, can be allocated to that leaf queue and the Queue ID can be written in an appropriate field of the allocated entry in AST 335. The queue ID can be written in an external header which is added to the PSwTN packet by the SIPPP 320 and is removed by the TM 340 or EFANIC 350, for example.
[0079] After loading the entry in the AST 335, PFAQMF 300 is ready to manage the traffic of the new session. At this point in time SAP 330 can inform the SIPPP 320 about the new session and the allocated entry in the AST 335. During the session SAP 330 can identify an NB handover, identify the end of the session and release the allocated resources for that session, etc. NB handover from one cell to another cell (NodeB, eNodeB, BTS) can be identified by listening via connection 219 to new RAN parameters. The new RAN parameters can be exchanged using protocols like NBAP, RANAP or GTP-C. The SAP 330 can update the subscriber record in AST 335 with the new transport parameters. In addition, the queue assigned to the subscriber may change. More information on the operation of SAP 330 is disclosed below in conjunction with FIGs. 5, 6, 7 and 8.
[0080] An exemplary AST 335 can comprise a plurality of entries. Each entry in the AST 335 can be assigned to an active subscriber's UE. In some embodiments an entry can be divided into sub-entries. Each sub-entry can be assigned to a certain flow of that subscriber. Each entry or sub-entry can include a plurality of fields, as disclosed above. An entry can store a plurality of parameters in a plurality of fields, parameters such as ID values, IMSI, P-TMSI, queue ID, NB IP address, layer 4 port number, etc. Further, the entry can include policy information, the current path (the labels between POCs), an associated TM 340, shaping instructions, and relevant instructions to TM 340. In addition an entry can store the CABW, the calculated RTT, the utilized BW, the time duration from the last drop and the DCLUT or a pointer to the DCLUT, as well as other parameters that are needed for managing the traffic to and from each one of the UEs (UE 1-6).
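By way of non-limiting illustration only, the following Python sketch shows one possible in-memory representation of an AST entry carrying the fields listed above; the field names and types are assumptions introduced for the illustration and are not an exhaustive or authoritative definition of the AST.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AstEntry:
        imsi: str                        # subscriber identity
        p_tmsi: Optional[str] = None     # temporary identity allocated by the SGSN
        common_id: Optional[str] = None  # ID used in subsequent signaling
        nb_ip: str = ""                  # transport IP address of the serving NodeB
        l4_port: int = 0                 # layer-4 port of the subscriber's tunnel
        queue_id: Optional[int] = None   # leaf-queue ID allocated by the TM
        path_labels: List[str] = field(default_factory=list)  # POC labels on the path
        cabw_bps: float = 0.0            # current available bandwidth
        utilized_bw_bps: float = 0.0
        rtt_ms: float = 0.0
        last_drop_time: float = 0.0
        considering_dropping: bool = False
        dclut: Optional[dict] = None     # dropping-control look-up table (or a pointer to it)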
[0081] The AST 335 can be accessed by the SIPPP 320, TM 340, SAP 330, and IPP 310 for reading or writing different parameters that are related to the relevant subscriber IP session, i.e., the relevant entry. SAP 330 can allocate an entry in AST 335 to a new active subscriber and release the entry at the end of the session.
[0082] An embodiment of SIPPP 320 can manage a plurality of processes operating in parallel. In one embodiment SIPPP 320 can comprise one or more processors running in parallel. Each processor can execute one or more threads, while each thread can be allocated per one or more UEs. An exemplary thread can be initiated by SIPPP 320 after obtaining an indication from SAP 330 about a new subscriber IP session and a pointer to the assigned entry in AST 335.
[0083] Per each subscriber session, which can be defined by the IP address of the currently serving NB and the layer 4 port numbers for the download direction and the upload direction that are associated with that subscriber, a session table can be allocated. In some embodiments of SIPPP 320 the session table can be part of the AST 335. Yet in an alternate embodiment the entry in the AST 335, which is related to that subscriber, is copied to the SIPPP 320 as part of the session table. The session table can be used for monitoring the traffic of this session. The session table can store information that is related to the session, information such as but not limited to counters that count the total number of bytes that have been handled in this session. This field can be used for verifying that the subscriber remains within the quota defined by his policy. In addition, the session table can store the type of the session (TCP/IP or UDP/IP) and the application type, the time when the last subscriber IP packet was dropped, a considering-dropping flag indicating that a dropping decision may be needed, the Queue ID used by TM 340 for that session, a copy of the relevant DCLUT or a pointer to that LUT, etc. In case the session is carried over TCP/IP, the session table can include the RTT value related to that session. In some embodiments, the RTT value can be calculated only for TCP/IP subscriber sessions.
[0084] In one embodiment the RTT value related to that session can be calculated as the sum of the calculated average RTT from the PFAQMF 300 to the few common servers (as disclosed above in conjunction with SAP 330, for example) plus a found RTT value between the relevant UE and the PFAQMF 300. Calculating the found RTT value can be done by an embodiment of SIPPP 320 after waiting for a silent interval in the traffic associated with the two ports of that session, the download and the upload traffic. After identifying such a silent interval, SIPPP 320 can wait for the first download packet and start measuring the time interval to the following upload packet. This measured time interval can be referred to as the found RTT value between the relevant UE and the PFAQMF 300.
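By way of non-limiting illustration only, the following Python sketch outlines the silent-interval measurement described above: after a gap in the session's traffic, the time from the first download packet to the following upload packet is taken as the found RTT value. The packet representation (timestamp and direction tuples) and the silence threshold are assumptions introduced for the illustration.

    def estimate_found_rtt(packets, silence_s=1.0):
        """Return the time from the first download packet after a silent interval
        to the following upload packet, or None if no such pair is observed.

        'packets' is an iterable of (timestamp_seconds, direction) tuples,
        with direction either "down" or "up"."""
        last_ts = None
        waiting_since = None
        for ts, direction in packets:
            silent = last_ts is None or (ts - last_ts) >= silence_s
            if direction == "down" and silent and waiting_since is None:
                waiting_since = ts            # first download packet after silence
            elif direction == "up" and waiting_since is not None:
                return ts - waiting_since     # the "found" RTT value
            last_ts = ts
        return None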
[0085] In other embodiments of SIPPP 320, the measuring process can be repeated for several consecutive silent intervals and a smoothing algorithm can be used for defining the found RTT value between the relevant UE and the PFAQMF 300. An exemplary smoothing algorithm can be an average value of several measured time intervals.
[0086] Yet in some other embodiments of SIPPP 320, the RTT value of a session can be calculated by measuring the time interval between two consecutive bursts of download packets.
[0087] An exemplary SIPPP 320 can obtain, from IPP 310, PSwTN packets, such as MPLS packets for example. The PSwTN packets can be referred to as the protocol data units (PDUs) of the IP transport layer. The PSwTN packets can carry, as payload, IP data packets of the subscriber IP session, NBAP PDUs between RNC 120 and the plurality of NBs 125, as well as signaling and control packets of the MPLS network 130.
[0088] An embodiment of SIPPP 320 can parse the header of the Frame Protocol (FP) and/or the header of the radio link control (RLC) protocol that follows the IP transport header of the PDU. A reader who wishes to learn more about FP or RLC is invited to read the well-known protocols of 3GPP published from 1999. The RLC header starts at a configurable offset (the number of bytes from the beginning of the payload of the PDU). By parsing the RLC header SIPPP 320 can identify the first PDU that carries the beginning of a subscriber IP packet and the PDU that carries the end of the subscriber IP packet. The RLC header provides information such as when a new subscriber packet begins, by having the 'E' or 'HE' bits take the appropriate values. In such a case, the RLC sequence number can enable determination of the subscriber's original IP packet length. When a decision is made to drop a subscriber's IP packet, because there is not enough available BW for example, or in exemplary embodiments in which AQM decisions are made based on the subscriber's current percentage of BW utilization and the subscriber's DCLUT, then by using this method SIPPP 320 can drop the PDUs (PSwTN packets) that carry one or more complete subscriber IP packets.
[0089] PSwTN packets that were not dropped by SIPPP 320 are transferred, with an indication of the Queue ID, to the TM 340 that is associated with the next POC (POC 2, 3 or 4 in FIG. 2) in the path to the NB 125 that is currently serving the UE of that subscriber. More information about the operation of SIPPP 320 is disclosed below in conjunction with FIGs. 9a-c.
[0090] One or more Traffic Managers (TMs) 340 can be associated with the PFAQMF 300. TM 340 can be a common HQF that is used in a terrestrial access network. An exemplary HQF can be the FAP 21 of Broadcom Corp. USA, or the HX 330 of Xelerated, Sweden, for example. The hierarchy of the queues can be defined by SAP 330 after determining the configuration of RAN 200. The most parent queues (the highest in the hierarchy) are the queues to POCs 2, 3, and 4. The lowest in the hierarchy, the most child queue or leaf queue, is the per-UE queue. The queue to NB 125 is one level above the UE queue. Each leaf queue is associated with a Queue ID, which is used by SIPPP 320 to route a subscriber's IP session packets toward the appropriate queue. The control and communication with the TM 340 can be implemented via an Application Programming Interface (API) of the TM 340. The controlling can include scheduling, weight, shaper, etc. per each queue.
[0091] Several parameters can be configured per each queue. Those parameters can define values for the scheduler, the shaper and the weight of the queue, etc. The scheduler defines the maximum and minimum rates at which the queue can be drained. The maximum rate can reflect the maximum BW of the communication link at the egress of the queue; the minimum rate can reflect policy limitations. The shaper defines the rate limitation for draining the queue; in an exemplary embodiment the shaper control parameter can be used to reflect the current available BW that can be allocated for draining the relevant queue. The value of the shaper is smaller than the maximum value of the scheduler. The value of the weight can reflect the weight of the queue compared to other queues in the same level that are connected at the egress of a higher hierarchical level queue. The weight can also reflect the priority of each subscriber and/or session. In addition, each queue can be associated with a queue identification, a queue ID. In some embodiments, if the amount of bytes that were sent to a certain subscriber exceeds the subscriber's usage quota, then the weight factor of that subscriber can be decreased.
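By way of non-limiting illustration only, the following Python sketch groups the per-queue control parameters described above (scheduler maximum and minimum, shaper and weight) into one structure; the class and field names are assumptions introduced for the illustration and do not reflect the API of any particular traffic manager.

    from dataclasses import dataclass

    @dataclass
    class QueueConfig:
        queue_id: int
        max_rate_bps: float    # scheduler maximum: link BW at the egress of the queue
        min_rate_bps: float    # scheduler minimum: policy limitation
        shaper_bps: float      # drain-rate limit, reflecting the current available BW
        weight_pct: float      # share among sibling queues at the same hierarchy level

        def clamp_shaper(self) -> None:
            # The shaper value is kept below the scheduler maximum.
            self.shaper_bps = min(self.shaper_bps, self.max_rate_bps)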
[0092] This is in contrast to a common HQF, in which the queue ID is used only for internal routing, between the queues, of packets that are transferred via the HQF, and in which the queuing control parameters, such as scheduling, weight, shaper, etc., are used statically. In a common HQF those parameters can be configured once, by an operator of the network, while configuring the HQF for a certain network, and remain without changes during operation. In TM 340 those parameters are dynamically changed by the SAP 330 according to the current conditions over the network 200 and the currently served mobile devices.
[0093] When an exemplary SAP 330 determines that a change in the current available BW over a certain connection has occurred, then one or more of the queue control parameters can be dynamically changed according to the changes in the available BW or topology. For example, the shaper of a queue that is associated with that connection can be changed accordingly.
[0094] In some embodiments, TM 340 is configured to inform SIPPP 320 of the current utilization of the queue toward each of the currently active subscribers. SIPPP 320 may use this information to determine the percentage of early-dropped packets for each one of the subscribers according to the DCLUT related to that subscriber. In some embodiments, when the subscriber session is carried over TCP/IP, the dropping can be limited to one packet per RTT.
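By way of non-limiting illustration only, the following Python sketch outlines the early-dropping decision described above: a drop probability is looked up according to the current queue utilization and, for TCP/IP sessions, at most one packet is dropped per RTT. The DCLUT representation (utilization threshold paired with drop probability) and the helper signature are assumptions introduced for the illustration.

    import random
    import time

    def should_drop(utilization_pct, dclut, last_drop_time, rtt_s, now=None):
        """Return True if the current packet should be early-dropped."""
        now = now if now is not None else time.monotonic()
        if now - last_drop_time < rtt_s:
            return False                       # limit dropping to one packet per RTT
        drop_prob = 0.0
        for threshold, prob in sorted(dclut):  # use the probability of the highest
            if utilization_pct >= threshold:   # utilization threshold reached
                drop_prob = prob
        return random.random() < drop_prob

    # Example DCLUT (hypothetical values): no drops below 70% utilization,
    # 5% drops above 70%, 20% drops above 90%.
    example_dclut = [(70, 0.05), (90, 0.20)]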
[0095] In the downstream direction of the exemplary RAN 200, an example of PFAQMF 210 that is installed between POC 1 and POCs 2, 3 and 4 can have three HQFs, each HQF can be associated with one of the communication links 131b, 133b and 135b, for example. The configuration of the HQF that is associated with the traffic from POC1 to NB 3 and from there to UE 5 and UE 6 is further disclosed. The configuration of the other HQFs can be done in a similar way.
[0096] An example of HQF may comprise a plurality of logical hierarchical queues. In some embodiments the plurality of logical hierarchical queues can be implemented by a plurality of physical memories organized in hierarchical queues based on the hierarchical topology tree. In other embodiments the plurality of logical hierarchical queues can be a plurality of virtual queues that are embedded within a single physical memory (queue buffers 345, for example). Each of those virtual queues can be controlled by an associated scheduler. The plurality of schedulers can be organized according to the hierarchical tree of the topology. Each scheduler can be associated with one or more virtual queues.
[0097] Four hierarchical levels of logical queues can be used in the disclosed example illustrated in FIG. 2. The first level, the trunk level of the tree, is the logical queue associated with link 135b. There is only one logical queue in the first level, the level of the trunk. The second level, at the egress of POC 4, has three attached logical queues: one is associated with communication link 137 to POC 5, one with communication link 138 to POC 6, and one logical queue is associated with link 139 to POC 7. Only one of these logical queues will be further disclosed; the rest can be configured in a similar way.
[0098] The 3rd level, at the egress of POC 5, has only one illustrated branch, to NB 3, attached to it. For the purpose of the description, assuming that three additional NBs are connected to POC 5 via other egress ports of POC 5 (not shown in the drawings), four logical queues are consequently attached to the logical queue to POC 5. The illustrated logical queue is associated with communication link 126 to NB 3. The 4th level of hierarchical queuing includes the logical queues that are attached to the logical queue to NB 3; the child queue or leaf queue is the queue that is associated with a UE. In this level two leaf queues are illustrated as attached to the logical queue to NB 3, a queue for UE 5 and a queue for UE 6.
[0099] First, the 1st-level logical queue, the trunk 135b (the stem of the tree), can be configured. The scheduler is set to a value that can be equal to the maximum allowable BW over the communication link 135b. An ID can be allocated to this level 1 scheduler. Next the 2nd-level logical queues can be configured, one per each POC that directly connects to the stem of the tree. The scheduler value can be equal to the maximum allowable BW over the communication link 137, for example. The weight can be equal to 33% because there are two additional queues at the 2nd level that are connected to the same queue in the higher level, the queue of trunk 135b. In some embodiments, some of the links may be set to a higher weight due to the fact that these links are preferred over others. An ID can be allocated to this level 2 logical queue.
[00100] The 3rd-level logical queue, the logical queue that controls the traffic over link 126 to NB 3, amongst other links, can be configured now. The scheduler value can be equal to the maximum allowable BW over the communication link 126, for example. The weight can be equal to 25% because there are three additional virtual queues at the 3rd level that are connected to the same scheduler in level 2 (not shown in the drawings). An ID can be allocated to this level 3 scheduler; in this example the ID can reflect the ID value of the queue of link 137.
[00101] Finally the leaf level is configured, in the disclosed example the 4th-level queue to UE 6. In this example, a set of queues is associated with the NB logical queue. The number of queues in the set can reflect the maximum number of UEs that can be served by the NB. A logical queue is assigned to every active user attached to this NB. The scheduler of this queue can reflect the subscriber's policy that defines the maximum and the minimum allowable BW to the subscriber's UE, for example. The minimum allowed BW can reflect the guaranteed BW which was promised to the subscriber and is written in the subscriber's policy. In this example, the weight can be equal to 50% because there is only one additional UE currently communicating via NB 3. This weight may later be adjusted dynamically according to the specific subscriber that utilizes this specific queue, its priority, BW demand and application requirements, etc.
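By way of non-limiting illustration only, the following Python sketch builds the four-level hierarchy described in the preceding paragraphs, using the illustrative weights of the example (33%, 25% and 50%); the node class, its fields and the link capacities are assumptions introduced for the illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class QueueNode:
        name: str
        max_rate_mbps: float                  # scheduler maximum for this logical queue
        weight_pct: float = 100.0             # share among siblings at the same level
        children: List["QueueNode"] = field(default_factory=list)

        def add(self, child: "QueueNode") -> "QueueNode":
            self.children.append(child)
            return child

    # Level 1: the trunk, link 135b (rates are hypothetical).
    trunk = QueueNode("link 135b", max_rate_mbps=1000)
    # Level 2: queue toward POC 5 via link 137, one of three siblings (weight 33%).
    poc5 = trunk.add(QueueNode("link 137 to POC 5", 300, weight_pct=33))
    # Level 3: queue toward NB 3 via link 126, one of four siblings (weight 25%).
    nb3 = poc5.add(QueueNode("link 126 to NB 3", 100, weight_pct=25))
    # Level 4: leaf queues, one per active UE served by NB 3 (two UEs, 50% each).
    ue5 = nb3.add(QueueNode("UE 5", 50, weight_pct=50))
    ue6 = nb3.add(QueueNode("UE 6", 50, weight_pct=50))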
[00102] In some embodiments, the dropping policy, which is related to a certain UE, can be associated with the subscriber queue as AQM. An ID can be allocated to this level 4 queue. In some embodiments the ID value can reflect the higher queue ID, in this example the ID can reflect the ID value of the queue of link 126, which reflects the queue of the link 137, and so on. In other embodiments dropping is handled by the SIPPP 320.
[00103] One or more exemplary egress-fix-access-network-interface cards (EFANICs) 350 are connected to the egress of TM 340 and obtain PSwTN packets, such as MPLS packets, that are drained from the queue buffers 345 by TM 340 according to the topology and the commands received from SAP 330. Each EFANIC 350 receives packets from TM 340 that are targeted toward one of the associated POCs, POCs 2, 3, and 4. Each EFANIC 350 processes the PSwTN packets according to the lower layers of the communication links (131a, 133a, 135a, 131b, 133b, and 135b respectively) to be transmitted to the relevant POC. In some embodiments the communication over the links to POCs 2, 3, and 4 can comply with Ethernet OAM protocols.
[00104] Referring now to FIG. 4 that schematically illustrates a flowchart showing relevant actions of process 400. Process 400 can be implemented by an input packet processor (IPP) 310 employed in an embodiment of PFAQMF 300 (FIG. 3), for example. An embodiment of IPP 310 may be used as an interface processor between the plurality of connections 131a,b; 133a,b; 135a,b; 219, 214 and the internal units of PFAQMF 300. Those connections can deliver a plurality of types of data carried by different PSwTN packets. Those packets can comply with a plurality of protocols. IPP 310 may process layers 1 to 3, of the OSI model, of each connection and delivers the plurality of packets to a queue of the internal processing unit of IPP 310 for further processing. Process 400 can be initiated 402 after power on and may run in a loop as long as PFAQMF 300 is active.
[00105] After initiation 402 IPP 310 can check 405 its queue looking for a next packet in the queue. If there is no packet in the queue, then the process can wait in a loop looking for a packet. If 405 there is a packet in the queue, the header of the packet can be parsed 408 according to the protocol of the relevant network, Ethernet, MPLS, IP, etc. Based on the parsed information, the IPP 310 may initially classify the obtained data and accordingly may transfer it to an appropriate module of PFAQMF 300. If 410 the packet carries RANAP or NBAP PDUs, then the packet is transferred 412 to SAP 330, to the queue of SAP RANAP process 800, which is disclosed below in FIG. 8. RANAP PDUs and NBAP PDUs are related to subscriber requests to establish or terminate an IP session, or to handover occurrences.
[00106] If 410 the packet does not carry RANAP or NBAP PDUs, then at block 420 a decision is made whether the packet is a Record-Route-Object (RRO) packet sent from one of the POCs. If 420 the packet is an RRO packet, then the packet is transferred 422 to SAP 330, to the queue of SAP POC topology processes 500 and 700, which are disclosed below in FIGs. 5 and 7 respectively. RRO PDUs are used for learning and updating the topology to the plurality of NBs 125 and the POCs 2, 3, 4, 5, 6 and 7, for example.
[00107] If 420 the packet does not carry RRO data, then at block 430 a decision is made whether the packet is sniffed from the operator management network 114 (FIG. 2) via connection 214 (FIG. 2) and carries information related to the subscriber. Information such as ID information, policy information, subscriber priority, DCLUT, etc. This information can comply with Gx protocol, RADIUS protocol, or similar protocol.
[00108] If 430 the packet is sniffed from the operator management network 114, then the packet is transferred 432 to a queue associated with block 916 of the subscriber's-session-controller process (SSCP) 900. Process SSCP 900 is disclosed below in FIGs. 9a to 9c. Execution of SSCP 900 can be shared by SAP 330 and SIPPP 320.
[00109] If 430 the packet was not sniffed from the operator management network 114, then at block 440 a decision is made whether the PSwTN packet carries an Ethernet Operations, Administration, and Maintenance (OAM) frame, such as a delay-measurement message (DMM) or the corresponding delay-measurement reply (DMR). Those messages can be used by SAP 330 for calculating the UECABW. Therefore those frames can be transferred 442 to a queue associated with block 624 of the SAP BW process 600 that is disclosed below in conjunction with FIG. 6.
[00110] Finally, if 440 the packet was not Ethernet OAM, then at block 450 a decision is made whether the PSwTN packet carries a subscriber IP packet. If not, the packet is transferred 452 to TM 340 (FIG. 3) to be stored in a default queue toward a neighbor POC in the path to the destination of the PSwTN packet (POC 1, or POC 2, 3, or 4, in the example of FIG. 2) and process 400 returns to block 405. If 450 the PSwTN packet carries a subscriber IP packet, then the PSwTN packet is transferred 454 to a queue of SIPPP 320 (FIG. 3) and process 400 returns to block 405.
[00111] FIG. 5 schematically illustrates a flowchart showing relevant blocks of an example of a subscriber access processor (SAP) while learning the topology of the RAN 200 from the RNC 120 up to the plurality of NBs 125 and updating an NB-Topology DB (not shown in the drawings). In some embodiments, the exemplary process 500 can be initiated 502 at power on; in other embodiments process 500 can be initiated after executing process 700, which is disclosed below in conjunction with FIG. 7. Process 700 is implemented for learning the topology of the POCs. During operation, process 500 can be initiated each time a packet carrying an RSVP-TE RECORD object is sniffed by SAP 330 (FIG. 3) from POC 1 or one of the egress POCs (POCs 2, 5, 6 and 7, for example).
[00112] The NB-Topology DB can comprise a plurality of entries. Each entry is associated with an NB 125. Each entry includes a plurality of fields: NB MAC; NB IP address; topology to the NB (the POCs in the path, with one or more labels, IP address, MAC, and MEG level); MAX BW of the communication link to its associated POC; available BW, etc. After initiation, SAP 330 may prompt 504 an administrator of the operator network to configure the NB-Topology DB with different parameters of each network device (NBs, POCs, RNC, etc.), parameters such as but not limited to IP address, MACs, MAX BW, etc. The rest of the fields can be filled by SAP 330 while running one or more of the following disclosed processes.
[00113] After the preliminary configuration, process 500 may start 506 a loop between blocks 510 to 520. Each cycle in the loop is executed per each NB 125 that is served by the PFAQMF 210. The loop starts at block 510 for the next NB in the NB-Topology DB. An LTM can be sent 512 with the MAC address of that NB. Then, SAP 330 may collect 514 one or more LTRs which were sent by that NB and by each one of the MIPs from the POCs along the way to that NB. Each LTR includes parameters of the responding entity, parameters such as but not limited to the responder's MAC, MEG level, etc. The collected parameters can be stored 516 in the relevant entry of the NB-Topology DB as the updated topology information.
[00114] Next a decision is made 520 whether an additional NB is listed in the NB-Topology DB. If yes, then process 500 returns to block 510 and starts a loop for the next NB 125. If 520 there is no additional NB, then process 500 can be terminated. In some embodiments process 500 may use a POC table; the POC table can be handled by SAP 330 as disclosed in FIG. 7.
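By way of non-limiting illustration only, the following Python sketch outlines the per-NB loop of blocks 510 to 520 described above: an LTM is sent toward the NB's MAC address, the LTRs returned by the NB and the MIPs along the path are collected, and the responders are stored in the NB-Topology DB. The send_ltm() and collect_ltrs() helpers and the dictionary layout are assumptions introduced for the illustration.

    def update_nb_topology(nb_topology_db, send_ltm, collect_ltrs):
        """Refresh the stored path information for every NB entry in the DB."""
        for nb_mac, entry in nb_topology_db.items():
            send_ltm(target_mac=nb_mac)          # block 512: LTM toward this NB
            replies = collect_ltrs()              # block 514: one LTR per responding MEP/MIP
            entry["path"] = [                      # block 516: store the updated topology
                {"mac": reply["mac"], "meg_level": reply["meg_level"]}
                for reply in replies
            ]
        return nb_topology_db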
[00115] In some embodiments, wherein the PFAQMF 200 is installed in a network having Open Shortest Path First (OSPF) capabilities, the topology can be found by listening to OSPF messages that are sent to POC 1. In such embodiments, method 500 can be adapted to listen to OSPF messages instead of LTM and LTR messages. A reader who wishes to learn more about OSPF is invited to read RFC 2328, which is incorporated herein by reference.
[00116] FIG. 6 schematically illustrates a flowchart showing relevant actions of an example of SAP 330 (FIG. 3) for monitoring a UE's current available bandwidth (UECABW) over the exemplary RAN 200. In some embodiments a plurality of BW monitoring processes 600 can be executed in parallel on a plurality of processors. Each process can be associated with a group of active subscribers in AST 335. An embodiment of process 600 can be initiated 602 after updating the NB-Topology DB, by process 500 in FIG. 5, for example. After initiation, process 600 may run periodically in a loop, between blocks 604 to 650. An exemplary time interval between two loops can be a few seconds to a few hours (one hour, thirty minutes, etc.). Each cycle of the periodical loop can be executed on the entries of the AST 335 (FIG. 3).
[00117] At block 610 the next entry in AST 335 is fetched and processed 612 for updating. The considering-dropping flag field of that entry can be reset. Information regarding the path from PFAQMF 210 to the relevant subscriber's UE can be obtained 612 from that entry. Among other parameters, the information can include the one or more MEG levels along the path to the relevant NB 125 that currently serves the UE. Then an internal loop is initiated from block 620 to 630. Each cycle in the loop is executed per MEG level in the path to the relevant NB.
[00118] At block 622 a burst of 'N' DMM messages can be sent from PFAQMF 210 toward the far-end MEP of the MEG level related to the current cycle of the internal loop. In some exemplary processes the priority of those DMM messages can be similar to the priority of the relevant session and/or subscriber. An exemplary value of 'N' can be a configurable number in the range of 5-100 DMM messages, for example. Each DMM message can include a timestamp 'Tx', which can be expressed by 'M' bytes. 'M' can be a configurable number between a few tens and a few hundreds of bytes. An exemplary 'M' can be 100 bytes. A DMM message can include a sequence number. The transmitting rate of the 'N' DMM messages can be faster than the maximum transmission rate of the relevant connection, as retrieved from that entry of the AST 335. In some embodiments, the transmitting rate can be 5 times faster than the maximum rate of the connection.
[00119] The received 'N' DMR messages, which were routed by IPP 310 in block 442 of process 400 to SAP 330, are parsed 624 and the timestamp 'Rx' of each received DMR is retrieved. Then, an embodiment of SAP 330 can calculate 626 the differences between two consecutive DMRs, DTn = Rxn - Rx(n-1), obtaining N-1 values of the differences DTn. A representative DT value can be calculated as a statistical function of the plurality of DTn values. The statistical function can be an average, a median, the maximum, etc., of the N-1 calculated differences DTn. Using the differences between the receive times of consecutive packets relies on a single clock, the clock of the sender, eliminating the need to synchronize the clocks of the sender and the receiver.
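By way of non-limiting illustration only, the following Python sketch combines blocks 624 to 628 (this paragraph and the one that follows): it computes the N-1 inter-arrival differences DTn from the DMR timestamps, takes a representative DT value (here the average, one of the statistical functions mentioned above), and estimates the current available BW by dividing the packet size by DT. The timestamps and the 100-byte frame size are hypothetical values.

    import statistics

    def estimate_cabw(rx_times_s, packet_size_bytes):
        """rx_times_s: receive timestamps of the N DMR messages, in seconds."""
        diffs = [b - a for a, b in zip(rx_times_s, rx_times_s[1:])]  # N-1 values of DTn
        dt = statistics.mean(diffs)            # representative DT value
        return (packet_size_bytes * 8) / dt    # estimated available BW, bits per second

    # Example with hypothetical timestamps for 100-byte DMM/DMR frames:
    print(estimate_cabw([0.000, 0.001, 0.002, 0.0031, 0.004], 100))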
[00120] The DT value can reflect the current available BW (CABW) toward that MEP. Calculating the current available BW toward that MEP can be implemented 628 by dividing the packet size by the DT value. The calculated CABW to that MEP can be stored in the relevant entry of AST 335. This value can be used to define the shaper of the queue related to that MEP in TM 340 (FIG. 3). At block 630 a decision is made whether there are more MEG levels in the path. If yes, method 600 returns to block 620 and starts a next cycle for the next MEG level. If 630 there are no more MEG levels, then the CABW to the currently serving NB 125 can be defined 632 as the minimum CABW of the relevant MEG levels toward that NB. The CABW for the NB can be divided by the number of UEs that are currently served by that NB in order to define the UECABW of the subscriber's UE associated with that entry of the AST 335. In some embodiments, dividing the CABW among the plurality of subscribers' UEs can also be based on the priority of each subscriber and/or session.
[00121] At block 640 a decision is made whether the defined UECABW is OK, which means that the UECABW complies with the subscriber policy. If 640 yes, then the TM 340 can be instructed 644 to set the shaper of the leaf queue, the ID queue, allocated to that UE, to drain that queue at a rate that is higher than the guaranteed minimum bit rate but smaller than the defined UECABW and the maximum guaranteed bit rate. The guaranteed bit rate can depend on the priority of that subscriber and that session. Then at block 650 a decision is made whether additional entries exist in AST 335. If yes, method 600 may return to block 610 for handling the next entry in AST 335. If 650 not, process 600 can wait 652 for a period of 'X', wherein 'X' can be in the range of a few milliseconds to a few minutes, for example. After the waiting period method 600 may return to block 604 and may start a new periodical cycle.
[00122] Returning now to block 640, if the UECABW is not OK, which means that the UECABW is smaller than the minimum guaranteed bit rate, then the TM 340 can be instructed 642 to set the shaper of the leaf queue, the ID queue, allocated to that UE, to drain that queue at a rate that is equal to the defined UECABW. In addition the considering-dropping flag, in the relevant entry of AST 335, can be set. This flag can indicate to SIPPP 320 that it should consider dropping entire packets of the subscriber IP session. Then method 600 proceeds to block 650, which is disclosed above.
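By way of non-limiting illustration only, the following Python sketch outlines the decision of blocks 640, 644 and 642 described in the two preceding paragraphs: when the UECABW satisfies the policy, the leaf-queue shaper is set between the guaranteed minimum and the lesser of the UECABW and the maximum guaranteed rate; otherwise the shaper is set to the UECABW and the considering-dropping flag is raised. The entry fields and the set_shaper() call are assumptions introduced for the illustration, not an API of any particular TM.

    def apply_uecabw(entry, uecabw_bps, set_shaper):
        """Update the leaf-queue shaper and the considering-dropping flag for one AST entry."""
        min_gbr = entry["min_guaranteed_bps"]
        max_gbr = entry["max_guaranteed_bps"]
        if uecabw_bps >= min_gbr:
            # Policy can be met: drain above the minimum but within the limits (block 644).
            rate = min(uecabw_bps, max_gbr)
            entry["considering_dropping"] = False
        else:
            # Policy cannot be met: drain at the available BW and flag for AQM (block 642).
            rate = uecabw_bps
            entry["considering_dropping"] = True
        set_shaper(entry["queue_id"], rate)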
[00123] FIG. 7 schematically illustrates a flowchart showing relevant actions of another exemplary SAP 330 (FIG. 3) process 700 for learning the topology of an exemplary MPLS transport network having a plurality of POCs. Process 700 can be implemented for updating a POC Table. In an embodiment of PFAQMF the POC Table can be stored as a section in the NB-Topology DB. The POC Table can include a plurality of entries. Each entry is associated with a path to an egress POC. Each entry includes connectivity information written in a plurality of fields, information such as: IP address, a label for the segment, the MAC address of each POC along the path, the POC MEG level, MAX BW per each segment, IP address and MAC of egress POCs (ends of tunnels), etc. In some embodiments the operator configures the IP address and the MACs of each POC, and the rest of the fields are filled by SAP 330; in other embodiments the operator enters all the information.
[00124] Process 700 can be initiated 702 at power on for learning the topology of the POCs. After initiation 702, method 700 may prompt 704 an administrator of the operator network to configure the NB-Topology DB with basic parameters. Then a POC Table can be allocated and reset, and method 700 may wait 710 for receiving, from IPP 310 (FIG. 3), one or more replies to an RRO. The RRO can be sent from POC 1 over one of the connections 131a, 133a, 135a (FIG. 2) while establishing a Label Switch Path (LSP). The reply to an RRO from an egress POC can deliver the connectivity, addressing and labels of one or more POCs along the path from POC 1 to that egress POC.
[00125] Upon 710 obtaining a reply to an RRO, SAP 330 processes 712 the obtained reply RRO and retrieves connectivity information regarding each POC along the path from the egress POC back to POC 1. The connectivity information can include the IP address over the PSwTN, the label, etc. of each POC along the path from POC 1 to the responding egress POC. The obtained information can be stored in the POC Table. Then at block 714 the POC Table can be checked and a decision is made whether 720 the POC Table is complete. If not, method 700 returns to block 710, waiting for the next reply to an RRO, a reply received from another egress POC, for example.
[00126] If 720 yes, then method 700 can update 722 the TM 340 (FIG. 3) with the topology tree from POC 1 to the plurality of egress POCs, allowing the TM 340 to organize the hierarchy of the queues in the Queue Buffers 345 (FIG. 3) according to the hierarchy of the POCs. In addition the NB-Topology process 500 (FIG. 5) can be initiated 722 as well as the BW monitoring process 600 (FIG. 6).
[00127] After filling the entire POC Table, method 700 may wait 724 for an additional reply to an RRO, which may indicate a change in the POC topology. The obtained reply is parsed 726 and compared to each of the plurality of entries of the POC Table, looking for similar information or for a change in the stored information. If 730 there is a change in the information, then the POC Table is updated. Updating 732 can be done by adding a new entry in the POC Table due to a new egress POC, for example. Alternatively, one or more changes can be made in a relevant entry in the POC Table, reflecting the differences between the obtained reply and the stored POC Table.

[00128] After updating the POC Table an update can be implemented 734 also on the NB-Topology DB with the new information. After the updating, the BW-monitoring process 600 (FIG. 6) can be initiated 736 and method 700 may return to block 724 waiting for another RRO reply. Returning now to block 730, if there are no changes in the information stored in the POC Table, then process 700 may return to block 724 waiting for another RRO reply.
[00129] In a PSwTN that does not have dynamic routing capabilities, the topology can be studied by using Link Trace Messages to MEPs and MIPs in each tunnel.
[00130] In other embodiments, where the PSwTN is based on MPLS or does not support RRO, the topology of the POCs is discovered by listening to the OSPF (Open Shortest Path First) messages sent toward POC 1, which are analyzed by an OSPF stack embedded in SAP 330 for building the topology map of the entire PSwTN. Yet in another embodiment both methods can be used together.
[00131] FIG. 8 schematically illustrates a flowchart showing relevant actions of an example of SAP 330 (FIG. 3) for responding to RANAP or NBAP messages. The RANAP messages can be sniffed via connection 219 (FIG. 2) and transferred by IPP 310 (FIG. 3) as illustrated in block 412 of FIG. 4. The NBAP messages can be received via the PSwTN over connections 131b, 133b, 135b (FIG. 2), for example, via the IPP 310. Method 800 can handle establishing a subscriber IP session or a handover process between two NodeBs.
[00132] At block 810 method 800 checks its queue for a RANAP or NBAP message. If 810 a message exists, then method 800 proceeds to block 812. If not, method 800 waits for the next RANAP or NBAP message. At block 812 the message is parsed according to the message type. For a RANAP message the IMSI or TMSI fields can be retrieved. For an NBAP message the CRNC context ID, which is assigned to a subscriber's UE by the RNC 120 (FIG. 2) upon attaching to the RAN 200, is retrieved. Based on the retrieved values the AST 335 (FIG. 3) is searched 812 for an entry.
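A minimal sketch of the lookup at block 812, assuming the AST is keyed by subscriber identifiers and the parsed message is a simple dictionary (both are assumptions of this illustration):

```python
def lookup_ast_entry(ast, message):
    """Block 812: RANAP messages are matched by IMSI/TMSI, NBAP messages by
    the CRNC context ID; returns None when no entry exists (block 814)."""
    if message["protocol"] == "RANAP":
        key = message.get("imsi") or message.get("tmsi")
    else:  # NBAP
        key = message.get("crnc_context_id")
    return ast.get(key)
```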
[00133] If 814 an entry was found, then a decision is made 816 whether the message is a RANAP message. If 816 yes, which indicates that the RANAP message is related to an active subscriber, then the data from the found entry in AST 335 is retrieved 818 and analyzed. According to the information written in the found entry, the RANAP message is transferred toward a queue of a relevant subscriber's-session-controller process (SSCP) that manages the communication of the session and method 800 returns to block 810 waiting for the next RANAP message. An exemplary SSCP is illustrated in FIGs. 9a-c.
[00134] If 816 the message is not a RANAP message, then the message is an NBAP message that can indicate a handover, and at block 836 the found entry is fetched and parsed. The Queue ID value that represents the personal queue of that subscriber, which is written in the appropriate field of the found entry of the AST, is fetched and an instruction is sent 836 to TM 340 (FIG. 3). The instruction can be to drain the personal queue at the CABW of the old NB, and method 800 may wait until the relevant queue is drained.
[00135] After draining the old personal queue, the entry in the AST is updated 838 with the new NB IP address, and two port numbers, one for upload and one for download, are written 838 in the entry. The NB-Topology DB is searched for the entry associated with the new NB in order to retrieve routing information such as the labels. The retrieved information is copied to the entry in the AST 335. In addition, TM 340 is informed 838 about the new path and defines a new hierarchy of queues in the Queue Buffers 345 that fits the path to the new NB, and accordingly the queue ID points to the new path of queues. Then method 800 instructs the SSCP process, which handles this subscriber IP session, to execute the process of monitoring the UECABW at block 918 of FIG. 9a and process 800 returns to block 810 for handling the next RANAP or NBAP message.
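The handover handling of blocks 836-838 can be summarized by the following sketch; the TM and database interfaces shown here are assumptions introduced only to make the sequence explicit.

```python
def handle_handover(entry, new_nb, nb_topology_db, tm):
    """Blocks 836-838: drain the old personal queue at the CABW of the old
    NB, then repoint the AST entry and the queue hierarchy to the new NB."""
    tm.drain_queue(entry["queue_id"], rate=entry["old_nb_cabw"])       # block 836
    entry["nb_ip"] = new_nb["ip"]                                      # block 838
    entry["ul_port"], entry["dl_port"] = new_nb["ul_port"], new_nb["dl_port"]
    routing = nb_topology_db[new_nb["ip"]]
    entry["labels"] = routing["labels"]
    tm.rebuild_hierarchy(entry["queue_id"], path=routing["path"])
```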
[00136] Returning now to block 814, if an entry was not found, then based on the message type a decision is made 820 whether the message is a RANAP SREQ message (a service request message) or an NBAP new RAB (Radio Access Bearer) message. A new RAB message represents a new tunnel between an NB 125 and RNC 120 that is established at the beginning of an IP data session, or during handover by the new NB. However, handover is handled at block 836. If 820 not, then the message is ignored and process 800 returns to block 810 looking for the next RANAP message.

[00137] If 820 the message is a RANAP SREQ message or an NBAP new RAB message, then a new entry in the AST 335 is allocated 822 by SAP 330 (FIG. 3). The retrieved identification information is stored 822 in the appropriate fields of the new entry. The identification information depends on the type of the message. From a RANAP SREQ message the retrieved information can include the IMSI, T-IMSI, subscriber IP, etc. From an NBAP new RAB message the retrieved identification information can include the NB IP address, Layer 4 port, CRNC context ID, common ID, etc. If 824 the message is a RANAP message, then method 800 returns to block 810 for handling the next message in the queue.
[00138] If 824 the message is an NBAP message, then at block 826, based on the NB IP address, the NB-Topology DB is searched for retrieving routing information from PFAQMF 210 (FIG. 2) to the relevant NB. The routing information can include the labels of each segment in the path, the IP address of the one or more POCs in the path, etc. The retrieved routing information can be stored in the appropriate fields of the entry in the AST 335. In addition, the last measured CABW to that NB is retrieved 826 from the NB-Topology DB and a fair share of the CABW of the NB is allocated to the new session. The fair share can be a function of the number of UEs that are currently served by the relevant NB and the priority of the subscriber/session (the weight of the subscriber/session).
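The fair-share allocation of block 826 can be expressed, for example, as a weighted split of the NB's CABW; the embodiment only states that the share depends on the number of served UEs and on the subscriber/session priority, so the proportional formula below is one possible interpretation.

```python
def fair_share(nb_cabw_bps, existing_weights, new_weight):
    """Block 826: allocate to the new session a share of the NB's CABW that
    is proportional to its weight relative to all sessions on that NB."""
    total_weight = sum(existing_weights) + new_weight
    return nb_cabw_bps * new_weight / total_weight


# Example: a 20 Mbps NB with three equal-weight sessions plus one new one
print(fair_share(20_000_000, [1, 1, 1], 1))  # 5 Mbps
```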
[00139] At block 828, SAP 330 informs SIPPP 320 about the new session and a subscriber's session control process (SSCP) is allocated and assigned to the new session. The new SSCP is associated with the new entry in the AST and a pointer to the queue of the assigned SSCP is stored in the new entry in the AST 335. Then process 800 returns to block 810 looking for the next RANAP message.
[00140] In some embodiments of PFAQMF 210, in which a calculated RTT is needed in order to manage the dropping process of a TCP/IP subscriber IP session, block 822 may include a process for measuring the RTT value from the PFAQMF 210 to several common servers (not shown in the drawings) that are located over the operator IP network 112 (FIG. 2) and/or the Internet 102 (FIG. 2), servers such as, but not limited to, the operator portal, a cache, etc. The average RTT value between the PFAQMF 210 and the few common servers can be found by using a Ping procedure with each one of the common servers.

[00141] The Ping procedure can be implemented by sending Internet Control Message Protocol (ICMP) echo request packets to each one of the common servers and waiting for an ICMP response. Measuring the time duration between transmitting and receiving the ICMP packet for each one of the common servers delivers the RTT value to each one of those servers. This test can be run several times, and a statistical summary can be prepared. The summary can include the minimum, maximum, average and mean round-trip times, and sometimes the standard deviation of the mean. One of those values can be selected as a representative value for the RTT to the Internet. Yet in other embodiments of PFAQMF 210 the RTT value can be measured by SIPPP 320 as is disclosed below in conjunction with FIGs. 9a-c.
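The statistical summary of paragraph [00141] can be produced as in the following sketch, which assumes the per-server ICMP round-trip samples have already been measured; choosing the mean as the representative value is only one of the options mentioned.

```python
import statistics

def summarize_rtt(samples_ms):
    """Summary of several ICMP echo round trips toward one common server."""
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "mean": statistics.mean(samples_ms),
        "stdev": statistics.stdev(samples_ms) if len(samples_ms) > 1 else 0.0,
    }

# e.g. four echo requests toward the operator portal
print(summarize_rtt([18.2, 21.7, 19.4, 20.1]))
```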
[00142] FIGs. 9a, 9b and 9c illustrate a flowchart showing relevant actions of an exemplary subscriber's-session-controller process (SSCP). An embodiment of SSCP 900 can be executed by SIPPP 320. SIPPP 320 (FIG. 3) may handle a plurality of SSCP 900 threads running in parallel on one or more processors. Each thread can be associated with a single subscriber IP session. SSCP 900 can be associated with the relevant entry in AST 335 (FIG. 3) that is assigned to the same session. Each SSCP can have an input queue in which packets can wait to be processed. At block 904 the SSCP queue is checked looking for a packet. Packets can be placed in the queue by IPP 310 as disclosed in block 454 of FIG. 4, or by SAP 330 as illustrated in FIG. 8 block 818, for example. If 904 a packet exists in the queue, the packet is retrieved and a decision is made 910 whether the packet carries a RANAP SRES (Service Response) message which is sniffed from connection 219 (FIG. 2) via IPP 310 (FIG. 3).
[00143] If 910 the packet does not carry an SRES message, then method 900 proceeds to block 930 in FIG. 9b. If 910 the packet carries a RANAP SRES message, then the message is parsed 914 and information such as the P-TMSI is retrieved from the SRES message and stored in the appropriate field of the relevant entry in AST 335. Based on the IMSI of the relevant subscriber, information regarding the subscriber policy is obtained 916 by listening over connection 214 (FIG. 2) to the Operator management networks 114. The policy information can include: the APN (Access Point Name), the QoS indicator for that session, the subscriber priority, the guaranteed bit rate interval, the maximum dropping percentages, the DCLUT or a pointer to that LUT, etc. Then a timer Td can be allocated and reset for that session. The timer Td monitors the time duration from the last dropped IP packet.
[00144] After adding the information to the entry of the AST, SSCP 900 can start the monitoring process 918 of the UECABW. The monitoring process can include actions similar to the actions that are illustrated in blocks 612 to 632 of FIG. 6 and are disclosed above. After calculating and storing the UECABW a decision is made 920 whether the UECABW is OK and fits the subscriber policy.
[00145] If 920 the UECABW is not OK and does not fit the subscriber policy (the UECABW is below the subscriber's guaranteed bit rate, for example), then an indication to drop the packets of that session is written 922 in the relevant entry of the AST, leading SIPPP 320 to drop each one of the subscriber's IP packets of that session, and method 900 returns to block 904 waiting for the next packet.
[00146] If 920 the UECABW is OK and fits the subscriber policy, then a Queue ID is allocated to that subscriber's session and TM 340 (FIG. 3) is informed 924 about the queue ID and the path to that UE; the weight and the maximum and minimum guaranteed bit rates can be loaded into the scheduler of that queue, and the shaper of the queue can be updated too. The shaper can be the minimum between the UECABW and the maximum guaranteed bit rate, for example. The weight can reflect the subscriber priority and the QoS related to the relevant session. At this point in time the TM 340, the AST 335, the SIPPP 320 and SSCP 900 are ready to handle packets of the relevant subscriber IP session and method 900 returns to block 904 for handling a next packet.
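For block 924, a possible form of the instruction sent to the TM is sketched below; the configure() interface and the policy keys are assumptions used only to show which values are loaded into the scheduler and the shaper.

```python
def configure_leaf_queue(tm, queue_id, path, uecabw_bps, policy):
    """Block 924: load the session weight and guaranteed rates into the
    scheduler and set the shaper to the minimum of the UECABW and the
    maximum guaranteed bit rate."""
    tm.configure(
        queue_id,
        path=path,
        weight=policy["priority"] * policy["qos_weight"],
        min_rate=policy["min_guaranteed_bps"],
        max_rate=policy["max_guaranteed_bps"],
        shaper=min(uecabw_bps, policy["max_guaranteed_bps"]),
    )
```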
[00147] FIG. 9b illustrates relevant actions of an embodiment of a section of SSCP 900 for handling subscriber-related packets which do not carry RANAP SRES messages. At block 910 of FIG. 9a, if the PDU does not carry a RANAP SRES message, then SSCP 900 proceeds to block 930. Next the PDU is parsed 932 and a decision is made 940 whether the PDU is an Iu Release Request or an Iu Release Command indicating the end of the Iu session. If 940 yes, then TM 340 (FIG. 3) is instructed 942 to drain the current queue of the session (queue ID) and to release the personal queue. SAP 330 is informed to release the relevant entry in the AST 335 and to release the resources of SSCP 900, and method 900 can be terminated 944.
[00148] If 940 the packet does not carry an end-of-session message, then a decision is made 946 whether the PSwTN packet carries a subscriber's IP packet. Identifying subscriber's IP packets can be done by parsing the header of the Frame Protocol (FP) and/or the header of the radio link control (RLC) protocol that follows the IP transport header of the PDU. If 946 the PSwTN packet does not carry a subscriber's IP packet, then the contribution of this packet to the current-utilized BW of the subscriber is added 958 to the field in the relevant entry in AST 335 which is assigned to the current-utilized BW, and the PSwTN packet is transferred 958 toward the relevant queue at the next POC HQF. Then process 900 returns to block 904 in FIG. 9a.
[00149] If 946 the PSwTN packet carries data of a subscriber's IP packet, then a decision is made 948 whether the PSwTN packet carries an edge of a subscriber's IP packet. The edge can be the beginning or the end of the subscriber's IP packet. The decision can be made by parsing the header of the Frame Protocol (FP) and/or the header of the radio link control (RLC) protocol that follows the IP transport header of the PDU. The RLC header starts at a configurable offset (the number of bytes from the beginning of the payload of the PDU). By parsing the RLC header SIPPP 320 can identify the first PDU that carries the beginning of a subscriber IP packet and the PDU that carries the end of the subscriber IP packet. If 948 the PSwTN packet does not carry an edge of a subscriber IP packet, then SSCP 900 may proceed to block 954.
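The edge detection of block 948 can be sketched as below; the flag bits and the single-byte layout are purely illustrative placeholders, since the actual FP/RLC header formats are not reproduced here, and only the idea of reading the RLC header at a configurable offset is taken from the embodiment.

```python
FIRST_SEGMENT = 0x01   # placeholder flag: PDU carries the start of an IP packet
LAST_SEGMENT = 0x02    # placeholder flag: PDU carries the end of an IP packet

def classify_edge(pdu_payload: bytes, rlc_offset: int):
    """Block 948: read the RLC header at a configurable offset from the
    beginning of the PDU payload and report whether this PSwTN packet
    carries the beginning and/or the end of a subscriber IP packet."""
    flags = pdu_payload[rlc_offset]
    return bool(flags & FIRST_SEGMENT), bool(flags & LAST_SEGMENT)
```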
[00150] If 948 it is an edge of an IP packet, then at block 950 a decision is made whether the subscriber's IP session is carried over a TCP/IP transport layer. In an embodiment of SSCP 900, the decision can be reached by listening to the connection 219 (FIG. 2) for a few packets during the beginning of the session in order to determine the transport protocol of the IP session. In other embodiments SIPPP 320 (FIG. 3) may check the upload and the download traffic of each session and decide based on the handshake traffic between both ends of the connection, verifying that the responder side responds with ACK packets, for example. If 950 the session is carried over TCP/IP, then process 900 proceeds to block 970 in FIG. 9c.
[00151] If 950 the session is not carried over TCP/IP, then for a PSwTN packet that carries the beginning of a subscriber's IP packet, at block 952 one or more parameters can be checked in order to determine whether or not to drop the packet. The parameters can include: the considering-dropping flag field in the relevant entry of the AST; the occupation of the buffer/queue which is associated with the session; the priority of the subscriber/session; the DCLUT; the BW utilization of the session up to that IP packet; the percentage of dropped IP packets up to the current IP packet; etc. The occupation of the buffer can be obtained from TM 340 (FIG. 3). Different embodiments of PFAQMF may use a different set of the above parameters.
[00152] An embodiment of SSCP 900 may consider 952 the current percentage of dropping compared to the subscriber policy, and the utilized BW compared to the fair share of the other UEs connected to the same NB. If the current percentage of dropping is below the guaranteed percentage and the utilized BW is higher than the fair share, then the IP packet can be marked for dropping.
[00153] In another embodiment of SSCP 900 an AQM consideration may apply. Based on the current occupation of the queue of the session, the DCLUT of the subscriber/session can be consulted 952 in order to obtain the required percentage of dropping. The obtained required percentage of dropping can be compared to the current percentage of dropping and, if the obtained required percentage of dropping is bigger than the current percentage of dropping, then the IP packet can be marked for dropping. The mark for dropping can be written as a field in the associated entry of AST 335.
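One way to realize the DCLUT-based AQM decision of paragraph [00153] is sketched below; representing the DCLUT as sorted (occupancy, required-drop-percentage) pairs is an assumption, as the embodiment does not specify the table format.

```python
import bisect

def required_drop_pct(dclut, occupancy_pct):
    """Map the current occupancy of the session queue to the required
    dropping percentage by a step-wise lookup in the DCLUT."""
    occupancies = [occ for occ, _ in dclut]
    i = bisect.bisect_right(occupancies, occupancy_pct) - 1
    return dclut[max(i, 0)][1]

def mark_for_dropping(dclut, occupancy_pct, current_drop_pct):
    """Mark the IP packet when the required percentage exceeds the
    percentage of packets dropped so far (block 952)."""
    return required_drop_pct(dclut, occupancy_pct) > current_drop_pct

example_dclut = [(0, 0), (50, 5), (75, 15), (90, 40)]
print(mark_for_dropping(example_dclut, occupancy_pct=80, current_drop_pct=10))  # True
```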
[00154] Next at block 954, the mark-for-dropping field is checked. If 954 the mark-for-dropping field is false, then at block 958 the PSwTN packet can be transferred toward its destination while updating the current utilized BW. If 954 the mark-for-dropping field is true, then the PSwTN packet can be dropped 956. In addition, if the dropped PSwTN packet carries the end of a subscriber IP packet, then the dropping-percentage counter is increased 956 and method 900 can return to block 904 in FIG. 9a.
[00155] Returning now to block 950 in FIG. 9b, if the session is carried over a TCP/IP transport layer, then SSCP 900 proceeds to block 970 in FIG. 9c and the entry in the AST is further parsed 972 in order to determine whether the RTT of the entire path of that session is known. If 974 the RTT is known, then the value of Td is checked 976 and a decision is made whether Td, the time duration from the last dropped IP packet, is bigger than the value of the RTT. If 976 not, the PSwTN packet is transferred 994 toward the relevant queue at the next POC HQF. Then process 900 returns to block 904 in FIG. 9a.
[00156] If 976 Td is bigger than the RTT, then at block 978 one or more parameters can be checked in order to determine whether or not to drop the packet. The parameters can include: the considering-dropping flag field in the relevant entry of the AST; the occupation of the buffer/queue which is associated with the session; the priority of the subscriber/session; the DCLUT; the BW utilization of the session up to that IP packet; the percentage of dropped IP packets up to the current IP packet; etc. The occupation of the buffer can be obtained from TM 340 (FIG. 3). Different embodiments of PFAQMF 300 may use a different set of the above parameters. Different methods may be used in order to decide whether to mark the packet for dropping. Some of the methods are disclosed above in conjunction with block 952 and will not be further disclosed.
[00157] Next at block 990, the mark-for-dropping field in the AST is checked. If 990 the mark-for-dropping field is false, then at block 994 the PSwTN packet can be transferred toward its destination via the relevant queue at the next POC HQF. After transferring the PSwTN packet the utilized BW can be updated. If 990 the mark-for-dropping field is true, then the PSwTN packet can be dropped 992. In addition, if the dropped PSwTN packet carries the end of a subscriber IP packet, then the dropping-percentage counter is increased 992, the timer Td can be reset, and method 900 can return to block 904 in FIG. 9a.
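The Td-versus-RTT gate of blocks 976 through 992 limits the dropping of TCP traffic to at most one subscriber IP packet per round trip, giving the TCP sender time to react to each loss. A minimal sketch, with the timer kept as a monotonic timestamp (an assumption of this illustration):

```python
import time

def may_consider_drop(last_drop_ts, rtt_seconds, now=None):
    """Block 976: only consider dropping another subscriber IP packet of a
    TCP session when Td, the time since the last drop, exceeds the RTT."""
    now = time.monotonic() if now is None else now
    return (now - last_drop_ts) > rtt_seconds

def record_drop():
    """Block 992: reset the Td timer when an IP packet is actually dropped."""
    return time.monotonic()
```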
[00158] Returning now to block 974, if the RTT is unknown, then the entry in the AST 335 is further parsed and a decision is made 980 whether the RTT-measuring process is active. If 980 not, then at block 982 the RTT process is initiated and process 900 proceeds to block 994, which is described above. In one embodiment, the RTT process can find the RTT between the PFAQMF 210 (FIG. 2) and the relevant UE. After initiation the RTT process may wait for a silent interval in the traffic associated with the two ports of that session, the download and the upload traffic. Next, when 980 the RTT process is active, then at block 984 the first download packet after identifying the silent interval can start a timer that measures the time interval to the following upload packet. The first upload packet can stop the timer and the measured time interval can be referred to as the found RTT value between the relevant UE and the PFAQMF 210. The RTT for the entire path from the subscriber to the Internet server and back to the subscriber can be calculated as the sum of the average value of the RTT from the PFAQMF 210 to the common servers and the found RTT value. The RTT for the entire path can be written in the relevant entry of the AST 335 and process 900 proceeds to block 994.
[00159] In other embodiments of SSCP 900, the RTT measuring process can be repeated for several consecutive silent intervals and a smoothing algorithm can be used for defining the found RTT value between the relevant UE and the PFAQMF 210. An exemplary smoothing algorithm can be an average value of several measured time intervals.
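A sketch of the RTT derivation of paragraphs [00158] and [00159]: each sample is the interval between the first download packet after a silent interval and the following upload packet, and several samples are smoothed. The embodiment gives a plain average as an example; the exponential moving average shown here is an alternative smoothing choice and is an assumption of this illustration.

```python
def smoothed_ue_rtt(samples_s, alpha=0.5):
    """Combine several silent-interval measurements between the PFAQMF and
    the UE into a single found RTT value."""
    rtt = samples_s[0]
    for sample in samples_s[1:]:
        rtt = alpha * sample + (1 - alpha) * rtt
    return rtt

def path_rtt(server_rtt_avg_s, ue_rtt_s):
    """RTT of the entire path: average RTT from the PFAQMF to the common
    servers plus the found RTT between the PFAQMF and the UE."""
    return server_rtt_avg_s + ue_rtt_s

print(path_rtt(0.020, smoothed_ue_rtt([0.042, 0.050, 0.047])))
```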
[00160] Yet in some other embodiments of SSCP 900, the RTT value of the entire path can be calculated by measuring the time interval between two consecutive bursts of download packets.
[00161] In the description and claims of the present application, each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements, or parts of the subject or subjects of the verb.
[00162] The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Many other ramifications and variations are possible within the teaching of the embodiments, comprising different combinations of the features noted in the described embodiments.
[00163] It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described herein above. Rather the scope of the invention is defined by the claims that follow.

Claims

CLAIMS:
1. A Per-Flow-Active-Queue-Management Framework (PFAQMF) that is installed in a radio-access-network (RAN) between a plurality of cellular subscriber user equipment (CUE) and a cellular operator core network, wherein the CUEs are engaged in Internet Protocol (IP) communication sessions, the PFAQMF comprising: an input packet processor (IPP) that intercepts Packet Switch transport network (PSwTN) packets that are transferred between an ingress point-of-concentration (POC) and one or more egress POCs; a subscriber-access processor (SAP) that obtains policy information regarding active subscribers that are currently communicating over the RAN, and obtains PSwTN packets that carry Protocol Data Units (PDU) that carry signaling and control messages of the cellular network; an active subscriber table (AST), stored in a memory device, that comprises a plurality of entries, each entry is associated with an active subscriber and comprises identification information of the subscriber, identification information of the PDUs related to the subscriber, and information regarding handling the session packets; and one or more traffic managers (TM), each TM is associated with a group of base stations and manages a Hierarchical Queuing Framework (HQF) for aggregating PSwTN packets that are targeted toward a plurality of associated CUEs that are currently served by the associated group of base stations; each TM obtains from the IPP a plurality of PSwTN packets that are directed toward the TM's associated subscribers; obtains from the SAP and the AST information regarding handling the subscriber's packets by an associated queue; wherein the SAP further monitors the topology between the ingress POC and the plurality of associated CUEs, identifies changes in the topology and accordingly updates the configuration and the control parameters of one or more queues that are affected by those changes.
2. The PFAQMF of claim 1, wherein the SAP further obtains PSwTN packets that carry signaling and controls messages of the PSwTN.
3. The PFAQMF of claim 1, wherein each PDU carries the data communication from/to the plurality of base stations to/from a radio network controller (RNC).
4. The PFAQMF of claim 1, wherein the SAP further monitors the UE current available bandwidth (UECABW) per each active CUE and accordingly modifies the control parameters of the associated queue that has been allocated to that CUE.
5. The PFAQMF of claim 1, wherein the base stations are Node B (NB).
6. The PFAQMF of claim 1, wherein the PSwTN is a Multi Protocol Label Switching (MPLS) network.
7. The PFAQMF of claim 5, wherein the ingress POC is connected to the RNC and the egress POC is connected to one or more NB.
8. The PFAQMF of claim 5, wherein the ingress POC is an ingress Label Edge Router (LER) and each egress POC is an egress LER.
9. The PFAQMF of claim 1, wherein the SAP concludes that a path to a CUE has changed and accordingly instructs the TM, which manages the traffic toward that CUE, to immediately drain the CUE associated queue.
10. The PFAQMF of claim 9, wherein the shaper of the CUE associated queue, which drains the queue of that path, is adjusted according to the UECABW over that path.
11. The PFAQMF of claim 9, wherein the SAP identifies a new path for said CUE; monitors the UECABW over the new path, updates the relevant entry in the AST and allocates a new queue for said CUE according to the new path.
12. The PFAQMF of claim 1, wherein the IPP comprises a network processor.
13. The PFAQMF of claim 1, wherein each TM is associated with a POC that is communicatively coupled to the group of base stations which are associated with that TM.
14. The PFAQMF of claim 1, wherein the IPP further comprises a subscriber IP packet processor (SIPPP) that communicates with one or more TMs and receives PSwTN packets that carry subscribers' IP data packets; per each received PSwTN packet the SIPPP identifies the TM associated with the subscriber that is associated with the packet, adds an ID of the queue which is associated with the subscriber and transfers the packet toward that TM.
15. The PFAQMF of claim 14, wherein the SIPPP further determines to drop a set of PSwTN packets that carry an integer number of the subscriber IP data packets.
16. The PFAQMF of claim 14, wherein the SIPPP further determines to drop PSwTN packets based on the UECABW over the path of those PSwTN packets.
17. The PFAQMF of claim 14, wherein the SIPPP further determines to drop PSwTN packets based on a calculated round trip time (RTT) value of the subscribers IP data packets carried by the PSwTN packets.
18. The PFAQMF of claim 14, wherein the SIPPP further determines to drop PSwTN packets based on an active queue management (AQM) method.
19. The PFAQMF of claim 18, wherein the AQM method determines which percentage of subscriber's IP data packets to drop based on the current occupancy of the subscriber's associated queue.
20. The PFAQMF of claim 14, wherein the SIPPP further determines to drop PSwTN packets based on a policy of the subscriber associated with those PSwTN packets.
21. The PFAQMF of claim 1, wherein the identification information of the subscriber comprises the International Mobile Subscriber's Identification (IMSI) of the subscriber.
22. The PFAQMF of claim 1, wherein the identification information of the subscriber comprises CRNC context ID.
23. The PFAQMF of claim 1, wherein the identification information of the subscriber comprises the Packet Temporary IMSI (P-TMSI) of the subscriber.
24. The PFAQMF of claim 1, wherein the control parameters of the queue comprise a weight parameter.
25. A method for controlling a Per-Flow-Active-Queue-Management Framework (PFAQMF) installed in a Packet Switch transport network (PSwTN), wherein the PFAQMF is associated with a radio-access-network (RAN) between a plurality of subscriber cellular-user equipments (CUE) and a cellular operator core network, wherein the CUEs are engaged in Internet Protocol (IP) communication sessions, the method comprising: obtaining packets of the PSwTN that are transferred between an ingress point-of-concentration (POC) and one or more egress POCs of the RAN; parsing packets that carry signaling and control messages of the cellular network; determining, per each CUE, a current path between the PFAQMF and a current servicing NB; monitoring, per each CUE, the current available bandwidth (UECABW) over the current path between the PFAQMF and the current servicing NB; controlling, per each CUE based on the current path between the PFAQMF and the current servicing NB, an associated logical aggregating buffer that aggregates packets that are targeted toward the CUE.
26. The method of claim 25, wherein parsing the PSwTN packets further comprises parsing control messages of the PSwTN that carry the communication between a plurality of POCs.
27. The method of claim 25, wherein, per each CUE, its associated logical aggregating buffer is drained at a rate that is a function of the UECABW and a policy that is relevant to the relevant subscriber.
28. The method of claim 25, further comprising: concluding that a CUE is performing handover to a second NB and draining the CUE associated logical aggregate buffer at a rate that is equal to the UECABW over the path to the current servicing NB; determining a path between the PFAQMF and the second NB; monitoring the current available bandwidth (CABW) for the subscriber's IP session over the determined path to the second NB; and controlling a second associated logical aggregating buffer that aggregates packets of the PSwTN that are targeted toward the CUE over the second path.
29. The method of claim 28, wherein the CUE associated logical aggregate buffer and the second associated logical aggregating buffer are the same.
30. The method of claim 25, further comprising determining when to drop a set of PSwTN packets that carry an integer number of the subscriber IP data packets.
31. The method of claim 30, wherein the determining when to drop a set of PSwTN packets is based on the UECABW over the path of those PSwTN packets.
32. The method of claim 30, wherein the determining when to drop a set of PSwTN packets is based on a calculated round trip time (RTT) value of the subscribers IP data packets carried by the PSwTN packets.
33. The method of claim 30, wherein the determining when to drop a set of PSwTN packets is based on an active queue management (AQM) method.
34. The method of claim 33, wherein the AQM method determines which percentage of subscriber's IP data packets to drop based on the current occupancy of the subscriber's associated logical aggregating buffer.
35. The method of claim 30, wherein the determining when to drop a set of PSwTN packets is based on a policy of the subscriber associated with those PSwTN packets.
36. The method of claim 25, further comprising identifying the subscriber policy based on an International Mobile Subscriber's Identification (IMSI) of the subscriber.
37. The method of claim 25, wherein the controlling, per each CUE, of an associated logical aggregating buffer further comprises controlling a weight parameter of the queue.
38. A Per-Flow-Active-Queue-Management Framework (PFAQMF) that manages a plurality of Internet Protocol (IP) communication sessions between a plurality of user equipments (UE) and a plurality of servers, wherein packets of the IP sessions are carried by a Packet Switch transport network (PSwTN), the PFAQMF comprising: an input packet processor (IPP) that intercepts PSwTN transport-data units that are transferred between an ingress point-of-concentration (POC) and one or more egress POCs; a subscriber-access processor (SAP) that obtains policy information regarding active subscribers that are currently communicating by a UE via the PFAQMF; an active subscriber table (AST), stored in a memory device, that comprises a plurality of entries, each entry is associated with an active subscriber and comprises identification information of the subscriber, and information regarding handling the session packets; and one or more traffic managers (TM), each TM is associated with a group of POCs and manages a Hierarchical Queuing Framework (HQF) for aggregating PSwTN packets that carry packets that are targeted toward a plurality of associated UEs that are currently served by the associated group of POCs; each TM obtains from the IPP a plurality of PSwTN packets that carry packets of IP sessions that are directed toward the TM's associated subscribers; per each subscriber the associated TM obtains from the SAP and the AST information regarding handling the subscriber's IP packets; wherein the IPP further comprises a subscriber IP packet processor (SIPPP) that communicates with one or more TMs and receives PSwTN packets that carry subscribers' IP data packets; per each received PSwTN packet the SIPPP identifies the TM associated with the subscriber that is associated with the packet, adds an ID of the queue which is associated with the subscriber and transfers the packet toward that TM.
39. The PFAQMF of claim 38, wherein the SIPPP is further configured to drop a set of PSwTN packets that carry an integer number of the subscriber IP data packets.
40. The PFAQMF of claim 38, wherein the SAP further monitors the UE current available bandwidth (UECABW) per each active UE and accordingly modifies control parameters of the associated queue that has been allocated to that UE.
41. The PFAQMF of claim 38, wherein the SIPPP is further configured to drop PSwTN packets based on the UECABW.
42. The PFAQMF of claim 38, wherein the SIPPP is further configured to drop, per each UE, PSwTN transport-data units based on a calculated round trip time (RTT) value of the IP data packets related to that UE.
43. The PFAQMF of claim 38, wherein the SIPPP is further configured to drop PSwTN packets based on an active queue management (AQM) method.
44. The PFAQMF of claim 43, wherein, per each subscriber's UE, the AQM method determines which percentage of the subscriber's IP data packets to drop based on the current occupancy of the subscriber's associated queue.
45. The PFAQMF of claim 38, wherein the SIPPP further determines to drop PSwTN packets based on a policy of the subscriber associated with those PSwTN packets.
46. The PFAQMF of claim 38, wherein the PSwTN is located between a plurality of cellular-base stations and a cellular-base-station controller.
47. The PFAQMF of claim 46, wherein the plurality of cellular-base stations are Node B and the cellular-base-station controller is Radio Network Controller (RNC).
48. The PFAQMF of claim 38, wherein the PSwTN is Multi Protocol Label Switching (MPLS) network.
49. The PFAQMF of claim 40, wherein the control parameters of a queue comprise a shaper parameter.
50. A method for controlling a Per-Flow-Active-Queue-Management Framework (PFAQMF) installed in a Packet Switch transport network (PSwTN), wherein the PFAQMF is associated with a plurality of subscriber user equipments (UE), wherein each UE is engaged in an Internet Protocol (IP) communication session, the method comprising: obtaining PSwTN packets that are transferred between an ingress point-of-concentration (POC) and one or more egress POCs of the PSwTN and carry the IP communication sessions associated with the plurality of UEs; monitoring, per each UE, the current available bandwidth (UECABW) over the current path between the PFAQMF and the UE; controlling, per each UE based on the current path between the PFAQMF and the UE, an associated logical aggregating buffer that aggregates PSwTN packets that are targeted toward the UE; and determining, per each UE, when to drop a set of PSwTN packets that carry an integer number of IP data packets of the IP communication session associated with the UE.
51. The method of claim 50, wherein the determining, per each UE, when to drop a set of PSwTN packets is based on the monitored UECABW.
52. The method of claim 50, wherein the determining, per each UE, when to drop a set of PSwTN packets is based on a calculated round trip time (RTT) value of the IP session associated with the UE.
53. The method of claim 50, wherein the determining, per each UE, when to drop a set of PSwTN packets is based on an active queue management (AQM) method.
54. The method of claim 53, wherein the AQM method determines which percentage of the UE's IP data packets to drop based on the current occupancy of the subscriber's associated logical aggregating buffer.
55. The method of claim 50, wherein the determining when to drop a set of PSwTN packets is based on a policy of the subscriber associated with those PSwTN packets.
56. The method of claim 50, wherein the PSwTN is located between a plurality of cellular-base stations and a cellular-base-station controller.
57. The method of claim 50, wherein the PSwTN is Multi Protocol Label Switching (MPLS) network.
58. The method of claim 50, wherein the control parameters of a queue comprise a shaper parameter.
PCT/IL2012/000077 2011-02-21 2012-02-14 System and method for active queue management per flow over a packet switched network WO2012114328A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161444853P 2011-02-21 2011-02-21
US61/444,853 2011-02-21

Publications (1)

Publication Number Publication Date
WO2012114328A1 true WO2012114328A1 (en) 2012-08-30

Family

ID=46720183

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2012/000077 WO2012114328A1 (en) 2011-02-21 2012-02-14 System and method for active queue management per flow over a packet switched network

Country Status (1)

Country Link
WO (1) WO2012114328A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690645B1 (en) * 1999-12-06 2004-02-10 Nortel Networks Limited Method and apparatus for active queue management based on desired queue occupancy
US20040042397A1 (en) * 2002-09-02 2004-03-04 Motorola, Inc. Method for active queue management with asymmetric congestion control
US20080304416A1 (en) * 2005-12-23 2008-12-11 Gabor Fodor Method and Apparatus for Solving Data Packet Traffic Congestion
US20080239956A1 (en) * 2007-03-30 2008-10-02 Packeteer, Inc. Data and Control Plane Architecture for Network Application Traffic Management Device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017059932A1 (en) * 2015-10-07 2017-04-13 Telefonaktiebolaget Lm Ericsson (Publ) Controlling downstream flow of data packets to one or more clients

Similar Documents

Publication Publication Date Title
US11595300B2 (en) Traffic shaping and end-to-end prioritization
US9866492B2 (en) Localized congestion exposure
US11159423B2 (en) Techniques for efficient multipath transmission
US7664017B2 (en) Congestion and delay handling in a packet data network
US11063785B2 (en) Multipath traffic management
JP2023512900A (en) Microslices with device groups and service level targets
US20160380884A1 (en) Flow-Based Distribution in Hybrid Access Networks
WO2018112657A1 (en) Packet transmission system and method
US11722391B2 (en) Dynamic prediction and management of application service level agreements
US20230142425A1 (en) Virtual dual queue core stateless active queue management (agm) for communication networks
KR20200083582A (en) Systems and methods for accelerating or decelerating data transmission network protocols based on real-time transmission network congestion conditions
US9591515B2 (en) Feedback-based profiling for transport networks
US20240056885A1 (en) Multi-access traffic management
Kumar et al. Device‐centric data reordering and buffer management for mobile Internet using Multipath Transmission Control Protocol
WO2012114328A1 (en) System and method for active queue management per flow over a packet switched network
Lilius et al. Planning and optimizing mobile backhaul for LTE
Wan et al. L4Span: Spanning Congestion Signaling over NextG Networks for Interactive Applications
Briscoe Internet-Draft BT Intended status: Informational M. Sridharan Expires: January 10, 2013 Microsoft July 09, 2012
Deiß et al. QoS
Pfeifer IPv6 Fragment Header Deprecated draft-bonica-6man-frag-deprecate-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12749651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12749651

Country of ref document: EP

Kind code of ref document: A1