WO2006121399A2 - Transmission gateway unit for pico Node B - Google Patents


Info

Publication number
WO2006121399A2
WO2006121399A2 (PCT application PCT/SE2006/000565)
Authority
WO
WIPO (PCT)
Prior art keywords
tgu
node
nbu
network
communications system
Prior art date
Application number
PCT/SE2006/000565
Other languages
English (en)
Other versions
WO2006121399A3 (fr)
Inventor
Anders JÄRLEHOLM
Jan Söderkvist
Jeris Kessel
Per-Erik Sundvisson
Tomas Lagerqvist
Peter WAHLSTRÖM
Original Assignee
Andrew Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Andrew Corporation filed Critical Andrew Corporation
Publication of WO2006121399A2 publication Critical patent/WO2006121399A2/fr
Publication of WO2006121399A3 publication Critical patent/WO2006121399A3/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 92/00 Interfaces specially adapted for wireless communication networks
    • H04W 92/04 Interfaces between hierarchically different network devices
    • H04W 92/12 Interfaces between hierarchically different network devices between access points and access point controllers

Definitions

  • the present invention relates to new transmission solutions enabling deployment of radio base stations in a UMTS network with significantly lower operational expense for transmission than current ATM-based networks.
  • the proposed solution can be used with an existing ATM-based Radio Network Controller (RNC) and backhaul, using IP based transport for at least the last mile.
  • the proposed solution will also enable the use of IP transport for almost the complete path, reducing the cost for the more expensive transport to a minimum.
  • RAN implementations depend on ATM for communication between the radio network controller (RNC) (and other centralized functions) and the base station, also referred to as Node B.
  • bandwidth can be reserved for each connection, thus guaranteeing QoS for the end-user.
  • the synchronous physical transmission (e.g. E1 or STM-1) used for ATM networks also provides a good and traceable reference for the Node B reference frequency clock, which is needed for the radio transmitter and receiver of the Node B.
  • ATM-based backhaul with its reserved bandwidth can be expensive.
  • Today the lease of transmission lines to base stations in telecommunication system networks is a major operating cost for many mobile operators.
  • the present invention targets the object of reducing transmission costs for Node B installations, in particular for pico Node Bs such as the OneBASE Pico Node Bs.
  • the main idea is to avoid using ATM communication with reserved bandwidth all the way from the RNC to each individual Pico Node B unit, and instead to use the less expensive IP transport for at least the "last mile".
  • the present invention fulfils this object by means of a communications system, a transmission gateway unit and a Node B unit, as defined in the appended claims.
  • the standardization forum for WCDMA UMTS, also referred to as 3GPP, has attempted to define how control plane signaling (for controlling the base station) and user plane signaling, i.e. control and user data to/from the mobiles connected to the base station, shall be transported using an IP network.
  • 3GPP also discusses the possibility to design a mediation device for translating between the (current) ATM based transport system and an IP based system. However, no details are described for this device, neither how it is to be devised nor where it should be located.
  • the defined standard mainly describes how messages are to be mapped on different protocols for transport over the IP network, but it does not address all the problems which need to be solved when actually implementing IP transport in a real network, including e.g. problems with migration, i.e. when not all the equipment is designed for IP transport; detection, prevention and resolution of congestion problems in the IP network; security issues etc. In reality, no system with satisfactory function and performance has hitherto been presented.
  • the invention described herein enables an operator to use IP for control and data transport to/from radio base stations, without having to migrate all the centralized functions of the network, e.g. the RNC, from ATM.
  • the solution involves the introduction of a "translator" between the ATM connection of the RNC and the IP connection of the base station, similar to the interworking function described by 3GPP.
  • the translator, herein denoted Transmission Gateway Unit (TGU), will not only translate between ATM and IP transport; it is also a key element in solving the inevitable problems of using IP transport over a more or less public IP network to transport control and user plane signaling to/from a remotely located radio base station.
  • the TGU and the radio base station interact to prevent, detect and resolve problems which can arise when using IP transport between the two nodes.
  • the TGU will completely hide this complexity from the RNC, making the RNC believe that the Node B is still connected via ATM.
  • the invention therefore fulfils the object of providing a system which does not require modification of the RNC, even though the basis of the invention may be used also with RNCs modified for direct IP connectivity.
  • the TGU and Node B will be identical in various embodiments described herein.
  • Prevent congestion by an intelligent use of resources and bandwidth, e.g. by packing, admission control, prioritization etc.
  • - Determine priority for different kind of traffic between the nodes.
  • IP networks for connecting base stations will also facilitate completely new deployment scenarios: instead of having a few large base stations covering e.g. 6 cells from a single site, it will now be possible to deploy many small base stations, each covering only a single small cell, often denoted a pico-cell, e.g. an office.
  • the currently available RNCs are often designed for the existing large "several sector sites", where the RNC can rely on the Node B itself for part of the administration of the different cells within the Node B. Deploying a large number of pico cells demands more activity from the RNC, which may be difficult to implement.
  • the TGU can also be expanded to be a kind of "sub network RNC". This novel solution is also described herein.
  • a known problem with IP based networks is the path delay and variations in path delay, in particular if using wide area IP networks (e.g. the public internet). This kind of delay variation may not be a big issue for packet oriented services (e.g. HTTP, FTP etc) but may cause problems (e.g. poor perceived quality by the end-user) for circuit switched services like speech and video. This document shows a number of different ways of minimizing also this problem.
  • Other known problems with IP based networks are that they tend to degrade quickly when overloaded (e.g. due to congestion in a router). The solutions described herein also solve these problems.
  • ATM AIS Alarm Indication Signal
  • Nrt-VBR ATM service class Non real-time VBR
  • Ue User equipment as defined by 3GPP, e.g. a mobile phone
  • WiMax Worldwide Interoperability for Microwave Access
  • xDSL (all types of) Digital Subscriber Lines
  • Uplink transfer of information from Node B to RNC, or from Ue to RNC via Node B
  • Downlink transfer of information from RNC to Node B, or from RNC to Ue via Node B
  • Radio link dedicated channel (DPCH) over the air interface directed to a particular Ue.
  • Radio link set a set of radio links directed to the same Ue from different cells within the same multi-sector Node B
  • Transport bearer as defined by 3GPP, i.e. signaling connection used to transfer userplane data between RNC and Node B for one user plane transport channel (either common transport channel or dedicated) or for a group of coordinated transport channels.
  • one transport bearer is implemented as an AAL2 transport bearer which is identified by its VPI-VCI-CID combination.
  • a "transport bearer" can also be mapped on IP connections for transfer to Node B over an IP network.
  • FIG. 1 schematically illustrates interconnection of an ATM network and an IP network by means of a transmission gateway unit, in accordance with an embodiment of the invention
  • Fig. 2 schematically illustrates an alternative version of the embodiment of Fig. 1, where a base station is connected in an IP network implemented over or as part of a public internet, with a firewall connected to protect an OMC;
  • Fig. 3 schematically illustrates an alternative version of the embodiment of Fig. 1, where a geographically distributed IP network may use a mix of standard broadband internet connections or xDSL over telephone lines to reach individual remotely located Node B units;
  • Fig. 4 schematically illustrates an embodiment of the invention, in which circuit switched traffic is transported to/from Node B using ATM links, and packet switched user data is transported to/from Node B using IP networks;
  • Fig. 5 schematically illustrates NBAP over IP control plane signaling in accordance with an embodiment of the invention
  • Fig. 6 schematically illustrates simplified NBAP over IP, in accordance with an alternative control plane signaling embodiment
  • Fig. 7 schematically illustrates TGU transparent AAL2 signaling and ALCAP handling in accordance with an embodiment of the invention
  • Fig. 8 schematically illustrates an alternative solution for ALCAP handling, using ALCAP - IP - ALCAP, control plane signaling in accordance with an embodiment of the invention
  • Fig. 9 schematically illustrates NBU O&M signaling in accordance with an embodiment of the invention
  • Fig. 10 schematically illustrates external O&M signaling to TGU directly from OMC in accordance with an embodiment of the invention
  • Fig. 11 schematically illustrates control plane routing in an example with two control plane channels, in accordance with an embodiment of the invention
  • Fig. 12 schematically illustrates user plane signaling in accordance with an embodiment of the invention
  • Fig. 13 schematically illustrates user plane routing, in an example with three user plane channels in accordance with an embodiment of the invention
  • Fig. 14 schematically illustrates clock synchronization of a Node B in an IP network, in accordance with an embodiment of the invention
  • Fig. 15 schematically illustrates O&M VCC to Node B encapsulated in IP/UDP frames, in accordance with an embodiment of the invention
  • Fig. 16 schematically illustrates an O&M network in accordance with an embodiment of the invention
  • Fig. 17 schematically illustrates an alternative method for transferring
  • Alternate methods for establishing and releasing transport bearers in the TGU, and how to connect transport bearers to IP-UDP addresses, including the method of having the configuration more or less hardcoded; how the Node B and TGU can detect, prevent and react to congestion/overload in the IP network.
  • TGU-Node B communication in parts is public.
  • RNC connects to the TGU using IP transport and the TGU merely acts as an intelligent traffic concentrator
  • TGU functionality is included inside the RNC, thus creating an RNC with IP interconnect to the Node Bs.
  • BLAN IP based Node B Local Area Network
  • NBU Node B units
  • This BLAN may either be a true local network, e.g. a network inside a building or a campus; alternatively the BLAN may be a geographically distributed network, more like a WAN (wide area network), or even use the existing public IP network
  • RAN Radio Area Network
  • RNC radio network controller
  • O&M centralized O&M center
  • the BLAN concept allows the major part of the existing RAN (including RNC) to rely on ATM for Iub, and still use IP transport for at least "the last mile" towards Node B.
  • BLAN will use standard IP protocols for communication.
  • the transmission interface from BLAN towards the rest of the Radio Area Network (RAN) will be implemented in a Transmission Gateway Unit (TGU);
  • ATM-RAN external
  • BLAN internal
  • the Node B units will be connected directly to the BLAN and will be designed to accept Iub (control and userplane) and O&M communication over a standard IP connection, e.g. an Ethernet port.
  • BLAN is preferably designed to only depend on available standard products such as IP/Ethernet switches and xDSL modems.
  • the design and choice of communication protocols over BLAN facilitate the use of standardized and readily available products without modification.
  • BLAN is preferably prepared for IP-based RAN transport, as specified by 3GPP in release 5.
  • IPv6 shall be supported and IPv4 is an option.
  • IPv4 will be used for local transport within the BLAN but preparations for IPv6 will be made in the NBU and TGU.
  • using standard IP protocols it will also be possible to have the BLAN functionality implemented on existing IP network infrastructure, both WANs and e.g. office LANs, sharing this infrastructure with other types of IP traffic.
  • the Cub, i.e. the O&M interface between OMC and Node B
  • IPoA IP over ATM, as shown in section 6.1.1 above
  • the BLAN may be a local or geographically distributed network or any mix of these. In any case BLAN should only depend on available standard products such as IP/Ethernet switches and xDSL modems.
  • a local BLAN network could e.g. be a campus area or a large office complex requiring a number of Node B units.
  • a "long distance" (probably leased) ATM link will be needed from the RNC to the campus area.
  • IP transport e.g. over Ethernet, Gigabit Ethernet, WLAN, WiMax.
  • the TGU can act as an intelligent concentrator making it possible to save expensive bandwidth (ATM bandwidth) between RNC and TGU by reserving less than needed for a worst case simultaneous peak load on all Node B's connected to the BLAN.
  • the TGU will continuously monitor actual load on the ATM backhaul connection, and before accepting set-up (or reconfiguration) of an AAL2 transport bearer the TGU checks that requested additional bandwidth is available on the backhaul connection. In this way it will be possible for the operator to overbook the backhaul interconnect and get a soft degradation (new calls rejected, but no/few calls dropped or degraded) should an overload situation occur.
  • a local BLAN preferably makes use of dedicated Ethernet lines between the TGU and the different Node B's; If the Ethernet lines in the BLAN are shared with other IP traffic then this will add delay and delay variations to Iub traffic, which for some end-user services may cause some degradation of performance/quality (e.g. speech and/or video calls).
  • a geographically distributed BLAN network may use a mix of standard broadband internet connections and/or e.g. xDSL over telephone lines to reach individual remotely located Node B units, as illustrated in Fig. 3.
  • IP transport systems may also be used, e.g. WiMax.
  • For a distributed BLAN it may be possible to choose the location of the TGU such that the cost for transmission between RNC and TGU is minimized; in such a case the requirements on "ATM backhaul trunk efficiency" from TGU to RNC may be reduced significantly.
  • Connection between xDSL modem and Node B may be a standard Ethernet connection.
  • an xDSL modem may be included inside the Node B.
  • the actual IP transmission and routing network needed to implement the communication to the Node B units can be designed to only depend on functionality and the lower level protocols (IP, DiffServ etc) already implemented and widely used in standard IP networks, implying that existing equipment and also infrastructure can be reused.
  • using standard IP equipment and protocols also implies that the network used for communication to Node B units can be shared with other IP services, e.g. web surfing etc.
  • the BLAN can be implemented to use the public internet for communication between TGU and Node B units.
  • the main advantage is that the cost for communication to a remote Node B can be very low, and in fact it will be easy for anyone to connect a Node B e.g. to an existing office LAN or a broadband connection (e.g. ADSL) at home.
  • IP traffic between TGU and Node B is run directly on the internet, without any VPN-like tunnels. This is a more efficient solution, and as will be shown later it is reasonably easy to secure the most sensitive parts of the TGU-Node B communication without having to put VPN and IPsec on the complete traffic between the nodes.
  • the preferred solution for implementing BLAN over the public internet is a mix where sensitive control information (NBAP, ALCAP and O&M) is run on encrypted IPsec tunnels, while the user plane is run on an IP-over-IP tunnel but not encrypted, to save processing capacity in the gateways
  • the Node B itself terminates the IPsec tunnels on the one side.
  • the TGU and OMC network needs to be protected from the public IP network by a security gateway, which terminates the encrypted IPsec tunnels.
  • the IPsec tunnels may also be terminated directly in the TGU and OMC.
  • IP over IP and IPsec tunnels also make it possible to put the Node Bs on an office LAN, i.e. inside the firewall/security gateway (SGW) protecting this office LAN from the public internet.
  • the Node B will probably not have a public IP address, but will instead get a NAT address from the firewall/SGW protecting the office LAN.
  • By encapsulating the Iub BLAN traffic using IPsec and UDP encapsulation it will then be possible for the BLAN functionality to traverse this kind of NAT gateway (RFC 3948)
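RFC 3948 specifies UDP encapsulation of IPsec ESP packets so that they can traverse NAT gateways. A minimal sketch of that framing is shown below; the ESP payload bytes are a made-up placeholder, not real traffic.

```python
# Minimal sketch of RFC 3948 UDP encapsulation of an ESP packet: the ESP
# packet is simply prefixed with a UDP header on port 4500 so NAT devices
# can translate it. The ESP bytes below are an illustrative placeholder.
import struct

NAT_T_PORT = 4500  # IANA port for UDP-encapsulated ESP (RFC 3948)

def udp_encapsulate_esp(esp_packet, src_port=NAT_T_PORT, dst_port=NAT_T_PORT):
    length = 8 + len(esp_packet)  # UDP header is 8 bytes
    checksum = 0                  # RFC 3948 allows a zero UDP checksum
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + esp_packet

# Placeholder ESP packet: SPI (non-zero) + sequence number + opaque payload.
esp = bytes.fromhex("00000001" "00000001") + b"encrypted-payload"
frame = udp_encapsulate_esp(esp)
```

Because the SPI field of a real ESP packet is non-zero, the receiver can distinguish encapsulated ESP from IKE keepalive traffic on the same port.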
  • IP transport for the Iub will reduce the cost for transmission significantly, in particular if "best effort" links can be used for most of the path from RNC to Node B. However, the same Node B's will probably simultaneously also carry circuit switched traffic such as speech and/or video calls; using "long distance" IP transport may in some IP networks (e.g. IP networks with heavy traffic load) be unsuitable for speech/video due to the delay and delay variations between RNC and Node B.
  • - circuit switched traffic (e.g. speech, video) is transported to/from Node B using ATM links with reserved and guaranteed bandwidth;
  • - packet switched user data (e.g. TCP/IP, HTTP, FTP) is transported to/from Node B using IP networks.
  • each Node B needs to have (at least) two physical communication ports: - one (or more) port for IP connection, typically an Ethernet port, for connection to the BLAN.
  • - one (or more) port for ATM connection, e.g. STM-1 or E1, for connection to an ATM backhaul (via ATM routers etc).
  • the TGU will be responsible for splitting the data stream to/from the RNC between the ATM backhaul and the IP based BLAN (primarily used for packet switched user data)
  • Iub control plane (NBAP, ALCAP etc) and Cub can be transported to/from Node B using either BLAN or ATM backhaul. (Operator may select)
  • there are several methods the TGU can use to choose which communication path (i.e. IP or ATM backhaul) it should use for different transport bearers (user data channels) between RNC and Node B: in some cases all communication over a particular ATM PVC is always of the same type (e.g. if the RNC always places delay sensitive traffic like speech on the same ATM PVC), and then the TGU may be (semi-permanently) configured to
  • the TGU must choose for each AAL2 transport bearer (CID, part of a PVC) if to send it over the ATM backhaul (map it on other PVC to/from Node B) or if to translate it to IP traffic and send to Node B over BLAN.
  • the TGU needs to be dynamically configured with which backhaul path (ATM or IP) to use to/from Node B for each AAL2 transport bearer (identified by its CID). If this path selection information is included in (e.g.) the ALCAP signaling from RNC to Node B (e.g. added as a proprietary addition to ALCAP messages), then the TGU could use these messages to configure its routing table. The TGU would also need to inform the Node B which path to use for a particular transport bearer (IP or ATM/AAL2).
  • the TGU may use a combination of existing information in ALCAP messages to select path.
  • Another (probably better) option is that the Node B selects path based on information received from the RNC in NBAP messages (e.g. radio link setup request) and/or ALCAP messages (e.g. ALCAP Establish request);
  • the advantages with this method are that: - combining information from NBAP and ALCAP gives a better picture of what will be sent on a particular AAL2 transport bearer from the RNC (e.g. if it is to be used for an HSDPA channel).
  • If the Node B is responsible for selecting the path then it needs to inform the TGU which path to use for each transport bearer.
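The per-bearer path selection described above can be sketched as a routing table in the TGU, keyed on the AAL2 bearer identity (VPI, VCI, CID). The class and method names below are illustrative assumptions, not taken from the patent.

```python
# Sketch of a per-bearer path routing table in the TGU (illustrative only).
# A bearer is identified by its VPI-VCI-CID combination; the selected path
# is either the ATM backhaul or the IP-based BLAN.

class BearerRouter:
    """Routes each AAL2 transport bearer to the ATM or IP path."""

    def __init__(self):
        self._routes = {}  # (vpi, vci, cid) -> "ATM" | "IP"

    def configure(self, vpi, vci, cid, path):
        # The path could come from proprietary additions to ALCAP messages,
        # or from a message sent by the Node B after it has selected the path.
        if path not in ("ATM", "IP"):
            raise ValueError("path must be 'ATM' or 'IP'")
        self._routes[(vpi, vci, cid)] = path

    def path_for(self, vpi, vci, cid, default="ATM"):
        # Unknown bearers fall back to the ATM backhaul by default.
        return self._routes.get((vpi, vci, cid), default)

router = BearerRouter()
router.configure(vpi=1, vci=40, cid=8, path="IP")   # e.g. an HSDPA bearer
router.configure(vpi=1, vci=40, cid=9, path="ATM")  # e.g. a speech bearer
```

Whether the table is filled in by the TGU (from ALCAP) or by the Node B (from NBAP/ALCAP) only changes who calls `configure`; the lookup stays the same.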
  • a Node B requires a high quality frequency reference for its radio transmitter and receiver, typically 50-100 ppb depending on class of base station.
  • when a Node B is connected to an ATM network using synchronous lines (e.g. E1 or STM-1) then the Node B may derive its frequency reference from the clock of the transmission line. If a Node B is connected only via (e.g.) Ethernet backhaul (e.g. BLAN), then this backhaul cannot provide the required clock signal, and other methods will be needed to enable the Node B to fulfill the frequency accuracy requirements stated by 3GPP.
  • the general recommendation in 3GPP for network synchronization is to supply a traceable synchronization reference according to ITU-T G.811.
  • when Ethernet is introduced for the Layer 1 interface there is no continuous clock traceable to a primary reference clock; 3GPP does not specify how the frequency recovery is done in this case.
  • the proposed solution is to rely on a highly stable reference oscillator inside the Node B for frequency reference. Even at a reasonable cost it is possible to equip each Node B with an internal reference oscillator having a guaranteed short term stability of better than 25 ppb, i.e. well within the 3GPP accuracy requirement of 0.1 ppm for a local area BS.
  • in order for the Node B to be able to compensate for the aging of its internal reference oscillator, the Node B needs to synchronize its internal clock to some external reference source.
  • This reference clock source is herein referred to as a "time server".
  • the time server can be any existing NTP server in the network.
  • the Node B acquires a time reference either from some time server in the network, or from the TGU internal frequency reference which is derived from the E1/T1/J1 or STM-1 connecting the TGU with the RNC.
  • the quality of the synchronization over an IP network depends very much on the delay variations (jitter) over the IP transport network.
  • a fixed delay is less of a problem, but a jitter (delay variation over time) may be difficult to separate from variations on the Node B internal clock.
  • the synchronization accuracy over an IP connection is highly dependent on link delays/variations and processing time. If the time server is located on the internet the accuracy is expected to be 1 to 50 ms; trying to remove this variation with a simple low-pass filter would require a time constant of 2-4 weeks. Even with these variations NTP could be used to continuously evaluate the quality of the oscillator and perform slow adjustments to compensate for the aging of the oscillator, as described in the following section.
  • the jitter could be substantially less, in particular for a local BLAN with its own Ethernet lines. If the IP network uses IPv6 then jitter could also be decreased by prioritizing timing messages.
  • the TGU itself needs to have a clock recovery function, and this may be: - a network reference clock, in case of an SDH or PDH backbone net, or - an extremely stable oscillator (free running or tracked to an internet NTP server).
  • the method to overcome this jitter problem is to perform the synchronization quite often and to use statistics from all synchronization attempts to improve the quality of the synchronization.
  • the Node B will send a new synchronization request to the appointed time server (using standard messages specified for e.g. NTP or IEEE 1588) every time a period T has lapsed since the last request, where T preferably is a constant period but may also be allowed or controlled to vary. If an NTP server is used as time server, then these messages cannot be sent more frequently than once every 16 seconds to each NTP server.
  • Several NTP servers may be used by the same Node B to improve its synchronization characteristics.
  • the Node B applies statistical processing to the NTP messages received over a jittery IP network, enabling it to detect within a few hours even a very small frequency drift (1-10 ppb) between its internal clock and a reference time server.
  • Tests have shown that by applying these algorithms, NTP can successfully be used to continuously evaluate the quality of the Node B internal oscillator and perform slow adjustments to compensate for the aging of the oscillator, even if variations are in the order of 50-100 ms.
  • Examples of usable algorithms and methods include "Time synchronization over networks using convex closures" by Jean-Marc Berthaud, IEEE Transactions on Networking, 8(2):265-277, April 2000; "Clock synchronization algorithms for network measurements" by L. Zhang, Z. Liu, and C. H. Xia, in Proceedings of IEEE INFOCOM, June 2002, pp. 160-169; and "Estimation and removal of clock skew from network delay measurements" by S. B. Moon, P. Skelly, and D. Towsley, in Proceedings of IEEE INFOCOM, volume 1, pages 227-234, March 1999.
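A much simplified sketch of the idea behind such statistical methods (not the cited convex-closure or skew-removal algorithms themselves) is to fit a line through many (local time, measured offset) samples: the slope estimates the frequency drift, and sample jitter averages out over a long observation window.

```python
# Simplified least-squares estimate of frequency drift from repeated
# NTP-style offset samples. Illustrative sketch only, with synthetic data.

def estimate_drift_ppb(samples):
    """samples: list of (local_time_s, measured_offset_s) pairs.
    Returns estimated frequency drift in parts per billion (ppb);
    the slope of offset vs. time is the relative frequency error."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return (num / den) * 1e9

# A clock drifting by 10 ppb gains only 10 ns per second, yet even with tens
# of milliseconds of jitter per sample the drift becomes visible after many
# samples spread over a long interval.
import random
random.seed(1)
true_drift = 10e-9  # 10 ppb
samples = [(t, true_drift * t + random.gauss(0, 0.050))  # 50 ms jitter
           for t in range(0, 14 * 86400, 64)]            # two weeks, every 64 s
est = estimate_drift_ppb(samples)
```

With roughly 19000 samples over two weeks, the least-squares slope recovers the 10 ppb drift to within a few ppb despite the 50 ms per-sample jitter.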
  • the time server may be located inside the TGU; a solution which is particularly appealing if the TGU is located relatively close to the Node B, e.g. when using a local "BLAN".
  • instead of locating the time server inside the TGU, a commercially available standard time server could be used, e.g. an NTP server.
  • This separated time server may be locked to other primary synchronization sources as described by standards with synchronization hierarchies for time servers using NTP, IEEE 1588 etc.
  • the separated time server may also be implemented by using a GPS receiver, making it more independent of the jitter over the IP network, something particularly useful if the time server can be located close to the Node B units using it for synchronization reference.
  • the time server is connected to IP network, implying that it may be located wherever suitable (e.g. close to a window if a GPS receiver is used) and can e.g.
  • the TGU will be informed about all transport bearers being established, reconfigured and released. This can be done by the TGU terminating the ALCAP connection for all Node B's connected to it. The TGU will then use the contents of the received ALCAP messages to
  • Admission control functions in TGU can be disabled either for ATM network or IP network or for both.
  • Once the TGU has selected the UDP/IP port to use for the requested transport bearer, the TGU sends a message to the Node B to inform it which UDP/IP port it should use for a particular "binding ID".
  • TGU may implement a fixed mapping between Binding ID (BID) and UDP port for a particular Node B.
  • 6.3.1.1 Admission control in TGU for ATM network side
  • the TGU can use the information received in messages in e.g. ALCAP to perform admission control for the ATM network. If the TGU has been configured (or in some other way been informed) about the maximum allowable bandwidth consumption per VP or per VC then TGU can compare the new request with e.g. either
  • the TGU may also continuously monitor the load (e.g. delays, buffer sizes, queues) on the ATM connections and use this to decide whether to allow the new ATM transport bearer to be created and started.
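The admission decision described above can be sketched as a simple reservation check per VC: a new bearer is admitted only if its requested bandwidth fits under the configured cap. The class name and the limits below are illustrative assumptions.

```python
# Illustrative sketch of per-VC admission control in the TGU: a new AAL2
# transport bearer is admitted only if its requested bandwidth fits under
# the configured cap for the VC it would use.

class VcAdmissionControl:
    def __init__(self, max_kbps):
        self.max_kbps = max_kbps  # configured allowed bandwidth for the VC
        self.reserved_kbps = 0    # sum over currently admitted bearers

    def try_admit(self, requested_kbps):
        """Return True and reserve bandwidth, or False. Rejecting the new
        call instead of degrading existing ones gives the soft degradation
        behaviour described for an overbooked backhaul."""
        if self.reserved_kbps + requested_kbps > self.max_kbps:
            return False
        self.reserved_kbps += requested_kbps
        return True

    def release(self, kbps):
        self.reserved_kbps = max(0, self.reserved_kbps - kbps)

vc = VcAdmissionControl(max_kbps=1920)  # e.g. an E1-sized backhaul VC
ok1 = vc.try_admit(1500)                # admitted
ok2 = vc.try_admit(600)                 # rejected: would exceed the cap
vc.release(1500)
ok3 = vc.try_admit(600)                 # now admitted
```

A measured-load variant would compare against observed consumption instead of the reserved sum, as the monitoring bullet above suggests.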
  • Admission control in TGU for IP network side: similarly to the ATM side (above), the TGU may also perform admission control for the IP network side. If the TGU has been configured (or in some other way informed) about the maximum allowed bandwidth consumption on the IP network, then the TGU can compare a new request for additional bandwidth (the new transport bearer) with e.g. either - the sum of current consumption (measured or estimated) of bandwidth to/from the TGU on the IP network interconnect, or
  • Since a transport bearer allocation request for the ATM network only states the requested bandwidth on the ATM side, the TGU will need to recalculate the bandwidth requirement to take into consideration the different overheads in ATM and IP networks.
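One plausible form of this recalculation is sketched below. The header sizes are typical values assumed here for illustration; the real overhead depends on packet sizes and the chosen encapsulation, which the patent does not fix.

```python
# Rough, illustrative recalculation of an ATM-side bandwidth figure into an
# IP-side figure. Header sizes are typical values and an assumption here.

ATM_CELL = 53           # bytes per ATM cell
ATM_CELL_PAYLOAD = 48   # payload bytes per cell
AAL2_CPS_HEADER = 3     # AAL2 CPS header bytes per packet
IP_UDP_HEADER = 20 + 8  # IPv4 + UDP headers per packet

def atm_to_ip_bps(atm_bps, avg_packet_payload):
    """Convert a requested ATM-side bandwidth (cell rate, bit/s) into an
    estimated IP-side bandwidth for the same user payload stream."""
    # Strip ATM cell overhead to get the AAL2 CPS byte rate.
    cps_bps = atm_bps * ATM_CELL_PAYLOAD / ATM_CELL
    # Strip the per-packet AAL2 CPS header to get the pure payload rate.
    payload_bps = cps_bps * avg_packet_payload / (avg_packet_payload + AAL2_CPS_HEADER)
    # Add IP/UDP header overhead per packet (layer-2 framing ignored here).
    return payload_bps * (avg_packet_payload + IP_UDP_HEADER) / avg_packet_payload

# e.g. a 100 kbit/s ATM-side reservation carrying ~40-byte payload packets
ip_bps = atm_to_ip_bps(100_000, avg_packet_payload=40)
```

For small packets the IP/UDP headers dominate, so the IP-side figure can exceed the ATM-side one; for large packets the relation reverses.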
  • the TGU can also implement admission control for preventing overload/congestion of the IP network; in such a case the TGU may deny a transport bearer setup because the TGU and/or Node B suspects that the IP network is overloaded/congested at some point (this could be some router between TGU and Node B, and need not necessarily be the access point of the Node B or TGU). For further details refer to the section "handling of congestion in IP network".
  • the Node B terminates ALCAP and then sends a message to the TGU to set up the transport bearer, requesting the TGU to create a mapping between a certain CID (i.e. transport bearer) and a UDP port.
  • the Node B can (if needed/requested by operator) implement admission control both for ATM connections (connections via TGU) and IP network.
  • Admission control of ATM interconnect can be implemented in Node B but will require that Node B has been configured with information about allowed capacity of VP and VC used in the TGU. In a further improvement the Node B can also receive measurements collected by TGU for the ATM interconnect and use this information for its admission control procedures.
  • Admission control of the IP interconnect (e.g. to avoid IP network overload) can be implemented in the Node B using the same procedures as described in previous chapter.
  • TGU can do any admission control (i.e. checking if requested transmission bandwidth on ATM and/or IP network is available).
  • the VPI corresponds to a VP directed to a certain Node B, i.e. the connection between VPI and address in the IP network needs to be configured into the TGU.
  • the Node B address in the IP network may either be a fixed IP address or an address assigned via DHCP; In the latter case the TGU can find the IP address using DNS.
  • the mapping between VCI-CID and UDP port can be implemented as a simple mathematical formula hard coded into the software in the TGU and Node B.
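One possible hard-coded formula is sketched below. The patent only says a simple formula may be used; the base port and the exact packing are illustrative assumptions.

```python
# One possible hard-coded VCI/CID -> UDP port formula (illustrative only).
# Because both TGU and Node B compute the same formula, no signaling is
# needed to agree on the port for a given transport bearer.

UDP_PORT_BASE = 20000  # assumed base; leaves room for the shifted VCI/CID

def udp_port_for(vci, cid):
    """Deterministic VCI/CID -> UDP port mapping, identical in TGU and Node B."""
    # CID is 8 bits (0-255) in AAL2, so shifting the VCI by 8 keeps the
    # mapping collision-free for a small range of VCIs.
    port = UDP_PORT_BASE + (vci << 8) + cid
    if port > 65535:
        raise ValueError("VCI/CID combination outside the assumed port range")
    return port

port = udp_port_for(vci=40, cid=8)  # same result computed on both nodes
```

The trade-off against the signaled mapping described earlier is flexibility: a formula needs no messages, but cannot adapt the port choice per bearer.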
  • the Node B can implement admission control using the procedures described in previous sections.
  • the prioritization implies that the transmitting node (TGU or Node B) needs to set a proper value in the "type of service" field (DSCP) in each IP header according to the priority selected for this particular IP packet.
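On a standard sockets API, setting the DiffServ field for outgoing packets can be sketched as follows; the choice of DSCP value is an illustrative assumption.

```python
# Illustrative sketch: marking outgoing IP packets with a DSCP value via the
# standard IP_TOS socket option. DSCP occupies the upper 6 bits of the former
# "type of service" byte; the lower 2 bits are the ECN field.
import socket

def dscp_to_tos(dscp):
    """Shift a 6-bit DSCP value into the 8-bit TOS byte (ECN bits zero)."""
    return (dscp & 0x3F) << 2

EF = 46  # Expedited Forwarding, a plausible choice for delay-sensitive traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
# Datagrams sent on this socket now carry DSCP 46 in their IP headers.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

The same marking could equally be applied per packet by the TGU's forwarding path rather than per socket; this sketch only shows the field manipulation.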
  • the TGU and/or Node B could also include the policing and shaping functions required by the DiffServ cloud (i.e. the IP network protected by DiffServ), in which case the node doing this needs to be configured with the "service contract". Similar procedures can be used for other types of prioritization schemes, e.g. when implementing BLAN over MPLS networks instead of pure IP networks.

6.4.3 Priority for control information to/from Node B
  • Data flows containing traffic control information should typically be given a high priority on the IP network; the reason for this is that delayed or lost messages may cause timeouts on higher layers (RRC etc.), dropped calls, etc.
  • Data flows containing operation and maintenance information (O&M) could typically be given a lower priority on the IP network; the reason for this is that most of these flows are not real-time critical (e.g. software downloads) and that the communication either is protected by retransmission protocols such as TCP and FTP or can be protected on the application layer, e.g. by the originating node resending a request message if no reply was received within a defined time.
  • the main method used for creating prioritization for user data flows is to assign each "AAL2 transport bearer" (e.g. a bearer assigned to a particular DCH or set of coordinated DCHs) to a VCC (where the VCC is identified by its VPI and VCI) with a given ATM service class.
  • Different networks support different numbers and types of ATM service classes, but typically an ATM network supports e.g.:
  • Rt-VBR (real-time variable bit rate)
  • Nrt-VBR (non-real-time VBR)
  • Each of these types corresponds to a priority level defined by the network; the type of priority and the handling of priority differ between network implementations.
  • the TGU knows which VCC received the data from the RNC, and can therefore use any information about ATM service class of the VCC to assign a priority for the IP network.
  • For the uplink data stream the Node B needs to know the ATM service class of the VCC onto which the TGU will map the particular user data. The Node B can obtain this information in a number of different ways: - if the Node B terminates ALCAP, then the ALCAP message itself informs the Node B on which VCC the transport bearer will be assigned; if the Node B also knows the ATM service class of that VCC, it can assign IP network priority accordingly. - if the Node B gets information about the assigned VCC in some other way, either from the RNC (e.g. via NBAP) or from the TGU (via some message originating from the TGU), then if the Node B knows the ATM service class for that VCC it can assign IP network priority accordingly.
  • since the Node B gets downlink userplane data from the TGU marked with a priority, the Node B can simply use the same priority for the uplink information associated with the same AAL2 transport bearer, i.e. data mapped to the same UDP port.
  • In some cases ATM service classes for the VCC cannot be used for prioritization, e.g. because the ATM network or RNC implementation does not use this feature.
  • In such cases the IP network priority could be selected using other means, e.g.:
  • the TGU and/or Node B could calculate IP network Priority from information received in ALCAP
  • Node B and TGU can select priority level depending on knowledge about the type of data that will be transported on that particular transport bearer, e.g.:
  • the Node B/TGU may use other information received from the RNC to also differentiate priorities between different types of dedicated channels, e.g. assigning higher priority to "speech calls" and/or other types of circuit switched services; the Node B can deduce the type of end-user service by looking at detailed parameters for the radio access bearer (RAB) when the RNC configures/reconfigures the radio link, e.g. the number and type of transport channels, the transport formats for transport channels, and the ToAWS-ToAWE (i.e. the timing window for Node B reception of downlink userplane data on Iub from the RNC). In particular the timing window gives the Node B a very good hint about the priority and timing constraints the RNC wants to assign a particular transport bearer.
  • priority levels may be assigned either hard coded in the software in TGU and Node B or defined by the operator as part of the configuration of the Node B and/or TGU.
  • This predefined priority level could be an absolute level, or some kind of offset related to other types of traffic.
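A minimal sketch of such a priority assignment, mapping ATM service class to a DSCP value with an optional operator-configured offset; the specific code points (EF, AF41, best effort) are assumptions, not given by the source:

```python
# Illustrative mapping from ATM service class to a DSCP value; the
# chosen code points are assumptions -- an operator could equally
# configure absolute levels or offsets relative to other traffic.

DSCP_BY_ATM_CLASS = {
    "CBR":     46,  # EF  - e.g. speech / CS services
    "rt-VBR":  46,  # EF
    "nrt-VBR": 34,  # AF41
    "UBR":      0,  # best effort - e.g. PS user data
}

def dscp_for(atm_service_class: str, offset: int = 0) -> int:
    """Return the DSCP to write into the IP header (6-bit field)."""
    base = DSCP_BY_ATM_CLASS.get(atm_service_class, 0)
    return max(0, min(63, base + offset))
```

The `offset` parameter illustrates the "offset related to other types of traffic" option; clamping to 0..63 keeps the value inside the 6-bit DSCP field.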
  • The TGU will be interfaced with ATM towards the RNC and with an IP network towards one or a number of Node Bs. Between the TGU and the Node Bs there will be several IP routers that handle the traffic between the TGU and the several Node Bs. The same IP network may also be handling other types of IP traffic, e.g. if the network is a WAN/LAN used for public internet. Typically, routers in IP networks respond to congestion (overload) by delaying and/or dropping datagrams.
  • For IP networks, standard solutions exist for handling priority between different kinds of IP traffic, e.g. DiffServ (RFC2475 etc.); however, these are not always used, and even when used they cannot always solve the problem of overload/congestion.
  • IP packets will be delayed/dropped in a random fashion and neither Node B nor TGU/RNC will be informed about this, but only see the effects.
  • The TGU and/or RNC plus Node B can get an early warning of a potential congestion situation and take action to decrease the traffic before service quality degrades too much; if the RNC/TGU and/or Node B manage to reduce traffic on the IP network in a controlled way, then a disaster situation can be avoided and the IP network can recover faster from the congestion, without being overloaded by e.g. retransmissions. If all (or at least the critical) routers in the network monitor and report the load situation to some central management element, then the TGU and/or Node B may be informed about this in order to take the necessary action to reduce their load on the IP network.
  • both TGU and Node B need a method for
  • both methods should be used simultaneously, thus making the TGU-Node B IP Iub supervision independent of the congestion policies used by routers in the IP network, i.e. if they mainly drop or mainly delay traffic.
  • TGU sends a message to Node B and just before sending it to the IP network the TGU stamps the message with current reading of its internal clock.
  • each sender and receiver needs to continuously count the number of packets sent and received. With some periodicity, e.g. once every 5 seconds, the TGU should send a status message to the Node B telling it how many IP packets were sent since the last status message, and the Node B compares that with the number of IP packets received during the same period. The difference between the counter of packets sent from the TGU and received in the Node B gives the Node B information about the number of IP packets dropped/lost by the IP network.
  • the same procedure should also be used for the uplink direction, i.e. the Node B counting the number of IP packets sent and the TGU counting the number of IP packets received.
  • the counters exchanged could of course also be "accumulated number of IP packets sent and received” thus removing the problem with “sampling periods" not being identical in sending and receiving node.
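The accumulated-counter variant can be sketched as follows; the message names and fields are assumptions for illustration:

```python
# Sketch of loss detection from accumulated packet counters, as
# exchanged in periodic status messages (field names assumed).

class LossMonitor:
    def __init__(self):
        self.last_peer_sent = 0     # accumulated count reported by sender
        self.last_local_recv = 0    # accumulated local receive count

    def update(self, peer_sent_total: int, local_recv_total: int) -> int:
        """Return packets lost on the IP network since the last report.

        Using accumulated totals (rather than per-period counts) makes
        the result insensitive to the sampling periods differing
        slightly between sending and receiving node."""
        sent = peer_sent_total - self.last_peer_sent
        recv = local_recv_total - self.last_local_recv
        self.last_peer_sent = peer_sent_total
        self.last_local_recv = local_recv_total
        return max(0, sent - recv)
```

The same object would be instantiated once per direction (and, as described later, once per IP priority level).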
  • Another option for implementing these measurements is to not map 3GPP frame protocol frames (user data frames) directly on top of UDP/IP as defined by 3GPP, but instead also add an extra header containing a counter and a timestamp.
  • the TGU would need to add this extra header on Frame protocol frames in downlink and Node B to check them.
  • the Node B would add the extra header, TGU would check them and remove the extra header before transmitting the messages on the ATM network.
  • the Node B shall check that CFN of a particular message falls within a given capture window ToAWS - ToAWE as defined in TS25.402. Any variations in this could be used by Node B to detect if delay is increasing in the network from RNC to Node B. In the same way the RNC can detect an increasing delay.
  • this method is difficult to use in the TGU because the TGU would then need to know the relation between SFN and CFN for each transport bearer (something that could be solved by the Node B sending a message to the TGU about this).
  • the measurement depends on Node B and RNC actually transmitting the data at a constant offset to SFN/CFN, i.e. any transmit timing variations caused by load inside RNC and/or Node B could be misinterpreted as delay variations on the IP network. However, this node internal delay is most probably rather small compared to the delays of the IP network.
  • CFN/SFN stamping of 3GPP Frame protocol frames could be used for implementing the measurements needed, but it is much simpler to get the information by introducing completely new and dedicated messages between TGU and Node B as described with the preferred method above.
  • the preferred implementation is that the Node B and TGU, by periodically sending messages over the IP network, exchange information such that both TGU and Node B keep statistics of IP network delay, delay variation and lost IP packets for both uplink and downlink.
  • If both TGU and Node B have the same kind of information, then both nodes can take action immediately if a suspected overload/congestion situation is detected. If both delay and lost IP packets are measured over the IP network, then each transmitting node (i.e. TGU for downlink and Node B for uplink) should:
  • the node responsible for transmitting in the degraded direction shall immediately take action to resolve the situation, as described below.
  • the supervision shall use at least two thresholds for the supervision:
  • - a warning level indicating that at least something should be done to prevent the situation from getting worse
  • - a critical level indicating that the load on the IP network must be decreased significantly immediately.
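The two-threshold supervision above can be sketched as follows; the threshold values and the mapping of measurements to levels are illustrative assumptions, the source only mandates a warning level and a critical level:

```python
# Minimal sketch of two-threshold congestion supervision.
# Threshold values are placeholders, not taken from the source.

WARNING_LOSS, CRITICAL_LOSS = 0.01, 0.05      # fraction of packets lost
WARNING_DELAY, CRITICAL_DELAY = 0.020, 0.050  # one-way delay, seconds

def congestion_level(loss_ratio: float, delay_s: float) -> str:
    """Classify the current IP network state for one direction."""
    if loss_ratio >= CRITICAL_LOSS or delay_s >= CRITICAL_DELAY:
        return "critical"   # load must be decreased significantly, now
    if loss_ratio >= WARNING_LOSS or delay_s >= WARNING_DELAY:
        return "warning"    # start e.g. dropping low-priority FP frames
    return "ok"
```

The node transmitting in the degraded direction would act on the returned level as described in the following bullets.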
  • the data between TGU and Node B will be separated on different priority levels on the IP network (e.g. using DiffServ to prioritize). If different priority levels are used on the IP network between TGU and Node B, then supervision for congestion/overload should be done separately for each priority level, thus making it possible to detect congestion affecting e.g. only "low priority traffic" (e.g. user data for the packet switched (PS) end-user services). In such a case:
  • the nodes (TGU and Node B) need to have separate counters of sent and received IP packets per priority level. - Messages between TGU and Node B for measuring delay of the IP network need to be sent on each IP priority level
  • Measuring delay and/or lost IP packets per priority level also makes it possible for the TGU and/or Node B to implement a congestion/overload warning with different thresholds for different types of traffic, e.g.: - tolerating worse IP network behavior for the PS userplane than for e.g. high-priority circuit switched (CS) services like speech,
  • Node B and/or TGU reduce the amount of data the node transmits.
  • the first step in a suspected congestion/overload situation is that the transmitting node selects some FP frames which are discarded and not sent onto the IP network. This must be done by the Node B for uplink data and by the TGU for downlink data; the main advantage of this is that if congestion/overload is only present in one direction of the IP network (e.g. from TGU to Node B), then the other direction is unaffected.
  • the transmitting node selects FP frames to drop from the transport bearers with assigned lowest priority, i.e. transport bearers which will be carried on ATM connections with lower service class. No frames should be dropped from UDP ports dedicated for Control plane information, e.g. NBAP and ALCAP.
  • If the RNC is not aware of the overload/congestion supervision performed by the Node B and TGU, then either the Node B or the TGU (or both) needs to decide and take action.
  • the preferred implementation is that the decision about dropping transport bearers and/or complete RL/RLS is taken by the Node B.
  • the Node B needs to select which dedicated radio links, radio link sets (RLS), and/or HSDPA data flows dedicated to a certain mobile should be dropped. The decision on what to drop can be based on
  • Node B may select to drop e.g. any RLS or HSDPA data flow.
  • the preferred method for dropping RLS / HSDPA data flows is that Node B sends an NBAP message (e.g. NBAP message Radio link failure or Error indication) with proper cause value to the RNC. After this the RNC should as soon as possible remove the RLS including all transport bearers.
  • the Node B could also send a message directly to the TGU asking the TGU to stop transferring data corresponding to the transport bearers of the RLS; however, in most cases this is probably not needed.
  • the TGU could also autonomously decide to drop a number of downlink data connections, i.e. discarding all downlink data for those transport bearers.
  • admission control is a way for a node to deny a request for new or modified bandwidth, e.g. when the RNC tries to set up a new transport bearer and/or modify the reserved bandwidth for an existing one. Where admission control is implemented, it should also be used to combat overload/congestion by denying new services (e.g. new calls) to be set up/increased if the IP network is already under stress and close to overload.
  • the admission control in this case can be performed by e.g.
  • the Node B can use the NBAP message radio link failure to indicate to
  • If the RNC has implemented the NBAP message ERROR INDICATION (not implemented by all RNCs), then this can instead be used from the Node B to the RNC to inform the RNC that a radio link set needs to be dropped due to problems on the IP network.
  • the TGU may inform the RNC about a problem on the IP network by issuing AIS or RDI on one or more of the ATM VPs or VCs.
  • FP packets transferred over Iub are relatively small, typically less than 45 bytes. Instead the frequency is rather high, e.g. one FP packet is transferred every 20 ms for each AMR speech call. For a single-cell, single-carrier Node B, the maximum number of simultaneous speech calls is about 100, which gives a total rate of FP frames in excess of 4 kHz. It is not obvious that this kind of solution works well for an IP based network, where typically the IP packets are larger (max MTU on the order of 1500 bytes) and less frequent. For cases where the routing capacity (number of IP packets routed per second) of the IP network is the limiting factor, it would be much better if the end-points (in our case the TGU and Node B) reduced the number of packets and instead made each packet bigger.
  • the TGU and Node B maps one FP message (containing e.g. one AMR speech frame) onto one UDP-IP packet for IP transport network.
  • This method makes the transform between ATM and IP easy, but at the cost of unnecessary high frequency of small IP packets on the IP network. However, this is the preferred method since this is what is recommended by 3GPP.
  • the TGU and Node B may also pack several FP messages into the same UDP-IP packet.
  • For packing of FP messages in the downlink, the TGU has a small internal buffer with a size corresponding to the max MTU of the IP network.
  • the TGU and/or Node B should be configured with the max MTU of the IP network in order to ensure that the transmitting node does not generate IP packets longer than the max MTU. All FP messages incoming to the TGU from the ATM network will be added to this buffer in the same sequence as they arrive from the ATM network.
  • the downlink buffer in the TGU is sent as a message over the IP network to the Node B as soon as either - the first message in the current buffer has been stored in the buffer for more than an allowed maximum delay time, typically ~5 ms, or
  • When receiving a packed message from the IP network, the Node B can then unpack the message and extract the individual FP messages. For the uplink the Node B performs the same packing process, with the difference that in this case the FP messages have been produced by uplink signal processing inside the Node B. If priority is used in the IP network, e.g. using DiffServ, then the TGU and Node B should implement separate buffering and packing for each priority level.
  • the TGU will be configured with all data needed for the termination of ATM PVCs and information on how data shall be transformed into the IP network.
  • the TGU needs to be configured with at least: - the address of the Node B in the IP network, which could be a fixed IP address or a logical name which can be used for lookup using DNS, and - detailed data for all ATM connections (VP-VC) intended for this Node B;
  • This data includes e.g. ATM service class, parameters for the VC etc
  • the configuration data for the ATM parameters are sent to the Node B (as if the node had been connected via ATM).
  • When the Node B receives this data, it determines the IP address of the TGU it has been assigned as its interface to the ATM world, and then sends the configuration parameters for ATM to the TGU.
  • the same idea can be used for fault management and performance management, i.e. the Node B collects from the TGU data related to the Node B's interface to the ATM world; the Node B then reports this information to the central O&M system, making the TGU virtually invisible (but still managed) in the O&M network.
  • a further advantage of this method is that since the Node B holds all data, then if a TGU fails (or the Node B fails to contact a particular TGU), the Node B can instead try to establish contact with another TGU (a hot standby). When the Node B has sent all configuration to that TGU, the only remaining action for a switch-over would be to change the ATM network switching such that the VPs are switched to this new TGU.
  • If the IP network used for communication between TGU and Node B is in some way accessible to the public, then some kind of protection will be needed to prevent intrusion and disturbance of the operation of the nodes.
  • the best way to achieve this would be to put all the IP communication between TGU and Node B on a VPN connection, preferably protected by IPsec or similar. However, this may be overkill, and the protection may instead depend on:
  • the traffic control information between RNC and Node B (NBAP, ALCAP etc.) and between TGU and Node B (if and where used) is normally not protected other than by the fact that it is completely binary and in an uncommon format. For increased protection these particular data flows may be encrypted between TGU and Node B.
  • MD5 algorithms scrambling the bits transferred.
  • the O&M information could preferably be encrypted using IPsec.
  • the TGU and Node B may implement a firewall using e.g. an IP address filter protecting the nodes from malicious traffic.
  • An optional solution would be to put a security gateway either in front of the TGU or incorporate this into the TGU. This security gateway function could then terminate VPN tunnels from the Node B, i.e. the node B terminates the other end of the tunnel.
  • VPN tunnels, e.g. IPsec in ESP tunnel mode, could be used for the control plane (NBAP, ALCAP and O&M).
  • For userplane (corresponding to the AAL2 transport bearers) transport over the IP network, the frame protocol messages could also be transported over a tunnel, either with encryption (e.g. IPsec ESP) or with null encapsulation, i.e. without additional encryption for the IP network.
  • An OAM VCC is configured for each Node B in the TGU. All AAL5 frames received on the OAM VCC are encapsulated in IP/UDP frames (IPoATM over UDP/IP) by the TGU and sent to the Node Bs. This is illustrated in Figs 15 and 16. In this option both the Node B OAM and UP IP addresses can be configured with DHCP.
  • the TGU may act as a DHCP server if needed but this requires DHCP relay agents (one for each hop) in the IP network between TGU and Node B.
  • For the OAM IP address the "normal" DHCP server in the OAM network may be used. The precondition is that the lower IP address (UP IP) is already configured, since it will be used to carry all packets to the TGU.
  • the Node B will reply to Inverse ATM ARP requests sent on the VCC. Since IP over ATM is encapsulated over UDP/IP it may be required to decrease the MTU because of the extra header (28 bytes).
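The 28 bytes of extra header are the IPv4 header (20 bytes) plus the UDP header (8 bytes) added by the encapsulation, so the inner MTU shrinks accordingly:

```python
# MTU adjustment for IP-over-ATM carried over UDP/IP: the outer
# encapsulation adds an IPv4 header (20 bytes, without options) and a
# UDP header (8 bytes), so the inner MTU must be reduced by 28 bytes
# to avoid fragmentation.

IPV4_HEADER = 20
UDP_HEADER = 8

def inner_mtu(outer_mtu: int) -> int:
    return outer_mtu - (IPV4_HEADER + UDP_HEADER)
```

For a typical Ethernet MTU of 1500 bytes this leaves 1472 bytes for the encapsulated IPoATM traffic.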
  • the Node B only has 1 IP address (UP IP) but from the OAM network it looks like 2 addresses since all IP packets to the TGU are forwarded to the Node B.
  • Both the Node B OAM and UP IP addresses can be configured with DHCP.
  • For the OAM IP (TGU IP) address the "normal" DHCP server in the OAM network may be used. In this option the TGU will act as a DHCP client to configure the address. The Node B does not have to know this IP address.
  • Another option is for the TGU to act as a DHCP server for both user plane IP addresses and OAM IP addresses.
  • the TGU will still answer InvARP requests with the OAM IP address provided to each Node B.
  • the user plane transport over Iub as specified by 3GPP is mainly/originally intended for transport over ATM networks.
  • ATM networks are mostly designed to be able to provide a guaranteed quality of service in terms of loss of frames and timeliness in delivery, i.e. the jitter in transport delay is generally assumed to be rather low (on the order of a few ms for prioritized bearers).
  • Seen from the Node B, jitter in the time of arrival of userplane data from the RNC can be caused either by the RNC not sending the data with correct timing (e.g. due to varying processing and routing load inside the RNC) or by time-varying delay in the transport network between RNC and Node B.
  • the same applies for uplink data, i.e. the transmit time may vary due to the internal load of the Node B, and the delay over the transport network may also vary.
  • In order to cope with this, 3GPP specifies in TS25.402 (and in TS25.427 and TS25.435) that the Node B shall have a "time of arrival window" for capturing downlink userplane FP frames received on Iub, where each userplane FP frame is clearly marked with the CFN (connection frame number) or SFN (system frame number) at which that particular frame should result in downlink data transmitted over the air interface (Uu).
  • This "time of arrival window” is given by the RNC to Node B as ToAWS and ToAWE for each transport bearer (TS25.402), i.e. each AAL2 transport bearer (when ATM used as backhaul to the Node B) carrying data for a transport channel or group of coordinated transport channels.
  • This time of arrival window for userplane can also be used for handling of jitter of an IP based interconnect to the Node B;
  • Some RNC types do, for some services, try to reduce downlink delay by sending userplane data as late as possible, i.e. the RNC tries to send downlink FP frames with such timing that they arrive as close to LTOA as possible (see TS25.402 section 7.2), leaving very little room for jitter.
  • the method for the RNC to know the time of arrival in the Node B is to use the FP messages "UL synchronization" and "DL synchronization" specified in TS25.427 and
  • Time of arrival windows configured by RNC may be possible to adjust for a certain implementation of the transport network, but this is an implementation choice done by RNC designer/vendor.
  • For some RNC implementations it may be possible to adjust the settings of ToAWS - ToAWE per Node B. In other RNCs a changed value will be used for all Node Bs connected to the same RNC, which may cause problems if an RNC mixes Node Bs connected directly by ATM and Node Bs connected over IP via a TGU. In the latter case it may be necessary for the Node B to increase, on its own, the ToAWS - ToAWE setting received from the RNC. This procedure can be useful even if the RNC can have its ToAWS - ToAWE settings adjusted to match the behaviour of the IP network being used. The actual values used by the Node B could in this way be adaptive, i.e. the Node B uses statistics (or other information) about the expected jitter of the network in order to decide the size and position of the window. The Node B may even adjust the window during operation to match the current behaviour of the IP network.
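Such a statistics-driven window adjustment could be sketched as follows; the percentile choice and the exact ToAWS/ToAWE semantics used here (both offsets measured back from the latest time of arrival, window size = ToAWS - ToAWE, per TS25.402) are simplifying assumptions:

```python
# Sketch of an adaptive time-of-arrival window: the Node B widens the
# ToAWS-ToAWE window received from the RNC based on observed jitter.
# The 99th-percentile criterion is an assumption for illustration.

def adapt_window(toaws_ms: float, toawe_ms: float,
                 jitter_samples_ms: list[float]) -> tuple[float, float]:
    """Return a (possibly widened) window based on measured jitter.

    ToAWS and ToAWE are offsets relative to the latest time of arrival
    (TS25.402); the window size is ToAWS - ToAWE. The start offset is
    grown so that ~99% of observed jitter fits inside the window."""
    samples = sorted(jitter_samples_ms)
    p99 = samples[int(0.99 * (len(samples) - 1))]
    window = toaws_ms - toawe_ms        # current window size in ms
    if p99 > window:
        toaws_ms = toawe_ms + p99       # widen to cover observed jitter
    return toaws_ms, toawe_ms
```

As noted in the following bullet, this statistics calculation should be kept per IP priority level when the network uses DiffServ-style prioritization.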
  • If the Node B uses statistics to move/resize the time of arrival window, then this statistics calculation should be done per priority level on the IP network.
  • this kind of timing data collected by the Node B may be reported back to the RNC, thus giving the RNC the possibility to select, already during setup of a new channel, a suitable time of arrival window for the associated transport bearer (carried over IP or ATM).
  • the reporting mechanisms in such a case could also be implemented via an existing "performance management"/"performance counters" reporting system where Node B reports collected data to e.g. an OMC (operation and maintenance center) supervising the network.
  • the RNC should have some kind of jitter buffers in uplink.
  • 3GPP does not state any particular requirements regarding the implementation of those, and hence the implementation will be different between different manufacturers.
  • the time of arrival windows in RNC released today are most probably designed and optimized for ATM connect to the Node Bs.
  • the window size may be possible to adjust by e.g. configuration data. But in some implementations windows may be hard coded in the RNC and not possible to tweak for certain implementation of the network.
  • If uplink userplane data from all or some of the Node Bs connected to the RNC is transported partly over an IP network, then it may be necessary to modify the time of arrival windows used by the RNC.
  • uplink userplane FP frames may be lost due to incorrect time of arrival in RNC. If the windows in the RNC cannot be adjusted to match the requirements imposed by the IP transport network, then the TGU can implement a "jitter buffer" making it possible to give a better and more stable timing of uplink data towards the RNC.
  • ATM transport can be performed on different types of media: one example is STM-1, another example is using a single E1 line, yet another example is to use multiple E1 lines either with or without IMA between the lines.
  • any protocol used for transport of ATM shall be regarded as an example usable in different embodiments. The same applies to the IP transport, where most pictures indicate that IP is transported over Ethernet, but of course other types of media can be used for transport of the IP traffic, e.g. Gigabit Ethernet, Wireless LAN, WiMax etc.
  • the TGU performs forwarding of AAL5 frames to UDP frames.
  • AAL2 signaling protocol (ALCAP)
  • An RNC using R99/R4 will use ALCAP to control transport bearer allocation etc.
  • the ALCAP will be placed on a dedicated AAL5 PVC using SSCOP, as is also illustrated by the AAL2 signaling (ALCAP) in Fig. 7:
  • the TGU performs forwarding of AAL5 frames to UDP frames.
  • Control plane routing is preferably performed as described in chapter 7.5.
  • the TGU may terminate ALCAP from RNC, also illustrated by the ALCAP-IP-ALCAP, control plane signaling of Fig. 8.
  • For the ATM transport control signaling on the network side the TGU shall support ALCAP [ITU Q2630.2].
  • the TGU shall support IP-ALCAP [ITU Q2631.1].
  • The interwork between IP-ALCAP and ALCAP shall be done according to [ITU Q2632.1]
  • Control plane routing is preferably performed as described in chapter 7.5.
TGU - NBU Interwork
  • For dynamic establishment and release of inter-working connections for user data, a new proprietary TGU inter-working signaling protocol (TISP) will be used.
  • the TGU shall act as the server for this protocol and the Node B as a client.
  • the server port for this shall be configurable.
  • the protocol can be used on either UDP or TCP.
  • Userplane signaling is described in chapter 7.6 and 7.7.
  • TGU-NBU Interwork may be based on IP-ALCAP as described in section 7.2.2.
  • TGU-NBU interwork can be omitted in some implementations, in particular if TGU implements a static mapping of VPI-VCI-CID versus IP address- UDP port. Mapping of VPI vs. IP address will then require that:
  • TGU can lookup the IP address of the Node B using DNS, or
  • Node B at start up registers its IP address in the TGU it has been assigned via e.g. configuration data stored in the unit.
  • the client will use an "inter-working setup request" message to request establishment of a new inter-working connection, i.e. connecting an AAL2 transport bearer from the ATM backhaul with an IP-UDP transport bearer (UDP port) on BLAN. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, downlink UDP port and downlink IP address. Uplink parameters are set to zero.
  • the setup request should also include information on:
  • If the server can accept the new connection, it will allocate an uplink UDP port and IP address and create an inter-working connection between AAL2 SSSAR CID and UDP/IP.
  • the TGU will respond with an "inter-working setup acknowledge" message.
  • the message will include the allocated downlink parameters and the parameters supplied in the request. If for some reason an inter-working connection cannot be established, the TGU shall respond with an "inter-working setup reject" message.
  • the message will include a fault code value and the parameters supplied in the request. Examples of reasons for rejection:
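A sketch of this setup exchange follows; since TISP is proprietary and not specified in detail here, the message encoding, field names and fault codes are purely illustrative:

```python
# Hypothetical encoding of the TISP "inter-working setup" exchange.
# Message names, field layout and fault codes are assumptions.

def make_setup_request(phy, vpi, vci, cid, transaction_id,
                       dl_udp_port, dl_ip):
    """Client (Node B) side: build a setup request; uplink fields zero."""
    return {
        "msg": "iw-setup-request",
        "phy": phy, "vpi": vpi, "vci": vci, "cid": cid,
        "transaction_id": transaction_id,
        "dl_udp_port": dl_udp_port, "dl_ip": dl_ip,
        "ul_udp_port": 0, "ul_ip": "0.0.0.0",
    }

def handle_setup(server, request):
    """Server (TGU) side: allocate uplink resources and acknowledge,
    or reject with a fault code, echoing the request parameters."""
    if not server.can_accept():
        return dict(request, msg="iw-setup-reject", fault_code=1)
    return dict(request, msg="iw-setup-ack",
                ul_udp_port=server.allocate_udp_port(),
                ul_ip=server.ip)

class FakeTgu:
    """Stand-in for the TGU server, for illustration only."""
    ip = "10.0.0.1"
    def can_accept(self): return True
    def allocate_udp_port(self): return 50000
```

The reply echoes all request parameters, so the client can correlate it via the transaction id and the original connection identifiers.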
  • the client will use the "inter-working release request" message to request release of an established inter-working connection. Supplied in the request will be PHY, VPI, VCI, CID, transaction id, uplink/downlink UDP port and uplink/downlink IP address.
  • the server (TGU) will release the connection between AAL2 SSSAR CID and UDP/IP. If the operation is successful the TGU will respond with an "inter-working release acknowledge" message. The message will include the parameters supplied in the request.
  • the TGU shall respond with an "inter-working release reject" message.
  • the message will include a fault code value and the parameters supplied in the request.
  • the client (NBU) will use the "inter-working reset request" message to request release of all established inter-working connections.
  • the server (TGU) will release all connections between AAL2 SSSAR CID and UDP/IP. If the operation is successful the TGU will respond with an "inter-working reset acknowledge" message.
  • the TGU shall respond with an "inter-working reset reject" message.
  • the message will include a fault code value.
  • the Node Bs connected to BLAN may need to monitor operation of the TGU, in particular if the RNC is not aware of the presence of the TGU in the path between the RNC and each Node B. For this reason the proprietary interwork between each Node B and the TGU will enable the Node B to:
NBU O&M signaling
  • An embodiment for NBU O&M signaling is illustrated in Fig. 9.
  • the NBU O&M signaling interface is transported over IPoA, one channel per NBU.
  • the TGU shall forward between this interface and IP/Ethernet.
  • IPoA shall be implemented according to RFC1483 (LLC/SNAP encapsulation) and RFC1577 (Classical IP and ARP over ATM).
  • the MTU for IPoA should be configured to avoid fragmentation between the ATM and Ethernet interface. Control plane routing is preferably performed as described in chapter 7.5.
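The MTU rule above can be stated as a one-line calculation. This is a minimal sketch under stated assumptions: the RFC 1577 default IPoA MTU of 9180 bytes and the standard Ethernet IP MTU of 1500 bytes are used as illustrative values, and the function name is hypothetical.

```python
# To avoid IP fragmentation when forwarding between the ATM (IPoA)
# and Ethernet interfaces, the configured IPoA MTU must not exceed
# the Ethernet IP MTU.
ETHERNET_IP_MTU = 1500   # standard Ethernet IP payload size (assumption)
IPOA_DEFAULT_MTU = 9180  # RFC 1577 default MTU for Classical IP over ATM

def configured_ipoa_mtu(ethernet_mtu=ETHERNET_IP_MTU,
                        ipoa_mtu=IPOA_DEFAULT_MTU):
    """Clamp the IPoA MTU so forwarded packets never need fragmenting."""
    return min(ipoa_mtu, ethernet_mtu)
```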
  • TGU O&M signaling (Tub): An embodiment for external O&M signaling is illustrated in Fig. 10. The TGU remote O&M signaling interface is transported over IPoA.
  • the TGU shall terminate this interface.
  • SNMP and FTP shall be supported.
  • Control plane routing: For control plane signals the routing between the network side and BLAN shall be based on VP/VC on the network side and IP address + port number on the BLAN side. Protocol conversion shall be performed in the TGU; the conversion type shall be remotely configurable for each routed channel. The configuration shall be stored in persistent memory.
  • Fig. 11 illustrates an embodiment of control plane routing, in an example with two control plane channels (CH1 and CH2).
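The control-plane routing described above can be sketched as a small, persistently stored route table. This is an assumption-laden illustration: the class, field names, and use of JSON as the persistent store are not from the patent, which only requires that each routed channel pair a VP/VC with an IP address + port, with a remotely configurable conversion type kept in persistent memory.

```python
import json

# Hypothetical sketch of the TGU control-plane routing configuration:
# each named channel (e.g. CH1, CH2 in Fig. 11) pairs a VP/VC on the
# network (ATM) side with an IP address + port on the BLAN side.
class ControlPlaneRouting:
    def __init__(self):
        self.channels = {}  # channel name -> route entry

    def configure(self, name, vpi, vci, ip, port, conversion):
        # 'conversion' stands in for the remotely configurable
        # protocol-conversion type mentioned in the text
        self.channels[name] = {"vpi": vpi, "vci": vci,
                               "ip": ip, "port": port,
                               "conversion": conversion}

    def to_blan(self, vpi, vci):
        """Route a channel arriving on the network side to the BLAN side."""
        for entry in self.channels.values():
            if (entry["vpi"], entry["vci"]) == (vpi, vci):
                return entry["ip"], entry["port"]
        return None  # no configured route for this VP/VC

    def save(self, path):
        # stand-in for the required persistent-memory storage
        with open(path, "w") as f:
            json.dump(self.channels, f)
```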
  • the TGU shall map the Frame Protocol between AAL2 SSSAR frames and UDP packets.
  • User plane routing shall be performed as described in chapter 7.7.
  • Fig. 12 illustrates an embodiment of user plane signaling, where FP equals what is specified by 3GPP in specifications TS25.427 and TS25.435.
  • 7.7 User plane routing: the routing between the network side and BLAN shall be based on VP/VC+CID on the network side and IP address + UDP port on the BLAN side. Routing of individual data channels shall be dynamically configured with the proprietary protocol TISP.
  • Fig. 13 illustrates an embodiment of user plane routing, in an example with three user plane channels (CH1, CH2, CH3).
  • NBU: OneBASE Pico Node B unit, a 3GPP Node B supporting traffic, baseband and radio for one UMTS FDD carrier and one cell.
  • Basic functions, performance and layout of the OneBASE Node B are outlined in WO2005/094102, the content of which is incorporated herein by reference.
  • The Andrew development roadmap also includes a Micro Node B unit with up to two carriers; this would be based on the same source system (HW, FW and SW), and solutions for the Pico Node B are directly transferable to the Micro Node B.
  • For the transmission board, see also the referenced document WO2005/094102.
  • Each NBU also has an Ethernet port (10/100 BaseT), which is currently used only for local on-site maintenance.
  • The current version of OneBASE and its application software is designed for ATM-based backhauls; transmission boards for 2xE1 (with IMA) and STM-1 are available today.
  • The existing NBU will be able to support IP transport using Ethernet on existing hardware. It would also be possible to design software allowing IP transport and ATM transport to be mixed on the same unit. The following SW/FW modifications would be needed for a first release:
  • xDSL communication could be handled using an external xDSL modem connected to the Ethernet port of the OneBASE Pico Node B unit.
  • TGU: Transmission Gateway Unit
  • The OneBASE TGU will act as an IP-ATM inter-working unit. It will enable a Node B using the IP transport option according to 3GPP release 5 to communicate with an RNC supporting only the ATM transport option.
  • The TGU will also act as a converter between physical interfaces, converting ATM traffic over E1/J1/T1/STM1 to/from the RNC into IP traffic over Ethernet to/from the Node B.
  • A proprietary protocol will be used for exchanging routing information etc. between the TGU and the Node B(s).
  • IP-ALCAP: ITU Q2631.1
  • The interworking between ALCAP and IP-ALCAP will be done according to ITU specification Q2632.1. Since the control of transport bearers between the RNC and several Node Bs could be performed by the TGU, it might also be possible to concentrate traffic in an "intelligent" way, thus saving transmission cost for the interface between the RNC and a remotely located TGU. This is particularly important if the transmission cost between the RNC and the TGU is significant (e.g. a remote star configuration).
  • the TGU should have a persistent memory for storage of application programs and configuration data.
  • Performance management for collecting traffic statistics, e.g. load on links on both the BLAN and ATM sides.
  • the TGU should continuously monitor the actual load on the ATM backhaul, both on:
  • AAL2 transport bearers
  • PVCs, to compare with the configured max peak cell rate (PCR), etc.
  • the TGU should also continuously monitor load per priority level and/or prioritized item.
  • Before accepting a setup request (from a Node B or the RNC) for a new transport bearer, the TGU must check that the requested uplink and downlink bandwidth is available on the ATM backhaul connection to the TGU, i.e. that "current load" + "new request" ≤ "max allowed load", where "current load" could be calculated as e.g.
  • the "current load” and/or “max allowed load” can be defined as either:
  • The admission control for new transport bearers also applies when reconfiguring a transport bearer, e.g. when the RNC requests that the reserved bandwidth be increased for a particular transport bearer.
  • Admission control can be completely or partly disabled, e.g. when the TGU is located close to the RNC with virtually unlimited bandwidth at no cost.
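The admission check above ("current load" + "new request" ≤ "max allowed load", per direction) can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and units are assumptions, and whether "current load" means the sum of reserved bandwidths or a measured value is, as the text notes, a configuration choice.

```python
# Hypothetical sketch of TGU admission control for transport bearers:
# a setup (or reconfiguration) request is accepted only if the
# requested bandwidth fits within the remaining backhaul capacity,
# checked independently for uplink and downlink.
class AdmissionControl:
    def __init__(self, max_ul_kbps, max_dl_kbps, enabled=True):
        self.max_ul = max_ul_kbps
        self.max_dl = max_dl_kbps
        self.enabled = enabled  # may be disabled, e.g. near the RNC
        self.load_ul = 0        # "current load", uplink
        self.load_dl = 0        # "current load", downlink

    def admit(self, req_ul_kbps, req_dl_kbps):
        """Accept a request only if both directions fit the backhaul."""
        if not self.enabled:
            return True  # admission control disabled: accept everything
        if (self.load_ul + req_ul_kbps > self.max_ul or
                self.load_dl + req_dl_kbps > self.max_dl):
            return False  # would exceed "max allowed load"
        self.load_ul += req_ul_kbps
        self.load_dl += req_dl_kbps
        return True

    def release(self, req_ul_kbps, req_dl_kbps):
        """Return capacity when a transport bearer is released."""
        self.load_ul -= req_ul_kbps
        self.load_dl -= req_dl_kbps
```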
  • the TGU can be equipped with a time server using a very stable high quality reference oscillator inside the TGU.
  • The Node Bs in the BLAN can then use this time server in a similar way as an NTP server on the internet, but since this server is located on the BLAN, the jitter in the IP network (i.e. the BLAN) will be significantly smaller than for an NTP server on the public internet.
  • Using IP transport for the Iub connection of base stations (Node B) reduces the cost of the transmission backhaul significantly. The trend is also that base stations become smaller and smaller, making it simpler to find suitable sites and to deploy them.
  • RNCs are not designed to cope with such a large number of separate single-cell base stations; instead they are designed to handle fewer but larger base stations, where each base station has a few control ports terminating NBAP but handles a number of sectors, each with a number of RF carriers, e.g. a 6-sector x 3-carrier configuration.
  • The TGU can therefore be modified to also perform an aggregation of Iub, making a number of single-cell Node Bs appear to the RNC as one larger, several-sector Node B.
  • The TGU will need at least to terminate the common NBAP (C-NBAP, also called "Node B control port") and ALCAP (if used).
  • The dedicated NBAP (D-NBAP, also called "Communication control port") may either be terminated on the TGU, or be forwarded to the Node B unit handling a particular radio link.
  • The problem with such a solution would be that if a radio link (DPCH) moves from one Node B unit to another (a handover), then control of the radio link should also be moved.
  • 3GPP has foreseen this kind of problem, and procedures for this move of the communication control port are already defined in the standard.
  • The TGU can then use these already defined procedures to change the communication control port (i.e. switching the control flow to another Node B for a particular UE).
  • The TGU needs to decide which Node B handles that particular cell and then forward a radio link setup message to that cell. The forwarded message can either be an identical copy of the original message or some kind of proprietary message.
  • The TGU would only terminate the C-NBAP and forward all other control and user plane signaling directly to the Node B units handling the connection at that particular moment in time. In such a case the routing inside the TGU will need to route different CIDs to/from the same VPI/VCI to different Node Bs, i.e. different IP addresses.
  • The TGU may also perform so-called "softer handover", i.e. the TGU needs to receive uplink data for the same UE connection from several Node Bs and then combine these flows (e.g. by selection combining on FP packet level) to create one uplink flow to the RNC for each UE.
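Selection combining at FP packet level, as mentioned above, can be sketched as picking the best copy of each uplink frame across the contributing Node Bs. This is an assumption-based illustration: the simplified frame representation (frame number, CRC flag, quality estimate, payload) and the function name are hypothetical, not taken from the 3GPP FP specifications.

```python
# Hypothetical sketch of uplink selection combining in the TGU: several
# Node Bs deliver copies of the same uplink frames for one UE; the TGU
# forwards a single copy per frame number to the RNC, preferring
# CRC-correct frames and, among those, the highest quality estimate.
def select_combine(frames_per_nodeb):
    """frames_per_nodeb: list (one per Node B) of lists of
    (frame_no, crc_ok, quality, payload) tuples."""
    best = {}  # frame_no -> (crc_ok, quality, payload)
    for flow in frames_per_nodeb:
        for frame_no, crc_ok, quality, payload in flow:
            current = best.get(frame_no)
            # tuple comparison: CRC-correct beats failed CRC (True > False),
            # then higher quality wins
            if current is None or (crc_ok, quality) > (current[0], current[1]):
                best[frame_no] = (crc_ok, quality, payload)
    # emit one combined uplink flow to the RNC, in frame-number order
    return [(n,) + best[n] for n in sorted(best)]
```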
  • The TGU can in this way also emulate more than one "several sector Node B", i.e. terminating C-NBAP and ATM etc. for more than one "cluster of Node B units".
  • Via the TGU, the RNC would see one "several sector Node B" per cluster.
  • The above described concept of a "TGU" acting as an Iub aggregator, presenting one or more "several sector Node Bs" instead of a cluster of Node B units, can also prove very useful even when the RNC itself can terminate Iub over IP. In such an implementation the TGU would have an IP interface both to the RNC and to the Node Bs.
  • Both the TGU and the NBU include computer systems comprising microprocessors with associated memory space and operating software, as well as application software configured such that execution of the computer code of the application software causes the computer systems to carry out the steps mentioned herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a new transmission solution enabling the deployment of pico Node Bs with significantly lower operational transmission costs than usual ATM-based networks, and to a communications system comprising a Radio Network Controller (RNC) connected to an Asynchronous Transfer Mode (ATM) network; a Node B Unit (NBU) radio base station connected to an IP network; and a Transmission Gateway Unit (TGU) connected to the IP network and the ATM network, configured to change the transport bearer of data packets transmitted between the RNC and the NBU. The proposed solution can be used with existing ATM-based RNCs and backhauls, using IP-based transport for (at least) the last mile, and also allows IP transport over almost the entire path, thereby reducing transport costs to a minimum.
PCT/SE2006/000565 2005-05-13 2006-05-15 Transmission gateway unit for pico Node B WO2006121399A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US59486205P 2005-05-13 2005-05-13
US60/594,862 2005-05-13
US76684906P 2006-02-15 2006-02-15
US60/766,849 2006-02-15

Publications (2)

Publication Number Publication Date
WO2006121399A2 true WO2006121399A2 (fr) 2006-11-16
WO2006121399A3 WO2006121399A3 (fr) 2007-01-04

Family

ID=37396996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2006/000565 WO2006121399A2 (fr) 2006-05-15 Transmission gateway unit for pico Node B

Country Status (1)

Country Link
WO (1) WO2006121399A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834805A (zh) * 2010-05-31 2010-09-15 Southwest Jiaotong University Method for stream control transmission protocol packets to traverse network address translation devices
CN101953225A (zh) * 2008-02-22 2011-01-19 Qualcomm Incorporated Method and apparatus for controlling transmission of a base station
EP2553971A1 (fr) * 2010-03-30 2013-02-06 Telefonaktiebolaget LM Ericsson (publ) Method of congestion detection in a cellular radio system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081622A1 (en) * 2001-10-29 2003-05-01 Chang-Rae Jeong Data translation apparatus of ATM in mobile communication system
WO2004017585A2 (fr) * 2002-08-14 2004-02-26 Qualcomm Incorporated Interoperability with a backbone network in a picocell system
US20050043030A1 (en) * 2003-08-22 2005-02-24 Mojtaba Shariat Wireless communications system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081622A1 (en) * 2001-10-29 2003-05-01 Chang-Rae Jeong Data translation apparatus of ATM in mobile communication system
WO2004017585A2 (fr) * 2002-08-14 2004-02-26 Qualcomm Incorporated Interoperability with a backbone network in a picocell system
US20050043030A1 (en) * 2003-08-22 2005-02-24 Mojtaba Shariat Wireless communications system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101953225A (zh) * 2008-02-22 2011-01-19 Qualcomm Incorporated Method and apparatus for controlling transmission of a base station
KR101124822B1 (ko) 2008-02-22 2012-03-26 Qualcomm Incorporated Methods and apparatus for controlling transmission of a base station
RU2496279C2 (ru) * 2008-02-22 2013-10-20 Qualcomm Incorporated Methods and apparatus for controlling base station transmission
CN101953225B (zh) * 2008-02-22 2015-03-11 Qualcomm Incorporated Method and apparatus for controlling transmission of a base station
US11477721B2 (en) 2008-02-22 2022-10-18 Qualcomm Incorporated Methods and apparatus for controlling transmission of a base station
EP2553971A1 (fr) * 2010-03-30 2013-02-06 Telefonaktiebolaget LM Ericsson (publ) Method of congestion detection in a cellular radio system
EP2553971A4 (fr) * 2010-03-30 2013-08-07 Ericsson Telefon Ab L M Method of congestion detection in a cellular radio system
US8908524B2 (en) 2010-03-30 2014-12-09 Telefonaktiebolaget L M Ericsson (Publ) Method of congestion detection in a cellular radio system
CN101834805A (zh) * 2010-05-31 2010-09-15 Southwest Jiaotong University Method for stream control transmission protocol packets to traverse network address translation devices

Also Published As

Publication number Publication date
WO2006121399A3 (fr) 2007-01-04

Similar Documents

Publication Publication Date Title
US20060198336A1 (en) Deployment of different physical layer protocols in a radio access network
NO326391B1 (no) Method for transferring data in GPRS
WO2008089660A1 (fr) Method, device and radio network system for radio access unification
Bhattacharjee et al. Time-sensitive networking for 5G fronthaul networks
EP1256213B1 (fr) Procede et systeme de transmission de donnees entre une architecture de communications mobiles et une architecture a commutation de paquets
EP1234459B1 (fr) Systeme et procede dans un reseau gprs servant a interfacer un systeme de station de base avec un noeud de support gprs de service
EP1980056B1 (fr) Gestion de gigue pour liaison de réseau de données à commutation par paquets pour retour de données d'appel
US8619811B2 (en) Apparatus, system and method for forwarding user plane data
WO2006121399A2 (fr) Transmission gateway unit for pico Node B
Salmelin et al. Mobile backhaul
Cisco New Features in Release 11.3
US11251988B2 (en) Aggregating bandwidth across a wireless link and a wireline link
Zhang et al. Routing and packet scheduling in LORAWANs-EPC integration network
Cisco Cisco IOS Configuration Guides Master Index, L through Z
Cisco X.25 and LAPB Commands
Lilius et al. Planning and Optimizing Mobile Backhaul for LTE
CA2563158A1 (fr) Method and system for providing an interface between switching equipment and 2G wireless interworking means
KR101123068B1 (ko) Access to CDMA/UMTS services via a WLAN access point, using a gateway node between the WLAN access point and the service-providing network
CN101932003A (zh) Congestion control processing method and device
EP2144477A2 (fr) Method and system for grouped 2G-3G single transmission
Fendick et al. The PacketStar™ 6400 IP switch—An IP switch for the converged network
Li et al. Carrier Ethernet for transport in UMTS radio access network: Ethernet backhaul evolution
Li et al. Shared transport for different radio broadband mobile technologies
Parikh et al. TDM services over IP networks
Metsälä 3GPP Mobile Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

NENP Non-entry into the national phase in:

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06733409

Country of ref document: EP

Kind code of ref document: A2