WO2014197716A1 - System and method for managing a wireless network - Google Patents

System and method for managing a wireless network

Info

Publication number
WO2014197716A1
Authority
WO
WIPO (PCT)
Prior art keywords
managing
network
wireless network
infrastructure
ues
Prior art date
Application number
PCT/US2014/041137
Other languages
French (fr)
Inventor
Hang Zhang
Original Assignee
Huawei Technologies Co., Ltd.
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd., Futurewei Technologies, Inc. filed Critical Huawei Technologies Co., Ltd.
Priority to KR1020177015160A priority Critical patent/KR101876364B1/en
Priority to CN201480032572.6A priority patent/CN105850199B/en
Priority to EP14807236.6A priority patent/EP2997780B1/en
Priority to KR1020157037268A priority patent/KR101748228B1/en
Priority to JP2016518002A priority patent/JP6240318B2/en
Publication of WO2014197716A1 publication Critical patent/WO2014197716A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0215 Traffic management, e.g. flow control or congestion control based on user or device properties, e.g. MTC-capable devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/54 Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/542 Allocation or scheduling criteria for wireless resources based on quality criteria using measured or perceived quality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5061 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the interaction between service providers and their network customers, e.g. customer relationship management
    • H04L41/5067 Customer-centric QoS measurements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06 Authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/60 Context-dependent security
    • H04W12/69 Identity-dependent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/08 Testing, supervising or monitoring using real traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0226 Traffic management, e.g. flow control or congestion control based on location or mobility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0284 Traffic management, e.g. flow control or congestion control detecting congestion or overload during communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W48/00 Access restriction; Network selection; Access point selection
    • H04W48/16 Discovering, processing access restriction or access information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W56/00 Synchronisation arrangements
    • H04W56/001 Synchronization between nodes
    • H04W56/0025 Synchronization between nodes synchronizing potentially movable access points
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 Connection management
    • H04W76/20 Manipulation of established connections
    • H04W76/27 Transitions between radio resource control [RRC] states
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W8/00 Network data management
    • H04W8/02 Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/0816 Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/02 Resource partitioning among network components, e.g. reuse partitioning
    • H04W16/06 Hybrid resource partitioning, e.g. channel borrowing
    • H04W16/08 Load shedding arrangements

Definitions

  • the present invention relates generally to wireless network architecture and, in particular embodiments, to a system and method for managing a wireless network.
  • An embodiment method of managing a wireless network includes managing an infrastructure topology for the wireless network.
  • the wireless network includes a plurality of network nodes.
  • the method further includes managing a connection of a user equipment (UE) to the wireless network.
  • the method further includes managing a customer service provided to the UE over the connection.
  • the method also includes managing analytics for the wireless network and the service.
  • An embodiment computing system for managing a wireless network includes a customer service manager, a connectivity manager, a data analyzer, and an infrastructure manager.
  • the customer service manager is configured to authorize access to the wireless network by UEs according to respective customer service information.
  • the customer service manager is further configured to negotiate a respective quality of experience (QoE) for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective virtual networks (VNs) are established by a control plane.
  • the connectivity manager is configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected.
  • the data analyzer is configured to generate a congestion and traffic report according to network status reports received periodically from network nodes attached to the wireless network.
  • the data analyzer is further configured to generate a QoE status according to QoE reports received periodically from the UEs.
  • the infrastructure manager is configured to adapt a topology for the wireless network according to the topology and the congestion and traffic report.
  • An embodiment communication system includes a control plane, a data plane, and a management plane.
  • the control plane is configured to make network resource management decisions for customer service traffic over a wireless network.
  • the data plane includes network nodes arranged in a topology and configured to forward network traffic according to the traffic management decisions.
  • the management plane includes a customer service manager, a connectivity manager, a data analyzer, and an infrastructure manager.
  • the customer service manager is configured to authorize access to the wireless network by UEs according to respective customer service information.
  • the customer service manager is further configured to negotiate a respective QoE for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective VNs are established by the control plane.
  • the connectivity manager is configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected.
  • the data analyzer is configured to generate a congestion and traffic report according to network status reports received periodically from the network nodes.
  • the data analyzer is further configured to generate a QoE status according to QoE reports received periodically from the UEs.
  • the infrastructure manager is configured to adapt the topology according to the topology and the congestion and traffic report.
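The interplay between the data analyzer and the infrastructure manager described above can be sketched as follows. This is an illustrative assumption about how periodic node status reports could be aggregated into a congestion and traffic report that drives topology adaptation; the function names, report shapes, and thresholds are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch: the data analyzer averages periodic network status
# reports, and the infrastructure manager adapts node states from the result.
# Report shapes and thresholds are assumptions for illustration only.

def congestion_report(node_status_reports):
    """Average the periodically reported load per network node."""
    loads = {}
    for report in node_status_reports:
        loads.setdefault(report["node"], []).append(report["load"])
    return {node: sum(v) / len(v) for node, v in loads.items()}

def adapt_topology(topology, report, high=0.8, low=0.2):
    """Activate congested nodes and idle lightly loaded ones."""
    adapted = dict(topology)  # node -> state ("active" or "idle")
    for node, load in report.items():
        if load >= high:
            adapted[node] = "active"
        elif load <= low:
            adapted[node] = "idle"
    return adapted

reports = [
    {"node": "n1", "load": 0.9}, {"node": "n1", "load": 0.9},
    {"node": "n2", "load": 0.1}, {"node": "n2", "load": 0.1},
]
new_topology = adapt_topology({"n1": "idle", "n2": "active"},
                              congestion_report(reports))
```

In this sketch, node n1 averages a 0.9 load and is activated, while n2 averages 0.1 and is idled.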
  • Figure 1 is a block diagram of one embodiment of a logical functional architecture for a wireless network
  • Figure 2 is a block diagram of one embodiment of a management plane hierarchical architecture
  • Figure 3 is a block diagram of one embodiment of a network node
  • Figure 4 is a flow diagram of one embodiment of a method of managing a wireless network
  • Figure 5 is a block diagram of one embodiment of a computing system
  • Figure 6 is an illustration of operation of an embodiment management plane when a customer device enters the network
  • Figure 7 is an illustration of operation of an embodiment management plane when a customer device is idle without burst traffic
  • Figure 8 is an illustration of operation of an embodiment management plane when a customer device is idle with upstream burst traffic
  • Figure 9 is an illustration of operation of an embodiment management plane when a customer device is idle with downstream burst traffic
  • Figure 10 is an illustration of operation of an embodiment management plane when an upstream session triggers a customer device to transition from idle to active;
  • Figure 11 is an illustration of operation of an embodiment management plane when a downstream session triggers a customer device to transition from idle to active;
  • Figure 12 is an illustration of operation of an embodiment management plane for adapting a topology
  • Figure 13 is an illustration of operation of an embodiment management plane for integrating private network nodes into the network
  • Figure 14 is an illustration of operation of an embodiment management plane for on-demand QoE assurance.
  • Figure 15 is an illustration of operation of an embodiment management plane for on-demand network status analysis.
  • NFV network function virtualization
  • SDN software defined networking
  • SDN is an architectural framework for creating intelligent programmable networks, where the control planes and the data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the application.
  • a virtual network is a collection of resources virtualized for a given service.
  • a framework of management plane functionality is needed to guide design and implementation of future wireless networks.
  • The framework can identify management plane functionality and can include management plane interfaces and operation procedures.
  • Management plane functionality can include infrastructure management, device connectivity management, customer service management, and status analysis management, among other functionality.
  • Infrastructure management functionality provides management capability for infrastructure topology adaptation according to network congestion and traffic load.
  • Infrastructure management functionality allows integration of wireless networks of multiple providers. It also provides for spectrum management of a radio access network (RAN) and spectrum sharing among various co-located wireless networks.
  • Infrastructure management also includes access map (MAP) management and air interface management.
  • Device connectivity management functionality provides management capability for per-device attachment to the wireless network, including media access control (MAC) status, location tracking, and paging.
  • Device connectivity management functionality includes defining customized and scenario-aware location tracking schemes.
  • Device connectivity management functionality also provides for a software-defined and virtual per-mobile-user geographic location tracking entity and a triggering of user-specific data plane topology updates.
  • Device connectivity management functionality also provides user-specific virtual network migration and location tracking as a service (LTaaS).
  • a UE's location is relative to the network or, more specifically, relative to a set of potential network nodes that could provide network access to the UE.
  • LTaaS provides UE location tracking and location information to the UE's home operator or a global UE location information management center, which is generally managed by a third party. The information can then be accessed and used by other operators, VN operators, and others.
  • Customer service management functionality provides management capability for customers' private information, authorization of device access, device security, and negotiation of service quality of experience (QoE).
  • Customer service management functionality includes per-service QoE management as well as charging services.
  • Customer service management functionality also provides customer specific context for connections and services. This includes QoE monitoring, charging, and billing.
  • Status analysis management functionality provides management capability for on-demand network status analysis and QoE assurance. More specifically, status analysis management functionality provides for management of on-demand network status analytics, management of on-demand service QoE status analytics, and data analytics as a service (DAaaS).
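The on-demand QoE status analytics described above can be sketched in a few lines. This assumes that periodic per-UE QoE reports carry a numeric score and that the status flags UEs falling below a negotiated target; the names and the score scale are invented for illustration.

```python
def qoe_status(qoe_reports, target=4.0):
    """Summarize periodic per-UE QoE reports into an on-demand status,
    flagging UEs whose mean score falls below a negotiated target.
    Report shape and score scale are illustrative assumptions."""
    scores = {}
    for report in qoe_reports:
        scores.setdefault(report["ue"], []).append(report["score"])
    return {
        ue: {"mean": sum(s) / len(s), "meets_target": sum(s) / len(s) >= target}
        for ue, s in scores.items()
    }

status = qoe_status([
    {"ue": "ue1", "score": 4.0}, {"ue": "ue1", "score": 4.0},
    {"ue": "ue2", "score": 2.0},
])
```

Here ue1 meets its target while ue2 is flagged for QoE assurance.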
  • the management plane functionality required in a wireless network can vary among devices and can depend on a device's state.
  • the various devices in a wireless network include customer devices, i.e., user equipment (UE), and operator nodes, i.e., base stations or radio nodes.
  • a customer device can be powered on or off, and, while on, can be active or idle, all of which is referred to as the customer device's state.
  • an operator node can also be powered on or off and, while on, can be active, idle, or inactive.
  • an operator node's state can vary among its respective subsystems.
  • an operator node's access subsystem may be active, idle, or inactive while powered on, while the operator node's backhaul subsystem may be active or inactive.
  • the operator node's sensor subsystem may be active or inactive while powered on.
  • While in an active state, a UE continuously searches and monitors feedback channel quality indicators (CQI).
  • a continuous process is one that is carried out at every transmission interval. For example, if the transmission interval is 1 millisecond, then the UE searches and monitors periodically, where the period is 1 millisecond.
  • the UE also continuously sends signals enabling uplink (UL) CQI estimation.
  • For an active UE, a VN-active is established. The configuration of the VN-active depends on the UE's mobility, required QoE, and network status.
  • the network maintains contexts above and below layer-2, including automatic repeat request (ARQ) and hybrid ARQ (HARQ) below; and flow/service ID mapping to ID defined in the access link (AL), location, states, authentication keys, sessions, and QoE above.
  • While in an idle state, the UE has an idle state ID and remains attached to the network.
  • the UE continuously searches and monitors the network for measurement purposes and location updates.
  • The UE may also perform mobility monitoring/tracking and location synchronization with the network. In certain embodiments, the UE carries out network monitoring conditionally.
  • the UE also transmits and receives short data bursts without going back to active state.
  • For an idle UE, a VN-idle is established.
  • The configuration of the VN-idle depends on the UE's mobility, required QoE, and network status.
  • The VN-idle can have no dedicated physical network resources or can have partially dedicated physical network resources, for example, between a user-specific virtual serving gateway (v-u-SGW) and other gateways (GWs).
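The UE states and their corresponding VN resource configurations described above can be captured in a short sketch. The disclosure leaves the exact configuration policy open, so the selection logic and names below are illustrative assumptions.

```python
from enum import Enum

class UEState(Enum):
    """UE states while powered on, as described above."""
    ACTIVE = "active"
    IDLE = "idle"

def vn_resources(state, required_qoe):
    """Sketch: a VN-active gets dedicated resources, while a VN-idle gets
    either no dedicated resources or partially dedicated ones (e.g. between
    the v-u-SGW and other GWs). The QoE-based rule is an assumption."""
    if state is UEState.ACTIVE:
        return "dedicated"
    return "partial" if required_qoe == "high" else "none"
```

For example, an idle UE with a high required QoE would keep partially dedicated resources so short bursts can be served without a transition to active.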
  • FIG. 1 is a block diagram of one embodiment of a logical functional architecture 100 for a wireless network.
  • Architecture 100 separates the wireless network into a data plane 110, a control plane 120, and a management plane 130.
  • Data plane 110 transports network traffic among the various network nodes and UEs attached to the wireless network.
  • Control plane 120 makes network resource assignment decisions for customer service traffic and transports control signals among the various network nodes and UEs.
  • Management plane 130 provides various management and administrative functionality for the network. Interfaces exist among management plane 130, control plane 120, and data plane 110, enabling each to carry out its respective functionality.
  • control plane 120 has an application programming interface (API) 122 that allows various applications 140-1 through 140-N to access control plane 120.
  • Architecture 100 also includes various databases that are occasionally accessed by management plane 130 in carrying out its functionalities. These databases include a privacy network database 150, a customer service information database 152, a customer device information database 154, an infrastructure database 156, and an infrastructure abstraction database 158.
  • Privacy network database 150 is a repository for topology information, node capabilities, states, and security information.
  • Customer service information database 152 is a repository for authentication and security information related to customer devices, i.e., UEs.
  • Customer device information database 154 is a repository for capabilities, locations, and states of customer devices.
  • Infrastructure database 156 is a repository for network topology, node capabilities, and states.
  • Infrastructure abstraction database 158 is a repository for various infrastructure abstractions within the wireless network.
  • management plane 130 provides various functionalities through respective control blocks, including: an infrastructure manager 132, a data analyzer 134, a customer service manager 136, and a connectivity manager 138.
  • Management plane 130 can provide additional functionalities, such as content service management, which is responsible for defining content caches in the radio access network (RAN), configuring cache-capable network nodes, and managing content forwarding.
  • Infrastructure manager 132 is responsible for integrating network nodes from various wireless network providers.
  • Infrastructure manager 132 is also responsible for spectrum management for RAN backhaul links and access networks, as well as spectrum sharing among co-located wireless networks.
  • Infrastructure manager 132 also provides access map management and air-interface management, among other functionalities.
  • Data analyzer 134 is responsible for data analytics related to the wireless network.
  • Customer service manager 136 is responsible for customer aspects of service, including QoE, charging, and customer information, among other functionalities.
  • Connectivity manager 138 is responsible for location tracking and synchronization, pushing topology updates to users, VN migration, and providing location tracking as a service (LTaaS), among other functionalities.
  • Infrastructure manager 132, data analyzer 134, customer service manager 136, and connectivity manager 138 can be implemented in one or more processors, one or more application specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), dedicated logic circuitry, or any combination thereof, all collectively referred to as a processor.
  • The respective functions of infrastructure manager 132, data analyzer 134, customer service manager 136, and connectivity manager 138 can be stored as instructions in non-transitory memory for execution by the processor.
  • FIG. 2 is a block diagram of one embodiment of a management plane hierarchical architecture 200.
  • Architecture 200 includes a top level management plane 210 having second and third levels beneath it.
  • Top level management plane 210 coordinates among second level management planes 220-1 through 220-N.
  • Each of second level management planes 220-1 through 220-N is responsible for coordinating among its respective third level management planes.
  • Second level management plane 220-N coordinates among third level management planes 230-1 through 230-M.
  • the management plane can include up to the full functionality for management plane 130 of Figure 1.
  • Top level management plane 210 and other upper-level management planes in architecture 200 also include coordination capabilities for coordinating among respective lower-level management planes.
  • the division of the management plane into the various levels of architecture 200 can be done in a variety of ways, including by geographic zones, virtual networks, and service types.
  • peer management planes can also communicate with each other.
  • second level management plane 220-1 could communicate with second level management plane 220-2.
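The hierarchical architecture of Figure 2 can be sketched as a tree in which each upper-level plane coordinates the lower-level planes beneath it. The class and method names below are illustrative assumptions, not terminology from the disclosure.

```python
class ManagementPlane:
    """One plane in the hierarchical architecture: it can hold full
    management functionality and coordinates its lower-level planes."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def coordinate(self, task):
        """Handle a task locally, then delegate it down every level."""
        handled = [f"{self.name}:{task}"]
        for child in self.children:
            handled.extend(child.coordinate(task))
        return handled

# Mirror Figure 2: a top level, second levels, and third levels beneath one.
third_level = [ManagementPlane("mp3-1"), ManagementPlane("mp3-2")]
second_level = [ManagementPlane("mp2-1", third_level), ManagementPlane("mp2-2")]
top = ManagementPlane("top", second_level)
handled = top.coordinate("sync")
```

A division by geographic zone, virtual network, or service type would simply change how the children are grouped, not the coordination logic.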
  • FIG. 3 is a block diagram of one embodiment of a network node 300.
  • Network node 300 includes access subsystems 310-1 through 310-i, backhaul subsystems 320-1 through 320-j, and sensor subsystems 330-1 through 330-k.
  • Access subsystems 310-1 through 310-i can include single or multiple access links.
  • backhaul subsystems 320-1 through 320-j can include single or multiple backhaul links.
  • the various subsystems of network node 300 can be in different states at a given time. Each of the subsystems can be powered on and off independently, and transition among various states independently.
  • Each of access subsystems 310-1 through 310-i can be in an active state, idle state, or inactive state while powered on.
  • Each of backhaul subsystems 320-1 through 320-j can be in an active state or an inactive state while powered on.
  • Each of sensor subsystems 330-1 through 330-k can be in an active state or an inactive state while powered on. Each of these subsystems is inactive while powered off.
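The per-subsystem state rules above can be expressed as a small state table. The class shape and error handling are assumptions; only the allowed-state sets come from the description.

```python
# Allowed powered-on states per subsystem type, as described above.
ALLOWED_STATES = {
    "access": {"active", "idle", "inactive"},
    "backhaul": {"active", "inactive"},
    "sensor": {"active", "inactive"},
}

class Subsystem:
    """A network-node subsystem that powers on/off and changes state
    independently of its siblings; inactive whenever powered off."""
    def __init__(self, kind):
        self.kind = kind
        self.powered = False
        self.state = "inactive"

    def power_on(self):
        self.powered = True

    def power_off(self):
        self.powered = False
        self.state = "inactive"

    def set_state(self, state):
        if not self.powered:
            raise RuntimeError("subsystem is powered off")
        if state not in ALLOWED_STATES[self.kind]:
            raise ValueError(f"a {self.kind} subsystem cannot be {state}")
        self.state = state

access = Subsystem("access")
access.power_on()
access.set_state("idle")
```

A backhaul or sensor subsystem, by contrast, would reject a request to enter the idle state, since only active and inactive are allowed for those types.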
  • The operation of the subsystems of network node 300 varies with each subsystem's current state.
  • While active, an access subsystem carries out continuous downlink (DL) transmissions and discontinuous radio frequency (RF) transmissions (e.g., pilots and reference signals), and continuously receives and monitors UL transmissions.
  • In the idle state, the access subsystem carries out no DL RF transmissions but continuously carries out UL receiving and monitoring. While inactive, the access subsystem shuts down RF activity, including DL and UL.
  • In the active state, a backhaul subsystem carries out RF transmissions and continuously receives and monitors RF transmissions. While inactive, the backhaul subsystem shuts down RF activity.
  • A sensor subsystem, sometimes referred to as an out-of-band sensor subsystem, carries out continuous and discontinuous RF transmissions and continuously receives and monitors RF transmissions while active. While inactive, the sensor subsystem shuts down RF activity.
  • FIG. 4 is a flow diagram of one embodiment of a method of managing a wireless network.
  • the method begins at a start step 410.
  • a management plane manages an infrastructure topology for the wireless network.
  • The wireless network's infrastructure topology generally includes a plurality of network nodes.
  • the management plane manages connections of UEs to the wireless network according to the infrastructure topology.
  • the management plane manages services provided to the UEs over their respective connections.
  • the management plane performs analysis on performance metrics and manages the corresponding analytics.
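The flow of Figure 4 reduces to four sequential management steps. The sketch below assumes a plane object exposing one method per step; those method names are invented for illustration.

```python
def manage_wireless_network(plane):
    """Sketch of the Figure 4 flow: manage the infrastructure topology,
    then UE connections, then customer services, then analytics."""
    plane.manage_infrastructure_topology()
    plane.manage_ue_connections()
    plane.manage_customer_services()
    plane.manage_analytics()

class RecordingPlane:
    """Test double that records the order in which the steps run."""
    def __init__(self):
        self.calls = []
    def manage_infrastructure_topology(self):
        self.calls.append("topology")
    def manage_ue_connections(self):
        self.calls.append("connections")
    def manage_customer_services(self):
        self.calls.append("services")
    def manage_analytics(self):
        self.calls.append("analytics")

plane = RecordingPlane()
manage_wireless_network(plane)
```

The ordering mirrors the method steps: connections are managed against the topology, services ride on the connections, and analytics observe both.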
  • FIG. 5 is a block diagram of a computing system 500 that may be used for implementing the devices and methods disclosed herein. Specific devices may utilize all of the components shown or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc.
  • the computing system 500 may comprise a processing unit 502 equipped with one or more input/output devices, such as a speaker, microphone, mouse, touchscreen, keypad, keyboard, printer, display, and the like.
  • the processing unit may include a central processing unit (CPU) 514, memory 508, a mass storage device 504, a video adapter 510, and an I/O interface 512 connected to a bus 520.
  • the bus 520 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, video bus, or the like.
  • the CPU 514 may comprise any type of electronic data processor.
  • The memory 508 may comprise any type of non-transitory system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like.
  • the memory 508 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the mass storage 504 may comprise any type of non-transitory storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 520.
  • the mass storage 504 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
  • the video adapter 510 and the I/O interface 512 provide interfaces to couple external input and output devices to the processing unit 502.
  • input and output devices include a display 518 coupled to the video adapter 510 and a mouse/keyboard/printer 516 coupled to the I/O interface 512.
  • Other devices may be coupled to the processing unit 502, and additional or fewer interface cards may be utilized.
  • a serial interface such as Universal Serial Bus (USB) (not shown) may be used to provide an interface for a printer.
  • the processing unit 502 also includes one or more network interfaces 506, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or different networks.
  • the network interfaces 506 allow the processing unit 502 to communicate with remote units via the networks.
  • the network interfaces 506 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas.
  • the processing unit 502 is coupled to a local-area network 522 or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
  • Figure 6 is an illustration of a procedure 600 representing operation of an embodiment management plane when a customer device enters the network.
  • Procedure 600 is carried out among various components of the management plane, including customer service manager 136 and connectivity manager 138, both of Figure 1.
  • Procedure 600 is also carried out among a customer device 610, control plane 120 of Figure 1, and data plane 110, also of Figure 1.
  • Customer device 610 powers on at step 620 and begins DL acquisition and UL synchronization procedures at step 622.
  • Customer device 610 then sends ID and location information at step 624 to control plane 120 for registration at step 640.
  • the location information is passed from control plane 120 to connectivity manager 138 where it is registered at step 670.
  • Connectivity manager 138 stores the location information in the customer device information database.
  • Connectivity manager 138 then begins synchronizing location tracking with customer device 610 at step 672.
  • the ID information is passed to customer service manager 136 for authentication at step 660. Authentication is carried out according to the ID information and information in the customer service information database. Authorization is then sent from customer service manager 136 to customer device 610, which is received at step 626.
  • Control plane 120 establishes a VN at step 642, which is represented by VN-idle in data plane 110 at step 650.
  • Customer device 610 then prepares for idle at step 628, and enters the idle state at step 630.
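Procedure 600 can be sketched end to end as one function. The registration, authentication, and VN establishment steps follow the sequence above; the function name, credential check, and dictionary shapes are illustrative assumptions.

```python
def device_entry(ue_id, location, credentials_db, control_plane, device_db):
    """Sketch of procedure 600: register the UE's location with the
    connectivity manager, authenticate via the customer service manager,
    have the control plane establish a VN-idle, then enter idle.
    All names and data shapes are illustrative assumptions."""
    device_db[ue_id] = location                   # connectivity manager registers location
    if credentials_db.get(ue_id) != "valid":      # customer service manager authenticates
        raise PermissionError("authentication failed")
    control_plane["vns"][ue_id] = "VN-idle"       # control plane establishes the VN
    return "idle"                                 # customer device enters the idle state

control_plane = {"vns": {}}
device_db = {}
state = device_entry("ue1", (12.3, 45.6), {"ue1": "valid"}, control_plane, device_db)
```

A failed authentication short-circuits the flow before any VN is established, matching the ordering in the procedure.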
  • FIG. 7 is an illustration of a procedure 700 representing operation of an embodiment management plane when a customer device is idle without burst traffic.
  • Procedure 700 begins with customer device 610 in an idle state at step 710.
  • Customer device 610 performs DL monitoring and location tracking at step 712.
  • A location update is made and synchronized with connectivity manager 138 at step 720.
  • Connectivity manager 138 updates the location in the customer device information database.
  • Connectivity manager 138 then sends a location update to control plane 120 at step 730.
  • Control plane 120 then updates the VN - idle at step 740.
  • Connectivity manager 138 proceeds with location tracking at step 722 and customer device 610 resumes DL monitoring and location tracking at step 716.
  • Figure 8 is an illustration of a procedure 800 representing operation of an embodiment management plane when a customer device is idle with upstream burst traffic.
  • Procedure 800 begins with customer device 610 in an idle state at step 810. Customer device 610 performs DL monitoring and UL synchronization at step 812. At step 814, customer device 610 sends a traffic burst to data plane 110. Data plane 110 forwards the traffic burst to a v-u-SGW using VN - idle. Customer device 610 remains in idle at step 816.
  • FIG. 9 is an illustration of a procedure 900 representing operation of an embodiment management plane when a customer device is idle with downstream burst traffic.
  • Procedure 900 begins with customer device 610 in an idle state at step 910. Customer device 610 performs DL monitoring and UL synchronization at step 912, and monitors for paging at step 914.
  • Data plane 110 receives burst traffic at a v-u-SGW and filters it at step 940.
  • The v-u-SGW notifies control plane 120.
  • Control plane 120 instructs connectivity manager 138 to page customer device 610 at step 950.
  • Customer device 610 acknowledges the page and measures CQI at step 916. Based on the page acknowledgment, control plane 120 makes a traffic decision at step 932 and instructs data plane 110 to forward the burst at step 942.
  • The burst is received by customer device 610 at step 918, and customer device 610 continues to idle at step 920.
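Procedure 900 reduces to a filter-page-forward pattern. This sketch is an assumption-laden illustration: the filter predicate, the list-based page and delivery queues, and the immediate page acknowledgment are all simplifications, and none of the names are from the embodiment.

```python
def deliver_downstream_burst(device_id, burst, vn_filter, pages, deliveries):
    """Sketch of procedure 900: burst traffic arrives at the v-u-SGW and is
    filtered (step 940); the idle device is paged (step 950); and the burst
    is forwarded once the page is acknowledged (steps 916, 932, 942)."""
    if not vn_filter(burst):                   # step 940: v-u-SGW filters the burst
        return False
    pages.append(device_id)                    # step 950: connectivity manager pages device
    # Step 916: the device acknowledges the page and measures CQI; the
    # acknowledgment is assumed immediate in this sketch.
    deliveries.append((device_id, burst))      # steps 932/942: traffic decision + forward
    return True                                # device continues to idle (step 920)
```

The device never leaves the idle state here, mirroring the short-burst behavior described for idle UEs.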
  • FIG. 10 is an illustration of a procedure 1000 representing operation of an embodiment management plane when an upstream session triggers a customer device to transition from idle to active.
  • Procedure 1000 begins with customer device 610 in an idle state at step 1010.
  • Customer device 610 performs DL acquisition and UL synchronization at step 1012.
  • Customer device 610 submits a session request to control plane 120 at step 1014.
  • Control plane 120 processes the request at step 1030 and begins negotiation with customer service manager 136 at step 1040.
  • Customer service manager 136 and control plane 120 negotiate a QoE for customer device 610 according to information in the customer service information database and make a QoE guarantee at step 1016.
  • Alternatively, customer service manager 136 negotiates directly with customer device 610 to establish the QoE guarantee.
  • Control plane 120 establishes a VN at step 1032 according to the negotiated QoE, location information, and network status.
  • VN - active 1050 is created in data plane 110 at step 1050.
  • Customer device 610 then transitions to an active state at step 1018 and makes an upstream transmission via data plane 110 at step 1052. While customer device 610 is in the active state, customer device 610 makes occasional location updates at step 1020 and connectivity manager 138 synchronizes location at step 1060 and updates the location in the customer device information database.
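The QoE negotiation of steps 1030 through 1050 can be approximated as follows; the rate-capping rule and the load-based scale-back are stand-ins for the negotiation described above, not the embodiment's actual policy, and all names are hypothetical.

```python
def activate_for_upstream(device_id, session_request, service_db, network_load):
    """Sketch of procedure 1000: the control plane processes a session request
    (step 1030), negotiates a QoE with the customer service manager from the
    customer service information database (step 1040), and establishes a
    VN - active (steps 1032/1050) from the negotiated QoE and network status."""
    subscribed = service_db[device_id]["max_rate_mbps"]     # customer service info
    # Grant the lesser of the requested and subscribed rate, scaled back
    # under heavy load -- an illustrative negotiation rule only.
    granted = min(session_request["rate_mbps"], subscribed)
    if network_load > 0.8:
        granted *= 0.5
    vn_active = {"device": device_id, "qoe_mbps": granted}  # the VN - active
    return "active", vn_active                              # step 1018: device goes active
```

Location updates and synchronization with the connectivity manager (steps 1020/1060) continue independently while the device is active.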
  • FIG. 11 is an illustration of a procedure 1100 representing operation of an embodiment management plane when a downstream session triggers a customer device to transition from idle to active.
  • Procedure 1100 begins with customer device 610 in an idle state at step 1110.
  • Customer device 610 makes a location update at step 1112 and connectivity manager 138 synchronizes location at step 1130.
  • Connectivity manager 138 updates the location in the customer device information database.
  • Customer device 610 continuously monitors for paging at step 1114.
  • A session request is received at control plane 120 at step 1140.
  • Control plane 120 and customer service manager 136 then negotiate QoE requirements at step 1150 according to information in the customer service information database.
  • Control plane 120 makes a location inquiry to connectivity manager 138 at step 1142.
  • Connectivity manager 138 then pages customer device 610 at step 1132.
  • Customer device 610 acknowledges the page and measures CQI at step 1116.
  • Connectivity manager 138 updates the location for customer device 610 and passes it to control plane 120.
  • Control plane 120 establishes a VN at step 1144 according to the location information, QoE requirements, and network status.
  • A VN - active is created in data plane 110 at step 1120.
  • A downstream transmission is then made at step 1122, triggering customer device 610 to transition to an active state at step 1118.
  • Figure 12 is an illustration of a procedure 1200 representing operation of an embodiment management plane for adapting a topology. Procedure 1200 is carried out over various elements of the management plane, including infrastructure manager 132, data analyzer 134, and connectivity manager 138, all of Figure 1.
  • Procedure 1200 is also partially carried out by customer device 610 and data plane 110 of Figure 1.
  • Data analyzer 134 generates a congestion and traffic report at step 1210 and delivers the report to infrastructure manager 132.
  • Infrastructure manager 132 checks the current network topology at step 1220.
  • Infrastructure manager 132 then adapts the topology at step 1222 according to the congestion and traffic report and information in the infrastructure database. Adaptations can include bandwidth increase, new backhaul links, and modifying existing backhaul links, among other adaptations.
  • Infrastructure manager 132 configures network nodes 1230 affected by the adaptation. Configuration can include beam/sector configuration and access link configuration, among other parameters.
  • Infrastructure manager 132 then issues an infrastructure update at step 1226.
  • Connectivity manager 138 receives the infrastructure update and pushes an updated infrastructure map to customer device 610 at step 1240.
  • Customer device 610 can be in an idle or active state.
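Procedure 1200's adaptation step can be sketched as a pass over a congestion and traffic report. Treating the topology as a map from backhaul links to capacities, and the fixed `capacity_step` bandwidth increase, are illustrative simplifications rather than the embodiment's adaptation logic.

```python
def adapt_topology(congestion_report, topology, capacity_step=100):
    """Sketch of procedure 1200: the infrastructure manager checks the current
    topology (step 1220) and adapts congested backhaul links (step 1222) by
    adding bandwidth; the returned list plays the role of the infrastructure
    update (step 1226) pushed to devices via the connectivity manager."""
    updated_links = []
    for link, load in congestion_report.items():
        if load > 0.9 and link in topology:
            topology[link] += capacity_step    # bandwidth increase on a hot link
            updated_links.append(link)
    return updated_links                       # the infrastructure update
```

New backhaul links and link modifications, also mentioned above, would extend this loop with insertions into and edits of the topology map.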
  • FIG. 13 is an illustration of a procedure 1300 representing operation of an embodiment management plane for integrating private network nodes into the network.
  • Procedure 1300 begins with a congestion and traffic report from data analyzer 134 at step 1310. The report is received by infrastructure manager 132. Infrastructure manager 132 checks the current network topology at step 1320. According to the congestion and traffic report, infrastructure manager 132 negotiates with private nodes 1330 to integrate them into data plane 110 at step 1322. The infrastructure database is updated at step 1324 and affected nodes 1332 are updated at step 1326. Infrastructure manager 132 issues an infrastructure update at step 1328. Connectivity manager 138 receives the infrastructure update and pushes an updated infrastructure map to customer device 610 at step 1340. Customer device 610 is in an idle state at step 1350.
  • FIG. 14 is an illustration of a procedure 1400 representing operation of an embodiment management plane for on-demand QoE assurance.
  • Procedure 1400 is carried out among customer device 610, control plane 120, data plane 110, data analyzer 134, and customer service manager 136.
  • Procedure 1400 begins with customer device 610 in an active state at step 1420.
  • Data analyzer 134 schedules QoE reporting at step 1410.
  • QoE reporting can be scheduled with active customer devices, selected network nodes 1430, or both.
  • Customer device 610 generates QoE metrics at step 1422 and sends them to data analyzer 134.
  • Selected nodes 1430 in data plane 110 generate QoE metrics at step 1432 and send those to data analyzer 134 as well.
  • Data analyzer 134 analyzes the QoE metrics from customer device 610 and data plane 110 to generate a QoE status.
  • The QoE status is sent to control plane 120 at step 1440.
  • Control plane 120 negotiates QoE at step 1442 with customer service manager 136 at step 1450. The negotiation is done according to the QoE status generated by data analyzer 134.
  • Control plane 120 updates VN - active 1434 at step 1444.
  • Customer service manager 136 updates QoE requirements at step 1452 according to the negotiated QoE and sends the update to customer device 610.
  • Customer device 610 continues in the active state at step 1424.
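A minimal model of procedure 1400's reporting round follows. Averaging the metrics into a single QoE status and comparing it against one negotiated target are assumptions made for brevity; the real analytics are not specified at this level of detail.

```python
def qoe_assurance_round(device_metrics, node_metrics, qoe_target):
    """Sketch of procedure 1400: the data analyzer combines QoE metrics
    reported by active devices (step 1422) and selected nodes (step 1432)
    into a QoE status, and the control plane updates VN - active (step 1444)
    only when the status falls below the negotiated target."""
    samples = list(device_metrics) + list(node_metrics)
    status = sum(samples) / len(samples)       # aggregate QoE status
    needs_vn_update = status < qoe_target      # steps 1440-1444 decision
    return status, needs_vn_update
```

When the status meets the target, the round ends with the device simply continuing in the active state, matching step 1424.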
  • Figure 15 is an illustration of a procedure 1500 representing operation of an embodiment management plane for on-demand network status analysis.
  • Procedure 1500 is carried out among infrastructure manager 132, data plane 110, and data analyzer 134.
  • Procedure 1500 begins with data analyzer 134 scheduling network status reporting at step 1510.
  • Network status reporting is scheduled for selected nodes 1520.
  • Selected nodes 1520 in data plane 110 generate network status reports at step 1522.
  • The network status reports are sent to data analyzer 134.
  • Data analyzer 134 analyzes the network statuses at step 1512 and generates a congestion and traffic report at step 1514.
  • the congestion and traffic report is passed to infrastructure manager 132.
  • Infrastructure manager 132 checks the current network topology at step 1530.
  • Infrastructure manager 132 then adapts the infrastructure at step 1532 for selected nodes 1520 according to the congestion and traffic report.
  • Infrastructure manager 132 can also integrate private nodes 1522 into the wireless network at step 1534 according to the congestion and traffic report.
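Procedure 1500 follows the same schedule-analyze-adapt shape. In this sketch a node's status report is collapsed to a single load figure and congestion is a fixed threshold; both are illustrative choices, and the names are hypothetical.

```python
def network_status_round(selected_nodes, report_fn, congestion_threshold=0.9):
    """Sketch of procedure 1500: the data analyzer schedules status reports
    from selected nodes (steps 1510-1522), analyzes them (step 1512), and
    emits a congestion and traffic report (step 1514) from which the
    infrastructure manager picks nodes to adapt (step 1532)."""
    reports = {node: report_fn(node) for node in selected_nodes}    # step 1522
    congestion_report = {n: load for n, load in reports.items()
                         if load >= congestion_threshold}           # step 1514
    nodes_to_adapt = sorted(congestion_report)                      # step 1532
    return congestion_report, nodes_to_adapt
```

Integration of private nodes (step 1534) would reuse the same congestion report as its trigger.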

Abstract

An embodiment method of managing a wireless network includes managing an infrastructure topology for the wireless network. The wireless network includes a plurality of network nodes. The method further includes managing a connection of a user equipment (UE) to the wireless network. The method further includes managing a customer service provided to the UE over the connection. The method also includes managing analytics for the wireless network and the service.

Description

System and Method for Managing a Wireless Network
This application claims the benefit of U.S. Provisional Application No. 61/831,471, titled "Framework for Management Plane Functionality and Interfaces," filed on June 5, 2013, which application is hereby incorporated herein by reference.
TECHNICAL FIELD
The present invention relates generally to wireless network architecture and, in particular embodiments, to a system and method for managing a wireless network.
BACKGROUND
Driven largely by smart phones, tablets, and video streaming, the amount of wireless data handled by wireless networks has risen markedly and is expected to continue to rise by orders of magnitude over the next ten years. In addition to the sheer volume of data, the number of devices is expected to continue to grow exponentially, possibly reaching into the billions of devices, along with radically higher data rates. Different applications will place different requirements on the performance of future wireless networks. Future wireless networks are expected to be highly flexible, highly efficient, open, and customizable for customers and consumers.
SUMMARY OF THE INVENTION
An embodiment method of managing a wireless network includes managing an infrastructure topology for the wireless network. The wireless network includes a plurality of network nodes. The method further includes managing a connection of a user equipment (UE) to the wireless network. The method further includes managing a customer service provided to the UE over the connection. The method also includes managing analytics for the wireless network and the service.
An embodiment computing system for managing a wireless network includes a customer service manager, a connectivity manager, a data analyzer, and an infrastructure manager. The customer service manager is configured to authorize access to the wireless network by UEs according to respective customer service information. The customer service manager is further configured to negotiate a respective quality of experience (QoE) for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective virtual networks (VNs) are established by a control plane. The connectivity manager is configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected. The data analyzer is configured to generate a congestion and traffic report according to network status reports received periodically from network nodes attached to the wireless network. The data analyzer is further configured to generate a QoE status according to QoE reports received periodically from the UEs. The infrastructure manager is configured to adapt a topology for the wireless network according to the topology and the congestion and traffic report.
An embodiment communication system includes a control plane, a data plane, and a management plane. The control plane is configured to make network resource management decisions for customer service traffic over a wireless network. The data plane includes network nodes arranged in a topology and configured to forward network traffic according to the traffic management decisions. The management plane includes a customer service manager, a connectivity manager, a data analyzer, and an infrastructure manager. The customer service manager is configured to authorize access to the wireless network by UEs according to respective customer service information. The customer service manager is further configured to negotiate a respective QoE for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective VNs are established by the control plane. The connectivity manager is configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected. The data analyzer is configured to generate a congestion and traffic report according to network status reports received periodically from the network nodes. The data analyzer is further configured to generate a QoE status according to QoE reports received periodically from the UEs. The infrastructure manager is configured to adapt the topology according to the topology and the congestion and traffic report.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Figure 1 is a block diagram of one embodiment of a logical functional architecture for a wireless network;
Figure 2 is a block diagram of one embodiment of a management plane hierarchical architecture;
Figure 3 is a block diagram of one embodiment of a network node;
Figure 4 is a flow diagram of one embodiment of a method of managing a wireless network;
Figure 5 is a block diagram of one embodiment of a computing system;
Figure 6 is an illustration of operation of an embodiment management plane when a customer device enters the network;
Figure 7 is an illustration of operation of an embodiment management plane when a customer device is idle without burst traffic;
Figure 8 is an illustration of operation of an embodiment management plane when a customer device is idle with upstream burst traffic;
Figure 9 is an illustration of operation of an embodiment management plane when a customer device is idle with downstream burst traffic;
Figure 10 is an illustration of operation of an embodiment management plane when an upstream session triggers a customer device to transition from idle to active;
Figure 11 is an illustration of operation of an embodiment management plane when a downstream session triggers a customer device to transition from idle to active;
Figure 12 is an illustration of operation of an embodiment management plane for adapting a topology;
Figure 13 is an illustration of operation of an embodiment management plane for integrating private network nodes into the network;
Figure 14 is an illustration of operation of an embodiment management plane for on-demand QoE assurance; and
Figure 15 is an illustration of operation of an embodiment management plane for on-demand network status analysis.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
The making and using of embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that may be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention.
The advent of cloud-based networking has complicated the ability of future wireless networks to satisfy expected demands for higher throughput, lower latencies, lower energy, lower costs, and drastically more numerous connections. Cloud-based networking fundamentally redefines the endpoints and the time frame for which network services are provisioned. It requires the network be much more nimble, flexible, and scalable. Thus, technologies such as network function virtualization (NFV) and software defined networking (SDN) have become increasingly important in building future wireless networks. NFV enables network functions that are traditionally tied to hardware to run on a cloud computing infrastructure in a data center.
Although the separation of the data plane, the control plane, and the management plane may not be feasible in the commercial cloud, the separation of those network functions from the hardware infrastructure will be a cornerstone of future wireless network architectures. One benefit is the ability to elastically support network functional demands. SDN is an architectural framework for creating intelligent programmable networks, where the control planes and the data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the application.
It is realized herein that a framework is needed for future wireless network operation that comprehends the prevalence of virtual networks and centralized control of resources. A virtual network (VN) is a collection of resources virtualized for a given service. It is realized herein that a framework of management plane functionality is needed to guide design and implementation of future wireless networks. It is further realized herein the framework can identify management plane functionality, include management plane interfaces, and include operation procedures. Management plane functionality can include infrastructure management, device connectivity management, customer service management, and status analysis management, among other functionality.
Infrastructure management functionality provides management capability for infrastructure topology adaptation according to network congestion and traffic load. Infrastructure management functionality allows integration of wireless networks of multiple providers. It also provides for spectrum management of a radio access network (RAN) and spectrum sharing among various co-located wireless networks. Infrastructure management also includes access map (MAP) management and air interface management.
Device connectivity management functionality provides management capability for per-device attachment to the wireless network, including media access control (MAC) status, location tracking, and paging. Device connectivity management functionality includes defining customized and scenario-aware location tracking schemes. Device connectivity management functionality also provides for a software-defined and virtual per-mobile-user geographic location tracking entity and a triggering of user-specific data plane topology updates. Device connectivity management functionality also provides user-specific virtual network migration and location tracking as a service (LTaaS). A UE's location is relative to the network or, more specifically, relative to a set of potential network nodes that could provide network access to the UE. LTaaS provides UE location tracking and location information to the UE's home operator or a global UE location information management center, which is generally managed by a third party. The information can then be accessed and used by other operators, VN operators, and others.
Customer service management functionality provides management capability for customers' private information, authorization of device access, device security, and negotiation of service quality of experience (QoE). Customer service management functionality includes per- service QoE management as well as charging services. Customer service management functionality also provides customer specific context for connections and services. This includes QoE monitoring, charging, and billing.
Status analysis management functionality provides management capability for on-demand network status analysis and QoE assurance. More specifically, status analysis management functionality provides for management of on-demand network status analytics, management of on-demand service QoE status analytics, and data analytics as a service (DAaaS).
It is further realized herein the management plane functionality required in a wireless network can vary among devices and can depend on a device's state. The various devices in a wireless network include customer devices, i.e., user equipment (UE), and operator nodes, i.e., base stations or radio nodes. It is realized herein that a customer device can be powered on or off, and, while on, can be active or idle, all of which is referred to as the customer device's state. It is also realized herein that an operator node can also be powered on or off and, while on, can be active, idle, or inactive. Furthermore, an operator node's state can vary among its respective subsystems. For example, an operator node's access subsystem may be active, idle, or inactive while powered on, while the operator node's backhaul subsystem may be active or inactive. Similarly, the operator node's sensor subsystem may be active or inactive while powered on.
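The customer device states described above form a small state machine. The event names in this sketch (`power_on`, `session`, `session_end`, `power_off`) are hypothetical labels for the triggers discussed in procedures 600 through 1100, not terms from the embodiment.

```python
# Illustrative encoding of the customer device states described above.
CUSTOMER_DEVICE_STATES = {"off", "idle", "active"}

def next_device_state(current, event):
    """Minimal transition table; unknown events leave the state unchanged,
    which also covers the short-burst cases where an idle device sends or
    receives traffic without going active."""
    table = {
        ("off", "power_on"): "idle",        # network entry ends with the device idle
        ("idle", "session"): "active",      # an up/downstream session triggers active
        ("active", "session_end"): "idle",
        ("idle", "power_off"): "off",
        ("active", "power_off"): "off",
    }
    return table.get((current, event), current)
```

Operator nodes would need a richer model, since their access, backhaul, and sensor subsystems each carry their own state.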
While in an active state, a UE continuously searches and monitors for feedback channel quality indicators (CQI). A continuous process is one that is carried out at every transmission interval. For example, if the transmission interval is 1 millisecond, then the UE searches and monitors periodically, where the period is 1 millisecond. The UE also continuously sends signals enabling uplink (UL) CQI estimation. On the network side, while the UE is active, a VN - active is established. The configuration of the VN - active depends on the UE's mobility, required QoE, and network status. Also, while the UE is active, the network maintains contexts above and below layer-2, including automatic repeat request (ARQ) and hybrid ARQ (HARQ) below; and flow/service ID mapping to ID defined in the access link (AL), location, states, authentication keys, sessions, and QoE above.
While in an idle state, the UE has an idle state ID and remains attached to the network. The UE continuously searches and monitors the network for measurement purposes and location updates. The UE may also perform mobility monitoring/tracking and location synchronization with the network. In certain embodiments, the UE carries out network monitoring conditionally. The UE also transmits and receives short data bursts without going back to active state. On the network side, while the UE is idle, a VN - idle is established. The configuration of the VN - idle depends on the UE's mobility, required QoE, and network status. The VN - idle can have no dedicated physical network resources or can have partially dedicated physical network resources, for example, between a user-specific virtual serving gateway (v-u-SGW) and other gateways (GWs).
Figure 1 is a block diagram of one embodiment of a logical functional architecture 100 for a wireless network. Architecture 100 separates the wireless network into a data plane 110, a control plane 120, and a management plane 130. Data plane 110 transports network traffic among the various network nodes and UEs attached to the wireless network. Control plane 120 makes network resource assignment decisions for customer service traffic and transports control signals among the various network nodes and UEs. Management plane 130 provides various management and administrative functionality for the network. Interfaces exist among management plane 130, control plane 120, and data plane 110, enabling each to carry out its respective functionality. Additionally, control plane 120 has an application programming interface (API) 122 that allows various applications 140-1 through 140-N to access control plane 120.
Architecture 100 also includes various databases that are occasionally accessed by management plane 130 in carrying out its functionalities. These databases include a privacy network database 150, a customer service information database 152, a customer device information database 154, an infrastructure database 156, and an infrastructure abstraction database 158. Privacy network database 150 is a repository for topology information, node capabilities, states, and security information. Customer service information database 152 is a repository for authentication and security information related to customer devices, i.e., UEs. Customer device information database 154 is a repository for capabilities, locations, and states of customer devices. Infrastructure database 156 is a repository for network topology, node capabilities, and states. Infrastructure abstraction database 158 is a repository for various infrastructure abstractions within the wireless network.
In architecture 100, management plane 130 provides various functionalities through respective control blocks, including: an infrastructure manager 132, a data analyzer 134, a customer service manager 136, and a connectivity manager 138. Management plane 130, in certain embodiments, can provide additional functionalities, such as content service management, which is responsible for defining content caches in the radio access network (RAN), configuring cache-capable network nodes, and managing content forwarding. Infrastructure manager 132 is responsible for integrating network nodes from various wireless network providers. Infrastructure manager 132 is also responsible for spectrum management for RAN backhaul links and access networks, as well as spectrum sharing among co-located wireless networks. Infrastructure manager 132 also provides access map management and air-interface management, among other functionalities. Data analyzer 134 is responsible for data analytics related to the wireless network. These analytics include managing on-demand network status, managing on-demand QoE, and providing data analytics as a service (DAaaS), among other functionalities. Customer service manager 136 is responsible for customer aspects of service, including QoE, charging, and customer information, among other functionalities.
Connectivity manager 138 is responsible for location tracking and synchronization, pushing topology updates to users, VN migration, and providing location tracking as a service (LTaaS), among other functionalities.
Infrastructure manager 132, data analyzer 134, customer service manager 136, and connectivity manager 138 can be implemented in one or more processors, one or more application specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), dedicated logic circuitry, or any combination thereof, all collectively referred to as a processor. The respective functions for infrastructure manager 132, data analyzer 134, customer service manager 136, and connectivity manager 138 can be stored as instructions in non-transitory memory for execution by the processor.
Figure 2 is a block diagram of one embodiment of a management plane hierarchical architecture 200. Architecture 200 includes a top level management plane 210 having second and third levels beneath it. Top level management plane 210 coordinates among second level management planes 220-1 through 220-N. Each of second level management planes 220-1 through 220-N is responsible for coordinating among its respective third level management planes. For example, second level management plane 220-N coordinates among third level management planes 230-1 through 230-M. At each level of architecture 200, the management plane can include up to the full functionality for management plane 130 of Figure 1. Top level management plane 210 and other upper-level management planes in architecture 200 also include coordination capabilities for coordinating among respective lower-level management planes. The division of the management plane into the various levels of architecture 200 can be done in a variety of ways, including by geographic zones, virtual networks, and service types. In certain embodiments, within a single level of architecture 200, peer management planes can also communicate with each other. For example, second level management plane 220-1 could communicate with second level management plane 220-2.
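The hierarchy of architecture 200 behaves like a tree of coordinating planes. The sketch below is illustrative only; `coordinate` stands in for whatever coordination capability an upper-level plane exposes to its lower-level planes, and no such method is named in the embodiment.

```python
class ManagementPlane:
    """Illustrative model of the hierarchy in Figure 2: each plane can carry
    full management functionality and coordinates its children; how planes
    are divided (geography, virtual network, service type) is just a label
    here."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def coordinate(self, directive):
        """Push a directive down the tree; returns every plane that applied it,
        in depth-first order."""
        applied = [self.name]
        for child in self.children:
            applied.extend(child.coordinate(directive))
        return applied
```

Peer-to-peer communication within a level, also mentioned above, would be modeled as links between siblings rather than through this parent-child path.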
Figure 3 is a block diagram of one embodiment of a network node 300. Network node 300 includes access subsystems 310-1 through 310-i, backhaul subsystems 320-1 through 320-j, and sensor subsystems 330-1 through 330-k. Access subsystems 310-1 through 310-i can include single or multiple access links. Similarly, backhaul subsystems 320-1 through 320-j can include single or multiple backhaul links. The various subsystems of network node 300 can be in different states at a given time. Each of the subsystems can be powered on and off independently, and transition among various states independently. Each of access subsystems 310-1 through 310-i can be in an active state, idle state, or inactive state while powered on. Each of backhaul subsystems 320-1 through 320-j can be in an active state or an inactive state while powered on. Each of sensor subsystems 330-1 through 330-k can be in an active state or an inactive state while powered on. Each of these subsystems is inactive while powered off.
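Network node 300's independently powered subsystems can be modeled with per-subsystem state sets; the `NetworkNode` class, its subsystem naming scheme, and the `transition` method are hypothetical names introduced for illustration.

```python
class NetworkNode:
    """Illustrative model of node 300: subsystems power on/off and change
    state independently, and the set of allowed states differs per
    subsystem type, as described above."""
    ALLOWED = {
        "access":   {"active", "idle", "inactive"},
        "backhaul": {"active", "inactive"},
        "sensor":   {"active", "inactive"},
    }

    def __init__(self, access=1, backhaul=1, sensor=1):
        # Every subsystem instance starts powered off, i.e. inactive.
        self.state = {f"{kind}-{n}": "inactive"
                      for kind, count in (("access", access),
                                          ("backhaul", backhaul),
                                          ("sensor", sensor))
                      for n in range(1, count + 1)}

    def transition(self, subsystem_id, new_state):
        """Move one subsystem to a new state without touching the others."""
        kind = subsystem_id.split("-")[0]
        if new_state not in self.ALLOWED[kind]:
            raise ValueError(f"{kind} subsystem cannot be {new_state}")
        self.state[subsystem_id] = new_state
```

Note that only access subsystems admit an idle state; a backhaul or sensor subsystem is either active or inactive.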
The operation of the subsystems of network node 300 varies with each subsystem's current state. An access subsystem, while active, carries out continuous downlink (DL) transmissions and discontinuous radio frequency (RF) transmissions (e.g., pilots and reference signals), and continuously receives and monitors UL transmissions. In the idle state, the access subsystem carries out no DL RF transmissions, but continuously carries out UL receiving and monitoring. While inactive, the access subsystem shuts down RF activity, including DL and UL.
A backhaul subsystem, in the active state, carries out RF transmissions and continuously receives and monitors RF transmissions. While inactive, the backhaul subsystem shuts down RF activity. Similarly, a sensor subsystem, sometimes referred to as an out-of-band sensor subsystem, carries out continuous and discontinuous RF transmissions and continuously receives and monitors RF transmissions while active. While inactive, the sensor subsystem shuts down RF activity.
Figure 4 is a flow diagram of one embodiment of a method of managing a wireless network. The method begins at a start step 410. At an infrastructure management step 420, a management plane manages an infrastructure topology for the wireless network. The wireless network's infrastructure topology generally includes a plurality of network nodes. At a connectivity management step 430, the management plane manages connections of UEs to the wireless network according to the infrastructure topology. At a customer service management step 440, the management plane manages services provided to the UEs over their respective connections. At a data analytics step 450, the management plane performs analysis on performance metrics and manages the corresponding analytics. These steps are generally carried out in parallel; however, under certain circumstances, there are sequential aspects among the steps. The method then ends at an end step 460.
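The parallel execution of the four management steps can be sketched as below; this is an illustrative skeleton only, and the function names and the keys written into the shared state are hypothetical stand-ins for the real management tasks.

```python
from concurrent.futures import ThreadPoolExecutor

def manage_infrastructure(network):    # step 420: infrastructure topology
    network["topology_checked"] = True

def manage_connectivity(network):      # step 430: UE connections
    network["ue_connections"] = ["ue-1"]

def manage_customer_service(network):  # step 440: services over the connections
    network["services"] = {"ue-1": "default"}

def manage_analytics(network):         # step 450: performance-metric analysis
    network["metrics_analyzed"] = True

def run_management_plane(network):
    # Steps 420-450 are generally carried out in parallel; leaving the
    # executor context waits for all four tasks to finish (step 460).
    with ThreadPoolExecutor(max_workers=4) as pool:
        for task in (manage_infrastructure, manage_connectivity,
                     manage_customer_service, manage_analytics):
            pool.submit(task, network)
    return network
```

The sequential aspects mentioned in the text (e.g., connectivity management depending on the current topology) would appear as ordering constraints between these tasks rather than a strictly serial pipeline.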
Figure 5 is a block diagram of a computing system 500 that may be used for implementing the devices and methods disclosed herein. Specific devices may utilize all of the components shown or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. The computing system 500 may comprise a processing unit 502 equipped with one or more input/output devices, such as a speaker, microphone, mouse, touchscreen, keypad, keyboard, printer, display, and the like. The processing unit may include a central processing unit (CPU) 514, memory 508, a mass storage device 504, a video adapter 510, and an I/O interface 512 connected to a bus 520.
The bus 520 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, video bus, or the like. The CPU 514 may comprise any type of electronic data processor. The memory 508 may comprise any type of non-transitory system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory 508 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
The mass storage 504 may comprise any type of non-transitory storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 520. The mass storage 504 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
The video adapter 510 and the I/O interface 512 provide interfaces to couple external input and output devices to the processing unit 502. As illustrated, examples of input and output devices include a display 518 coupled to the video adapter 510 and a mouse/keyboard/printer 516 coupled to the I/O interface 512. Other devices may be coupled to the processing unit 502, and additional or fewer interface cards may be utilized. For example, a serial interface such as Universal Serial Bus (USB) (not shown) may be used to provide an interface for a printer.
The processing unit 502 also includes one or more network interfaces 506, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or different networks. The network interfaces 506 allow the processing unit 502 to communicate with remote units via the networks. For example, the network interfaces 506 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, the processing unit 502 is coupled to a local-area network 522 or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
Figure 6 is an illustration of a procedure 600 representing operation of an embodiment management plane when a customer device enters the network. Procedure 600 is carried out among various components of the management plane, including customer service manager 136 and connectivity manager 138, both of Figure 1. Procedure 600 is also carried out among a customer device 610, control plane 120 of Figure 1, and data plane 110, also of Figure 1. Customer device 610 powers on at step 620 and begins DL acquisition and UL synchronization procedures at step 622. Customer device 610 then sends ID and location information, at step 624, to control plane 120 for registration at step 640. The location information is passed from control plane 120 to connectivity manager 138, where it is registered at step 670. Connectivity manager 138 stores the location information in the customer device information database. Connectivity manager 138 then begins synchronizing location tracking with customer device 610 at step 672. The ID information is passed to customer service manager 136 for authentication at step 660. Authentication is carried out according to the ID information and information in the customer service information database. Authorization is then sent from customer service manager 136 to customer device 610, which receives it at step 626. Control plane 120 establishes a VN at step 642, which is represented by VN - idle in data plane 110 at step 650. Customer device 610 then prepares for idle at step 628, and enters the idle state at step 630.
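The entry procedure above (register location, authenticate, establish VN - idle) can be sketched as a single function; the database shapes and return strings here are illustrative, not from the disclosure.

```python
def register_device(device_id, location, customer_db, device_db):
    """Sketch of procedure 600: a device enters the network.

    customer_db plays the role of the customer service information database;
    device_db plays the role of the customer device information database.
    """
    # Connectivity manager registers and stores the location (step 670).
    device_db[device_id] = {"location": location, "state": "registering"}
    # Customer service manager authenticates against its database (step 660).
    if device_id not in customer_db:
        return "rejected"
    # Control plane establishes a VN, represented as VN - idle in the data
    # plane (steps 642/650), and the device enters the idle state (step 630).
    device_db[device_id]["vn"] = "idle"
    device_db[device_id]["state"] = "idle"
    return "authorized"
```

A usage example: `register_device("dev-1", (10, 20), {"dev-1": {}}, {})` returns `"authorized"` and leaves the device record in the idle state.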
Figure 7 is an illustration of a procedure 700 representing operation of an embodiment management plane when a customer device is idle without burst traffic. Procedure 700 begins with customer device 610 in an idle state at step 710. Customer device 610 performs DL monitoring and location tracking at step 712. At step 714, a location update is made and is synchronized with connectivity manager 138 at step 720. Connectivity manager 138 updates the location in the customer device information database. Connectivity manager 138 then sends a location update to control plane 120 at step 730. Control plane 120 then updates the VN - idle at step 740. Connectivity manager 138 proceeds with location tracking at step 722 and customer device 610 resumes DL monitoring and location tracking at step 716.
Figure 8 is an illustration of a procedure 800 representing operation of an embodiment management plane when a customer device is idle with upstream burst traffic. Procedure 800 begins with customer device 610 in an idle state at step 810. Customer device 610 performs DL monitoring and UL synchronization at step 812. At step 814, customer device 610 sends a traffic burst to data plane 110. Data plane 110 forwards the traffic burst to a v-u-SGW using VN - idle. Customer device 610 remains in idle at step 816.
Figure 9 is an illustration of a procedure 900 representing operation of an embodiment management plane when a customer device is idle with downstream burst traffic. Procedure 900 begins with customer device 610 in an idle state at step 910. Customer device 610 performs DL monitoring and UL synchronization at step 912, and monitors for paging at step 914. Data plane 110 receives burst traffic at a v-u-SGW and filters it at step 940. The v-u-SGW notifies control plane 120. Control plane 120 instructs connectivity manager 138 to page customer device 610 at step 950. Customer device 610 acknowledges the page and measures CQI at step 916. In response to the page acknowledgment, control plane 120 makes a traffic decision at step 932 and instructs data plane 110 to forward the burst at step 942. The burst is received by customer device 610 at step 918, and customer device 610 continues to idle at step 920.
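The page-then-forward sequence of procedure 900 can be sketched as follows; the record fields, the `pages`/`delivered` queues, and the return strings are hypothetical illustration devices, not part of the disclosure.

```python
def deliver_downstream_burst(device_id, burst, device_db, pages, delivered):
    """Sketch of procedure 900: downstream burst traffic toward an idle device."""
    record = device_db[device_id]
    if record["state"] != "idle":
        # An active device can receive the burst without paging.
        delivered.append((device_id, burst))
        return "forwarded"
    # Connectivity manager pages the idle device (step 950).
    pages.append(device_id)
    # The device acknowledges the page and measures CQI (step 916).
    record["cqi_measured"] = True
    # Control plane makes a traffic decision and the burst is forwarded (942/918).
    delivered.append((device_id, burst))
    return "paged-and-forwarded"
```

The filtering at the v-u-SGW (step 940) is omitted here; it would sit in front of this function and decide whether the burst warrants waking the device at all.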
Figure 10 is an illustration of a procedure 1000 representing operation of an embodiment management plane when an upstream session triggers a customer device to transition from idle to active. Procedure 1000 begins with customer device 610 in an idle state at step 1010. Customer device 610 performs DL acquisition and UL synchronization at step 1012. Customer device 610 submits a session request to control plane 120 at step 1014. Control plane 120 processes the request at step 1030 and begins negotiation with customer service manager 136 at step 1040. Customer service manager 136 and control plane 120 negotiate a QoE for customer device 610 according to information in the customer service information database and make a QoE guarantee at step 1016. In an alternative embodiment, customer service manager 136 negotiates directly with customer device 610 to establish the QoE guarantee. Control plane 120 establishes a VN at step 1032 according to the negotiated QoE, location information, and network status. A VN - active is created in data plane 110 at step 1050. Customer device 610 then transitions to an active state at step 1018 and makes an upstream transmission via data plane 110 at step 1052. While customer device 610 is in the active state, it makes occasional location updates at step 1020, and connectivity manager 138 synchronizes the location at step 1060 and updates it in the customer device information database.
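The idle-to-active transition of procedure 1000 can be sketched as below. The negotiation rule (cap the requested QoE at a subscribed maximum) and the field names are assumptions for illustration; the patent does not prescribe a particular negotiation algorithm.

```python
def start_upstream_session(device_id, requested_qoe, device_db, customer_db):
    """Sketch of procedure 1000: an upstream session request activates a device."""
    # Customer service manager and control plane negotiate a QoE according to
    # the customer service information database (step 1040); here the grant is
    # simply capped at the customer's subscribed level.
    subscribed = customer_db[device_id]["max_qoe"]
    granted = min(requested_qoe, subscribed)
    record = device_db[device_id]
    # Control plane establishes the VN from the negotiated QoE, location
    # information, and network status (step 1032); VN - active is created.
    record["vn"] = {"mode": "active", "qoe": granted,
                    "location": record["location"]}
    # The device transitions to the active state (step 1018).
    record["state"] = "active"
    return granted
```

The downstream-triggered case (Figure 11) follows the same shape, except that the session request arrives at the control plane first and the device must be paged before the VN - active is built.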
Figure 11 is an illustration of a procedure 1100 representing operation of an embodiment management plane when a downstream session triggers a customer device to transition from idle to active. Procedure 1100 begins with customer device 610 in an idle state at step 1110. Customer device 610 makes a location update at step 1112 and connectivity manager 138 synchronizes location at step 1130. Connectivity manager 138 updates the location in the customer device information database. Customer device 610 continuously monitors for paging at step 1114. A session request is received at control plane 120 at step 1140. Control plane 120 and customer service manager 136 then negotiate QoE requirements at step 1150 according to information in the customer service information database. Control plane 120 makes a location inquiry to connectivity manager 138 at step 1142. Connectivity manager 138 then pages customer device 610 at step 1132. Customer device 610 acknowledges the page and measures CQI at step 1116. Connectivity manager 138 updates the location for customer device 610 and passes it to control plane 120. Control plane 120 establishes a VN at step 1144 according to the location information, QoE requirements, and network status. A VN - active is created in data plane 110 at step 1120. A downstream transmission is then made at step 1122, triggering customer device 610 to transition to an active state at step 1118.

Figure 12 is an illustration of a procedure 1200 representing operation of an embodiment management plane for adapting a topology. Procedure 1200 is carried out over various elements of the management plane, including infrastructure manager 132, data analyzer 134, and connectivity manager 138, all of Figure 1. Procedure 1200 is also partially carried out by customer device 610 and data plane 110 of Figure 1. Data analyzer 134 generates a congestion and traffic report at step 1210 and delivers the report to infrastructure manager 132.
Infrastructure manager 132 checks the current network topology at step 1220. Infrastructure manager 132 then adapts the topology at step 1222 according to the congestion and traffic report and information in the infrastructure database. Adaptations can include increasing bandwidth, establishing new backhaul links, and modifying existing backhaul links, among other adaptations. At step 1224, infrastructure manager 132 configures network nodes 1230 affected by the adaptation. Configuration can include beam/sector configuration and access link configuration, among other parameters. Infrastructure manager 132 then issues an infrastructure update at step 1226. Connectivity manager 138 receives the infrastructure update and pushes an updated infrastructure map to customer device 610 at step 1240. Customer device 610 can be in an idle or active state.
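A minimal sketch of the adaptation step (1222) follows. The 0.8 congestion threshold, the doubling rule for bandwidth increases, and the data shapes are all illustrative assumptions; the patent only states that adaptations can include bandwidth increases, new backhaul links, and modified links.

```python
def adapt_topology(congestion_report, topology, infra_db):
    """Sketch of procedure 1200: adapt links named in a congestion report."""
    updates = []
    for link, load in congestion_report.items():
        if load > 0.8 and link in topology:
            # Bandwidth increase on an existing, congested link.
            topology[link]["bandwidth"] *= 2
            updates.append(("widen", link))
        elif load > 0.8:
            # Congestion on a path with no link yet: new backhaul link.
            topology[link] = {"bandwidth": 1}
            updates.append(("new-backhaul", link))
    # Record the adaptation in the infrastructure database (cf. step 1224-1226,
    # after which affected nodes are configured and an update is issued).
    infra_db["last_update"] = updates
    return updates
```

The returned update list corresponds loosely to the infrastructure update of step 1226, which the connectivity manager turns into the infrastructure map pushed to devices.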
Figure 13 is an illustration of a procedure 1300 representing operation of an embodiment management plane for integrating private network nodes into the network. Procedure 1300 begins with a congestion and traffic report from data analyzer 134 at step 1310. The report is received by infrastructure manager 132. Infrastructure manager 132 checks the current network topology at step 1320. According to the congestion and traffic report, infrastructure manager 132 negotiates with private nodes 1330 to integrate them into the data plane 110 at step 1322. The infrastructure database is updated at step 1324 and affected nodes 1332 are updated at step 1326. Infrastructure manager 132 issues an infrastructure update at step 1328. Connectivity manager 138 receives the infrastructure update and pushes an updated infrastructure map to customer device 610 at step 1340. Customer device 610 is in an idle state at step 1350.
Figure 14 is an illustration of a procedure 1400 representing operation of an embodiment management plane for on-demand QoE assurance. Procedure 1400 is carried out among customer device 610, control plane 120, data plane 110, data analyzer 134, and customer service manager 136. Procedure 1400 begins with customer device 610 in an active state at step 1420. Data analyzer 134 schedules QoE reporting at step 1410. QoE reporting can be scheduled with active customer devices, selected network nodes 1430, or both. Customer device 610 generates QoE metrics at step 1422 and sends them to data analyzer 134. Selected nodes 1430 in data plane 110 generate QoE metrics at step 1432 and send those to data analyzer 134 as well. At step 1412, data analyzer 134 analyzes the QoE metrics from customer device 610 and data plane 110 to generate a QoE status. The QoE status is sent to control plane 120 at step 1440. Control plane 120 negotiates QoE at step 1442 with customer service manager 136 at step 1450. The negotiation is done according to the QoE status generated by data analyzer 134. According to the QoE negotiation, control plane 120 updates VN - active 1434 at step 1444. Customer service manager 136 updates QoE requirements at step 1452 according to the negotiated QoE and sends the update to customer device 610. Customer device 610 continues in the active state at step 1424.
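The metric-combination step (1412) can be sketched as a simple aggregation; the averaging rule, the 0.9 target, and the status fields are hypothetical, since the patent does not define how device and node metrics are combined into a QoE status.

```python
def qoe_status(device_metrics, node_metrics, target=0.9):
    """Sketch of step 1412: merge device and node QoE metrics into a status.

    device_metrics: QoE samples reported by customer devices (step 1422).
    node_metrics: QoE samples reported by selected network nodes (step 1432).
    """
    samples = list(device_metrics) + list(node_metrics)
    score = sum(samples) / len(samples)
    # The status drives the QoE negotiation between the control plane and the
    # customer service manager (steps 1440-1450).
    return {"score": score, "meets_target": score >= target}
```

A failing status would prompt the control plane to update VN - active and the customer service manager to revise the QoE requirements it sends to the device.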
Figure 15 is an illustration of a procedure 1500 representing operation of an embodiment management plane for on-demand network status analysis. Procedure 1500 is carried out among infrastructure manager 132, data plane 110, and data analyzer 134. Procedure 1500 begins with data analyzer 134 scheduling network status reporting at step 1510. Network status reporting is scheduled for selected nodes 1520. Selected nodes 1520 in data plane 110 generate network status reports at step 1522. The network status reports are sent to data analyzer 134. Data analyzer 134 analyzes the network statuses at step 1512 and generates a congestion and traffic report at step 1514. The congestion and traffic report is passed to infrastructure manager 132. Infrastructure manager 132 checks the current network topology at step 1530. Infrastructure manager 132 then adapts the infrastructure at step 1532 for selected nodes 1520 according to the congestion and traffic report. Infrastructure manager 132 can also integrate private nodes 1522 into the wireless network at step 1534 according to the congestion and traffic report.
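The report-generation steps (1512-1514) can be sketched as a filter over node status reports; the load representation and the 0.8 threshold are illustrative assumptions, not values from the disclosure.

```python
def congestion_report(status_reports, threshold=0.8):
    """Sketch of steps 1512-1514: derive a congestion and traffic report.

    status_reports maps a node identifier to its reported load; nodes at or
    above the threshold are flagged for the infrastructure manager, which
    uses the report to adapt the infrastructure or integrate private nodes.
    """
    return {node: load for node, load in status_reports.items()
            if load >= threshold}
```

For example, `congestion_report({"n1": 0.9, "n2": 0.3})` flags only `n1`, and the resulting report would feed procedure 1200 (topology adaptation) or procedure 1300 (private node integration).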
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.

Claims

WHAT IS CLAIMED IS:
1. A method of managing a wireless network, comprising:
managing an infrastructure topology for the wireless network, wherein the wireless network comprises network nodes;
managing a connection of a user equipment (UE) to the wireless network;
managing a customer service provided to the UE over the connection; and
managing analytics for the wireless network and the service.
2. The method of Claim 1 wherein the managing the infrastructure topology comprises adapting the infrastructure topology according to congestion and traffic load distribution on the wireless network and communicating the infrastructure topology to the network nodes and UEs attached to the wireless network.
3. The method of Claim 2 wherein the adapting the infrastructure topology comprises:
identifying the congestion and traffic load distribution on the wireless network;
checking the infrastructure topology;
carrying out at least one of:
increasing bandwidth,
establishing a new backhaul link, and
changing an existing backhaul link;
updating an infrastructure database according to the carrying out;
configuring the network nodes affected by the carrying out; and
communicating an updated infrastructure topology to a connectivity manager configured to carry out the managing the connection according to the updated infrastructure topology.
4. The method of Claim 3 wherein the managing the connection comprises updating an access map according to the updated infrastructure topology and transmitting the access map to the UE.
5. The method of Claim 1 wherein the managing the infrastructure topology comprises integrating a plurality of private network nodes into the wireless network, wherein the wireless network comprises current network nodes.
6. The method of Claim 5 wherein the integrating comprises:
identifying congestion and traffic load distribution on the wireless network;
checking the infrastructure topology;
negotiating with the plurality of private network nodes to generate an extension of the wireless network;
updating an infrastructure database according to the extension;
configuring the plurality of private network nodes and affected nodes among the current network nodes; and
communicating an updated infrastructure topology to a connectivity manager configured to carry out the managing the connection according to the updated infrastructure topology.
7. The method of Claim 1 wherein the managing analytics comprises:
scheduling quality of experience (QoE) reporting from at least one UE attached to the wireless network and the network nodes;
receiving QoE performance reports from the at least one UE;
generating a QoE status according to the QoE performance reports; and
communicating the QoE status to a control plane.
8. The method of Claim 1 wherein the managing analytics comprises:
scheduling network status reporting from the network nodes;
receiving network status reports from the network nodes;
generating a congestion and traffic report for the wireless network; and
communicating the congestion and traffic report to an infrastructure manager configured to carry out the managing the infrastructure topology according to the congestion and traffic report.
9. The method of Claim 1 wherein the managing analytics comprises providing data analytics as a service (DAaaS).
10. The method of Claim 1 wherein the managing the customer service comprises managing private information for a customer corresponding to the UE.
11. The method of Claim 1 wherein the managing the customer service comprises authorizing access for the UE to the wireless network.
12. The method of Claim 1 wherein the managing the customer service comprises negotiating a quality of experience (QoE) for the service with a control plane according to customer service information, a location of the UE, and a request from the UE for a session, wherein the control plane establishes a virtual network-active (VN-active) for the session.
13. The method of Claim 1 wherein the managing the customer service comprises negotiating a quality of experience (QoE) for the service with a control plane according to customer service information, a location of the UE, and a session request from the control plane, wherein the control plane establishes a virtual network-active (VN-active) for the session request over which a downstream transmission is made.
14. The method of Claim 1 wherein the managing the connection comprises providing location tracking as a service (LTaaS) to the UE.
15. The method of Claim 1 wherein the managing the connection comprises location tracking and registering a location for the UE in a customer device information data structure after the UE enters the wireless network.
16. The method of Claim 15 wherein the managing the connection further comprises providing the location to a control plane for updating the virtual network when the UE is in the idle state.
17. The method of Claim 1 wherein the managing the connection comprises maintaining the UE's current state and paging the UE when the UE is in an idle state.
18. A computing system for managing a wireless network, comprising:
a customer service manager configured to:
authorize access to the wireless network by user equipments (UEs) according to respective customer service information, and
negotiate a respective quality of experience (QoE) for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective virtual networks (VNs) are established by a control plane;
a connectivity manager configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected;
a data analyzer configured to:
generate a congestion and traffic report according to network status reports received periodically from network nodes attached to the wireless network, and
generate a QoE status according to QoE reports received periodically from the UEs; and
an infrastructure manager configured to adapt a topology for the wireless network according to the topology and the congestion and traffic report.
19. The computing system of Claim 18 wherein the infrastructure manager is further configured to integrate a plurality of private network nodes into the wireless network according to the topology and the congestion and traffic report.
20. The computing system of Claim 18 wherein the connectivity manager is further configured to page idle UEs among the UEs.
21. The computing system of Claim 18 wherein the connectivity manager is further configured to track the respective locations of the UEs using the respective VNs.
22. The computing system of Claim 18 wherein the customer service manager is further configured to negotiate the respective QoE for one UE among the UEs according to an upstream session request generated by the one UE, wherein the respective QoE for the one UE is negotiated with the control plane.
23. The computing system of Claim 18 wherein the customer service manager is further configured to negotiate the respective QoE for one UE among the UEs according to a downstream session request received from the control plane.
24. The computing system of Claim 18 wherein the connectivity manager is further configured to update an access map according to the topology updated by the infrastructure manager and transmit the access map to the UEs.
25. A communication system, comprising:
a control plane configured to make network resource management decisions for customer service traffic over a wireless network;
a data plane comprising network nodes arranged in a topology and configured to forward network traffic according to the traffic management decisions; and
a management plane having:
a customer service manager configured to:
authorize access to the wireless network by UEs according to respective customer service information, and
negotiate a respective quality of experience (QoE) for each of the UEs according to respective locations of the UEs, the respective customer service information, and respective session requests for which respective virtual networks (VNs) are established by the control plane,
a connectivity manager configured to track respective locations of the UEs and update the respective VNs to which the UEs are connected,
a data analyzer configured to:
generate a congestion and traffic report according to network status reports received periodically from the network nodes, and
generate a QoE status according to QoE reports received periodically from the UEs, and
an infrastructure manager configured to adapt the topology according to the topology and the congestion and traffic report.
26. The communication system of Claim 25 wherein the management plane comprises a central managing server configured to coordinate the customer service manager, the connectivity manager, the data analyzer, and the infrastructure manager for a plurality of distributed managing servers.
27. The communication system of Claim 26 wherein the plurality of distributed managing servers are distributed according to geography.
PCT/US2014/041137 2013-06-05 2014-06-05 System and method for managing a wireless network WO2014197716A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020177015160A KR101876364B1 (en) 2013-06-05 2014-06-05 System and method for managing a wireless network
CN201480032572.6A CN105850199B (en) 2013-06-05 2014-06-05 For managing the method and system of wireless network
EP14807236.6A EP2997780B1 (en) 2013-06-05 2014-06-05 System and method for managing a wireless network
KR1020157037268A KR101748228B1 (en) 2013-06-05 2014-06-05 System and method for managing a wireless network
JP2016518002A JP6240318B2 (en) 2013-06-05 2014-06-05 System and method for managing a wireless network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361831471P 2013-06-05 2013-06-05
US61/831,471 2013-06-05
US14/297,073 2014-06-05
US14/297,073 US9532266B2 (en) 2013-06-05 2014-06-05 Systems and methods for managing a wireless network

Publications (1)

Publication Number Publication Date
WO2014197716A1 true WO2014197716A1 (en) 2014-12-11

Family

ID=52005379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/041137 WO2014197716A1 (en) 2013-06-05 2014-06-05 System and method for managing a wireless network

Country Status (6)

Country Link
US (2) US9532266B2 (en)
EP (1) EP2997780B1 (en)
JP (2) JP6240318B2 (en)
KR (2) KR101748228B1 (en)
CN (1) CN105850199B (en)
WO (1) WO2014197716A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018521564A (en) * 2015-06-01 2018-08-02 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Method and apparatus for customer service management for wireless communication networks
US10349240B2 (en) 2015-06-01 2019-07-09 Huawei Technologies Co., Ltd. Method and apparatus for dynamically controlling customer traffic in a network under demand-based charging
US10374965B2 (en) 2015-06-01 2019-08-06 Huawei Technologies Co., Ltd. Systems and methods for managing network traffic with a network operator

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430262B1 (en) * 2013-12-19 2016-08-30 Amdocs Software Systems Limited System, method, and computer program for managing hierarchy and optimization in a network function virtualization (NFV) based communication network
US9622019B2 (en) * 2014-11-28 2017-04-11 Huawei Technologies Co., Ltd. Systems and methods for generating a virtual network topology for M2M communications
FR3030076B1 (en) * 2014-12-10 2016-12-09 Bull Sas METHOD FOR MANAGING A NETWORK OF CALCULATION NODES
US9860339B2 (en) 2015-06-23 2018-01-02 At&T Intellectual Property I, L.P. Determining a custom content delivery network via an intelligent software-defined network
US9832797B2 (en) 2015-06-29 2017-11-28 At&T Intellectual Property I, L.P. Mobility network function consolidation
US9813299B2 (en) * 2016-02-24 2017-11-07 Ciena Corporation Systems and methods for bandwidth management in software defined networking controlled multi-layer networks
US10608928B2 (en) 2016-08-05 2020-03-31 Huawei Technologies Co., Ltd. Service-based traffic forwarding in virtual networks
CN109565657B (en) * 2016-08-31 2021-02-23 华为技术有限公司 Control device, resource manager and method thereof
US10271186B2 (en) * 2017-01-27 2019-04-23 Huawei Technologies Co., Ltd. Method and apparatus for charging operations in a communication network supporting service sessions for direct end users
US10321285B2 (en) 2017-01-27 2019-06-11 Huawei Technologies Co., Ltd. Method and apparatus for charging operations in a communication network supporting virtual network customers
US10887130B2 (en) 2017-06-15 2021-01-05 At&T Intellectual Property I, L.P. Dynamic intelligent analytics VPN instantiation and/or aggregation employing secured access to the cloud network device
KR102008886B1 (en) * 2017-12-28 2019-08-08 에스케이텔레콤 주식회사 Method for controlling state transition of terminal and estimating load of mobile network using inactive state, and apparatus for mobile communication applying same
US10979972B2 (en) * 2019-07-12 2021-04-13 At&T Intellectual Property I, L.P. Integrated access and backhaul link selection
US11284273B2 (en) * 2020-08-18 2022-03-22 Federated Wireless, Inc. Access point centric connectivity map service
US11917654B2 (en) * 2021-10-26 2024-02-27 Dell Products, Lp System and method for intelligent wireless carrier link management system at user equipment devices

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020095493A1 (en) * 2000-12-05 2002-07-18 Byrnes Philippe C. Automatic traffic and quality of service control system for communications networks
US20020101822A1 (en) * 2000-11-30 2002-08-01 Ayyagari Deepak V. Integrated method for performing scheduling, routing and access control in a computer network
US20050047335A1 (en) * 2003-08-18 2005-03-03 Cheng Mark W. Apparatus, and associated method, for selecting quality of service-related information in a radio communication system
US20080045234A1 (en) * 2001-10-04 2008-02-21 Reed Mark J Machine for providing a dynamic data base of geographic location information for a plurality of wireless devices and process for making same
US20100214917A1 (en) * 2007-11-09 2010-08-26 Huawei Technologies Co., Ltd. Method for admission control, and apparatus and communication system thereof
US20120054347A1 (en) * 2010-08-26 2012-03-01 Futurewei Technologies, Inc. Cross-Stratum Optimization Protocol
US20130058357A1 (en) * 2010-07-06 2013-03-07 Teemu Koponen Distributed network virtualization apparatus and method

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007104245A (en) 2005-10-04 2007-04-19 Ntt Docomo Inc System and method for acquiring position of mobile terminal
US8326969B1 (en) 2006-06-28 2012-12-04 Emc Corporation Method and apparatus for providing scalability in resource management and analysis system- three way split architecture
CN101512973A (en) 2006-08-29 2009-08-19 高通股份有限公司 Concurrent operation in multiple wireless local area networks
JP4688776B2 (en) 2006-10-31 2011-05-25 富士通株式会社 Congestion control method and apparatus in mobile communication network
JP2008187305A (en) 2007-01-29 2008-08-14 Iwatsu Electric Co Ltd Wireless lan packet transmitting method and device
US10075376B2 (en) * 2007-04-18 2018-09-11 Waav Inc. Mobile network operating method
WO2009026281A1 (en) * 2007-08-20 2009-02-26 Research In Motion Limited Inactivity timer in a discontinuous reception configured system
US20100208694A1 (en) * 2007-08-29 2010-08-19 Hisao Kumai Mobile communication system, radio communication method, core network, user equipment, and program
US7852849B2 (en) * 2008-03-04 2010-12-14 Bridgewater Systems Corp. Providing dynamic quality of service for virtual private networks
US20100128645A1 (en) 2008-11-26 2010-05-27 Murata Manufacturing Co., Ltd. System and method for adaptive power conservation based on traffic profiles
US8868029B2 (en) * 2010-01-29 2014-10-21 Alcatel Lucent Method and apparatus for managing mobile resource usage
JP5494045B2 (en) 2010-03-12 2014-05-14 富士通モバイルコミュニケーションズ株式会社 Mobile communication terminal and communication quality display method
JP5414619B2 (en) 2010-05-21 2014-02-12 株式会社日立製作所 Wireless communication system, access point, and gateway for controlling quality improvement of multiple wireless systems
US8874129B2 (en) * 2010-06-10 2014-10-28 Qualcomm Incorporated Pre-fetching information based on gesture and/or location
EP2405629B1 (en) 2010-07-06 2016-09-07 Telefonaktiebolaget LM Ericsson (publ) Method and apparatuses for controlling access to services of a telecommunications system
US20120039175A1 (en) * 2010-08-11 2012-02-16 Alcatel-Lucent Usa Inc. Enabling a distributed policy architecture with extended son (extended self organizing networks)
KR101719509B1 (en) * 2010-10-06 2017-03-24 삼성전자주식회사 Communication method of vehicular access point, vehicular user equipment and macro base station for user in the vehicle
CN102149161A (en) * 2011-01-24 2011-08-10 重庆大学 Hierarchical and regular mesh network routing method
ES2703429T3 (en) * 2011-06-22 2019-03-08 Ericsson Telefon Ab L M Procedures and devices for content distribution control
US8855017B2 (en) * 2011-09-07 2014-10-07 Telefonaktiebolaget Lm Ericsson (Publ) System and method of building an infrastructure for a virtual network
US9300548B2 (en) * 2011-10-14 2016-03-29 Alcatel Lucent Providing dynamic reliability and security in communications environments
MY184333A (en) * 2013-04-15 2021-04-01 Ericsson Telefon Ab L M Apparatus and method for providing power saving during idle to connected mode transitions
US20170374608A1 (en) * 2016-06-28 2017-12-28 Huawei Technologies Co., Ltd. Method and system for network access discovery

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US20020101822A1 (en) * 2000-11-30 2002-08-01 Ayyagari Deepak V. Integrated method for performing scheduling, routing and access control in a computer network
US20020095493A1 (en) * 2000-12-05 2002-07-18 Byrnes Philippe C. Automatic traffic and quality of service control system for communications networks
US20080045234A1 (en) * 2001-10-04 2008-02-21 Reed Mark J Machine for providing a dynamic data base of geographic location information for a plurality of wireless devices and process for making same
US20050047335A1 (en) * 2003-08-18 2005-03-03 Cheng Mark W. Apparatus, and associated method, for selecting quality of service-related information in a radio communication system
US20100214917A1 (en) * 2007-11-09 2010-08-26 Huawei Technologies Co., Ltd. Method for admission control, and apparatus and communication system thereof
US20130058357A1 (en) * 2010-07-06 2013-03-07 Teemu Koponen Distributed network virtualization apparatus and method
US20120054347A1 (en) * 2010-08-26 2012-03-01 Futurewei Technologies, Inc. Cross-Stratum Optimization Protocol

Non-Patent Citations (1)

Title
See also references of EP2997780A4 *

Cited By (9)

Publication number Priority date Publication date Assignee Title
JP2018521564A (en) * 2015-06-01 2018-08-02 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Method and apparatus for customer service management for wireless communication networks
CN108605032A (en) * 2015-06-01 2018-09-28 华为技术有限公司 Method and apparatus for carrying out customer service management for cordless communication network
US10200543B2 (en) 2015-06-01 2019-02-05 Huawei Technologies Co., Ltd. Method and apparatus for customer service management for a wireless communication network
US10349240B2 (en) 2015-06-01 2019-07-09 Huawei Technologies Co., Ltd. Method and apparatus for dynamically controlling customer traffic in a network under demand-based charging
US10374965B2 (en) 2015-06-01 2019-08-06 Huawei Technologies Co., Ltd. Systems and methods for managing network traffic with a network operator
CN108605032B (en) * 2015-06-01 2020-07-14 华为技术有限公司 Method and apparatus for customer service management for wireless communication networks
US10721362B2 (en) 2015-06-01 2020-07-21 Huawei Technologies Co., Ltd. Method and apparatus for customer service management for a wireless communication network
US11184289B2 (en) 2015-06-01 2021-11-23 Huawei Technologies Co., Ltd. Systems and methods for managing network traffic with a network operator
US11240644B2 (en) 2015-06-01 2022-02-01 Huawei Technologies Co., Ltd. Method and apparatus for dynamically controlling customer traffic in a network under demand-based charging

Also Published As

Publication number Publication date
KR20160014726A (en) 2016-02-11
US20170079053A1 (en) 2017-03-16
US9532266B2 (en) 2016-12-27
KR101748228B1 (en) 2017-06-27
JP2016521099A (en) 2016-07-14
US10506612B2 (en) 2019-12-10
EP2997780B1 (en) 2019-03-27
KR20170065690A (en) 2017-06-13
KR101876364B1 (en) 2018-08-02
JP6475306B2 (en) 2019-02-27
EP2997780A4 (en) 2016-05-25
JP6240318B2 (en) 2017-11-29
US20140362700A1 (en) 2014-12-11
EP2997780A1 (en) 2016-03-23
CN105850199A (en) 2016-08-10
CN105850199B (en) 2019-05-28
JP2018057007A (en) 2018-04-05

Similar Documents

Publication Publication Date Title
US10506612B2 (en) Methods and device for authorization between a customer device and connectivity manager
Wang et al. Integration of networking, caching, and computing in wireless systems: A survey, some research issues, and challenges
US11647422B2 (en) Adaptable radio access network
US10581932B2 (en) Network-based dynamic data management
US10492049B2 (en) Core network selection function in a radio access network for machine-to-machine (M2M) communications
EP3932017A1 (en) 5g network edge and core service dimensioning
Bekri et al. Internet of things management based on software defined networking: a survey
EP4161132A1 (en) Analytics-based policy generation
US12120200B2 (en) Flexible data analytics processing and exposure in 5GC
Giannone et al. Orchestrating heterogeneous MEC-based applications for connected vehicles
Khurshid et al. Big data assisted CRAN enabled 5G SON architecture
JP5575271B2 (en) Method for controlling resource usage within a communication system
He et al. Cost-efficient heterogeneous data transmission in software defined vehicular networks
US20230217298A1 (en) Quality Management for Wireless Devices
Habibi et al. Analyzing SDN-based vehicular network framework in 5G services: fog and Mobile edge computing
WO2024109032A1 (en) Data transmission optimization method and apparatus, computer-readable medium, and electronic device
WO2024109101A1 (en) Data transmission optimization method and apparatus, computer readable medium, and electronic device
US20240129876A1 (en) Systems and methods for analytics and information sharing between a radio access network and a core network
US20230354107A1 (en) Adjustment of network handover processing based on service time requirements
KR20240099444A (en) Method, device and readable storage medium for subscribing to analysis of model transmission status in a network
WO2024036268A1 (en) Support of data transmission measurement action guarantee for data delivery service
CN118575492A (en) Service specific authorization removal in a 5G core network (5 GC)

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 14807236

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016518002

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE WIPO information: entry into national phase

Ref document number: 2014807236

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157037268

Country of ref document: KR

Kind code of ref document: A