US10120801B2 - Object caching for mobile data communication with mobility management - Google Patents

Object caching for mobile data communication with mobility management

Info

Publication number
US10120801B2
Authority
US
United States
Prior art keywords
base station
object cache
cache server
server
user equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/378,118
Other versions
US20150032974A1 (en)
Inventor
Oliver M. Deakin
Victor S. Moore
Robert B. Nicholson
Colin J. Thorne
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GlobalFoundries US Inc
Original Assignee
GlobalFoundries Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GlobalFoundries Inc filed Critical GlobalFoundries Inc
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOORE, VICTOR S., THORNE, COLIN J., DEAKIN, OLIVER M., NICHOLSON, ROBERT B.
Publication of US20150032974A1 publication Critical patent/US20150032974A1/en
Assigned to GLOBALFOUNDRIES U.S. 2 LLC COMPANY reassignment GLOBALFOUNDRIES U.S. 2 LLC COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Assigned to GLOBALFOUNDRIES U.S. 2 LLC reassignment GLOBALFOUNDRIES U.S. 2 LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Assigned to GLOBALFOUNDRIES INC. reassignment GLOBALFOUNDRIES INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GLOBALFOUNDRIES U.S. 2 LLC, GLOBALFOUNDRIES U.S. INC.
Assigned to GLOBALFOUNDRIES U.S.2 LLC reassignment GLOBALFOUNDRIES U.S.2 LLC CORRECTIVE ASSIGNMENT TO CORRECT THE THE RECEIVING PARTY DATA (NAME OF ASSIGNEE) NEEDS TO BE CORRECTED. ASSIGNEE SHOULD READ GLOBALFOUNDRIES U.S. 2 LLC PREVIOUSLY RECORDED ON REEL 036277 FRAME 0160. ASSIGNOR(S) HEREBY CONFIRMS THE GLOBALFOUNDRIES U.S. 2 LLC COMPANY. Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Publication of US10120801B2 publication Critical patent/US10120801B2/en
Application granted granted Critical
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION SECURITY AGREEMENT Assignors: GLOBALFOUNDRIES INC.
Assigned to GLOBALFOUNDRIES U.S. INC. reassignment GLOBALFOUNDRIES U.S. INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GLOBALFOUNDRIES INC.
Assigned to GLOBALFOUNDRIES INC. reassignment GLOBALFOUNDRIES INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION
Assigned to GLOBALFOUNDRIES U.S. INC. reassignment GLOBALFOUNDRIES U.S. INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION
Legal status: Active
Expiration: Adjusted

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0813 Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815 Cache consistency protocols
    • G06F12/0831 Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • H04L67/2842
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/2876 Pairs of inter-processing entities at each side of the network, e.g. split proxies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/288 Distributed intermediate devices, i.e. intermediate devices for interaction with other intermediate devices on the same level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/2885 Hierarchically arranged intermediate devices, e.g. for hierarchical caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W36/00 Hand-off or reselection arrangements
    • H04W36/0005 Control or signalling for completing the hand-off
    • H04W36/0011 Control or signalling for completing the hand-off for data sessions of end-to-end connection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W36/00 Hand-off or reselection arrangements
    • H04W36/08 Reselecting an access point
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/17 Embedded application
    • G06F2212/171 Portable consumer electronics, e.g. mobile phone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/62 Details of cache specific to multiprocessor cache arrangements
    • G06F2212/621 Coherency control relating to peripheral accessing, e.g. from DMA or I/O device
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/08 Access point devices

Definitions

  • This invention relates to the field of optimisation of mobile data communication with mobility management.
  • In particular, the invention relates to Quality of Experience optimisation using object caching for mobile data communication with mobility management.
  • a wireless mobile data terminal communicates with a server on a connected fixed network.
  • a mobile data terminal may be any device that can send data over a wireless network where the network provides mobility management.
  • Examples of networks include: the GPRS (General packet radio service) (2G) network; the WCDMA (Wideband Code Division Multiple Access) (3G) network; or the LTE (Long Term Evolution) or WiMAX (Worldwide Interoperability for Microwave Access) (4G) network.
  • The background and description of the invention are described in terms of the 3rd Generation Mobile Phone Network, UMTS (Universal Mobile Telecommunications System)/WCDMA.
  • Referring to FIG. 1, a schematic diagram shows the UMTS architecture 100, which is standardised by the 3rd Generation Partnership Project (3GPP).
  • 3GPP 3rd Generation Partnership Project
  • The wireless device (cell phone, 3G dongle for a laptop, tablet device, etc.) is known in 3GPP terminology as a User Equipment (UE) 101. It connects wirelessly 110 to the base station, which is labelled Base Station (BS) 102 and is known as a Node B in 3GPP terminology. Around 100 Node Bs are connected over microwave or optical fibre 120 to a Radio Network Controller (RNC) 103, which is connected back to a Serving GPRS Support Node (SGSN) 104 (which supports several RNCs) and then a Gateway GPRS Support Node (GGSN) 105. Finally the GGSN is connected back to the operators' service network (OSN) 106, which connects to the Internet 107 at a peering point.
  • RNC Radio Network Controller
  • the protocols between the base station back to the GGSN are various 3GPP specific protocols over which the IP traffic from the UE is tunnelled.
  • a GPRS tunnelling protocol (GTP) 130 is used between the RNC 103 and the GGSN 105 .
  • GTP GPRS tunnelling protocol
  • Between the GGSN 105, the OSN 106 and the Internet 107, standard Internet Protocol (IP) 140 is used.
  • IP Internet Protocol
  • a key problem with communication via mobile networks is the rapid increase of data traffic.
  • the density of mobile computing platforms is increasing at an exponential rate.
  • Mobile computing platforms include traditional platforms such as phones, tablets and mobile broadband enabled laptops but increasingly also mobile data enabled devices, such as GPS systems, cars, even mobile medical equipment.
  • MNOs Mobile Network Operators
  • the time taken to load a web page on a mobile device is typically much longer than to load the same page from a fixed connection. In part this is due to limited bandwidth and congestion in the network as described above but even if these factors are ignored, the round trip time over a mobile network is much longer than on a fixed link.
  • Some of this increased round-trip delay time (RTT) is related to the radio interface from the UE to the base station and some is related to the connection back from the base station over microwave to the core network and the core network itself.
  • Referring to FIGS. 2A to 2C, a series of schematic block diagrams illustrates a network architecture 200 with mobility management as a user equipment (UE) 201 moves at the edge of the network.
  • the figures show four base stations 211 - 214 named Node Bs.
  • Sub-sets of base stations 211 - 212 , 213 - 214 communicate with individual RNCs 221 , 222 .
  • This communication is referred to as a backhaul link 231 , 232 between the base stations and the core of the telephone company's network.
  • the RNCs 221 , 222 communicate with a SGSN 241 which uses a GGSN 251 which connects to the Internet 260 which includes multiple servers, such as the shown server 261 .
  • the server 261 has a TCP (Transmission Control Protocol) socket 262 which communicates with a TCP socket 202 at the UE 201 when a user wishes to access data from the server 261 .
  • TCP Transmission Control Protocol
  • In FIGS. 2A to 2C, the data transfer is shown as solid straight arrows, and the signalling control is shown as curved hashed arrows.
  • the UMTS system makes a tunnel 270 from the GGSN 251 to the user equipment (UE) 201 .
  • the UE 201 is communicating with base station Node B 1 211 as the user is closest to this base station.
  • the IP tunnel 270 is shown for user traffic.
  • The IP tunnel 270 is shown for illustration purposes in FIGS. 2A to 2C and in practice passes through the interim components, such as the RNC 221 and the SGSN 241 in FIG. 2A.
  • As the UE 201 begins to move from one cell to the next, the RNC 221 detects movement of the UE 201, starts a mobility event and works with the GSNs 241, 251 (GPRS Support Nodes) to move the tunnel 270 as the UE 201 moves from base station 211 to base station 212.
  • GSNs 241, 251 GPRS Support Nodes
  • FIG. 2B shows the UE 201 moved to base station Node B 2 212 and the tunnel 270 moved correspondingly.
  • FIG. 2C shows the UE 201 moved to base station Node B 3 213, causing inter-RNC mobility as the UE 201 moves from a base station served by the first RNC 1 221 to a base station served by the second RNC 2 222, with the tunnel 270 moved accordingly.
  • the key point is that the UMTS system maintains the integrity of the tunnel 270 across the mobility event.
  • the TCP connections flowing through the tunnel 270 are not broken. It is possible that one or more IP packets may be dropped during the mobility event but TCP is designed to operate over lossy links and so this packet drop can be easily recovered.
  • the tunnel 270 is handed off seamlessly from one RNC 221 to another 222 .
  • the problem with simply adding a caching forward http proxy at the base station is that it impacts on mobility management.
  • the UE has a TCP connection which is terminated in the base station. If the UE moves to another base station, then it is extremely difficult to maintain a TCP connection which has state in a part of the network through which the data traffic is no longer travelling.
  • There are some solutions in the literature which talk of forwarding traffic for these connections from the base station which the UE is using back to the base station where the TCP connection is terminated but this scheme has a number of drawbacks.
  • a method for object caching with mobility management for mobile data communication including intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications, implementing object caching at the base station using snooped data communications, implementing object caching at an object cache server in the network, where the object cache server proxies communications to the content server from the user equipment, and maintaining synchrony between an object cache at the base station and an object cache at the object cache server.
  • a system for object caching with mobility management for mobile data communication including a processor, a network containing one or more base stations, where the network supports mobility management of data transfer to and from a user equipment, an object cache component at a base station for intercepting and snooping data communications between a user equipment and a content server without terminating communications, an object cache server in the network, where the object cache server proxies communications to the content server from the user equipment, and synchronising components at the base station and object cache server for maintaining synchrony between an object cache at the base station and an object cache at the object cache server.
  • a computer program stored on a computer readable medium and loadable into the internal memory of a digital computer, comprising software code portions, when the program is run on a computer, for performing the method of the first aspect of the present invention.
  • FIG. 1 is a schematic diagram showing a mobile network architecture, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented;
  • FIGS. 2A to 2C are schematic block diagrams showing mobile management across a mobile network, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented;
  • FIG. 3 is a block diagram of a system, in accordance with a preferred embodiment of the present invention.
  • FIG. 4 is a block diagram of a computer system in which a preferred embodiment of the present invention may be implemented
  • FIG. 5 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
  • FIG. 6 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
  • FIG. 7 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
  • FIG. 8 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
  • FIG. 9 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
  • a solution is described for reducing the latency seen when a wireless mobile data terminal (the User Equipment (UE)) fetches data objects from a server.
  • UE User Equipment
  • Reduced latency is a key goal of mobile broadband providers because it has a dramatic effect on the Quality of Service perceived by the end users. End users desire instant load of information but the reality of the current network is that it can take many 100s of milliseconds or even seconds to load a web page, even over the most recent technology.
  • the mobile data terminal may be any device that can send data over a wireless network where the network provides mobility management.
  • Example networks include the GPRS (2G) network, the WCDMA (3G) network, or the LTE or WiMAX (4G) network. However, for the purposes of this description the 3G UMTS/WCDMA network will be used.
  • the described solution presents a way to provide the same savings in latency that may be achieved by placing an object cache at the base station whilst also providing mobility. That is to say that if a UE moves to a new location whilst it is being served content from a cache in the base station, the UE continues to receive content without a break. This capability is preserved even in the event that the UE has moved to a base station which has not been modified for this solution. Additionally, this technique can be combined with byte caching which improves its ability to cache effectively.
  • An object caching server is inserted into the network at the reference point called the “Gi”.
  • This server is referred to as an "OCGi" (Object Cache Gi).
  • OCGi Object Cache Gi
  • The Gi is much like a conventional WAN; it is the place where the connection is made to the peering point with the Internet.
  • This OCGi component contains an HTTP forward caching proxy which has some additional functionality that will be described further.
  • The breakout and object caching component at a base station is referred to as an "OCNB" (Object Cache Node B).
  • The OCNB contains a cache, but this operates slightly differently from a traditional forward caching proxy, as will be described.
  • Referring to FIG. 3, a block diagram shows an embodiment of the described system 300.
  • A user equipment (UE) 301 moves at the edge of the network between multiple base stations 311-312 (only two are shown in this example), which are referred to as Node Bs in 3G terminology.
  • the UE 301 has a transfer protocol socket 302 for data transmission to and from a socket 362 of a server 361 on the Internet 360 .
  • the base stations 311 - 312 communicate with an RNC 321 .
  • This communication is referred to as a backhaul link 331 between the base stations and the core of the telephone company's network.
  • the RNC 321 communicates with a SGSN 341 which in turn communicates with a GGSN 351 .
  • The described system includes an object cache server 380, referred to herein as an object cache Gi (OCGi), at the point where the network connects to the peering point with the Internet 360.
  • The object cache server 380 operates as an HTTP forward caching proxy with additional functionality.
  • The object cache server 380 includes transfer protocol sockets 381, 382, and an object cache structure 383.
  • the object cache server 380 may also include a synchronisation component 384 for synchronising its object cache structure 383 with that of the object cache structure 392 of the object cache component 390 of the base station 312 .
  • the object cache server 380 may also include a detecting component 385 for detecting movement of the user equipment from a base station cell and taking over serving an object of a request.
  • the Internet 360 provides communication with multiple content servers, such as the shown content server 361 .
  • the content server 361 has a transfer protocol socket 362 .
  • one of the base stations 312 includes an object cache component 390 referred to as an object cache Node B (OCNB) which includes breakout and object cache functionality to optimise data transfer.
  • the object cache component 390 includes an object cache structure 392 .
  • An embodiment of the object cache component 390 at the base station 312 includes a breakout component 391 for breaking out traffic which includes a fake socket 397 which mimics the behaviour and state of the real socket in the object cache server 380 .
  • the object cache component 390 also includes a snooping component 393 for snooping on traffic to and from the UE 301 . It also includes a cache look-up component 398 for determining if a snooped request or response is cached in the object cache 392 of the base station object cache component 390 . It also includes a mimicking component 394 for generating responses which mimic responses from the object cache server 380 .
  • the object cache component 390 may also include a synchronisation component 395 for synchronising its object cache 392 with that of the object cache 383 of the object cache server 380 . It may further include a notification component 396 for sending notification to the object cache server 380 of a cache hit at the object cache component 390 of the base station 312 .
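To make this division of responsibility concrete, the following is a minimal Python sketch of the roles just listed. It is an illustration only, not the patent's implementation; all class, field and method names (CachedObject, FakeSocket, ObjectCacheGi, ObjectCacheNodeB) are invented for the sketch, and the comments map fields back to the reference numerals used above.

```python
# Hypothetical Python sketch of the FIG. 3 roles; all names are illustrative.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class CachedObject:
    """An HTTP response body plus the exact headers the OCGi would emit."""
    headers: bytes
    body: bytes


@dataclass
class FakeSocket:
    """Mirrors the TCP state of the real socket held at the OCGi (cf. 397)."""
    snd_nxt: int = 0   # next sequence number the OCGi's real socket would use
    rcv_nxt: int = 0   # next sequence number expected from the UE


class ObjectCacheGi:
    """OCGi (380): HTTP forward caching proxy at the Gi reference point."""

    def __init__(self) -> None:
        self.cache: Dict[str, CachedObject] = {}   # object cache structure (383)

    def lookup(self, request_key: str) -> Optional[CachedObject]:
        return self.cache.get(request_key)


class ObjectCacheNodeB:
    """OCNB (390): snoops traffic at the base station without terminating TCP."""

    def __init__(self) -> None:
        self.cache: Dict[str, CachedObject] = {}   # object cache structure (392)
        self.fake_socket = FakeSocket()            # breakout component's fake socket (397)

    def lookup_snooped_request(self, request_key: str) -> Optional[CachedObject]:
        """Cache look-up (398): a hit implies the OCGi also holds the object."""
        return self.cache.get(request_key)

    def snoop_response(self, request_key: str, obj: CachedObject, cacheable: bool) -> None:
        """Snooping component (393): store cacheable responses as they flow past."""
        if cacheable:
            self.cache[request_key] = obj
```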
  • an exemplary system for implementing aspects of the invention includes a data processing system 400 suitable for storing and/or executing program code including at least one processor 401 coupled directly or indirectly to memory elements through a bus system 403 .
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • the memory elements may include system memory 402 in the form of read only memory (ROM) 404 and random access memory (RAM) 405 .
  • ROM read only memory
  • RAM random access memory
  • a basic input/output system (BIOS) 406 may be stored in ROM 404 .
  • System software 407 may be stored in RAM 405 including operating system software 408 .
  • Software applications 410 may also be stored in RAM 405 .
  • the system 400 may also include a primary storage means 411 such as a magnetic hard disk drive and secondary storage means 412 such as a magnetic disc drive and an optical disc drive.
  • the drives and their associated computer-readable media provide non-volatile storage of computer-executable instructions, data structures, program modules and other data for the system 400 .
  • Software applications may be stored on the primary and secondary storage means 411 , 412 as well as the system memory 402 .
  • the computing system 400 may operate in a networked environment using logical connections to one or more remote computers via a network adapter 416 .
  • Input/output devices 413 can be coupled to the system either directly or through intervening I/O controllers.
  • a user may enter commands and information into the system 400 through input devices such as a keyboard, pointing device, or other input devices (for example, microphone, joy stick, game pad, satellite dish, scanner, or the like).
  • Output devices may include speakers, printers, etc.
  • a display device 414 is also connected to system bus 403 via an interface, such as video adapter 415 .
  • Referring to FIG. 5, a flow diagram 500 shows an embodiment of the described method.
  • the method includes intercepting and snooping 501 data communications at a base station between a user equipment and a content server on a network.
  • Object caching is implemented 502 at the base station using snooped communications to optimize data transfer.
  • Object caching is also implemented 503 at an object cache server provided in the network.
  • the object cache server proxies communications to the content server from the user equipment. Synchronicity is maintained 504 between the object cache at the base station and the object cache at the server.
  • An example scenario is now described in order to illustrate the described solution in more detail.
  • The example scenario is one in which the UE fetches a web object using the HTTP protocol in a UMTS network.
  • a UE establishes 601 a tunnel with a network.
  • the UE and the UMTS network may set up a radio bearer and tunnel between the UE and the Gi. Note that this operation often happens in advance with the same tunnel being used over and over again for different requests.
  • the UE may need to make 602 a new TCP connection (over the tunnel) to the HTTP port of a content server that it wishes to fetch a web object from.
  • This connection is proxied 603 in an OCGi.
  • This TCP connection involves one round trip delay on the radio network for the SYN-SYN-ACK TCP set-up phase; however, it should be noted that the HTTP protocol typically holds a single TCP connection open across many requests so it is assumed that the TCP connection will already exist in many cases and therefore there is no per request round trip.
  • the UE may make 604 an HTTP GET request for a web object over its TCP connection in the normal way.
  • the tunnel carrying the TCP connection over which the HTTP GET request flows may be redirected 605 into an OCNB appliance at the base station (Node B) by a breakout function as known in the prior art.
  • the TCP connection is not terminated or proxied, it is simply snooped 606 .
  • the HTTP GET request flows on through the tunnel to the core where the TCP connection is terminated in the OCGi.
  • the OCNB snoops 701 a request from the UE.
  • the OCNB uses the snooped HTTP request to perform 702 a lookup in the OCNB local cache. What happens next depends on whether there is a cache hit or a cache miss 703 .
  • the OCNB continues to monitor 709 the TCP connection and snoops the response as it flows through the OCNB unchanged from the OCGi to the UE.
  • the OCNB snoops the contents of the response and examines it to determine if it is cacheable. If it is cacheable it places 710 the response into its cache as well as allowing it to flow unchanged to the UE.
  • The determination of whether the response is cacheable includes logic to ensure that nothing is cached at the OCNB that has not also been cached at the OCGi.
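One simple way to satisfy the rule that nothing may be cached at the OCNB unless it is also cached at the OCGi is for both components to apply an identical, deterministic cacheability test to the snooped response. The sketch below assumes simplified HTTP/1.1 header checks; a real proxy would apply the full RFC 7234 rules, so this is an illustration rather than the patent's logic.

```python
# Illustrative only: a deterministic cacheability test applied identically at
# the OCNB and the OCGi, so the OCNB never caches anything the OCGi would not.
from typing import Mapping


def is_cacheable(status: int, headers: Mapping[str, str]) -> bool:
    """Simplified HTTP/1.1 heuristics; a real proxy applies RFC 7234 in full."""
    if status != 200:
        return False
    cache_control = headers.get("Cache-Control", "").lower()
    if "no-store" in cache_control or "private" in cache_control:
        return False
    # Require an explicit freshness lifetime so both caches age the object alike.
    return "max-age=" in cache_control or "Expires" in headers


# Both sides evaluate the same snooped response headers and reach the same answer.
response_headers = {"Cache-Control": "public, max-age=300"}
assert is_cacheable(200, response_headers)
```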
  • a cache hit at the OCNB means that when the OCNB snooped the request it was able to determine that the OCNB has the data in its local cache necessary to serve the requested object.
  • the scheme guarantees also that the exact same object is present in the OCGi cache.
  • the requested object is retrieved 704 from the cache.
  • Because the caches are kept in synchrony, the OCNB can know precisely every detail of the HTTP response that the OCGi will send, and is therefore capable of creating the same sequence of bytes, including all headers, that the OCGi will create. Since the OCNB is also monitoring the TCP connection, it also knows the TCP state at the OCGi and can predict the TCP sequence numbers that will be used to send the response. In this way the OCNB can keep its fake socket synchronised with the real socket in the OCGi.
  • the OCNB creates 705 the response as the sequence of TCP packets that it knows the OCGi will create when the OCGi processes the request.
  • The OCNB sends 706 the response to the UE by causing its fake TCP socket to imitate the real TCP socket in the OCGi, inserting into the TCP connection, in the direction towards the UE, the sequence of TCP packets that would be created by the OCGi. Because the fake socket at the OCNB is kept synchronised with the real socket at the OCGi, the packets sent from the fake socket are identical in all regards to those that would be sent from the OCGi.
  • the OCNB can perform this task before the request has reached the OCGi.
  • the UE will begin acknowledging these packets with TCP ACK packets containing the sequence numbers that are being acknowledged.
  • TCP ACK packets are snooped 707 by the OCNB in order to update the state of its fake socket but they are not intercepted. They flow back to the OCGi where they are used to maintain the state of the TCP socket in the OCGi, keeping this synchronised with the data transfer. This is important in case of mobility as shown below.
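The sequence-number bookkeeping described above can be sketched as follows. This is an assumed illustration of the fake-socket idea, not code from the patent: the OCNB predicts the segments the OCGi would emit and advances its view of the connection by snooping the UE's ACKs, which continue on to the OCGi unmodified. The segment size and field names are invented for the sketch.

```python
# Assumed sketch of the fake socket: the OCNB replays the exact byte stream the
# OCGi would send, predicting the TCP sequence numbers, and advances its view
# of the connection by snooping the UE's ACKs (which also flow on to the OCGi).
from typing import List, Tuple

MSS = 1400  # illustrative maximum segment size


class FakeSocket:
    def __init__(self, initial_seq: int) -> None:
        self.snd_nxt = initial_seq   # mirrors the OCGi socket's next sequence number
        self.snd_una = initial_seq   # oldest byte not yet acknowledged by the UE

    def build_segments(self, payload: bytes) -> List[Tuple[int, bytes]]:
        """Produce (sequence number, data) pairs identical to the OCGi's output."""
        segments = []
        for offset in range(0, len(payload), MSS):
            chunk = payload[offset:offset + MSS]
            segments.append((self.snd_nxt, chunk))
            self.snd_nxt += len(chunk)
        return segments

    def snoop_ack(self, ack_number: int) -> None:
        """ACKs are observed, not intercepted; the OCGi sees them too."""
        if ack_number > self.snd_una:
            self.snd_una = ack_number


sock = FakeSocket(initial_seq=100_000)
segments = sock.build_segments(b"x" * 3000)              # 1400 + 1400 + 200 bytes
sock.snoop_ack(segments[-1][0] + len(segments[-1][1]))   # UE acknowledges everything
assert sock.snd_una == sock.snd_nxt == 103_000
```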
  • the OCNB sends 708 a notification to the OCGi to indicate that this is a cache hit.
  • the notification may be sent in multiple ways. Some examples are as follows:
  • the OCNB can send an out of band message, perhaps over UDP to a well known port and IP address and this is intercepted at the OCGi.
  • the advantage of this is that there is no need to delay the HTTP GET request.
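As a hedged illustration of that out-of-band option, the sketch below sends a small UDP datagram to a well-known port on the OCGi. The port number and message layout are invented here and are not specified by the patent.

```python
# Invented illustration of the out-of-band option: a small UDP datagram sent to
# a well-known port on the OCGi. Port number and message layout are assumptions.
import json
import socket

OCGI_NOTIFY_PORT = 3939  # hypothetical well-known port


def notify_cache_hit(ocgi_address: str, tcp_flow_id: str, request_key: str) -> None:
    message = json.dumps({"event": "ocnb_cache_hit",
                          "flow": tcp_flow_id,
                          "request": request_key}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
        udp.sendto(message, (ocgi_address, OCGI_NOTIFY_PORT))
```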
  • the OCGi forward caching proxy receives 801 the request.
  • the OCGi proxies 802 the connection and begins to process the request in the normal way for a HTTP forward caching proxy.
  • the object is fetched 807 from the origin content server and may be later added 808 to the cache at the OCGi and OCNB. In this case, it can be guaranteed that the request is also a cache miss at the OCNB.
  • the request is processed in the normal way that a forward caching proxy processes a cache miss.
  • the OCGi may receive 805 an indication from the OCNB that this request is a cache hit in the OCNB. If this is received, rather than serving 809 the content, the OCGi simply maintains the TCP state machine which mirrors the state of the connection. It does not send packets but it receives the ACKs for the packets that the OCNB sends on its behalf.
  • If the OCGi does not receive an indication from an OCNB that there has been an OCNB cache hit, then the OCGi starts serving 806 the object. There are three possible reasons for this, as follows:
  • It is possible that the OCGi will begin serving packets before it receives the indication of a cache hit in the OCNB.
  • the OCGi may generate some packets which duplicate the packets generated at the OCNB. Both of these will flow to the UE but this is not a problem.
  • TCP allows for duplicate packets so long as they are the same. Eventually the OCGi will receive notification and will stop sending packets.
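The race between the OCGi starting to serve and the cache-hit notification arriving can be pictured as follows. This is a simplified sketch in which the notification is modelled as an event flag, an assumption made only for illustration.

```python
# Simplified sketch of the race at the OCGi: it serves the object normally until
# a cache-hit notification arrives from the OCNB (modelled here as an event
# flag). Segments sent before the notification merely duplicate what the OCNB
# injected, and identical duplicates are harmless to TCP.
import threading
from typing import Callable, List, Tuple


def serve_object(segments: List[Tuple[int, bytes]],
                 send: Callable[[int, bytes], None],
                 ocnb_hit: threading.Event) -> int:
    """Returns how many segments the OCGi actually put on the wire."""
    sent = 0
    for seq, data in segments:
        if ocnb_hit.is_set():        # OCNB is already serving this response
            break
        send(seq, data)              # may duplicate an OCNB segment: identical bytes
        sent += 1
    return sent


hit = threading.Event()
count = serve_object([(1000, b"abc"), (1003, b"def")], lambda seq, data: None, hit)
assert count == 2                    # no notification arrived, so the OCGi served it all
```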
  • Referring to FIG. 9, an example embodiment of an aspect of the method 900 is described in the form of a mobility event.
  • the UE may move 901 to a new base station.
  • the OCGi may detect 902 that the UE has moved to a new base station. There are several ways this can happen. Here are three examples:
  • the OCGi is informed by the network.
  • the OCGi maintains a heartbeat with the OCNB.
  • the OCGi sends UDP datagrams to the UE which are removed by the OCNB. If the OCNB is not present then the UE will receive these and generate an ICMP Port Unreachable error. The OCGi notices that it has not received acknowledgement from the UE for packets that were sent and retransmits assuming no OCNB.
  • When the OCGi detects that the UE has moved to a new base station, the OCGi takes over 903 serving the object. It can do this because it has the up-to-date state of the TCP connection. It simply needs to start sending packets at the point where the OCNB stopped sending them.
  • Alternatively, the OCGi may communicate out of band with the OCNB at the new base station to arrange for the new OCNB to take over 904 the serving of the object. To do this, it will be necessary to communicate the details of the original request together with the current offset and TCP information.
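A minimal sketch of the takeover step, under the assumption that the OCGi retains the full response and the last acknowledged sequence number learned from the snooped ACKs, might look like this; the function name and parameters are illustrative, not taken from the patent.

```python
# Illustrative takeover at the OCGi: resume sending from the first byte the UE
# has not acknowledged, which the OCGi knows because the UE's ACKs always
# flowed back to its real socket.
from typing import Callable


def take_over_serving(full_response: bytes,
                      initial_seq: int,
                      last_ack: int,
                      send: Callable[[int, bytes], None],
                      mss: int = 1400) -> None:
    resume_offset = last_ack - initial_seq      # bytes already delivered by the OCNB
    for offset in range(resume_offset, len(full_response), mss):
        send(initial_seq + offset, full_response[offset:offset + mss])


sent = []
take_over_serving(b"x" * 4000, initial_seq=100_000, last_ack=101_400,
                  send=lambda seq, data: sent.append((seq, len(data))))
assert sent[0] == (101_400, 1400)               # carries on exactly where the OCNB stopped
```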
  • A cache at a base station object cache component must be consistent with the cache at the object cache server (the OCGi). This consistency is a guarantee that if an object is cached at the OCNB, the exact same object is cached at the OCGi, and that the OCGi does not need to go back to the origin content server (i.e., that the cached object is not stale).
  • If the OCGi has a different version of the object, or if it needs to go back to the origin content server to fetch the object, then there is a possibility that the sequence of TCP packets generated by the OCNB may be different from that which would be generated by the OCGi.
  • Each OCNB cache may be of a fixed size.
  • the cache at the OCGi may be equal to the sum of all the OCNB cache sizes.
  • the OCGi may partition its cache and separately manage the cached objects cached for each OCNB present in the system.
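The sizing rule can be illustrated with a small partitioned LRU cache, one fixed-size partition per OCNB, so that the OCGi's total capacity is the sum of the OCNB capacities. The capacities and the LRU eviction policy below are assumptions made for the sketch rather than values from the patent.

```python
# Illustrative sketch of the sizing rule: the OCGi keeps one fixed-size LRU
# partition per OCNB, so its total capacity is the sum of the OCNB capacities.
from collections import OrderedDict
from typing import Dict


class PartitionedCache:
    def __init__(self, ocnb_capacities: Dict[str, int]) -> None:
        self.capacity = dict(ocnb_capacities)               # bytes allowed per OCNB
        self.partitions: Dict[str, OrderedDict] = {n: OrderedDict() for n in ocnb_capacities}

    def put(self, ocnb_id: str, key: str, size: int, obj: bytes) -> None:
        partition = self.partitions[ocnb_id]
        partition[key] = (size, obj)
        partition.move_to_end(key)
        while sum(s for s, _ in partition.values()) > self.capacity[ocnb_id]:
            partition.popitem(last=False)                   # evict least recently used


cache = PartitionedCache({"ocnb-1": 10_000, "ocnb-2": 10_000})
cache.put("ocnb-1", "http://example.com/a.js", 6_000, b"...")
cache.put("ocnb-1", "http://example.com/b.js", 6_000, b"...")   # evicts a.js
assert "http://example.com/a.js" not in cache.partitions["ocnb-1"]
```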
  • the caching logic and parameters may be the same in the OCNB and OCGi for the following:
  • the described method and system may be combined with a byte caching process between a base station and the Gi.
  • Byte caching may be implemented between a central interception server in the Gi and a set of interception functions in a subset of the base stations.
  • the implementation intercepts but does not terminate transfer protocol connections. It optimises transfer protocol connections when they flow through a base station which has the optimisation function. If the UE moves to another base station which has the optimisation function then optimisation continues. If the UE moves to a base station that does not have the optimisation function then the transfer protocol connection is not affected but is not optimised.
  • a byte caching server may be inserted into the UMTS network at the reference point called the “Gi”.
  • This server may be referred to as a “BCGi” (Byte Cache Gi) and may be combined with the described object cache server (the OCGi).
  • This BCGi component operates as a conventional transparent TCP proxy but has additional byte caching behaviour.
  • a byte cache component (referred to as a “BCNB” (Byte Cache Node B)) may be provided at one or more of the base stations which may be combined with the described base station object cache component (the OCNB).
  • the BCNB function operates as a “bump in the wire”. It is not a proxy.
  • the transfer protocol connection between the UE and the core is not terminated but it is sometimes manipulated by the BCNB as if it were terminating it.
  • a UE may establish a tunnel with the network in the normal way. It makes a TCP connection to a port at a content server it wishes to receive data from. This TCP connection may be transparently proxied by the BCGi. The response from the server port may flow back through the proxy and may be propagated back to the UE.
  • The BCGi does not alter the TCP stream at all but does begin examining the data, calculating Rabin fingerprints and storing away chunks of the file in the byte cache, keyed on their SHA-1 hash.
  • The byte caching process is not fully described here because it is well described in the prior art references. Suffice it to say that the BCGi starts to populate a standard byte cache structure but does nothing more until the UE moves to a base station with a BCNB function.
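The BCGi side of that process can be sketched as follows. Note the simplification: a trivial rolling condition stands in for true Rabin fingerprinting, which is an assumption of this sketch; chunks are keyed on their SHA-1 hash, as the description states, targeting boundaries roughly every 8 KiB.

```python
# Sketch of the BCGi side of byte caching. A trivial rolling condition stands
# in for real Rabin fingerprinting (an assumption of this sketch); chunks are
# keyed on their SHA-1 hash.
import hashlib
from typing import Dict, List

AVG_CHUNK_BITS = 13  # boundary expected roughly every 8 KiB


def chunk_boundaries(data: bytes) -> List[int]:
    """Content-defined boundaries; a stand-in for Rabin fingerprinting."""
    boundaries, rolling = [], 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF
        if i > 0 and rolling % (1 << AVG_CHUNK_BITS) == 0:
            boundaries.append(i)
    boundaries.append(len(data))
    return boundaries


def store_chunks(data: bytes, byte_cache: Dict[str, bytes]) -> List[str]:
    """Populate the byte cache and return the token sequence for this data."""
    tokens, start = [], 0
    for end in chunk_boundaries(data):
        chunk = data[start:end]
        token = hashlib.sha1(chunk).hexdigest()
        byte_cache[token] = chunk
        tokens.append(token)
        start = end
    return tokens


byte_cache: Dict[str, bytes] = {}
tokens = store_chunks(b"example response payload " * 1000, byte_cache)
```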
  • a BCNB may signal to the BCGi that it is present in the data path by generating marker IP packets which would not be generated by the UE and which the BCGi intercepts.
  • the BCGi may stop sending normal TCP traffic and may start instead to send “Byte Caching Records” (BCRs) for this traffic to the BCNB.
  • BCRs “Byte Caching Records”
  • These records are sent through the GTP tunnel as if they were to be sent to the UE, but they are not sent inside the TCP connection. Instead they are sent over UDP to a port that the BCNB recognises. There are in fact many ways that these records could be sent; UDP to a special port is one example.
  • the BCRs may contain:
  • a byte caching token which is essentially a key that represents a chunk of the data (typically in the region of 8K in size).
  • a fake TCP socket may be created. This fake socket behaves identically to the fake socket described for the OCNB.
  • the BCNB may receive the byte caching tokens from the BCGi and may use these tokens to reconstitute the original data.
  • the details of reconstituting the original data are related to byte caching.
  • The byte cache at the BCNB looks up the token in its cache to find the corresponding full data and reconstructs the TCP packets. What is critical is that the BCNB does not need to perform the expensive Rabin fingerprinting operations on the data; these can all be done at the BCGi.
  • the BCNB simply accesses the data related to the token and recreates the TCP packets.
  • the BCNB uses the data in the BCRs to reconstitute the TCP frames around the data fetched from the byte cache.
  • the OCNB cache would hold sequences of byte caching tokens keyed on the HTTP request details.
  • the initial lookup of a HTTP request would yield a sequence of byte cache tokens.
  • the byte cache would sit under this and may hold the data for some or all of these tokens. In the case where the data for some of the tokens was not present in the cache then it would be fetched. It will be clear to someone skilled in the art that the process of inserting the data into the TCP stream at the OCNB is identical between a byte caching implementation and an object caching implementation. In either case, the fake socket at the node B issues the sequence of TCP packets that the real TCP socket would have produced.
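The corresponding reconstruction step at the base station can be sketched like this: the object cache yields a token sequence for the HTTP request, each token is resolved against the local byte cache (fetching any missing chunk data, for example from the BCGi), and the rebuilt byte stream is then segmented by the fake socket exactly as a locally cached object would be. The names and the fetch_missing hook are invented for the sketch.

```python
# Sketch of the base station side: each byte-cache token from the object cache
# is resolved against the local byte cache, missing chunks are fetched, and the
# rebuilt stream is handed to the fake socket just as a cached object would be.
from typing import Callable, Dict, List


def reconstitute(tokens: List[str],
                 byte_cache: Dict[str, bytes],
                 fetch_missing: Callable[[str], bytes]) -> bytes:
    """Rebuild the original response body from a sequence of byte-cache tokens."""
    parts = []
    for token in tokens:
        chunk = byte_cache.get(token)
        if chunk is None:                  # data for this token not cached locally
            chunk = fetch_missing(token)   # e.g. request the chunk from the BCGi
            byte_cache[token] = chunk
        parts.append(chunk)
    return b"".join(parts)


local_cache = {"token-a": b"hello "}
body = reconstitute(["token-a", "token-b"], local_cache, lambda t: b"world")
assert body == b"hello world"
```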
  • aspects of the present invention may be embodied as a system, method, computer program product or computer program. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc. or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Method and system are provided for object caching with mobility management for mobile data communication. The method may include: intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications; implementing object caching at the base station using snooped data communications; implementing object caching at an object cache server in the network, wherein the object cache server proxies communications to the content server from the user equipment; and maintaining synchrony between an object cache at the base station and an object cache at the object cache server.

Description

BACKGROUND OF THE INVENTION
This invention relates to the field of optimisation of mobile data communication with mobility management. In particular, the invention relates to Quality of Experience optimisation using object caching for mobile data communication with mobility management.
A wireless mobile data terminal (the User Equipment (UE)) communicates with a server on a connected fixed network. A mobile data terminal may be any device that can send data over a wireless network where the network provides mobility management. Examples of networks include: the GPRS (General packet radio service) (2G) network; the WCDMA (Wideband Code Division Multiple Access) (3G) network; or the LTE (Long Term Evolution) or WiMAX (Worldwide Interoperability for Microwave Access) (4G) network. The background and description of the invention are described in terms of the 3rd Generation Mobile Phone Network, UMTS (Universal Mobile Telecommunications System)/WCDMA.
Referring to FIG. 1, a schematic diagram shows the UMTS architecture 100 which is standardised by the 3rd Generation Partnership Project (3GPP).
The wireless device (cell phone, 3G dongle for a laptop, tablet device, etc.) is known in 3GPP terminology as a User Equipment (UE) 101. It connects wirelessly 110 to the base station, which is labelled Base Station (BS) 102 and is known as a Node B in 3GPP terminology. Around 100 Node Bs are connected over microwave or optical fibre 120 to a Radio Network Controller (RNC) 103, which is connected back to a Serving GPRS Support Node (SGSN) 104 (which supports several RNCs) and then a Gateway GPRS Support Node (GGSN) 105. Finally the GGSN is connected back to the operators' service network (OSN) 106, which connects to the Internet 107 at a peering point.
The protocols between the base station back to the GGSN are various 3GPP specific protocols over which the IP traffic from the UE is tunnelled. Between the RNC 103 and the GGSN 105 a GPRS tunnelling protocol (GTP) 130 is used. Between the GGSN 105, OSN 106 and the Internet 107, standard Internet Protocol (IP) 140 is used. Note that the OSN 106 is termed the “Gi” reference point in the 3GPP terminology.
A key problem with communication via mobile networks is the rapid increase of data traffic. The density of mobile computing platforms is increasing at an exponential rate. Mobile computing platforms include traditional platforms such as phones, tablets and mobile broadband enabled laptops but increasingly also mobile data enabled devices, such as GPS systems, cars, even mobile medical equipment. This exponential increase brings significant new challenges for Mobile Network Operators (MNOs) as data becomes the majority of the content they deliver. Specifically although additional base stations are fairly easy to deploy to increase the available aggregate “air interface” bandwidth, the connections back from the base stations to the RNC, typically implemented as microwave links, are bandwidth constrained. Upgrading them to fibre optic connections is very expensive. Similarly increasing the available bandwidth in the RNC and core network is expensive.
The time taken to load a web page on a mobile device is typically much longer than to load the same page from a fixed connection. In part this is due to limited bandwidth and congestion in the network as described above, but even if these factors are ignored, the round trip time over a mobile network is much longer than on a fixed link. Some of this increased round-trip delay time (RTT) is related to the radio interface from the UE to the base station, and some is related to the connection back from the base station over microwave to the core network and to the core network itself. Modifications being made to the air interface, such as "evolved HSPA (High Speed Packet Access)" (sometimes informally described as 3.5G) and "Long Term Evolution" (informally described as 4G), are improving the air interface latency, but the latency through the microwave backhaul and the core will persist.
Mobile Internet Optimisation.
One technique to address this is to “break out” the data traffic out of the mobile phone protocols and optimise it. Several companies market devices designed to break data traffic out of the network. Examples include the Mobile Data Offload (MDO) product from Stoke, Inc. and the Internet Offload appliance marketed by Continuous Computing. Each of these examples breaks traffic out of the 3GPP protocols at the RNC. Similar technology is emerging to break out the IP traffic at the base station.
Once the IP traffic has been broken out of the network, it is possible to put an optimisation platform at the edge of the mobile phone network, either at the RNC or in the base station. This platform can host various optimisation and other applications.
UMTS Mobility Management.
Referring to FIGS. 2A to 2C, a series of schematic block diagrams illustrate a network architecture 200 with mobility management as a user equipment (UE) 201 moves at the edge of the network. The figures show four base stations 211-214 named Node Bs. Sub-sets of base stations 211-212, 213-214 communicate with individual RNCs 221, 222. This communication is referred to as a backhaul link 231, 232 between the base stations and the core of the telephone company's network. The RNCs 221, 222 communicate with a SGSN 241 which uses a GGSN 251 which connects to the Internet 260 which includes multiple servers, such as the shown server 261. The server 261 has a TCP (Transmission Control Protocol) socket 262 which communicates with a TCP socket 202 at the UE 201 when a user wishes to access data from the server 261. In FIGS. 2A to 2C, the data transfer is shown in solid straight arrows, and the signalling control is shown as curved hashed arrows.
As mentioned earlier, the UMTS system makes a tunnel 270 from the GGSN 251 to the user equipment (UE) 201. In FIG. 2A, the UE 201 is communicating with base station Node B 1 211 as the user is closest to this base station. The IP tunnel 270 is shown for user traffic. The IP tunnel 270 is shown for illustration purposes in FIGS. 2A to 2C and in practice passes through the interim components, such as the RNC 221 and the SGSN 241 in FIG. 2A.
As the UE 201 begins to move from one cell to the next, the RNC 221 detects movement of the UE 201 and starts a mobility event and works with the GSNs 241, 251 (GPRS Support Nodes) to move the tunnel 270 as the UE 201 moves from base station 211 to base station 212.
FIG. 2B shows the UE 201 moved to base station Node B 2 212 and the tunnel 270 moved correspondingly.
FIG. 2C shows the UE 201 moved to base station Node B 3 213, causing inter-RNC mobility as the UE 201 moves from a base station served by the first RNC 1 221 to a base station served by the second RNC 2 222, with the tunnel 270 moved accordingly.
The key point is that the UMTS system maintains the integrity of the tunnel 270 across the mobility event. The TCP connections flowing through the tunnel 270 are not broken. It is possible that one or more IP packets may be dropped during the mobility event but TCP is designed to operate over lossy links and so this packet drop can be easily recovered.
As the UE continues to move, into an area served by a new RNC 222, the tunnel 270 is handed off seamlessly from one RNC 221 to another 222.
Traffic Optimisation Solutions Using Object Caching in the Base Station.
There are well known techniques to place a caching http forward proxy in the base station and to serve content from this. This approach achieves dramatic reductions in page load times for objects that are cached, because those objects can be served without taking a round trip back to the Gi or to the server on the Internet. Each of these round trips can take 100 ms or more, of which more than 60 ms can be saved. Since a typical page can contain tens of objects, the cumulative effect of this latency is dramatic.
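As a rough illustration using the figures above: if a page references, say, 20 cacheable objects and each avoided round trip saves about 60 ms, serving those objects from a base station cache removes on the order of 20 × 60 ms = 1.2 s from the page load time, before any bandwidth savings are counted.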
The problem with simply adding a caching forward http proxy at the base station is that it impacts on mobility management. The UE has a TCP connection which is terminated in the base station. If the UE moves to another base station, then it is extremely difficult to maintain a TCP connection which has state in a part of the network through which the data traffic is no longer travelling. There are some solutions in the literature which talk of forwarding traffic for these connections from the base station which the UE is using back to the base station where the TCP connection is terminated but this scheme has a number of drawbacks.
It requires an overlay network between base stations such that traffic can be forwarded from one base station to another. Management of this overlay is very difficult.
It uses up additional bandwidth to forward traffic from base station to base station and potentially increases the latency.
SUMMARY
According to a first aspect of the present invention there is provided a method for object caching with mobility management for mobile data communication, including intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications, implementing object caching at the base station using snooped data communications, implementing object caching at an object cache server in the network, where the object cache server proxies communications to the content server from the user equipment, and maintaining synchrony between an object cache at the base station and an object cache at the object cache server.
According to a second aspect of the present invention there is provided a system for object caching with mobility management for mobile data communication, including a processor, a network containing one or more base stations, where the network supports mobility management of data transfer to and from a user equipment, an object cache component at a base station for intercepting and snooping data communications between a user equipment and a content server without terminating communications, an object cache server in the network, where the object cache server proxies communications to the content server from the user equipment, and synchronising components at the base station and object cache server for maintaining synchrony between an object cache at the base station and an object cache at the object cache server.
According to a third aspect of the present invention there is provided a computer program stored on a computer readable medium and loadable into the internal memory of a digital computer, comprising software code portions, when the program is run on a computer, for performing the method of the first aspect of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will now be described, by way of example only, with reference to preferred embodiments, as illustrated in the following figures:
FIG. 1 is a schematic diagram showing a mobile network architecture, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented;
FIGS. 2A to 2C are schematic block diagrams showing mobile management across a mobile network, in accordance with the prior art, and in which a preferred embodiment of the present invention may be implemented;
FIG. 3 is a block diagram of a system, in accordance with a preferred embodiment of the present invention;
FIG. 4 is a block diagram of a computer system in which a preferred embodiment of the present invention may be implemented;
FIG. 5 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention;
FIG. 6 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention;
FIG. 7 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention;
FIG. 8 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention; and
FIG. 9 is a flow diagram of an aspect of a method, in accordance with a preferred embodiment of the present invention.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numbers may be repeated among the figures to indicate corresponding or analogous features.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
A solution is described for reducing the latency seen when a wireless mobile data terminal (the User Equipment (UE)) fetches data objects from a server.
Reduced latency is a key goal of mobile broadband providers because it has a dramatic effect on the Quality of Service perceived by end users. End users desire instant loading of information, but the reality of the current network is that it can take many hundreds of milliseconds, or even seconds, to load a web page, even over the most recent technology.
A method and system are described for optimising communication between a wireless mobile data terminal (the User Equipment (UE)) and a server on a connected fixed network. The mobile data terminal may be any device that can send data over a wireless network where the network provides mobility management. Example networks include the GPRS (2G) network, the WCDMA (3G) network, and the LTE or WiMAX (4G) networks. However, for the purposes of this description the 3G UMTS/WCDMA network will be used.
The solution is described in the embodiment of a UE fetching web objects using HTTP (Hypertext Transfer Protocol), but the concepts apply to other protocols such as FTP (File Transfer Protocol) or RTP (Real-time Transport Protocol).
The described solution presents a way to provide the same savings in latency that may be achieved by placing an object cache at the base station whilst also providing mobility. That is to say, if a UE moves to a new location whilst it is being served content from a cache in the base station, the UE continues to receive content without a break. This capability is preserved even if the UE has moved to a base station which has not been modified for this solution. Additionally, this technique can be combined with byte caching, which improves its ability to cache effectively.
An object caching server is inserted into the network at the reference point called the "Gi". This server is referred to as an "OCGi" (Object Cache Gi). At the Gi, traffic is no longer tunnelled; the Gi is much like a conventional WAN and is the place where the connection is made to the peering point with the Internet. This OCGi component contains an HTTP forward caching proxy which has some additional functionality that will be described further.
Some or all of the base stations are augmented with a breakout and object cache component. The details of breakout itself are not described herein, as they are known to those skilled in the art. The breakout and object caching component at a base station is referred to as an "OCNB" (Object Cache Node B). The OCNB contains a cache, but this operates slightly differently from a traditional forward caching proxy, as will be described.
Referring to FIG. 3, a block diagram shows an embodiment of the described system 300.
A user equipment (UE) 301 moves at the edge of the network between multiple base stations 311-312 (only two are shown in this example), which are referred to as Node Bs in 3G terminology. The UE 301 has a transfer protocol socket 302 for data transmission to and from a socket 362 of a server 361 on the Internet 360.
The base stations 311-312 communicate with an RNC 321. This communication is referred to as a backhaul link 331 between the base stations and the core of the telephone company's network. The RNC 321 communicates with a SGSN 341 which in turn communicates with a GGSN 351.
The described system includes an object cache server 380, referred to herein as an object cache Gi (OCGi), at the point where the network connects to the peering point with the Internet 360. The object cache server 380 operates as an HTTP forward caching proxy with additional functionality. The object cache server 380 includes transfer protocol sockets 381, 382, and an object cache structure 383.
The object cache server 380 may also include a synchronisation component 384 for synchronising its object cache structure 383 with the object cache structure 392 of the object cache component 390 of the base station 312. The object cache server 380 may also include a detecting component 385 for detecting movement of the user equipment from a base station cell and taking over serving an object of a request.
The Internet 360 provides communication with multiple content servers, such as the shown content server 361. The content server 361 has a transfer protocol socket 362.
In this embodiment, one of the base stations 312 includes an object cache component 390 referred to as an object cache Node B (OCNB) which includes breakout and object cache functionality to optimise data transfer. The object cache component 390 includes an object cache structure 392.
An embodiment of the object cache component 390 at the base station 312 includes a breakout component 391 for breaking out traffic which includes a fake socket 397 which mimics the behaviour and state of the real socket in the object cache server 380. The object cache component 390 also includes a snooping component 393 for snooping on traffic to and from the UE 301. It also includes a cache look-up component 398 for determining if a snooped request or response is cached in the object cache 392 of the base station object cache component 390. It also includes a mimicking component 394 for generating responses which mimic responses from the object cache server 380.
The object cache component 390 may also include a synchronisation component 395 for synchronising its object cache 392 with the object cache 383 of the object cache server 380. It may further include a notification component 396 for sending notification to the object cache server 380 of a cache hit at the object cache component 390 of the base station 312.
Referring to FIG. 4, an exemplary system for implementing aspects of the invention includes a data processing system 400 suitable for storing and/or executing program code including at least one processor 401 coupled directly or indirectly to memory elements through a bus system 403. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
The memory elements may include system memory 402 in the form of read only memory (ROM) 404 and random access memory (RAM) 405. A basic input/output system (BIOS) 406 may be stored in ROM 404. System software 407 may be stored in RAM 405 including operating system software 408. Software applications 410 may also be stored in RAM 405.
The system 400 may also include a primary storage means 411 such as a magnetic hard disk drive and secondary storage means 412 such as a magnetic disc drive and an optical disc drive. The drives and their associated computer-readable media provide non-volatile storage of computer-executable instructions, data structures, program modules and other data for the system 400. Software applications may be stored on the primary and secondary storage means 411, 412 as well as the system memory 402.
The computing system 400 may operate in a networked environment using logical connections to one or more remote computers via a network adapter 416.
Input/output devices 413 can be coupled to the system either directly or through intervening I/O controllers. A user may enter commands and information into the system 400 through input devices such as a keyboard, pointing device, or other input devices (for example, microphone, joy stick, game pad, satellite dish, scanner, or the like). Output devices may include speakers, printers, etc. A display device 414 is also connected to system bus 403 via an interface, such as video adapter 415.
Referring to FIG. 5, a flow diagram 500 shows an embodiment of the described method. The method includes intercepting and snooping 501 data communications at a base station between a user equipment and a content server on a network. Object caching is implemented 502 at the base station using snooped communications to optimize data transfer. Object caching is also implemented 503 at an object cache server provided in the network. The object cache server proxies communications to the content server from the user equipment. Synchronicity is maintained 504 between the object cache at the base station and the object cache at the server.
An example scenario is now described in order to illustrate the described solution in more detail. The example scenario is where the UE fetches a web object using the HTTP protocol in a UMTS network.
Referring to FIG. 6, an example embodiment of an aspect of the method 600 is described. A UE establishes 601 a tunnel with a network. The UE and the UMTS network may set up a radio bearer and tunnel between the UE and the Gi. Note that this operation often happens in advance with the same tunnel being used over and over again for different requests.
The UE may need to make 602 a new TCP connection (over the tunnel) to the HTTP port of a content server that it wishes to fetch a web object from. This connection is proxied 603 in an OCGi.
The creation of this TCP connection involves one round-trip delay on the radio network for the SYN/SYN-ACK exchange of the TCP set-up phase; however, it should be noted that HTTP typically holds a single TCP connection open across many requests, so it is assumed that the TCP connection will already exist in many cases and therefore there is no per-request round trip.
The UE may make 604 an HTTP GET request for a web object over its TCP connection in the normal way. The tunnel carrying the TCP connection over which the HTTP GET request flows may be redirected 605 into an OCNB appliance at the base station (Node B) by a breakout function as known in the prior art.
In the OCNB appliance the TCP connection is not terminated or proxied, it is simply snooped 606. The HTTP GET request flows on through the tunnel to the core where the TCP connection is terminated in the OCGi.
Referring to FIG. 7, an example embodiment is described of an aspect of the method 700 carried out at the OCNB. The OCNB snoops 701 a request from the UE. The OCNB uses the snooped HTTP request to perform 702 a lookup in the OCNB local cache. What happens next depends on whether there is a cache hit or a cache miss 703.
If there is a cache miss in the OCNB, the OCNB continues to monitor 709 the TCP connection and snoops the response as it flows through the OCNB unchanged from the OCGi to the UE. To explain this snooping in more detail: when the response arrives, the OCNB snoops the contents of the response and examines it to determine if it is cacheable. If it is cacheable, it places 710 the response into its cache as well as allowing it to flow unchanged to the UE. The determination of whether the response is cacheable includes logic to ensure that nothing is cached at the OCNB that has not also been cached at the OCGi.
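The miss-path behaviour can be pictured with a minimal Python sketch. The helper names, the simple dictionary cache, and the cacheability rule below are illustrative assumptions, not the patented implementation; the point is only that the snooped response flows on untouched and is stored only when the policy shared with the OCGi would also have stored it.

# Minimal sketch of the OCNB cache-miss path (hypothetical names throughout).
# The snooped response continues to the UE unchanged; it is stored locally
# only when the same caching policy applied at the OCGi would also store it.

CACHEABLE_STATUS = {200}

def policy_allows_caching(status, headers):
    """Shared OCNB/OCGi policy: identical logic must run at both ends."""
    cache_control = headers.get("Cache-Control", "")
    return status in CACHEABLE_STATUS and "no-store" not in cache_control

class OcnbCache:
    def __init__(self, max_objects=1000):
        self.max_objects = max_objects
        self.entries = {}  # request key -> response bytes

    def on_snooped_response(self, request_key, status, headers, body):
        if policy_allows_caching(status, headers):
            if len(self.entries) >= self.max_objects:
                self.entries.pop(next(iter(self.entries)))  # simplistic eviction
            self.entries[request_key] = body
        # Nothing is returned: the real response has already flowed on untouched.

cache = OcnbCache()
cache.on_snooped_response("GET http://example.com/logo.png",
                          200, {"Cache-Control": "max-age=3600"}, b"...")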
A cache hit at the OCNB means that when the OCNB snooped the request it was able to determine that it has, in its local cache, the data necessary to serve the requested object. The scheme also guarantees that the exact same object is present in the OCGi cache. The requested object is retrieved 704 from the cache.
Furthermore, because the designs of the OCNB and OCGi are synchronised, the OCNB knows precisely every detail of the HTTP response that the OCGi will send for the request, and is therefore capable of creating the same sequence of bytes, including all headers, that the OCGi will create. Since the OCNB is also monitoring the TCP connection, it also knows the TCP state at the OCGi and can predict the TCP sequence numbers that will be used to send the response. In this way the OCNB can keep its fake socket synchronised with the real socket in the OCGi.
The OCNB creates 705 the response as the sequence of TCP packets that it knows the OCGi will create when the OCGi processes the request. The OCNB sends 706 the response to the UE by causing its fake TCP socket to imitate the real TCP socket in the OCGi, inserting into the TCP connection, in the direction towards the UE, the sequence of TCP packets that would be created by the OCGi. Because the fake socket at the OCNB is kept synchronised with the real socket at the OCGi, the packets sent from the fake socket are identical in all regards to those that would be sent from the OCGi.
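One way to picture the fake socket is as a small state machine that tracks the next sequence number the OCGi's real socket would use and emits byte-identical segments from that point. The following is a minimal Python sketch under stated assumptions; the class name, the fixed segment size, and the omission of full TCP header construction are simplifications, not the patent's implementation.

# Sketch of a fake socket kept in lock-step with the real OCGi socket.
# Because both ends deterministically produce the same response bytes for a
# cache hit, the OCNB only needs the next sequence number to emit segments
# identical to those the OCGi would send.

MSS = 1400  # maximum segment size used for the illustration

class FakeSocket:
    def __init__(self, next_seq):
        self.next_seq = next_seq      # next sequence number the OCGi would use

    def segments_for(self, response_bytes):
        """Split the cached response into (seq, payload) segments."""
        segments = []
        for offset in range(0, len(response_bytes), MSS):
            payload = response_bytes[offset:offset + MSS]
            segments.append((self.next_seq, payload))
            self.next_seq += len(payload)
        return segments

fake = FakeSocket(next_seq=100000)
for seq, payload in fake.segments_for(b"HTTP/1.1 200 OK\r\n\r\n" + b"x" * 3000):
    pass  # each (seq, payload) would be injected towards the UE as a TCP segment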
The OCNB can perform this task before the request has reached the OCGi. As the TCP packets flow towards the UE from the OCNB, the UE will begin acknowledging these packets with TCP ACK packets containing the sequence numbers that are being acknowledged. These ACK packets are snooped 707 by the OCNB in order to update the state of its fake socket, but they are not intercepted. They flow back to the OCGi, where they are used to maintain the state of the TCP socket in the OCGi, keeping it synchronised with the data transfer. This is important in the case of mobility, as shown below.
Note that this means that the OCGi can see acknowledgements for packets that it has yet to generate. The TCP stack at the OCGi must be modified to recognise this possibility.
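The acknowledgement bookkeeping on both sides can be sketched as follows. This is a minimal Python illustration under stated assumptions; the class and field names are hypothetical, and tracking only the highest acknowledged byte is a simplification of a real TCP stack.

# Sketch of ACK handling. The OCNB snoops ACKs to advance its fake socket but
# lets them continue to the OCGi, whose (modified) TCP stack must tolerate
# acknowledgements for data it has not generated yet.

class FakeSocketAckState:
    def __init__(self):
        self.highest_acked = 0

    def on_snooped_ack(self, ack_number):
        self.highest_acked = max(self.highest_acked, ack_number)
        # The ACK packet is not intercepted; it continues towards the OCGi.

class OcgiSocketState:
    def __init__(self):
        self.highest_generated_seq = 0
        self.highest_acked = 0

    def on_ack(self, ack_number):
        # An ACK beyond what this end has generated means an OCNB served data
        # on its behalf; accept it instead of treating it as a protocol error.
        self.highest_acked = max(self.highest_acked, ack_number)
        served_ahead = ack_number > self.highest_generated_seq
        return served_ahead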
When a cache hit occurs at the OCNB the OCNB sends 708 a notification to the OCGi to indicate that this is a cache hit. The notification may be sent in multiple ways. Some examples are as follows:
Modify the HTTP GET from the UE to include a notification that the content is cached. This requires the HTTP GET to be delayed for the duration of the cache lookup.
The OCNB can send an out-of-band message, perhaps over UDP to a well-known port and IP address, which is intercepted at the OCGi (as sketched below). The advantage of this is that there is no need to delay the HTTP GET request.
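The second option can be illustrated with a small out-of-band datagram. The port number, message layout, and field choices in this Python sketch are assumptions for illustration only; the patent does not prescribe a wire format.

# Sketch of an out-of-band cache-hit notification sent from the OCNB and
# intercepted at the OCGi. Port number and message format are assumptions.

import json
import socket

NOTIFY_PORT = 35999  # hypothetical well-known port intercepted by the OCGi

def send_cache_hit_notification(ocgi_address, connection_id, request_key):
    message = json.dumps({
        "type": "ocnb-cache-hit",
        "connection": connection_id,   # identifies the TCP connection / tunnel
        "request": request_key,        # identifies the snooped HTTP GET
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (ocgi_address, NOTIFY_PORT))

# Example (not sent here):
# send_cache_hit_notification("10.0.0.1", "ue42-tcp-7", "GET http://example.com/a.css")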
Referring to FIG. 8, an example embodiment is described of an aspect of the method 800 at the core. The OCGi forward caching proxy receives 801 the request. The OCGi proxies 802 the connection and begins to process the request in the normal way for an HTTP forward caching proxy.
It is determined 803 if the requested object is in the object cache at the OCGi. If the object is in the cache and is fresh then the OCGi prepares to serve 804 the object from the cache in the OCGi.
If the object is not in the cache or not fresh, then the object is fetched 807 from the origin content server and may be later added 808 to the cache at the OCGi and OCNB. In this case, it can be guaranteed that the request is also a cache miss at the OCNB. The request is processed in the normal way that a forward caching proxy processes a cache miss.
Whilst processing a cache hit at the OCGi, the OCGi may receive 805 an indication from the OCNB that this request is a cache hit in the OCNB. If this is received, rather than serving 809 the content, the OCGi simply maintains the TCP state machine which mirrors the state of the connection. It does not send packets but it receives the ACKs for the packets that the OCNB sends on its behalf.
If the OCGi does not receive an indication from an OCNB that there has been an OCNB cache hit, then the OCGi starts serving 806 the object. There are three possible reasons for this, as follows (a sketch of the OCGi's serving decision follows this list):
There is no OCNB present in the data path.
There is an OCNB in the data path but it had a cache miss.
There is an OCNB in the data path and there was a cache hit, but the notification has not arrived yet. In this case the OCGi will begin serving packets before it receives the indication of a cache hit in the OCNB. The OCGi may generate some packets which duplicate the packets generated at the OCNB. Both of these will flow to the UE, but this is not a problem: TCP allows for duplicate packets so long as they are the same. Eventually the OCGi will receive the notification and will stop sending packets.
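The OCGi's behaviour around the notification can be pictured with the following minimal Python sketch, under stated assumptions; the class name and the callback-style send are hypothetical. Until a cache-hit notification arrives, the OCGi serves segments itself; any duplicates both ends emit are harmless because the byte streams are identical.

# Sketch of the OCGi serving loop for a cache hit.

class OcgiServingState:
    def __init__(self, segments):
        self.segments = segments          # (seq, payload) pairs to send
        self.ocnb_hit_notified = False
        self.next_index = 0

    def on_ocnb_cache_hit(self):
        self.ocnb_hit_notified = True     # stop sending; keep the TCP state only

    def tick(self, send):
        """Send the next segment unless an OCNB has taken over."""
        if self.ocnb_hit_notified or self.next_index >= len(self.segments):
            return False
        send(self.segments[self.next_index])
        self.next_index += 1
        return True

state = OcgiServingState([(100000, b"HTTP/1.1 200 OK\r\n\r\n")])
state.tick(lambda segment: None)   # served by the OCGi
state.on_ocnb_cache_hit()          # notification arrives; no further segments sent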
Referring to FIG. 9, an example embodiment is described of an aspect of the method 900 in the form of a mobility event.
At some point in the serving of the object over TCP, the UE may move 901 to a new base station.
The OCGi may detect 902 that the UE has moved to a new base station. There are several ways this can happen; three examples follow (combined in the sketch after this list):
The OCGi is informed by the network.
The OCGi maintains a heartbeat with the OCNB.
The OCGi sends UDP datagrams to the UE which are removed by the OCNB; if the OCNB is not present, the UE receives these and generates an ICMP Port Unreachable error. The OCGi also notices when it has not received acknowledgements from the UE for packets that were sent, and retransmits on the assumption that no OCNB is present.
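These detection hints can be combined into a simple detector, sketched below in Python. The timeout value, method names, and the idea of OR-ing the three signals together are illustrative assumptions rather than the patent's mechanism.

# Sketch combining the mobility-detection hints: an explicit network event,
# a missed OCNB heartbeat, or a probe datagram that reached the UE because no
# OCNB stripped it.

import time

class MobilityDetector:
    HEARTBEAT_TIMEOUT = 2.0   # seconds without an OCNB heartbeat (assumption)

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.network_reported_move = False
        self.probe_reached_ue = False

    def on_ocnb_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def on_network_handover_event(self):
        self.network_reported_move = True

    def on_icmp_port_unreachable_from_ue(self):
        # A probe datagram reached the UE, so no OCNB removed it.
        self.probe_reached_ue = True

    def ue_has_moved(self):
        heartbeat_lost = time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT
        return self.network_reported_move or heartbeat_lost or self.probe_reached_ue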
When the OCGi detects that the UE has moved to a new base station, the OCGi takes over 903 serving the object. It can do this because it has the up-to-date state of the TCP connection; it simply needs to start sending packets at the point where the OCNB stopped sending them.
Optionally, if the UE has moved to a new base station which has an OCNB, then the OCGi may communicate out of band with that OCNB to arrange for the new OCNB to take over 904 the serving of the object. To do this it is necessary to communicate the details of the original request together with the current offset and TCP information (see the sketch below).
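The state needed for such a takeover is small, as the following Python sketch illustrates under stated assumptions: the record fields, names, and fixed segment size are hypothetical. The OCGi either resumes sending from this point itself or forwards the record to an OCNB at the new base station.

# Sketch of the takeover state after a mobility event and of resuming the
# transfer exactly where the previous sender stopped.

from dataclasses import dataclass

@dataclass
class TakeoverRecord:
    request_key: str        # the original HTTP GET details
    object_offset: int      # bytes of the object already sent by the old OCNB
    next_seq: int           # next TCP sequence number to use
    highest_acked: int      # last acknowledgement seen from the UE

def resume_serving(record, cached_object, send_segment, mss=1400):
    """Continue serving from the recorded offset and sequence number."""
    seq = record.next_seq
    for offset in range(record.object_offset, len(cached_object), mss):
        payload = cached_object[offset:offset + mss]
        send_segment(seq, payload)
        seq += len(payload)

record = TakeoverRecord("GET http://example.com/big.bin", 2800, 102800, 102800)
resume_serving(record, b"x" * 10000, lambda seq, data: None)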
Note that if the new base station has an OCNB but the object is not present in its cache then this is handled in the same way as a base station that does not have an OCNB.
Cache Consistency
A cache at a base station object cache component (an OCNB) must be consistent with the cache at the object cache server (the OCGi). This consistency is a guarantee that if an object is cached at the OCNB, the exact same object is cached at the OCGi, and that the OCGi does not need to go back to the origin content server (as it would if its cached copy were stale).
If the OCGi has a different version of the object or if it needs to go back to the origin content server to fetch the object, then there is a possibility that the sequence of TCP packets generated by the OCNB may be different to that which would be generated by the OCGi.
This consistency may be achieved by the following (a sketch of the scheme follows this list):
Each OCNB cache may be of a fixed size.
The cache at the OCGi may be of a size equal to the sum of all the OCNB cache sizes.
The OCGi may partition its cache and separately manage the cached objects cached for each OCNB present in the system.
The caching logic and parameters may be the same in the OCNB and OCGi for the following:
When to cache a response;
The eviction policy when the cache size is exceeded;
When an object is deemed to be stale.
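The consistency scheme can be summarised in the following Python sketch, under stated assumptions; the class and method names, and the simplistic eviction rule, are hypothetical. The key points from the list above are the fixed per-OCNB capacity, the per-OCNB partitions at the OCGi, and the identical caching logic at both ends.

# Sketch of the consistency scheme: each OCNB cache has a fixed size, the
# OCGi cache holds a partition of the same size for each OCNB, and both ends
# apply the same caching rules, so an object present at an OCNB is guaranteed
# to be present (and equally fresh) at the OCGi.

class PartitionedOcgiCache:
    def __init__(self, ocnb_ids, per_ocnb_capacity):
        # One partition per OCNB, each managed independently.
        self.partitions = {ocnb: {} for ocnb in ocnb_ids}
        self.per_ocnb_capacity = per_ocnb_capacity

    def store(self, ocnb_id, request_key, obj):
        partition = self.partitions[ocnb_id]
        if len(partition) >= self.per_ocnb_capacity:
            partition.pop(next(iter(partition)))  # same eviction rule as the OCNB
        partition[request_key] = obj

    def total_capacity(self):
        # Equal to the sum of all OCNB cache sizes.
        return self.per_ocnb_capacity * len(self.partitions)

ocgi_cache = PartitionedOcgiCache(["ocnb-1", "ocnb-2"], per_ocnb_capacity=500)
ocgi_cache.store("ocnb-1", "GET http://example.com/a.js", b"...")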
Combined Object Caching and Byte Caching
The described method and system may be combined with a byte caching process between a base station and the Gi.
Byte caching may be implemented between a central interception server in the Gi and a set of interception functions in a subset of the base stations. The implementation intercepts but does not terminate transfer protocol connections. It optimises transfer protocol connections when they flow through a base station which has the optimisation function. If the UE moves to another base station which has the optimisation function then optimisation continues. If the UE moves to a base station that does not have the optimisation function then the transfer protocol connection is not affected but is not optimised.
A byte caching server may be inserted into the UMTS network at the reference point called the “Gi”. This server may be referred to as a “BCGi” (Byte Cache Gi) and may be combined with the described object cache server (the OCGi). This BCGi component operates as a conventional transparent TCP proxy but has additional byte caching behaviour.
A byte cache component (referred to as a “BCNB” (Byte Cache Node B)) may be provided at one or more of the base stations which may be combined with the described base station object cache component (the OCNB). In common with the OCNB function described above, as far as the user plane data is concerned, the BCNB function operates as a “bump in the wire”. It is not a proxy. In common with the OCNB, the transfer protocol connection between the UE and the core is not terminated but it is sometimes manipulated by the BCNB as if it were terminating it.
A UE may establish a tunnel with the network in the normal way. It makes a TCP connection to a port at a content server it wishes to receive data from. This TCP connection may be transparently proxied by the BCGi. The response from the server port may flow back through the proxy and may be propagated back to the UE. The BCGi does not alter the TCP stream at all, but it does begin examining the data, calculating Rabin fingerprints and storing chunks of the data in the byte cache keyed on their SHA-1 hashes. The byte caching is not fully described here because it is well described in the prior art references. Suffice to say that the BCGi starts to populate a standard byte cache structure but does nothing more until the UE moves to a base station with a BCNB function.
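The cache-population step can be illustrated with a short Python sketch. Note the simplification: real byte caching chooses chunk boundaries with Rabin fingerprints (content-defined chunking), whereas the fixed-size chunking below is an assumption made only to keep the example small; the function names are also hypothetical.

# Sketch of byte-cache population at the BCGi: chunks are keyed on their
# SHA-1 hash, and the resulting tokens can later stand in for the raw bytes.

import hashlib

CHUNK_SIZE = 8 * 1024  # roughly the 8K region mentioned in the text

def populate_byte_cache(byte_cache, data):
    """Store chunks keyed by SHA-1 and return the token sequence for the data."""
    tokens = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        token = hashlib.sha1(chunk).hexdigest()
        byte_cache[token] = chunk
        tokens.append(token)
    return tokens

byte_cache = {}
tokens = populate_byte_cache(byte_cache, b"x" * 20000)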
A BCNB may signal to the BCGi that it is present in the data path by generating marker IP packets which would not be generated by the UE and which the BCGi intercepts. When the BCGi recognises that a BCNB is present in the data path, the BCGi may stop sending normal TCP traffic and may instead start to send "Byte Caching Records" (BCRs) for this traffic to the BCNB. To be precise, these records are sent through the GTP tunnel as if they were to be sent to the UE, but they are not sent inside the TCP connection. Instead they are sent over UDP to a port that the BCNB recognises. There are in fact many ways that these records could be sent; UDP to a special port is one example.
The BCRs may contain the following (illustrated in the sketch after this list):
All the TCP metadata to allow the TCP packets to which they relate to be recreated at the BCNB;
The starting 32 bit sequence number from the TCP header;
A byte caching token which is essentially a key that represents a chunk of the data (typically in the region of 8K in size).
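A possible shape for such a record is sketched below in Python. The specific field names chosen to represent "the TCP metadata" (ports and flags) and the dataclass layout are assumptions for illustration; the patent does not define a record format.

# Sketch of a Byte Caching Record (BCR): enough TCP metadata to rebuild the
# segments, the starting sequence number, and the byte-cache token standing
# in for the payload bytes.

from dataclasses import dataclass

@dataclass
class ByteCachingRecord:
    src_port: int           # TCP metadata needed to recreate the packets
    dst_port: int
    flags: int
    start_seq: int          # starting 32-bit sequence number from the TCP header
    token: str              # SHA-1 key for a chunk of payload (around 8K)

bcr = ByteCachingRecord(src_port=80, dst_port=51234, flags=0x10,
                        start_seq=100000, token="hypothetical-sha1-token")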
At the BCNB a fake TCP socket may be created. This fake socket behaves identically to the fake socket described for the OCNB.
The BCNB may receive the byte caching tokens from the BCGi and may use these tokens to reconstitute the original data. The details of reconstituting the original data are related to byte caching: briefly, the byte cache at the BCNB looks up each token in its cache to find the corresponding full data and reconstructs the TCP packets. What is critical is that the BCNB does not need to perform the expensive Rabin fingerprinting operations on the data; these can all be done at the BCGi. The BCNB simply accesses the data related to the token and recreates the TCP packets, using the data in the BCRs to reconstitute the TCP frames around the data fetched from the byte cache.
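The reconstitution step can be sketched as follows in Python, under stated assumptions: records are shown simply as (start_seq, token) pairs and the send callback is hypothetical, but the essential point is that the rebuilt bytes match what the BCGi would have sent, so the fake socket stays synchronised.

# Sketch of reconstitution at the BCNB: each token is looked up in the local
# byte cache and TCP segments are rebuilt byte-for-byte.

def reconstitute_segments(records, byte_cache, send_segment):
    for start_seq, token in records:
        payload = byte_cache[token]        # no Rabin fingerprinting needed at the BCNB
        send_segment(start_seq, payload)   # identical bytes keep the TCP state in sync

sent = []
reconstitute_segments([(5000, "token-a")], {"token-a": b"chunk bytes"},
                      lambda seq, data: sent.append((seq, data)))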
It is important to note that because the data in the TCP segments recreated from the BCRs is identical, byte for byte, with the data that would have been sent by the BCGi had the BCNB not been present, the TCP state in the fake socket stays synchronised precisely with the BCGi.
Note also that there is no need for the byte caching to operate purely on a segment-by-segment basis. The BCRs can contain tokens that match several segments' worth of TCP data. Indeed, the byte caching boundaries do not have to fall on segment boundaries.
Combining the object caching and byte caching gives the further benefit of allowing partial cache hits. This allows the limited-size cache at the OCNB to be managed in a more efficient manner.
In this case, the OCNB cache would hold sequences of byte caching tokens keyed on the HTTP request details. The initial lookup of an HTTP request would yield a sequence of byte cache tokens.
The byte cache would sit under this and may hold the data for some or all of these tokens. In the case where the data for some of the tokens is not present in the cache, it would be fetched. It will be clear to someone skilled in the art that the process of inserting the data into the TCP stream at the OCNB is identical between a byte caching implementation and an object caching implementation: in either case, the fake socket at the Node B issues the sequence of TCP packets that the real TCP socket would have produced.
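A partial cache hit under this combined scheme can be sketched as follows in Python; the function and parameter names, and the idea of fetching missing chunks one token at a time, are illustrative assumptions.

# Sketch of a partial cache hit: the object cache maps the HTTP request to a
# token sequence, the byte cache holds the data for some tokens, and only the
# missing chunks need to be fetched.

def serve_with_partial_hit(request_key, object_cache, byte_cache, fetch_chunk):
    tokens = object_cache.get(request_key)
    if tokens is None:
        return None                      # full object-cache miss
    chunks = []
    for token in tokens:
        if token not in byte_cache:      # partial hit: fetch only what is missing
            byte_cache[token] = fetch_chunk(token)
        chunks.append(byte_cache[token])
    return b"".join(chunks)

body = serve_with_partial_hit("GET http://example.com/big.bin",
                              {"GET http://example.com/big.bin": ["t1", "t2"]},
                              {"t1": b"first chunk "},
                              lambda token: b"fetched chunk")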
The particular optimisation described is one which:
Reduces the volume of data sent over the backhaul of the Radio Access Network (RAN) and the core network;
Significantly reduces the round trip time seen by the UE and thus significantly reduces the time to load a web page;
Does not require any modification to the UE hardware or software, nor modification to the server; and
Does not interfere with mobility management.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, computer program product or computer program. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc. or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
For the avoidance of doubt, the term “comprising”, as used herein throughout the description and claims is not to be construed as meaning “consisting only of”.

Claims (20)

The invention claimed is:
1. A method for object caching with mobility management for mobile data communication, comprising:
intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications;
implementing object caching at the base station using snooped data communications;
implementing object caching at an object cache server in the network, wherein the object cache server proxies communications to the content server from the user equipment;
maintaining synchrony between an object cache at the base station and an object cache at the object cache server by monitoring a state of a connection of a Transmission Control Protocol (TCP) at the object cache server;
determining a state of the TCP at the object cache server based on monitoring the state of the connection of the TCP;
predicting sequence numbers of the TCP at the object cache server used to send a response to the user equipment based on the determining of the state of the TCP at the object cache server;
detecting, by the object cache server, a movement of the user equipment from the base station while a requested object is being served to the user equipment from a cache of the base station; and
taking over, by the object cache server, serving the requested object based on the detecting, wherein the taking over includes providing a remainder of the requested object by the object cache server.
2. The method as claimed in claim 1, further comprising:
establishing a tunnel with a network and the user equipment;
establishing a TCP connection over the tunnel to a HTTP port of a content server;
making a request for a web object over a HTTP connection; and
providing a data response to the user equipment from the base station providing a cached object, wherein the data response mimics a response from the object cache server.
3. The method as claimed in claim 2, wherein providing a data response comprises creating a sequence of bytes.
4. The method as claimed in claim 2, further comprising:
providing a notification to the object cache server in response to a cache hit being made at the base station for a data communication; and
using the snooped data communications to perform a lookup on a local cache of the base station.
5. The method as claimed in claim 1, further comprising:
in response to a cache hit at the base station for a data communication, serving the cached object to the user equipment in data packets;
snooping, by the base station, one or more acknowledgement data packets from the user equipment and allowing the acknowledgement data packets to proceed to the object cache server where they are used to maintain the state of the TCP at the object cache server; and
monitoring the TCP connection at the object cache server during the snooping.
6. The method as claimed in claim 5, further comprising:
modifying the object cache server to accommodate receiving acknowledgement data packets from the user equipment for data packets it has not generated; and
modifying a HTTP GET of the user equipment to include a notification that content is cached at the object cache server.
7. The method as claimed in claim 1, further comprising:
snooping at the base station a response from the object cache server;
caching an object of the response in response to the object being cached at the object cache server; and
modifying a HTTP GET of the user equipment to include a notification that content is cached at the object cache server.
8. The method as claimed in claim 1, further comprising:
in response to a cache hit being served from the object cache server and a notification that the base station has received a cache hit, stopping the serving of the object from the object cache server whilst maintaining the state of the TCP at the object cache server which mirrors a TCP state of the base station.
9. The method as claimed in claim 1, further comprising:
detecting a movement of the user equipment from a base station cell by the object cache server; and
taking over, by a new base station, serving an object of a request.
10. The method as claimed in claim 1, further comprising:
maintaining an object cache at each base station consistent with an object cache at the object cache server, wherein the state of the TCP at the object cache server is maintained at each base station.
11. The method as claimed in claim 10, further comprising:
providing an object cache at each base station of a fixed size;
providing an object cache at the object cache server of a size equal to the sum of all the base station object caches; and
partitioning the object cache of the object cache server to manage separately the objects cached for each base station.
12. The method as claimed in claim 10, further comprising:
providing a same caching logic and same parameters at the base stations and at the object cache server.
13. The method as claimed in claim 1, further comprising:
implementing byte caching at the base station and at the object cache server, wherein the byte caching enables partial cache hits; and
wherein the base station holds one or more sequences of byte caching tokens keyed on data request details and maintains the state of the TCP at the base station as the same as the TCP state at the object cache server.
14. The method as claimed in claim 1, wherein the data communications are hypertext transfer protocol requests and responses.
15. A system for object caching with mobility management for mobile data communication, comprising:
a processor;
a network containing one or more base stations, wherein the network supports mobility management of data transfer to and from a user equipment;
an object cache component at a base station for intercepting and snooping data communications between the user equipment and a content server without terminating communications;
an object cache server in the network, wherein the object cache server is configured to:
proxy communications to the content server from the user equipment;
synchronize components at the base station and object cache server for maintaining synchrony between a fake TCP socket of an object cache at the base station and a real TCP socket of an object cache at the object cache server by causing the fake TCP socket of the object cache at the base station to imitate the real TCP socket of the object cache at the object cache server;
detect a movement of the user equipment from the base station while a requested object is being served to the user equipment from a cache of the base station; and
take over serving the requested object based on the detecting, wherein the taking over includes providing a remainder of the requested object by the object cache server.
16. The system as claimed in claim 15, wherein the object cache component at the base station further comprises:
a mimicking component for providing a data response to a user equipment from the base station providing a cached object, wherein the data response mimics a response from the object cache server;
a breakout component for breaking out traffic to be received at the object cache component; and
the fake TCP socket which mimics a behavior and the state of the TCP of the real TCP socket of the object cache server.
17. The system as claimed in claim 15, wherein the object cache component at the base station includes a notification component for providing a notification to the object cache server in response to a cache hit being made at the base station for a data communication.
18. The system as claimed in claim 15, wherein the object cache server comprises:
a detecting component for detecting a movement of the user equipment from a base station cell and taking over, by a new base station, serving an object of a request.
19. The system as claimed in claim 15, further comprising:
an object cache at each base station of a fixed size;
an object cache at the object cache server of a size equal to the sum of all the base station object caches, wherein the object cache of the object cache server is partitioned to manage separately the objects cached for each base station.
20. A computer program product stored on a non-transitory computer readable medium and loadable into the internal memory of a digital computer, comprising software code portions, that, when executed by the computer, cause the computer to perform object caching with mobility management for mobile data communication by performing actions comprising:
intercepting and snooping data communications at a base station between a user equipment and a content server without terminating communications;
determining the snooped data communications are cacheable based on a cache hit at the base station;
implementing object caching at the base station using the snooped data communications;
implementing object caching at an object cache server in the network, wherein the object cache server proxies communications to the content server from the user equipment;
maintaining synchrony between an object cache at the base station and an object cache at the object cache server by monitoring a state of a Transmission Control Protocol (TCP) of the object cache server; and
detecting a movement of the user equipment from the base station while a requested object is being served to the user equipment from a cache of the base station; and
taking over serving the requested object based on the detecting, wherein the taking over includes providing a remainder of the requested object by the object cache server.
US14/378,118 2012-03-13 2013-02-08 Object caching for mobile data communication with mobility management Active 2034-01-18 US10120801B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1204362.6A GB2500373A (en) 2012-03-13 2012-03-13 Object caching for mobile data communication with mobility management
GB1204362.6 2012-03-13
PCT/EP2013/052558 WO2013135443A1 (en) 2012-03-13 2013-02-08 Object caching for mobile data communication with mobilty management

Publications (2)

Publication Number Publication Date
US20150032974A1 US20150032974A1 (en) 2015-01-29
US10120801B2 true US10120801B2 (en) 2018-11-06

Family

ID=46026417

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/378,118 Active 2034-01-18 US10120801B2 (en) 2012-03-13 2013-02-08 Object caching for mobile data communication with mobility management

Country Status (6)

Country Link
US (1) US10120801B2 (en)
CN (1) CN104160679B (en)
DE (1) DE112013000702T5 (en)
GB (2) GB2500373A (en)
TW (1) TWI578745B (en)
WO (1) WO2013135443A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9819604B2 (en) 2013-07-31 2017-11-14 Nvidia Corporation Real time network adaptive low latency transport stream muxing of audio/video streams for miracast
US10299171B2 (en) 2013-12-03 2019-05-21 Telefonaktiebolaget Lm Ericsson (Publ) First service network node, a second service network node and methods relating to handling of a service session
CN104159249B (en) * 2014-07-30 2018-05-18 华为技术有限公司 The method, apparatus and system of a kind of Service Data Management
CN105721538A (en) * 2015-12-30 2016-06-29 东莞市青麦田数码科技有限公司 Data access method and apparatus
CN105542685B (en) * 2016-02-03 2018-12-11 京东方科技集团股份有限公司 Sealant, liquid crystal display panel, liquid crystal display and preparation method
US10540282B2 (en) 2017-05-02 2020-01-21 International Business Machines Corporation Asynchronous data store operations including selectively returning a value from cache or a value determined by an asynchronous computation
WO2018203185A1 (en) * 2017-05-02 2018-11-08 International Business Machines Corporation Asynchronous data store operations
US11553061B1 (en) * 2021-07-30 2023-01-10 At&T Intellectual Property I, L.P. Hyperlocal edge cache


Patent Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020052884A1 (en) * 1995-04-11 2002-05-02 Kinetech, Inc. Identifying and requesting data in network using identifiers which are based on contents of data
US6721288B1 (en) 1998-09-16 2004-04-13 Openwave Systems Inc. Wireless mobile devices having improved operation during network unavailability
US6535509B2 (en) * 1998-09-28 2003-03-18 Infolibria, Inc. Tagging for demultiplexing in a network traffic server
US7080158B1 (en) * 1999-02-09 2006-07-18 Nortel Networks Limited Network caching using resource redirection
EP1039721B1 (en) 1999-03-24 2004-03-17 Kabushiki Kaisha Toshiba Information delivery to mobile computers using cache servers
US6941338B1 (en) * 1999-09-01 2005-09-06 Nextwave Telecom Inc. Distributed cache for a wireless communication system
WO2001016788A2 (en) 1999-09-01 2001-03-08 Nextwave Telecom Inc. Distributed cache for a wireless communication system
US7143169B1 (en) * 2002-04-04 2006-11-28 Cisco Technology, Inc. Methods and apparatus for directing messages to computer systems based on inserted data
US7813484B2 (en) * 2002-08-08 2010-10-12 Telecommunication Systems, Inc. All-HTTP multimedia messaging
WO2004057437A2 (en) 2002-12-23 2004-07-08 Electronics And Telecommunications Research Institute System and method for managing cache data by using a cache register in a mobile database system
US20050135357A1 (en) * 2003-12-22 2005-06-23 3Com Corporation Stackable routers employing a routing protocol
US20070245090A1 (en) * 2006-03-24 2007-10-18 Chris King Methods and Systems for Caching Content at Multiple Levels
US20080086594A1 (en) * 2006-10-10 2008-04-10 P.A. Semi, Inc. Uncacheable load merging
EP1928154B1 (en) 2006-12-01 2010-08-11 Fujitsu Ltd. Efficient utilization of cache servers in mobile communication system
US20110044338A1 (en) * 2006-12-20 2011-02-24 Thomas Anthony Stahl Throughput in a lan by managing tcp acks
US20080153460A1 (en) * 2006-12-21 2008-06-26 Chan Mary S Methods and Apparatus for Distributed Multimedia Content Supporting User Mobility
CN101682566A (en) 2007-03-12 2010-03-24 思杰系统有限公司 Systems and methods of providing proxy-based quality of service
US20080229021A1 (en) * 2007-03-12 2008-09-18 Robert Plamondon Systems and Methods of Revalidating Cached Objects in Parallel with Request for Object
US20080310365A1 (en) 2007-06-12 2008-12-18 Mustafa Ergen Method and system for caching content on-demand in a wireless communication network
EP2281383B1 (en) 2008-05-20 2013-03-13 Alcatel-Lucent Method and apparatus for pre-fetching data in a mobile network environment using edge data storage
US20090291696A1 (en) 2008-05-20 2009-11-26 Mauricio Cortes Method and apparatus for pre-fetching data in a mobile network environment using edge data storage
US20090327412A1 (en) * 2008-06-25 2009-12-31 Viasat, Inc. Methods and systems for peer-to-peer app-level performance enhancing protocol (pep)
US20100057883A1 (en) 2008-08-28 2010-03-04 Sycamore Networks, Inc. Distributed content caching solution for a mobile wireless network
US20100161741A1 (en) * 2008-12-24 2010-06-24 Juniper Networks, Inc. Using a server's capability profile to establish a connection
US20100235585A1 (en) 2009-03-12 2010-09-16 At&T Mobility Ii Llc Data caching in consolidated network repository
WO2010115469A1 (en) 2009-04-09 2010-10-14 Nokia Siemens Networks Oy Base station caching for an efficient handover in a mobile telecommunication network with relays
US20110125820A1 (en) 2009-11-25 2011-05-26 Yi-Neng Lin Telecommunication network aggregation cache system and method
WO2011091861A1 (en) 2010-02-01 2011-08-04 Telefonaktiebolaget Lm Ericsson (Publ) Caching in mobile networks
US20120300747A1 (en) * 2010-02-01 2012-11-29 Telefonaktiebolaget L M Ericsson (Publ) Caching in mobile networks
US20110202634A1 (en) * 2010-02-12 2011-08-18 Surya Kumar Kovvali Charging-invariant and origin-server-friendly transit caching in mobile networks
WO2011100518A2 (en) 2010-02-12 2011-08-18 Movik Networks, Inc. Charging-invariant and origin-server-friendly transit caching in mobile networks
WO2011116819A1 (en) 2010-03-25 2011-09-29 Telefonaktiebolaget Lm Ericsson (Publ) Caching in mobile networks
WO2012009619A2 (en) 2010-07-15 2012-01-19 Movik Networks Hierarchical device type recognition, caching control and enhanced cdn communication in a wireless mobile network
US20130198274A1 (en) * 2012-01-26 2013-08-01 Matthew Nicholas Papakipos Social Hotspot

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Chinese Office Action in related Chinese Application No. 201380012617.9 dated Aug. 31, 2016, 9 pages.
Hsiao et al, "Tailoring a Dsm Simulation Environment for Edge Cache Architecture," Proceedings of the 7th Workshop on Compiler Techniques for High-Performance Computing (CTHPC'01), Mar. 15-16, 2001, 15 pages.
International Search Report and Written Opinion for International Application No. PCT/EP2013/052558, dated Apr. 3, 2013, 12 pages.
Lai et al., "Supporting User Mobility Through Cache Relocation," Mobile Information Systems, 2005, Abstract.
Lai et al., "Supporting User Mobility Through Cache Relocation," Mobile Information Systems, 2005, pp. 275-307.
Nguyen et al., "An Adaptive Cache Consistency Strategy in a Disconnected Mobile Wireless Network," 2011 IEEE International Conference on Computer Science and Automation Engineering (CSAE), Issue Date: Jun. 10-12, 2011, pp. 256-260.
Search Report for GB Application No. GB1204362.6, dated Jul. 11, 2012, 8 pages.

Also Published As

Publication number Publication date
DE112013000702T5 (en) 2014-10-09
CN104160679A (en) 2014-11-19
US20150032974A1 (en) 2015-01-29
GB201204362D0 (en) 2012-04-25
TWI578745B (en) 2017-04-11
GB201415284D0 (en) 2014-10-15
GB2513284A (en) 2014-10-22
CN104160679B (en) 2017-09-29
WO2013135443A1 (en) 2013-09-19
TW201404101A (en) 2014-01-16
GB2500373A (en) 2013-09-25
GB2513284B (en) 2014-12-17

Similar Documents

Publication Publication Date Title
US10120801B2 (en) Object caching for mobile data communication with mobility management
US9198089B2 (en) Caching architecture for streaming data between base stations in mobile networks
US9237438B2 (en) Continuous cache service in cellular networks
US9001840B2 (en) Content caching in the radio access network (RAN)
US20120297009A1 (en) Method and system for cahing in mobile ran
KR20130137859A (en) Method and apparatus for handover in mobile content centric network
KR20130122196A (en) Mobile contents delivery method using a hand-over and apparatus therefor
US8942174B2 (en) Reducing packet loss in a mobile data network with data breakout at the edge
US9253683B2 (en) Utilizing stored data to reduce packet data loss in a mobile data network with data breakout at the edge
US8848614B2 (en) Cooperative mobility management in a mobile data network with data breakout at the edge
US9390053B2 (en) Cache device, cache control device, and methods for detecting handover
US9729661B2 (en) Optimization of mobile data communication using byte caching
EP3186959B1 (en) Enrichment of upper layer protocol content in tcp based session
US9560557B2 (en) Mobility management of OSI connections between cell towers
US11140594B2 (en) Methods and service nodes for transferring a service session for a wireless device
WO2013069985A1 (en) Mobile communication system and content provision method in mobile communication system
US11166209B2 (en) Methods and service nodes for transferring a service session for a wireless device between service nodes associated to different base stations
KR101589446B1 (en) Traffic redirection method for contents delivery service and computer readable recording medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DEAKIN, OLIVER M.;MOORE, VICTOR S.;NICHOLSON, ROBERT B.;AND OTHERS;SIGNING DATES FROM 20140801 TO 20140807;REEL/FRAME:033512/0732

AS Assignment

Owner name: GLOBALFOUNDRIES U.S. 2 LLC COMPANY, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:036277/0160

Effective date: 20150629

AS Assignment

Owner name: GLOBALFOUNDRIES U.S. 2 LLC, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:036550/0001

Effective date: 20150629

AS Assignment

Owner name: GLOBALFOUNDRIES INC., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GLOBALFOUNDRIES U.S. 2 LLC;GLOBALFOUNDRIES U.S. INC.;REEL/FRAME:036779/0001

Effective date: 20150910

AS Assignment

Owner name: GLOBALFOUNDRIES U.S.2 LLC, NEW YORK

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE THE RECEIVING PARTY DATA (NAME OF ASSIGNEE) NEEDS TO BE CORRECTED. ASSIGNEE SHOULD READ GLOBALFOUNDRIES U.S. 2 LLC PREVIOUSLY RECORDED ON REEL 036277 FRAME 0160. ASSIGNOR(S) HEREBY CONFIRMS THE GLOBALFOUNDRIES U.S. 2 LLC COMPANY;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:036919/0644

Effective date: 20150629

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, DELAWARE

Free format text: SECURITY AGREEMENT;ASSIGNOR:GLOBALFOUNDRIES INC.;REEL/FRAME:049490/0001

Effective date: 20181127

AS Assignment

Owner name: GLOBALFOUNDRIES U.S. INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GLOBALFOUNDRIES INC.;REEL/FRAME:054633/0001

Effective date: 20201022

AS Assignment

Owner name: GLOBALFOUNDRIES INC., CAYMAN ISLANDS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:054636/0001

Effective date: 20201117

AS Assignment

Owner name: GLOBALFOUNDRIES U.S. INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:056987/0001

Effective date: 20201117

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4