CN108023900B - Method and system for realizing transparent cache

Method and system for realizing transparent cache

Info

Publication number
CN108023900B
CN108023900B (application CN201610925870.9A)
Authority
CN
China
Prior art keywords
uplink request
request
downlink data
user
transparent
Prior art date
Legal status
Active
Application number
CN201610925870.9A
Other languages
Chinese (zh)
Other versions
CN108023900A (en)
Inventor
翁颐
朱红绿
姚良
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN201610925870.9A
Publication of CN108023900A
Application granted
Publication of CN108023900B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5681 Pre-fetching or pre-delivering data based on network characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2483 Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/566 Grouping or aggregating service requests, e.g. for unified processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682 Policies or rules for updating, deleting or replacing the stored data

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method and a system for implementing a transparent cache, and relates to the technical field of the Internet. The transparent cache device determines from the user's uplink request whether it needs to serve that request; if so, it marks the uplink request and sends it to the network device. On the downlink path, the network device sends the downlink data corresponding to the marked uplink request to the transparent cache device for service, while downlink data corresponding to uplink requests that do not need service is not sent to the transparent cache device. Because downlink data that does not need service no longer passes through the transparent cache device, the I/O resources of the transparent cache device are saved and its performance loss is reduced.

Description

Method and system for realizing transparent cache
Technical Field
The invention relates to the technical field of the Internet, and in particular to a method and a system for implementing a transparent cache.
Background
Web caches (network caches) are divided into transparent caches and non-transparent caches. A transparent cache has the advantage that neither the user nor the origin website perceives it, but because it is connected in series in the network and must carry the entire network traffic, it places high demands on the performance of the cache device and is expensive to deploy.
For a transparent cache, user traffic is generally forced through the cache by configuring policy routing on both the uplink and the downlink: all uplink traffic passes through the cache, and all downlink traffic passes through it as well. Serviceable traffic is served by the cache device, while non-serviceable traffic is forwarded by the cache device acting as a transparent proxy. Uplink traffic consists mainly of user requests and is small in volume. Downlink traffic is much larger, and only a small portion of it is serviceable; the rest must still pass through the cache device, which wastes the device's I/O (input/output). In addition, although the cache device merely proxies the non-serviceable traffic, that proxying takes place at layers 4-7 and still consumes device performance, and the non-serviceable traffic also occupies the throughput capacity of the cache device.
Disclosure of Invention
The technical problem to be solved by the invention is how to reduce the performance consumption of the transparent cache device.
According to one aspect of the present invention, a method for implementing a transparent cache is provided, including: the transparent cache device confirms, according to an uplink request of a user, that it needs to provide service for the user's request; the transparent cache device marks the uplink request with an identifier and sends the uplink request to a network device; the network device recognizes the identifier, records the characteristics of the uplink request, and forwards the uplink request to a source server; the network device identifies, according to the recorded characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request; the network device sends the identified downlink data to the transparent cache device; and the transparent cache device processes the downlink data and forwards the processed downlink data to the user.
In one embodiment, the transparent cache device confirming, according to the user's uplink request, that it needs to provide service for the user's request includes: the transparent cache device determines whether to provide service for the user's request according to the uniform resource locator in the uplink request and the cacheability of the page object; if the uniform resource locator in the request matches a preset uniform resource locator and the page object is cacheable, it confirms that service is to be provided for the user's request.
In one embodiment, the transparent cache device marking the uplink request includes: the transparent cache device writes the identifier into a reserved field of the transmission control protocol packet header of the uplink request.
In one embodiment, the network device recognizing the identifier and recording the characteristics of the uplink request includes: the network device unpacks the uplink request, and if it recognizes that the uplink request carries the identifier, records the flow characteristics of the uplink request, where the flow characteristics include an IP five-tuple.
In one embodiment, the network device identifying, according to the characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request includes: the network device extracts the flow characteristics of the downlink data sent by the source server; and the network device matches the flow characteristics of the downlink data with the flow characteristics of the uplink request, and determines the matched downlink data as the downlink data corresponding to the uplink request.
According to another aspect of the present invention, a system for implementing a transparent cache is provided, which includes: a transparent cache device, configured to confirm, according to an uplink request of a user, that it needs to provide service for the user's request, mark the uplink request with an identifier, send the uplink request to a network device, receive the downlink data corresponding to the uplink request sent by the network device, process the downlink data, and forward the processed downlink data to the user; and the network device, configured to recognize the identifier carried by the uplink request, record the characteristics of the uplink request, forward the uplink request to a source server, identify, according to the characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request, and send the identified downlink data to the transparent cache device.
In an embodiment, the transparent cache device is configured to determine, according to the uniform resource locator in the user's uplink request and the cacheability of the page object, whether to provide service for the user's current request; if the uniform resource locator in the request matches a preset uniform resource locator and the page object is cacheable, it confirms that service is to be provided for the user's current request.
In an embodiment, the transparent cache device is configured to mark the uplink request in a reserved field of the transmission control protocol (TCP) packet header.
In an embodiment, the network device is configured to unpack the uplink request, and record the flow characteristics of the uplink request when it recognizes that the uplink request carries the identifier, where the flow characteristics include an IP five-tuple.
In an embodiment, the network device is configured to extract the flow characteristics of the downlink data sent by the source server, match the flow characteristics of the downlink data with the flow characteristics of the uplink request, and determine the matched downlink data as the downlink data corresponding to the uplink request.
The transparent cache device determines from the user's uplink request whether it needs to serve that request; if so, it marks the uplink request and sends it to the network device. On the downlink path, the network device sends the downlink data corresponding to the marked uplink request to the transparent cache device for service, while downlink data corresponding to uplink requests that do not need service is not sent to the transparent cache device. Because downlink data that does not need service no longer passes through the transparent cache device, the I/O resources of the transparent cache device are saved and its performance loss is reduced.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a method for implementing a transparent cache according to an embodiment of the present invention.
Fig. 2 shows a schematic flow diagram of upstream traffic of the implementation method of the transparent cache of the present invention.
Fig. 3 shows a schematic flow diagram of downlink traffic of the implementation method of the transparent cache according to the present invention.
Fig. 4 shows a format diagram of a transmission control protocol header.
Fig. 5 is a flowchart illustrating a method for implementing a transparent cache according to another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a system for implementing a transparent cache according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, and not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention, its application, or its uses. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The present solution is proposed to solve the problems in the prior art that all uplink and downlink data must pass through the transparent cache device, which wastes the device's I/O (input/output), consumes device performance, and reduces the device's throughput capacity.
The following describes an implementation method of the asymmetric transparent cache of the present invention with reference to fig. 1 to 4.
Fig. 1 is a flowchart of an embodiment of a method for implementing a transparent cache according to the present invention.
Fig. 2 is a schematic flow diagram of upstream traffic in the present invention.
Fig. 3 is a schematic flow diagram of downlink traffic in the present invention.
Fig. 4 is a diagram illustrating the format of a TCP packet header.
As shown in fig. 1, the method of this embodiment includes:
and step S102, the transparent cache equipment confirms that the transparent cache equipment needs to provide service for the current request of the user according to the uplink request of the user.
As shown in fig. 2, the user's uplink request packet is first forwarded to the transparent cache device through the network device. The network device is, for example, a router. The overall uplink traffic consists of many parts, such as unrecognized private protocols or traffic, encrypted protocols or traffic, video streaming media based on RTP (Real-time Transport Protocol), P2P (peer-to-peer) traffic, and HTTP (HyperText Transfer Protocol) traffic (typically web browsing, HTTP video, and HTTP download). A network cache usually serves only HTTP traffic, and not all HTTP traffic can be cached; for example, dynamic parts of a web page are usually marked no-cache by the origin website and cannot be served, and HTTPS traffic is encrypted, cannot be understood by the cache, and therefore cannot be served either. The transparent cache device therefore determines whether to provide service for the user's current request according to the Uniform Resource Locator (URL) in the uplink request and the cacheability of the page object: if the URL in the request matches a preset URL and the page object is cacheable, it confirms that service is to be provided for the user's current request. For example, the transparent cache device may be configured to perform exact or fuzzy matching between the URL in the uplink request and preset URLs, so that specified pages of specified websites are serviceable; the cacheability of the page object is determined according to the Cache-Control header field, where content marked public is cached and objects marked private, no-cache, or no-store are not served by default.
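To illustrate this decision step, the following is a minimal Python sketch of such a serviceability check (it is not part of the patent); the function name, the pattern list, and passing the Cache-Control value of the page object as an argument are assumptions made for illustration.

    import re
    from typing import Iterable

    def is_serviceable(url: str, cache_control: str, url_patterns: Iterable[str]) -> bool:
        """Hypothetical serviceability check: the URL must match a preset pattern
        (exact or fuzzy) and the page object must be cacheable according to its
        Cache-Control header field."""
        cc = cache_control.lower()
        # Objects marked private, no-cache, or no-store are not served by default.
        if any(directive in cc for directive in ("private", "no-cache", "no-store")):
            return False
        # Content on a configured site that is otherwise cacheable (e.g. public) is served.
        return any(re.search(pattern, url) for pattern in url_patterns)

    # Example: only video objects of one configured site are considered serviceable.
    patterns = [r"^http://video\.example\.com/.+\.mp4$"]
    print(is_serviceable("http://video.example.com/a.mp4", "public, max-age=3600", patterns))  # True
    print(is_serviceable("http://video.example.com/a.mp4", "private", patterns))               # False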
Step S104: the transparent cache device marks the uplink request with an identifier and sends the uplink request to the network device.
As shown in fig. 2, all uplink traffic is examined and, where appropriate, marked by the transparent cache and then sent to the network device. The transparent cache device marks the uplink request in a reserved field of the transmission control protocol (TCP) packet header. Since the network device must obtain a specific port number, it has to unpack the packet at least to the TCP layer, so it is reasonable to place the mark in the TCP packet header. Fig. 4 shows the TCP packet header format, which contains a 6-bit reserved field. The first of these 6 bits is used as the identifier: 0 means that the request does not need service, and 1 means that it does. The network device strips the identifier off after processing, so the identifier travels only between the network device and the transparent cache device; even if the reserved field is put to other uses later, the impact is limited.
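A minimal sketch (an illustration, not the patent's implementation) of setting such an identifier in a raw TCP header, assuming the classic RFC 793 layout in which the six reserved bits follow the 4-bit data offset, so the first reserved bit is bit 3 of header byte 12 (mask 0x08); the function name is an assumption, and recomputing the TCP checksum after the change is left to the caller.

    def mark_uplink_request(tcp_header: bytes, serviceable: bool) -> bytes:
        """Set or clear the first of the six reserved bits in a raw TCP header.
        Byte 12 holds the 4-bit data offset followed by the first reserved bits,
        so the first reserved bit corresponds to mask 0x08. The TCP checksum
        must be recomputed by the caller after the header is modified."""
        if len(tcp_header) < 20:
            raise ValueError("TCP header must be at least 20 bytes long")
        header = bytearray(tcp_header)
        if serviceable:
            header[12] |= 0x08   # identifier = 1: the request needs service
        else:
            header[12] &= 0xF7   # identifier = 0: the request does not need service
        return bytes(header)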
Step S106: the network device recognizes the identifier, records the characteristics of the uplink request, and forwards the uplink request to the source server.
The network device unpacks the TCP packet header of every uplink flow and checks whether it carries the identifier.
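On the network-device side, the complementary check-and-strip step could look like the following sketch, under the same reserved-bit assumption as above; the function name is illustrative.

    from typing import Tuple

    def check_and_strip_identifier(tcp_header: bytes) -> Tuple[bool, bytes]:
        """Return whether the first reserved bit is set (the uplink request is
        marked as needing service) together with the header with that bit
        cleared, so the identifier is not forwarded beyond the network device."""
        header = bytearray(tcp_header)
        marked = bool(header[12] & 0x08)
        header[12] &= 0xF7  # strip the identifier before forwarding to the source server
        return marked, bytes(header)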
As shown in fig. 2, the network device forwards the uplink request to the source server after performing the corresponding processing on it. The source server provides the corresponding downlink data according to the content of the uplink request; for example, when a user requests a video resource, the source server returns that video resource to the user as downlink data.
The characteristics of the uplink request include flow characteristics, specifically an IP five-tuple, i.e. the source IP address, source port, destination IP address, destination port, and transport layer protocol.
Step S108: the network device identifies, according to the recorded characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request.
The network device extracts the flow characteristics of the downlink data sent by the source server, specifically its IP five-tuple, matches them against the recorded flow characteristics of the uplink request, and determines the matched downlink data as the downlink data corresponding to the uplink request; in the matching, for example, the source IP address of the uplink request is matched against the destination IP address of the downlink data.
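The recording and matching of flow characteristics can be pictured with the following Python sketch (an illustration only): the uplink five-tuple is stored when the marked request passes through, and a downlink flow is considered to correspond to it when the endpoints are reversed, e.g. the uplink source IP address equals the downlink destination IP address. The class and function names are assumptions.

    from typing import NamedTuple, Set

    class FlowKey(NamedTuple):
        """IP five-tuple: source IP, source port, destination IP, destination port, protocol."""
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int
        protocol: str

    recorded_uplink_flows: Set[FlowKey] = set()

    def record_uplink(flow: FlowKey) -> None:
        """Record the flow characteristics of a marked (serviceable) uplink request."""
        recorded_uplink_flows.add(flow)

    def matches_recorded_uplink(downlink: FlowKey) -> bool:
        """A downlink flow corresponds to a recorded uplink request when its
        endpoints are the reverse of the recorded ones (source and destination
        IP addresses and ports swapped, same transport protocol)."""
        reversed_key = FlowKey(downlink.dst_ip, downlink.dst_port,
                               downlink.src_ip, downlink.src_port,
                               downlink.protocol)
        return reversed_key in recorded_uplink_flows

    # Example: uplink request from a user to the source server, then the server's reply.
    record_uplink(FlowKey("10.0.0.5", 51000, "203.0.113.10", 80, "TCP"))
    print(matches_recorded_uplink(FlowKey("203.0.113.10", 80, "10.0.0.5", 51000, "TCP")))  # True
    print(matches_recorded_uplink(FlowKey("203.0.113.10", 80, "10.0.0.6", 51000, "TCP")))  # False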
Step S110: the network device sends the identified downlink data to the transparent cache device.
As shown in fig. 3, the offloading of downlink data is performed at the network device: downlink data corresponding to uplink requests that need service is forwarded to the transparent cache device, while downlink data that cannot be matched against the recorded flow characteristics is forwarded directly to the user or to other devices according to the routing table.
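The offloading decision itself is simple once the matching above is in place; a minimal sketch follows, with illustrative function names and the actual forwarding actions left as callbacks.

    from typing import Any, Callable

    def offload_downlink(packet: Any,
                         matches_recorded_uplink: Callable[[Any], bool],
                         forward_to_cache: Callable[[Any], None],
                         forward_per_routing_table: Callable[[Any], None]) -> None:
        """Downlink offloading at the network device: a packet whose flow matches a
        recorded (serviceable) uplink request goes to the transparent cache device;
        everything else follows the normal routing table toward the user."""
        if matches_recorded_uplink(packet):
            forward_to_cache(packet)
        else:
            forward_per_routing_table(packet)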
Step S112: the transparent cache device processes the downlink data and forwards the processed downlink data to the user.
The transparent cache device caches the content received from the source server and sends it out over the TCP connection between the transparent cache device and the user side. If the transparent cache device also provides functions such as HTTP optimization and video optimization, operations such as picture compression, video compression or transcoding, and embedding of CSS (Cascading Style Sheets) may also be performed.
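As a final illustrative sketch (again, not part of the patent), the cache-and-serve step can be pictured as a simple object store keyed by URL; the class name and the in-memory dictionary are assumptions, and a real deployment would add eviction, revalidation, and the optional optimization operations mentioned above.

    from typing import Dict, Optional

    class TransparentCacheStore:
        """Minimal in-memory object store for the cache-and-serve step."""

        def __init__(self) -> None:
            self._objects: Dict[str, bytes] = {}

        def store(self, url: str, body: bytes) -> None:
            """Cache the content received from the source server."""
            self._objects[url] = body

        def serve(self, url: str) -> Optional[bytes]:
            """Return the cached object to be sent over the TCP connection to the
            user side; None means a cache miss."""
            return self._objects.get(url)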
In the method of this embodiment, the transparent cache device determines from the user's uplink request whether it needs to serve the user's current request; if so, it marks the uplink request and sends it to the network device. On the downlink path, the network device sends the downlink data corresponding to the marked uplink request to the transparent cache device for service, while downlink data corresponding to uplink requests that do not need service is not sent to the transparent cache device. Because downlink data that does not need service no longer passes through the transparent cache device, the I/O resources of the transparent cache device are saved and its performance loss is reduced.
Another embodiment of the transparent cache implementation method of the present invention is described below with reference to fig. 5.
Fig. 5 is a flowchart illustrating another embodiment of a method for implementing a transparent cache according to the present invention. As shown in fig. 5, the method of this embodiment includes:
step S502, the transparent cache device receives the uplink traffic of the user sent by the network device.
Step S504, the transparent cache device determines whether it needs to provide service for the uplink traffic, if so, step S506 is executed, otherwise, step S508 is executed.
Step S506, the transparent cache device marks the uplink traffic with the identifier.
Step S508, the transparent cache device sends the uplink traffic to the network device.
Step S510, the network device determines whether the uplink traffic carries an identifier, if so, step S512 is executed, otherwise, step S514 is executed.
Step S512, the network device records the flow characteristics of the uplink flow and deletes the identifier.
Step S514, the network device sends the uplink traffic to the source server.
Step S516, the network device receives the downlink traffic sent by the source server and matches the flow characteristics of the downlink traffic with the recorded characteristics of the uplink traffic; if the matching is successful, step S520 is executed, otherwise step S518 is executed.
Step S518, the network device sends the downlink traffic to the user.
Step S520, the network device sends the downlink traffic to the transparent cache device, and then step S522 is executed.
Step S522, the transparent cache device processes the downlink traffic and forwards the processed downlink traffic to the user.
The invention also provides a system for implementing the transparent cache, which is described below with reference to fig. 6.
Fig. 6 is a block diagram of an embodiment of a system for implementing a transparent cache according to the present invention. As shown in fig. 6, the system 60 includes:
the transparent caching device 602 is configured to confirm that the transparent caching device needs to provide service for the request of the user according to the uplink request of the user, identify the uplink request, send the uplink request to the network device 604, receive downlink data corresponding to the uplink request sent by the network device 604, process the downlink data, and forward the processed downlink data to the user.
The transparent cache device 602 is configured to determine, according to the uniform resource locator in the user's uplink request and the cacheability of the page object, whether to provide service for the user's current request; if the uniform resource locator in the request matches a preset uniform resource locator and the page object is cacheable, it confirms that service is to be provided for the user's current request.
The transparent cache device 602 is configured to mark the uplink request in a reserved field of the transmission control protocol (TCP) packet header.
The network device 604 is configured to recognize the identifier carried by the uplink request, record the characteristics of the uplink request, forward the uplink request to the source server, identify, according to the characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request, and send the identified downlink data to the transparent cache device 602.
The network device 604 is configured to unpack the uplink request, and record the flow characteristics of the uplink request when it recognizes that the uplink request carries the identifier, where the flow characteristics include an IP five-tuple.
The network device 604 is configured to extract the flow characteristics of the downlink data sent by the source server, match the flow characteristics of the downlink data with the flow characteristics of the uplink request, and determine the matched downlink data as the downlink data corresponding to the uplink request.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for implementing a transparent cache, comprising:
the transparent cache device confirms, according to an uplink request of a user, that it needs to provide service for the user's request;
the transparent cache device marks the uplink request with an identifier and sends the uplink request to a network device;
the network device recognizes the identifier, records the characteristics of the uplink request, and forwards the uplink request to a source server;
the network device identifies, according to the recorded characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request;
the network device sends the identified downlink data to the transparent cache device;
the transparent cache device processes the downlink data and forwards the processed downlink data to the user;
wherein the network device recognizing the identifier and recording the characteristics of the uplink request includes:
the network device unpacks the uplink request, and if it recognizes that the uplink request carries the identifier, records the flow characteristics of the uplink request, wherein the flow characteristics include an IP five-tuple.
2. The method of claim 1,
the step of the transparent cache device confirming, according to the user's uplink request, that it needs to provide service for the user's request includes:
the transparent cache device determines whether to provide service for the user's request according to the uniform resource locator in the uplink request and the cacheability of the page object; if the uniform resource locator in the request matches a preset uniform resource locator and the page object is cacheable, it confirms that service is to be provided for the user's request.
3. The method of claim 1,
the transparent cache device marking the uplink request includes:
the transparent cache device writes the identifier into a reserved field of the transmission control protocol packet header of the uplink request.
4. The method of claim 1,
the network device identifying, according to the characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request includes:
the network device extracts the flow characteristics of the downlink data sent by the source server; and
the network device matches the flow characteristics of the downlink data with the flow characteristics of the uplink request, and determines the matched downlink data as the downlink data corresponding to the uplink request.
5. A system for implementing a transparent cache, comprising:
a transparent cache device, configured to confirm, according to an uplink request of a user, that it needs to provide service for the user's request, mark the uplink request with an identifier, send the uplink request to a network device, receive the downlink data corresponding to the uplink request sent by the network device, process the downlink data, and forward the processed downlink data to the user; and
the network device, configured to recognize the identifier carried by the uplink request, record the characteristics of the uplink request, forward the uplink request to a source server, identify, according to the characteristics of the uplink request, the downlink data sent by the source server that corresponds to the uplink request, and send the identified downlink data to the transparent cache device;
wherein the network device is configured to unpack the uplink request, and record the flow characteristics of the uplink request when it recognizes that the uplink request carries the identifier, where the flow characteristics include an IP five-tuple.
6. The system of claim 5,
the transparent cache device is configured to determine, according to the uniform resource locator in the user's uplink request and the cacheability of the page object, whether to provide service for the user's current request; if the uniform resource locator in the request matches a preset uniform resource locator and the page object is cacheable, it confirms that service is to be provided for the user's current request.
7. The system of claim 5,
the transparent cache device is configured to mark the uplink request in a reserved field of the transmission control protocol (TCP) packet header.
8. The system of claim 5,
the network device is configured to extract the flow characteristics of the downlink data sent by the source server, match the flow characteristics of the downlink data with the flow characteristics of the uplink request, and determine the matched downlink data as the downlink data corresponding to the uplink request.
CN201610925870.9A 2016-10-31 2016-10-31 Method and system for realizing transparent cache Active CN108023900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610925870.9A CN108023900B (en) 2016-10-31 2016-10-31 Method and system for realizing transparent cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610925870.9A CN108023900B (en) 2016-10-31 2016-10-31 Method and system for realizing transparent cache

Publications (2)

Publication Number Publication Date
CN108023900A CN108023900A (en) 2018-05-11
CN108023900B true CN108023900B (en) 2020-11-27

Family

ID=62069582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610925870.9A Active CN108023900B (en) 2016-10-31 2016-10-31 Method and system for realizing transparent cache

Country Status (1)

Country Link
CN (1) CN108023900B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102771090A (en) * 2009-12-23 2012-11-07 思杰系统有限公司 Systems and methods for policy based transparent client IP insertion
CN103460735A (en) * 2011-04-12 2013-12-18 瑞典爱立信有限公司 Providing information to core network relating to cache in access network
CN103475626A (en) * 2012-06-07 2013-12-25 华为技术有限公司 Method, equipment and system used for resource requesting
CN103974057A (en) * 2013-01-24 2014-08-06 华为技术有限公司 Video quality user experience value evaluation method, device and system
CN105959228A (en) * 2016-06-23 2016-09-21 华为技术有限公司 Flow processing method and transparent cache system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677018B2 (en) * 2008-08-25 2014-03-18 Google Inc. Parallel, side-effect based DNS pre-caching
WO2013034195A1 (en) * 2011-09-09 2013-03-14 Telefonaktiebolaget L M Ericsson (Publ) Differentiated handling of data traffic with user-class dependent adaptation of network address lookup

Also Published As

Publication number Publication date
CN108023900A (en) 2018-05-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant