CN103416027B - Cache optimization method, cache device, and cache optimization system - Google Patents

Cache optimization method, cache device, and cache optimization system

Info

Publication number
CN103416027B
CN103416027B (application CN201280000262.7A)
Authority
CN
China
Prior art keywords
content
url
response message
cache
caching
Prior art date
Legal status
Active
Application number
CN201280000262.7A
Other languages
Chinese (zh)
Other versions
CN103416027A (en)
Inventor
欧雄兵
顾纳
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN103416027A publication Critical patent/CN103416027A/en
Application granted granted Critical
Publication of CN103416027B publication Critical patent/CN103416027B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574: Browsing optimisation of access to content, e.g. by caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a cache optimization method, a cache device, and a cache optimization system. The cache optimization method includes: a cache node receives multiple service requests from a terminal for a first uniform resource locator (URL) and, based on the multiple service requests, receives multiple first response messages from a content source; when the multiple first response messages carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, the cache node determines a first signature value for the content contained in each of the first response messages. The cache node compares the multiple first signature values: if they are identical, it caches the content contained in the first response messages together with the first URL; if they are not identical, it does not cache the content contained in the first response messages. The cache optimization method, cache device, and cache optimization system of the invention can determine, based on signature values of the requested service content, which content is cached on the local cache device, thereby reducing bandwidth and improving the service experience of the terminal.

Description

Cache optimization method, cache device, and cache optimization system
Technical field
Embodiments of the present invention relate to the communications field, and more particularly, to a cache optimization method, a cache device, and a cache optimization system.
Background
In recent years the development of the Internet has had a tremendous impact on society and on people's lives: the number of Internet users, the variety of Internet applications, and network bandwidth have all grown explosively. These applications bring customers a rich Internet experience, but they also cause network traffic to surge. To meet ever-growing user demand, network operators and administrators mainly rely on methods such as expanding bandwidth, controlling traffic, introducing web caches, and introducing content sources. Compared with the limited improvement brought by bandwidth expansion, traffic control, web caching, and content sources, telecom operators can also choose to deploy cache nodes in the network to cache popular content, reducing bandwidth and improving service quality so as to achieve low-cost operation; that is, operators actively deploy OTT (over the top) cache nodes in their networks.
Generally, OTT cache nodes are deployed on the Internet egress side of the network, and policy-based routes are configured on the routers so that Hypertext Transfer Protocol (HTTP) traffic is routed to the cache nodes. If a cache node has cached the content requested by a terminal, the cache node serves the request directly; if the cache node has not cached the requested content, the cache node requests the content from the content source on behalf of the terminal and then delivers it to the terminal.
After an OTT cache node obtains content from a content source over HTTP, if it finds that the HTTP response message returned by the content source carries a cache-control header field indicating that the content may be cached, it caches the content according to that header field; if it finds that the HTTP response message returned by the content source carries no cache-control header field, or carries a cache-control header field indicating that caching is not allowed, it does not cache the content by default.
In general, a service provider/content provider (SP/CP), out of concern for protecting its own services and content, is reluctant to let the operator's network cache its content; by default its response messages therefore often carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed. If the OTT cache node always decides on caching and refreshing strictly according to the cache-control header field, caching will be ineffective: a large number of service requests still have to go to the content source, and neither bandwidth reduction nor an improved terminal service experience is achieved.
Alternatively, the operator can perform an offline analysis of whether the relatively high-traffic content provided by the SP/CP can be cached, and then configure the analysis result as a policy on the OTT cache nodes. However, refreshing a configured policy is neither flexible nor timely, so it cannot quickly track changes in the service; once the content source changes its rules, service quality may be affected.
Summary of the invention
Embodiments of the present invention provide a cache optimization method, a cache device, and a cache optimization system, which can avoid the problem that the cache-control header field in the HTTP response header completely determines whether to cache and thereby affects service quality.
In one aspect, a cache optimization method is provided, including: receiving multiple service requests from a terminal for a first uniform resource locator (URL), and receiving multiple first response messages from a content source based on the multiple service requests; when the multiple first response messages carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, determining a first signature value for the content contained in each of the multiple first response messages; comparing the multiple first signature values; if they are identical, caching the content contained in the first response messages and the first URL; if they are not identical, not caching the content contained in the first response messages.
In another aspect, a cache device is provided, including: a processing unit, configured to receive multiple service requests from a terminal for a first URL, to receive multiple first response messages from a content source based on the multiple service requests, and, when the multiple first response messages carry no cache-control header field or carry a cache-control header field indicating that caching is not allowed, to determine a first signature value for the content contained in each of the multiple first response messages; and a comparing unit, configured to compare the multiple first signature values; if they are identical, the cache device caches the content contained in the first response messages and the first URL; if they are not identical, the cache device does not cache the content contained in the first response messages.
In yet another aspect, a cache optimization system is provided, including: multiple cache devices; a service management device; and an Extensible Messaging and Presence Protocol (XMPP) server, configured to obtain the caching rules published by the service management device and the analysis results published by each cache device as to whether a uniform resource locator (URL) can be cached, so that the service management device learns, through the XMPP server, the analysis results published by each cache device as to whether the URL can be cached, and each cache device learns, through the XMPP server, the caching rules published by the service management device and the analysis results published by the other cache devices as to whether the URL can be cached.
The cache optimization method, cache device, and cache optimization system of the embodiments of the present invention can determine, based on signature values of the requested service content, which content is cached on the local cache device, thereby reducing bandwidth and improving the service experience of the terminal.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a flowchart of a cache optimization method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a cache device according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a cache device according to another embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a cache device according to still another embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a cache device according to yet another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a cache device according to a further embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a cache optimization system according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a cache optimization method, that is, a method for actively caching content that is simple to implement and transparent to terminals, content sources, and operators. With the method of the embodiments of the present invention, a popular uniform resource locator (URL) can be analyzed automatically, and whether its content can be cached is judged by analyzing the response messages corresponding to multiple requests for the URL. Furthermore, cached content can also be identified by the signature value of the actual content, so the proportion of requests served from the cache can be raised even when URLs change. Relatively popular content that can in fact be cached is thus kept on the cache node, which reduces bandwidth and improves the service experience of the terminal. Moreover, the solution of the present invention can quickly track changes in the service, so that popular content is terminated locally as early as possible.
The cache optimization method according to an embodiment of the present invention is described in detail below with reference to Fig. 1.
Policy-based routing (PBR) is configured on the router so that the traffic with which users access external servers is diverted to the cache node; HTTP traffic is thus drained from the router to the cache node, and the cache node then downloads the content from the external server on behalf of the user and delivers it to the user. That is, a user's HTTP access request is drawn to the cache node through PBR; if the cache node has not locally cached the content requested by the user, it initiates an HTTP access request to the external server on behalf of the user; the external server returns a response message to the cache node according to the HTTP access request; and the cache node then returns the response message to the user.
While the cache node delivers content to users, for each received HTTP request, if the corresponding content is not cached locally, the cache node needs to request the content from the SP/CP's content source server. For a response message received from the content source server, the cache node needs to check its cache-control header field. If the cache-control header field indicates that the content may be cached, the content can be cached directly on the cache node. If the response message carries no cache-control header field, or carries a cache-control header field indicating that caching is not allowed, the cache node analyzes popular URLs to judge whether the content contained in the response message can be cached.
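As a rough illustration of the header check described above, the following Python sketch classifies a response by its Cache-Control header. The HTTP directive names are standard; the function name and the tri-state return convention are illustrative choices, not part of the patent.

```python
from typing import Optional

def explicitly_cacheable(response_headers: dict) -> Optional[bool]:
    """Classify a response by its Cache-Control header.

    True  -> origin explicitly allows caching
    False -> origin explicitly forbids caching
    None  -> no cache-control header at all (the case the active
             analysis described here is meant to handle)
    """
    cache_control = response_headers.get("Cache-Control")
    if cache_control is None:
        return None
    directives = {d.strip().lower() for d in cache_control.split(",")}
    if {"no-store", "no-cache", "private"} & directives:
        return False
    if "public" in directives or any(d.startswith("max-age") for d in directives):
        return True
    return None
```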
In Fig. 1, the cache optimization method includes the following steps:
11. The cache node receives multiple service requests from a terminal for a first uniform resource locator (URL), and receives multiple first response messages from a content source (for example, a content source server) based on the multiple service requests. If the multiple first response messages carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, the cache node determines a first signature value for the content contained in each of the multiple first response messages.
That is, when the terminal initiates a service request for the first URL to the cache node, the cache node receives a first response message from the content source. If the first response message carries no cache-control header field, or carries a cache-control header field indicating that caching is not allowed, the cache node determines the first signature value of the content contained in the first response message. It can be understood that the first signature value may be determined over the entire content contained in the first response message or over only part of it. At this point, the cache node does not yet store the first URL or its corresponding content. If the terminal later initiates further service requests for the first URL to the cache node, the cache node again computes first signature values from the corresponding content; the first URL and its corresponding content are still not stored.
In general, because a terminal initiates many service requests to the cache node, many URLs are involved. Preferably, before the cache node receives multiple service requests from the terminal for the first URL, the popularity of the first URL may be counted first. If the popularity of the first URL has not reached a threshold, the cache node does not need to consider storing the first URL and its corresponding service content; only when the popularity of the first URL exceeds the threshold, and the cache node determines that the first URL is not statically configured and is not in a blacklist, does it carry out the signature-computation operation described above. In this way, the cache node actively analyzes URLs with high access popularity and compares the results of multiple requests: if they are consistent it forces local caching, and if they are inconsistent it does not cache and puts the URL into a no-cache blacklist, which avoids repeated analysis.
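A minimal sketch of the popularity gate described above, assuming an in-memory counter; the threshold value and the names HOT_THRESHOLD and should_analyze are illustrative and not taken from the patent.

```python
from collections import Counter

HOT_THRESHOLD = 100      # illustrative popularity threshold
url_hits = Counter()     # per-URL request counter
static_no_cache = set()  # URLs statically configured as non-cacheable
blacklist = set()        # URLs already found to be non-cacheable

def should_analyze(url: str) -> bool:
    """Decide whether a URL is popular enough to enter signature analysis."""
    url_hits[url] += 1
    if url_hits[url] < HOT_THRESHOLD:
        return False                               # not popular enough yet
    if url in static_no_cache or url in blacklist:
        return False                               # already ruled out
    return True                                    # start comparing signature values
```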
It can be seen from the above that when the cache node has determined multiple first signature values, it can decide, based on the multiple first signature values of the same first URL, whether to cache the first URL and its corresponding content.
12. The cache node compares the multiple first signature values; if they are identical, it caches the content contained in the first response messages and the first URL; otherwise, it does not cache the content contained in the first response messages.
Once the cache node decides that the content corresponding to the first URL can be cached, it caches the first URL and the content corresponding to the first URL locally. When the terminal initiates a service request for the first URL again, there is no need to wait for the SP/CP's content source server to return the relevant content; the required content is obtained directly from the local cache device. In other words, the cache optimization method of the embodiment of the present invention can determine, based on the signature values of the requested service content, which content is cached on the local cache device, thereby reducing bandwidth and improving the service experience of the terminal.
Specifically, for access requests for a certain URL (for example, the first URL), if the response messages returned by the content source server carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, the cache node does not cache the content carried in those response messages, but it does count the popularity of the first URL in real time based on the number of accesses. After the popularity of the first URL reaches a configured threshold, the cache node checks whether a statically configured cache policy matches the first URL; if so, the URL is handled according to the corresponding static policy. Static configuration means that if certain URLs are clearly known to be dynamic URLs that cannot be cached, a static cache policy can be configured for them to explicitly exclude them from caching. When the cache node determines that the first URL has no statically configured cache policy, it then checks whether the first URL is in the no-cache blacklist; if it is in the blacklist, no further processing is performed, otherwise processing continues as follows:
When the cache node again receives a service request for the first URL, because the corresponding content is not cached locally, the cache node still needs to obtain the content from the content source server. While obtaining the content, it generates a signature value for the content obtained this time. The algorithm for generating the signature value can be chosen from several options, for example Message Digest Algorithm 5 (MD5) or a Secure Hash Algorithm (SHA). When computing the signature value, a single signature value may be generated over the entire content returned by the content source server, or the content may be sampled to generate one or more signature values in order to reduce computation; for example, the signature value may be computed simply over the first 16 KB of the returned content, or separate signature values may be computed over the first 16 KB and the last 16 KB. The computed signature value is stored as an attribute of the content obtained from the content source server; at this point, the content obtained from the content source server is still not stored locally. Later, when the cache node again receives a service request for the first URL, because the corresponding content is still not cached locally, the cache node again obtains the content from the content source server, computes a signature value with the algorithm above while downloading, and compares this signature value with the previously computed one. If the signature values of the content downloaded from the content source server are consistent several times in a row (for example, 2 or 3 times), the content corresponding to the first URL is considered cacheable, that is, the content obtained from the content source server can now be stored locally, and subsequent service requests for the first URL can be served directly from the locally cached content. If the signature values are inconsistent, for example the signature values of two consecutive computations differ, the content obtained from the content source server differs from request to request, so the content of the first URL should not be cached locally; the first URL is put into the no-cache blacklist, or its popularity count is reduced, to avoid the subsequent cacheability judgment.
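The following sketch illustrates the sampled-signature scheme and the "consistent several times in a row" decision just described. MD5 is used here only because it is one of the algorithms the text names; the 16 KB sample size follows the example in the text, while the class, the method names, and the required number of matches are illustrative assumptions.

```python
import hashlib

SAMPLE_BYTES = 16 * 1024   # sample size from the example above
REQUIRED_MATCHES = 3       # e.g. 2 or 3 consecutive identical signatures

def partial_signature(content: bytes) -> str:
    """Signature over the first 16 KB and, when present, the last 16 KB."""
    h = hashlib.md5()
    h.update(content[:SAMPLE_BYTES])
    if len(content) > SAMPLE_BYTES:
        h.update(content[-SAMPLE_BYTES:])
    return h.hexdigest()

class UrlAnalysis:
    """Tracks signature comparisons for one URL that is not yet cached."""

    def __init__(self):
        self.last_signature = None
        self.match_count = 0

    def observe(self, content: bytes) -> str:
        """Return 'cache', 'blacklist', or 'wait' after one more download."""
        sig = partial_signature(content)
        if self.last_signature is None:
            self.last_signature, self.match_count = sig, 1
            return "wait"                 # first sample, nothing to compare yet
        if sig != self.last_signature:
            return "blacklist"            # content differs between downloads
        self.match_count += 1
        return "cache" if self.match_count >= REQUIRED_MATCHES else "wait"
```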
To further optimize the above caching method, a time to live (TTL) can be set for the content contained in the cached first response message, so that when the TTL expires, the content contained in the cached first response message is forcibly deleted. Specifically, different TTL values are set for different content cached locally. When the TTL expires, the locally cached content is forcibly deleted, and when a new service request arrives the content is requested from the content source server again. This avoids the low-probability case in which the actual content has changed slightly while the signature value computed over partial content happens to remain consistent, and thereby ensures that the cached content stays consistent with the content on the content source server.
The TTL can be set according to the following principles. It may be set per domain name: for example, for a domain name that mainly provides video content, whose content changes infrequently, a longer TTL can be set; for a domain name that mainly provides news, whose content changes more frequently, a relatively shorter TTL is set. It may be set according to the filename extension of the requested content: different extensions correspond in practice to different content types, so different TTLs can be set. Of course, domain name and extension can also be combined: for different domain names, content with the same extension may be given different TTL values.
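A simple sketch of TTL selection by domain name and extension as described above; the domains, extensions, and TTL values are made-up examples rather than values specified in the patent.

```python
# Illustrative per-(domain, extension) TTL table, in seconds.
TTL_RULES = {
    ("video.example.com", ".mp4"): 7 * 24 * 3600,  # video content changes rarely
    ("news.example.com", ".html"): 10 * 60,        # news pages change often
}
DEFAULT_TTL = 3600

def ttl_for(domain: str, extension: str) -> int:
    """Pick a TTL by domain plus extension, falling back to a per-domain default."""
    ttl = TTL_RULES.get((domain, extension))
    if ttl is not None:
        return ttl
    if domain.startswith("video."):
        return 24 * 3600   # longer TTL for video-oriented domains
    return DEFAULT_TTL
```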
While serving from the forcibly cached local content, the cache node continues to send requests carrying an "If-Modified-Since" header field or an "If-None-Match" header field to the content source server, which allows the content source server to keep recording user access logs and also helps ensure that the content is refreshed in time.
In some scenarios the SP/CP needs to see the original user's request in order to collect the user's access log. In that case the cache node can simulate the terminal and initiate a service request to the content source server, so that the content source server checks, according to the entity value information of the requested variable carried in the service request, whether the cached content has changed.
For example, the If-Modified-Since header field carries the generation time of the cached content, and the content source server can judge from this information whether the cached content has been refreshed; or the If-None-Match header field carries the entity tag (ETag) signature information of the cached content, and the content source server can judge from this signature information whether the cached content has been refreshed.
Specifically, the HTTP GET request message sent by the cache node carries an If-Modified-Since header field or an If-None-Match header field, and the content source server can check directly, based on the time information carried in the If-Modified-Since header field or the ETag signature information of the requested variable carried in the If-None-Match header field, whether the content has changed. If it has not changed, the server can directly reply with a "304 Not Modified" response message that carries no actual data, which likewise reduces bandwidth.
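A hedged sketch of the conditional revalidation request described above, using Python's standard http.client module; the host, path, and header values are placeholders, and a real cache node would send whatever timestamp and ETag it stored for the cached object.

```python
import http.client

def revalidate(host: str, path: str, cached_etag: str, cached_mtime: str) -> bool:
    """Return True if the locally cached copy is still fresh (304 received)."""
    conn = http.client.HTTPConnection(host, timeout=5)
    conn.request("GET", path, headers={
        "If-None-Match": cached_etag,        # e.g. '"5d8c72a5edda8"'
        "If-Modified-Since": cached_mtime,   # e.g. 'Tue, 31 Jan 2012 08:00:00 GMT'
    })
    resp = conn.getresponse()
    resp.read()    # drain the (empty) body
    conn.close()
    # 304 Not Modified: serve from the local cache, nothing is re-downloaded,
    # yet the content source server still sees and can log the request.
    return resp.status == 304
```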
According to the cache optimization method shown in Fig. 1, once the cache node determines that the content corresponding to the first URL can be cached, it caches the first URL and the content corresponding to the first URL locally. When the terminal initiates a service request for the first URL again, there is no need to wait for the SP/CP's content source server to return the relevant content; the required content is obtained directly from the local cache device.
In actual deployments, however, the SP/CP, in order to protect its own services and content, not only by default omits the cache-control header field or carries a cache-control header field indicating that caching is not allowed in its response messages, but may also vary the URL for the same content, that is, the SP/CP issues different URLs for the same content to different users. From the point of view of the requests received by the cache node, each request's URL is different, so simply looking up cached content by URL will essentially never produce a local hit.
For this situation, active caching can still be achieved by using signature values, thereby reducing bandwidth and improving the terminal's service experience. The detailed flow is as follows.
First, after obtaining content from the content source server, the cache node also computes signature values over parts of the content, for example a signature value over the first 16 KB and a signature value over the last 16 KB. The computed signature values and the content are all stored locally, and a correspondence is established between the signature value of the first 16 KB and the content; that is, in addition to indexing cached content by URL, the cached content can also be indexed by signature value.
When the cache node receives a new access request corresponding to a second URL, it first looks up, by the second URL, whether content corresponding to the requested second URL is cached locally; if so, it serves the request from the cache. If not, it first requests the first 16 KB of the content from the content source server; requesting the first 16 KB can be achieved by carrying a Range header field in the HTTP GET request message to indicate the requested range, for example: Range: bytes=0-16383.
After the cache node receives the first 16 KB of content, it computes a signature value over these 16 KB and uses the computed signature value as an index to search the local cache. If no match is found, the requested content is not cached locally, so the cache node must continue to request the remaining content from the content source server and deliver it to the terminal; requesting the remaining content can likewise be achieved by carrying a Range header field in the HTTP GET request message, for example: Range: bytes=16384-. While requesting the content from the content source server, the signature value of the last 16 KB is computed again, and the requested content together with its signature values is cached locally.
If a locally cached content item is found whose signature value equals the signature value computed over these first 16 KB, the locally cached content may be exactly the content of this service request. For further confirmation, an additional check can be made: the cache node requests the last 16 KB of the content from the content source server, again by carrying a Range header field in the HTTP GET request message, for example: Range: bytes=-16384. After the cache node receives the last 16 KB, it also computes a signature value over it and compares the result with the signature value of the last 16 KB of the locally cached content that was indexed by the signature of the first 16 KB. If they are consistent, the locally cached content can be used to serve the end user; if they are inconsistent, the requested content is not cached locally, so the cache node must continue to request the remaining content from the content source server and deliver it to the terminal, again using a Range header field in the HTTP GET request message, for example: Range: bytes=16384-.
It can be seen from the description above that, besides indexing cached content by the URL of the original request, content can also be indexed by the signature value of the content. Processed in this way, for popular content, even if the requested URLs differ, the locally cached content can still be used to serve requests once the content has been identified by its signature value.
In practice, to improve performance, the generation and verification of the signature value of the last 16 KB can also be omitted, that is, the content is identified only by the signature value of the first 16 KB. Of course, the content used to compute the signature value can be varied in many ways in practice; for example, the original file can be divided into slices, and some bytes can be extracted from each slice and combined before the signature value is computed.
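The sketch below follows the simplified variant just mentioned (identifying content by the signature of the first 16 KB only) and shows how a cache node might combine Range probes with a signature-indexed cache. It assumes objects larger than 16 KB; the dictionary layout, host handling, and function names are illustrative, not taken from the patent.

```python
import hashlib
import http.client

SAMPLE = 16 * 1024
cache_by_signature = {}   # md5(first 16 KB) -> full cached body

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def fetch_range(host: str, path: str, byte_range: str) -> bytes:
    """Fetch part of an object, e.g. byte_range='bytes=0-16383'."""
    conn = http.client.HTTPConnection(host, timeout=5)
    conn.request("GET", path, headers={"Range": byte_range})
    data = conn.getresponse().read()
    conn.close()
    return data

def serve(host: str, path: str) -> bytes:
    """Serve a request whose URL misses the cache, using signature lookup."""
    head = fetch_range(host, path, "bytes=0-16383")    # probe the first 16 KB
    hit = cache_by_signature.get(md5_hex(head))
    if hit is not None:
        return hit                                     # same content under a different URL
    rest = fetch_range(host, path, "bytes=16384-")     # download the remainder
    body = head + rest
    cache_by_signature[md5_hex(body[:SAMPLE])] = body  # index by first-16 KB signature
    return body
```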
In summary, optionally, besides caching the content contained in the first response message and the first URL, the cache node also establishes and caches the correspondence between the content contained in the cached first response message and the first signature value. Then, when the cache node receives a service request from the terminal for a second URL, if the second URL is identical to the cached first URL, it sends the terminal the content contained in the cached first response message corresponding to the first URL; otherwise, based on the service request for the second URL and the correspondence between the content contained in the cached first response message and the first signature value, it receives a second response message from the content source (server) and determines a second signature value of the content contained in the second response message. The cache node compares the second signature value with the first signature value; if they are identical, it sends the terminal the content contained in the cached first response message; otherwise, it sends the terminal the entire content contained in the second response message received from the content source (server).
Fig. 2 shows a cache device according to an embodiment of the present invention. In Fig. 2, the cache device 20 includes a processing unit 21 and a comparing unit 22. The processing unit 21 is configured to receive multiple service requests from a terminal for a first URL, and to receive multiple first response messages from a content source based on the multiple service requests; when the multiple first response messages carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, it determines a first signature value for the content contained in each of the multiple first response messages. For example, the processing unit 21 may determine the first signature value from part of the content contained in the first response message, or from the entire content contained in the first response message. The comparing unit 22 is configured to compare the multiple first signature values; if they are identical, the cache device caches the content contained in the first response messages and the first URL; otherwise, the cache device does not cache the content contained in the first response messages.
Therefore, the cache device of the embodiment of the present invention can determine, based on the signature values of the requested service content, which content is cached on the local cache device, thereby reducing bandwidth and improving the service experience of the terminal.
Fig. 3 shows a cache device according to another embodiment of the present invention. In Fig. 3, in addition to the processing unit 21 and the comparing unit 22, the cache device 30 further includes a statistics unit 23 and a determining unit 24. The statistics unit 23 is configured to count the popularity of the URL. The determining unit 24 is configured to determine, when the popularity of the URL exceeds a threshold, that the URL is not statically configured and is not in a blacklist.
Fig. 4 shows a cache device according to still another embodiment of the present invention. In Fig. 4, in addition to the processing unit 21, the comparing unit 22, the statistics unit 23, and the determining unit 24, the cache device 40 further includes a setting unit 25. The setting unit 25 is configured to set a time to live (TTL) for the content contained in the cached first response message, so that when the TTL expires, the content contained in the cached first response message is forcibly deleted.
Fig. 5 shows a cache device according to yet another embodiment of the present invention. In Fig. 5, in addition to the processing unit 21, the comparing unit 22, the statistics unit 23, the determining unit 24, and the setting unit 25, the cache device 50 further includes a simulation unit 26. The simulation unit 26 is configured to simulate the terminal and initiate a service request to the content source, so that the content source checks, according to the entity value information of the requested variable carried in the service request, whether the content has changed.
Fig. 6 shows a cache device according to a further embodiment of the present invention. In Fig. 6, in addition to the processing unit 21, the comparing unit 22, the statistics unit 23, the determining unit 24, the setting unit 25, and the simulation unit 26, the cache device 60 further includes an establishing unit 27, which is configured to establish the correspondence between the content contained in the cached first response message and the first signature value. The processing unit 21 is then further configured to receive a service request from the terminal for a second URL; if the second URL is identical to the cached first URL, it sends the terminal the content contained in the cached first response message corresponding to the first URL; otherwise, it receives a second response message from the content source based on the service request for the second URL and the correspondence, and determines a second signature value of the content contained in the second response message. The comparing unit 22 is further configured to compare the second signature value with the first signature value; if they are identical, the cache device sends the content contained in the cached first response message to the terminal; otherwise, the cache device sends the terminal the entire content contained in the second response message received from the content source.
In summary, the cache device of the embodiment of the present invention can track changes in content popularity in real time and cache locally, as far as possible, content that is popular and can actually be cached, thereby reducing bandwidth and improving the service experience of the terminal. In addition, while serving from the forcibly cached local content, the cache device of the embodiment of the present invention continues, as needed, to send requests carrying If-Modified-Since or If-None-Match header fields to the content source server, so that the content source server can keep recording user access logs without having to return the actual content, which also improves the performance of the content source server.
Fig. 7 shows a cache optimization system according to an embodiment of the present invention. Multiple cache nodes share in real time their analysis results as to whether a URL can be cached, that is, cache policies can be shared across the whole network, which avoids repeated computation on multiple cache nodes, allows cacheable content to be cached locally as early as possible, and thereby improves the performance of the whole caching system.
For example, as shown in Fig. 7, the cache optimization system 70 according to an embodiment of the present invention includes multiple cache devices 71, a service management device 72, and an Extensible Messaging and Presence Protocol (XMPP) server 73. The multiple cache devices 71 may be the cache devices shown in Figs. 2 to 6. The XMPP server 73 can obtain the caching rules published by the service management device 72 and the analysis results published by each cache device 71 as to whether a URL can be cached, so that the service management device 72 learns, through the XMPP server 73, the analysis results published by each cache device 71 as to whether the URL can be cached, and each cache device 71 learns, through the XMPP server 73, the caching rules published by the service management device 72 and the analysis results published by the other cache devices as to whether the URL can be cached. Therefore, the cache optimization system of the embodiment of the present invention can determine, based on the signature values of the requested service content, which content is cached on the local cache device, thereby reducing bandwidth and improving the service experience of the terminal.
In practice, the analysis results of the individual cache nodes (for example, the cache devices 71) can be shared in order to optimize the overall caching performance. For example, the publication and sharing of cache analysis results can be implemented based on the XMPP publish/subscribe model. Specifically, the service management device 72 can act as a publisher and publish known caching rules to the XMPP server 73 according to its configuration; each cache node (for example, a cache device 71) can also act as a publisher and publish to the XMPP server 73, in real time, its own analysis results as to whether a URL can be cached. At the same time, the service management device 72 also acts as a subscriber and subscribes, at the XMPP server 73, to the analysis results published in real time by the cache nodes as to whether URLs can be cached; these results can help the operator perform deeper analysis and draw conclusions, which may eventually be formed into static rules configured on all cache nodes. Each cache node (for example, a cache device 71) likewise also acts as a subscriber, subscribing at the XMPP server 73 to the caching rules delivered by the service management device 72 and to the analysis results published by the other cache nodes as to whether URLs can be cached. For example, if a cache node (for example, a cache device 71) learns from other cache nodes that a certain URL can be cached while it currently treats that URL as non-cacheable, it can cache the URL locally directly once the URL's popularity reaches the threshold, without computing and verifying consistency; if a cache node (for example, a cache device 71) learns from other cache nodes that a certain URL cannot be cached, it directly puts the URL into the no-cache blacklist. By sharing analysis results among multiple cache nodes in this way, the amount of verification and comparison in the whole network can be reduced, improving the performance of the whole system.
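The following sketch abstracts the XMPP publish/subscribe transport behind a simple in-process callback bus and only illustrates how a cache node might apply verdicts shared by its peers; it is not an XMPP client implementation, and all class and attribute names are assumptions.

```python
class AnalysisBus:
    """Stand-in for the XMPP publish/subscribe service: fans out URL verdicts."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, url: str, cacheable: bool):
        for callback in self.subscribers:
            callback(url, cacheable)

class CacheNode:
    def __init__(self, bus: AnalysisBus):
        self.force_cacheable = set()   # URLs peers found cacheable
        self.blacklist = set()         # URLs peers found non-cacheable
        self.bus = bus
        bus.subscribe(self.on_peer_verdict)

    def on_peer_verdict(self, url: str, cacheable: bool):
        # Apply a peer's analysis result without re-running local verification.
        (self.force_cacheable if cacheable else self.blacklist).add(url)

    def publish_own_verdict(self, url: str, cacheable: bool):
        self.bus.publish(url, cacheable)
```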
A person of ordinary skill in the art may be aware that the units and algorithm steps of each example described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present invention.
It can be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections that are shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceived by a person familiar with the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A cache optimization method, characterized by comprising:
counting the popularity of a first uniform resource locator (URL);
if the popularity of the first URL exceeds a threshold, further determining whether the first URL is statically configured and whether it is in a blacklist;
when the popularity of the first URL exceeds the threshold, and it is determined that the first URL is not statically configured and is not in the blacklist, receiving multiple service requests from a terminal for the first uniform resource locator (URL), and receiving multiple first response messages from a content source based on the multiple service requests; when the multiple first response messages carry no cache-control header field, or carry a cache-control header field indicating that caching is not allowed, determining a first signature value for the content contained in each of the multiple first response messages;
comparing the multiple first signature values; if they are identical, caching the content contained in the first response messages and the first URL; if they are not identical, not caching the content contained in the first response messages.
2. The method according to claim 1, characterized in that the determining a first signature value for the content contained in each of the multiple first response messages comprises:
determining the first signature value from part of the content contained in the first response message; or
determining the first signature value from the entire content contained in the first response message.
3. The method according to claim 1, characterized in that, after the caching of the content contained in the first response message, the method further comprises:
setting a time to live (TTL) for the content contained in the cached first response message, so that when the TTL expires, the content contained in the cached first response message is forcibly deleted.
4. The method according to any one of claims 1 to 3, characterized by further comprising:
simulating the terminal to initiate a service request to the content source, so that the content source checks, according to the entity value information of the requested variable carried in the service request, whether the cached content has changed.
5. The method according to claim 4, characterized by further comprising:
establishing a correspondence between the content contained in the cached first response message and the first signature value.
6. The method according to claim 5, characterized by further comprising:
receiving a service request from the terminal for a second URL; if the second URL is identical to the cached first URL, sending the terminal the content contained in the cached first response message corresponding to the first URL; otherwise, receiving a second response message from the content source based on the service request for the second URL and the correspondence, and determining a second signature value of the content contained in the second response message;
comparing the second signature value with the first signature value; if they are identical, sending the content contained in the cached first response message to the terminal; if they are not identical, sending the terminal the entire content contained in the second response message received from the content source.
7. A cache device, characterized by comprising:
a statistics unit, configured to count the popularity of a first uniform resource locator (URL);
a determining unit, configured to further determine, if the popularity of the first URL exceeds a threshold, whether the first URL is statically configured and whether it is in a blacklist;
a processing unit, configured to, when the popularity of the first URL exceeds the threshold and it is determined that the first URL is not statically configured and is not in the blacklist, receive multiple service requests from a terminal for the first uniform resource locator (URL), and receive multiple first response messages from a content source based on the multiple service requests; and, when the multiple first response messages carry no cache-control header field or carry a cache-control header field indicating that caching is not allowed, determine a first signature value for the content contained in each of the multiple first response messages;
a comparing unit, configured to compare the multiple first signature values; if they are identical, the cache device caches the content contained in the first response messages and the first URL; if they are not identical, the cache device does not cache the content contained in the first response messages.
8. The cache device according to claim 7, characterized in that the processing unit is specifically configured to:
determine the first signature value from part of the content contained in the first response message; or
determine the first signature value from the entire content contained in the first response message.
9. The cache device according to claim 7, characterized by further comprising a setting unit, configured to set a time to live (TTL) for the content contained in the cached first response message, so that when the TTL expires, the content contained in the cached first response message is forcibly deleted.
10. The cache device according to any one of claims 7 to 9, characterized by further comprising:
a simulation unit, configured to simulate the terminal to initiate a service request to the content source, so that the content source checks, according to the entity value information of the requested variable carried in the service request, whether the cached content has changed.
11. The cache device according to claim 10, characterized by further comprising:
an establishing unit, configured to establish a correspondence between the content contained in the cached first response message and the first signature value.
12. The cache device according to claim 11, characterized in that:
the processing unit is further configured to receive a service request from the terminal for a second URL; if the second URL is identical to the cached first URL, send the terminal the content contained in the cached first response message corresponding to the first URL; otherwise, receive a second response message from the content source based on the service request for the second URL and the correspondence, and determine a second signature value of the content contained in the second response message;
the comparing unit is further configured to compare the second signature value with the first signature value; if they are identical, the cache device sends the content contained in the cached first response message to the terminal; if they are not identical, the cache device sends the terminal the entire content contained in the second response message received from the content source.
13. A cache optimization system, characterized by comprising:
multiple cache devices according to any one of claims 7 to 12;
a service management device; and
an Extensible Messaging and Presence Protocol (XMPP) server, configured to obtain the caching rules published by the service management device and the analysis results published by each of the cache devices as to whether a uniform resource locator (URL) can be cached, so that the service management device learns, through the XMPP server, the analysis results published by each of the cache devices as to whether the URL can be cached, and each of the cache devices learns, through the XMPP server, the caching rules published by the service management device and the analysis results published by the other cache devices as to whether the URL can be cached.
CN201280000262.7A 2012-01-31 2012-01-31 Cache optimization method, cache device, and cache optimization system Active CN103416027B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/070791 WO2013113150A1 (en) 2012-01-31 2012-01-31 Cache optimization method, cache and cache optimization system

Publications (2)

Publication Number Publication Date
CN103416027A CN103416027A (en) 2013-11-27
CN103416027B true CN103416027B (en) 2017-06-20

Family

ID=48904361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280000262.7A Active CN103416027B (en) 2012-01-31 2012-01-31 Cache optimization method, cache device, and cache optimization system

Country Status (2)

Country Link
CN (1) CN103416027B (en)
WO (1) WO2013113150A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789857B (en) * 2015-11-25 2020-08-14 中国移动通信集团公司 Information interaction method, equipment and cache system
CN105959361A (en) * 2016-04-25 2016-09-21 乐视控股(北京)有限公司 Task distribution method, task distribution device, and task distribution system
CN108471355A (en) * 2018-02-28 2018-08-31 哈尔滨工程大学 An Internet of Things information interoperability method based on a sea-cloud computing architecture
CN109788047B (en) * 2018-12-29 2021-07-06 山东省计算中心(国家超级计算济南中心) Cache optimization method and storage medium
CN110555184A (en) * 2019-09-06 2019-12-10 深圳市珍爱捷云信息技术有限公司 resource caching method and device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101473590A (en) * 2006-05-05 2009-07-01 奥多比公司 System and method for caching WEB files
CN101888349A (en) * 2009-05-13 2010-11-17 上海即略网络信息科技有限公司 Interworking gateway of MSN and XMPP

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100316288B1 (en) * 1999-08-28 2001-12-20 서평원 Wireless Internet Service Method In Gateway System
US20040010543A1 (en) * 2002-07-15 2004-01-15 Steven Grobman Cached resource validation without source server contact during validation
CN101706827B (en) * 2009-08-28 2011-09-21 四川虹微技术有限公司 Method for caching file of embedded browser
CN102096712A (en) * 2011-01-28 2011-06-15 深圳市五巨科技有限公司 Method and device for cache-control of mobile terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101473590A (en) * 2006-05-05 2009-07-01 奥多比公司 System and method for caching WEB files
CN101888349A (en) * 2009-05-13 2010-11-17 上海即略网络信息科技有限公司 Interworking gateway of MSN and XMPP

Also Published As

Publication number Publication date
CN103416027A (en) 2013-11-27
WO2013113150A1 (en) 2013-08-08

Similar Documents

Publication Publication Date Title
US10791201B2 (en) Server initiated multipath content delivery
US10237363B2 (en) Content delivery network request handling mechanism with cached control information
CN106031130B (en) Content distribution network framework with edge proxies
Zhang et al. A survey of caching mechanisms in information-centric networking
CN103329113B (en) Configuration is accelerated and custom object and relevant method for proxy server and the Dynamic Website of hierarchical cache
CN101409706B (en) Method, system and relevant equipment for distributing data of edge network
CN104714965B (en) Static resource De-weight method, static resource management method and device
US8566443B2 (en) Unobtrusive methods and systems for collecting information transmitted over a network
CN103001964B (en) Buffer memory accelerated method under a kind of LAN environment
CN104025521B (en) Content transmission system, optimize the method for network traffics in this system, central control unit and local cache device
CN106302445B (en) Method and apparatus for handling request
US20020138511A1 (en) Method and system for class-based management of dynamic content in a networked environment
CN103416027B (en) Cache optimization method, cache device, and cache optimization system
CN109491758A (en) Docker mirror image distribution method, system, data gateway and computer readable storage medium
CN104426718B (en) Data decryptor server, cache server and redirection method for down loading
CN101404585B (en) Policy system and method for implementing content distribution network content management
CN105100015B (en) A kind of method and device for acquiring internet access data
CN104640114B (en) A kind of verification method and device of access request
CN106603713A (en) Session management method and system
Hefeeda et al. Design and evaluation of a proxy cache for peer-to-peer traffic
CN101557427A (en) Method for providing diffluent information and realizing the diffluence of clients, system and server thereof
CN103888539B (en) Bootstrap technique, device and the P2P caching systems of P2P cachings
CN106131165B (en) Anti-stealing link method and device for content distributing network
CN105978936A (en) CDN server and data caching method thereof
US9055113B2 (en) Method and system for monitoring flows in network traffic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220209

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee after: Huawei Cloud Computing Technologies Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right