CN104980478B - Cache sharing method, device and system in a content distribution network - Google Patents

Cache sharing method, device and system in a content distribution network

Info

Publication number
CN104980478B
CN104980478B · CN201410231155.6A · CN201410231155A
Authority
CN
China
Prior art keywords
caching server
server
caching
url
service request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410231155.6A
Other languages
Chinese (zh)
Other versions
CN104980478A (en)
Inventor
李丛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Computer Systems Co Ltd
Original Assignee
Shenzhen Tencent Computer Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Computer Systems Co Ltd
Priority to CN201410231155.6A
Publication of CN104980478A
Application granted
Publication of CN104980478B
Legal status: Active


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/50 - Network services
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

Embodiments of the present invention provide a cache sharing method, device and system in a content distribution network. The method includes: a first caching server receives a service request and obtains the first URL of the content requested by the service request; the first caching server performs calculation processing on the first URL according to a calculation processing rule to obtain a first calculation result; the first caching server determines, according to a correspondence rule, the caching server identifier corresponding to the first calculation result and judges whether that identifier is consistent with the identifier of the first caching server; if so, the first caching server determines that the service request is to be processed by the first caching server; if not, the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier. The embodiments can improve the communication efficiency of the content distribution network.

Description

Cache sharing method, device and system in a content distribution network
Technical field
Embodiments of the present invention relate to communication technology, and in particular to a cache sharing method, device and system in a content distribution network.
Background art
A content delivery network (Content Delivery Network, CDN for short) adds a new layer of network architecture on top of the existing Internet Protocol (IP) transport network and publishes website content to the network "edge" closest to user terminals, so that a user terminal can obtain the required content nearby, which improves the response speed with which the user terminal accesses the network.
An existing CDN server cluster system includes multiple caching servers and an origin server. The multiple caching servers share their caches based on the Internet Cache Protocol (ICP). After any caching server receives a service request and finds that the request is not hit locally, it sends query messages to all the other caching servers and receives the response messages they return, in order to learn whether any other caching server stores the content corresponding to the service request. If one does, the caching server obtains the content from the caching server that stores it; if none of the other caching servers stores the content corresponding to the service request, the caching server sends a request to the origin server and obtains the content corresponding to the service request.
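As a rough illustration of this prior-art behaviour, the minimal sketch below (in Python, with hypothetical peer and origin objects exposing has_content and fetch methods; none of these names come from the patent) shows a local cache miss triggering a query to every other caching server before falling back to the origin server:

```python
def lookup_with_icp(local_cache: dict, peers: list, origin, url: str):
    """Prior-art ICP-style sharing: on a local miss, query every other caching server."""
    content = local_cache.get(url)
    if content is not None:
        return content                        # local cache hit
    for peer in peers:                        # send a query message to every peer ...
        if peer.has_content(url):             # ... and wait for its response
            content = peer.fetch(url)
            break
    if content is None:                       # no peer stores it: ask the origin server
        content = origin.fetch(url)
    local_cache[url] = content                # cache locally before returning to the user
    return content
```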
However, when the CDN equipment-room cluster system contains many caching servers, whenever a service request misses on one caching server, that caching server sends query messages to all the other caching servers and waits for all their responses. The signalling interaction is complex, access delay is introduced, and communication efficiency is low.
Summary of the invention
Embodiments of the present invention provide a cache sharing method, device and system in a content distribution network, so as to improve communication efficiency.
In a first aspect, an embodiment of the present invention provides a cache sharing method in a content distribution network. The server cluster system of the content distribution network includes an origin server and multiple caching servers, and each caching server stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier. The method includes:
the first caching server receives a service request and obtains the first URL of the content requested by the service request;
the first caching server performs calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
the first caching server determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judges whether the caching server identifier is consistent with the identifier of the first caching server;
if so, the first caching server determines that the service request is to be processed by the first caching server;
if not, the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier.
In a second aspect, an embodiment of the present invention provides a caching server applied to a content distribution network. The server cluster system of the content distribution network includes an origin server and multiple caching servers, and each caching server stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier. The caching server is a first caching server and includes:
a receiving module, configured to receive a service request and obtain the first URL of the content requested by the service request;
a processing module, configured to perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
an identifier judging module, configured to determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and to judge whether the caching server identifier is consistent with the identifier of the first caching server;
if so, to determine that the service request is to be processed by the first caching server;
if not, to determine that the service request is to be processed by the second caching server corresponding to the caching server identifier.
In a third aspect, an embodiment of the present invention provides a server cluster system of a content distribution network. The server cluster system includes an origin server and the caching server of the second aspect;
wherein the origin server is configured to:
perform calculation processing on the URLs of the content stored by the origin server according to the calculation processing rule for performing calculation processing on a uniform resource locator (URL), to obtain a first calculation result;
determine, according to the correspondence rule between the calculation result obtained by performing calculation processing on the URL and the caching server identifier, the caching server identifier corresponding to the first calculation result, and deliver the content corresponding to the URL to the caching server corresponding to that caching server identifier.
In a fourth aspect, an embodiment of the present invention provides a caching server applied to a content distribution network. The server cluster system of the content distribution network includes an origin server and multiple caching servers. The caching server is a first caching server and includes a network interface, a memory, a processor and a bus, the network interface, the memory and the processor each being connected to the bus, wherein:
the memory stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier;
the processor, via the bus, invokes the program stored in the memory to:
receive a service request through the network interface and obtain the first URL of the content requested by the service request;
perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judge whether the caching server identifier is consistent with the identifier of the first caching server;
if so, determine that the service request is to be processed by the first caching server;
if not, determine that the service request is to be processed by the second caching server corresponding to the caching server identifier.
With the cache sharing method, device and system in a content distribution network provided by the embodiments of the present invention, the first caching server receives a service request and obtains the first URL of the requested content; performs calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result; determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result; and judges whether that identifier is consistent with the identifier of the first caching server. If so, the first caching server determines that it processes the service request itself; if not, it determines that the second caching server corresponding to the identifier processes the service request. Because the first caching server can determine by calculation which caching server processes the request, cache sharing is achieved, the signalling interaction is simple, access delay is avoided, and communication efficiency is improved.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a CDN server cluster system of the present invention;
Fig. 2 is a schematic flowchart of embodiment one of the cache sharing method in a content distribution network of the present invention;
Fig. 3 is a schematic flowchart of embodiment two of the cache sharing method in a content distribution network of the present invention;
Fig. 4 is a schematic structural diagram of embodiment one of the caching server of the present invention;
Fig. 5 is a schematic structural diagram of embodiment two of the caching server of the present invention;
Fig. 6 is a schematic structural diagram of embodiment three of the caching server of the present invention.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a schematic structural diagram of a CDN server cluster system of the present invention. As shown in Fig. 1, the CDN server cluster system provided by this embodiment includes an origin server and multiple caching servers. Each caching server has two network interface cards, an intranet NIC and an extranet NIC. The intranet provides management operations for the machine; for example, a service request can be forwarded and transparently transmitted over the intranet, and after the CDN server cluster system starts up, the intranet NIC of a caching server sends a multicast heartbeat at intervals so that the other caching servers can discover that caching server. If the multicast heartbeat of a caching server stops, the caching server automatically exits the cluster. The extranet provides server service to the outside: when a caching server determines that it processes a service request received over the extranet, it returns the locally cached content corresponding to the service request to the user; if the local machine does not cache the content corresponding to the service request, it initiates a request to the origin server, then sends the requested content to the user and caches it locally. The origin server is the server of the website being accessed.
In specific implementation, on the origin server side, the origin server is configured to:
perform calculation processing on the URLs of the content stored by the origin server according to the calculation processing rule for performing calculation processing on a uniform resource locator (Uniform Resource Locator, URL for short), to obtain a first calculation result;
determine, according to the correspondence rule between the calculation result obtained by performing calculation processing on the URL and the caching server identifier, the caching server identifier corresponding to the first calculation result, and deliver the content corresponding to the URL to the caching server corresponding to that identifier.
On the caching server side, each caching server stores the calculation processing rule for performing calculation processing on a URL, and the correspondence rule between the calculation result obtained by performing calculation processing on the URL and the caching server identifier. The caching server is configured to receive a service request, obtain the first URL of the requested content, perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result, and determine, according to the correspondence rule, the caching server that processes the service request.
On the origin server side, the calculation processing rule and the correspondence rule may be preset in the origin server. On the caching server side, they may be preset in the caching server or delivered to the caching server by the origin server. Those skilled in the art will understand that the calculation processing rule and correspondence rule applied by the origin server are identical to the calculation processing rule and correspondence rule applied by the caching servers. The calculation processing rule and correspondence rule preset in the origin server and stored in the caching servers may be multiple in number and may be continually updated.
In this embodiment, the calculation processing rule may be a hash calculation processing rule or a digest calculation processing rule; this embodiment does not specifically limit the calculation processing rule, as long as calculation processing can be performed on the URL to obtain a calculation result in numeric form. The correspondence rule refers to the correspondence between the calculation result obtained by calculating on the URL and the caching server identifier, i.e. the correspondence between the numeric calculation result and the caching server identifier. For example, the caching server identifier may be the number of the caching server, and the number may be represented by a letter or by a numeral. When the caching server identifier is the number of the caching server, the correspondence rule may be that the calculation result is consistent with the number of the caching server; when the caching server identifier is a letter, the correspondence rule may be a correspondence between the calculation result and the letter.
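A minimal sketch of such a correspondence rule, assuming numeric server identifiers 0..N-1 on the one hand and letter identifiers on the other (the function names and the letter table are illustrative, not taken from the patent):

```python
# Assumption: caching servers are numbered 0..N-1, and the correspondence rule is
# simply "the numeric calculation result equals the caching server number".
def numeric_correspondence(calc_result: int) -> int:
    return calc_result

# A letter-based identifier scheme could instead use an explicit mapping.
LETTER_CORRESPONDENCE = {0: "A", 1: "B", 2: "C", 3: "D", 4: "E"}

def letter_correspondence(calc_result: int) -> str:
    return LETTER_CORRESPONDENCE[calc_result]
```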
In specific implementation, the origin server assigns an identifier to each caching server and delivers the identifier to each caching server. The origin server may perform calculation processing on the URLs of the stored content according to any calculation processing rule to obtain a first calculation result, determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and deliver the content corresponding to the URL to the caching server corresponding to that identifier. Those skilled in the art will understand that the content stored by the origin server is large in volume and varied, so after calculation processing, the content stored by the origin server is distributed and stored across multiple caching servers. For example, the URL of a piece of stored content is calculated and the first calculation result is 2. When the caching server identifier is the number of the caching server and the correspondence rule is that the first calculation result is consistent with the number of the caching server, the content corresponding to that URL is delivered to the caching server numbered 2.
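A rough sketch of this origin-side distribution, assuming a hash-plus-remainder calculation rule, numeric server identifiers and caching servers modelled as plain dictionaries (the hash function and all names below are assumptions for illustration only):

```python
import hashlib

def calculation_result(url: str, num_servers: int) -> int:
    """Hash the URL, then take the remainder by the number of caching servers."""
    hash_value = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16)
    return hash_value % num_servers

def distribute_content(origin_store: dict, cache_servers: list) -> None:
    """Deliver each stored item to the caching server whose number matches the result."""
    for url, content in origin_store.items():
        target = calculation_result(url, len(cache_servers))
        cache_servers[target][url] = content   # each caching server modelled as a dict
```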
Each caching server can perform calculation processing on a URL according to the same calculation processing rule and, using the same correspondence rule, find the caching server that stores the content corresponding to that URL, and then transparently transmit the service request directly to that caching server, which processes it. The signalling interaction between caching servers is therefore simple, access delay is avoided, and communication efficiency is improved.
Optionally, when the content stored by the origin server is updated, the origin server performs calculation processing on the URLs of the updated content according to the calculation processing rule to obtain a second calculation result;
the origin server determines, according to the correspondence rule between the calculation result and the caching server identifier, the caching server identifier corresponding to the second calculation result, and delivers an update request message to the caching server corresponding to that identifier, the update request information including the content that the caching server needs to store and/or delete.
In this way the origin server can update the content stored in each caching server: according to the calculation processing rule and the correspondence rule, the origin server determines the caching servers whose stored content needs to be added to and the caching servers whose stored content needs to be deleted from, and delivers update request messages to the corresponding caching servers.
Correspondingly, when the content stored by the origin server is updated, the caching server receives the update request information sent by the origin server, the update request information including the content that the caching server needs to store and/or delete. The caching server updates its storage according to the content it needs to store and/or delete, so that when the caching server processes a service request, it does so according to the updated storage.
This embodiment enables the caching servers to update their locally stored content in time, reduces the probability that a caching server sends a service request to the origin server, and improves communication efficiency.
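A minimal sketch of how a caching server might apply such an update request, assuming the message simply lists items to store and URLs to delete (the message layout is an assumption; the patent does not fix a wire format):

```python
def apply_update_request(local_cache: dict, update_request: dict) -> None:
    """Apply an origin-server update message: store new or changed items, drop stale ones."""
    for url, content in update_request.get("store", {}).items():
        local_cache[url] = content          # content this caching server now needs to hold
    for url in update_request.get("delete", []):
        local_cache.pop(url, None)          # content this caching server should no longer serve
```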
The origin server has been described in detail in the above embodiment. Specific embodiments are used below to describe in detail how a caching server in the CDN server cluster system of the present invention implements the cache sharing method.
Fig. 2 is a schematic flowchart of embodiment one of the cache sharing method in a content distribution network of the present invention. The executing entity of this embodiment is any caching server in the embodiment of Fig. 1, and the caching server may be implemented by any software and/or hardware. The caching server stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier. As shown in Fig. 2, the method of this embodiment may include:
Step 201: the first caching server receives a service request and obtains the first URL of the content requested by the service request.
Step 202: the first caching server performs calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result.
Step 203: the first caching server determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judges whether the caching server identifier is consistent with the identifier of the first caching server; if so, step 204 is performed, and if not, step 205 is performed.
Step 204: the first caching server determines that the service request is to be processed by the first caching server.
Step 205: the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier.
In specific implementation, in step 201 the first caching server receives a service request sent by a user equipment. The service request may be, for example, a hypertext transfer protocol (HTTP) request. The first caching server extracts the first URL of the requested content from the service request; on the World Wide Web every information resource has a unified and unique online address, which is its URL.
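As a small illustration of step 201, a sketch of pulling the first URL out of the request line of an HTTP request; this is a deliberate simplification for illustration (a real caching server would use a full HTTP parser):

```python
def first_url_from_http_request(raw_request: bytes) -> str:
    """Extract the requested URL from an HTTP request line, e.g. b'GET /a/b.jpg HTTP/1.1'."""
    request_line = raw_request.split(b"\r\n", 1)[0].decode("utf-8")
    method, url, _version = request_line.split(" ", 2)
    return url
```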
In step 202, the first caching server performs calculation processing on the first URL according to the calculation processing rule to obtain the first calculation result.
Here this embodiment takes the case where the calculation processing rule is a remainder calculation rule and the first calculation result is a remainder operation result as an example to describe the calculation processing on the first URL in detail.
On the origin server side, the origin server performs hash calculation on the URLs of the stored content according to the remainder calculation rule to obtain hash calculation results, and performs a remainder operation on each hash calculation result according to the total number of caching servers to obtain a remainder operation result. Further, according to the correspondence rule, the origin server delivers the content corresponding to each URL to the caching server corresponding to its remainder operation result.
Specifically, as a non-limiting example, when the identifier of a caching server is the number of the caching server, the origin server performs hash calculation on the URL of the stored content to obtain a hash calculation result, i.e. a hash value; a hash value is a unique and extremely compact numeric representation of a piece of data. Using the hash value as the numerator and the total number of caching servers as the denominator, a remainder operation is performed to obtain the remainder operation result, i.e. the remainder, and the content corresponding to the URL is stored in the caching server whose number is consistent with the remainder. For example, if the hash value is 1111 and the total number of caching servers is 5, the remainder of 1111/5 is 1, and the content corresponding to that URL is stored in the caching server numbered 1. Those skilled in the art will understand that the caching servers may also be identified by letters, with different letters corresponding to different calculation results; this embodiment does not specifically limit the identifier of the caching server.
On the caching server side, the first caching server performs calculation processing on the first URL according to the remainder calculation rule to obtain a hash calculation result, and performs a remainder operation on the hash calculation result according to the total number of caching servers to obtain a remainder operation result.
Those skilled in the art will understand that the calculation processing the first caching server performs on the URL uses the same calculation processing rule as the origin server; the difference is that the origin server needs to perform calculation processing on many URLs, whereas the first caching server only performs calculation processing on the URL of the content requested by the current service request.
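Continuing the worked example above (hash value 1111, 5 caching servers), a minimal sketch of the remainder calculation rule on the caching-server side; the concrete hash function is an assumption, since the patent does not fix one:

```python
import hashlib

NUM_CACHE_SERVERS = 5  # total number of caching servers in the cluster

def remainder_rule(first_url: str) -> int:
    """Hash the requested URL, then take the remainder by the total server count."""
    hash_value = int(hashlib.md5(first_url.encode("utf-8")).hexdigest(), 16)
    return hash_value % NUM_CACHE_SERVERS   # the patent's example: 1111 over 5 servers gives remainder 1

def should_handle_locally(first_url: str, my_server_number: int) -> bool:
    """Step 203: compare the mapped server number with this server's own number."""
    return remainder_rule(first_url) == my_server_number
```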
In step 203, the first caching server determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result. For example, if the caching server identifier is the number of the caching server and the first calculation result is 6, the number of the caching server corresponding to the first calculation result is 6. The first caching server then judges whether caching server number 6 is its own number; if so, step 204 is performed and the first caching server determines that the service request is processed by the first caching server; if not, step 205 is performed and the first caching server determines that the service request is processed by the second caching server corresponding to the caching server identifier.
In the description above, when the first caching server processes the service request, the specific procedure is: the first caching server looks up the content corresponding to the service request on the local machine; if the content exists locally, it is returned to the user equipment; if the content corresponding to the service request does not exist locally, a request is initiated to the origin server, and after the requested content is obtained, it is passed to the user equipment and cached locally.
When the first caching server determines that the second caching server processes the service request, and the second caching server processes it, the specific procedure is: the first caching server transparently transmits the service request to the second caching server over the intranet; the second caching server receives the service request and looks up the content corresponding to it on its local machine; if the content exists locally, it is returned to the first caching server; if the content corresponding to the service request does not exist locally, the second caching server initiates a request to the origin server, and after the requested content is obtained, it passes the content to the first caching server and caches it locally; the first caching server then returns the content to the user equipment.
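The two procedures above can be sketched roughly as follows, assuming a dictionary-based local cache, an origin object with a fetch method and a peer object with a handle method (all of these are illustrative stand-ins, not interfaces defined by the patent):

```python
def process_locally(cache: dict, origin, url: str):
    """First caching server handles the request itself."""
    content = cache.get(url)
    if content is None:                  # local miss: fall back to the origin server
        content = origin.fetch(url)
        cache[url] = content             # cache locally for later requests
    return content                       # returned to the user equipment

def forward_to_second_server(second_server, url: str):
    """First caching server transparently forwards the request over the intranet."""
    content = second_server.handle(url)  # second server applies the same local/origin logic
    return content                       # first caching server relays this to the user
```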
Those skilled in the art will understand that the content stored in the origin server changes dynamically. When the origin server has not yet pushed updated content to a caching server, the caching server may fail to find the content corresponding to a service request locally, in which case the caching server initiates a request to the origin server.
With the cache sharing method in a content distribution network provided by this embodiment of the present invention, the first caching server receives a service request and obtains the first URL of the requested content; performs calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result; determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result; and judges whether it is consistent with its own identifier. If so, the first caching server processes the service request itself; if not, it determines that the second caching server corresponding to the identifier processes the service request. Because the first caching server determines by calculation the second caching server that processes the service request, cache sharing is achieved, the signalling interaction is simple, access delay is avoided, and communication efficiency is improved.
Fig. 3 is a schematic flowchart of embodiment two of the cache sharing method in a content distribution network of the present invention. The executing entity of this embodiment is any caching server in the embodiment of Fig. 1. The method provided by this embodiment includes:
Step 301: the first caching server receives the multicast heartbeats of the other caching servers in the server cluster system of the content distribution network, and discovers, according to the multicast heartbeats, the other caching servers in the server cluster system of the content distribution network and the identifiers of the other caching servers.
Step 302: the first caching server receives a service request and obtains the first URL of the content requested by the service request.
Step 303: the first caching server judges whether the first URL is a hot-spot URL; if so, step 304 is performed, and if not, step 306 is performed.
Step 304: the first caching server determines that the service request is to be processed by the first caching server.
Step 305: the first caching server processes the service request.
Step 306: the first caching server determines that calculation processing is to be performed on the first URL by the first caching server.
Step 307: the first caching server performs calculation processing on the first URL to obtain a first calculation result.
Step 308: the first caching server determines, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judges whether the caching server identifier is consistent with the identifier of the first caching server; if so, step 309 is performed, and if not, step 310 is performed.
Step 309: the first caching server determines that the service request is to be processed by the first caching server.
Step 310: the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier.
Step 311: the first caching server judges whether the connectivity test with the second caching server has failed; if so, step 312 is performed, and if not, step 313 is performed.
Step 312: the first caching server processes the service request.
Step 313: the first caching server transparently transmits the service request to the second caching server, and the second caching server processes the service request.
Step 302 in this embodiment is similar to step 201 in the embodiment of Fig. 2, and steps 307 to 310 are similar to steps 202 to 205 in the embodiment of Fig. 2; details are not repeated here.
In step 301, the first caching server receives the multicast heartbeats of the other caching servers, discovers the other caching servers and their identifiers according to the multicast heartbeats, and sends its own multicast heartbeat at intervals so that the other caching servers can discover the identifier of the first caching server.
In this embodiment, each caching server lets itself be discovered by the other caching servers, and discovers the other caching servers, by means of multicast heartbeats, so that the configuration of the CDN server cluster system is simple and easy to maintain.
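A minimal sketch of such multicast-heartbeat announcement, assuming a UDP multicast group on the intranet and a JSON payload carrying the server identifier (the group address, port, interval and payload format are all assumptions; a receiving loop on each peer would record the announced server_id and its last-seen time):

```python
import json
import socket
import time

MCAST_GROUP, MCAST_PORT = "239.1.1.1", 9999   # assumed intranet multicast group

def send_heartbeats(my_server_id: int, interval: float = 2.0) -> None:
    """Periodically announce this caching server's identifier on the intranet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    while True:
        payload = json.dumps({"server_id": my_server_id}).encode("utf-8")
        sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
        time.sleep(interval)   # if the heartbeats stop, peers treat this server as having left
```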
In step 303, the first caching server judges whether the first URL is a hot-spot URL. In specific implementation, optionally, the first caching server counts the URLs that have been accessed and takes URLs with a high access frequency as hot-spot URLs; optionally, the first caching server may use a least recently used (LRU) list to count the accessed URLs and determine the hot-spot URLs.
When the first caching server judges that the first URL is a hot-spot URL, the first caching server performs step 304: it determines that the first caching server directly processes the first URL, without forwarding over the intranet to another caching server, and then performs step 305, processing the service request. If the first URL is judged not to be a hot-spot URL, the first caching server performs step 306: it determines that the first caching server performs calculation processing on the first URL, and then performs step 307. In the prior art, if some URL is accessed particularly often and the caching server that receives the service request does not hit it, that caching server sends a large number of service requests, and the other caching servers likewise receive a large number of service requests, causing the load of the machines in the cluster to become unbalanced. In the present invention, when the first caching server judges that the first URL is a hot-spot URL, it processes the first URL directly, which on the one hand solves the problem of URL hot spots and on the other hand greatly reduces the amount of intranet transparent transmission and reduces central processing unit (CPU) consumption.
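A minimal sketch of hot-spot detection with an LRU-style access counter, assuming a simple capacity bound and hit-count threshold (both values and the class interface are illustrative assumptions):

```python
from collections import OrderedDict

class HotUrlDetector:
    """Track recently accessed URLs in LRU order and flag frequently hit ones."""

    def __init__(self, capacity: int = 1000, hot_threshold: int = 50):
        self.counts = OrderedDict()          # URL -> access count, kept in LRU order
        self.capacity = capacity
        self.hot_threshold = hot_threshold

    def record_and_check(self, url: str) -> bool:
        count = self.counts.pop(url, 0) + 1  # re-insert to move URL to the most-recent position
        self.counts[url] = count
        if len(self.counts) > self.capacity: # evict the least recently used URL
            self.counts.popitem(last=False)
        return count >= self.hot_threshold   # True means treat this as a hot-spot URL
```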
Optionally, in this embodiment, a caching server may fail while its multicast heartbeat still exists. The first caching server therefore periodically performs connectivity tests with the other caching servers in the server cluster system of the content distribution network. In step 311, if the connectivity test fails, the first caching server judges that the tested caching server has failed. In this embodiment, when the first caching server judges through the connectivity test that the second caching server has failed, step 312 is performed: the first caching server refrains from transparently transmitting the service request to the second caching server, and only resumes transparent transmission to the second caching server after a later connectivity test succeeds; the first caching server processes the service request itself. If the connectivity test succeeds, the first caching server performs step 313: it transparently transmits the service request to the second caching server, and the second caching server processes the service request. Those skilled in the art will understand that when the multicast heartbeat of the second caching server no longer exists, the second caching server automatically exits the cluster system; the first caching server then no longer discovers the second caching server or its identifier, and if the calculation result points to a second caching server whose multicast heartbeat no longer exists, the first caching server processes the service request itself.
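A rough sketch of this failure shielding, assuming a periodic TCP connection probe as the connectivity test and simple peer objects with host, port, handle and process_locally attributes (the probe mechanism and these names are assumptions, not the patent's mechanism):

```python
import socket

def peer_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Connectivity test: try to open a TCP connection to the peer's intranet address."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def route_request(first_server, second_server, url: str):
    """Steps 311-313: shield a failed peer, otherwise forward transparently."""
    if not peer_reachable(second_server.host, second_server.port):
        return first_server.process_locally(url)    # peer failed: handle the request here
    return second_server.handle(url)                # peer healthy: transparent forwarding
```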
In this embodiment, for the procedures by which the first caching server and the second caching server process the service request, reference may be made to the manner described in the above embodiment; details are not repeated here.
With the method provided by this embodiment of the present invention, single-machine failures are automatically detected and shielded through connectivity tests, which reduces the impact of single points of failure.
Fig. 4 is a schematic structural diagram of embodiment one of the caching server of the present invention. The caching server 40 provided by this embodiment of the present invention may be a first caching server, the first caching server being any caching server in the embodiment of Fig. 1. Each caching server stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier. The caching server 40 provided by this embodiment includes a receiving module 401, a processing module 402 and an identifier judging module 403.
The receiving module 401 is configured to receive a service request and obtain the first URL of the content requested by the service request;
the processing module 402 is configured to perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
the identifier judging module 403 is configured to determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and to judge whether the caching server identifier is consistent with the identifier of the first caching server;
if so, to determine that the service request is to be processed by the first caching server;
if not, to determine that the service request is to be processed by the second caching server corresponding to the caching server identifier.
The caching server provided by this embodiment of the present invention can execute the technical solution of the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of embodiment two of the caching server of the present invention. This embodiment is implemented on the basis of the embodiment of Fig. 4, as follows.
Optionally, the calculation processing rule is a remainder calculation rule and the first calculation result is a remainder operation result, and the processing module 402 is specifically configured to: perform hash calculation on the first URL according to the remainder calculation rule to obtain a hash calculation result;
and perform a remainder operation on the hash calculation result according to the total number of caching servers to obtain a remainder operation result.
Optionally, the caching server further includes a hot-spot judging module 404.
The hot-spot judging module 404 is configured to judge whether the first URL is a hot-spot URL before the processing module performs calculation processing on the first URL and the second caching server corresponding to the caching server identifier corresponding to the calculation result is determined to process the service request;
if not, the hot-spot judging module determines that the first caching server performs calculation processing on the first URL;
if so, the hot-spot judging module determines that the service request is to be processed by the first caching server.
Optionally, the caching server further includes:
a connectivity test module 405, configured to perform connectivity tests with the other caching servers in the server cluster system of the content distribution network;
a connectivity judging module 406, configured to judge whether the connectivity test with the second caching server has failed after the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier;
a service executing module 407, configured to process the service request when the judgment result of the connectivity judging module is yes;
a transparent transmission module 408, configured to transparently transmit the service request to the second caching server when the judgment result of the connectivity judging module is no, so that the second caching server processes the service request.
Optionally, the receiving module 401 is further configured to, before the connectivity test module performs connectivity tests with the other caching servers in the server cluster system of the content distribution network, receive the multicast heartbeat messages, carrying the identifiers of the other caching servers, sent by the other caching servers in the server cluster system of the content distribution network, and to discover, according to the multicast heartbeat messages, the other caching servers in the server cluster system of the content distribution network and the identifiers of the other caching servers.
Optionally, the caching server further includes an update module 409;
the receiving module 401 is further configured to, when the content stored by the origin server is updated, receive the update request information sent by the origin server, the update request information including the content that the first caching server needs to store and/or delete;
the update module 409 is configured to update the storage of the first caching server according to the content that the first caching server needs to store and/or delete, so that when the first caching server processes the service request, it does so according to the updated storage.
The caching server provided by this embodiment of the present invention can execute the technical solution of the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of embodiment three of the caching server of the present invention. The caching server 60 provided by this embodiment is applied to a content distribution network whose server cluster system includes an origin server and multiple caching servers. The caching server is a first caching server and includes a network interface 601, a memory 605, a processor 603 and a bus 604, the network interface 601, the memory 605 and the processor 603 each being connected to the bus 604, wherein:
the memory 605 stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier;
the processor 603, via the bus 604, invokes the program 606 stored in the memory 605 to:
receive a service request through the network interface 601 and obtain the first URL of the content requested by the service request;
perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judge whether the caching server identifier is consistent with the identifier of the first caching server;
if so, determine that the service request is to be processed by the first caching server;
if not, determine that the service request is to be processed by the second caching server corresponding to the caching server identifier.
Optionally, the calculation processing rule is a remainder calculation rule and the first calculation result is a remainder operation result, and the processor 603 is specifically configured to:
perform hash calculation on the first URL according to the remainder calculation rule to obtain a hash calculation result;
and perform a remainder operation on the hash calculation result according to the total number of caching servers to obtain a remainder operation result.
Optionally, the processor 603 is further specifically configured to: before performing calculation processing on the first URL according to the calculation processing rule to obtain the first calculation result, judge whether the first URL is a hot-spot URL;
if not, determine that the first caching server performs calculation processing on the first URL;
if so, determine that the service request is to be processed by the first caching server.
Optionally, the processor 603 is further specifically configured to:
perform connectivity tests, through the network interface, with the other caching servers in the server cluster system of the content distribution network;
and after the processor determines that the service request is to be processed by the second caching server corresponding to the caching server identifier, the processor is further configured to:
judge whether the connectivity test with the second caching server has failed; if so, process the service request; if not, transparently transmit the service request to the second caching server so that the second caching server processes the service request.
Optionally, before the processor 603 performs connectivity tests through the network interface with the other caching servers in the server cluster system of the content distribution network,
the network interface 601 receives the multicast heartbeat messages, carrying the identifiers of the other caching servers, sent by the other caching servers in the server cluster system of the content distribution network, and the processor discovers, according to the multicast heartbeat messages, the other caching servers in the server cluster system of the content distribution network and their identifiers.
Optionally, the processor 603 is further configured to, when the content stored by the origin server is updated, receive through the network interface 601 the update request information sent by the origin server, the update request information including the content that the first caching server needs to store and/or delete;
the processor 603 updates the storage of the memory according to the content that the first caching server needs to store and/or delete, so that when the first caching server processes the service request, it does so according to the updated storage.
The caching server provided by this embodiment of the present invention can execute the technical solution of the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions of some or all of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (17)

1. A cache sharing method in a content distribution network, characterized in that the server cluster system of the content distribution network includes an origin server and multiple caching servers; each caching server stores a calculation processing rule for performing calculation processing on a uniform resource locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier; the calculation processing rule and correspondence rule applied by the origin server are identical to the calculation processing rule and correspondence rule applied by the caching servers; the correspondence rule refers to the correspondence between the calculation result obtained by calculating on the uniform resource locator URL and the caching server identifier; the method comprises:
the first caching server receiving a service request and obtaining a first URL of the content requested by the service request;
the first caching server judging whether the first URL is a hot-spot URL;
if so, the first caching server determining that the service request is to be processed by the first caching server;
if not, the first caching server performing calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
the first caching server determining, according to the correspondence rule, the caching server identifier corresponding to the first calculation result, and judging whether the caching server identifier is consistent with the identifier of the first caching server;
if so, the first caching server determining that the service request is to be processed by the first caching server;
if not, the first caching server determining that the service request is to be processed by a second caching server corresponding to the caching server identifier.
2. The method according to claim 1, characterized in that the calculation processing rule is a remainder calculation rule and the first calculation result is a remainder operation result, and the first caching server performing calculation processing on the first URL according to the calculation processing rule to obtain the first calculation result comprises:
the first caching server performing hash calculation on the first URL according to the remainder calculation rule to obtain a hash calculation result;
the first caching server performing a remainder operation on the hash calculation result according to the total number of caching servers to obtain a remainder operation result.
3. The method according to claim 1, characterized in that the method further comprises:
the first caching server performing connectivity tests with other caching servers in the server cluster system of the content distribution network;
after the first caching server determines that the service request is to be processed by the second caching server corresponding to the caching server identifier, the method further comprising:
the first caching server judging whether the connectivity test with the second caching server has failed; if so, the first caching server processing the service request; if not, the first caching server transparently transmitting the service request to the second caching server, and the second caching server processing the service request.
4. The method according to claim 3, characterized in that before the first caching server performs connectivity tests with the other caching servers in the server cluster system of the content distribution network, the method further comprises:
the first caching server receiving multicast heartbeat messages, carrying the identifiers of the other caching servers, sent by the other caching servers in the server cluster system of the content distribution network, and discovering, according to the multicast heartbeat messages, the other caching servers in the server cluster system of the content distribution network and the identifiers of the other caching servers.
5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
when the content stored by the origin server is updated, the first caching server receiving update request information sent by the origin server, the update request information including the content that the first caching server needs to store and/or delete; and the first caching server updating the storage of the first caching server according to the content that the first caching server needs to store and/or delete, so that when the first caching server processes the service request, the service request is processed according to the updated storage.
6. a kind of caching server, the caching server is applied to content distributing network, the service of the content distributing network Device group system includes being stored with source station server and multiple caching servers, each caching server for unified money Source finger URL URL calculate the calculating processing rule of processing, and the calculating obtained to URL progress calculating processing is handled As a result the rule of correspondence identified with caching server, calculating processing rule and the rule of correspondence that source station server is applied, with delaying Deposit calculating processing rule and the rule of correspondence that server applied identical;The rule of correspondence refers to URL URL is calculated, the corresponding relation that obtained calculating result is identified with caching server;The caching server is first Caching server, it is characterised in that including:
Receiving module, for receiving service request, obtains the first URL of the content that the service request is asked;
Focus judge module, the focus judge module is used to carry out calculating processing to the first URL in the processing module, According to result is calculated, determine at mark the second caching server corresponding with the calculating result of caching server Manage before the service request, whether judge the first URL is focus URL;If it is not, then the focus judge module determine by First caching server carries out calculating processing to the first URL;If so, then the focus judge module is determined by described First caching server is handled the service request;
a processing module, configured to perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result;
an identifier determination module, configured to determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result and judge whether that caching server identifier is consistent with the identifier of the first caching server; if so, determine that the first caching server processes the service request; if not, determine that the second caching server corresponding to the caching server identifier processes the service request.
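The cooperation of the modules in claim 6 might look roughly like the sketch below; the names `is_hotspot`, `compute_server_id`, `LOCAL_ID`, the server list, and the static hotspot set are illustrative assumptions, and the claims do not prescribe how hotspot URLs are detected.

```python
SERVER_IDS = ["cache-0", "cache-1", "cache-2", "cache-3"]   # assumed cluster
LOCAL_ID = "cache-1"                                        # this server's identifier (assumed)
HOTSPOT_URLS = {"http://example.com/hot.mp4"}               # assumed hotspot set

def is_hotspot(url: str) -> bool:
    # The claims only require a hotspot judgment; the detection method is
    # left open, so a static set stands in here.
    return url in HOTSPOT_URLS

def compute_server_id(url: str) -> str:
    # Stand-in for the calculation processing rule and correspondence rule;
    # a concrete hash-and-remainder version is sketched after claim 7.
    return SERVER_IDS[hash(url) % len(SERVER_IDS)]

def handle_request(url: str) -> str:
    if is_hotspot(url):
        return "process locally (hotspot URL)"
    target = compute_server_id(url)
    if target == LOCAL_ID:
        return "process locally (identifier matches)"
    return f"forward to second caching server {target}"
```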
7. The caching server according to claim 6, characterized in that the calculation processing rule is a remainder computation rule and the first calculation result is a remainder operation result, and the processing module is specifically configured to:
perform a hash calculation on the first URL according to the remainder computation rule to obtain a hash calculation result, and perform a remainder operation on the hash calculation result according to the total number of caching servers to obtain the remainder operation result.
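A minimal sketch of the remainder computation rule of claim 7, assuming MD5 as the hash function (the claims only require "a hash calculation") and an illustrative list of caching server identifiers:

```python
import hashlib

# Illustrative caching server identifiers; the patent only requires that
# every server applies the same correspondence rule.
SERVER_IDS = ["cache-0", "cache-1", "cache-2", "cache-3"]

def remainder_result(url: str, total_servers: int) -> int:
    """Hash the URL, then take the remainder against the number of caching servers."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()   # assumed hash function
    return int(digest, 16) % total_servers

def select_server_id(url: str) -> str:
    """Map the remainder operation result to a caching server identifier."""
    return SERVER_IDS[remainder_result(url, len(SERVER_IDS))]

print(select_server_id("http://example.com/video/1.mp4"))
```

Because every caching server and the source station server apply the same rule, each of them maps a given URL to the same caching server identifier without any coordination.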
8. The caching server according to claim 6, characterized in that the caching server further comprises:
a connectivity test module, configured to perform a connectivity test with the other caching servers in the server cluster system of the content distribution network;
a connectivity judgment module, configured to judge, after the first caching server determines that the second caching server corresponding to the caching server identifier is to process the service request, whether the connectivity test with the second caching server has failed;
a service execution module, configured to process the service request when the judgment result of the connectivity judgment module is yes;
a transparent forwarding module, configured to, when the judgment result of the connectivity judgment module is no, transparently forward the service request to the second caching server, so that the second caching server processes the service request.
9. The caching server according to claim 8, characterized in that the receiving module is further configured to, before the connectivity test module performs the connectivity test with the other caching servers in the server cluster system of the content distribution network, receive multicast heartbeat messages sent by the other caching servers in the server cluster system of the content distribution network, each message carrying the identifier of the caching server that sent it, and discover, according to the multicast heartbeat messages, the other caching servers in the server cluster system of the content distribution network and their identifiers.
10. The caching server according to any one of claims 6 to 9, characterized in that the caching server further comprises an update module; the receiving module is further configured to receive, when the content stored by the source station server is updated, update request information sent by the source station server, the update request information including the content that the first caching server needs to store and/or delete; the update module is configured to update the storage of the first caching server according to the content that the first caching server needs to store and/or delete, so that when the first caching server processes the service request, it processes the service request according to the updated storage.
11. A server cluster system of a content distribution network, the server cluster system comprising a source station server and the caching server according to any one of claims 6 to 10; wherein the source station server is configured to: perform calculation processing on the URL of the content stored by the source station server according to the calculation processing rule for performing calculation processing on a Uniform Resource Locator (URL), to obtain a first calculation result; determine, according to the correspondence rule between the calculation result obtained by performing calculation processing on the URL and the caching server identifier, the caching server identifier corresponding to the first calculation result; and deliver the content corresponding to the URL to the caching server corresponding to that caching server identifier.
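Claim 11 has the source station server apply the same calculation processing rule and correspondence rule when deciding where to deliver content. A minimal sketch, assuming the MD5-and-remainder rule from the claim 7 sketch and a placeholder `send` callable for the delivery mechanism, which the claims do not specify:

```python
import hashlib

SERVER_IDS = ["cache-0", "cache-1", "cache-2", "cache-3"]   # assumed cluster

def server_for_url(url: str) -> str:
    # Same hash-and-remainder rule as the caching servers, so the source
    # station and the caches agree on content placement.
    remainder = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16) % len(SERVER_IDS)
    return SERVER_IDS[remainder]

def push_content(url: str, content: bytes, send) -> None:
    # `send(server_id, url, content)` stands in for the actual delivery channel.
    send(server_for_url(url), url, content)
```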
12. The server cluster system of the content distribution network according to claim 11, characterized in that the source station server is further configured to: when the content stored by the source station server is updated, perform calculation processing on the URL of the updated content according to the calculation processing rule to obtain a second calculation result; determine, according to the correspondence rule between the calculation result and the caching server identifier, the caching server identifier corresponding to the second calculation result; and deliver an update request message to the caching server corresponding to that caching server identifier, the update request information including the content that the caching server needs to store and/or delete; the caching server is further configured to: when the content stored by the source station server is updated, receive the update request information sent by the source station server, the update request information including the content that the caching server needs to store and/or delete; update the storage of the caching server according to the content that the caching server needs to store and/or delete; and, when the caching server processes a service request, process the service request according to the updated storage.
13. A caching server, the caching server being applied to a content distribution network, wherein the server cluster system of the content distribution network comprises a source station server and multiple caching servers, and the caching server is a first caching server, characterized by comprising:
a network interface, a memory, a processor and a bus, the network interface, the memory and the processor each being connected to the bus, wherein:
the memory stores a calculation processing rule for performing calculation processing on a Uniform Resource Locator (URL), and a correspondence rule between the calculation result obtained by performing calculation processing on the URL and a caching server identifier; the calculation processing rule and the correspondence rule applied by the source station server are identical to the calculation processing rule and the correspondence rule applied by the caching servers; the correspondence rule refers to the correspondence between the calculation result obtained by performing calculation on the URL and the caching server identifier;
the processor calls the program stored in the memory through the bus and is configured to: receive a service request through the network interface and obtain a first URL of the content requested by the service request; perform calculation processing on the first URL according to the calculation processing rule to obtain a first calculation result; determine, according to the correspondence rule, the caching server identifier corresponding to the first calculation result and judge whether that caching server identifier is consistent with the identifier of the first caching server; if so, determine that the first caching server processes the service request; if not, determine that the second caching server corresponding to the caching server identifier processes the service request;
the processor is further specifically configured to: before performing calculation processing on the first URL according to the calculation processing rule to obtain the first calculation result, judge whether the first URL is a hotspot URL; if not, determine that the first caching server performs calculation processing on the first URL; if so, determine that the first caching server processes the service request.
14. The caching server according to claim 13, characterized in that the calculation processing rule is a remainder computation rule and the first calculation result is a remainder operation result, and the processor is specifically configured to: perform a hash calculation on the first URL according to the remainder computation rule to obtain a hash calculation result, and perform a remainder operation on the hash calculation result according to the total number of caching servers to obtain the remainder operation result.
15. The caching server according to claim 13, characterized in that the processor is further specifically configured to: perform, through the network interface, a connectivity test with the other caching servers in the server cluster system of the content distribution network; after the processor determines that the second caching server corresponding to the caching server identifier is to process the service request, the processor is further configured to: judge whether the connectivity test with the second caching server has failed; if so, process the service request itself; if not, transparently forward the service request to the second caching server, so that the second caching server processes the service request.
16. The caching server according to claim 15, characterized in that, before the processor performs, through the network interface, the connectivity test with the other caching servers in the server cluster system of the content distribution network, the network interface receives multicast heartbeat messages sent by the other caching servers in the server cluster system of the content distribution network, each message carrying the identifier of the caching server that sent it, and the processor discovers, according to the multicast heartbeat messages, the other caching servers in the server cluster system of the content distribution network and their identifiers.
17. The caching server according to any one of claims 13 to 16, characterized in that the processor is further configured to: when the content stored by the source station server is updated, receive, through the network interface, update request information sent by the source station server, the update request information including the content that the first caching server needs to store and/or delete; and update the storage of the memory according to the content that the first caching server needs to store and/or delete, so that when the first caching server processes the service request, it processes the service request according to the updated storage.
CN201410231155.6A 2014-05-28 2014-05-28 Sharing method, equipment and system are cached in content distributing network Active CN104980478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410231155.6A CN104980478B (en) 2014-05-28 2014-05-28 Sharing method, equipment and system are cached in content distributing network


Publications (2)

Publication Number Publication Date
CN104980478A CN104980478A (en) 2015-10-14
CN104980478B true CN104980478B (en) 2017-10-31

Family

ID=54276578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410231155.6A Active CN104980478B (en) 2014-05-28 2014-05-28 Sharing method, equipment and system are cached in content distributing network

Country Status (1)

Country Link
CN (1) CN104980478B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847362A (en) * 2016-03-28 2016-08-10 乐视控股(北京)有限公司 Distribution content cache method and distribution content cache system used for cluster
CN107770209B (en) * 2016-08-16 2021-04-16 中兴通讯股份有限公司 Resource sharing method and device
CN108616762B (en) * 2016-12-12 2019-11-19 视联动力信息技术股份有限公司 A kind of sharing method and view networked server of view networked server
CN107026758B (en) * 2017-04-14 2021-05-04 深信服科技股份有限公司 Information processing method, information processing system and server for CDN service update
CN108737470B (en) * 2017-04-19 2020-03-13 贵州白山云科技股份有限公司 Access request source returning method and device
CN107645386B (en) * 2017-09-25 2021-06-22 网宿科技股份有限公司 Method and device for acquiring data resources
CN108111623A (en) * 2017-12-29 2018-06-01 北京奇虎科技有限公司 A kind of communication means and device based on content distributing network CDN
CN108769166A (en) * 2018-05-17 2018-11-06 北京云端智度科技有限公司 A kind of CDN cache contents managing devices based on metadata
CN109167820A (en) * 2018-08-13 2019-01-08 彩讯科技股份有限公司 A kind of method for down loading of application program, device, storage medium and terminal
CN109586969B (en) * 2018-12-13 2022-02-11 平安科技(深圳)有限公司 Content distribution network disaster tolerance method and device, computer equipment and storage medium
CN109995881B (en) * 2019-04-30 2021-12-14 网易(杭州)网络有限公司 Load balancing method and device of cache server
CN112055039B (en) * 2019-06-06 2022-07-26 阿里巴巴集团控股有限公司 Data access method, device and system and computing equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150421A (en) * 2006-09-22 2008-03-26 华为技术有限公司 A distributed content distribution method, edge server and content distribution network
CN102263828A (en) * 2011-08-24 2011-11-30 北京蓝汛通信技术有限责任公司 Load balanced sharing method and equipment
CN103281367A (en) * 2013-05-22 2013-09-04 北京蓝汛通信技术有限责任公司 Load balance method and device


Also Published As

Publication number Publication date
CN104980478A (en) 2015-10-14

Similar Documents

Publication Publication Date Title
CN104980478B (en) Sharing method, equipment and system are cached in content distributing network
US10089153B2 (en) Synchronizing load balancing state information
US9448932B2 (en) System for caching data
CN108494891A (en) A kind of domain name analytic method, server and system
CN111629051B (en) Performance optimization method and device for industrial internet identification analysis system
CN106998370A (en) Access control method, device and system
CN105530127B (en) A kind of method and proxy server of proxy server processing network access request
CN113472852B (en) Method, device and equipment for returning source of CDN node and storage medium
WO2011134086A1 (en) Systems and methods for conducting reliable assessments with connectivity information
CN109274730A (en) The optimization method and device that Internet of things system, MQTT message are transmitted
CN105868231A (en) Cache data updating method and device
CN109274632A (en) A kind of recognition methods of website and device
CN105653198A (en) Data processing method and device
CN107332908A (en) A kind of data transmission method and its system
CN108984553A (en) Caching method and device
US9489306B1 (en) Performing efficient cache invalidation
CN107026758A (en) For the information processing method of CDN processing business and updates, information processing system and server
CN106464710A (en) Profile-based cache management
CN105915621A (en) Data access method and pretreatment server
CN107347015A (en) A kind of recognition methods of content distributing network, apparatus and system
CN108984433A (en) Cache data control method and equipment
CN106874371A (en) A kind of data processing method and device
CN104424316B (en) A kind of date storage method, data query method, relevant apparatus and system
CN109586937A (en) A kind of O&M method, equipment and the storage medium of caching system
CN106598881A (en) Page processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant