CN104202362B - SiteServer LBS and its content distribution method and device, load equalizer - Google Patents
- Publication number
- CN104202362B (application number CN201410401428.7A)
- Authority
- CN
- China
- Prior art keywords
- access request
- client
- resource file
- cache
- cache resources
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Transfer Between Computers (AREA)
- Computer And Data Communications (AREA)
Abstract
A load balancing system, a content distribution method and device for the same, and a load balancer. The method includes: receiving an access request from a client; and redirecting the received client access request to a corresponding cache resource server. This technical scheme avoids the slow response caused by relaying traffic through the load balancer and can effectively improve the user experience.
Description
Technical field
The present invention relates to the field of content distribution technology, and in particular to a load balancing system, a content distribution method and device for the same, and a load balancer.
Background art
A content delivery network (Content Delivery Network, CDN) places node servers throughout the network to build a layer of intelligent virtual network on top of the existing Internet, publishing a website's content to the network "edge" closest to the client so that the client can obtain the required content nearby. These "edges" are cache resource servers, which cache the content of the origin site.
In current CDNs, the content of the origin site is typically mapped onto specific node servers for storage using an ordinary hash algorithm. When a cache resource server joins or leaves the cluster, the mapping between all resource files and cache resource servers is destroyed.
Referring to Fig. 1, specifically, the objects (resource files) of the origin site and the cache resource servers are mapped with the same hash function onto positions on ring 1. Taking object K1 as a starting point, the next node found clockwise is node B, so object K1 is stored on node B. If the cache resource server at node B goes down, the objects stored on node B are moved clockwise onto the cache resource server at node C; thus only node C is affected, and the objects stored on the other nodes A and D are unaffected. However, as the load increases, the cache resource server at node C is then itself likely to go down, and so on, until the entire load balancing system is paralyzed.
To solve this problem, prior-art CDNs introduce a consistent hashing algorithm, that is, multiple "virtual nodes" are added on ring 1. An object is stored by finding the next virtual node clockwise along the ring, and each virtual node is associated with a real node. For example, as shown in Fig. 2, A1, A2, B1, B2, C1, C2, D1 and D2 are virtual nodes; node A stores the data of virtual nodes A1 and A2, node B stores the data of virtual nodes B1 and B2, node C stores the data of virtual nodes C1 and C2, and node D stores the data of virtual nodes D1 and D2.

After the "virtual nodes" are introduced, the mapping changes from {object → node} to {object → virtual node}. Since the virtual nodes are numerous and evenly distributed, removing or adding a cache resource server changes the existing mapping as little as possible, satisfying the monotonicity requirement.
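The virtual-node scheme described above can be sketched in code. This is a minimal illustration only, assuming MD5 as the ring hash and 100 virtual nodes per server; the class and method names are hypothetical and not taken from the patent:

```python
import hashlib
from bisect import bisect_right

def _ring_hash(key: str) -> int:
    # Map a string to a position on the ring; MD5 is an assumption here,
    # any stable hash function would do.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Consistent hash ring with virtual nodes.

    Each real cache server is represented by `replicas` virtual nodes on
    the ring; an object is stored on the real node associated with the
    first virtual node found clockwise from the object's hash position.
    """
    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = {}          # ring position -> real server
        self.sorted_keys = []
        for server in servers:
            self.add(server)

    def add(self, server):
        for i in range(self.replicas):
            # Virtual node names like "B#17" all map back to real node B.
            self.ring[_ring_hash(f"{server}#{i}")] = server
        self.sorted_keys = sorted(self.ring)

    def remove(self, server):
        for i in range(self.replicas):
            del self.ring[_ring_hash(f"{server}#{i}")]
        self.sorted_keys = sorted(self.ring)

    def lookup(self, object_key):
        # Walk clockwise: first virtual node at or after the object's position,
        # wrapping around to the start of the ring if necessary.
        pos = _ring_hash(object_key)
        idx = bisect_right(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]
```

Removing a server in this sketch only remaps the objects whose successor virtual node belonged to that server, which is the monotonicity property the paragraph above refers to.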
When facing massive client access requests, such a CDN can use global server load balancing (GSLB) technology to direct a client's access to the nearest properly working cache resource server. However, since the storage capacity, concurrency capability and bandwidth of a single cache resource server are limited, it cannot satisfy clients' concurrent access requirements. Therefore, the prior art often uses a load balancer (also called a reverse proxy) together with multiple cache resource servers to form a cache resource server cluster that handles clients' concurrent accesses. When a client requests a resource file, the load balancer forwards the client's access request, according to a relevant load-balancing scheduling method, to a cache resource server selected from the cluster, and then passes the response content returned by that cache resource server back to the client. Such a system is also called a load balancing system.
In prior-art load balancing systems, both the client's access request and the response content returned by the cache resource server for that request must be relayed through the load balancer, which causes the problem of slow response.
Summary of the invention
The problem solved by the embodiments of the present invention is how to improve the response speed of a load balancing system.
To solve the above problem, an embodiment of the present invention provides a content distribution method for a load balancing system, the method including:
receiving an access request from a client;
redirecting the received client access request to a corresponding cache resource server, the cache resource server storing the cache resource file corresponding to the client's access request.
Optionally, the method further includes:
collecting client access requests within a preset time, each access request including the URL of the cache resource file corresponding to the access request;
when the number of occurrences of the URL of the same cache resource file in the client access requests collected within the preset time exceeds a preset threshold, setting the cache resource file as hot content;
instructing the origin site to distribute the hot content to each cache resource server.
Optionally, redirecting the received client access request to the corresponding cache resource server includes:
judging whether the cache resource file corresponding to the client's access request is hot content;
when the cache resource file corresponding to the client's access request is judged to be hot content, evenly redirecting the received client access requests across the corresponding cache resource servers;
when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server.
Optionally, when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server includes:
obtaining the URL of the cache resource file from the client's access request;
calculating the hash value of the obtained URL of the cache resource file using a hash function;
redirecting the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, the cache resource file being stored on the cache resource server corresponding to that hash value.
Optionally, the method further includes: when a failure of the corresponding cache resource server is detected, redirecting the received client access request to another cache resource server.
An embodiment of the present invention further provides a content distribution device for a load balancing system, the device including:
a receiving unit adapted to receive an access request from a client;
a first redirection unit adapted to redirect the client access request received by the receiving unit to a corresponding cache resource server, the cache resource server storing the cache resource file corresponding to the client's access request.
Optionally, the device further includes:
a collecting unit adapted to collect client access requests within a preset time, each access request including the URL of the cache resource file corresponding to the access request;
a counting unit adapted to count the occurrences of the URL of the same cache resource file in the client access requests collected by the collecting unit within the preset time;
a judging unit adapted to judge whether the number of occurrences of the URL of the same cache resource file counted by the counting unit exceeds a preset threshold;
a setting unit adapted to set the cache resource file as hot content when the judging unit judges that the number of occurrences exceeds the preset threshold;
an indicating unit adapted to instruct the origin site to distribute the hot content set by the setting unit to each cache resource server.
Optionally, the first redirection unit includes:
a judging subunit adapted to judge whether the cache resource file corresponding to the client's access request is hot content;
a first redirection subunit adapted to evenly redirect the received client access requests across the corresponding cache resource servers when the judging subunit judges the cache resource file corresponding to the client's access request to be hot content;
a second redirection subunit adapted to redirect the received client access request to the corresponding cache resource server when the judging subunit judges the cache resource file corresponding to the client's access request not to be hot content.
Optionally, the second redirection subunit includes:
an obtaining module adapted to obtain the URL of the cache resource file from the client's access request;
a calculating module adapted to calculate, using a hash function, the hash value of the URL of the cache resource file obtained by the obtaining module;
a redirection module adapted to redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, the cache resource file being stored on that cache resource server.
Optionally, the device further includes:
a detecting unit adapted to detect whether the corresponding cache resource server fails;
a second redirection unit adapted to redirect the received client access request to another cache resource server when the detecting unit detects that the corresponding cache resource server fails.
An embodiment of the present invention further provides a load balancer including the above content distribution device.
An embodiment of the present invention further provides a load balancing system including two or more cache resource servers and the above load balancer, the load balancer being connected to each of the cache resource servers.
Compared with the prior art, the technical scheme of the present invention has the following advantages:
By redirecting the client's access request to the corresponding cache resource server, so that the cache resource server sends the corresponding content directly to the client, the scheme avoids the slow response caused by relaying traffic through the load balancer and can effectively improve the user experience.
Further, since content accessed by clients more than a preset threshold number of times is set as hot content and distributed by the origin site to every cache resource server in the load balancing system, when a received client request is for hot content, the hot-content access requests can be evenly distributed across the cache resource servers, effectively balancing the load of each cache resource server and thereby improving the response speed of the load balancing system.
Further, since the client's access request is redirected to another cache resource server when a failure of the corresponding cache resource server is detected, the client's access request is guaranteed a timely response, which improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the mapping and storage relationship between objects and cache resource servers in a prior-art CDN;
Fig. 2 is a schematic diagram of another mapping and storage relationship between objects and cache resource servers in a prior-art CDN;
Fig. 3 is a schematic structural diagram of a load balancing system in an embodiment of the present invention;
Fig. 4 is a flowchart of a content distribution method of a load balancing system in an embodiment of the present invention;
Fig. 5 is a flowchart of another content distribution method of a load balancing system in an embodiment of the present invention;
Fig. 6 is a flowchart of determining hot content in an embodiment of the present invention;
Fig. 7 is a flowchart of redirecting a received client request for non-hot content to the corresponding cache resource server in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a content distribution device of a load balancing system in an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the first redirection unit in an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the second redirection subunit in an embodiment of the present invention.
Detailed description of the embodiments
The load balancing methods of existing load balancing systems mainly include the following three:

The first is round-robin scheduling: this method distributes the received client access requests in turn to the cache resource servers in the load balancing system. Its advantage is simplicity, but every cache resource server in the load balancing system must cache all the resource files of the origin site, and the actual load capacity of each cache resource server is not considered.

The second is weighted scheduling: this method assigns a different weight to each cache resource server according to its processing capability, so that different servers handle different loads. Because it takes the processing capability of different cache resource servers into account, it ensures that high-performance cache resource servers get higher utilization and effectively prevents lower-performance servers from being overloaded. However, like the first method, it requires every cache resource server in the load balancing system to store all the resource files of the origin site.

The third is URL (Uniform Resource Locator) hashing: this method uses the same hash algorithm to calculate a hash value for the URL of every resource file of the origin site, and then maps the calculated hash values onto the corresponding cache resource servers. Since the hash value of each resource file's URL calculated with the same hash function is fixed, all client requests for the cache resource file corresponding to a given URL are distributed to one fixed cache resource server. The advantage of this method is therefore that the resource file corresponding to a URL's hash value needs to be stored on only one cache resource server, and, because a hash function is used to compute the URL hash values, the received client access requests can be evenly distributed across the corresponding cache resource servers.
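The three prior-art scheduling methods can be contrasted with a short sketch. The server names and weights below are hypothetical examples, and the patent does not specify a hash algorithm, so MD5 is assumed for the URL-hashing case:

```python
import hashlib
import itertools

servers = ["cache-a", "cache-b", "cache-c"]   # hypothetical server names

# 1. Round-robin: hand out servers in turn, ignoring capacity differences.
_rr = itertools.cycle(servers)
def round_robin(request_url):
    return next(_rr)

# 2. Weighted scheduling: higher-capacity servers appear proportionally
#    more often in the rotation (weights chosen arbitrarily here).
weights = {"cache-a": 3, "cache-b": 1, "cache-c": 1}
_wrr = itertools.cycle([s for s in servers for _ in range(weights[s])])
def weighted(request_url):
    return next(_wrr)

# 3. URL hashing: the same URL always maps to the same server, so each
#    resource file needs to be cached on only one server.
def url_hash(request_url):
    digest = int(hashlib.md5(request_url.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Note that the first two policies say nothing about which server holds which file, which is why they force every server to cache the full origin content, while the third fixes the file-to-server mapping.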
However, all three of the above load balancing methods require the load balancer to receive all client access requests and forward them to the corresponding cache resource servers; meanwhile, when a cache resource server returns the corresponding response content, the load balancer must forward that response content to the client. This relaying not only places high demands on the processing capability of the load balancer, but also slows down the response to client accesses.
To solve the above problems in the prior art, the embodiments of the present invention send the client's access request to the corresponding cache resource server by way of redirection, which avoids the slow response caused by relaying through the load balancer and can further improve the user experience.
To make the above objects, features and advantages of the present invention more comprehensible, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 3 shows a schematic structural diagram of a load balancing system in an embodiment of the present invention. The load balancing system shown in Fig. 3 may include a load balancer 31 and two or more cache resource servers 32, where the load balancer 31 is connected to each cache resource server 32 and is connected to a client 33 through the Internet.
Fig. 4 shows a flowchart of a content distribution method of a load balancing system in an embodiment of the present invention. The operating principle of the load balancing system shown in Fig. 3 is described in detail below with reference to Fig. 4:

Step S401: receive an access request from a client.

In a specific implementation, the content access request of client 33 includes the URL of the cache resource file corresponding to the access request.
Step S402: redirect the received client access request to the corresponding cache resource server.

In a specific implementation, upon receiving the access request of client 33, the load balancer 31 may redirect it to the corresponding cache resource server 32 by means of an HTTP redirect. Since the resource file corresponding to the access request of client 33 is stored on that cache resource server 32, after the redirection by the load balancer 31, client 33 can establish a connection directly with the corresponding cache resource server 32 and obtain the response content it provides.
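The redirect step can be sketched as follows. This is a minimal illustration, assuming the balancer answers with an HTTP 302 status and a Location header rather than proxying the response; `pick_server` is a hypothetical placeholder for the scheduling policies refined in the later embodiments:

```python
from urllib.parse import urlsplit

def pick_server(url: str) -> str:
    # Placeholder scheduling policy; the hostname is a made-up example.
    return "cache-a.example.com"

def handle_request(url: str):
    """Answer the client with a 302 redirect instead of relaying the
    request and response through the load balancer."""
    parts = urlsplit(url)
    path = parts.path + (f"?{parts.query}" if parts.query else "")
    target = f"http://{pick_server(url)}{path}"
    # The client retries the request directly against `target`, so the
    # response body never passes through the balancer.
    return 302, {"Location": target}
```

After the redirect, the balancer is out of the data path entirely, which is the source of the response-speed advantage claimed above.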
Fig. 5 shows a flowchart of another content distribution method of a load balancing system in an embodiment of the present invention. The operating principle of the load balancing system shown in Fig. 3 is described in further detail below with reference to the content distribution method of Fig. 5:

Step S501: receive an access request from a client.

In a specific implementation, when a user enters the address of a resource file to be accessed in the CDN into the address bar of the client's browser, the access request of client 33 is first received by the CDN's global server load balancing (GSLB) device. The GSLB device then communicates with the load balancers 31 of all load balancing systems to choose the optimal cache resource server 32 to respond to the client's access request. For example, when the access request of client 33 arrives, the GSLB device sends it, according to a corresponding global load balancing scheduling method, to the load balancer 31 in the load balancing system described in Fig. 3.
Step S502: judge whether the cache resource file corresponding to the client's access request is hot content.

In a specific implementation, after receiving the content access request of client 33, the load balancer 31 may first apply processing such as anti-leeching and URI conversion to the received request, and then judge whether the cache resource file corresponding to the access request of client 33 is hot content.

In a specific implementation, to further balance the load of the load balancing system, the resource files cached on the cache resource servers 32 may be classified as hot content and non-hot content; for example, the number of times a cached resource file is accessed within a preset time may determine which resource files are hot content and which are not. Referring to Fig. 6, the method of setting hot content may include:
Step S601: collect client access requests within a preset time.

In a specific implementation, since the access requests of clients 33 are redirected by the load balancer 31 to different cache resource servers 32, and each access request of a client 33 includes the URL of the corresponding cache resource file, the load balancer 31 can obtain the URLs of the cache resource files corresponding to the clients' access requests by parsing and counting the access requests received within the preset time.

In a specific implementation, the preset time can be configured as needed; for example, it may be set to 1 day or 3 days.

Step S602: judge whether the number of occurrences of the URL of the same cache resource file in the collected access requests exceeds a preset threshold.

In a specific implementation, the preset threshold can be configured as needed; for example, it may be set to 3 or 9.

In a specific implementation, when the judgment result is no, step S603 may be performed; when the judgment result is yes, steps S604 to S605 may be performed.
Step S603: set the cache resource file as non-hot content.

In a specific implementation, when the number of occurrences of the URL of the same cache resource file in the client access requests collected within the preset time does not exceed the preset threshold, the resource file is not content that clients access frequently, so the cache resource file may be set as non-hot content.

Step S604: set the cache resource file as hot content.

In a specific implementation, when the number of occurrences of the URL of the same cache resource file in the client access requests collected within the preset time exceeds the preset threshold, the cache resource file corresponding to that URL is content that clients access frequently, so it may be set as hot content.

Step S605: instruct the origin site to distribute the hot content to each cache resource server.

In a specific implementation, if the hot content were stored on a single cache resource server 32, then, because clients 33 access the hot content frequently, the cache resource server 32 storing it would bear an excessive load and would be very likely to suffer accidents such as crashing.

To further balance the load of the load balancing system, the origin site can be instructed to distribute the hot content to every cache resource server 32 in the load balancing system. Since every cache resource server 32 then stores the hot content, every cache resource server 32 can handle requests from clients 33 for the hot content, so that the load of accessing the hot content originally borne by a single cache resource server 32 is shared evenly among all cache resource servers 32 in the load balancing system, improving the response speed of the load balancing system.
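Steps S601 to S605 amount to counting URL occurrences within the collection window and comparing the counts against the threshold. A minimal sketch, with hypothetical function and variable names (the patent does not prescribe an implementation):

```python
from collections import Counter

def classify_hot(request_urls, threshold):
    """Split the URLs seen in one collection window into hot and non-hot
    sets: a URL whose occurrence count exceeds `threshold` is hot (S604),
    otherwise non-hot (S603)."""
    counts = Counter(request_urls)
    hot = {url for url, n in counts.items() if n > threshold}
    non_hot = set(counts) - hot
    return hot, non_hot
```

In the scheme above, the `hot` set would then be pushed by the origin site to every cache resource server (S605), while non-hot files remain on their single hash-assigned server.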
Continuing with Fig. 5, in a specific implementation, when the judgment result of step S502 is yes, step S503 may be performed; otherwise step S504 may be performed.

Step S503: evenly redirect the received client access requests across the corresponding cache resource servers.

In a specific implementation, since every cache resource server 32 in the load balancing system stores the cache resource files set as hot content, to further balance the load of the load balancing system, the access requests of clients 33 may be distributed evenly across the cache resource servers 32. For example, the load balancer 31 may redirect requests for hot content to the cache resource servers 32 in turn, in the order 32(1) to 32(n), where n is the number of cache resource servers 32 in the load balancing system serving clients' 33 requests for hot content.
It should be pointed out here that steps S501 to S503 may be performed when the cache resource server 32 responding to the access request of client 33 need not be scheduled by region.

When the cache resource server 32 responding to the access request of client 33 is scheduled by region, the load balancer 31 may select the cache resource server 32 nearest to client 33 according to the IP address of client 33 and send the IP address of the selected cache resource server 32 to client 33; client 33 can then establish a connection directly with that cache resource server 32 according to the IP address returned by the load balancer 31, thereby obtaining the corresponding content provided by the cache resource server 32.
Step S504: redirect the received client access request to the corresponding cache resource server.

In a specific implementation, since the content of the origin site can be mapped and stored onto the cache resource servers 32 of the nodes using a hash function, the client requests handled by each cache resource server 32 are determinate. Upon receiving the access request of client 33, the load balancer 31 can determine, through the mapping between the URLs of the stored cache resource files and the cache resource servers 32, the cache resource server 32 on which the cache resource file corresponding to the access request of client 33 is stored, and thus redirect the received client access request to the determined corresponding cache resource server 32.

Specifically, referring to Fig. 7, when the cache resource file corresponding to the access request of client 33 is judged not to be hot content, redirecting the received access request of client 33 to the corresponding cache resource server may include:
Step S701: obtain the URL of the cache resource file from the client's access request.

In a specific implementation, the mapping and storage relationship between cache resource files and cache resource servers 32 can be determined through the mapping between the URLs of the cache resource files and the cache resource servers 32; therefore, to determine the cache resource server 32 corresponding to the client's access request, the URL of the cache resource file corresponding to the client's access request is first obtained.

Step S702: calculate the hash value of the obtained URL of the cache resource file using a hash function.

In a specific implementation, since the mapping between the URLs of cache resource files and the cache resource servers 32 can be determined using a hash function, in order to determine on which cache resource server 32 the resource file corresponding to the client's access request is stored, the hash value of the URL of the cache resource file can be calculated using that hash function.

Step S703: redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file.

In a specific implementation, from the calculated hash value of the URL of the cache resource file, the cache resource server 32 storing the cache resource file can be determined, so that the corresponding client's access request can be redirected to the determined corresponding cache resource server 32.
In a specific implementation, in order to grasp the communication status of the cache resource server at each node and ensure that client access requests are not distributed to a failed cache resource server node, so that client access requests can be responded to quickly and in time, the content distribution method of the load balancing system shown in Fig. 5 may further include:

Step S505: detect whether the corresponding cache resource server fails.

In a specific implementation, the operating status of each node's cache resource server 32 in the load balancing system may be collected periodically, so as to determine whether the corresponding cache resource server 32 fails.

In a specific implementation, when no failure is detected, step S506 may be performed; when a failure is detected, step S507 may be performed.
Step S506: redirect the received client access request to the corresponding cache resource server.

In a specific implementation, when detection determines that the corresponding cache resource server 32 is running normally and has not failed, the received client access request can be redirected to the corresponding cache resource server 32.

Step S507: redirect the received client access request to another cache resource server.

In a specific implementation, since the same resource file of the origin site can be cached on different cache resource servers 32 in the load balancing system, when the corresponding cache resource server 32 is judged to have failed, in order to ensure that the access request of client 33 receives a quick and timely response, the received access request of client 33 can be redirected to another cache resource server 32 storing the resource file.
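Steps S701 to S703 together with the failover of steps S505 to S507 can be sketched as follows. MD5 is an assumed hash function, and the fallback policy (re-mapping among the remaining healthy servers) is one possible interpretation for illustration, not the patent's prescribed method:

```python
import hashlib

def server_for_url(url, servers):
    # S702: hash the resource file's URL with a fixed hash function;
    # S703: map the hash value onto one of the cache servers, so the
    # same URL always lands on the same server.
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

def redirect_target(url, servers, failed):
    # S505-S507: if the hash-assigned server is detected as failed,
    # fall back to another cache server that also holds the file.
    primary = server_for_url(url, servers)
    if primary not in failed:
        return primary          # S506: normal case
    healthy = [s for s in servers if s not in failed]
    return server_for_url(url, healthy)   # S507: re-map among healthy servers
```

The determinism of `server_for_url` is what lets each non-hot resource file live on a single server, while `redirect_target` keeps requests away from failed nodes.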
Fig. 8 shows a schematic structural diagram of a content distribution device of a load balancing system in an embodiment of the present invention. The content distribution device 800 of a load balancing system shown in Fig. 8 may include a receiving unit 801 and a first redirection unit 802, where:

the receiving unit 801 is adapted to receive an access request from a client;

the first redirection unit 802 is adapted to redirect the client access request received by the receiving unit 801 to the corresponding cache resource server, the cache resource server storing the cache resource file corresponding to the client's access request.
Fig. 9 shows a schematic structural diagram of the first redirection unit in an embodiment of the present invention. The first redirection unit 900 shown in Fig. 9 may include a judging subunit 901, a first redirection subunit 902 and a second redirection subunit 903. Wherein:
The judging subunit 901 is adapted to judge whether the cached resource file corresponding to the client's access request is hot content.
The first redirection subunit 902 is adapted to, when the judging subunit 901 judges that the cached resource file corresponding to the client's access request is hot content, redirect the received client access requests evenly across the corresponding cache servers.
The second redirection subunit 903 is adapted to, when the judging subunit 901 judges that the cached resource file corresponding to the client's access request is not hot content, redirect the received client access request to the corresponding cache server.
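As a rough, non-limiting sketch of this hot/non-hot split (the round-robin cycle for hot content and the byte-sum "hash" for other content are assumptions made for illustration; the embodiment only requires even distribution for hot content and some deterministic mapping otherwise):

```python
import itertools

cache_servers = ["cache-1", "cache-2", "cache-3"]  # illustrative pool
_round_robin = itertools.cycle(cache_servers)       # even spread for hot files

def pick_server(url: str, is_hot: bool) -> str:
    """Hot files are replicated on every cache server, so their requests
    are spread evenly; other files live on one deterministically chosen
    server, so their requests always go to that server."""
    if is_hot:
        return next(_round_robin)
    # Toy deterministic "hash": sum of URL bytes modulo the pool size.
    return cache_servers[sum(url.encode("utf-8")) % len(cache_servers)]

# Non-hot requests for one URL are sticky, e.g. "/a.mp4" -> "cache-2" here.
```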
Figure 10 shows a schematic structural diagram of the second redirection subunit in an embodiment of the present invention. The second redirection subunit 100 shown in Figure 10 may include an acquisition module 101, a calculation module 102 and a redirection module 103. Wherein:
The acquisition module 101 is adapted to obtain the URL of the cached resource file in the client's access request.
The calculation module 102 is adapted to calculate, using a hash function, the hash value of the URL of the cached resource file obtained by the acquisition module 101.
The redirection module 103 is adapted to redirect the received client access request to the cache server corresponding to the hash value of the URL of the cached resource file calculated by the calculation module 102, the cached resource file being stored in the cache server corresponding to the hash value of its URL.
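By way of illustration only, the hash-based selection performed by modules 101-103 might look like the following sketch (MD5 and the fixed four-server pool are assumptions; the embodiment requires only some hash function over the URL):

```python
import hashlib

cache_servers = ["cache-1", "cache-2", "cache-3", "cache-4"]  # illustrative

def server_for(url: str) -> str:
    """Map a resource file URL to exactly one cache server by hashing
    the URL, so the file is cached on and served from one fixed node."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()  # hash the URL
    return cache_servers[int(digest, 16) % len(cache_servers)]

# The mapping is deterministic: the same URL always selects the same
# server, which is exactly the server on which that file was cached.
assert server_for("/video/a.mp4") == server_for("/video/a.mp4")
```

Note that a plain modulo mapping like this reshuffles most URLs whenever the pool size changes; consistent hashing is a common refinement, though the embodiment does not require it.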
In a specific implementation, the content distribution apparatus 800 of the load balancing system shown in Fig. 8 may further include a collecting unit 803, a counting unit 804, a judging unit 805, a setting unit 806 and an indicating unit 807 connected in sequence. Wherein:
The collecting unit 803 is adapted to collect client access requests within a preset time, each access request including the URL of the cached resource file corresponding to that access request.
The counting unit 804 is adapted to count the number of occurrences of the same cached resource file URL in the client access requests collected by the collecting unit 803 within the preset time.
The judging unit 805 is adapted to judge whether the number of occurrences of the same cached resource file URL in the client access requests within the preset time, as counted by the counting unit 804, exceeds a preset threshold.
The setting unit 806 is adapted to set the cached resource file as hot content when the judging unit 805 judges that the number of occurrences of the same cached resource file URL in the client access requests within the preset time exceeds the preset threshold.
The indicating unit 807 is adapted to instruct the origin site to distribute the hot content set by the setting unit 806 to each cache server respectively.
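The collect/count/judge/set chain above amounts to windowed frequency counting against a threshold; a minimal sketch follows (the window contents and the threshold value are hypothetical, chosen only for illustration):

```python
from collections import Counter

HOT_THRESHOLD = 3  # assumed preset threshold

def find_hot_urls(urls_in_window, threshold=HOT_THRESHOLD):
    """Count occurrences of each cached-resource-file URL within one
    preset time window and return those exceeding the threshold."""
    counts = Counter(urls_in_window)
    return {url for url, n in counts.items() if n > threshold}

# Access-request URLs collected in one window (illustrative).
window = ["/a.mp4", "/a.mp4", "/b.png", "/a.mp4", "/a.mp4", "/c.css"]
hot = find_hot_urls(window)   # {"/a.mp4"} -- 4 occurrences > threshold 3
# These URLs would then be reported so that the origin site pushes the
# corresponding files to every cache server.
```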
In a specific implementation, the content distribution apparatus of the load balancing system in the embodiment of the present invention may further include a detection unit 808 and a second redirection unit 809. Wherein:
The detection unit 808 is adapted to detect whether the corresponding cache server has failed.
The second redirection unit 809 is adapted to redirect the received client access request to another cache server when the detection unit 808 detects that the corresponding cache server has failed.
An embodiment of the present invention further provides a load balancer, which may include the content distribution apparatus of the above embodiments; details are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, which may include a ROM, a RAM, a magnetic disk, an optical disc, and the like.
The method and system of the embodiments of the present invention have been described in detail above, but the present invention is not limited thereto. Any person skilled in the art may make various changes or modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be defined by the claims.
Claims (10)
1. A content distribution method of a load balancing system, characterised by comprising:
receiving an access request from a client;
redirecting the received client access request to a corresponding cache server, the cache server storing the cached resource file corresponding to the client's access request;
collecting client access requests within a preset time, each access request including the URL of the cached resource file corresponding to that access request;
when the number of occurrences of the same cached resource file URL in the client access requests collected within the preset time exceeds a preset threshold, setting the cached resource file as hot content;
instructing the origin site to distribute the hot content to each cache server respectively.
2. The content distribution method of a load balancing system according to claim 1, characterised in that redirecting the received client access request to the corresponding cache server comprises:
judging whether the cached resource file corresponding to the client's access request is hot content;
when the cached resource file corresponding to the client's access request is judged to be hot content, redirecting the received client access requests evenly across the corresponding cache servers;
when the cached resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache server.
3. The content distribution method of a load balancing system according to claim 2, characterised in that, when the cached resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache server comprises:
obtaining the URL of the cached resource file in the client's access request;
calculating, using a hash function, the hash value of the URL of the obtained cached resource file;
redirecting the received client access request to the cache server corresponding to the hash value of the URL of the cached resource file, the cached resource file being stored in the cache server corresponding to the hash value of its URL.
4. The content distribution method of a load balancing system according to claim 1, characterised by further comprising: when it is detected that the corresponding cache server has failed, redirecting the received client access request to another cache server.
5. A content distribution apparatus of a load balancing system, characterised by comprising:
a receiving unit, adapted to receive an access request from a client;
a first redirection unit, adapted to redirect the client access request received by the receiving unit to a corresponding cache server, the cache server storing the cached resource file corresponding to the client's access request;
a collecting unit, adapted to collect client access requests within a preset time, each access request including the URL of the cached resource file corresponding to that access request;
a counting unit, adapted to count the number of occurrences of the same cached resource file URL in the client access requests collected by the collecting unit within the preset time;
a judging unit, adapted to judge whether the number of occurrences of the same cached resource file URL counted by the counting unit within the preset time exceeds a preset threshold;
a setting unit, adapted to set the cached resource file as hot content when the judging unit judges that the number of occurrences of the same cached resource file URL in the client access requests within the preset time exceeds the preset threshold;
an indicating unit, adapted to instruct the origin site to distribute the hot content set by the setting unit to each cache server respectively.
6. The content distribution apparatus of a load balancing system according to claim 5, characterised in that the first redirection unit comprises:
a judging subunit, adapted to judge whether the cached resource file corresponding to the client's access request is hot content;
a first redirection subunit, adapted to redirect the received client access requests evenly across the corresponding cache servers when the judging subunit judges that the cached resource file corresponding to the client's access request is hot content;
a second redirection subunit, adapted to redirect the received client access request to the corresponding cache server when the judging subunit judges that the cached resource file corresponding to the client's access request is not hot content.
7. The content distribution apparatus of a load balancing system according to claim 6, characterised in that the second redirection subunit comprises:
an acquisition module, adapted to obtain the URL of the cached resource file in the client's access request;
a calculation module, adapted to calculate, using a hash function, the hash value of the URL of the cached resource file obtained by the acquisition module;
a redirection module, adapted to redirect the received client access request to the cache server corresponding to the hash value of the URL of the cached resource file, the cached resource file being stored in the cache server corresponding to the hash value of its URL.
8. The content distribution apparatus of a load balancing system according to claim 5, characterised by further comprising:
a detection unit, adapted to detect whether the corresponding cache server has failed;
a second redirection unit, adapted to redirect the received client access request to another cache server when the detection unit detects that the corresponding cache server has failed.
9. A load balancer, characterised by comprising the content distribution apparatus according to any one of claims 5-8.
10. A load balancing system, characterised by comprising: two or more cache servers and the load balancer according to claim 9, wherein the load balancer is connected to each of the cache servers respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410401428.7A CN104202362B (en) | 2014-08-14 | 2014-08-14 | SiteServer LBS and its content distribution method and device, load equalizer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104202362A CN104202362A (en) | 2014-12-10 |
CN104202362B true CN104202362B (en) | 2017-11-03 |
Family
ID=52087587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410401428.7A Active CN104202362B (en) | 2014-08-14 | 2014-08-14 | SiteServer LBS and its content distribution method and device, load equalizer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104202362B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107211189A (en) * | 2014-12-30 | 2017-09-26 | 上海诺基亚贝尔股份有限公司 | A method and apparatus for video transmission |
CN107277093A (en) * | 2016-04-08 | 2017-10-20 | 北京优朋普乐科技有限公司 | Content distributing network and its load-balancing method |
CN107517241A (en) * | 2016-06-16 | 2017-12-26 | 中兴通讯股份有限公司 | Request scheduling method and device |
WO2018090315A1 (en) * | 2016-11-18 | 2018-05-24 | 华为技术有限公司 | Data request processing method and cache system |
CN111464649B (en) * | 2017-04-19 | 2022-10-21 | 贵州白山云科技股份有限公司 | Access request source returning method and device |
CN107707597A (en) * | 2017-04-26 | 2018-02-16 | 贵州白山云科技有限公司 | A burst hotspot access load-balancing method and apparatus |
CN107689930A (en) * | 2017-09-08 | 2018-02-13 | 桂林加宏汽车修理有限公司 | A resource scheduling method and system |
CN108667692B (en) * | 2018-07-06 | 2021-01-29 | 厦门网宿有限公司 | Performance test method and system for load balance of distributed cache equipment |
CN109067898A (en) * | 2018-08-24 | 2018-12-21 | 山东浪潮商用系统有限公司 | A method for reducing the back-to-origin rate of content delivery network edge nodes through hash-based file distribution |
CN109819039B (en) * | 2019-01-31 | 2022-04-19 | 网宿科技股份有限公司 | File acquisition method, file storage method, server and storage medium |
CN110336848B (en) * | 2019-04-23 | 2022-12-20 | 网宿科技股份有限公司 | Scheduling method, scheduling system and scheduling equipment for access request |
CN111314414B (en) * | 2019-12-17 | 2021-09-28 | 聚好看科技股份有限公司 | Data transmission method, device and system |
CN113132443B (en) * | 2019-12-31 | 2022-06-07 | 北京金山云网络技术有限公司 | Resource acquisition method and device and node equipment in CDN (content delivery network) |
CN115150475A (en) * | 2021-03-31 | 2022-10-04 | 贵州白山云科技股份有限公司 | Scheduling method, device, medium and equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101222424A (en) * | 2007-12-24 | 2008-07-16 | 中国电信股份有限公司 | Content distribution network and scheduling method based on content in the network |
CN101764824A (en) * | 2010-01-28 | 2010-06-30 | 深圳市同洲电子股份有限公司 | Distributed cache control method, device and system |
CN101848137A (en) * | 2009-03-26 | 2010-09-29 | 北京快网科技有限公司 | Load balancing method and system applied to three-layer network |
CN102118433A (en) * | 2010-12-27 | 2011-07-06 | 网宿科技股份有限公司 | Multiple-tier distributed cluster system |
CN103281367A (en) * | 2013-05-22 | 2013-09-04 | 北京蓝汛通信技术有限责任公司 | Load balance method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7512943B2 (en) * | 2005-08-30 | 2009-03-31 | Microsoft Corporation | Distributed caching of files in a network |
2014-08-14: Application CN201410401428.7A filed in China; granted as patent CN104202362B (status: Active).
Non-Patent Citations (2)
Title |
---|
Research and Implementation of a Load Balancing Strategy for a P2P Streaming Media Content Distribution System; Mei Hongzhou; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 31 March 2011; I136-608 *
Research and Implementation of Load Balancing Technology for an IPv6-Based Content Delivery Network System; Xu Weidong; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 31 March 2006; I139-44 *
Also Published As
Publication number | Publication date |
---|---|
CN104202362A (en) | 2014-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104202362B (en) | SiteServer LBS and its content distribution method and device, load equalizer | |
CN105763628B (en) | Data access request processing method and processing device, edge node server and cluster | |
CN107317879B (en) | A kind of distribution method and system of user's request | |
CN113037869B (en) | Method and apparatus for back-sourcing of content distribution network system | |
CN1098488C (en) | Method and apparatus for dynamic interval-based load balancing | |
WO2018209734A1 (en) | Dynamic scheduling and allocation method and system for network traffic | |
EP1975793A2 (en) | Optimized network resource location | |
CN108173937A (en) | Access control method and device | |
US11372937B1 (en) | Throttling client requests for web scraping | |
KR20130070500A (en) | Method and apparatus for processing server load balancing with the result of hash function | |
CN102118433A (en) | Multiple-tier distributed cluster system | |
CN104580393A (en) | Method and device for expanding server cluster system and server cluster system | |
CN102447719A (en) | Dynamic load balancing information processing system for Web GIS service | |
CN101984624A (en) | Method and device for distributing network flow | |
CN102215247B (en) | Network proximity load balancing method and device | |
CN105208133A (en) | Server, load balancer as well as server load balancing method and system | |
CN103152396A (en) | Data placement method and device applied to content distribution network system | |
CN104935653A (en) | Bypass cache method for visiting hot spot resource and device | |
EP3066577A2 (en) | Content node selection using network performance profiles | |
CN108111567A (en) | Realize the uniform method and system of server load | |
CN105025042B (en) | A kind of method and system of determining data information, proxy server | |
CN104270371A (en) | CDN cache server selecting method based on fuzzy logic | |
US20230018983A1 (en) | Traffic counting for proxy web scraping | |
EP4222617A1 (en) | Web scraping through use of proxies, and applications thereof | |
KR20100038800A (en) | Method for updating data stored in cache server, cache server and content delivery system thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||