CN104202362A - Load balance system and content distribution method and device thereof, and load balancer - Google Patents

Load balance system and content distribution method and device thereof, and load balancer

Info

Publication number
CN104202362A
Authority
CN
China
Prior art keywords
cache resources
access request
client
redirected
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410401428.7A
Other languages
Chinese (zh)
Other versions
CN104202362B (en)
Inventor
朱大伟
徐永丰
顾庆荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Supreme Being Joins Information Technology Share Co Ltd
Original Assignee
Shanghai Supreme Being Joins Information Technology Share Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Supreme Being Joins Information Technology Share Co Ltd
Priority to CN201410401428.7A
Publication of CN104202362A
Application granted
Publication of CN104202362B
Legal status: Active (current)
Anticipated expiration


Landscapes

  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a load balancing system, a content distribution method and device thereof, and a load balancer. The method comprises the following steps: receiving an access request from a client; and redirecting the received client access request to the corresponding cache resource server. By adopting this technical scheme, the slow response caused by relaying traffic through the load balancer can be avoided, and the user experience can be effectively improved.

Description

Load balancing system, content distribution method and device thereof, and load balancer
Technical field
The present invention relates to the field of content distribution technology, and in particular to a load balancing system, a content distribution method and device thereof, and a load balancer.
Background technology
A content delivery network (Content Delivery Network, CDN) builds a layer of intelligent virtual network on top of the existing Internet by placing node servers throughout the network, and publishes the content of a website to the network "edge" closest to the client, so that the client can obtain the required content nearby. These "edges" are cache resource servers, which cache the content of the origin site.
In current CDN networks, an ordinary hash algorithm is generally used to map the content of the origin site onto specific node servers for storage. When a cache resource server joins or leaves the cluster, the mapping relations between all resource files and cache resource servers are destroyed.
Referring to Fig. 1, specifically, the same hash function is used to map both the objects of the origin site (resource files) and the cache resource servers onto positions of ring 1. Starting from object k1 and moving clockwise, the next node found is node B, so object k1 is stored on node B. If the cache resource server of node B goes down, the objects stored on node B are moved clockwise onto the cache resource server of node C; in this way only node C is affected, and the objects stored on nodes A and D are not. However, as the load increases, the cache resource server of node C then also becomes likely to go down, and so on, until the whole load balancing system is paralyzed.
To address this problem, the prior-art CDN network introduces a consistent hashing algorithm, that is, a plurality of "virtual nodes" are added on ring 1. An object is stored by searching clockwise along the ring for a virtual node, and each virtual node is associated with a real node. For example, as shown in Fig. 2, A1, A2, B1, B2, C1, C2, D1 and D2 are virtual nodes; node A stores the data of virtual nodes A1 and A2, node B stores the data of virtual nodes B1 and B2, node C stores the data of virtual nodes C1 and C2, and node D stores the data of virtual nodes D1 and D2.
After the "virtual nodes" are introduced, the mapping relation changes from {object → node} to {object → virtual node}. Because the virtual nodes are numerous and evenly distributed, the existing mapping relations change as little as possible when a cache resource server is removed or added, satisfying the requirement of monotonicity.
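For illustration only, a minimal sketch of the consistent hashing with virtual nodes described above may be as follows; the node names, the choice of MD5 as the hash function and the number of virtual nodes per server are assumptions for the example and are not part of the original disclosure:

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    # Map a key onto the hash ring using MD5 (any uniformly distributed hash works).
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers, virtual_nodes=2):
        # Each real cache resource server is represented by several virtual nodes on the ring.
        self._ring = []    # sorted list of virtual-node positions
        self._owner = {}   # position -> real server
        for server in servers:
            for i in range(virtual_nodes):
                pos = ring_hash(f"{server}#{i}")
                self._ring.append(pos)
                self._owner[pos] = server
        self._ring.sort()

    def lookup(self, object_key: str) -> str:
        # Walk clockwise from the object's position to the next virtual node.
        pos = ring_hash(object_key)
        idx = bisect.bisect_right(self._ring, pos) % len(self._ring)
        return self._owner[self._ring[idx]]

ring = ConsistentHashRing(["A", "B", "C", "D"], virtual_nodes=2)
print(ring.lookup("http://example.com/video/k1.mp4"))  # e.g. "B"
```

Removing one server only remaps the keys that previously landed on that server's virtual nodes, which is the monotonicity property referred to above.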
When facing massive access requests from clients, the above CDN can use global server load balancing (GSLB) technology to direct a client's access to the nearest properly working cache resource server. However, because a single cache resource server has limited storage capacity, concurrency capability and bandwidth, it cannot meet the concurrent access demands of clients. Therefore, in the prior art, a cluster consisting of a load balancer (also called a reverse proxy) and multiple cache resource servers is often used to handle the concurrent access of clients. When a client requests a resource file, the load balancer forwards the client's access request to a cache resource server selected from the cluster according to a load balancing scheduling method, and then passes the response content sent by the cache resource server back to the client. Such a system is also called a load balancing system.
In the prior-art load balancing system, both the client's access request and the response content returned by the cache resource server for that request must be relayed through the load balancer, so the response speed is slow.
Summary of the invention
The problem solved by the embodiments of the present invention is how to improve the response speed of a load balancing system.
To address the above problem, an embodiment of the present invention provides a content distribution method for a load balancing system, the method comprising:
receiving an access request from a client;
redirecting the received client access request to the corresponding cache resource server, wherein the cache resource server stores the cache resource file corresponding to the client's access request.
Optionally, the method further comprises:
collecting access requests from clients within a preset time period, each access request comprising the URL of the cache resource file corresponding to that access request;
when the number of occurrences of the URL of a same cache resource file in the client access requests collected within the preset time period is greater than a preset threshold, setting the cache resource file as hot content;
instructing the origin site to distribute the hot content to each cache resource server respectively.
Optionally, redirecting the received client access request to the corresponding cache resource server comprises:
judging whether the cache resource file corresponding to the client's access request is hot content;
when the cache resource file corresponding to the client's access request is judged to be hot content, redirecting the received client access requests evenly to the corresponding cache resource servers;
when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server.
Optionally, when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server comprises:
obtaining the URL of the cache resource file in the client's access request;
calculating the hash value of the obtained URL of the cache resource file using a hash function;
redirecting the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, wherein the cache resource file is stored on the cache resource server corresponding to the hash value of that URL.
Optionally, the method further comprises: when it is detected that the corresponding cache resource server fails, redirecting the received client access request to another cache resource server.
An embodiment of the present invention also provides a content distribution device for a load balancing system, the device comprising:
a receiving unit, adapted to receive an access request from a client;
a first redirecting unit, adapted to redirect the client access request received by the receiving unit to the corresponding cache resource server, wherein the cache resource server stores the cache resource file corresponding to the client's access request.
Optionally, the device further comprises:
a collecting unit, adapted to collect access requests from clients within a preset time period, each access request comprising the URL of the cache resource file corresponding to that access request;
a counting unit, adapted to count the number of occurrences of the URL of a same cache resource file in the client access requests collected by the collecting unit within the preset time period;
a judging unit, adapted to judge whether the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period counted by the counting unit is greater than a preset threshold;
a setting unit, adapted to set the cache resource file as hot content when the judging unit determines that the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period is greater than the preset threshold;
an instructing unit, adapted to instruct the origin site to distribute the hot content set by the setting unit to each cache resource server respectively.
Optionally, the first redirecting unit comprises:
a judging subunit, adapted to judge whether the cache resource file corresponding to the client's access request is hot content;
a first redirecting subunit, adapted to redirect the received client access requests evenly to the corresponding cache resource servers when the judging subunit determines that the cache resource file corresponding to the client's access request is hot content;
a second redirecting subunit, adapted to redirect the received client access request to the corresponding cache resource server when the judging subunit determines that the cache resource file corresponding to the client's access request is not hot content.
Optionally, the second redirecting subunit comprises:
an obtaining module, adapted to obtain the URL of the cache resource file in the client's access request;
a calculating module, adapted to calculate, using a hash function, the hash value of the URL of the cache resource file obtained by the obtaining module;
a redirecting module, adapted to redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, wherein the cache resource file is stored on the cache resource server corresponding to the hash value of that URL.
Optionally, the device further comprises:
a detecting unit, adapted to detect whether the corresponding cache resource server fails;
a second redirecting unit, adapted to redirect the received client access request to another cache resource server when the detecting unit detects that the corresponding cache resource server fails.
An embodiment of the present invention also provides a load balancer, the load balancer comprising the above content distribution device.
An embodiment of the present invention also provides a load balancing system, the load balancing system comprising two or more cache resource servers and the above load balancer, wherein the load balancer is connected to each of the cache resource servers.
Compared with the prior art, the technical scheme of the present invention has the following advantages:
In the above technical scheme, the client's access request is redirected to the corresponding cache resource server, and the corresponding content is sent by the cache resource server directly to the client. The slow response caused by relaying traffic through the load balancer can thus be avoided, and the user experience can be effectively improved.
Further, content whose client accesses exceed a preset threshold is set as hot content, and the hot content is distributed by the origin site to each cache resource server in the load balancing system for storage. Therefore, when a received client request is for hot content, the hot-content access requests of clients can be distributed evenly to each cache resource server, which effectively balances the load of the cache resource servers and thus improves the response speed of the load balancing system.
Further, when it is detected that the corresponding cache resource server fails, the client's access request can be redirected to another cache resource server, which ensures that the client's access request is responded to in time and thus improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the mapping and storage relation between objects and cache resource servers in a prior-art CDN network;
Fig. 2 is a schematic diagram of another mapping and storage relation between objects and cache resource servers in a prior-art CDN network;
Fig. 3 is a schematic structural diagram of a load balancing system in an embodiment of the present invention;
Fig. 4 is a flow chart of a content distribution method of a load balancing system in an embodiment of the present invention;
Fig. 5 is a flow chart of another content distribution method of a load balancing system in an embodiment of the present invention;
Fig. 6 is a flow chart of setting hot content in an embodiment of the present invention;
Fig. 7 is a flow chart of redirecting a received client request for non-hot content to the corresponding cache resource server in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a content distribution device of a load balancing system in an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the first redirecting unit in an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the second redirecting subunit in an embodiment of the present invention.
Detailed description of the embodiments
Existing load balancing methods for load balancing systems mainly include the following three kinds:
The first is round-robin scheduling: this method distributes the received client access requests in turn to the cache resource servers in the load balancing system. Its advantage is simplicity, but every cache resource server in the load balancing system needs to cache all resource files of the origin site, and the actual load capacity of each cache resource server is not taken into account.
The second is weighted scheduling: this method assigns a different weight to each cache resource server according to its processing capability, so that different servers handle different loads. The method takes the processing capability of different cache resource servers into account, so high-performance cache resource servers get a higher utilization rate and lower-performance servers are effectively prevented from being overloaded. However, like the first method, it requires every cache resource server of the load balancing system to store all resource files of the origin site.
The third is uniform resource locator (Uniform Resource Locator, URL) hash scheduling: this method applies the same hash algorithm to the URLs of all resource files of the origin site to compute corresponding hash values, and then maps the computed hash values onto the corresponding cache resource servers. Because the same hash function always yields the same hash value for the URL of a given resource file, client requests for the cache resource file corresponding to that URL are always assigned to a fixed cache resource server. The advantage of this method is therefore that only the resource file whose URL corresponds to a given hash value needs to be stored on a single cache resource server, and because the hash values of resource file URLs are computed with a hash function, the received client access requests can be distributed evenly across the corresponding cache resource servers.
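For illustration only, the weighted scheduling and URL hash scheduling policies described above may be sketched as follows; the server list, weights and hash function are assumptions for the example and are not limited thereto:

```python
import hashlib
from itertools import cycle

SERVERS = ["cache-1", "cache-2", "cache-3"]
WEIGHTS = {"cache-1": 3, "cache-2": 2, "cache-3": 1}  # illustrative weights

# Weighted scheduling: expand the server list according to weights and rotate through it.
_weighted_cycle = cycle([s for s in SERVERS for _ in range(WEIGHTS[s])])

def pick_by_weight() -> str:
    return next(_weighted_cycle)

# URL hash scheduling: the same URL always maps to the same cache resource server.
def pick_by_url_hash(url: str) -> str:
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(pick_by_weight())                              # e.g. "cache-1"
print(pick_by_url_hash("http://origin/site/a.mp4"))  # always the same server for this URL
```

In both policies the load balancer itself still relays every request and response, which is the drawback pointed out next.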
However, all three of the above load balancing methods require the load balancer to receive the client access requests centrally and forward them to the corresponding cache resource servers, and when a cache resource server returns the corresponding response content, the load balancer must forward that response content to the client. Because of this relaying, not only are higher demands placed on the processing capability of the load balancer, but the response speed of client access is also slowed down.
To solve the above problem in the prior art, the embodiments of the present invention send the client's access request to the corresponding cache resource server by way of redirection, so the slow response caused by relaying traffic through the load balancer can be avoided and the user experience can be further improved.
To make the above objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 3 shows a schematic structural diagram of a load balancing system in an embodiment of the present invention. The load balancing system shown in Fig. 3 may comprise a load balancer 31 and two or more cache resource servers 32, wherein the load balancer 31 is connected to each cache resource server 32 and is connected to a client 33 through the Internet.
Fig. 4 shows a flow chart of a content distribution method of a load balancing system in an embodiment of the present invention. The working principle of the load balancing system shown in Fig. 3 is described in detail below with reference to Fig. 4:
Step S401: receive an access request from a client.
In a specific implementation, the content access request of the client 33 comprises the URL of the cache resource file corresponding to the access request of the client 33.
Step S402: redirect the received client access request to the corresponding cache resource server.
In a specific implementation, when receiving an access request from the client 33, the load balancer 31 may redirect the received access request of the client 33 to the corresponding cache resource server 32 by means of HTTP redirection. Because the cache resource server 32 stores the resource file corresponding to the access request of the client 33, after the load balancer 31 performs the redirection the client 33 can connect directly to the corresponding cache resource server 32 and obtain the response content provided by that cache resource server 32.
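A minimal sketch of such HTTP redirection is given below for illustration; the host names, listening port and the server-selection helper are assumptions, not part of the original disclosure:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def pick_cache_server(path: str) -> str:
    # Placeholder for the scheduling logic described with Figs. 5-7
    # (hot content -> even distribution, otherwise URL hash).
    return "cache-1.example.com"

class RedirectingBalancer(BaseHTTPRequestHandler):
    def do_GET(self):
        # Instead of fetching and relaying the content, answer with a 302
        # so the client connects to the cache resource server directly.
        target = f"http://{pick_cache_server(self.path)}{self.path}"
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectingBalancer).serve_forever()
```

After the 302 response, the subsequent request and the response content flow between the client 33 and the cache resource server 32 without passing through the load balancer 31.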
Fig. 5 shows a flow chart of another content distribution method of a load balancing system in an embodiment of the present invention. The working principle of the load balancing system shown in Fig. 3 is further described in detail below with reference to the content distribution method of Fig. 5:
Step S501: receive an access request from a client.
In a specific implementation, when a user enters the address of a resource file in the CDN network into the address bar of the browser of the client, the access request of the client 33 is first received by the GSLB device of the CDN network. The GSLB device then communicates with the load balancers 31 of all load balancing systems and selects the best cache resource server 32 to respond to the client's access request. For example, when the access request of the client 33 arrives, the GSLB device sends the access request of the client 33 to the load balancer 31 of the load balancing system shown in Fig. 3 according to a corresponding GSLB scheduling method.
Step S502: judge whether the cache resource file corresponding to the client's access request is hot content.
In a specific implementation, after receiving the content access request of the client 33, the load balancer 31 may first perform processing such as anti-leech checking and URI conversion on the received client access request, and then judge whether the cache resource file corresponding to the access request of the client 33 is hot content.
In a specific implementation, to further balance the load of the load balancing system, the resource files cached on the cache resource servers 32 may be classified as hot content or non-hot content; for example, which resource files are hot content and which are non-hot content may be determined according to the number of times a cache resource file is accessed within a preset time period. Specifically, referring to Fig. 6, the method of setting hot content may comprise:
Step S601: collect access requests from clients within a preset time period.
In a specific implementation, because the received access requests of clients 33 are all redirected by the load balancer 31 to different cache resource servers 32, and the access request of a client 33 contains the URL of the corresponding cache resource file, the load balancer 31 can obtain the URLs of the cache resource files corresponding to the access requests of clients 33 by parsing and counting the access requests of clients 33 received within the preset time period.
In a specific implementation, the preset time period may be set according to actual needs; for example, it may be set to 1 day or 3 days.
Step S602: judge whether the number of occurrences of the URL of a same cache resource file in the collected access requests is greater than a preset threshold.
In a specific implementation, the preset threshold may be set according to actual needs; for example, it may be set to 3 or 9.
In a specific implementation, when the judgment result is yes, steps S604 to S605 may be performed; when the judgment result is no, step S603 may be performed.
Step S603: set the cache resource file as non-hot content.
In a specific implementation, when the number of occurrences of the URL of a same cache resource file in the client 33 access requests collected within the preset time period is not greater than the preset threshold, it indicates that the resource file is not content frequently accessed by clients, so the cache resource file may be set as non-hot content.
Step S604: set the cache resource file as hot content.
In a specific implementation, when the number of occurrences of the URL of a same cache resource file in the client 33 access requests collected within the preset time period is greater than the preset threshold, it indicates that the cache resource file corresponding to that URL is content frequently accessed by clients, so the cache resource file corresponding to that URL may be set as hot content.
Step S605: instruct the origin site to distribute the hot content to each cache resource server respectively.
In a specific implementation, when hot content is stored on only one cache resource server 32, clients 33 will access that hot content frequently, so the cache resource server 32 storing the hot content will bear too much load and is prone to accidents such as going down.
To further balance the load of the load balancing system, the origin site may be instructed to distribute the hot content to each cache resource server 32 in the load balancing system. In this way, because every cache resource server 32 stores the hot content, every cache resource server 32 can handle requests from clients 33 for the hot content, so the hot-content access load originally borne by a single cache resource server 32 is shared evenly among the cache resource servers 32 in the load balancing system, thereby improving the response speed of the load balancing system.
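A minimal sketch of this hot-content classification and distribution is given below for illustration; the request log format, the window length and the threshold value are assumptions for the example and are not limited thereto:

```python
from collections import Counter

PRESET_WINDOW_DAYS = 1   # e.g. collect requests for 1 day (step S601)
HOT_THRESHOLD = 3        # e.g. preset threshold of 3 occurrences (step S602)

def classify_urls(request_urls):
    """Split the URLs seen in the collected requests into hot and non-hot sets."""
    counts = Counter(request_urls)
    hot = {url for url, n in counts.items() if n > HOT_THRESHOLD}  # step S604
    non_hot = set(counts) - hot                                    # step S603
    return hot, non_hot

def push_hot_content_to_all_caches(hot_urls, cache_servers, origin):
    # Step S605: instruct the origin site to distribute each hot file to every cache server.
    for url in hot_urls:
        for server in cache_servers:
            print(f"origin {origin} -> push {url} to {server}")

requests_seen = ["/v/a.mp4", "/v/a.mp4", "/v/a.mp4", "/v/a.mp4", "/img/b.png"]
hot, non_hot = classify_urls(requests_seen)
push_hot_content_to_all_caches(hot, ["cache-1", "cache-2"], origin="origin.example.com")
```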
Referring again to Fig. 5, in a specific implementation, when the judgment result of step S502 is yes, step S503 may be performed; otherwise step S504 may be performed.
Step S503: redirect the received client access requests evenly to the corresponding cache resource servers.
In a specific implementation, because every cache resource server 32 in the load balancing system stores the cache resource files set as hot content, the access requests of clients 33 may be distributed evenly across the cache resource servers 32 in order to further balance the load of the load balancing system. For example, the load balancer 31 may redirect requests for hot content to the corresponding cache resource servers 32 in the order 32(1) to 32(n), where n is the number of cache resource servers 32 in the load balancing system that handle the hot-content access requests of clients 33.
It should be pointed out here that steps S501 to S503 may be performed when it is not necessary to schedule the cache resource server 32 responding to the access request of the client 33 by proximity.
When the cache resource server 32 responding to the access request of the client 33 is scheduled by proximity, the load balancer 31 selects the cache resource server 32 closest to the client 33 according to the IP address of the client 33 and sends the IP address of the selected cache resource server 32 to the client 33; the client 33 can then connect directly to that cache resource server 32 according to the IP address returned by the load balancer 31, and obtain the corresponding content provided by that cache resource server 32.
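For illustration (the server addresses and the client-to-region lookup are assumptions), the even distribution of hot-content requests and a proximity-based selection might look like the following sketch:

```python
from itertools import cycle

CACHE_SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
_round_robin = cycle(CACHE_SERVERS)

def pick_for_hot_content() -> str:
    # Step S503: hot content exists on every cache server, so rotate through them evenly.
    return next(_round_robin)

def pick_by_proximity(client_ip: str) -> str:
    # Hypothetical proximity scheduling: map the client's IP prefix to the
    # cache server assumed to be nearest to that network.
    region_map = {"192.168.1.": "10.0.0.1", "192.168.2.": "10.0.0.2"}
    for prefix, server in region_map.items():
        if client_ip.startswith(prefix):
            return server
    return CACHE_SERVERS[0]  # fall back to a default server

print(pick_for_hot_content())            # rotates 10.0.0.1, 10.0.0.2, 10.0.0.3, ...
print(pick_by_proximity("192.168.2.7"))  # 10.0.0.2
```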
Step S504: redirect the received client access request to the corresponding cache resource server.
In a specific implementation, because the content of the origin site is mapped by a hash function onto the cache resource servers 32 of the individual nodes for storage, the client requests handled by each cache resource server 32 are determined. When receiving an access request from the client 33, the load balancer 31 can determine, from the stored mapping relations between the URLs of cache resource files and the cache resource servers 32, the cache resource server 32 on which the cache resource file corresponding to the access request of the client 33 is stored, and can therefore redirect the received client access request to the determined corresponding cache resource server 32.
Specifically, referring to Fig. 7, when the cache resource file corresponding to the access request of the client 33 is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server may comprise:
Step S701: obtain the URL of the cache resource file in the client's access request.
In a specific implementation, the mapping and storage relation between cache resource files and cache resource servers 32 can be determined from the mapping relation between the URLs of the cache resource files and the cache resource servers 32. Therefore, to determine the cache resource server 32 corresponding to the client's access request, the URL of the cache resource file corresponding to the client's access request may be obtained first.
Step S702: calculate the hash value of the obtained URL of the cache resource file using a hash function.
In a specific implementation, the mapping relation between the URL of a cache resource file and a cache resource server 32 may be determined using a hash function. To determine the cache resource server 32 storing the resource file corresponding to the client's access request, the hash value of the URL of the cache resource file may be calculated using that hash function.
Step S703: redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file.
In a specific implementation, the cache resource server 32 storing the cache resource file can be determined from the calculated hash value of the URL of the cache resource file, so the corresponding client access request can be redirected to the determined corresponding cache resource server 32.
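Putting steps S701 to S703 together, a minimal sketch may be as follows; the hash function, server list and request format are assumptions for the example and are not limited thereto:

```python
import hashlib

CACHE_SERVERS = ["cache-1.example.com", "cache-2.example.com", "cache-3.example.com"]

def url_of_request(request_line: str) -> str:
    # Step S701: obtain the URL of the cache resource file from the client's request.
    return request_line.split()[1]          # e.g. "GET /video/a.mp4 HTTP/1.1"

def hash_of_url(url: str) -> int:
    # Step S702: compute the hash value of the URL with a fixed hash function.
    return int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16)

def redirect_target(request_line: str) -> str:
    # Step S703: redirect to the cache server corresponding to the URL's hash value.
    path = url_of_request(request_line)
    server = CACHE_SERVERS[hash_of_url(path) % len(CACHE_SERVERS)]
    return f"http://{server}{path}"

print(redirect_target("GET /video/a.mp4 HTTP/1.1"))
```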
In a specific implementation, in order to collect the communication status of the cache resource server of each node, ensure that client access requests are not distributed to a failed cache resource server node, and allow the client's access request to be responded to quickly and in time, the content distribution method of the load balancing system shown in Fig. 5 may further comprise:
Step S505: detect whether the corresponding cache resource server fails.
In a specific implementation, the operating status of the cache resource server 32 of each node in the load balancing system may be collected periodically, so as to determine whether the corresponding cache resource server 32 fails.
In a specific implementation, when the detection result is no, step S506 may be performed; when the detection result is yes, step S507 may be performed.
Step S506: redirect the received client access request to the corresponding cache resource server.
In a specific implementation, when it is determined by detection that the corresponding cache resource server 32 operates normally and has not failed, the received client access request may be redirected to the corresponding cache resource server 32.
Step S507: redirect the received client access request to another cache resource server.
In a specific implementation, the same resource file of the origin site may be cached on different cache resource servers 32 in the load balancing system. Therefore, when it is determined that the corresponding cache resource server 32 fails, the received access request of the client 33 may be redirected to another cache resource server 32 that stores the resource file, so as to ensure that the access request of the client 33 is responded to quickly and in time.
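A minimal sketch of the health check and fallback is given below for illustration; the health-check URL, timeout and replica list are assumptions, not part of the original disclosure:

```python
import urllib.request

# Hypothetical replica map: for each primary cache server, the other servers
# that also store the same resource files (fallback targets of step S507).
REPLICAS = {
    "cache-1.example.com": ["cache-2.example.com", "cache-3.example.com"],
}

def is_healthy(server: str, timeout: float = 1.0) -> bool:
    # Step S505: periodically probe the cache server; treat any error as a failure.
    try:
        with urllib.request.urlopen(f"http://{server}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def choose_server(primary: str) -> str:
    # Steps S506/S507: use the primary server if it is up, otherwise a healthy replica.
    if is_healthy(primary):
        return primary
    for backup in REPLICAS.get(primary, []):
        if is_healthy(backup):
            return backup
    raise RuntimeError("no healthy cache resource server available")
```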
Fig. 8 shows a schematic structural diagram of a content distribution device of a load balancing system in an embodiment of the present invention. The content distribution device 800 of the load balancing system shown in Fig. 8 may comprise a receiving unit 801 and a first redirecting unit 802, wherein:
the receiving unit 801 is adapted to receive an access request from a client;
the first redirecting unit 802 is adapted to redirect the client access request received by the receiving unit 801 to the corresponding cache resource server, wherein the cache resource server stores the cache resource file corresponding to the client's access request.
Fig. 9 shows a schematic structural diagram of the first redirecting unit in an embodiment of the present invention. The first redirecting unit 900 shown in Fig. 9 may comprise a judging subunit 901, a first redirecting subunit 902 and a second redirecting subunit 903, wherein:
the judging subunit 901 is adapted to judge whether the cache resource file corresponding to the client's access request is hot content;
the first redirecting subunit 902 is adapted to redirect the received client access requests evenly to the corresponding cache resource servers when the judging subunit 901 determines that the cache resource file corresponding to the client's access request is hot content;
the second redirecting subunit 903 is adapted to redirect the received client access request to the corresponding cache resource server when the judging subunit 901 determines that the cache resource file corresponding to the client's access request is not hot content.
Fig. 10 shows a schematic structural diagram of a second redirecting subunit in an embodiment of the present invention. The second redirecting subunit 100 shown in Fig. 10 may comprise an obtaining module 101, a calculating module 102 and a redirecting module 103, wherein:
the obtaining module 101 is adapted to obtain the URL of the cache resource file in the client's access request;
the calculating module 102 is adapted to calculate, using a hash function, the hash value of the URL of the cache resource file obtained by the obtaining module 101;
the redirecting module 103 is adapted to redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file calculated by the calculating module 102, wherein the cache resource file is stored on the cache resource server corresponding to the hash value of that URL.
In a specific implementation, the content distribution device 800 of the load balancing system shown in Fig. 8 may further comprise a collecting unit 803, a counting unit 804, a judging unit 805, a setting unit 806 and an instructing unit 807 connected in sequence, wherein:
the collecting unit 803 is adapted to collect access requests from clients within a preset time period, each access request comprising the URL of the cache resource file corresponding to that access request;
the counting unit 804 is adapted to count the number of occurrences of the URL of a same cache resource file in the client access requests collected by the collecting unit 803 within the preset time period;
the judging unit 805 is adapted to judge whether the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period counted by the counting unit 804 is greater than a preset threshold;
the setting unit 806 is adapted to set the cache resource file as hot content when the judging unit 805 determines that the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period is greater than the preset threshold;
the instructing unit 807 is adapted to instruct the origin site to distribute the hot content set by the setting unit 806 to each cache resource server respectively.
In a specific implementation, the content distribution device of the load balancing system in the embodiment of the present invention may further comprise a detecting unit 808 and a second redirecting unit 809, wherein:
the detecting unit 808 is adapted to detect whether the corresponding cache resource server fails;
the second redirecting unit 809 is adapted to redirect the received client access request to another cache resource server when the detecting unit 808 detects that the corresponding cache resource server fails.
An embodiment of the present invention also provides a load balancer, which may comprise the content distribution device of the above embodiments; details are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments can be completed by hardware instructed by a program, and the program may be stored in a computer-readable storage medium, which may include a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The method and system of the embodiments of the present invention have been described in detail above, but the present invention is not limited thereto. Any person skilled in the art can make various changes or modifications without departing from the spirit and scope of the present invention, so the protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (12)

1. A content distribution method for a load balancing system, characterized by comprising:
receiving an access request from a client;
redirecting the received client access request to the corresponding cache resource server, wherein the cache resource server stores the cache resource file corresponding to the client's access request.
2. The content distribution method for a load balancing system according to claim 1, characterized by further comprising:
collecting access requests from clients within a preset time period, each access request comprising the URL of the cache resource file corresponding to that access request;
when the number of occurrences of the URL of a same cache resource file in the client access requests collected within the preset time period is greater than a preset threshold, setting the cache resource file as hot content;
instructing the origin site to distribute the hot content to each cache resource server respectively.
3. The content distribution method for a load balancing system according to claim 2, characterized in that redirecting the received client access request to the corresponding cache resource server comprises:
judging whether the cache resource file corresponding to the client's access request is hot content;
when the cache resource file corresponding to the client's access request is judged to be hot content, redirecting the received client access requests evenly to the corresponding cache resource servers;
when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server.
4. The content distribution method for a load balancing system according to claim 3, characterized in that, when the cache resource file corresponding to the client's access request is judged not to be hot content, redirecting the received client access request to the corresponding cache resource server comprises:
obtaining the URL of the cache resource file in the client's access request;
calculating the hash value of the obtained URL of the cache resource file using a hash function;
redirecting the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, wherein the cache resource file is stored on the cache resource server corresponding to the hash value of that URL.
5. The content distribution method for a load balancing system according to claim 1, characterized by further comprising: when it is detected that the corresponding cache resource server fails, redirecting the received client access request to another cache resource server.
6. A content distribution device for a load balancing system, characterized by comprising:
a receiving unit, adapted to receive an access request from a client;
a first redirecting unit, adapted to redirect the client access request received by the receiving unit to the corresponding cache resource server, wherein the cache resource server stores the cache resource file corresponding to the client's access request.
7. The content distribution device for a load balancing system according to claim 6, characterized by further comprising:
a collecting unit, adapted to collect access requests from clients within a preset time period, each access request comprising the URL of the cache resource file corresponding to that access request;
a counting unit, adapted to count the number of occurrences of the URL of a same cache resource file in the client access requests collected by the collecting unit within the preset time period;
a judging unit, adapted to judge whether the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period counted by the counting unit is greater than a preset threshold;
a setting unit, adapted to set the cache resource file as hot content when the judging unit determines that the number of occurrences of the URL of the same cache resource file in the client access requests within the preset time period is greater than the preset threshold;
an instructing unit, adapted to instruct the origin site to distribute the hot content set by the setting unit to each cache resource server respectively.
8. The content distribution device for a load balancing system according to claim 7, characterized in that the first redirecting unit comprises:
a judging subunit, adapted to judge whether the cache resource file corresponding to the client's access request is hot content;
a first redirecting subunit, adapted to redirect the received client access requests evenly to the corresponding cache resource servers when the judging subunit determines that the cache resource file corresponding to the client's access request is hot content;
a second redirecting subunit, adapted to redirect the received client access request to the corresponding cache resource server when the judging subunit determines that the cache resource file corresponding to the client's access request is not hot content.
9. The content distribution device for a load balancing system according to claim 8, characterized in that the second redirecting subunit comprises:
an obtaining module, adapted to obtain the URL of the cache resource file in the client's access request;
a calculating module, adapted to calculate, using a hash function, the hash value of the URL of the cache resource file obtained by the obtaining module;
a redirecting module, adapted to redirect the received client access request to the cache resource server corresponding to the hash value of the URL of the cache resource file, wherein the cache resource file is stored on the cache resource server corresponding to the hash value of that URL.
10. The content distribution device for a load balancing system according to claim 6, characterized by further comprising:
a detecting unit, adapted to detect whether the corresponding cache resource server fails;
a second redirecting unit, adapted to redirect the received client access request to another cache resource server when the detecting unit detects that the corresponding cache resource server fails.
11. A load balancer, characterized by comprising the content distribution device according to any one of claims 6 to 10.
12. A load balancing system, characterized by comprising: two or more cache resource servers and the load balancer according to claim 11, wherein the load balancer is connected to each of the cache resource servers.
CN201410401428.7A 2014-08-14 2014-08-14 Load balancing system and its content distribution method and device, load balancer Active CN104202362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410401428.7A CN104202362B (en) 2014-08-14 2014-08-14 Load balancing system and its content distribution method and device, load balancer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410401428.7A CN104202362B (en) 2014-08-14 2014-08-14 Load balancing system and its content distribution method and device, load balancer

Publications (2)

Publication Number Publication Date
CN104202362A true CN104202362A (en) 2014-12-10
CN104202362B CN104202362B (en) 2017-11-03

Family

ID=52087587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410401428.7A Active CN104202362B (en) 2014-08-14 2014-08-14 Load balancing system and its content distribution method and device, load balancer

Country Status (1)

Country Link
CN (1) CN104202362B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107211189A (en) * 2014-12-30 2017-09-26 上海诺基亚贝尔股份有限公司 A kind of method and apparatus sent for video
CN107277093A (en) * 2016-04-08 2017-10-20 北京优朋普乐科技有限公司 Content distributing network and its load-balancing method
CN107517241A (en) * 2016-06-16 2017-12-26 中兴通讯股份有限公司 Request scheduling method and device
CN107689930A (en) * 2017-09-08 2018-02-13 桂林加宏汽车修理有限公司 A kind of resource regulating method and system
CN107707597A (en) * 2017-04-26 2018-02-16 贵州白山云科技有限公司 One kind burst focus accesses equalization processing method and device
WO2018090315A1 (en) * 2016-11-18 2018-05-24 华为技术有限公司 Data request processing method and cache system
CN108667692A (en) * 2018-07-06 2018-10-16 厦门网宿有限公司 A kind of performance test methods and system of distributed caching equipment load balance
CN109067898A (en) * 2018-08-24 2018-12-21 山东浪潮商用系统有限公司 A method of being distributed by file hash, which reduces content distributing network fringe node, returns source rate
CN109819039A (en) * 2019-01-31 2019-05-28 网宿科技股份有限公司 A kind of file acquisition method, file memory method, server and storage medium
CN110336848A (en) * 2019-04-23 2019-10-15 网宿科技股份有限公司 A kind of dispatching method and scheduling system, equipment of access request
CN111314414A (en) * 2019-12-17 2020-06-19 聚好看科技股份有限公司 Data transmission method, device and system
CN111464649A (en) * 2017-04-19 2020-07-28 贵州白山云科技股份有限公司 Access request source returning method and device
WO2021135835A1 (en) * 2019-12-31 2021-07-08 北京金山云网络技术有限公司 Resource acquisition method and apparatus, and node device in cdn network
CN115150475A (en) * 2021-03-31 2022-10-04 贵州白山云科技股份有限公司 Scheduling method, device, medium and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070050761A1 (en) * 2005-08-30 2007-03-01 Microsoft Corporation Distributed caching of files in a network
CN101222424A (en) * 2007-12-24 2008-07-16 中国电信股份有限公司 Content distribution network and scheduling method based on content in the network
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system
CN101848137A (en) * 2009-03-26 2010-09-29 北京快网科技有限公司 Load balancing method and system applied to three-layer network
CN102118433A (en) * 2010-12-27 2011-07-06 网宿科技股份有限公司 Multiple-tier distributed cluster system
CN103281367A (en) * 2013-05-22 2013-09-04 北京蓝汛通信技术有限责任公司 Load balance method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070050761A1 (en) * 2005-08-30 2007-03-01 Microsoft Corporation Distributed caching of files in a network
CN101222424A (en) * 2007-12-24 2008-07-16 中国电信股份有限公司 Content distribution network and scheduling method based on content in the network
CN101848137A (en) * 2009-03-26 2010-09-29 北京快网科技有限公司 Load balancing method and system applied to three-layer network
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system
CN102118433A (en) * 2010-12-27 2011-07-06 网宿科技股份有限公司 Multiple-tier distributed cluster system
CN103281367A (en) * 2013-05-22 2013-09-04 北京蓝汛通信技术有限责任公司 Load balance method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐卫东: "基于IPV6的内容分发网络系统负载均衡技术研究与实现", 《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》 *
梅洪舟: "P2P流媒体内容分发系统负载均衡策略的研究与实现", 《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107211189A (en) * 2014-12-30 2017-09-26 上海诺基亚贝尔股份有限公司 A kind of method and apparatus sent for video
CN107277093A (en) * 2016-04-08 2017-10-20 北京优朋普乐科技有限公司 Content distributing network and its load-balancing method
CN107517241A (en) * 2016-06-16 2017-12-26 中兴通讯股份有限公司 Request scheduling method and device
WO2018090315A1 (en) * 2016-11-18 2018-05-24 华为技术有限公司 Data request processing method and cache system
CN111464649A (en) * 2017-04-19 2020-07-28 贵州白山云科技股份有限公司 Access request source returning method and device
CN111464649B (en) * 2017-04-19 2022-10-21 贵州白山云科技股份有限公司 Access request source returning method and device
CN107707597A (en) * 2017-04-26 2018-02-16 贵州白山云科技有限公司 One kind burst focus accesses equalization processing method and device
WO2018196775A1 (en) * 2017-04-26 2018-11-01 贵州白山云科技有限公司 Method and apparatus for balanced processing of sudden hot spot access, medium, and device
CN107689930A (en) * 2017-09-08 2018-02-13 桂林加宏汽车修理有限公司 A kind of resource regulating method and system
CN108667692A (en) * 2018-07-06 2018-10-16 厦门网宿有限公司 A kind of performance test methods and system of distributed caching equipment load balance
CN108667692B (en) * 2018-07-06 2021-01-29 厦门网宿有限公司 Performance test method and system for load balance of distributed cache equipment
CN109067898A (en) * 2018-08-24 2018-12-21 山东浪潮商用系统有限公司 A method of being distributed by file hash, which reduces content distributing network fringe node, returns source rate
CN109819039A (en) * 2019-01-31 2019-05-28 网宿科技股份有限公司 A kind of file acquisition method, file memory method, server and storage medium
CN109819039B (en) * 2019-01-31 2022-04-19 网宿科技股份有限公司 File acquisition method, file storage method, server and storage medium
CN110336848A (en) * 2019-04-23 2019-10-15 网宿科技股份有限公司 A kind of dispatching method and scheduling system, equipment of access request
CN111314414B (en) * 2019-12-17 2021-09-28 聚好看科技股份有限公司 Data transmission method, device and system
CN111314414A (en) * 2019-12-17 2020-06-19 聚好看科技股份有限公司 Data transmission method, device and system
CN113132443A (en) * 2019-12-31 2021-07-16 北京金山云网络技术有限公司 Resource acquisition method and device and node equipment in CDN (content delivery network)
WO2021135835A1 (en) * 2019-12-31 2021-07-08 北京金山云网络技术有限公司 Resource acquisition method and apparatus, and node device in cdn network
CN113132443B (en) * 2019-12-31 2022-06-07 北京金山云网络技术有限公司 Resource acquisition method and device and node equipment in CDN (content delivery network)
CN115150475A (en) * 2021-03-31 2022-10-04 贵州白山云科技股份有限公司 Scheduling method, device, medium and equipment
WO2022206479A1 (en) * 2021-03-31 2022-10-06 贵州白山云科技股份有限公司 Scheduling method and apparatus, medium, and device

Also Published As

Publication number Publication date
CN104202362B (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN104202362A (en) Load balance system and content distribution method and device thereof, and load balancer
EP3211857B1 (en) Http scheduling system and method of content delivery network
CN102882939B (en) Load balancing method, load balancing equipment and extensive domain acceleration access system
CN102118433A (en) Multiple-tier distributed cluster system
US20170171344A1 (en) Scheduling method and server for content delivery network service node
CN108173937A (en) Access control method and device
CN103227754A (en) Dynamic load balancing method of high-availability cluster system, and node equipment
CN104580393A (en) Method and device for expanding server cluster system and server cluster system
US11372937B1 (en) Throttling client requests for web scraping
CN105208133A (en) Server, load balancer as well as server load balancing method and system
CN102215247B (en) Network proximity load balancing method and device
WO2015069748A2 (en) Content node selection using network performance profiles
JP5957965B2 (en) Virtualization system, load balancing apparatus, load balancing method, and load balancing program
CN108111567A (en) Realize the uniform method and system of server load
US20140068052A1 (en) Advanced notification of workload
KR101131787B1 (en) Method for updating data stored in cache server, cache server and content delivery system thereof
CN104270371A (en) CDN cache server selecting method based on fuzzy logic
Kontogiannis et al. ALBL: an adaptive load balancing algorithm for distributed web systems
CN105025042B (en) A kind of method and system of determining data information, proxy server
CN103746926A (en) Local area network accelerating device and local area network accelerating system
EP4227828A1 (en) Web scraping through use of proxies, and applications thereof
US20230018983A1 (en) Traffic counting for proxy web scraping
CN102055805A (en) Device and method for point-to-point (P2P) downloading based on internetwork protocol standards
CN111131083B (en) Method, device and equipment for data transmission between nodes and computer readable storage medium
US20150012663A1 (en) Increasing a data transfer rate

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant