CN107517241A - Request scheduling method and device - Google Patents

Request scheduling method and device

Info

Publication number
CN107517241A
CN107517241A (application CN201610439369.1A)
Authority
CN
China
Prior art keywords
url request
caching server
request
content
url
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610439369.1A
Other languages
Chinese (zh)
Inventor
程智伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201610511615.XA priority Critical patent/CN107517243A/en
Priority to CN201610439369.1A priority patent/CN107517241A/en
Publication of CN107517241A publication Critical patent/CN107517241A/en
Pending legal-status Critical Current


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 — Network services
    • H04L 67/56 — Provisioning of proxy services
    • H04L 67/568 — Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/60 — Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 — Routing a service request depending on the request content or context

Abstract

The invention provides a request scheduling method and device. The method includes: receiving a uniform resource locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judged result is that the requested content is hot content, dispatching the URL request to a second caching server other than the first caching server to which the URL request was previously allocated. The invention solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, thereby reducing the load on any one cache device.

Description

Request scheduling method and device
Technical field
The present invention relates to the communications field, and in particular to a request scheduling method and device.
Background technology
As content delivery networks (Content Delivery Network, CDN) are widely used across industries, the CDN architecture has become increasingly familiar. The simplest CDN consists of a DNS responsible for global load balancing and one cache server (cache) per node; such a network can already operate.
With the spread of the Internet and the growing number of smartphone users, a single cache on a node can no longer carry the load; multiple caches must work simultaneously and share the load, which in turn requires a server load balancing (Server Load Balancing, SLB) device to coordinate them.
Existing schemes have two main problems: 1) requests for the same file content, whether or not the content is hot, are always dispatched to the same cache device, greatly increasing the load on that device so that high-concurrency scenarios cannot be supported; 2) if a cache fails, requests for the content stored on it must go back to the origin, greatly reducing service capacity.
Reducing the load on a single cache device for hot content, making multiple devices cooperate, improving service capacity and user experience, and accelerating the commercial adoption of network caching are therefore of real research significance. The prior art still has room for improvement and development.
No effective solution has yet been proposed in the related art for the problem that dispatching the same URL request always to the same cache device leaves that single device heavily loaded.
Summary of the invention
The embodiments of the present invention provide a request scheduling method and device, at least to solve the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded.
According to one embodiment of the present invention, a request scheduling method is provided, comprising: receiving a uniform resource locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judged result is that the requested content is hot content, dispatching the URL request to a second caching server other than the first caching server to which the URL request was previously allocated.
Optionally, before dispatching the URL request to the second caching server other than the first caching server to which it was previously allocated, the method further comprises: recording each URL request sent by the terminal, together with the Internet Protocol (IP) address of the caching server allocated to the URL request when it was first received.
Optionally, dispatching the URL request to the second caching server comprises: judging whether the URL request is being sent for the first time; and, when the judged result is that it is not, dispatching the URL request, according to the IP address of the caching server to which it was previously allocated, to a caching server other than the one at that IP address.
Optionally, when the content requested by the URL request is judged to be hot content, the method further comprises: issuing content replication information to the caching server that needs to replicate the hot content, so that the caching server receiving the content replication information replicates the hot content; wherein the content replication information comprises at least one of: the URL of the hot content, and the network protocol (IP) address of the caching server that has already cached the hot content.
Optionally, the method further comprises: when the judged result is that the content requested by the URL request is not hot content, performing a hash calculation on the unique identifier (ID) of the requested content to determine the caching server that will serve the URL request.
According to another embodiment of the present invention, a request scheduling device is provided, comprising: a receiving module for receiving the uniform resource locator (URL) request sent by a terminal; a judging module for judging whether the content requested by the URL request is hot content; and a scheduling module for dispatching the URL request, when the judged result is that the requested content is hot content, to a second caching server other than the first caching server to which the URL request was previously allocated.
Optionally, the device further comprises: a recording module for recording, before the URL request is dispatched to the second caching server other than the first caching server to which it was previously allocated, each URL request sent by the terminal together with the network protocol (IP) address of the caching server first allocated to the URL request.
Optionally, the scheduling module comprises: a judging unit for judging whether the URL request is being sent for the first time; and a scheduling unit for dispatching the URL request, when the judged result is that it is not being sent for the first time, to a caching server other than the one at the IP address to which the URL request was previously allocated.
Optionally, the device further comprises: a processing module for issuing, when the content requested by the URL request is judged to be hot content, content replication information to the caching server that needs to replicate the hot content, so that the caching server receiving the content replication information replicates the hot content; wherein the content replication information comprises at least one of: the URL of the hot content, and the network protocol (IP) address of the caching server that has already cached the hot content.
Optionally, the device further comprises: a computing module for performing, when the judged result is that the content requested by the URL request is not hot content, a hash calculation on the unique identifier (ID) of the requested content to determine the caching server that will serve the URL request.
According to still another embodiment of the present invention, a storage medium is further provided. The storage medium is configured to store program code for performing the following steps: receiving a uniform resource locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judged result is that the requested content is hot content, dispatching the URL request to a second caching server other than the first caching server to which the URL request was previously allocated.
Through the present invention, a URL request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when it is, the URL request is dispatched to a second caching server other than the first caching server to which it was previously allocated. This solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, and achieves the effect of reducing the load on any one cache device.
Brief description of the drawings
The accompanying drawings described herein provide a further understanding of the present invention and form a part of this application; the schematic embodiments and their description explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a flow chart of a request scheduling method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the server load balancing (SLB) scheduling flow according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of hot-content replication between cache devices according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the cache-device round-robin service flow according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a request scheduling device according to an embodiment of the present invention;
Fig. 6 is a structural block diagram (one) of a request scheduling device according to an embodiment of the present invention;
Fig. 7 is a structural block diagram (two) of a request scheduling device according to an embodiment of the present invention;
Fig. 8 is a structural block diagram (three) of a request scheduling device according to an embodiment of the present invention;
Fig. 9 is a structural block diagram (four) of a request scheduling device according to an embodiment of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, where they do not conflict, the embodiments in this application and the features within them may be combined with one another.
It should be noted that the terms "first", "second", and the like in the description, claims, and accompanying drawings of this specification are used to distinguish similar objects, not to describe a particular order or precedence.
Embodiment 1
A request scheduling method is provided in this embodiment. Fig. 1 is a flow chart of the request scheduling method according to an embodiment of the present invention; as shown in Fig. 1, the flow comprises the following steps:
Step S102: receiving a uniform resource locator (URL) request sent by a terminal;
Step S104: judging whether the content requested by the URL request is hot content;
Step S106: when the judged result is that the content requested by the URL request is hot content, dispatching the URL request to a second caching server other than the first caching server to which the URL request was previously allocated.
Optionally, in this embodiment, the application scenarios of the above request scheduling method include, but are not limited to, multiple cache servers (caches) working together on one node. In such a scenario, a URL (Uniform Resource Locator) request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when the judged result is that it is, the URL request is dispatched to a second caching server other than the first caching server to which it was previously allocated. That is, in this embodiment, when the content requested by a URL request is judged to be hot content, successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling. This solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, and achieves the effect of reducing the load on any one cache device.
This embodiment is described below with reference to a specific example.
This example provides a request scheduling method and system that, on the one hand, reduce the load borne by a single cache device and, on the other hand, improve service capacity and user experience. Here the caching server is exemplified by a cache device. The request scheduling method is described below with reference to the system architecture of this example.
1) Local load balancing device (SLB)
The SLB is responsible for load balancing among the caches in each node and guarantees operating efficiency within the node. It also collects information about the node and its surroundings and keeps communicating with the global load balancer to achieve load balancing of the whole system. The SLB must record every URL a user requests and, for each URL's first request, the IP address of the cache device the SLB selects by the cid-hash algorithm, which makes the subsequent content replication between cache devices straightforward.
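The bookkeeping described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the patent does not define the cid-hash algorithm, so an MD5-modulo mapping stands in for it, and the cache IP addresses are invented for the example.

```python
import hashlib

CACHES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # assumed cache device IPs

first_assignment = {}  # url -> IP of the cache chosen on its first request


def cid_hash(content_id: str) -> str:
    """One possible cid-hash: map a content ID to a cache IP."""
    digest = hashlib.md5(content_id.encode()).hexdigest()
    return CACHES[int(digest, 16) % len(CACHES)]


def record_request(url: str) -> str:
    """Record the URL and, on its first request only, the cache cid-hash selects."""
    if url not in first_assignment:
        first_assignment[url] = cid_hash(url)
    return first_assignment[url]
```

Because the first assignment is remembered, later replication logic knows which device already holds each URL's content.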
2) Hot-content replication module
Through an agreed content replication interface, the SLB pushes the information to be replicated, in a JSON body, to the cache device that needs to perform the replication. The JSON message body carries the URL of the hot content and the IP of the cache device that has already cached it. After receiving the message, that cache initiates an HTTP request to the device holding the content, according to the information carried in the JSON body, and thus replicates the hot content.
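A minimal sketch of the SLB side of this exchange is given below. The patent only says the JSON body carries the hot content's URL and the IP of the cache already holding it; the field names ("url", "ip", "port") and the default service port are assumptions taken from the worked example later in the description.

```python
import json


def build_replication_message(hot_url: str, source_ip: str,
                              source_port: int = 6610) -> str:
    """Build the JSON body the SLB pushes to the cache that must replicate.

    hot_url:     URL of the hot content to copy
    source_ip:   IP of the cache device that already caches the content
    source_port: service port of the source cache (assumed field)
    """
    return json.dumps({"url": hot_url, "ip": source_ip, "port": source_port})
```

The SLB would then POST this body to the replicating cache's management port (6620 in the example), which is omitted here.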
3) Polling service module
The polling service module judges whether a subsequent user request is for content in the TOP N hot set. If it is, the SLB dispatches the requests in turn, by the new polling algorithm, to the two or more caches that have cached the content, so that the hot content is served alternately by two or more devices, which effectively supports high-concurrency scenarios. If it is not, the request is dispatched, as before, by the cid-hash algorithm to a single device.
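The dispatch decision just described can be sketched as below, under stated assumptions: the patent does not specify the polling algorithm, so a simple per-URL round-robin over the replica list stands in for it, and the cache names are those of the later example.

```python
import hashlib
import itertools

CACHES = ["cache1", "cache2", "cache3"]  # assumed caches on the node

hot_replicas = {}  # url -> list of caches holding a copy (filled by replication)
_rr_state = {}     # url -> cycling iterator over its replicas


def dispatch(url: str) -> str:
    """Round-robin over replicas for hot content; cid-hash otherwise."""
    if url in hot_replicas:  # request is in the TOP-N hot set
        if url not in _rr_state:
            _rr_state[url] = itertools.cycle(hot_replicas[url])
        return next(_rr_state[url])
    # non-hot: a hash of the content ID selects a single device, as before
    digest = hashlib.md5(url.encode()).hexdigest()
    return CACHES[int(digest, 16) % len(CACHES)]
```

With `hot_replicas["u"] = ["cache1", "cache2"]`, successive calls to `dispatch("u")` alternate between the two devices, while non-hot URLs always land on the same device.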
In an optional embodiment, before the URL request is dispatched to the second caching server other than the first caching server to which it was previously allocated, the method further comprises the following step:
Step S11: recording each URL request sent by the terminal, together with the network protocol (IP) address of the caching server first allocated to the URL request.
By recording in step S11 each URL request sent by the terminal and the IP address of the caching server first allocated to it, the load balancing device (SLB) can schedule requests so that successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling. This further solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, and achieves the effect of reducing the load on any one cache device.
In an optional embodiment, dispatching the URL request to the second caching server other than the first caching server to which it was previously allocated comprises the following steps:
Step S21: judging whether the URL request is being sent for the first time;
Step S22: when the judged result is that the URL request is not being sent for the first time, dispatching the URL request, according to the IP address of the caching server to which it was previously allocated, to a caching server other than the one at that IP address.
Through steps S21 and S22, when the content requested by a URL request is judged to be hot content, successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling, which further reduces the load on any one cache device.
In an optional embodiment, when the content requested by the URL request is judged to be hot content, the request scheduling method further comprises:
Step S31: issuing content replication information to the caching server that needs to replicate the hot content, so that the caching server receiving the content replication information replicates the hot content.
It should be noted that the content replication information comprises at least one of: the URL of the hot content, and the network protocol (IP) address of the caching server that has already cached the hot content.
Through step S31, content replication information is issued to the caching server that needs to replicate the hot content, so that the receiving caching server replicates it; as a result, after the URL request is dispatched to another caching server, the corresponding resource can still be accessed, improving the user experience.
In an optional embodiment, when the judged result is that the content requested by the URL request is not hot content, a hash calculation is performed on the unique identifier (ID) of the requested content to determine the caching server that will serve the URL request.
This embodiment is illustrated below with reference to a specific example.
In the following example, the caching server is exemplified by a cache device.
As shown in Fig. 2, when a detection cycle ends, the SLB counts the total number of requests for each URL within the cycle. Suppose the detection cycle is 1 hour and, within that hour, users initiated requests for four URLs:
url1: http://down10.zol.com.cn/zoldown/WeChat_C1012@428288@.exe
url2: http://flv5.bn.netease.com/videolib3/1511/10/WhBmc5859/HD/WhBmc5859.flv
url3: http://112.84.104.39/flv.bn.netease.com/videolib3/1511/10/YHbSI1252/HD/YHbSI1252.flv?wsiphost=local
url4: http://61.160.204.74/youku/65729AB85433D8271DA3B626C4/0300010B0455B930A185DB092B13A2E90C0903-79E7-CB79-5351-D0C2783BAF7B.flv?start=0
requested 6, 5, 4 and 3 times respectively. On their first requests, the cid-hash algorithm selected cache1, cache1, cache2 and cache3 respectively. If the hot set to count is the TOP 3, then url1, url2 and url3 are hot content.
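The per-cycle counting step can be sketched directly from this example; only the request log below is invented, the counts and TOP-3 outcome are the ones stated above.

```python
from collections import Counter


def top_n_hot(request_log, n=3):
    """Count per-URL requests in one detection cycle and return the TOP-N URLs."""
    counts = Counter(request_log)
    return [url for url, _ in counts.most_common(n)]
```

Feeding it a log with 6, 5, 4 and 3 requests for url1 through url4 yields `["url1", "url2", "url3"]`, matching the example's hot set.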
As shown in Fig. 3, the device first chosen by hash for hot content url1 is cache1. If two copies are required, url1's content must also be stored on cache2 or cache3; suppose cache2 is randomly selected as the device to replicate to. Similarly, cache3 is randomly selected as the replication device for url2's content, and cache3 for url3's content.
As shown in flow (1) of Fig. 3, the SLB encapsulates the content to be replicated (url1) and the IP of the device already holding it (cache1) in a JSON body, then pushes it through the content replication interface to cache2, the device that needs to replicate; the ip2 used there is the IP of the cache2 device, and the port is the management port 6620. After cache2 receives the replication message sent by the SLB and parses the information carried in the message body, it pulls the url1 content from cache1 and caches it locally, as shown in flow (2) of Fig. 3, where ip1 is the IP of the cache1 device and the port is the service port 6610. Replication is thus complete, and url1's content is cached on both cache1 and cache2. Similarly, cache3 requests url2 from cache1 and downloads and caches it, and cache3 requests url3 from cache2 and downloads and caches it. All hot content has then been replicated between devices, and every hot item is stored on two cache devices.
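The cache-side half of this exchange, flow (2), can be sketched as follows. This is an illustration under stated assumptions: the JSON field names mirror the hypothetical SLB message above, and `http_get` is a placeholder for the HTTP pull the cache issues to the source device.

```python
import json


def handle_replication_message(body: str, http_get):
    """Cache-side handling of the SLB's replication push (a sketch).

    body:     JSON message from the SLB, carrying the hot URL and the
              source cache's IP and service port (assumed field names)
    http_get: callable (ip, port, url) -> bytes standing in for the
              HTTP request to the device that already holds the content
    """
    info = json.loads(body)
    # pull the hot content from the source cache's service port (6610 here)
    content = http_get(info["ip"], info["port"], info["url"])
    local_store = {info["url"]: content}  # cache the copy locally
    return local_store
```

After this handler runs on cache2 with cache1 as the source, url1's content exists on both devices, which is what the round-robin service in Fig. 4 relies on.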
As shown in Fig. 4, after hot-content replication is finished, the SLB keeps the MD5 values corresponding to all hot content in the detection cycle. When a subsequent user request reaches the SLB, the SLB first judges whether it is for hot content: if the content is found in the hot-content table, it is hot and must be dispatched in turn, by the new polling algorithm, to the two devices that have cached it; non-hot content is served by a single device. For example, if a new request for url1 is first dispatched to cache1, the next request for url1 will be dispatched by the SLB to cache2; the hot content url2 and url3 likewise alternate between their two devices, whereas the non-hot url4 is always served by cache3. Polling over the hot-content devices reduces the load on any single device and effectively guarantees high-concurrency scenarios.
In summary, the request scheduling method provided by the present invention greatly reduces the load a single cache device must bear, particularly in high-concurrency scenarios caused by holidays or popular films and TV series, and markedly improves the bearing capacity of cache servers. It also improves the load-sharing and acceleration capability of the caches, which is of great significance for the commercial adoption of cache-based network storage and acceleration.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though the former is in many cases the better implementation. On this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, can be embodied as a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, or the like) to perform the method described in each embodiment of the present invention.
Embodiment 2
A request scheduling device is further provided in this embodiment. The device implements the above embodiment and its preferred implementations; what has already been explained is not repeated. As used below, the term "module" can be a combination of software and/or hardware realizing a predetermined function. Although the devices described in the following embodiments are preferably realized in software, realization in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a structural block diagram of the request scheduling device according to an embodiment of the present invention; as shown in Fig. 5, the device comprises:
1) a receiving module 52 for receiving the uniform resource locator (URL) request sent by a terminal;
2) a judging module 54 for judging whether the content requested by the URL request is hot content;
3) a scheduling module 56 for dispatching the URL request, when the judged result is that the content requested by the URL request is hot content, to a second caching server other than the first caching server to which the URL request was previously allocated.
Optionally, in this embodiment, the application scenarios of the above request scheduling device include, but are not limited to, multiple cache servers (caches) working together on one node. In such a scenario, a URL (Uniform Resource Locator) request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when the judged result is that it is, the URL request is dispatched to a second caching server other than the first caching server to which it was previously allocated. That is, in this embodiment, when the content requested by a URL request is judged to be hot content, successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling, which solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, and achieves the effect of reducing the load on any one cache device.
Fig. 6 is a structural block diagram (one) of the request scheduling device according to an embodiment of the present invention; as shown in Fig. 6, in addition to all the modules in Fig. 5, the device further comprises:
1) a recording module 62 for recording, before the URL request is dispatched to the second caching server other than the first caching server to which it was previously allocated, each URL request sent by the terminal together with the network protocol (IP) address of the caching server first allocated to the URL request.
Through this optional embodiment, each URL request sent by the terminal and the IP address of the caching server first allocated to it are recorded; the load balancing device (SLB) can then schedule requests so that successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling, which solves the problem in the related art that dispatching the same URL request always to the same cache device leaves that single device heavily loaded, and achieves the effect of reducing the load on any one cache device.
Fig. 7 is a structural block diagram (two) of the request scheduling device according to an embodiment of the present invention; as shown in Fig. 7, the scheduling module 56 comprises:
1) a judging unit 72 for judging whether the URL request is being sent for the first time;
2) a scheduling unit 74 for dispatching the URL request, when the judged result is that it is not being sent for the first time, according to the IP address of the caching server to which the URL request was previously allocated, to a caching server other than the one at that IP address.
Through this optional embodiment, when the content requested by a URL request is judged to be hot content, successive instances of the same URL request are dispatched to the corresponding caching servers in turn by polling, which further reduces the load on any one cache device.
Fig. 8 is a structural block diagram (three) of the request scheduling device according to an embodiment of the present invention; as shown in Fig. 8, in addition to the modules in Fig. 5, the device further comprises:
1) a processing module 82 for issuing, when the content requested by the URL request is judged to be hot content, content replication information to the caching server that needs to replicate the hot content, so that the caching server receiving the content replication information replicates the hot content;
wherein the content replication information comprises at least one of: the URL of the hot content, and the network protocol (IP) address of the caching server that has already cached the hot content.
Through this optional embodiment, content replication information is issued to the caching server that needs to replicate the hot content, so that the receiving caching server replicates it; as a result, after the URL request is dispatched to another caching server, the corresponding resource can still be accessed, improving the user experience.
Fig. 9 is a structural block diagram (four) of the request scheduling device according to an embodiment of the present invention; as shown in Fig. 9, the device further comprises:
1) a computing module 92 for performing, when the judged result is that the content requested by the URL request is not hot content, a hash calculation on the unique identifier (ID) of the requested content to determine the caching server that will serve the URL request.
Embodiment 3
Embodiments of the invention additionally provide a kind of storage medium.Alternatively, in the present embodiment, on State storage medium and can be configured to the program code that storage is used to perform following steps:
S1, the uniform resource position mark URL request that receiving terminal is sent;
S2, judge whether the content that the URL request is asked is Hot Contents;
S3, it is in the case that the content that the URL request is asked is Hot Contents in judged result, The URL request is dispatched in addition to the first caching server once allocated before the URL request The second caching server.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
Optionally, in this embodiment, a processor executes the above steps S1, S2, and S3 according to the program code stored in the storage medium.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations; details are not repeated here.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be concentrated on a single computing device, or distributed across a network formed by multiple computing devices. Optionally, they may be implemented as program code executable by a computing device, and thus stored in a storage device and executed by a computing device. In some cases, the steps shown or described may be performed in an order different from that described herein; alternatively, they may each be fabricated as an individual integrated circuit module, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit the invention; those skilled in the art may make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A request scheduling method, characterized by comprising:
    receiving a uniform resource locator (URL) request sent by a terminal;
    determining whether the content requested by the URL request is hot content;
    when the determination result is that the content requested by the URL request is hot content, dispatching the URL request to a second caching server other than the first caching server to which the URL request was previously allocated.
  2. The method according to claim 1, characterized in that before the URL request is dispatched to the second caching server other than the first caching server to which the URL request was previously allocated, the method further comprises:
    recording each URL request sent by the terminal, and the network protocol (IP) address of the caching server first allocated to the URL request.
  3. The method according to claim 2, characterized in that dispatching the URL request to the second caching server other than the first caching server to which the URL request was previously allocated comprises:
    determining whether the URL request is sent for the first time;
    when the determination result is that the URL request is not sent for the first time, dispatching the URL request, according to the IP address of the caching server to which the URL request was previously allocated, to a caching server other than the one at that IP address.
  4. The method according to claim 1, characterized in that when it is determined that the content requested by the URL request is hot content, the method further comprises:
    delivering content replication information to the caching server that needs to perform hot content replication, so that the caching server receiving the content replication information performs hot content replication;
    wherein the content replication information includes at least one of the following: the URL of the hot content, and the network protocol (IP) address of a caching server that has cached the hot content.
  5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    when the determination result is that the content requested by the URL request is not hot content, performing a hash calculation on the unique identifier (ID) of the content requested by the URL request, so as to determine the caching server that serves the URL request.
  6. A request scheduling apparatus, characterized by comprising:
    a receiving module, configured to receive a uniform resource locator (URL) request sent by a terminal;
    a determining module, configured to determine whether the content requested by the URL request is hot content;
    a scheduling module, configured to, when the determination result is that the content requested by the URL request is hot content, dispatch the URL request to a second caching server other than the first caching server to which the URL request was previously allocated.
  7. The apparatus according to claim 6, characterized by further comprising:
    a recording module, configured to, before the URL request is dispatched to the second caching server other than the first caching server to which the URL request was previously allocated, record each URL request sent by the terminal and the network protocol (IP) address of the caching server first allocated to the URL request.
  8. The apparatus according to claim 7, characterized in that the scheduling module comprises:
    a determining unit, configured to determine whether the URL request is sent for the first time;
    a scheduling unit, configured to, when the determination result is that the URL request is not sent for the first time, dispatch the URL request, according to the IP address of the caching server to which the URL request was previously allocated, to a caching server other than the one at that IP address.
  9. The apparatus according to claim 6, characterized by further comprising:
    a processing module, configured to, when it is determined that the content requested by the URL request is hot content, deliver content replication information to the caching server that needs to perform hot content replication, so that the caching server receiving the content replication information performs hot content replication;
    wherein the content replication information includes at least one of the following: the URL of the hot content, and the network protocol (IP) address of a caching server that has cached the hot content.
  10. The apparatus according to any one of claims 6 to 9, characterized by further comprising:
    a computing module, configured to, when the determination result is that the content requested by the URL request is not hot content, perform a hash calculation on the unique identifier (ID) of the content requested by the URL request, so as to determine the caching server that serves the URL request.
CN201610439369.1A 2016-06-16 2016-06-16 Request scheduling method and device Pending CN107517241A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610511615.XA CN107517243A (en) 2016-06-16 2016-06-16 Request scheduling method and device
CN201610439369.1A CN107517241A (en) 2016-06-16 2016-06-16 Request scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610439369.1A CN107517241A (en) 2016-06-16 2016-06-16 Request scheduling method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201610511615.XA Division CN107517243A (en) 2016-06-16 2016-06-16 Request scheduling method and device

Publications (1)

Publication Number Publication Date
CN107517241A true CN107517241A (en) 2017-12-26

Family

ID=60721398

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610439369.1A Pending CN107517241A (en) 2016-06-16 2016-06-16 Request scheduling method and device
CN201610511615.XA Withdrawn CN107517243A (en) 2016-06-16 2016-06-16 Request scheduling method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201610511615.XA Withdrawn CN107517243A (en) 2016-06-16 2016-06-16 Request scheduling method and device

Country Status (1)

Country Link
CN (2) CN107517241A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151512A (en) * 2018-09-12 2019-01-04 中国联合网络通信集团有限公司 The method and device of content is obtained in CDN network
WO2019052299A1 (en) * 2017-09-15 2019-03-21 通鼎互联信息股份有限公司 Sdn switch, and application and management method for sdn switch
CN109819039A (en) * 2019-01-31 2019-05-28 网宿科技股份有限公司 A kind of file acquisition method, file memory method, server and storage medium
CN110300132A (en) * 2018-03-22 2019-10-01 贵州白山云科技股份有限公司 Server data caching method, device and system
CN112019451A (en) * 2019-05-29 2020-12-01 中国移动通信集团安徽有限公司 Bandwidth allocation method, debugging network element, local cache server and computing equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385327B (en) * 2018-12-28 2022-06-14 阿里巴巴集团控股有限公司 Data processing method and system
CN113472901B (en) * 2021-09-02 2022-01-11 深圳市信润富联数字科技有限公司 Load balancing method, device, equipment, storage medium and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668046A (en) * 2009-10-13 2010-03-10 成都市华为赛门铁克科技有限公司 Resource caching method, resource obtaining method, device and system thereof
CN103281367A (en) * 2013-05-22 2013-09-04 北京蓝汛通信技术有限责任公司 Load balance method and device
US20140115120A1 (en) * 2011-12-14 2014-04-24 Huawei Technologies Co., Ltd. Content Delivery Network CDN Routing Method, Device, and System
CN104202362A (en) * 2014-08-14 2014-12-10 上海帝联信息科技股份有限公司 Load balance system and content distribution method and device thereof, and load balancer

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668046A (en) * 2009-10-13 2010-03-10 成都市华为赛门铁克科技有限公司 Resource caching method, resource obtaining method, device and system thereof
US20140115120A1 (en) * 2011-12-14 2014-04-24 Huawei Technologies Co., Ltd. Content Delivery Network CDN Routing Method, Device, and System
CN103281367A (en) * 2013-05-22 2013-09-04 北京蓝汛通信技术有限责任公司 Load balance method and device
CN104202362A (en) * 2014-08-14 2014-12-10 上海帝联信息科技股份有限公司 Load balance system and content distribution method and device thereof, and load balancer

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019052299A1 (en) * 2017-09-15 2019-03-21 通鼎互联信息股份有限公司 Sdn switch, and application and management method for sdn switch
CN110300132A (en) * 2018-03-22 2019-10-01 贵州白山云科技股份有限公司 Server data caching method, device and system
CN111131402A (en) * 2018-03-22 2020-05-08 贵州白山云科技股份有限公司 Method, device, equipment and medium for configuring shared cache server group
CN109151512A (en) * 2018-09-12 2019-01-04 中国联合网络通信集团有限公司 The method and device of content is obtained in CDN network
CN109819039A (en) * 2019-01-31 2019-05-28 网宿科技股份有限公司 A kind of file acquisition method, file memory method, server and storage medium
CN109819039B (en) * 2019-01-31 2022-04-19 网宿科技股份有限公司 File acquisition method, file storage method, server and storage medium
CN112019451A (en) * 2019-05-29 2020-12-01 中国移动通信集团安徽有限公司 Bandwidth allocation method, debugging network element, local cache server and computing equipment
CN112019451B (en) * 2019-05-29 2023-11-21 中国移动通信集团安徽有限公司 Bandwidth allocation method, debugging network element, local cache server and computing device

Also Published As

Publication number Publication date
CN107517243A (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN107517241A (en) Request scheduling method and device
CN112512090B (en) Communication processing method and device, computer readable medium and electronic equipment
CN107395683B (en) Method for selecting return path and server
Zhang et al. Proactive workload management in hybrid cloud computing
CN101848137B (en) Load balancing method and system applied to three-layer network
CN102263828B (en) Load balanced sharing method and equipment
CN103905500B (en) A kind of method and apparatus for accessing application server
CN103457993B (en) Local cache device and the method that content caching service is provided
CN104852934A (en) Method for realizing flow distribution based on front-end scheduling, device and system thereof
CN102196060A (en) Method and system for selecting source station by Cache server
US20110040892A1 (en) Load balancing apparatus and load balancing method
CN109660578B (en) CDN back-to-source processing method, device and system
CN102739717B (en) Method for down loading, download agent server and network system
WO2008147578A1 (en) System and/or method for client- driven server load distribution
JP6485980B2 (en) Network address resolution
US8935377B2 (en) Dynamic registration of listener resources for cloud services
CN110430274A (en) A kind of document down loading method and system based on cloud storage
CN101326493A (en) Method and device for distributing load of multiprocessor server
CN101997822A (en) Streaming media content delivery method, system and equipment
CN105847853A (en) Video content distribution method and device
CN106657183A (en) Caching acceleration method and apparatus
US11575773B2 (en) Request processing in a content delivery framework
Rodrigues et al. Benchmarking wireless protocols for feasibility in supporting crowdsourced mobile computing
CN106789956A (en) A kind of P2P order methods and system based on HLS
CN106027356B (en) A kind of conversion method and device of Tunnel Identifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171226
