CN103237031B - Ordered back-to-source method and device in a content delivery network - Google Patents
Ordered back-to-source method and device in a content delivery network
- Publication number
- CN103237031B CN103237031B CN201310149174.XA CN201310149174A CN103237031B CN 103237031 B CN103237031 B CN 103237031B CN 201310149174 A CN201310149174 A CN 201310149174A CN 103237031 B CN103237031 B CN 103237031B
- Authority
- CN
- China
- Prior art keywords
- user
- source station
- server
- request
- limit value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Computer And Data Communications (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention provides an ordered back-to-source method and device in a content delivery network. The method comprises: receiving a user request; if the user request requires a back-to-source fetch, judging whether the current processing load of the source station server has reached a preset limit value; if the current processing load of the source station server has reached the preset limit value, queuing the user request to wait; and when the processing load of the source station server falls below the preset limit value, having the source station server preferentially process the earliest queued user request. The invention helps reduce the pressure on the source station server and improves the response efficiency of back-to-source user requests.
Description
Technical field
The present invention relates to back-to-source methods in content delivery networks, and in particular to an ordered back-to-source method and device in a content delivery network.
Background technology
In a content delivery network (CDN), some user requests cannot be served from the edge when they are received and must go back to the source station server to fetch the content the user needs. In the prior art, the content delivery network does no special processing for back-to-source requests; every back-to-source user request is treated equally.
Fig. 1 shows a prior-art method for processing back-to-source requests, comprising: at S11, clients send multiple user requests; at S12, an edge node server receives the user requests; at S13, it is judged whether a user request requires a back-to-source fetch; if not, the flow goes to S14 and the edge node server processes and responds to the request; if a back-to-source fetch is required, the flow advances to S15, where the source station server processes the request, hands the result to the edge node server, and the result is returned to the client.
With this traditional processing mode, when the number of user requests is large, for example when an e-commerce website runs group-buying, special-offer or flash-sale campaigns, the site's traffic can surge within a short time, so that the number of requests simultaneously requiring a back-to-source fetch exceeds what the source station server can bear, causing the source station server to respond slowly to all user requests or even to crash.
Summary of the invention
The technical problem to be solved by the present invention is to provide an ordered back-to-source method and device in a content delivery network, which helps reduce the pressure on the source station server and improves the response efficiency of back-to-source user requests.
To solve the above technical problem, the present invention provides an ordered back-to-source method in a content delivery network, comprising:
receiving a user request;
if the user request requires a back-to-source fetch, judging whether the current processing load of the source station server has reached a preset limit value;
if the current processing load of the source station server has reached the preset limit value, queuing the user request to wait;
when the processing load of the source station server falls below the preset limit value, having the source station server preferentially process the earliest queued user request.
According to one embodiment of the present invention, the method further comprises: if the user request does not require a back-to-source fetch, processing the user request by the edge node server and returning a response result.
According to one embodiment of the present invention, the method further comprises: if the current processing load of the source station server has not reached the preset limit value, accessing the source station server to obtain the required data.
According to one embodiment of the present invention, queuing the user request to wait comprises: adding the user request to a waiting queue in chronological order.
According to one embodiment of the present invention, the method further comprises: if the processing load of the source station server is below the preset limit value and no queued user request is currently waiting, processing a new user request directly without queuing.
The present invention also provides an ordered back-to-source device in a content delivery network, comprising:
a user interface unit, which receives a user request;
a judging unit, which, if the user request requires a back-to-source fetch, judges whether the current processing load of the source station server has reached a preset limit value;
a sequencing unit, which, if the current processing load of the source station server has reached the preset limit value, queues the user request to wait;
a first processing unit, which, when the processing load of the source station server falls below the preset limit value, preferentially hands the earliest queued user request to the source station server for processing.
According to one embodiment of the present invention, the device further comprises: a second processing unit, which, if the user request does not require a back-to-source fetch, hands the user request to the edge node server for processing and returns a response result.
According to one embodiment of the present invention, the device further comprises: a third processing unit, which, if the current processing load of the source station server has not reached the preset limit value, accesses the source station server to obtain the required data.
According to one embodiment of the present invention, the sequencing unit adds the user request to a waiting queue in chronological order.
According to one embodiment of the present invention, the device further comprises: a fourth processing unit, which, if the processing load of the source station server is below the preset limit value and no queued user request is currently waiting, processes a new user request directly without queuing.
Compared with the prior art, the present invention has the following advantages:
In the ordered back-to-source method and device of the embodiments of the present invention, when a user request requires a back-to-source fetch, it is first judged whether the current processing load of the source station server has reached a preset limit value. If it has, the back-to-source user requests are queued to wait; when the processing load of the source station server falls below the limit value, the earliest queued user request is processed preferentially; and if no queued user request is waiting, a new user request is processed directly without queuing. This prevents the source station server from processing too many user requests at the same time, reduces the pressure on the source station server, and improves the response efficiency of back-to-source requests.
Brief description of the drawings
Fig. 1 is a schematic flow chart of a prior-art method for processing back-to-source requests;
Fig. 2 is a schematic flow chart of the ordered back-to-source method in a content delivery network according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of the ordered back-to-source device in a content delivery network according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to specific embodiments and the accompanying drawings, which, however, should not be taken to limit the scope of the invention.
The ordered back-to-source method of this embodiment mainly comprises the following steps:
receiving a user request, for example an edge node server receiving multiple user requests from clients;
if the user request does not require a back-to-source fetch, processing it directly by the edge node server;
if the user request requires a back-to-source fetch, judging whether the current processing load of the source station server has reached a preset limit value;
if the source station server has not reached the preset limit value, processing the request by the source station server;
if the current processing load of the source station server has reached the preset limit value, queuing the user request to wait;
when the processing load of the source station server falls below the preset limit value, having the source station server preferentially process the earliest queued user request.
The back-to-source method is described in detail below with reference to Fig. 2 and an example.
At S21, clients send multiple user requests; these may be HTTP requests from multiple different clients.
At S22, the edge node server receives the requests.
At S23, it is judged whether a received user request requires a back-to-source fetch, for example whether the resource the request points to can only be obtained by accessing the source station server.
If the request does not require a back-to-source fetch, for example because the edge node server holds a corresponding cache file for the requested resource, the flow goes to S24 and the edge node server responds, for example by returning the locally cached file directly to the client.
If the request requires a back-to-source fetch, the flow advances to S25, where it is judged whether the current processing load of the source station server has reached a preset limit value. The limit value can be preset according to the actual processing capacity of the source station server, for example the number of user requests the source station server can process concurrently.
If the current processing load of the source station server has not yet reached the limit value, the flow goes to S28, where the source station server processes the request, for example by looking up the resource the request points to, and then advances to S24, where the edge node server returns the response to the client.
If the current processing load of the source station server has reached the limit value, the flow advances to S26 and the user requests are queued to wait in chronological order; for example, they can be added to a waiting queue in order of arrival, so that earlier requests sit nearer the head of the queue and later requests sit nearer the tail.
The flow then advances to S27, where it is judged whether the source station server can accept a request, that is, whether its current processing load is below the limit value. If so, the request at the head of the waiting queue is taken out, the flow advances to S28, the source station server processes the request and returns the result to the edge node server, and the edge node server in turn returns the response to the client that sent the request. If the processing load is still not below the limit value, the flow returns to S26 and the request continues to wait.
In addition, when the processing load of the source station server is below the limit value and the waiting queue is empty, a new user request can be processed directly without waiting in the queue.
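Purely as an illustrative sketch, the following Python code shows one possible way the gating and queuing flow of S23 to S28 could be written; the names ORIGIN_LIMIT, serve_from_edge, fetch_from_origin and origin_finished are assumed here for illustration, and the edge node server and source station server are stubbed out with print statements.

```python
from collections import deque

ORIGIN_LIMIT = 2     # preset limit value: concurrent back-to-source requests (assumed)
waiting = deque()    # waiting queue: earlier requests sit nearer the head
in_flight = 0        # back-to-source requests the source station server is handling

def serve_from_edge(req):
    # S24: the edge node server responds from its local cache
    print(f"{req}: served from edge cache")

def fetch_from_origin(req):
    # S28: the source station server processes the request
    global in_flight
    in_flight += 1
    print(f"{req}: forwarded to source station ({in_flight}/{ORIGIN_LIMIT} in flight)")

def origin_finished(req):
    # The source station server returned a result; the edge node server responds,
    # then the earliest waiting request is admitted if the load dropped below the limit.
    global in_flight
    in_flight -= 1
    print(f"{req}: result handed to edge node server, returned to client")
    if waiting and in_flight < ORIGIN_LIMIT:
        fetch_from_origin(waiting.popleft())

def handle(req, cached):
    if cached:                      # S23: no back-to-source fetch needed
        serve_from_edge(req)
    elif in_flight < ORIGIN_LIMIT:  # S25: load still below the preset limit
        fetch_from_origin(req)
    else:                           # S26: queue in chronological order and wait
        waiting.append(req)
        print(f"{req}: source station busy, queued at position {len(waiting)}")

if __name__ == "__main__":
    handle("req-1", cached=True)
    handle("req-2", cached=False)
    handle("req-3", cached=False)
    handle("req-4", cached=False)   # limit reached, so req-4 waits
    origin_finished("req-2")        # a slot frees up and req-4 is admitted
```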
It should be noted that in the above example the user requests are queued to wait in chronological order, but the invention is not limited to this; for example, the user requests can also be queued according to their priority, with higher-priority requests placed nearer the head of the queue so that they are processed first.
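As one possible sketch of this priority-based variant, a heap keyed on the negated priority, with arrival order as a tiebreaker, keeps higher-priority requests at the head of the waiting queue; the function and variable names below are assumptions made only for illustration.

```python
import heapq
import itertools

waiting = []                 # min-heap of (-priority, arrival_no, request)
arrival = itertools.count()  # arrival order breaks ties between equal priorities

def enqueue(request, priority):
    # Higher priority means closer to the head, hence the negated key.
    heapq.heappush(waiting, (-priority, next(arrival), request))

def next_waiting():
    # Pop the highest-priority (then earliest) waiting request, or None if empty.
    return heapq.heappop(waiting)[2] if waiting else None

enqueue("product page", priority=1)
enqueue("flash-sale checkout", priority=5)
print(next_waiting())        # -> "flash-sale checkout" is processed first
```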
Fig. 3 shows a structural block diagram of the ordered back-to-source device in a content delivery network of this embodiment, comprising: a user interface unit 31, a judging unit 32, a sequencing unit 33, a first processing unit 34, a second processing unit 35, a third processing unit 36 and a fourth processing unit 37.
The user interface unit 31 receives user requests from clients. The judging unit 32 judges whether a user request requires a back-to-source fetch and, if it does, judges whether the current processing load of the source station server has reached a preset limit value. If the user request does not require a back-to-source fetch, the second processing unit 35 hands it to the edge node server for processing and the response result is returned to the client.
If the current processing load of the source station server has not reached the preset limit value, the third processing unit 36 accesses the source station server to obtain the required data, for example the resource data the request points to. If the current processing load of the source station server has reached the preset limit value, the sequencing unit 33 queues the user requests to wait, for example in chronological order. When the processing load of the source station server falls below the preset limit value, the first processing unit 34 preferentially hands the earliest queued user request to the source station server for processing; the source station server passes the result to the edge node server, which returns it to the client. If the processing load of the source station server is below the preset limit value and no queued user request is currently waiting (for example, the waiting queue is empty), a new user request is processed directly without queuing.
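As a rough sketch of how the units of Fig. 3 could be combined in code, the following class folds the judging, sequencing and processing roles into one dispatcher; the class name, parameters and the stubbed edge and source station servers are all assumptions made only for illustration.

```python
from collections import deque

class OrderedBackToSourceDispatcher:
    """Illustrative combination of the judging, sequencing and processing units."""

    def __init__(self, limit, edge_server, origin_server):
        self.limit = limit              # preset limit value
        self.edge_server = edge_server  # used when no back-to-source fetch is needed
        self.origin_server = origin_server
        self.waiting = deque()          # the sequencing unit's waiting queue
        self.in_flight = 0

    def receive(self, request, needs_back_to_source):
        # User interface unit receives the request; judging unit decides the path.
        if not needs_back_to_source:
            return self.edge_server(request)       # second processing unit
        if self.in_flight < self.limit:
            return self._to_origin(request)        # third processing unit
        self.waiting.append(request)               # sequencing unit queues it
        return None

    def origin_done(self):
        # First processing unit: once the load drops below the limit, the earliest
        # queued request is handed to the source station server.
        self.in_flight -= 1
        if self.waiting and self.in_flight < self.limit:
            return self._to_origin(self.waiting.popleft())
        return None

    def _to_origin(self, request):
        self.in_flight += 1
        return self.origin_server(request)

dispatcher = OrderedBackToSourceDispatcher(
    limit=2,
    edge_server=lambda r: f"edge served {r}",
    origin_server=lambda r: f"source station served {r}",
)
print(dispatcher.receive("req-A", needs_back_to_source=False))
print(dispatcher.receive("req-B", needs_back_to_source=True))
```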
For further details of the back-to-source device, please refer to the detailed description of the back-to-source method in the foregoing embodiments.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may make possible variations and modifications without departing from the spirit and scope of the invention; therefore, the protection scope of the invention shall be defined by the claims of the present invention.
Claims (10)
1. An ordered back-to-source method in a content delivery network, characterized by comprising:
receiving a user request;
if the user request requires a back-to-source fetch, judging whether the current processing load of the source station server has reached a preset limit value;
if the current processing load of the source station server has reached the preset limit value, queuing the user request to wait according to the priority of the user request, with higher-priority user requests placed nearer the head of the queue;
when the processing load of the source station server falls below the preset limit value, having the source station server preferentially process the earliest queued user request.
2. The method according to claim 1, characterized by further comprising: if the user request does not require a back-to-source fetch, processing the user request by the edge node server and returning a response result.
3. The method according to claim 1, characterized by further comprising: if the current processing load of the source station server has not reached the preset limit value, accessing the source station server to obtain the required data.
4. The method according to claim 1, characterized in that queuing the user request to wait comprises: adding the user request to a waiting queue in chronological order.
5. The method according to claim 1, characterized by further comprising: if the processing load of the source station server is below the preset limit value and no queued user request is currently waiting, processing a new user request directly without queuing.
6. An ordered back-to-source device in a content delivery network, characterized by comprising:
a user interface unit, which receives a user request;
a judging unit, which, if the user request requires a back-to-source fetch, judges whether the current processing load of the source station server has reached a preset limit value;
a sequencing unit, which, if the current processing load of the source station server has reached the preset limit value, queues the user request to wait according to the priority of the user request, with higher-priority user requests placed nearer the head of the queue;
a first processing unit, which, when the processing load of the source station server falls below the preset limit value, preferentially hands the earliest queued user request to the source station server for processing.
7. The ordered back-to-source device according to claim 6, characterized by further comprising:
a second processing unit, which, if the user request does not require a back-to-source fetch, hands the user request to the edge node server for processing and returns a response result.
8. The ordered back-to-source device according to claim 6, characterized by further comprising:
a third processing unit, which, if the current processing load of the source station server has not reached the preset limit value, accesses the source station server to obtain the required data.
9. The ordered back-to-source device according to claim 6, characterized in that the sequencing unit adds the user request to a waiting queue in chronological order.
10. The ordered back-to-source device according to claim 6, characterized by further comprising:
a fourth processing unit, which, if the processing load of the source station server is below the preset limit value and no queued user request is currently waiting, processes a new user request directly without queuing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310149174.XA CN103237031B (en) | 2013-04-26 | 2013-04-26 | Ordered back-to-source method and device in a content delivery network
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310149174.XA CN103237031B (en) | 2013-04-26 | 2013-04-26 | Ordered back-to-source method and device in a content delivery network
Publications (2)
Publication Number | Publication Date |
---|---|
CN103237031A CN103237031A (en) | 2013-08-07 |
CN103237031B true CN103237031B (en) | 2016-04-20 |
Family
ID=48885048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310149174.XA Active CN103237031B (en) Ordered back-to-source method and device in a content delivery network
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103237031B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025105B (en) * | 2015-07-27 | 2018-10-30 | 广州华多网络科技有限公司 | request processing method and device |
CN105246052B (en) * | 2015-10-14 | 2018-08-03 | 中国联合网络通信集团有限公司 | A kind of method and device of data distribution |
CN106572166B (en) * | 2016-11-02 | 2019-07-05 | Oppo广东移动通信有限公司 | Data transmission method, backup server and mobile terminal |
CN110392074B (en) * | 2018-04-19 | 2022-05-17 | 贵州白山云科技股份有限公司 | Scheduling method and device based on dynamic acceleration |
CN109005118A (en) * | 2018-08-21 | 2018-12-14 | 中国平安人寿保险股份有限公司 | Search method, apparatus, computer equipment and the storage medium of CDN source station address |
CN110858844A (en) * | 2018-08-22 | 2020-03-03 | 阿里巴巴集团控股有限公司 | Service request processing method, control method, device, system and electronic equipment |
CN110636104B (en) * | 2019-08-07 | 2022-05-10 | 咪咕视讯科技有限公司 | Resource request method, electronic device and storage medium |
CN110933467B (en) * | 2019-12-02 | 2021-07-27 | 腾讯科技(深圳)有限公司 | Live broadcast data processing method and device and computer readable storage medium |
CN115250294B (en) * | 2021-04-25 | 2024-03-22 | 贵州白山云科技股份有限公司 | Cloud distribution-based data request processing method and system, medium and equipment thereof |
- 2013-04-26 CN CN201310149174.XA patent/CN103237031B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101406025A (en) * | 2006-03-28 | 2009-04-08 | 汤姆森许可贸易公司 | Centralization type scheduling device aiming at content transmission network |
CN102594921A (en) * | 2012-03-22 | 2012-07-18 | 网宿科技股份有限公司 | Synchronization file access method and system based on content distribution system |
CN102790798A (en) * | 2012-05-23 | 2012-11-21 | 蓝汛网络科技(北京)有限公司 | Transparent proxy implementation method, device and system in content distribution network |
CN102970381A (en) * | 2012-12-21 | 2013-03-13 | 网宿科技股份有限公司 | Multi-source load balance method and system for proportional polling based on content distribution network |
Also Published As
Publication number | Publication date |
---|---|
CN103237031A (en) | 2013-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103237031B (en) | Ordered back-to-source method and device in a content delivery network | |
CN102882939B (en) | Load balancing method, load balancing equipment and extensive domain acceleration access system | |
CN103716251B (en) | For the load-balancing method and equipment of content distributing network | |
CN103678408B (en) | A kind of method and device of inquiry data | |
WO2019056640A1 (en) | Order processing method and device | |
CN103412786B (en) | High performance server architecture system and data processing method thereof | |
CN110602156A (en) | Load balancing scheduling method and device | |
CN105468690A (en) | Inventory data processing method and device | |
CN104796449B (en) | Content delivery method, device and equipment | |
CN102394880B (en) | Method and device for processing jump response in content delivery network | |
CN103516744A (en) | A data processing method, an application server and an application server cluster | |
CN109951566A (en) | A kind of Nginx load-balancing method, device, equipment and readable storage medium storing program for executing | |
CN112202918B (en) | Load scheduling method, device, equipment and storage medium for long connection communication | |
CN105871591A (en) | Method and device for distributing CDN (Content Delivery Network) addresses | |
CN104601534A (en) | Method and system for processing CDN system images | |
US20190370293A1 (en) | Method and apparatus for processing information | |
CN107517243A (en) | Request scheduling method and device | |
CN112989239A (en) | Method for hybrid client-server data provision | |
CN111010453B (en) | Service request processing method, system, electronic device and computer readable medium | |
CN110839074A (en) | Data request receiving and processing method and device | |
CN110309229A (en) | The data processing method and distributed system of distributed system | |
CN104852964A (en) | Multifunctional server scheduling method | |
US8908855B1 (en) | Systems and methods for allocation of telephony resources on-demand | |
WO2016101115A1 (en) | Resource scheduling method and related apparatus | |
CN107045452B (en) | Virtual machine scheduling method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |