CN102148759A - Method for saving export bandwidth of backbone network by cache acceleration system - Google Patents

Method for saving export bandwidth of backbone network by cache acceleration system

Info

Publication number
CN102148759A
CN102148759A CN2011100811991A CN201110081199A
Authority
CN
China
Prior art keywords
user
cache
request
proxy server
acceleration system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100811991A
Other languages
Chinese (zh)
Inventor
许旭
余兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN2011100811991A priority Critical patent/CN102148759A/en
Publication of CN102148759A publication Critical patent/CN102148759A/en
Pending legal-status Critical Current

Abstract

The invention discloses a method for saving the egress bandwidth of a backbone network by means of a cache acceleration system, which comprises the following steps: accelerating a user's hypertext transfer protocol (HTTP) requests with the cache acceleration system; dynamically allocating the user's HTTP requests according to the load of the cache acceleration system; establishing a one-to-one correspondence between user Internet protocol (IP) addresses and public-network egress IP addresses by means of real-time authentication data and firewall network address translation (NAT); storing and reusing duplicate resources according to the format of the target HTTP request; and providing a Web-based user authentication management system through the cache acceleration system, the proxy server authenticating each user request. The method satisfies the large-scale bandwidth demands of service providers and enterprise users, meets the monitoring and supervision requirement, previously unmet in the industry, of mapping metropolitan area network private-network addresses to public-network IP addresses, saves about 28 percent of public-network egress bandwidth, and reduces resource consumption and public-network egress investment.

Description

Method for saving backbone network egress bandwidth by means of a cache acceleration system
Technical field
The present invention relates to resource distribution technology, and in particular to a method for saving backbone network egress bandwidth by means of a cache acceleration system.
Background technology
The slow speed at which users obtain information online has become a key factor restricting the development of the Internet. How to improve the speed at which users obtain information while using conventional network resources has become a major problem troubling numerous enterprises and service providers.
Web cache devices can be installed at various locations in a network; Web caching can be implemented as proxy caching, transparent caching, or reverse caching.
Transparent interception is a capability of a Web cache: it accepts and responds to a user's access to any Web server on the Internet. It answers the user's requests using the IP address of the origin server, so the user is unaware that a local cache is being accessed. The Web cache device delivers stored content requested by the user directly; requests for content not in the cache are redirected to the server the user originally intended to visit, and a copy of the returned information is kept in the cache while it is forwarded to the user, so that it can be served next time.
Operators therefore hope to use cache acceleration so that a large volume of network traffic no longer needs to be fetched and downloaded from the Internet, saving a great deal of egress bandwidth and keeping high-volume access within the metropolitan area network. Because content is returned directly by the cache acceleration system, response speed is also greatly improved, and the user experience is enhanced.
Referring to Fig. 1, a traditional single-Cache system has problems in the following four aspects:
1. The processing capability of a single device is limited, no off-the-shelf device can directly meet the service requirements, the host becomes a bottleneck for business growth, and the system is not easy to expand.
2. A traditional Cache device is generally not a dedicated traffic-processing device; a great deal of its capacity is consumed on data analysis and forwarding, so it cannot fulfil the most important purpose of a Cache, which is precisely to accelerate access to static resources.
3. In terms of deployment, whether the Cache is attached by policy routing or by other means, a large volume of traffic can exceed the processing capability of the device; stable cooperation with companion devices must be provided so that a failure of the Cache system does not affect users, and scalability for cooperating with different services is insufficient.
4. When a traditional Cache device performs acceleration, requests that leave through the policy-routing egress cannot be traced to their source within the metropolitan area network and therefore cannot be supervised, which carries legal risk.
Summary of the invention
In view of the above problems in the prior art, the invention provides a method for saving backbone network egress bandwidth by means of a cache acceleration system.
To achieve this goal, the technical solution adopted by the present invention is: a method for saving backbone network egress bandwidth by means of a cache acceleration system, the method comprising: accelerating a user's HTTP requests with the cache acceleration system; dynamically allocating the user's HTTP requests according to the load of the cache acceleration system; establishing, by means of real-time authentication data and firewall NAT, a one-to-one correspondence between user IP addresses and public-network egress IP addresses; and storing and reusing duplicate resources according to the format of the target HTTP request. The cache acceleration system provides a Web-based user authentication management system, and the proxy server authenticates each user request.
Preferably, accelerating the user's HTTP requests with the cache acceleration system comprises:
A. the user initiates an HTTP request, which arrives at the load-balancing device of the cache acceleration system;
B. the load-balancing device automatically allocates the request, according to proxy server load, to the proxy server with the lowest resource usage;
C. the proxy server obtains authentication data from the source user information and decides whether it is permitted to provide the cache acceleration service.
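Steps A-C above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names `ProxyServer`, `pick_proxy`, and `authorized_users`, and the idea of modelling load as a single fraction, are all assumptions made for the example.

```python
# Hypothetical sketch of steps A-C: the load balancer picks the proxy
# server with the lowest resource usage (step B), and that proxy checks
# authentication data for the source user (step C).
from dataclasses import dataclass

@dataclass
class ProxyServer:
    name: str
    load: float  # fraction of resources in use, 0.0 - 1.0 (assumed metric)

def pick_proxy(proxies):
    """Step B: allocate the request to the least-loaded proxy."""
    return min(proxies, key=lambda p: p.load)

def is_service_allowed(user_ip, authorized_users):
    """Step C: look up authentication data for the source user."""
    return user_ip in authorized_users

proxies = [ProxyServer("proxy-1", 0.72), ProxyServer("proxy-2", 0.31)]
chosen = pick_proxy(proxies)
allowed = is_service_allowed("10.1.1.1", {"10.1.1.1", "10.1.1.2"})
```

In this sketch the request from 10.1.1.1 would be dispatched to proxy-2, the less loaded server, and would pass the authentication check.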
Preferably, dynamically allocating the user's HTTP requests according to the load of the cache acceleration system and establishing, by means of real-time authentication data and firewall NAT, a one-to-one correspondence between user IP addresses and public-network egress IP addresses comprises:
A. for requests eligible for the cache acceleration service, the proxy server determines the egress IP address rule from the authentication data; the different IP addresses bound for the user on different proxy servers are translated into the same public-network IP address by firewall NAT when the user's traffic leaves for the public network;
B. the proxy server classifies the request by HTTP request type, judges whether the target resource is storable data or dynamic, non-storable data, and decides whether to send the request to the Internet or to the storage server.
Storing and reusing duplicate resources according to the format of the target HTTP request comprises:
A. the proxy server fetches dynamic, non-storable resources from the Internet and returns them to the user;
B. the proxy server fetches storable resources from the storage server, no longer going to the Internet to obtain them.
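The classify-and-route decision above can be sketched as a lookup on the request's target format. The extension list and the function names are illustrative assumptions; the patent does not specify which formats count as storable beyond pictures, video, streaming media, and static downloads.

```python
# Hedged sketch: classify the target of an HTTP request as storable
# (static) or non-storable (dynamic) by its URL format, and route the
# request to the storage server or the Internet accordingly.
from urllib.parse import urlparse

# Assumed set of storable formats (pictures, video, static downloads).
STORABLE_EXTENSIONS = {".jpg", ".png", ".gif", ".mp4", ".flv",
                       ".zip", ".exe", ".css", ".js"}

def is_storable(url: str) -> bool:
    """Judge by the target format whether the resource can be stored."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in STORABLE_EXTENSIONS)

def route_request(url: str) -> str:
    """Return which upstream the proxy should query for this request."""
    return "storage-server" if is_storable(url) else "internet"
```

Under these assumptions a static image request would go to the storage server, while a dynamic query would go to the Internet.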
Preferably, the proxy server is a hardware device with at least 4 cores and at least 6 GB of memory, running Windows Server or a Linux enterprise edition operating system, on which the cache acceleration software runs.
Preferably, the cache acceleration software provides HTTP-protocol-layer forwarding, authentication, storage whitelist control, private-network address binding, and storage resource forwarding.
Compared with the prior art, the advantages of the invention are: it meets the large-scale bandwidth demands of operators and enterprise users, and in particular satisfies the monitoring and supervision requirement, previously unsolved in the industry, of mapping metropolitan area network private-network addresses to public-network IP addresses. In the software part of the cache acceleration system, the low-level HTTP protocol technology can handle more than 10000 concurrent requests per second on a single machine; the software runs on cross-platform proxy servers, and the number of servers can be expanded arbitrarily to dynamically support more user requests. A networking and construction scheme for a metropolitan private network is provided that can save about 28% of public-network egress bandwidth, reducing resource consumption and public-network egress investment.
Description of drawings
Fig. 1 is a network architecture diagram of a single Cache system in the prior art;
Fig. 2 is a flow chart of the present invention;
Fig. 3 is a network traffic diagram of an embodiment of the present invention;
Fig. 4 is an IP translation diagram of the present invention;
Fig. 5 is a schematic diagram of a networking structure of the present invention;
Fig. 6 is a networking topology diagram of the cache acceleration system of the present invention.
Embodiment
The invention is further described below with reference to the accompanying drawings.
As an embodiment of the present invention, referring to Fig. 2, a method for saving backbone network egress bandwidth by means of a cache acceleration system comprises the following steps:
1) The user initiates an HTTP request, which arrives at the load-balancing device of the cache acceleration system; the user's HTTP requests all pass through the load-balancing device, so routes must be reachable between the user and the cache acceleration system.
2) The load-balancing device automatically allocates the request, according to proxy server load, to the proxy server with the lowest resource usage. The load-balancing device is a forwarding device that can distribute user requests automatically by probing the load pressure of the proxy servers.
3) The proxy server obtains authentication data from the source user information and decides whether it is permitted to provide the cache acceleration service. The proxy server is a hardware device with at least 4 cores and at least 6 GB of memory, running Windows Server or a Linux enterprise edition operating system, on which the cache acceleration software runs.
4) For requests eligible for the cache acceleration service, the proxy server determines the egress IP address rule from the authentication data; the different IP addresses bound for the user on different proxy servers are translated into the same public-network IP address by firewall NAT when the user's traffic leaves for the public network. The cache acceleration system provides a Web-based user authentication management system, and the proxy server authenticates each user request, so requests that fail authentication are denied access.
5) The proxy server classifies the request by HTTP request type, judges whether the target resource is storable data or dynamic, non-storable data, and decides whether to send the request to the Internet or to the storage server.
6) The proxy server fetches dynamic, non-storable resources from the Internet and returns them to the user.
7) The proxy server fetches storable resources from the storage server, no longer going to the Internet to obtain them, thereby saving bandwidth.
8) Stored resources are updated automatically according to their own expiration rules and the update-time attributes of the corresponding Internet resources.
9) Cached data is returned to the user over the high-bandwidth metropolitan area network, whose fast response improves the user experience.
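The refresh rule of step 8 can be sketched as a simple predicate. This assumes the cache records when each object was fetched and compares the origin's last-modified time with its own copy; the field and function names are illustrative, not from the patent.

```python
# Step 8 sketch: a stored resource needs refreshing either when the
# cache's own expiration rule says it is stale, or when the Internet
# resource's update time is newer than the cached copy.
import time

def needs_refresh(fetched_at, max_age_s,
                  origin_last_modified, cached_last_modified, now=None):
    """All times are Unix timestamps; max_age_s is the expiration rule."""
    now = time.time() if now is None else now
    expired = now - fetched_at > max_age_s                  # own expiration rule
    updated = origin_last_modified > cached_last_modified   # origin changed
    return expired or updated
```

For example, a copy fetched 2 minutes ago with a 1-minute expiration rule is refreshed even if the origin has not changed; a fresh copy whose origin has a newer last-modified time is also refreshed.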
The workflow by which the load-balancing device automatically distributes user traffic to the group of proxy servers is:
1) On each proxy server the user is bound, according to a certain rule, to one egress IP address, and the firewall NAT translates this rule-based address into one public-network egress address.
2) The firewall is one device, or two in mutual standby, capable of handling a large number of concurrent requests to the public network; it is a network device that translates private-network addresses into public-network egress addresses by NAT rules.
To achieve acceleration, the cache acceleration system judges the format of the target resource: pictures, video, streaming media, static downloadable files and the like are stored and kept up to date, so that when users access the same resource again it no longer consumes public-network bandwidth; instead, the superior bandwidth of the metropolitan area network is exploited to speed up the users' Web access. The process is:
1) The storage server is a large-capacity storage and control device that can access the Internet and communicates with the proxy servers over very-high-capacity links; it has very high concurrent processing capability, and its mass storage space can be equipped with disk arrays of different sizes according to the requirements of the specific project.
2) The operation of the storage server depends on the storage server software running on it, which is cross-platform, supports 64-bit processing, and supports high-concurrency storage and forwarding.
The whitelist of the cache acceleration system is determined by the administrator; it is a list of legitimate, operational websites whose static storable resources may be stored.
Referring to Fig. 3, this figure illustrates a network deployment scheme of the present invention that provides the cache acceleration service for 400-500 terminal computers in a metropolitan area network:
1. The Internet cafe addresses are private-network addresses in the 10.X.X.1/255.0.0.0 range; requests reach the load-balancing service address 192.168.3.1 through a routing device, the actual service address for the Internet cafe being 192.168.128.100.
2. The load balancer communicates through its downlink address in the 192.168.128.254/24 range with the network interfaces of the 4 proxy servers in the 192.168.128.X range.
3. The egress public-address pool on each proxy server is 172.X.X.X/255.248.0.0, a class-B range comprising 4 class-C blocks; requests arriving from the Internet cafe's 10.X.X.X addresses are bound on the proxy servers to designated 172.X.X.X addresses, and the requests are then sent to the firewall.
4. The 172.X.X.X/255.248.0.0 addresses are translated, according to the rules, into fixed public-network addresses for public-network access.
5. Resources returned from the public network are returned to the Internet cafe users over the original private-network route.
Referring to Fig. 4, this figure explains the translation of private-network addresses into public-network addresses:
1. When the public-network IP pool is sufficient, the user's private-network IP and public-network IP are translated one to one. For example: if the user's private IP is 10.1.1.1, the corresponding address to be bound on the proxy servers lies in the class-C block 172.1.1.X, with 172.1.1.1 bound on server No. 1, 172.1.1.2 on server No. 2, and so on; the firewall then translates these into the single public address 125.69.91.1.
An example of the translation relationship:
i. 10.1.1.1 binds 172.1.1.X (X is 1-254); NAT translates it to 125.69.91.1
ii. 10.1.1.2 binds 172.1.2.X (X is 1-254); NAT translates it to 125.69.91.2
iii. and so on
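The one-to-one rule i-iii above follows a simple pattern: the last octet of the private address selects the third octet of the bound 172 block, which in turn selects the last octet of the public address. A sketch, with illustrative function names:

```python
# Sketch of the one-to-one binding rule: private 10.1.1.N is bound to
# block 172.1.N.X (server k binds 172.1.N.k), and firewall NAT maps
# the block to public address 125.69.91.N.
def bind_address(private_ip: str, server_no: int) -> str:
    """Address bound on proxy server `server_no` for this private IP."""
    n = int(private_ip.split(".")[3])
    return f"172.1.{n}.{server_no}"

def nat_public_ip(bound_ip: str) -> str:
    """Firewall NAT: 172.1.N.X -> 125.69.91.N."""
    n = int(bound_ip.split(".")[2])
    return f"125.69.91.{n}"
```

So user 10.1.1.1 is bound to 172.1.1.2 on server No. 2, and every 172.1.1.X address NATs to the same public address 125.69.91.1, preserving the one-to-one private-to-public correspondence regardless of which proxy server handled the request.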
2. When the public-network IP pool is insufficient, several private addresses can be bound to one 172.X.X.X block for public-network access, achieving a many-to-one translation.
An example of the translation relationship:
i. 10.1.1.1 and 10.1.1.2 bind 172.1.1.X (X is 1-254); NAT translates them to 125.69.91.1
ii. 10.1.1.3 and 10.1.1.4 bind 172.1.2.X (X is 1-254); NAT translates them to 125.69.91.2
iii. and so on
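The insufficient-pool case i-iii above maps pairs of private hosts onto one 172 block before NAT. A sketch under the assumption, taken from the example, that hosts 2k-1 and 2k share block 172.1.k.X and hence public address 125.69.91.k:

```python
# Sketch of the many-to-one rule: private hosts 10.1.1.(2k-1) and
# 10.1.1.(2k) share block 172.1.k.X and public address 125.69.91.k.
def shared_block(private_ip: str) -> str:
    """172 block shared by a pair of private hosts."""
    n = int(private_ip.split(".")[3])
    k = (n + 1) // 2  # 1,2 -> 1; 3,4 -> 2; ...
    return f"172.1.{k}.X"

def shared_public_ip(private_ip: str) -> str:
    """Public address the shared block is NATed to."""
    k = (int(private_ip.split(".")[3]) + 1) // 2
    return f"125.69.91.{k}"
```

The trade-off relative to the one-to-one case is that several private hosts now appear behind the same public address, so per-user traceability at the egress is reduced.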
Referring to Fig. 5, this figure is a simplified schematic of the network implementation scheme of the present invention. It shows that the cache acceleration system is interposed between the user and the Internet: user requests first reach the cache acceleration system through a routing device, and the cache acceleration system then either serves local resources or fetches resources from the Internet and returns them to the user.
Referring to Fig. 6, this figure is a detailed networking topology combining the specific embodiments of Fig. 3 and Fig. 4; its working principle is:
1. Users reach the load-balancing device of the cache acceleration system over the private network, passing through SR and T640 routers and core routing and switching devices; after requests arrive at the load balancer, its uplink and downlink are designed as 4GE links, which is necessary to satisfy the users' concurrency and bandwidth demands.
2. The cache acceleration equipment consists of core switching devices, load-balancing devices (paired), multiple proxy servers, storage servers, a log server, an index update server, and egress NAT firewalls (paired).
3. The load balancers and the NAT firewalls are both deployed as hot-standby pairs, so that on failure the standby takes over instantly.
4. The proxy servers hang below the load balancer; if some proxy servers stop working properly, the load-balancing device automatically allocates requests to the healthy servers without affecting the users' cache acceleration service.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method for saving backbone network egress bandwidth by means of a cache acceleration system, characterized in that the method comprises: accelerating a user's HTTP requests with the cache acceleration system; dynamically allocating the user's HTTP requests according to the load of the cache acceleration system; establishing, by means of real-time authentication data and firewall NAT, a one-to-one correspondence between user IP addresses and public-network egress IP addresses; and storing and reusing duplicate resources according to the format of the target HTTP request; the cache acceleration system provides a Web-based user authentication management system, and the proxy server authenticates each user request.
2. The method for saving backbone network egress bandwidth by means of a cache acceleration system according to claim 1, characterized in that accelerating the user's HTTP requests with the cache acceleration system comprises:
the user initiates an HTTP request, which arrives at the load-balancing device of the cache acceleration system;
the load-balancing device automatically allocates the request, according to proxy server load, to the proxy server with the lowest resource usage;
the proxy server obtains authentication data from the source user information and decides whether it is permitted to provide the cache acceleration service.
3. The method for saving backbone network egress bandwidth by means of a cache acceleration system according to claim 1, characterized in that dynamically allocating the user's HTTP requests according to the load of the cache acceleration system and establishing, by means of real-time authentication data and firewall NAT, a one-to-one correspondence between user IP addresses and public-network egress IP addresses comprises:
A. for requests eligible for the cache acceleration service, the proxy server determines the egress IP address rule from the authentication data; the different IP addresses bound for the user on different proxy servers are translated into the same public-network IP address by firewall NAT when the user's traffic leaves for the public network.
4. B. the proxy server classifies the request by HTTP request type, judges whether the target resource is storable data or dynamic, non-storable data, and decides whether to send the request to the Internet or to the storage server;
storing and reusing duplicate resources according to the format of the target HTTP request comprises:
A. the proxy server fetches dynamic, non-storable resources from the Internet and returns them to the user;
B. the proxy server fetches storable resources from the storage server, no longer going to the Internet to obtain them.
5. The method for saving backbone network egress bandwidth by means of a cache acceleration system according to claim 1, characterized in that the proxy server is a hardware device with at least 4 cores and at least 6 GB of memory, running Windows Server or a Linux enterprise edition operating system, on which the cache acceleration software runs.
6. The method for saving backbone network egress bandwidth by means of a cache acceleration system according to claim 4, characterized in that the cache acceleration software provides HTTP-protocol-layer forwarding, authentication, storage whitelist control, private-network address binding, and storage resource forwarding.
CN2011100811991A 2011-04-01 2011-04-01 Method for saving export bandwidth of backbone network by cache acceleration system Pending CN102148759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100811991A CN102148759A (en) 2011-04-01 2011-04-01 Method for saving export bandwidth of backbone network by cache acceleration system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100811991A CN102148759A (en) 2011-04-01 2011-04-01 Method for saving export bandwidth of backbone network by cache acceleration system

Publications (1)

Publication Number Publication Date
CN102148759A true CN102148759A (en) 2011-08-10

Family

ID=44422769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100811991A Pending CN102148759A (en) 2011-04-01 2011-04-01 Method for saving export bandwidth of backbone network by cache acceleration system

Country Status (1)

Country Link
CN (1) CN102148759A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005094041A1 (en) * 2004-03-22 2005-10-06 Qualcomm Incorporated Http acceleration over a network link
CN101123620A (en) * 2007-08-28 2008-02-13 南京联创科技股份有限公司 Method for electronic data processing for concurrent request of a large number of services
CN101127701A (en) * 2007-07-24 2008-02-20 深圳市深信服电子科技有限公司 Method for realizing proxy server load balance via network device
CN101257485A (en) * 2007-03-02 2008-09-03 华为技术有限公司 Web applied system and method
CN101729598A (en) * 2009-11-18 2010-06-09 福建星网锐捷网络有限公司 Method and system for increasing Web service response speed and network processor
CN101945103A (en) * 2010-08-09 2011-01-12 中国电子科技集团公司第五十四研究所 IP (Internet Protocol) network application accelerating system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhuang Jilin: "Design and Implementation of a Web Service Load Balancing System Based on HTTP Redirection", New Technology of Library and Information Service, no. 2, 29 February 2008 (2008-02-29), pages 82-86 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932393A (en) * 2011-10-09 2013-02-13 广州盛华信息技术有限公司 Method and system for accessing internet data
CN102629938B (en) * 2012-03-14 2015-05-06 网宿科技股份有限公司 Method for carrying out video acceleration on network video loading and system thereof
CN102629938A (en) * 2012-03-14 2012-08-08 网宿科技股份有限公司 Method for carrying out video acceleration on network video loading and system thereof
CN104994028A (en) * 2015-07-15 2015-10-21 上海地面通信息网络有限公司 Bandwidth saving control device based on NAT address translator
US10628190B2 (en) 2015-09-28 2020-04-21 Huawei Technologies Co., Ltd. Acceleration management node, acceleration node, client, and method
CN105357258A (en) * 2015-09-28 2016-02-24 华为技术有限公司 Acceleration management node, acceleration node, client and method
US11579907B2 (en) 2015-09-28 2023-02-14 Huawei Technologies Co., Ltd. Acceleration management node, acceleration node, client, and method
US11080076B2 (en) 2015-09-28 2021-08-03 Huawei Technologies Co., Ltd. Acceleration management node, acceleration node, client, and method
CN105357258B (en) * 2015-09-28 2020-06-26 华为技术有限公司 Acceleration management node, acceleration node, client and method
CN106657183A (en) * 2015-10-30 2017-05-10 中兴通讯股份有限公司 Caching acceleration method and apparatus
CN105635273A (en) * 2015-12-25 2016-06-01 国云科技股份有限公司 Method for enhancing private cloud network bandwidth utilization rate
CN105472031A (en) * 2015-12-29 2016-04-06 深圳市鼎芯无限科技有限公司 Method and device for accessing load balancing data
CN106210028B (en) * 2016-07-05 2019-09-06 广州华多网络科技有限公司 A kind of server prevents method, server and the system of overload
CN106210028A (en) * 2016-07-05 2016-12-07 广州华多网络科技有限公司 A kind of server prevents method, server and the system of overload
CN112637254A (en) * 2019-09-24 2021-04-09 拉扎斯网络科技(上海)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN112637254B (en) * 2019-09-24 2023-04-07 拉扎斯网络科技(上海)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114157343A (en) * 2020-12-05 2022-03-08 南通先进通信技术研究院有限公司 Working method of CDN network system based on satellite communication

Similar Documents

Publication Publication Date Title
CN102148759A (en) Method for saving export bandwidth of backbone network by cache acceleration system
US10491523B2 (en) Load distribution in data networks
CN102137014B (en) Resource management method, system and resource manager
CN113596110B (en) Cloud primary micro-service platform oriented to heterogeneous cloud
CN101262488B (en) A content distribution network system and method
US20030191838A1 (en) Distributed intelligent virtual server
CN111612466B (en) Consensus and resource transmission method, device and storage medium
CN108780410A (en) The network virtualization of container in computing system
US20100037225A1 (en) Workload routing based on greenness conditions
CN105577549A (en) Method and system for realizing content delivery network based on software defined network
CN102394929A (en) Conversation-oriented cloud computing load balancing system and method therefor
CN103596066B (en) Method and device for data processing
CN105068755B (en) A kind of data trnascription storage method towards cloud computing content distributing network
JP2009500968A (en) Integrated architecture for remote network access
CN105338016B (en) Data high-speed caching method and device and resource request response method and device
CN104780221A (en) Intellectual property comprehensive service platform system for middle and small-sized enterprises
CN109962961A (en) A kind of reorientation method and system of content distribution network CDN service node
CN112988378A (en) Service processing method and device
CN101262489B (en) A content distribution network system and method
CN113254160B (en) IO resource request method and device
KR20150011087A (en) Distributed caching management method for contents delivery network service and apparatus therefor
US20210337041A1 (en) Orchestrated proxy service
Chen et al. Using service brokers for accessing backend servers for web applications
CN104468832B (en) A kind of light distributed structure/architecture based on http agreements
CN115988080B (en) Micro-service resource calling method and system based on proxy middleware

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110810