TWI535249B - Automate the way to extend the network cache system - Google Patents


Info

Publication number
TWI535249B
Authority
TW
Taiwan
Prior art keywords
cache
network
host
server
user agent
Prior art date
Application number
TW103104547A
Other languages
Chinese (zh)
Other versions
TW201532412A (en)
Inventor
Yi-Xiang Lin
You-Xin Yan
Original Assignee
D Link Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by D Link Corp filed Critical D Link Corp
Priority to TW103104547A priority Critical patent/TWI535249B/en
Publication of TW201532412A publication Critical patent/TW201532412A/en
Application granted granted Critical
Publication of TWI535249B publication Critical patent/TWI535249B/en


Landscapes

  • Information Transfer Between Computers (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

Method for Automatically Expanding a Network Cache System

The present invention relates to a network cache system, and more particularly to a method that automatically detects the current network topology of each cache host without requiring the user to configure each host individually.

With the rapid growth of the online world, a wide variety of network devices have been developed and are now widely used in everyday life and work across all industries. This trend has not only accelerated the speed and efficiency of information exchange, but has also brought great convenience to people's lives and work. In particular, the technology for connecting computers to the Internet has advanced very quickly: bandwidth has grown from the original 14.4K to today's 10M or even 100M and beyond, connection methods have evolved from a single dial-up telephone line to today's diverse wired and wireless options, and terminal devices have moved toward mobile communication equipment. As a result, people are now accustomed to obtaining all kinds of news and information through the World Wide Web anytime and anywhere.

With the rapid growth of the Internet population, issues such as network bandwidth, server load capacity, and web browsing speed have become important considerations for service providers, since these issues determine whether users have a good experience on the network. In general, to reduce wasted network bandwidth and speed up web browsing, operators commonly deploy a Network Cache System, which stores frequently accessed web pages in the system's cache memory or on its hard disks to accelerate access; after all, people within the same region tend to have a high probability of browsing the same pages. Based on where the cache system is placed and what it is used for, network caches fall roughly into two categories. The first is the "Forward Cache", which is usually located close to the user side and mainly stores the web content that users have already viewed. Enterprises, schools, and telecom operators commonly adopt forward caching to save upstream bandwidth, filter inappropriate content, or block viruses.

The second category is the "Reverse Cache", which usually stores the content of specific servers and is mainly used to distribute the load of a web server. That is, the IP address queried by the user's device first points to the IP address of a cache host in the network cache system; only when the cache host does not hold the web content the user needs does the user's device connect to the web server that has that content. The "Content Delivery Network (CDN)" is one application of the reverse cache. Its main purpose is to offload the web server, increase throughput, and make websites faster. CDN servers are usually deployed in many different locations so that they can absorb more of the load, and they use temporary storage techniques to keep copies of web content originally stored on the web server. In this way, the bandwidth consumed at the web server is effectively reduced, and users retrieve the content of a web page directly from the CDN server. In addition, deploying a CDN inside the web server's internal network is known as iCDN (Internetworking of Content Delivery Networks), which is commonly used to deploy streaming media services.

However, regardless of which network cache system is used, installing and operating it usually requires the user to change the network architecture or go through complicated configuration procedures before the number of cache hosts can be increased to improve capacity scalability and keep the system running normally. For example, changing network routing involves configuring the related network devices, IP addresses, DNS settings, and so on, which requires professional network planning skills. This inevitably makes the system inconvenient to use, and users cannot easily add the cache hosts they need. How to effectively solve these problems has therefore become an important goal that many network service providers are actively striving to achieve.

In view of the high installation complexity of existing network cache systems, which forces users to spend a great deal of time on configuration and troubleshooting, the inventors, after long research and experimentation, have developed the method of automatically expanding a network cache system of the present invention, in the expectation that it can effectively solve the aforementioned problems.

One object of the present invention is to provide a method for automatically expanding a network cache system so that a user can easily increase the number of cache hosts without performing complicated configuration. The method is applied to a network cache system that includes a user agent device (User Agent, UA), at least one cache host, and a server, where the cache hosts are wired to the user agent device, to the server, or to one another. According to the method, after each cache host boots, it first executes a cache-architecture detection procedure: the cache host sends probe packets from all of its ports to the adjacent user agent device, server, or other cache hosts, and then, based on the reply packets received on each port, determines which ports connect it to the adjacent user agent device, server, or other cache hosts, thereby learning whether the cabling is of the one-in-one-out type or the multiple-in-multiple-out type. The cache host further determines, from the content of the reply packets, whether the links between itself and the adjacent user agent device, server, or other cache hosts support the Link Aggregation Control Protocol (LACP). If so, all the bandwidth between itself and the adjacent device is aggregated and used together; if not, a multiple-bridge architecture is used between itself and the adjacent user agent device, server, or other cache hosts. In this way, once the user has connected the required number of cache hosts between the user agent device and the server, the cache hosts can detect the current network topology by themselves, sparing the user from having to configure each cache host individually and greatly improving convenience of use.

Another object of the present invention is that, when the network cache system is provided with multiple cache hosts, each cache host also executes a load-balancing procedure that adjusts the network traffic carried by each cache host according to its processing capability, thereby improving the efficiency and speed of the network cache system. Moreover, since the above procedures run automatically whenever a cache host is started (for example, after a reboot, or after a broken link is restored), the user only needs to add cache hosts and power them on to easily expand the network cache system.

To enable the examiners to gain a further understanding of the objects, technical features, and effects of the present invention, embodiments are described in detail below in conjunction with the drawings:

〔Prior Art〕

None

〔The Present Invention〕

1‧‧‧Network cache system

11‧‧‧User agent device

12A, 12B, 12C, 14A, 14B, 17A, 17B, 17C, 18A, 18B, 18C, 19A, 19B, 21A, 21B, 21C, 22A, 22B, 23A, 23B, 24A, 24B, 25A, 25B, 25C‧‧‧Network links

13, 16‧‧‧Cache host

15‧‧‧Server

FIG. 1A is a schematic diagram of a first architecture of the network cache system of the present invention provided with one cache host;
FIG. 1B is a schematic diagram of a second architecture of the network cache system of the present invention provided with one cache host;
FIG. 1C is a schematic diagram of a third architecture of the network cache system of the present invention provided with one cache host;
FIG. 2A is a schematic diagram of a first architecture of the network cache system of the present invention provided with a plurality of cache hosts;
FIG. 2B is a schematic diagram of a second architecture of the network cache system of the present invention provided with a plurality of cache hosts;
FIG. 2C is a schematic diagram of a third architecture of the network cache system of the present invention provided with a plurality of cache hosts;
FIG. 3A is a schematic diagram of a fourth architecture of the network cache system of the present invention provided with a plurality of cache hosts;
FIG. 3B is a schematic diagram of one use state of FIG. 3A;
FIG. 3C is a schematic diagram of another use state of FIG. 3A; and
FIG. 4 is an operation sequence diagram of the network cache system of the present invention.

The present invention is a method for automatically expanding a network cache system. The method is applied to a Network Cache System 1. Referring to FIG. 1A, in one embodiment, the network cache system 1 includes a user agent device 11 (User Agent, UA), at least one cache host 13, and a server 15, where each cache host 13 is wired to the user agent device 11 or to the server 15. When a plurality of cache hosts 13 are provided, the cache hosts 13 can also be connected to one another in different ways (for example, in series, in parallel, or in a series-parallel combination). In practice, the user agent device 11 and the cache host 13 may be connected directly, or indirectly through a switch or other network device.

Referring again to FIG. 1A, the case where the network cache system 1 is provided with a single cache host 13 is described first as an example. The cache host 13 supports various standard, commonly used network protocols. For example, with the IEEE 802.1AB Link Layer Discovery Protocol (LLDP), the cache host 13 organizes its own information into TLVs (Type/Length/Value), encapsulates them in a Link Layer Discovery Protocol Data Unit (LLDPDU), and sends the LLDPDU to the devices directly connected to it, while storing the LLDPDUs received from other devices in a standard Management Information Base (MIB). Through LLDP, the cache host 13 can thus keep and manage information about itself and its directly connected devices, automatically detect the network architecture for different cabling combinations, and configure itself automatically. It also supports bridge mode, the IEEE 802.1D Spanning Tree Protocol (STP) for detecting and automatically blocking loops, and the IEEE 802.3ad Link Aggregation Control Protocol (LACP) for aggregating bandwidth. In addition, the cache host 13 supports the IEEE 802.1Q Virtual LAN (VLAN) protocol, so it can be inserted into environments with VLAN trunks, and cache hosts can share cached content with one another for fast access. In hardware, the cache host 13 also provides a hardware-bypass function so that service is not interrupted. The protocols listed above are only one embodiment; in other embodiments of the present invention, the cache host 13 may support more or fewer network protocols according to different usage requirements.
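For illustration only (not part of the patent text), the following minimal Python sketch builds the kind of LLDPDU advertisement described above — Chassis ID, Port ID, TTL, and End TLVs wrapped in an Ethernet frame addressed to the standard LLDP multicast MAC. The MAC address, interface name, and TTL value are arbitrary placeholders; a real cache host would normally rely on its operating system's LLDP agent rather than hand-built frames.

```python
import struct

LLDP_MULTICAST = bytes.fromhex("0180c200000e")  # standard LLDP destination MAC
LLDP_ETHERTYPE = 0x88CC

def tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode one LLDP TLV: 7-bit type + 9-bit length header, then the value."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def build_lldpdu(chassis_mac: bytes, port_name: str, ttl: int) -> bytes:
    """Assemble the mandatory TLVs of an LLDPDU."""
    return b"".join([
        tlv(1, b"\x04" + chassis_mac),          # Chassis ID, subtype 4 = MAC address
        tlv(2, b"\x05" + port_name.encode()),   # Port ID, subtype 5 = interface name
        tlv(3, struct.pack("!H", ttl)),         # Time To Live in seconds
        tlv(0, b""),                            # End of LLDPDU
    ])

def build_frame(src_mac: bytes, port_name: str, ttl: int = 120) -> bytes:
    """Wrap the LLDPDU in an Ethernet frame sent to the LLDP multicast address."""
    eth_header = LLDP_MULTICAST + src_mac + struct.pack("!H", LLDP_ETHERTYPE)
    return eth_header + build_lldpdu(src_mac, port_name, ttl)

# Example: the frame a cache host might emit on port "eth0" (placeholder MAC).
frame = build_frame(bytes.fromhex("02aabbccdd01"), "eth0")
print(frame.hex())
```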

Referring again to FIG. 1A, when the cache host 13 is started, it executes a cache-architecture detection procedure. In this procedure, the cache host 13 sends probe packets from all of its ports to the adjacent user agent device 11 and server 15, and then, based on the reply packets received on each port, determines which ports connect it to the user agent device 11 and to the server 15, thereby learning whether the network links between itself and the user agent device 11 and server 15 are of the one-in-one-out type or the multiple-in-multiple-out type. "One-in-one-out" means that there is only one network link between the user agent device 11 and the cache host 13 and only one network link between the cache host 13 and the server 15; "multiple-in-multiple-out" means that there are several network links between the user agent device 11 and the cache host 13 and several network links between the cache host 13 and the server 15. When the cabling between the user agent device 11, the cache host 13, and the server 15 is of the one-in-one-out type, as shown in FIG. 1A, the user agent device 11, the cache host 13, and the server 15 operate in bridge mode. It should be noted that "starting" the cache host 13 includes rebooting it, as well as restoring the connection after a link of the cache host 13 has been interrupted or has failed.
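The classification step can be pictured with a small sketch. Assuming the probe/reply exchange has already produced a map from local ports to the neighbours that answered on them (both the data structure and the neighbour names below are hypothetical), deciding between the two cabling types reduces to counting links per neighbour:

```python
from collections import defaultdict

def classify_topology(replies):
    """
    Classify the cabling detected by the probe step.

    `replies` maps a local port name to the identity of the neighbour that
    answered the probe on that port, e.g. {"eth0": "UA-11", "eth1": "server-15"}.
    """
    links_per_neighbor = defaultdict(list)
    for port, neighbor in replies.items():
        links_per_neighbor[neighbor].append(port)

    upstream = links_per_neighbor.get("UA-11", [])
    downstream = links_per_neighbor.get("server-15", [])

    if len(upstream) == 1 and len(downstream) == 1:
        return "one-in-one-out"        # a single bridge joins the two links
    return "multiple-in-multiple-out"  # candidates for LACP or multiple bridges

# Example corresponding to FIG. 1B: three links toward the UA, two toward the server.
print(classify_topology({
    "eth0": "UA-11", "eth1": "UA-11", "eth2": "UA-11",
    "eth3": "server-15", "eth4": "server-15",
}))
```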

Referring to FIG. 1B, when the cabling between the user agent device 11, the cache host 13, and the server 15 is of the three-in-two-out type (that is, there are three network links between the user agent device 11 and the cache host 13 and two network links between the cache host 13 and the server 15), the cache host determines from the content of the reply packets whether the equipment on the links between itself and the adjacent user agent device 11 and server 15 supports the Link Aggregation Control Protocol (LACP). If so, the three network links between the user agent device 11 and the cache host 13 are aggregated (as indicated by the dashed box) to obtain three times the bandwidth, and the two network links between the cache host 13 and the server 15 are likewise aggregated (as indicated by the dashed box) to obtain twice the bandwidth.
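A sketch of the decision this paragraph describes, under the assumption that each link's nominal speed is known and that LACP support has already been learned from the reply packets (the function and field names are illustrative, not taken from the patent):

```python
def plan_link_group(neighbor, ports, peer_supports_lacp, link_speed_mbps=1000):
    """Decide how a set of parallel links toward one neighbour should be used."""
    if peer_supports_lacp:
        return {
            "mode": "lacp-bond",
            "neighbor": neighbor,
            "members": ports,
            "effective_bandwidth_mbps": link_speed_mbps * len(ports),
        }
    return {"mode": "multiple-bridge", "neighbor": neighbor, "members": ports}

# FIG. 1B: three links to the UA and two to the server, both sides speaking LACP.
print(plan_link_group("UA-11", ["eth0", "eth1", "eth2"], peer_supports_lacp=True))
print(plan_link_group("server-15", ["eth3", "eth4"], peer_supports_lacp=True))
```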

Referring to FIG. 1C, if the cabling between the user agent device 11, the cache host 13, and the server 15 is of the three-in-two-out type but LACP is not supported, the cache host 13 adopts a multiple-bridge architecture. That is, the first network link 12A between the user agent device 11 and the cache host 13 and the first network link 14A between the cache host 13 and the server 15 form a first bridge group; the second network link 12B between the user agent device 11 and the cache host 13 and the second network link 14B between the cache host 13 and the server 15 form a second bridge group; and the third network link 12C between the user agent device 11 and the cache host 13 serves as a standby link. Each bridge group can support the hardware-bypass function, which increases the fault tolerance of the network cache system 1. Once the cache host 13 has completed the detection procedure described above, it has determined the current network topology.
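The multiple-bridge fallback can be sketched as a simple pairing rule: the i-th link toward the user agent is bridged with the i-th link toward the server, and any unpaired link is kept as the standby. The link names follow the reference numerals of FIG. 1C; everything else is an assumption:

```python
from itertools import zip_longest

def pair_into_bridges(upstream_links, downstream_links):
    """Pair the i-th upstream link with the i-th downstream link into a bridge
    group; links left without a partner become standby links."""
    bridges, standby = [], []
    for up, down in zip_longest(upstream_links, downstream_links):
        if up and down:
            bridges.append({"bridge": len(bridges) + 1, "members": (up, down)})
        else:
            standby.append(up or down)
    return bridges, standby

# FIG. 1C: links 12A-12C toward the UA, 14A-14B toward the server, no LACP.
bridges, standby = pair_into_bridges(["12A", "12B", "12C"], ["14A", "14B"])
print(bridges)   # two bridge groups: (12A, 14A) and (12B, 14B)
print(standby)   # ["12C"] kept as the backup link
```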

In other embodiments of the present invention, the number of cache hosts may exceed one. As shown in FIGS. 2A to 2C, the system includes two cache hosts 13 and 16 connected in series. After the cache hosts 13 and 16 boot and execute the detection procedure, they send probe packets not only to the adjacent user agent device 11 or server 15 but also to each other. Likewise, depending on whether LACP is supported between them, the cache hosts 13 and 16 aggregate their mutual bandwidth when LACP is supported (as shown in FIG. 2B, where the dashed boxes indicate that the bandwidth of those links is aggregated); when LACP is not supported, the cache hosts 13 and 16 adopt the multiple-bridge architecture between themselves. As shown in FIG. 2C, the first network links 17A, 18A, and 19A between the user agent device 11, the cache hosts 13 and 16, and the server 15 form a first bridge group; the second network links 17B, 18B, and 19B form a second bridge group; and the remaining network links 17C and 18C serve as standby links.

When the cache hosts 13 and 16 each have network links to both the user agent device 11 and the server 15, and the cache hosts 13 and 16 are also linked to each other, forming a complex cabling arrangement as shown in FIG. 3A, the cache hosts 13 and 16 preferentially operate in parallel so that they can back each other up, using either an Active/Active or an Active/Standby configuration to ensure network stability and make full use of the available bandwidth. After executing the detection procedure, the cache hosts 13 and 16 form the network topology shown in FIG. 3B, in which four bridge groups are formed between the user agent device 11, the cache hosts 13 and 16, and the server 15: the first bridge group consists of the network links 21A and 22A between the user agent device 11, the cache host 13, and the server 15; the second bridge group consists of the network links 21B and 22B; the third bridge group consists of the network links 23A and 24A between the user agent device 11, the cache host 16, and the server 15; and the fourth bridge group consists of the network links 23B and 24B. The remaining network links 21C, 25A, 25B, and 25C serve as standby links and do not carry data (the X symbols in FIG. 3B). When the network between the cache host 13 and the server 15 is unstable or down (the X symbol in FIG. 3C), the data between the cache host 13 and the server 15 is instead transmitted through the cache host 16.
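A rough sketch of the failover behaviour of FIG. 3B/3C, assuming a hypothetical link monitor reports whether the direct path from cache host 13 to the server 15 is up and whether the detour through cache host 16 is available (the function name and return strings are illustrative only):

```python
def pick_path(direct_link_up: bool, peer_path_available: bool) -> str:
    """Choose how cache host 13 reaches the server 15 in the FIG. 3B/3C topology."""
    if direct_link_up:
        return "direct bridge: cache host 13 -> server 15"
    if peer_path_available:
        return "detour: cache host 13 -> cache host 16 -> server 15"
    return "no path available: fall back to hardware bypass"

print(pick_path(direct_link_up=True,  peer_path_available=True))   # normal operation, FIG. 3B
print(pick_path(direct_link_up=False, peer_path_available=True))   # failover case, FIG. 3C
```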

It should be noted in particular that when the network cache system 1 has a plurality of cache hosts 13 and 16, the cache hosts 13 and 16 also execute a load-balancing procedure, which adjusts the amount of data held by each cache host 13, 16 according to its capabilities (for example, the size of its cache space, the computation speed of its central processing unit (CPU), its memory capacity, or its current traffic load) so as to spread the network traffic between the cache hosts 13 and 16. The data-caching behaviour of the network cache system 1 of the present invention is described with reference to FIG. 4. Initially, the user agent device 11 exchanges data directly with the server 15; for example, when the user agent device 11 sends a first request packet to the server 15 (a1 in FIG. 4), the server returns a first response packet to the user agent device 11 (a2 in FIG. 4). After two cache hosts 13 and 16 are connected in series between the user agent device 11 and the server 15, the cache hosts 13 and 16 execute the detection procedure (that is, the cache hosts 13 and 16 send probe packets from all of their ports to the adjacent user agent device 11, server 15, or the other cache host 16 or 13), determine the current network topology from the reply packets received on each port (b1 and b2 in FIG. 4), and/or execute the load-balancing procedure, after which they link up with each other and begin providing service (b3 in FIG. 4).
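The load-balancing procedure is not specified in detail; the sketch below merely shows one plausible way to turn the factors named above (cache size, CPU speed, memory, current load) into traffic-sharing weights. The field names and the scoring formula are assumptions, not the patent's method:

```python
def capacity_weights(hosts):
    """Derive a traffic-sharing weight per cache host from its capability figures."""
    scores = {}
    for name, h in hosts.items():
        raw = h["cache_gb"] * h["cpu_ghz"] * h["ram_gb"]
        scores[name] = raw * (1.0 - h["current_load"])  # penalise already-busy hosts
    total = sum(scores.values())
    return {name: round(score / total, 2) for name, score in scores.items()}

print(capacity_weights({
    "cache-13": {"cache_gb": 500, "cpu_ghz": 2.4, "ram_gb": 16, "current_load": 0.30},
    "cache-16": {"cache_gb": 250, "cpu_ghz": 2.4, "ram_gb": 8,  "current_load": 0.10},
}))
```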

Referring again to FIG. 4, when the user agent device 11 sends a second request packet that is transparently directed to the cache host 13 (c1 in FIG. 4), the cache host 13 inspects the content it has stored. When the cache host 13 finds that it does not hold the content required by the second request packet (c2 in FIG. 4), it forwards the second request packet to the cache host 16 (c3 in FIG. 4). When the cache host 16 does not hold the content either, it forwards the second request packet to the server 15 (c4 in FIG. 4). After the cache host 16 receives the second response packet returned by the server 15 (c5 in FIG. 4), it stores the content carried in the second response packet and passes the packet on to the cache host 13 (c6 in FIG. 4). After the cache host 13 receives the second response packet, it likewise stores the content and forwards the packet to the user agent device 11 (c7 in FIG. 4). Later, when the user agent device 11 sends a third request packet to the cache host 13 (d1 in FIG. 4) and the cache host 13 finds that it already holds the content required by the third request packet (d2 in FIG. 4), the cache host 13 directly returns a third response packet to the user agent device 11 (d3 in FIG. 4). Thus, with the network cache system 1 of the present invention, once the user has connected the required number of cache hosts 13 and 16 between the user agent device 11 and the server 15, the cache hosts 13 and 16 can detect the current network topology by themselves, sparing the user from configuring each cache host 13, 16 and greatly improving convenience of use; and every time the user restarts a cache host 13 or 16, the cache host actively re-detects the current network topology, making it easy for the user to expand the network cache system 1.
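The request flow of FIG. 4 (c1-c7 and d1-d3) can be modelled as a chain of nodes that each check their own store, forward misses upstream, and cache the response on the way back. This is a toy in-memory model under assumed names, not the actual implementation:

```python
class CacheNode:
    """One hop in the FIG. 4 chain: UA -> cache 13 -> cache 16 -> server 15."""

    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # next hop toward the server; None = origin server
        self.store = {}

    def handle(self, url):
        if url in self.store:                     # d1-d3: answered from the local store
            return self.store[url], f"hit at {self.name}"
        if self.upstream is None:                 # the origin server produces the content
            return f"<content of {url}>", f"served by {self.name}"
        body, via = self.upstream.handle(url)     # c3/c4: forward the miss upstream
        self.store[url] = body                    # c5/c6: cache the response on the way back
        return body, via

server = CacheNode("server-15")
cache16 = CacheNode("cache-16", upstream=server)
cache13 = CacheNode("cache-13", upstream=cache16)

print(cache13.handle("/index.html"))  # first request: miss, fetched from the server
print(cache13.handle("/index.html"))  # second request: answered directly by cache-13
```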

The above is only a preferred embodiment of the present invention, and the scope of the rights claimed by the present invention is not limited thereto. Any equivalent changes that can readily be conceived by those skilled in the art based on the technical content disclosed herein shall fall within the scope of protection of the present invention.

1‧‧‧Network cache system

11‧‧‧User agent device

13, 16‧‧‧Cache host

15‧‧‧Server

Claims (4)

1. A method for automatically expanding a network cache system, the method being applied to a network cache system that includes a user agent device, at least one cache host, and a server, wherein the cache hosts are wired to the user agent device, to the server, or to one another, and the method causes each cache host, after it is started, to perform the following steps:
executing a cache-architecture detection procedure, in which the cache host sends probe packets from all of its ports to the adjacent user agent device, server, or other cache hosts;
determining, from the reply packets received on each port, which ports connect the cache host to the adjacent user agent device, server, or other cache hosts, so as to learn whether the network links between the cache host and the adjacent user agent device, server, or other cache hosts are of the one-in-one-out type or the multiple-in-multiple-out type; and
in the multiple-in-multiple-out case, determining from the content of the reply packets whether a link-aggregation (LACP) architecture is supported between the cache host and the adjacent user agent device, server, or other cache hosts, and if so, aggregating all the bandwidth between the cache host and the adjacent user agent device, server, or other cache hosts, and if not, adopting a multiple-bridge architecture between the cache host and the adjacent user agent device, server, or other cache hosts.
2. The method as claimed in claim 1, wherein, when the network cache system is provided with a plurality of cache hosts, each cache host further executes a load-balancing procedure to spread the network traffic among the cache hosts.
3. The method as claimed in claim 1, wherein, when the network cache system is provided with a plurality of cache hosts and a cache host determines that it has network links to the user agent device, to the server, and to the other cache hosts, the parallel connection mode is adopted preferentially.
4. The method as claimed in any one of claims 1 to 3, wherein the cache host supports the Link Layer Discovery Protocol, the Spanning Tree Protocol, and/or the Virtual LAN protocol.
TW103104547A 2014-02-12 2014-02-12 Automate the way to extend the network cache system TWI535249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103104547A TWI535249B (en) 2014-02-12 2014-02-12 Automate the way to extend the network cache system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103104547A TWI535249B (en) 2014-02-12 2014-02-12 Automate the way to extend the network cache system

Publications (2)

Publication Number Publication Date
TW201532412A TW201532412A (en) 2015-08-16
TWI535249B true TWI535249B (en) 2016-05-21

Family

ID=54343244

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103104547A TWI535249B (en) 2014-02-12 2014-02-12 Automate the way to extend the network cache system

Country Status (1)

Country Link
TW (1) TWI535249B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI592796B (en) 2016-09-19 2017-07-21 Univ Nat Central Packet-aware fault-tolerant method and system for virtual machine for cloud service, computer readable recording medium and computer program product

Also Published As

Publication number Publication date
TW201532412A (en) 2015-08-16
