TW201622381A - Date center network provisioning method and system thereof - Google Patents
- Publication number
- TW201622381A (application number TW103142668A)
- Authority
- TW
- Taiwan
- Prior art keywords
- data center
- center network
- switches
- transmission
- virtual machines
- Prior art date
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Description
The present invention discloses a data center network configuration method and system, and more particularly a network configuration method and system for a Software Defined Network (SDN).
In recent years, with the rapid growth of cloud computing, virtualization has become a popular research topic. Host virtualization converts a single physical host into multiple cooperating virtual machines (VMs), so that several hosts can perform computation in parallel and provide reliable quality of service. However, applying virtualization at cloud scale requires enormous amounts of computing power, memory, and storage. To address this, Stanford University developed the Software Defined Network (SDN) and defined the OpenFlow architecture. The original goal was to extend the programmability of campus network switching and to provide a corresponding virtual platform. In general, a software-defined network consists of a centralized controller and tens of thousands of switches; the switches are interconnected and provide transmission paths to all physical machines. This interconnection forms a topology, and the topology in turn constitutes a data center under the software-defined network.
Cloud networks that use a data center built on a software-defined network face several key problems. First, a physical machine in the data center connects to another physical machine through several switches; if the hop distance of this connection is not optimized, the core switches at the top layer of the data center are prone to traffic congestion, which increases network latency. Second, if the number of switches used by the physical machines is not optimized, power is wasted. Third, when two physical machines exchange data, the connection is established through several switches; if one of those switches becomes congested, the transmission speed through it drops sharply. Without an optimized reroute mechanism, the throughput of the transfer also falls and connection quality degrades.
It is therefore very important to develop a data center network configuration method and system that finds a balanced, optimized solution to the above three transmission problems at the same time.
An embodiment of the present invention provides a data center network configuration method, which comprises dividing a plurality of virtual machines in the data center network into a plurality of groups according to the topology information of the data center network, allocating the plurality of virtual machines of each group to a plurality of servers in the data center network, and allocating the transmission paths between the servers according to the transmission traffic of each switch in the data center network.
Another embodiment of the present invention provides a data center network configuration system, which comprises a plurality of switches, a plurality of servers, a plurality of virtual machines, and a controller. The switches are interconnected according to topology information, the servers are connected to the switches, the virtual machines are connected to the servers, and the controller is connected to the switches, the servers, and the virtual machines. The controller divides the plurality of virtual machines in the data center network into a plurality of groups according to the topology information of the data center network, allocates the plurality of virtual machines of each group to the plurality of servers in the data center network, and allocates the transmission paths between the servers according to the transmission traffic of each switch in the data center network.
100‧‧‧Data center
1 to 12‧‧‧Switches
13‧‧‧Server
14, 16, 17‧‧‧Virtual machines
15‧‧‧Controller
S201 to S203‧‧‧Steps
G1‧‧‧First group
G2‧‧‧Second group
FIG. 1 is an architectural diagram of an embodiment of a data center in a software-defined network according to the present invention.
FIG. 2 is a flowchart of the network configuration method of the embodiment of FIG. 1.
FIG. 3 is a schematic diagram of the hop distance reduction method in the embodiment of FIG. 1.
FIG. 4 is a schematic diagram of the rerouting method in the embodiment of FIG. 1.
FIG. 1 is an architectural diagram of an embodiment of a data center in a software-defined network according to the present invention. As shown in FIG. 1, the data center 100 comprises a plurality of core layer switches 10, a plurality of aggregation layer switches 11, a plurality of edge layer switches 12, a plurality of servers 13, a plurality of virtual machines 14, and a controller 15. In this embodiment, the data center 100 is built on fat-tree topology information and has a three-layer switch structure: the core layer switches 10 are the topmost switches of the topology, the aggregation layer switches 11 are the middle-layer switches, and the edge layer switches 12 are the lowest-layer switches. The three layers of switches are interconnected as shown in FIG. 1. The servers 13 are physical machines in this embodiment; a pair of servers 13 can transmit data once a transmission path has been established through the switches. Each virtual machine 14 is a non-physical machine used by a user; the virtual machines 14 are attached to the servers 13 and use the hardware resources and bandwidth of the servers 13. For example, in FIG. 1, three virtual machines 14 run on the leftmost server 13, and these three virtual machines 14 can transmit data to other virtual machines through the leftmost server 13. The controller 15 is connected to all of the switches 10 to 12, the servers 13, and the virtual machines 14 in order to manage the allocation and the transmission traffic of the entire data center 100. However, a conventional data center does not optimize the hop distance of the paths connecting pairs of servers 13, does not optimize the number of virtual machines 14 placed on each server 13, and does not provide an optimized reroute mechanism that avoids a sharp drop in transmission speed when a switch 10, 11, or 12 becomes congested. As a result, data transmission in a conventional data center is less reliable and consumes more energy. To improve transmission bandwidth and reliability while reducing energy consumption, the present invention proposes a configuration method for the network of the data center 100; its steps are described in detail below.
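To make the arrangement just described easier to follow, the sketch below models a toy three-layer fat-tree-style topology as plain data structures. This is only an illustrative assumption: the patent does not prescribe any data model, and the switch, server, and VM names used here are hypothetical.

```python
# A toy representation of the three-layer topology of FIG. 1 (hypothetical data model;
# the patent does not specify how the controller stores topology information).

# Switch identifiers grouped by layer.
core_switches = ["c0", "c1"]
aggregation_switches = ["a0", "a1", "a2", "a3"]
edge_switches = ["e0", "e1", "e2", "e3"]

# Undirected links between switches, and between edge switches and servers.
links = {
    ("c0", "a0"), ("c0", "a2"), ("c1", "a1"), ("c1", "a3"),
    ("a0", "e0"), ("a1", "e0"), ("a0", "e1"), ("a1", "e1"),
    ("a2", "e2"), ("a3", "e2"), ("a2", "e3"), ("a3", "e3"),
    ("e0", "s0"), ("e1", "s1"), ("e2", "s2"), ("e3", "s3"),
}

# Virtual machines attached to each server (e.g. three VMs on the leftmost server s0).
vms_on_server = {"s0": ["vm0", "vm1", "vm2"], "s1": ["vm3"], "s2": ["vm4"], "s3": ["vm5"]}

def neighbors(node):
    """Return the nodes directly linked to `node`."""
    return {b for a, b in links if a == node} | {a for a, b in links if b == node}

print(sorted(neighbors("e0")))   # ['a0', 'a1', 's0']
```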
FIG. 2 is a flowchart of the configuration method for the network of the data center 100 according to the present invention. In FIG. 2, the configuration method comprises three steps, S201 to S203, as follows:
S201: divide the plurality of virtual machines 14 in the data center network into a plurality of groups according to the topology information of the data center network 100;
S202: allocate the plurality of virtual machines 14 of each group to a plurality of servers 13 in the data center network; and
S203: allocate the transmission paths between the servers 13 according to the transmission traffic of each switch 10 to 12 in the network of the data center 100.
In the flowchart of the configuration method for the network of the data center 100, step S201 regroups the plurality of virtual machines 14 in the data center network; its purpose is hop distance reduction. Step S202 reallocates the hardware resources of the servers 13 to the plurality of virtual machines 14; its purpose is energy saving. Step S203 plans the rerouting of the transmission paths between the servers 13; its purpose is load balancing of the traffic between the servers 13. The configuration method of the present invention can therefore be viewed as an algorithm that jointly optimizes hop distance, energy consumption, and traffic load balancing. The process and principle of steps S201 to S203 are described in detail below.
Please refer to FIG. 3, which is a schematic diagram of the data center 100 of FIG. 1 performing step S201 to reduce the hop distance. In this step, the virtual machines 14 of the data center 100 are divided into two groups (the present invention is not limited thereto; in other embodiments they may be divided into N groups, where N is a positive integer greater than 2). The first group G1 contains a plurality of virtual machines 16, and the second group G2 contains a plurality of virtual machines 17. The data transmission correlation within each group is high, meaning that a virtual machine in one group transmits data to another virtual machine in the same group with high probability; the data transmission correlation between the two groups is low, meaning that a virtual machine 16 in the first group G1 transmits data to a virtual machine 17 in the second group G2 with low probability. How these two groups G1 and G2 are derived from the original data center 100 of FIG. 1 is described below. The data center 100 of FIG. 3 has a plurality of servers 13; assume their indices run from 1 to M, where M is a positive integer. In the data center 100, a pair of servers 13 can transmit data once a connection path has been established through several switches, so the connection path of each pair of servers 13 corresponds to a number of switches used, called the hop number. When the i-th server 13 transmits to the j-th server 13 via several switches, the hop number is known to the controller 15 and is denoted h_ij here. The hop numbers of all transmissions between the servers 13 of the data center 100 can therefore be expressed as a hop-count matrix

H = [h_ij], i, j = 1, ..., M,

where H is the hop-count matrix, H is a symmetric matrix (H = H^T), and every element of H is a real number. The diagonal elements of H are 0, because a server does not pass through any switch when transmitting data to itself. After the hop-count matrix H has been built, a corresponding traffic-load matrix is constructed from it, defined as follows. When the i-th server 13 transmits to the j-th server 13 via several switches, the traffic load is known to the controller 15 and is denoted l_ij here. The traffic loads between all the servers 13 of the data center 100 can therefore be expressed as a traffic-load matrix

L = [l_ij], i, j = 1, ..., M,

where L is the traffic-load matrix, L is a symmetric matrix (L = L^T), and every element of L is a real number. The diagonal elements of L are likewise 0, because a server does not transmit through any switch when sending data to itself. Based on the hop-count matrix H and the traffic-load matrix L, the servers and the virtual machines running on them can be grouped according to several transmission metrics. For example, the sum of the values in the i-th row of the hop-count matrix H, Σ_j h_ij, is the total number of switches used by all paths leaving the i-th server, and the sum of the values in the i-th row of the traffic-load matrix L, Σ_j l_ij, is the total traffic load of all paths leaving the i-th server. Based on these characteristics of the hop-count matrix H and the traffic-load matrix L, the servers and virtual machines can be grouped as shown in FIG. 3. In FIG. 3, because virtual machines in the same group transmit data to other virtual machines in the same group with high probability and the data transmission correlation between the two groups is low, the utilization of the core layer switches 10 that cross-group transmissions must pass through is greatly reduced, so the probability of network congestion is lowered. Moreover, since the virtual machines transmit data within their own group with high probability, the virtual machines 16 in the first group G1 and the virtual machines 17 in the second group G2 can both transmit data over shorter paths. Step S201 therefore achieves hop distance reduction.
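To make the construction of H and L and the grouping step concrete, the following sketch builds small hop-count and traffic-load matrices and splits four servers into two groups. The patent only states that the grouping is based on characteristics of H and L (such as the row sums above); the specific greedy rule, the sample numbers, and all names used here are illustrative assumptions, not the patent's prescribed algorithm.

```python
import numpy as np

# Hypothetical per-pair hop counts and traffic loads for M = 4 servers.
# Both matrices are symmetric with zero diagonals, as required in the text.
H = np.array([[0, 2, 4, 4],
              [2, 0, 4, 4],
              [4, 4, 0, 2],
              [4, 4, 2, 0]], dtype=float)   # hop-count matrix
L = np.array([[0, 9, 1, 1],
              [9, 0, 1, 1],
              [1, 1, 0, 8],
              [1, 1, 8, 0]], dtype=float)   # traffic-load matrix

assert np.allclose(H, H.T) and np.allclose(L, L.T)

# Row sums: total switches used / total traffic carried by all paths leaving server i.
hop_totals = H.sum(axis=1)
load_totals = L.sum(axis=1)

def greedy_bipartition(load):
    """Illustrative grouping rule (assumption): repeatedly merge the pair of groups
    exchanging the most traffic, until two groups remain."""
    groups = [{i} for i in range(len(load))]
    while len(groups) > 2:
        a, b = max(
            ((x, y) for x in range(len(groups)) for y in range(x + 1, len(groups))),
            key=lambda xy: sum(load[i, j] for i in groups[xy[0]] for j in groups[xy[1]]),
        )
        groups[a] |= groups[b]
        del groups[b]
    return groups

G1, G2 = greedy_bipartition(L)
print("group G1 servers:", sorted(G1))          # {0, 1}: the pair exchanging 9 units
print("group G2 servers:", sorted(G2))          # {2, 3}: the pair exchanging 8 units
print("hop totals per server:", hop_totals)
print("load totals per server:", load_totals)
```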
After step S201 is performed, the virtual machines and the servers are divided into two groups G1 and G2, and the controller 15 also knows the number of virtual machines in each group. Next, so that all physical servers in the data center 100 can operate with minimum energy consumption, the controller 15 performs step S202 and allocates the plurality of virtual machines of each group to the plurality of servers in the data center 100 such that the number of heavily loaded servers is minimized; the method is detailed below. First, the controller 15 in the data center 100 obtains the system resource requirements of all virtual machines in each group. For example, the controller 15 obtains the system resource requirements of all virtual machines 16 in the first group G1 and of all virtual machines 17 in the second group G2. The system resource requirements described here may be processor load requirements, memory capacity requirements, and/or bandwidth usage requirements; however, the present invention is not limited thereto, and the system resource requirements may also be other resource metrics. The controller 15 also obtains the system resource upper limit supported by each server 13. The system resource upper limit described here may be a processor load upper limit, a memory capacity upper limit, and/or a bandwidth usage upper limit; however, the present invention is not limited thereto, and the system resource upper limit may also be another resource upper limit. Next, the controller 15 adds up the system resource requirements of each virtual machine and sorts the totals in descending order. For example, the i-th virtual machine 16 in the first group G1 has a processor load requirement CR_i, a memory capacity requirement MR_i, and a bandwidth usage requirement BR_i, so its total resource requirement is TR_i = CR_i + MR_i + BR_i. Assuming there are MG virtual machines in the first group G1, the controller 15 sorts the total resource requirements {TR_1, ..., TR_MG} in descending order and begins assigning the virtual machines with the highest total resource requirements to the servers 13, following the procedure of a knapsack problem algorithm to allocate the virtual machines of each group to the servers. For example, the first activated server 13 has a processor load upper limit CU_1, a memory capacity upper limit MU_1, and a bandwidth usage upper limit BU_1. Therefore, as the virtual machines are assigned to the servers one by one, whether a server 13 has exceeded its capacity must be monitored at all times. For example, if the first activated server 13 has already been assigned two virtual machines and the relations CU_1 > CR_1 + CR_2, MU_1 > MR_1 + MR_2, and BU_1 > BR_1 + BR_2 hold, none of its resources is overloaded. If assigning a third virtual machine to this server 13 would cause one or more of CU_1 < CR_1 + CR_2 + CR_3, MU_1 < MR_1 + MR_2 + MR_3, or BU_1 < BR_1 + BR_2 + BR_3 to hold, one or more of its resources would be overloaded; in that case, the controller 15 activates another server 13 to host the third virtual machine. When computing the system resource requirements and the system resource upper limits described above, these parameters are quantized and normalized so that the metrics are comparable. In step S202, the controller 15 maximizes the number of virtual machines assigned to a single server 13; in other words, when the system load is not full, the number of sleeping or unused servers is also maximized. Step S202 can therefore satisfy the resource requirements of all virtual machines of the data center 100 with the minimum number of servers 13, thereby saving energy.
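The placement rule just described behaves like a first-fit-decreasing bin-packing heuristic: sort the virtual machines by normalized total demand TR_i and activate a new server only when placing the next virtual machine would overload every active server in CPU, memory, or bandwidth. The sketch below illustrates one such reading; the demands, capacities, and names are hypothetical, and the patent itself refers to the procedure only as following a knapsack problem algorithm.

```python
# Illustrative first-fit-decreasing placement for step S202 (a sketch under the
# assumption that demands and capacities are already normalized to comparable units).

vms = {                       # (CPU, memory, bandwidth) demand per VM, hypothetical values
    "vm0": (0.40, 0.30, 0.20),
    "vm1": (0.35, 0.25, 0.30),
    "vm2": (0.30, 0.20, 0.10),
    "vm3": (0.10, 0.10, 0.05),
}
server_capacity = (1.0, 1.0, 1.0)   # (CU, MU, BU) per server, hypothetical

def fits(used, demand, capacity):
    """True if adding `demand` keeps every resource within `capacity`."""
    return all(u + d <= c for u, d, c in zip(used, demand, capacity))

def place(vms, capacity):
    """Assign VMs (sorted by descending total demand TR_i) onto as few servers as possible."""
    order = sorted(vms, key=lambda v: sum(vms[v]), reverse=True)
    servers = []                     # list of (used_resources, [hosted VM names])
    for vm in order:
        demand = vms[vm]
        for i, (used, hosted) in enumerate(servers):
            if fits(used, demand, capacity):
                servers[i] = (tuple(u + d for u, d in zip(used, demand)), hosted + [vm])
                break
        else:                        # every active server would be overloaded: activate one more
            servers.append((demand, [vm]))
    return servers

for used, hosted in place(vms, server_capacity):
    print(hosted, "-> used", tuple(round(u, 2) for u in used))
```

With these sample numbers, vm0, vm1, and vm3 share the first server and only vm2 forces a second server to be activated, which is the energy-saving behavior step S202 aims for.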
After steps S201 and S202 have been performed, the topology of the data center 100 is divided into two groups, and in each group the controller 15 activates the minimum number of servers 13 for the corresponding virtual machines. Once the physical servers 13 and the number of virtual machines have been determined in step S202, the path from each server 13 through several switches to every other server 13 has also been planned. At this point, in step S203, the controller 15 considers the reroute problem. First, the controller 15 detects or monitors the transmission traffic of every switch in the network of the data center 100 and computes, for each pair of servers 13, the sum of the transmission traffic of the switches along their transmission path. If the sum of the transmission traffic of the switches along a transmission path is greater than a threshold, the controller 15 looks for a disjointed path and reroutes the transmission path onto it, where the sum of the transmission traffic of the switches along this new path is not greater than the threshold.
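As one concrete reading of this monitoring step, the sketch below sums the measured traffic of the switches along each planned server-to-server path and flags the paths whose sum exceeds the threshold. The per-switch traffic values, the threshold, and the names are hypothetical.

```python
# Hypothetical congestion check for step S203: sum the traffic of the switches on
# each planned path and compare against a threshold chosen by the operator.

switch_traffic = {0: 70, 1: 10, 2: 60, 3: 15, 4: 55, 5: 20, 6: 30, 7: 25, 8: 35, 9: 40}

# Currently planned paths between server pairs, expressed as the switches they traverse.
planned_paths = {
    ("srvA", "srvB"): [6, 2, 0, 4, 8],
    ("srvC", "srvD"): [7, 3, 1, 5, 9],
}

THRESHOLD = 200   # assumed units of traffic

def path_load(path, traffic):
    """Total transmission traffic of all switches along `path`."""
    return sum(traffic[s] for s in path)

for pair, path in planned_paths.items():
    load = path_load(path, switch_traffic)
    status = "over-utilized, reroute" if load > THRESHOLD else "ok"
    print(pair, path, "load =", load, "->", status)
```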
FIG. 4 is a schematic diagram of the rerouting step according to an embodiment of the present invention. To simplify the description, FIG. 4 uses only 10 switches to describe the rerouting process. In FIG. 4, the 10 switches are switch 0 to switch 9, and each switch has its own transmission traffic. Consider the transmission path from switch 6 to switch 8 in FIG. 4. Initially, the predetermined path planned by the controller 15 is path P1, expressed as follows:
P1: switch 6 → switch 2 → switch 0 → switch 4 → switch 8
However, after summing all the transmission traffic along path P1, the controller 15 finds that it exceeds the threshold, and therefore judges path P1 to be an over-utilized path on which transmission congestion is likely to occur with high probability. The controller 15 therefore reroutes path P1 and corrects the transmission path from switch 6 to switch 8 to path P2, expressed as follows:
P2: switch 6 → switch 3 → switch 1 → switch 5 → switch 8
After rerouting, the probability of transmission congestion during data transmission from switch 6 to switch 8 is effectively reduced. In step S203, since the controller 15 reroutes paths on which transmission congestion is likely to occur, the quality of overall data transmission is further improved.
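The P1 → P2 correction can be reproduced with a small search over a ten-switch graph in the spirit of FIG. 4: enumerate the loop-free paths from switch 6 to switch 8 that share no intermediate switch with the congested path, and keep one whose summed traffic stays within the threshold. The link set and traffic values below are hypothetical (the figure's actual numbers are not given in the text), and this particular search is only one possible way of finding a disjointed path.

```python
# Hypothetical 10-switch graph loosely modeled on FIG. 4 (links are assumptions).
links = {
    (6, 2), (2, 0), (0, 4), (4, 8),      # the switches along the original path P1
    (6, 3), (3, 1), (1, 5), (5, 8),      # an alternative route through switches 3, 1, 5
    (7, 2), (7, 3), (9, 4), (9, 5),
}
adj = {}
for a, b in links:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

switch_traffic = {0: 70, 1: 10, 2: 60, 3: 15, 4: 55, 5: 20, 6: 30, 7: 25, 8: 35, 9: 40}
THRESHOLD = 200

def all_simple_paths(src, dst, visited=None):
    """Depth-first enumeration of loop-free paths from src to dst."""
    visited = (visited or []) + [src]
    if src == dst:
        yield visited
        return
    for nxt in adj.get(src, ()):
        if nxt not in visited:
            yield from all_simple_paths(nxt, dst, visited)

def reroute(src, dst, congested_path):
    """Pick a path disjoint from `congested_path` (except at its endpoints)
    whose summed switch traffic does not exceed the threshold."""
    blocked = set(congested_path) - {src, dst}
    candidates = [
        p for p in all_simple_paths(src, dst)
        if not (set(p) & blocked) and sum(switch_traffic[s] for s in p) <= THRESHOLD
    ]
    return min(candidates, key=lambda p: sum(switch_traffic[s] for s in p)) if candidates else None

p1 = [6, 2, 0, 4, 8]
print("P1 load:", sum(switch_traffic[s] for s in p1))   # 250, over the threshold
print("rerouted path P2:", reroute(6, 8, p1))           # [6, 3, 1, 5, 8]
```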
In summary, the present invention describes a data center network configuration method that uses a hop-count matrix and a traffic-load matrix to group the topology of the data center, reducing the number of switches used between two servers and thus shortening the transmission paths. Using the system resource requirements of the virtual machines in each group and the system resource upper limits of the servers, the virtual machines are allocated to appropriate servers so that the virtual machines can be hosted by the minimum number of activated servers, thereby saving energy. Furthermore, the controller checks whether the path used by a pair of servers to transmit data is likely to suffer transmission congestion and, when necessary, reroutes around such paths, so that the quality of overall data transmission is further improved. With the network configuration method of the present invention, the data center can therefore simultaneously achieve short transmission distances, high energy savings, and high transmission quality.
The above are only preferred embodiments of the present invention; all equivalent changes and modifications made in accordance with the scope of the claims of the present invention shall fall within the scope of the present invention.
100‧‧‧Data center
10 to 12‧‧‧Switches
13‧‧‧Server
14‧‧‧Virtual machine
15‧‧‧Controller
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW103142668A TW201622381A (en) | 2014-12-08 | 2014-12-08 | Date center network provisioning method and system thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW103142668A TW201622381A (en) | 2014-12-08 | 2014-12-08 | Date center network provisioning method and system thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
TW201622381A true TW201622381A (en) | 2016-06-16 |
Family
ID=56755645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW103142668A TW201622381A (en) | 2014-12-08 | 2014-12-08 | Date center network provisioning method and system thereof |
Country Status (1)
Country | Link |
---|---|
TW (1) | TW201622381A (en) |