TWI695329B - Data sharding management system and method on container platform - Google Patents

Data sharding management system and method on container platform

Info

Publication number
TWI695329B
Authority
TW
Taiwan
Prior art keywords
data
container
cluster information
computing node
service
Prior art date
Application number
TW108111522A
Other languages
Chinese (zh)
Other versions
TW202038146A (en)
Inventor
施嘉峻
Original Assignee
中華電信股份有限公司
Priority date
Filing date
Publication date
Application filed by 中華電信股份有限公司
Priority to TW108111522A
Application granted
Publication of TWI695329B
Publication of TW202038146A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This invention provides a data sharding management system and method on a container platform. Building on the platform's existing container management mechanism, the system adds a data sharding management mechanism so that the container platform natively supports functions such as data partitioning and master-slave separation for large data applications, and stores the data dispersedly in the storage media that come with the host group where the container platform resides. The container platform therefore needs neither additional distributed storage media nor the data sharding mechanism of any individual distributed data management software.

Description

Data sharding management system and method built on a container platform

The present invention relates to a data sharding management system and method, and more particularly to a data sharding management system built on a container platform and a method for implementing data shard management.

With the popularization of container applications, container platforms have gradually entered the enterprise infrastructure stack. However, while conventional container platforms excel at deploying, scaling, and managing stateless applications, their support for stateful applications is still not as complete as it is for stateless ones.

Container platforms at the present stage must rely on external plug-in storage media to store persistent stateful data, but such external storage media are expensive to expand and can hardly carry large volumes of data. Moreover, large data applications usually split their data into shards and store them dispersedly to save cost.

For example, a general-purpose container platform such as Kubernetes provides no built-in storage media management; it must be paired with an external storage media management system before it can store persistent stateful data. Lacking such management support for storage media, conventional container platforms cannot manage large data applications and must rely on big data application software to provide distributed data management functionality.

To solve the above problems, this disclosure presents a data sharding management system on a container platform. Based on the platform's existing container management mechanism, the system adds a data sharding management mechanism so that the container platform can natively support data partitioning, master-slave separation, and similar functions of large data applications, and stores the data dispersedly in the storage media that come with the host group where the container platform resides, without building additional distributed storage media and without relying on distributed data management software.

First, the data sharding management system on a container platform of this disclosure includes: a coordinator for accessing cluster information; a plurality of computing nodes, on each of which containers are deployed to access the cluster information; a plurality of storages, each connected to at least one of the plurality of computing nodes; and a container orchestration module for receiving a request to set up a data service and accessing the cluster information, wherein the data service is divided into a plurality of data shards, and the module determines, according to the request and the cluster information, the storage used to hold each data shard and the computing node on which each container serving that shard runs. The container orchestration module updates the cluster information with the storage and computing-node decisions so that each computing node starts the corresponding number of containers according to the updated cluster information, and the coordinator further elects at least one leader container when the started containers register with it. The system also includes a load balancer for receiving users' data requests for the data service and accessing the cluster information, so as to direct each data request, according to the updated cluster information, to the agent component of a computing node, which in turn directs the request to a container on that node.

Second, the method for managing data shards on a container platform of this disclosure includes: receiving a request to set up a data service, wherein the data service is divided into a plurality of data shards; determining, according to the request and a piece of cluster information, the storage used to hold each data shard and the computing node on which each container serving that shard runs; updating the cluster information with the storage and computing-node decisions; starting the corresponding number of containers according to the updated cluster information and updating the access addresses of the successfully started containers into the cluster information; registering each container so that at least one leader container is elected among the containers serving the same data shard, and writing the election result into the cluster information, thereby completing the request to set up the data service; and receiving a user's data request for the completed data service, directing the request according to the updated cluster information to the agent component of a computing node, which further directs it to a container on that node.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings. Additional features and advantages of the invention will be set forth in part in the description that follows, will in part be apparent from the description, or may be learned by practice of the invention. The features and advantages of the invention are realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claimed scope.

100‧‧‧container orchestration module

200, 300‧‧‧cluster

210, 220, 310‧‧‧computing node

211, 221, 311‧‧‧control component

212, 222, 312‧‧‧agent component

213, 223, 313, 314‧‧‧container

230, 320‧‧‧storage

400‧‧‧coordinator

500‧‧‧load balancer

610~613, 615, 620~622, 630~633‧‧‧data block

614, 615, 623‧‧‧data subdirectory

S101~S105, S201~S204‧‧‧step

FIG. 1 is a system architecture diagram of the data sharding management system on a container platform according to an embodiment of this disclosure.

FIG. 2 is a data structure diagram of the cluster information managed by the data sharding management system on the container platform according to an embodiment of this disclosure.

FIG. 3 is a routing configuration diagram for the data sharding management system on the container platform to receive users' data requests according to an embodiment of this disclosure.

FIG. 4 is a flowchart of the control flow for managing data shards according to an embodiment of this disclosure.

FIG. 5 is a flowchart of the request flow for managing data shards according to an embodiment of this disclosure.

Disclosed herein are a data sharding management system and method on a container platform.

Referring to the drawings, FIG. 1 is a schematic diagram of the system architecture of the data sharding management system on a container platform. Computing nodes (e.g., hosts) 210, 220, and 310 can start their respective containers 213, 223, 313, and 314. Each of the storages (e.g., storage media) 230 and 320 is connected to at least one of the computing nodes 210, 220, and 310. The coordinator 400 accesses the cluster information. The container orchestration module (e.g., a container orchestration tool) 100 receives a request to set up a data service, wherein the data service is divided into a plurality of data shards, and determines, according to the request and the cluster information, the storage 230 or 320 used to hold each data shard and the computing node 210, 220, or 310 on which the container 213, 223, or 313 serving that shard runs. In addition, the load balancer 500 receives users' data requests for the data service.

In the system architecture shown, the function of the container orchestration module 100 is to manage the computing nodes (Node). In the embodiment disclosed here, module 100 manages a container platform composed of three computing nodes 210 (Node1), 220 (Node2), and 310 (Node3), where computing nodes 210 and 220 belong to cluster 200 (Cluster1) and share storage 230, while computing node 310 belongs to cluster 300 (Cluster2) and uses storage 320.

The container orchestration module 100 disclosed here has an additional function: it receives a data service provider's request to set up a data service, converts the request into cluster information (Clusters, as shown in FIG. 2), and stores this cluster information in the coordinator 400.

The computing nodes 210, 220, and 310 may contain multiple started containers. In the embodiment disclosed here, container 213 is started on computing node 210, container 223 on computing node 220, and containers 313 and 314 on computing node 310.

The computing nodes 210, 220, and 310 further include control components 211, 221, and 311 and agent components 212, 222, and 312, which exist as resident processes.

The control components 211, 221, and 311 manage the containers of their respective computing nodes and subscribe to the cluster information stored in the coordinator 400 (as shown in FIG. 2): they add or delete containers according to that cluster information, start a new container to replace any container that fails, and report the states of the containers they manage back to the coordinator 400 to update the cluster information.

The agent components 212, 222, and 312 receive user requests forwarded by the load balancer 500 and direct each request to a container on the computing node according to the appropriate routing table in their locally cached information. The agent components disclosed here also have an additional function: they subscribe to the cluster information in the coordinator 400 (as shown in FIG. 2) to obtain the routing table of each data shard in that cluster information.

The load balancer 500 accepts users' data read/write requests and decides how to route each request according to a piece of cluster information in its locally stored cache, which is obtained by subscribing to the cluster information in the coordinator 400 (as shown in FIG. 2).

The coordinator 400 maintains the system-wide cluster information of the data sharding management system (as shown in FIG. 2). This cluster information records the following metadata, including but not limited to: the access addresses of the control component and agent component of each computing node in each cluster, the containers started on each computing node, the data shards and service types (e.g., read/write) each container can serve, and the way the data is sliced. All of this metadata supports subscription, and subscribers are notified to fetch the new data whenever the metadata is updated. The metadata is provided by the container orchestration module 100 and the control components 211, 221, and 311, and is subscribed to by the control components 211, 221, and 311, the agent components 212, 222, and 312, and the load balancer 500.

Another function of the coordinator 400 is to provide leader election: among the containers started by the data sharding management system, leader election coordinates multiple potential leaders so that they reach a distributed consensus and one of them is chosen as the leader.

Referring to FIG. 2, which shows the cluster information (Clusters) maintained by the coordinator 400 in this embodiment: the cluster information is stored as a tree, and its initial state, before a data service provider's request to set up a data service is received, is described below.

As described above, the container orchestration module 100 manages cluster 200 (Cluster1) and cluster 300 (Cluster2). Cluster 200 contains computing node 210 (Node1), computing node 220 (Node2), and storage 230 (Storage); this state is represented in FIG. 2 by data block 611, data subdirectory 614, data block 612, data subdirectory 615, and data block 613 under data block 610. Cluster 300 contains computing node 310 (Node3) and storage 320; this state is represented by data block 621, data subdirectory 623, and data block 622 under data block 620 in FIG. 2.

The cluster information of the coordinator 400 also stores the access addresses of the control component and agent component of each computing node. For example, data block 611 stores the access addresses of Node1's control and agent components, data block 612 stores those of Node2, and data block 621 stores those of Node3.

The cluster information of the coordinator 400 further records the data services each computing node can provide and stores them in data subdirectories. For example, data subdirectory 614 records the data services Node1 can provide, data subdirectory 615 records those of Node2, and data subdirectory 623 records those of Node3. Note that in the initial state the data subdirectories 614, 615, and 623 are all empty, holding no data yet, and the data in subdirectories 614, 615, and 623 is subscribed to by the control components 211, 221, and 311 of the corresponding computing nodes, respectively.

The cluster information of the coordinator 400 also records the access address of the storage used by each cluster in the data sharding management system. For example, data block 613 stores the access address of storage 230 used by Cluster1, and data block 622 stores the access address of storage 320 used by Cluster2.
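By way of illustration, the initial tree of FIG. 2 can be modeled as a nested dictionary together with a simple subscribe/notify helper; this is a minimal sketch only, and the component addresses below are placeholders rather than values taken from the disclosure:

    # A minimal sketch of the coordinator's initial cluster-information tree
    # (FIG. 2). Keys mirror the blocks/subdirectories named in the text;
    # the control/agent addresses are illustrative placeholders.
    clusters = {
        "Cluster1": {
            "Node1": {"control": "192.168.1.1:2222", "agent": "192.168.1.1:3333",
                      "services": {}},           # data subdirectory 614, empty initially
            "Node2": {"control": "192.168.1.2:2222", "agent": "192.168.1.2:3333",
                      "services": {}},           # data subdirectory 615
            "Storage": "storage-230-address",    # data block 613
        },
        "Cluster2": {
            "Node3": {"control": "192.168.1.3:2222", "agent": "192.168.1.3:3333",
                      "services": {}},           # data subdirectory 623
            "Storage": "storage-320-address",    # data block 622
        },
    }

    # Subscription support: a subscriber registers a callback under a path
    # prefix and is notified whenever that part of the tree is updated.
    subscribers = {}   # path prefix -> list of callbacks

    def subscribe(prefix, callback):
        subscribers.setdefault(prefix, []).append(callback)

    def notify(path, value):
        for prefix, callbacks in subscribers.items():
            if path.startswith(prefix):
                for cb in callbacks:
                    cb(path, value)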

The method for managing data shards disclosed here is divided into two parts, a control flow and a request flow, which are described with reference to FIG. 4 and FIG. 5, respectively, on the basis of FIGS. 1 and 2.

FIG. 4 is a flowchart of the control flow. Referring to step S101, the control flow begins when the container orchestration module 100 receives a data service provider's request to set up a data service. In the embodiment disclosed here, the data service the provider wishes to set up is a key-value data service named Data1, and the provider requires the following Data1 specification (also illustrated by the sketch after the list):

- The primary key range of Data1 is 0000 to 9999.

- Data1 is divided into two data shards: the primary key range of the first data shard is 0000 to 4999, and that of the second data shard is 5000 to 9999.

- Each data shard must be served by two containers, of which one master container (Master) provides the write/read service and the other, a slave container (Slave), provides the read service.
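For illustration, such a set-up request might be expressed as the structure below. The disclosure fixes no request schema, so every field name here is hypothetical; only the values come from the specification above:

    # Hypothetical shape of the provider's request to set up Data1.
    setup_request = {
        "service": "Data1",
        "type": "key-value",
        "key_range": ("0000", "9999"),
        "shards": [
            {"name": "Shard1", "key_range": ("0000", "4999"), "replicas": 2},
            {"name": "Shard2", "key_range": ("5000", "9999"), "replicas": 2},
        ],
        # Per shard: one master serves reads and writes, one slave serves reads.
        "roles": {"master": ["read", "write"], "slave": ["read"]},
    }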

Referring to step S102, the container orchestration module 100, based on the Data1 specification required above and the cluster information currently in its initial state in the coordinator 400, selects one storage as the storage carrier for each data shard. Assuming the current selection policy is round-robin, storage 230 of cluster 200 (Cluster1) is selected as the storage carrier of the first data shard (Shard1), and storage 320 of cluster 300 (Cluster2) as that of the second data shard (Shard2). Other selection policies may instead pick the currently least-loaded computing node, or deploy multiple (e.g., two) data services on the same computing node according to application needs.

The container orchestration module 100 must also select, according to the required Data1 specification, the computing nodes on which the containers serving the data shards will be started. If the selected storage carrier is connected to multiple computing nodes, the nodes are likewise assumed to be chosen round-robin. In this embodiment, storage 230 of cluster 200 (Cluster1) is connected to computing nodes 210 (Node1) and 220 (Node2), so the container orchestration module 100 chooses to start one container on computing node 210 (Node1) and another container on computing node 220 (Node2) to serve the first data shard.

Because storage 320 of cluster 300 (Cluster2) is connected only to computing node 310 (Node3), the container orchestration module 100 chooses to start two containers on computing node 310 (Node3) to serve the second data shard.
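A round-robin placement consistent with step S102 can be sketched as follows, assuming the clusters and setup_request structures modeled earlier; breaking ties by sorted name is an assumption of the sketch:

    from itertools import cycle

    def place_shards(clusters, shards):
        """Pick a storage (cluster) per shard round-robin, then pick serving
        nodes round-robin among the nodes attached to that storage."""
        placement = {}
        cluster_rr = cycle(sorted(clusters))             # Cluster1, Cluster2, ...
        for shard in shards:
            cluster = next(cluster_rr)
            nodes = sorted(n for n in clusters[cluster] if n != "Storage")
            node_rr = cycle(nodes)
            placement[shard["name"]] = {
                "storage": clusters[cluster]["Storage"],
                "nodes": [next(node_rr) for _ in range(shard["replicas"])],
            }
        return placement

    # With the embodiment's data this yields Shard1 -> Node1/Node2 (storage 230)
    # and Shard2 -> Node3/Node3 (storage 320), matching the text.
    placement = place_shards(clusters, setup_request["shards"])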

Referring to step S103, the container orchestration module 100 writes the above storage and computing-node selections into data blocks 610 and 620 of the coordinator 400's cluster information (as shown in FIG. 2); that is, it writes an entry "Data1/Shard1/1" into Node1's data subdirectory 614, an entry "Data1/Shard1/2" into Node2's data subdirectory 615, and entries "Data1/Shard2/3" and "Data1/Shard2/4" into Node3's data subdirectory 623. Note that the Master/Slave labels shown in data subdirectories 614, 615, and 623 in FIG. 2 are only updated in later steps, and the container access addresses (IP) must be updated by the control components; neither has been written at this step.

The container orchestration module 100 also adds a data block 630 to the coordinator 400's cluster information to store the service address of the data service Data1 and the metadata describing the specification of each data shard.

For example, in the embodiment disclosed here, the container orchestration module 100 generates the service address of the data service Data1 (e.g., 10.0.0.1:6379) and records it in data block 631; users can access the data service Data1 via this service address.

The container orchestration module 100 also stores, in data block 630, the metadata describing each data shard specification required by the data service provider. For example, data block 632 records that the primary key range of the first data shard (Shard1) is 0000-4999 and records, under its member subdirectory (Member), Node1 and Node2 as the members serving the first shard, with the first entry of the member subdirectory indicating that the master container (Master) will reside on Node1; and data block 633 records that the primary key range of the second data shard (Shard2) is 5000-9999 and records Node3 under its member subdirectory (Member) as the member serving the second shard.
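Continuing the same sketch, step S103 then amounts to the writes below. The entry strings and addresses follow the text; the Master/Slave roles and container addresses remain unset until later steps:

    # Step S103 as writes against the coordinator sketch above. A service
    # entry maps an assignment key to its not-yet-known role and address.
    clusters["Cluster1"]["Node1"]["services"]["Data1/Shard1/1"] = {"role": None, "addr": None}
    clusters["Cluster1"]["Node2"]["services"]["Data1/Shard1/2"] = {"role": None, "addr": None}
    clusters["Cluster2"]["Node3"]["services"]["Data1/Shard2/3"] = {"role": None, "addr": None}
    clusters["Cluster2"]["Node3"]["services"]["Data1/Shard2/4"] = {"role": None, "addr": None}

    # New data block 630: the service address and per-shard metadata.
    data_services = {
        "Data1": {
            "address": "10.0.0.1:6379",                   # data block 631
            "Shard1": {"keys": ("0000", "4999"),          # data block 632
                       "members": ["Node1", "Node2"]},    # first member hosts the master
            "Shard2": {"keys": ("5000", "9999"),          # data block 633
                       "members": ["Node3"]},
        },
    }

    # In the full flow, each write to a subscribed subdirectory notifies
    # that node's control component, e.g.:
    notify("Cluster1/Node1/services", "Data1/Shard1/1")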

The above are the actions taken by the container orchestration module 100 in response to the data service provider's request to set up the data service Data1.

Referring to step S104, the control components 211, 221, and 311 subscribe to the data subdirectories 614, 615, and 623 kept by the coordinator 400, respectively; they are therefore notified when the container orchestration module 100 writes data, and they start the containers corresponding to the entries newly added to the subscribed subdirectories. That is, control component 211 starts container 213 for the entry "Data1/Shard1/1" newly added to data subdirectory 614 and updates its access address (e.g., 192.168.1.2:1111) into subdirectory 614; control component 221 starts container 223 for the entry "Data1/Shard1/2" newly added to data subdirectory 615 and updates its access address (e.g., 192.168.1.2:1111) into subdirectory 615; and control component 311 starts containers 313 and 314 for the entries "Data1/Shard2/3" and "Data1/Shard2/4" newly added to data subdirectory 623 and updates their access addresses (e.g., 192.168.1.2:1111 and 192.168.1.3:1111) into subdirectory 623.
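The behavior of a control component in step S104 can be sketched as a subscription callback, assuming the subscribe/notify/clusters helpers modeled earlier; start_container is a stand-in for the platform's actual container-launch call, which the disclosure does not name:

    def make_control_component(cluster, node):
        """Returns a callback that starts a container for each new service
        entry on this node and publishes the container's access address."""
        def on_new_entry(path, entry):
            services = clusters[cluster][node]["services"]
            if entry not in services or services[entry]["addr"] is not None:
                return   # handle new entries only; ignore address updates
            services[entry]["addr"] = start_container(entry)
            notify(f"{cluster}/{node}/services/{entry}", services[entry]["addr"])
        return on_new_entry

    def start_container(entry):
        # Placeholder for the real platform call that launches a container
        # serving `entry` and reports back its IP:port.
        return "192.168.1.2:1111"

    subscribe("Cluster1/Node1/services", make_control_component("Cluster1", "Node1"))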

The control components 211, 221, and 311 use the container platform's built-in health check mechanism to periodically check the health of the containers they manage, delete unhealthy containers and replace them with new ones, and update the container access addresses recorded in the coordinator 400's data subdirectories 614, 615, and 623 accordingly.
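The periodic health check can likewise be sketched as a loop; is_healthy and delete_container are stand-ins for the container platform's own probe and delete calls, which the disclosure does not name:

    import time

    def health_check_loop(cluster, node, interval=10):
        """Periodically replace unhealthy containers and refresh their
        addresses in the coordinator (a sketch; platform calls are stubs)."""
        while True:
            for entry, info in clusters[cluster][node]["services"].items():
                if info["addr"] is not None and not is_healthy(info["addr"]):
                    delete_container(info["addr"])
                    info["addr"] = start_container(entry)   # replacement container
                    notify(f"{cluster}/{node}/services/{entry}", info["addr"])
            time.sleep(interval)

    def is_healthy(addr):        # stub for the platform's health probe
        return True

    def delete_container(addr):  # stub for the platform's delete call
        pass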

The above are the actions performed by the control components 211, 221, and 311.

Referring to step S105, when a container starts successfully it registers with the coordinator 400, triggering the leader election mechanism, the details of which are as follows.

The coordinator 400 disclosed here is implemented with etcd (https://coreos.com/etcd/) or ZooKeeper (https://zookeeper.apache.org/), both of which support a hierarchical (i.e., tree-like) data organization that can store cluster content such as that of FIG. 2, and both of which support a leader election mechanism that allows multiple processes to register under a directory, coordinates those processes, and selects one of them as the leader. In the embodiment disclosed here, containers 213 and 223 (the processes) register with data block 632 (the directory) of the coordinator 400; if no leader has been elected yet, a leader is coordinated among containers 213 and 223 through the leader election mechanism, container 213 being the elected leader in this embodiment. Likewise, containers 313 and 314 register with data block 633 of the coordinator 400; if no leader has been elected yet, a leader is coordinated among containers 313 and 314, container 314 being the elected leader in this embodiment. The leader described here is the master container (Master) of the data service, and the remaining containers are slave containers (Slave). The leader election results are written into the data subdirectories 614, 615, and 623 kept by the coordinator 400 and labeled "Master" or "Slave".
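In spirit, this election is a create-first-wins registration under a shard's directory. The sketch below mimics that idea in memory only; a real deployment would rely on the election primitives of etcd or ZooKeeper themselves rather than this stand-in:

    import threading

    elections = {}                  # shard directory -> elected leader
    election_lock = threading.Lock()

    def register_and_elect(shard_dir, container_id):
        """Each started container registers under its shard's directory;
        the first registrant becomes Master, later ones become Slaves."""
        with election_lock:
            if shard_dir not in elections:
                elections[shard_dir] = container_id   # leader elected
            role = "Master" if elections[shard_dir] == container_id else "Slave"
        return role

    # The embodiment's outcome: container 213 wins Shard1, container 314 wins Shard2.
    print(register_and_elect("Data1/Shard1", "container-213"))   # Master
    print(register_and_elect("Data1/Shard1", "container-223"))   # Slave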

At this point, the data service provider's request to set up the data service Data1 has been fulfilled: the master/slave container services of the two data shards have been started and are ready to serve user requests. See the request flow (Request Flow) shown in FIG. 5.

Referring to step S201, a user's request to the data service Data1 can be specified via a REST protocol. For example, to write a record with primary key 4999 into Data1, the user can issue an HTTP POST request (e.g., http://10.0.0.1:6379/4999) with the value to be written carried in the POST data. To read the record with primary key 4999 from Data1, the user can issue an HTTP GET request (e.g., http://10.0.0.1:6379/4999).
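Using Python's standard library, for instance, the two requests of step S201 could be issued as follows; the payload encoding is an assumption, since the disclosure specifies only the URL scheme:

    import urllib.request

    # Write the value "hello" under primary key 4999 (payload format assumed).
    req = urllib.request.Request("http://10.0.0.1:6379/4999",
                                 data=b"hello", method="POST")
    with urllib.request.urlopen(req) as resp:
        print(resp.status)

    # Read back the record with primary key 4999.
    with urllib.request.urlopen("http://10.0.0.1:6379/4999") as resp:
        print(resp.read())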

Referring to steps S202 and S203, the user's request is sent to the load balancer 500. The load balancer 500 subscribes to all the cluster information maintained by the coordinator 400, so it can decide the route for the user's request directly from its locally stored cache of that cluster information. The embodiment disclosed here uses HAProxy as the load balancer; its routing configuration is shown in FIG. 3.

As shown in this embodiment, when the load balancer 500 receives an HTTP POST request (e.g., http://10.0.0.1:6379/4999), it can use its locally cached copy of data block 630 kept by the coordinator 400 to determine that the data requested by the user lies in the first data shard (i.e., Data1/Shard1), and, because the request asks for the write service, it can infer that the request must be directed to the master container (Master) serving the first data shard Shard1. From data block 632 it learns that the master resides on Node1, so it directs the request to the agent component 212 of computing node Node1 according to the container access address kept in data subdirectory 614.
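The load balancer's shard/master decision, together with the agent's final rewrite of step S204 below, can be sketched as follows, assuming the data_services and clusters structures modeled earlier; HAProxy would express the same decision as routing rules like those of FIG. 3:

    def route(service, key, is_write):
        """Pick the shard by key range, then a serving node: the master's
        node for writes, another member for reads (a sketch of the rules)."""
        svc = data_services[service]
        for shard, meta in svc.items():
            if shard == "address":
                continue
            lo, hi = meta["keys"]
            if lo <= key <= hi:
                node = meta["members"][0] if is_write else meta["members"][-1]
                return shard, node
        raise KeyError(key)

    def agent_rewrite(cluster, node, shard_entry, key):
        # The agent swaps the service address for the container's real address;
        # falls back to the text's example address if not yet published.
        addr = clusters[cluster][node]["services"][shard_entry]["addr"] or "192.168.1.2:1111"
        return f"http://{addr}/{key}"

    shard, node = route("Data1", "4999", is_write=True)   # -> ("Shard1", "Node1")
    print(agent_rewrite("Cluster1", node, "Data1/Shard1/1", "4999"))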

Referring to step S204, the agent component on each computing node also subscribes to the metadata of its own node kept in the coordinator 400; for example, agent component 212 subscribes to data block 611 and data subdirectory 614 and stores cached copies of the subscribed data block 611 and data subdirectory 614. When agent component 212 receives the HTTP POST request (http://10.0.0.1:6379/4999) from the load balancer 500, it can use this cached information to rewrite the request as http://192.168.1.2:1111/4999 and direct it to container 213.

The above are the steps of the request flow.

The above embodiments merely illustrate the principles, features, and effects of the present invention and are not intended to limit its implementable scope. Anyone skilled in the art may modify and alter the above embodiments without departing from the spirit and scope of the invention. Any equivalent changes and modifications accomplished using the contents disclosed by the invention shall still be covered by the scope of the claims. Accordingly, the scope of protection of the rights of the present invention shall be as listed in the claims.

Claims (12)

1. A data sharding management system on a container platform, comprising: a coordinator for managing cluster information; a plurality of computing nodes, in each of which containers are deployed to access the cluster information; a plurality of storages, each connected to at least one of the plurality of computing nodes; a container orchestration module for receiving a request to set up a data service and accessing the cluster information, wherein the data service is divided into a plurality of data shards, and the container orchestration module determines, according to the request and the cluster information, the storage used to store each data shard and the computing node on which each container serving each data shard runs, wherein the container orchestration module updates the cluster information according to the storage and computing-node decisions for each computing node to start a corresponding number of containers according to the updated cluster information, and the coordinator further elects at least one leader container when the started containers register with the coordinator; and a load balancer for receiving data requests for the data service and accessing the cluster information.

2. The data sharding management system of claim 1, wherein the container orchestration module writes the storage and computing-node decisions into the data subdirectories of the data cluster, and adds a data block to the cluster information to store the service address of the data service and the metadata of each data shard.

3. The data sharding management system of claim 1, wherein the cluster information comprises the access addresses of the agent components and control components of the plurality of computing nodes, the access addresses of the storages, the service address of the data service, and the metadata of each data shard.

4. The data sharding management system of claim 1, wherein each computing node comprises its own control component for subscribing to the coordinator's cluster information, starting or deleting containers according to the cluster information, and reporting the state of each container to the coordinator.

5. The data sharding management system of claim 1, wherein the load balancer further subscribes to the coordinator's cluster information so as to decide, according to the cluster information, the route directing a data request to the agent component of one of the plurality of computing nodes.

6. The data sharding management system of claim 1, wherein each computing node comprises its own agent component for subscribing to the cluster information and receiving a data request forwarded by the load balancer so as to direct the data request to a container.

7. A method for managing data shards on a container platform, comprising: causing a container orchestration module to receive a request to set up a data service, wherein the data service is divided into a plurality of data shards; causing the container orchestration module to determine, according to the request and a piece of cluster information, the storage used to store each data shard and the computing node on which each container serving each data shard runs; causing the container orchestration module to update the cluster information according to the storage and computing-node decisions; causing the computing node to start a corresponding number of containers according to the updated cluster information so as to update the access addresses of the successfully started containers into the cluster information; causing a coordinator to register each container so that at least one leader container is elected from the containers serving the same data shard and the election result is written into the cluster information, thereby completing the request to set up the data service; and causing a load balancer to receive data requests for the completed data service.

8. The method of claim 7, wherein, in its initial state, the cluster information contains the access addresses of the control component and agent component of the computing node and the access address of the storage, and the data subdirectories of the cluster information are empty.

9. The method of claim 7, wherein causing the container orchestration module to update the cluster information according to the storage and computing-node decisions comprises: causing the container orchestration module to write the storage and computing-node decisions into the data subdirectories of the cluster information; and causing the container orchestration module to add a data block to the cluster information to store the service address of the data service and the metadata of each data shard.

10. The method of claim 7, further comprising causing the control component of the computing node to periodically check the health of the containers on the computing node through a health check mechanism.

11. The method of claim 7, wherein causing the load balancer to receive data requests for the completed data service further comprises: accessing, by a user, the data service via the service address of the data service; specifying, by the user, a data request for the data service; causing the load balancer to receive the data request and decide the route directing the data request to the agent component of the computing node; and causing the agent component to direct the data request to a container on the computing node.

12. The method of claim 7, wherein the computing node on which each container serving each data shard runs is selected by the container orchestration module on a round-robin basis, by selecting a currently less-loaded computing node, or by deploying multiple data services on the same computing node according to application needs.
TW108111522A 2019-04-01 2019-04-01 Data sharding management system and method on container platform TWI695329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108111522A TWI695329B (en) 2019-04-01 2019-04-01 Data sharding management system and method on container platform


Publications (2)

Publication Number Publication Date
TWI695329B (en) 2020-06-01
TW202038146A TW202038146A (en) 2020-10-16

Family

ID=72176081

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108111522A TWI695329B (en) 2019-04-01 2019-04-01 Data sharding management system and method on container platform

Country Status (1)

Country Link
TW (1) TWI695329B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113468579A (en) * 2021-07-23 2021-10-01 挂号网(杭州)科技有限公司 Data access method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870339A (en) * 2014-03-06 2014-06-18 上海华为技术有限公司 Cluster resource allocation method and cluster resource allocation device
CN104881322A (en) * 2015-05-18 2015-09-02 中国科学院计算技术研究所 Method and device for dispatching cluster resource based on packing model
US9172750B2 (en) * 2011-04-26 2015-10-27 Brian J. Bulkowski Cluster-node load balancing in a distributed database system
TW201706839A (en) * 2015-04-29 2017-02-16 微軟技術授權有限責任公司 Optimal allocation of dynamic cloud computing platform resources
US9607071B2 (en) * 2014-03-07 2017-03-28 Adobe Systems Incorporated Managing a distributed database across a plurality of clusters
TWI584654B (en) * 2015-03-27 2017-05-21 林勝雄 Method and system for optimization service
CN108170517A (en) * 2018-01-08 2018-06-15 武汉斗鱼网络科技有限公司 A kind of container allocation method, apparatus, server and medium

Also Published As

Publication number Publication date
TW202038146A (en) 2020-10-16
