TWI689823B - Method and server for dynamic work transfer - Google Patents

Method and server for dynamic work transfer Download PDF

Info

Publication number
TWI689823B
Authority
TW
Taiwan
Prior art keywords
job
cost
nodes
node
target node
Prior art date
Application number
TW107100259A
Other languages
Chinese (zh)
Other versions
TW201931145A (en)
Inventor
陳律翰
王慶堯
Original Assignee
財團法人工業技術研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人工業技術研究院 filed Critical 財團法人工業技術研究院
Priority to TW107100259A priority Critical patent/TWI689823B/en
Priority to CN201810146994.6A priority patent/CN110012044B/en
Priority to US15/924,297 priority patent/US10764359B2/en
Publication of TW201931145A publication Critical patent/TW201931145A/en
Application granted granted Critical
Publication of TWI689823B publication Critical patent/TWI689823B/en


Classifications

    • H04L41/0826 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for reduction of network costs
    • H04L41/0896 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1029 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L67/563 - Data redirection of data network streams
    • H04L67/63 - Routing a service request depending on the request content or context
    • H04L43/0852 - Delays (monitoring or testing based on specific metrics)
    • H04L43/10 - Active monitoring, e.g. heartbeat, ping or trace-route

Abstract

A method and a server for dynamic work transfer are provided. The method includes the following steps: regularly collecting and recording network resources of multiple nodes in a network, in which the nodes include a cloud node and multiple edge nodes; receiving a request for a first job at a first time point, calculating a cost of configuring the first job to each node according to the network resources of each node at the first time point, and configuring the first job to a first target node; receiving a request for a second job at a second time point, calculating a cost of configuring the second job to each node according to the network resources of each node at the first time point, and determining a second target node that is suitable for configuring the second job and whether to transfer the first job; and configuring the second job and keeping or transferring the first job according to the determination result.

Description

Dynamic work transfer method and server

The present invention relates to a work transfer method and device, and in particular to a dynamic work transfer method and server.

Over the past decade, the development of network technology and the rise of the cloud service industry have enabled a large number of diverse information application services. However, with the rise of Internet of Things (IoT) technology, more and more devices are connected. When a large number of IoT devices go online at the same time, they may consume substantial network resources (such as bandwidth, storage space, and CPU computing power), which poses challenges for cloud computing.

Edge computing has been proposed to relieve the burden on cloud computing. Specifically, edge computing is a concept of computing close to the data source: computation is moved into the local network where the data originates, and data is sent back to the cloud only when necessary, thereby reducing the round-trip latency to the cloud and lowering network bandwidth costs. In addition, properly deploying multiple edge nodes can also increase scalability and reliability.

Of course, not all data can be processed locally. For example, some data requires further analysis and judgment and must be transferred to the cloud for processing or for long-term access. Therefore, it is necessary to configure network nodes appropriately, according to the current job requirements, so that all jobs can be handled.

The present invention provides a dynamic work transfer method and server that reconfigure the network nodes handling each job according to the characteristics, requirements, and network resources of all jobs in the network, thereby improving overall computing and network transmission performance.

The dynamic work transfer method of the present invention is applicable to a cloud node under a cloud and edge computing architecture. The method includes the following steps: periodically collecting and recording network resources of multiple nodes in a network, the nodes including the cloud node and multiple edge nodes; receiving a request for a first job at a first time point, and calculating the cost of assigning the first job to each node according to the network resources of each node at the first time point, so as to assign the first job to a first target node among the nodes; receiving a request for a second job at a second time point, calculating the cost of assigning the second job to each node according to the network resources of each node at the first time point, determining a second target node suitable for the second job, and determining whether the first job needs to be transferred, where the second time point is after the first time point; and assigning the second job and keeping or transferring the first job according to the determination result.

The server of the present invention is adapted to serve as a cloud node under a cloud and edge computing architecture. The server includes a communication device, a storage device, and a processor. The communication device is connected to a network and communicates with multiple edge nodes in the network. The processor is coupled to the communication device and the storage device and executes programs recorded in the storage device to: periodically collect, through the communication device, network resources of multiple nodes in the network and record them in the storage device, the nodes including the cloud node and the edge nodes; receive, through the communication device, a request for a first job at a first time point, and calculate the cost of assigning the first job to each node according to the network resources of each node at the first time point, so as to assign the first job to a first target node among the nodes; receive, through the communication device, a request for a second job at a second time point, calculate the cost of assigning the second job to each node according to the network resources of each node at the first time point, determine a second target node suitable for the second job, and determine whether the first job needs to be transferred, where the second time point is after the first time point; and assign the second job and keep or transfer the first job according to the determination result.

Based on the above, the dynamic work transfer method and server of the present invention periodically collect network information of multiple nodes in the network so that, when a request for a first job is received, the first job can be assigned according to the network resources of each node at that moment. When a request for a second job is received, the cost of assigning the second job to each node is recalculated according to the network resources of each node before the first job was assigned, so as to determine the node best suited for the second job and to decide whether the first job should be transferred to another node. In this way, whenever a new job request is received, the server of the present invention can reconfigure the nodes handling each job according to the characteristics, requirements, and network resources of all jobs on the network, thereby improving overall computing and network transmission performance.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

While the server transmits data over the network, when the communication device detects that a new job request needs to be executed, the present invention provides a dynamic work transfer method that allows the server to use the processor to reconfigure, according to the characteristics, requirements, and network resources of all jobs in the network, the network nodes that handle jobs with specific constraints, so as to optimize the use of dynamic resources and improve the overall computing and transmission performance of the network.

FIG. 1 is a block diagram of a server 100 according to an embodiment of the invention. In this embodiment, the server 100 is suitable for serving as a cloud node under a cloud and edge computing architecture and includes a communication device 110, a storage device 120, and a processor 130. The communication device 110 can be connected to a network so as to communicate with multiple edge nodes in the network. The processor 130 is coupled to the communication device 110 and the storage device 120 and is configured to execute programs recorded in the storage device 120.

In this embodiment, the communication device 110 is, for example, a network interface card supporting a wired connection such as Ethernet, or a wireless network card supporting wireless communication standards such as Institute of Electrical and Electronics Engineers (IEEE) 802.11n/b/g, which connects to the network in a wired or wireless manner and exchanges data with devices on the network. The storage device 120 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, resistive random-access memory (RRAM), ferroelectric RAM (FeRAM), magnetoresistive RAM (MRAM), phase-change RAM (PRAM), conductive-bridge RAM (CBRAM), or dynamic random access memory (DRAM), but is not limited thereto. The processor 130 may be, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), other similar devices, or a combination of these devices, but is not limited thereto.

FIG. 2 is a schematic diagram of a network architecture 200 according to an embodiment of the invention. In FIG. 2, the network architecture 200 includes a cloud node 210, multiple edge nodes 220~240, and multiple user devices U1~U8, where the cloud node 210 is implemented, for example, by the server 100 of FIG. 1. Referring to FIG. 1 and FIG. 2 together, in this embodiment the cloud node 210 periodically collects the network resources of the edge nodes 220~240 and of itself, and records the collected network resources (for example, in the storage device 120). The edge nodes 220~240 are nodes adjacent to the user devices U1~U8 and can process jobs submitted by the user devices U1~U8 nearby. Specifically, the cloud node 210 can receive job requests issued by the user devices U1~U8 via the network, calculate and evaluate the network resources and data characteristics of each node (including the cloud node 210 and the edge nodes 220~240) based on the previously collected network resources, and assign these jobs to the most suitable nodes for processing.

FIG. 3 is a flowchart of a dynamic work transfer method according to an embodiment of the invention. Referring to FIG. 1 and FIG. 3 together, the method of this embodiment is applicable to the server 100 of FIG. 1. The detailed steps of the dynamic work transfer method of the present invention are described below with reference to the components of the server 100.

In step S310, the processor 130 uses the communication device 110 to periodically collect and record the network resources of multiple nodes in the network, where the nodes include the cloud node and multiple edge nodes on the network. For the implementation of step S310, please also refer to FIG. 4, which is a schematic diagram of a network architecture 400 in which multiple nodes exchange network resources with each other according to an embodiment of the invention. In this embodiment, the network architecture 400 includes a cloud node 410, multiple edge nodes 420~440, and first and second user devices J1 and J2.

In detail, each of the edge nodes 420~440 periodically broadcasts a heartbeat signal to exchange messages with the other nodes and thereby obtains the transmission latency to each of the other nodes. After obtaining the latency information, each edge node 420~440 also reports its own network resources to the cloud node 410, so that the cloud node 410 can calculate and evaluate the network resources and data characteristics of each edge node 420~440. In addition, the first job Jt and the second job Jt+1 that the cloud node 410 receives from the first and second user devices J1 and J2 each carry a profile recording the latency, bandwidth, power, and storage requirements of the job. The numbers in the profile (e.g., 1 and 3) indicate the importance or weight of each condition (latency, bandwidth, power, or storage space). Through this profile, the processor 130 obtains the working conditions required by each job.
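The patent does not specify how such a profile is encoded; as a rough illustration only, it could be modeled as a small record of per-requirement weights. The class name, field names, and example values below are hypothetical and merely echo the "1" and "3" weight markers mentioned above.

from dataclasses import dataclass

# Hypothetical representation of a job profile: one weight per requirement.
@dataclass
class JobProfile:
    latency: int = 1
    bandwidth: int = 1
    power: int = 1
    storage: int = 1

# e.g. a job that weights latency three times as heavily as the other conditions
profile_jt = JobProfile(latency=3)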

Referring again to FIG. 1 and FIG. 3, in step S320 the processor 130 uses the communication device 110 to receive a request for a first job at a first time point, and calculates the cost of assigning the first job to each node according to the network resources of each node at the first time point, so as to assign the first job to a first target node among these nodes.

In detail, FIG. 5 is a schematic diagram of calculating the cost of assigning a job to each node in a network architecture 500 according to an embodiment of the invention. In this embodiment, the network architecture 500 includes a cloud node 510, multiple edge nodes 520~540, and first and second user devices J1 and J2. In FIG. 5, the processor 130 calculates, according to the network resources of each node (the edge nodes 520~540 and the cloud node 510) at the first time point (t), the transmission cost, computation cost, and storage cost of assigning the first job Jt to each node. The processor 130 then computes the weighted sum of the transmission cost, computation cost, and storage cost as the cost of assigning the first job Jt to that node. For example, for a given node, the transmission cost CT, computation cost CC, and storage cost CS of assigning the first job Jt to that node are multiplied by weights w1, w2, and w3, respectively, and the total cost Csum of assigning the first job Jt to that node is obtained as:

Csum = w1 x CT + w2 x CC + w3 x CS.

To prevent the network resource value of a single node from being disproportionately large (for example, the storage space of the cloud node is far larger than that of an edge node) and thereby skewing the cost calculation, the processor 130, for example, normalizes the network resources of each node before converting them into the transmission cost CT, computation cost CC, and storage cost CS.

For example, the storage cost is inversely proportional to the storage space of each node: the larger the storage space, the lower the storage cost. Assuming the storage spaces of the edge nodes 520~540 and the cloud node 510 are in the ratio 3:3:5:10, the corresponding storage costs after normalization are in the ratio 1/3:1/3:1/5:1/10, i.e., 10:10:6:3. On the other hand, the transmission cost is proportional to the transmission latency and inversely proportional to the transmission bandwidth: the longer the latency, the higher the transmission cost, and the larger the bandwidth, the lower the transmission cost. Taking the relationship between latency, bandwidth, and transmission cost into account, the transmission costs of the edge nodes 520~540 and the cloud node 510 can be calculated to be in the ratio 2:3:7:10. In addition, the computation cost is inversely proportional to the computing power of each node: the better the computing power, the lower the computation cost; accordingly, the computation costs of the edge nodes 520~540 and the cloud node 510 can be calculated to be in the ratio 5:4:5:4.
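The arithmetic above can be made concrete with a short Python sketch. It is not part of the patent; the raw figures are hypothetical and chosen only so that the resulting ratios match this example, and the weights w1, w2, w3 would in practice come from the job profile.

def normalized_costs(values, larger_is_cheaper=True):
    # Convert raw resource figures into comparable cost scores. For storage
    # space and computing power, a larger value means a lower cost, so invert.
    raw = [1.0 / v if larger_is_cheaper else float(v) for v in values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical figures for (edge 520, edge 530, edge 540, cloud 510).
storage_space = [3, 3, 5, 10]   # ratio 3:3:5:10 -> storage costs 10:10:6:3
latency       = [2, 3, 7, 10]   # stands in for the 2:3:7:10 transmission costs
compute_power = [4, 5, 4, 5]    # -> computation costs 5:4:5:4

c_s = normalized_costs(storage_space)                     # storage cost CS
c_t = normalized_costs(latency, larger_is_cheaper=False)  # transmission cost CT
c_c = normalized_costs(compute_power)                     # computation cost CC

w1, w2, w3 = 3, 1, 1   # per-job weights, e.g. taken from the job profile

# Csum = w1*CT + w2*CC + w3*CS for each node; the job goes to the cheapest one.
c_sum = [w1 * t + w2 * c + w3 * s for t, c, s in zip(c_t, c_c, c_s)]
best = min(range(len(c_sum)), key=c_sum.__getitem__)
print("total costs:", [round(x, 3) for x in c_sum], "-> cheapest node index:", best)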

Referring again to FIG. 3, in step S330 the processor 130 uses the communication device 110 to receive a request for a second job at a second time point, and calculates the cost of assigning the second job to each node according to the network resources of each node at the first time point, so as to determine a second target node suitable for the second job and to determine whether the first job needs to be transferred. Then, in step S340, the processor 130 assigns the second job and keeps or transfers the first job according to the determination result. The second time point is after the first time point.

In detail, when a request for the second job is received, the best node in the network for processing the second job may be a node that has not yet been assigned any job, but it may also be the node already assigned to process the first job. To avoid the situation in which the best node is occupied by the first job and the second job is forced onto a second-best node, so that resource allocation is no longer optimal, in this embodiment, when the processor 130 receives a new request for the second job from a user device at the second time point through the communication device 110, the processor 130 traces back to the first time point (when the first job was received) and recalculates, according to the network resources at the first time point, the cost of assigning the second job to each node, so as to assign the second job to the most suitable node. At the same time, the processor 130 also determines, based on this calculation, whether the previously assigned first job needs to be transferred. That is, if the best node for the second job is the node previously assigned to process the first job, the cost of transferring the first job to another node is further considered to decide whether the first job should be transferred.

In detail, FIG. 6 is a flowchart of a method for determining whether a job needs to be transferred according to an embodiment of the invention. Referring to FIG. 1 and FIG. 6 together, in step S610 the processor 130 determines whether the second target node determined to be suitable for the second job is the same as the first target node previously assigned to process the first job.

If the second target node is different from the first target node, then in step S620 the processor 130 does not need to transfer the first job; it assigns the second job to the second target node and leaves the first job where it is. Conversely, if the second target node is the same as the first target node, then in step S630 the processor 130 further determines whether, at the first time point, the cost of assigning the second job to the first target node is greater than the cost of assigning the first job to the first target node. If so, step S640 is executed; otherwise, step S650 is executed.

If the result of step S630 is yes, the processor 130 executes step S640: it recalculates the cost of assigning the second job to each node according to the network resources of each node at the second time point, assigns the second job to a third target node, and does not transfer the first job. In detail, the processor 130 calculates the cost of assigning the second job to each node based on the second time point, so that it can assign the second job to the node with the lowest cost.

If the result of step S630 is no, the processor 130 executes step S650: based on the cost of assigning the first job to the neighboring nodes of the first target node at the first time point and the cost of keeping the first job at the first target node at the second time point, it either transfers the first job to a fourth target node among the neighboring nodes or leaves the first job where it is.
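The decision flow of steps S610 to S650 can be sketched as follows. This is an illustrative reading of the flowchart, not code from the patent; it assumes per-node cost tables for both time points are already available, and all function and variable names are hypothetical.

# Illustrative sketch of steps S610-S650.
# job2_cost_t / job2_cost_t1: node -> cost of the second job at time t / t+1.
# job1_cost_t: node -> cost of the first job at time t.
# job1_keep_cost_t1: cost of keeping the first job on first_target at time t+1.
def decide_placement(job2_cost_t, job2_cost_t1, job1_cost_t,
                     first_target, first_neighbors, job1_keep_cost_t1):
    # S610: the second target is the cheapest node for the second job at time t.
    second_target = min(job2_cost_t, key=job2_cost_t.get)

    # S620: different nodes -> place the second job there, keep the first job.
    if second_target != first_target:
        return second_target, first_target

    # S630: same node -> compare both jobs' costs on that node at time t.
    if job2_cost_t[first_target] > job1_cost_t[first_target]:
        # S640: keep the first job; re-place the second job using time t+1 costs.
        third_target = min(job2_cost_t1, key=job2_cost_t1.get)
        return third_target, first_target

    # S650: the second job takes the node; move the first job to its cheapest
    # neighbor (time-t cost) unless keeping it at time t+1 costs less.
    fourth_target = min(first_neighbors, key=job1_cost_t.get)
    if job1_cost_t[fourth_target] <= job1_keep_cost_t1:
        return second_target, fourth_target
    return second_target, first_target

The function returns the node for the second job and the node where the first job ends up, mirroring the keep-or-transfer outcome of step S340.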

For example, FIG. 7 is a schematic diagram of a network architecture 700 for determining whether a job needs to be transferred according to an embodiment of the invention. In this embodiment, the network architecture 700 includes a cloud node 710, multiple edge nodes 720~740, and first and second user devices J1 and J2. Assume that the cloud node 710 assigns the first job Jt of the first user device J1 to the edge node 730 at the first time point t. When the cloud node 710 receives the request for the second job Jt+1 of the second user device J2 at the second time point t+1, the processor 130 determines that the cost of assigning the second job Jt+1 to the edge node 730 is lower than the cost of assigning the first job Jt to the edge node 730; therefore, at the second time point t+1, the processor 130 assigns the second job Jt+1 to the edge node 730. In this case, the processor 130 then evaluates the cost of assigning the first job Jt to the other nodes (the cloud node 710 or the edge nodes 720 and 740), so as to assign the first job Jt to the node with the lowest cost.

In other words, at this point the cloud node 710 further calculates the cost of assigning the first job Jt, at the first time point, to the cloud node 710 or the edge nodes 720 and 740 (the nodes adjacent to the edge node 730), and also calculates the cost of keeping the first job Jt at the edge node 730 at the second time point. If the processor 130 determines from these calculations that assigning the first job Jt to a node adjacent to the edge node 730 is cheaper, the processor 130 transfers the first job Jt to one of those nodes (i.e., the cloud node 710 or one of the edge nodes 720 and 740). Conversely, if the processor 130 determines that keeping the first job Jt at the edge node 730 is cheaper, the first job Jt remains with the edge node 730.

It should be noted that, in one embodiment, when the processor 130 transfers jobs in response to the request for the second job Jt+1, it only transfers, for example, jobs located within k hops of the second job Jt+1; that is, the processor 130 only transfers a job whose assigned node is within k intermediate nodes of the sender of the second job Jt+1, where k is a positive integer.
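As an illustration of this restriction only (the helper name hop_distance and the surrounding structure are assumptions, not taken from the patent), a filter over currently placed jobs might look like this:

# Only jobs whose assigned node is within k hops of the new job's sender are
# candidates for transfer; hop_distance is passed in as a hypothetical helper.
def transferable_jobs(placed_jobs, sender_node, k, hop_distance):
    """placed_jobs maps job_id -> node currently assigned to that job."""
    return [job_id for job_id, node in placed_jobs.items()
            if hop_distance(node, sender_node) <= k]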

In addition, in the above embodiment the first job refers to the job immediately preceding the second job. In another embodiment, the first job may also be the n-th job before the second job, where n is a positive integer. That is, when the processor 130 receives a new job, it may, for example, trace back to the time point before that n-th previous job was assigned, re-evaluate with the network resources of that time the nodes suitable for processing the jobs from that n-th job up to the newly received job, and reconfigure the nodes handling those jobs, thereby further improving overall computing and network transmission performance.
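A simple way to bound this backtracking, sketched here only as an assumption about one possible implementation, is to keep a fixed-length history of the most recent assignments together with the resource snapshots taken when they were placed:

from collections import deque

# Only the last N assignments are eligible for re-evaluation when a new job
# arrives; N corresponds to the positive integer n described above.
N = 3
recent_assignments = deque(maxlen=N)  # entries: (job_id, node, resource_snapshot)

def remember_assignment(job_id, node, resource_snapshot):
    # Entries older than the last N fall out of the window automatically.
    recent_assignments.append((job_id, node, resource_snapshot))

def reevaluation_window():
    # Jobs (with the snapshots recorded at placement time) to reconsider.
    return list(recent_assignments)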

In summary, in addition to providing a load-balancing mechanism through a cost function, the dynamic work transfer method and server of the present invention, upon receiving a new job, trace back to the network resources at the time other jobs were previously assigned and re-plan the nodes that handle those jobs. In view of the high complexity of reconfiguring all jobs, the present invention, for example, limits the scope of reconfiguration in space (the number of nodes away from the sender of the new job) and in time (the number of jobs that are traced back and reconfigured), so as to optimize the use of dynamic resources.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone with ordinary skill in the art may make slight changes and modifications without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be defined by the appended claims.

100: server
120: storage device
130: processor
200, 400, 500, 700: network architecture
210, 410, 510, 710: cloud node
220~240, 420~440, 520~540, 720~740: edge node
U1~U8, J1, J2: user device
Jt: first job
Jt+1: second job
t: first time point
t+1: second time point
S310~S340: steps of the dynamic work transfer method
S610~S650: steps of the method for determining whether a job needs to be transferred

FIG. 1 is a block diagram of a server according to an embodiment of the invention. FIG. 2 is a schematic diagram of a network architecture according to an embodiment of the invention. FIG. 3 is a flowchart of a dynamic work transfer method according to an embodiment of the invention. FIG. 4 is a schematic diagram of a network architecture in which multiple nodes exchange network resources with each other according to an embodiment of the invention. FIG. 5 is a schematic diagram of calculating the cost of assigning a job to each node in a network architecture according to an embodiment of the invention. FIG. 6 is a flowchart of a method for determining whether a job needs to be transferred according to an embodiment of the invention. FIG. 7 is a schematic diagram of a network architecture for determining whether a job needs to be transferred according to an embodiment of the invention.


Claims (14)

1. A dynamic work transfer method, adapted for a cloud node under a cloud and edge computing architecture, the method comprising the following steps: periodically collecting and recording network resources of a plurality of nodes in a network, the plurality of nodes comprising the cloud node and a plurality of edge nodes; receiving a request for a first job at a first time point, and calculating a cost of assigning the first job to each of the nodes according to the network resources of each of the nodes at the first time point, so as to assign the first job to a first target node among the nodes; receiving a request for a second job at a second time point, calculating a cost of assigning the second job to each of the nodes according to the network resources of each of the nodes at the first time point, determining a second target node suitable for the second job, and determining whether the first job needs to be transferred, wherein the second time point is after the first time point; and assigning the second job and keeping or transferring the first job according to a determination result.

2. The dynamic work transfer method according to claim 1, wherein the step of determining whether the first job needs to be transferred comprises: determining whether the second target node determined to be suitable for the second job is the same as the first target node; if they are different, assigning the second job to the second target node without transferring the first job; if they are the same, determining whether the cost of assigning the second job to the first target node at the first time point is greater than the cost of assigning the first job to the first target node; if so, recalculating the cost of assigning the second job to each of the nodes according to the network resources of each of the nodes at the second time point, so as to assign the second job to a third target node without transferring the first job; and if not, determining whether the cost of assigning the first job at the first time point to a neighboring node adjacent to the first target node is greater than the cost of keeping the first job at the first target node at the second time point, wherein if the cost of assigning the first job at the first time point to the neighboring node adjacent to the first target node is determined to be greater than the cost of keeping the first job at the first target node at the second time point, the first job is not transferred, and if the cost of assigning the first job at the first time point to the neighboring node adjacent to the first target node is determined not to be greater than the cost of keeping the first job at the first target node at the second time point, the first job is transferred to a fourth target node among the neighboring nodes.

3. The dynamic work transfer method according to claim 1, wherein the network resources comprise transmission latencies between each of the edge nodes and each of the other nodes, obtained by each of the edge nodes periodically exchanging messages with the other nodes.

4. The dynamic work transfer method according to claim 1, wherein the step of calculating the cost of assigning the first job to each of the nodes according to the network resources of each of the nodes at the first time point comprises: calculating, according to the network resources of each of the nodes at the first time point, a transmission cost, a computation cost, and a storage cost required for assigning the first job to the node, and calculating a weighted sum of the transmission cost, the computation cost, and the storage cost as the cost of assigning the first job to the node.

5. The dynamic work transfer method according to claim 4, wherein the step of calculating, according to the network resources of each of the nodes at the first time point, the transmission cost, the computation cost, and the storage cost required for assigning the first job to the node comprises: normalizing the network resources of each of the nodes to convert them into the transmission cost, the computation cost, and the storage cost.

6. The dynamic work transfer method according to claim 1, wherein the first job is an n-th job before the second job, where n is a positive integer.

7. The dynamic work transfer method according to claim 1, wherein the step of assigning the second job and keeping or transferring the first job according to the determination result comprises: transferring only a job for which the number of nodes traversed between its assigned node and a sender of the second job is within k, where k is a positive integer.

8. A server, adapted to serve as a cloud node under a cloud and edge computing architecture, comprising: a communication device, connected to a network and communicating with a plurality of edge nodes in the network; a storage device; and a processor, coupled to the communication device and the storage device, and executing programs recorded in the storage device to: periodically collect network resources of a plurality of nodes in the network by using the communication device and record them in the storage device, the plurality of nodes comprising the cloud node and the plurality of edge nodes; receive a request for a first job at a first time point by using the communication device, and calculate a cost of assigning the first job to each of the nodes according to the network resources of each of the nodes at the first time point, so as to assign the first job to a first target node among the nodes; receive a request for a second job at a second time point by using the communication device, calculate a cost of assigning the second job to each of the nodes according to the network resources of each of the nodes at the first time point, determine a second target node suitable for the second job, and determine whether the first job needs to be transferred, wherein the second time point is after the first time point; and assign the second job and keep or transfer the first job according to a determination result.

9. The server according to claim 8, wherein the processor determines whether the second target node determined to be suitable for the second job is the same as the first target node; if they are different, the processor assigns the second job to the second target node without transferring the first job; if they are the same, the processor determines whether the cost of assigning the second job to the first target node at the first time point is greater than the cost of assigning the first job to the first target node; if so, the processor recalculates the cost of assigning the second job to each of the nodes according to the network resources of each of the nodes at the second time point, so as to assign the second job to a third target node without transferring the first job; and if not, the processor determines whether the cost of assigning the first job at the first time point to a neighboring node adjacent to the first target node is greater than the cost of keeping the first job at the first target node at the second time point, wherein if the processor determines that the cost of assigning the first job at the first time point to the neighboring node adjacent to the first target node is greater than the cost of keeping the first job at the first target node at the second time point, the processor does not transfer the first job, and if the processor determines that the cost of assigning the first job at the first time point to the neighboring node adjacent to the first target node is not greater than the cost of keeping the first job at the first target node at the second time point, the processor transfers the first job to a fourth target node among the neighboring nodes.

10. The server according to claim 8, wherein the network resources comprise transmission latencies between each of the edge nodes and each of the other nodes, obtained by each of the edge nodes periodically exchanging messages with the other nodes.

11. The server according to claim 8, wherein the processor calculates, according to the network resources of each of the nodes at the first time point, a transmission cost, a computation cost, and a storage cost required for assigning the first job to the node, and calculates a weighted sum of the transmission cost, the computation cost, and the storage cost as the cost of assigning the first job to the node.

12. The server according to claim 11, wherein the processor normalizes the network resources of each of the nodes to convert them into the transmission cost, the computation cost, and the storage cost.

13. The server according to claim 8, wherein the first job is an n-th job before the second job, where n is a positive integer.

14. The server according to claim 8, wherein the processor only transfers a job for which the number of nodes traversed between its assigned node and a sender of the second job is within k, where k is a positive integer.
TW107100259A 2018-01-04 2018-01-04 Method and server for dynamic work transfer TWI689823B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
TW107100259A TWI689823B (en) 2018-01-04 2018-01-04 Method and server for dynamic work transfer
CN201810146994.6A CN110012044B (en) 2018-01-04 2018-02-12 Dynamic work transfer method and server
US15/924,297 US10764359B2 (en) 2018-01-04 2018-03-19 Method and server for dynamic work transfer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW107100259A TWI689823B (en) 2018-01-04 2018-01-04 Method and server for dynamic work transfer

Publications (2)

Publication Number Publication Date
TW201931145A TW201931145A (en) 2019-08-01
TWI689823B true TWI689823B (en) 2020-04-01

Family

ID=67060023

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107100259A TWI689823B (en) 2018-01-04 2018-01-04 Method and server for dynamic work transfer

Country Status (3)

Country Link
US (1) US10764359B2 (en)
CN (1) CN110012044B (en)
TW (1) TWI689823B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10834017B2 (en) * 2018-11-11 2020-11-10 International Business Machines Corporation Cloud-driven hybrid data flow and collection
JP7150585B2 (en) * 2018-12-06 2022-10-11 エヌ・ティ・ティ・コミュニケーションズ株式会社 Data retrieval device, its data retrieval method and program, edge server and its program
JP7150584B2 (en) 2018-12-06 2022-10-11 エヌ・ティ・ティ・コミュニケーションズ株式会社 Edge server and its program
JP7175731B2 (en) * 2018-12-06 2022-11-21 エヌ・ティ・ティ・コミュニケーションズ株式会社 Storage management device, method and program
US11800166B2 (en) * 2019-10-14 2023-10-24 Qatar Foundation For Education, Science And Community Development Forecasting and reservation of transcoding resources for live streaming
TWI729606B (en) * 2019-12-05 2021-06-01 財團法人資訊工業策進會 Load balancing device and method for an edge computing network
CN112291304B (en) * 2020-09-30 2024-03-29 国电南瑞科技股份有限公司 Edge internet of things proxy equipment and combined message processing method thereof
CN114064294B (en) * 2021-11-29 2022-10-04 郑州轻工业大学 Dynamic resource allocation method and system in mobile edge computing environment
TWI821038B (en) * 2022-11-22 2023-11-01 財團法人工業技術研究院 Computing task dispatching method, terminal electronic device and computing system using the same

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106333A1 (en) * 2010-10-29 2012-05-03 Futurewei Technologies, Inc. Network Aware Global Load Balancing System and Method
US20160299497A1 (en) * 2015-04-09 2016-10-13 Honeywell International Inc. Methods for on-process migration from one type of process control device to different type of process control device

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008128452A1 (en) * 2007-04-18 2008-10-30 Huawei Technologies Co., Ltd. The method, system and cn node for load transferring in the pool area
KR101483497B1 (en) * 2008-09-25 2015-01-20 에스케이텔레콤 주식회사 Video Encoding/Decoding Apparatus and Method of Considering Impulse Signal
CN101753461B (en) * 2010-01-14 2012-07-25 中国建设银行股份有限公司 Method for realizing load balance, load balanced server and group system
US8751638B2 (en) 2010-07-02 2014-06-10 Futurewei Technologies, Inc. System and method to implement joint server selection and path selection
CN102123179A (en) * 2011-03-28 2011-07-13 中国人民解放军国防科学技术大学 Load balancing method and system applied to distributed application system
US8745267B2 (en) 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
CN102970379A (en) * 2012-12-19 2013-03-13 中国电子科技集团公司第十五研究所 Method for realizing load balance among multiple servers
US9065734B2 (en) * 2013-03-08 2015-06-23 Telefonaktiebolaget L M Ericsson (Publ) Network bandwidth allocation in multi-tenancy cloud computing networks
TWM460351U (en) 2013-04-03 2013-08-21 Univ Hungkuang Cloud computation dynamic work load decision device
US9277439B2 (en) 2013-06-28 2016-03-01 Intel Corporation Device-to-device contention management scheme for mobile broadband networks
US9722815B2 (en) 2013-07-10 2017-08-01 Sunil Mukundan Edge-gateway multipath method and system
US9378063B2 (en) 2013-10-15 2016-06-28 Qualcomm Incorporated Mobile coprocessor system and methods
TWI517048B (en) 2013-12-25 2016-01-11 國立屏東科技大學 A method for balancing load of the cloud service
CN103957231B (en) * 2014-03-18 2015-08-26 成都盛思睿信息技术有限公司 A kind of virtual machine distributed task dispatching method under cloud computing platform
US9654401B2 (en) 2014-03-30 2017-05-16 Juniper Networks, Inc. Systems and methods for multipath load balancing
CN104615498B (en) * 2015-01-22 2018-04-03 北京仿真中心 A kind of group system dynamic load balancing method of task based access control migration
CN106154904A (en) 2015-04-24 2016-11-23 绿创新科技股份有限公司 Street lamp intelligent perception monitoring and reporting system
CN105959411A (en) * 2016-06-30 2016-09-21 中原智慧城市设计研究院有限公司 Dynamic load balance distributed processing method in cloud computing environment based on coordination
CN106936892A (en) * 2017-01-09 2017-07-07 北京邮电大学 A kind of self-organizing cloud multi-to-multi computation migration method and system
CN106973021A (en) * 2017-02-27 2017-07-21 华为技术有限公司 The method and node of load balancing in network system
CN106951059A (en) * 2017-03-28 2017-07-14 中国石油大学(华东) Based on DVS and the cloud data center power-economizing method for improving ant group algorithm
CN107220118A (en) * 2017-06-01 2017-09-29 四川大学 Resource pricing is calculated in mobile cloud computing to study with task load migration strategy
CN107196865B (en) * 2017-06-08 2020-07-24 中国民航大学 Load-aware adaptive threshold overload migration method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106333A1 (en) * 2010-10-29 2012-05-03 Futurewei Technologies, Inc. Network Aware Global Load Balancing System and Method
US20160299497A1 (en) * 2015-04-09 2016-10-13 Honeywell International Inc. Methods for on-process migration from one type of process control device to different type of process control device

Also Published As

Publication number Publication date
US10764359B2 (en) 2020-09-01
TW201931145A (en) 2019-08-01
CN110012044B (en) 2022-01-14
CN110012044A (en) 2019-07-12
US20190208006A1 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
TWI689823B (en) Method and server for dynamic work transfer
CN105528330B (en) The method, apparatus of load balancing is gathered together and many-core processor
WO2019001092A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
CN111813330B (en) System and method for dispatching input-output
WO2014208661A1 (en) Device, method, system, and program for designing placement of virtual machine
JPWO2015141337A1 (en) Received packet distribution method, queue selector, packet processing device, program, and network interface card
CN110365748A (en) Treating method and apparatus, storage medium and the electronic device of business datum
CN107302580B (en) Load balancing method and device, load balancer and storage medium
CN112003660B (en) Dimension measurement method of resources in network, calculation force scheduling method and storage medium
CN107707612B (en) Method and device for evaluating resource utilization rate of load balancing cluster
Misra et al. Traffic-aware efficient mapping of wireless body area networks to health cloud service providers in critical emergency situations
CN109818997A (en) A kind of load-balancing method, system and storage medium
Tseng et al. Link-aware virtual machine placement for cloud services based on service-oriented architecture
US8959210B2 (en) Method and device for agile computing
CN103997515A (en) Distributed cloud computing center selection method and application thereof
WO2022142277A1 (en) Method and system for dynamically adjusting communication architecture
JP7458424B2 (en) SYSTEM AND METHOD FOR PROVIDING BIDIRECTIONAL FORWARDING DETECTION WITH PERFORMANCE ROUTING MEASUREMENTS - Patent application
Wilson Prakash et al. DServ‐LB: Dynamic server load balancing algorithm
Feng et al. Topology-aware virtual network embedding through the degree
WO2010118672A1 (en) Method and device for grouping nodes in peer to peer network
JP5135416B2 (en) Network relay device
CN107239407B (en) Wireless access method and device for memory
CN108063814A (en) A kind of load-balancing method and device
Viswanathan et al. An autonomic resource provisioning framework for mobile computing grids
US20130198270A1 (en) Data sharing system, terminal, and data sharing method