WO2015131555A1 - Method, device and main processor for multi-coprocessor load balancing - Google Patents

Method, device and main processor for multi-coprocessor load balancing

Info

Publication number
WO2015131555A1
Authority
WO
WIPO (PCT)
Prior art keywords
allocated
coprocessor
traffic
weight
virtual
Prior art date
Application number
PCT/CN2014/091401
Other languages
English (en)
French (fr)
Inventor
靳康
臧亮
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2015131555A1 publication Critical patent/WO2015131555A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Definitions

  • the invention relates to the field of IPSec (Internet Protocol Security), and in particular to a method, a device and a main processor for implementing multi-coprocessor load balancing.
  • IPSec Internet Protocol Security
  • IPsec is an open-standard framework that uses encrypted security services to ensure private and secure communication over Internet Protocol networks.
  • IPsec defines the security services used at the Internet layer; its functions include data encryption, access control to network elements, data source address verification, data integrity checking, and protection against replay attacks.
  • because IPSec processing is complex and has strict real-time requirements, it places high demands on device processing capacity; a single main processor (MP) has limited processing capacity, and a device usually must handle other functions in addition to IPSec packets, so multiple coprocessors (CPs) need to be configured specifically for IPSec packet processing.
  • MP main processor
  • CPs coprocessors
  • Embodiments of the present invention provide a method, an apparatus, and a main processor for implementing multi-coprocessor load balancing.
  • the main processor obtains the traffic weight of each virtual interface to be allocated
  • the main processor obtains the total traffic weight on each coprocessor; wherein the total traffic weight on each coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
  • the main processor sequentially allocates virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights.
  • the step in which the main processor allocates virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights includes:
  • the main processor sorts all virtual interfaces to be allocated in descending order of traffic weight
  • the main processor sorts all coprocessors in ascending order of total traffic weight
  • the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
  • there are X virtual interfaces to be allocated and Y coprocessors;
  • the step in which the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors includes:
  • when X is greater than Y, after the first Y virtual interfaces to be allocated have been allocated, the main processor re-sorts all coprocessors in ascending order of total traffic weight;
  • the main processor allocates the remaining virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  • the step in which the main processor obtains the traffic weight of each virtual interface to be allocated includes:
  • the main processor quantizes the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
  • another embodiment of the present invention further provides an apparatus for multi-coprocessor load balancing, including:
  • a first obtaining module configured to obtain a traffic weight of each virtual interface to be allocated
  • a second obtaining module configured to obtain a total traffic weight value on each coprocessor; wherein, the total traffic weight on the coprocessor is a sum of traffic weights of all virtual interfaces allocated on the coprocessor;
  • the allocation module is configured to sequentially allocate virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights; wherein the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor.
  • the distribution module includes:
  • the first arranging sub-module is configured to sort all virtual interfaces to be allocated in descending order of traffic weight
  • the second arranging sub-module is configured to sort all coprocessors in ascending order of total traffic weight
  • the allocation sub-module is configured to allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
  • there are X virtual interfaces to be allocated and Y coprocessors;
  • when X is greater than Y, after the allocation sub-module has allocated the Y-th virtual interface, the second arranging sub-module re-sorts all coprocessors in ascending order of total traffic weight;
  • the remaining virtual interfaces to be allocated with large traffic weights are then allocated to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  • the first obtaining module is configured to quantize the traffic weight of each virtual interface to be allocated according to the traffic size corresponding to the virtual interface to be allocated.
  • another embodiment of the present invention further provides a main processor, including the multi-coprocessor load balancing device described above.
  • Another embodiment of the present invention also provides a computer program and a carrier thereof, the computer program comprising program instructions that, when executed by a main processing device, enable the device to implement the multi-coprocessor load balancing method.
  • each virtual interface has a traffic weight corresponding to its own traffic. When coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors can be kept balanced, so that the traffic handled by each coprocessor is balanced; compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
  • FIG. 1 is a schematic diagram of steps of a method for multi-coprocessor load balancing according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of an apparatus for multi-coprocessor load balancing according to an embodiment of the present invention.
  • an embodiment of the present invention provides a method for load balancing, including:
  • Step 11 the main processor obtains the traffic weight of each virtual interface to be allocated
  • Step 12 The main processor obtains a total traffic weight value on each coprocessor; wherein the total traffic weight on the coprocessor is a sum of traffic weights of all virtual interfaces allocated on the coprocessor;
  • Step 13 The main processor sequentially allocates the virtual interfaces to be allocated with large traffic weights to the coprocessors with small total traffic weights.
  • each virtual interface has a traffic weight corresponding to its own traffic, and when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors are kept balanced, so that the traffic handled by each coprocessor is balanced. Compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
  • the main processor quantizes the traffic weight of each virtual interface to be allocated according to the traffic volume corresponding to the virtual interface to be allocated. That is, the size of the traffic weight of the virtual interface can indicate the corresponding traffic size.
  • step 13 specifically includes:
  • Step 131 The main processor performs a descending order on all the virtual interfaces to be allocated according to the size of the traffic weight;
  • Step 132 The main processor performs an ascending order on all coprocessors according to the total value of the traffic weights
  • Step 133 According to the order of the virtual interfaces to be allocated and the order of the coprocessors, the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights;
  • Steps 131 to 133 are described in detail below in conjunction with the embodiments.
  • illustratively, in Embodiment 1, suppose there are three virtual interfaces not yet assigned a coprocessor, namely virtual interfaces 1-3, with traffic weights of 5, 8 and 4 respectively. A device has three coprocessors: coprocessor 1 with a current total traffic weight of 10, coprocessor 2 with a total traffic weight of 8, and coprocessor 3 with a total traffic weight of 13.
  • the coprocessors are sorted in ascending order according to the total value of the traffic weights, that is, the order of the coprocessors is as shown in Table 1:
  • Table 1:
    Sort order    Coprocessor number    Total traffic weight
    1             Coprocessor 2         8
    2             Coprocessor 1         10
    3             Coprocessor 3         13
  • the virtual interfaces not yet assigned a coprocessor are then sorted in descending order of traffic weight.
  • the order of the virtual interfaces is shown in Table 2:
    Sort order    Virtual interface number    Traffic weight
    1             Virtual interface 2         8
    2             Virtual interface 1         5
    3             Virtual interface 3         4
  • virtual interfaces 2, 1 and 3 are selected from Table 2 in order.
  • virtual interface 2 is then allocated to coprocessor 2, first in Table 1;
  • virtual interface 1 is allocated to coprocessor 1, second in Table 1;
  • virtual interface 3 is allocated to coprocessor 3, third in Table 1.
  • after virtual interfaces 1, 2 and 3 are all configured, the total traffic weight of coprocessor 1 is 15, that of coprocessor 2 is 16, and that of coprocessor 3 is 17. Since the total traffic weight represents the total traffic of all virtual interfaces on a coprocessor, the coprocessors achieve load balancing of the traffic once configuration is complete.
  • in addition, the order of the coprocessors can be dynamically updated while the virtual interfaces are being allocated. A feasible implementation is described below.
  • suppose there are X virtual interfaces to be allocated and Y coprocessors;
  • when X is greater than Y, while performing step 133 above, the main processor re-sorts all coprocessors in ascending order of total traffic weight after allocating the Y-th virtual interface; it then allocates the remaining virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  • illustratively, in Embodiment 2, suppose there are six virtual interfaces not yet assigned a coprocessor, namely virtual interfaces 1-6, with traffic weights of 1, 3, 5, 7, 13 and 16 respectively. A device has three coprocessors: coprocessor 1 with a current total traffic weight of 31, coprocessor 2 with a total traffic weight of 32, and coprocessor 3 with a total traffic weight of 33.
  • the coprocessors are sorted in ascending order according to the total value of the traffic weights, that is, the order of the coprocessors is as shown in Table 3:
  • Table 3:
    Sort order    Coprocessor number    Total traffic weight
    1             Coprocessor 1         31
    2             Coprocessor 2         32
    3             Coprocessor 3         33
  • the virtual interfaces not yet assigned a coprocessor are then sorted in descending order of traffic weight; the order of the virtual interfaces is shown in Table 4:
    Sort order    Virtual interface number    Traffic weight
    1             Virtual interface 6         16
    2             Virtual interface 5         13
    3             Virtual interface 4         7
    4             Virtual interface 3         5
    5             Virtual interface 2         3
    6             Virtual interface 1         1
  • since there are three coprocessors, the first three virtual interfaces in Table 4 are selected, namely virtual interfaces 6, 5 and 4.
  • virtual interface 6 is then allocated to coprocessor 1, first in Table 3;
  • virtual interface 5 is allocated to coprocessor 2, second in Table 3;
  • virtual interface 4 is allocated to coprocessor 3, third in Table 3.
  • the coprocessors are then re-sorted in ascending order of total traffic weight; the updated Table 3 is:
    Sort order    Coprocessor number    Total traffic weight
    1             Coprocessor 3         40
    2             Coprocessor 2         45
    3             Coprocessor 1         47
  • the remaining three virtual interfaces are selected from Table 4 in order, namely virtual interfaces 3, 2 and 1.
  • virtual interface 3 is allocated to coprocessor 3, first in the updated Table 3;
  • virtual interface 2 is allocated to coprocessor 2, second in the updated Table 3;
  • virtual interface 1 is allocated to coprocessor 1, third in the updated Table 3.
  • after virtual interfaces 1-6 are all configured, the total traffic weight of coprocessor 1 is 48, that of coprocessor 2 is 48, and that of coprocessor 3 is 45. It can be seen that under the configuration scheme of this embodiment the total traffic weights of coprocessors 1, 2 and 3 approach equilibrium; since the total traffic weight represents the total traffic of all virtual interfaces on a coprocessor, the coprocessors achieve load balancing of the traffic once configuration is complete.
  • in summary, the method of this embodiment truly achieves load balancing of the traffic when allocating coprocessors to the virtual interfaces.
  • another embodiment of the present invention further provides an apparatus for implementing multi-coprocessor load balancing, including:
  • a first obtaining module configured to obtain a traffic weight of each virtual interface to be allocated
  • a second obtaining module configured to obtain a total traffic weight value on each coprocessor; wherein, the total traffic weight on the coprocessor is a sum of traffic weights of all virtual interfaces allocated on the coprocessor;
  • An allocation module configured to sequentially allocate virtual interfaces to be allocated with large traffic weights to coprocessors with small total traffic weights; wherein the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor.
  • each virtual interface has a traffic weight corresponding to its own traffic, and when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors are kept balanced, so that the traffic handled by each coprocessor is balanced. Compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
  • the first obtaining module is configured to quantize the traffic weight of each virtual interface to be allocated according to the traffic size corresponding to the virtual interface to be allocated.
  • the allocation module includes:
  • the first arranging sub-module is configured to sort all virtual interfaces to be allocated in descending order of traffic weight
  • the second arranging sub-module is configured to sort all coprocessors in ascending order of total traffic weight
  • the allocation sub-module is configured to allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
  • the number of virtual interfaces to be allocated is X, and the number of coprocessors is Y;
  • when X is greater than Y, after the allocation sub-module has allocated the Y-th virtual interface, the second arranging sub-module re-sorts all coprocessors in ascending order of total traffic weight;
  • the remaining virtual interfaces to be allocated with large traffic weights are then allocated to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  • the apparatus of this embodiment corresponds to the multi-coprocessor load balancing method provided by the present invention and can achieve the same technical effects.
  • the embodiment of the present invention further provides a main processor, which includes the multi-coprocessor load balancing device provided by the embodiment of the present invention and can keep the traffic of the coprocessors balanced when allocating coprocessors to the virtual interfaces.
  • the embodiment of the invention further provides a computer program and a carrier thereof; the computer program includes program instructions which, when executed by a main processing device, enable the device to implement the multi-coprocessor load balancing method.
  • alternatively, all or part of the steps of the above embodiments may be implemented using integrated circuits; these steps may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the invention is not limited to any specific combination of hardware and software.
  • the devices/function modules/functional units in the above embodiments may be implemented by general-purpose computing devices; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices.
  • when each device/function module/functional unit in the above embodiments is implemented in the form of a software function module and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium.
  • the above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • maintaining the balance of the total traffic weights makes the traffic processed by each coprocessor balanced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method and device for multi-coprocessor load balancing. The method includes: a main processor obtains the traffic weight of each virtual interface to be allocated (11); the main processor obtains the total traffic weight on each coprocessor, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor (12); the main processor sequentially allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights (13). Each virtual interface has a traffic weight corresponding to its own traffic; when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors are kept balanced, so that the traffic handled by each coprocessor is balanced. Compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.

Description

Method, device and main processor for multi-coprocessor load balancing
Technical Field
The present invention relates to the field of IPSec (Internet Protocol Security), and in particular to a method, a device and a main processor for implementing multi-coprocessor load balancing.
Background
IPSec is an open-standard framework that uses encrypted security services to ensure private and secure communication over Internet Protocol networks. IPsec defines the security services used at the Internet layer; its functions include data encryption, access control to network elements, data source address verification, data integrity checking, and protection against replay attacks.
Because IPSec processing is complex and has strict real-time requirements, it places high demands on device processing capacity. A single main processor (MP) has limited processing capacity, and a device usually must handle other functions in addition to IPSec packets; therefore, multiple coprocessors (CPs) need to be configured specifically for IPSec packet processing.
When multiple coprocessors are present, virtual interfaces must be allocated to the coprocessors in a way that preserves load balance. The current method balances the allocation according to the number of virtual interfaces per coprocessor. Since different virtual interfaces carry different amounts of traffic, this existing allocation method cannot truly balance the traffic across the coprocessors.
Summary of the Invention
Embodiments of the present invention provide a method, a device and a main processor for implementing multi-coprocessor load balancing, capable of balancing the traffic handled by each coprocessor.
An embodiment of the present invention provides a method for multi-coprocessor load balancing, including:
a main processor obtains the traffic weight of each virtual interface to be allocated;
the main processor obtains the total traffic weight on each coprocessor, where the total traffic weight on each coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
the main processor sequentially allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
The step in which the main processor sequentially allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights includes:
the main processor sorts all virtual interfaces to be allocated in descending order of traffic weight;
the main processor sorts all coprocessors in ascending order of total traffic weight;
according to the order of the virtual interfaces to be allocated and the order of the coprocessors, the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
There are X virtual interfaces to be allocated and Y coprocessors;
the step in which the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors includes:
when X is greater than Y, after allocating the Y-th virtual interface, the main processor re-sorts all coprocessors in ascending order of total traffic weight;
according to the order of the virtual interfaces to be allocated and the new order of the coprocessors, the main processor allocates the remaining virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
The step in which the main processor obtains the traffic weight of each virtual interface to be allocated includes:
the main processor quantizes the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
In addition, another embodiment of the present invention provides a device for multi-coprocessor load balancing, including:
a first obtaining module, configured to obtain the traffic weight of each virtual interface to be allocated;
a second obtaining module, configured to obtain the total traffic weight on each coprocessor, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
an allocation module, configured to sequentially allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor.
The allocation module includes:
a first arranging sub-module, configured to sort all virtual interfaces to be allocated in descending order of traffic weight;
a second arranging sub-module, configured to sort all coprocessors in ascending order of total traffic weight;
an allocation sub-module, configured to allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
There are X virtual interfaces to be allocated and Y coprocessors;
when X is greater than Y, after the allocation sub-module has allocated the Y-th virtual interface, the second arranging sub-module re-sorts all coprocessors in ascending order of total traffic weight; the remaining virtual interfaces with large traffic weights are then allocated to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
The first obtaining module is configured to quantize the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
In addition, another embodiment of the present invention provides a main processor, including the above device for multi-coprocessor load balancing.
Another embodiment of the present invention further provides a computer program and a carrier thereof; the computer program includes program instructions which, when executed by a main processing device, enable the device to implement the above method for multi-coprocessor load balancing.
The beneficial effects of the above technical solutions of the embodiments of the present invention are as follows:
In the solutions of the embodiments of the present invention, each virtual interface has a traffic weight corresponding to its own traffic. When coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors can be kept balanced, so that the traffic handled by each coprocessor is balanced; compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the steps of a method for multi-coprocessor load balancing according to an embodiment of the present invention;
Figure 2 is a schematic structural diagram of a device for multi-coprocessor load balancing according to an embodiment of the present invention.
Preferred Embodiments of the Invention
Embodiments of the present invention are described in detail below with reference to the drawings. Note that, provided no conflict arises, the embodiments of this application and the features within them may be combined with one another arbitrarily.
As shown in Figure 1, an embodiment of the present invention provides a load balancing method, including:
Step 11: the main processor obtains the traffic weight of each virtual interface to be allocated;
Step 12: the main processor obtains the total traffic weight on each coprocessor, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
Step 13: the main processor sequentially allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
As can be seen from the above description, in the method of this embodiment each virtual interface has a traffic weight corresponding to its own traffic; when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors are kept balanced, so that the traffic handled by each coprocessor is balanced. Compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
Optionally, when performing step 11, the main processor quantizes the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface. That is, the traffic weight of a virtual interface indicates the amount of traffic it carries.
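The quantization in step 11 can be sketched as follows. This is only an illustrative rule, not one fixed by the patent: the function name and the bucket size (one weight unit per 1 Mbps of traffic) are assumptions made for the example.

```python
def quantize_weight(traffic_bps: float, bucket_bps: float = 1_000_000) -> int:
    """Map a virtual interface's measured traffic to an integer weight.

    Hypothetical rule: one weight unit per bucket_bps of traffic,
    with a floor of 1 so that every interface carries some weight.
    """
    return max(1, round(traffic_bps / bucket_bps))


# Heavier interfaces receive larger weights, so weight order mirrors traffic order.
traffic = {"vif1": 5_200_000, "vif2": 8_100_000, "vif3": 3_900_000}
weights = {vif: quantize_weight(bps) for vif, bps in traffic.items()}
```

Any monotone mapping from traffic to weight would serve the same purpose; what matters for the allocation steps that follow is only that a larger weight means more traffic.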
Optionally, step 13 specifically includes:
Step 131: the main processor sorts all virtual interfaces to be allocated in descending order of traffic weight;
Step 132: the main processor sorts all coprocessors in ascending order of total traffic weight;
Step 133: according to the order of the virtual interfaces to be allocated and the order of the coprocessors, the main processor allocates virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
Steps 131-133 are described in detail below with reference to embodiments.
<Embodiment 1>
Illustratively, in Embodiment 1, suppose there are three virtual interfaces not yet assigned a coprocessor, namely virtual interfaces 1-3, with traffic weights of 5, 8 and 4 respectively. A device has three coprocessors: coprocessor 1 with a current total traffic weight of 10, coprocessor 2 with a total traffic weight of 8, and coprocessor 3 with a total traffic weight of 13.
When configuring the virtual interfaces, the coprocessors are sorted in ascending order of total traffic weight; the resulting order is shown in Table 1:
Sort order    Coprocessor number    Total traffic weight
1             Coprocessor 2         8
2             Coprocessor 1         10
3             Coprocessor 3         13
Table 1
Then, according to traffic weight, the virtual interfaces without an assigned coprocessor are sorted in descending order; the resulting order is shown in Table 2:
Sort order    Virtual interface number    Traffic weight
1             Virtual interface 2         8
2             Virtual interface 1         5
3             Virtual interface 3         4
Table 2
Virtual interfaces 2, 1 and 3 are selected from Table 2 in order. Virtual interface 2 is then allocated to coprocessor 2, first in Table 1; virtual interface 1 is allocated to coprocessor 1, second in Table 1; and virtual interface 3 is allocated to coprocessor 3, third in Table 1.
After virtual interfaces 1, 2 and 3 are all configured, the total traffic weight of coprocessor 1 is 15, that of coprocessor 2 is 16, and that of coprocessor 3 is 17. Since the total traffic weight represents the total traffic of all virtual interfaces on a coprocessor, the coprocessors achieve load balancing of the traffic once configuration is complete.
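The one-shot assignment of Embodiment 1 (interfaces sorted by descending weight, coprocessors by ascending total, then paired off) can be sketched as below; the function and identifier names are illustrative, not taken from the patent:

```python
def assign_once(pending: dict, totals: dict) -> dict:
    """Pair interfaces (heaviest first) with coprocessors (lightest first)."""
    vifs = sorted(pending, key=pending.get, reverse=True)  # descending traffic weight
    cps = sorted(totals, key=totals.get)                   # ascending total weight
    placement = {}
    for vif, cp in zip(vifs, cps):
        placement[vif] = cp
        totals[cp] += pending[vif]                         # update the running total
    return placement


# Replaying Embodiment 1: weights 5, 8, 4 against coprocessor totals 10, 8, 13.
totals = {"cp1": 10, "cp2": 8, "cp3": 13}
placement = assign_once({"vif1": 5, "vif2": 8, "vif3": 4}, totals)
# vif2 -> cp2, vif1 -> cp1, vif3 -> cp3; totals end at 15, 16 and 17.
```

Pairing the heaviest pending interface with the currently lightest coprocessor is what drives the totals toward each other.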
In addition, the order of the coprocessors may be updated dynamically while the virtual interfaces are being allocated; a feasible implementation is described below.
On the above basis, there are X virtual interfaces to be allocated and Y coprocessors;
when X is greater than Y, while performing step 133 above, the main processor re-sorts all coprocessors in ascending order of total traffic weight after allocating the Y-th virtual interface; it then allocates the remaining virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
The above scheme is described in detail through an embodiment below.
<Embodiment 2>
Illustratively, in Embodiment 2, suppose there are six virtual interfaces not yet assigned a coprocessor, namely virtual interfaces 1-6, with traffic weights of 1, 3, 5, 7, 13 and 16 respectively. A device has three coprocessors: coprocessor 1 with a current total traffic weight of 31, coprocessor 2 with a total traffic weight of 32, and coprocessor 3 with a total traffic weight of 33.
When configuring the virtual interfaces, the coprocessors are sorted in ascending order of total traffic weight; the resulting order is shown in Table 3:
Sort order    Coprocessor number    Total traffic weight
1             Coprocessor 1         31
2             Coprocessor 2         32
3             Coprocessor 3         33
Table 3
Then, according to traffic weight, the virtual interfaces without an assigned coprocessor are sorted in descending order; the resulting order is shown in Table 4:
Sort order    Virtual interface number    Traffic weight
1             Virtual interface 6         16
2             Virtual interface 5         13
3             Virtual interface 4         7
4             Virtual interface 3         5
5             Virtual interface 2         3
6             Virtual interface 1         1
Table 4
Since there are three coprocessors in total, the first three virtual interfaces in Table 4 are selected, namely virtual interfaces 6, 5 and 4. Virtual interface 6 is then allocated to coprocessor 1, first in Table 3; virtual interface 5 is allocated to coprocessor 2, second in Table 3; and virtual interface 4 is allocated to coprocessor 3, third in Table 3.
The coprocessors are then re-sorted in ascending order of total traffic weight; the corresponding updated Table 3 is as follows:
Sort order    Coprocessor number    Total traffic weight
1             Coprocessor 3         40
2             Coprocessor 2         45
3             Coprocessor 1         47
Updated Table 3
The remaining three virtual interfaces are selected from Table 4 in order, namely virtual interfaces 3, 2 and 1. Virtual interface 3 is allocated to coprocessor 3, first in the updated Table 3; virtual interface 2 is allocated to coprocessor 2, second in the updated Table 3; and virtual interface 1 is allocated to coprocessor 1, third in the updated Table 3.
After virtual interfaces 1-6 are all configured, the total traffic weight of coprocessor 1 is 48, that of coprocessor 2 is 48, and that of coprocessor 3 is 45. It can be seen that under the configuration scheme of this embodiment the total traffic weights of coprocessors 1, 2 and 3 approach equilibrium; since the total traffic weight represents the total traffic of all virtual interfaces on a coprocessor, the coprocessors achieve load balancing of the traffic once configuration is complete.
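The batched variant of Embodiment 2 (when X > Y, re-sort the coprocessors after every Y placements) can be sketched as follows, replaying the numbers above; the function and variable names are assumptions made for illustration:

```python
def assign_batched(pending: dict, totals: dict) -> dict:
    """Greedy batched assignment: after every len(totals) placements,
    re-sort the coprocessors by their updated total traffic weights."""
    vifs = sorted(pending, key=pending.get, reverse=True)  # descending traffic weight
    placement = {}
    y = len(totals)
    for start in range(0, len(vifs), y):
        cps = sorted(totals, key=totals.get)               # re-sort before each batch
        for vif, cp in zip(vifs[start:start + y], cps):
            placement[vif] = cp
            totals[cp] += pending[vif]
    return placement


# Replaying Embodiment 2: six interfaces against three coprocessors.
totals = {"cp1": 31, "cp2": 32, "cp3": 33}
weights = {"vif1": 1, "vif2": 3, "vif3": 5, "vif4": 7, "vif5": 13, "vif6": 16}
assign_batched(weights, totals)
# Batch 1: vif6->cp1, vif5->cp2, vif4->cp3 (totals become 47, 45, 40);
# batch 2 after re-sorting: vif3->cp3, vif2->cp2, vif1->cp1 (totals 48, 48, 45).
```

As the note below observes, re-sorting could equally be done after every single placement; the batch size Y used here simply matches the scheme of Embodiment 2.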
It should be noted that Embodiment 2 is merely one feasible implementation; the method of the present invention may also re-sort the coprocessors in ascending order after each individual virtual interface is allocated.
In summary, the method of this embodiment truly achieves load balancing of the traffic when allocating coprocessors to virtual interfaces.
In addition, as shown in Figure 2, another embodiment of the present invention provides a device for implementing multi-coprocessor load balancing, including:
a first obtaining module, configured to obtain the traffic weight of each virtual interface to be allocated;
a second obtaining module, configured to obtain the total traffic weight on each coprocessor, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
an allocation module, configured to sequentially allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights, where the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor.
As can be seen from the above description, in the device of this embodiment each virtual interface has a traffic weight corresponding to its own traffic; when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors are kept balanced, so that the traffic handled by each coprocessor is balanced. Compared with allocating simply according to the number of virtual interfaces per coprocessor, the load-balancing effect is more pronounced.
Optionally, the first obtaining module is configured to quantize the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
In addition, on the above basis, the allocation module includes:
a first arranging sub-module, configured to sort all virtual interfaces to be allocated in descending order of traffic weight;
a second arranging sub-module, configured to sort all coprocessors in ascending order of total traffic weight;
an allocation sub-module, configured to allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
Optionally, there are X virtual interfaces to be allocated and Y coprocessors;
when X is greater than Y, after the allocation sub-module has allocated the Y-th virtual interface, the second arranging sub-module re-sorts all coprocessors in ascending order of total traffic weight; the remaining virtual interfaces with large traffic weights are then allocated to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
Clearly, the device of this embodiment corresponds to the method for multi-coprocessor load balancing provided by the present invention, and both can achieve the same technical effects.
In addition, an embodiment of the present invention further provides a main processor, including the device for multi-coprocessor load balancing provided by the embodiments of the present invention, which can keep the traffic of the coprocessors balanced when allocating coprocessors to the virtual interfaces.
An embodiment of the present invention further provides a computer program and a carrier thereof; the computer program includes program instructions which, when executed by a main processing device, enable the device to implement the above method for multi-coprocessor load balancing.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented as computer program flows; the computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, apparatus, device or component), and its execution includes one of, or a combination of, the steps of the method embodiments.
Optionally, all or part of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The devices/function modules/functional units in the above embodiments may be implemented by general-purpose computing devices; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices.
When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Industrial Applicability
In the method and device for multi-coprocessor load balancing provided by the embodiments of the present invention, each virtual interface has a traffic weight corresponding to its own traffic; when coprocessors are allocated to the virtual interfaces, the total traffic weights of the coprocessors can be kept balanced, so that the traffic handled by each coprocessor is balanced.

Claims (11)

  1. A method for multi-coprocessor load balancing, comprising:
    a main processor obtaining the traffic weight of each virtual interface to be allocated;
    the main processor obtaining the total traffic weight on each coprocessor, wherein the total traffic weight on each coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
    the main processor sequentially allocating virtual interfaces with large traffic weights to coprocessors with small total traffic weights.
  2. The method according to claim 1, wherein
    the main processor sequentially allocating virtual interfaces with large traffic weights to coprocessors with small total traffic weights comprises:
    the main processor sorting all virtual interfaces to be allocated in descending order of traffic weight;
    the main processor sorting all coprocessors in ascending order of total traffic weight;
    the main processor allocating virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
  3. The method according to claim 2, wherein
    there are X virtual interfaces to be allocated and Y coprocessors;
    the main processor allocating virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors comprises:
    when X is greater than Y, after allocating the Y-th virtual interface, the main processor re-sorting all coprocessors in ascending order of total traffic weight;
    the main processor allocating the remaining virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  4. The method according to claim 1, wherein
    the main processor obtaining the traffic weight of each virtual interface to be allocated comprises:
    the main processor quantizing the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
  5. A device for multi-coprocessor load balancing, comprising:
    a first obtaining module, configured to obtain the traffic weight of each virtual interface to be allocated;
    a second obtaining module, configured to obtain the total traffic weight on each coprocessor, wherein the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor;
    an allocation module, configured to sequentially allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights, wherein the total traffic weight on a coprocessor is the sum of the traffic weights of all virtual interfaces allocated to that coprocessor.
  6. The device according to claim 5, wherein
    the allocation module comprises:
    a first arranging sub-module, configured to sort all virtual interfaces to be allocated in descending order of traffic weight;
    a second arranging sub-module, configured to sort all coprocessors in ascending order of total traffic weight;
    an allocation sub-module, configured to allocate virtual interfaces with large traffic weights to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the order of the coprocessors.
  7. The device according to claim 6, wherein
    there are X virtual interfaces to be allocated and Y coprocessors;
    when X is greater than Y, after the allocation sub-module has allocated the Y-th virtual interface, the second arranging sub-module re-sorts all coprocessors in ascending order of total traffic weight; the remaining virtual interfaces with large traffic weights are then allocated to coprocessors with small total traffic weights according to the order of the virtual interfaces to be allocated and the new order of the coprocessors.
  8. The device according to claim 5, wherein
    the first obtaining module is configured to quantize the traffic weight of each virtual interface to be allocated according to the amount of traffic corresponding to that interface.
  9. A main processor, comprising the device according to any one of claims 5-8.
  10. A computer program, comprising program instructions which, when executed by a main processing device, enable the device to implement the method according to any one of claims 1-4.
  11. A carrier carrying the computer program according to claim 10.
PCT/CN2014/091401 2014-09-29 2014-11-18 Method, device and main processor for multi-coprocessor load balancing WO2015131555A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410514102.5A CN105530192B (zh) 2014-09-29 2014-09-29 Method, device and main processor for multi-coprocessor load balancing
CN201410514102.5 2014-09-29

Publications (1)

Publication Number Publication Date
WO2015131555A1 true WO2015131555A1 (zh) 2015-09-11

Family

ID=54054438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/091401 WO2015131555A1 (zh) 2014-09-29 2014-11-18 Method, device and main processor for multi-coprocessor load balancing

Country Status (2)

Country Link
CN (1) CN105530192B (zh)
WO (1) WO2015131555A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846420B (zh) * 2017-12-20 2021-07-20 深圳市沃特沃德股份有限公司 Method for communication matching with a coprocessor, and vehicle-mounted main system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217467A (zh) * 2007-12-28 2008-07-09 杭州华三通信技术有限公司 Inter-core load distribution device and method
CN102223395A (zh) * 2011-05-11 2011-10-19 田文洪 Method and device for dynamic load balancing of radio frequency identification network middleware
CN102387071A (zh) * 2011-10-12 2012-03-21 苏州阔地网络科技有限公司 Network load balancing method, processor and system
CN103500124A (zh) * 2013-10-22 2014-01-08 中国农业银行股份有限公司 Method and system for allocating data to multiple graphics processors


Also Published As

Publication number Publication date
CN105530192B (zh) 2019-08-23
CN105530192A (zh) 2016-04-27

Similar Documents

Publication Publication Date Title
CN105391797B SDN-based cloud server load balancing method and device
AU2019216649B2 (en) Method and system for providing reference architecture pattern-based permissions management
Gopinath et al. An in-depth analysis and study of Load balancing techniques in the cloud computing environment
CN103117947B Load sharing method and device
CN104780115B Load balancing method and system in a cloud computing environment
US11762699B2 (en) Assignment of resources to database connection processes based on application information
WO2016041421A1 Network communication method and client
CN104995604A Resource allocation method and device for virtual machines
CN108989110A Method for constructing a VPC network model and related device
WO2016095758A1 Cross-board forwarding method and device
CN106844397A Task transmission method, device and system based on database and table sharding
Qiao et al. Preliminary interference study about job placement and routing algorithms in the fat-tree topology for HPC applications
US20190065265A1 (en) Performance characterization for datacenters
Alhazmi et al. Optimized provisioning of SDN-enabled virtual networks in geo-distributed cloud computing datacenters
WO2015131555A1 Method, device and main processor for multi-coprocessor load balancing
US10616116B1 (en) Network traffic load balancing using rotating hash
WO2018181840A1 (ja) 制御装置、制御システム、制御方法及びプログラム
CN105283864A Managing bare metal clients
US8830854B2 (en) System and method for managing parallel processing of network packets in a wireless access device
CN103905473B Cloud computing system, load balancing system, and load balancing method and device
Banerjee et al. An approach toward amelioration of a new cloudlet allocation strategy using Cloudsim
CN108829340A (zh) 存储处理方法、装置、存储介质及处理器
WO2015139433A1 Method, device and main processor for load balancing of static IPSec virtual interfaces
WO2017023256A1 (en) Cloud provisioning for networks
CN106982169A Packet forwarding method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14884394

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14884394

Country of ref document: EP

Kind code of ref document: A1