WO2016074497A1 - Method, master device, standby device and system for implementing division of labor in a stacking system - Google Patents

Method, master device, standby device and system for implementing division of labor in a stacking system

Info

Publication number
WO2016074497A1
WO2016074497A1 (PCT/CN2015/084962, CN2015084962W)
Authority
WO
WIPO (PCT)
Prior art keywords
management
message
policy
standby
master
Prior art date
Application number
PCT/CN2015/084962
Other languages
English (en)
French (fr)
Inventor
潘庭山
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2016074497A1

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 - Configuration management of networks or network elements
    • H04L41/0894 - Policy-based network configuration management

Definitions

  • This application relates to, but is not limited to, stacking techniques.
  • Device stacking technology is widely used in communications, computers and other fields.
  • Device stacking technology refers to cascading multiple devices with the same or similar functions and performance to form a stacking system that can logically be regarded as a single device.
  • For example, in the communications field, multiple network devices with the same or similar functions and performance, such as switches, are cascaded through interfaces such as Ethernet interfaces so that they logically form a single network device.
  • In the communications field, device stacking technology has many advantages; for example, it reduces the complexity of upgrading device functionality and/or performance, thereby ensuring the long-term market competitiveness of the devices.
  • Each device in a stacking system has a unique device identifier. In terms of the role a device plays, the devices in a stacking system can be divided into three roles: the master device, which manages the entire stacking system, that is, carries out the management function of the whole stacking system; the slave devices, which perform the service functions of the stacking system under the management of the master device; and the standby device, which takes over from the master device when the master device fails and acts as a slave device when the master device works normally.
  • Besides managing the entire stacking system, the master device also acts as a slave device.
  • Usually, the service function in a stacking system mainly consists of processing each type of message according to the configured processing policy; for example, a switch stacking system processes and forwards the packets of each protocol according to the configured processing policy.
  • The management function mainly consists of determining and setting a processing policy for each type of message according to the management results obtained by executing the management function.
  • For example, the management function in a switch stacking system determines the processing policy of each protocol packet in multiple ways. Taking the switch stacking system as an example, because there are often very many types of protocol packets, and protocol packets carrying different identities such as MAC addresses and/or IP addresses usually correspond to different processing policies, the work of determining the corresponding processing policies is very complicated.
  • In addition, the protocol packets exchanged between two communicating parties through the switch stacking system are often correlated: the two parties often need to exchange multiple kinds of protocol packets to establish communication and, correspondingly, need to exchange multiple kinds of protocol packets to close the communication, so the management function needs to manage the whole process of the communication between the two parties.
  • The management function manages the whole communication process of each pair of communicating parties by saving the management results, which makes the management function even more complicated.
  • Because every device in the stacking system has the same or similar functions and performance, the devices elect the master device and the standby device according to a relatively equal election strategy, and the other devices that are not elected as the master device naturally become slave devices. At present, equal election strategies are widely adopted in stacking systems. Because the election strategy is equal, every device in the stacking system has the opportunity to become the master device, the standby device or a slave device, which increases the flexibility of the stacking system.
  • the master device in the stack system needs to play a dual role. Therefore, compared with the standby device or the slave device, the master device is the device with the highest processing pressure in the stack system, and often becomes the performance bottleneck of the stack system.
  • The equality of the election strategy also makes it impractical to improve the performance of the master device by improving the performance of every device in the stacking system, because doing so would multiply the cost of the stacking system.
  • In current stacking systems, the pressure on the master device is reduced by having the slave devices share the management function of the master device as much as possible. Although this reduces, to some extent, the possibility that the master device becomes a performance bottleneck of the stacking system and thereby improves the performance of the stacking system, the slave devices have only limited knowledge of the whole stacking system, so the part of the master device's management function that they can share is limited, and much of the management work still has to be completed by the master device.
  • Therefore, when the load of the stacking system increases further, the master device remains a performance bottleneck of the stacking system. In other words, with this master-slave approach, once the load of the stacking system increases to a certain extent, the master device can still become a performance bottleneck and affect the performance of the stacking system.
  • This document provides a method for implementing the division of labor in a stacking system, which enables the standby device to further share the processing pressure of the master device, thereby improving the performance of the stacking system.
  • a method for implementing division of labor in a stacking system comprising:
  • When the master device detects that its load is greater than a preset high load threshold, it sends the corresponding management messages to the standby device according to a preset division of labor policy indicating for which management messages the standby device performs the management function.
  • the standby device executes a management function according to the received management message, and generates a policy message for setting a processing policy according to the management result obtained by the execution, and sends the policy message to the device that generates the management message.
  • the management message includes a source device identifier, and the policy message includes a source device identifier and a destination device identifier.
  • Generating the policy message for setting the processing policy according to the management result obtained and sending it to the device that generated the management message includes: setting the destination device identifier in the policy message to the source device identifier in the management message; determining, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if it is a slave device, setting the source device identifier in the policy message to the device identifier of the master device and sending the policy message; and if it is the master device, setting the source device identifier in the policy message to the device identifier of the standby device and sending the policy message.
  • the load of the primary device is the utilization rate of the primary processing unit of the primary device, and the high load threshold is a product of the utilization rate of the primary processing unit of the primary device and a preset first proportional parameter.
  • After the standby device executes the management function according to the received management message, generates the policy message for setting the processing policy according to the management result obtained, and sends it to the device that generated the management message, the method further includes: when the master device detects that its load is less than a preset low load threshold, the master device stops sending the management messages to the standby device.
  • the low load threshold is a product of a utilization rate of a main processing unit of the master device and a preset second proportional parameter.
  • the second proportional parameter is smaller than the first proportional parameter.
  • While the standby device executes the management function, the method of the embodiment of the present invention further includes: the standby device synchronizing the management results generated by executing the management function to the master device.
  • A master device for implementing division of labor in a stacking system includes a detecting unit, a judging unit and a processing unit, wherein
  • the detecting unit is configured to: detect whether the load of the master device is greater than a preset high load threshold;
  • the determining unit is configured to: when the detection result from the detecting unit is greater than the high load threshold, determine whether the management message is processed by the standby device according to a predetermined division of labor policy for indicating which management messages are performed by the standby device;
  • the processing unit is configured to: when the judgment result from the judging unit is a standby device, send a corresponding management message to the standby device.
  • The load of the master device is the utilization rate of the main processing unit of the master device, and the high load threshold is the product of that utilization rate and the first proportional parameter.
  • the detecting unit is further configured to: detect whether the load of the primary device is less than a preset low load threshold;
  • the processing unit is further configured to: stop sending the management message to the standby device when the detection result from the detecting unit is less than a low load threshold.
  • the load of the primary device is the utilization rate of the primary processing unit of the primary device, and the low load threshold is a product of the utilization rate of the primary processing unit of the primary device and the second proportional parameter.
  • a backup device for implementing a division of labor in a stacking system comprising a management unit and a sending unit, wherein
  • a management unit configured to: execute a management function according to the received management message, and generate a policy message for setting a processing policy according to the management result obtained by the execution;
  • the sending unit is configured to: send a policy message from the management unit to the device that generates the management message.
  • the management message includes a source device identifier; the policy message includes a source device identifier and a destination device identifier; and the sending unit is configured to:
  • set the destination device identifier in the policy message to the source device identifier in the management message; determine, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, set the source device identifier in the policy message to the device identifier of the master device and send the policy message; and if the determination result is the master device, set the source device identifier in the policy message to the device identifier of the standby device and send the policy message.
  • the standby device of the embodiment of the present invention further includes a synchronization unit configured to: synchronize the management result from the management unit to the master device.
  • a system for implementing division of labor in a stacking system comprising the master device and the standby device.
  • a computer readable storage medium storing computer executable instructions for performing the method of any of the above.
  • Compared with the related art, the technical solution of the embodiments of the present invention includes: when the master device detects that its load is greater than a preset high load threshold, sending the corresponding management messages to the standby device according to a preset division of labor policy indicating for which management messages the standby device performs the management function.
  • the standby device performs a management function according to the received management message, and generates a policy message for setting the processing policy according to the management result obtained by the execution, and sends the policy message to the device that generates the management message.
  • With this technical solution, when the processing pressure of the master device in the stacking system is high, that is, when the load of the master device is greater than the preset high load threshold, the master device offloads part of the management function to the standby device by sending part of the management messages to the standby device, which effectively reduces the processing pressure of the master device, further reduces the possibility that the master device becomes a performance bottleneck of the stacking system, and thereby improves the performance of the stacking system.
  • FIG. 1 is a flowchart of a method for implementing division of labor in a stacking system according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of an apparatus for implementing division of labor in a stacking system according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a master device for implementing division of labor in a stacking system according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a standby device for implementing division of labor in a stacking system according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for implementing a division of labor in a stacking system according to an embodiment of the present invention, including:
  • Step 101: When the master device detects that its load is greater than the preset high load threshold, it sends the corresponding management messages to the standby device according to the preset division of labor policy indicating for which management messages the standby device performs the management function.
  • the management message is a message for which a management function needs to be performed.
  • the device that generates the management message is the master device, the slave device, or the standby device.
  • the high load threshold is the product of the utilization rate of the primary processing unit of the primary device and the first proportional parameter.
  • the first proportional parameter is greater than 0.5.
  • the first proportional parameter can be 0.55, 0.6, or 0.65.
  • Optionally, the ratio of the management messages that the division of labor policy assigns to the standby device to all management messages is about 50%.
  • Taking a switch stacking system as an example, the division of labor policy may be implemented as a preconfigured forwarding table such as an access control list (ACL).
  • In that case, sending the corresponding management messages to the standby device according to the division of labor policy in this step includes: for each management message, such as a protocol packet that needs to be processed by the management function, the master switch queries the ACL; if the query result indicates that the packet is to be processed by the standby switch, the master switch sends the protocol packet to the standby switch; otherwise, the master switch continues to process the protocol packet itself.
  • The ACL indicates for which protocol packets requiring management processing the standby switch performs the management function, and the ratio of the protocol packets that the ACL assigns to the standby switch to all protocol packets requiring management processing is about 50%.
  • Step 102: The standby device executes the management function according to the received management message, generates a policy message for setting a processing policy according to the management result obtained, and sends the policy message to the device that generated the management message.
  • the source device identifier is included in the management message; the source device identifier and the destination device identifier are included in the policy message.
  • In this step, sending the policy message to the device that generated the management message includes:
  • setting the destination device identifier in the policy message to the source device identifier in the management message; determining, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, setting the source device identifier in the policy message to the device identifier of the master device and sending the policy message; and if the determination result is the master device, setting the source device identifier in the policy message to the device identifier of the standby device and sending the policy message.
  • In this way, it is ensured that the slave devices do not need to be aware of, or specially process, the policy messages produced by the standby device executing the management function, which simplifies the design of the stacking system and increases the scalability of the stacking system.
  • the policy message is a packet including a processing policy.
  • Optionally, the method may further include:
  • when the master device detects that its load is less than a preset low load threshold, the master device stops sending the management messages to the standby device.
  • That is, while the master device is sending the management messages corresponding to the division of labor policy to the standby device, if it detects that its load is less than the preset low load threshold, the master device carries out this stopping process.
  • This ensures that, when the processing pressure on the master device is low, the system returns to the master device performing all management functions while the performance of the stacking system is maintained; compared with the master and standby devices executing the management function at the same time, this reduces the power consumption of the stacking system.
  • While the master device is processing the management messages alone, if it detects that its load is less than the preset low load threshold, the master device takes no action.
  • the low load threshold is the product of the utilization rate of the primary processing unit of the primary device and the second proportional parameter.
  • the second proportional parameter is smaller than the first proportional parameter.
  • the second ratio parameter is less than 0.5.
  • the second proportional parameter can be 0.35, 0.4, or 0.45.
  • To ensure that, after the master device stops sending management messages to the standby device, the master device can seamlessly take over and perform all the management functions, the method of the embodiment of the present invention further includes, while the standby device executes the management function: the standby device synchronizing the management results generated by executing the management function to the master device. Here, a management result message containing the management result is sent to the master device, so that the standby device synchronizes the management results to the master device.
  • the management result message is a packet including the management result.
  • FIG. 2 is a schematic structural diagram of a system for implementing a division of labor in a stacking system according to an embodiment of the present invention, including a master device 21 and a standby device 22.
  • As shown in FIG. 3 and FIG. 4, the master device includes a detecting unit 211, a determining unit 212 and a processing unit 213, and the standby device includes a management unit 221 and a sending unit 222, where
  • the detecting unit 211 is configured to: detect whether the load of the master device is greater than a preset high load threshold;
  • the determining unit 212 is configured to: when the detection result from the detecting unit 211 is that the load is greater than the high load threshold, determine, according to the preset division of labor policy indicating for which management messages the standby device performs the management function, whether a management message is to be processed by the standby device;
  • the processing unit 213 is configured to: when the determination result from the determining unit 212 is the standby device, send the corresponding management message to the standby device.
  • the management unit 221 is configured to: execute a management function according to the received management message, and generate a policy message for setting a processing policy according to the management result obtained by the execution;
  • the sending unit 222 is configured to: send the policy message from the management unit 221 to the device that generates the management message.
  • the high load threshold is the product of the utilization rate of the primary processing unit of the primary device and the first proportional parameter.
  • the detecting unit 211 is further configured to: detect whether the load of the primary device is less than a preset low load threshold;
  • the processing unit 213 is further configured to stop transmitting the management message to the standby device when the detection result from the detecting unit 211 is less than the low load threshold.
  • the low load threshold is the product of the utilization rate of the primary processing unit of the primary device and the second proportional parameter.
  • the second proportional parameter is smaller than the first proportional parameter.
  • the source device identifier is included in the management message; the source device identifier and the destination device identifier are included in the policy message;
  • the sending unit 222 is configured to: set the destination device identifier in the policy message to the source device identifier in the management message; determine, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, set the source device identifier in the policy message to the device identifier of the master device and send the policy message; and if the determination result is the master device, set the source device identifier in the policy message to the device identifier of the standby device and send the policy message.
  • the standby device may further include a synchronization unit configured to: synchronize management results from the management unit to the master device.
  • Optionally, all or some of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be made into individual integrated circuit modules, or several of the modules or steps may be made into a single integrated circuit module.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer readable storage medium.
  • the above mentioned computer readable storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
  • With the embodiments of the present invention, when the processing pressure of the master device in the stacking system is high, that is, when the load of the master device is greater than the preset high load threshold, the master device offloads part of the management function to the standby device by sending part of the management messages to the standby device, which effectively reduces the processing pressure of the master device and reduces the possibility that the master device becomes a performance bottleneck of the stacking system, thereby improving the performance of the stacking system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Telephonic Communication Services (AREA)
  • Hardware Redundancy (AREA)

Abstract

Disclosed herein are a method for implementing division of labor in a stacking system, a master device, a standby device and a system. The method includes: when the master device detects that its load is greater than a preset high load threshold, sending the corresponding management messages to the standby device according to a preset division of labor policy indicating for which management messages the standby device performs the management function; and the standby device executing the management function according to the received management messages, generating, according to the management results obtained, policy messages for setting processing policies, and sending them to the devices that generated the management messages.

Description

Method, Master Device, Standby Device and System for Implementing Division of Labor in a Stacking System
Technical Field
This application relates to, but is not limited to, stacking technology.
Background
Device stacking technology is widely used in the communications, computer and other fields. Device stacking technology refers to cascading multiple devices with the same or similar functions and performance to form a stacking system that can logically be regarded as a single device. For example, in the communications field, multiple network devices with the same or similar functions and performance, such as switches, are cascaded by connecting them through interfaces such as Ethernet interfaces, so that they logically form a single network device. In the communications field, device stacking technology has many advantages; for example, it reduces the complexity of upgrading device functionality and/or performance, thereby ensuring the long-term market competitiveness of the devices.
Each device in a stacking system has a unique device identifier. In terms of the role a device plays, the devices in a stacking system can be divided into three roles: the master device, which is responsible for managing the whole stacking system, that is, which carries out the management function of the whole stacking system; the slave devices, which perform the service functions of the stacking system under the management of the master device; and the standby device, which takes over from the master device when the master device fails and acts as a slave device when the master device works normally. Besides being responsible for managing the whole stacking system, the master device also acts as a slave device. Usually, the service function in a stacking system mainly consists of processing each type of message according to the configured processing policy; for example, a switch stacking system processes and forwards the packets of each protocol according to the configured processing policy. The management function mainly consists of determining and setting a processing policy for each type of message according to the management results obtained by executing the management function; for example, the management function in a switch stacking system determines the processing policy of each protocol packet in multiple ways. Taking the switch stacking system as an example, because there are often very many types of protocol packets, and protocol packets carrying different identities such as MAC addresses and/or IP addresses usually correspond to different processing policies, the work of determining the corresponding processing policies is very complicated. In addition, the protocol packets exchanged between two communicating parties through the switch stacking system are often correlated: the two parties often need to exchange multiple kinds of protocol packets to establish communication and, correspondingly, need to exchange multiple kinds of protocol packets to close the communication, so the management function needs to manage the whole process of the communication between the two parties. The management function manages the whole communication process of each pair of communicating parties by saving the management results, which makes the management function even more complicated.
Because every device in a stacking system has the same or similar functions and performance, the devices elect the master device and the standby device according to a relatively equal election strategy, and the other devices that are not elected as the master device naturally become slave devices. At present, equal election strategies are widely adopted in stacking systems. Because the election strategy is equal, every device in the stacking system has the opportunity to become the master device, the standby device or a slave device, which increases the flexibility of the stacking system.
It is not difficult to see that the master device in a stacking system has to play a dual role, so compared with the standby device or the slave devices it is the device under the greatest processing pressure in the stacking system and often becomes the performance bottleneck of the stacking system. The equality of the election strategy also makes it impractical to improve the performance of the master device by improving the performance of every device in the stacking system, because doing so would multiply the cost of the stacking system. In current stacking systems, the pressure on the master device is reduced by having the slave devices share the management function of the master device as much as possible. Although this reduces, to some extent, the possibility that the master device becomes the performance bottleneck of the stacking system and improves the performance of the stacking system, the slave devices have only limited knowledge of the whole stacking system, so the part of the master device's management function they can share is limited, and much of the management work still has to be completed by the master device. Therefore, when the load of the stacking system increases further, the master device remains the performance bottleneck of the stacking system. In other words, with this master-slave approach, once the load of the stacking system increases to a certain extent, the master device can still become the performance bottleneck of the stacking system and affect its performance.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
This document provides a method for implementing division of labor in a stacking system, which enables the standby device to further share the processing pressure of the master device, thereby improving the performance of the stacking system.
A method for implementing division of labor in a stacking system includes:
when a master device detects that its load is greater than a preset high load threshold, sending corresponding management messages to a standby device according to a preset division of labor policy indicating for which management messages the standby device performs a management function; and
the standby device executing the management function according to the received management messages, generating, according to the management results obtained, policy messages for setting processing policies, and sending them to the devices that generated the management messages.
The management message includes a source device identifier, and the policy message includes a source device identifier and a destination device identifier; generating the policy message for setting the processing policy according to the management result obtained and sending it to the device that generated the management message includes:
setting the destination device identifier in the policy message to the source device identifier in the management message;
determining, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device;
if the determination result is a slave device, setting the source device identifier in the policy message to the device identifier of the master device and sending the policy message; and
if the determination result is the master device, setting the source device identifier in the policy message to the device identifier of the standby device and sending the policy message.
The load of the master device is the utilization rate of the main processing unit of the master device, and the high load threshold is the product of the utilization rate of the main processing unit of the master device and a preset first proportional parameter.
Optionally,
after the standby device executes the management function according to the received management message, generates the policy message for setting the processing policy according to the management result obtained, and sends it to the device that generated the management message, the method further includes: when the master device detects that its load is less than a preset low load threshold, the master device stopping sending the management messages to the standby device.
The low load threshold is the product of the utilization rate of the main processing unit of the master device and a preset second proportional parameter.
Optionally, the second proportional parameter is smaller than the first proportional parameter.
Optionally,
while the standby device executes the management function, the method of the embodiment of the present invention further includes: the standby device synchronizing the management results generated by executing the management function to the master device.
A master device for implementing division of labor in a stacking system includes a detecting unit, a judging unit and a processing unit, wherein
the detecting unit is configured to detect whether the load of the master device is greater than a preset high load threshold;
the judging unit is configured to, when the detection result from the detecting unit is that the load is greater than the high load threshold, judge, according to a preset division of labor policy indicating for which management messages a standby device performs a management function, whether a management message is to be processed by the standby device; and
the processing unit is configured to, when the judgment result from the judging unit is the standby device, send the corresponding management message to the standby device.
The load of the master device is the utilization rate of the main processing unit of the master device, and the high load threshold is the product of the utilization rate of the main processing unit of the master device and a first proportional parameter.
Optionally,
the detecting unit is further configured to detect whether the load of the master device is less than a preset low load threshold; and
the processing unit is further configured to, when the detection result from the detecting unit is that the load is less than the low load threshold, stop sending the management messages to the standby device.
The load of the master device is the utilization rate of the main processing unit of the master device, and the low load threshold is the product of the utilization rate of the main processing unit of the master device and a second proportional parameter.
A standby device for implementing division of labor in a stacking system includes a management unit and a sending unit, wherein
the management unit is configured to execute a management function according to a received management message and generate, according to the management result obtained, a policy message for setting a processing policy; and
the sending unit is configured to send the policy message from the management unit to the device that generated the management message.
The management message includes a source device identifier; the policy message includes a source device identifier and a destination device identifier; and the sending unit is configured to:
set the destination device identifier in the policy message to the source device identifier in the management message; determine, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, set the source device identifier in the policy message to the device identifier of the master device and send the policy message; and if the determination result is the master device, set the source device identifier in the policy message to the device identifier of the standby device and send the policy message.
Optionally,
the standby device of the embodiment of the present invention further includes a synchronization unit configured to synchronize the management results from the management unit to the master device.
A system for implementing division of labor in a stacking system includes the above master device and the above standby device.
A computer readable storage medium stores computer executable instructions for performing any one of the above methods.
Compared with the related art, the technical solution of the embodiments of the present invention includes: when the master device detects that its load is greater than a preset high load threshold, sending the corresponding management messages to the standby device according to a preset division of labor policy indicating for which management messages the standby device performs the management function; and the standby device executing the management function according to the received management messages, generating, according to the management results obtained, policy messages for setting processing policies, and sending them to the devices that generated the management messages. With this technical solution, when the processing pressure of the master device in the stacking system is high, that is, when the load of the master device is greater than the preset high load threshold, the master device offloads part of the management function to the standby device by sending part of the management messages to the standby device, which effectively reduces the processing pressure of the master device, further reduces the possibility that the master device becomes the performance bottleneck of the stacking system, and thereby improves the performance of the stacking system.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for implementing division of labor in a stacking system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an apparatus for implementing division of labor in a stacking system according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a master device for implementing division of labor in a stacking system according to an embodiment of the present invention; and
FIG. 4 is a schematic structural diagram of a standby device for implementing division of labor in a stacking system according to an embodiment of the present invention.
Embodiments of the Present Invention
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other arbitrarily.
The steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
FIG. 1 is a flowchart of a method for implementing division of labor in a stacking system according to an embodiment of the present invention, and the method includes the following steps.
Step 101: When the master device detects that its load is greater than a preset high load threshold, it sends the corresponding management messages to the standby device according to a preset division of labor policy indicating for which management messages the standby device performs the management function.
Here, a management message is a message for which the management function needs to be performed. The device that generates a management message may be the master device, a slave device or the standby device.
When the load of the master device is the utilization rate of the main processing unit of the master device, the high load threshold is the product of the utilization rate of the main processing unit of the master device and a first proportional parameter.
Optionally, the first proportional parameter is greater than 0.5. The first proportional parameter may be 0.55, 0.6 or 0.65.
Optionally, the ratio of the management messages that the division of labor policy assigns to the standby device to all management messages is about 50%.
Taking a switch stacking system as an example, in such a system the division of labor policy may be implemented as a preconfigured forwarding table such as an access control list (ACL). In that case, sending the corresponding management messages to the standby device according to the division of labor policy in this step includes: for each management message, such as a protocol packet that needs to be processed by the management function, the master switch queries the ACL; if the query result indicates that the packet is to be processed by the standby switch, the master switch sends the protocol packet that needs management processing to the standby switch; otherwise, the master switch continues to process that protocol packet itself. Here,
the ACL indicates for which protocol packets requiring management processing the standby switch performs the management function, and the ratio of the protocol packets that the ACL assigns to the standby switch to all protocol packets requiring management processing is about 50%.
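To make the ACL-based division of labor concrete, the following Python sketch illustrates the master-side check under stated assumptions: the class `DivisionOfLaborACL`, the packet fields `protocol` and `src_mac`, and the hash-based selector are hypothetical illustration details rather than terminology from the patent; the only requirement taken from the text is that the policy marks roughly half of the management-processed protocol packets as belonging to the standby switch.

```python
# Minimal sketch of an ACL-style division of labor policy on the master switch.
# All names here are illustrative; the patent only requires that the policy
# indicate which management messages the standby device handles (about 50%).

class DivisionOfLaborACL:
    """Preconfigured table mapping management-message keys to the standby role."""

    def __init__(self, standby_rules):
        # standby_rules: set of (protocol, selector) keys assigned to the standby
        self.standby_rules = set(standby_rules)

    def handled_by_standby(self, packet):
        # Hypothetical selector: split packets roughly in half by source MAC.
        key = (packet["protocol"], hash(packet["src_mac"]) % 2)
        return key in self.standby_rules


def dispatch_management_packet(acl, packet, send_to_standby, process_locally):
    """Master-side step 101: query the ACL and either offload or process locally."""
    if acl.handled_by_standby(packet):
        send_to_standby(packet)      # the standby switch performs the management function
    else:
        process_locally(packet)      # the master switch keeps processing this packet


# Example: assign the keys whose selector is 1 to the standby switch,
# which covers roughly half of the management-processed packets.
acl = DivisionOfLaborACL({("ARP", 1), ("DHCP", 1)})
```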
Step 102: The standby device executes the management function according to the received management message, generates, according to the management result obtained, a policy message for setting a processing policy, and sends it to the device that generated the management message.
The management message includes a source device identifier; the policy message includes a source device identifier and a destination device identifier.
In this step, sending the policy message to the device that generated the management message includes:
setting the destination device identifier in the policy message to the source device identifier in the management message; determining, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, setting the source device identifier in the policy message to the device identifier of the master device and sending the policy message; and if the determination result is the master device, setting the source device identifier in the policy message to the device identifier of the standby device and sending the policy message.
In this way, it is ensured that the slave devices do not need to be aware of, or specially process, the policy messages produced by the standby device executing the management function, which simplifies the design of the stacking system and increases its scalability.
In a switch stacking system, the policy message is a packet that contains a processing policy.
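The addressing rule of step 102 can be sketched as follows. This is a minimal illustration assuming messages are plain dictionaries; the field names `src_id`, `dst_id` and `policy` are placeholders rather than terms defined by the patent.

```python
# Sketch of step 102 addressing on the standby device: the policy message is
# returned to the device that generated the management message, and the slave
# devices never see the standby device's identifier as a source.

def build_policy_message(mgmt_msg, policy, master_id, standby_id, slave_ids):
    policy_msg = {
        "policy": policy,                  # processing policy derived from the management result
        "dst_id": mgmt_msg["src_id"],      # reply to whoever generated the management message
    }
    if mgmt_msg["src_id"] in slave_ids:
        policy_msg["src_id"] = master_id   # slave devices see the master as the sender
    else:                                  # the management message came from the master itself
        policy_msg["src_id"] = standby_id
    return policy_msg
```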
Optionally, the method of the embodiment of the present invention further includes, afterwards:
when the master device detects that its load is less than a preset low load threshold, the master device stops sending the management messages to the standby device.
That is, while the master device is sending the management messages corresponding to the division of labor policy to the standby device, if it detects that its load is less than the preset low load threshold, the master device carries out this stopping process.
This ensures that, when the processing pressure on the master device is low, the system returns to the master device performing all management functions while the performance of the stacking system is maintained; compared with the master and standby devices executing the management function at the same time, this reduces the power consumption of the stacking system.
While the master device is processing the management messages alone, if it detects that its load is less than the preset low load threshold, the master device takes no action.
When the load of the master device is the utilization rate of the main processing unit of the master device, the low load threshold is the product of the utilization rate of the main processing unit of the master device and a second proportional parameter.
Optionally, the second proportional parameter is smaller than the first proportional parameter.
Optionally, the second proportional parameter is less than 0.5. The second proportional parameter may be 0.35, 0.4 or 0.45.
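The high and low load thresholds together form a simple hysteresis, sketched below. The sketch assumes the threshold is read as a fixed fraction of the main processing unit's full capacity; the concrete values 0.6 and 0.4 are only the optional example values mentioned above, and `read_cpu_utilization`, `start_offloading` and `stop_offloading` are placeholder callbacks, not functions named by the patent.

```python
# Hysteresis between offloading and sole processing on the master device.
# Assumption: the thresholds are taken as fixed fractions of the main
# processing unit's full capacity (first parameter > 0.5, second < 0.5).

import time

FIRST_PROPORTIONAL_PARAMETER = 0.6    # optional example value from the text
SECOND_PROPORTIONAL_PARAMETER = 0.4   # optional example value from the text

def monitor_load(read_cpu_utilization, start_offloading, stop_offloading):
    offloading = False
    while True:
        load = read_cpu_utilization()                       # placeholder, returns 0.0 .. 1.0
        if not offloading and load > FIRST_PROPORTIONAL_PARAMETER:
            start_offloading()                              # begin sending matching management messages
            offloading = True
        elif offloading and load < SECOND_PROPORTIONAL_PARAMETER:
            stop_offloading()                               # master resumes all management work
            offloading = False
        # between the two thresholds nothing changes (hysteresis band)
        time.sleep(1)
```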
To ensure that, after the master device stops sending management messages to the standby device, the master device can seamlessly take over and perform all the management functions, the method of the embodiment of the present invention further includes, while the standby device executes the management function: the standby device synchronizing the management results generated by executing the management function to the master device. Here, a management result message containing the management result is sent to the master device, so that the standby device synchronizes the management results to the master device.
In a switch stacking system, the management result message is a packet that contains a management result.
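A minimal sketch of this synchronization on the standby side is given below; it reuses the hypothetical `build_policy_message` helper from the earlier sketch and assumes `execute_management` returns a dictionary that contains the derived processing policy, which is not a data structure specified by the patent.

```python
# Standby side: execute the management function, reply with the policy message,
# and mirror the management result to the master so it can take over seamlessly.

def handle_management_message(mgmt_msg, execute_management, send,
                              master_id, standby_id, slave_ids):
    result = execute_management(mgmt_msg)                         # management result (assumed dict)
    policy_msg = build_policy_message(mgmt_msg, result["policy"],  # see the earlier sketch
                                      master_id, standby_id, slave_ids)
    send(policy_msg["dst_id"], policy_msg)                        # to the message originator
    send(master_id, {"management_result": result})                # synchronization to the master
```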
FIG. 2 is a schematic structural diagram of a system for implementing division of labor in a stacking system according to an embodiment of the present invention, which includes a master device 21 and a standby device 22. As shown in FIG. 3 and FIG. 4, the master device includes a detecting unit 211, a judging unit 212 and a processing unit 213, and the standby device includes a management unit 221 and a sending unit 222, wherein
the detecting unit 211 is configured to detect whether the load of the master device is greater than a preset high load threshold;
the judging unit 212 is configured to, when the detection result from the detecting unit 211 is that the load is greater than the high load threshold, judge, according to a preset division of labor policy indicating for which management messages the standby device performs the management function, whether a management message is to be processed by the standby device;
the processing unit 213 is configured to, when the judgment result from the judging unit 212 is the standby device, send the corresponding management message to the standby device;
the management unit 221 is configured to execute the management function according to the received management message and generate, according to the management result obtained, a policy message for setting a processing policy; and
the sending unit 222 is configured to send the policy message from the management unit 221 to the device that generated the management message.
When the load of the master device is the utilization rate of the main processing unit of the master device, the high load threshold is the product of the utilization rate of the main processing unit of the master device and a first proportional parameter.
Optionally,
the detecting unit 211 is further configured to detect whether the load of the master device is less than a preset low load threshold;
and correspondingly,
the processing unit 213 is further configured to, when the detection result from the detecting unit 211 is that the load is less than the low load threshold, stop sending the management messages to the standby device.
When the load of the master device is the utilization rate of the main processing unit of the master device, the low load threshold is the product of the utilization rate of the main processing unit of the master device and a second proportional parameter.
Optionally, the second proportional parameter is smaller than the first proportional parameter.
The management message includes a source device identifier; the policy message includes a source device identifier and a destination device identifier; and
the sending unit 222 is configured to: set the destination device identifier in the policy message to the source device identifier in the management message; determine, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, set the source device identifier in the policy message to the device identifier of the master device and send the policy message; and if the determination result is the master device, set the source device identifier in the policy message to the device identifier of the standby device and send the policy message.
The standby device may further include a synchronization unit configured to synchronize the management results from the management unit to the master device.
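To relate the unit composition of FIG. 3 and FIG. 4 to code, the following sketch mirrors the detecting, judging and processing units of the master device 21 and the management, sending and synchronization units of the standby device 22 as plain Python classes. All method names are illustrative assumptions, and only the wiring between units is shown.

```python
# Structural sketch of the master device (FIG. 3) and standby device (FIG. 4).
# The unit objects are assumed to expose the illustrative methods used below.

class MasterDevice:
    def __init__(self, detecting_unit, judging_unit, processing_unit):
        self.detecting_unit = detecting_unit    # compares the load with the high/low thresholds
        self.judging_unit = judging_unit        # applies the division of labor policy
        self.processing_unit = processing_unit  # forwards matching messages to the standby

    def on_management_message(self, msg):
        if self.detecting_unit.load_above_high_threshold():
            if self.judging_unit.handled_by_standby(msg):
                self.processing_unit.send_to_standby(msg)
                return
        self.processing_unit.process_locally(msg)


class StandbyDevice:
    def __init__(self, management_unit, sending_unit, sync_unit):
        self.management_unit = management_unit  # executes the management function
        self.sending_unit = sending_unit        # returns the policy message to the originator
        self.sync_unit = sync_unit              # synchronizes management results to the master

    def on_management_message(self, msg):
        result, policy_msg = self.management_unit.handle(msg)
        self.sending_unit.send(policy_msg)
        self.sync_unit.sync(result)
```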
Those of ordinary skill in the art will understand that all or some of the steps of the above embodiments may be implemented by means of a computer program flow; the computer program may be stored in a computer readable storage medium and executed on a corresponding hardware platform (such as a system, a device, an apparatus or a component), and when executed it carries out one of, or a combination of, the steps of the method embodiments.
Optionally, all or some of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be made into individual integrated circuit modules, or several of the modules or steps may be made into a single integrated circuit module.
The devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, and they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices.
When the devices/function modules/functional units in the above embodiments are implemented in the form of software function modules and sold or used as stand-alone products, they may be stored in a computer readable storage medium. The computer readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
Industrial Applicability
With the embodiments of the present invention, when the processing pressure of the master device in the stacking system is high, that is, when the load of the master device is greater than the preset high load threshold, the master device offloads part of the management function to the standby device by sending part of the management messages to the standby device, which effectively reduces the processing pressure of the master device and reduces the possibility that the master device becomes the performance bottleneck of the stacking system, thereby improving the performance of the stacking system.

Claims (15)

  1. A method for implementing division of labor in a stacking system, comprising:
    when a master device detects that its load is greater than a preset high load threshold, sending corresponding management messages to a standby device according to a preset division of labor policy indicating for which management messages the standby device performs a management function; and
    the standby device executing the management function according to the received management messages, generating, according to the management results obtained, policy messages for setting processing policies, and sending them to the devices that generated the management messages.
  2. The method according to claim 1, wherein the management message includes a source device identifier, and the policy message includes a source device identifier and a destination device identifier;
    generating the policy message for setting the processing policy according to the management result obtained and sending it to the device that generated the management message comprises:
    setting the destination device identifier in the policy message to the source device identifier in the management message;
    determining, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device;
    if the determination result is a slave device, setting the source device identifier in the policy message to the device identifier of the master device and sending the policy message; and
    if the determination result is the master device, setting the source device identifier in the policy message to the device identifier of the standby device and sending the policy message.
  3. The method according to claim 1, wherein the load of the master device is the utilization rate of a main processing unit of the master device; and
    the high load threshold is the product of the utilization rate of the main processing unit of the master device and a preset first proportional parameter.
  4. The method according to claim 3, wherein, after the standby device executes the management function according to the received management message, generates the policy message for setting the processing policy according to the management result obtained, and sends it to the device that generated the management message, the method further comprises:
    when the master device detects that its load is less than a preset low load threshold, the master device stopping sending the management messages to the standby device.
  5. The method according to claim 4, wherein the low load threshold is the product of the utilization rate of the main processing unit of the master device and a preset second proportional parameter.
  6. The method according to claim 5, wherein the second proportional parameter is smaller than the first proportional parameter.
  7. The method according to any one of claims 1 to 6, wherein, while the standby device executes the management function, the method further comprises: the standby device synchronizing the management results generated by executing the management function to the master device.
  8. A master device for implementing division of labor in a stacking system, comprising a detecting unit, a judging unit and a processing unit, wherein
    the detecting unit is configured to detect whether the load of the master device is greater than a preset high load threshold;
    the judging unit is configured to, when the detection result from the detecting unit is that the load is greater than the high load threshold, judge, according to a preset division of labor policy indicating for which management messages a standby device performs a management function, whether a management message is to be processed by the standby device; and
    the processing unit is configured to, when the judgment result from the judging unit is the standby device, send the corresponding management message to the standby device.
  9. The master device according to claim 8, wherein
    the detecting unit is further configured to detect whether the load of the master device is less than a preset low load threshold; and
    the processing unit is further configured to, when the detection result from the detecting unit is that the load is less than the low load threshold, stop sending the management messages to the standby device.
  10. The master device according to claim 9, wherein the load of the master device is the utilization rate of a main processing unit of the master device; the high load threshold is the product of the utilization rate of the main processing unit of the master device and a first proportional parameter; and the low load threshold is the product of the utilization rate of the main processing unit of the master device and a second proportional parameter.
  11. A standby device for implementing division of labor in a stacking system, comprising a management unit and a sending unit, wherein
    the management unit is configured to execute a management function according to a received management message and generate, according to the management result obtained, a policy message for setting a processing policy; and
    the sending unit is configured to send the policy message from the management unit to the device that generated the management message.
  12. The standby device according to claim 11, wherein the management message includes a source device identifier; the policy message includes a source device identifier and a destination device identifier; and the sending unit is configured to:
    set the destination device identifier in the policy message to the source device identifier in the management message; determine, according to the source device identifier in the management message, whether the device that generated the management message is a slave device or the master device; if the determination result is a slave device, set the source device identifier in the policy message to the device identifier of the master device and send the policy message; and if the determination result is the master device, set the source device identifier in the policy message to the device identifier of the standby device and send the policy message.
  13. The standby device according to claim 12, further comprising a synchronization unit configured to synchronize the management results from the management unit to the master device.
  14. A system for implementing division of labor in a stacking system, comprising the master device according to any one of claims 8 to 10 and the standby device according to any one of claims 11 to 13.
  15. A computer readable storage medium storing computer executable instructions for performing the method according to any one of claims 1 to 7.
PCT/CN2015/084962 2014-11-12 2015-07-23 堆叠系统中实现分工的方法、主设备、备设备和系统 WO2016074497A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410637714.3A CN105656647B (zh) 2014-11-12 2014-11-12 堆叠系统中实现分工的方法、主设备、备设备和系统
CN201410637714.3 2014-11-12

Publications (1)

Publication Number Publication Date
WO2016074497A1 true WO2016074497A1 (zh) 2016-05-19

Family

ID=55953701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/084962 WO2016074497A1 (zh) 2014-11-12 2015-07-23 堆叠系统中实现分工的方法、主设备、备设备和系统

Country Status (2)

Country Link
CN (1) CN105656647B (zh)
WO (1) WO2016074497A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319679B (zh) * 2018-01-30 2021-03-23 杭州迪普科技股份有限公司 一种主键的生成方法及装置
CN111147449A (zh) * 2019-12-09 2020-05-12 杭州迪普科技股份有限公司 一种包过滤策略的测试方法、装置、系统及设备、介质
CN112804337A (zh) * 2021-01-22 2021-05-14 苏州浪潮智能科技有限公司 一种主节点压力分摊方法、装置、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110022726A1 (en) * 2009-07-23 2011-01-27 Hangzhou H3C Technologies Co., Ltd. Method and apparatus for traffic forwarding in a stacking apparatus
CN101977153A (zh) * 2010-11-15 2011-02-16 杭州华三通信技术有限公司 一种流量调节方法和设备
CN102204165A (zh) * 2011-05-27 2011-09-28 华为技术有限公司 控制备用设备的方法、主用设备和备用设备
CN103516744A (zh) * 2012-06-20 2014-01-15 阿里巴巴集团控股有限公司 一种数据处理的方法和应用服务器及集群

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605102B (zh) * 2009-07-16 2012-03-14 杭州华三通信技术有限公司 一种irf堆叠中的负载分担方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110022726A1 (en) * 2009-07-23 2011-01-27 Hangzhou H3C Technologies Co., Ltd. Method and apparatus for traffic forwarding in a stacking apparatus
CN101977153A (zh) * 2010-11-15 2011-02-16 杭州华三通信技术有限公司 一种流量调节方法和设备
CN102204165A (zh) * 2011-05-27 2011-09-28 华为技术有限公司 控制备用设备的方法、主用设备和备用设备
CN103516744A (zh) * 2012-06-20 2014-01-15 阿里巴巴集团控股有限公司 一种数据处理的方法和应用服务器及集群

Also Published As

Publication number Publication date
CN105656647B (zh) 2020-06-30
CN105656647A (zh) 2016-06-08

Similar Documents

Publication Publication Date Title
US9807017B2 (en) Multicast traffic load balancing over virtual link aggregation
US10764119B2 (en) Link handover method for service in storage system, and storage device
WO2016150066A1 (zh) 一种主节点选举方法、装置及存储系统
US20150365270A1 (en) Active ip forwarding in an event driven virtual link aggregation (vlag) system
US9473360B2 (en) System and method for primary switch election in peer groups
US20140301401A1 (en) Providing aggregation link groups in logical network device
US9402205B2 (en) Traffic forwarding method and system based on virtual switch cluster
US20150046572A1 (en) Extending Virtual Station Interface Discovery Protocol (VDP) and VDP-Like Protocols for Dual-Homed Deployments in Data Center Environments
US20180210800A1 (en) Hot standby method, apparatus, and system
WO2017000832A1 (zh) Mac地址的同步方法、装置及系统
US9671841B2 (en) System and method for temperature management of information handling systems
CA2980911A1 (en) Systems and methods for guaranteeing delivery of pushed data to remote clients
WO2016101825A1 (zh) 一种分布式保护中控制器热备份的方法和装置
WO2017008641A1 (zh) 冗余端口的切换方法及装置
WO2016074497A1 (zh) 堆叠系统中实现分工的方法、主设备、备设备和系统
US20180278577A1 (en) High availability bridging between layer 2 networks
US20190319875A1 (en) Inter-chassis link failure management system
WO2016150307A1 (zh) 一种防火墙双机热备方法、装置及系统
CN104202364A (zh) 一种控制器的自动发现和配置方法和设备
WO2015067144A1 (zh) 软件部署的方法和装置
CN106716870B (zh) 卫星设备处的本地分组交换
EP2775675B1 (en) Synchronization method among network devices, network device and system
US9300529B2 (en) Communication system and network relay device
WO2016180081A1 (zh) 一种同步配置信息的方法、主设备和备设备
EP3253030B1 (en) Method and device for reporting openflow switch capability

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15858479

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15858479

Country of ref document: EP

Kind code of ref document: A1