CN109039931B - Method and device for optimizing performance of virtualization equipment - Google Patents

Method and device for optimizing performance of virtualization equipment

Info

Publication number
CN109039931B
Authority
CN
China
Prior art keywords: message, state, time, member device, consuming
Prior art date
Legal status
Active
Application number
CN201810786378.7A
Other languages
Chinese (zh)
Other versions
CN109039931A (en)
Inventor
楚泽彤
Current Assignee
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd
Priority to CN201810786378.7A
Publication of CN109039931A
Application granted
Publication of CN109039931B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present disclosure provides a method for optimizing the performance of a virtualized device. The method is applied to a standby member device in a virtual switch matrix (VSM) system that also includes a primary member device, and includes: receiving status messages from the primary member device, the message types of which include at least a timing class and a synchronization class, and determining the type of each status message according to a preset message classification rule; for a timing-class status message, storing the message in a first message queue according to a preset position sorting rule, returning an acknowledgment message to the primary member device, and processing the message after the acknowledgment has been sent; and for a synchronization-class status message, storing the message in a second message queue, processing the message, and then returning an acknowledgment message to the primary member device. By applying the embodiments of the disclosure, communication congestion can be reduced and communication efficiency improved when a large number of status messages must be backed up between the primary member device and the standby member device in the VSM system.

Description

Method and device for optimizing performance of virtualization equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for optimizing performance of a virtualization device.
Background
With the continuous development of the IT (Information Technology) industry, the demand on network devices keeps growing. Against this background, Virtual Switch Matrix (VSM) technology has emerged: multiple member network devices are connected through one or a group of physical ports to form a single virtual logical device, and the physical ports interconnecting the member devices are also called cascade ports. One member device may need to forward a user datagram out of an egress port on another member device, in which case the datagram has to pass through a cascade port. The member devices are further divided into a primary member device and standby member devices; the primary member device must send all kinds of status messages, such as the latest software and hardware configuration and user service states, to the standby member devices under a strict synchronization mechanism, so that the standby member devices remain consistent with the primary member device in configuration and state and can take over at any time. In daily network operation a large amount of service information is also exchanged over the cascade ports, and when user data and status messages are both heavy, congestion can occur at the cascade ports.
Disclosure of Invention
In view of this, the present application provides a method for optimizing the performance of a virtualized device and a member device, to address the problem that, in a virtual switch matrix (VSM), a large amount of service information is exchanged over the cascade ports during daily network operation, and heavy traffic can cause congestion at the cascade ports.
Specifically, this is achieved through the following technical solutions:
In a first aspect of the present disclosure, a method for optimizing the performance of a virtualized device is provided. The method is applied to a member device in a virtual switch matrix (VSM) system; when the VSM system is running, the member device may be configured as a primary member device and/or a standby member device, and when the member device is the standby member device, the method includes:
the standby member device receives status messages from the primary member device over the backup channel and determines the type of each status message according to a preset message classification rule, where the status message types include at least a timing class and a synchronization class; for a timing-class status message, it stores the message in a first message queue according to a preset position sorting rule and returns an acknowledgment message to the primary member device; for a status message stored in the first message queue, it processes the message after the acknowledgment has been sent; and for a synchronization-class status message, it stores the message in a second message queue, processes the message, and then returns an acknowledgment message to the primary member device.
With reference to the first aspect, in a first possible implementation manner, the status message types further include a first time-consuming class, and the method includes:
the standby member device stores, for a first-time-consuming-class status message, the message in the first message queue.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner, the status message types further include a second time-consuming class, and the method includes:
the standby member device stores, for a second-time-consuming-class status message, the message in a third message queue; and, for a status message stored in the third message queue, it processes the message after returning the acknowledgment message to the primary member device.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the method includes:
the member device comprises at least a first processor and a second processor, wherein the first message queue is processed by a corresponding first kernel thread on the first processor, and the third message queue is processed by a corresponding second kernel thread on the second processor.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the method includes:
the average processing duration of a first-time-consuming-class status message is longer than that of a synchronization-class status message, and the average processing duration of a second-time-consuming-class status message is longer than that of a synchronization-class status message.
In a second aspect of the present disclosure, an apparatus for optimizing the performance of a virtualized device is provided. The apparatus is applied to a member device in a virtual switch matrix (VSM) system; when the VSM system is running, the member device may be configured as a primary member device and/or a standby member device, and when the member device is the standby member device, the apparatus includes:
a transceiver module, used to receive status messages from the primary member device over the backup channel and to return acknowledgment messages;
a processing module, used to determine the type of each status message according to a preset message classification rule, where the status message types include at least a timing class and a synchronization class; for a timing-class status message, to store the message in a first message queue according to a preset position sorting rule and return an acknowledgment message to the primary member device through the transceiver module; for a status message stored in the first message queue, to process the message after the acknowledgment has been returned to the primary member device through the transceiver module; and for a synchronization-class status message, to store the message in a second message queue, process it, and then return an acknowledgment message to the primary member device through the transceiver module.
With reference to the second aspect, in a first possible implementation manner, the status message types further include a first time-consuming class, and in the apparatus:
the processing module stores, for a first-time-consuming-class status message, the message in the first message queue.
With reference to the second aspect or the first possible implementation manner of the second aspect, in a second possible implementation manner, the status message types further include a second time-consuming class, and in the apparatus:
the processing module stores, for a second-time-consuming-class status message, the message in a third message queue;
and the processing module processes, for a status message stored in the third message queue, the message after returning the acknowledgment message to the primary member device through the transceiver module.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the apparatus includes:
wherein the member device includes at least a first processor and a second processor, wherein the first message queue is processed by a corresponding first kernel thread on the first processor and the third message queue is processed by a corresponding second kernel thread on the second processor.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner, the apparatus includes:
the average processing duration of a first-time-consuming-class status message is longer than that of a synchronization-class status message, and the average processing duration of a second-time-consuming-class status message is longer than that of a synchronization-class status message.
In a third aspect of the present disclosure, a data processing apparatus is provided, which includes a communication interface, at least two processors, a memory, and a bus, where the communication interface, the processors, and the memory are connected to each other through the bus. The memory stores machine-readable instructions, and the processor executes the aforementioned method by calling the machine-readable instructions.
In a fourth aspect of the present disclosure, there is provided a machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to carry out the method of the first aspect of the present disclosure.
In the present disclosure, a virtual switch matrix (VSM) system includes a primary member device and a standby member device. The standby member device receives, over the backup channel, status messages from the primary member device whose types include at least a timing class and a synchronization class, and determines the type of each status message according to a preset message classification rule; for a timing-class status message, it stores the message in a first message queue according to a preset position sorting rule, returns an acknowledgment message to the primary member device, and processes the message after the acknowledgment has been sent; for a synchronization-class status message, it stores the message in a second message queue, processes the message, and then returns an acknowledgment message to the primary member device. By applying the embodiments of the disclosure, communication congestion can be reduced and communication efficiency improved when a large number of status messages must be backed up between the primary member device and the standby member device in the VSM system.
Drawings
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method provided by an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a status message classification and processing process provided by an embodiment of the present disclosure;
FIG. 4 is a functional block diagram of an apparatus provided by the present disclosure;
FIG. 5 is a hardware block diagram of the apparatus shown in FIG. 4 provided by the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In order to make those skilled in the art better understand the technical solution in the embodiment of the present disclosure, a brief description is given below of a system architecture to which the embodiment of the present disclosure is applicable.
Refer to the VSM system illustrated in fig. 1, which includes a primary member device and a standby member device. Only two member devices are shown in fig. 1, but a VSM system may in fact contain more member devices. In a VSM system, each member device may become the primary member device or a standby member device, depending on the election mechanism running on the member devices or on manual settings made by the administrator. Generally speaking, if the primary member device fails, a pre-agreed standby member device can immediately become the primary member device according to an automatic election mechanism, or a new primary member device is chosen through re-election. Ideally, the new primary member device can then seamlessly take over the management of the services and of the entire system.
Seamless takeover is a very demanding requirement: the new primary member device must remain consistent with the old primary member device in many respects. For example, for software and hardware configuration information, the administrator needs to write the same configuration to the standby member device in time, following the configuration issued on the primary member device. For another example, the primary member device interacts with external network devices through various protocols, including routing protocols, and the running states of these protocols must be fully replicated to the standby member devices, otherwise takeover is difficult. For yet another example, the primary member device may hold state information for key services such as user-oriented security authentication, which also needs to be copied to the standby member device in time; such state information is quite numerous and is not enumerated one by one here. To keep the standby member device's state consistent with the primary member device in time and so enable seamless takeover later, the primary member device typically needs to send a large number of status messages to the standby member devices. Briefly, the interaction between the two parties can be roughly simplified to the following model:
the primary member device sends a status message in time and receives the standby member device's response to that status message;
the standby member device receives the status message from the primary member device and processes and responds to it in time. In the current design, to achieve a perfect seamless-takeover mechanism, the standby member device usually processes the status message before responding, and the primary member device usually waits for the response before sending the next status message.
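As a rough illustration only, this lock-step exchange can be sketched as follows; it is a minimal sketch, and the channel primitives (send, wait_for_ack, receive, send_ack) and the process callback are hypothetical placeholders, since the disclosure does not define a concrete backup-channel interface.

```python
def primary_send_loop(status_messages, channel):
    """Primary side: send one status message at a time, lock-stepped on ACKs."""
    for msg in status_messages:
        channel.send(msg)
        channel.wait_for_ack(msg)      # the next message is sent only after the standby's ACK

def standby_receive_loop(channel, process):
    """Standby side in the current design: process first, then acknowledge."""
    while True:
        msg = channel.receive()
        process(msg)                   # execute the action carried by the message ...
        channel.send_ack(msg)          # ... and only then acknowledge
```

When many status messages pile up, every message pays the full processing latency before the primary may send the next one, which is what the classification scheme below is designed to relieve.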
The method for optimizing the performance of a virtualized device provided by the present disclosure is applied to a standby member device in a virtual switch matrix (VSM) system and greatly reduces the possibility of congestion by classifying the status messages before processing them. Referring to fig. 2, the flow of the method for classifying and processing status messages provided by the present disclosure may include the following steps:
Step 201, the standby member device receives, over the backup channel, a status message sent by the primary member device and determines the type of the status message according to a preset message classification rule; the status message types include at least a timing class and a synchronization class.
The timing-class and synchronization-class status messages are explained as follows:
a) Timing-class status messages: some status messages must be executed in a definite order, and status messages meeting this condition can be classified as timing-class messages. For example, suppose executing status message A sets an interface to blocked, so that no data traffic may be forwarded through it, while executing status message B clears all forwarding entries under that interface; then status message A, which blocks the interface, must be executed before the entries are cleared. If status message B is processed before status message A, the interface is still in the forwarding state after all of its forwarding entries have been cleared and before status message A has been processed successfully, so the interface is likely to relearn some forwarding entries until status message A finally sets it to the blocked state. The result is that the interface becomes blocked while a small number of forwarding entries still point to it; when a user data packet (also called user traffic) arrives and the forwarding-entry lookup hits one of those entries, the packet should leave through that interface, but the interface is already blocked, so the traffic cannot pass. In fact, the administrator's intention is to clear the forwarding entries and steer the user traffic to other interfaces, so status message B and status message A need to be executed in the proper order to reflect the administrator's actual intention.
b) Synchronization-class status messages: the primary member device sends a status message C to the standby member device; status message C carries an action that the standby member device needs to execute, and only after executing the action does the standby member device send the acknowledgment message (ACK) to the primary member device. Messages that are processed first and acknowledged afterwards are classified in this example as synchronization-class status messages.
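As an illustration of the classification step only, a minimal sketch follows; the message field op, the rule contents, and the class names are hypothetical, since the disclosure leaves the concrete classification rule to configuration (the time-consuming classes introduced later would be added in the same way).

```python
from enum import Enum

class MsgClass(Enum):
    TIMING = "timing"            # must respect an execution order relative to other messages
    SYNC = "synchronization"     # processed first, then acknowledged

# Hypothetical rule table: operations whose relative order matters are timing class.
ORDER_SENSITIVE_OPS = {"block_interface", "clear_forwarding_entries"}

def classify(status_msg: dict) -> MsgClass:
    """Determine the class of a status message from a preset rule."""
    if status_msg.get("op") in ORDER_SENSITIVE_OPS:
        return MsgClass.TIMING
    return MsgClass.SYNC
```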
Step 202, for a timing-class status message, the standby member device stores the message in a first message queue according to a preset position sorting rule and returns an acknowledgment message to the primary member device; for a status message stored in the first message queue, it processes the message after the acknowledgment has been sent.
After the VSM system has been initialized, brought online, and run for a period of time, the standby member device has usually already received M (M >= 1) historical status messages. When it then receives the (M+1)-th message, it analyzes the message type of the (M+1)-th status message; if the type belongs to the timing class, the message has to be compared with the M historical status messages already in the first message queue, and its position in the queue is determined according to the preset position sorting rule.
Suppose the current status message, i.e. the (M+1)-th message, has a timing relationship with the M-th historical status message such that the (M+1)-th message must be executed before the M-th historical status message. The standby member device then sorts the two messages and inserts the (M+1)-th status message ahead of the M-th historical status message; assuming the first message queue is executed sequentially, the (M+1)-th status message will be processed by the standby member device before the M-th status message.
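The position sorting just described can be sketched as follows; this is a minimal sketch, and must_run_before is a hypothetical predicate standing in for the preset position sorting rule.

```python
def insert_by_position_rule(first_queue: list, new_msg, must_run_before) -> None:
    """Insert new_msg ahead of the first queued message it must precede."""
    for i, queued in enumerate(first_queue):
        if must_run_before(new_msg, queued):   # new_msg has to execute before 'queued'
            first_queue.insert(i, new_msg)
            return
    first_queue.append(new_msg)                # no timing constraint: append at the tail
```

In the example above, the (M+1)-th message would be inserted in front of the M-th historical message and would therefore be processed first when the queue is drained in order.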
Once the status messages have been sorted as described above, the status messages stored in the first message queue are processed; in this example, the standby member device processes a status message only after it has returned the acknowledgment message.
It can be seen that the status messages in the first message queue are generally processed asynchronously. For example, the primary member device sends a status message M to the standby member device, and status message M carries an action to be executed by the standby member device; when the standby member device first sends the acknowledgment message (ACK) to the primary member device and then executes the action, that processing order is referred to in this application as asynchronous processing, whereas the order in the earlier example, processing first and acknowledging afterwards, is referred to as synchronous processing.
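The two processing orders can be summarized in a short sketch; send_ack and execute are hypothetical placeholders for returning the acknowledgment over the backup channel and performing the action carried by the status message.

```python
def handle_asynchronously(msg, send_ack, execute):
    send_ack(msg)      # first message queue: acknowledge first, unblocking the primary
    execute(msg)       # then perform the action carried by the status message

def handle_synchronously(msg, send_ack, execute):
    execute(msg)       # second message queue: perform the action first
    send_ack(msg)      # acknowledge only after the action has completed
```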
Step 203, for a synchronization-class status message, the standby member device stores the message in a second message queue, processes the message, and then returns an acknowledgment message to the primary member device; the status messages in the second message queue are all synchronization-class status messages.
Thus, the flow shown in fig. 2 is completed.
As can be seen from the flow shown in fig. 2, the standby member device receives, over the backup channel, status messages whose types include at least the timing class and the synchronization class, and determines the type of each status message according to the preset message classification rule; for a timing-class status message, it stores the message in a first message queue according to the preset position sorting rule, returns an acknowledgment message to the primary member device, and processes the message after the acknowledgment has been sent; for a synchronization-class status message, it stores the message in a second message queue, processes the message, and then returns an acknowledgment message to the primary member device. By applying the embodiments of the disclosure, message classification and parallel asynchronous or synchronous processing over several classified message queues reduce communication congestion and improve communication efficiency when a large number of status messages need to be backed up between the primary and standby member devices in the VSM system. More specifically, because a timing-class message is acknowledged before being processed, the problem caused in the example above by processing the M-th message first can be avoided, since the M-th status message should be processed after the (M+1)-th status message; this mechanism effectively avoids problems caused by out-of-order transmission from the primary member device.
To help those skilled in the art better understand the technical solutions provided by the embodiments of the present disclosure, they are described below with reference to a specific application scenario. Please refer to fig. 3, which is a schematic diagram of the status message classification and processing process according to an embodiment of the disclosure.
Consider a member device configured with the following initial conditions:
1) the standby member device establishes a backup channel with the primary member device for exchanging status messages and receives the status messages sent by the primary member device;
2) T denotes the time period over which status messages are acquired, and within the period T the standby member device acquires four status messages, CM1, CM2, CM3, and CM4, in sequence;
3) three first-in first-out message queues are created in advance:
the first message queue, named the asynchronous message queue (AMQ);
the second message queue, named the synchronous message queue (SMQ);
the third message queue, named the high-consumption asynchronous message queue (HAMQ);
4) the first message queue AMQ already contains three historical status messages, for example HM1, HM2, and HM3; for their order in the queue, see Table 1:
Head of queue Middle of queue Tail of queue
HM1 HM2 HM3
TABLE 1
5) the member device has at least two CPUs, e.g. CPU1 and CPU2; a corresponding first kernel thread on CPU1 manages and maintains the asynchronous message queue AMQ, and a corresponding second kernel thread on CPU2 manages and maintains the high-consumption asynchronous message queue HAMQ; the SMQ may be managed and maintained by the first kernel thread or the second kernel thread, or by a separate kernel thread (see the sketch after this list);
6) a preset message classification rule table is configured, covering the timing class, the first time-consuming class, the second time-consuming class, and the synchronization class; see Table 2 for an example:
TABLE 2 (rendered as an image in the original publication and not reproduced here)
Both the first time-consuming class and the second time-consuming class may also be referred to in this application as high-time-consuming messages; their common characteristic is that the time the member device needs to execute the action carried by the status message exceeds a certain time threshold, which is high compared with other messages. When the two classes are compared with each other, however, the time threshold T2 of the second time-consuming class is significantly greater than the time threshold T1 of the first time-consuming class, for example T2 is at least twice T1, e.g. T2 = 12 seconds > T1 = 5 seconds. Apart from these two time-consuming classes, status messages of the other classes typically take the member device little time to execute, and such messages may also be referred to in this application as synchronization-class messages. It should be noted that T1 and T2 are average processing times: for example, in the same software and hardware environment, testing may show that the average processing duration of each second-time-consuming-class status message is T2, while the average processing duration of each first-time-consuming-class status message in that environment is T1.
7) a position sorting rule is configured for timing-class status messages; see Table 3 for an example:
TABLE 3 (rendered as an image in the original publication and not reproduced here)
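A minimal sketch of the setup in items 3) through 5) is given below, using Python threads to stand in for the kernel threads on CPU1 and CPU2; CPU pinning, the position sorting applied to the AMQ, and the concrete send_ack/execute operations are omitted or hypothetical.

```python
import queue
import threading

amq = queue.Queue()    # first queue: asynchronous message queue (timing / first time-consuming class)
smq = queue.Queue()    # second queue: synchronous message queue (synchronization class)
hamq = queue.Queue()   # third queue: high-consumption asynchronous message queue (second time-consuming class)

def drain(q, ack_first, send_ack, execute):
    """Drain one queue forever, acknowledging before or after processing."""
    while True:
        msg = q.get()
        if ack_first:                  # AMQ / HAMQ: acknowledge, then process
            send_ack(msg)
            execute(msg)
        else:                          # SMQ: process, then acknowledge
            execute(msg)
            send_ack(msg)

def start_workers(send_ack, execute):
    # The first kernel thread ("CPU1") drains the AMQ and the second ("CPU2") drains
    # the HAMQ; the SMQ may be handled by either of them or by a separate thread.
    for q, ack_first in ((amq, True), (hamq, True), (smq, False)):
        threading.Thread(target=drain, args=(q, ack_first, send_ack, execute),
                         daemon=True).start()
```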
Based on the above configuration, at times t1, t2, t3, and t4, where tn < T (n = 1, 2, 3, 4) and t1 < t2 < t3 < t4, the current status messages acquired at the corresponding times are CM1, CM2, CM3, and CM4 respectively, and the processing flow shown in fig. 3 may be performed:
step a, the standby member device analyzes the message characteristics of the state message CM1 at the time t1, determines that the CM1 belongs to the time sequence class according to the preset message classification rule table-rule check in the table 2, and at this time, the CM needs to sort the rule according to the position of the time sequence class state message-rule in the table 3, and checks whether the executed time sequence relation exists with the three history state messages HM1, HM2 and HM3 existing in the cache queue according to the rule.
Step b, suppose CM1 has no timing relationship with HM1 or HM2 but does have one with HM3, where CM1 carries the operation that must be executed first and HM3 carries the operation that must be executed afterwards. After checking against the position sorting rule for timing-class status messages (the rules in Table 3), the member device sorts the current status message CM1 together with the historical status messages HM1, HM2, and HM3 and places the sorted result in the asynchronous message queue AMQ, where CM1 is positioned before HM3; see Table 4:
Head of queue Middle of queue Middle of queue Tail of queue
HM1 HM2 CM1 HM3
TABLE 4
Step c, as another possible case following step b, the standby member device acquires status message CM2 at the current time t2 and checks it against the rules in the preset message classification rule table (Table 2). The analysis shows that CM2 belongs to the first time-consuming class but has no timing association with the historical messages HM1, HM2, HM3, or CM1, so according to the position sorting rule it does not need to be ordered against other messages for the moment and is inserted directly at the tail of the queue, that is, CM2 is added to the asynchronous message queue AMQ; see Table 5:
Head of queue Middle of queue Middle of queue Middle of queue Tail of queue
HM1 HM2 CM1 HM3 CM2
TABLE 5
Step d, as another possible case following step b, the standby member device acquires status message CM3 at the current time t3 and checks its type against the rules in the preset message classification rule table (Table 2). The analysis shows that CM3 belongs to the synchronization class, so CM3 is added to the synchronous message queue SMQ; see Table 6:
Head of queue Middle of queue Tail of queue
CM3 NULL (empty) NULL (empty)
TABLE 6
Step e, as another possible case following step b, the standby member device acquires status message CM4 at the current time t4 and checks its type against the rules in the preset message classification rule table (Table 2). The analysis shows that CM4 belongs to the second time-consuming class, so CM4 is added to the high-consumption asynchronous message queue HAMQ; see Table 7:
Head of queue Middle of queue Tail of queue
CM4 NULL (empty) NULL (empty)
TABLE 7
And step f, based on the asynchronous message queue AMQ, the synchronous message queue SMQ, and the high-consumption asynchronous message queue HAMQ, the standby member device dequeues the status messages from each queue in order, from the head of the queue through the middle to the tail, processes them, and returns the corresponding response messages to the primary member device.
The member device has at least two CPUs, e.g. CPU1 and CPU2: a corresponding first kernel thread on CPU1 manages and maintains the asynchronous message queue AMQ, and a corresponding second kernel thread on CPU2 manages and maintains the high-consumption asynchronous message queue HAMQ. For the asynchronous message queue AMQ and the high-consumption asynchronous message queue HAMQ, the response message is returned to the primary member device within a time smaller than a certain threshold, for example 1 second. For the synchronous message queue SMQ, the standby member device first executes the action carried by the status message and only then sends the response message to the primary member device, so the response may take longer than a certain threshold, for example 5 seconds.
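Steps a through e above amount to a small dispatch routine; the sketch below assumes the class of each message has already been determined by the Table 2 rule check, and classify_msg, insert_by_position_rule, must_run_before, and the queue objects (plain Python lists here for simplicity) are hypothetical placeholders tied to the earlier sketches.

```python
def dispatch(msg, amq, smq, hamq, classify_msg, insert_by_position_rule, must_run_before):
    cls = classify_msg(msg)                                 # rule check against Table 2
    if cls == "timing":
        insert_by_position_rule(amq, msg, must_run_before)  # steps a/b: position-sorted into the AMQ
    elif cls == "first_time_consuming":
        amq.append(msg)                                     # step c: appended to the AMQ tail
    elif cls == "second_time_consuming":
        hamq.append(msg)                                    # step e: into the HAMQ (second kernel thread)
    else:                                                   # synchronization class
        smq.append(msg)                                     # step d: into the SMQ
```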
The methods provided by the present disclosure are described above. The following describes the apparatus provided by the present disclosure.
Referring to fig. 4, a member device in a virtual switch matrix (VSM) system provided by the present disclosure may be configured as a primary member device and/or a standby member device when the VSM system is operating. As shown in fig. 4, the member device may include the following modules:
a transceiver module 401, configured to receive status messages from the primary member device over the backup channel and to return acknowledgment messages;
a classification module 402, configured to determine the type of each status message according to a preset message classification rule, where the status message types include at least a timing class and a synchronization class;
a processing module 403, configured to store, for a timing-class status message, the message in a first message queue according to a preset position sorting rule and return an acknowledgment message to the primary member device through the transceiver module; to process, for a status message stored in the first message queue, the message after the acknowledgment has been returned to the primary member device through the transceiver module; and to store, for a synchronization-class status message, the message in a second message queue, process it, and then return an acknowledgment message to the primary member device through the transceiver module.
In one embodiment, the status message types further include a first time-consuming class, and the processing module 403 stores, for a first-time-consuming-class status message, the message in the first message queue.
In one embodiment, the status message types further include a second time-consuming class; the processing module 403 stores, for a second-time-consuming-class status message, the message in a third message queue, and, for a status message stored in the third message queue, processes the message after returning the acknowledgment message to the primary member device through the transceiver module.
In one embodiment, the member device where the transceiver module 401 and the processing module 403 are located includes at least a first processor and a second processor, where the first message queue is processed by a corresponding first kernel thread on the first processor, and the third message queue is processed by a corresponding second kernel thread on the second processor.
In one embodiment, the average processing duration of a first-time-consuming-class status message handled by the processing module 403 is longer than that of a synchronization-class status message, and the average processing duration of a second-time-consuming-class status message is longer than that of a synchronization-class status message.
The description of the apparatus shown in fig. 4 is thus completed.
Correspondingly, the present disclosure also provides a hardware structure for the apparatus shown in fig. 4. Referring to fig. 5, fig. 5 is a schematic diagram of the hardware structure of a standby member device provided by the present disclosure. The device includes: a communication interface 501, a processor 502, a processor 503, a memory 504, a non-volatile memory 505, and a bus 506; the communication interface 501, the processor 502, the processor 503, the memory 504, and the non-volatile memory 505 communicate with each other via the bus 506.
The communication interface 501 is used for sending and receiving messages. The processor 502 and the processor 503 may be Central Processing Units (CPUs); the memory 504 may read machine-readable instructions from the non-volatile memory 505 and store them, and the processor 502 and the processor 503 may execute the machine-readable instructions stored in the memory 504 to implement the method shown in fig. 2.
The memory 504 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data, for example a RAM (Random Access Memory), a volatile memory, a similar storage medium, or a combination thereof. The non-volatile memory 505 referred to herein may be any non-volatile memory, flash memory, storage drive (e.g., a hard drive), solid-state drive, any type of storage disc (e.g., an optical disc or DVD), a similar storage medium, or a combination thereof.
To this end, the description of the hardware configuration shown in fig. 5 is completed.
In addition, the present application also provides a machine-readable storage medium, such as the memory 504 and the non-volatile memory 505 in fig. 5, containing machine-executable instructions that can be executed by the processor 502 and the processor 503 in the data processing device to implement the data processing method described above.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A method for optimizing the performance of a virtualized device, the method being applied to a member device in a virtual switch matrix (VSM) system, wherein, when the VSM system is running, the member device can be configured as a primary member device or a standby member device, and when the member device is the standby member device, the method comprises:
receiving a status message sent by the primary member device over the backup channel, and determining the type of the status message according to a preset message classification rule, wherein the status message types comprise at least a timing class and a synchronization class;
for a timing-class status message, storing the status message in a first message queue according to a preset position sorting rule, and returning an acknowledgment message to the primary member device, wherein the position sorting rule expresses the sequential execution timing relationship among timing-class status messages;
for a status message stored in the first message queue, processing the status message after the acknowledgment message has been returned; and
for a synchronization-class status message, storing the status message in a second message queue, processing the status message, and returning an acknowledgment message to the primary member device.
2. The method of claim 1, wherein the status message types further comprise a first time-consuming class, and the method further comprises:
for a first-time-consuming-class status message, storing the status message in the first message queue according to the preset position sorting rule.
3. The method according to claim 1 or 2, wherein the status message types further comprise a second time-consuming class, and the method further comprises:
for a second-time-consuming-class status message, storing the status message in a third message queue;
wherein the first-time-consuming-class and second-time-consuming-class status messages are messages for which the time consumed by the device to execute the action exceeds a certain time threshold, and the time threshold of the second time-consuming class is greater than the time threshold of the first time-consuming class; and
for a status message stored in the third message queue, processing the status message after returning the acknowledgment message to the primary member device.
4. The method of claim 3, wherein the member device comprises at least a first processor and a second processor, wherein a first message queue is processed by a corresponding first kernel thread on the first processor and a third message queue is processed by a corresponding second kernel thread on the second processor.
5. The method of claim 3, wherein the average processing duration of the first time-consuming status message is longer than the average processing duration of the synchronization status message, and wherein the average processing duration of the second time-consuming status message is longer than the average processing duration of the synchronization status message.
6. A virtualized-device performance optimization apparatus, the apparatus being applied to a member device in a virtual switch matrix (VSM) system, wherein, when the VSM system is running, the member device can be configured as a primary member device and/or a standby member device, and when the member device is the standby member device, the apparatus comprises:
a transceiver module, configured to receive status messages from the primary member device over the backup channel and to return acknowledgment messages;
a classification module, configured to determine the type of each status message according to a preset message classification rule, wherein the status message types comprise at least a timing class and a synchronization class; and
a processing module, configured to store, for a timing-class status message, the status message in a first message queue according to a preset position sorting rule and return an acknowledgment message to the primary member device through the transceiver module, wherein the position sorting rule expresses the sequential execution timing relationship among timing-class status messages; to process, for a status message stored in the first message queue, the status message after the acknowledgment message has been returned to the primary member device through the transceiver module; and to store, for a synchronization-class status message, the status message in a second message queue, process it, and then return an acknowledgment message to the primary member device through the transceiver module.
7. The apparatus of claim 6, wherein the status message types further comprise a first time-consuming class, and wherein:
the processing module is further configured to store, for a first-time-consuming-class status message, the status message in the first message queue according to the preset position sorting rule.
8. The apparatus of claim 6 or 7, wherein the status message types further comprise a second time-consuming class, and wherein:
the processing module is further configured to store, for a second-time-consuming-class status message, the status message in a third message queue;
the first-time-consuming-class and second-time-consuming-class status messages are messages for which the time consumed by the device to execute the action exceeds a certain time threshold, and the time threshold of the second time-consuming class is greater than the time threshold of the first time-consuming class; and
the processing module is further configured to process, for a status message stored in the third message queue, the status message after returning the acknowledgment message to the primary member device through the transceiver module.
9. The apparatus of claim 8, wherein the member device comprises at least a first processor and a second processor, wherein a first message queue is processed by a corresponding first kernel thread on the first processor and a third message queue is processed by a corresponding second kernel thread on the second processor.
10. The apparatus of claim 8, wherein the average processing duration of the first time-consuming status message is longer than the average processing duration of the synchronization status message, and wherein the average processing duration of the second time-consuming status message is longer than the average processing duration of the synchronization status message.
11. A data processing device is characterized by comprising a communication interface, at least two processors, a memory and a bus, wherein the communication interface, the processors and the memory are connected with each other through the bus;
the memory has stored therein machine-readable instructions, the processor executing the method of any one of claims 1 to 5 by calling the machine-readable instructions.
12. A machine readable storage medium having stored thereon machine readable instructions which, when invoked and executed by a processor, cause the processor to carry out the method of any of claims 1 to 5.
CN201810786378.7A 2018-07-17 2018-07-17 Method and device for optimizing performance of virtualization equipment Active CN109039931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810786378.7A CN109039931B (en) 2018-07-17 2018-07-17 Method and device for optimizing performance of virtualization equipment


Publications (2)

Publication Number Publication Date
CN109039931A CN109039931A (en) 2018-12-18
CN109039931B true CN109039931B (en) 2021-12-24

Family

ID=64643651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810786378.7A Active CN109039931B (en) 2018-07-17 2018-07-17 Method and device for optimizing performance of virtualization equipment

Country Status (1)

Country Link
CN (1) CN109039931B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8271773B2 (en) * 2006-10-17 2012-09-18 Endress + Hauser Gmbh + Co. Kg Configurable field device for use in process automation systems

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101026802A (en) * 2007-03-16 2007-08-29 华为技术有限公司 Information push method and device
CN104683486A (en) * 2015-03-27 2015-06-03 杭州华三通信技术有限公司 Method and device for processing synchronous messages in distributed system and distributed system
CN106375219A (en) * 2016-08-22 2017-02-01 杭州迪普科技有限公司 Method and device for forwarding message

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on virtual cluster technology for carrier-grade application gateway devices (运营商级应用网关设备虚拟集群技术研究); Hangzhou DPtech Technologies Co., Ltd.; Dianxin Jishu (Telecommunications Technology); 2013-11-30; full text *

Also Published As

Publication number Publication date
CN109039931A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN110297801B (en) System and method for just-in-one transaction semantics of transaction system based on fault-tolerant FPGA
US11099902B1 (en) Parallelized ingress compute architecture for network switches in distributed artificial intelligence and other applications
US10686890B2 (en) Keep-alive scheduler in a network device
US8428076B2 (en) System and method for priority scheduling of plurality of message types with serialization constraints and dynamic class switching
US11328222B1 (en) Network switch with integrated gradient aggregation for distributed machine learning
CN107451012B (en) Data backup method and stream computing system
US20130160028A1 (en) Method and apparatus for low latency communication and synchronization for multi-thread applications
US11502967B2 (en) Methods and apparatuses for packet scheduling for software-defined networking in edge computing environment
US10931602B1 (en) Egress-based compute architecture for network switches in distributed artificial intelligence and other applications
Behrens et al. RDMC: A reliable RDMA multicast for large objects
CN111722944B (en) NIO-based AIRT-ROS communication method and system
US7564860B2 (en) Apparatus and method for workflow-based routing in a distributed architecture router
CN114928579A (en) Data processing method and device, computer equipment and storage medium
Cerrato et al. An efficient data exchange algorithm for chained network functions
CN107005489B (en) System, method, medium, and apparatus for supporting packet switching
CN112769639B (en) Method and device for parallel issuing configuration information
CN109039931B (en) Method and device for optimizing performance of virtualization equipment
CN111225063B (en) Data exchange system and method for static distributed computing architecture
CN116633875B (en) Time order-preserving scheduling method for multi-service coupling concurrent communication
CN112272933B (en) Queue control method, device and storage medium
CN113010464A (en) Data processing apparatus and device
US20190044872A1 (en) Technologies for targeted flow control recovery
CN110445580A (en) Data transmission method for uplink and device, storage medium, electronic device
US20160191422A1 (en) System and method for supporting efficient virtual output queue (voq) packet flushing scheme in a networking device
CN109274609A (en) A kind of port setting method, device, network board and readable storage medium storing program for executing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant