CN111314432A - Message processing method and device - Google Patents

Message processing method and device

Info

Publication number: CN111314432A
Authority: CN (China)
Prior art keywords: control board, main control, cache, standby, message sent
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010063483.5A
Other languages: Chinese (zh)
Other versions: CN111314432B (en)
Inventor: 李跃武
Current Assignee: New H3C Big Data Technologies Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: New H3C Big Data Technologies Co Ltd
Priority date: the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed
Application filed by New H3C Big Data Technologies Co Ltd
Priority to CN202010063483.5A
Publication of CN111314432A
Application granted
Publication of CN111314432B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682: Policies or rules for updating, deleting or replacing the stored data

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application relates to the field of network communication technologies, and in particular, to a packet processing method and apparatus. The method is applied to an interface board of a network device, where the network device further includes an active main control board and a standby main control board, and includes: receiving a target packet sent by the standby main control board; caching the target packet sent by the standby main control board into a first cache; judging whether a target packet sent by the active main control board is cached in the first cache; if so, discarding the target packet sent by the standby main control board and sending the target packet sent by the active main control board to the opposite connection port; otherwise, caching the target packet sent by the standby main control board into a second cache. With this method, the loss of important protocol packets caused by an active-standby switchover, and the resulting loss of a large amount of data, can be avoided.

Description

Message processing method and device
Technical Field
The present application relates to the field of network communication technologies, and in particular, to a method and an apparatus for processing a packet.
Background
At present, in order to ensure reliable and stable operation, network devices such as routers and switches are usually built as an active-standby board system to implement active-standby forwarding: for example, when a fault occurs on the active main control board, the standby main control board can take over and continue forwarding service traffic, keeping the network running smoothly.
In a common active-standby board system, the active main control board and the standby main control board usually work independently, with the standby main control board serving as a backup of the active main control board. The two boards synchronize information through an inter-board communication channel. Illustratively, fig. 1 is a schematic diagram of a network device in the prior art. The active main control board and the standby main control board are both connected downward to the same interface board, which receives and sends packets externally. In the receiving direction, the interface board delivers each packet received from an external device to the switching chips of both the active and the standby main control boards (one packet in, two copies up); in the sending direction, the interface board selects only the packet sent by the active main control board to send to the external device (two copies in, one packet out).
However, the following situation may occur during operation of such an active-standby board system. Because the forwarding states of the switching chips on the active and standby main control boards cannot be kept completely consistent, and forwarding incurs some delay, it can happen that a packet A forwarded by the active main control board has not yet reached the interface board while the packet A forwarded by the standby main control board has already arrived. Since the sending direction forwards only packets from the active main control board, the copy of packet A from the standby main control board is discarded by the interface board's FIFO. If an active-standby switchover happens at this moment, then after the switchover completes the copy of packet A forwarded by the original active main control board will still be discarded, and the copy forwarded by the original standby main control board has already been discarded, so packet A is lost. If packet A is an important protocol packet (for example, a Bidirectional Forwarding Detection (BFD) packet), the loss may cause protocol flapping, which in turn causes a large amount of data packet loss.
Disclosure of Invention
The present application provides a packet processing method and apparatus, which are used to solve the prior-art problem that the loss of protocol packets causes protocol flapping and, consequently, the loss of a large amount of data.
In a first aspect, the present application provides a packet processing method applied to an interface board of a network device, where the network device further includes an active main control board and a standby main control board. The method includes:
receiving a target packet sent by the standby main control board;
caching the target packet sent by the standby main control board into a first cache;
judging whether a target packet sent by the active main control board is cached in the first cache;
if so, discarding the target packet sent by the standby main control board and sending the target packet sent by the active main control board to the opposite connection port; otherwise, caching the target packet sent by the standby main control board into a second cache.
Optionally, the method further comprises:
if an active-standby switchover between the active main control board and the standby main control board is detected, sending the target packet sent by the standby main control board to the opposite connection port.
Optionally, the first cache includes a first cache queue connected to the active main control board, a second cache queue connected to the standby main control board, and a third cache queue connected to the first cache queue and the second cache queue respectively;
the step of caching the target packet sent by the standby main control board into the first cache includes:
caching the target packet sent by the standby main control board into the second cache queue;
the step of judging whether the target packet sent by the active main control board is cached in the first cache includes:
judging whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
Optionally, the first cache is a first-in first-out (FIFO) memory.
Optionally, the method further comprises:
if the number of packets cached in the second cache is greater than or equal to a preset threshold, discarding the target packet sent by the standby main control board.
In a second aspect, the present application provides a packet processing apparatus applied to an interface board of a network device, where the network device further includes an active main control board and a standby main control board. The apparatus includes a receiving unit, a caching unit, a judging unit, a discarding unit, and a sending unit, where
the receiving unit is configured to receive a target packet sent by the standby main control board;
the caching unit is configured to cache the target packet sent by the standby main control board into a first cache;
the judging unit is configured to judge whether a target packet sent by the active main control board is cached in the first cache;
if the judging unit determines that the target packet sent by the active main control board is cached in the first cache, the discarding unit is configured to discard the target packet sent by the standby main control board, and the sending unit sends the target packet sent by the active main control board to the opposite connection port; if the judging unit determines that the target packet sent by the active main control board is not cached in the first cache, the caching unit is configured to cache the target packet sent by the standby main control board into a second cache.
Optionally, the apparatus further comprises a detection unit:
if the detection unit detects an active-standby switchover between the active main control board and the standby main control board, the sending unit sends the target packet sent by the standby main control board to the opposite connection port.
Optionally, the first cache includes a first cache queue connected to the active main control board, a second cache queue connected to the standby main control board, and a third cache queue connected to the first cache queue and the second cache queue respectively;
when caching the target packet sent by the standby main control board into the first cache, the caching unit is specifically configured to:
cache the target packet sent by the standby main control board into the second cache queue;
when judging whether the target packet sent by the active main control board is cached in the first cache, the judging unit is specifically configured to:
judge whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
Optionally, the first cache is a first-in first-out (FIFO) memory.
Optionally, if the number of packets cached in the second cache is greater than or equal to a preset threshold, the discarding unit discards the target packet sent by the standby main control board.
In a third aspect, an embodiment of the present application provides a network device, where the network device includes:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and execute, according to the obtained program instructions, the steps of the method according to any one of the above first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the steps of the method according to any one of the above first aspect.
To sum up, in the embodiments of the present application, a target packet sent by the standby main control board is received; the target packet sent by the standby main control board is cached into a first cache; whether a target packet sent by the active main control board is cached in the first cache is judged; if so, the target packet sent by the standby main control board is discarded and the target packet sent by the active main control board is sent to the opposite connection port; otherwise, the target packet sent by the standby main control board is cached into a second cache.
With the packet processing method provided in the embodiments of the present application, when the target packet sent by the standby main control board is received, it is not discarded directly but cached into the first cache. Only when it is determined that the target packet sent by the active main control board has been received is the standby board's copy deleted; as long as it is not yet determined whether the active board's copy has been received, the standby board's copy is not discarded. This avoids the situation in which a protocol packet is lost because of an active-standby switchover, which would in turn cause a large amount of data loss.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
Fig. 1 is a schematic structural diagram of a network device in the prior art;
fig. 2 is a flowchart of a message processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another network device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
Exemplarily, fig. 2 is a detailed flowchart of a packet processing method provided in an embodiment of the present application. The method is applied to an interface board of a network device, where the network device further includes an active main control board and a standby main control board, and includes the following steps:
step 200: and receiving the target message sent by the standby main control board.
In this embodiment, since the network device is configured as a master-slave board system, that is, the network device includes a master control board, a slave control board, and an interface board, where information synchronization is achieved between the master control board and the slave control board through a board communication channel, and the interface board is connected to the master control board and the slave control board, respectively, if the network device forwards the packet a to the outside, the interface board can receive both the packet a sent by the master control board and the packet a sent by the slave control board.
Step 210: cache the target packet sent by the standby main control board into a first cache.
In this embodiment of the present application, when the interface board receives the target packet sent by the standby main control board, it does not discard the packet directly but caches it into the first cache, so that the corresponding packet processing operation can be performed according to the subsequent judgment.
Specifically, in this embodiment of the present application, in a preferred implementation the first cache may include a first cache queue connected to the active main control board and a second cache queue connected to the standby main control board. When receiving a packet sent by the standby main control board, the interface board can cache the packet into the second cache queue.
In a further preferred implementation, the first cache also includes a third cache queue connected to the first cache queue and the second cache queue respectively. That is, a packet is sent through the interface board as follows: first cache queue / second cache queue -> third cache queue -> opposite connection port.
Preferably, in this embodiment of the present application, the first cache may be a first-in first-out (FIFO) memory; that is, the first cache queue, the second cache queue, and the third cache queue may each be a FIFO memory.
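To make this queue layout concrete, the following minimal Python sketch models the first cache with collections.deque standing in for the hardware FIFO memories; the class name, the field names, and the use of bare packet identifiers are illustrative assumptions, not terms from this application.

    from collections import deque

    class FirstCache:
        """Illustrative model of the first cache: one FIFO per main control board
        plus a shared egress FIFO feeding the opposite connection port."""
        def __init__(self):
            self.active_queue = deque()   # first cache queue (packets from the active main control board)
            self.standby_queue = deque()  # second cache queue (packets from the standby main control board)
            self.egress_queue = deque()   # third cache queue, drained toward the opposite connection port

        def holds_active_copy(self, packet_id):
            # Step 220 checks the two places where the active board's copy can still be waiting:
            # the first cache queue and the third (egress) cache queue.
            return packet_id in self.active_queue or packet_id in self.egress_queue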
Step 220: judge whether the target packet sent by the active main control board is cached in the first cache.
In this embodiment of the present application, one preferred implementation is to judge whether the target packet sent by the active main control board is cached in the first cache queue.
In another preferred implementation, when the first cache includes the first cache queue, the second cache queue, and the third cache queue, the interface board judges whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
That is, when receiving a packet A sent by the standby main control board, the interface board needs to determine whether the packet A sent by the active main control board has already been received, and decides how to subsequently handle the standby board's copy of packet A according to the judgment result.
Step 230: if the judgment result is yes, discard the target packet sent by the standby main control board and send the target packet sent by the active main control board to the opposite connection port; otherwise, cache the target packet sent by the standby main control board into a second cache.
In this embodiment of the present application, in an optional implementation the second cache may be an independent buffer.
Further, if an active-standby switchover between the active main control board and the standby main control board is detected, the target packet sent by the standby main control board is sent to the opposite connection port.
Specifically, in this embodiment of the present application, if the interface board determines that the target packet sent by the active main control board exists in the first cache, it can apply the existing select-one-of-two sending policy and directly discard the target packet sent by the standby main control board.
In practical applications, if the target packet (for example, packet A) sent by the active main control board exists in the first cache, the interface board sends that packet to the opposite connection port, and the copy sent by the standby main control board can be discarded directly rather than sent to the opposite connection port.
In this embodiment of the present application, if the interface board determines that the target packet sent by the active main control board does not exist in the first cache, it can store the target packet sent by the standby main control board in the second cache.
In practical applications, if the target packet sent by the active main control board does not exist in the first cache, the interface board caches the target packet sent by the standby main control board into the second cache, so that the corresponding packet processing operation can be performed according to the subsequent judgment.
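This branch can be sketched as follows, reusing the illustrative FirstCache model above; the function and parameter names are assumptions made for illustration, and a real interface board would work on packet descriptors rather than bare identifiers.

    def on_packet_from_standby(first_cache, second_cache, packet_id):
        """Steps 210 to 230 for a packet arriving from the standby main control board."""
        first_cache.standby_queue.append(packet_id)      # step 210: park it in the second cache queue
        if first_cache.holds_active_copy(packet_id):     # step 220: has the active board's copy been seen?
            first_cache.standby_queue.remove(packet_id)  # step 230, "yes" branch: drop the standby copy;
            # the active board's copy is forwarded from the third cache queue as usual
        else:
            first_cache.standby_queue.remove(packet_id)  # step 230, "otherwise" branch:
            second_cache.append(packet_id)               # keep the standby copy in the second cache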
In practical applications, the target packet sent by the active main control board may be absent from the first cache in the following two cases:
in the first case, the interface board has not yet received the target packet sent by the active main control board;
in the second case, the interface board has already received the target packet sent by the active main control board and has sent it to the opposite connection port.
For the first case, if the interface board does not detect an active-standby switchover before receiving the target packet sent by the active main control board, then after receiving that packet it sends it to the opposite connection port and discards the copy sent by the standby main control board that is held in the second cache.
For the second case, if the interface board detects an active-standby switchover before receiving the target packet sent by the active main control board, the original active main control board serves as the new standby main control board and the original standby main control board serves as the new active main control board; the interface board then treats the target packet sent by the original standby main control board held in the second cache as a packet sent by the new active main control board and sends it to the opposite connection port.
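Both follow-up outcomes can be sketched as below, continuing the illustrative model from the earlier snippets; second_cache is assumed to be a deque and send_to_peer_port any callable that pushes a packet out of the opposite connection port.

    def on_packet_from_active(first_cache, second_cache, packet_id, send_to_peer_port):
        # First case above, no switchover: forward the active board's copy through the
        # first and third cache queues, and drop the standby copy parked in the second cache.
        first_cache.active_queue.append(packet_id)
        first_cache.egress_queue.append(first_cache.active_queue.popleft())
        send_to_peer_port(first_cache.egress_queue.popleft())
        if packet_id in second_cache:
            second_cache.remove(packet_id)

    def on_active_standby_switchover(first_cache, second_cache, send_to_peer_port):
        # Switchover detected: packets parked in the second cache are treated as coming
        # from the new active main control board and are sent via the third cache queue.
        while second_cache:
            first_cache.egress_queue.append(second_cache.popleft())
            send_to_peer_port(first_cache.egress_queue.popleft())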
Further, the second cache can hold only a limited number of packets sent by the standby main control board. Therefore, in this embodiment of the present application, if the number of packets cached in the second cache is greater than or equal to a preset threshold, the target packet sent by the standby main control board is discarded.
That is, when the number of packets cached in the second cache is greater than or equal to the preset threshold, all packets sent by the standby main control board that are cached in the second cache may be discarded after a fixed delay.
Alternatively, when the number of packets cached in the second cache is greater than or equal to the preset threshold, at least the earliest cached packet sent by the standby main control board may be discarded directly.
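Either discard policy can be expressed as a small guard on the second cache, as in the sketch below; the threshold value, the drop_all flag, and the omission of the fixed delay are simplifying assumptions for illustration only.

    def enforce_second_cache_limit(second_cache, threshold, drop_all=False):
        """Discard standby-board packets once the second cache reaches the preset threshold."""
        if len(second_cache) < threshold:
            return
        if drop_all:
            second_cache.clear()        # drop everything (the fixed delay is not modelled here)
        else:
            while len(second_cache) >= threshold:
                second_cache.popleft()  # drop the earliest cached standby-board packets first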
Exemplarily, fig. 3 is a schematic structural diagram of a network device according to an embodiment of the present application. The network device includes an active main control board, a standby main control board, and an interface board. The active and standby main control boards synchronize information through an inter-board communication channel. The interface board includes at least a cache 1 connected to the active main control board, a cache 2 connected to the standby main control board, a cache 3 connected to cache 1 and cache 2, a comparator connected to cache 1, cache 2, and cache 3 respectively, a select-one-of-two sending module, and an active-standby switchover module.
When the network device forwards a packet (for example, packet A), both the active and standby main control boards perform the forwarding operation for packet A. If the interface board receives the packet A sent by the active main control board, it caches the packet into cache 1 and then into cache 3, and finally sends it to the opposite connection port. If the interface board receives the packet A sent by the standby main control board, it caches the packet into cache 2; the interface board then uses the comparator to check whether the packet A sent by the active main control board is cached in cache 1 or cache 3. If it is, the interface board directly deletes the copy of packet A from the standby main control board in cache 2. If it is not, the interface board moves the copy of packet A cached in cache 2 into the Buffer. In this way, if the active-standby switchover module of the interface board then performs an active-standby switchover, the interface board treats the original active main control board as the new standby main control board and the original standby main control board as the new active main control board; it regards the copy of packet A sent by the original standby main control board held in the Buffer as the packet A sent by the new active main control board, caches it into cache 3, and sends it to the opposite connection port.
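A toy walk-through of this fig. 3 scenario, using the illustrative helpers from the earlier snippets (packet A arrives from the standby board first and the switchover happens before the active board's copy shows up), confirms that packet A still reaches the opposite connection port; the sent list below simply plays the role of the port.

    sent = []
    first_cache, second_cache = FirstCache(), deque()

    on_packet_from_standby(first_cache, second_cache, "A")                 # parked in the Buffer / second cache
    on_active_standby_switchover(first_cache, second_cache, sent.append)   # switchover before the active copy arrives
    assert sent == ["A"]                                                   # packet A is not lost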
Based on the same inventive concept as the method embodiment, an embodiment of the present application further provides a packet processing apparatus.
Exemplarily, fig. 4 is a schematic structural diagram of a packet processing apparatus provided in an embodiment of the present application. The apparatus is applied to an interface board of a network device, where the network device further includes an active main control board and a standby main control board, and includes at least a receiving unit 40, a caching unit 41, a judging unit 42, a discarding unit 43, and a sending unit 44, where
the receiving unit 40 is configured to receive a target packet sent by the standby main control board;
the caching unit 41 is configured to cache the target packet sent by the standby main control board into a first cache;
the judging unit 42 is configured to judge whether a target packet sent by the active main control board is cached in the first cache;
if the judging unit 42 determines that the target packet sent by the active main control board is cached in the first cache, the discarding unit 43 is configured to discard the target packet sent by the standby main control board, and the sending unit 44 sends the target packet sent by the active main control board to the opposite connection port; if the judging unit 42 determines that the target packet sent by the active main control board is not cached in the first cache, the caching unit 41 is configured to cache the target packet sent by the standby main control board into a second cache.
Optionally, the apparatus further comprises a detection unit:
if the detection unit detects an active-standby switchover between the active main control board and the standby main control board, the sending unit sends the target packet sent by the standby main control board to the opposite connection port.
Optionally, the first cache includes a first cache queue connected to the active main control board, a second cache queue connected to the standby main control board, and a third cache queue connected to the first cache queue and the second cache queue respectively;
when caching the target packet sent by the standby main control board into the first cache, the caching unit is specifically configured to:
cache the target packet sent by the standby main control board into the second cache queue;
when judging whether the target packet sent by the active main control board is cached in the first cache, the judging unit is specifically configured to:
judge whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
Optionally, the first cache is a first-in first-out (FIFO) memory.
Optionally, if the number of packets cached in the second cache is greater than or equal to a preset threshold, the discarding unit discards the target packet sent by the standby main control board.
The above units may be one or more integrated circuits configured to implement the above methods, for example one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above units is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
To sum up, in the embodiments of the present application, a target packet sent by the standby main control board is received; the target packet sent by the standby main control board is cached into a first cache; whether a target packet sent by the active main control board is cached in the first cache is judged; if so, the target packet sent by the standby main control board is discarded and the target packet sent by the active main control board is sent to the opposite connection port; otherwise, the target packet sent by the standby main control board is cached into a second cache.
With the packet processing method provided in the embodiments of the present application, when the target packet sent by the standby main control board is received, it is not discarded directly but cached into the first cache. Only when it is determined that the target packet sent by the active main control board has been received is the standby board's copy deleted; as long as it is not yet determined whether the active board's copy has been received, the standby board's copy is not discarded. This avoids the situation in which a protocol packet is lost because of an active-standby switchover, which would in turn cause a large amount of data loss.
Further, from a hardware perspective, a schematic diagram of the hardware architecture of the network device provided in the embodiment of the present application may be as shown in fig. 5. The network device may include a memory 50 and a processor 51, where
the memory 50 is used to store program instructions, and the processor 51 calls the program instructions stored in the memory 50 and executes the above method embodiments according to the obtained program instructions. The specific implementation and technical effects are similar and are not repeated here.
Optionally, the present application also provides a network device, comprising at least one processing element (or chip) for performing the above method embodiments.
Optionally, the present application also provides a program product, such as a computer-readable storage medium, having stored thereon computer-executable instructions for causing the computer to perform the above-described method embodiments.
Here, a machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (random access memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid-state drive, any type of storage disk (e.g., an optical disc or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function. Of course, when the present application is implemented, the functionality of the units may be realized in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A packet processing method, applied to an interface board of a network device, wherein the network device further comprises an active main control board and a standby main control board, and the method comprises the following steps:
receiving a target packet sent by the standby main control board;
caching the target packet sent by the standby main control board into a first cache;
judging whether a target packet sent by the active main control board is cached in the first cache;
if so, discarding the target packet sent by the standby main control board and sending the target packet sent by the active main control board to the opposite connection port; otherwise, caching the target packet sent by the standby main control board into a second cache.
2. The method of claim 1, wherein the method further comprises:
if an active-standby switchover between the active main control board and the standby main control board is detected, sending the target packet sent by the standby main control board to the opposite connection port.
3. The method of claim 1, wherein the first cache comprises a first cache queue connected to the active main control board, a second cache queue connected to the standby main control board, and a third cache queue connected to the first cache queue and the second cache queue respectively;
the step of caching the target packet sent by the standby main control board into the first cache comprises:
caching the target packet sent by the standby main control board into the second cache queue;
the step of judging whether the target packet sent by the active main control board is cached in the first cache comprises:
judging whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
4. The method of any of claims 1-3, wherein the first cache is a first-in first-out (FIFO) memory.
5. The method of claim 1, wherein the method further comprises:
if the number of packets cached in the second cache is greater than or equal to a preset threshold, discarding the target packet sent by the standby main control board.
6. A packet processing apparatus, applied to an interface board of a network device, wherein the network device further comprises an active main control board and a standby main control board, and the apparatus comprises a receiving unit, a caching unit, a judging unit, a discarding unit, and a sending unit, wherein
the receiving unit is configured to receive a target packet sent by the standby main control board;
the caching unit is configured to cache the target packet sent by the standby main control board into a first cache;
the judging unit is configured to judge whether a target packet sent by the active main control board is cached in the first cache;
if the judging unit determines that the target packet sent by the active main control board is cached in the first cache, the discarding unit is configured to discard the target packet sent by the standby main control board, and the sending unit sends the target packet sent by the active main control board to the opposite connection port; if the judging unit determines that the target packet sent by the active main control board is not cached in the first cache, the caching unit is configured to cache the target packet sent by the standby main control board into a second cache.
7. The apparatus of claim 6, further comprising a detection unit:
if the detection unit detects an active-standby switchover between the active main control board and the standby main control board, the sending unit sends the target packet sent by the standby main control board to the opposite connection port.
8. The apparatus of claim 6, wherein the first cache comprises a first cache queue connected to the active main control board, a second cache queue connected to the standby main control board, and a third cache queue connected to the first cache queue and the second cache queue respectively;
when caching the target packet sent by the standby main control board into the first cache, the caching unit is specifically configured to:
cache the target packet sent by the standby main control board into the second cache queue;
when judging whether the target packet sent by the active main control board is cached in the first cache, the judging unit is specifically configured to:
judge whether the target packet sent by the active main control board is cached in the first cache queue or the third cache queue.
9. The apparatus of any of claims 6-7, wherein the first cache is a first-in first-out (FIFO) memory.
10. The apparatus of claim 6, wherein
if the number of packets cached in the second cache is greater than or equal to a preset threshold, the discarding unit discards the target packet sent by the standby main control board.
CN202010063483.5A 2020-01-20 2020-01-20 Message processing method and device Active CN111314432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010063483.5A CN111314432B (en) 2020-01-20 2020-01-20 Message processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010063483.5A CN111314432B (en) 2020-01-20 2020-01-20 Message processing method and device

Publications (2)

Publication Number / Publication Date
CN111314432A: 2020-06-19
CN111314432B: 2022-02-22

Family

ID=71148353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010063483.5A Active CN111314432B (en) 2020-01-20 2020-01-20 Message processing method and device

Country Status (1)

Country Link
CN (1) CN111314432B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1529459A (en) * 2003-10-16 2004-09-15 港湾网络有限公司 Main-standby rotation realizing method facing to high-side exchange board
CN102195845A (en) * 2010-03-03 2011-09-21 杭州华三通信技术有限公司 Method, device and equipment for realizing active-standby switching of main control board
WO2016065751A1 (en) * 2014-10-31 2016-05-06 中兴通讯股份有限公司 Method for recovering link communication, service line card and system
CN107241154A (en) * 2016-03-28 2017-10-10 中兴通讯股份有限公司 1588 file transmitting methods and device
CN108880868A (en) * 2018-05-31 2018-11-23 新华三技术有限公司 BFD keep alive Packet transmission method, device, equipment and machine readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112311621A (en) * 2020-10-15 2021-02-02 新华三技术有限公司合肥分公司 Communication detection method and device
CN112311621B (en) * 2020-10-15 2022-05-24 新华三技术有限公司合肥分公司 Communication detection method and device

Also Published As

Publication number Publication date
CN111314432B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN107451012B (en) Data backup method and stream computing system
CN108243116B (en) Flow control method and switching equipment
CN106878164B (en) Message transmission method and device
CN112910802B (en) Message processing method and device
CN107257301B (en) Method and device for detecting repeated messages in parallel redundant network
US20130114593A1 (en) Reliable Transportation a Stream of Packets Using Packet Replication
CN111314432B (en) Message processing method and device
US20030014516A1 (en) Recovery support for reliable messaging
CN113507431B (en) Message management method, device, equipment and machine-readable storage medium
CN114189477B (en) Message congestion control method and device
CN114157609B (en) PFC deadlock detection method and device
CN111865716B (en) Port congestion detection method, device, equipment and machine-readable storage medium
CN112737940A (en) Data transmission method and device
CN112152872B (en) Network sub-health detection method and device
US7796645B2 (en) Technique for controlling selection of a write adapter from multiple adapters connected to a high speed switch
CN108616461B (en) Policy switching method and device
CN104811391B (en) Data packet processing method and device and server
CN108874530B (en) Method and device for expanding and shrinking service board of message forwarding equipment
CN108206823A (en) A kind of method and the network equipment for handling message
CN101984586A (en) Method and device for monitoring multiple links
CN111654434B (en) Flow switching method and device and forwarding equipment
CN112383471A (en) Method, device and equipment for managing knife box link and machine readable storage medium
JP2015536063A (en) Method of Random Access Message Retrieval from First-In-First-Out Transport Mechanism
CN112187578B (en) Table entry generation method, device and equipment
US9894007B2 (en) Packet transfer processing apparatus and method

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant