CN114430362A - Link switching method, FPGA chip, device and storage medium - Google Patents


Info

Publication number
CN114430362A
CN114430362A
Authority
CN
China
Prior art keywords
message
task queue
scheduling task
data path
scheduling
Prior art date
Legal status
Granted
Application number
CN202111626734.7A
Other languages
Chinese (zh)
Other versions
CN114430362B (en)
Inventor
刘雄
石金博
沙琪
陈理辉
Current Assignee
QKM Technology Dongguan Co Ltd
Original Assignee
QKM Technology Dongguan Co Ltd
Priority date
Filing date
Publication date
Application filed by QKM Technology Dongguan Co Ltd
Priority to CN202111626734.7A
Publication of CN114430362A
Application granted
Publication of CN114430362B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/10: Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/22: Alternate routing
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/626: Queue scheduling characterised by scheduling criteria for service slots or service orders, channel conditions
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders, based on priority


Abstract

The application provides a link switching method, an FPGA chip, a device and a storage medium, relating to the field of communication technology. The method includes: receiving a first message transmitted by a network port of a device node; caching the first message into the corresponding scheduling task queue according to the message attribute of the first message, where the scheduling task queues are arranged in one-to-one correspondence with the message attributes; detecting a first path state of at least one preset first data path, where the first data path is used for forwarding a second message in the scheduling task queue and the transmission attribute of the second message matches the first data path; and, within the scheduling time of the scheduling task queue, judging according to the first path state corresponding to a second message to be forwarded whether to forward that second message from the corresponding first data path to the corresponding second data path. By applying this link switching method, the FPGA chip, device and storage medium can improve the real-time performance of link switching and the safety and stability of the system.

Description

Link switching method, FPGA chip, device and storage medium
Technical Field
Embodiments of the present application relate to, but are not limited to, the field of communication technology, and in particular to a link switching method, an FPGA chip, a device and a storage medium.
Background
In the network topology of a communication system, a packet must travel through multiple device nodes to reach its destination node, and each device node parses and checks the packet before forwarding it. A single device node is associated with multiple transmission paths; when many packets need to be processed, packet handling is easily delayed and transmission-path switching becomes inefficient, which harms the service.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides a link switching method, an FPGA chip, equipment and a storage medium, and can improve the real-time performance of link switching and the safety and stability of a system.
In a first aspect, an embodiment of the present application provides a link switching method, which is applied to a device node, and includes:
receiving a first message transmitted by a network port of the device node;
caching the first message into a corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes;
detecting a first path state of at least one preset first data path; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second message is matched with the first data path;
and, within the scheduling time of the scheduling task queue, judging whether to forward the second message from the corresponding first data path to the corresponding second data path according to the first path state corresponding to the second message to be forwarded in the scheduling task queue.
In a second aspect, an embodiment of the present application further provides an FPGA chip, including:
the parsing and allocation module, configured to receive a first message transmitted by a network port of the device node and cache the first message into the corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes;
the detection module is used for detecting a first path state of at least one preset first data path; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second message is matched with the first data path;
and the redundancy module is used for judging whether to forward the second message from the corresponding first data path to the corresponding second data path or not according to the first path state corresponding to the second message to be forwarded in the scheduling task queue within the scheduling time of the scheduling task queue.
In a third aspect, an embodiment of the present application further provides a device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the link switching method of any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are configured to perform the link switching method of any one of the first aspect.
According to the above embodiments of the present application, at least the following advantages are provided: because each scheduling task queue is called periodically within its corresponding scheduling time based on a polling mechanism, every second packet gets processed. Whether to switch to the second data path is judged, within the scheduling time corresponding to the scheduling task queue, according to the first path state corresponding to the second message to be forwarded, so a failed first data path can be identified in time and traffic switched to the second data path.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
Fig. 1 is a flowchart illustrating a method for switching a link according to an embodiment of the present application;
fig. 2 is a schematic flowchart of heartbeat detection of a method of link switching according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an FPGA chip according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
In a network topology of a communication system, a packet needs to be transmitted to a destination node through a plurality of device nodes, and each device node forwards the packet after analyzing and checking the packet in the transmission process. For one device node, a plurality of transmission paths are correspondingly arranged, and when more messages need to be processed, the message processing is easy to delay and the switching efficiency of the transmission paths is low, so that the service is damaged. Based on this, the application provides a link switching method, an FPGA chip, a device and a storage medium, which can improve the real-time performance of link switching.
Referring to fig. 1, the method for switching a link applied to a device node includes:
step S100, receiving a first message transmitted by a network port of an equipment node.
It should be noted that the device node may be a network device or a controller of a robot, and the first message may reach the network port through a wired (network cable) or wireless connection.
Step S200, caching the first message into a corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes.
It should be noted that the message attribute may be a message type or a priority. Multiple scheduling task queues are provided, and messages may be cached into them according to message type or matched by priority. Illustratively, in some embodiments, N scheduling task queues are provided, corresponding to priorities 1 to N respectively; when the priority contained in the message attribute of a first message is M, with M less than or equal to N, the first message is stored in the scheduling task queue with priority M.
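The priority-to-queue mapping described here can be sketched as follows (a minimal illustration; the names `queues` and `enqueue` and the value of N are assumptions, not from the patent):

```python
from collections import deque

N = 4  # assumed number of scheduling task queues, priorities 1..N

# One scheduling task queue per priority (one-to-one correspondence).
queues = {priority: deque() for priority in range(1, N + 1)}

def enqueue(message: dict) -> bool:
    """Cache a first message into the scheduling task queue whose priority
    matches the message attribute; reject it if the priority is out of range."""
    m = message.get("priority")
    if m is None or not 1 <= m <= N:
        return False
    queues[m].append(message)
    return True
```

With N = 4, a message carrying priority M = 2 lands in `queues[2]`, while a message whose priority exceeds N is rejected rather than cached.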
Step S300, detecting a first path state of at least one preset first data path; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second packet is matched with the first data path.
It should be noted that the first data path represents a transmission link connected to the current device node; it may be a link to an adjacent (one-hop) device node, or a link spanning multiple device nodes over several hops. The transmission attribute characterizes the path over which the second message travels, which is matched to the first data path.
It should be noted that the detection of the first path state and the receiving of the first message are independent from each other.
Step S400, in the scheduling time of the scheduling task queue, whether the second message to be forwarded is forwarded from the corresponding first data path to the corresponding second data path is judged according to the first path state corresponding to the second message to be forwarded in the scheduling task queue.
It should be noted that each scheduling task queue corresponds to a scheduling time, which realizes polling execution of the scheduling task queues. Because the plurality of second messages have different destination addresses, each second message corresponds to one first data path for its transmission. The second data path is a backup link for the first data path, configured when the network topology is created; the first and second data paths are set up in the device node to back each other up, so that after the first data path fails, the second message is switched to the second data path in time by executing step S400, reducing the probability that the second message is sent onto the failed link. Moreover, since link switching is decided only when each second message is about to be forwarded, the processing spent judging link faults is reduced, further improving switching efficiency.
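The per-queue scheduling-slot decision of step S400 can be sketched like this (hypothetical names; path states are reduced to the strings "up"/"down" for illustration):

```python
from collections import deque

def serve_queue(queue: deque, path_state: dict, primary: str, backup: str):
    """Within the queue's scheduling time, drain the pending second messages:
    each message goes out on its first (primary) data path, or is switched to
    the second (backup) data path when the primary is marked failed."""
    sent = []
    while queue:
        msg = queue.popleft()
        path = primary if path_state.get(primary, "up") == "up" else backup
        sent.append((msg, path))
    return sent
```

If the detection side has already marked path "A" down, every message scheduled in that slot goes out via backup "B", with no per-message fault-judging work inside the slot.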
It should be noted that "second packet" merely distinguishes the role of the message from "first packet": once a first packet has been cached in the corresponding scheduling task queue, that same packet is a second packet.
Therefore, for each scheduling task queue, every second packet gets processed because the queue is called periodically within its corresponding scheduling time based on the polling mechanism. Whether to switch to the second data path is judged, within the scheduling time corresponding to the scheduling task queue, according to the first path state corresponding to the second message to be forwarded, so a failed first data path can be identified in time and traffic switched to the second data path.
It should be noted that the device node supports multiple types of first packets simultaneously, and each first packet may be assigned a priority. Illustratively, a switch may transmit real-time isochronous (Isoch) messages, Precision Time Protocol (PTP) messages, asynchronous (Async) messages triggered by events, and Transmission Control Protocol/Internet Protocol (TCP/IP) messages. The processing priorities are set, from highest to lowest, as Isoch, PTP, Async, TCP/IP. When an Isoch message is received, it is cached into the scheduling task queue of the corresponding priority according to its priority. In other embodiments, caching may instead be done by message type: for example, Isoch messages are cached into a first task queue and all other types into a second task queue, both being scheduling task queues; after Async and TCP/IP messages are received, they are stored into the second task queue.
It should be noted that, for each step in the steps S100 to S400, the step may be implemented in an FPGA chip of the device node, or may be implemented partially in the FPGA, or partially in an upper application module (such as a network port) connected to the FPGA chip.
It can be understood that, in step S200, caching the first message into the corresponding scheduling task queue according to the message attribute of the first message includes: caching first messages whose priority is higher than the preset priority into the corresponding scheduling task queue in the FPGA chip of the device node according to the message attribute.
It should be noted that the FPGA chip has higher processing efficiency; caching high-priority first messages into the scheduling task queues in the FPGA means that subsequent scheduling is performed by the FPGA chip, further improving first-message processing efficiency.
For example, suppose the network messages include Isoch, PTP, Async and TCP/IP messages, where the priorities of the Isoch, Async and TCP/IP messages are all higher than the preset priority and the priority of the PTP message is lower than it. Then, after a PTP message is received at the network port, its scheduling is completed directly at the network port, while Isoch, Async and TCP/IP messages are cached into the corresponding scheduling task queues in the FPGA chip.
It should be noted that the preset priority may be dynamically set by the user, may be dynamically loaded in a configuration file, or may be preset in a program.
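The preset-priority split can be sketched as follows (illustrative only; the patent fixes no numeric scheme, so this sketch assumes that a smaller number means a higher priority):

```python
PRESET_PRIORITY = 2  # e.g. set by the user, loaded from a config file, or preset in the program

fpga_queue = []    # stands in for the scheduling task queues inside the FPGA chip
upper_queue = []   # stands in for the queues of the upper application module (network port)

def dispatch(message: dict) -> str:
    """Assumption: priority 1 is highest. Messages whose priority is at or
    above the preset priority (numerically smaller or equal here) are cached
    in the FPGA chip; the rest are scheduled by the upper application module."""
    if message["priority"] <= PRESET_PRIORITY:
        fpga_queue.append(message)
        return "fpga"
    upper_queue.append(message)
    return "upper"
```

Under these assumptions an Isoch message of priority 1 is handled by the FPGA-side queues, while a lower-priority message stays with the upper application module.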
It can be understood that, in step S400, judging within the scheduling time of the scheduling task queue whether to forward the second packet from the corresponding first data path to the corresponding second data path according to its first path state includes: within the scheduling time of the scheduling task queue, the FPGA chip judges, according to the first path state corresponding to the second message to be forwarded in the scheduling task queue, whether to forward that second message from the corresponding first data path to the corresponding second data path.
It should be noted that realizing direct link switching between the first and second data paths through the FPGA chip yields higher switching efficiency and reduces the probability of transmission interruption during network message transmission.
It can be understood that step S200, caching the first message into the corresponding scheduling task queue according to the message attribute of the first message, further includes: caching first messages whose priority is lower than the preset priority into the corresponding scheduling task queue set up by the upper application module of the device node according to the message attribute.
It should be noted that the upper-layer application module sets up scheduling task queues corresponding to the priorities, so that it can poll the first messages received in sequence. In some embodiments the upper-layer application module is the network port of the device node.
It can be understood that, referring to fig. 2, the step S300 of detecting the first path status of the preset at least one first data path includes:
step S310, a first heartbeat detection packet is sent to each first data path.
It should be noted that when multiple first data paths are provided, a first heartbeat detection message is sent to each of them; and when a first data path is a link to an adjacent one-hop device node, the first heartbeat detection message is sent to that adjacent node.
Step S320, receiving first response data responded by the first data path according to the first heartbeat detection packet.
It should be noted that, taking as an example a first data path that is a link to an adjacent one-hop device node: after the adjacent node receives the first heartbeat detection packet, it sends normal first response data if there is no failure, or sends failure information based on link quality and similar factors. When the first data path has failed such that the adjacent one-hop device node cannot receive the first heartbeat detection packet at all, the first data path is likewise regarded as failed.
Step S330, determining the corresponding first path state according to the first response data.
It should be noted that the first heartbeat detection packet may be sent in real time or periodically; the period is usually at the millisecond level, for example once every 1 ms. Each time detection runs, the state of each first path is stored, for use when the corresponding scheduling task queue is scheduled.
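Steps S310 to S330 amount to a heartbeat probe over each first data path; a minimal sketch follows (the transport is abstracted into a `send_heartbeat` callable, which is an assumption of this illustration, not a patent detail):

```python
def probe_paths(paths, send_heartbeat):
    """Send a first heartbeat detection packet on every first data path and
    derive each first path state from the first response data. A peer that
    answers "ok" is up; a timeout (None) or a failure report marks it down."""
    states = {}
    for path in paths:
        reply = send_heartbeat(path)  # first response data, or None on timeout
        states[path] = "up" if reply == "ok" else "down"
    return states
```

In the patent's scheme this would run periodically (e.g. every 1 ms), storing the resulting states for use when the scheduling task queues are later scheduled.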
It can be understood that the method further includes: acquiring a second path state of an interactive component connected to the device node, the second path state being determined according to the component's response to a second heartbeat detection message; and, within the scheduling time of the scheduling task queue, judging according to the second path state whether to forward the second message in the scheduling task queue from the corresponding first data path to the corresponding second data path.
It should be noted that, in some embodiments, the parsing and control of the first packet depend on upper-layer interaction of the device node, so the second path state of the upper-layer interactive component is taken as one of the switching conditions. Illustratively, take the host computer of a robot as the interactive component: the host computer is connected to and manages several device nodes. When a second message forwarded from device node A to destination device node B must pass through device node C managed by the host computer (i.e., the first data path passes through device node C), detecting the second path state of the host computer in advance further ensures safe transmission of the second message.
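Combining the two detections, the condition for keeping a message on its primary path is simply a conjunction of both states (an illustrative sketch; the state names are assumptions):

```python
def stay_on_primary(first_path_state: str, second_path_state: str) -> bool:
    """The second message stays on its first data path only if both the path
    itself and the interactive component along it (e.g. the robot's host
    computer) answered their heartbeat detection; otherwise it is switched
    to the second data path."""
    return first_path_state == "up" and second_path_state == "up"
```

Either a dead link or an unresponsive host computer is enough to trigger the switch to the backup path.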
It can be understood that, in step S200, according to the packet attribute of the first packet, caching the first packet into the corresponding scheduling task queue, including: determining a scheduling task queue according to the message attribute of the first message; and caching the first message into the scheduling task queue or discarding the first message according to the network bandwidth threshold value corresponding to the scheduling task queue.
It should be noted that network bandwidth thresholds correspond one-to-one with scheduling task queues. The thresholds provide early warning for sending the different types of first messages: if too many first messages are transmitted, the device node cannot process them in time and too many first messages are lost.
It can be understood that the method further includes: determining the network bandwidth threshold corresponding to each scheduling task queue according to a preset priority ratio, where the priority ratio represents the share of the total transmission bandwidth allocated to that scheduling task queue.
For example, assume the total transmission bandwidth is 1 G. For Isoch, PTP, Async and TCP/IP messages, priority ratios of 20%, 10%, 30% and 40% respectively are set according to the application scenario or the historical processing counts of each type. The cache of the task queue corresponding to Isoch messages is then 1 G × 20%, and so on for the task queue of each message type. Within a polling period, each scheduling task queue is then polled in turn.
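The bandwidth arithmetic in this example works out as follows (integer percentages are used to keep the shares exact; the names are illustrative):

```python
TOTAL_BANDWIDTH = 1_000_000_000  # 1 Gbit/s, as in the example above

# Priority ratios per message type: 20%, 10%, 30%, 40%.
PERCENTS = {"Isoch": 20, "PTP": 10, "Async": 30, "TCP/IP": 40}

def bandwidth_thresholds(total=TOTAL_BANDWIDTH, percents=PERCENTS):
    """Network bandwidth threshold for each scheduling task queue,
    one-to-one with the message types."""
    return {mtype: total * pct // 100 for mtype, pct in percents.items()}
```

So the Isoch queue's share is 1 G × 20% = 200 Mbit/s, and the four shares together account for the full transmission bandwidth.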
It can be understood that, when a plurality of scheduling task queues are provided, the method further comprises: and when the bandwidth occupied by the plurality of scheduling task queues is larger than a preset threshold value, stopping the scheduling of at least one scheduling task queue according to a preset selection algorithm.
It should be noted that the preset threshold value may be set by a user or by dynamic statistics according to historical data, or may be loaded by a configuration file.
It should be noted that the preset threshold is used to determine whether the second packet backlog exists in the high-priority scheduling task queue, and by stopping part of the low-priority scheduling task queue, the bandwidth resource allocated to the stopped scheduling task queue is provided for the high-priority processing, so that the high-priority second packet can be timely processed.
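One possible selection algorithm for stopping queues is a greedy lowest-priority-first pass (the patent leaves the algorithm open, so this sketch is purely an assumption):

```python
def stop_queues(occupied_bw: dict, preset_threshold: int) -> list:
    """Stop scheduling low-priority queues until total occupied bandwidth
    falls to the preset threshold. Keys are queue priorities (assume 1 is
    highest); values are each queue's occupied bandwidth."""
    total = sum(occupied_bw.values())
    stopped = []
    for prio in sorted(occupied_bw, reverse=True):  # lowest priority first
        if total <= preset_threshold:
            break
        total -= occupied_bw[prio]
        stopped.append(prio)
    return stopped
```

The bandwidth freed by the stopped queues is then available to the high-priority queues, so backlogged high-priority second packets can be processed in time.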
It is understood that, before step S200, the method further comprises: and carrying out error correction check on the first message.
It should be noted that the error correction check includes a CRC check.
It should be noted that, in some embodiments, the error correction check is set in the FPGA chip, and different parsing modules are set for different message attributes of the first message, so that the multi-thread parsing processing can be implemented.
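As a concrete stand-in for the error correction check, a CRC frame check might look like this (CRC-32 via `zlib` is an illustrative choice; the patent only says the check includes a CRC):

```python
import zlib

def append_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 over the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Recompute the CRC over the body and compare with the trailing bytes."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received
```

A first message failing this check would be dropped before it ever reaches a scheduling task queue.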
It will be appreciated that, with reference to the embodiment shown in figure 1, the method further comprises: and receiving a priority setting request from a user, wherein the priority setting request is used for allocating priority to the scheduling task queue.
Referring to the embodiment of fig. 3, the present application further provides an FPGA chip, including:
the parsing and allocation module 100, configured to receive a first message transmitted by a network port of the device node and cache the first message into the corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes;
a detecting module 200, configured to detect a first path state of a plurality of preset first data paths; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second message is matched with the first data path;
the redundancy module 300 is configured to determine whether to forward the second packet from the corresponding first data path to the corresponding second data path according to a first path state corresponding to the second packet to be forwarded in the scheduling task queue within the scheduling time of the scheduling task queue.
It should be noted that after a network packet enters the FPGA chip, in some embodiments error correction and verification are performed first; packets that pass verification enter the cache and are sent one by one to the parsing module for processing.
It should be noted that, in some embodiments, the parsing and allocating module 100 may be divided into different message parsing modules according to message types, so as to perform asynchronous parsing processing on network messages of different message types, thereby improving processing efficiency. Illustratively, the parsing and allocating module 100 includes an Isoch message parsing module, a PTP message parsing module, an Async message parsing module, and a TCP/IP message parsing module, where the Isoch message parsing module parses and checks an Isoch message; the PTP message analysis module analyzes and checks the PTP message, the Async message analysis module analyzes and checks the Async message, and the TCP/IP message analysis module analyzes and checks the TCP/IP message.
It should be noted that, for the detection module 200, in some embodiments, the detection module 200 is configured to obtain a path state of the first data path of each first packet entering from the internet access and a device state of the interactive component communicatively connected to the device node.
The parsing and allocation module 100 is electrically connected to the detection module 200, and the redundancy module 300 is electrically connected to both the detection module 200 and the parsing and allocation module 100.
Those skilled in the art will appreciate that the topology shown in fig. 3 is not meant to be a limitation of embodiments of the present application and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
It can be understood that, referring to the embodiment shown in fig. 4, the present application also proposes a device, including: a memory 600, a processor 500, and a computer program stored in the memory 600 and executable on the processor 500; the processor 500 implements the link switching method described above when executing the computer program.
The memory 600, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs and non-transitory computer-executable programs. Furthermore, the memory 600 may include high-speed random access memory and non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 600 may optionally include memory located remotely from the processor 500; such remote memories may be connected to the processor 500 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that the device node in this embodiment may be applied to the network architecture shown in fig. 1. Because the device node in this embodiment and the link switching method shown in fig. 1 share the same inventive concept, they have the same implementation principle and technical effect, which are not described in detail here.
The non-transitory software programs and instructions required to implement the link switching method of the above embodiments are stored in the memory 600 and, when executed by the processor 500, perform the link switching method of the above embodiments, for example the method steps corresponding to fig. 1 described above or sub-steps thereof.
It is understood that the present application also provides a computer-readable storage medium storing computer-executable instructions for performing the above-mentioned link switching method.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media known to those skilled in the art.
While the preferred embodiments of the present invention have been described, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are included in the scope of the present invention defined by the claims.

Claims (14)

1. A method for link switching, applied to a device node, is characterized in that the method includes:
receiving a first message transmitted by a network port of the equipment node;
caching the first message into a corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes;
detecting a first path state of at least one preset first data path; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second message is matched with the first data path;
and judging, within the scheduling time of the scheduling task queue, whether to forward the second message from the corresponding first data path to the corresponding second data path according to the first path state corresponding to the second message to be forwarded in the scheduling task queue.
2. The method of claim 1,
the caching the first message into a corresponding scheduling task queue according to the message attribute of the first message includes:
and caching the first message with the priority higher than the preset priority to a corresponding scheduling task queue in an FPGA chip of the equipment node according to the message attribute.
3. The method of claim 2,
the determining, within the scheduling time of the scheduling task queue, whether to forward the second message to be forwarded from the corresponding first data path to the corresponding second data path according to the first path state corresponding to the second message to be forwarded in the scheduling task queue includes:
and in the scheduling time of the scheduling task queue, the FPGA chip judges whether to forward the second message to be forwarded from the corresponding first data path to the corresponding second data path according to the first path state corresponding to the second message to be forwarded in the scheduling task queue.
4. The method of claim 1,
the caching the first message into a corresponding scheduling task queue according to the message attribute of the first message further comprises:
and caching the first message with the priority lower than the preset priority to a corresponding scheduling task queue set by an upper application module of the equipment node according to the message attribute.
5. The method of claim 1,
the detecting a first path state of at least one preset first data path includes:
sending a first heartbeat detection message to each first data path;
receiving first response data returned by the first data path in response to the first heartbeat detection message;
and determining a corresponding first path state according to the first response data.
6. The method of claim 1, further comprising:
acquiring a second path state of an interactive component connected with the equipment node; the second path state is determined according to a second heartbeat detection message responded to by the interactive component;
and judging, within the scheduling time of the scheduling task queue, whether to forward a second message in the scheduling task queue from the corresponding first data path to the corresponding second data path according to the second path state.
7. The method of claim 1, wherein the caching the first message into a corresponding scheduling task queue according to the message attribute of the first message comprises:
determining the scheduling task queue according to the message attribute of the first message;
and caching the first message into the scheduling task queue or discarding the first message according to a network bandwidth threshold value corresponding to the scheduling task queue.
8. The method of claim 7, further comprising:
and determining a network bandwidth threshold value corresponding to the scheduling task queue according to a preset priority ratio, wherein the priority ratio represents the proportion of the total transmission bandwidth allocated to the scheduling task queue.
9. The method of claim 1, wherein a plurality of the scheduling task queues are provided, the method further comprising:
and when the bandwidth occupied by the plurality of scheduling task queues is larger than a preset threshold value, stopping the scheduling of at least one scheduling task queue according to a preset selection algorithm.
10. The method of claim 1, further comprising, before caching the first message into the corresponding scheduling task queue:
and carrying out error correction check on the first message.
11. The method of claim 1, further comprising:
receiving a priority setting request from a user, wherein the priority setting request is used for allocating priority to the scheduling task queue.
12. An FPGA chip, comprising:
the analysis distribution module is used for receiving a first message transmitted by a network port of the equipment node and caching the first message into a corresponding scheduling task queue according to the message attribute of the first message; the scheduling task queues are arranged in one-to-one correspondence with the message attributes;
the detection module is used for detecting a first path state of at least one preset first data path; the first data path is used for forwarding a second message in the scheduling task queue; the transmission attribute of the second message is matched with the first data path;
and the redundancy module is used for judging whether to forward the second message from the corresponding first data path to the corresponding second data path or not according to the first path state corresponding to the second message to be forwarded in the scheduling task queue within the scheduling time of the scheduling task queue.
13. An apparatus, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of link switching according to any of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium having stored thereon computer-executable instructions for performing the method of link switching of any one of claims 1 to 11.
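To make the claimed flow concrete, the following is a hedged software sketch of the steps recited in claims 1, 5, and 7: per-attribute scheduling task queues, heartbeat-based detection of the first data path's state, and switching a message to the second (redundant) data path when its first data path is down. This is a behavioral model only; the class name, method names, and the queue-length stand-in for a bandwidth threshold are all invented for illustration and are not part of the patented FPGA design:

```python
from collections import deque

class LinkSwitcher:
    """Illustrative model of claims 1, 5, and 7 (names invented)."""

    def __init__(self, bandwidth_threshold):
        self.queues = {}             # message attribute -> scheduling task queue
        self.path_ok = {}            # first data path id -> last detected state
        self.bandwidth_threshold = bandwidth_threshold

    def enqueue(self, message):
        # Claim 7: cache the message only while the queue is within its
        # bandwidth threshold (modeled here as a queue-length cap);
        # otherwise discard it.
        q = self.queues.setdefault(message["attribute"], deque())
        if len(q) >= self.bandwidth_threshold:
            return False             # discarded
        q.append(message)
        return True

    def probe(self, path_id, send_heartbeat):
        # Claim 5: send a heartbeat detection message on the first data path
        # and derive the path state from the response (None = no response).
        response = send_heartbeat(path_id)
        self.path_ok[path_id] = response is not None
        return self.path_ok[path_id]

    def schedule(self, attribute):
        # Claim 1: at the queue's scheduling time, keep each pending message
        # on its first data path if that path is up, otherwise forward it
        # from the first data path to the corresponding second data path.
        decisions = []
        q = self.queues.get(attribute, deque())
        while q:
            msg = q.popleft()
            path = "first" if self.path_ok.get(msg["path_id"], False) else "second"
            decisions.append((msg["id"], path))
        return decisions
```

For example, probing one healthy and one failed path, then scheduling an "Isoch" queue, yields a per-message decision of "first" or "second"; a third message beyond the threshold is simply discarded at enqueue time.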
CN202111626734.7A 2021-12-28 2021-12-28 Link switching method, FPGA chip, equipment and storage medium Active CN114430362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111626734.7A CN114430362B (en) 2021-12-28 2021-12-28 Link switching method, FPGA chip, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114430362A true CN114430362A (en) 2022-05-03
CN114430362B CN114430362B (en) 2024-04-12

Family

ID=81311156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111626734.7A Active CN114430362B (en) 2021-12-28 2021-12-28 Link switching method, FPGA chip, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114430362B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050132256A1 (en) * 2003-11-20 2005-06-16 Yasuo Watanabe Storage system, and control method, job scheduling processing method, and failure handling method therefor, and program for each method
US20120198002A1 (en) * 2011-01-27 2012-08-02 T-Mobile Usa, Inc. Unified Notification Platform
US20140280522A1 (en) * 2011-02-02 2014-09-18 Imvu Inc. System and method for providing an actively invalidated client-side network resource cache
WO2017016300A1 (en) * 2015-07-29 2017-02-02 深圳市中兴微电子技术有限公司 Method and apparatus for processing token application, computer storage medium
CN107896200A (en) * 2017-11-08 2018-04-10 中国人民解放军国防科技大学 Message scheduling method compatible with virtual link and packet switching mechanism
CN111130893A (en) * 2019-12-27 2020-05-08 中国联合网络通信集团有限公司 Message transmission method and device
CN111327391A (en) * 2018-12-17 2020-06-23 深圳市中兴微电子技术有限公司 Time division multiplexing method, device, system and storage medium
CN112615796A (en) * 2020-12-10 2021-04-06 北京时代民芯科技有限公司 Queue management system considering storage utilization rate and management complexity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KHALED SALAH; TAREK RAHIL SHELTAMI: "Performance modeling of cloud apps using message queueing as a service (MaaS)", 2017 20TH CONFERENCE ON INNOVATIONS IN CLOUDS, INTERNET AND NETWORKS, 17 April 2017 (2017-04-17) *
ZUO Xia: "Research on Key Technologies of Real-time Flow Measurement in Optical Fiber Backbone Networks", China Master's Theses Full-text Database, 16 February 2014 (2014-02-16) *

Similar Documents

Publication Publication Date Title
CN105607590B (en) Method and apparatus to provide redundancy in a process control system
CN112214441B (en) Communication switching method, equipment and system based on serial bus polling protocol
US11146090B2 (en) Battery management system, and method and apparatus for transmitting information
CN112787960B (en) Stack splitting processing method, device and equipment and storage medium
US9197373B2 (en) Method, apparatus, and system for retransmitting data packet in quick path interconnect system
CN101729231B (en) Industrial Ethernet in distributed control system
US20140071813A1 (en) Protection switching method and apparatus
US9059899B2 (en) Method and system for interrupt throttling and prevention of frequent toggling of protection groups in a communication network
JP2014217062A (en) Link failure diagnosis device and method
CN112737940A (en) Data transmission method and device
JP2017505011A (en) Method and node apparatus for operating a node in a network
CN110808917B (en) Multilink aggregation data retransmission method and transmitting equipment
CN112866338A (en) Server state detection method and device
CN114430362B (en) Link switching method, FPGA chip, equipment and storage medium
CN108282406B (en) Data transmission method, stacking equipment and stacking system
JP2010205234A (en) Monitoring system, network apparatus, monitoring information providing method, and program
CN110661836B (en) Message routing method, device and system, and storage medium
CN108199986B (en) Data transmission method, stacking equipment and stacking system
CN113098709B (en) Network recovery method and device based on distributed networking system and computer equipment
CN116260887A (en) Data transmission method, data transmission device, data reception device, and storage medium
CN109818870B (en) Multicast routing method, device, service board and machine readable storage medium
CN109218135B (en) BFD detection method and device
CN112332956A (en) Information sharing method and device in redundant network and computer storage medium
WO2015120581A1 (en) Traffic loop detection in a communication network
WO2023273088A1 (en) Control method for ring node, and network device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant