CN109327403B - Flow control method, flow control device, network equipment and storage medium - Google Patents


Info

Publication number
CN109327403B
CN109327403B (application CN201811475028.5A)
Authority
CN
China
Prior art keywords
message processing
pipeline
message processing pipeline
message
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811475028.5A
Other languages
Chinese (zh)
Other versions
CN109327403A (en)
Inventor
Chen Cheng (陈诚)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruijie Networks Co Ltd
Original Assignee
Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruijie Networks Co Ltd filed Critical Ruijie Networks Co Ltd
Priority to CN201811475028.5A priority Critical patent/CN109327403B/en
Publication of CN109327403A publication Critical patent/CN109327403A/en
Application granted granted Critical
Publication of CN109327403B publication Critical patent/CN109327403B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/30Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9063Intermediate storage in different physical parts of a node or terminal

Abstract

The embodiments of the application provide a flow control method, a flow control device, a network device, and a storage medium. The method comprises the following steps: detecting the amount of available cache resources in a cache queue of the network device; when the amount of available cache resources is detected to be smaller than a preset lower limit, determining suspended message processing pipelines and normal message processing pipelines from among all message processing pipelines sharing the cache queue, according to the importance of the service each pipeline currently carries; and suspending reception of messages of the services carried by the suspended message processing pipelines, so that messages of the services carried by the normal message processing pipelines are successfully received into the cache queue when they arrive. In this way, the packet loss rate of important services can be reduced and their quality of service effectively improved.

Description

Flow control method, flow control device, network equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a flow control method, an apparatus, a network device, and a storage medium.
Background
Society today depends increasingly on the Internet, and the number of computer nodes connected to the network keeps growing; the performance of network devices, which act as the data switching nodes interconnecting computer networks, determines the overall quality of the network.
At present, when network congestion occurs, network devices usually discard packets arriving at a network port at random in order to alleviate the congestion. However, this causes random packet loss in the network services carried by the device and affects their normal use. For important network services in particular, packet loss may affect the stability of the entire network; a more reasonable flow control scheme is therefore urgently needed.
Disclosure of Invention
Aspects of the present application provide a flow control method, apparatus, network device, and storage medium, so as to improve the quality of service of an important service carried by the network device.
The embodiment of the application provides a flow control method, which comprises the following steps:
detecting the amount of available cache resources in a cache queue of the network device;
when detecting that the amount of available cache resources is smaller than a preset lower limit, determining suspended message processing pipelines and normal message processing pipelines from among all message processing pipelines sharing the cache queue, according to the importance of the service each pipeline currently carries;
and suspending reception of messages of the services carried by the suspended message processing pipelines, so that messages of the services carried by the normal message processing pipelines are successfully received into the cache queue when they arrive.
An embodiment of the present application further provides a flow control device, which includes:
the detection module is configured to detect the amount of available cache resources in the cache queue of the network device;
the configuration module is configured to determine suspended message processing pipelines and normal message processing pipelines from among all message processing pipelines sharing the cache queue, according to the importance of the service each pipeline currently carries, when the amount of available cache resources is detected to be smaller than a preset lower limit;
and the control module is configured to suspend reception of messages of the services carried by the suspended message processing pipelines, so that messages of the services carried by the normal message processing pipelines are successfully received into the cache queue when they arrive.
The embodiment of the application also provides a network device, which comprises a memory and a processor;
the memory is configured to store one or more computer instructions;
the processor, coupled with the memory, is configured to execute the one or more computer instructions to:
detect the amount of available cache resources in a cache queue of the network device;
when detecting that the amount of available cache resources is smaller than a preset lower limit, determine suspended message processing pipelines and normal message processing pipelines from among all message processing pipelines sharing the cache queue, according to the importance of the service each pipeline currently carries;
and suspend reception of messages of the services carried by the suspended message processing pipelines, so that messages of the services carried by the normal message processing pipelines are successfully received into the cache queue when they arrive.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to execute the aforementioned flow control method.
In the embodiments of the application, whether the network device is congested can be determined by detecting the amount of available cache resources in its cache queue; when network congestion occurs, suspended message processing pipelines and normal message processing pipelines can be determined from among the message processing pipelines according to the importance of the service each pipeline carries, and reception of messages of the services carried by the suspended pipelines can be suspended. In this way, messages of only some services are selectively received into the cache queue according to service importance, which helps reduce the packet loss rate of important services and effectively improves their quality of service.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a flow control method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an operating architecture in which a multi-core network device bears multiple services according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a network device according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a flow control device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, when network congestion occurs, network devices usually discard packets arriving at a network port at random in order to alleviate the congestion. However, this causes random packet loss in the carried network services and affects their normal use. In some embodiments of the present application: whether network congestion occurs can be determined by detecting the amount of available cache resources in the cache queue of the network device; when congestion occurs, suspended message processing pipelines and normal message processing pipelines can be determined from among the message processing pipelines according to the importance of the service each carries; and reception of messages of the services carried by the suspended pipelines can be suspended. In this way, messages of only some services are selectively received into the cache queue according to service importance, which helps reduce the packet loss rate of important services and effectively improves their quality of service.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a flow control method according to an embodiment of the present application. As shown in fig. 1, the method includes:
100. detecting the amount of available cache resources in a cache queue of the network device;
101. when detecting that the amount of available cache resources is smaller than a preset lower limit, determining suspended message processing pipelines and normal message processing pipelines from among all message processing pipelines sharing the cache queue, according to the importance of the service each pipeline currently carries;
102. and suspending reception of messages of the services carried by the suspended message processing pipelines, so that messages of the services carried by the normal message processing pipelines are successfully received into the cache queue when they arrive.
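The three steps above can be sketched in code as follows. The `Pipeline` class, the `flow_control_step` function, and the threshold value are illustrative assumptions for the sketch, not part of the patent itself.

```python
LOWER_LIMIT = 64  # preset lower limit on available buffers (assumed value)

class Pipeline:
    """A message processing pipeline and the service it currently carries."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # higher value = more important service
        self.admitting = True      # whether its messages are accepted

def flow_control_step(pipelines, available_buffers, suspend_count=1):
    """One iteration: detect (100), classify (101), suspend (102)."""
    if available_buffers < LOWER_LIMIT:            # step 100/101: congested
        by_priority = sorted(pipelines, key=lambda p: p.priority)
        for p in by_priority[:suspend_count]:      # least important services
            p.admitting = False                    # step 102: suspended class
        for p in by_priority[suspend_count:]:
            p.admitting = True                     # normal class
    return [p.name for p in pipelines if not p.admitting]

pipes = [Pipeline("erp", 2), Pipeline("db", 1), Pipeline("game", 0)]
suspended = flow_control_step(pipes, available_buffers=10)
# only the lowest-priority pipeline stops admitting messages
```

With ample free buffers the step leaves every pipeline admitting messages, so the classification only takes effect under congestion.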
In this embodiment, the network device may be a data switching node device such as a switch, a router, or a gateway, and certainly may also be other devices capable of bearing network services, which is not limited in this embodiment. Accordingly, the flow control method provided by this embodiment can be applied to various types of network devices to improve the quality of service of important services carried by the network devices.
In this embodiment, the network device may use message processing pipelines to carry various services. During initialization of the network device, the number of message processing pipelines it supports can be set according to parameters such as the number of processor cores and the total bandwidth of the communication interface. One message processing pipeline carries one service at a time and can move on to another service once the current one is processed; that is, the service carried by a given pipeline changes dynamically. The network device can therefore carry multiple services simultaneously on multiple message processing pipelines.
The message processing pipelines share a cache queue used to store pending messages. In this embodiment, whether the network device has entered a network congestion state can be determined by detecting the amount of available cache resources in the queue; for example, the device can be considered congested when the amount of available cache resources is detected to be smaller than the preset lower limit. In some practical applications, a cache resource query request can be sent to a hardware monitoring component of the network device, which returns a query result containing the amount of available cache resources in the queue. Of course, the amount of available cache resources can also be obtained in other manners or from other sources, which this embodiment does not limit.
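A minimal sketch of that query exchange follows; the `HardwareMonitor` class and its `query` result shape are assumptions made for illustration, since the patent does not specify the monitoring interface.

```python
class HardwareMonitor:
    """Stand-in for the device's hardware monitoring component (assumed API)."""
    def __init__(self, total_buffers, used_buffers):
        self.total_buffers = total_buffers
        self.used_buffers = used_buffers

    def query(self):
        """Answer a cache resource query with the current free buffer count."""
        return {"available": self.total_buffers - self.used_buffers}

def is_congested(monitor, lower_limit):
    # Compare the queried available cache amount with the preset lower limit.
    return monitor.query()["available"] < lower_limit

mon = HardwareMonitor(total_buffers=1024, used_buffers=1000)
congested = is_congested(mon, lower_limit=64)   # 24 free buffers < 64
```

The same check runs periodically, so the interval between queries can be tuned to the bandwidth and core count as the text describes.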
In this embodiment, the amount of available buffer resources in the buffer queue of the network device may be detected at intervals, and the interval time may be set according to an actual situation. For example, the interval time may be set according to the total bandwidth of the communication interface of the network device, the core count of the processor, and the total number of buffers in the buffer queue. Of course, the interval time may also be determined according to other performance indicators of the network device, which is not limited in this embodiment.
In addition, the preset lower limit value adopted when judging whether the network equipment enters the network congestion state can be set according to the actual situation. For example, the appropriate lower limit value may be set according to the total bandwidth of the communication interface of the network device and the total amount of buffers in the buffer queue. Of course, the lower limit value may also be determined according to other performance indicators of the network device, which is not limited in this embodiment.
When the network device is determined to have entered a network congestion state, suspended message processing pipelines and normal message processing pipelines can be determined from among all message processing pipelines sharing the cache queue, according to the importance of the service each currently carries. As described above, different message processing pipelines in the network device carry different services, and these services are not necessarily equally important. To preferentially guarantee the quality of service of important services, in some practical applications the pipelines carrying less important services can be designated suspended message processing pipelines, and those carrying more important services designated normal message processing pipelines. For example, when the network device has 5 message processing pipelines, they may be sorted by the importance of the service each carries, the 2 pipelines with the least important services designated suspended, and the other 3 designated normal.
Based on this classification, when a message arrives at a communication interface of the network device, the pipeline it belongs to can be determined from the service the message corresponds to; if that pipeline is a suspended message processing pipeline the message is discarded, and if it is a normal message processing pipeline the message is received into the cache queue as usual. In this way, the services carried by the suspended pipelines stop being processed, because no new messages for them are received, and the available resources of the network device are concentrated on the services carried by the normal pipelines. In the example above, a message belonging to a service carried by one of the 2 suspended pipelines is discarded, so no further messages of those services enter the cache queue and the 2 pipelines are effectively suspended.
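The admission decision at the interface can be sketched as follows; the dict-based pipeline records and function names are illustrative assumptions.

```python
def on_message_arrival(message, pipeline_by_service, cache_queue):
    """Return True if the message was received into the cache queue."""
    pipeline = pipeline_by_service[message["service"]]
    if not pipeline["admitting"]:       # suspended pipeline: drop the message
        return False
    cache_queue.append(message)         # normal pipeline: enqueue as usual
    return True

pipelines = {"erp": {"admitting": True}, "game": {"admitting": False}}
queue = []
accepted = on_message_arrival({"service": "erp"}, pipelines, queue)
dropped = on_message_arrival({"service": "game"}, pipelines, queue)
```

Only the accepted message occupies a buffer, which is how the scheme concentrates cache resources on the important services.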
In this embodiment, whether network congestion occurs can be determined by detecting the amount of available cache resources in the cache queue of the network device; when congestion occurs, suspended and normal message processing pipelines can be determined according to the importance of the service each pipeline carries, and reception of messages of the services carried by the suspended pipelines can be suspended. Messages of only some services are thus selectively received into the cache queue according to service importance, which helps reduce the packet loss rate of important services and effectively improves their quality of service.
In the above or the following embodiments, after reception of messages for the suspended message processing pipelines is paused, the amount of available cache resources in the cache queue continues to change dynamically as the normal message processing pipelines keep running. Therefore, the amount of available cache resources can be detected again after the suspension; if it is still smaller than the preset lower limit, a further pipeline to be changed into a suspended message processing pipeline is determined from among the normal pipelines according to the importance of the services they currently carry, and reception of messages for that pipeline is suspended as well.
If the amount of available cache resources is still smaller than the preset lower limit, the normal message processing pipelines are still occupying a large amount of cache and the network device remains congested. In this case, the normal pipeline carrying the least important service can be selected and changed into a suspended message processing pipeline. Continuing the example above: if the network device is still congested after running with 2 suspended and 3 normal message processing pipelines, 1 of the 3 normal pipelines can be changed into a suspended pipeline, leaving the device with 3 suspended and 2 normal message processing pipelines.
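That escalation step can be sketched as below, reproducing the 3-normal/2-suspended example; the record layout and function name are assumptions.

```python
LOWER_LIMIT = 64  # preset lower limit (assumed value)

def escalate_if_congested(pipelines, available):
    """Change the least important normal pipeline into a suspended one."""
    if available >= LOWER_LIMIT:
        return None                      # not congested: nothing to do
    normal = [p for p in pipelines if p["admitting"]]
    if not normal:
        return None                      # everything is already suspended
    victim = min(normal, key=lambda p: p["priority"])
    victim["admitting"] = False          # demote to the suspended class
    return victim["name"]

# 3 normal + 2 suspended pipelines, still congested (10 free < 64):
pipes = [
    {"name": "erp",   "priority": 2, "admitting": True},
    {"name": "db",    "priority": 2, "admitting": True},
    {"name": "oa",    "priority": 1, "admitting": True},
    {"name": "video", "priority": 0, "admitting": False},
    {"name": "game",  "priority": 0, "admitting": False},
]
demoted = escalate_if_congested(pipes, available=10)
# the lowest-priority normal pipeline is demoted, leaving 2 normal pipelines
```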
Reducing the number of normal message processing pipelines further relieves the congestion. Detecting the amount of available cache resources is of course a periodic, ongoing operation: after the ratio of suspended to normal pipelines is adjusted, the effect of the adjustment can be observed in the next detection, and if the device is still congested the number of normal pipelines can be reduced again, until the congestion clears. In practice, designating only a small number of pipelines as suspended is usually enough to clear the congestion, so the quality of service of most important services can be guaranteed.
As described above, reducing the number of normal message processing pipelines relieves congestion mainly because fewer messages are received into the cache queue, while the pending messages already in the queue continue to be processed, so the amount of available cache resources grows. Therefore, in this embodiment, when the amount of available cache resources is detected to be greater than or equal to a preset upper limit, a pipeline to be restored to a normal message processing pipeline is determined from among the suspended pipelines according to the importance of the services they currently carry, and messages of the service carried by the restored pipeline are again received into the cache queue normally when they arrive. The preset upper limit may be greater than or equal to the preset lower limit mentioned above.
When the amount of available cache resources is detected to be greater than or equal to the preset upper limit, the network device has entered an unobstructed state and can carry more services. One or more message processing pipelines can therefore be selected from the suspended pipelines and restored to normal, i.e. the proportion of normal pipelines in the device is increased so that more services are processed. After the proportion is increased, the amount of available cache resources continues to be detected: if it is still greater than or equal to the preset upper limit, the proportion of normal pipelines can be increased further, and if it falls below the preset lower limit, the proportion must be decreased again.
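The recovery direction mirrors the escalation step: once free cache reaches the preset upper limit, the most important suspended pipeline is restored first. The threshold value, record layout, and function name below are assumptions.

```python
UPPER_LIMIT = 256  # preset upper limit, >= the lower limit (assumed value)

def recover_if_clear(pipelines, available):
    """Restore the highest-priority suspended pipeline to the normal class."""
    if available < UPPER_LIMIT:
        return None                      # device is not unobstructed yet
    suspended = [p for p in pipelines if not p["admitting"]]
    if not suspended:
        return None                      # nothing left to restore
    best = max(suspended, key=lambda p: p["priority"])
    best["admitting"] = True             # restore to the normal class
    return best["name"]

pipes = [
    {"name": "erp",  "priority": 2, "admitting": True},
    {"name": "oa",   "priority": 1, "admitting": False},
    {"name": "game", "priority": 0, "admitting": False},
]
restored = recover_if_clear(pipes, available=300)
# the more important of the two suspended services comes back first
```

Setting the upper limit above the lower limit gives the control loop a hysteresis band, so the device does not oscillate between suspending and restoring the same pipeline.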
Accordingly, in this embodiment, the proportion of normal message processing pipelines in the network device can be adapted to the dynamic change of the amount of available cache resources in its cache queue, so that the device operates in a relatively stable network state and the quality of service of as many services as possible is improved.
In the foregoing or following embodiments, the priority of each message processing pipeline can be determined according to the type of service it currently carries, where the service type represents the importance of the service; then, according to these priorities, at least one pipeline with the lowest priority is selected as a suspended message processing pipeline.
In this embodiment, when determining which pipelines to suspend, the priority of each message processing pipeline can be defined according to the type of service it carries. As described above, the service carried by a pipeline changes dynamically, so the type of service each pipeline currently carries can be monitored in real time and its priority adjusted whenever that type changes. The priority of each pipeline in the network device thus changes with the service it carries, and as the priority changes the processing state of the pipeline may change as well. For example, a pipeline with a low priority is designated a suspended message processing pipeline; when its priority rises because the service it carries has changed, it is adjusted back to a normal message processing pipeline. This keeps the processing-state configuration of each pipeline consistent with the service it carries and avoids neglecting important services.
Service types include, but are not limited to, Enterprise Resource Planning (ERP), database, Office Automation (OA), web video, and online gaming. The type of a service can characterize its importance; ERP, for example, is typically more important than online gaming. In some practical applications, a correspondence between service type and priority can be established in advance, so that the priority of each message processing pipeline follows from the type of service it carries. For example, if priorities range from 0 to 2, the service types can be ordered by importance and each associated with an appropriate priority value.
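Such a pre-established correspondence might look like the following; the concrete priority values are illustrative assumptions within the 0 to 2 range mentioned above, not values specified by the patent.

```python
# Assumed mapping from service type to a priority in the range 0-2.
SERVICE_PRIORITY = {
    "erp": 2,       # Enterprise Resource Planning: most important
    "database": 2,
    "oa": 1,        # Office Automation
    "video": 0,     # web video
    "game": 0,      # online gaming: least important
}

def pipeline_priority(service_type):
    """A pipeline's priority follows the type of service it carries."""
    return SERVICE_PRIORITY.get(service_type, 0)   # unknown types: lowest
```

Because the mapping is looked up each time a pipeline picks up a new service, the pipeline's priority tracks its carried service automatically.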
Therefore, when configuring the processing state of each message processing pipeline in the network device, the pipelines can be sorted by priority, at least one pipeline with the lowest priority designated a suspended message processing pipeline, and the others designated normal message processing pipelines. In this way, services of relatively low importance are suspended and important services are guaranteed normal processing first.
In this embodiment, the granularity for adjusting the ratio of normal to suspended message processing pipelines can be set to one pipeline at a time, in combination with the periodic detection of the amount of available cache resources. While the network device is in a congestion state, the lowest-priority normal message processing pipeline is changed into a suspended pipeline, detection of available cache resources continues, and if the device is still congested the next lowest-priority normal pipeline is changed as well, until the device leaves the congestion state. Conversely, when the device enters an unobstructed state, the highest-priority suspended pipeline is restored to a normal pipeline, detection continues, and if the device is still unobstructed the next highest-priority suspended pipeline is restored, until the device is found to have left the unobstructed state.
In practical applications, by setting the preset upper limit mentioned in the foregoing embodiments to be greater than the preset lower limit, a network-stable state is defined for the network device between the congestion state and the unobstructed state. With this stable state as the adjustment target, the device is kept in it by adjusting the ratio of normal to suspended message processing pipelines, thereby guaranteeing the quality of service of important services.
Fig. 2 is a schematic diagram of a working architecture in which a multi-core network device carries multiple services according to an embodiment of the present application. As shown in fig. 2, the network device contains three message processing pipelines, each able to carry a different service. Taking the message forwarding task of a service as an example, each pipeline can be divided into three modules: a drive receiving module, a processing and forwarding module, and a drive sending module, corresponding to the three stages of the forwarding task: message receiving, message processing, and message sending. The network device contains multiple processor cores, and a scheduling module dispatches the processing jobs of the modules on the three pipelines to appropriate cores for execution; the scheduling module itself can be deployed on a core other than those executing the pipeline jobs. In this way, the modules of one message processing pipeline may be executed by different cores. It should be noted that the number of pipelines, the number and content of modules per pipeline, and so on in fig. 2 are exemplary and do not limit the scope of the present application.
Accordingly, in the above or following embodiments, a message to be processed may also be read from the cache queue and sent, in sequence, to the driver receiving module, the processing and forwarding module, and the driver sending module on the message processing pipeline where the service corresponding to the message is carried, for pipeline processing; and after the pipeline processing of the message is finished, the message is deleted from the cache queue to release cache resources.
In this embodiment, the messages to be processed in the cache queue are sent, stage by stage, through the processing stages included in the message processing pipeline. For example, to meet the message forwarding requirements of various services, the message processing pipeline may include a driver receiving module, a processing and forwarding module, and a driver sending module. For a network device with a multi-core processor, the processing tasks of these modules can be scheduled onto different core processors, so that multiple message processing pipelines run in parallel and the network device can process multiple services in parallel.
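The three-stage pipeline just described can be sketched as follows. This is a minimal sketch only: Python threads stand in for per-core scheduling, the hand-off queues and the `upper()` transformation are placeholders for the real driver and forwarding logic, and all names are illustrative assumptions.

```python
# Sketch of the three-stage pipeline: driver receive -> process/forward
# -> driver send, with each stage run by its own worker to mimic stages
# being scheduled onto different core processors.
import queue
import threading

def run_pipeline(messages):
    rx_to_proc, proc_to_tx, out = queue.Queue(), queue.Queue(), []
    SENTINEL = None  # marks end-of-stream between stages

    def drv_receive():          # message receiving stage
        for m in messages:
            rx_to_proc.put(m)
        rx_to_proc.put(SENTINEL)

    def proc_forward():         # message processing stage
        while (m := rx_to_proc.get()) is not SENTINEL:
            proc_to_tx.put(m.upper())   # stand-in for real forwarding logic
        proc_to_tx.put(SENTINEL)

    def drv_send():             # message sending stage
        while (m := proc_to_tx.get()) is not SENTINEL:
            out.append(m)

    workers = [threading.Thread(target=f)
               for f in (drv_receive, proc_forward, drv_send)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return out

print(run_pipeline(["pkt-a", "pkt-b"]))  # ['PKT-A', 'PKT-B']
```

Because each stage only touches its own hand-off queue, several such pipelines can run side by side without sharing state, which is what lets the device process multiple services in parallel.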
After the message processing pipeline processing of the message to be processed is completed, the message to be processed can be deleted from the cache queue, the cache resource occupied by the message to be processed is released into an available cache resource, and accordingly, the amount of the available cache resource in the cache queue of the network device changes.
In this embodiment, the processing progress of the messages to be processed in the cache queue causes dynamic changes in the amount of available cache resources in the cache queue. Based on these changes, the ratio between normal message processing pipelines and suspended message processing pipelines in the network device can be dynamically adjusted, so that the network device is maintained in the network stable state, the packet loss rate of important services is reduced, and the quality of service of important services is improved.
Fig. 3 is a schematic structural diagram of a network device according to another embodiment of the present application. As shown in fig. 3, the network device includes: a memory 30 and one or more processors 31.
The memory 30 is used to store computer programs and may be configured to store other various data to support operations on the network device. Examples of such data include instructions for any application or method operating on the network device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 30 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 31 is coupled to the memory 30 and is configured to execute the computer program in the memory 30 to:
detecting the amount of available cache resources in a cache queue of the network equipment;
when detecting that the available cache resource amount is smaller than a preset lower limit value, determining a suspended message processing pipeline and a normal message processing pipeline from the message processing pipelines according to the importance degree of the current bearer service of each message processing pipeline sharing the cache queue;
and suspending receiving of the message of the current bearer service of the suspended message processing pipeline, so as to ensure that the message of the current bearer service of the normal message processing pipeline is successfully received into the cache queue when it arrives.
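The detect-then-partition step just listed can be sketched as follows. The sketch is illustrative only: the threshold, the pipeline names, and the priority values are invented for the example and are not part of the embodiments.

```python
# Sketch of one flow-control step: if the available cache resource amount
# drops below the preset lower limit, pick the pipeline whose current
# service is least important and mark it suspended; all others stay normal.
PRESET_LOWER = 100

def flow_control_step(available, pipelines):
    """pipelines: dict of name -> priority (higher = more important service).
    Returns the set of pipeline names to suspend."""
    if available >= PRESET_LOWER:
        return set()                     # no congestion pressure: all normal
    lowest = min(pipelines, key=pipelines.get)
    return {lowest}                      # suspend the least important pipeline

suspended = flow_control_step(80, {"voice": 3, "video": 2, "bulk": 1})
print(suspended)   # {'bulk'}
```

Messages for suspended pipelines are then refused at admission, leaving the remaining cache slots to the normal pipelines' services.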
In this embodiment, the network device includes one or more processors 31, and the one or more processors 31 may be configured to jointly perform message processing pipeline processing on each message processing pipeline in addition to performing the above-described actions. The working architecture in which one or more processors 31 work in conjunction to perform message processing pipeline processing on each message processing pipeline can be seen in fig. 2. In this embodiment, one processor 31 may be responsible for processing one processing stage in one message processing pipeline, such as message reception, message processing, or message transmission.
In an optional embodiment, after suspending receiving of the message of the current bearer service of the suspended message processing pipeline, the processor 31 is further configured to:
continuously detecting the amount of available cache resources in the cache queue;
and when the available cache resource amount is continuously detected to be still less than the preset lower limit value, determining a message processing pipeline changed into a suspended message processing pipeline from the normal message processing pipeline according to the importance degree of the current bearer service of the normal message processing pipeline, and suspending receiving the message of the current bearer service of the message processing pipeline changed into the suspended message processing pipeline.
In an alternative embodiment, the processor 31 is further configured to:
and when it is continuously detected that the available buffer resource amount is larger than or equal to the preset upper limit value, determining, from the suspended message processing pipelines, a message processing pipeline to be restored into a normal message processing pipeline according to the importance degree of the current bearer service of each suspended message processing pipeline, and normally receiving the message of the current bearer service of the restored message processing pipeline into the buffer queue when it arrives, wherein the preset upper limit value is larger than or equal to the preset lower limit value.
In an optional embodiment, when determining the suspended message processing pipeline from the message processing pipelines according to the importance of the current bearer service of each message processing pipeline sharing the buffer queue, the processor 31 is configured to:
determining the priority of each message processing pipeline according to the type of the current bearer service of each message processing pipeline, wherein the type of the service can represent the importance degree of the service;
and selecting at least one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
In an alternative embodiment, the processor 31, when selecting at least one of the packet processing pipelines with the lowest priority as the suspended packet processing pipeline according to the priority of each packet processing pipeline, is configured to:
and selecting one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
In an alternative embodiment, the processor 31 is further configured to:
and monitoring the type of the current bearer service of each message processing pipeline in real time, and dynamically adjusting the priority of each message processing pipeline when the type of the current bearer service of any message processing pipeline changes.
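Deriving each pipeline's priority from the type of service it currently carries, and re-deriving it when that service changes, can be sketched as follows. The service-type table and class names are assumptions for illustration, not taken from the embodiments.

```python
# Sketch: a pipeline's priority follows the service it currently carries,
# so a run-time change of the carried service re-ranks the pipeline.
SERVICE_PRIORITY = {"voice": 3, "video": 2, "bulk-download": 1}

class Pipeline:
    def __init__(self, name, service):
        self.name = name
        self.service = service          # the currently carried service type

    @property
    def priority(self):
        # Derived on demand, so it always reflects the current service.
        return SERVICE_PRIORITY[self.service]

p = Pipeline("pipe-0", "bulk-download")
print(p.priority)        # 1
p.service = "voice"      # the carried service changes at run time...
print(p.priority)        # ...and the priority follows: 3
```

Computing the priority from the service type on demand, rather than caching it, is one simple way to keep the ranking consistent with whatever service the pipeline carries at the moment of the flow-control decision.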
In an alternative embodiment, the processor 31 is further configured to:
reading a message to be processed from the cache queue, and sending the message, in sequence, to the driver receiving module, the processing and forwarding module, and the driver sending module on the message processing pipeline where the service corresponding to the message is carried, for pipeline processing;
and after the message processing pipeline processing of the message to be processed is finished, deleting the message to be processed from the cache queue to release cache resources.
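The cache-queue accounting described in these two steps can be sketched as follows: a message occupies a buffer slot while it moves through the pipeline stages and frees that slot when pipeline processing completes. The class, method names, and capacity value are illustrative assumptions.

```python
# Sketch of cache-queue accounting: admission consumes a buffer slot,
# completion of pipeline processing releases it again.
class CacheQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = []             # messages currently holding a slot

    @property
    def available(self):
        return self.capacity - len(self.pending)

    def admit(self, msg):
        if self.available == 0:
            return False              # no free buffer: the message is dropped
        self.pending.append(msg)
        return True

    def complete(self, msg):
        self.pending.remove(msg)      # pipeline done: release the slot

q = CacheQueue(capacity=2)
q.admit("m1")
q.admit("m2")
print(q.available)   # 0
q.complete("m1")
print(q.available)   # 1
```

It is exactly this rise and fall of `available` that the detection step samples against the preset lower and upper limit values.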
In an optional embodiment, the processor 31, when detecting an amount of available buffer resources in a buffer queue of the network device, is configured to:
sending a cache resource query request to a hardware monitoring component of the network equipment;
and receiving a query result returned by the hardware monitoring component according to the cache resource query request, wherein the query result comprises the available cache resource amount of the cache queue.
Further, as shown in fig. 3, the network device further includes: communication components 32, power components 33, and the like. Only some of the components are schematically shown in fig. 3, and it is not meant that the network device includes only the components shown in fig. 3.
Wherein the communication component 32 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technology to facilitate short-range communications.
The power supply unit 33 supplies power to various components of the device in which the power supply unit is installed. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the network device in the foregoing method embodiments when executed.
Fig. 4 is a schematic structural diagram of a flow control device according to another embodiment of the present application. As shown in fig. 4, the flow control device may include:
a detecting module 40, configured to detect an amount of available cache resources in a cache queue of a network device;
a configuration module 41, configured to determine, when it is detected that the amount of the available cache resources is smaller than a preset lower limit value, a suspended message processing pipeline and a normal message processing pipeline from among the message processing pipelines according to an importance degree of a current bearer service of each message processing pipeline sharing the cache queue;
and the control module 42 is configured to suspend receiving the packet of the current bearer service of the suspended packet processing pipeline, so as to ensure that the packet of the current bearer service of the normal packet processing pipeline is successfully received into the cache queue when arriving.
In an optional embodiment, the detecting module 40 is further configured to continue to detect the amount of available buffer resources in the buffer queue;
the configuration module 41 is further configured to determine, according to the importance degree of the current bearer service of the normal message processing pipeline, a message processing pipeline that is changed to a suspended message processing pipeline from the normal message processing pipeline when it is continuously detected that the amount of the available cache resources is still smaller than the preset lower limit value, and suspend receiving a message of the current bearer service of the message processing pipeline that is changed to the suspended message processing pipeline.
In an alternative embodiment, the configuration module 41 is further configured to:
and when it is continuously detected that the available buffer resource amount is larger than or equal to the preset upper limit value, determining, from the suspended message processing pipelines, a message processing pipeline to be restored into a normal message processing pipeline according to the importance degree of the current bearer service of each suspended message processing pipeline, and normally receiving the message of the current bearer service of the restored message processing pipeline into the buffer queue when it arrives, wherein the preset upper limit value is larger than or equal to the preset lower limit value.
In an alternative embodiment, the configuration module 41 is configured to:
determining the priority of each message processing pipeline according to the type of the current bearer service of each message processing pipeline, wherein the type of the service can represent the importance degree of the service;
and selecting at least one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
In an optional embodiment, the configuration module 41 is specifically configured to:
and selecting one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
In an alternative embodiment, the configuration module 41 is further configured to:
and monitoring the type of the current bearer service of each message processing pipeline in real time, and dynamically adjusting the priority of each message processing pipeline when the type of the current bearer service of any message processing pipeline changes.
In an alternative embodiment, the control module 42 is further configured to:
reading a message to be processed from the cache queue, and sending the message, in sequence, to the driver receiving module, the processing and forwarding module, and the driver sending module on the message processing pipeline where the service corresponding to the message is carried, for pipeline processing;
and after the message processing pipeline processing of the message to be processed is finished, deleting the message to be processed from the cache queue to release cache resources.
In an alternative embodiment, the detection module 40 is configured to:
sending a cache resource query request to a hardware monitoring component of the network equipment;
and receiving a query result returned by the hardware monitoring component according to the cache resource query request, wherein the query result comprises the available cache resource amount of the cache queue.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A flow control method, comprising:
detecting the amount of available cache resources in a cache queue of network equipment, wherein the cache queue is used for storing messages to be processed;
when detecting that the available cache resource amount is smaller than a preset lower limit value, determining a suspended message processing pipeline and a normal message processing pipeline from the message processing pipelines according to the importance degree of the current bearer service of each message processing pipeline sharing the cache queue;
and suspending receiving of the message of the current bearer service of the suspended message processing pipeline, so as to ensure that the message of the current bearer service of the normal message processing pipeline is successfully received into the cache queue when it arrives.
2. The method of claim 1, wherein after suspending receiving packets of a current bearer service of the suspended packet processing pipeline, further comprising:
continuously detecting the available buffer resource amount in the buffer queue;
and when it is continuously detected that the available cache resource amount is still smaller than the preset lower limit value, determining, from the normal message processing pipelines, a message processing pipeline to be changed into a suspended message processing pipeline according to the importance degree of the current bearer service of each normal message processing pipeline, and suspending receiving of the message of the current bearer service of the message processing pipeline changed into the suspended message processing pipeline.
3. The method of claim 2, further comprising:
and when it is continuously detected that the available cache resource amount is larger than or equal to a preset upper limit value, determining, from the suspended message processing pipelines, a message processing pipeline to be restored into a normal message processing pipeline according to the importance degree of the current bearer service of each suspended message processing pipeline, and normally receiving the message of the current bearer service of the restored message processing pipeline into the cache queue when it arrives, wherein the preset upper limit value is larger than or equal to the preset lower limit value.
4. The method according to any of claims 1-3, wherein said determining a suspended message processing pipeline from among the message processing pipelines that share the cache queue according to the importance of the current bearer service of each message processing pipeline comprises:
determining the priority of each message processing pipeline according to the type of the current bearer service of each message processing pipeline, wherein the type of the service represents the importance degree of the service;
and selecting at least one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
5. The method of claim 4, further comprising:
and monitoring the type of the current bearer service of each message processing pipeline in real time, and dynamically adjusting the priority of each message processing pipeline when the type of the current bearer service of any message processing pipeline changes.
6. The method according to any one of claims 1-3, wherein the detecting an amount of available buffer resources in a buffer queue of a network device comprises:
sending a cache resource query request to a hardware monitoring component of the network device;
and receiving a query result returned by the hardware monitoring component according to the cache resource query request, wherein the query result comprises the available cache resource amount of the cache queue.
7. A flow control device, comprising:
the detection module is used for detecting the amount of available cache resources in a cache queue of the network equipment, wherein the cache queue is used for storing the message to be processed;
the configuration module is used for determining a suspended message processing pipeline and a normal message processing pipeline from the message processing pipelines according to the importance degree of the current bearer service of each message processing pipeline sharing the cache queue when it is detected that the available cache resource amount is smaller than a preset lower limit value;
and the control module is used for suspending receiving of the message of the current bearer service of the suspended message processing pipeline, so as to ensure that the message of the current bearer service of the normal message processing pipeline is successfully received into the cache queue when it arrives.
8. A network device comprising one or more memories and one or more processors;
the one or more memories are to store one or more computer instructions;
the one or more processors coupled with the one or more memories for executing the one or more computer instructions for:
detecting the amount of available cache resources in a cache queue of network equipment, wherein the cache queue is used for storing messages to be processed;
when detecting that the available cache resource amount is smaller than a preset lower limit value, determining a suspended message processing pipeline and a normal message processing pipeline from the message processing pipelines according to the importance degree of the current bearer service of each message processing pipeline sharing the cache queue;
and suspending receiving of the message of the current bearer service of the suspended message processing pipeline, so as to ensure that the message of the current bearer service of the normal message processing pipeline is successfully received into the cache queue when it arrives.
9. The device of claim 8, wherein the processor, after suspending receiving of the message of the current bearer service of the suspended message processing pipeline, is further configured to:
continuously detecting the available buffer resource amount in the buffer queue;
and when it is continuously detected that the available cache resource amount is still smaller than the preset lower limit value, determining, from the normal message processing pipelines, a message processing pipeline to be changed into a suspended message processing pipeline according to the importance degree of the current bearer service of each normal message processing pipeline, and suspending receiving of the message of the current bearer service of the message processing pipeline changed into the suspended message processing pipeline.
10. The device of claim 9, wherein the processor is further configured to:
and when it is continuously detected that the available cache resource amount is larger than or equal to a preset upper limit value, determining, from the suspended message processing pipelines, a message processing pipeline to be restored into a normal message processing pipeline according to the importance degree of the current bearer service of each suspended message processing pipeline, and normally receiving the message of the current bearer service of the restored message processing pipeline into the cache queue when it arrives, wherein the preset upper limit value is larger than or equal to the preset lower limit value.
11. The device according to any of claims 8-10, wherein the processor, when determining the suspended message processing pipeline from the message processing pipelines sharing the cache queue according to the importance of the current bearer service of each message processing pipeline, is configured to:
determining the priority of each message processing pipeline according to the type of the current bearer service of each message processing pipeline, wherein the type of the service represents the importance degree of the service;
and selecting at least one message processing pipeline with the lowest priority as a suspended message processing pipeline according to the priority of each message processing pipeline.
12. The device of claim 11, wherein the processor is further configured to:
and monitoring the type of the current bearer service of each message processing pipeline in real time, and dynamically adjusting the priority of each message processing pipeline when the type of the current bearer service of any message processing pipeline changes.
13. The device of any of claims 8-10, wherein the processor, in detecting an amount of buffer resources available in a buffer queue of a network device, is configured to:
sending a cache resource query request to a hardware monitoring component of the network device;
and receiving a query result returned by the hardware monitoring component according to the cache resource query request, wherein the query result comprises the available cache resource amount of the cache queue.
14. A computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the flow control method of any of claims 1-6.
CN201811475028.5A 2018-12-04 2018-12-04 Flow control method, flow control device, network equipment and storage medium Active CN109327403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811475028.5A CN109327403B (en) 2018-12-04 2018-12-04 Flow control method, flow control device, network equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109327403A CN109327403A (en) 2019-02-12
CN109327403B true CN109327403B (en) 2022-08-16

Family

ID=65256709


Country Status (1)

Country Link
CN (1) CN109327403B (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107770090A (en) * 2017-10-20 2018-03-06 深圳市楠菲微电子有限公司 Method and apparatus for controlling register in streamline

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102025638A * 2010-12-21 2011-04-20 Fujian Star Net Ruijie Networks Co Ltd Data transmission method and device based on priority level as well as network equipment
US9154455B1 * 2013-08-30 2015-10-06 Qlogic, Corporation Method and system for determining drop eligibility of network information
US9749256B2 * 2013-10-11 2017-08-29 Ge Aviation Systems Llc Data communications network for an aircraft
CN105337895B * 2014-07-14 2019-02-19 New H3C Technologies Co Ltd Network equipment main control unit, network equipment subcard, and network equipment
CN106961445B * 2017-04-28 2019-10-29 PLA Information Engineering University Packet parsing device based on FPGA hardware parallel pipeline

Also Published As

Publication number Publication date
CN109327403A (en) 2019-02-12

Similar Documents

Publication Publication Date Title
CN109327403B (en) Flow control method, flow control device, network equipment and storage medium
US11902092B2 (en) Systems and methods for latency-aware edge computing
US9853906B2 (en) Network prioritization based on node-level attributes
CN103053146B (en) Data migration method and device
US11315125B2 (en) Prioritized data synchronization
CN104378308A (en) Method and device for detecting message sending rate
CN112600878B (en) Data transmission method and device
CN113760452A (en) Container scheduling method, system, equipment and storage medium
EP3058705B1 (en) Data classification for adaptive synchronization
CN113315671A (en) Flow rate limit and information configuration method, routing node, system and storage medium
Peralta et al. Fog to cloud and network coded based architecture: Minimizing data download time for smart mobility
CN115087043A (en) Multi-path redundant transmission method, user equipment, network entity and storage medium
CN114679416A (en) Robot communication method, system, equipment and storage medium
US20170220382A1 (en) Weight adjusted dynamic task propagation
CN106790354B (en) Communication method and device for preventing data congestion
CN112968845A (en) Bandwidth management method, device, equipment and machine-readable storage medium
KR102201799B1 (en) Dynamic load balancing method and dynamic load balancing device in sdn-based fog system
CN103442257A (en) Method, device and system for achieving flow resource management
JP6886874B2 (en) Edge devices, data processing systems, data transmission methods, and programs
US20190108060A1 (en) Mobile resource scheduler
CN114024968B (en) Message sending method and device based on intermediate equipment and electronic equipment
CN115391051A (en) Video computing task scheduling method, device and computer readable medium
US9647966B2 (en) Device, method and non-transitory computer readable storage medium for performing instant message communication
WO2016188078A1 (en) Method and apparatus for networking subscriber identity module
CN112995311A (en) Service providing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant