WO2015149460A1 - Fiber channel over ethernet flow control method, device and system - Google Patents

Fiber channel over ethernet flow control method, device and system Download PDF

Info

Publication number
WO2015149460A1
WO2015149460A1 (PCT/CN2014/083645)
Authority
WO
WIPO (PCT)
Prior art keywords
traffic
connection
priority
magnetic array
source node
Prior art date
Application number
PCT/CN2014/083645
Other languages
French (fr)
Chinese (zh)
Inventor
牛克强
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2015149460A1 publication Critical patent/WO2015149460A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/64Hybrid switching systems
    • H04L12/6418Hybrid transport

Definitions

  • FC Fibre Channel
  • SAN storage area network
  • FCoE Fibre Channel over Ethernet
  • FCoE Initiator: when most storage arrays in the data center are FC targets, the FCoE Initiator is connected to lossless Ethernet, and its FC data is forwarded through the FCoE switch into the FC SAN to access the FC target.
  • FCoE Target
  • An FCoE Enabler can access the FCoE Target directly over lossless Ethernet. Since the magnetic array device is a shared resource, it has far fewer front-end ports than there are hosts; to allow more hosts to connect, a typical SAN must contain a SAN switch. The typical magnetic array network therefore has multiple hosts connected to one magnetic array, or even to a single port of one magnetic array.
  • In a SAN, service data flows are stored and forwarded through nodes at every level of the network. During forwarding, the traffic model of the service data flow changes, and indicators such as burst and jitter degrade. For example, in an IP-network video service, the data stream sent from the video source is stored and forwarded by the nodes along the path; the degraded burst and jitter exceed the buffer processing capability of the receiving end, causing packet loss, mosaic artifacts, and other quality problems.
  • FIG. 1 is a schematic diagram of a structure of a magnetic array network according to the related art
  • FIG. 2 is a schematic diagram of fluctuations of a traffic bandwidth of a magnetic array network according to the related art.
  • Gbps gigabits per second
  • Under this networking model, heavy congestion builds up between the switch and the magnetic array, causing performance jitter in the applications of hosts H1 and H2; the traffic bandwidth of those applications fluctuates as shown in FIG. 2.
  • The related-art solution is to implement lossless Ethernet flow control: based on IEEE 802.1Qbb and 802.3bd, priority-based flow control is applied on full-duplex point-to-point links.
  • Current Ethernet can also achieve zero packet loss by using PAUSE frames, but a PAUSE blocks all traffic on the link, essentially suspending the entire link.
  • Priority-based Flow Control (PFC) allows eight virtual channels to be created on an Ethernet link, with an IEEE 802.1p priority assigned to each virtual channel; any one virtual channel can be paused and restarted separately while the traffic of the other virtual channels passes without interruption.
  • Buffer space is allocated among the eight queues of a switch port, forming eight virtualized channels in the network, and each data stream carries its own channel label (identified by its 802.1p priority).
  • The buffer sizes give each queue a different data-caching capability.
  • Under transient congestion, that is, when a device's queue cache fills faster than it drains and exceeds a certain threshold, the device sends back-pressure information in the direction from which the data entered. When the upstream device receives the back-pressure information, it stops sending accordingly: it delays the data and stores it in its local port buffer. If the consumption of that local buffer also exceeds its threshold, the back pressure continues upstream, level by level, until it reaches the network's terminal equipment, thereby eliminating packet loss at network nodes due to congestion.
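The hop-by-hop back pressure described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (`Port`, `XOFF_THRESHOLD`) are assumptions of the sketch.

```python
XOFF_THRESHOLD = 4  # buffered frames before back pressure is asserted

class Port:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # the port that sends data to this one
        self.buffer = []          # local port buffer
        self.paused = False       # True once a PAUSE has been received

    def enqueue(self, frame):
        # Store the frame locally; if the buffer crosses the threshold,
        # send back pressure in the direction the data came from.
        self.buffer.append(frame)
        if len(self.buffer) >= XOFF_THRESHOLD and self.upstream:
            self.upstream.receive_pause()

    def receive_pause(self):
        # On PAUSE the upstream port stops sending; if its own buffer is
        # already over threshold, the back pressure continues upstream,
        # level by level (which is how congestion spreads in FIG. 3).
        self.paused = True
        if len(self.buffer) >= XOFF_THRESHOLD and self.upstream:
            self.upstream.receive_pause()
```

For example, once a downstream port's buffer reaches the threshold, its upstream neighbour is paused even though the upstream buffer itself is still empty.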
  • FIG. 3 is a schematic diagram of congestion spreading caused by PFC according to the related art. As shown in FIG. 3, when congestion occurs on port P7 of switch S3, PAUSE frames are sent along the dotted lines in the figure to the upstream devices, including switch S1 and switch S2, which may in turn congest the devices upstream of S1 and S2.
  • The drawback of the related-art flow control mechanism is that the PAUSE frame is an L2 (link-layer) flow control: in the related-art FCoE protocol the L2 layer only guarantees link control, so when a PAUSE frame occurs the event cannot be passed up the FCoE stack to the L3 (network) layer.
  • In the networking of FIG. 1, when a PAUSE occurs at the L2 layer between switch port P3 and magnetic array T1, the applications of hosts H1 and H2 cannot perceive it and still deliver service at a 10G rate.
  • FIG. 4 is a schematic diagram of network congestion in a SAN according to the related art; a typical congestion phenomenon of such a network is shown in FIG. 4.
  • the congestion control mechanism in the related art includes two strategies of congestion avoidance and congestion control.
  • The former aims to avoid congestion before network capacity is reached; the latter handles the situation after network capacity is reached.
  • The former is a "preventive" measure that maintains the network's high-throughput, low-latency state to avoid congestion; the latter is a "recovery" measure that lets the network recover from congestion and return to normal operating conditions.
  • Congestion increases the delay and jitter of the request transmission, which may cause retransmission of the request, resulting in more congestion.
  • the effective throughput of the network is reduced, resulting in a decrease in the utilization of network resources.
  • The SAN architecture must ensure that the underlying network devices are properly loaded. Some applications may tolerate overloaded links, but others cannot accept such overloads and timeouts, so FCoE flow control must be considered from top to bottom. Moreover, if the capacity provided by the magnetic array device fluctuates too much, some user services will be interrupted.
  • The key indicator in the storage industry's SPC-1 benchmark results is latency, for which the proportion of requests at each load level falling into each latency band is counted.
  • According to an embodiment, a flow control method for Fibre Channel over Ethernet includes: determining whether the traffic of an ingress port of a magnetic array device exceeds a predetermined threshold; and, if the determination result is yes, sending a configuration message to the source node device connected to the ingress port.
  • When multiple source node devices are connected to the ingress port, the method further includes: determining a flow control policy for each connection between the ingress node and the multiple source node devices, and processing the traffic of each connection according to that policy. Determining the flow control policy for each connection includes: obtaining the flow control policy from a preset configuration; and/or adjusting the flow control policy according to the priority of each connection and/or the load of the service on each connection.
  • Adjusting the flow control policy according to the priority of each connection and/or the traffic on each connection includes: when the traffic on a high-priority connection increases, providing bandwidth allocated to low-priority connections to the high-priority connection so as to meet the bandwidth requirement of the traffic on the high-priority connection; and/or detecting the traffic of each connection within a predetermined time, and adding the bandwidth allocated to connections on which no traffic was detected within the predetermined time to a common resource pool for use by other connections.
  • Detecting the traffic of each connection within the predetermined time includes: saving, for each connection, the receive time of the source node device's latest read/write request at the ingress node; checking the receive time of each connection at intervals of the predetermined time; and, if the interval between the receive time and the current time is greater than or equal to the predetermined time, determining that no traffic was detected on the corresponding connection within the predetermined time.
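The timestamp-based idle detection just described can be sketched as follows; the class and method names are illustrative, and the 30-second window comes from the preferred embodiment later in the description.

```python
import time

IDLE_SECONDS = 30  # the predetermined time (30 s in the preferred embodiment)

class IdleDetector:
    """Keep, per connection, the receive time of the latest read/write
    request, and report connections idle for IDLE_SECONDS or more."""

    def __init__(self):
        self.last_seen = {}  # connection id -> timestamp of latest request

    def on_request(self, conn, now=None):
        # Save the receive time of the latest read/write request.
        self.last_seen[conn] = time.time() if now is None else now

    def idle_connections(self, now=None):
        # Traverse all saved times; a connection whose latest request is
        # IDLE_SECONDS old (or older) carried no traffic in that window.
        now = time.time() if now is None else now
        return [c for c, t in self.last_seen.items()
                if now - t >= IDLE_SECONDS]
```

Because only one timestamp per connection is stored and the scan runs at a fixed interval, the array avoids continuously metering every connection.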
  • When the flow control policy includes bandwidth weights, processing the traffic of each connection according to the policy includes: determining the allocated bandwidth of each connection from its bandwidth weight, and processing the traffic of each connection within its allocated bandwidth.
  • Processing the traffic of each connection within its allocated bandwidth includes: receiving a read/write request from the source node device; determining whether the traffic of the connection between the source node device and the ingress port has reached the allocated bandwidth; and, if the determination result is yes, discarding or suspending the read/write request.
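The per-request admission check above can be sketched in a few lines. This is an illustration only: a real device meters bandwidth over time windows rather than keeping a simple running sum, and the function name and return shape are assumptions.

```python
def handle_request(used_bw, request_bw, allocated_bw, policy="suspend"):
    """Decide one read/write request against the connection's allocated
    bandwidth. Returns (decision, new_used_bw)."""
    if used_bw + request_bw > allocated_bw:
        # Allocated bandwidth reached: discard or suspend, per policy.
        return policy, used_bw
    return "admit", used_bw + request_bw
```

A request that fits the remaining allocation is admitted and counted; one that would overshoot is suspended (or discarded) without consuming bandwidth.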
  • When the flow control policy includes priorities of traffic types, processing the traffic of each connection includes: when the bandwidth required by a high-priority traffic type increases, allocating the bandwidth of lower-priority traffic types to the high-priority traffic type; and randomly discarding the read/write requests of the lower-priority traffic type on the connections carrying that traffic type.
  • According to an embodiment, a flow control device for Fibre Channel over Ethernet includes: a judging module, configured to determine whether the traffic of an ingress port of the magnetic array device exceeds a predetermined threshold; and a sending module, configured to send, when the determination result is yes, a configuration message to the source node device connected to the ingress port, where the configuration message instructs the source node device to adjust the data transmission rate of its application.
  • According to an embodiment, a flow control system for Fibre Channel over Ethernet includes a source node device and a magnetic array device, and further includes: a magnetic array control module, located in the magnetic array device, configured to monitor the traffic of the ingress port of the magnetic array device and to send a configuration message to the source node control module if the traffic exceeds a predetermined threshold; and a source node control module, located in the FCoE driver protocol layer of the source node device, configured to receive the configuration message and, according to it, adjust the rate at which applications in the source node device send data to the magnetic array device.
  • FIG. 1 is a schematic diagram of a magnetic array networking structure according to the related art
  • FIG. 2 is a schematic diagram of a flow bandwidth fluctuation of a magnetic array networking according to the related art
  • FIG. 3 is a schematic diagram of congestion diffusion caused by a PFC according to the related art.
  • FIG. 4 is a schematic diagram of network congestion in a SAN network according to the related art
  • FIG. 5 is a schematic flowchart of a flow control method of an Ethernet Fibre Channel according to an embodiment of the present invention
  • FIG. 6 is a schematic structural diagram of a flow control apparatus for an Ethernet Fibre Channel according to an embodiment of the present invention
  • FIG. 7 is a schematic structural diagram of a flow control system for an Ethernet Fibre Channel according to an embodiment of the present invention
  • FIG. 8 is a schematic structural diagram of an FCoE frame structure according to a preferred embodiment of the present invention
  • FIG. 10 is a system structural diagram of a flow control system according to a preferred embodiment of the present invention
  • FIG. 11 is a flow chart showing a flow control method of a magnetic array device according to a preferred embodiment of the present invention
  • FIG. 12 is a flow chart showing IOPS control in a flow control method for a magnetic array device according to a preferred embodiment of the present invention
  • FIG. 14 is a schematic diagram of the processing flow of a magnetic array device receiving a host I/O request according to a preferred embodiment of the present invention
  • FIG. 15a is a schematic diagram of a magnetic array port receiving host request according to a preferred embodiment of the present invention
  • Figure 15b is a schematic diagram of a magnetic array device sorting all traffic types according to a priority growth direction according to a preferred embodiment of the present invention
  • Figure 15c is a magnetic array device according to a preferred embodiment of the present invention, sorting according to the growth direction of the magnetic array scheduling priority
  • FIG. 16 is a flowchart of the traffic monitoring process when the magnetic array device receives a host I/O request according to a preferred embodiment of the present invention
  • FIG. 17 is a schematic flowchart of adjusting the current weight and the default weight of the magnetic array device according to a preferred embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a flow control method for Fibre Channel over Ethernet according to an embodiment of the present invention. As shown in FIG. 5, the process includes the following steps:
  • Step S502: determining whether the traffic of the ingress port of the magnetic array device exceeds a predetermined threshold;
  • Step S504: if the determination result is yes, sending a configuration message to the source node device connected to the ingress port, where the configuration message instructs the source node device to adjust the data transmission rate of its application.
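Steps S502/S504 amount to a threshold check followed by a control message. A minimal sketch, assuming illustrative message fields (the patent does not specify the message format):

```python
def flow_control_step(ingress_gbps, threshold_gbps, send_config):
    """S502: check the ingress-port traffic against the predetermined
    threshold; S504: if exceeded, send a configuration message telling
    the source node device to adjust its application's send rate.
    Returns True when a message was sent."""
    if ingress_gbps > threshold_gbps:        # S502
        send_config({                        # S504 (fields illustrative)
            "action": "reduce_rate",
            "observed_gbps": ingress_gbps,
            "threshold_gbps": threshold_gbps,
        })
        return True
    return False
```

The `send_config` callback stands in for delivery over the FCoE driver protocol layer to the source node control module.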
  • In a preferred implementation, when multiple source node devices are connected to the ingress port, the process may further include, before step S502: Step S501, determining a flow control policy for each connection between the ingress node and the multiple source node devices; and Step S500, processing the traffic of each connection according to the flow control policy.
  • In this way, a flow control policy is set for each connection, so that the magnetic array device can apply flow control to each connection's traffic according to the set policy, further helping to avoid traffic congestion.
  • the flow control policy for each connection is obtained in various manners.
  • In an embodiment, determining the flow control policy for each connection between the ingress node and the multiple source node devices in step S501 may include: obtaining the flow control policy from a preset configuration; and/or adjusting the flow control policy according to the priority of each connection and/or the load of the traffic on each connection.
  • Adjusting the flow control policy according to the priority of each connection and/or the traffic on each connection includes: when the traffic on a high-priority connection increases, providing the bandwidth allocated to lower-priority connections to the high-priority connection so as to meet its bandwidth requirement; and/or detecting the traffic of each connection within a predetermined time, and adding the bandwidth of connections with no detected traffic within that time to a common resource pool for use by other connections.
  • In this way, a priority-based bandwidth allocation mechanism is provided: the bandwidth of low-priority connections is allocated to high-priority connections, guaranteeing the transmission of services on the high-priority connections, and the bandwidth of idle connections is shared with the other working connections, improving the utilization efficiency of bandwidth resources.
  • Detecting the traffic of each connection within a predetermined time may proceed as follows: save, for each connection, the receive time of the source node device's latest read/write request at the ingress node; check the receive time of each connection at intervals of the predetermined time; and when the interval between the receive time and the current time is greater than or equal to the predetermined time, determine that no traffic was detected on the corresponding connection within the predetermined time.
  • In this way, only the receive time of the latest read/write request is recorded, and the connections are traversed at a fixed interval to check those times. This avoids the extra system resources the magnetic array device would consume by continuously monitoring the traffic of every connection, reducing the device's overhead.
  • When the flow control policy includes bandwidth weights, step S500 may include: determining the allocated bandwidth of each connection from its bandwidth weight, and processing the traffic of each connection within its allocated bandwidth.
  • Processing the traffic of each connection within its allocated bandwidth includes: receiving a read/write request from the source node device; determining whether the traffic of the connection between the source node device and the ingress node has reached the allocated bandwidth; and, if the result is yes, discarding or suspending the read/write request.
  • When the flow control policy includes priorities of traffic types, step S500 may include: when the bandwidth required by a high-priority traffic type increases, allocating the bandwidth of lower-priority traffic types to the high-priority traffic type, and randomly discarding the read/write requests of the lower-priority traffic type on the connections carrying that traffic type. In this way, a priority is set for each traffic type and traffic types of different priorities are processed separately, guaranteeing the transmission of high-priority traffic.
  • This embodiment further provides a flow control device for Fibre Channel over Ethernet, configured to implement the flow control method described above.
  • FIG. 6 is a schematic structural diagram of a flow control apparatus for Fibre Channel over Ethernet according to an embodiment of the present invention. As shown in FIG. 6, the apparatus includes a judging module 62 and a sending module 64, where the judging module 62 is configured to determine whether the traffic of the ingress port of the magnetic array device exceeds a predetermined threshold.
  • The sending module 64 is coupled to the judging module 62 and configured to send, when the determination result is yes, a configuration message to the source node device connected to the ingress port, where the configuration message instructs the source node device to adjust the data sending rate of its application.
  • In an embodiment, the apparatus further comprises: a determining module 60, configured to determine a flow control policy for each connection between the ingress port and the multiple source node devices when the number of source node devices connected to the ingress port is more than one; and a processing module 61, coupled to the determining module 60 and the judging module 62, configured to process the traffic of each connection according to the flow control policy.
  • the determining module 60 is further configured to obtain a flow control policy according to a preset configuration; and/or adjust the flow control policy according to a priority of each connection and/or a load condition of the service on each connection.
  • The determining module 60 is further configured to provide, when the traffic on a high-priority connection increases, the bandwidth allocated to lower-priority connections to the high-priority connection, so as to meet the bandwidth requirement of the traffic on the high-priority connection.
  • The determining module 60 is further configured to save, for each connection, the receive time of the source node device's latest read/write request at the ingress port; to check the receive time of each connection at intervals of the predetermined time; and, when the interval between the receive time and the current time is greater than or equal to the predetermined time, to determine that no traffic was detected on the corresponding connection within the predetermined time.
  • When the flow control policy includes bandwidth weights, the processing module 61 may be configured to determine the allocated bandwidth of each connection from its bandwidth weight and to process the traffic of each connection within its allocated bandwidth.
  • The processing module 61 is further configured to receive a read/write request from the source node device; to determine whether the traffic of the connection between the source node device and the ingress port has reached the allocated bandwidth; and, if the determination result is yes, to discard or suspend the read/write request.
  • The processing module 61 is further configured to allocate, when the flow control policy includes priorities of traffic types and the bandwidth required by the high-priority traffic type increases, the allocated bandwidth of the low-priority traffic type to the high-priority traffic type.
  • FIG. 7 is a schematic structural diagram of a flow control system for Fibre Channel over Ethernet according to an embodiment of the present invention. As shown in FIG. 7, the system includes a source node device 72 and a magnetic array device 74, and further includes a magnetic array control module 742 and a source node control module 722. The magnetic array control module 742 is located in the magnetic array device 74 and is configured to monitor the traffic of the ingress port of the magnetic array device 74 and to send a configuration message to the source node control module 722 if the traffic exceeds a predetermined threshold. The source node control module 722 is coupled to the magnetic array control module 742, is located in the FCoE driver protocol layer of the source node device 72, and is configured to receive the configuration message and to adjust, according to it, the rate at which applications in the source node device send data to the magnetic array device. The invention is described below in conjunction with the preferred embodiments.
  • In a preferred embodiment, members are added to the reserved field of the FCoE PDU format, where: the weight indicates the proportion of bandwidth the destination node allocates to the source node; the priority is adjusted for applications with stricter latency requirements, to guarantee execution latency; and the queue depth is the depth of the queue currently being processed by the destination node's FCoE protocol stack, according to which the source node service adjusts its application's sending rhythm and flow control based on the destination node's actual load.
  • The key FCoE PDU fields and their meanings are:
  • Encapsulated FC Frame: the encapsulated FC frame;
  • The members added to the reserved field include: Default weight, the weight the user configures for a connection on the ingress port; Current weight, the weight of the connection currently in effect; Priority, the priority the user configures for a connection on the ingress port; and Queue depth, the number of requests the ingress port is currently processing. In the preferred embodiment, to ensure smooth processing of traffic, handling is determined according to priority.
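The per-connection data object just listed can be sketched as a small record. The field names follow the description; the Python shape itself is, of course, an illustration and not the on-wire FCoE PDU layout.

```python
from dataclasses import dataclass

@dataclass
class ConnectionPolicy:
    """Per-connection data object carried in the reserved FCoE PDU field."""
    default_weight: int   # weight the user configures on the ingress port
    current_weight: int   # weight currently in effect for the connection
    priority: int         # user-configured priority for the connection
    queue_depth: int = 0  # requests the ingress port is processing

def initial_policy(weight, priority):
    # In the initial case the current weight equals the default weight
    # and the queue depth is zero.
    return ConnectionPolicy(weight, weight, priority)
```

The later description (default weight 50, priority 0) maps directly onto `initial_policy(50, 0)`.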
  • The processing sequence inside the magnetic array first satisfies high-priority resources, including central processing unit (CPU) scheduling order, input/output (I/O) processing priority, and cache (CACHE) resource adjustment.
  • CPU central processing unit
  • I/O input/output
  • CACHE cache
  • Low-priority requests are then processed; when a connection's bandwidth exceeds its pre-allocated amount, requests are discarded according to the corresponding policy, and the discard weight is calculated.
  • The weight is generally scaled up by a factor. If the factor is 1.2, the magnetic array configured as in FIG. 1 provides 6G of bandwidth to each of the hosts H1 and H2.
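The 6G figure follows from spreading an oversubscribed port across the connected hosts. A small sketch of that arithmetic, assuming a 10G array port shared equally by two hosts as in FIG. 1:

```python
def allocated_bandwidth_gbps(link_gbps, factor, n_hosts):
    """Bandwidth granted to each host when the port capacity is scaled
    by an oversubscription factor and split equally among the hosts:
    with a 10G port, factor 1.2 and two hosts, each host gets
    10 * 1.2 / 2 = 6G, matching the example above."""
    return link_gbps * factor / n_hosts
```

Oversubscription is acceptable here because the two hosts' bursts rarely peak at the same instant, which is the premise of weight-based sharing.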
  • FIG. 9 is a schematic diagram of the traffic bandwidth fluctuation of the magnetic array network according to an embodiment of the present invention; the traffic bandwidth of the applications of hosts H1 and H2 fluctuates as shown in FIG. 9.
  • the bandwidth and traffic fluctuations are balanced, and the flow control balance can be achieved among applications, switches, and magnetic arrays.
  • In this way, the user traffic jitter caused by L3 (network-layer) FCoE lacking effective flow control can be solved, avoiding the situation in which applications that cannot accept such overload, jitter, and timeouts are unable to use the magnetic array device.
  • Adjustment of the current weight and the default weight: since the resources of the magnetic array are valuable, many hosts are connected to one magnetic array at the same time. When the user has configured a weight for a host but that host delivers no service for a certain period, the host's current weight should be adjusted to free resources for the running services of the magnetic array.
  • For example, hosts H1 and H2 are each configured with a default weight of 50, a current weight of x, a priority of 0, and a queue depth of x. If host H1 delivers no service for a certain period, the current weight of H1 must be dynamically adjusted to free bandwidth for host H2. A current-weight calculation device is added at magnetic array port T1: a flag is set for each connection of the port, and when a connection carries service the timestamp of the request is written to its flag; a timer is set for the port and all connection timestamps are traversed every 30 seconds; when a timestamp is found to differ from the current time by 30 seconds or more, the connection is marked as carrying no service for the first time, and so on.
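One possible weight adjustment following the timestamp scan is sketched below. The patent only says the idle host's current weight is adjusted down to free bandwidth; redistributing the freed weight equally among the active hosts, and the function name, are assumptions of this sketch.

```python
def adjust_current_weights(current, idle_hosts):
    """Zero the current weight of hosts flagged idle by the 30-second
    timestamp scan and hand the freed weight to the remaining hosts,
    keeping the port's total weight constant (illustrative policy)."""
    freed = sum(current[h] for h in idle_hosts)
    for h in idle_hosts:
        current[h] = 0
    active = [h for h in current if h not in idle_hosts]
    if active:
        share = freed // len(active)  # integer split of the freed weight
        for h in active:
            current[h] += share
    return current
```

With the H1/H2 example above (both at default weight 50), marking H1 idle hands its weight to H2.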
  • FIG. 10 is a system configuration diagram of a flow control system according to a preferred embodiment of the present invention. As shown in FIG. 10, the system is formed by a host 92 (corresponding to the source node device 72 described above) and a storage device 94 (corresponding to the magnetic array device 74 described above).
  • The method of the preferred embodiment is performed in the host by a functional module added at the FCoE driver protocol layer; adding this module enables the host 92 to perform the method. According to the networking and bandwidth requirements, the user can execute the flow control method of the preferred embodiment through the host control module 922, either manually or according to a preset policy.
  • The method of the preferred embodiment is performed in the magnetic array device by a magnetic array control module 942 (corresponding to the magnetic array control module 742 described above), which is an improvement of the original flow control module on the magnetic array device side.
  • FIG. 11 is a schematic flowchart of a flow control method of a magnetic array device according to a preferred embodiment of the present invention. As shown in FIG. 11, the process includes the following steps:
  • Step S1001 The magnetic array control module is initialized;
  • Step S1002: the magnetic array acquires a preset data object and obtains the flow control policy of that data object, where the policy may be configured by the user on the magnetic array side, or the magnetic array's default configuration may be used, before the host accesses the storage device;
  • Step S1003: if the user has not preconfigured a flow control policy data object, the default data object, initially configured by the magnetic array, is used, in which all connections on a port of the magnetic array have equal weights and equal initial priorities;
  • Step S1004: the data object contains the default weight of a connection, that is, the weight the user configures for the connection on the ingress port; the current weight of the connection, which in the initial case equals the default weight; the priority, configured by the user on the ingress port for the connection; and the queue depth, the number of requests being processed by the ingress port, which is zero initially.
  • If an application requires the magnetic array to provide a given number of read/write I/O operations per second (IOPS), different priorities can be set, and each application can have a different priority. For high-priority applications, access can be prioritized on the backup side of the storage array. The IOPS monitored for a connection is the product of its current weight and the maximum IOPS of the link speed.
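The monitored IOPS ceiling is a single product. A one-line sketch; treating the current weight as a percentage of the link's maximum IOPS is an assumption of this illustration:

```python
def iops_upper_limit(current_weight_pct, link_max_iops):
    """Monitored IOPS ceiling for one connection: the product of its
    current weight (taken here as a percentage) and the maximum IOPS
    of the link speed."""
    return current_weight_pct / 100 * link_max_iops
```

For instance, a connection with current weight 50 on a link capable of 200,000 IOPS is monitored against a ceiling of 100,000 IOPS.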
  • Step S1101: the magnetic array receives a host request;
  • Step S1102: the magnetic array acquires the preset data object;
  • Step S1103: the magnetic array determines whether the IOPS has reached its upper limit.
  • FIG. 13 is a schematic structural diagram of a magnetic array device according to a preferred embodiment of the present invention. As shown in FIG. 13, the magnetic array device may include: a policy configuration module 1201, a service control module 1202, and a policy analysis module 1203.
  • The policy configuration module 1201 is configured to extract the corresponding data object from a preset data object or a default data object, including: the default weight of a connection, that is, the weight the user configures for the connection on the ingress node; the current weight of the connection, equal to the default weight in the initial case; the priority, configured by the user on the ingress node for the connection; and the queue depth, the number of requests the ingress port is processing, zero in the initial case. The service control module 1202 is configured to perform quality-of-service control on the connections.
  • the policy analysis module 1203 is configured to analyze and manage the policies on each connection.
  • the magnetic array device further includes a policy monitoring module 1204 configured to monitor bandwidth and IOPS.
  • The monitoring of bandwidth and IOPS need not be real-time or synchronous: when the magnetic array detects a change in a data object, it does not immediately change the weight or notify the host and the magnetic array's internal processing system, so as to avoid increasing the burden on the magnetic array device; periodic monitoring can be used instead.
  • the policy adjustment module 1205 may be configured to adjust the weight according to the policy.
  • the policy sending module 1206 is configured to send the current flow control policy to the host.
  • By providing the policy configuration module 1201, the service control module 1202, and the other modules above, the magnetic array device of the preferred embodiment can control and dynamically adjust the bandwidth and IOPS of a connection. This solves the user-traffic jitter that arises, for lack of effective flow control at the L3 (network) FCoE layer, when multiple hosts are connected to one magnetic array or to one port of a magnetic array; it avoids the situation where some applications cannot tolerate such overload, jitter, and timeouts and therefore cannot use the magnetic array device; it saves the system resources of the magnetic array system, improves the quality of service of the magnetic array system, and meets user needs more accurately.
  • FIG. 14 is a schematic diagram of the flow with which a magnetic array device processes a host I/O request according to a preferred embodiment of the present invention. As shown in FIG. 14, the flow includes the following steps.
  • Step S1301: the magnetic array receives a host request. Step S1302: the policy analysis module 1203 obtains the relevant configuration. Steps S1303-S1304: it is determined whether the flow control upper limit has been reached; if so, the request enters the service control module 1202; if not, the flow proceeds to the next step. Steps S1305-S1306: it is determined whether the request has high priority; if so, it enters the service control module 1202 and is processed as high priority; if not, it is put into the normal queue with the other non-priority requests for fair processing. Step S1307: after the threshold and priority judgments end, the flow enters the policy adjustment module 1205 and the policy monitoring module.
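The threshold and priority judgments of FIG. 14 (steps S1303-S1306) can be sketched as a dispatcher; the queue names and the request layout are assumptions:

```python
from collections import deque

def dispatch(request: dict, flow_at_limit: bool,
             high_priority_q: deque, normal_q: deque) -> str:
    """Sketch of the FIG. 14 judgments (S1303-S1306): requests over the flow
    control limit go to service control; high-priority requests get their own
    queue; the rest are queued for fair processing. Names are assumptions."""
    if flow_at_limit:                     # S1303/S1304: limit reached
        return "service_control"
    if request["high_priority"]:          # S1305: high-priority handling
        high_priority_q.append(request)
        return "priority_queue"
    normal_q.append(request)              # S1306: fair processing
    return "normal_queue"
```
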
  • Priority identifies the precedence with which a request is transmitted. It can be divided into two categories: the priority carried by the request and the magnetic array scheduling priority.
  • The priority carried by the request mainly refers to the 802.1p priority, which is handled at the L2 layer.
  • the preferred embodiment provides a method of processing a magnetic array scheduling priority.
  • The magnetic array scheduling priority refers to the priority used when a request contends for CPU, cache (CACHE) resources, and queue depth inside the magnetic array, and it is valid only for the current magnetic array device itself.
  • The main meaning of the magnetic array scheduling priority is the following.
  • It is a local priority that the magnetic array assigns to the request. Each local priority corresponds to a queue, and requests with a larger local priority value are processed first.
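A minimal sketch of such local-priority scheduling, where a larger local priority value is served first; the class name and the FIFO tie-break within a priority are assumptions, not the patent's implementation:

```python
import heapq

class LocalPriorityScheduler:
    """Each local priority corresponds to a queue and larger values are served
    first; here one heap stands in for the per-priority queues (a sketch)."""
    def __init__(self):
        self._heap = []
        self._seq = 0                 # preserves FIFO order within a priority
    def submit(self, local_priority: int, request) -> None:
        # negate the priority so the min-heap pops the largest value first
        heapq.heappush(self._heap, (-local_priority, self._seq, request))
        self._seq += 1
    def next_request(self):
        return heapq.heappop(self._heap)[2]
```
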
  • FIGS. 15a to 15c are schematic diagrams showing the priority processing flow with which a magnetic array device handles host I/O requests according to a preferred embodiment of the present invention. The flow comprises the following steps. Step S1401: as shown in FIG. 15a, the magnetic array port receives the host requests. As shown in FIG. 15b, all traffic types are organized in the direction of increasing priority. Step S1404: as shown in FIG. 15c, all traffic types are organized in the direction of increasing magnetic array scheduling priority. Step S1405: if the user has specified a magnetic array scheduling priority, the priority is further determined within a given traffic type; when the magnetic array executes a request, its resources are allocated according to this scheduling priority, and requests are reordered by scheduling priority within the type.
  • Traffic monitoring monitors the specification of traffic entering the network, limits it to a reasonable range, and "penalizes" excess traffic by processing requests according to the preset policy. If the traffic on a connection is found to exceed its limit, traffic monitoring can choose to discard the request or reset the request's priority.
  • An efficient combination of a packet-loss policy and a source-side flow control mechanism maximizes network throughput and utilization while minimizing request drops and latency. Traffic monitoring should avoid the following situation: if a queue discards requests from multiple connections at the same time, those connections all enter the congestion-avoidance state at the same time, their hosts all lower their transmission rates to adjust the traffic, and a traffic peak then reappears at the same later moment. Repeated over and over, network traffic swings between large and small and the network keeps oscillating. The detection method used is therefore mainly the following: requests are dropped at random, so that when one connection's request is discarded and that connection starts to decelerate, the other connections still transmit at a higher speed. In this way there is always some connection transmitting quickly, which improves the utilization of the line bandwidth.
  • According to the preset policy, an upper limit and a lower limit are set for each queue, and the requests in the queue are processed as follows: when the length of the queue is below the lower limit, no request is discarded; when the length of the queue exceeds the upper limit, requests begin to be discarded at random.
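The queue thresholds above can be sketched as a drop decision; the linear probability ramp beyond the upper limit is an assumption, since the text only says that random discarding begins there:

```python
import random

def should_drop(queue_len: int, lower: int, upper: int, rng=random.random) -> bool:
    """Random early discard per the preset policy: no drops below the lower
    limit (and, in this sketch, none between the limits either); random drops
    once the queue exceeds the upper limit, with a probability that is assumed
    to grow linearly as the queue gets longer."""
    if queue_len <= upper:                 # below or between the limits: keep
        return False
    p = min(1.0, (queue_len - upper) / max(1, upper))
    return rng() < p
```
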
  • FIG. 16 is a schematic diagram of a traffic monitoring process requested by a magnetic array device receiving host 10 according to a preferred embodiment of the present invention. As shown in FIG.
  • Step S1501 a magnetic array port receives a host request
  • Step S1502 according to traffic The type is sorted and monitored according to the time point
  • step S1504 the traffic 1 has the lowest priority.
  • Traffic 3 therefore preempts the bandwidth of traffic 1: the bandwidth of traffic 3 is increased to 5 Gb and the bandwidth of traffic 1 is reduced to 2 Gb. That is, the bandwidth of traffic 1 is reduced, but the host of traffic 1 is not aware of it.
  • Step S1505: each traffic type corresponds to two connections on the magnetic array side, shown in FIG. 16 as the queues of connection 1 and connection 2, and the flow control policy is preset; for example, if traffic 3 needs to occupy 5 Gb of bandwidth at time T3, the policy monitoring module adjusts the bandwidths of traffic 1 and traffic 3 at time T3. Step S1506: after the flow control policy is set, when a traffic type exceeds its limit, requests on multiple connections must not be discarded at the same time, to keep the network from oscillating. Requests are instead discarded at random on connection 1 and connection 2, according to the queue lengths on connection 1 and connection 2 and the restrictions of the preset discard policy, and the result is fed back to the host side; the host side actively reduces its transmission rate, and the amount and probability of discarding are gradually reduced so that the bandwidth curve becomes smooth.
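The T3 bandwidth adjustment above, with traffic 3 preempting the lowest-priority traffic 1, can be sketched as follows; the function and key names are assumptions:

```python
def preempt(bandwidth_gb: dict, priority: dict, taker: str, needed_gb: float) -> dict:
    """Sketch of the T3 adjustment: the higher-priority traffic type takes
    bandwidth from the lowest-priority type, whose host is not notified."""
    donor = min(bandwidth_gb, key=lambda t: priority[t])       # lowest priority
    grant = min(needed_gb - bandwidth_gb[taker], bandwidth_gb[donor])
    bandwidth_gb[donor] -= grant
    bandwidth_gb[taker] += grant
    return bandwidth_gb
```
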
  • A policy monitoring module is added to the magnetic array device, and a flag is set for each connection of a port. Whenever the connection carries traffic, the timestamp of the request is written to the flag. A timer is set for the port, and every 30 seconds it traverses the timestamps of all connections. When a timestamp is found to differ from the current time by 30 seconds or more, the connection is marked as idle for the first time, and so on for subsequent scans. When the count reaches the third time, the current weight of that connection is lowered and the current weights of the other connections are raised. Likewise, when a connection whose current weight has been lowered receives a request again, its current weight is restored to the default weight.
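The 30-second timestamp scan and the three-strike weight reduction can be sketched as below; the halving of the weight and the field names are assumptions (the text only says the weight is lowered):

```python
def scan_idle(connections: list, now: float, idle_window: float = 30.0) -> list:
    """Every-30-seconds scan: a connection whose last request timestamp lags
    the current time by 30 s or more is marked idle; on the third consecutive
    mark its current weight is lowered (halving is an assumed reduction), and
    the weight is restored to the default as soon as traffic reappears."""
    for c in connections:
        if now - c["last_ts"] >= idle_window:
            c["idle_count"] += 1
            if c["idle_count"] == 3:
                c["current_weight"] //= 2
        else:
            c["idle_count"] = 0
            c["current_weight"] = c["default_weight"]
    return connections
```
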
  • FIG. 17 is a schematic diagram of the flow with which a magnetic array device adjusts the current weight and the default weight according to a preferred embodiment of the present invention. As shown in FIG. 17, the weight value of each connection is counted as credits; for example, a connection with a weight of 30 holds 30 credits. Each time a host request is received, the credit value is decremented, and the credit is restored when the processing of the request completes. The traffic monitoring module checks every 30 seconds: if the credit on a connection equals the default weight and has not changed, the connection is considered to have no host traffic, and its credits can be contributed to a common resource area. The other connections then use the public credits according to their load; as long as there is enough credit on a connection, it can be used to process host requests.
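The credit scheme of FIG. 17 can be sketched as a small class; all names and the details of the borrowing rule are assumptions:

```python
class CreditPool:
    """Sketch of the FIG. 17 credit scheme: a connection's weight is its credit
    balance; idle connections contribute their credits to a common resource
    area that busy connections may borrow from. All names are assumptions."""
    def __init__(self, weights: dict):
        self.credits = dict(weights)
        self.public = 0                   # the common resource area
    def contribute_idle(self, conn: str) -> None:
        """Called when a 30 s check finds the credit untouched (no host traffic)."""
        self.public += self.credits[conn]
        self.credits[conn] = 0
    def acquire(self, conn: str) -> bool:
        """Consume one credit to process a host request, borrowing if needed."""
        if self.credits[conn] > 0:
            self.credits[conn] -= 1
            return True
        if self.public > 0:
            self.public -= 1
            return True
        return False
```
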
  • The embodiments of the present invention and the preferred embodiments effectively solve the problem that upper-layer services in an FCoE network cannot perceive link congestion. The existing approach is for maintenance personnel to manually configure the switch bandwidth so that service jitter stays small; however, static switch configuration wastes bandwidth, and when the networking is complicated, manual configuration is inefficient and adjusting the load among all the nodes is complex.
  • The above modules or steps of the present invention can be implemented by general-purpose computing devices, and they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; or they may be separately fabricated into individual integrated circuit modules; or multiple modules or steps among them may be fabricated into a single integrated circuit module. As such, the present invention is not limited to any specific combination of hardware and software.
  • The above are only preferred embodiments of the present invention and are not intended to limit the present invention; various modifications and changes can be made to the present invention. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and scope of the present invention are intended to be included within the protection scope of the present invention.
  • Industrial Applicability: As described above, the fiber channel over Ethernet flow control method, device, and system provided by the embodiments of the present invention have the following beneficial effect: the quality of service of the magnetic array system is improved.

Abstract

Disclosed are a fiber channel over Ethernet (FCoE) flow control method, device and system, the method comprising: determining whether the flow at an ingress node port of a magnetic array device exceeds a predetermined threshold; if yes, then transmitting a configuration message to a source node device connected to the ingress node port, the configuration message being used for instructing the source node device to adjust the data transmission rate of the applications thereof. The present invention solves the problem in the prior art that the upper layer service in the FCoE network cannot sense link congestion, and improves quality of service of the magnetic array system.

Description

TECHNICAL FIELD The present invention relates to the field of communications, and in particular to a flow control method, device, and system for Fibre Channel over Ethernet. BACKGROUND Fibre Channel (FC) is the most widely used protocol in storage area networks (SANs); that is, data in a SAN is transmitted over an FC network. However, implementing an FC network requires a large number of switches, network interface cards, and cables, and the cost of these devices is high, which makes an FC network costly to equip, difficult to maintain, and poorly scalable. To solve the above problem, the prior art uses the Fibre Channel over Ethernet (FCoE) protocol to carry the FC protocol on top of Ethernet, integrating the SAN and the local area network (LAN). When most storage arrays in a data center are FC targets, an FCoE Initiator connects to lossless Ethernet and transfers FC data through an FCoE switch to the FC SAN, finally accessing the FC target. However, with the development of the FCoE standard, FCoE Targets have begun to appear, and an FCoE Initiator can access an FCoE Target directly over lossless Ethernet. Since a magnetic array device is a shared resource, it has far fewer front-end ports than there are hosts; to allow more hosts to access it, a typical SAN necessarily includes a SAN switch, so the typical networking of a magnetic array is that multiple hosts connect to one magnetic array or to one port of a magnetic array. With the rapid development of network technology, network services place ever higher requirements on the transmission quality of data flows; network voice services and network video services, for example, both have high requirements on indicators such as burst and jitter of the data flow. The SAN service data flow needs to be stored and forwarded through nodes at all levels of the network.
During the forwarding process, the traffic model of the network service data flow changes, and indicators such as burst and jitter degrade. For example, in a video service based on an IP network, the data stream sent from the video source is stored and forwarded by the nodes along the path; its burst and jitter indicators degrade beyond the buffering capability of the receiving end, resulting in quality problems such as packet loss and mosaic artifacts. FIG. 1 is a schematic diagram of a magnetic array networking structure according to the related art, and FIG. 2 is a schematic diagram of the traffic bandwidth fluctuation of a magnetic array network according to the related art. As shown in FIG. 1, when two hosts H1 and H2 access one port T1 of the array through the same switch port P3, the rate from host H1 to switch port P1 is 10 gigabits per second (Gbps, abbreviated G) and the rate from host H2 to switch port P2 is 10G, while the rate from switch port P3 to the magnetic array port T1 is only 10G. This networking therefore inevitably suffers heavy congestion between the switch and the magnetic array, causing performance jitter in the applications of hosts H1 and H2; the traffic bandwidth of those applications fluctuates as shown in FIG. 2. The related art solution is lossless Ethernet flow control: based on 802.1Qbb and 802.3bd, it implements priority-based flow control on full-duplex point-to-point links. Current Ethernet can also achieve zero packet loss by using the PAUSE frame, but a PAUSE blocks all traffic on a link, essentially suspending the entire link. Priority-based Flow Control (PFC) allows eight virtual channels to be created on an Ethernet link, with an IEEE 802.1P priority level assigned to each virtual channel, allowing separate pauses and restarts
of any one of the virtual channels while the traffic of the other virtual channels passes without interruption. This approach enables the network to create a no-packet-loss class of service for a single virtual link, allowing it to coexist with other traffic types on the same interface. In the above related-art solution, buffer space is allocated among the eight queues of a switch port, forming eight virtualized channels in the network; each data stream carries its own channel label (identified by its 802.1P value), and the buffer sizes give the queues different data-caching capabilities. In the event of transient congestion (that is, a device's queue buffer is consumed quickly and exceeds a certain threshold), the device sends back-pressure information in the direction the data came from. An upstream device that receives the back-pressure information stops or delays sending data as instructed and stores the data in its local port buffer; if the local port's buffer consumption exceeds the threshold, back pressure continues upstream, propagating level by level until it reaches the network terminal equipment, thereby eliminating packet loss at network nodes due to congestion. It can be seen that, in the above related-art solution, when the local end detects congestion, it notifies the opposite end to temporarily stop sending requests by sending a PAUSE frame; the PAUSE frame carries the length of time for which sending should stop and an IEEE 802.1P priority vector, notifying the peer end to temporarily stop sending requests that carry those priorities within that time period.
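The per-priority PAUSE decision that PFC adds over plain Ethernet PAUSE can be sketched in a few lines; the single shared buffer threshold is an assumption:

```python
def pfc_pause_vector(buffer_used: dict, threshold: int) -> dict:
    """Per-priority PAUSE decision: each 802.1p priority whose queue buffer use
    crosses the threshold is paused on its own, while the other virtual
    channels keep flowing. The single shared threshold is an assumption."""
    return {prio: used > threshold for prio, used in buffer_used.items()}
```
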
PFC的优点是对拥塞的反应比较快, 能够及时流控; 缺点是容易导致拥塞扩散。 另外, PFC 的 PAUSE帧如果比较频繁地向上游发送, 将对链路的带宽利用产生负面 影响。 图 3是根据相关技术的 PFC导致拥塞扩散的示意图, 如图 3所示, 当交换机 S3 的 P7端口出现拥塞, 则会沿图中的虚线发送 PAUSE帧给上游设备, 包括交换机 S1 和夂换机 S2, 从而可能造成 S1和 S2的上游设备拥塞。 由上述描述可见,相关技术中的流控机制的缺陷是 PAUSE帧是属于 L2层的流控, 由于在相关技术的 FCoE协议中, L2(链路 link)层只保证链路的控制, L2层出现 PAUSE 帧时无法传递给 L3 (网络 network) 层的 FCoE协议栈。 如图 1中的组网, 在交换机 P3到磁阵 Tl的 L2层出现 PAUSE时,主机 HI和主机 H2的应用程序无法感知到,还 是继续按照 10G的速率下发业务。 图 4是根据相关技术的 SAN网络中网络拥塞的示意图,对于网络的拥塞现象典型 的描述如图 4所示。 当网络负载较小时, 吞吐量基本上随着负载的增长而增长, 呈线 性关系, 响应时间增长缓慢; 当负载达到网络容量时, 吞吐量呈现出缓慢增长, 而响 应时间急剧增加; 如果负载继续增加, 路由器开始丢包; 当负载超过一定量时, 吞吐 量开始急剧下降。 相关技术中的拥塞控制机制包含拥塞避免和拥塞控制两种策略。 前者的目的是在 达到网络容量时避免拥塞的发生; 后者则是达到网络容量后的处理。 前者是一种 "预 防"措施, 维持网络的高吞吐量、 低延迟状态, 以避免进入拥塞; 后者是一种 "恢复" 措施, 使网络从拥塞中恢复过来, 以进入正常的运行状态。 拥塞增加了请求传输的延迟和抖动, 可能会引起请求重传, 从而导致更多的拥塞 产生; 同时还会使网络的有效吞吐率降低, 造成网络资源的利用率降低。 SAN的体系 结构必须确保底层网络设备被设计成合适的负载, 超载的链路可能会被一些应用程序 允许, 但是一些应用程序无法接受这种超载和超时, 所以必须要考虑 FCoE中从上至 下的流控控制。 且如果磁阵设备提供的能力波动太大, 会引起一些用户业务中断。 对 于存储业界的测试 SPC-1测试结果的关键指标是时延, 其中会统计不同负载等级落在 每个时间段的请求比例, 如果抖动过大最后的曲线会产生波动导致测试无效。 针对相关技术的 FCoE组网中上层业务无法感知链路拥塞的问题, 目前尚未提出 有效的解决方案。 发明内容 本发明实施例提供了一种以太网光纤通道的流量控制方法、 装置及系统, 以至少 解决相关技术的 FCoE组网中上层业务无法感知链路拥塞的问题。 根据本发明的一个实施例, 提供了一种以太网光纤通道的流量控制方法, 包括: 判断磁阵设备的入节点端口的流量是否超过预定阈值; 在判断结果为是的情况下, 发 送配置消息至与所述入节点端口连接的源节点设备, 其中, 所述配置消息用于指示所 述源节点设备调整其应用程序的数据发送速率。 在与所述入节点端口连接的源节点设备的数目为多个的情况下, 在判断所述流量 是否超过预定阈值之前, 所述方法还包括: 确定所述入节点端口与多个源节点设备之 间的每个连接的流量控制策略; 根据所述流量控制策略, 处理所述每个连接的流量。 确定所述入节点端口与多个源节点设备之间的每个连接的流量控制策略包括: 根 据预设配置,获取所述流量控制策略;和 /或根据所述每个连接的优先级和 /或所述每个 连接上业务的负载情况, 调整所述流量控制策略。 根据所述每个连接的优先级和 /或所述每个连接上的流量情况,调整所述流量控制 策略包括: 在所述优先级高的连接上的流量增加的情况下, 将所述优先级低的连接所 分配的带宽提供给所述优先级高的连接, 以满足所述优先级高的连接上的流量的带宽 需求; 和 /或在预定时间内检测所述每个连接的流量情况, 并将所述预定时间内未检测 到流量的连接所分配的带宽添加到公共资源区, 以供其他连接使用。 在所述预定时间内检测所述每个连接的流量情况包括: 保存所述每个连接上所述 源节点设备对所述入节点端口的最新的读写请求的接收时间; 根据所述预定时间的时 间间隔检测每个连接的所述接收时间; 在所述接收时间与当前时间的间隔大于或等于 所述预定时间的情况下, 确定所述预定时间内对应的连接上未检测到流量。 在所述流量控制策略包括带宽权重的情况下, 根据所述流量控制策略, 处理所述 每个连接的流量包括: 根据所述每个连接的带宽权重, 确定所述每个连接所分配的带 宽; 通过所述每个连接所分配的带宽, 处理所述每个连接的流量。 通过所述每个连接所分配的带宽, 处理所述每个连接的流量包括: 接收所述源节 
点设备的读写请求; 判断所述源节点设备与所述入节点端口之间的连接的流量是否达 到所分配的带宽; 在判断结果为是的情况下, 丢弃或者挂起所述读写请求。 在所述流量控制策略包括流量类型的优先级的情况下, 根据所述流量控制策略, 处理所述每个连接的流量包括: 在优先级高的流量类型所需求的带宽增加的情况下, 将优先级低的流量类型所分配的带宽分配给所述优先级高的流量类型; 随机丢弃所述 每个连接中发送所述优先级低的流量类型的连接上所述优先级低的流量类型对应的读 写请求。 根据本发明的另一个实施例, 还提供了一种以太网光纤通道的流量控制装置, 包 括: 判断模块, 设置为判断磁阵设备的入节点端口的流量是否超过预定阈值; 发送模 块, 设置为在判断结果为是的情况下, 发送配置消息至与所述入节点端口连接的源节 点设备, 其中, 所述配置消息用于指示所述源节点设备调整其应用程序的数据发送速 率。 根据本发明的另一个实施例, 还提供了一种以太网光纤通道的流量控制系统, 包 括源节点设备和磁阵设备, 还包括: 磁阵控制模块, 位于所述磁阵设备中, 设置为监 控所述磁阵设备的入节点端口的流量, 并在所述流量超过预定阈值的情况下, 发送配 置消息至源节点控制模块; 所述源节点控制模块, 位于所述源节点设备的以太网光纤 通道 FCoE驱动协议层中, 设置为接收所述配置消息, 并根据所述配置消息调整所述 源节点设备中的应用程序向所述磁阵设备发送数据的速率。 通过本发明实施例, 判断磁阵设备的入节点端口的流量是否超过预定阈值; 在判 断结果为是的情况下, 发送配置消息至与入节点端口连接的源节点设备, 其中, 配置 消息用于指示该源节点设备调整其应用程序的数据发送速率的方式, 解决了相关技术 的 FCoE组网中上层业务无法感知链路拥塞的问题, 提高了磁阵系统的服务质量。 附图说明 构成本申请的一部分的附图用来提供对本发明的进一步理解, 本发明的示意性实 施例及其说明用于解释本发明, 并不构成对本发明的不当限定。 在附图中: 图 1是根据相关技术的磁阵组网结构示意图; 图 2是根据相关技术的磁阵组网的流量带宽波动情况示意图; 图 3是根据相关技术的 PFC导致拥塞扩散的示意图; 图 4是根据相关技术的 SAN网络中网络拥塞的示意图; 图 5是根据本发明实施例的以太网光纤通道的流量控制方法的流程示意图; 图 6是根据本发明实施例的以太网光纤通道的流量控制装置的结构示意图; 图 7是根据本发明实施例的以太网光纤通道的流量控制系统的结构示意图; 图 8是根据发明优选实施例的 FCoE帧结构的结构示意图; 图 9是根据本发明实施例的磁阵组网的流量带宽波动情况示意图; 图 10是根据本发明优选实施例的流量控制系统的系统结构图; 图 11是根据本发明优选实施例的磁阵设备流量控制方法的流程示意图; 图 12 是根据本发明优选实施例的磁阵设备流量控制方法中 IOPS 控制的流程示 ; 图 13是根据本发明优选实施例的磁阵设备的结构示意图; 图 14是根据本发明优选实施例的磁阵设备接收主机 10请求的处理流程示意图; 图 15a是根据本发明优选实施例的磁阵端口接收主机请求的示意图; 图 15b是根据本发明优选实施例的磁阵设备按照优先级的增长方向整理所有流量 类型的示意图; 图 15c是根据本发明优选实施例的磁阵设备按照磁阵调度优先级的增长方向整理 所有流量类型的示意图; 图 16是根据本发明优选实施例的磁阵设备接收主机 10请求的流量监控流程示意 图; 图 17 是根据本发明优选实施例的磁阵设备当前权重与默认权重的调整流程示意 图。 具体实施方式 需要说明的是, 在不冲突的情况下, 本申请中的实施例及实施例中的特征可以相 互组合。 下面将参考附图并结合实施例来详细说明本发明。 并且, 在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系 统中执行, 虽然在流程图中示出了逻辑顺序, 但是在某些情况下, 可以以不同于此处 的顺序执行所示出或描述的步骤。 本实施例提供了一种以太网光纤通道的流量控制方法, 图 5是根据本发明实施例 的以太网光纤通道的流量控制方法的流程示意图, 如图 5所示,该流程包括如下步骤: 步骤 S502, 判断磁阵设备的入节点端口的流量是否超过预定阈值; 步骤 S504, 在判断结果为是的情况下, 发送配置消息至与入节点端口连接的源节 点设备, 其中, 配置消息用于指示源节点设备调整其应用程序的数据发送速率。 通过上述步骤, 在磁阵设备的入节点端口的流量超过预定阈值的情况下向同样处 于 L3 层的、 与入节点端口连接的源节点设备发送配置消息, 以指示源节点设备调整 其应用程序的数据发送速率, 从而解决了相关技术的 FCoE组网中上层业务无法感知 
链路拥塞的问题, 提高了磁阵系统的服务质量。 优选地, 在与入节点端口连接的源节点设备的数目为多个的情况下, 在步骤 S502 之前, 该流程还可以包括: 步骤 S501 , 确定入节点端口与多个源节点设备之间的每个连接的流量控制策略; 步骤 S500, 根据流量控制策略, 处理每个连接的流量。 通过该方式, 对于每个连接设置流量控制策略, 从而使得磁阵设备可以根据设置 的流量控制策略对每个连接的流量进行流控处理, 进一步提升避免流量拥塞的效果。 优选地,对于每个连接设置的流量控制策略的获取方式有多种,例如,在步骤 S501 中确定入节点端口与多个源节点设备之间的每个连接的流量控制策略可以采用: 根据 预设配置,获取流量控制策略;和 /或根据每个连接的优先级和 /或每个连接上业务的负 载情况, 调整流量控制策略。 优选地,根据每个连接的优先级和 /或每个连接上的流量情况调整流量控制策略包 括: 在优先级高的连接上的流量增加的情况下, 将优先级低的连接所分配的带宽提供 给优先级高的连接, 以满足优先级高的连接上的流量的带宽需求; 和 /或在预定时间内 检测每个连接的流量情况, 并将预定时间内未检测到流量的连接所分配的带宽添加到 公共资源区, 以供其他连接使用。 通过该方式, 提供了一种基于连接的优先级的带宽 分配机制, 通过将低优先级的连接的带宽分配给高优先级的连接, 从而保证了高优先 级连接上业务的传输; 通过将闲置连接的带宽资源共享给其他工作连接使用, 从而提 高了带宽资源的利用效率。 优选地, 在上述方式中, 在预定时间内检测每个连接的流量情况可以采用如下方 式: 保存每个连接上源节点设备对入节点端口的最新的读写请求的接收时间; 根据预 定时间的时间间隔检测每个连接的接收时间; 在接收时间与当前时间的间隔大于或等 于预定时间的情况下, 确定预定时间内对应的连接上未检测到流量。 通过上述方式, 采用记录最新的读写请求的接收时间, 并采用一定的间隔时间检测接收时间的方式遍 历每个连接的流量状况, 避免了磁阵设备频繁检测每条连接上的流量情况导致的额外 系统资源被消耗, 从而降低了磁阵设备的开销。 优选地, 在流量控制策略包括带宽权重的情况下, 步骤 S500可以包括: 根据每个 连接的带宽权重, 确定每个连接所分配的带宽; 通过每个连接所分配的带宽, 处理每 个连接的流量。 优选地, 在上述方式中, 通过每个连接所分配的带宽处理每个连接的流量包括: 接收源节点设备的读写请求; 判断源节点设备与入节点端口之间的连接的流量是否达 到所分配的带宽; 在判断结果为是的情况下, 丢弃或者挂起读写请求。 优选地, 在流量控制策略包括流量类型的优先级的情况下, 步骤 S500可以包括: 在优先级高的流量类型所需求的带宽增加的情况下, 将优先级低的流量类型所分配的 带宽分配给优先级高的流量类型; 随机丢弃每个连接中发送优先级低的流量类型的连 接上优先级低的流量类型对应的读写请求。 通过该方式, 对于每个流量类型设置相应 的优先级, 并对优先级不同的流量类型分别进行处理, 从而保证了高优先级的流量类 型的传输。 本实施例还提供了一种以太网光纤通道的流量控制装置, 该装置设置为实现上述 以太网光纤通道的流量控制方法。 该装置中涉及的模块和单元的功能可以结合上述文 件分块方法对应的功能实现进行结合描述和说明, 在本实施例中将不再赘述。 图 6是根据本发明实施例的以太网光纤通道的流量控制装置的结构示意图, 如图 6所示, 该装置包括: 判断模块 62和发送模块 64, 其中, 判断模块 62, 设置为判断 磁阵设备的入节点端口的流量是否超过预定阈值; 发送模块 64耦合至判断模块 62, 设置为在判断结果为是的情况下, 发送配置消息至与入节点端口连接的源节点设备, 其中, 配置消息用于指示源节点设备调整其应用程序的数据发送速率。 优选地, 该装置还包括确定模 60, 设置为在与入节点端口连接的源节点设备的数 目为多个的情况下, 确定入节点端口与多个源节点设备之间的每个连接的流量控制策 略; 处理模块 61, 耦合至确定模块 60和判断模块 62, 设置为根据流量控制策略, 处 理每个连接的流量。 优选地, 确定模块 60还可以设置为根据预设配置, 获取流量控制策略; 和 /或根 据每个连接的优先级和 /或每个连接上业务的负载情况, 调整流量控制策略。 优选地,确定模块 60还可以设置为在优先级高的连接上的流量增加的情况下,将 优先级低的连接所分配的带宽提供给优先级高的连接, 以满足优先级高的连接上的流 量的带宽需求; 和 /或在预定时间内检测每个连接的流量情况, 并将预定时间内未检测 到流量的连接所分配的带宽添加到公共资源区, 以供其他连接使用。 
优选地,确定模块 60还可以设置为保存每个连接上源节点设备对入节点端口的最 新的读写请求的接收时间; 根据预定时间的时间间隔检测每个连接的接收时间; 在接 收时间与当前时间的间隔大于或等于预定时间的情况下, 确定预定时间内对应的连接 上未检测到流量。 优选地, 在流量控制策略包括带宽权重的情况下, 根据流量控制策略, 处理模块The advantage of PFC is that it responds quickly to congestion and can flow control in time; the disadvantage is that it tends to cause congestion to spread. In addition, if the PAUSE frame of the PFC is sent to the upstream more frequently, it will have a negative impact on the bandwidth utilization of the link. 3 is a schematic diagram of congestion spreading caused by a PFC according to the related art. As shown in FIG. 3, when congestion occurs on the P7 port of the switch S3, a PAUSE frame is sent along the dotted line in the figure to the upstream device, including the switch S1 and the switcher. S2, which may cause congestion of upstream devices of S1 and S2. It can be seen from the above description that the drawback of the flow control mechanism in the related art is that the PAUSE frame is a flow control belonging to the L2 layer, because in the related art FCoE protocol, the L2 (link link) layer only guarantees the link control, the L2 layer. FCoE stack that cannot be passed to the L3 (network network) layer when a PAUSE frame occurs. As shown in Figure 1, networking, in the switch When PAUSE occurs in the L2 layer of the P3 to the magnetic array T1, the application of the host HI and the host H2 cannot be perceived, and the service is still delivered at a rate of 10G. 4 is a schematic diagram of network congestion in a SAN network according to the related art, and a typical description of congestion phenomenon of the network is shown in FIG. 4. 
When the network load is small, the throughput basically increases with the load, linearly, and the response time grows slowly; when the load reaches the network capacity, the throughput shows a slow increase, and the response time increases sharply; if the load continues Increase, the router starts to drop packets; when the load exceeds a certain amount, the throughput begins to drop sharply. The congestion control mechanism in the related art includes two strategies of congestion avoidance and congestion control. The former aims to avoid congestion when the network capacity is reached; the latter is the processing after reaching the network capacity. The former is a "preventive" measure to maintain the network's high-throughput, low-latency state to avoid congestion; the latter is a "recovery" measure that allows the network to recover from congestion to enter normal operating conditions. Congestion increases the delay and jitter of the request transmission, which may cause retransmission of the request, resulting in more congestion. At the same time, the effective throughput of the network is reduced, resulting in a decrease in the utilization of network resources. The architecture of the SAN must ensure that the underlying network devices are designed to be properly loaded. Overloaded links may be allowed by some applications, but some applications cannot accept such overloads and timeouts, so FCoE must be considered from top to bottom. Flow control. And if the capacity provided by the magnetic array device fluctuates too much, it will cause some user business interruption. The key indicator for the test SPC-1 test results in the storage industry is the delay, in which the proportion of requests for different load levels falling in each time period is counted. If the jitter is too large, the last curve will cause fluctuations and the test will be invalid. 
The upper layer services in the FCoE networking of the related technology cannot be aware of the problem of link congestion. Currently, no effective solution has been proposed. SUMMARY OF THE INVENTION The embodiments of the present invention provide a method, a device, and a system for controlling traffic of an Ethernet Fibre Channel, so as to solve at least the problem that the upper layer service in the related art FCoE networking cannot detect the link congestion. According to an embodiment of the present invention, a method for controlling a traffic of an Ethernet Fibre Channel includes: determining whether a traffic of an ingress port of a magnetic array device exceeds a predetermined threshold; and transmitting a configuration message if the determination result is yes. And the source node device connected to the ingress port, where the configuration message is used to instruct the source node device to adjust a data transmission rate of the application. In a case that the number of the source node devices connected to the ingress node is multiple, before determining whether the traffic exceeds a predetermined threshold, the method further includes: determining the ingress node and the multiple source node devices A flow control policy for each connection between the two; according to the flow control policy, the traffic of each connection is processed. Determining a flow control policy for each connection between the ingress node and the plurality of source node devices includes: obtaining the flow control policy according to a preset configuration; and/or according to the priority of each connection and/or Or adjusting the traffic control policy by the load condition of the service on each connection. 
Adjusting the flow control policy according to the priority of each connection and/or the traffic condition on each connection includes: in the case that the traffic on the connection with the high priority is increased, the priority is The allocated bandwidth of the low-level connection is provided to the high-priority connection to meet the bandwidth requirement of the traffic on the high-priority connection; and/or the traffic condition of each connection is detected within a predetermined time. And adding the bandwidth allocated by the connection in which the traffic is not detected within the predetermined time to the common resource area for use by other connections. Detecting the traffic condition of each connection in the predetermined time includes: saving a receiving time of the latest read/write request of the source node device to the ingress node on each connection; according to the predetermined time The time interval detects the receiving time of each connection; if the interval between the receiving time and the current time is greater than or equal to the predetermined time, determining that no traffic is detected on the corresponding connection within the predetermined time. In the case that the traffic control policy includes the bandwidth weight, the processing, according to the traffic control policy, the traffic of each connection includes: determining, according to the bandwidth weight of each connection, the allocated bandwidth of each connection The traffic of each connection is processed by the bandwidth allocated by each of the connections. Processing the traffic of each connection by using the bandwidth allocated by each connection includes: receiving a read/write request of the source node device; determining a traffic of the connection between the source node device and the ingress node port Whether the allocated bandwidth is reached; if the judgment result is YES, the read/write request is discarded or suspended. 
In the case that the flow control policy includes priorities of traffic types, processing the traffic of each connection according to the flow control policy includes: when the bandwidth required by a high-priority traffic type increases, allocating bandwidth assigned to a low-priority traffic type to the high-priority traffic type; and randomly discarding read/write requests of the low-priority traffic type on the connections that carry it. According to another embodiment of the present invention, a flow control device for FCoE is provided, including: a determining module, configured to determine whether the traffic of an ingress port of the magnetic array device exceeds a predetermined threshold; and a sending module, configured to send, when the determination result is yes, a configuration message to the source node device connected to the ingress port, where the configuration message is used to instruct the source node device to adjust the data transmission rate of its application. According to another embodiment of the present invention, a flow control system for FCoE is provided, including a source node device and a magnetic array device, and further including: a magnetic array control module, located in the magnetic array device and configured to monitor the traffic of the ingress port of the magnetic array device and to send a configuration message to the source node control module if the traffic exceeds a predetermined threshold; and a source node control module, located in the FCoE driver protocol layer of the source node device and configured to receive the configuration message and, according to the configuration message, adjust the rate at which an application in the source node device sends data to the magnetic array device.
According to the embodiments of the present invention, it is determined whether the traffic of the ingress port of the magnetic array device exceeds a predetermined threshold; if so, a configuration message is sent to the source node device connected to the ingress port, where the configuration message is used to instruct the source node device to adjust the data transmission rate of its application. This solves the problem that upper-layer services in related-art FCoE networking cannot detect link congestion, and improves the service quality of the magnetic array system. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings are provided for a further understanding of the present invention and constitute a part of this application. In the drawings: FIG. 1 is a schematic diagram of a magnetic array networking structure according to the related art; FIG. 2 is a schematic diagram of traffic bandwidth fluctuation of a magnetic array networking according to the related art; FIG. 3 is a schematic diagram of congestion diffusion caused by PFC according to the related art; FIG. 4 is a schematic diagram of network congestion in a SAN network according to the related art; FIG. 5 is a schematic flowchart of a flow control method for FCoE according to an embodiment of the present invention; FIG. 6 is a schematic structural diagram of a flow control device for FCoE according to an embodiment of the present invention; FIG. 7 is a schematic structural diagram of a flow control system for FCoE according to an embodiment of the present invention; FIG. 8 is a schematic structural diagram of an FCoE frame structure according to a preferred embodiment of the present invention; FIG. 9 is a schematic diagram of traffic bandwidth fluctuation of a magnetic array networking according to an embodiment of the present invention; FIG. 10 is a system structural diagram of a flow control system according to a preferred embodiment of the present invention; FIG. 11 is a flowchart of a flow control method for a magnetic array device according to a preferred embodiment of the present invention; FIG.
12 is a flowchart of IOPS control in a flow control method for a magnetic array device according to a preferred embodiment of the present invention; FIG. 13 is a schematic structural diagram of a magnetic array device according to a preferred embodiment of the present invention; FIG. 14 is a schematic diagram of the processing flow when a magnetic array device receives host I/O requests according to a preferred embodiment of the present invention; FIG. 15a is a schematic diagram of a magnetic array port receiving host requests according to a preferred embodiment of the present invention; FIG. 15b is a schematic diagram of a magnetic array device sorting all traffic types in the direction of increasing priority according to a preferred embodiment of the present invention; FIG. 15c is a schematic diagram of a magnetic array device sorting all traffic types in the direction of increasing magnetic array scheduling priority according to a preferred embodiment of the present invention; FIG. 16 is a flowchart of the traffic monitoring process when a magnetic array device receives host I/O requests according to a preferred embodiment of the present invention; and FIG. 17 is a schematic diagram of adjusting the current weight and the default weight of a magnetic array device according to a preferred embodiment of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the drawings and in conjunction with the embodiments. The steps illustrated in the flowcharts of the figures may be executed in a computer system, for example as a set of computer-executable instructions, and although a logical order is illustrated in the flowcharts, in some cases the steps may be performed in an order different from that shown or described. This embodiment provides a flow control method for Fibre Channel over Ethernet. FIG.
5 is a schematic flowchart of a flow control method for FCoE according to an embodiment of the present invention. As shown in FIG. 5, the flow includes the following steps: Step S502: determining whether the traffic of the ingress port of the magnetic array device exceeds a predetermined threshold; Step S504: if so, sending a configuration message to the source node device connected to the ingress port, where the configuration message is used to instruct the source node device to adjust the data transmission rate of its application. Through the above steps, when the traffic of the ingress port of the magnetic array device exceeds the predetermined threshold, a configuration message is sent to the source node device connected to the ingress port, likewise at the L3 layer, to instruct the source node device to adjust the data transmission rate of its application. This solves the problem that upper-layer services in related-art FCoE networking cannot sense link congestion, and improves the service quality of the magnetic array system. Preferably, in the case that multiple source node devices are connected to the ingress port, the flow may further include, before step S502: Step S501: determining a flow control policy for each connection between the ingress port and the multiple source node devices; and Step S500: processing the traffic of each connection according to the flow control policy. In this manner, a flow control policy is set for each connection, so that the magnetic array device can perform flow control on the traffic of each connection according to the set policy, further improving the avoidance of traffic congestion. Preferably, the flow control policy for each connection can be obtained in various manners.
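Steps S502/S504 above reduce to a threshold check followed by a notification. The following sketch illustrates the idea under stated assumptions: the threshold value, the message fields, and the `send_config_message` callback are all invented here for illustration, since the patent does not define a message format.

```python
PREDETERMINED_THRESHOLD = 8_000_000_000  # bits per second; illustrative value

def check_ingress_port(port_traffic_bps, send_config_message):
    """Steps S502/S504 in miniature: if ingress-port traffic exceeds the
    predetermined threshold, notify the connected source node device so it
    can adjust the data transmission rate of its application."""
    if port_traffic_bps > PREDETERMINED_THRESHOLD:
        # The configuration message instructs the source node to slow down.
        send_config_message({"action": "reduce_rate",
                             "observed_bps": port_traffic_bps})
        return True
    return False
```

In the system of the embodiments, the role of `send_config_message` would be played by the magnetic array control module delivering the configuration message to the source node control module in the host's FCoE driver protocol layer.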
For example, determining in step S501 the flow control policy for each connection between the ingress port and the multiple source node devices may include: obtaining the flow control policy from a preset configuration; and/or adjusting the flow control policy according to the priority of each connection and/or the load condition of the traffic on each connection. Preferably, adjusting the flow control policy according to the priority of each connection and/or the traffic condition on each connection includes: when the traffic on a high-priority connection increases, providing bandwidth allocated to a low-priority connection to the high-priority connection, so as to meet the bandwidth requirement of the traffic on the high-priority connection; and/or detecting the traffic condition of each connection within a predetermined time, and adding the bandwidth allocated to any connection on which no traffic is detected within the predetermined time to a common resource area for use by other connections. In this way, a bandwidth allocation mechanism based on connection priority is provided: bandwidth of low-priority connections is allocated to high-priority connections, guaranteeing the transmission of services on the high-priority connections, while the bandwidth resources of idle connections are shared for use by other working connections, improving the utilization efficiency of bandwidth resources.
Preferably, in the above manner, detecting the traffic condition of each connection within a predetermined time may be performed as follows: saving, for each connection, the receiving time of the latest read/write request sent by the source node device to the ingress port; checking the receiving time of each connection at intervals of the predetermined time; and when the interval between the receiving time and the current time is greater than or equal to the predetermined time, determining that no traffic has been detected on the corresponding connection within the predetermined time. In this manner, the receiving time of the latest read/write request is recorded and the traffic condition of each connection is traversed at a certain interval, which avoids the extra system resources that would be consumed if the magnetic array device frequently probed the traffic on every connection, reducing the overhead of the magnetic array device. Preferably, in the case that the flow control policy includes bandwidth weights, step S500 may include: determining the bandwidth allocated to each connection according to its bandwidth weight, and processing the traffic of each connection using the bandwidth allocated to it. Preferably, in the above manner, processing the traffic of each connection using its allocated bandwidth includes: receiving a read/write request from the source node device; determining whether the traffic of the connection between the source node device and the ingress port has reached the allocated bandwidth; and if so, discarding or suspending the read/write request.
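The weight-to-bandwidth step and the per-request admission check described above can be sketched as follows. This is only an illustration of the technique; the function names, the use of bits per second, and the simple running-usage counter are assumptions not taken from the patent text.

```python
def allocated_bandwidth(weights, link_capacity_bps):
    """Turn per-connection bandwidth weights into absolute allocations."""
    total = sum(weights.values())
    return {conn: link_capacity_bps * w / total for conn, w in weights.items()}

def handle_request(conn, request_bps, usage_bps, allocation):
    """Accept the read/write request only while the connection's measured
    traffic stays within its allocated bandwidth; otherwise discard it
    (a real implementation might instead suspend it for later)."""
    if usage_bps[conn] + request_bps > allocation[conn]:
        return "discard"
    usage_bps[conn] += request_bps
    return "accept"
```

With two equally weighted hosts on a shared link, each connection's requests are admitted until that connection's half of the capacity is consumed, which is the behavior step S500 describes for the bandwidth-weight policy.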
Preferably, in the case that the flow control policy includes priorities of traffic types, step S500 may include: when the bandwidth required by a high-priority traffic type increases, allocating bandwidth assigned to a low-priority traffic type to the high-priority traffic type; and randomly discarding, on the connections that carry the low-priority traffic type, the read/write requests corresponding to it. In this way, a corresponding priority is set for each traffic type and traffic types of different priorities are processed separately, guaranteeing the transmission of high-priority traffic. This embodiment further provides a flow control device for FCoE, which is configured to implement the above flow control method; the functions of the modules and units in the device correspond to the steps of the method described above and are not described again in this embodiment. FIG. 6 is a schematic structural diagram of a flow control device for FCoE according to an embodiment of the present invention. As shown in FIG. 6, the device includes a judging module 62 and a sending module 64, wherein the judging module 62 is configured to determine whether the traffic of the ingress port of the magnetic array device exceeds a predetermined threshold, and the sending module 64, coupled to the judging module 62, is configured to send, when the determination result is yes, a configuration message to the source node device connected to the ingress port, where the configuration message is used to instruct the source node device to adjust the data transmission rate of its application.
Preferably, the device further includes: a determining module 60, configured to determine, in the case that multiple source node devices are connected to the ingress port, the flow control policy for each connection between the ingress port and the multiple source node devices; and a processing module 61, coupled to the determining module 60 and the judging module 62 and configured to process the traffic of each connection according to the flow control policy. Preferably, the determining module 60 is further configured to obtain the flow control policy from a preset configuration, and/or to adjust the flow control policy according to the priority of each connection and/or the load condition of the service on each connection. Preferably, the determining module 60 is further configured to provide, when the traffic on a high-priority connection increases, bandwidth allocated to a low-priority connection to the high-priority connection so as to meet the bandwidth requirement of the traffic on the high-priority connection, and/or to detect the traffic condition of each connection within a predetermined time and add the bandwidth allocated to any connection on which no traffic is detected within the predetermined time to a common resource area for use by other connections. Preferably, the determining module 60 is further configured to save, for each connection, the receiving time of the latest read/write request sent by the source node device to the ingress port, to check the receiving time of each connection at intervals of the predetermined time, and, when the interval between the receiving time and the current time is greater than or equal to the predetermined time, to determine that no traffic has been detected on the corresponding connection within the predetermined time.
Preferably, in the case that the flow control policy includes bandwidth weights, the processing module
61 may be configured to determine the bandwidth allocated to each connection according to its bandwidth weight, and to process the traffic of each connection using the allocated bandwidth. Preferably, the processing module 61 may further be configured to receive a read/write request from the source node device, determine whether the traffic of the connection between the source node device and the ingress port has reached the allocated bandwidth, and, if so, discard or suspend the read/write request. Preferably, the processing module 61 is further configured, in the case that the flow control policy includes priorities of traffic types, to allocate, when the bandwidth required by a high-priority traffic type increases, bandwidth assigned to a low-priority traffic type to the high-priority traffic type, and to randomly discard the read/write requests of the low-priority traffic type on the connections that carry it. This embodiment further provides a flow control system for FCoE. FIG. 7 is a schematic structural diagram of a flow control system for FCoE according to an embodiment of the present invention. As shown in FIG. 7, the system includes a source node device 72 and a magnetic array device 74, and further includes a magnetic array control module 742 and a source node control module 722. The magnetic array control module 742 is located in the magnetic array device 74 and is configured to monitor the traffic of the ingress port of the magnetic array device 74 and, if the traffic exceeds a predetermined threshold, to send a configuration message to the source node control module 722. The source node control module 722, coupled to the magnetic array control module 742, is located in the FCoE driver protocol layer of the source node device 72 and is configured to receive the configuration message and, according to it, adjust the rate at which applications in the source node device send data to the magnetic array device. The following description is given in conjunction with preferred embodiments. This preferred embodiment provides a flow control method and system for FCoE; the method is a preventive measure taken before network capacity is reached, so that the FCoE protocol stack at the L3 (network) layer can effectively control the bandwidth and rate of the source node according to the processing capability of the target node. This preferred embodiment adopts the following technical solution. FIG. 8 is a schematic structural diagram of an FCoE frame structure according to a preferred embodiment of the present invention. As shown in FIG.
8, bandwidth-related members are added to the FCoE frame structure, for example as new members in the Reserved field of the FCoE Protocol Data Unit (PDU) format, where the weight indicates the proportion of bandwidth that the destination node port allocates to the source node; the priority can be raised for applications with stringent latency requirements, so as to guarantee execution latency; and the queue depth is the depth of the queue currently being processed by the FCoE protocol stack of the destination node, according to which the source node service adjusts the sending rhythm and flow control of its applications based on the actual load of the destination node. The key FCoE PDU fields and their meanings are:
Version: the version number of the FCoE frame;
Encapsulated FC Frame: the encapsulated FC frame. The members added in the Reserved field include: Default weight: the weight that the user configures on the ingress port for a given connection; Current weight: the current weight of the connection; Priority: the priority that the user configures on the ingress port for a
certain connection; Queue depth: the number of requests the ingress port is currently processing. In this preferred embodiment, in order to keep traffic processing as smooth as possible, the processing order inside the magnetic array is decided according to priority: high-priority resources are satisfied first, including the central processing unit (CPU) scheduling order, input/output (I/O) processing priority, and cache (CACHE) resource adjustment; low-priority requests are processed afterwards, and when the bandwidth exceeds the pre-allocated quota, requests are discarded according to the corresponding policy and the proportion discarded is computed. In the networking of FIG. 1, if the hosts HI and H2 are configured with default weight = 50, current weight = x, priority = 0, and queue depth = x, then, so as not to waste bandwidth and traffic, the weight is multiplied by a coefficient, for example 1.2, and the magnetic array configured in FIG. 1 provides 6G of bandwidth to each of the hosts HI and H2. FIG. 9 is a schematic diagram of the traffic bandwidth fluctuation of a magnetic array networking according to an embodiment of the present invention; the traffic bandwidth of the applications on hosts HI and H2 fluctuates as shown in FIG. 9. As can be seen from FIG. 9, bandwidth and traffic fluctuations are well balanced, and flow control can be balanced among the applications, the switches, and the magnetic array. In the above manner, when multiple hosts are connected to one magnetic array or to one port of a magnetic array, the user service jitter caused by the lack of effective flow control in L3 (network) FCoE is resolved, and situations are avoided in which applications cannot tolerate such overload, jitter, and timeouts and thus cannot use the magnetic array device.
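The 6G figure in the example above follows from the weight share scaled by the over-allocation coefficient; it is consistent with the assumption (not stated explicitly in the text) that each link in FIG. 1 runs at 10 Gbps. The arithmetic can be written out as:

```python
def provided_bandwidth(default_weight_percent, coefficient, link_gbps):
    """Bandwidth offered to a host: its weight share of the link, scaled by
    the over-allocation coefficient so that bandwidth is not wasted."""
    return link_gbps * (default_weight_percent / 100.0) * coefficient

# Default weight = 50, coefficient = 1.2, and an assumed 10G link:
bw = provided_bandwidth(50, 1.2, 10)  # 6.0 Gbps for each of HI and H2
```

Because the weights are over-allocated (2 × 6G > 10G here), the coefficient trades a guarantee of isolation for higher utilization when the other host is not using its full share.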
Adjustment of the current weight and the default weight: since the resources of the magnetic array are precious, many hosts may be connected to one magnetic array at the same time. When the user has configured a weight for a host but that host delivers no service for a period of time, the current weight of the host should be adjusted to free resources for the connections that are running. In the networking of FIG. 1, if the hosts HI and H2 are configured with default weight = 50, current weight = x, priority = 0, and queue depth = x, and the HI host delivers no service for a period of time, the current weight of the HI host needs to be adjusted dynamically to free bandwidth for the H2 host. A current-weight calculation device is added at the magnetic array port T1: a flag is set for each connection of the port, and when the connection carries a service the timestamp of the request is written into the flag; a timer is set for the port, and every 30 seconds the time flags of all connections are traversed; when a time flag is found to differ from the current time by 30 seconds or more, the connection is marked as idle for the first time, and so on; when this has been counted a third time, the current weight of that connection begins to be lowered and the current weights of the other connections are raised. Likewise, when a connection whose current weight has been lowered has a request again, its current weight is restored to the default weight. In order to make the objects, technical solutions, and advantages of the present invention clearer, the above preferred embodiments are further described below. FIG. 10 is a system structural diagram of a flow control system according to a preferred embodiment of the present invention; as shown in FIG.
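The 30-second sweep and three-strike weight decay described above can be sketched as follows. The class name, the halving decay policy, and the strike bookkeeping are illustrative assumptions; the patent only specifies the 30-second traversal, the third-occurrence trigger, lowering the idle connection's current weight while raising the others', and restoration to the default weight on a new request.

```python
class WeightManager:
    """Sketch of the current-weight calculation device at port T1."""
    SWEEP_SECONDS = 30
    STRIKES_BEFORE_DECAY = 3

    def __init__(self, default_weights):
        self.default = dict(default_weights)
        self.current = dict(default_weights)
        self.last_seen = {c: 0.0 for c in default_weights}  # timestamp flag
        self.strikes = {c: 0 for c in default_weights}

    def on_request(self, conn, now):
        # A new request restores a decayed connection to its default weight.
        self.last_seen[conn] = now
        self.strikes[conn] = 0
        self.current[conn] = self.default[conn]

    def sweep(self, now):
        # Every 30 seconds, traverse the timestamp flags of all connections.
        for conn, seen in self.last_seen.items():
            if now - seen >= self.SWEEP_SECONDS:
                self.strikes[conn] += 1
                if self.strikes[conn] >= self.STRIKES_BEFORE_DECAY:
                    freed = self.current[conn] // 2  # decay policy: illustrative
                    self.current[conn] -= freed
                    others = [c for c in self.current if c != conn]
                    for c in others:  # raise the active connections' weights
                        self.current[c] += freed // len(others)
                    self.strikes[conn] = 0
```

In the FIG. 1 scenario, three consecutive idle sweeps of HI would shift part of its weight to H2, and HI's first new request would restore its default weight.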
10, access to data is achieved through communication between a host 92 (corresponding to the source node device 72 above) and a storage device 94 (corresponding to the magnetic array device 74 above). In the host 92, the method of this preferred embodiment is executed by a host control module 922 (corresponding to the source node control module 722 above), a functional module added to the FCoE driver protocol layer of the host relative to the related art; adding this module enables the host 92 to perform the method of this preferred embodiment. According to the networking and bandwidth requirements, the user may have the host control module 922 execute the flow control method of this preferred embodiment manually or according to a preset policy. In the magnetic array device, the method of this preferred embodiment is executed by a magnetic array control module 942 (corresponding to the magnetic array control module 742 above), which is an improvement of the original flow control module on the magnetic array device side. FIG. 11 is a schematic flowchart of a flow control method for a magnetic array device according to a preferred embodiment of the present invention. As shown in FIG.
11, the flow includes the following steps: Step S1001: the magnetic array control module is initialized; Step S1002: the magnetic array obtains the preset data objects and, according to them, obtains the flow control policy of the data objects, where the flow control policy may be configured by the user on the magnetic array side before the host accesses the storage device, or the magnetic array default configuration may be used; Step S1003: if the user has not preset the flow control policy data objects, the default data objects are used, which come from the initial default configuration of the magnetic array, in which all connections on one port of the magnetic array share the weight equally and the initial priorities are all equal; Step S1004: if the user has not preset the flow control policy data objects, the data objects are: the default weight of a connection, namely the weight that the user configures on the ingress port for the connection; the current weight of the connection, namely its present weight, initially equal to the default weight; the priority, namely the priority that the user configures on the ingress port for the connection; and the queue depth, namely the number of requests the ingress port is processing, initially zero. If an application requires the magnetic array to provide, as far as possible, the configured number of read/write I/O operations per second (IOPS), different priorities can be set, the applications can have different priorities from one another, and accesses of high-priority applications can be processed preferentially on the magnetic array side; the IOPS monitored for a connection is the product of the current weight and the maximum IOPS of the link speed.
When the IOPS of the connection has not yet reached the preset target, the magnetic array device continues to adjust the system resources allocated to the accesses of that connection, including increasing its share of CPU, raising the I/O priority, and adjusting the queue depth and count, until the preset IOPS is reached. FIG. 12 is a schematic flowchart of IOPS control in the flow control method for a magnetic array device according to a preferred embodiment of the present invention. As shown in FIG. 12, the flow includes the following steps: Step S1101: the magnetic array receives a host request; Step S1102: the magnetic array obtains the preset data objects; Step S1103: the magnetic array determines whether the IOPS has reached the upper limit, and if so, hands the request to the monitoring module for corresponding processing, where the monitoring module discards it or suspends it to wait for processing; Step S1104: if the IOPS has not reached the upper limit, the magnetic array increases the connection's share of CPU, raises the I/O priority, and adjusts the queue depth and count so that the IOPS on the connection reaches the preset IOPS as soon as possible. The magnetic array device in the embodiments of the present invention can perform the magnetic array device flow control method of any embodiment of the present invention. FIG. 13 is a schematic structural diagram of a magnetic array device according to a preferred embodiment of the present invention. As shown in FIG. 13, the magnetic array device may include: a policy configuration module 1201, a service control module 1202, and a policy analysis module 1203.
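The IOPS target computation and the branch taken in steps S1103/S1104 can be summarized as below. This is a minimal sketch; the callback-based shape and the percentage interpretation of the current weight are assumptions made for the example.

```python
def iops_target(current_weight_percent, link_max_iops):
    """The IOPS monitored for a connection is the product of its current
    weight and the maximum IOPS of the link speed."""
    return link_max_iops * current_weight_percent / 100.0

def handle_host_request(measured_iops, target, boost_resources, penalize):
    """Steps S1103/S1104 in miniature: at or above the upper limit the
    monitoring module discards or suspends the request; below it, the array
    grants more CPU share, raises the I/O priority, and deepens the queues
    so the connection reaches its preset IOPS as soon as possible."""
    if measured_iops >= target:
        penalize()          # discard or suspend, per policy
    else:
        boost_resources()   # more CPU share, higher I/O priority, deeper queues
```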
The policy configuration module 1201 is configured to extract the corresponding data objects from the preset data objects or the default data objects, including: the default weight of a connection, namely the weight that the user configures on the ingress port for the connection; the current weight of the connection, namely its present weight, initially equal to the default weight; the priority, namely the priority that the user configures on the ingress port for the connection; and the queue depth, namely the number of requests the ingress port is processing, initially zero. The service control module 1202 is configured to perform quality-of-service control on the connections. The policy analysis module 1203 is configured to analyze and manage the policy on each connection. Preferably, the magnetic array device further includes a policy monitoring module 1204, configured to monitor bandwidth and IOPS. The monitoring of bandwidth and IOPS is neither real-time nor synchronous: to reduce the performance impact of the monitoring module on the magnetic array device and to guarantee its availability, the magnetic array does not immediately notify the host and its internal processing system of weight changes in real time when monitoring changes in the data objects, so as to avoid increasing the burden on the magnetic array device; periodic monitoring may be used instead. A policy adjustment module 1205 may be configured to adjust the weights according to the policy, and a policy sending module 1206 is configured to send the current flow control policy to the host.
By providing the policy configuration module 1201, the service control module 1202, and the other modules above, the magnetic array device of the preferred embodiment can control and dynamically adjust the bandwidth and IOPS of a connection, and solves the user traffic jitter caused by the lack of effective flow control in L3 (network-layer) FCoE when multiple hosts are connected to one magnetic array or to one port of a magnetic array. It avoids the situation in which some applications cannot tolerate such overload, jitter, and timeouts and therefore cannot use the magnetic array device; it saves the system resources of the magnetic array system, improves the service quality of the magnetic array system, and meets user needs more accurately. FIG. 14 is a schematic diagram of a process flow in which the magnetic array device receives a host I/O according to a preferred embodiment of the present invention. As shown in FIG. 14, the process includes the following steps: Step S1301, the magnetic array receives a host request; Step S1302, the policy analysis module 1203 obtains the relevant configuration; Step S1303, it is determined whether the flow control upper limit is reached; if yes, the service control module 1202 is entered; Step S1304, if the flow control upper limit is not reached, the process proceeds to the next step; Step S1305, it is determined whether the request has a high priority; if yes, the service control module 1202 is entered and the request is processed according to its high priority; Step S1306, if the request does not have a high priority, it is placed into the normal queue together with the other non-priority requests for fair processing; Step S1307, after the threshold and priority judgments end, the process enters the policy adjustment module 1205 and the policy monitoring module
1204 for processing; Step S1308, after all the above processing is completed, processing of the host I/O request begins. Priority identifies how preferentially a request is transmitted and can be divided into two categories: the priority carried by the request and the magnetic array scheduling priority. The priority carried by the request mainly refers to the 802.1p priority, which is handled by the L2 layer. The preferred embodiment provides a method of processing the magnetic array scheduling priority.
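The receive flow of FIG. 14 above (steps S1301–S1308) amounts to a two-stage dispatch: first the flow control upper limit is checked, then the priority. A minimal sketch follows; the function and queue names are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of steps S1303-S1306: the flow-control upper limit
# is checked first, then the priority. Names here are illustrative.
from collections import deque

normal_queue = deque()   # non-priority requests wait here for fair processing

def dispatch(request, traffic, limit, high_priority):
    """Route one host request per FIG. 14 before modules 1205/1204 run."""
    if traffic >= limit:
        return "service_control"           # S1303: upper limit reached
    if high_priority:
        return "service_control_priority"  # S1305: handled at high priority
    normal_queue.append(request)           # S1306: fair, non-priority handling
    return "queued"

assert dispatch("io-1", traffic=10, limit=8, high_priority=False) == "service_control"
assert dispatch("io-2", traffic=5, limit=8, high_priority=True) == "service_control_priority"
assert dispatch("io-3", traffic=5, limit=8, high_priority=False) == "queued"
assert list(normal_queue) == ["io-3"]
```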
The magnetic array scheduling priority refers to the priority used when a request applies for CPU, cache (CACHE) resources, and queue depth within the magnetic array, and it is valid only for the current magnetic array device itself. Its main meanings are as follows. It is a priority with local significance that the magnetic array assigns to a request: each local priority corresponds to a queue, and a request with a larger local priority value enters a higher-priority queue and therefore obtains preferential scheduling. It is also a parameter consulted when requests are discarded: the smaller the priority value, the more preferentially the request is discarded. For a request for which the user has not specified a priority, the magnetic array device automatically acquires the L2-layer priority of the incoming traffic or processes the request according to the default priority of the magnetic array. FIGS. 15a to 15c are schematic diagrams of the priority processing flow when the magnetic array device receives a host I/O request according to a preferred embodiment of the present invention; the flow includes the following steps: Step S1401, FIG. 15a is a schematic diagram of the magnetic array port receiving a host request according to a preferred embodiment of the present invention; as shown in FIG. 15a, the magnetic array port receives the host request; Step S1402, if the user has specified priorities, the requests are sorted by traffic type into three classes; since each traffic type has only one priority, the priorities of individual packets are not distinguished within a given traffic type; Step S1403, FIG. 15b is a schematic diagram of the magnetic array device arranging all traffic types in order of increasing priority according to a preferred embodiment of the present invention; as shown in FIG.
15b, all traffic types are arranged in order of increasing priority; Step S1404, FIG. 15c is a schematic diagram of the magnetic array device arranging all traffic types in order of increasing magnetic array scheduling priority according to a preferred embodiment of the present invention; as shown in FIG. 15c, all traffic types are arranged in order of increasing magnetic array scheduling priority; Step S1405, if the user has specified magnetic array scheduling priorities, priorities are distinguished within a given traffic type: when the magnetic array executes a request, magnetic array resources are allocated according to the scheduling priority, and the scheduling priorities are reordered within the type. Traffic monitoring watches the specification of a given traffic flow entering the network, limits it to a reasonable range, or "punishes" the excess traffic so that requests are processed according to the preset policy; if the traffic on a connection is found to exceed its limit, traffic monitoring can choose to discard the request or to reset the priority of the request. When the magnetic array discards requests, it needs to cooperate with the flow control action of the source end to adjust the network traffic to a reasonable load state. An effective combination of the packet-drop policy and the source-end flow control mechanism maximizes network throughput and utilization efficiency while minimizing request drops and delay. Traffic monitoring should avoid the following situation: if a queue discards requests from multiple connections at the same time, those connections will enter the congestion avoidance state simultaneously; the hosts will lower their transmission rates and adjust their traffic, and traffic peaks will then reappear simultaneously at some later time. Repeated over and over, this makes the network traffic swing between large and small, and the network oscillates continuously. The detection methods adopted are therefore mainly as follows.
Randomly discarding requests ensures that when the requests of one connection are discarded and that connection begins to slow its sending, the other connections still maintain higher sending rates. In this way, at any moment there is always some connection sending at a faster rate, which improves the utilization of the line bandwidth. An upper limit and a lower limit are set for each queue according to the preset policy, and the requests in the queue are processed as follows: when the queue length is below the lower limit, no request is discarded; when the queue length exceeds the upper limit, arriving requests begin to be discarded at random. The longer the queue, the higher the drop probability, up to a maximum drop probability. Alternatively, the exponent used when calculating the average queue length, the upper limit, the lower limit, and the drop probability can be set separately for requests of different priorities, thereby providing different drop characteristics for different priorities. A balance needs to be struck between discarding requests and introducing additional delay: on the one hand, flow control is performed according to the preset policy; on the other hand, the additional delay introduced to host requests is minimized. FIG. 16 is a schematic diagram of the traffic monitoring flow when the magnetic array device receives a host I/O request according to a preferred embodiment of the present invention. As shown in FIG. 16, the flow includes the following steps: Step S1501, the magnetic array port receives a host request; Step S1502, requests are sorted by traffic type and monitored at time points; Step S1503, under the existing processing mode, traffic 3 has the highest priority; at time T2, since idle bandwidth is available, traffic 3 occupies the remaining idle bandwidth and its bandwidth is increased to 4 Gbps; Step S1504, traffic 1 has the lowest priority.
At time T3, since the host of traffic 3 increases its transmission bandwidth, traffic 3 preempts the bandwidth of traffic 1: the bandwidth of traffic 3 is increased to 5 Gb and the bandwidth of traffic 1 is reduced to 2 Gb. The problem is that the bandwidth of traffic 1 has been reduced but the host of traffic 1 is not aware of this; when the transmission bandwidth of the host corresponding to traffic 1 exceeds 2 Gb, the magnetic array port sends a PAUSE frame notifying the opposite end to temporarily stop sending requests, but the host application cannot perceive this and continues to send at a 3 Gb rate, which causes the host application corresponding to traffic 1 to suffer a large number of timeouts or even momentary service interruptions; Step S1505, suppose each traffic type corresponds to two connections on the magnetic array side, shown in FIG. 16 as the connection 1 and connection 2 queues; with a flow control policy preset, for example that traffic 3 is to occupy 5 Gb of bandwidth at time T3, the policy monitoring module adjusts the bandwidth of traffic 1 and traffic 3 at time T3; Step S1506, after the flow control policy is set, when a traffic type exceeds its limit, the requests of multiple connections must not be discarded at the same time, lest the network oscillate continuously; instead, requests are randomly discarded on connection 1 and connection 2, with the discarding performed according to the queue lengths on connection 1 and connection 2 and the limits of the preset discard policy, and feedback is given to the host side, which actively reduces its transmission rate, so that the discard amount and probability gradually decrease and the bandwidth curve becomes smooth. A policy monitoring module is added to the magnetic array device, which sets a flag for each connection of a port; when the connection has traffic, the timestamp of the request is written into the flag; a timer is set for the port, and every 30 seconds the module traverses the time stamps of all connections.
When the time stamp value is found to differ from the current time by 30 seconds or more, the connection is marked as having no traffic for the first time, and so on; when this has been counted three times, the current weight value of the connection begins to be lowered and the current weight values of the other connections are raised. Likewise, when a connection whose current weight has been lowered receives a request, its current weight is restored to the default weight. FIG. 17 is a schematic diagram of the adjustment flow between the current weight and the default weight of the magnetic array device according to a preferred embodiment of the present invention. As shown in FIG. 17, the weight value on each connection is counted; for example, a connection with a weight of 30 has 30 credits. Each time a host request is received, the credit value is decreased, and when the I/O processing is completed, the credit value is restored. The traffic monitoring module checks every 30 seconds; if for three consecutive checks the credits on a connection equal the default weight, that is, show no change, the connection is considered to have no host traffic demand for the time being, and its credits can be contributed to a common resource area, where other connections use the public credits according to their load. A host request can be processed only if there are enough credits on the connection. In summary, the embodiments and preferred embodiments of the present invention can effectively solve the problem that upper-layer services in an FCoE network cannot perceive link congestion. In the existing implementation, maintenance personnel manually configure switch bandwidth to keep service jitter small; the static configuration of the switch wastes bandwidth, and when the networking is complicated, manual configuration is inefficient and load adjustment among all nodes is complex.
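The credit-based idle detection described above can be sketched as follows. This is a minimal illustration under stated assumptions: the names, the donation bookkeeping, and restoring directly to the default weight are choices made here for clarity, not details taken from the patent.

```python
# Hypothetical sketch of the 30-second idle check: after three consecutive
# checks with unchanged credits, a connection's credits are donated to a
# common pool; a new request restores them. Names are illustrative.

IDLE_CHECKS_BEFORE_DONATE = 3

class CreditedConnection:
    def __init__(self, default_weight):
        self.default_weight = default_weight
        self.credits = default_weight  # one credit per unit of weight
        self.idle_count = 0

def periodic_check(conn, common_pool):
    """Run every 30 seconds by the traffic monitoring module (sketch)."""
    if conn.credits == conn.default_weight:  # unchanged: no host traffic
        conn.idle_count += 1
        if conn.idle_count >= IDLE_CHECKS_BEFORE_DONATE:
            common_pool += conn.credits      # contribute to the common area
            conn.credits = 0
    else:
        conn.idle_count = 0
    return common_pool

def on_request(conn):
    """Restore the weight as soon as the idle connection sees a request."""
    if conn.credits == 0:
        conn.credits = conn.default_weight
    conn.idle_count = 0

pool = 0
conn = CreditedConnection(default_weight=30)
for _ in range(3):
    pool = periodic_check(conn, pool)
assert pool == 30 and conn.credits == 0   # donated after the third idle check
on_request(conn)
assert conn.credits == 30                 # restored on the next request
```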
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software. The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and changes. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention. Industrial Applicability As described above, the Fibre Channel over Ethernet flow control method, device, and system provided by the embodiments of the present invention have the following beneficial effect: the service quality of the magnetic array system is improved.

Claims

1. A flow control method for Fibre Channel over Ethernet, comprising: determining whether the traffic of an ingress node port of a magnetic array device exceeds a predetermined threshold; and, if the result of the determination is yes, sending a configuration message to a source node device connected to the ingress node port, wherein the configuration message is used to instruct the source node device to adjust the data transmission rate of its application.
2. The method according to claim 1, wherein, in a case where multiple source node devices are connected to the ingress node port, before determining whether the traffic exceeds the predetermined threshold, the method further comprises: determining a flow control policy for each connection between the ingress node port and the multiple source node devices; and processing the traffic of each connection according to the flow control policy.
3. The method according to claim 2, wherein determining the flow control policy for each connection between the ingress node port and the multiple source node devices comprises: acquiring the flow control policy according to a preset configuration; and/or adjusting the flow control policy according to the priority of each connection and/or the load condition of the service on each connection.
4. The method according to claim 3, wherein adjusting the flow control policy according to the priority of each connection and/or the traffic condition on each connection comprises: when the traffic on a higher-priority connection increases, providing the bandwidth allocated to a lower-priority connection to the higher-priority connection, so as to satisfy the bandwidth requirement of the traffic on the higher-priority connection; and/or detecting the traffic condition of each connection within a predetermined time, and adding the bandwidth allocated to any connection on which no traffic is detected within the predetermined time to a common resource area for use by other connections.
5. The method according to claim 4, wherein detecting the traffic condition of each connection within the predetermined time comprises: storing, for each connection, the reception time of the latest read/write request from the source node device to the ingress node port; detecting the reception time of each connection at intervals of the predetermined time; and, in a case where the interval between the reception time and the current time is greater than or equal to the predetermined time, determining that no traffic is detected on the corresponding connection within the predetermined time.
6. The method according to claim 2, wherein, in a case where the flow control policy comprises bandwidth weights, processing the traffic of each connection according to the flow control policy comprises: determining the bandwidth allocated to each connection according to the bandwidth weight of each connection; and processing the traffic of each connection through the bandwidth allocated to it.
7. The method according to claim 6, wherein processing the traffic of each connection through the bandwidth allocated to it comprises: receiving a read/write request from the source node device; determining whether the traffic of the connection between the source node device and the ingress node port reaches the allocated bandwidth; and, if the result of the determination is yes, discarding or suspending the read/write request.
8. The method according to any one of claims 2 to 7, wherein, in a case where the flow control policy comprises priorities of traffic types, processing the traffic of each connection according to the flow control policy comprises: when the bandwidth required by a higher-priority traffic type increases, allocating the bandwidth assigned to a lower-priority traffic type to the higher-priority traffic type; and randomly discarding, on the connections that carry the lower-priority traffic type, the read/write requests corresponding to the lower-priority traffic type.
9. A flow control device for Fibre Channel over Ethernet, comprising: a judging module configured to determine whether the traffic of an ingress node port of a magnetic array device exceeds a predetermined threshold; and a sending module configured to, if the result of the determination is yes, send a configuration message to a source node device connected to the ingress node port, wherein the configuration message is used to instruct the source node device to adjust the data transmission rate of its application.
10. A flow control system for Fibre Channel over Ethernet, comprising a source node device and a magnetic array device, and further comprising: a magnetic array control module, located in the magnetic array device and configured to monitor the traffic of an ingress node port of the magnetic array device and, if the traffic exceeds a predetermined threshold, to send a configuration message to a source node control module; and the source node control module, located in the Fibre Channel over Ethernet (FCoE) driver protocol layer of the source node device and configured to receive the configuration message and, according to the configuration message, adjust the rate at which an application in the source node device sends data to the magnetic array device.
PCT/CN2014/083645 2014-04-04 2014-08-04 Fiber channel over ethernet flow control method, device and system WO2015149460A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410136580.7 2014-04-04
CN201410136580.7A CN104980359A (en) 2014-04-04 2014-04-04 Flow control method of fiber channel over Ethernet (FCoE), flow control device of FCoE and flow control system of FCoE

Publications (1)

Publication Number Publication Date
WO2015149460A1 true WO2015149460A1 (en) 2015-10-08

Family

ID=54239353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/083645 WO2015149460A1 (en) 2014-04-04 2014-08-04 Fiber channel over ethernet flow control method, device and system

Country Status (2)

Country Link
CN (1) CN104980359A (en)
WO (1) WO2015149460A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107896161A (en) * 2017-11-07 2018-04-10 国网江苏省电力公司盐城供电公司 A kind of service control system and its control method of terminal communication access net
US10992580B2 (en) 2018-05-07 2021-04-27 Cisco Technology, Inc. Ingress rate limiting in order to reduce or prevent egress congestion
CN113824649A (en) * 2021-09-17 2021-12-21 上海航天计算机技术研究所 Data flow control device for fixed frame length
CN115834411A (en) * 2023-02-16 2023-03-21 北京派网软件有限公司 Network performance analysis method and system

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN107623638B (en) * 2016-07-15 2020-09-25 中国电信股份有限公司 Fault processing method and device for load balancing path
CN106850452B (en) * 2017-02-22 2020-05-22 华南理工大学 Distributed optimal flow control method with limited network cache
CN110213118B (en) * 2018-02-28 2021-04-06 中航光电科技股份有限公司 FC network system and flow control method thereof
CN109347762B (en) * 2018-10-26 2023-05-05 平安科技(深圳)有限公司 Cross-region outlet flow allocation method and device, computer equipment and storage medium
CN113098785B (en) * 2021-03-31 2022-05-27 新华三信息安全技术有限公司 Message processing method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1988496A (en) * 2005-12-21 2007-06-27 华为技术有限公司 Method for regulating optic fiber path data flow speed rate
CN103392324A (en) * 2011-03-08 2013-11-13 国际商业机器公司 Message forwarding toward a source end node in a converged network environment
CN103647723A (en) * 2013-12-26 2014-03-19 深圳市迪菲特科技股份有限公司 Method and system for monitoring flow

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US7634582B2 (en) * 2003-12-19 2009-12-15 Intel Corporation Method and architecture for optical networking between server and storage area networks
US7961621B2 (en) * 2005-10-11 2011-06-14 Cisco Technology, Inc. Methods and devices for backward congestion notification
CN101340358B (en) * 2007-07-04 2011-04-20 鼎桥通信技术有限公司 Flow control method, system and flow control entity
CN101141406B (en) * 2007-10-17 2010-04-07 杭州华三通信技术有限公司 Distributed flow control method, system and device
US20110261696A1 (en) * 2010-04-22 2011-10-27 International Business Machines Corporation Network data congestion management probe system
CN102025617B (en) * 2010-11-26 2015-04-01 中兴通讯股份有限公司 Method and device for controlling congestion of Ethernet
CN102075437B (en) * 2011-02-12 2013-04-24 华为数字技术(成都)有限公司 Communication method, gateway and network
US20130205038A1 (en) * 2012-02-06 2013-08-08 International Business Machines Corporation Lossless socket-based layer 4 transport (reliability) system for a converged ethernet network
CN103023803B (en) * 2012-12-12 2015-05-20 华中科技大学 Method and system for optimizing virtual links of fiber channel over Ethernet


Cited By (7)

Publication number Priority date Publication date Assignee Title
CN107896161A (en) * 2017-11-07 2018-04-10 国网江苏省电力公司盐城供电公司 A kind of service control system and its control method of terminal communication access net
CN107896161B (en) * 2017-11-07 2023-11-24 国网江苏省电力公司盐城供电公司 Service control system and control method for terminal communication access network
US10992580B2 (en) 2018-05-07 2021-04-27 Cisco Technology, Inc. Ingress rate limiting in order to reduce or prevent egress congestion
CN113824649A (en) * 2021-09-17 2021-12-21 上海航天计算机技术研究所 Data flow control device for fixed frame length
CN113824649B (en) * 2021-09-17 2023-10-27 上海航天计算机技术研究所 Data flow control device for fixed frame length
CN115834411A (en) * 2023-02-16 2023-03-21 北京派网软件有限公司 Network performance analysis method and system
CN115834411B (en) * 2023-02-16 2023-06-27 北京派网软件有限公司 Network performance analysis method and system

Also Published As

Publication number Publication date
CN104980359A (en) 2015-10-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14888030

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14888030

Country of ref document: EP

Kind code of ref document: A1