US9270600B2 - Low-latency lossless switch fabric for use in a data center - Google Patents

Low-latency lossless switch fabric for use in a data center

Info

Publication number
US9270600B2
Authority
US
United States
Prior art keywords
packet
switch
congestion
latency
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/656,575
Other versions
US20150188821A1 (en)
Inventor
Alexander P. Campbell
Keshav G. Kamble
Vijoy A. Pandey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo International Ltd
Original Assignee
Lenovo Enterprise Solutions Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Enterprise Solutions Singapore Pte Ltd
Priority to US14/656,575
Publication of US20150188821A1
Assigned to LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PANDEY, VIJOY A., CAMPBELL, ALEXANDER P., KAMBLE, KESHAV G.
Application granted
Publication of US9270600B2
Assigned to LENOVO INTERNATIONAL LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
Assigned to LENOVO INTERNATIONAL LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE LTD.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/122: Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
    • H04L 47/20: Traffic policing
    • H04L 47/32: Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 49/00: Packet switching elements
    • H04L 49/20: Support for services
    • H04L 49/205: Quality of Service based
    • H04L 49/206: Real Time traffic
    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 49/50: Overload detection or protection within a single switching element

Definitions

  • As shown in FIG. 3, switch 302 also has access to packet forwarding policy 314.
  • a physical switch may include the packet forwarding policy.
  • a server hosting a virtual switch may comprise the packet forwarding policy.
  • the packet forwarding policy 314 comprises criteria for forwarding packets in congestion conditions along with one or more alternative ports.
  • the criteria may include packet priority, a destination identifier, e.g., an IP address, a media access control (MAC) address, etc., a traffic flow identifier, e.g., a combination of source and destination addresses, a packet size, a packet latency, virtual local area network (VLAN) tag(s), and/or other related parameters.
  • the alternative port may be a physical port, logical interface, Link Aggregation (LAG) group, virtual port, etc.
  • one or more properties of the packet may be determined and used to decide whether the packet satisfies the packet forwarding policy.
  • the property of the packet may include any of the following: a packet priority, a destination application identifier, a source address, a destination address, a packet size, a VLAN identifier, and/or an acceptable latency for the packet.
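  • As a concrete illustration of the kind of policy table described above, a minimal Python sketch follows. It is illustrative only: the field names, default values, and the example port name are hypothetical and are not defined by the patent.

```python
# Hypothetical sketch of one packet forwarding policy entry and its match
# check. The patent does not define a concrete encoding for policy 314; every
# field name and value below is illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyEntry:
    min_priority: int = 0                  # lossless treatment for priorities >= this value
    dst_mac: Optional[str] = None          # match on destination MAC address, if set
    vlan: Optional[int] = None             # match on VLAN identifier, if set
    max_packet_size: Optional[int] = None  # maximum packet size in bytes, if set
    alternative_port: str = "buffered-1"   # physical port, logical interface, LAG, or virtual port

def packet_satisfies(entry: PolicyEntry, pkt: dict) -> bool:
    """Return True if the packet's extracted properties satisfy the policy entry."""
    if pkt.get("priority", 0) < entry.min_priority:
        return False
    if entry.dst_mac is not None and pkt.get("dst_mac") != entry.dst_mac:
        return False
    if entry.vlan is not None and pkt.get("vlan") != entry.vlan:
        return False
    if entry.max_packet_size is not None and pkt.get("size", 0) > entry.max_packet_size:
        return False
    return True

# Example: divert priority-3-or-higher traffic on VLAN 100 to a buffered port.
policy = PolicyEntry(min_priority=3, vlan=100, alternative_port="buffered-1")
pkt = {"priority": 4, "vlan": 100, "dst_mac": "00:11:22:33:44:55", "size": 1500}
print(packet_satisfies(policy, pkt))  # True, so the packet would use policy.alternative_port
```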
  • Referring now to FIG. 4, a simplified flowchart of a method 400 is shown according to one embodiment.
  • the method 400 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 4 may be included in method 400, as would be understood by one of skill in the art upon reading the present descriptions.
  • each of the steps of the method 400 may be performed by any suitable component of the operating environment.
  • the method 400 may be partially or entirely performed by a switch in a data center fabric.
  • method 400 may be partially or entirely performed by a processor of a switch, with access to a packet forwarding policy.
  • a packet of incoming traffic is received (such as at a switch in the data center fabric).
  • the at least one fabric congestion criteria may include receipt of back pressure from one or more low-latency switches downstream of the switch. In this way, if a low-latency switch is indicating congestion, traffic may be diverted from this switch until it is able to process the traffic that has already been forwarded to it.
  • the at least one congestion condition may be binary (Yes/No), multi-step, tiered, etc. That is, a multi-step condition may include various levels of congestion criteria in the fabric (e.g., high, medium, low).
  • a tiered condition may include categories, each category including one or more forwarding procedures. For example, different types of packets may be categorized and dealt with differently in the forwarding policy. Depending on the level of congestion, a default action may be adjusted to best handle a run-time situation.
  • When the at least one fabric congestion criteria is not met, the packet is forwarded to a low-latency switch in operation 414.
  • This may be a default action in some approaches as it allows traffic to proceed through the data center fabric in a most expedient manner.
  • When the at least one fabric congestion criteria is met, a packet forwarding policy is applied in operation 406 to determine how to forward the packet.
  • Application of the packet forwarding policy in operation 406 involves determining relevant attributes of the packet. For example, if the policy indicates that lossless treatment is to be provided to packets with a certain priority, priority information is extracted from the packet. All other parameters of the packet, either present in the packet or calculated using an algorithm, may be extracted for future comparison and/or for other comparisons or determinations.
  • the packet forwarding policy may indicate dropping the packet in one or more scenarios, as shown in operation 408. For example, in one approach, if the packet does not satisfy policy criteria, the packet may be dropped, as shown in operation 412.
  • Otherwise, the packet is forwarded to a buffered switch in operation 410.
  • the decision whether to drop the packet or forward the packet to the buffered switch may be made based on a calculated fit between a value extracted in operation 406 and a specification in the packet forwarding policy, according to one embodiment.
  • Standard flow control protocols which may trigger this mechanism include IEEE 802.1Qbb Priority-based Flow Control (PFC), IEEE 802.1Qaz Enhanced Transmission Selection (ETS), Quantized Congestion Notification (QCN), or any other standard flow control according to IEEE 802.3x.
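  • To illustrate how such back pressure could be consumed, the sketch below decodes the payload of an 802.1Qbb PFC frame into per-priority pause requests. It follows the commonly documented PFC layout (a 2-byte priority-enable vector followed by eight 2-byte pause quanta); the function name and return format are invented for this example, and validation of the MAC control opcode is omitted.

```python
# Rough sketch: interpret the payload of an IEEE 802.1Qbb PFC frame as
# per-priority back pressure. Offsets assume the 2-byte MAC control opcode
# (0x0101) has already been consumed; error handling is omitted.

import struct

def parse_pfc_payload(payload: bytes) -> dict:
    """Return {priority: pause_quanta} for each priority flagged in the enable vector."""
    (enable_vector,) = struct.unpack_from("!H", payload, 0)
    quanta = struct.unpack_from("!8H", payload, 2)
    return {p: quanta[p] for p in range(8) if enable_vector & (1 << p)}

# A received PFC payload asking to pause priority 3 for 0xFFFF quanta:
payload = struct.pack("!H8H", 1 << 3, 0, 0, 0, 0xFFFF, 0, 0, 0, 0)
print(parse_pfc_payload(payload))  # {3: 65535} -> a congestion condition is met for priority 3
```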
  • any or all operations of method 400 may be implemented in a system or a computer program product.
  • FIG. 5 shows a simplified flowchart of control logic that may be used in conjunction with operation 404 of method 400 shown in FIG. 4.
  • the method 500 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 5 may be included in method 500, as would be understood by one of skill in the art upon reading the present descriptions.
  • each of the steps of the method 500 may be performed by any suitable component of the operating environment.
  • the method 500 may be partially or entirely performed by a switch in a data center fabric.
  • method 500 may be partially or entirely performed by a processor of a switch, with access to a packet forwarding policy.
  • control plane congestion information is received.
  • a switch may receive this information relevant to the fabric congestion conditions.
  • the information may be sent from switching ASICs of various switches in the data center fabric, from switches connected directly to the receiving switch, from a configuration terminal (or some other central repository of congestion information, such as a server), or from some other external agent, as would be understood by one of skill in the art upon reading the present descriptions.
  • the congestion information is processed and, in operation 506, it is determined whether at least one fabric congestion criteria is met.
  • the at least one fabric congestion criteria may include receipt of back pressure from one or more low-latency switches downstream of the switch. In this way, if a low-latency switch is indicating congestion, traffic may be diverted from this switch until it is able to process the traffic that has already been forwarded to it.
  • If the at least one congestion criteria is met, a congestion flag is set in operation 508 and a packet forwarding policy is loaded in operation 512. After the packet forwarding policy is loaded, monitoring of congestion information continues in operation 514. If the at least one congestion criteria is not met, the congestion flag is removed in operation 510 and monitoring of congestion information continues in operation 514.
  • the processing of the congestion information in operation 504 may be implemented in a distributed manner. For example, processing of congestion information may be performed on an external device, a software entity, or some other processing facility capable of processing the congestion information. In this case, the external entity may communicate only the required portions of the congestion information to the switch.
  • the switch may be configured to modify the congestion criteria or upload policies dynamically, depending on its internal state and available resources.
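  • As a sketch of such distributed processing, an external collector might reduce raw congestion reports and push only the portion a given forwarding switch needs; the report format and the occupancy threshold below are invented for illustration.

```python
# Illustrative sketch of distributed congestion processing: an external entity
# filters raw per-neighbour occupancy reports and forwards only the entries a
# particular switch must react to.

def summarize_for_switch(switch_id: str, raw_reports: dict, threshold: float = 0.8) -> dict:
    """Keep only the downstream neighbours of switch_id whose occupancy meets the threshold."""
    neighbours = raw_reports.get(switch_id, {})
    return {nbr: occ for nbr, occ in neighbours.items() if occ >= threshold}

raw = {"switch-302": {"switch-304": 0.93, "switch-306": 0.12}}
print(summarize_for_switch("switch-302", raw))  # {'switch-304': 0.93} -> only the required portion
```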
  • FIG. 5 shows a binary implementation of the congestion logic. That is, congestion is determined as a Yes or No condition.
  • An alternative implementation may provide for multi-level congestion logic and/or tiered congestion logic, as described previously. Further, depending on the level of congestion, a different forwarding policy may be loaded and/or executed at runtime.
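  • A minimal sketch of this multi-level variant is shown below, assuming congestion reports arrive as a normalized queue occupancy; the thresholds, policy names, and return format are hypothetical.

```python
# Illustrative sketch of multi-level congestion logic: reports are reduced to a
# level, a congestion flag is set or cleared, and a level-specific forwarding
# policy is loaded. Thresholds and policy names are hypothetical.

from typing import Optional, Tuple

POLICIES = {
    "high": "divert-all-lossless-traffic",
    "medium": "divert-priority-3-and-above",
    "low": "divert-priority-6-and-above",
}

def congestion_level(queue_occupancy: float) -> Optional[str]:
    """Map a normalized occupancy report (0.0 to 1.0) to a congestion level."""
    if queue_occupancy >= 0.9:
        return "high"
    if queue_occupancy >= 0.7:
        return "medium"
    if queue_occupancy >= 0.5:
        return "low"
    return None  # no congestion criteria met

def update_congestion_state(queue_occupancy: float) -> Tuple[bool, Optional[str]]:
    """Return (congestion_flag, loaded_policy) for the reported occupancy."""
    level = congestion_level(queue_occupancy)
    if level is None:
        return False, None          # clear the congestion flag
    return True, POLICIES[level]    # set the flag and load the policy for this level

print(update_congestion_state(0.95))  # (True, 'divert-all-lossless-traffic')
print(update_congestion_state(0.30))  # (False, None)
```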
  • any or all operations of method 500 may be implemented in a system or a computer program product.
  • Referring now to FIG. 6, a flowchart of a method 600 for providing low latency switching to incoming traffic is shown, according to one embodiment.
  • the method 600 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 6 may be included in method 600, as would be understood by one of skill in the art upon reading the present descriptions.
  • each of the steps of the method 600 may be performed by any suitable component of the operating environment.
  • the method 600 may be partially or entirely performed by a switch in a data center fabric.
  • method 600 may be partially or entirely performed by a processor of a switch, with access to a packet forwarding policy.
  • method 600 may initiate with operation 602, where a packet is received at an ingress port of a switch.
  • the switch determines an egress port, such as by processing the packet and determining a destination address in a header of the packet, according to one embodiment.
  • the switch then determines whether that egress port is congested. If the egress port is not congested, the packet is forwarded to the egress port for forwarding further along in the fabric.
  • If the egress port is congested and it is determined that the packet should be dropped, the packet is dropped. If it is determined that the packet should not be dropped, the packet forwarding policy is applied in operation 610. In operation 612, it is determined whether the packet satisfies the policy. If not, the packet is dropped in operation 616.
  • If the packet satisfies the policy, the packet is forwarded to a buffered egress port in order to account for congestion in the fabric.
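  • The per-packet decision flow of method 600 might be summarized by the following sketch; the forwarding database contents, port names, priority thresholds, and drop test are hypothetical stand-ins for the state a real switching processor would maintain.

```python
# Sketch of the decision flow of FIG. 6: pick an egress port, use it directly
# if it is not congested, otherwise drop the packet or divert it to a buffered
# egress port according to a (hypothetical) forwarding policy.

def forward(pkt: dict, fdb: dict, congested_ports: set) -> str:
    """Return an egress port name, a buffered port name, or 'drop'."""
    egress = fdb.get(pkt["dst_mac"], "flood")  # determine the egress port from the packet header
    if egress not in congested_ports:
        return egress                          # egress not congested: forward along the low-latency path
    if pkt.get("priority", 0) == 0:
        return "drop"                          # congested, and best-effort traffic may be dropped
    if pkt.get("priority", 0) < 3:             # apply the packet forwarding policy (operation 610)
        return "drop"                          # packet does not satisfy the policy (operation 616)
    return "buffered-" + egress                # forward to a buffered egress port instead

fdb = {"00:11:22:33:44:55": "port-7"}
pkt = {"dst_mac": "00:11:22:33:44:55", "priority": 5}
print(forward(pkt, fdb, congested_ports={"port-7"}))  # 'buffered-port-7'
```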
  • any or all operations of methods 400, 500, and/or 600 may be implemented in a system or a computer program product.
  • a system may comprise a switch connected to a low-latency switch and a buffered switch.
  • the switch may comprise a processor adapted for executing logic (such as an ASIC), logic adapted for receiving a packet at an ingress port of a switch, logic adapted for receiving congestion information, logic adapted for determining that at least one congestion condition is met based on at least the congestion information, logic adapted for applying a packet forwarding policy to the packet when the at least one congestion condition is met, logic adapted for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and logic adapted for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
  • a computer program product for providing low latency packet forwarding with guaranteed delivery comprises a computer readable storage medium having computer readable program code embodied therewith.
  • the computer readable program code includes computer readable program code configured for receiving a packet at an ingress port of a switch, computer readable program code configured for determining that at least one congestion condition is met, computer readable program code configured for applying a packet forwarding policy to the packet when the at least one congestion condition is met, computer readable program code configured for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and computer readable program code configured for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

In one embodiment, a switch includes a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to receive a packet at an ingress port of the switch, forward the packet to a buffered switch when at least one congestion condition is met, where the buffered switch is configured to evaluate congestion conditions of a fabric network, and forward the packet to a low-latency switch when the at least one congestion condition is not met, where the low-latency switch includes an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network. Other switches, systems, methods, and computer program products for providing low latency packet forwarding with guaranteed delivery are described according to more embodiments.

Description

BACKGROUND
The present invention relates to data center infrastructure, and more particularly, this invention relates to utilizing a low-latency lossless switch fabric in a data center.
Low latency is a highly desirable feature for a data center switch fabric. For example, in high-frequency transactions, low latency allows applications to execute large volumes of orders, such as automated stock trades, in fractions of a second. Similarly, in real-time communications, such as video feeds, telemetry, etc., delays in processing information may be detrimental to user experience or efficient control of devices relying on the video feeds and/or telemetry.
An important problem for low latency switch fabric implementations is that they do not provide deep buffering, and hence packets are lost when the fabric is congested. That is, when a switch is not capable of forwarding a packet due to congestion conditions, the switch drops one or more packets, which causes a failure or significant delay of the transaction.
Existing solutions for lossless switches involve internal packet buffering. A buffered switch is configured to send all packets through a memory buffer to avoid packet loss. Unfortunately, this approach increases latency, because moving a packet into and then out of memory takes time. Accordingly, a solution that provides a low-latency lossless switch fabric in a data center would be beneficial.
SUMMARY
In one embodiment, a switch includes a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to receive a packet at an ingress port of the switch, forward the packet to a buffered switch when at least one congestion condition is met, where the buffered switch is configured to evaluate congestion conditions of a fabric network, and forward the packet to a low-latency switch when the at least one congestion condition is not met, where the low-latency switch includes an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
In another embodiment, a computer program product for providing low latency packet forwarding with guaranteed delivery includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured to receive a packet at an ingress port of a switch, computer readable program code configured to forward the packet to a buffered switch downstream of the switch when at least one congestion condition is met, where the buffered switch is configured to evaluate congestion conditions of a fabric network, and computer readable program code configured to forward the packet to a low-latency switch downstream of the switch when the at least one congestion condition is not met, where the low-latency switch includes an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
In yet another embodiment, a switch includes a processor and logic integrated with and/or executable by the processor. The logic is configured to cause the processor to receive a packet at an ingress port of the switch, receive congestion information, determine that at least one congestion condition is met based on at least the congestion information, apply a packet forwarding policy to the packet when the at least one congestion condition is met to determine where to forward the packet, determine whether the packet forwarding policy indicates to drop the packet and drop the packet when the packet forwarding policy indicates to drop the packet, forward the packet to a buffered switch downstream of the switch according to the packet forwarding policy when the at least one congestion condition is met, wherein the buffered switch is configured to evaluate congestion conditions of a fabric network, and forward the packet to a low-latency switch according to the packet forwarding policy when the at least one congestion condition is not met, wherein the low-latency switch includes an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
Other aspects and embodiments of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the drawings, illustrate by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
FIG. 1 illustrates a network architecture, in accordance with one embodiment.
FIG. 2 shows a representative hardware environment that may be associated with the servers and/or clients of FIG. 1, in accordance with one embodiment.
FIG. 3 is a simplified diagram of a low-latency lossless switch fabric configuration within a data center, according to one embodiment.
FIG. 4 is a flowchart of a method, according to one embodiment.
FIG. 5 is a flowchart of a method, according to another embodiment.
FIG. 6 is a flowchart of a method, according to yet another embodiment.
DETAILED DESCRIPTION
The following description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless otherwise specified.
According to various embodiments described herein, a data center fabric may be configured with a combination of low-latency and buffered switches. The low-latency switches may be provided with switching processors which have additional policy tables provided with forwarding decisions based on congestion of the fabric, which may be provided to the low-latency switches using a feedback mechanism. Depending on congestion conditions in the fabric, a forwarding switch may send packets either to a low-latency or a buffered switch. Further, according to one embodiment, in order to determine to which type of switch to forward the packet, or whether to drop the packet, the forwarding switch may apply packet-forwarding policies.
One advantage of this procedure is that the fabric configuration provides the best of both worlds: it has low latency and it enables lossless communications even while the fabric is congested. Another advantage is that the fabric may be easily configured to adapt to a wide variety of data center conditions and data applications.
In one general embodiment, a system includes a switch configured for communicating with a low-latency switch and a buffered switch, the switch having a processor adapted for executing logic, logic adapted for receiving a packet at an ingress port of a switch, logic adapted for receiving congestion information, logic adapted for determining that at least one congestion condition is met based on at least the congestion information, logic adapted for applying a packet forwarding policy to the packet when the at least one congestion condition is met, logic adapted for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and logic adapted for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
In another general embodiment, a computer program product for providing low latency packet forwarding with guaranteed delivery includes a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code including computer readable program code configured for receiving a packet at an ingress port of a switch, computer readable program code configured for determining that at least one congestion condition is met, computer readable program code configured for applying a packet forwarding policy to the packet when the at least one congestion condition is met, computer readable program code configured for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and computer readable program code configured for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
In yet another general embodiment, a method for providing low latency packet forwarding with guaranteed delivery includes receiving a packet at an ingress port of a switch, determining that at least one congestion condition is met, applying a packet forwarding policy to the packet when the at least one congestion condition is met, forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
According to another general embodiment, a method for providing low latency packet forwarding with guaranteed delivery includes receiving a packet at an ingress port of a switch, receiving congestion information from one or more downstream switches, determining that at least one congestion condition is met based on at least the congestion information, processing the packet to determine at least one property of the packet, applying a packet forwarding policy to the packet when the at least one congestion condition is met, wherein the at least one property of the packet is used to determine if the packet satisfies the packet forwarding policy, forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
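To make the decision flow of the general embodiments above concrete, a minimal Python sketch follows. It assumes a simple representation of congestion feedback and a caller-supplied policy test; the device labels and function names are illustrative and are not defined by the patent.

```python
# Minimal sketch of the general embodiment: a forwarding switch chooses between
# a downstream low-latency switch and a downstream buffered switch based on
# congestion feedback and a packet forwarding policy. All names are invented.

def handle_packet(pkt: dict, congestion_info: dict, policy_matches) -> str:
    """Return which downstream device should receive the packet, or 'drop'."""
    # Determine whether at least one congestion condition is met, e.g. back
    # pressure reported by a downstream low-latency switch.
    congested = any(congestion_info.values())
    if not congested:
        return "low-latency-switch"   # default: the most expedient path through the fabric
    if policy_matches(pkt):
        return "buffered-switch"      # lossless treatment while the fabric is congested
    return "drop"                     # the packet does not satisfy the forwarding policy

decision = handle_packet(
    {"priority": 5, "dst_mac": "00:11:22:33:44:55"},
    {"low-latency-switch": True},     # a downstream switch is signalling congestion
    policy_matches=lambda pkt: pkt["priority"] >= 3,
)
print(decision)  # 'buffered-switch'
```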
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “logic,” a “circuit,” “module,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the non-transitory computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a Blu-ray disc read-only memory (BD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a non-transitory computer readable storage medium may be any tangible medium that is capable of containing, or storing a program or application for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a non-transitory computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device, such as an electrical connection having one or more wires, an optical fibre, etc.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fibre cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer or server may be connected to the user's computer through any type of network, including a local area network (LAN), storage area network (SAN), and/or a wide area network (WAN), or the connection may be made to an external computer, for example through the Internet using an Internet Service Provider (ISP).
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to various embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that may direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
FIG. 1 illustrates a network architecture 100, in accordance with one embodiment. As shown in FIG. 1, a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106. A gateway 101 may be coupled between the remote networks 102 and a proximate network 108. In the context of the present network architecture 100, the networks 104, 106 may each take any form including, but not limited to a LAN, a WAN such as the Internet, public switched telephone network (PSTN), internal telephone network, etc.
In use, the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101, and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.
Further included is at least one data server 114 coupled to the proximate network 108, and which is accessible from the remote networks 102 via the gateway 101. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116. Such user devices 116 may include a desktop computer, laptop computer, handheld computer, printer, and/or any other type of logic-containing device. It should be noted that a user device 111 may also be directly coupled to any of the networks, in some embodiments.
A peripheral 120 or series of peripherals 120, e.g., facsimile machines, printers, scanners, hard disk drives, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.
According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.
In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a “cloud.” In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used, as known in the art.
FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1, in accordance with one embodiment. FIG. 2 illustrates a typical hardware configuration of a workstation having a central processing unit (CPU) 210, such as a microprocessor, and a number of other units interconnected via one or more buses 212 which may be of different types, such as a local bus, a parallel bus, a serial bus, etc., according to several embodiments.
The workstation shown in FIG. 2 includes a Random Access Memory (RAM) 214, Read Only Memory (ROM) 216, an I/O adapter 218 for connecting peripheral devices such as disk storage units 220 to the one or more buses 212, a user interface adapter 222 for connecting a keyboard 224, a mouse 226, a speaker 228, a microphone 232, and/or other user interface devices such as a touch screen, a digital camera (not shown), etc., to the one or more buses 212, a communication adapter 234 for connecting the workstation to a communication network 235 (e.g., a data processing network), and a display adapter 236 for connecting the one or more buses 212 to a display device 238.
The workstation may have resident thereon an operating system such as the MICROSOFT WINDOWS Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
Now referring to FIG. 3, a low-latency lossless switch fabric configuration 300 within a data center is shown according to one embodiment. The switch fabric configuration 300 comprises a data center fabric 318 and various switches. Switch 302 is adapted for receiving incoming traffic 310. The incoming traffic 310 may be received from any source, such as another switch, a router, a traffic source (like a communications device, a mainframe, a server, etc.). The switch 302 is adapted for forwarding the received traffic 310 (as data payload packets 316) to either low-latency switch 304 or buffered switch 306. Switches 304 and 306 are adapted for forwarding the data payload packets 316 to a second low-latency switch 308. All switches may be implemented as physical switches, virtual switches, or a combination thereof.
For physical switch implementations, each physical switch may include a switching processor 320, such as a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a microprocessor, a microcontroller, a central processing unit (CPU), or some other processor known in the art.
For virtual switch implementations, a processor of the server supporting the virtual switch may provide the switching functionality, as known in the art.
Referring again to FIG. 3, switch 302 is also adapted for receiving flow control information 312 from switches 304 and 306. The flow control information 312 may be sent/received in any suitable format, such as control packets, priority-based flow control (PFC), enhanced transmission selection (ETS), quantized congestion notification (QCN), Institute of Electrical and Electronics Engineers (IEEE) 802.3x, etc. Depending on the flow control information 312 received, switch 302 determines whether congestion conditions exist in the data center fabric 318. For example, when low-latency switch 304 is congested, switch 302, instead of dropping one or more packets, forwards the one or more packets to buffered switch 306. Buffered switch 306 is also enabled to evaluate congestion conditions on the data center fabric 318 and, depending on the conditions, is adapted for forwarding the packet(s) to the second low-latency switch 308. As a result, the data center fabric 318 is adapted for selecting a path of least latency available at any given time.
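Purely for illustration, and not as part of the disclosed embodiments, the congestion evaluation described above might be sketched in C as follows; the structure, field names, and queue threshold are hypothetical placeholders for whatever state the switching processor 320 actually maintains from the flow control information 312.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-downstream-port state, refreshed whenever flow control
 * information 312 (e.g., a PFC pause frame or a QCN notification) arrives
 * from low-latency switch 304 or buffered switch 306. */
struct downstream_state {
    bool     pfc_paused[8];      /* per-priority pause state from 802.1Qbb PFC */
    uint32_t qcn_feedback;       /* most recent quantized congestion feedback  */
    uint32_t tx_queue_depth;     /* reported transmit queue occupancy          */
    uint32_t tx_queue_threshold; /* occupancy regarded as congested            */
};

/* Returns true when the flow control information indicates that the
 * downstream low-latency path should be avoided for the given priority. */
static bool path_is_congested(const struct downstream_state *s, unsigned priority)
{
    if (priority < 8 && s->pfc_paused[priority])
        return true;                                   /* explicit back pressure  */
    if (s->qcn_feedback > 0)
        return true;                                   /* congestion notification */
    return s->tx_queue_depth >= s->tx_queue_threshold; /* queue build-up          */
}
```

Under this reading, switch 302 consults such per-port state packet by packet and, when the low-latency path is marked congested, diverts traffic to buffered switch 306 rather than dropping it.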
According to various embodiments, switch 302 has access to packet forwarding policy 314. In one approach, a physical switch may include the packet forwarding policy. In an alternative approach, a server hosting a virtual switch may comprise the packet forwarding policy. The packet forwarding policy 314 comprises criteria for forwarding packets in congestion conditions along with one or more alternative ports.
For example, the criteria may include packet priority, a destination identifier, e.g., an IP address, a media access control (MAC) address, etc., a traffic flow identifier, e.g., a combination of source and destination addresses, a packet size, a packet latency, virtual local area network (VLAN) tag(s), and/or other related parameters. The alternative port may be a physical port, a logical interface, a link aggregation group (LAG), a virtual port, etc.
In other words, one or more properties of the packet may be determined and used in the packet forwarding policy to determine if the packet satisfies the packet forwarding policy. The property of the packet may include any of the following: a packet priority, a destination application identifier, a source address, a destination address, a packet size, a VLAN identifier, and/or an acceptable latency for the packet.
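As an illustrative sketch only, the packet forwarding policy 314 and the packet-property check could be represented as below; the field names, wildcard conventions, and latency-budget comparison are assumptions and are not the patent's definition of the policy.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical policy entry: criteria a packet must satisfy under congestion,
 * plus the alternative (buffered) port to use instead of dropping. */
struct policy_entry {
    uint8_t  min_priority;          /* lossless treatment only at/above this priority */
    uint32_t dst_ip;                /* 0 = any destination                             */
    uint16_t vlan_id;               /* 0 = any VLAN                                    */
    uint32_t max_packet_bytes;      /* larger packets are not diverted                 */
    uint32_t min_latency_budget_us; /* packet must tolerate at least this much delay   */
    uint16_t alternative_port;      /* buffered egress port (physical, LAG, or virtual)*/
};

/* Hypothetical per-packet properties extracted during the policy check. */
struct packet_meta {
    uint8_t  priority;
    uint32_t dst_ip;
    uint16_t vlan_id;
    uint32_t length;
    uint32_t latency_budget_us;     /* acceptable latency for the packet */
};

/* Returns true when the packet satisfies the policy entry and may be
 * redirected to the buffered switch instead of being dropped. */
static bool packet_matches_policy(const struct packet_meta *p,
                                  const struct policy_entry *e)
{
    if (p->priority < e->min_priority)                   return false;
    if (e->dst_ip != 0 && p->dst_ip != e->dst_ip)        return false;
    if (e->vlan_id != 0 && p->vlan_id != e->vlan_id)     return false;
    if (p->length > e->max_packet_bytes)                 return false;
    if (p->latency_budget_us < e->min_latency_budget_us) return false;
    return true;
}
```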
Now referring to FIG. 4, a simplified flow chart of a method 400 is shown according to one embodiment. The method 400 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 4 may be included in method 400, as would be understood by one of skill in the art upon reading the present descriptions.
Each of the steps of the method 400 may be performed by any suitable component of the operating environment. For example, in one embodiment, the method 400 may be partially or entirely performed by a switch in a data center fabric. Particularly, method 400 may be partially or entirely performed by a processor of a switch, with access to packet forwarding policy.
First, as shown in operation 402, a packet of incoming traffic is received (such as at a switch in the data center fabric). In operation 404, it is determined whether at least one congestion condition is met. This determination may be made by a processor of a switch, in one embodiment, such as an ASIC, a microcontroller, an FPGA, etc.
In one embodiment, the at least one congestion condition may include receipt of back pressure from one or more low-latency switches downstream of the switch. In this way, if a low-latency switch is indicating congestion, traffic may be diverted from that switch until it is able to process the traffic that has already been forwarded to it.
According to various embodiments, the at least one congestion condition may be binary (Yes/No), multi-step, tiered, etc. That is, a multi-step condition may include various levels of congestion criteria in the fabric (e.g., high, medium, low). A tiered condition may include categories, each category including one or more forwarding procedures. For example, different types of packets may be categorized and dealt with differently in the forwarding policy. Depending on the level of congestion, a default action may be adjusted to best handle a run-time situation.
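One way to picture a multi-step or tiered condition, purely as a sketch under an assumed set of congestion levels and an assumed priority threshold, is a mapping from congestion level and packet category to a default action:

```c
/* Hypothetical multi-level congestion logic: the congestion level and the
 * packet's category (here simply its priority) select a default action. */
enum congestion_level { CONG_NONE, CONG_LOW, CONG_MEDIUM, CONG_HIGH };

enum forward_action { FWD_LOW_LATENCY, FWD_APPLY_POLICY, FWD_BUFFERED, FWD_DROP };

static enum forward_action default_action(enum congestion_level level,
                                          unsigned packet_priority)
{
    switch (level) {
    case CONG_NONE:                          /* no congestion: expedient default  */
        return FWD_LOW_LATENCY;
    case CONG_LOW:                           /* only high priority stays on the   */
        return packet_priority >= 4          /* low-latency path                  */
               ? FWD_LOW_LATENCY : FWD_APPLY_POLICY;
    case CONG_MEDIUM:                        /* consult the policy for all traffic */
        return FWD_APPLY_POLICY;
    case CONG_HIGH:                          /* protect lossless classes, shed    */
        return packet_priority >= 4          /* the rest                          */
               ? FWD_BUFFERED : FWD_DROP;
    }
    return FWD_LOW_LATENCY;                  /* unreachable; keeps compilers quiet */
}
```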
If the at least one congestion condition is not met, the packet is forwarded to a low-latency switch in operation 414. This may be a default action in some approaches as it allows traffic to proceed through the data center fabric in a most expedient manner.
If the at least one congestion condition is met, a packet forwarding policy is applied in operation 406 to determine how to forward the packet. Application of the packet forwarding policy in operation 406 involves determining relevant attributes of the packet. For example, if the policy indicates that lossless treatment is to be provided to packets with a certain priority, priority information is extracted from the packet. Other parameters of the packet, whether present in the packet or calculated algorithmically, may also be extracted for use in subsequent comparisons and determinations.
The packet forwarding policy may indicate dropping the packet in one or more scenarios, as shown in operation 408. For example, in one approach, if the packet does not satisfy policy criteria, the packet may be dropped, as shown in operation 412.
If the packet satisfies the packet forwarding policy and it is not dropped, the packet is forwarded to a buffered switch in operation 410. The decision whether to drop the packet or forward the packet to the buffered switch may be made based on a calculated fit between a value extracted in operation 406 and a specification in the packet forwarding policy, according to one embodiment.
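Putting operations 402-414 together, and reusing the hypothetical packet_matches_policy() and types from the policy sketch above, method 400 could be condensed into a single decision function; this is an illustrative reading, not the patented implementation.

```c
enum verdict { TO_LOW_LATENCY, TO_BUFFERED, DROPPED };

/* Hypothetical condensation of method 400 for one received packet. */
static enum verdict handle_packet_400(const struct packet_meta *pkt,
                                      const struct policy_entry *policy,
                                      bool congestion_condition_met)
{
    /* Operations 404/414: no congestion -> default low-latency path. */
    if (!congestion_condition_met)
        return TO_LOW_LATENCY;

    /* Operations 406-412: congestion -> apply the packet forwarding policy;
     * packets that do not satisfy the policy are dropped. */
    if (!packet_matches_policy(pkt, policy))
        return DROPPED;

    /* Operation 410: the packet satisfies the policy -> buffered switch. */
    return TO_BUFFERED;
}
```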
Standard flow control protocols which may trigger this mechanism include IEEE 802.1Qbb Priority-based Flow Control (PFC), IEEE 802.1Qaz Enhanced Transmission Selection (ETS), Quantized Congestion Notification (QCN), or any other standard flow control according to IEEE 802.3x.
In more embodiments, referring again to FIG. 4, any or all operations of method 400 may be implemented in a system or a computer program product.
FIG. 5 shows a simplified flow chart of control logic that may be used in conjunction with operation 404 of method 400 shown in FIG. 4. The method 500 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 5 may be included in method 500, as would be understood by one of skill in the art upon reading the present descriptions.
Each of the steps of the method 500 may be performed by any suitable component of the operating environment. For example, in one embodiment, the method 500 may be partially or entirely performed by a switch in a data center fabric. Particularly, method 500 may be partially or entirely performed by a processor of a switch, with access to packet forwarding policy.
In operation 502, control plane congestion information is received. According to one embodiment, a switch may receive this information relevant to the fabric congestion conditions. The information may be sent from switches connected directly to the receiving switch, from a configuration terminal (or some other central repository of congestion information, such as a server), or some other external agent, as would be understood by one of skill in the art upon reading the present descriptions. According to one embodiment, switching ASICs (from various switches in the data center fabric) may derive the congestion or flow control information from standard flow control protocols and may check their transmit queue level thresholds in order to obtain the control plane congestion information to send to the switch.
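A minimal sketch of that derivation, assuming a made-up report layout and a single queue-level threshold, might look like the following; none of these names come from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical control plane congestion report built by a switching ASIC
 * from its transmit queue levels; the layout is illustrative only. */
struct congestion_report {
    uint16_t switch_id;      /* reporting switch                            */
    uint8_t  priority;       /* traffic class the report applies to         */
    bool     congested;      /* transmit queue above its level threshold    */
    uint32_t queue_depth;    /* raw occupancy, usable for multi-level logic */
};

static struct congestion_report build_report(uint16_t switch_id, uint8_t prio,
                                             uint32_t queue_depth,
                                             uint32_t level_threshold)
{
    struct congestion_report r = {
        .switch_id   = switch_id,
        .priority    = prio,
        .congested   = queue_depth >= level_threshold,
        .queue_depth = queue_depth,
    };
    return r;   /* sent upstream as control plane congestion information */
}
```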
In operation 504, the congestion information is processed, and in operation 506, it is determined whether at least one fabric congestion criterion is met.
In one embodiment, the at least one fabric congestion criterion may include receipt of back pressure from one or more low-latency switches downstream of the switch. In this way, if a low-latency switch is indicating congestion, traffic may be diverted from that switch until it is able to process the traffic that has already been forwarded to it.
If the at least one criterion is met, a congestion flag is set in operation 508 and a packet forwarding policy is loaded in operation 512. After the packet forwarding policy is loaded, monitoring of congestion information continues in operation 514. If the at least one congestion criterion is not met, the congestion flag is cleared in operation 510 and monitoring of congestion information continues in operation 514.
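Operations 502-514 can be pictured as a monitoring step that is invoked repeatedly. In this sketch, receive_congestion_info(), fabric_criterion_met(), and load_packet_forwarding_policy() are assumed placeholders for implementation-specific functionality, not functions defined by the patent.

```c
#include <stdbool.h>

struct congestion_info { int level; };               /* placeholder payload */

/* Placeholders for implementation-specific functionality, defined elsewhere. */
struct congestion_info receive_congestion_info(void);
bool fabric_criterion_met(const struct congestion_info *info);
void load_packet_forwarding_policy(void);

struct control_state {
    bool congestion_flag;   /* set in operation 508, cleared in operation 510   */
    bool policy_loaded;     /* packet forwarding policy loaded in operation 512 */
};

/* One pass of the FIG. 5 control logic; the caller invokes it again to keep
 * monitoring congestion information (operation 514). */
void monitor_congestion_once(struct control_state *st)
{
    struct congestion_info info = receive_congestion_info();   /* operation 502      */
    bool met = fabric_criterion_met(&info);                     /* operations 504-506 */

    if (met) {
        st->congestion_flag = true;                             /* operation 508 */
        if (!st->policy_loaded) {
            load_packet_forwarding_policy();                    /* operation 512 */
            st->policy_loaded = true;
        }
    } else {
        st->congestion_flag = false;                            /* operation 510 */
    }
}
```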
The processing of the congestion information in operation 504 may be implemented in a distributed manner. For example, the processing may be performed on an external device, a software entity, or some other facility capable of processing the congestion information. In that case, the external entity may communicate only the required portions of the congestion information to the switch.
Further, the switch may be configured to modify the congestion criteria or upload policies dynamically, depending on its internal state and available resources.
FIG. 5 shows a binary implementation of the congestion logic; that is, congestion is determined as a Yes or No condition. An alternative implementation may provide for multi-level congestion logic and/or tiered congestion logic, as described previously. Further, depending on the level of congestion, a different forwarding policy may be loaded and/or executed at run time.
In more embodiments, referring again to FIG. 5, any or all operations of method 500 may be implemented in a system or a computer program product.
Now referring to FIG. 6, a flowchart of a method 600 for providing low latency switching to incoming traffic is shown, according to one embodiment. The method 600 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 6 may be included in method 600, as would be understood by one of skill in the art upon reading the present descriptions.
Each of the steps of the method 600 may be performed by any suitable component of the operating environment. For example, in one embodiment, the method 600 may be partially or entirely performed by a switch in a data center fabric. Particularly, method 600 may be partially or entirely performed by a processor of a switch, with access to packet forwarding policy.
As shown in FIG. 6, method 600 may initiate with operation 602, where a packet is received at an ingress port of a switch. In operation 604, the switch determines an egress port, such as by processing the packet and determining a destination address in a header of the packet, according to one embodiment.
In operation 606, the switch determines whether the determined egress port is congested. If the egress port is not congested, the packet is forwarded to the egress port and continues further along the fabric.
In one embodiment, it may be determined that the egress port is congested when back pressure is received from one or more low-latency switches downstream of the egress port.
If the egress port is congested, it is further determined in operation 608 whether the packet should be dropped; if so, the packet is dropped in operation 616. If it is determined that the packet should not be dropped, the packet forwarding policy is applied in operation 610. In operation 612, it is determined whether the packet satisfies the policy; if not, the packet is dropped in operation 616.
If the packet satisfies the policy, in operation 614, the packet is forwarded to a buffered egress port, in order to account for congestion in the fabric.
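Read end to end, and reusing the hypothetical packet_matches_policy() and types from the policy sketch above, method 600 could be condensed roughly as follows; lookup_egress_port(), egress_port_congested(), and policy_indicates_drop() are assumed helpers, not names from the patent.

```c
#include <stdbool.h>

/* Assumed helpers, defined elsewhere in this illustrative sketch. */
int  lookup_egress_port(const struct packet_meta *pkt);        /* operation 604 */
bool egress_port_congested(int egress_port);                   /* operation 606 */
bool policy_indicates_drop(const struct packet_meta *pkt);     /* operation 608 */

enum egress_verdict { OUT_EGRESS, OUT_BUFFERED_EGRESS, OUT_DROPPED };

/* Hypothetical condensation of method 600 for one packet received at an
 * ingress port (operation 602). */
static enum egress_verdict handle_packet_600(const struct packet_meta *pkt,
                                             const struct policy_entry *policy)
{
    int egress = lookup_egress_port(pkt);          /* operation 604      */

    if (!egress_port_congested(egress))            /* operation 606      */
        return OUT_EGRESS;                         /* uncongested path   */

    if (policy_indicates_drop(pkt))                /* operation 608      */
        return OUT_DROPPED;                        /* operation 616      */

    if (!packet_matches_policy(pkt, policy))       /* operations 610-612 */
        return OUT_DROPPED;                        /* operation 616      */

    return OUT_BUFFERED_EGRESS;                    /* operation 614      */
}
```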
In more embodiments, referring again to FIGS. 4-6, any or all operations of methods 400, 500, and/or 600 may be implemented in a system or a computer program product.
For example, in one embodiment, a system may comprise a switch connected to a low-latency switch and a buffered switch. The switch may comprise a processor adapted for executing logic (such as an ASIC), logic adapted for receiving a packet at an ingress port of a switch, logic adapted for receiving congestion information, logic adapted for determining that at least one congestion condition is met based on at least the congestion information, logic adapted for applying a packet forwarding policy to the packet when the at least one congestion condition is met, logic adapted for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and logic adapted for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
In another example, a computer program product for providing low latency packet forwarding with guaranteed delivery comprises a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured for receiving a packet at an ingress port of a switch, computer readable program code configured for determining that at least one congestion condition is met, computer readable program code configured for applying a packet forwarding policy to the packet when the at least one congestion condition is met, computer readable program code configured for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and computer readable program code configured for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of an embodiment of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (20)

What is claimed is:
1. A switch comprising a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to:
receive a packet at an ingress port of the switch;
forward the packet to a buffered switch when at least one congestion condition is met, wherein the buffered switch is configured to evaluate congestion conditions of a fabric network; and
forward the packet to a low-latency switch when the at least one congestion condition is not met, wherein the low-latency switch comprises an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
2. The switch as recited in claim 1, wherein the logic is further configured to cause the processor to:
receive congestion information;
determine that at least one congestion condition is met based on at least the congestion information; and
apply a packet forwarding policy to the packet when the at least one congestion condition is met to determine where to forward the packet.
3. The switch as recited in claim 2, wherein the logic is further configured to cause the processor to:
determine whether the packet forwarding policy indicates to drop the packet; and
drop the packet when the packet forwarding policy indicates to drop the packet.
4. The switch as recited in claim 2, wherein the at least one congestion condition comprises receipt of back pressure from one or more low-latency switches downstream of the switch.
5. The switch as recited in claim 4, wherein the logic is further configured to cause the processor to:
divert traffic from any low-latency switch indicating congestion until the low-latency switch is able to process the traffic already forwarded to the low-latency switch.
6. The switch as recited in claim 2, wherein the logic is further configured to cause the processor to:
process the packet to determine at least one property of the packet; and
use the at least one property of the packet to determine whether the packet satisfies the packet forwarding policy.
7. The switch as recited in claim 2, wherein the at least one property of the packet comprises one or more of: a packet priority, a destination application identifier, a source address, a destination address, a packet size, a virtual local area network (VLAN) identifier, and an acceptable latency for the packet.
8. The switch as recited in claim 7, wherein the packet forwarding policy is a multi-stage policy which takes into account the at least one property of the packet.
9. A computer program product for providing low latency packet forwarding with guaranteed delivery, the computer program product comprising a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising:
computer readable program code configured to receive a packet at an ingress port of a switch;
computer readable program code configured to forward the packet to a buffered switch downstream of the switch when at least one congestion condition is met, wherein the buffered switch is configured to evaluate congestion conditions of a fabric network; and
computer readable program code configured to forward the packet to a low-latency switch downstream of the switch when the at least one congestion condition is not met, wherein the low-latency switch comprises an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
10. The computer program product as recited in claim 9, wherein the computer readable program code further comprises:
computer readable program code configured to receive congestion information at the switch;
computer readable program code configured to determine that at least one congestion condition is met based on at least the congestion information; and
computer readable program code configured to apply a packet forwarding policy to the packet using the switch, when the at least one congestion condition is met, to determine where to forward the packet.
11. The computer program product as recited in claim 10, wherein the computer readable program code further comprises:
computer readable program code configured to determine whether the packet forwarding policy indicates to drop the packet; and
computer readable program code configured to drop the packet, using the switch, when the packet forwarding policy indicates to drop the packet.
12. The computer program product as recited in claim 10, wherein the at least one congestion condition comprises receipt of back pressure, at the switch, from one or more low-latency switches downstream of the switch.
13. The computer program product as recited in claim 12, wherein the computer readable program code further comprises:
computer readable program code configured to divert traffic from any low-latency switch indicating congestion until the low-latency switch is able to process the traffic already forwarded to the low-latency switch.
14. The computer program product as recited in claim 10, wherein the computer readable program code further comprises:
computer readable program code configured to process the packet to determine at least one property of the packet; and
computer readable program code configured to use the at least one property of the packet to determine whether the packet satisfies the packet forwarding policy.
15. The computer program product as recited in claim 14, wherein the at least one property of the packet comprises one or more of: a packet priority, a destination application identifier, a source address, a destination address, a packet size, a virtual local area network (VLAN) identifier, and an acceptable latency for the packet.
16. The computer program product as recited in claim 10, wherein the packet forwarding policy is a multi-stage policy which takes into account the at least one property of the packet.
17. A switch, comprising a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to:
receive a packet at an ingress port of the switch;
receive congestion information;
determine that at least one congestion condition is met based on at least the congestion information;
apply a packet forwarding policy to the packet when the at least one congestion condition is met to determine where to forward the packet;
determine whether the packet forwarding policy indicates to drop the packet and drop the packet when the packet forwarding policy indicates to drop the packet;
forward the packet to a buffered switch downstream of the switch according to the packet forwarding policy when the at least one congestion condition is met, wherein the buffered switch is configured to evaluate congestion conditions of a fabric network; and
forward the packet to a low-latency switch according to the packet forwarding policy when the at least one congestion condition is not met, wherein the low-latency switch comprises an additional policy table provided with forwarding decisions based on the congestion conditions of the fabric network.
18. The switch as recited in claim 17, wherein the at least one congestion condition comprises receipt of back pressure from one or more low-latency switches downstream of the switch, and wherein the logic is further configured to cause the processor to:
divert traffic from any low-latency switch indicating congestion until the low-latency switch is able to process the traffic already forwarded to the low-latency switch.
19. The switch as recited in claim 17, wherein the logic is further configured to cause the processor to:
process the packet to determine at least one property of the packet; and
use the at least one property of the packet to determine whether the packet satisfies the packet forwarding policy.
20. The switch as recited in claim 17, wherein the congestion information is received by the switch according to at least one of: 802.1Qbb—Priority Based Flow Control (PFC), 802.1az—Enhanced Transmission Selection (ETS), Quantized Congestion Notification (QCN), and regular flow control according to IEEE 802.3X.
US14/656,575 2013-01-14 2015-03-12 Low-latency lossless switch fabric for use in a data center Active US9270600B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/656,575 US9270600B2 (en) 2013-01-14 2015-03-12 Low-latency lossless switch fabric for use in a data center

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/741,346 US9014005B2 (en) 2013-01-14 2013-01-14 Low-latency lossless switch fabric for use in a data center
US14/656,575 US9270600B2 (en) 2013-01-14 2015-03-12 Low-latency lossless switch fabric for use in a data center

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/741,346 Continuation US9014005B2 (en) 2013-01-14 2013-01-14 Low-latency lossless switch fabric for use in a data center

Publications (2)

Publication Number Publication Date
US20150188821A1 US20150188821A1 (en) 2015-07-02
US9270600B2 true US9270600B2 (en) 2016-02-23

Family

ID=51165026

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/741,346 Active 2033-03-09 US9014005B2 (en) 2013-01-14 2013-01-14 Low-latency lossless switch fabric for use in a data center
US14/656,575 Active US9270600B2 (en) 2013-01-14 2015-03-12 Low-latency lossless switch fabric for use in a data center

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/741,346 Active 2033-03-09 US9014005B2 (en) 2013-01-14 2013-01-14 Low-latency lossless switch fabric for use in a data center

Country Status (4)

Country Link
US (2) US9014005B2 (en)
CN (1) CN105229976B (en)
DE (1) DE112013006417B4 (en)
WO (1) WO2014108773A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014063550A (en) 2012-09-21 2014-04-10 International Business Maschines Corporation Device, method and program for controlling data writing to tape recorder
US9014005B2 (en) 2013-01-14 2015-04-21 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Low-latency lossless switch fabric for use in a data center
US10341224B2 (en) * 2013-01-25 2019-07-02 Dell Products L.P. Layer-3 flow control information routing system
JP6089940B2 (en) * 2013-05-08 2017-03-08 富士通株式会社 Failure determination program, apparatus, system, and method
CN105306382B (en) * 2014-07-28 2019-06-11 华为技术有限公司 It is a kind of without caching NOC data processing method and NOC electronic component
US10355999B2 (en) * 2015-09-23 2019-07-16 Cisco Technology, Inc. Flow control with network named fragments
US10536379B2 (en) * 2017-09-28 2020-01-14 Argela Yazilim Ve Bilisim Teknolojileri San Ve Tic. A.S. System and method for control traffic reduction between SDN controller and switch
US11057305B2 (en) * 2018-10-27 2021-07-06 Cisco Technology, Inc. Congestion notification reporting for a responsive network
US10630554B1 (en) * 2018-10-29 2020-04-21 International Business Machines Corporation Input/output (I/O) performance of hosts through bi-directional bandwidth feedback optimization
US11171884B2 (en) * 2019-03-13 2021-11-09 Mellanox Technologies Tlv Ltd. Efficient memory utilization and egress queue fairness
US11848837B2 (en) * 2021-10-19 2023-12-19 Mellanox Technologies, Ltd. Network telemetry based on application-level information
CN115208842B (en) * 2022-07-29 2024-05-14 苏州特思恩科技有限公司 Use method of low-delay device based on 10G Ethernet

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000244506A (en) 1999-02-18 2000-09-08 Hitachi Ltd Network having duplex transmission line
US20050147032A1 (en) 2003-12-22 2005-07-07 Lyon Norman A. Apportionment of traffic management functions between devices in packet-based communication networks
US6925257B2 (en) 2000-02-29 2005-08-02 The Regents Of The University Of California Ultra-low latency multi-protocol optical routers for the next generation internet
CN1863198A (en) 2005-09-01 2006-11-15 华为技术有限公司 Apparatus and method of real-time recovering service
US20080138067A1 (en) 2006-12-12 2008-06-12 Maged E Beshai Network with a Fast-Switching Optical Core
US20080259798A1 (en) 2007-04-19 2008-10-23 Fulcrum Microsystems Inc. Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
US20090300209A1 (en) 2008-06-03 2009-12-03 Uri Elzur Method and system for path based network congestion management
CN101605102A (en) 2009-07-16 2009-12-16 杭州华三通信技术有限公司 Load sharing method during a kind of IRF piles up and device
US7729259B1 (en) 2004-01-20 2010-06-01 Cisco Technology, Inc. Reducing latency jitter in a store-and-forward buffer for mixed-priority traffic
US20100316049A1 (en) 2009-06-12 2010-12-16 Wael William Diab Method and system for energy-efficiency-based packet classification
CN102355421A (en) 2011-10-12 2012-02-15 华为技术有限公司 Method for handling LSP (Label Switched Path) network congestion, device and system
US8554943B1 (en) 2006-03-31 2013-10-08 Emc Corporation Method and system for reducing packet latency in networks with both low latency and high bandwidths requirements
US20140198638A1 (en) 2013-01-14 2014-07-17 International Business Machines Corporation Low-latency lossless switch fabric for use in a data center

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7061929B1 (en) 2000-03-31 2006-06-13 Sun Microsystems, Inc. Data network with independent transmission channels
US20070067487A1 (en) * 2001-10-04 2007-03-22 Newnew Networks Innovations Limited Communications node
US20040264369A1 (en) * 2003-03-11 2004-12-30 Interactic Holdings, Llc Scalable network for computing and data storage management
US20060056308A1 (en) 2004-05-28 2006-03-16 International Business Machines Corporation Method of switching fabric for counteracting a saturation tree occurring in a network with nodes
US7720377B2 (en) * 2006-01-23 2010-05-18 Hewlett-Packard Development Company, L.P. Compute clusters employing photonic interconnections for transmitting optical signals between compute cluster nodes
US8265071B2 (en) * 2008-09-11 2012-09-11 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US9660940B2 (en) * 2010-12-01 2017-05-23 Juniper Networks, Inc. Methods and apparatus for flow control associated with a switch fabric

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000244506A (en) 1999-02-18 2000-09-08 Hitachi Ltd Network having duplex transmission line
US6925257B2 (en) 2000-02-29 2005-08-02 The Regents Of The University Of California Ultra-low latency multi-protocol optical routers for the next generation internet
US20050147032A1 (en) 2003-12-22 2005-07-07 Lyon Norman A. Apportionment of traffic management functions between devices in packet-based communication networks
US7729259B1 (en) 2004-01-20 2010-06-01 Cisco Technology, Inc. Reducing latency jitter in a store-and-forward buffer for mixed-priority traffic
CN1863198A (en) 2005-09-01 2006-11-15 华为技术有限公司 Apparatus and method of real-time recovering service
US8554943B1 (en) 2006-03-31 2013-10-08 Emc Corporation Method and system for reducing packet latency in networks with both low latency and high bandwidths requirements
US20080138067A1 (en) 2006-12-12 2008-06-12 Maged E Beshai Network with a Fast-Switching Optical Core
US20080259798A1 (en) 2007-04-19 2008-10-23 Fulcrum Microsystems Inc. Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
US20090300209A1 (en) 2008-06-03 2009-12-03 Uri Elzur Method and system for path based network congestion management
US20100316049A1 (en) 2009-06-12 2010-12-16 Wael William Diab Method and system for energy-efficiency-based packet classification
CN101605102A (en) 2009-07-16 2009-12-16 杭州华三通信技术有限公司 Load sharing method during a kind of IRF piles up and device
CN102355421A (en) 2011-10-12 2012-02-15 华为技术有限公司 Method for handling LSP (Label Switched Path) network congestion, device and system
US20140198638A1 (en) 2013-01-14 2014-07-17 International Business Machines Corporation Low-latency lossless switch fabric for use in a data center
US9014005B2 (en) 2013-01-14 2015-04-21 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Low-latency lossless switch fabric for use in a data center

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
Campbell et al., U.S. Appl. No. 13/741,346, filed Jan. 14, 2013.
First Notice Informing the Applicant of the Communication of International Application from PCT Application No. PCT/IB2013/060799 dated Aug. 14, 2014.
International Search Report and Written Opinion from PCT Application No. PCT/IB2013/060799, dated May 8, 2014.
Non-Final Office Action from U.S. Appl. No. 13/741,346, filed Aug. 21, 2014.
Notice of Allowance from U.S. Appl. No. 13/741,346, filed Dec. 15, 2014.
Notification Concerning Availability of the Publication of the International Application from PCT Application No. PCT/IB2013/060799 dated Jul. 17, 2014.
Notification Concerning Submission, Obtention or Transmittal of Priority Document from PCT Application No. PCT/IB2013/060799 dated Jan. 20, 2014.
Notification of Receipt of Record, Notification of the International Application No. and of the International Filing Date, and Notification Concerning Payment of Prescribed Fees from PCT Application No. PCT/IB2013/060799 dated Jan. 17, 2014.
Notification of the International Application to Enter the Chinese National Phase on Chinese Application No. 201380074543.1, dated Sep. 25, 2015.
Notification of the Recording of a Change from PCT Application No. PCT/IB2013/060799, dated Feb. 5, 2015.
Notification of the Recording of a Change from PCT Application No. PCT/IB2013/060799, dated Feb. 6, 2015.

Also Published As

Publication number Publication date
US9014005B2 (en) 2015-04-21
DE112013006417B4 (en) 2023-04-27
CN105229976A (en) 2016-01-06
US20140198638A1 (en) 2014-07-17
CN105229976B (en) 2018-11-09
DE112013006417T5 (en) 2015-10-15
US20150188821A1 (en) 2015-07-02
WO2014108773A1 (en) 2014-07-17

Similar Documents

Publication Publication Date Title
US9270600B2 (en) Low-latency lossless switch fabric for use in a data center
US9800502B2 (en) Quantized congestion notification for computing environments
US11321271B2 (en) Host based non-volatile memory clustering mechanism using network mapped storage
US11036529B2 (en) Network policy implementation with multiple interfaces
US9462084B2 (en) Parallel processing of service functions in service function chains
US10177936B2 (en) Quality of service (QoS) for multi-tenant-aware overlay virtual networks
US9667653B2 (en) Context-aware network service policy management
US10412005B2 (en) Exploiting underlay network link redundancy for overlay networks
US9419900B2 (en) Multi-bit indicator set according to feedback based on an equilibrium length of a queue
US20170005933A1 (en) Machine for smoothing and/or polishing slabs of stone material, such as natural or agglomerated stone, ceramic and glass
US9571410B2 (en) Credit-based link level flow control and credit exchange using DCBX
US9699063B2 (en) Transitioning a routing switch device between network protocols
US20210359952A1 (en) Technologies for protocol-agnostic network packet segmentation
US8953607B2 (en) Internet group membership protocol group membership synchronization in virtual link aggregation
US20140204938A1 (en) Multicast route entry synchronization
US9374308B2 (en) Openflow switch mode transition processing
GB2532208A (en) Host network controller
EP3491792B1 (en) Deliver an ingress packet to a queue at a gateway device
US11194636B2 (en) Technologies for generating triggered conditional events

Legal Events

Date Code Title Description
AS Assignment

Owner name: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.,

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAMPBELL, ALEXANDER P.;KAMBLE, KESHAV G.;PANDEY, VIJOY A.;SIGNING DATES FROM 20130109 TO 20130111;REEL/FRAME:036884/0377

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: LENOVO INTERNATIONAL LIMITED, HONG KONG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.;REEL/FRAME:038483/0940

Effective date: 20160505

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: LENOVO INTERNATIONAL LIMITED, HONG KONG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE LTD.;REEL/FRAME:050301/0033

Effective date: 20160401

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8