WO2018009110A1 - Methods and systems for handling scalable network connections - Google Patents
- Publication number
- WO2018009110A1 (PCT application PCT/SE2016/050703)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- memory
- network
- state information
- socket
- network socket
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
- H04L45/745—Address table lookup; Address filtering
- H04L45/7453—Address table lookup; Address filtering using hashing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/141—Setup of application sessions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
- H04L69/161—Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
- H04L69/162—Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields involving adaptations of sockets based mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
- H04L69/163—In-band adaptation of TCP data exchange; In-band control procedures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
- H04L69/169—Special adaptations of TCP, UDP or IP for interworking of IP based networks with other networks
Abstract
There is described a method and system for handling network connections in a server. The method includes: creating a network socket for a network connection in a first memory; monitoring the network connection for activity; and storing state information associated with the network socket in a second memory when there is no activity on the network connection for a predetermined period of time.
Description
METHODS AND SYSTEMS FOR HANDLING SCALABLE NETWORK CONNECTIONS
TECHNICAL FIELD
[0001] The present invention generally relates to communication networks and, more particularly, to handling large quantities of network connections at a server.
BACKGROUND
[0002] Over time the number of products and services provided to users of telecommunication products has grown significantly. Technology advanced and wireless phones of varying capabilities were introduced which had access to various services provided by network operators, e.g., data services. More recently there are numerous devices, e.g., so called "smart" phones and tablets, which can access communication networks in which the operators of the networks, and other parties, provide many different types of services, applications, etc. This has resulted in an increased amount of network traffic which in turn caused an increasing demand for high performing servers.
[0003] Existing operating systems consume a certain amount of random access memory (RAM) per open transmission control protocol (TCP) socket or TCP connection, e.g., to maintain read and write buffers for the socket. This results in a hard limit on the capacity of a server to handle large numbers of TCP connections. Since a large portion of the TCP connections are open but not transferring data at any given time, this contributes to inefficient consumption of the RAM available in a server.
[0004] Existing operating systems thus have problems handling large numbers of parallel TCP connections due to having a limited amount of RAM. It is common for servers to handle large amounts of traffic of different kinds, including long-lived connections with relatively sparse traffic exchanges that coexist with connections used for bulk transfer. Long-lived connections consume system resources throughout their existence, and a large number of such long-lived connections has a large impact on the available RAM at the server, even though these connections do not consume much in terms of other network resources, e.g., bandwidth.
[0005] Proxy production systems that are required to handle two million parallel connections per blade are not uncommon today. This number of connections per blade server results in high requirements on RAM, with deployments of up to 256 GB of RAM. However, addressing the problem of RAM consumption due to network socket support simply by continuing to add more RAM to newer servers is an unscalable solution due to cost.
[0006] Virtual memory is the combination of physical memory and swap space on disk. Although swapping is an automatic way of allowing higher memory utilization, the kernel itself cannot swap memory. Thus, for the above-described requirements on massively parallel connections, the memory consumption of the sockets in the kernel is a limiting factor which cannot be alleviated by employing virtual memory. Additionally, moving to a user space TCP/Internet Protocol (IP) stack would allow swapping of all the memory to disk, but the timing of the swapping is not decided by the application itself. Thus even active connections can be swapped to disk, incurring a large delay and reducing throughput, which makes this approach undesirable.
[0007] Thus, there is a need to provide methods and devices that overcome the above-described drawbacks associated with handling a large quantity of network connections.
SUMMARY
[0008] Embodiments allow for handling large amounts of parallel network connections with a limited amount of RAM by saving a socket to a persistent storage based on certain criteria and then releasing that socket from RAM. The socket can be re-activated when new data arrives on its associated network connection.
[0009] According to an embodiment, there is a method for handling network connections in a server. The method includes: creating a network socket for a network connection in a first memory; monitoring the network connection for activity; and storing state information associated with the network socket in a second memory when there is no activity on the network connection for a predetermined period of time.
[0010] According to an embodiment, there is a server for handling network connections. The server includes: a first memory in which a network socket for a network connection is created; a processor which monitors the network connection for activity; and a second memory in which state information associated with the network socket is stored when there is no activity on the network connection for a predetermined period.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate one or more embodiments and, together with the description, explain these embodiments. In the drawings:
[0012] Figure 1 shows a sequence of how transmission control protocol (TCP) socket state information can be stored according to an embodiment;
[0013] Figure 2 shows another sequence of how TCP socket state information can be stored according to an embodiment;
[0014] Figure 3 shows another sequence of how TCP socket state information can be stored according to an embodiment;
[0015] Figure 4 illustrates an example of when to read TCP socket state information according to an embodiment;
[0016] Figure 5 shows a flowchart of a method for handling network connections in a server according to an embodiment; and
[0017] Figure 6 shows a server according to an embodiment.
DETAILED DESCRIPTION
[0018] The following description of the embodiments refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims. The embodiments to be discussed next are not limited to the configurations described below, but may be extended to other arrangements as discussed later.
[0019] Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment. Further, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
[0020] As described in the Background, there are problems associated with current methods of handling a large quantity of network connections. Embodiments allow for handling large amounts of parallel network connections with a limited amount of random access memory (RAM) by saving the network socket to persistent storage based on certain criteria and then releasing that network socket from RAM. The network socket can be re-activated when new data arrives on its associated network connection.
[0021] A server typically creates a network socket when it receives a data segment with a particular flag set. For example, a transmission control protocol (TCP) server
creates a TCP socket when it receives a TCP segment with the SYN flag set. By generically using the term "socket" in the description, it is to be understood that
embodiments can be applied to TCP sockets, user datagram protocol (UDP) sockets and other types of network sockets and associated features/items, e.g., segment, server, flag, port, connection, etc.
[0022] Prior to discussing various embodiments, some terminology is first introduced. De-multiplexing, as used herein, describes the process of associating an IP datagram with a process and/or network socket listening to a specific network port. Serialization, as used herein, refers to a process of determining that a network socket which is established in RAM should be de-established in RAM and have its state information stored in secondary memory. De-serialization refers to the reverse process, i.e., the case where a socket has its state information stored in secondary memory, which state information is used to re-establish that socket in RAM as part of the de-multiplexing process.
[0023] One characteristic which can be monitored to determine if a particular network socket should be serialized is the socket's usage over time. According to an embodiment, each network socket can be associated with an inactivity timer which is reset whenever there is activity on its network connection. When the timer reaches a configured timeout value, a serialization process is initiated where state information associated with the network socket is stored in a socket-cache located in a secondary memory or storage, e.g., a persistent or non-volatile memory. A hash is computed from the connection five tuple in order to create a unique identification for the serialized socket. An example of a five tuple for TCP is 192.160.111.100/40111/71.100.122.70/71/6 for a packet arriving from port 40111 of IP address 192.160.111.100 at port 71 of IP address 71.100.122.70 and using TCP. Similarly, a five tuple can be created for other protocols, e.g., UDP.
[0024] A purely illustrative example of a hash using the above-described five tuple can be mapped out as shown in Equation 1:

hash = (ip_source * Z) XOR ip_destination XOR source_port XOR (dst_port bitshifted left by 16) XOR proto_number     (1)

where Z is an arbitrary prime number, in this case 59. Using the above five tuple of 192.160.111.100/40111/71.100.122.70/71/6 and a Z value of 59, the hash generated is 189580069603, given that the values are used in host byte order. Bitshifting is performed in this example because the IP addresses are 32-bit while the port number is only 16-bit. In this exemplary hash function, using the bitshifting and the arbitrary prime number Z allows a higher likelihood of obtaining a unique hash.
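As a purely illustrative sketch of Equation 1, the hash could be computed as follows in Python; the byte-order handling and the helper names are assumptions rather than part of the described method, but with the example five tuple and Z = 59 the sketch reproduces the value 189580069603 reported above.

```python
import ipaddress

def five_tuple_hash(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                    proto: int, z: int = 59) -> int:
    """Illustrative version of Equation 1; Z is an arbitrary prime number (59 here)."""
    ip_source = int(ipaddress.ip_address(src_ip))        # 32-bit address, host byte order assumed
    ip_destination = int(ipaddress.ip_address(dst_ip))
    return (ip_source * z) ^ ip_destination ^ src_port ^ (dst_port << 16) ^ proto

# Example five tuple from the text: 192.160.111.100/40111 -> 71.100.122.70/71, protocol 6 (TCP).
print(five_tuple_hash("192.160.111.100", 40111, "71.100.122.70", 71, 6))  # 189580069603
```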
[0025] The socket state information stored in the secondary memory is named with the given hash value. State information associated with the network socket and the network connection includes, but is not limited to, a source port, a destination port, connection established information, congestion window, Slow-Start Threshold (SSThresh) value, RTO state, a memory window size, negotiated options such as Selective Acknowledgment (SACK), maximum segment size (MSS), window scaling, etc., as well as the last sent/acked sequence number/acknowledgement number. The network connection hash is also stored in a lookup table in a primary memory, e.g., RAM memory of a blade server, and the lookup table is available to the IP-routing portion of the network stack. After serialization, all state information associated with the network socket and network connection is freed from the primary memory, thereby returning the RAM used to maintain that socket to the pool of free RAM that is available to the server for other purposes. Examples of storing network socket information when not in use according to various embodiments are described below in more detail with respect to Figures 1-3. Additionally, while Figures 1-3 are shown using TCP, other protocols could also be used.
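One way the serialized record and the primary-memory lookup table described above could be organized is sketched below; the particular fields, the cache directory path, and the JSON encoding are illustrative assumptions, not a layout mandated by the text.

```python
import json
import os
from dataclasses import dataclass, asdict

@dataclass
class SocketState:
    # Illustrative subset of the state information listed above; a real stack keeps more.
    source_port: int
    destination_port: int
    established: bool
    congestion_window: int
    ssthresh: int
    window_size: int
    sack_permitted: bool
    mss: int
    window_scale: int
    last_sent_seq: int
    last_acked_seq: int

SOCKET_CACHE_DIR = "./socket-cache"   # hypothetical stand-in for the secondary-memory socket-cache
serialized_hashes: set = set()        # lookup table of serialized connection hashes (kept in RAM)

def serialize_socket(conn_hash: int, state: SocketState) -> None:
    """Store the state in secondary memory, named by the connection hash,
    and record the hash in the primary-memory lookup table."""
    os.makedirs(SOCKET_CACHE_DIR, exist_ok=True)
    with open(os.path.join(SOCKET_CACHE_DIR, str(conn_hash)), "w") as f:
        json.dump(asdict(state), f)
    serialized_hashes.add(conn_hash)
    # At this point the in-RAM socket object and its buffers would be freed.
```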
[0026] According to an embodiment, Figure 1 shows a sequence of how TCP sockets' state information can be stored, when not in use, in another storage media and released from RAM. Initially, a TCP connection is set-up between TCP server A (TCP A) 102 and TCP server B (TCP B) 104 as shown in step 106. A TCP socket is then setup by TCP B 104 as shown in step 108. Additionally in step 108, a hash is created for this TCP socket and saved in a lookup table of TCP B 104's primary memory. Traffic occurs between TCP A 102 and TCP B 104 using this TCP socket as shown in step 110. At a future point in time traffic between TCP A 102 and TCP B 104 ceases on this TCP socket as shown in step 112. At that time a so-called "no traffic timer" is activated at TCP B 104 to track the amount of time that there is no traffic between TCP A 102 and TCP B 104 on this TCP socket as shown in step 114. In step 116, when the timer reaches a value greater than a predetermined amount of time "x", the TCP socket state information is saved. This socket state information is transmitted from a primary memory, e.g., RAM memory, of TCP B 104 to another, different memory storage 118 as shown in step 120. The storage 118 can be a non-volatile form of memory and can be located within TCP B 104 or separately from TCP B 104. Additionally, after the TCP socket state information is saved in step 120, the TCP socket is de-activated in TCP B 104, which frees up a portion of TCP B 104's primary memory. Those skilled in the art will appreciate that although only a single socket is discussed with respect to Figure 1, the same process can be performed for any number of sockets which are established by a server.
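The timer-driven trigger of Figure 1 could look roughly like the following sketch; the timer granularity, the value of the threshold x, and the serialize/deactivate callbacks are assumptions used only for illustration.

```python
import time

NO_TRAFFIC_TIMEOUT_X = 2.0   # seconds; the text notes that x depends on the traffic type

class MonitoredSocket:
    """Sketch of the per-socket "no traffic timer" of Figure 1 (steps 110-120)."""

    def __init__(self, conn_hash: int, serialize, deactivate):
        self.conn_hash = conn_hash
        self.serialize = serialize        # callback that stores the state in secondary memory
        self.deactivate = deactivate      # callback that frees the socket from RAM
        self.last_activity = time.monotonic()
        self.active = True

    def on_traffic(self) -> None:
        # Any traffic on the connection resets the inactivity timer (steps 110/114).
        self.last_activity = time.monotonic()

    def poll(self) -> None:
        # Step 116: when the timer exceeds x, serialize the socket and release its RAM.
        if self.active and time.monotonic() - self.last_activity > NO_TRAFFIC_TIMEOUT_X:
            self.serialize(self.conn_hash)
            self.deactivate(self.conn_hash)
            self.active = False

# Usage sketch:
sock = MonitoredSocket(189580069603,
                       serialize=lambda h: print(f"store state for {h}"),
                       deactivate=lambda h: print(f"free RAM for {h}"))
sock.on_traffic()
sock.poll()   # does nothing until the connection has been idle for more than x seconds
```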
[0027] According to an embodiment, Figure 2 shows another sequence of how TCP sockets' state information can be stored, when not in use, in another storage media. Initially, a TCP connection is set-up between TCP A 102 and TCP B 104 as shown in step 106. A TCP socket is then setup by TCP B 104 as shown in step 108. Additionally in step 108, a hash is created for this TCP socket and saved in a lookup table of TCP B 104's primary memory. Traffic occurs between TCP A 102 and TCP B 104 on this TCP socket as shown in step 110. At a future point in time traffic between TCP A 102 and TCP B 104 ceases on this TCP socket as shown in step 112. In step 202, when the memory pressure, i.e., the amount of RAM being used, reaches a value greater than a predetermined amount or percentage of memory "y", the TCP socket state information is saved. Thus this embodiment introduces another criterion which can be used to trigger socket serialization, memory pressure, as an alternative to or in addition to usage time. This socket state information is transmitted from a primary memory, e.g., RAM memory, of TCP B 104 to another, different memory storage 118 as shown in step 120. The storage 118 can be a non-volatile form of memory and can be located within TCP B 104 or separately from TCP B 104. Additionally, after the TCP socket state information is saved in step 120, the TCP socket is de-activated in TCP B 104, which frees up a portion of TCP B 104's primary memory.
[0028] According to an embodiment, Figure 3 shows another sequence of how TCP sockets' state information can be stored, when not in use, in another storage media. Initially, a TCP connection is set-up between TCP A 102 and TCP B 104 as shown in step 106. A TCP socket is then setup by TCP B 104 as shown in step 108. Additionally in step 108, a hash is created for this TCP socket and saved in a lookup table of TCP B 104's primary memory. Traffic occurs between TCP A 102 and TCP B 104 on this TCP socket as shown in step 110. At a future point in time traffic between TCP A 102 and TCP B 104 ceases on this TCP socket as shown in step 112. At that time a so-called "no traffic timer" is activated at TCP B 104 to track the amount of time that there is no traffic between TCP A 102 and TCP B 104 on this TCP socket as shown in step 114. In step 302, when the timer reaches a value greater than a predetermined amount of time "x" and the memory pressure reaches a value greater than a predetermined amount or percentage of memory "y", the TCP socket state information is saved. This TCP socket state information is transmitted from a primary memory, e.g., RAM memory, of TCP B 104 to another, different memory storage 118 as shown in step 120. The storage 118 can be a non-volatile form of memory and can be located within TCP B 104 or separately from TCP B 104. Additionally, after the TCP socket state information is saved in step 120, the TCP socket is de-activated in TCP B 104, which frees up a portion of TCP B 104's primary memory.
[0029] In Figures 1 and 3 the inactivity timer is used for tracking an amount of time with no traffic being transmitted between TCP A 102 and TCP B 104 using a particular socket. The amount of time x is typically a predetermined amount. However, that predetermined amount can be different for different types of traffic as well as being influenced by other items such as configuration, traffic patterns and memory pressure. For example, if the traffic is video, then x could be measured in seconds, e.g., two seconds. If
the traffic is a simpler form of data, e.g., text, then x could be measured in milliseconds (ms), e.g., two ms.
[0030] According to an embodiment, as described above, another trigger for storing network socket state information is memory pressure. Memory pressure can be described as an amount of free space remaining in a memory. A threshold y can be determined either as a percentage, e.g., ten percent or ten percent below Linux limits, or as an amount of memory that, when reached, triggers the storing of network socket state information. This trigger can be used in conjunction with a network socket inactivity timer or by itself, as shown in the previous embodiments. Additionally, when serialization is triggered due to a detection by the server that a memory pressure threshold has been exceeded, a least recently used network socket can be serialized, or the inactivity time threshold can be reduced from 'x' to another value 'z' which is less than 'x' to increase the storage of sockets and free up more RAM.
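One way to combine the memory-pressure threshold y with the inactivity criterion, as just described, is sketched below; the threshold values, the least-recently-used ordering, and the reduced timeout z are illustrative assumptions.

```python
import time

TIMEOUT_X = 2.0               # normal inactivity threshold, in seconds
TIMEOUT_Z = 0.5               # reduced threshold used under memory pressure (z < x)
FREE_RAM_THRESHOLD_Y = 0.10   # serialize more aggressively when free RAM drops below 10%

def sockets_to_serialize(last_activity: dict, free_ram_fraction: float, now: float = None) -> list:
    """Return the connection hashes that should be serialized now.

    `last_activity` maps connection hash -> timestamp of the last traffic.
    Under memory pressure the inactivity threshold is lowered from x to z and
    the least recently used sockets (longest idle) are chosen first.
    """
    now = time.monotonic() if now is None else now
    under_pressure = free_ram_fraction < FREE_RAM_THRESHOLD_Y
    threshold = TIMEOUT_Z if under_pressure else TIMEOUT_X
    idle = [(conn_hash, now - last) for conn_hash, last in last_activity.items()
            if now - last > threshold]
    idle.sort(key=lambda item: item[1], reverse=True)   # least recently used first
    return [conn_hash for conn_hash, _ in idle]

# Usage sketch: two idle sockets and only 8% free RAM, so both qualify, oldest first.
now = time.monotonic()
print(sockets_to_serialize({101: now - 3.0, 202: now - 1.0}, free_ram_fraction=0.08, now=now))
```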
[0031] As mentioned previously, serialization, described above with respect to Figures 1-3, refers to moving socket state information into secondary memory and releasing that socket from RAM. De-multiplexing refers to handling packets incoming to the server when some sockets have been serialized, which process will now be described with respect to Figure 4.
[0032] According to an embodiment, Figure 4 shows how to de-multiplex traffic incoming to a server which has (or may have) serialized some of its sockets. Initially, in step 402, traffic is detected between TCP A 102 and TCP B 104. As shown in step 404, if the TCP socket associated with the received packet is active, then TCP B 104 uses that active TCP socket. As shown in step 406, if the TCP socket associated with the received packet is not active, then TCP B 104 makes the decision to read the TCP socket state information previously stored. In step 408, TCP B 104 reads the TCP socket state information. In step 410, TCP B 104 uses the de-serialized TCP socket.
[0033] The de-multiplexing which is generally described above with respect to Figure 4 can be implemented in different ways. For example, the determination of whether the TCP socket associated with an incoming data packet is currently active (step 404) or whether that TCP socket has been serialized (step 406) can be performed in any desired order. If step 406 is performed first, then the process can be implemented as follows. Firstly, for each packet that enters the system and is destined for the blade server, compute a hash on the connection five tuple. Then determine if the hash is present in the lookup table of serialized network socket identifiers in the primary memory. If the hash is not present, continue with a standard de-multiplexing procedure, i.e., to find an active socket associated with the five tuple. If the hash is present, initiate de-serialization of the saved network socket state associated with the hash, i.e., associating an IP datagram with the network socket listening to a specific network port. When de-serialization is complete, continue with the standard de-multiplexing procedure.
[0034] Alternatively, step 404 can be performed first by implementing the flow as follows. Firstly, for each packet that enters the system, perform the standard de-multiplexing procedure to search for an active TCP socket associated with the five tuple. If no network socket is found for the specific connection identifier, compute the hash using the connection five tuple. Then determine if the hash is present in the lookup table of serialized network socket identifiers in the primary memory. If the hash is not present, continue with the standard de-multiplexing procedure. If the hash is present, initiate de-serialization of the saved network socket state associated with the hash. When de-serialization is complete, forward the packet to the activated network socket.
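A compact sketch of the packet-side handling described in paragraphs [0033]-[0034] is shown below; the dictionary-based lookup table and cache, and the shape of the re-established socket object, are illustrative assumptions.

```python
def demultiplex(conn_hash: int, active_sockets: dict, serialized_hashes: set, socket_cache: dict):
    """Sketch of the de-multiplexing flow of Figure 4, with the active-socket check done first.

    active_sockets    : connection hash -> live socket object in RAM
    serialized_hashes : lookup table of serialized connection hashes (kept in RAM)
    socket_cache      : connection hash -> stored state (secondary memory)
    """
    # Standard de-multiplexing first (step 404): is there an active socket for this hash?
    sock = active_sockets.get(conn_hash)
    if sock is not None:
        return sock
    # Otherwise check the lookup table of serialized sockets (step 406).
    if conn_hash in serialized_hashes:
        state = socket_cache[conn_hash]                # read the stored state (step 408)
        sock = {"state": state, "deserialized": True}  # re-establish the socket in RAM
        active_sockets[conn_hash] = sock
        serialized_hashes.discard(conn_hash)           # remove the hash from the lookup table
        socket_cache[conn_hash]["dirty"] = True        # mark the stored copy for garbage collection
        return sock                                    # step 410: use the de-serialized socket
    # Neither active nor serialized: fall back to normal handling, e.g., a listening socket.
    return None

# Usage sketch:
active, hashes = {}, {189580069603}
cache = {189580069603: {"source_port": 40111, "destination_port": 71, "dirty": False}}
print(demultiplex(189580069603, active, hashes, cache))   # de-serializes and activates the socket
print(demultiplex(189580069603, active, hashes, cache))   # now found among the active sockets
```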
[0035] According to an embodiment, when a socket is de-serialized, the associated hash is removed from the lookup-table and the data stored in the secondary memory is marked as "dirty". A separate garbage collection process clears the unused data from the secondary memory.
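The clean-up step could be as simple as the following sketch; the representation of the "dirty" flag and the scheduling of the garbage-collection pass are assumptions.

```python
def garbage_collect(socket_cache: dict) -> int:
    """Periodic pass that frees secondary-memory entries marked 'dirty' during de-serialization."""
    stale = [conn_hash for conn_hash, entry in socket_cache.items() if entry.get("dirty")]
    for conn_hash in stale:
        del socket_cache[conn_hash]
    return len(stale)

# Usage sketch, continuing from the de-multiplexing example above:
cache = {189580069603: {"source_port": 40111, "dirty": True}}
print(garbage_collect(cache), cache)   # 1 {}
```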
[0036] Figures 1-4 show embodiments where a single TCP socket is used. A single socket has been shown for simplicity. It is to be understood that the embodiments shown can be scaled up so that many more network connections can be created, as well as terminated, at roughly the same time such that the connections essentially occur in parallel.
[0037] According to an embodiment there is a method for handling network connections as shown in Figure 5. The method includes: in step 502, creating a network socket for a network connection in a first memory; in step 504, monitoring the network connection for activity; and in step 506, storing state information associated with the network socket in a second memory when there is no activity on the network connection for a predetermined period of time.
[0038] Embodiments described above can be implemented in a device, e.g., the blade server, to improve memory usage via network socket handling. An example of such a blade server is shown in Figure 6. The blade server 600 includes a processor 602 for executing instructions and performing the functions described herein, e.g., serialization,
de-serialization and de-multiplexing. The blade server 600 also includes a primary memory 604, e.g., RAM memory, a secondary memory 606 which is a non-volatile memory, and an interface 608 for communicating with other portions of communication networks. The blade server 600 can act as a TCP proxy server or other device which handles a large number of network connections.
[0039] Implementing the various embodiments allows for a better utilization of RAM memory for active network connections, instead of inactive network connections, as well as timely control of when network sockets should be serialized and which sockets to choose for serialization, based for example on either a least recently used algorithm or decision criteria on specific IP ranges. For example, the decision criteria could be implemented in any or all of steps 116, 202 and/or 302, from Figures 1, 2 and 3 respectively. The decision criteria could be implemented only for one or more IP address ranges which correlate to various subscription levels. Cost savings can be obtained by storing the network socket state information in the secondary, non-volatile memory as compared to only using the primary RAM memory, as RAM memory is more expensive than non-volatile memory. This also improves the ratio of the utilization of RAM per active connection, which can be desirable. Additionally, embodiments can benefit highly loaded blade servers where a sudden increase in the amount of network connection attempts would otherwise cause the server to hang.
[0040] The disclosed embodiments provide methods and devices for handling large amounts of parallel network connections with a limited amount of RAM. It should be understood that this description is not intended to limit the invention. On the contrary, the embodiments are intended to cover alternatives, modifications and equivalents, which
are included in the spirit and scope of the invention. Further, in the detailed description of the embodiments, numerous specific details are set forth in order to provide a
comprehensive understanding of the claimed invention. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.
[0041] As also will be appreciated by one skilled in the art, the embodiments may take the form of an entirely hardware embodiment or an embodiment combining hardware and software aspects. Further, portions of the embodiments, e.g., the predetermined thresholds or rules to determine the thresholds for x and y, may take the form of a computer program product stored on a computer-readable storage medium having computer-readable instructions embodied in the medium. Any suitable computer-readable medium may be utilized, including hard disks, CD-ROMs, digital versatile disc (DVD), optical storage devices, or magnetic storage devices such as floppy disk or magnetic tape. Other non-limiting examples of computer-readable media include flash-type memories or other known memories.
[0042] Although the features and elements of the present embodiments are described in the embodiments in particular combinations, each feature or element can be used alone without the other features and elements of the embodiments or in various combinations with or without other features and elements disclosed herein. The methods or flowcharts provided in the present application may be implemented in a computer program, software or firmware tangibly embodied in a computer-readable storage medium for execution by a specifically programmed computer or processor.
Claims
1. A method for handling network connections in a server, the method comprising: creating (502) a network socket for a network connection in a first memory; monitoring (504) the network connection for activity; and
storing (506) state information associated with the network socket in a second memory when there is no activity on the network connection for a predetermined period of time.
2. The method of claim 1, further comprising:
storing the state information associated with the network socket in the second memory when an inactivity timer reaches a predetermined threshold.
3. The method of claim 1, further comprising:
storing the state information associated with the network socket in the second memory when an inactivity timer reaches a first predetermined threshold and when an amount of free space in the first memory reaches a second predetermined threshold.
4. The method of claims 1-3, wherein the state information includes a source port, a destination port, connection established information and a memory window size.
5. The method of claims 1-4, further comprising:
creating a unique identification associated with the network socket by generating a hash value based on a five tuple of the network connection; and
storing the hash value in a lookup table in the first memory.
6. The method of claim 5, further comprising:
removing the state information from the first memory after both storing the state information in the second memory and storing the hash value in the lookup table in the first memory.
7. The method of claims 1-6, wherein the first memory is random access memory and the second memory is non-volatile memory.
8. The method of claims 1-7, further comprising:
retrieving the state information associated with the network socket from the second memory when there is activity on the network connection to re-establish the network socket in the first memory.
9. The method of claim 8, further comprising:
generating, for each packet entering the server, a hash based on a five tuple of the packet;
determining whether the hash of the packet matches a hash value stored in the lookup table;
if so, using state information stored in the second memory to re-establish the network socket; and
if not, creating a new network socket to handle the packet.
10. The method of claim 8, further comprising:
performing a de-multiplexing procedure for each packet entering the server; if a network socket is found for a specific connection identifier associated with a packet determined from the de-multiplexing procedure, then use the network socket to process the packet;
if no network socket is found for the specific connection identifier, then performing the following steps:
generating a hash for a packet based on a five tuple of the packet;
determining whether the hash of the packet matches a hash value stored in the lookup table;
if so, using state information stored in the second memory to re-establish the network socket; and
if not, creating a new network socket to handle the packet.
11. A server for handling network connections, the server comprising:
a first memory (604) in which a network socket for a network connection is created;
a processor (602) which monitors the network connection for activity; and a second memory (606) in which state information associated with the network socket is stored when there is no activity on the network connection for a predetermined period.
12. The server of claim 11, wherein the state information associated with the network socket is stored in the second memory when an inactivity timer reaches a
predetermined threshold.
13. The server of claim 11, wherein the state information associated with the network socket is stored in the second memory when an inactivity timer reaches a first predetermined threshold and when an amount of free space in the first memory reaches a second predetermined threshold.
14. The server of claims 11-13, wherein the state information includes a source port, a destination port, connection established information and a memory window size.
15. The server of claims 11-14, further comprising:
the processor creates a unique identification associated with the network socket by generating a hash value based on a five tuple of the network connection, wherein the hash value is stored in a lookup table in the first memory.
16. The server of claim 15, wherein the state information is removed from the first memory after both storing the state information in the second memory and storing the hash value in the lookup table in the first memory.
17. The server of claims 11-16, wherein the first memory is random access memory and the second memory is non-volatile memory.
18. The server of claims 11-17, wherein the state information associated with the network socket is retrieved from the second memory when there is activity on the network connection to re-establish the network socket in the first memory.
19. The server of claim 18, further comprising:
the processor generates, for each packet entering the server, a hash based on a five tuple of the packet;
the processor determines whether the hash of the packet matches a hash value stored in the lookup table;
if so, using state information stored in the second memory to re-establish the network socket; and
if not, creating a new network socket to handle the packet.
20. The server of claim 18, further comprising:
the processor performs a de-multiplexing procedure for each packet entering the server;
if no network socket is found for a specific connection identifier determined from the de-multiplexing procedure, then the processor performs the following steps:
generating a hash for a packet based on a five tuple of the packet;
determining whether the hash of the packet matches a hash value stored in the lookup table;
if so, using state information stored in the second memory to re-establish the network socket; and
if not, creating a new network socket to handle the packet; and if a network socket is found for the specific connection identifier, then the processor uses the network socket.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16908271.6A EP3482544A4 (en) | 2016-07-08 | 2016-07-08 | Methods and systems for handling scalable network connections |
PCT/SE2016/050703 WO2018009110A1 (en) | 2016-07-08 | 2016-07-08 | Methods and systems for handling scalable network connections |
US16/315,933 US20190253351A1 (en) | 2016-07-08 | 2016-07-08 | Methods and systems for handling scalable network connections |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/SE2016/050703 WO2018009110A1 (en) | 2016-07-08 | 2016-07-08 | Methods and systems for handling scalable network connections |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018009110A1 (en) | 2018-01-11 |
Family
ID=60913032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SE2016/050703 WO2018009110A1 (en) | 2016-07-08 | 2016-07-08 | Methods and systems for handling scalable network connections |
Country Status (3)
Country | Link |
---|---|
US (1) | US20190253351A1 (en) |
EP (1) | EP3482544A4 (en) |
WO (1) | WO2018009110A1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9971397B2 (en) | 2014-10-08 | 2018-05-15 | Apple Inc. | Methods and apparatus for managing power with an inter-processor communication link between independently operable processors |
US11792307B2 (en) | 2018-03-28 | 2023-10-17 | Apple Inc. | Methods and apparatus for single entity buffer pool management |
US11829303B2 (en) | 2019-09-26 | 2023-11-28 | Apple Inc. | Methods and apparatus for device driver operation in non-kernel space |
US11477123B2 (en) | 2019-09-26 | 2022-10-18 | Apple Inc. | Methods and apparatus for low latency operation in user space networking |
US11558348B2 (en) | 2019-09-26 | 2023-01-17 | Apple Inc. | Methods and apparatus for emerging use case support in user space networking |
WO2021060957A1 (en) * | 2019-09-27 | 2021-04-01 | Samsung Electronics Co., Ltd. | Method and device for performing asynchronous operations in a communication system |
US11606302B2 (en) | 2020-06-12 | 2023-03-14 | Apple Inc. | Methods and apparatus for flow-based batching and processing |
US11775359B2 (en) | 2020-09-11 | 2023-10-03 | Apple Inc. | Methods and apparatuses for cross-layer processing |
US11954540B2 (en) | 2020-09-14 | 2024-04-09 | Apple Inc. | Methods and apparatus for thread-level execution in non-kernel space |
US11799986B2 (en) | 2020-09-22 | 2023-10-24 | Apple Inc. | Methods and apparatus for thread level execution in non-kernel space |
US11876719B2 (en) | 2021-07-26 | 2024-01-16 | Apple Inc. | Systems and methods for managing transmission control protocol (TCP) acknowledgements |
US11882051B2 (en) | 2021-07-26 | 2024-01-23 | Apple Inc. | Systems and methods for managing transmission control protocol (TCP) acknowledgements |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6880013B2 (en) * | 2000-12-29 | 2005-04-12 | International Business Machines Corporation | Permanent TCP connections across system reboots |
US7152111B2 (en) * | 2002-08-15 | 2006-12-19 | Digi International Inc. | Method and apparatus for a client connection manager |
US7957409B2 (en) * | 2003-01-23 | 2011-06-07 | Cisco Technology, Inc. | Methods and devices for transmitting data between storage area networks |
US8838817B1 (en) * | 2007-11-07 | 2014-09-16 | Netapp, Inc. | Application-controlled network packet classification |
US8892710B2 (en) * | 2011-09-09 | 2014-11-18 | Microsoft Corporation | Keep alive management |
JP2015530021A (en) * | 2012-09-10 | 2015-10-08 | Hewlett-Packard Development Company, L.P. | Using primary and secondary connection tables |
US20140115182A1 (en) * | 2012-10-24 | 2014-04-24 | Brocade Communications Systems, Inc. | Fibre Channel Storage Area Network to Cloud Storage Gateway |
US9781075B1 (en) * | 2013-07-23 | 2017-10-03 | Avi Networks | Increased port address space |
- 2016
  - 2016-07-08: US 16/315,933, published as US20190253351A1 (en), status: abandoned
  - 2016-07-08: WO PCT/SE2016/050703, published as WO2018009110A1 (en), status: unknown
  - 2016-07-08: EP 16908271.6A, published as EP3482544A4 (en), status: withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7263556B1 (en) * | 2000-08-11 | 2007-08-28 | Microsoft Corporation | System and method of enhancing server throughput by minimizing timed-wait TCP control block (TWTCB) size |
US20070233822A1 (en) * | 2006-04-03 | 2007-10-04 | International Business Machines Corporation | Decrease recovery time of remote TCP client applications after a server failure |
US8737393B2 (en) * | 2009-06-04 | 2014-05-27 | Canon Kabushiki Kaisha | Communication apparatus, control method for communication apparatus, and computer program |
Non-Patent Citations (1)
Title |
---|
See also references of EP3482544A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3482544A1 (en) | 2019-05-15 |
US20190253351A1 (en) | 2019-08-15 |
EP3482544A4 (en) | 2019-05-15 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16908271; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2016908271; Country of ref document: EP; Effective date: 20190208 |