US20080259821A1 - Dynamic packet training - Google Patents
- Publication number
- US20080259821A1 (application Ser. No. 12/147,778)
- Authority
- US
- United States
- Prior art keywords
- packet
- processor
- load
- packets
- train
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/36—Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
- H04L47/365—Dynamic adaptation of the packet size
Definitions
- Computer system 100 is shown in accordance with the preferred embodiments of the invention.
- Computer system 100 is an IBM eServer iSeries computer system.
- Computer system 100 comprises a processor 110 (central processing unit, or CPU), a main memory 120, a mass storage interface 130, a display interface 140, and a network interface 150, connected by a system bus 160.
- Mass storage interface 130 is used to connect mass storage devices, such as a direct access storage device 155, to computer system 100.
- One specific type of direct access storage device 155 is a readable and writable CD-RW drive, which may store data to and read data from a CD-RW 195.
- Processor 110 may be constructed from one or more microprocessors and/or integrated circuits. Processor 110 executes program instructions stored in main memory 120 . Main memory 120 stores programs and data that processor 110 may access. When computer system 100 starts up, processor 110 initially executes the program instructions that make up operating system 122 . Operating system 122 is a sophisticated program that manages the resources of computer system 100 . Some of these resources are processor 110 , main memory 120 , mass storage interface 130 , display interface 140 , network interface 150 , and system bus 160 .
- Although computer system 100 is shown to contain only a single processor and a single system bus, those skilled in the art will appreciate that the present invention may be practiced using a computer system that has multiple processors and/or multiple buses.
- In addition, the interfaces that are used in the preferred embodiment each include separate, fully programmed microprocessors that are used to off-load compute-intensive processing from processor 110. However, the present invention applies equally to computer systems that simply use I/O adapters to perform similar functions.
- Display interface 140 is used to directly connect one or more displays 165 to computer system 100 .
- These displays 165 which may be non-intelligent (i.e., dumb) terminals or fully programmable workstations, are used to allow system administrators and users to communicate with computer system 100 . Note, however, that while display interface 140 is provided to support communication with one or more displays 165 , computer system 100 does not necessarily require a display 165 , because all needed interaction with users and other processes may occur via network interface 150 .
- Network interface 150 is used to connect other computer systems and/or workstations (e.g., 175 in FIG. 1 ) to computer system 100 across a network 170.
- The present invention applies equally no matter how computer system 100 may be connected to other computer systems and/or workstations, regardless of whether the network connection 170 is made using present-day analog and/or digital techniques or via some networking mechanism of the future.
- Many different network protocols can be used to implement a network. These protocols are specialized computer programs that allow computers to communicate across network 170.
- TCP/IP (Transmission Control Protocol/Internet Protocol) is an example of a suitable network protocol.
- Main memory 120 in accordance with the preferred embodiments contains data 121 , an operating system 122 , an application 123 and a packet controller 124 .
- Data 121 represents any data that serves as input to or output from any program in computer system 100 .
- Operating system 122 is a multitasking operating system known in the industry as OS/400; however, those skilled in the art will appreciate that the spirit and scope of the present invention is not limited to any one operating system.
- The application 123 is any application software program operating in the system that processes data 121.
- The packet controller 124 operates in conjunction with the communications controller 152 in the network interface 150 to dynamically adjust the packet training as described further below.
- Packet controller 124 includes one or more thresholds 125 for comparing to the utilization level of the processor, and one or more maximum train sizes 126 for setting the maximum number of packets in a packet train.
- The thresholds 125 and maximum train sizes 126 are described further below.
- Computer system 100 utilizes well known virtual addressing mechanisms that allow the programs of computer system 100 to behave as if they only have access to a large, single storage entity instead of access to multiple, smaller storage entities such as main memory 120 and DASD device 155 . Therefore, while data 121 , operating system 122 , application 123 , and the packet controller 124 are shown to reside in main memory 120 , those skilled in the art will recognize that these items are not necessarily all completely contained in main memory 120 at the same time. It should also be noted that the term “memory” is used herein to generically refer to the entire virtual memory of computer system 100 , and may include the virtual memory of other computer systems coupled to computer system 100 . Thus, while in FIG. 1 , the application 123 , and the packet controller 124 are all shown to reside in the main memory 120 of computer system 100 , in actual implementation these software components may reside in separate machines and communicate over network 170 .
- Network 170 may include a plurality of networks, such as local area networks, each of which includes a plurality of individual computers such as the computer 100 described above. Further, the computers may be implemented utilizing any suitable computer, such as the PS/2 computer, AS/400 computer, or a RISC System/6000 computer, which are products of IBM Corporation located in Armonk, N.Y. “PS/2”, “AS/400”, and “RISC System/6000” are trademarks of IBM Corporation. A plurality of intelligent workstations (IWS) (not shown) coupled to a processor may also be utilized in such a network. Network 170 may also include mainframe computers, which may be coupled to network 170 by means of a suitable communications link. A mainframe computer may be implemented by utilizing an ESA/370 computer, an ESA/390 computer, or an AS/400 computer available from IBM Corporation. “ESA/370”, “ESA/390”, and “AS/400” are trademarks of IBM Corporation.
- Computer system 100 may be used for training packets according to preferred embodiments.
- Computer system 100 could be implemented in any of the computers on the network 170 as described above, or in a gateway server or mainframe computer.
- Computer system 100 can contain both hardware and software to implement the packet control features described herein.
- Computer system 100 contains communications controller 152 connected to processor 110 and main memory 120 via system bus 160 .
- Computer system 100 includes a processor utilization mechanism 112 capable of determining the level of utilization of the processor.
- Processor utilization mechanism 112 can be implemented in hardware or software.
- Processor utilization mechanism 112 is implemented as an API call to the operating system that is supported by hardware in the processor that determines the ratio of the run cycles to the total number of cycles.
- Alternatively, the utilization mechanism could use any suitable processor metric to determine processor utilization or processor loading, such as wait-state tasks divided by total cycles, or another suitable metric.
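The run-cycle ratio described above can be sketched as a simple calculation. This is a minimal illustration only; the function and counter names are assumptions, not taken from the patent.

```c
/* Hypothetical sketch of the utilization metric: the ratio of run cycles
 * to total cycles, expressed as a percentage. Counter names are
 * illustrative, not from the patent. */
static unsigned utilization_percent(unsigned long long run_cycles,
                                    unsigned long long total_cycles)
{
    if (total_cycles == 0)
        return 0;                      /* nothing sampled yet */
    return (unsigned)((run_cycles * 100ULL) / total_cycles);
}
```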
- Main memory 120 contains packet controller 124 , which contains instructions capable of being executed by processor 110 .
- Packet controller 124 could be implemented by control circuitry through the use of logic gates, programmable logic devices, or other hardware components in lieu of a processor-based system.
- Packet controller 124 performs the packet-training method described herein below.
- Packet controller 124 includes one or more thresholds 125 for comparing to the utilization level of the processor. The thresholds are preferably selectable by the user or system programmer with an appropriate interface and stored in a memory area of the packet controller 124 . For example, the thresholds may be set as part of the process to change TCP attributes with an appropriate request to the operating system 122 ( FIG. 1 ).
- The packet controller 124 also includes one or more maximum train sizes 126 for setting the maximum number of packets in a packet train.
- Table 1 shows an illustrative example of thresholds 125 and associated maximum train sizes 126, where the maximum train size specifies the maximum number of packets in a packet train. For a threshold of 30% utilization, a maximum train size of 0 is set, indicating that packet training is disabled. For a threshold of 50% utilization, a maximum train size of 50 is set (a moderate size of packet train). For a threshold of 90% utilization, a maximum train size of 100 is set (a large size of packet train, or the maximum-sized packet train). The maximum train size is the number of packets that are accumulated before sending the packet train.

  TABLE 1
  Threshold (processor utilization)    Maximum train size
  30%                                    0 (training disabled)
  50%                                   50
  90%                                  100
- The maximum train size and the invention herein can also be combined with the prior art method of a timer to send out a packet train after a selected amount of time.
- The listed thresholds and associated packet train sizes are for illustration only. Any suitable number of thresholds could be used with an associated packet train size to get a desired performance tradeoff.
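Under one plausible reading of Table 1 (utilization at or below the lowest threshold disables training; higher thresholds select larger trains), the lookup could be sketched as below. The function name and the handling of the threshold boundaries are assumptions, since the text does not pin them down.

```c
/* Illustrative lookup from sampled processor utilization (percent) to a
 * maximum train size 126, using the example thresholds of Table 1.
 * Returning 0 means packet training is disabled. */
static int max_train_size(unsigned utilization)
{
    if (utilization >= 90)
        return 100;   /* heavy load: large / maximum-sized train */
    if (utilization >= 50)
        return 50;    /* moderate load: moderate train */
    return 0;         /* light load: disable training, favor latency */
}
```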
- Communications controller 152 contains communications front-end 204 , communications packet-controller 206 , packet storage 208 , and DMA (Direct Memory Access) controller 214 , all connected via communications bus 212.
- DMA controller 214 is connected to DMA processor 210 .
- Communications front-end 204 is connected to network 170 , contains the circuitry for transmitting and receiving packets across network 170 , and is employed to communicate with other nodes coupled to network 170 .
- DMA processor 210 controls DMA controller 214 .
- DMA controller 214 receives packets from communications bus 212 and sends the packets to processor 110 through system bus 160 . The packets then are processed by packet controller 124 and stored in host memory 120 .
- When host processor 110 desires to send packets to network 170 , it transmits the packets from host memory 120 to packet storage 208 using DMA controller 214 and DMA processor 210.
- Communications packet controller 206 then uses communications front-end 204 to transmit the packets from packet storage 208 across the communications link to network 170.
- Although a specific hardware configuration is shown in FIG. 2 , a preferred embodiment of the present invention can apply to any hardware configuration that allows the training of packets, regardless of whether the hardware configuration is a complicated, multi-user computing apparatus, a single-user workstation, or a network appliance that does not have non-volatile storage of its own.
- FIG. 5 shows a method 500 of adjusting the packet training according to a preferred embodiment.
- The method 500 is started periodically to check the processor utilization (step 510 ).
- The threshold utilization percentage can be a parameter stored in memory that can be adjusted by a suitable software interface to the packet control mechanism 124 ( FIG. 1 ).
- FIG. 6 shows another method 600 of adjusting the packet training according to a preferred embodiment.
- Again, the threshold utilization can be a parameter stored in memory that can be adjusted by a suitable software interface to the packet control mechanism 124 ( FIG. 1 ). Similarly, other embodiments could include additional thresholds and corresponding maximum levels of packet training.
- Packet training decreases the load on the CPU but may increase the delay for a message to be sent over the network due to the delay in building a train of packets.
- the present invention provides the computer industry with an improved way to optimize the tradeoff between CPU loading and network latency to improve overall performance in a packet data network.
Abstract
A packet control mechanism for a computer data system that dynamically adjusts packet training depending on the utilization load on the processor. The dynamic adjustment of packet training can be to enable and disable packet training, or adjust the number of packets in the packet train. In preferred embodiments, the computer data system includes a processor utilization mechanism that indicates a load on a processor. When the packet control mechanism determines the load on the processor is above a threshold limit, the packet control mechanism reduces the processor load by processing the packets into a packet train. The training of the packets is stopped or reduced when the processor load is below a threshold in order to increase the data throughput on the network interface.
Description
- This patent application is a continuation of U.S. Ser. No. 11/106,011, filed on Apr. 14, 2005, which is incorporated herein by reference.
- 1. Technical Field
- This invention generally relates to data processing and communications, and more specifically relates to dynamically transmitting data packets in a packet train on a computer network or computer communication link.
- 2. Background Art
- Computer systems communicate with each other over computer networks. Such networks include multiple nodes, which are typically computers, that may be distributed over vast distances and connected by communications links. Nodes in the computer network communicate with each other using data packets sent over the communication links. The data packets are the basic units of information transfer. A data packet contains data surrounded by control and routing information supplied by the various nodes.
- Sending, receiving, and processing of packets have an overhead, or associated cost. That is, it takes time for the central processing unit (CPU) at a node to receive a packet, to examine the packet's control information, and to determine the next action. One way to reduce the packet overhead is a method called packet training. Packet training consolidates individual packets into a group, called a train, so that a node can process the entire train of packets at once. The term “train” is in reference to a train of railroad cars. The packets are formed into a group of sequential packets like a line of railroad cars or a train. Processing a train of packets has less overhead, and thus better performance, than processing each packet individually.
- In a typical training method, a node will accumulate packets until the train reaches a fixed target length. Then the node will process or retransmit the entire packet train at once. To ensure that the accumulated packets are eventually handled, since the packet arrival rate at the node is unpredictable, the method will start a timer when the node receives the train's first packet. When the timer expires, the node will end the train and process it even if the train has not reached its target length. This training method works well in times of heavy packet traffic because the timer never expires. But in times of light packet traffic, the packets that the node accumulates experience poor performance while waiting in vain for additional packets to arrive, and the ultimate timer expiration introduces additional processing overhead.
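The fixed-target scheme above can be sketched as follows. This is a minimal sketch for illustration; the state layout and the representation of time are assumptions.

```c
#include <stdbool.h>

/* Sketch of the typical training method: accumulate packets until the
 * train reaches a fixed target length, or until a timer started at the
 * first packet expires. Returns true when the caller should process
 * (or retransmit) the whole train at once. */
struct train_state {
    int    count;     /* packets accumulated so far */
    int    target;    /* fixed target train length */
    double deadline;  /* timer expiry, set when the first packet arrives */
};

static bool add_packet(struct train_state *t, double now, double timeout)
{
    if (t->count == 0)
        t->deadline = now + timeout;  /* start the timer on the first packet */
    t->count++;
    /* Flush when the train is full or the timer has expired. */
    if (t->count >= t->target || now >= t->deadline) {
        t->count = 0;                 /* the train is handed off as one unit */
        return true;
    }
    return false;
}
```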
- In another prior art packet training method, described in U.S. Pat. No. 5,859,853 to David Glen Carlson and incorporated herein by reference, the system dynamically adjusts the number of packets sent in a train from a node to reflect the rate of packets arriving at a node in a network. A packet controller determines the optimum train length, that is, the optimum number of packets to send in a train. The node also has a timer interval, which is the maximum time to wait before sending the next train. The packet controller samples the packet arrival rate and calculates the elapsed time to receive a number of packets in a train. This elapsed time is referred to as a sampling interval. The packet controller calibrates the optimum train length when the sampling interval changes significantly from the historic sampling interval. This method provides dynamic training of packets but does not efficiently handle message latency, particularly for burst-mode communication traffic in a low CPU utilization environment.
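The calibration trigger in this prior-art method can be sketched as a deviation test. The 2x factor for a "significant" change is an assumption for illustration; the text does not specify one.

```c
#include <stdbool.h>

/* Hedged sketch of the prior-art calibration trigger: recalibrate the
 * optimum train length when the latest sampling interval deviates
 * significantly from the historic sampling interval. The factor of 2
 * is illustrative only. */
static bool needs_recalibration(double sampling_interval, double historic)
{
    if (historic <= 0.0)
        return true;                        /* no history yet: calibrate */
    double ratio = sampling_interval / historic;
    return ratio > 2.0 || ratio < 0.5;      /* "changes significantly" */
}
```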
- Packet training can save a significant amount of CPU load in a heavy communications workload environment. However, packet training can have a detrimental effect on the latency of messages sent over the network. When a message is sent with packet training, the message may be delayed while a packet train is being assembled. Thus there is a tradeoff between CPU load and communication latency when using packet training. Packet training decreases the load on the CPU but may increase the time for a message to be sent over the network due to the delay in building a train of packets. Without a way to optimize the tradeoff between CPU loading and network latency, the computer industry will continue to suffer from sub-optimum performance from a packet data network.
- According to the preferred embodiments, a computer data system includes a packet control mechanism that dynamically adjusts packet training depending on the utilization load on the processor. The dynamic adjustment of packet training can be to enable and disable packet training, or adjust the number of packets in the packet train. In preferred embodiments, the computer data system includes a processor utilization mechanism that indicates a load on a processor. When the packet control mechanism determines the load on the processor is above a threshold limit, the packet control mechanism reduces the processor load by processing the packets into a packet train. The training of the packets is stopped or reduced when the processor load is below a threshold in order to increase the data throughput on the network interface.
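The threshold behavior in this summary might be sketched as a periodic adjustment step. The structure and field names loosely echo the reference numerals (125, 126) but are otherwise assumptions, not the patent's implementation.

```c
/* Sketch of the packet control mechanism's periodic decision: above the
 * threshold 125, enable training up to the maximum train size 126 to cut
 * per-packet CPU overhead; below it, stop training to favor latency and
 * throughput. A current size of 0 means training is disabled. */
struct packet_controller {
    unsigned threshold;   /* threshold 125: utilization percent */
    int      max_train;   /* maximum train size 126 */
    int      cur_train;   /* active train size; 0 disables training */
};

static void adjust_training(struct packet_controller *pc, unsigned load)
{
    if (load > pc->threshold)
        pc->cur_train = pc->max_train;  /* heavy load: train packets */
    else
        pc->cur_train = 0;              /* light load: send immediately */
}
```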
- The foregoing and other features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings.
- The preferred embodiments of the present invention will hereinafter be described in conjunction with the appended drawings, where like designations denote like elements, and:
- FIG. 1 is a block diagram of a computer system according to a preferred embodiment;
- FIG. 2 is a more detailed block diagram of the computer system in FIG. 1 ;
- FIG. 3 depicts a data structure of an example packet, in accordance with the prior art;
- FIG. 4 depicts a data structure of an example packet train, in accordance with the prior art;
- FIG. 5 illustrates a method in accordance with a preferred embodiment; and
- FIG. 6 illustrates a method in accordance with another preferred embodiment.
- The present invention relates to dynamic packet training in a data packet network depending on the loading of the CPU. The Overview section immediately below is intended to provide an introductory explanation of packet training operations and history for individuals who need additional background in this area. Those who are skilled in the art may wish to skip this section and begin with the Detailed Description section instead.
- Overview
- Computer networks typically have multiple nodes connected by communications links, such as telephone networks. Each node typically includes a processing element, which processes data, and a communications-control unit, which controls the transmission and reception of data in the network across the communications link. The processing element can include one or more processors and memory.
- Nodes communicate with each other using packets, which are the basic units of information transfer. A packet contains data surrounded by control and routing information supplied by the various nodes in the network. A message from one node to another may be sent via a single packet, or the node can break the message up into several shorter packets with each packet containing a portion of the message. The communications-control unit at a node receives a packet from the communications link and sends the packet to the node's processing element for processing. Likewise, a node's processing element sends a packet to the node's communications-control unit, which transmits the packet across the network.
- Referring to
FIG. 3, the data structure for a typical packet 300 is depicted, which includes header section 302 and data section 304. Header section 302 contains control information that encapsulates data 304. For example, header section 302 might contain protocol, session, source, or destination information used for routing packet 300 over network 170 (FIG. 1). Data section 304 could contain electronic mail, files, documents, or any other information desired to be communicated over network 170. Data section 304 could also contain another entire packet, including header and data sections. Processing of packets has an overhead, or cost, associated with it. That is, it takes time to receive a packet at a node, to examine the packet's control information, and to determine what to do next with the packet. One way to reduce the packet overhead is to use a method called packet training. Packet training consolidates individual packets into a group, called a train, which reduces the overhead compared to processing the same number of packets individually, because a node can process the entire train of packets at once. - Referring to
FIG. 4, a data structure example of a packet train 400 is shown, which represents both the prior art and the packet train structure used by the preferred embodiments. Packet train 400 contains control information 402, the number of packets 404, a number of lengths 406 (length 406a, length 406b, and so forth to length 406c), and a number of packets 408 (packet 408a, packet 408b, and so forth to packet 408c). Control information 402 can specify, among other things, that the information that follows is part of a packet train. Number of packets 404 indicates how many packets are in the train. In this example, there are "n" packets in the train. Length 1 to length n are the lengths of packet 1 to packet n, respectively. Each of packet 408a to packet 408c can contain header and data, as shown in FIG. 3. Packet train 400 is transferred between nodes as one unit. - Preferred embodiments illustrate a computer data system that dynamically adjusts packet training for network communication traffic on a network node depending on the processor loading. The network could have computer systems as its nodes, or the network could have processors in a multi-processor system as its nodes, or the network could be a combination of processors and computer systems. In the preferred embodiment, a node has a packet controller that dynamically enables and disables packet training. A suitable computer system is described below.
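The train layout of FIG. 4 (control information 402, packet count 404, n lengths 406, then n packets 408) can be sketched roughly as follows. The 16-bit field widths and the control value are assumptions made for this sketch, not values specified by the patent.

```python
import struct

TRAIN_CONTROL = 0x5452  # stands in for control information 402; value assumed

def build_train(packets: list[bytes]) -> bytes:
    """Consolidate packets into one train: control, count, lengths, packets."""
    header = struct.pack("!HH", TRAIN_CONTROL, len(packets))
    lengths = b"".join(struct.pack("!H", len(p)) for p in packets)
    return header + lengths + b"".join(packets)

def parse_train(train: bytes) -> list[bytes]:
    """Process an entire train at once, recovering the individual packets."""
    control, count = struct.unpack_from("!HH", train, 0)
    assert control == TRAIN_CONTROL, "not a packet train"
    lengths = struct.unpack_from("!%dH" % count, train, 4)
    offset, packets = 4 + 2 * count, []
    for length in lengths:
        packets.append(train[offset:offset + length])
        offset += length
    return packets
```

Because the receiving node handles the whole train in one operation, the per-packet examination cost described above is paid once per train rather than once per packet.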
- Referring to
FIG. 1, a computer system 100 is shown in accordance with the preferred embodiments of the invention. Computer system 100 is an IBM eServer iSeries computer system. However, those skilled in the art will appreciate that the mechanisms and apparatus of the present invention apply equally to any computer system, regardless of whether the computer system is a complicated multi-user computing apparatus, a single-user workstation, or an embedded control system. As shown in FIG. 1, computer system 100 comprises a processor (central processing unit, or CPU) 110, a main memory 120, a mass storage interface 130, a display interface 140, and a network interface 150. These system components are interconnected through the use of a system bus 160. Mass storage interface 130 is used to connect mass storage devices, such as a direct access storage device 155, to computer system 100. One specific type of direct access storage device 155 is a readable and writable CD-RW drive, which may store data to and read data from a CD-RW 195. -
Processor 110 may be constructed from one or more microprocessors and/or integrated circuits. Processor 110 executes program instructions stored in main memory 120. Main memory 120 stores programs and data that processor 110 may access. When computer system 100 starts up, processor 110 initially executes the program instructions that make up operating system 122. Operating system 122 is a sophisticated program that manages the resources of computer system 100. Some of these resources are processor 110, main memory 120, mass storage interface 130, display interface 140, network interface 150, and system bus 160. - Although
computer system 100 is shown to contain only a single processor and a single system bus, those skilled in the art will appreciate that the present invention may be practiced using a computer system that has multiple processors and/or multiple buses. In addition, the interfaces that are used in the preferred embodiment each include separate, fully programmed microprocessors that are used to off-load compute-intensive processing from processor 110. However, those skilled in the art will appreciate that the present invention applies equally to computer systems that simply use I/O adapters to perform similar functions. -
Display interface 140 is used to directly connect one or more displays 165 to computer system 100. These displays 165, which may be non-intelligent (i.e., dumb) terminals or fully programmable workstations, are used to allow system administrators and users to communicate with computer system 100. Note, however, that while display interface 140 is provided to support communication with one or more displays 165, computer system 100 does not necessarily require a display 165, because all needed interaction with users and other processes may occur via network interface 150. -
Network interface 150 is used to connect other computer systems and/or workstations (e.g., 175 in FIG. 1) to computer system 100 across a network 170. The present invention applies equally no matter how computer system 100 may be connected to other computer systems and/or workstations, regardless of whether the network connection 170 is made using present-day analog and/or digital techniques or via some networking mechanism of the future. In addition, many different network protocols can be used to implement a network. These protocols are specialized computer programs that allow computers to communicate across network 170. TCP/IP (Transmission Control Protocol/Internet Protocol) is an example of a suitable network protocol. -
Main memory 120 in accordance with the preferred embodiments contains data 121, an operating system 122, an application 123, and a packet controller 124. Data 121 represents any data that serves as input to or output from any program in computer system 100. Operating system 122 is a multitasking operating system known in the industry as OS/400; however, those skilled in the art will appreciate that the spirit and scope of the present invention are not limited to any one operating system. The application 123 is any application software program operating in the system that processes data 121. The packet controller 124 operates in conjunction with the communications controller 152 in the network interface 150 to dynamically adjust packet training as described further below. Packet controller 124 includes one or more thresholds 125 for comparing to the utilization level of the processor, and one or more maximum train sizes 126 for setting the maximum number of packets in a packet train. The thresholds 125 and maximum train sizes 126 are described further below. -
Computer system 100 utilizes well-known virtual addressing mechanisms that allow the programs of computer system 100 to behave as if they have access to a single, large storage entity instead of access to multiple, smaller storage entities such as main memory 120 and DASD device 155. Therefore, while data 121, operating system 122, application 123, and the packet controller 124 are shown to reside in main memory 120, those skilled in the art will recognize that these items are not necessarily all completely contained in main memory 120 at the same time. It should also be noted that the term "memory" is used herein to refer generically to the entire virtual memory of computer system 100, and may include the virtual memory of other computer systems coupled to computer system 100. Thus, while in FIG. 1 the application 123 and the packet controller 124 are shown to reside in the main memory 120 of computer system 100, in actual implementation these software components may reside in separate machines and communicate over network 170. - At this point, it is important to note that while the present invention has been and will continue to be described in the context of a fully functional computer system, those skilled in the art will appreciate that the present invention is capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of computer-readable signal bearing media used to actually carry out the distribution. Examples of suitable computer-readable signal bearing media include: recordable-type media such as floppy disks and CD-RW (e.g., 195 of
FIG. 1), and transmission-type media such as digital and analog communications links. -
Network 170 may include a plurality of networks, such as local area networks, each of which includes a plurality of individual computers such as the computer 100 described above. Further, the computers may be implemented utilizing any suitable computer, such as the PS/2 computer, AS/400 computer, or RISC System/6000 computer, which are products of IBM Corporation located in Armonk, N.Y. "PS/2", "AS/400", and "RISC System/6000" are trademarks of IBM Corporation. A plurality of intelligent workstations (IWS) (not shown) coupled to a processor may also be utilized in such a network. Network 170 may also include mainframe computers, which may be coupled to network 170 by means of a suitable communications link. A mainframe computer may be implemented by utilizing an ESA/370 computer, an ESA/390 computer, or an AS/400 computer available from IBM Corporation. "ESA/370", "ESA/390", and "AS/400" are trademarks of IBM Corporation. - Referring to
FIG. 2, a more detailed schematic representation of computer system 100 is shown, which may be used for training packets according to preferred embodiments. Computer system 100 could be implemented in any of the computers on the network 170 described above, or in a gateway server or mainframe computer. Computer system 100 can contain both hardware and software to implement the packet control features described herein. -
Computer system 100 contains communications controller 152 connected to processor 110 and main memory 120 via system bus 160. Computer system 100 includes a processor utilization mechanism 112 capable of determining the level of utilization of the processor. Processor utilization mechanism 112 can be implemented in hardware or software. In a preferred embodiment, processor utilization mechanism 112 is implemented as an API call to the operating system, supported by hardware in the processor, that determines the ratio of run cycles to the total number of cycles. The utilization mechanism could use any manner of processor metric to determine processor utilization or processor loading, such as wait-state tasks divided by total cycles, or another suitable metric. -
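The run-cycle ratio used by the utilization mechanism can be sketched as follows; the function is a hypothetical stand-in, since the patent leaves the underlying metric and its sampling interface open.

```python
def processor_utilization(run_cycles: int, total_cycles: int) -> float:
    """Processor load as the percentage of cycles spent running tasks.

    Mirrors the preferred metric above: run cycles divided by total cycles.
    A different metric (e.g. wait-state tasks / total cycles) could be
    substituted without changing the callers.
    """
    if total_cycles <= 0:
        return 0.0  # no sample yet; report an idle processor
    return 100.0 * run_cycles / total_cycles
```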
Main memory 120 contains packet controller 124, which contains instructions capable of being executed by processor 110. In the alternative, packet controller 124 could be implemented by control circuitry through the use of logic gates, programmable logic devices, or other hardware components in lieu of a processor-based system. Packet controller 124 performs the packet-training method described herein below. Packet controller 124 includes one or more thresholds 125 for comparing to the utilization level of the processor. The thresholds are preferably selectable by the user or system programmer with an appropriate interface and are stored in a memory area of the packet controller 124. For example, the thresholds may be set as part of the process to change TCP attributes with an appropriate request to the operating system 122 (FIG. 1). - In preferred embodiments, the
packet controller 124 also includes one or more maximum train sizes 126 for setting the maximum number of packets in a packet train. Table 1 below shows an illustrative example of thresholds and the associated maximum train size 126, which specifies the maximum number of packets in a packet train. For a threshold of 30% utilization, a maximum train size of 0 is set, indicating that packet training is disabled. For a threshold of 50% utilization, a maximum train size of 50 is set (a moderate packet train). For a threshold of 90% utilization, a maximum train size of 100 is set (a large or maximum-sized packet train). The maximum train size is the number of packets that are accumulated before the packet train is sent. The maximum train size and the invention herein can also be combined with the prior-art method of using a timer to send out a packet train after a selected amount of time. The listed thresholds and associated packet train sizes are for illustration only. Any suitable number of thresholds could be used, each with an associated packet train size, to achieve a desired performance tradeoff. -
TABLE 1

Threshold (% utilization) | 30 | 50 | 90
---|---|---|---
Max Train Size | 0 | 50 | 100 (or maximum)

- Referring again to
FIG. 2, communications controller 152 contains communications front-end 204, communications packet controller 206, packet storage 208, and DMA (Direct Memory Access) controller 214, all connected via communications bus 212. DMA controller 214 is connected to DMA processor 210. Communications front-end 204 is connected to network 170, contains the circuitry for transmitting and receiving packets across network 170, and is employed to communicate with other nodes coupled to network 170. - When a packet is received by communications
front-end 204 from network 170, the packet is examined by communications packet controller 206 and stored in packet storage 208 before being sent to DMA processor 210. DMA processor 210 controls DMA controller 214. DMA controller 214 receives packets from communications bus 212 and sends the packets to processor 110 through system bus 160. The packets are then processed by packet controller 124 and stored in host memory 120. When host processor 110 desires to send packets to network 170, it transmits the packets from host memory 120 to packet storage 208 using DMA controller 214 and DMA processor 210. Communications packet controller 206 then uses communications front-end 204 to transmit the packets from packet storage 208 over communications bus 212 to network 170. - Although a specific hardware configuration is shown in
FIG. 2, a preferred embodiment of the present invention can apply to any hardware configuration that allows the training of packets, regardless of whether the hardware configuration is a complicated, multi-user computing apparatus, a single-user workstation, or a network appliance that does not have non-volatile storage of its own. -
FIG. 5 shows a method 500 of adjusting packet training according to a preferred embodiment. The method 500 starts periodically to check the processor utilization (step 510). The method may be started by a timer interrupt or some other suitable means to ensure the method runs with a suitable period. If the processor utilization is greater than a set threshold (step 520=yes), then packet training is enabled (step 530). If the processor utilization is less than or equal to the set threshold (step 520=no), then packet training is disabled (step 540). The threshold utilization percentage can be a parameter stored in memory that can be adjusted by a suitable software interface to the packet controller 124 (FIG. 1). -
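The flow of method 500 amounts to a single comparison per sampling period. A minimal sketch follows; the class name, the default 50% threshold, and the periodic-check interface are assumptions for illustration.

```python
class PacketController:
    """Sketch of FIG. 5: enable packet training above a threshold, else disable."""

    def __init__(self, threshold: float = 50.0):
        self.threshold = threshold      # thresholds 125; user-adjustable
        self.training_enabled = False

    def periodic_check(self, utilization: float) -> bool:
        # step 520: compare processor load to the set threshold
        # step 530 / step 540: enable or disable packet training accordingly
        self.training_enabled = utilization > self.threshold
        return self.training_enabled
```

In a real system, periodic_check would be driven by the timer interrupt described above, with utilization supplied by the processor utilization mechanism.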
FIG. 6 shows another method 600 of adjusting packet training according to a preferred embodiment. The method 600 starts periodically to check the processor utilization (step 610). The method may be started as described above. If the processor utilization is less than a first set threshold (step 620=yes), then packet training is disabled (step 630). If the processor utilization is greater than or equal to the first set threshold (step 620=no), then the method continues with step 640. If the processor utilization is less than a second set threshold (step 640=yes), then the maximum packet training is set to a first level (step 650). If the processor utilization is greater than or equal to the second set threshold (step 640=no), then the maximum packet training is set to a second level (step 660). The threshold utilization can be a parameter stored in memory that can be adjusted by a suitable software interface to the packet controller 124 (FIG. 1). Similarly, other embodiments could include additional thresholds and corresponding maximum levels of packet training. - As described above, there is a tradeoff between CPU load and communication latency when using packet training. Packet training decreases the load on the CPU but may increase the delay for a message to be sent over the network due to the delay in building a train of packets. The present invention provides the computer industry with an improved way to optimize the tradeoff between CPU loading and network latency to improve overall performance in a packet data network.
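Method 600 generalizes naturally to any number of thresholds. The sketch below uses the Table 1 values (thresholds 30/50/90 mapped to train sizes 0/50/100) and assumes each threshold marks the lower edge of its band; that reading of the table is an interpretation, not something the patent states explicitly.

```python
# (threshold %, max train size) pairs from Table 1; a size of 0 disables training
TRAIN_SIZE_TABLE = [(30.0, 0), (50.0, 50), (90.0, 100)]

def max_train_size(utilization: float) -> int:
    """Pick the train size for the highest threshold the processor load has reached."""
    size = 0  # below every threshold: packet training disabled
    for threshold, table_size in TRAIN_SIZE_TABLE:
        if utilization >= threshold:
            size = table_size
    return size
```

With two entries this reduces exactly to the two-threshold flow of FIG. 6 (steps 620-660); adding rows to TRAIN_SIZE_TABLE gives the "additional thresholds and corresponding maximum levels" variant without changing the code.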
- One skilled in the art will appreciate that many variations are possible within the scope of the present invention. Thus, while the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that these and other changes in form and details may be made therein without departing from the spirit and scope of the invention.
Claims (6)
1) A computer implemented method for packet training comprising the steps of:
determining load on a processor;
sending a plurality of data packets in a packet train over a network; and
dynamically adjusting packet training on the network depending on the load on the processor to enable packet training when the load on the processor is above a threshold and disable packet training when the load on the processor is below the threshold.
2) The method of claim 1 further comprising the step of enabling and disabling packet training depending on the load on the processor.
3) The method of claim 1 further comprising the step of adjusting size of the packet train depending on the load on the processor.
4) The method of claim 3 comprising the step of comparing the load on the processor to a plurality of predetermined thresholds and setting a maximum size of a packet train depending on a corresponding predetermined threshold.
5) The method of claim 4 wherein the predetermined threshold is set by a user of the computer data system.
6) The method of claim 1 wherein the load on the processor is determined using an API call.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/147,778 US20080259821A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/106,011 US7480238B2 (en) | 2005-04-14 | 2005-04-14 | Dynamic packet training |
US12/147,778 US20080259821A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/106,011 Continuation US7480238B2 (en) | 2005-04-14 | 2005-04-14 | Dynamic packet training |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080259821A1 true US20080259821A1 (en) | 2008-10-23 |
Family
ID=37078179
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/106,011 Expired - Fee Related US7480238B2 (en) | 2005-04-14 | 2005-04-14 | Dynamic packet training |
US12/147,778 Abandoned US20080259821A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
US12/147,793 Abandoned US20080259822A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
US12/147,766 Abandoned US20080263226A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/106,011 Expired - Fee Related US7480238B2 (en) | 2005-04-14 | 2005-04-14 | Dynamic packet training |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/147,793 Abandoned US20080259822A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
US12/147,766 Abandoned US20080263226A1 (en) | 2005-04-14 | 2008-06-27 | Dynamic packet training |
Country Status (2)
Country | Link |
---|---|
US (4) | US7480238B2 (en) |
CN (1) | CN1848813A (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100195538A1 (en) * | 2009-02-04 | 2010-08-05 | Merkey Jeffrey V | Method and apparatus for network packet capture distributed storage system |
CA2619141C (en) | 2004-12-23 | 2014-10-21 | Solera Networks, Inc. | Method and apparatus for network packet capture distributed storage system |
US7480238B2 (en) * | 2005-04-14 | 2009-01-20 | International Business Machines Corporation | Dynamic packet training |
US8521732B2 (en) | 2008-05-23 | 2013-08-27 | Solera Networks, Inc. | Presentation of an extracted artifact based on an indexing technique |
US8625642B2 (en) | 2008-05-23 | 2014-01-07 | Solera Networks, Inc. | Method and apparatus of network artifact indentification and extraction |
US8004998B2 (en) * | 2008-05-23 | 2011-08-23 | Solera Networks, Inc. | Capture and regeneration of a network data using a virtual software switch |
US20090292736A1 (en) * | 2008-05-23 | 2009-11-26 | Matthew Scott Wood | On demand network activity reporting through a dynamic file system and method |
WO2011060368A1 (en) * | 2009-11-15 | 2011-05-19 | Solera Networks, Inc. | Method and apparatus for storing and indexing high-speed network traffic data |
US20110125748A1 (en) * | 2009-11-15 | 2011-05-26 | Solera Networks, Inc. | Method and Apparatus for Real Time Identification and Recording of Artifacts |
US8849991B2 (en) | 2010-12-15 | 2014-09-30 | Blue Coat Systems, Inc. | System and method for hypertext transfer protocol layered reconstruction |
US8666985B2 (en) | 2011-03-16 | 2014-03-04 | Solera Networks, Inc. | Hardware accelerated application-based pattern matching for real time classification and recording of network traffic |
US9824131B2 (en) * | 2012-03-15 | 2017-11-21 | Hewlett Packard Enterprise Development Lp | Regulating a replication operation |
US9185578B2 (en) * | 2012-08-24 | 2015-11-10 | Ascom Network Testing Ab | Systems and methods for measuring available bandwidth in mobile telecommunications networks |
US9432458B2 (en) | 2013-01-09 | 2016-08-30 | Dell Products, Lp | System and method for enhancing server media throughput in mismatched networks |
CN105324765B (en) | 2013-05-16 | 2019-11-08 | 慧与发展有限责任合伙企业 | Selection is used for the memory block of duplicate removal complex data |
US10592347B2 (en) | 2013-05-16 | 2020-03-17 | Hewlett Packard Enterprise Development Lp | Selecting a store for deduplicated data |
CN106790162B (en) * | 2016-12-29 | 2020-07-03 | 中国科学院计算技术研究所 | Virtual network optimization method and system |
JP2021512567A (en) * | 2018-01-26 | 2021-05-13 | オパンガ ネットワークス,インコーポレイテッド | Systems and methods for identifying candidate flows in data packet networks |
Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6003089A (en) * | 1997-03-31 | 1999-12-14 | Siemens Information And Communication Networks, Inc. | Method for constructing adaptive packet lengths in a congested network |
US6298070B1 (en) * | 1998-05-07 | 2001-10-02 | International Business Machines Corporation | Packet training with an adjustable optimum number of packets |
US6304548B1 (en) * | 1997-07-17 | 2001-10-16 | Siemens Information And Communication Networks, Inc. | Apparatus and method for preventing network rerouting |
US20020145974A1 (en) * | 2001-04-06 | 2002-10-10 | Erlang Technology, Inc. | Method and apparatus for high speed packet switching using train packet queuing and providing high scalability |
US6597699B1 (en) * | 1999-09-28 | 2003-07-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Quality of service management in a packet data router system having multiple virtual router instances |
US6609157B2 (en) * | 1998-09-22 | 2003-08-19 | Microsoft Corporation | Method and apparatus for bundling messages at the expiration of a time-limit |
US6614808B1 (en) * | 1999-09-02 | 2003-09-02 | International Business Machines Corporation | Network packet aggregation |
US20030210701A1 (en) * | 2000-11-30 | 2003-11-13 | Koichi Saiki | Network supervisory control system |
US6687220B1 (en) * | 1999-09-28 | 2004-02-03 | Ericsson Inc. | Quality of service management in a packet data router having multiple virtual router instances |
US6687735B1 (en) * | 2000-05-30 | 2004-02-03 | Tranceive Technologies, Inc. | Method and apparatus for balancing distributed applications |
US6738371B1 (en) * | 1999-09-28 | 2004-05-18 | Ericsson Inc. | Ingress data queue management in a packet data router |
US6772217B1 (en) * | 2000-08-23 | 2004-08-03 | International Business Machines Corporation | Internet backbone bandwidth enhancement by initiating an additional data stream when individual bandwidth are approximately equal to the backbone limit |
US6886040B1 (en) * | 1998-10-28 | 2005-04-26 | Cisco Technology, Inc. | Codec-independent technique for modulating bandwidth in packet network |
US6889257B1 (en) * | 1999-12-03 | 2005-05-03 | Realnetworks, Inc. | System and method of transmitting data packets |
US6891852B1 (en) * | 1999-04-08 | 2005-05-10 | Lucent Technologies Inc. | Method of dynamically adjusting the duration of a burst transmission in wireless communication systems |
US6975624B1 (en) * | 1997-10-14 | 2005-12-13 | Kokusai Denshin Denwa Co., Ltd. | Network interworking device for LAN/internet |
US6996059B1 (en) * | 1999-05-19 | 2006-02-07 | Shoretel, Inc | Increasing duration of information in a packet to reduce processing requirements |
US7028182B1 (en) * | 1999-02-19 | 2006-04-11 | Nexsys Electronics, Inc. | Secure network system and method for transfer of medical information |
US7031338B2 (en) * | 2001-08-27 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | System and method for the consolidation of data packets |
US20060215579A1 (en) * | 2005-03-28 | 2006-09-28 | Nadeau Thomas D | Method and apparatus for the creation and maintenance of a self-adjusting repository of service level diagnostics test points for network based VPNs |
US7274711B2 (en) * | 2000-06-21 | 2007-09-25 | Fujitsu Limited | Network relay apparatus and method of combining packets |
US7277384B1 (en) * | 2000-04-06 | 2007-10-02 | Cisco Technology, Inc. | Program and method for preventing overload in a packet telephony gateway |
US7287097B1 (en) * | 2001-08-07 | 2007-10-23 | Good Technology, Inc. | System and method for full wireless synchronization of a data processing apparatus with a messaging system |
US20070258414A1 (en) * | 2004-03-02 | 2007-11-08 | Hong Cheng | System and Method for Negotiation of Wlan Entity |
US7356021B2 (en) * | 2004-09-29 | 2008-04-08 | Texas Instruments Incorporated | Increasing the throughput of voice over internet protocol data on wireless local area networks |
US7567576B2 (en) * | 1999-12-30 | 2009-07-28 | Cisco Technology, Inc. | Method and apparatus for throttling audio packets according to gateway processing capacity |
US7581077B2 (en) * | 1997-10-30 | 2009-08-25 | Commvault Systems, Inc. | Method and system for transferring data in a storage operation |
US7602730B1 (en) * | 2002-09-12 | 2009-10-13 | Juniper Networks, Inc. | Systems and methods for transitioning between fragmentation modes |
US7724775B2 (en) * | 2006-02-10 | 2010-05-25 | Nec Computer Techno, Ltd. | Data transmission circuit and method for controlling the data transmission circuit |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5237675A (en) * | 1990-06-04 | 1993-08-17 | Maxtor Corporation | Apparatus and method for efficient organization of compressed data on a hard disk utilizing an estimated compression factor |
US6092171A (en) * | 1991-09-16 | 2000-07-18 | Advanced Micro Devices, Inc. | System and method for using a memory management unit to reduce memory requirements |
US5729228A (en) * | 1995-07-06 | 1998-03-17 | International Business Machines Corp. | Parallel compression and decompression using a cooperative dictionary |
US5864859A (en) * | 1996-02-20 | 1999-01-26 | International Business Machines Corporation | System and method of compression and decompression using store addressing |
US5859853A (en) * | 1996-06-21 | 1999-01-12 | International Business Machines Corporation | Adaptive packet training |
US5761536A (en) * | 1996-08-21 | 1998-06-02 | International Business Machines Corporation | System and method for reducing memory fragmentation by assigning remainders to share memory blocks on a best fit basis |
US6000009A (en) * | 1997-05-06 | 1999-12-07 | International Business Machines Corporation | Method and apparatus for allocation of disk memory space for compressed data records |
US6681305B1 (en) * | 2000-05-30 | 2004-01-20 | International Business Machines Corporation | Method for operating system support for memory compression |
US6564305B1 (en) * | 2000-09-20 | 2003-05-13 | Hewlett-Packard Development Company Lp | Compressing memory management in a device |
US6877081B2 (en) * | 2001-02-13 | 2005-04-05 | International Business Machines Corporation | System and method for managing memory compression transparent to an operating system |
US6535238B1 (en) * | 2001-10-23 | 2003-03-18 | International Business Machines Corporation | Method and apparatus for automatically scaling processor resource usage during video conferencing |
US7391769B2 (en) * | 2003-06-27 | 2008-06-24 | Lucent Technologies Inc. | Packet aggregation for real time services on packet data networks |
US7814485B2 (en) * | 2004-12-07 | 2010-10-12 | Intel Corporation | System and method for adaptive power management based on processor utilization and cache misses |
US7480238B2 (en) * | 2005-04-14 | 2009-01-20 | International Business Machines Corporation | Dynamic packet training |
-
2005
- 2005-04-14 US US11/106,011 patent/US7480238B2/en not_active Expired - Fee Related
-
2006
- 2006-03-31 CN CNA200610067055XA patent/CN1848813A/en active Pending
-
2008
- 2008-06-27 US US12/147,778 patent/US20080259821A1/en not_active Abandoned
- 2008-06-27 US US12/147,793 patent/US20080259822A1/en not_active Abandoned
- 2008-06-27 US US12/147,766 patent/US20080263226A1/en not_active Abandoned
Patent Citations (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6003089A (en) * | 1997-03-31 | 1999-12-14 | Siemens Information And Communication Networks, Inc. | Method for constructing adaptive packet lengths in a congested network |
US6304548B1 (en) * | 1997-07-17 | 2001-10-16 | Siemens Information And Communication Networks, Inc. | Apparatus and method for preventing network rerouting |
US6975624B1 (en) * | 1997-10-14 | 2005-12-13 | Kokusai Denshin Denwa Co., Ltd. | Network interworking device for LAN/internet |
US7581077B2 (en) * | 1997-10-30 | 2009-08-25 | Commvault Systems, Inc. | Method and system for transferring data in a storage operation |
US6298070B1 (en) * | 1998-05-07 | 2001-10-02 | International Business Machines Corporation | Packet training with an adjustable optimum number of packets |
US6609157B2 (en) * | 1998-09-22 | 2003-08-19 | Microsoft Corporation | Method and apparatus for bundling messages at the expiration of a time-limit |
US6886040B1 (en) * | 1998-10-28 | 2005-04-26 | Cisco Technology, Inc. | Codec-independent technique for modulating bandwidth in packet network |
US7028182B1 (en) * | 1999-02-19 | 2006-04-11 | Nexsys Electronics, Inc. | Secure network system and method for transfer of medical information |
US6891852B1 (en) * | 1999-04-08 | 2005-05-10 | Lucent Technologies Inc. | Method of dynamically adjusting the duration of a burst transmission in wireless communication systems |
US6996059B1 (en) * | 1999-05-19 | 2006-02-07 | Shoretel, Inc | Increasing duration of information in a packet to reduce processing requirements |
US6614808B1 (en) * | 1999-09-02 | 2003-09-02 | International Business Machines Corporation | Network packet aggregation |
US6597699B1 (en) * | 1999-09-28 | 2003-07-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Quality of service management in a packet data router system having multiple virtual router instances |
US6738371B1 (en) * | 1999-09-28 | 2004-05-18 | Ericsson Inc. | Ingress data queue management in a packet data router |
US6687220B1 (en) * | 1999-09-28 | 2004-02-03 | Ericsson Inc. | Quality of service management in a packet data router having multiple virtual router instances |
US6889257B1 (en) * | 1999-12-03 | 2005-05-03 | Realnetworks, Inc. | System and method of transmitting data packets |
US7451228B2 (en) * | 1999-12-03 | 2008-11-11 | Realnetworks, Inc. | System and method of transmitting data packets |
US7567576B2 (en) * | 1999-12-30 | 2009-07-28 | Cisco Technology, Inc. | Method and apparatus for throttling audio packets according to gateway processing capacity |
US7277384B1 (en) * | 2000-04-06 | 2007-10-02 | Cisco Technology, Inc. | Program and method for preventing overload in a packet telephony gateway |
US6687735B1 (en) * | 2000-05-30 | 2004-02-03 | Tranceive Technologies, Inc. | Method and apparatus for balancing distributed applications |
US7274711B2 (en) * | 2000-06-21 | 2007-09-25 | Fujitsu Limited | Network relay apparatus and method of combining packets |
US6772217B1 (en) * | 2000-08-23 | 2004-08-03 | International Business Machines Corporation | Internet backbone bandwidth enhancement by initiating an additional data stream when individual bandwidth are approximately equal to the backbone limit |
US20030210701A1 (en) * | 2000-11-30 | 2003-11-13 | Koichi Saiki | Network supervisory control system |
US20020145974A1 (en) * | 2001-04-06 | 2002-10-10 | Erlang Technology, Inc. | Method and apparatus for high speed packet switching using train packet queuing and providing high scalability |
US7287097B1 (en) * | 2001-08-07 | 2007-10-23 | Good Technology, Inc. | System and method for full wireless synchronization of a data processing apparatus with a messaging system |
US7031338B2 (en) * | 2001-08-27 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | System and method for the consolidation of data packets |
US7602730B1 (en) * | 2002-09-12 | 2009-10-13 | Juniper Networks, Inc. | Systems and methods for transitioning between fragmentation modes |
US20070258414A1 (en) * | 2004-03-02 | 2007-11-08 | Hong Cheng | System and Method for Negotiation of Wlan Entity |
US7356021B2 (en) * | 2004-09-29 | 2008-04-08 | Texas Instruments Incorporated | Increasing the throughput of voice over internet protocol data on wireless local area networks |
US20060215579A1 (en) * | 2005-03-28 | 2006-09-28 | Nadeau Thomas D | Method and apparatus for the creation and maintenance of a self-adjusting repository of service level diagnostics test points for network based VPNs |
US7724775B2 (en) * | 2006-02-10 | 2010-05-25 | Nec Computer Techno, Ltd. | Data transmission circuit and method for controlling the data transmission circuit |
Also Published As
Publication number | Publication date |
---|---|
CN1848813A (en) | 2006-10-18 |
US20060233118A1 (en) | 2006-10-19 |
US20080263226A1 (en) | 2008-10-23 |
US20080259822A1 (en) | 2008-10-23 |
US7480238B2 (en) | 2009-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7480238B2 (en) | Dynamic packet training | |
US10015104B2 (en) | Processing received data | |
US5418912A (en) | System and method for controlling buffer transmission of data packets by limiting buffered data packets in a communication session | |
US7474616B2 (en) | Congestion indication for flow control | |
EP1784735B1 (en) | Apparatus and method for supporting memory management in an offload of network protocol processing | |
EP1782602B1 (en) | Apparatus and method for supporting connection establishment in an offload of network protocol processing | |
US6961309B2 (en) | Adaptive TCP delayed acknowledgment | |
US6167029A (en) | System and method for integrated data flow control | |
US6477143B1 (en) | Method and apparatus for packet network congestion avoidance and control | |
US7493427B2 (en) | Apparatus and method for supporting received data processing in an offload of network protocol processing | |
US5175537A (en) | Method and apparatus for scheduling access to a CSMA communication medium | |
US20120054362A1 (en) | Mechanism for autotuning mass data transfer from a sender to a receiver over parallel connections | |
EP0521892A1 (en) | Method and apparatus for scheduling access to a csma communication medium | |
MXPA06010111A (en) | Method and apparatus for isochronous datagram delivery over contention-based data link. | |
US10324513B2 (en) | Control of peripheral device data exchange based on CPU power state | |
US20200252337A1 (en) | Data transmission method, device, and computer storage medium | |
US20040047361A1 (en) | Method and system for TCP/IP using generic buffers for non-posting TCP applications | |
US20070291782A1 (en) | Acknowledgement filtering | |
US6298070B1 (en) | Packet training with an adjustable optimum number of packets | |
US6625149B1 (en) | Signaled receiver processing methods and apparatus for improved protocol processing | |
CN106375240A (en) | Ethernet packet forwarding method and system among multiple Ethernet ports | |
US20060031474A1 (en) | Maintaining reachability measures | |
Lu et al. | Performance-adaptive prediction-based transport control over dedicated links | |
JPS63287233A (en) | Message transfer system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |