CN100399779C - A data transmission method with bandwidth prediction - Google Patents

A data transmission method with bandwidth prediction

Info

Publication number
CN100399779C
CN100399779C CNB200310121859XA CN200310121859A
Authority
CN
China
Prior art keywords
packet
data
rtt
buffer pool
network bandwidth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB200310121859XA
Other languages
Chinese (zh)
Other versions
CN1630290A (en)
Inventor
颜毅强
孙成昆
赵牧
赵俊先
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CNB200310121859XA priority Critical patent/CN100399779C/en
Publication of CN1630290A publication Critical patent/CN1630290A/en
Application granted granted Critical
Publication of CN100399779C publication Critical patent/CN100399779C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The present invention discloses a data transmission method with bandwidth prediction, comprising a data-acquiring thread and a data-sending thread. In the method, several buffer pools of different sizes are opened in memory. In the data-acquiring thread, the data packet currently to be sent is first obtained, and space for it is requested in the first buffer pool; if the request fails, space is requested in the second buffer pool; if the request still fails, a waiting time is obtained, and after the waiting time has elapsed the buffer pools are requested again. If a request succeeds, the method judges whether the transmit queue is full; if it is full, a waiting time is obtained, and after the waiting time has elapsed the transmit queue is checked again; if it is not full, the index of the currently obtained data packet is sent to the transmit queue as a task node, and the data-sending thread completes the transmission of the packet. The method improves the real-time performance and continuity of data transmission and raises transmission efficiency.

Description

A data transmission method with bandwidth prediction
Technical field
The present invention relates to data transmission technology, and in particular to a data transmission method with bandwidth prediction.
Background technology
As wireless local area network (WLAN) technology becomes increasingly widespread, wireless data transmission has been applied in many fields. For example, projector manufacturers have applied wireless data transmission to projectors one after another, developing wireless projectors that greatly facilitate users.
A common wireless network protocol is IEEE 802.11, proposed by the Institute of Electrical and Electronics Engineers (IEEE). Taking the transmission of image data as an example, two wireless data transmission methods are currently in common use:
Prior art one: blocking transmission. The flow of blocking transmission is shown in steps 101 to 104 of Fig. 1. In this scheme, the acquisition and transmission of image data are handled in the same thread. The processing thread first obtains the data currently to be sent and then transmits it; after transmission it must wait for the receiver's acknowledgment. Because the time at which the receiver acknowledges is unknown, the program sets a delay of fixed length, and only after the receiver acknowledges can the thread continue to obtain data. The drawback of this scheme is that the thread is blocked while sending, i.e. no data can be obtained during transmission; only after transmission finishes and the acknowledgment is received can new data be obtained and sent. The continuity of data acquisition cannot be guaranteed, efficiency is very low, and frame loss is severe. Moreover, each time the program begins a new delay, even if the reply has already returned, the thread still waits out the full fixed delay, so there is meaningless waiting time and efficiency is poor.
Prior art two: asynchronous transmission. The asynchronous scheme uses two threads: a data-acquiring thread and a data-sending thread. Fig. 2 is a flowchart of the data-acquiring thread, shown as steps 201 to 205. In the data-acquiring thread, the data packet currently to be sent is first obtained, and then buffer-pool space is requested for it; if the request fails, it is retried repeatedly until it succeeds, whereupon the index of the packet is sent to the transmit queue as a task node. Because the transmit queue cannot be unbounded (otherwise transmission lag would become severe), if the number of task nodes in the transmit queue has reached its upper limit (empirically, an upper limit of 16 nodes is typical), the index of the packet is sent to the transmit queue again and again until the transmit queue releases a task node; the index is then added to the queue and the data-sending thread is started, which completes the transmission task via the Transmission Control Protocol (TCP). Fig. 3 is a flowchart of the data-sending thread; as shown in Fig. 3, the flow comprises the following steps:
Step 301: obtain a transmission task node from the transmit queue;
Step 302: read from the buffer pool the data packet corresponding to the task node obtained in step 301;
Step 303: transmit the data packet read in step 302;
Step 304: release the buffer-pool space occupied by the data packet in step 303;
Step 305: release the task node in the transmit queue.
With the method of prior art two, obtaining data and sending data do not block each other, which improves efficiency relative to prior art one. However, a data packet enters a buffer pool before being sent, and the storage size of a buffer pool is generally set to the maximum packet size. For example, in image-data transmission, since an image packet is generally at most 2 megabytes (MB), each buffer pool is generally 2 MB. If a buffer pool is already filled with packets, the request fails and the packet must wait; only when packets in the buffer pool have been sent and enough space has been freed can a new image packet enter. Prior art two uses two buffer pools of identical size. The drawback of this scheme is that when packet sizes differ greatly and several large packets are generated consecutively, each large packet occupies a buffer pool alone. At that point every newly produced packet, large or small, must wait in line, and large packets take a long time to transmit, so many new packets must wait a long time before entering a transmit buffer pool. For example, when sending image data with two 2 MB buffer pools and a packet queue of sizes 100K, 200K, 200K, 1.6M, 1.5M, 400K, 100K..., the 100K, 200K and 200K packets are first assigned to the first buffer pool and the 1.6M packet to the second; the 1.5M packet then enters the first buffer pool, so both pools quickly become saturated, and every subsequent packet, regardless of size, must wait, and for a long time, before entering a pool. This equal-size double-buffer-pool mechanism therefore causes large numbers of packets to queue, and efficiency is low.
Prior art two does not provide a method of obtaining the waiting time. Therefore, when a buffer pool lacks sufficient space, buffer-pool requests must be repeated; likewise, when the transmit queue is full, the packet index must be sent to the transmit queue repeatedly. This consumes a large amount of processor resources.
In a wireless network, because the wireless signal is very unstable, the actual network bandwidth must be predicted frequently. Existing underlying protocols provide a method of predicting the bandwidth and calculating the waiting time; the bandwidth is predicted as in formula (1):
RTT = total transmitting time / total bytes (1)
The network bandwidth can be represented by the round-trip time (RTT, Round Trip Time), the time elapsed from a packet being sent until the receiver's acknowledgment is received. The total transmitting time is a set period of time, and the total bytes is the sum of the byte counts of all packets sent during that period. The waiting time T is then obtained from formula (2):
T = sizeof(data) × RTT / (2n) (2)
In formula (2), sizeof(data) is the total byte count of all packets in the transmit queue, n is the number of nodes in the transmit queue, and RTT is the RTT obtained from formula (1).
The RTT of formula (1) is in fact the average bandwidth of the network. Because wireless signals are subject to external interference, the bandwidth of a wireless network varies considerably in real time, and the average bandwidth often does not reflect current network conditions. This computation therefore often fails to predict the current bandwidth correctly, so the waiting time T cannot be obtained accurately and packets cannot be put into the buffer pool in a timely way; efficiency is low, and if a packet is large, subsequent packets may be seriously delayed and frame-loss errors may even occur.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a data transmission method with bandwidth prediction that, during data transmission, reduces the processor resources consumed and reduces the number of packets that must wait, thereby improving the real-time performance and continuity of data transmission and raising transmission efficiency.
To achieve these goals, the technical scheme of the present invention is realized as follows:
A data transmission method with bandwidth prediction comprises a data-acquiring thread and a data-sending thread, wherein at least two buffer pools of different sizes are opened in the memory of the transmitting device, and the method comprises the following steps:
A. judge whether there is a data packet to send; if so, obtain a data packet to send; otherwise, return to step A and continue judging;
B. request space for the obtained packet in the first buffer pool; if the request succeeds, execute step F; otherwise, execute step C;
C. request space for the obtained packet in the next buffer pool; if the request succeeds, execute step F; otherwise, execute step D;
D. judge whether there is another buffer pool; if so, return to step C; if not, execute step E;
E. obtain the waiting time; after the waiting time has elapsed, return to step B;
F. judge whether the transmit queue is full; if full, execute step G; otherwise, execute step H;
G. obtain the waiting time; after the waiting time has elapsed, return to step F;
H. send the index of the obtained packet to the transmit queue as a task node, complete the transmission of the packet via the data-sending thread, and return to step A.
The buffer pools are two buffer pools of different sizes, and the size of the large buffer pool is the size of the largest data packet in the data transmission process.
The size of the small buffer pool is half the size of the large buffer pool.
The method of obtaining the waiting time is: predict the current network bandwidth, the network bandwidth being described by the round-trip time (RTT), and then calculate the waiting time according to T = sizeof(data) × RTT_cur / (2 × n), where T is the waiting time, sizeof(data) is the total byte count of all packets in the transmit queue, n is the number of nodes in the transmit queue, and RTT_cur is the current network bandwidth.
The method of predicting the current network bandwidth is: obtain the current network bandwidth iteratively according to RTT_cur = α × RTT_prev1 + (1 − α) × RTT_prev2, where RTT_prev1 is the network bandwidth when the previous packet was sent, RTT_prev2 is the network bandwidth when the packet before that was sent, and α is a weight coefficient with a value between 0 and 1.
In the calculation of RTT_cur, the values of the initial two RTTs are: the time used to send the packet / the total bytes of that packet.
The current network bandwidth is the network bandwidth when the current packet is sent.
The data-sending thread is realized using the Transmission Control Protocol (TCP).
Because the method of the present invention adopts a double-buffer-pool mechanism with pools of different sizes, it effectively reduces the waiting time of new packets before they enter a buffer pool, avoids large packets blocking the buffer pools, allows packets of all sizes to enter the transmit buffer pools in time, and reasonably reduces queuing time. In addition, the method uses a dynamic bandwidth-prediction technique to predict the bandwidth and obtain the waiting time, thereby avoiding repeated meaningless buffer-pool requests and reducing the processor resources consumed. The dynamic bandwidth-prediction technique is more scientific and reasonable than the existing bandwidth-prediction technique: it better reflects the current network bandwidth, calculates it more accurately, reflects current network conditions more promptly, obtains an accurate waiting time from the predicted bandwidth, and then puts packets into the transmit buffer pools in time, thus avoiding meaningless waiting, reducing transmission delay, and lowering the probability of frame loss. In summary, when the network bandwidth is unstable and packet sizes differ greatly, the method of the present invention improves the real-time performance and continuity of data transmission and further raises transmission efficiency.
Description of drawings
Fig. 1 is a flowchart of the wireless data transmission method of prior art one;
Fig. 2 is a flowchart of the data-acquiring thread of prior art two;
Fig. 3 is a flowchart of the data-sending thread of prior art two;
Fig. 4 is a flowchart of the data-acquiring thread of the embodiment of the present invention.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
In the present embodiment, the method of the present invention is illustrated with wireless image-data transmission as an example. The method uses two threads to complete wireless data transmission: a data-acquiring thread and a data-sending thread. In the data-acquiring thread, to improve the efficiency of wireless data transmission, the method adopts a multi-buffer-pool mechanism with pools of different sizes, i.e. several buffer pools of different sizes are opened in memory. Given the characteristics of image data, in this embodiment two buffer pools are opened in the memory of the transmitting device: the size of the first buffer pool is half that of the second, and the size of the second buffer pool is the size of the largest data packet. Taking image-data transmission as the example, the largest image packet is generally about 2 MB, so in this embodiment the first buffer pool is 1 MB and the second is 2 MB.
After the packet currently to be sent is obtained, buffer-pool space is requested for it: first in the first buffer pool; if the first pool is full or lacks sufficient space, then in the second pool; if no space can be obtained in the second pool either, the packet must wait. After a packet currently being transmitted finishes, the buffer-pool space it occupies is released, and the pool again has room for new packets. Adopting two pools of different sizes effectively reduces packet queuing time: when packet sizes differ greatly and several large packets are generated consecutively, large packets can only be placed in the large pool, while small packets can be placed in either pool. This avoids the situation in which both pools are occupied by large packets, reduces the number of packets that must queue, lets subsequent small packets enter a pool in time, and lowers the probability of frame loss. For example, with a packet queue of sizes 100K, 200K, 200K, 1.6M, 1.5M, 400K, 100K..., the 100K, 200K and 200K packets are first assigned to the first buffer pool and the 1.6M packet to the second; the 1.5M packet must then wait to request the second buffer pool, but the 400K and 100K packets behind it can still enter the first pool, so only the 1.5M packet has to wait. This small-and-large double-buffer-pool design effectively reduces the waiting time of new packets before they enter a pool, avoids large packets blocking the pools, allows packets of all sizes to enter the transmit buffer pools in time, and reasonably reduces queuing time.
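The small-and-large pool behaviour described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `BufferPool` class, `place_packet` helper, and the 1 MB / 2 MB sizes are assumptions taken from the embodiment's example.

```python
# Hypothetical sketch of the unequal ("small and large") double-buffer-pool
# allocation. A packet tries the small pool first, then the large pool;
# if neither has room, it must wait (returned as None here).

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity   # total bytes in this pool
        self.used = 0              # bytes currently occupied

    def try_alloc(self, size):
        """Grant space only if the packet fits the pool's free space."""
        if size <= self.capacity - self.used:
            self.used += size
            return True
        return False

    def release(self, size):
        """Free the space after the packet has been transmitted."""
        self.used -= size

def place_packet(pools, size):
    """Try each pool in order (small first); return pool index or None."""
    for i, pool in enumerate(pools):
        if pool.try_alloc(size):
            return i
    return None  # caller must wait, then retry

# 1 MB small pool and 2 MB large pool, as in the embodiment
pools = [BufferPool(1_000_000), BufferPool(2_000_000)]
queue = [100_000, 200_000, 200_000, 1_600_000, 1_500_000, 400_000, 100_000]
placements = [place_packet(pools, s) for s in queue]
```

Running this on the example queue reproduces the text: the 1.6M packet lands in the large pool, only the 1.5M packet is left waiting, and the trailing 400K and 100K packets still fit in the small pool.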
When the buffer pools lack sufficient space, the bandwidth must be predicted and the packet's waiting time calculated; when the waiting time ends, the buffer pools are requested again immediately so that the packet is put into a pool in time. To overcome the shortcomings of the bandwidth-prediction technique in the prior art, the method of the present invention adopts a dynamic bandwidth-prediction technique; the predicted bandwidth is computed as in formula (3):
RTT_cur = α × RTT_prev1 + (1 − α) × RTT_prev2 (3)
In formula (3), RTT_cur is the predicted RTT when the current packet is sent, RTT_prev1 is the predicted RTT when the previous packet was sent, RTT_prev2 is the predicted RTT when the packet before that was sent, and α is a weight coefficient with a value between 0 and 1. Formula (3) predicts the current network bandwidth from the last two network transmissions; by applying statistical principles and choosing a suitable weight coefficient, the prediction can be made to track the real network bandwidth closely. Formula (3) is an iterative process, and the RTTs of the first two packets are obtained from formula (4):
RTT = time used to send this packet / total bytes of this packet (4)
The RTTs when sending the first and second packets are obtained from formula (4), and the RTT for each subsequently sent packet is then obtained iteratively from formula (3).
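The iteration of formulas (3) and (4) can be sketched as below. The weight α = 0.7 and the sample send times and byte counts are illustrative assumptions; the patent does not fix a value of α.

```python
# A minimal sketch of the dynamic RTT prediction of formulas (3) and (4).
# RTT here is a per-byte time (seconds per byte), as formula (4) implies.

def measured_rtt(send_time, total_bytes):
    """Formula (4): RTT = time used to send this packet / its byte count."""
    return send_time / total_bytes

def predict_rtt(rtt_prev1, rtt_prev2, alpha=0.7):
    """Formula (3): weighted blend of the two previous RTT values."""
    return alpha * rtt_prev1 + (1 - alpha) * rtt_prev2

# Seed the iteration with measured RTTs of the first two packets
rtt_prev2 = measured_rtt(send_time=0.02, total_bytes=100_000)  # ~2e-7 s/byte
rtt_prev1 = measured_rtt(send_time=0.06, total_bytes=200_000)  # ~3e-7 s/byte

# Each later packet uses the prediction, which then shifts the history
rtt_cur = predict_rtt(rtt_prev1, rtt_prev2)
rtt_prev2, rtt_prev1 = rtt_prev1, rtt_cur
```

Because recent samples receive the heavier weight α, the predictor follows sudden bandwidth changes faster than the long-run average of formula (1).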
After the current network bandwidth RTT_cur has been predicted, the waiting time T, i.e. the transmission time of the packet currently being sent, is obtained from this bandwidth according to formula (5):
T = sizeof(data) × RTT_cur / (2n) (5)
In formula (5), sizeof(data) is the total byte count of all packets in the transmit queue, n is the number of nodes in the transmit queue, and RTT_cur is the RTT when the current packet is sent, obtained from formula (3).
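Formula (5) can be computed directly; the queue contents and per-byte RTT below are assumed values for illustration only.

```python
# A sketch of formula (5): the waiting time applies the predicted per-byte
# RTT to the bytes queued, halved per task node in the transmit queue.

def waiting_time(queued_packet_sizes, rtt_cur):
    """T = sizeof(data) * RTT_cur / (2n), n = number of task nodes."""
    n = len(queued_packet_sizes)
    total_bytes = sum(queued_packet_sizes)  # sizeof(data)
    return total_bytes * rtt_cur / (2 * n)

# Four queued packets and a predicted RTT of 3e-7 seconds per byte
T = waiting_time([100_000, 200_000, 200_000, 500_000], rtt_cur=3e-7)
```

With 1,000,000 queued bytes over four task nodes, T works out to 0.0375 s, after which the thread retries its buffer-pool request or transmit-queue check.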
The dynamic bandwidth-prediction technique of the present invention calculates the current network bandwidth more accurately, reflects current network conditions in time, obtains an accurate waiting time from the predicted bandwidth value, and then puts packets into the transmit buffer pools promptly, avoiding meaningless waiting.
Fig. 4 is a flowchart of the data-acquiring thread of the embodiment of the present invention. As shown in Fig. 4, the flow comprises the following steps:
Step 401: judge whether there is a data packet to send; if so, obtain a data packet to send as the current packet; otherwise, return to step 401 and continue judging;
Step 402: request space for the current packet in the first buffer pool; if the request succeeds, execute step 406; otherwise, execute step 403;
Step 403: request space for the current packet in the next buffer pool; if the request succeeds, execute step 406; otherwise, execute step 404;
If there are more than two buffer pools and no space was obtained in the second pool, then after step 403 it is further judged whether another buffer pool exists; if so, return to step 403; if not, execute step 404;
Step 404: predict the current network bandwidth RTT_cur according to formulas (3) and (4), and obtain the waiting time T according to formula (5);
Step 405: after the waiting time T has elapsed, return to step 402;
Step 406: judge whether the transmit queue is full; if full, execute step 407; otherwise, execute step 409;
Step 407: predict the current network bandwidth RTT_cur according to formulas (3) and (4), and obtain the waiting time T according to formula (5);
Step 408: after the waiting time T obtained in step 407 has elapsed, return to step 406;
Step 409: send the index of the currently obtained packet to the transmit queue as a task node, complete the transmission via the data-sending thread, and return to step 401;
After the thread returns to step 401, it continues to obtain new image packets and add them to the transmit queue. This method of distributing packets efficiently, one after another, makes full use of the network bandwidth and minimizes the waiting time of packets, so a good image-data transmission effect can be obtained.
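The step 401-409 loop above can be sketched as follows. This is a hypothetical single-threaded model, not the patent's code: the pool objects (exposing a `try_alloc(size)` method), the callbacks, and the use of `time.sleep` to model the computed wait are all assumptions for illustration.

```python
# A sketch of the data-acquiring thread's loop (steps 401-409).
# get_packet() yields the next packet or None when this sketch should stop;
# get_wait_time() stands in for the formula (3)-(5) prediction.
import time

def acquire_loop(get_packet, pools, send_queue, queue_limit, get_wait_time):
    while True:
        packet = get_packet()                  # step 401: next packet
        if packet is None:
            return                             # no more data in this sketch
        while True:                            # steps 402-405: find a pool
            pool = next((p for p in pools if p.try_alloc(len(packet))), None)
            if pool is not None:
                break
            time.sleep(get_wait_time())        # steps 404-405: predicted wait
        while len(send_queue) >= queue_limit:  # steps 406-408: queue full?
            time.sleep(get_wait_time())
        send_queue.append(packet)              # step 409: enqueue task node
```

In the real method the send queue would be consumed concurrently by the data-sending thread, so the full-queue wait actually terminates; here the loop simply never blocks when the pool and queue have room.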
The data-sending thread of the present invention is identical to that of prior art two. As shown in Fig. 3, the thread comprises the following steps:
Step 301: obtain a transmission task node from the transmit queue;
Step 302: read from the buffer pool the data packet corresponding to the task node obtained in step 301;
Step 303: transmit the data packet read in step 302;
Step 304: release the buffer-pool space occupied by the data packet in step 303;
Step 305: release the task node in the transmit queue.
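The five steps above can be sketched as a loop. The task-node layout (a `(packet, pool)` pair), the `transmit` callback standing in for the TCP send, and the pool's `release` method are assumptions of this illustration, not the patent's structures.

```python
# A sketch of the data-sending thread (steps 301-305): take a task node,
# read and send its packet, then free the buffer pool and the node.

def send_loop(send_queue, transmit):
    while send_queue:
        packet, pool = send_queue[0]    # step 301: obtain a task node
        data = packet                   # step 302: read packet from pool
        transmit(data)                  # step 303: send it (TCP in patent)
        pool.release(len(data))         # step 304: free buffer-pool space
        pool_done = send_queue.pop(0)   # step 305: release the task node
```

Releasing the pool space in step 304 is what unblocks the acquiring thread's step 402/403 requests for later packets.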
The above is only a preferred embodiment of the present invention; the method of the present invention is also applicable to other transmission modes over links with unstable bandwidth, for example data transmission over the Internet. The protection scope of the present invention is not limited thereto; any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (6)

1. A data transmission method with bandwidth prediction, comprising a data-acquiring thread and a data-sending thread, characterized in that at least two buffer pools of different sizes are opened in the memory of the transmitting device, and the method comprises the following steps:
A. judge whether there is a data packet to send; if so, obtain a data packet to send; otherwise, return to step A and continue judging;
B. request space for the obtained packet in the first buffer pool; if the request succeeds, execute step F; otherwise, execute step C;
C. request space for the obtained packet in the next buffer pool; if the request succeeds, execute step F; otherwise, execute step D;
D. judge whether there is another buffer pool; if so, return to step C; if not, execute step E;
E. obtain the waiting time; after the waiting time has elapsed, return to step B;
F. judge whether the transmit queue is full; if full, execute step G; otherwise, execute step H;
G. obtain the waiting time; after the waiting time has elapsed, return to step F;
H. send the index of the obtained packet to the transmit queue as a task node, complete the transmission of the packet via the data-sending thread, and return to step A.
2. the method for claim 1 is characterized in that, described Buffer Pool is the different Buffer Pools of two block sizes, and big Buffer Pool space size is the size of maximum data packet in data transmission procedure.
3. method as claimed in claim 2 is characterized in that, described minibuffer pool space size is half of big Buffer Pool space size.
4. the method for claim 1 is characterized in that, the described method of obtaining the stand-by period is: prediction current network bandwidth, the described network bandwidth is described with loop time RTT, again according to T=sizeof (data) * RTT Cur/ (2 * n) calculate the stand-by period, and wherein, T is the stand-by period, and sizeof (data) is the byte number sum of all packets in the transmit queue, and n is the node number of transmit queue, RTT CurBe the current network bandwidth;
The method of described prediction current network bandwidth is: according to RTT Cur=α * RTT Prev1+ (1-α) * RTT Prev2Iteration is obtained the current network bandwidth, wherein, and RTT Prev1The network bandwidth when sending a last packet, RTT Prev2The network bandwidth when sending last packet, α is a weight coefficient, span is between 0 to 1;
Described RTT CurCalculating in, the value of initial two RTT for send packet used time/total bytes of this packet.
5. method as claimed in claim 4 is characterized in that, described current network bandwidth is the network bandwidth when sending current data packet.
6. the method for claim 1 is characterized in that, described transmission data thread utilizes transmission control protocol TCP to realize.
CNB200310121859XA 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction Expired - Fee Related CN100399779C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200310121859XA CN100399779C (en) 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction


Publications (2)

Publication Number Publication Date
CN1630290A CN1630290A (en) 2005-06-22
CN100399779C true CN100399779C (en) 2008-07-02

Family

ID=34844308

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200310121859XA Expired - Fee Related CN100399779C (en) 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction

Country Status (1)

Country Link
CN (1) CN100399779C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110282A (en) * 2011-03-14 2011-06-29 北京播思软件技术有限公司 Screen drawing method and system for embedded equipment
TWI673613B (en) * 2018-10-17 2019-10-01 財團法人工業技術研究院 A server and a resource adjustment control method thereof
CN114827047B (en) * 2022-06-24 2022-10-04 北京国科天迅科技有限公司 Data transmission method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5640389A (en) * 1994-09-01 1997-06-17 Oki Electric Industry Co., Ltd. Traffic shaper and packet communication apparatus
JPH09298577A (en) * 1996-05-01 1997-11-18 Tamura Seisakusho Co Ltd Data transmitter
CN1291029A (en) * 1999-07-26 2001-04-11 德克萨斯仪器股份有限公司 Effective envelope buffer management method in digit user special line multiplexor
CN1378143A (en) * 2001-03-30 2002-11-06 深圳市中兴通讯股份有限公司 Method for realizing quick data transfer
US20030177293A1 (en) * 2002-03-12 2003-09-18 International Business Machines Corporation Dynamic memory allocation between inbound and outbound buffers in a protocol handler
JP2003324442A (en) * 2002-04-26 2003-11-14 Matsushita Electric Ind Co Ltd Method for accelerating media access by using efficient buffer mechanism in wireless network


Also Published As

Publication number Publication date
CN1630290A (en) 2005-06-22

Similar Documents

Publication Publication Date Title
US8547839B2 (en) TCP congestion control for heterogeneous networks
CA2118471C (en) Upc-based traffic control framework for atm networks
JP4473733B2 (en) Packet encapsulation method and system
TWI232658B (en) Packet transmission method and system, base station, wireless LAN terminal, and wireless LAN system using the same
US20020156910A1 (en) Flow control system and method
US7313152B2 (en) IP header compression dependent connection admission control and/or channel allocation
KR101005138B1 (en) Method and system for encapsulating cells
JPH0662042A (en) Improvement regarding data transmission system
JP2004538719A (en) Method for providing a non-linear, highly scalable increase-decrease congestion control mechanism
CN110808884B (en) Network congestion control method
CN105024940A (en) Link adaptation-based heterogeneous network TCP congestion control method
WO2011000307A1 (en) Network traffic accelerator
JP3046289B2 (en) Interface circuit between frame relay and ATM
Liu et al. Improving explicit congestion notification with the mark-front strategy
EP2957093A1 (en) System and method for compressing data associated with a buffer
CN100399779C (en) A data transmission method with bandwidth prediction
EP3297210A1 (en) Method for setting packet transmission mode and device
CN116192771A (en) Priority queue management method and system
JP3454998B2 (en) Communications system
Huszák et al. Content-aware interface selection method for multi-path video streaming in best-effort networks
Alam et al. Efficient MPEG video traffic shaping for the next generation Internet
Du et al. A deadline-aware transmission framework for H. 264/AVC video over IEEE 802.11 e EDCA wireless networks
KR100440575B1 (en) The traffic characterization scheme for realtime video transmission
Misic et al. Fixed cycles and adaptive bandwidth allocation can coexist in Bluetooth
Peng et al. Fast backward congestion notification mechanism for TCP congestion control

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080702

Termination date: 20201219