CN1630290A - A data transmission method with bandwidth prediction - Google Patents

A data transmission method with bandwidth prediction

Info

Publication number
CN1630290A
CN1630290A (application CNA200310121859XA / CN200310121859A)
Authority
CN
China
Prior art keywords
packet
data
buffer pool
rtt
transmission
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200310121859XA
Other languages
Chinese (zh)
Other versions
CN100399779C (en)
Inventor
颜毅强
孙成昆
赵牧
赵俊先
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CNB200310121859XA priority Critical patent/CN100399779C/en
Publication of CN1630290A publication Critical patent/CN1630290A/en
Application granted granted Critical
Publication of CN100399779C publication Critical patent/CN100399779C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This invention discloses a data transmission method with bandwidth prediction, comprising a data-acquisition thread and a data-sending thread. Several buffer pools of different sizes are set up in memory. The acquisition thread obtains the current data packet to be sent and requests space for it in the first buffer pool; if the request fails, it requests space in the second buffer pool; if that also fails, it obtains a waiting time and requests again after the wait. Once the request succeeds, the thread judges whether the send queue is full; if it is full, the thread obtains a waiting time and judges again after the wait; if it is not full, the index of the currently obtained packet is placed into the send queue as a task node, and the data-sending thread completes the packet's transmission. The method improves the real-time behavior, continuity, and efficiency of data transmission.

Description

A data transmission method with bandwidth prediction
Technical field
The present invention relates to data transmission technology, and in particular to a data transmission method with bandwidth prediction.
Background technology
As wireless local area network (WLAN) technology becomes increasingly widespread, wireless data transmission has been applied in many fields. For example, projector manufacturers have one after another applied wireless data transmission to projectors, producing wireless projectors that are much more convenient for users.
A common wireless network protocol is IEEE 802.11, proposed by the Institute of Electrical and Electronics Engineers (IEEE). Taking image data transmission as an example, two wireless data sending methods are currently in common use:
Prior art 1: blocking transmission. The flow of blocking transmission is shown in steps 101 to 104 of Fig. 1. In this scheme, acquiring and sending the image data are handled in the same thread. The thread first obtains the data to be sent, then sends it; after sending, it must wait for the receiver's acknowledgment. Since the time at which the receiver acknowledges is unknown, the program sets a delay of fixed length, and only once the receiver has acknowledged can the thread continue to obtain data. The drawbacks of this scheme are: the thread is blocked while sending, so no data can be obtained during transmission; only after a transmission completes and its acknowledgment is received can new data be obtained and then sent, so the continuity of data acquisition cannot be guaranteed, efficiency is very low, and frame loss is severe. In addition, each time the program starts a new delay, even if the reply returns early, the thread still waits out the full fixed-length delay, so there is pointless waiting time and efficiency is poor.
Prior art 2: asynchronous transmission. Asynchronous transmission uses two threads: a data-acquisition thread and a data-sending thread. Fig. 2 shows the flow of the acquisition thread in steps 201 to 205. The acquisition thread first obtains the current packet to be sent, then requests buffer-pool space for it; if the request fails, it requests again, repeatedly, until the request succeeds, and then places the packet's index into the send queue as a task node. The send queue cannot be unbounded, or transmission delay becomes severe; empirically, the upper limit on the number of task nodes in the send queue is generally 16. If the number of task nodes has reached this limit, the thread repeatedly resubmits the packet's index to the send queue until the queue releases a task node; the index is then added to the queue, the data-sending thread is started, and the transmission task is completed over the Transmission Control Protocol (TCP). Fig. 3 shows the flow of the data-sending thread, which comprises the following steps:
Step 301: obtain a send task node from the send queue;
Step 302: read from the buffer pool the data packet corresponding to the task node obtained in step 301;
Step 303: send the data packet read in step 302;
Step 304: release the buffer-pool space occupied by the data packet of step 303;
Step 305: release the task node in the send queue.
With the method of prior art 2, data acquisition and data sending do not block each other, which improves efficiency relative to prior art 1. However, a packet must enter a buffer pool before it can be sent, and the size of each buffer pool is normally set to the maximum packet size. For example, in image transmission, image packets are generally at most 2 megabytes (MB), so each buffer pool is typically 2 MB. If a buffer pool is full of packets, the space request fails and the packet must wait; only after packets in the pool have been sent and enough space has been freed can a new image packet enter the pool. Prior art 2 uses two buffer pools of identical size to store packets. The drawback of this scheme is that when packet sizes vary widely and several large packets are generated in succession, each large packet occupies a buffer pool by itself. At that point every newly produced packet, large or small, must wait in line, and large packets take a long time to transmit, so many new packets must wait a long time before they can enter a send buffer pool. For example, in image transmission, suppose both buffer pools are 2 MB and the packet queue is: 100 KB, 200 KB, 200 KB, 1.6 MB, 1.5 MB, 400 KB, 100 KB, .... The 100 KB and two 200 KB packets are all assigned to the first pool, the 1.6 MB packet is assigned to the second pool, and the 1.5 MB packet then enters the first pool, quickly saturating both pools. All subsequent packets, regardless of size, must then wait, and for a long time, before they can enter a pool. This equal-size double-buffer-pool mechanism therefore causes large numbers of packets to queue, and efficiency is poor.
Prior art 2 gives no method for obtaining the waiting time. Therefore, when the buffer pools lack sufficient space, buffer-pool requests must be retried repeatedly; likewise, when the send queue is full, the packet index must be resubmitted repeatedly. This consumes a large amount of processor resources.
In a wireless network, wireless signals are very unstable, so the actual network bandwidth must be predicted frequently. Existing underlying protocols provide a method of predicting the bandwidth and computing the waiting time; the bandwidth prediction is computed as in formula (1):
RTT = total transmission time / total bytes (1)
The network bandwidth can be represented by the round-trip time (RTT, Round Trip Time), the time from a packet being sent to the receiver's acknowledgment being received. The total transmission time is a set period of time, and the total bytes is the sum of the byte counts of all packets sent during that period. The required waiting time T is then obtained from formula (2):
T = sizeof(data) × RTT / 2n (2)
where sizeof(data) in formula (2) is the total byte count of all packets in the send queue, n is the number of task nodes in the send queue, and RTT is the value obtained from formula (1).
The RTT of formula (1) is in fact the average bandwidth of the network. Because wireless signals are easily disturbed by external interference, the bandwidth of a wireless network varies considerably in real time, and the average bandwidth often fails to reflect current network conditions. This computation therefore often cannot predict the current bandwidth correctly, and thus cannot obtain the waiting time T accurately, so packets cannot be placed into the buffer pools in time and efficiency is poor; if a packet is large, subsequent packets may be delayed severely, and frame-loss errors may even occur.
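As a concrete illustration, the prior-art computation of formulas (1) and (2) can be sketched as follows. This is a minimal sketch: the function names and the measurement figures (10 s window, 5,000,000 bytes sent, 1,000,000 bytes queued over 4 task nodes) are illustrative assumptions, not from the patent.

```python
# A sketch of the prior-art waiting-time computation, formulas (1) and (2).

def average_rtt(total_send_time_s: float, total_bytes: int) -> float:
    """Formula (1): RTT = total transmission time / total bytes (s/byte)."""
    return total_send_time_s / total_bytes

def wait_time(queue_bytes: int, rtt: float, node_count: int) -> float:
    """Formula (2): T = sizeof(data) x RTT / (2n)."""
    return queue_bytes * rtt / (2 * node_count)

# Illustrative figures: 10 s spent sending 5,000,000 bytes,
# then 1,000,000 bytes queued across 4 task nodes.
rtt = average_rtt(10.0, 5_000_000)   # about 2e-6 s/byte
t = wait_time(1_000_000, rtt, 4)     # about 0.25 s
print(rtt, t)
```

Note that the RTT here is a long-run average over the whole measurement window, which is exactly the weakness the patent goes on to criticize.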
Summary of the invention
In view of this, the main purpose of the present invention is to provide a data transmission method with bandwidth prediction that, during data transmission, reduces the processor resources consumed, reduces the number of packets that must wait, and shortens the waiting time of packets, thereby improving the real-time behavior and continuity of data sending and improving the efficiency of data transmission.
To achieve these goals, the technical scheme of the present invention is realized as follows:
A data transmission method with bandwidth prediction comprises a data-acquisition thread and a data-sending thread, and is characterized in that at least two buffer pools of different sizes are opened in the memory of the transmitting device, the data-acquisition thread comprising the following steps:
A. Judge whether there is a packet to send; if so, obtain a packet to send; otherwise, return to step A and continue judging;
B. Request space for the obtained packet in the first buffer pool; if the request succeeds, execute step F; otherwise, execute step C;
C. Request space for the obtained packet in the next buffer pool; if the request succeeds, execute step F; otherwise, execute step D;
D. Judge whether there is another buffer pool; if so, return to step C; if not, execute step E;
E. Obtain the waiting time and, after the waiting time has elapsed, return to step B;
F. Judge whether the send queue is full; if full, execute step G; otherwise, execute step H;
G. Obtain the waiting time and, after the waiting time has elapsed, return to step F;
H. Place the index of the obtained packet into the send queue as a task node, complete the packet's transmission via the data-sending thread, and return to step A.
The buffer pools are two pools of different sizes, the larger pool's space being the size of the largest data packet in the data transmission.
The smaller pool's space is half that of the larger pool.
The method of obtaining the waiting time is: predict the current network bandwidth, then compute the waiting time according to T = sizeof(data) × RTT_cur / 2n, where T is the waiting time, sizeof(data) is the total byte count of all packets in the send queue, n is the number of task nodes in the send queue, and RTT_cur is the current network bandwidth.
The current network bandwidth is the network bandwidth at the time the current packet is sent.
The method of predicting the current network bandwidth is: obtain it iteratively according to RTT_cur = α × RTT_prev1 + (1 − α) × RTT_prev2, where RTT_prev1 is the network bandwidth when the previous packet was sent, RTT_prev2 is the network bandwidth when the packet before that was sent, and α is a weight coefficient.
The weight coefficient takes a value between 0 and 1.
In the calculation of RTT_cur, the values of the initial two RTTs are: the time taken to send a packet / the total bytes of that packet.
The data-sending thread is realized using the Transmission Control Protocol (TCP).
Because the method of the present invention adopts a double-buffer-pool mechanism with pools of different sizes, it effectively reduces the time a new packet waits before entering a buffer pool and avoids large packets blocking the pools, so packets of all sizes can enter a send buffer pool in time, reasonably reducing queuing time. In addition, the method uses a dynamic bandwidth-prediction technique to predict the bandwidth and obtain the waiting time, thereby avoiding repeated pointless buffer-pool requests and reducing processor consumption. The dynamic prediction is more scientific and more reasonable than existing bandwidth-prediction techniques: it better reflects the current network's bandwidth, computes the current bandwidth more accurately, reflects current network conditions more promptly, and from this predicted bandwidth obtains an accurate waiting time so that packets are placed into the send buffer pools in a timely way. Pointless waiting is thus avoided, transmission delay is reduced, and the probability of frame loss is lowered. In sum, when the network bandwidth is unstable and packet sizes vary widely, the method of the present invention improves the real-time behavior and continuity of data sending and further improves the efficiency of data transmission.
Description of drawings
Fig. 1 is a flowchart of the wireless data sending method of prior art 1;
Fig. 2 is a flowchart of the data-acquisition thread of prior art 2;
Fig. 3 is a flowchart of the data-sending thread of prior art 2;
Fig. 4 is a flowchart of the data-acquisition thread of the embodiment of the present invention.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
In this embodiment, the method of the present invention is explained using wireless image data transmission as an example. The method uses two threads to complete the wireless data transmission: a data-acquisition thread and a data-sending thread. In the acquisition thread, to improve the efficiency of wireless data transmission, the method adopts a multi-buffer-pool mechanism with pools of different sizes, i.e., several buffer pools of different sizes are opened in memory. Given the characteristics of image data, this embodiment opens two buffer pools in the transmitting device's memory: the space of the first pool is half that of the second, and the space of the second pool is the size of the largest packet. Taking image data as the example, the largest image packet is generally about 2 MB; therefore, in this embodiment, the first buffer pool is 1 MB and the second buffer pool is 2 MB.
After the current packet to be sent is obtained, buffer-pool space is requested for it: first in the first pool; if the first pool is full or lacks sufficient space, then in the second pool; if no memory space can be obtained in the second pool either, the packet must wait. When a packet currently being transmitted finishes, the pool space it occupies is released, and the pools again have enough room to hold new packets. The unequal-size double-buffer-pool mechanism effectively reduces packet queuing time: when packet sizes vary widely and several large packets are generated in succession, large packets are placed in the large pool while small packets can be placed in either pool, avoiding the situation where both pools are occupied by large packets; the number of packets that must queue is reduced, subsequent small packets can enter a pool in time, and the probability of frame loss is lowered. For example, with the packet queue 100 KB, 200 KB, 200 KB, 1.6 MB, 1.5 MB, 400 KB, 100 KB, ...: the 100 KB and two 200 KB packets are all assigned to the first pool, and the 1.6 MB packet to the second pool; the 1.5 MB packet must then wait to request the second pool, but the 400 KB and 100 KB packets behind it can still enter the first pool, so only the 1.5 MB packet has to wait. This small-plus-large double-buffer-pool design effectively reduces the time new packets wait before entering a pool, avoids large packets blocking the pools, lets packets of all sizes enter a send buffer pool in time, and reasonably reduces queuing time.
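The allocation behavior described above can be simulated in a few lines. In this sketch the 1 MB / 2 MB pool sizes and the packet queue follow the embodiment, while the BufferPool class and method names are illustrative assumptions.

```python
# A minimal simulation of the unequal-size double-buffer-pool allocation.

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def try_alloc(self, size):
        """Request `size` bytes; succeed only if the pool has room."""
        if self.used + size <= self.capacity:
            self.used += size
            return True
        return False

KB, MB = 1024, 1024 * 1024
pools = [BufferPool(1 * MB), BufferPool(2 * MB)]   # small pool tried first
packets = [100 * KB, 200 * KB, 200 * KB,
           int(1.6 * MB), int(1.5 * MB), 400 * KB, 100 * KB]

waiting = []
for size in packets:
    # Try each pool in order, as in steps B-D of the method.
    if not any(pool.try_alloc(size) for pool in pools):
        waiting.append(size)   # would compute a wait time and retry later

print(len(waiting))            # only the 1.5 MB packet must wait
```

Running the same queue against two equal 2 MB pools (prior art 2) instead leaves every packet after the 1.5 MB one waiting, which is the contrast the example above draws.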
When the buffer pools lack sufficient space, the bandwidth must be predicted and the packet's waiting time computed; as soon as the wait ends, the buffer pools are requested again, so that the packet is placed into a pool in time. To address the shortcomings of the prior-art bandwidth-prediction technique, the method of the present invention adopts a dynamic bandwidth-prediction technique. The predicted bandwidth is computed as in formula (3):
RTT_cur = α × RTT_prev1 + (1 − α) × RTT_prev2 (3)
In formula (3), RTT_cur is the RTT predicted for sending the current packet, RTT_prev1 is the RTT predicted when the previous packet was sent, RTT_prev2 is the RTT predicted when the packet before that was sent, and α is a weight coefficient with a value between 0 and 1. Formula (3) predicts the current network bandwidth from the last two rounds of network traffic and, applying statistical principles, chooses a suitable weight coefficient so that the prediction closely tracks the real network bandwidth. Formula (3) is an iterative process; the RTTs for the first two packets are obtained from formula (4):
RTT = time taken to send a packet / total bytes of that packet (4)
Formula (4) gives the RTT when the first packet is sent and the RTT when the second packet is sent; the RTT for each subsequent packet is then obtained by iterating formula (3).
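The iteration of formulas (3) and (4) can be sketched as follows. The weight α = 0.7 and the two seed measurements are illustrative choices; the patent only requires 0 < α < 1.

```python
# A sketch of the dynamic bandwidth prediction of formulas (3) and (4).

def measured_rtt(send_time_s: float, packet_bytes: int) -> float:
    """Formula (4): per-packet RTT = send time / byte count (s/byte)."""
    return send_time_s / packet_bytes

def predict_rtt(rtt_prev1: float, rtt_prev2: float, alpha: float = 0.7) -> float:
    """Formula (3): RTT_cur = alpha*RTT_prev1 + (1-alpha)*RTT_prev2."""
    return alpha * rtt_prev1 + (1 - alpha) * rtt_prev2

# Seed the iteration with the first two measured packets (formula (4)).
rtt_prev2 = measured_rtt(0.2, 100_000)   # about 2e-6 s/byte
rtt_prev1 = measured_rtt(0.8, 200_000)   # about 4e-6 s/byte

# Predict the RTT for each subsequent packet by iterating formula (3).
for _ in range(3):
    rtt_cur = predict_rtt(rtt_prev1, rtt_prev2)
    rtt_prev1, rtt_prev2 = rtt_cur, rtt_prev1
print(rtt_cur)
```

Because each prediction blends only the last two values, the estimate reacts to recent traffic rather than to a long-run average, which is the point of contrast with formula (1).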
After the current network bandwidth RTT_cur has been predicted, the waiting time T, i.e. the transmission time of the packet currently being sent, is obtained from that bandwidth according to formula (5):
T = sizeof(data) × RTT_cur / 2n (5)
where sizeof(data) in formula (5) is the total byte count of all packets in the send queue, n is the number of task nodes in the send queue, and RTT_cur is the RTT for sending the current packet, obtained from formula (3).
The dynamic bandwidth-prediction technique of the present invention computes the current network bandwidth more accurately and reflects current network conditions in time; from this predicted bandwidth an accurate waiting time is obtained, packets are placed into the send buffer pools promptly, and pointless waiting is avoided.
Fig. 4 is a flowchart of the data-acquisition thread of the embodiment of the present invention. As shown in Fig. 4, the flow comprises the following steps:
Step 401: judge whether there is a packet to send; if so, obtain a packet as the current packet to send; otherwise, return to step 401 and continue judging;
Step 402: request space for the current packet in the first buffer pool; if the request succeeds, execute step 406; otherwise, execute step 403;
Step 403: request space for the current packet in the next buffer pool; if the request succeeds, execute step 406; otherwise, execute step 404;
If there are more than two buffer pools and no space was obtained in the second pool, then after step 403 the thread must also judge whether another pool remains; if so, return to step 403; if not, execute step 404;
Step 404: predict the current network bandwidth RTT_cur according to formulas (3) and (4), and obtain the waiting time T according to formula (5);
Step 405: after the waiting time T has elapsed, return to step 402;
Step 406: judge whether the send queue is full; if it is full, execute step 407; otherwise, execute step 409;
Step 407: predict the current network bandwidth RTT_cur according to formulas (3) and (4), and obtain the waiting time T according to formula (5);
Step 408: after the waiting time T obtained in step 407 has elapsed, return to step 406;
Step 409: place the index of the currently obtained packet into the send queue as a task node, complete the transmission via the data-sending thread, and return to step 401;
After returning to step 401, the thread continues to obtain new image packets and add them to the send queue. By dispatching packets efficiently again and again, this method makes full use of the network bandwidth and minimizes packet waiting time, so good image-transmission results are obtained.
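The steps above can be condensed into a runnable single-threaded sketch. The names, pool sizes, retry limit, and fixed RTT are illustrative assumptions; in the real method the wait uses the freshly predicted RTT_cur of formulas (3)-(5) and the send queue is drained concurrently by the sending thread.

```python
import time
from collections import deque

class BufferPool:
    def __init__(self, capacity):
        self.capacity, self.used = capacity, 0
    def try_alloc(self, size):
        if self.used + size <= self.capacity:
            self.used += size
            return True
        return False

def wait_time(queue_bytes, rtt_cur, nodes):
    # Formula (5): T = sizeof(data) x RTT_cur / (2n); guard against n = 0.
    return queue_bytes * rtt_cur / (2 * max(nodes, 1))

MAX_NODES = 16                  # empirical send-queue limit (see prior art 2)
pools = [BufferPool(1 << 20), BufferPool(2 << 20)]   # 1 MB and 2 MB pools
send_queue = deque()
packets = [100_000, 200_000, 1_500_000]

def acquire(rtt_cur=2e-6, max_retries=3):
    """One pass of steps 401-409 over `packets`; returns packets enqueued."""
    enqueued = 0
    for index, size in enumerate(packets):                    # step 401
        for _ in range(max_retries):
            if any(p.try_alloc(size) for p in pools):         # steps 402-403
                break
            queued = sum(packets[i] for i in send_queue)
            time.sleep(wait_time(queued, rtt_cur, len(send_queue)))  # 404-405
        else:
            continue   # no space after retries; real code keeps retrying
        while len(send_queue) >= MAX_NODES:                   # steps 406-408
            time.sleep(wait_time(size, rtt_cur, len(send_queue)))
        send_queue.append(index)    # step 409: enqueue the packet's index
        enqueued += 1
    return enqueued

print(acquire())   # all three packets fit the 1 MB + 2 MB pools, prints 3
```

Note how the index, not the packet itself, is what enters the send queue: the packet's bytes stay in the buffer pool until the sending thread releases them.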
The data-sending thread of the present invention is identical to that of prior art 2; as shown in Fig. 3, it comprises the following steps:
Step 301: obtain a send task node from the send queue;
Step 302: read from the buffer pool the data packet corresponding to the task node obtained in step 301;
Step 303: send the data packet read in step 302;
Step 304: release the buffer-pool space occupied by the data packet of step 303;
Step 305: release the task node in the send queue.
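A minimal single-threaded sketch of steps 301-305 follows. The task-node representation and the simulation of "sending" by accumulating byte counts are illustrative assumptions; the actual method sends over TCP from a separate thread.

```python
from collections import deque

class BufferPool:
    def __init__(self, capacity):
        self.capacity, self.used = capacity, 0
    def alloc(self, size):
        assert self.used + size <= self.capacity
        self.used += size
    def release(self, size):
        self.used -= size

pool = BufferPool(2 << 20)
send_queue = deque()
for size in (100_000, 200_000):   # packets already admitted by the
    pool.alloc(size)              # acquisition thread
    send_queue.append(size)       # task node: here just the packet size

sent_bytes = 0
while send_queue:
    size = send_queue[0]          # step 301: obtain a send task node
    sent_bytes += size            # steps 302-303: read and "send" the packet
    pool.release(size)            # step 304: free the buffer-pool space
    send_queue.popleft()          # step 305: release the task node

print(sent_bytes, pool.used, len(send_queue))   # 300000 0 0
```

Releasing pool space in step 304, before the task node in step 305, mirrors the ordering in Fig. 3 and is what lets waiting packets of the acquisition thread proceed.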
The above is only a preferred embodiment of the present invention; the method of the present invention is also applicable to other modes of sending data over links with unstable bandwidth, for example data transmission over the Internet. The protection scope of the present invention is not limited to the above: any variation or substitution readily conceivable by a person skilled in the art, within the technical scope disclosed by the present invention, shall fall within the protection scope of the present invention.

Claims (9)

1. A data transmission method with bandwidth prediction, comprising a data-acquisition thread and a data-sending thread, characterized in that at least two buffer pools of different sizes are opened in the memory of the transmitting device, the data-acquisition thread comprising the following steps:
A. judging whether there is a packet to send; if so, obtaining a packet to send; otherwise, returning to step A and continuing to judge;
B. requesting space for the obtained packet in the first buffer pool; if the request succeeds, executing step F; otherwise, executing step C;
C. requesting space for the obtained packet in the next buffer pool; if the request succeeds, executing step F; otherwise, executing step D;
D. judging whether there is another buffer pool; if so, returning to step C; if not, executing step E;
E. obtaining the waiting time and, after the waiting time has elapsed, returning to step B;
F. judging whether the send queue is full; if full, executing step G; otherwise, executing step H;
G. obtaining the waiting time and, after the waiting time has elapsed, returning to step F;
H. placing the index of the obtained packet into the send queue as a task node, completing the packet's transmission via the data-sending thread, and returning to step A.
2. The method of claim 1, characterized in that the buffer pools are two pools of different sizes, the larger pool's space being the size of the largest data packet in the data transmission.
3. The method of claim 2, characterized in that the smaller pool's space is half that of the larger pool.
4. The method of claim 1, characterized in that the method of obtaining the waiting time is: predicting the current network bandwidth, then computing the waiting time according to T = sizeof(data) × RTT_cur / 2n, where T is the waiting time, sizeof(data) is the total byte count of all packets in the send queue, n is the number of task nodes in the send queue, and RTT_cur is the current network bandwidth.
5. The method of claim 4, characterized in that the current network bandwidth is the network bandwidth at the time the current packet is sent.
6. The method of claim 4, characterized in that the method of predicting the current network bandwidth is: obtaining it iteratively according to RTT_cur = α × RTT_prev1 + (1 − α) × RTT_prev2, where RTT_prev1 is the network bandwidth when the previous packet was sent, RTT_prev2 is the network bandwidth when the packet before that was sent, and α is a weight coefficient.
7. The method of claim 6, characterized in that the weight coefficient takes a value between 0 and 1.
8. The method of claim 6, characterized in that in the calculation of RTT_cur, the values of the initial two RTTs are: the time taken to send a packet / the total bytes of that packet.
9. The method of claim 1, characterized in that the data-sending thread is realized using the Transmission Control Protocol (TCP).
CNB200310121859XA 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction Expired - Fee Related CN100399779C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200310121859XA CN100399779C (en) 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200310121859XA CN100399779C (en) 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction

Publications (2)

Publication Number Publication Date
CN1630290A true CN1630290A (en) 2005-06-22
CN100399779C CN100399779C (en) 2008-07-02

Family

ID=34844308

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200310121859XA Expired - Fee Related CN100399779C (en) 2003-12-19 2003-12-19 A data transmission method with bandwidth prediction

Country Status (1)

Country Link
CN (1) CN100399779C (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2922119B2 (en) * 1994-09-01 1999-07-19 沖電気工業株式会社 Bandwidth regulation device and packet communication device
JPH09298577A (en) * 1996-05-01 1997-11-18 Tamura Seisakusho Co Ltd Data transmitter
EP1073251A3 (en) * 1999-07-26 2003-09-10 Texas Instruments Incorporated Packet buffer management
CN1270243C (en) * 2001-03-30 2006-08-16 中兴通讯股份有限公司 Method for realizing quick data transfer
US6877048B2 (en) * 2002-03-12 2005-04-05 International Business Machines Corporation Dynamic memory allocation between inbound and outbound buffers in a protocol handler
JP2003324442A (en) * 2002-04-26 2003-11-14 Matsushita Electric Ind Co Ltd Method for accelerating media access by using efficient buffer mechanism in wireless network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110282A (en) * 2011-03-14 2011-06-29 北京播思软件技术有限公司 Screen drawing method and system for embedded equipment
CN111061545A (en) * 2018-10-17 2020-04-24 财团法人工业技术研究院 Server and resource regulation and control method thereof
CN114827047A (en) * 2022-06-24 2022-07-29 北京国科天迅科技有限公司 Data transmission method and device, computer equipment and storage medium
CN114827047B (en) * 2022-06-24 2022-10-04 北京国科天迅科技有限公司 Data transmission method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN100399779C (en) 2008-07-02

Similar Documents

Publication Publication Date Title
TWI232658B (en) Packet transmission method and system, base station, wireless LAN terminal, and wireless LAN system using the same
US7643420B2 (en) Method and system for transmission control protocol (TCP) traffic smoothing
JP2006514469A5 (en)
CN102577569A (en) Rate shaping for wireless communication using token bucket that allows token debt
CN1387715A (en) Method and apparatus for data transportation and synchronization between MAC and physical layers in wireless communication system
CN1518283A (en) Reactivity bandwidth control for stream data
CA2486746A1 (en) Method and system for encapsulating cells
CN1881916A (en) Method and apparatus for realizing communication between communication equipments
WO2012129922A1 (en) Packet handling method, forwarding device and system
WO2011000307A1 (en) Network traffic accelerator
CN1206863C (en) Frame converter and conversion method, digital pick-up camera and monitoring system thereof
CN101039273A (en) Communication equipment, transmission control method and transmission control program
US20140362864A1 (en) Transmitting apparatus, transmitting method, and storage medium
CN1812575A (en) Parallel transmission dispatching method for stream media data
CN1496079A (en) Method for dynamically-controlling read time muttimedia data generation rate and its device
CN1612501A (en) Transmitting data using multi-frames
CN1617525A (en) Method for guaranteeing general route package channel transmission reliability
CN1842080A (en) Method for adjusting transmission control protocol receive window
CN1494277A (en) Management method of data fransmission/receiving butter region in network communication
CN1630290A (en) A data transmission method with bandwidth prediction
CN1571418A (en) A method and system for implementing data transmission in flow control transmission protocol
CN1719801A (en) Method for improving connecting performance of multi-non-connecting layer in wireless network
CN1756227A (en) Method and device for realizing WLAN real-time and QoS
CN1612621A (en) Base station internal real-time service data transmitting method
Jani et al. SCTP performance in data center environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080702

Termination date: 20201219

CF01 Termination of patent right due to non-payment of annual fee