CN106130699B - Method and system for facilitating one-to-many data transmission with reduced network overhead


Info

Publication number
CN106130699B
CN106130699B CN201610491076.8A CN201610491076A
Authority
CN
China
Prior art keywords
computing device
sink
sink computing
list
data packets
Prior art date
Legal status
Active
Application number
CN201610491076.8A
Other languages
Chinese (zh)
Other versions
CN106130699A (en)
Inventor
J. Lipman
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp
Priority to CN201610491076.8A
Priority claimed from CN2009801629658A (published as CN102652411A)
Publication of CN106130699A
Application granted
Publication of CN106130699B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/12Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L1/16Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L1/1607Details of the supervisory signal
    • H04L1/1628List acknowledgements, i.e. the acknowledgement message consisting of a list of identifiers, e.g. of sequence numbers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L2001/0092Error control systems characterised by the topology of the transmission link
    • H04L2001/0093Point-to-multipoint
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/15Flow control; Congestion control in relation to multipoint traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/28Flow control; Congestion control in relation to timing considerations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/29Flow control; Congestion control using a combination of thresholds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/34Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a method and system that facilitates one-to-many data transmission with reduced network overhead. Methods and systems that facilitate one-to-many data transmission with reduced network overhead include performing a round of data transmission from a source computing device to a plurality of sink computing devices. Each sink computing device generates a bucket list of lost data blocks for the round of data transmission and transmits the bucket list to the source computing device. The source computing device performs a subsequent round of data transmission based on the bucket list. One or more additional subsequent rounds may be performed until the bucket list for each sink computing device is empty.

Description

Method and system for facilitating one-to-many data transmission with reduced network overhead
The present application is a divisional of application No. 200980162965.8, filed on December 17, 2009 and entitled "Method and system for facilitating one-to-many data transmission with reduced network overhead".
Technical Field
The present application relates to methods and systems that facilitate one-to-many data transmission with reduced network overhead.
Background
Collaborative computing environments often rely on the transfer of data from a source computing device to multiple destination or sink computing devices. For example, in an educational environment, "classroom collaboration" relies on the transfer of files, video, and other data from a teacher's computing device to each student's computing device. In addition, management of the sink computing devices (e.g., the student's computing devices) may require that updates, new applications, or other management software or services be communicated to each destination or sink computing device. Typically, multicast, broadcast, or other one-to-many data transmissions are used to enable such data delivery to multiple sink computing devices.
Typical one-to-many data transmission techniques (e.g., multicast and broadcast) rely on acknowledgement feedback from each destination or sink computing device. Such acknowledgements are often implemented as unicast data transmissions sent from each sink computing device to the source computing device to inform the source computing device that a data packet or block has been received without error. Thus, in a network that includes a large number of destination or sink computing devices (e.g., a classroom collaboration environment may include sixty or more students), a large number of acknowledgement transmissions may be generated. Such a volume of individual unicast acknowledgement transmissions may cause a network "implosion" if the destination computing devices attempt to transmit their acknowledgements at approximately the same time. While some networks may include a "back-off" mechanism to prevent or reduce network implosion, such mechanisms can add additional communication delays in the network. Additionally, the unicast communication technology used by the destination computing devices, such as User Datagram Protocol (UDP) or Transmission Control Protocol (TCP), may require additional link-layer acknowledgements from the source computing device. This back-and-forth of acknowledgements and the network delays it introduces compound one another, which tends to increase the overhead of the network.
Disclosure of Invention
According to an embodiment of the invention, there is provided a method for use by a source computing device, comprising:
transmitting a plurality of data packets to a plurality of sink computing devices;
receiving a first list of lost data blocks from a first sink computing device of the plurality of sink computing devices at a first time slot if there are data packets in the plurality of data packets that the first sink computing device did not receive from the source computing device; and
receiving a second list of lost data blocks from a second sink computing device of the plurality of sink computing devices at a second time slot if there are data packets in the plurality of data packets that the second sink computing device did not receive from the source computing device,
wherein the first time slot precedes the second time slot if the first sink computing device has a higher priority than the second sink computing device.
According to an embodiment of the present invention, there is provided a method for use by a sink computing device of a plurality of sink computing devices, comprising:
receiving, from a source computing device, a message associated with an end of a transmission of a plurality of data packets from the source computing device to the sink computing device;
transmitting a first list of lost data blocks to the source computing device at a first time slot and in response to the message if there is a data packet in the plurality of data packets that the sink computing device did not receive from the source computing device,
wherein, if there is a data packet in the plurality of data packets that another sink computing device of the plurality of sink computing devices did not receive from the source computing device, and if the sink computing device has a higher priority than the other sink computing device, the first time slot precedes the second time slot, and the other sink computing device communicates a second list of lost data blocks at the second time slot.
According to an embodiment of the present invention, there is provided a source computing apparatus including:
one or more processors; and
a memory device having stored therein a plurality of instructions that, when executed by the processor, cause the processor to:
transmitting a plurality of data packets to a plurality of sink computing devices;
receiving a first list of lost data blocks from a first sink computing device of the plurality of sink computing devices at a first time slot if there are data packets in the plurality of data packets that the first sink computing device did not receive from the source computing device; and
receiving a second list of lost data blocks from a second sink computing device of the plurality of sink computing devices at a second time slot if there are data packets in the plurality of data packets that the second sink computing device did not receive from the source computing device,
wherein the first time slot precedes the second time slot if the first sink computing device has a higher priority than the second sink computing device.
According to an embodiment of the present invention, there is provided a sink computing apparatus including:
one or more processors; and
a memory device having stored therein a plurality of instructions that, when executed by the processor, cause the processor to:
receiving, from a source computing device, a message associated with an end of a transmission of a plurality of data packets from the source computing device to the sink computing device;
transmitting a first list of lost data blocks to the source computing device at a first time slot and in response to the message if there is a data packet in the plurality of data packets that the sink computing device did not receive from the source computing device,
wherein, if there is a data packet in the plurality of data packets that another sink computing device of the plurality of sink computing devices did not receive from the source computing device, and if the sink computing device has a higher priority than the other sink computing device, the first time slot precedes the second time slot, and the other sink computing device communicates a second list of lost data blocks at the second time slot.
Drawings
The systems, devices, and methods described herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. For simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
FIG. 1 is a simplified block diagram of one embodiment of a system that facilitates data transfer to one or more computing devices with reduced network overhead;
FIG. 2 is a simplified block diagram of one embodiment of a computing device of the system of FIG. 1;
FIG. 3 is a simplified flow diagram of one embodiment of a method executed by a source computing device of the system of FIG. 1 to transfer data to a plurality of computing devices;
FIG. 4 is a simplified flow diagram of one embodiment of a method executed by a plurality of computing devices of the system of FIG. 1 of receiving data from a source computing device;
FIG. 5 is a simplified flow diagram of one embodiment of a method for transmitting a bucket list to a source computing device;
FIG. 6 is a simplified flow diagram of another embodiment of a method for transmitting a bucket list to a source computing device;
FIG. 7 is a simplified diagram illustrating a first round of data transmission to a plurality of computing devices during the method of FIG. 5;
FIG. 8 is a simplified diagram illustrating a bucket list for a plurality of computing devices after the data transfer shown in FIG. 7; and
FIG. 9 is a simplified diagram illustrating a second round of data transmission to a plurality of computing devices during the method of FIG. 5.
Detailed Description
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific exemplary embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intention to limit the concepts of the disclosure to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
In the following description, numerous specific details such as logic implementations, opcodes, methods of specifying operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices may be set forth in order to provide a more thorough understanding of the present disclosure. However, it will be appreciated by one skilled in the art that embodiments of the disclosure may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences may not be shown in detail in order not to obscure the disclosure. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Some embodiments of the disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure implemented in a computer system may include one or more bus-based interconnects between components and/or one or more point-to-point interconnects between components. Embodiments of the invention may also be implemented as instructions stored on a machine-readable tangible medium, which may be read and executed by one or more processors. A machine-readable tangible medium may include any tangible mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable tangible medium may include Read Only Memory (ROM); random Access Memory (RAM); a magnetic disk storage medium; an optical storage medium; a flash memory device; and other tangible media.
Referring now to fig. 1, a system 100 that facilitates data transfer to a plurality of computing devices with reduced network overhead includes a source computing device 102 (or server computing device) and a computing device group 104 (or client computing device group). The source computing device 102 and the sink computing devices 110 are communicatively coupled to each other over the network 106 via the access point 108. The source computing device 102 may be implemented as any type of computing device capable of performing the functions described herein. For example, in some embodiments, the source computing device 102 may be implemented as a desktop computer, a laptop computer, a Mobile Internet Device (MID), or other network-enabled computing device.
The computing device group 104 includes one or more destination or sink computing devices 110. Similar to the source computing device 102, each sink computing device 110 may be implemented as any type of computing device capable of performing the functions described herein. For example, each sink computing device 110 may be implemented as a desktop computer, laptop computer, Mobile Internet Device (MID), or other network-enabled computing device.
Access point 108 facilitates communication between the source computing device 102 and the sink computing devices 110. Access point 108 may be implemented as any type of wired or wireless network communication routing device, such as a wired or wireless router, switch, hub, or other network communication device capable of communicatively coupling the source computing device 102 and the sink computing devices 110. In some embodiments, access point 108 is also communicatively coupled to an external network 130 via a communication link 124. External network 130 may be implemented as any type of wired and/or wireless network, such as a local area network, a wide area network, a publicly available global network (e.g., the internet), or other network. Similarly, communication link 124 may be implemented as any type of wired or wireless communication link capable of facilitating communication between access point 108 and external network 130, e.g., any number of wireless or physical connections, wires, cables, and/or other interconnecting links or pathways. Additionally, the external network 130 may include any number of additional devices, such as routers, switches, intermediary computers, and the like, to facilitate communication between the source computing device 102, the sink computing devices 110, and remote computing devices.
In some embodiments, the source computing device 102 and the computing device group 104 are located in a single room or are otherwise local to each other. For example, in one particular embodiment, the system 100 is incorporated into a classroom. In such embodiments, the source computing device 102 may be implemented as a teacher's or instructor's computing device, and the sink computing device 110 may be implemented as a student computing device. Of course, system 100 may also be used in other environments or implementations where a one-to-many data transmission is required.
In use, the source computing device 102 is configured to transmit data (e.g., files, video, images, text, and/or other data) to each sink computing device 110 by making multiple rounds of data transmissions in which multiple data blocks are transmitted to the sink computing devices 110. During each round of data transmission, each sink computing device 110 records those data blocks that were not received or were received in a corrupted state in a bucket list of lost data blocks. After each round of data transfer is completed, the sink computing devices 110 transmit their separate bucket lists to the source computing device 102. In some embodiments, as discussed in more detail below, a sink computing device 110 may transmit its bucket list based on some criteria, such as a delay period or the size of its respective bucket list (i.e., the number of data blocks in the bucket list). The source computing device 102 aggregates the bucket lists received from each sink computing device 110 and performs a subsequent round of data transmission, wherein the data blocks identified in the aggregated bucket list are retransmitted to the sink computing devices 110. One or more subsequent rounds may be conducted in this manner, during which the sink computing devices 110 continue to report any missing data blocks to the source computing device 102, until the bucket list of each sink computing device 110 is empty.
In one particular embodiment, the sink computing devices 110 are configured not to send an acknowledgement transmission after receiving a data block from the source computing device 102. In such embodiments, the source computing device 102 is similarly configured not to wait for such acknowledgement transmissions from the sink computing devices 110 between transmissions of data blocks. In this way, the overall network traffic, as well as the errors and delays caused by such increased network traffic, may be reduced.
Referring now to fig. 2, in one embodiment, each of the source computing device 102 and the sink computing device 110 includes a processor 200, a chipset 204, and a memory 202. The source computing device 102 and the sink computing device 110 may be implemented as any type of computing device capable of performing the respective functions described herein. For example, as discussed above, the source computing device 102 and the sink computing device 110 may be implemented as desktop computers, laptop computers, Mobile Internet Devices (MIDs), or other network-enabled computing devices.
Processor 200 is illustratively implemented as a single core processor having a processor core 206. However, in other embodiments, the processor 200 may be implemented as a multi-core processor having multiple processor cores 206. Additionally, the source computing device 102 and the sink computing device 110 may include additional processors 200 having one or more processor cores 206. The processor 200 is communicatively coupled to the chipset 204 via a plurality of signal paths 208. The signal paths 208 may be implemented as any type of signal paths capable of facilitating communication between the processor 200 and the chipset 204. For example, the signal paths 208 may be implemented as any number of bus paths, printed circuit board traces, lines, vias, intervening devices, and/or other interconnects.
The memory 202 may be implemented as one or more memory devices or data storage locations including, for example, dynamic random access memory devices (DRAM), synchronous dynamic random access memory devices (SDRAM), double data rate synchronous dynamic random access memory devices (DDR SDRAM), and/or other volatile memory devices. Additionally, although only a single memory device 202 is shown in fig. 2, in other embodiments, the source computing device 102 and the sink computing device 110 may include additional memory devices.
The chipset 204 may include a Memory Controller Hub (MCH) or northbridge, an input/output controller hub (ICH) or southbridge, and firmware devices. In such embodiments, the firmware device may be implemented as a memory storage device for storing Basic Input/Output System (BIOS) data and/or instructions and/or other information. Chipset 204 is communicatively coupled to memory 202 via a plurality of signal paths 210. Similar to the signal paths 208, the signal paths 210 may be implemented as any type of signal paths capable of facilitating communication between the chipset 204 and the memory device 202, such as any number of bus paths, printed circuit board traces, lines, vias, intervening devices, and/or other interconnects.
In other embodiments, chipset 204 may be implemented as a Platform Controller Hub (PCH). In such embodiments, the Memory Controller Hub (MCH) may be incorporated into processor 200 or otherwise associated with processor 200. Additionally, in such embodiments, the memory device 202 may be communicatively coupled to the processor 200, rather than to the chipset 204 (i.e., the platform controller hub), via a plurality of signal paths 212. Similar to the signal paths 208, the signal paths 212 may be implemented as any type of signal paths capable of facilitating communication between the memory device 202 and the processor 200, such as any number of bus paths, printed circuit board traces, lines, vias, intervening devices, and/or other interconnects.
The source computing device 102 and the sink computing devices 110 also include communication circuitry 220 for communicating with each other over the network 106. The communication circuitry 220 may be implemented as any number of devices and circuitry for enabling communication between the source computing device 102 and the sink computing devices 110. For example, the communication circuitry 220 may be implemented as one or more wired or wireless Network Interface Cards (NICs) or other network communication cards, modules, or circuits for communicating with the other computing devices 102, 110 via the access point 108.
The source computing device 102 and sink computing device 110 may also include additional peripheral devices such as a data storage device 222, a display device 224, and other peripheral devices 226. Each of the communication circuitry 220, the data storage device 222, the display device 224, and the other peripheral devices 226 is communicatively coupled to the chipset 204 via signal paths 230. Again, similar to the signal paths 208, the signal paths 230 may be implemented as any type of signal paths capable of facilitating communication between the chipset 204 and the communication circuitry 220, the data storage device 222, the display device 224, and the other peripheral devices 226, such as any number of bus paths, printed circuit board traces, lines, vias, intervening devices, and/or other interconnects.
The data storage device(s) 222 may be implemented as any type of device configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard drives, solid state drives, or other data storage devices. Display device 224 may be implemented as any type of display device for displaying data to users of source computing device 102 and sink computing device 110, such as a Liquid Crystal Display (LCD), Cathode Ray Tube (CRT) display, Light Emitting Diode (LED) display, or other display device. The peripheral devices 226 may include any number of additional peripheral devices, including input devices, output devices, and other interface devices. For example, the peripheral devices 226 may include a keyboard and/or mouse for supplying input to the source computing device 102 and the sink computing device 110. The particular number and type of devices included in the peripheral devices 226 may depend, for example, on the intended use of the source computing device 102 and the sink computing device 110.
Referring now to fig. 3, a method 300 for transmitting data to a plurality of sink computing devices 110 begins with block 302, wherein the source computing device 102 initializes a data transfer session. During block 302, the source computing device 102 may perform any number of calibration and initialization processes. Additionally, in some embodiments, in block 304, the source computing device 102 handshakes with each sink computing device 110. Such handshaking may establish the communication protocol that will be used to communicate data to the sink computing devices 110 and/or exchange other information or data to prepare the sink computing devices 110 to receive data transmissions. For example, in one embodiment, the source computing device 102 notifies the sink computing devices 110 of information about the data files to be transferred to them. The sink computing devices 110 may use this information (e.g., the number of data blocks that each sink computing device 110 should expect to receive) to determine lost data blocks (i.e., data blocks that were not received or were received in a corrupted state).
As discussed above, the source computing device 102 is configured to communicate data files or other data to the sink computing devices 110 using one or more rounds of data transfer. Those data transfer rounds subsequent to the first round are based on feedback from the sink computing devices 110. The source computing device 102 may make as many rounds of data transfer as are required to successfully transfer a data file or other data to the sink computing devices 110. Accordingly, in block 306, the source computing device initiates the next round of data transmission. For example, during the first iteration of the method 300, the source computing device 102 initiates the first round (i.e., X = 1) of data transfer to the sink computing devices 110. In block 306, the source computing device 102 may, for example, transmit to the sink computing devices 110 an announcement that the current round of data transmission is about to begin. In some embodiments, the source computing device 102 may also transmit data regarding the particular round of data transmission (e.g., the expected number of data blocks) to the sink computing devices 110, such that the sink computing devices 110 can determine whether a data block has been lost.
After the source computing device 102 has initiated the current round of data transfer, in block 308, the source computing device 102 transmits the data blocks for the current round to the sink computing devices 110. In the first round of data transfer, every data block of the data file or other data to be transmitted is transmitted to the sink computing devices 110. However, as discussed in more detail below, in subsequent rounds of data transmission, only those data blocks identified as lost by one or more sink computing devices are transmitted. Thus, in block 308, any number of data blocks may be transferred depending on, for example, the size of the data file to be transferred, previous rounds of data transmission, feedback from the sink computing devices 110 (as discussed in more detail below), and/or other criteria.
The source computing device 102 may use any suitable network communication technique to communicate the data blocks to the sink computing devices 110. For example, in some embodiments, the source computing device 102 may use multicast data transmissions or broadcast data transmissions to deliver the data blocks to the sink computing devices 110. Alternatively, in one particular embodiment, the source computing device 102 is configured to use the one-to-many data transmission technique described in U.S. patent application XX/XXXXXXX, entitled "METHOD AND SYSTEM FOR FACILITATING ONE-TO-MANY DATA TRANSMISSION TO A PLURALITY OF COMPUTING DEVICES", filed December XX, 2009. In such embodiments, the source computing device 102 is configured to select one of the sink computing devices 110 as the addressed, or selected, sink computing device. The remaining sink computing devices 110 are configured in a promiscuous communication mode. The source computing device 102 delivers the block of data to the selected sink computing device 110 by transmitting unicast data transmissions to the selected sink computing device 110. However, while the unicast data transmission is addressed to the selected sink computing device 110, it is also received by each of the other, non-selected sink computing devices 110. Because the non-selected sink computing devices 110 are configured in a promiscuous communication mode, they also filter and process the unicast data transmissions. In this way, one-to-many data transmission is achieved using unicast data transmission.
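The promiscuous-receive arrangement can be made concrete with a short sketch. The following Linux-only, root-requiring example shows one way a non-selected sink could be configured to observe unicast frames addressed to the selected sink; the raw-socket mechanism, the constants, and the interface name "eth0" are illustrative assumptions, not details prescribed by the application.

```python
# Hedged sketch: put a NIC in promiscuous mode so frames addressed to
# other stations (e.g., the selected sink) are delivered to this socket.
# Linux-only; requires root. Constants are from the Linux UAPI headers.
import socket
import struct

ETH_P_ALL = 0x0003              # <linux/if_ether.h>: capture every protocol
SOL_PACKET = 263                # <linux/socket.h>
PACKET_ADD_MEMBERSHIP = 1       # <linux/if_packet.h>
PACKET_MR_PROMISC = 1

def open_promiscuous(ifname: str = "eth0") -> socket.socket:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    # struct packet_mreq { int mr_ifindex; unsigned short mr_type;
    #                      unsigned short mr_alen; unsigned char mr_address[8]; }
    mreq = struct.pack("iHH8s", socket.if_nametoindex(ifname),
                       PACKET_MR_PROMISC, 0, b"\x00" * 8)
    s.setsockopt(SOL_PACKET, PACKET_ADD_MEMBERSHIP, mreq)
    return s    # recv() now also yields frames addressed to other stations
```

Frames captured this way still carry the selected sink's destination address, so, consistent with the filtering step described above, each non-selected sink would have to match received frames against the transfer session (e.g., by a port or session identifier, both hypothetical here) before processing payloads.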
After the source computing device 102 has communicated the data blocks for the current round of data transfer, the source computing device 102 transmits, in block 310, an announcement to each sink computing device 110 informing it that the current round of data transfer has been completed. Next, in block 312, the source computing device 102 receives any bucket lists transmitted by the sink computing devices 110. As discussed in more detail below, each bucket list identifies those data blocks from the current round that the reporting sink computing device 110 lost.
Whether a particular sink computing device 110 transmits a bucket list to the source computing device 102 in block 312 is determined by one or more criteria. For example, in one embodiment, each sink computing device 110 having at least one missing data block in its respective bucket list transmits the bucket list to the source computing device upon or after receiving the announcement that the current round has ended. However, in other embodiments, the sink computing devices 110 may be configured to transmit their respective bucket lists based on a delay period after receiving the announcement that the current round has ended. In such embodiments, the source computing device 102 is configured to transmit, in block 314, a default delay value to each sink computing device 110. Each sink computing device 110 determines its respective delay period after the round-end announcement based on the default delay value, as discussed in more detail below. Alternatively or additionally, in other embodiments, the sink computing devices 110 may be configured to transmit their respective bucket lists based on the size of their bucket lists. In such embodiments, the source computing device 102 is configured to transmit, in block 316, a predetermined bucket list size threshold to each sink computing device 110. Each sink computing device 110 determines whether to transmit its bucket list based on the threshold, as discussed in more detail below.
After the source computing device 102 receives the bucket lists for the current round from the sink computing devices 110, in block 318, the source computing device 102 aggregates the bucket lists to generate a master bucket list. The master bucket list includes a marker or identification of each data block reported as lost by one or more sink computing devices 110 during the current round of data transmission. If it is determined in block 320 that the master bucket list is empty (i.e., no sink computing device 110 transmitted a bucket list identifying at least one lost data block), then in block 322, the source computing device 102 transmits to each sink computing device 110 an announcement that the data transfer session has completed.
However, it should be appreciated that in embodiments where the source computing device 102 transmits the bucket list size threshold to the sink computing devices 110 in block 316, the source computing device 102 may be configured to lower the threshold by a predetermined amount and retransmit the threshold to the sink computing devices 110 if no bucket list is received from the sink computing devices 110. In this way, the source computing device 102 may continue to lower the threshold each round until the threshold equals zero.
Referring back to block 320, if the master bucket list is not empty, the method 300 loops back to block 306 where the next round of data transfer is initiated. In this next round, the data blocks identified in the master bucket list are retransmitted to the sink computing device 110, as discussed above. As such, the source computing device 102 makes one or more rounds of data transfers to transfer data files or other data to the sink computing device 110. Typically, each subsequent round of data transmission will include a smaller number of data blocks to be retransmitted by the source computing device 102.
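The feedback-driven round loop of blocks 306 through 322 can be illustrated with a short, self-contained simulation. Here the network layer is replaced by random packet loss and the sink-side behavior (described below with respect to method 400) is reduced to set bookkeeping; all names, the loss model, and the parameters are illustrative, not part of the application.

```python
# Minimal simulation of the round-based transfer of method 300.
import random

def simulate_session(num_blocks: int, num_sinks: int, loss_rate: float = 0.2) -> int:
    """Run rounds until every simulated sink holds every block; return round count."""
    blocks = set(range(num_blocks))
    received: list[set[int]] = [set() for _ in range(num_sinks)]
    pending = set(blocks)                 # round 1 transmits every block
    rounds = 0
    while pending:                        # block 320: master bucket list non-empty
        rounds += 1
        for block_id in pending:          # blocks 306-310: transmit, no per-block ACKs
            for sink in received:
                if random.random() >= loss_rate:
                    sink.add(block_id)
        # Blocks 312-318: each sink's bucket list of lost blocks is
        # aggregated into the master bucket list for the next round.
        pending = set()
        for sink in received:
            pending |= blocks - sink
    return rounds                         # block 322: session-end announcement

print(simulate_session(num_blocks=100, num_sinks=60))  # a handful of rounds
```

Because each round retransmits only the union of the reported bucket lists, the number of retransmitted blocks typically shrinks quickly from round to round, matching the observation above that each subsequent round includes fewer data blocks.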
Referring now to fig. 4, a method 400 for receiving data from a source computing device 102 begins with block 402, wherein each sink computing device 110 handshakes with the source computing device 102. In block 402, the source computing device 102 and the sink computing devices 110 may establish the communication protocol that will be used to communicate data to the sink computing devices 110 and/or exchange other information or data to prepare the sink computing devices 110 to receive the data transmission, as discussed above. For example, in some embodiments, the sink computing devices 110 may receive information about the data files to be transmitted by the source computing device 102, such as the number of data blocks that should be received by each sink computing device 110.
In block 404, the sink computing device determines whether data transmission for the current round has begun. As discussed above with respect to the method 300, the source computing device 102 may transmit to the sink computing devices 110 an announcement and/or other data indicating the start of a round of data transmission and, in some embodiments, data about the current round of data transmission, e.g., the number of data blocks to be transmitted in the round.
The plurality of sink computing devices 110 receive the data blocks transmitted by the source computing device 102 in block 406 and update their respective bucket lists of missing data blocks in block 408. Each sink computing device 110 is configured to generate and/or update a list identifying those data blocks that were transmitted by the source computing device 102 during the current round but were not received, or were received in a corrupted state, and thus must be retransmitted to the sink computing device 110. The sink computing devices 110 may store their bucket lists in their respective memory 202 and/or data storage 222. The respective bucket lists may be different from each other and may contain no data blocks or one or more data blocks, depending on the current round of data transmission. The particular data stored to identify the lost data blocks may vary depending on the particular embodiment and implementation. For example, in one embodiment, each sink computing device 110 is configured to store the packet identification number of each data block that is determined to be lost for the current round.
In block 410, the sink computing device 110 determines whether the round has been completed. If not, the method 400 loops back to blocks 406 and 408, where each sink computing device 110 continues to receive, or otherwise waits to receive, data blocks from the source computing device 102 and updates its bucket list of missing data blocks accordingly. However, if the sink computing devices 110 receive the round-end announcement from the source computing device 102, the method 400 proceeds to block 412, where each sink computing device 110 determines whether its bucket list is empty (i.e., whether it has received every data block of the current data transfer session). If so, in block 414, that particular sink computing device 110 waits for an end-of-session announcement from the source computing device 102. However, if the sink computing device 110 has at least one missing data block identified in its bucket list of missing data blocks, the method 400 proceeds to block 416, where the sink computing device 110 transmits its bucket list of missing data blocks to the source computing device based on one or more criteria.
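A minimal sketch of the per-round bookkeeping of blocks 406 through 416 follows. The class and method names are hypothetical; the application specifies only that a packet identification number is stored for each lost block.

```python
class SinkState:
    """Illustrative sink-side bucket-list bookkeeping (names are hypothetical)."""

    def __init__(self) -> None:
        self.received: set[int] = set()   # IDs of blocks received intact

    def on_block(self, block_id: int, intact: bool) -> None:
        # A block received in a corrupted state counts as lost (block 408).
        if intact:
            self.received.add(block_id)

    def bucket_list(self, announced_ids: set[int]) -> set[int]:
        # Lost blocks: announced for this round but never received intact.
        return announced_ids - self.received

# After the round-end announcement, an empty bucket list means the sink
# simply waits for the session-end announcement (blocks 412-414).
state = SinkState()
state.on_block(1, intact=True)
state.on_block(3, intact=False)
print(state.bucket_list({1, 2, 3}))   # {2, 3}
```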
As discussed above, in some embodiments, the sink computing device 110 may be configured to transmit its bucket list upon or after receiving the round-end announcement from the source computing device 102. However, in other embodiments, the sink computing device 110 may be configured to transmit, in block 418, its bucket list of missing data blocks based on a default delay value. To do so, the sink computing device 110 may execute the method 500 for transmitting a bucket list shown in fig. 5. The method 500 begins with block 502, where the sink computing device 110 receives the default delay value from the source computing device 102. The default delay value defines a default time period after the current round is completed (e.g., after receiving the round-end announcement) after which the sink computing device 110 is to transmit its respective bucket list. The default delay value may be expressed in microseconds, seconds, minutes, or another time metric. In block 504, the sink computing device 110 determines its particular delay period based on the size of its particular bucket list. For example, in one particular embodiment, the delay period for each sink computing device 110 is calculated by dividing the default delay value by the size of its respective bucket list (e.g., the number of data blocks identified in the bucket list). In block 506, the sink computing device 110 then transmits its respective bucket list at or after the expiration of the calculated delay period. In this way, sink computing devices with larger bucket lists transmit their bucket lists before those with smaller bucket lists. In addition, by spreading out the transmission periods of the sink computing devices 110, the overall delay and the likelihood of network "implosion" may be reduced.
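In code, the delay computation of method 500 reduces to a single division. The guard against an empty list below is an added assumption for safety; per the method, an empty bucket list is never transmitted in any case.

```python
def feedback_delay(default_delay: float, bucket_list_size: int) -> float:
    # Method 500, block 504: delay = default delay value / bucket-list size,
    # so sinks with more lost blocks report earlier. The max() guard is an
    # illustrative assumption, not part of the described calculation.
    return default_delay / max(bucket_list_size, 1)

# With a default delay of 1.0 s: 10 lost blocks -> 0.1 s; 2 lost blocks -> 0.5 s.
```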
Referring back to block 416 of method 400, in other embodiments, the sink computing device may be configured to transmit, in block 420, its particular bucket list based on the size of the bucket list. To do so, the sink computing device 110 may execute the method 600 for transmitting a bucket list shown in fig. 6. The method 600 begins with block 602, where the sink computing device 110 receives the predetermined bucket list size threshold. As discussed above, the predetermined threshold may define a minimum number of data blocks. In block 604, the sink computing device 110 determines whether its particular bucket list has a predetermined relationship to the threshold. For example, in the illustrative embodiment of fig. 6, the sink computing device 110 determines whether its particular bucket list is greater than the threshold. However, in other embodiments, the sink computing device 110 may be configured to determine, in block 604, whether its particular bucket list is equal to the threshold or less than the threshold. In the illustrative embodiment, if the bucket list size of the sink computing device 110 is greater than the threshold, in block 606, the sink computing device 110 transmits its bucket list to the source computing device 102. However, if the bucket list of the sink computing device 110 is not greater than the threshold, the sink computing device 110 does not transmit its bucket list for the current round. Rather, the sink computing device 110 retains the identified missing data blocks in its bucket list and updates it accordingly during the next round of data transmission. In this way, only those sink computing devices 110 that have a bucket list greater than the threshold transmit their respective bucket lists. Thus, the transmission of bucket lists by the sink computing devices 110 is spread out or otherwise reduced on a per-round basis, which may reduce the overall delay and the likelihood of network "implosion."
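The threshold test of method 600, together with the round-by-round threshold lowering described above for the source (block 316), can be sketched as two small functions. As the text notes, the comparison direction varies by embodiment, and the step size here is illustrative.

```python
def should_report(bucket_list_size: int, threshold: int) -> bool:
    # Figure-6 embodiment: transmit only when the bucket list is larger
    # than the threshold; other embodiments may compare with >= or <.
    return bucket_list_size > threshold

def next_threshold(threshold: int, step: int = 1) -> int:
    # Source side: if no bucket lists arrive in a round, lower the threshold
    # by a predetermined amount (step is illustrative) and retransmit it,
    # continuing each round until the threshold reaches zero.
    return max(threshold - step, 0)

assert should_report(5, 3) and not should_report(3, 3)
assert next_threshold(1) == 0
```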
Referring now to fig. 7-9, illustrative first and second rounds of data transfer from the source computing device 102 to four sink computing devices 110 are shown. In FIG. 7, the source computing device 102 transmits a data file implemented as a plurality of data blocks B1 through Bn to the sink computing devices 110. During the first round of data transfer, the first sink computing device 110 receives data blocks B1 and B2, but does not receive data blocks B3, B4, and B5, or receives them in a corrupted state. These blocks are shown in fig. 7 as data blocks X3, X4, and X5, respectively, to indicate that they are missing data blocks for the first sink computing device 110. The second sink computing device 110 receives data blocks B1, B2, B4, and B5 but loses data block B3. The third sink computing device 110 receives data blocks B2, B3, and B5 but loses data blocks B1 and B4. Finally, the fourth sink computing device 110 receives all data blocks of the first round of data transmission.
The respective bucket lists for each sink computing device 110 after the first round of data transmission are shown in fig. 8. The bucket list 801 of the first sink computing device 110 includes an identification or marker of data blocks B3 (X3), B4 (X4), and B5 (X5), and of any additional data blocks Xn that the first sink computing device 110 determines were lost during the first round of data transmission. Similarly, the bucket list 802 of the second sink computing device includes an identification or marker of data block B3 and of any additional data blocks that the second sink computing device 110 determines are missing. The bucket list 803 of the third sink computing device 110 includes an identification or marker of data blocks B1 and B4 and of any additional data blocks that the third sink computing device 110 determines are missing. The bucket list 804 of the fourth sink computing device 110 is empty. Each sink computing device 110 having a non-empty bucket list transmits its respective bucket list to the source computing device 102 based on one or more criteria as discussed above (e.g., upon receiving the round-end announcement, after a delay period based on the default delay value, based on the size of the bucket list, etc.).
Next, as shown in fig. 9, the source computing device 102 aggregates the bucket lists received from the sink computing devices 110 and initiates a second round of data transfer based on the aggregated bucket list. For example, in the illustrative embodiment, the source computing device 102 retransmits the data blocks B1, B3, B4, B5, and so on. Thus, the source computing device 102 retransmits only those data blocks identified as lost by one or more reporting sink computing devices 110. The retransmitted data blocks are received by the sink computing devices 110, which update their respective bucket lists based on the receipt, or lack of receipt, of each particular data block. The source computing device 102 may perform subsequent rounds of data transfer until each sink computing device 110 has a complete set of valid data blocks, indicating that the data file has been successfully transferred to each sink computing device 110.
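The aggregation step in this example reduces to a set union. The snippet below reproduces the bucket lists of FIG. 8 and derives the round-two retransmission set of FIG. 9, using the block identifiers from the figures.

```python
bucket_lists = [
    {"B3", "B4", "B5"},   # bucket list 801 (first sink computing device)
    {"B3"},               # bucket list 802 (second sink computing device)
    {"B1", "B4"},         # bucket list 803 (third sink computing device)
    set(),                # bucket list 804 (fourth sink; empty, never transmitted)
]
master = set().union(*bucket_lists)   # block 318: the master bucket list
print(sorted(master))                 # ['B1', 'B3', 'B4', 'B5'] -> round 2
```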
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered as illustrative and not restrictive in character, it being understood that only illustrative embodiments have been described and shown and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. For example, it should be appreciated that while the system 100 has been generally described above for "one-to-many" data communications, the systems, devices, and methods described above are equally applicable to "one-to-one" data communications. In such embodiments, the computing device group 104 may include a single sink computing device 110 that receives "one-to-one" data communications from the source computing device 102 using the devices and methods described herein.

Claims (29)

1. A method for use by a source computing device, comprising:
transmitting a plurality of data packets to a plurality of sink computing devices;
transmitting, to the plurality of sink computing devices, a message associated with an end of transmitting the plurality of data packets;
receiving a first list of lost data blocks from a first sink computing device of the plurality of sink computing devices at a first time slot if there are data packets in the plurality of data packets that the first sink computing device did not receive from the source computing device; and
receiving a second list of lost data blocks from a second sink computing device of the plurality of sink computing devices at a second time slot if there are data packets in the plurality of data packets that the second sink computing device did not receive from the source computing device,
wherein, if more data packets are indicated in the first list of lost data blocks than in the second list of lost data blocks, the first time slot precedes the second time slot,
wherein at least one of the first list of lost data blocks and the second list of lost data blocks is to be received from the first sink computing device and/or the second sink computing device in response to the message.
2. The method of claim 1, further comprising:
a step of retransmitting to the first sink computing device the data packets indicated in the first list of lost data blocks;
a step of transmitting another message associated with an end of retransmission of the data packet to the first sink computing device;
a step of receiving, from the first sink computing device and in response to the further message, a third list of lost data blocks if there is a data packet in the retransmitted data packet that the first sink computing device did not receive from the source computing device; and
repeating the above steps until no data packets are present in the retransmitted data packets that the first sink computing device did not receive from the source computing device.
3. The method of claim 1, further comprising:
a step of retransmitting to the second sink computing device the data packets indicated in the second list of lost data blocks;
a step of transmitting, to the second sink computing device, another message associated with an end of the retransmission of the data packets;
a step of receiving, from the second sink computing device and in response to the other message, a fourth list of lost data blocks if there is a data packet in the retransmitted data packet that the second sink computing device did not receive from the source computing device; and
repeating the above steps until no data packets are present in the retransmitted data packets that the second sink computing device did not receive from the source computing device.
4. The method of any of claims 1-3, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the first sink computing device.
5. The method of any of claims 1-3, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the second sink computing device.
6. A method for use by a sink computing device of a plurality of sink computing devices, comprising:
receiving, from a source computing device, a message associated with an end of a transmission of a plurality of data packets from the source computing device to the sink computing device;
transmitting a first list of lost data blocks to the source computing device at a first time slot and in response to the message if there are data packets in the plurality of data packets that the sink computing device did not receive from the source computing device;
wherein, if there is a data packet in the plurality of data packets that another sink computing device of the plurality of sink computing devices did not receive from the source computing device, and if more data packets are indicated in the first list of lost data blocks than are indicated in the second list of lost data blocks, the first time slot precedes the second time slot, and the other sink computing device transmits the second list of lost data blocks at the second time slot.
7. The method of claim 6, further comprising:
a step of receiving, from the source computing device, another message associated with an end of a retransmission of the data packets indicated in the first list of lost data blocks;
a step of communicating a third list of lost data blocks to the source computing device and in response to the other message if there is a data packet in the retransmitted data packet that the sink computing device did not receive from the source computing device; and
repeating the above steps until no data packets are present in the retransmitted data packets that the sink computing device did not receive from the source computing device.
8. The method of any of claims 6-7, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the sink computing device.
9. The method of any of claims 6-7, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the other sink computing device.
10. A source computing device, comprising:
one or more processors; and
a memory device having stored therein a plurality of instructions that, when executed by the processor, cause the processor to:
transmitting a plurality of data packets to a plurality of sink computing devices;
transmitting, to the plurality of sink computing devices, a message associated with an end of transmitting the plurality of data packets;
receiving a first list of lost data blocks from a first sink computing device of the plurality of sink computing devices at a first time slot if there are data packets in the plurality of data packets that the first sink computing device did not receive from the source computing device; and
receiving a second list of lost data blocks from a second sink computing device of the plurality of sink computing devices at a second time slot if there are data packets in the plurality of data packets that the second sink computing device did not receive from the source computing device,
wherein, if more data packets are indicated in the first list of lost data blocks than in the second list of lost data blocks, the first time slot precedes the second time slot,
wherein at least one of the first list of lost data blocks and the second list of lost data blocks is to be received from the first sink computing device and/or the second sink computing device in response to the message.
11. The source computing device of claim 10, wherein the plurality of instructions further cause the processor to perform:
a step of retransmitting to the first sink computing device the data packets indicated in the first list of lost data blocks;
a step of transmitting another message associated with an end of retransmission of the data packet to the first sink computing device;
a step of receiving, from the first sink computing device and in response to the further message, a third list of lost data blocks if there is a data packet in the retransmitted data packet that the first sink computing device did not receive from the source computing device; and
repeating the above steps until no data packets are present in the retransmitted data packets that the first sink computing device did not receive from the source computing device.
12. The source computing device of claim 10, wherein the plurality of instructions further cause the processor to perform:
a step of retransmitting to the second sink computing device the data packets indicated in the second list of lost data blocks;
a step of transmitting, to the second sink computing device, another message associated with an end of the retransmission of the data packets;
a step of receiving, from the second sink computing device and in response to the other message, a fourth list of lost data blocks if there is a data packet in the retransmitted data packet that the second sink computing device did not receive from the source computing device; and
repeating the above steps until no data packets are present in the retransmitted data packets that the second sink computing device did not receive from the source computing device.
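Claims 11 and 12 add the retransmission rounds: resend whatever any sink reported missing, close the round with another end message, and repeat until no lost-block lists come back. A sketch under the same assumptions, with collect_lost_lists() standing in for the slotted feedback window:

```python
def source_repair_rounds(broadcast, collect_lost_lists, payloads):
    """Repeat retransmission rounds until no sink reports losses."""
    while True:
        reports = collect_lost_lists()   # assumed: {sink_id: [lost seqnos]}
        missing = sorted({n for lost in reports.values() for n in lost})
        if not missing:
            return                       # no sink is missing anything
        for seqno in missing:
            # One retransmission serves every sink that lost this packet.
            broadcast(("DATA", seqno, payloads[seqno]))
        broadcast(("END", len(missing), None))
```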
13. The source computing device of any of claims 10-12, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the first sink computing device.
14. The source computing device of any of claims 10-12, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the second sink computing device.
15. A sink computing device, comprising:
one or more processors; and
a memory device having stored therein a plurality of instructions that, when executed by the one or more processors, cause the one or more processors to:
receive, from a source computing device, a message associated with an end of a transmission of a plurality of data packets from the source computing device to the sink computing device; and
transmit, to the source computing device, at a first time slot and in response to the message, a first list of lost data blocks if there is a data packet among the plurality of data packets that the sink computing device did not receive from the source computing device,
wherein, if another sink computing device of a plurality of sink computing devices did not receive a data packet of the plurality of data packets from the source computing device, the other sink computing device transmits a second list of lost data blocks at a second time slot, and, if more data packets are indicated in the first list of lost data blocks than in the second list of lost data blocks, the first time slot precedes the second time slot.
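Claim 15 states the ordering condition from the sink's perspective: the sink with the longer lost-block list transmits in the earlier slot. In the claimed scheme each sink derives its own slot from a delay value (claims 17 and 18); the centralized helper below exists only to make the ordering rule concrete.

```python
def assign_feedback_slots(lost_lists, slot_width_s=0.002):
    """Give earlier slots to sinks with longer lists of lost data blocks."""
    ranked = sorted(lost_lists.items(), key=lambda kv: len(kv[1]), reverse=True)
    return {sink_id: i * slot_width_s for i, (sink_id, _) in enumerate(ranked)}

# assign_feedback_slots({"a": [1, 2, 3], "b": [7]})
# -> {"a": 0.0, "b": 0.002}: sink "a" lost more packets, so it reports first.
```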
16. The sink computing device of claim 15, wherein the plurality of instructions further cause the one or more processors to:
receive, from the source computing device, another message associated with an end of a retransmission of the data packets indicated in the first list of lost data blocks;
communicate, to the source computing device and in response to the other message, a third list of lost data blocks if there is a data packet among the retransmitted data packets that the sink computing device did not receive from the source computing device; and
repeat the above steps until there are no data packets among the retransmitted data packets that the sink computing device did not receive from the source computing device.
17. The sink computing device of any of claims 15-16, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the sink computing device.
18. The sink computing device of any of claims 15-16, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the other sink computing device.
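The claims leave the wire format of a list of lost data blocks open. A purely illustrative encoding collapses runs of consecutive sequence numbers into (start, count) ranges, so that even a long loss burst costs only two integers in the feedback message.

```python
def encode_lost_blocks(lost_seqnos):
    """Collapse sorted sequence numbers into (start, count) ranges.
    A hypothetical compact encoding; the claims do not specify one."""
    ranges = []
    start = prev = None
    for s in sorted(lost_seqnos):
        if start is None:
            start = prev = s
        elif s == prev + 1:
            prev = s                    # extend the current run
        else:
            ranges.append((start, prev - start + 1))
            start = prev = s            # begin a new run
    if start is not None:
        ranges.append((start, prev - start + 1))
    return ranges

# encode_lost_blocks([3, 4, 5, 9, 10, 17]) -> [(3, 3), (9, 2), (17, 1)]
```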
19. An apparatus in a source computing device, comprising:
means for transmitting a plurality of data packets to a plurality of sink computing devices;
means for transmitting, to the plurality of sink computing devices, a message associated with an end of the transmission of the plurality of data packets;
means for receiving a first list of lost data blocks from a first sink computing device of the plurality of sink computing devices at a first time slot if there is a data packet among the plurality of data packets that the first sink computing device did not receive from the source computing device; and
means for receiving a second list of lost data blocks from a second sink computing device of the plurality of sink computing devices at a second time slot if there is a data packet among the plurality of data packets that the second sink computing device did not receive from the source computing device,
wherein, if more data packets are indicated in the first list of lost data blocks than in the second list of lost data blocks, the first time slot precedes the second time slot,
wherein at least one of the first list of lost data blocks and the second list of lost data blocks is to be received from the first sink computing device and/or the second sink computing device in response to the message.
20. The apparatus of claim 19, further comprising means for:
retransmitting, to the first sink computing device, the data packets indicated in the first list of lost data blocks;
transmitting, to the first sink computing device, another message associated with an end of the retransmission of the data packets;
receiving, from the first sink computing device and in response to the other message, a third list of lost data blocks if there is a data packet among the retransmitted data packets that the first sink computing device did not receive from the source computing device; and
repeating the above steps until there are no data packets among the retransmitted data packets that the first sink computing device did not receive from the source computing device.
21. The apparatus of claim 19, further comprising means for:
retransmitting, to the second sink computing device, the data packets indicated in the second list of lost data blocks;
transmitting, to the second sink computing device, another message associated with an end of the retransmission of the data packets;
receiving, from the second sink computing device and in response to the other message, a fourth list of lost data blocks if there is a data packet among the retransmitted data packets that the second sink computing device did not receive from the source computing device; and
repeating the above steps until there are no data packets among the retransmitted data packets that the second sink computing device did not receive from the source computing device.
22. The apparatus of any of claims 19-21, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the first sink computing device.
23. The apparatus of any of claims 19-21, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the second sink computing device.
24. An apparatus in a sink computing device of a plurality of sink computing devices, comprising:
means for receiving, from a source computing device, a message associated with an end of a transmission of a plurality of data packets from the source computing device to the sink computing device; and
means for transmitting, to the source computing device, at a first time slot and in response to the message, a first list of lost data blocks if there is a data packet among the plurality of data packets that the sink computing device did not receive from the source computing device,
wherein, if another sink computing device of the plurality of sink computing devices did not receive a data packet of the plurality of data packets from the source computing device, the other sink computing device transmits a second list of lost data blocks at a second time slot, and, if more data packets are indicated in the first list of lost data blocks than in the second list of lost data blocks, the first time slot precedes the second time slot.
25. The apparatus of claim 24, further comprising means for:
receiving, from the source computing device, another message associated with an end of a retransmission of the data packets indicated in the first list of lost data blocks;
communicating, to the source computing device and in response to the other message, a third list of lost data blocks if there is a data packet among the retransmitted data packets that the sink computing device did not receive from the source computing device; and
repeating the above steps until there are no data packets among the retransmitted data packets that the sink computing device did not receive from the source computing device.
26. The apparatus of any of claims 24-25, wherein the first time slot occurs after expiration of a first delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the sink computing device.
27. The apparatus of any of claims 24-25, wherein the second time slot occurs after expiration of a second delay value associated with a state of a Wireless Local Area Network (WLAN) channel between the source computing device and the other sink computing device.
28. A machine-readable medium having instructions stored thereon that, when executed by a processor of a source computing device, cause the source computing device to perform the method of any of claims 1-5.
29. A machine-readable medium having instructions stored thereon that, when executed by a processor of a sink computing device, cause the sink computing device to perform the method of any of claims 6-9.
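Putting the claimed pieces together, the loopback simulation below (illustrative only; the loss model, parameters, and message handling are assumptions) shows the transmission and retransmission rounds converging until every sink holds every packet.

```python
import random

def simulate(num_packets=20, num_sinks=3, loss_prob=0.3, seed=1):
    """Count the rounds needed until all sinks have received all packets."""
    random.seed(seed)
    received = {s: set() for s in range(num_sinks)}
    to_send = list(range(num_packets))
    rounds = 0
    while to_send:
        rounds += 1
        for seqno in to_send:            # lossy one-to-many delivery
            for s in received:
                if random.random() > loss_prob:
                    received[s].add(seqno)
        # End-of-round message: each sink reports its lost-block list,
        # longest list first, mirroring the claimed slot ordering.
        reports = sorted((sorted(set(range(num_packets)) - got)
                          for got in received.values()), key=len, reverse=True)
        # The next round retransmits the union of all reported losses.
        to_send = sorted({n for lost in reports for n in lost})
    return rounds

print(simulate())   # prints the number of rounds, e.g. 3 with this seed
```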
CN201610491076.8A 2009-12-17 2009-12-17 Method and system for facilitating one-to-many data transmission with reduced network overhead Active CN106130699B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610491076.8A CN106130699B (en) 2009-12-17 2009-12-17 Method and system for facilitating one-to-many data transmission with reduced network overhead
CN2009801629658A CN102652411A (en) 2009-12-17 2009-12-17 Method and system for facilitating one-to-many data transmissions with reduced network overhead

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN2009801629658A Division CN102652411A (en) 2009-12-17 2009-12-17 Method and system for facilitating one-to-many data transmissions with reduced network overhead

Publications (2)

Publication Number Publication Date
CN106130699A CN106130699A (en) 2016-11-16
CN106130699B true CN106130699B (en) 2020-09-04

Family

ID=57287867


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002058345A2 (en) * 2001-01-22 2002-07-25 Sharewave, Inc. Method for allocating receive buffers to accommodate retransmission scheme in wireless computer networks
CN1499868A (en) * 2002-11-08 2004-05-26 深圳市中兴通讯股份有限公司 Method for downloading data based on control instructions along with roate

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1277374C (en) * 2003-09-29 2006-09-27 中兴通讯股份有限公司 Communication system base station and server database real-time synchronizing method
WO2006091026A1 (en) * 2005-02-24 2006-08-31 Lg Electronics Inc. Packet structure and packet transmission method of network control protocol


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant