GB2569808A - Transmission method, communication device and communication network - Google Patents

Transmission method, communication device and communication network

Info

Publication number
GB2569808A
GB2569808A GB1721842.1A GB201721842A GB2569808A GB 2569808 A GB2569808 A GB 2569808A GB 201721842 A GB201721842 A GB 201721842A GB 2569808 A GB2569808 A GB 2569808A
Authority
GB
United Kingdom
Prior art keywords
media data
communication device
data
value
amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1721842.1A
Other versions
GB201721842D0 (en)
GB2569808B (en)
Inventor
Closset Arnaud
Guignard Romain
El Kolli Yacine
Le Scolan Lionel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1721842.1A priority Critical patent/GB2569808B/en
Publication of GB201721842D0 publication Critical patent/GB201721842D0/en
Publication of GB2569808A publication Critical patent/GB2569808A/en
Application granted granted Critical
Publication of GB2569808B publication Critical patent/GB2569808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/4013Management of data rate on the bus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4247Bus transfer protocol, e.g. handshake; Synchronisation on a daisy chain bus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/40143Bus networks involving priority mechanisms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Small-Scale Networks (AREA)

Abstract

A communication device transmits data towards a processing device via a network by obtaining an amount of local media data composed of several streams, receiving first signaling information from a first upstream device indicating how much data can be transmitted from the first device to the communication device, and/or second signaling information from a second downstream device indicating how much data can be transmitted from the second device, and determining, based on the amount of local data and the signaled amounts of first/second data, a target amount of local data to transmit downstream. Upon the start of a scheduling period, a second value, being the difference between the amount of local data and a local allocated credit defining a local transmission allowance, and a third value, being the excess of the local data over the allocation, may be calculated and transmitted as the signaling information by each node. The signaling may be the sum of the nodes' second and third values, and may consider signaling over several scheduling periods forming an aggregation window. Dropping and shaping policies may take account of latency requirements and priorities, and of a global permitted credit for all devices in a daisy-chain.

Description

The following terms are registered trade marks and should be read as such wherever they occur in this document:
IETF
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
TRANSMISSION METHOD, COMMUNICATION DEVICE AND COMMUNICATION NETWORK
FIELD OF THE INVENTION
The present invention relates in general to transmission of media data in communication networks, and in particular to a method and communication device for transmitting media data.
BACKGROUND OF THE INVENTION
Nowadays, a lot of applications involve a large number of sensor devices. Most of them provide real-time services that are highly appreciated by users of these systems. However, the high number of sensors in sensor networks and the amount of data generated by each sensor raise several problems such as bandwidth consumption to transport media data to a centralized processing point and computational power requirements for processing this huge amount of media data.
To address these issues, distributed sensor processing systems have been introduced. They present numerous advantages such as reliability, scalability and processing performance. In such a system, several nodes each perform a different part of the processing so that the overall processing is distributed between multiple nodes instead of being centralized in one location.
This distributed architecture is especially suitable for a new generation of sensor networks in which the computational power of each sensor is sufficient to perform part of the processing.
Although distributed processing allows a decrease in the bandwidth requirement and in the load at the server device side by executing part of the processing inside each sensor, some problems remain.
Traffic burden or network congestion may cause significant data loss and uncontrolled delay, which are incompatible with high quality real-time applications.
More specifically, for real-time applications that process data over several processing periods, the transmission latency between sensors and the server device is an important criterion, which makes it necessary to control the amount of data inserted by the different sensors on the communication path during processing periods. This issue becomes even more critical when multiple types of data traffic, possibly having different latency constraints over time, are generated by the different sensors.
To address such latency constraints, transmission scheduling using traffic shaping (shaping being the organization of data transmission with respect to latency) is generally used, for example based on the 802.1Qxx protocols standardized by the IEEE. According to this approach, each sensor computes the amount of data that can be transmitted for each data type according to a predetermined configuration, and is able to drop pending data in transmission buffers to sustain a maximum transmission latency.
However, a drawback of this approach is that shaping or dropping decisions are based on local observation only. Consequently, a dropping decision can be taken by a sensor for a given data type even though the aggregated amount of data produced by all sensors for this traffic type could have been transmitted within the maximum latency constraint, for example if some other communication devices have produced a smaller amount of data for the same traffic type. Similarly, whereas a bandwidth allocation is set for a given traffic type for a given sensor, the transmission of more data would sometimes have been possible without violating latency constraints.
Another approach, used in such a daisy-chain configuration, is described in patent WO 01/08419 A1, which implements daisy-chained signaling between video compression modules that share information relative to the amount of bandwidth requested by each video compression module, so as to individually adjust encoder settings to the capacity of the daisy-chain. One drawback of this solution is that many rounds of information sharing are necessary to coordinate bandwidth adjustment between nodes. Another drawback is that the computation of encoder settings assumes some correlation between the individual compressed videos, so that some variation in transmission duration remains.
Consequently, there is a need to improve existing data transmission methods for multi-sensor systems in which a first processing step (also called pre-processing) is performed at the sensors and a further step is performed at the same computing device.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method for transmitting media data from a communication device towards a processing device via a network, the method comprising at the communication device:
obtaining an amount of local media data composed of several media data streams;
receiving at least one of first signaling information from a first communication device located upstream from the communication device with respect to the processing device, and second signaling information from a second communication device located downstream from the communication device with respect to the processing device, wherein:
the first signaling information signals an amount of first media data that can be transmitted to the communication device by the first communication device, and the second signaling information signals an amount of second media data that can be transmitted towards the processing device by the second communication device;
determining, based on the obtained amount of local media data and on at least one of the signaled amounts of first and second media data, a target amount of local media data to be transmitted towards the processing device; and transmitting the determined target amount of local media data towards the processing device.
Therefore, the method of the invention makes it possible to improve media data transmission towards a processing device.
As a consequence, the transmission latency is managed regardless of variations in the media data size. Hence, a real-time application can be provided with a low risk of uncontrolled data loss.
Optional features of the invention are further defined in the dependent appended claims.
According to a second aspect of the invention, there is provided a communication device for transmitting media data towards a processing device via a network, the communication device being configured for carrying out the steps of:
obtaining an amount of local media data composed of several media data streams;
receiving at least one of first signaling information from a first communication device located upstream from the communication device with respect to the processing device, and second signaling information from a second communication device located downstream from the communication device with respect to the processing device, wherein:
the first signaling information signals an amount of first media data that can be transmitted to the communication device by the first communication device, and the second signaling information signals an amount of second media data that can be transmitted towards the processing device by the second communication device;
determining, based on the obtained amount of local media data and on at least one of the signaled amounts of first and second media data, a target amount of local media data to be transmitted towards the processing device; and transmitting the determined target amount of local media data towards the processing device.
According to a third aspect of the invention, there is provided a network comprising a plurality of communication devices as aforementioned and a processing device configured to control the transmission of media data over the network.
For instance, the communication devices may be connected according to a daisy-chain topology.
The second and third aspects of the present invention have optional features and advantages similar to the first above-mentioned aspect.
Since the present invention may be implemented in software, the present invention may be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium, and in particular a suitable tangible carrier medium or suitable transient carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device or the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 illustrates an example of a multi-sensor network in which embodiments of the invention may be implemented;
Figure 2a illustrates an example of configuration parameters;
Figure 2b illustrates an example of context;
Figure 3 illustrates a signaling phase and a data transmission phase according to latency constraints;
Figures 4a and 4b illustrate steps of a signaling algorithm executed by communication devices of the multi-sensor network shown in Figure 1;
Figure 4c illustrates steps of a transmission algorithm executed by all the communication devices after the signaling algorithm, the steps of which are shown in Figures 4a and 4b;
Figure 5a illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to first embodiments;
Figure 5b illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to second embodiments;
Figure 5c illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to third embodiments;
Figure 5d illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to fourth embodiments;
Figure 6a illustrates steps of a shaping algorithm that may be executed by any communication device for associated traffic types, according to first embodiments;
Figure 6b illustrates steps of a shaping algorithm that may be executed by any communication device for associated traffic types according to second embodiments;
Figures 7a and 7b illustrate steps of a shaping algorithm that may be executed by any communication device for associated traffic types, according to third embodiments.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The present embodiments are directed to a method for transmitting data, in particular media data (e.g. audio or image data) processed in each communication device of a network, towards a target device of the network which is a processing device. The method takes advantage of the daisy-chain topology of the network architecture.
In the following description, the term “towards” for transmission of data in relation to a particular device is used to mean that the final destination of these data is the particular device. In some situations, the data may pass through some other devices before reaching said particular device.
Also, in the following description, a communication device is sometimes referred to as a sensor, meaning a sensor with computational capabilities (smart sensor). However, all the teachings relating to a sensor may apply to a communication device linked to a mere sensor without processing capabilities.
In the following description, the processing device is sometimes referred to as a computing server or a computational server.
According to embodiments, a bi-directional signaling phase is operated between communication devices upon each start of a new scheduling period. In practice, the synchronization of a common scheduling period between communication devices may be achieved using any standard time synchronization protocol, such as NTP or PTP 1588.
This signaling phase allows each communication device to be aware of the data located in its neighbors, more specifically the type of data (e.g. latency constrained or not) and their amount/size.
In other words, each communication device thus receives signaling information indicative of the aggregated amount of data produced by the other communication devices per traffic type during an elapsed scheduling period.
Each time a communication device receives signaling information from one port of its daisy-chain connectivity, this information is updated with the locally produced amount of data, and the resulting computed values are forwarded through the other port of the daisy-chain connectivity.
Hence, each communication device is aware of the profile (per traffic type and per scheduling period) of the amount of produced data pending for transmission for the entire group of communication devices located on its left and right sides.
Based on the result of this signaling phase and according to internal configuration parameters set for each traffic type (represented by an aggregation window indicative of the number of scheduling periods over which shaping and dropping decisions apply), the communication devices may compute the most appropriate dropping and shaping decisions for each traffic type, taking into account latency constraints and, in some embodiments, relative priorities between data types.
These configuration parameters allow the definition of a quality of service for a considered traffic type / stream index. These parameters are shared by all communication devices. They are usually configured by default in the factory but may alternatively be updated by an external controller (e.g. a configuration PC).
The amount of data selected to be transmitted by a given communication device is determined by combining the received signaling information with the configuration parameters; it is thereby modulated according to the effective amount of data produced by the group of communication devices during every scheduling period.
Such internal configuration parameters set for each traffic type may comprise for instance:
- the amount of data (in bytes) allocated for a transmission phase during a scheduling period (also called “credit”);
- a priority value relative to the other traffic types;
- a shaping policy;
- a dropping policy;
- an aggregation window indicative of the number of scheduling periods over which shaping and dropping decisions apply;
- a maximum aggregation threshold used as a reference according to the dropping decision policy, corresponding to the maximum number of bytes that can be transmitted over the number of scheduling periods set using the aggregation window (this reflects the maximum transmission latency that is acceptable).
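For illustration, these parameters can be grouped into a single per-traffic-type record. The following is a minimal Python sketch given for illustration only; the field names are assumptions mirroring the list above and do not appear in the patent:

    from dataclasses import dataclass

    @dataclass
    class TrafficTypeConfig:
        local_allocated_credit: int     # bytes allowed per scheduling period ("credit")
        priority: int                   # relative priority between traffic types
        shaping_policy: str             # e.g. "Basic", "Priority", "Low constraint latency"
        dropping_policy: str            # e.g. "Basic1", "Basic1 with Priority", "Basic2"
        aggregation_window: int         # number of scheduling periods covered by decisions
        max_aggregation_threshold: int  # maximum bytes over the window (latency bound)

    # Example: a latency constrained stream allowed 1500 bytes per period,
    # with dropping and shaping decisions taken over a single period.
    video_cfg = TrafficTypeConfig(1500, 1, "Basic", "Basic1", 1, 1500)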
According to embodiments, the transmission of data of traffic types having a low latency constraint is preferably operated sequentially by the communication devices, while latency constrained data are preferably transmitted concurrently by all communication devices, so as to give priority to latency constrained traffic types if necessary.
The following description is focused on a network having a daisy chain topology. This is because the daisy-chain topology has numerous advantages. For instance, it allows a system to be built with low cabling cost (cable length is reduced in comparison with star topology) and simple wired topology. Another advantage of the daisy chain topology is the scalability: sensors may be easily added or removed.
Figure 1 illustrates an example of a multi-sensor network enabling collaborative processing between several synchronized sensors.
In the given example, the multi-sensor network 100 comprises a plurality of nodes 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 and 160 connected as a daisy-chain.
In this figure, three different types of node are represented:
- A processing device 160, such as a computing server, which performs post-processing on data received from sensors,
- An edge sensor 101 which is connected to the computing server,
- Back-end sensors 102 to 112 which are connected to the computing server through the edge sensor.
In addition, a control station (not illustrated) can perform control operations over the network. In a variant, these control operations may be handled by the processing device.
Each sensor (edge and back-end) is connected to its neighbors through a full duplex link 120. The edge sensor is connected via a full duplex link 130 to the computing server 160.
In order to allow data exchanges and thus collaborative processing between sensors 101 to 112, a (bidirectional) processing path 140 is defined.
A sensor may be identified by a node identifier.
For data transmission performed by each node towards the computing server 160, a collecting path 150 is defined. This collecting path is limited to the multi-sensor network and there is no predefined direction for this path (one direction is defined during the initialization of the system depending on bandwidth occupancy).
A communication device according to embodiments of the invention comprises a Packetizer Tx module to deliver (transmit) produced data of different data types. The Packetizer Tx module is configured to format the packets before transmission and to send a Rd/Drop message to buffers for reading or dropping data, according to the shaping or dropping algorithms described with reference to Figures 4a to 7b.
The communication device according to embodiments of the invention also comprises a Packetizer Rx module to retrieve (receive) data of different data types that will have to be pre-processed before being delivered to the computing server 160 shown in Figure 1 or to one or more other communication devices.
The Packetizers Tx and Rx communicate with a Routing module which is in charge of transmitting, receiving and forwarding traffic over the communication network using a network adapter.
The communication device according to embodiments of the invention further comprises a scheduler implementing shaping and dropping algorithms used to co-ordinate data transmission between communication devices and towards the computing server. The Scheduler is connected to the Packetizer Tx module. It is configured to send Send_Request and Drop_Request to the Packetizer Tx module.
Figure 2a illustrates an example of configuration parameters associated with traffic types within communication devices, as well as the relative position of the communication device within the topology of the transmission path up to the application server.
Generally speaking, a traffic type is represented by a stream index and is characterized by different constraints (e.g. latency, throughput, packet error rate, etc.), so that different QOS policies are applied depending on these constraints. In the table of Figure 2a, for each stream index, a dropping policy, a shaping policy and a priority policy are defined, in order to obtain the QOS associated with this traffic type (or stream index).
Thus, for each stream index 200 (from 1 to M), several parameters are set:
- Priority order 201 between stream indexes;
- Local allocated credit 202 which is the maximum number of bytes that can be transmitted by a given communication device during a reference scheduling period;
- Global allocated credit 207 which is the maximum number of bytes that can be transmitted by the whole system (i.e. by all the communication devices) during a reference scheduling period, and corresponding to the aggregation of local allocated credits of each one of the communication devices;
- Dropping policy selected for this traffic type (field 203), taking different possible values, for instance "Basic1", "Basic1 with Priority", "Basic2" or "Flexible with Priority";
- Dropping ratio (i.e. percentage of data to be dropped) selected for this traffic type (field 204), used according to the dropping policy;
- Aggregation window field 206 (number of scheduling periods) considered according to the dropping and shaping policies. For illustration purposes, if the aggregation window is set to 1 and the amount of data produced during a scheduling period exceeds the threshold of data to be transmitted per period, some of these data must be dropped. If the aggregation window is set to 5, the amount of data produced during 5 periods must not exceed the threshold of data to transmit;
- Shaping policy selected for this traffic type (field 205), taking different possible values, for instance “Basic”, “Priority”, “Low constraint latency”;
- Shaping ratio field 209 (percentage) considered according to the shaping policy; this ratio is used when shaping is applied with priority between streams. In this case, each stream is given a percentage of the transmission bandwidth computed from the aggregation of the local allocated credit values 202;
- Aggregation threshold field 208 (number of bytes) considered according to the dropping policy, corresponding to the maximum number of bytes that can be transmitted over the number of scheduling periods set using the aggregation window (this reflects the maximum transmission latency that is acceptable); and
- Field 210 is set according to the node position. When the multi-sensor network has a daisy-chain topology, the configuration is either Left_edge (last communication device on the left side of the daisy-chain), Right_edge (first communication device on the right side of the daisy-chain, which communicates with the computing server), or Boundary (any communication device between Left_edge and Right_edge).
The use of these different parameters is described with reference to Figures 4a to 7b.
Figure 2b illustrates a context that may be used in a scheduler implementing shaping and dropping algorithms used to co-ordinate data transmission between communication devices and towards a computing server. It is used to operate traffic shaping or dropping during data transmission phases towards the computing server 160 shown in Figure 1.
For each scheduling period index 220, three contexts are provided:
1) The amount of data produced by all communication devices on the left side of given communication device, compared to an allocated credit value (column 221). For each stream index 1 to M, two pieces of information are stored:
- Aggregation Info 1, denoted "Agg Info 1" 224.x, which is the difference, aggregated over the communication devices on the left path for the stream index x (x being between 1 and the number M of streams), between the amount of data produced by the application layer and the allocated credit (number of bytes allowed during data transmission), whatever the sign of this difference;
- Aggregation Info 2, denoted "Agg Info 2" 225.x, which is the difference, aggregated over the communication devices on the left path for the stream index x (x being between 1 and the number M of streams), between the amount of data produced by the application layer and the allocated credit, counted only when the amount of data produced by the application layer is greater than the allocated credit.
2) The amount 226.x of data locally produced by a given communication device for the stream index x (x being between 1 and the number M of streams).
3) The amount of data produced by all the communication devices on the right side of a given communication device, compared to an allocated credit value (column 223). For each stream index 1 to M, two pieces of information are stored:
- Aggregation Info 1, denoted "Agg Info 1" 227.x, which is the difference, aggregated over the communication devices on the right path for the stream index x (x being between 1 and the number M of streams), between the amount of data produced by the application layer and the allocated credit (number of bytes allowed during data transmission), whatever the sign of this difference;
- Aggregation Info 2, denoted "Agg Info 2" 228.x, which is the difference, aggregated over the communication devices on the right path for the stream index x (x being between 1 and the number M of streams), between the amount of data produced by the application layer and the allocated credit, counted only when the amount of data produced by the application layer is greater than the allocated credit.
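As an illustration only, the context of Figure 2b can be represented by a simple data structure holding, per scheduling period and per stream index, the left aggregates 224.x/225.x, the local amount 226.x and the right aggregates 227.x/228.x. The sketch below uses Python dictionaries; the patent does not prescribe any particular representation:

    def empty_period_context(nb_streams):
        """One context entry for a given scheduling period index 220."""
        streams = range(1, nb_streams + 1)
        return {
            "left":  {x: {"agg_info1": 0, "agg_info2": 0} for x in streams},  # 224.x / 225.x
            "local": {x: 0 for x in streams},                                 # 226.x
            "right": {x: {"agg_info1": 0, "agg_info2": 0} for x in streams},  # 227.x / 228.x
        }

    # Context for scheduling periods 0..4 and three stream indexes.
    context = {period: empty_period_context(3) for period in range(5)}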
The use of these different parameters is described with reference to Figures 4a to 7b.
Figure 3 illustrates a coordinated signaling phase followed by a data transmission phase over a communication path between the communication devices 1 to N shown in Figure 1. During this phase, both latency constrained streams and low constraint latency streams are transmitted.
At the beginning of each scheduling period 305, a signaling step 301 is operated by each communication device to provide information in both directions.
During the signaling, Info 1 and Info 2 are aggregated (303) over the processing path 140 and, at the same time, Info 1 and Info 2 are aggregated (304) over the collecting path 150, and the context (see Figure 2b) is updated. Info1 and Info2 are used by each communication device to determine whether some pending data produced by the application must be dropped according to the dropping policy configuration, as well as the quantity of data that will be transmitted during the next transmission period (i.e. the next scheduling period).
After the signaling phase 301, a transmission step 302 is performed to transmit the latency constrained data towards the computing server 160 over the collecting path 150. There is no particular transmission sequence order to observe, since the accumulated payload size does not exceed the network bandwidth capacity thanks to the coordinated and consistent computation of the amount of dropped and scheduled data per stream during the scheduling period 305. In this condition, it does not matter how the transmission sequence is shared between communication devices during transmission periods.
Once each communication device has transmitted its latency constrained application data for the current scheduling period, it performs a data transmission step 306 during which low latency constrained data are transmitted. During this step, data transmission is operated sequentially from left to right (starting from the communication device farthest from the computing server). The principle for each node is to wait until all nodes located on its left side have transmitted all the data associated with low latency constraint data types before starting to transmit its own pending data associated with low latency data types. Detection of the end of transmission from the left side is operated using the Agg Info 1 224.x content for all stream indexes associated with low latency data types. These contents are decremented by the payload size of the forwarded network data packets belonging to low latency data types. When no more data are expected from the left side for all stream indexes belonging to low latency constrained data types, the communication device starts its transmission for low latency constraint data types. Details about this algorithm are given with reference to Figures 7a and 7b.
Examples of algorithms for the signaling step 301 are described with reference to Figures 4a to 4c.
Examples of algorithms for the transmission step 302 are described with reference to Figures 4c, 5a to 5d, 6a and 6b.
Examples of algorithms for the transmission step 306 are described with reference to Figures 7a and 7b.
Figure 4a is a flowchart comprising a first part of the steps of a signaling algorithm executed by the communication devices of the multi-sensor network shown in Figure 1. This first part of the steps is directed to the transmission of signaling messages by a communication device to inform the adjacent devices of the amount of its own produced data to transmit.
The signaling phase, which is thus composed of the steps of Figures 4a and 4b, allows each communication device to compute dropping and shaping decisions consistent between communication devices, so that bandwidth usage can be optimized.
During an initialization step 400, the internal variables are set to default values. For instance, a variable 'set_left' is set to 'false'.
At step 401, the scheduling cycle index is reset.
The communication device then waits (step 402) until a scheduling event (e.g. start of a new scheduling period) is detected. Upon detection of this event, an amount of data (Nb_Bytes) generated by the application is updated (step 403) for each stream index within the local stream part 222 of the context shown in Figure 2b, and the cycle index is incremented (step 404) and variables “Left_done” and “Right_done” are set to “false”.
Then, at step 405, for each stream index, the difference between remaining bytes (“Nb_Bytes”) and the configuration parameter called “Local Allocated credit” (202 in Figure 2a) is computed as Info. It is recalled that the local allocated credit is the number of bytes allowed during data transmission.
Then, it is tested at step 406 whether no signaling message is expected from the left or the right side of the daisy-chain. If the result is positive, it means that the considered communication device is at the left edge or the right edge of the daisy-chain.
If the communication device is located at one of these edges, the following values Info1 and Info2 are computed and sent to the right/left adjacent device (depending on the edge) at step 408:
Info1 represents, for each stream index, the amount of produced data (not yet transmitted) relative to the allocated credit. This difference is positive when the amount of data to be transmitted is greater than the allocated credit, and is negative when the amount of data to be transmitted is lower than the allocated credit;
Info2 represents, for each stream index, the amount of produced data exceeding the allocated credit. Therefore, this difference can only be positive or null.
In practice, the signaling Info1 & Info2 are forwarded in Signaling In and Out messages, and the "right/left aggregation" part 221/223 of the context shown in Figure 2b is updated at step 409. To remember that signaling is transmitted to the right/left side, the local variable "Right_done" (respectively "Left_done") is set to "true" at step 410. Then, at step 407, the steps shown in Figure 4b are performed and finally the waiting step 402 is reached again.
When the communication device is not located at one edge of the daisy-chain (test 406 negative, meaning that signaling messages are expected from both sides), the steps shown in Figure 4b are directly performed (step 407). After that, the waiting step 402 is reached again.
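A minimal sketch of the edge-device computation of steps 405 and 408, assuming plain integer bookkeeping per stream index (the function and variable names are illustrative, not taken from the patent):

    def edge_signaling(nb_bytes, local_allocated_credit):
        """Values sent by a left-edge or right-edge device for one stream index."""
        info = nb_bytes - local_allocated_credit   # step 405: remaining bytes minus credit 202
        info1 = info                               # Info1: may be positive or negative
        info2 = max(info, 0)                       # Info2: only the excess, never negative
        return info1, info2

    # 2000 pending bytes against a 1500-byte credit -> Info1 = 500, Info2 = 500;
    # 1000 pending bytes -> Info1 = -500, Info2 = 0.
    print(edge_signaling(2000, 1500), edge_signaling(1000, 1500))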
Figure 4b is a flowchart comprising a second part of the steps of a signaling algorithm executed by the communication devices of the multi-sensor network shown in Figure 1. This second part of the steps is directed to the reception of signaling messages from the adjacent device(s) by the communication device to inform it of the amount of their own produced data to transmit.
Depending on the location of the communication device within the daisy-chain, it can receive signaling information (Info1 and Info2) from the left side, the right side, or both sides.
During an initialization step 420, the internal variables are set to values by default.
At step 421, it is tested whether the communication device has received signaling information Info1 and Info2 from the left side (respectively from the right side). In this case, the left part 221 (respectively the right part 223) of the context shown in Figure 2b is updated (step 422) with the received information Info1 and Info2.
Next, at step 423, it is checked whether a signaling is expected from the other side, i.e. from the right side (respectively the left side). This test is typically negative for the right (respectively left) edge communication device.
When the test 423 is negative, the variable “Right_done” (respectively “Left_done”) is set to “true” (step 427).
When the test 423 is positive, the communication device updates the received signaling information before forwarding it to its right neighbor (respectively left neighbor). For each stream index, the difference between remaining bytes and the allocated credit 202 is computed as done in step 405 of Figure 4a.
Next, at step 425, for each stream index, a "new Info1" value is computed by aggregating the received "Info1" with the computed Info value, and a "new Info2" value is computed by aggregating the received "Info2" with the computed Info value if the Info value is positive; otherwise, the "new Info2" value takes the received "Info2" value when the computed Info value is negative.
At step 426, the "Info1" & "Info2" signaling per stream are forwarded on the right side (respectively left side) with the "new Info1" and the "new Info2" values. To remember that signaling is forwarded on the right path (respectively left path), a local variable "Right_done" (respectively "Left_done") is set to "true" at step 427. For the right edge (respectively for the left edge) communication device, the process goes from test 423 to step 427 directly, as if signaling transmission on the right side (respectively left side) was operated.
After step 427, the waiting step 421 is reached again.
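The update of steps 423 to 426 can be sketched as follows, again with illustrative names and assuming per-stream integer values:

    def forward_signaling(received_info1, received_info2, nb_bytes, local_allocated_credit):
        """Aggregate the received signaling with the local production before forwarding."""
        info = nb_bytes - local_allocated_credit           # difference computed as in step 405
        new_info1 = received_info1 + info                  # step 425: aggregated whatever the sign
        new_info2 = received_info2 + info if info > 0 else received_info2
        return new_info1, new_info2                        # step 426: forwarded to the other side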
Figure 4c illustrates steps of a transmission algorithm executed by all the communication devices after the signaling algorithm, the steps of which are shown in Figures 4a and 4b.
During an initialization step 440, the internal variables are set to values by default.
The communication device then waits (step 441) until a scheduling event (e.g. start of a new scheduling period) is detected.
Next, at step 442, it is tested whether the status of the two internal variables “Left_done” and “Right_done” is set to “true”, thereby indicating that the left and right signaling steps have been completed for the current scheduling period, as described with reference to Figure 4a and 4b.
Each time a new scheduling period starts, at step 445, two additional internal values "Cumul1" and "Cumul2" are computed for all the stream indexes having the same dropping policy:
"Cumul1" represents the sum of "Info1" information over the daisy-chain, i.e. computed as the sum of "Info1" for the left and right sides around the considered communication device plus the local "Info" value.
"Cumul2" represents the sum of "Info2" information over the daisy-chain, i.e. computed as the sum of "Info2" for the left and right sides around the considered communication device plus the local "Info" value if "Info" is positive, else the sum of "Info2" for the left and right sides around the considered communication device.
Next, at step 446, a third internal variable "Cumul3" is computed for all stream indexes having a "flexible" dropping policy, as the sum over multiple scheduling periods of "Info1" information over the daisy-chain (sum of "Info1" for the left side, "Info1" for the right side, and local "Info") over the pre-determined aggregation window 206 shown in Figure 2a.
Next, at step 451, a fourth internal variable "Cumul4" is computed as the sum over multiple scheduling periods of "Info1" information from streams having a shaping policy other than a low constraint latency shaping policy, over the daisy-chain (sum of "Info1" for the left side, "Info1" for the right side, and local "Info") over the pre-determined aggregation window 206 shown in Figure 2a.
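For illustration, the computations of steps 445, 446 and 451 can be sketched as follows, assuming the per-stream aggregates and local Info values are available as integers (helper names are illustrative):

    def cumul1(left_info1, right_info1, local_info):
        # Step 445: total pending data relative to the aggregated credit, whatever the sign.
        return left_info1 + right_info1 + local_info

    def cumul2(left_info2, right_info2, local_info):
        # Step 445: only the excesses over the allocated credits are aggregated.
        return left_info2 + right_info2 + max(local_info, 0)

    def cumul_window(history):
        # Steps 446 and 451: the same aggregation, summed over the scheduling periods of
        # the aggregation window 206 (Cumul3 for "flexible" streams, Cumul4 for streams
        # whose shaping policy is not "Low constraint latency").
        return sum(l + r + loc for (l, r, loc) in history)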
Based on the "Cumul1", "Cumul2", "Cumul3" and "Cumul4" computations, a dropping algorithm is called at step 447, followed by a shaping algorithm called at step 448, before starting transmission of new data at step 449. Once transmission is terminated (test 450 is positive), step 441 is reached again.
Several exemplary dropping algorithms are described with reference to Figures 5a to 5d.
Several exemplary shaping algorithms for latency constrained data types are described with reference to Figures 6a and 6b.
An exemplary shaping algorithm for low latency constrained data types is described with reference to Figures 7a and 7b.
Figure 5a illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to first embodiments. The corresponding dropping policy settings are denoted "Basic1" in field 203 of the configuration parameters shown in Figure 2a.
Generally speaking, a dropping decision is based, for each stream, on the difference between the amount of produced data and the allocated bandwidth, for the group of streams sharing this policy.
According to the "Basic1" dropping policy, when too much data is produced by the application compared to the available bandwidth, each communication device drops the same ratio of its produced data, independently of the difference between the amount of locally produced data and the locally allocated credit (even if the local application produced less data than the allocated credit, dropping will be executed).
During an initialization step 500, the internal variables are set to values by default.
When the dropping processing is called (test 501 is positive) for stream indexes having "Basic1" dropping selected (test 502 is positive), test 504 checks whether dropping is necessary according to the Cumul1 value, which represents the difference between the amount of aggregated data pending for the stream index and the aggregated credit for this stream index: if this difference is positive, the amount of dropped data is computed at step 505 as a common ratio for all communication devices, each contributing equally to the same percentage of data to be dropped, whether or not a given communication device locally has more or less pending data waiting for transmission than the allocated credit value in field 202 in Figure 2a.
Then, for the concerned stream index, the scheduler computes the corresponding number of bytes to be dropped at step 506, and generates at step 507 a drop request command using a Drop_Request, with the corresponding field format. The number of pending bytes is then decremented by the number of dropped bytes for the stream index.
An advantage of this dropping policy is that it is very simple. If the profile of locally produced data is roughly the same between communication devices for the different stream indexes, this method should be preferred; otherwise, it may be better to consider priority between streams before operating dropping.
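The following sketch illustrates one plausible reading of the "Basic1" decision of steps 504 to 507. The formula for the common ratio is an assumption (aggregated excess Cumul1 divided by the aggregated pending amount, i.e. the Global allocated credit 207 plus Cumul1); the patent only states that all devices drop the same percentage:

    def basic1_drop(cumul1, global_allocated_credit, local_pending_bytes):
        """Bytes to drop locally under the "Basic1" policy (illustrative formula)."""
        if cumul1 <= 0:
            return 0                                          # test 504: nothing to drop
        ratio = cumul1 / (global_allocated_credit + cumul1)   # common ratio for all devices
        return int(ratio * local_pending_bytes)               # steps 505-507: Drop_Request size

    # Aggregated excess of 3000 bytes over a 12000-byte global credit:
    # every device drops 20% of its own pending data, here 400 of 2000 bytes.
    print(basic1_drop(3000, 12000, 2000))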
Figure 5b illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to second embodiments. The corresponding dropping policy settings are denoted "Basic1 with Priority" in field 203 of the configuration parameters shown in Figure 2a.
In this case, the dropping decision is also based, for each stream, on the difference between the amount of produced data and the bandwidth allocated, but a priority exists between streams, and the dropping is operated first on the lowest priority streams and then, if necessary, on higher priority streams.
During an initialization step 510, the internal variables are set to values by default.
When the dropping processing is called (test 511 is positive), and for stream indexes having "Basic1 with Priority" dropping selected (test 512 is positive), the test 514 checks whether dropping is necessary according to the Cumul1 value, which represents the difference between the amount of aggregated data pending for the stream index and the aggregated credit for this stream index. If this difference is positive, the dropping mechanism is used iteratively from the lowest to the highest priority between stream indexes.
The amount of dropped data is computed by the Scheduler at step 515 as the Dropping ratio field value 204 times the number of pending bytes for the stream index, and the Cumul1 variable content is decremented by this amount of dropped data. Then the Scheduler generates, at step 516, a drop request command using Drop_Request, with the corresponding field formats. Then, the iteration for the stream index(es) having the next higher priority re-evaluates the Cumul1 value at step 514.
An advantage of this alternative is to consider priority between streams before operating dropping.
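A sketch of the iteration of steps 514 to 516, assuming streams are visited from lowest to highest priority and that each stream drops its configured Dropping ratio 204 of its pending bytes until the aggregated excess is absorbed (names are illustrative):

    def basic1_priority_drop(streams, cumul1):
        """streams: list of per-stream dicts, sorted from lowest to highest priority."""
        drops = {}
        for s in streams:
            if cumul1 <= 0:                                          # test 514: excess absorbed
                break
            dropped = int(s["dropping_ratio"] * s["pending_bytes"])  # step 515
            cumul1 -= dropped
            drops[s["index"]] = dropped                              # step 516: Drop_Request
        return drops

    print(basic1_priority_drop(
        [{"index": 2, "dropping_ratio": 0.5, "pending_bytes": 1000},
         {"index": 1, "dropping_ratio": 0.2, "pending_bytes": 4000}], cumul1=600))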
Figure 5c illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to third embodiments. The corresponding dropping policy settings are denoted “Basic2” in field 203 of the configuration parameters shown in Figure 2a.
In this case, the dropping decision is based, for each stream, on the difference between the amount of produced data and the bandwidth allocated, but taking into account the difference between the amount of locally produced data and the locally allocated credit (if the local application produced less data than the allocated credit, dropping will not be executed).
During an initialization step 520, the internal variables are set to values by default.
When the dropping processing is called (test 521 is positive) for the stream indexes having "Basic2" dropping selected (test 522 is positive), the test 524 checks whether dropping is necessary according to the Cumul2 value, which represents the aggregated positive difference between the amount of data pending for the stream index and the credit allocation for this stream index: if this difference is positive, the amount of data to be dropped is computed in step 525 as an individual ratio, each device contributing unequally to the percentage of data to be dropped: only communication devices having an amount of pending data greater than the credit allocation must operate traffic dropping.
Then, for the concerned stream index, the scheduler computes the corresponding number of bytes to be dropped in steps 525 and 526, and generates in step 527 a drop request command using a Drop_Request, with the corresponding field format. The number of pending bytes is then decremented by the number of dropped bytes for the stream index.
An advantage of this dropping policy is that it is simple while pushing more dropping aggressiveness to communication devices producing a higher amount of data than allowed by the credit allocation.
If the profile of the data produced by the application is roughly the same between communication devices for the different stream indexes, this method should be preferred. Otherwise, it may be better to consider priority between streams before operating dropping.
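A sketch of the "Basic2" decision of steps 524 to 527, under the assumption that each device whose pending amount exceeds its local credit simply drops its own excess (the patent states only that over-producers contribute unequally while under-producers drop nothing):

    def basic2_drop(cumul2, local_pending_bytes, local_allocated_credit):
        """Bytes to drop locally under the "Basic2" policy (illustrative)."""
        if cumul2 <= 0:
            return 0                                          # test 524: no device in excess
        local_excess = local_pending_bytes - local_allocated_credit
        return max(local_excess, 0)                           # under-producers drop nothing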
Figure 5d illustrates steps of a dropping algorithm that may be executed by any communication device for associated traffic types, according to fourth embodiments. The corresponding dropping policy settings are denoted “Flexible with Priority” in field 203 of the configuration parameters shown in Figure 2a.
In this case, the dropping decision is based, for each stream, on the difference between the amount of produced data and the bandwidth allocated, but over multiple successive scheduling cycles.
This dropping policy allows the integration of produced applicative data with better flexibility, thus being capable of supporting peak-rate application data for variable bit rates, up to some pre-determined threshold.
During an initialization step 530, the internal variables are set to values by default.
When the dropping processing is called (test 531 is positive), and for the stream indexes having "Flexible with Priority" dropping selected (test 532 is positive), the test 534 checks whether dropping is necessary according to the Cumul3 value, which represents the difference between the amount of aggregated data pending for the stream index and the aggregated credit for this stream index, but over a predetermined sliding window spanning a number of scheduling periods defined in accordance with field 206 of the configuration parameters shown in Figure 2a.
If Cumul3 value is greater than a pre-determined threshold in accordance with field 208 of the configuration parameters in Figure 2a, the dropping mechanism is used iteratively from the lowest to the highest priority between the stream indexes.
The amount of dropped data is computed by the scheduler in step 535 as the Dropping ratio field value 204 times the number of pending bytes for the stream index, and the Cumul3 variable content is decremented by this amount of dropped data. Then the scheduler generates in step 536 a drop request command using Drop_Request, with the corresponding field formats. Then, the iteration for the stream index(es) having the next higher priority re-evaluates the Cumul3 value at step 534.
An advantage of this dropping policy is that it allows a better flexibility regarding bit rate variation of the application data relative to the credit allocation representing available transmission bandwidth over the daisy-chain.
Allowing transient latency variation between processing periods and scheduling periods offers capability to absorb bit rate variations, up to a maximum latency excursion in accordance with a pre-determined threshold.
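A sketch of the windowed test of steps 534 to 536, assuming Cumul3 is compared with the Aggregation threshold 208 and the priority iteration mirrors the "Basic1 with Priority" case (names are illustrative):

    def flexible_priority_drop(streams, cumul3, aggregation_threshold):
        """streams sorted from lowest to highest priority; returns bytes dropped per stream."""
        drops = {}
        for s in streams:
            if cumul3 <= aggregation_threshold:                      # test 534: latency bound respected
                break
            dropped = int(s["dropping_ratio"] * s["pending_bytes"])  # step 535
            cumul3 -= dropped
            drops[s["index"]] = dropped                              # step 536: Drop_Request
        return drops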
Figure 6a illustrates steps of a shaping algorithm that may be executed by any communication device for associated traffic types, according to first embodiments. The corresponding shaping policy settings are denoted “Basic” in field 205 of the configuration parameters shown in Figure 2a.
According to the “Basic” shaping scheme, the transmission of data is arranged to fit a pre-determined allocated bandwidth (credit) per stream, using information 202 in Figure 2a. Thanks to this, a constant bit rate transmission may be achieved for each stream.
During an initialization step 600, the internal variables are set to values by default.
When the shaping processing is called (test 601 is positive) for stream indexes having “Basic” shaping selected (test 602 is positive), the scheduler computes a corresponding number of bytes to be transmitted at step 603 in accordance with the value of a credit allocation for the stream index, and generates at step 604 a Send request command using Send_Request, with corresponding field format.
The number of pending bytes is then decremented by the number of bytes that will be transmitted for the stream index.
An advantage of this shaping policy is that it provides a constant bit rate shaping scheme for data transmission during scheduling periods. It may be applied to constrained latency data types. If the profile of the data produced by the application is roughly the same between communication devices for the different stream indexes, this method should be preferred. Otherwise, it may be better to consider priority between streams before operating shaping.
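A minimal sketch of the "Basic" shaping decision of steps 603 and 604, assuming the bytes sent per scheduling period are simply bounded by the Local allocated credit 202:

    def basic_shaping(pending_bytes, local_allocated_credit):
        """Return (bytes announced in Send_Request, pending bytes remaining afterwards)."""
        to_send = min(pending_bytes, local_allocated_credit)  # step 603: constant bit rate bound
        return to_send, pending_bytes - to_send               # step 604 and decrement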
Figure 6b illustrates steps of a shaping algorithm that may be executed by any communication device for associated traffic types, according to second embodiments. The corresponding shaping policy settings are denoted “Priority” in field 205 of the configuration parameters shown in Figure 2a.
According to the "Priority" shaping scheme, the transmission of data is arranged to fit a global allocated bandwidth (credit) shared by streams having the same priority. The highest priority streams can thus access transmission first.
During an initialization step 610, the internal variables are set to values by default.
When the shaping processing is called (test 611 is positive) for the stream indexes having “Priority” based shaping selected (test 612 is positive), the shaping mechanism is used iteratively from the lowest priority stream index to the highest priority stream index. Next, at step 613, the aggregated number (Sum_Credit) of local allocated credits is computed for all the stream indexes associated with a priority based shaping policy.
The Scheduler then computes at step 614 for each priority stream index, the corresponding number of bytes to be transmitted as the multiplication of the Shaping Ratio setting part 209 shown in the configuration parameters of Figure 2a by the Sum_Credit value for the stream index, and generates at step 615 a Send request command using Send_Request, with the corresponding field format.
The number of pending bytes is then decremented by the number of bytes that will be transmitted for the stream index. If the current priority stream index is not the last one (test 616), the algorithm goes back to step 614 to process the next higher priority stream index.
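A sketch of the "Priority" shaping of steps 613 to 615, assuming Sum_Credit aggregates the local allocated credits of all priority-shaped streams and each stream is granted its Shaping ratio 209 of that shared budget (names are illustrative):

    def priority_shaping(streams):
        """streams: dicts for the priority-shaped stream indexes, ordered lowest priority first."""
        sum_credit = sum(s["local_allocated_credit"] for s in streams)   # step 613
        grants = {}
        for s in streams:
            to_send = int(s["shaping_ratio"] * sum_credit)               # step 614
            grants[s["index"]] = min(to_send, s["pending_bytes"])        # step 615: Send_Request
        return grants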
Figure 7a illustrates steps of a shaping algorithm that may be executed by any communication device for associated traffic types, according to third embodiments. The corresponding shaping policy settings are denoted “Low constraint latency” in field 205 of the configuration parameters shown in Figure 2a.
According to the “Low constraint latency” shaping scheme, the transmission of low latency constrained streams data is performed sequentially starting from the left edge communication device of the daisy-chain to the computing server 160 in Figure 1.
More specifically, according to this shaping policy, after all the communication devices have transmitted their latency constrained stream data, the left edge communication device starts transmitting an amount of low latency constrained stream data computed from the remaining bandwidth available for the current scheduling period. Once a communication device terminates the transmission of its pending low constraint latency stream data, the next right neighbor communication device takes the lead for the subsequent transmission of pending low constraint latency stream data.
It must be noticed that multiple scheduling periods may be necessary for any communication device to complete the transmission of remaining low constraint latency streams data. The end of transmission of low constraint latency data types between neighbor communication devices is detected within the scheduler of forwarding communication devices thanks to the Pk_forwarded_Size message driven by the routing module, indicating the Stream index and the Pk_Size fields.
During an initialization step 700, the internal variables are set to values by default. The shaping mechanism is used iteratively from the lowest priority stream index to the highest priority stream index.
When the shaping processing is called (test 701 is positive), it is checked whether the shaping policy is set to "Low constraint latency" for at least one stream index (test 702 is positive).
Next, it is checked (test 703) whether the communication device is at the left edge of the daisy-chain, meaning that no signaling is expected from the left side.
If so (test 703 is positive), an internal variable “Tx_enable” is set to “true” (step 704) meaning that internal transmission is allowed. The other communication devices set their variable “Tx_enable” as “false” (step 705).
When the internal variable “Tx_enable” is set to “false” for the considered communication device (test 706 is negative), the steps shown in Figure 7b are performed.
Otherwise, i.e. when the internal variable “Tx_enable” is set to “true” (test 706 is positive), a subsequent test 708 is performed to check whether there is bandwidth left after the transmission of the data streams having a shaping policy other than low constraint latency, and if so, it is checked during test 708 whether the number of pending bytes to be transmitted is greater than the available bandwidth. If so, the number of bytes to be transmitted is set to the maximum remaining bandwidth (step 710).
Otherwise, the number of bytes to be transmitted is set to the number of pending bytes (step 711).
Next, the scheduler requests the transmission (step 712) of the computed bytes number using the command Send_Request, with the corresponding fields “stream index” and “Pk_Size”. If the current priority stream index is not the last one (test 713), the algorithm goes back to step 706 to process the next higher priority stream index.
When test 706 is negative, meaning that the internal variable “Tx_enable” is set to “false” for the considered communication device, an iterative process comprising the following steps shown in Figure 7b is performed:
- detection of a new notification “Pk_forwarded_Size” (step 721) from the Routing module that indicates the stream index and the size of forwarded packets received from the left neighbor communication device;
- upon such detection (test 721 is positive), decrease of the value of Info1 of the left context for the stream index by the payload size of the forwarded packet (step 722); during this step, the remaining amount of data still to be forwarded for the stream index i is computed from the “Pk_Size” information and the initial value of “Agg Info1” 224.i shown in Figure 2b;
- check whether the remaining amount of data to be forwarded (“Info1”) has reached zero (test 723); if not, the process goes back to step 721. If so, meaning that the whole expected forwarded amount of data for stream index i has been detected, the internal variable “Tx_enable” is set to “true” (step 724) so that the transmission of the pending data for stream index i is allowed.
Thus, in order to enable the transmission of pending data for the stream index i, the communication device must ensure that the amount of forwarded bytes received from the left side for streams having the low constraint latency shaping policy covers the expected aggregated amount of data indicated by the variable “Agg Info1” 224.i shown in Figure 2b.
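A non-limiting Python sketch of this waiting loop of Figure 7b is given below; the identifiers expected_aggregated_bytes and notifications are illustrative only, the former standing for the “Agg Info1” value of the considered stream index and the latter for the sequence of Pk_forwarded_Size notifications delivered by the routing module.

    def wait_for_left_neighbor(stream_index, expected_aggregated_bytes, notifications):
        # Returns True once the whole expected forwarded amount for stream_index has
        # been observed, i.e. once Tx_enable may be set to true (step 724).
        remaining = expected_aggregated_bytes
        for notified_index, pk_size in notifications:      # step 721: new Pk_forwarded_Size notification
            if notified_index != stream_index:
                continue
            remaining -= pk_size                           # step 722: decrease the remaining amount to be forwarded
            if remaining <= 0:                             # test 723: the expected amount has been reached
                return True
        return False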
While the invention has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in putting the claimed invention into practice, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.

Claims (29)

1. A method for transmitting media data from a communication device towards a processing device via a network, the method comprising at the communication device:
obtaining an amount of local media data composed of several media data streams;
receiving at least one of first signaling information from a first communication device located upstream from the communication device with respect to the processing device, and second signaling information from a second communication device located downstream from the communication device with respect to the processing device, wherein:
the first signaling information signals an amount of first media data that can be transmitted to the communication device by the first communication device, and the second signaling information signals an amount of second media data that can be transmitted towards the processing device by the second communication device;
determining, based on the obtained amount of local media data and on at least one of the signaled amounts of first and second media data, a target amount of local media data to be transmitted towards the processing device; and transmitting the determined target amount of local media data towards the processing device.
2. The method of claim 1, wherein the local media data are captured by a sensor device associated with the communication device.
3. The method of claim 1 or 2, wherein the media data comprise at least one of: image data generated by an image sensor device and audio data generated by an audio sensor device.
4. The method of any one of claims 1 to 3, further comprising, upon start of a scheduling period:
obtaining a first value equal to the amount of local media data obtained by the communication device;
computing a second value as a difference between the first value and a local allocated credit defining the amount of data that can be transmitted by the communication device;
computing a third value as the excess of the first value over the local allocated credit.
5. The method of claim 4, further comprising:
updating the second value by adding to it the received first or second signaling information;
updating the third value by adding to it the received first or second signaling information;
transmitting the second and third updated values to a device as a new first and/or second signaling information.
6. The method of claim 4, comprising a step of transmission of the second and third values to a device as the first and/or second signaling information.
7. The method of claim 5, further comprising:
computing a fourth value as the sum of the second value computed by the communication device according to claim 4 and the second updated values computed by the first and the second devices according to claim 5 and received by the communication device; and computing a fifth value as the sum of the third value computed by the communication device according to claim 4 and the third updated values computed by the first and the second devices according to claim 5 and received by the communication device.
8. The method of claim 7, further comprising:
computing a sixth value as the sum of the fourth values computed for several scheduling periods forming an aggregation window.
9. The method of claim 8, further comprising:
computing a seventh value as the sum of the fourth values computed only for media data streams not having a low constraint latency shaping policy, for several scheduling periods forming an aggregation window.
10. The method of any one of claims 7 to 9, wherein determining the target amount of local media data to be transmitted comprises, when the fourth value is positive or null:
computing a dropping ratio as the ratio of the fourth value and the sum of the fourth value and a global allocated credit defining the total amount of data that can be transmitted over the network towards the processing device.
11. The method of any one of claims 7 to 9, wherein media data streams are each assigned a priority index, and determining the target amount of local media data to be transmitted comprises the following steps performed for each media data stream in the order of priority and while the fourth value is positive or null:
computing a dropping ratio as the ratio of the fourth value and the sum of the fourth value and a global allocated credit defining the total amount of data that can be transmitted over the network towards the processing device;
updating the fourth value by subtracting from it an amount of dropped data computed based on the dropping ratio.
12. The method of any one of claims 7 to 9, wherein determining the target amount of local media data to be transmitted comprises, when the fifth value is positive or null:
computing a dropping ratio as the ratio of the second value and the fifth value if the second value is strictly positive, otherwise the dropping ratio is null.
13. The method of claim 8 or 9, wherein media data streams are each assigned a priority index, and determining the target amount of local media data to be transmitted comprises the following steps performed for each media data stream in the order of priority and while the sixth value is higher than an aggregation threshold:
computing a dropping ratio as the ratio of the sixth value and the sum of the fourth value and a global allocated credit defining the total amount of data that can be transmitted over the network towards the processing device;
updating the sixth value by subtracting from it an amount of dropped data computed based on the dropping ratio.
14. The method of any one of claims 1 to 13, wherein the target amount of local media data to be transmitted is determined based on an allocated credit.
15. The method of any one of claims 1 to 13, wherein media data streams are each assigned a priority index and the target amount of local media data to be transmitted is determined based on a shaping ratio and on a global allocated credit defining the total amount of data that can be transmitted over the network towards the processing device.
16. The method of claim 9 or any one of claims 10 to 13 when depending on claim 9, comprising:
computing an eighth value as the difference between a global allocated credit defining the total amount of data that can be transmitted over the network towards the processing device and the seventh value;
wherein the target amount of local media data to be transmitted is determined based on the eighth value.
17. The method of any one of claims 1 to 16, wherein the network comprises a plurality of communication devices connected as a daisy-chain.
18. The method of any one of claims 1 to 17, wherein the first and/or second communication devices are physically adjacent to the communication device.
19. The method of any one of claims 1 to 17, wherein the first and/or second communication devices are logically adjacent to the communication device.
20. The method of any one of claims 1 to 19, wherein determining the target amount of local media data to be transmitted is further based on a predetermined selection rule.
21. The method of claim 20, wherein determining the target amount of local media data to be transmitted is further based on the amount of local media data and the signaled amounts of first and second media data generated during a scheduling period.
22. The method of claim 20, wherein determining the target amount of local media data to be transmitted is further based on an aggregation of amounts of local media data and signaled amounts of first and second media data generated during a plurality of scheduling periods.
23. The method of claim 20, wherein a plurality of first signaling information are received for signaling the amounts of first media data generated during a first plurality of scheduling periods, and/or a plurality of second signaling information are received for signaling the amounts of second media data generated during a second plurality of scheduling periods.
24. The method of claim 23, wherein the transmitting step includes a selection step for selecting the data to be transmitted, based on priorities, when the size of the obtained local media data exceeds the size of the determined target amount of local media data.
25. A communication device for transmitting media data towards a processing device via a network, the communication device being configured for carrying out the steps of:
obtaining an amount of local media data composed of several media data streams;
receiving at least one of first signaling information from a first communication device located upstream from the communication device with respect to the processing device, and second signaling information from a second communication device located downstream from the communication device with respect to the processing device, wherein:
the first signaling information signals an amount of first media data that can be transmitted to the communication device by the first communication device, and the second signaling information signals an amount of second media data that can be transmitted towards the processing device by the second communication device;
determining, based on the obtained amount of local media data and on at least one of the signaled amounts of first and second media data, a target amount of local media data to be transmitted towards the processing device; and transmitting the determined target amount of local media data towards the processing device.
26. A network comprising a plurality of communication devices according to claim 25 and a processing device configured to control the transmission of media data over the network.
27. The network according to claim 26, wherein the communication devices are connected as a daisy-chain and are each configured to perform steps of a method according to any one of claims 1 to 24.
28. The network according to claim 26 or 27, wherein the first and/or second communication devices are physically adjacent to the communication device.
29. The network according to claim 26 or 27, wherein the first and/or second communication devices are logically adjacent to the communication device.
GB1721842.1A 2017-12-22 2017-12-22 Transmission method, communication device and communication network Active GB2569808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1721842.1A GB2569808B (en) 2017-12-22 2017-12-22 Transmission method, communication device and communication network


Publications (3)

Publication Number Publication Date
GB201721842D0 GB201721842D0 (en) 2018-02-07
GB2569808A true GB2569808A (en) 2019-07-03
GB2569808B GB2569808B (en) 2020-04-29

Family

ID=61131584

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1721842.1A Active GB2569808B (en) 2017-12-22 2017-12-22 Transmission method, communication device and communication network

Country Status (1)

Country Link
GB (1) GB2569808B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5528591A (en) * 1995-01-31 1996-06-18 Mitsubishi Electric Research Laboratories, Inc. End-to-end credit-based flow control system in a digital communication network
US20100194894A1 (en) * 2009-02-05 2010-08-05 Hitachi, Ltd. Data acquisition system and transmission control device
US20130003748A1 (en) * 2011-07-01 2013-01-03 Fujitsu Limited Relay apparatus and relay control method
WO2014068630A1 (en) * 2012-10-29 2014-05-08 日立マクセル株式会社 Repeater and transmission/reception method
US20160006648A1 (en) * 2013-03-21 2016-01-07 Hitachi, Ltd. Distributed Control System and Control Method Thereof

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2595887A (en) * 2020-06-09 2021-12-15 Canon Kk Method, device, and computer program for improving transmission of data in daisy-chain networks
GB2595884A (en) * 2020-06-09 2021-12-15 Canon Kk Method, device and computer program for robust data transmission in daisy-chain networks
GB2595884B (en) * 2020-06-09 2023-04-19 Canon Kk Method, device and computer program for robust data transmission in daisy-chain networks
GB2595887B (en) * 2020-06-09 2023-08-23 Canon Kk Method, device, and computer program for improving transmission of data in daisy-chain networks
GB2616735A (en) * 2020-06-09 2023-09-20 Canon Kk Method, device, and computer program for robust data transmission in daisy-chain networks
GB2616735B (en) * 2020-06-09 2024-05-29 Canon Kk Method, device, and computer program for robust data transmission in daisy-chain networks

