GB2563438A - Transmission method, communication device and communication network

Info

Publication number
GB2563438A
GB2563438A (Application GB1709613.2A)
Authority
GB
United Kingdom
Prior art keywords
communication device
data
transmission
media data
sensor
Prior art date
Legal status
Granted
Application number
GB1709613.2A
Other versions
GB201709613D0 (en)
GB2563438B (en)
Inventor
Guignard Romain
El Kolli Yacine
Le Scolan Lionel
Closset Arnaud
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1709613.2A
Publication of GB201709613D0
Publication of GB2563438A
Application granted
Publication of GB2563438B
Status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40 - Bus networks
    • H04L 12/407 - Bus networks with decentralised control
    • H04L 12/417 - Bus networks with decentralised control with deterministic access, e.g. token passing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Small-Scale Networks (AREA)

Abstract

Disclosed is a method of transmitting media data from a communication device 101-112 towards a processing device 160, comprising obtaining the media data captured by a sensor during a sensing period. In response to detection of a first authorisation such as a token associated with the sensing period, at least part of the media data is transmitted towards the processing device. Information such as a token characterising a second authorisation to transmit media data obtained by another communication device is transmitted. The media data may comprise audio or image data from a multi-camera system. There may be an initialisation during which a transmission sequence is retrieved indicating transmission order of the devices. The first and second authorisations may be comprised within a message identifying media data to be transmitted. Part of the media data can be dropped from a device buffer if a drift value in the time between sensing periods when accounting for processing time is above a threshold, or if the difference between expected and actual transmission start time is above a pre-set maximum time delay. There may be sensor fusion with collaborative processing. A token ring may prevent out-of-order reception at the processing device.

Description

TRANSMISSION METHOD, COMMUNICATION DEVICE
AND COMMUNICATION NETWORK
FIELD OF THE INVENTION
The present invention relates in general to transmission of media data in communication networks, and in particular to a method and communication device for transmitting media data captured by a sensor device during a sensing period over a network comprising a plurality of communication devices and a processing device.
Aspects of the present invention are particularly adapted to systems in which communication devices are connected to a high speed network according to a daisy chain topology.
BACKGROUND OF THE INVENTION
Nowadays, a lot of applications involve a large number of sensor devices (for instance wide video surveillance systems and multi-camera capture systems). Most of them provide real-time services that are highly appreciated by users of these systems. However, the high number of sensors in sensor networks and the amount of data generated by each sensor raise several problems such as bandwidth consumption to transport media data to a centralized processing point and computational power requirements for processing this huge amount of media data.
To address these issues, distributed sensor processing systems have been introduced. They present numerous advantages such as reliability, scalability and processing performance. In such a system, several nodes each perform a different part of the processing so that the overall processing is distributed between multiple nodes instead of being centralized in one location.
This distributed architecture is especially suitable for a new generation of sensor networks in which the computational power of each sensor is sufficient to perform part of the processing.
A prior art document entitled “A system for distributed multi-camera capture and processing” by Jim Easterbrook, Oliver Grau and Peter Schübel, from BBC Research and Development, describes a distributed multi-camera capture and processing system for real-time media production applications. In this document, the communication between sensors and a server device is performed in push mode, i.e. the transmission of data is controlled independently by each sensor, which schedules the transmission as soon as new data are available for transmission.
Although the distributed processing allows a decrease in the bandwidth requirement and the load at server device side by executing part of the processing inside each sensor, some problems remain.
Since data may be transmitted towards the server device (i.e. the final destination of these data is the server device) as soon as they are available for transmission, they arrive at the server device in an unmanaged way. Hence, data presentation is disordered both spatially and temporally.
From the point of view of the server device, the reception of data out of order causes over-consumption of memory and sub-optimal processing duration.
Moreover, traffic burden or network congestion may cause significant data loss and uncontrolled delay, which are incompatible with high-quality real-time applications.
Finally, in case of congestion, if the dropping of data becomes necessary due to the limited buffering capabilities of the sensors, the dropping is unmanaged and data belonging to different data types or different sensing periods may be deleted in the different sensors. As a consequence, post-processing at the computing server may be impacted, since important data may have been dropped.
Consequently, there is a need for improving existing data transmission methods for multi-sensor systems in which a first processing step (also called pre-processing) is performed at the sensors and a further step is performed at the same computing device.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method for transmitting media data from one communication device towards a processing device via a network, the method comprising the following steps:
- obtaining media data captured by a sensor device during a sensing period; and
- in response to detection of a first authorization associated with the sensing period:
  o transmitting at least part of the obtained media data towards the processing device; and
  o transmitting, to another communication device which is connected to the network, an item of information characterizing a second authorization to transmit media data obtained by said another communication device.
Therefore, the method of the invention makes it possible to improve media data transmission and processing in a multi-sensor network performing distributed preprocessing before transmission to a processing device for further processing.
Thanks to the first and second authorizations, transmission of media data is better controlled and the media data arrive in correct order at the processing device. As a consequence, the processing performed at the processing device is made easier and more efficient. Also, the transmission latency is managed, whatever the media data size variation. Hence, a real-time application can be provided with a low risk of uncontrolled loss of data.
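As an illustration only, the claimed steps map onto a very small control loop per device. The following sketch assumes hypothetical helper methods (capture, wait_for_authorization, send_towards, send_token) that are not defined in the present description:

```python
# Minimal sketch of the claimed method, seen from one communication device.
# All helper methods are hypothetical, named here only for illustration.
def transmission_cycle(device, processing_device, next_device, period):
    media_data = device.capture(period)            # obtain media data for the sensing period
    device.wait_for_authorization(period)          # detect the first authorization (e.g. XON token)
    device.send_towards(processing_device, media_data)  # transmit at least part of the media data
    device.send_token(next_device)                 # second authorization for the next device
```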
Optional features of the invention are further defined in the dependent appended claims.
According to a second aspect of the invention, there is provided a communication device for transmitting media data from the communication device towards a processing device via a network, the communication device being configured to perform the following steps:
- obtaining media data captured by a sensor device during a sensing period; and
- in response to detection of a first authorization associated with the sensing period:
  o transmitting at least part of the obtained media data towards the processing device; and
  o transmitting, to another communication device which is connected to the network, an item of information characterizing a second authorization to transmit media data obtained by said another communication device.
According to a third aspect of the invention, there is provided a network comprising a plurality of communication devices as aforementioned and a processing device configured to control the transmission of media data over the network.
For instance, the communication devices may be connected according to a daisy-chain topology. In variants, they could also be connected according to a star, ring or mesh topology.
The second and third aspects of the present invention have optional features and advantages similar to the first above-mentioned aspect.
Since the present invention may be implemented in software, the present invention may be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium, and in particular a suitable tangible carrier medium or suitable transient carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device or the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 illustrates an example of a multi-sensor network in which embodiments of the invention may be implemented;
Figure 2 illustrates an example of collaborative pre-processing between five communication devices of a multi-sensor network;
Figure 3 illustrates an example of pipeline pre-processing performed at a communication device of the multi-sensor network;
Figures 4a to 4j illustrate the transmission of data by sensors (communication devices) of the multi-sensor network and the occupancy of the communication device (sensor) buffers at given times of the transmission process, according to embodiments of the invention;
Figure 5 is a flowchart illustrating steps of a transmission method according to embodiments of the invention;
Figure 6a is a block diagram schematically illustrating a possible architecture of a communication device according to embodiments of the invention;
Figure 6b is a block diagram schematically illustrating logical blocks of the communication device shown in Figure 6a;
Figure 6c illustrates the format of messages exchanged within the logical blocks of the communication device of Figures 6a and 6b;
Figure 7a illustrates steps performed by a communication device for transmitting or forwarding data over the processing path according to embodiments of the invention;
Figure 7b illustrates steps performed by a communication device for transmitting or forwarding data over the collecting path according to embodiments of the invention;
Figure 8a illustrates general steps performed by a communication device for transmitting data over the collecting path according to embodiments;
Figure 8b illustrates steps performed by a communication device for transmitting local data over the collecting path;
Figure 8c illustrates steps performed by a communication device for forwarding data from other communication devices over the collecting path;
Figure 8d illustrates an alternative embodiment of Figure 8c;
Figure 9 is a flow chart comprising steps for insertion of the token in the data flow, implemented by the communication device according to embodiments;
Figure 10a illustrates an example of format for a Token XON used for the transmission control according to embodiments of the present invention;
Figure 10b illustrates an example of format for a transmission sequence;
Figures 11a and 11b illustrate a dropping mechanism based on latency monitoring, according to embodiments of the present invention;
Figure 12 is a chronogram illustrating message exchanges between the communication devices (sensors) and the processing device according to embodiments of the invention;
Figure 13 is a flow chart comprising steps carried out by the processing device according to embodiments of the invention;
Figure 14 is a block diagram schematically illustrating an exemplary architecture for a processing device according to embodiments of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The present invention is directed to a method for transmitting data, in particular media data (e.g. audio or image data) captured by a sensor device and processed in each communication device of a network, towards a target device of the network which is a processing device.
In the following description, the term “towards” for transmission of data in relation to a particular device is used to mean that the final destination of these data is the particular device. In some situations, the data may pass through some other devices before reaching said particular device.
Also, in the following description, a communication device is sometimes referred to as a sensor, meaning a sensor with computational capabilities (smart sensor). However, all the teachings relating to a sensor may apply to a communication device linked to a mere sensor without processing capabilities.
In the following description, the processing device is sometimes referred to as a computing server or a computational server.
According to general embodiments, a communication device obtains media data captured by a sensor device during a sensing period (also called capture period in the following description). When the communication device detects a first authorization associated with the sensing period, it can transmit at least part of the obtained media data towards the processing device.
This first authorization allows the communication device to transmit at least part of the obtained media data. For instance, it may allow transmission of only a type of data (e.g. only audio data but not image data) captured during a given sensing period. Thus, in this example, only this type of data captured during the given sensing period can be transmitted by the communication device when the first authorization is detected.
For instance, the detection of the first authorization may be based on the transmission sequence that indicates the order of transmission of media data by communication devices of the network. Typically, the considered communication device can know that it is the initiating device of the transmission sequence, i.e. the first device to be allowed to send data.
In a variant, the detection of the first authorization may consist of detecting the last packet of data for a given sensing period (or an acknowledgement of this last packet). The data from the next sensing period may thus be sent.
However, in most cases, the detection of the first authorization consists of receiving a specific message (XON “token”) from another communication device.
Then, the communication device transmits, to another communication device of the network, an item of information characterizing a second authorization to transmit media data obtained by the other communication device.
For instance, the item of information may be comprised in the last packet of data captured during a given sensing period or may be an acknowledgment of this last data packet. According to a variant, in most cases, it is a specific message (XON “token”).
The other communication device may be a neighbor or the next communication device indicated in a transmission sequence.
It should be noted that the second authorization represents a first authorization for the other communication device and allows it to transmit at least part of the obtained media data towards the processing device.
These authorizations allow scheduling of the transmission of the media data towards the processing device so that these media data are sent by the communication devices one after the other, instead of in a non-coordinated way as in the prior art.
The following description is focused on a network having a daisy chain topology. However, the present invention is not limited thereto and in embodiments, the network may have a different topology such as a star, ring or mesh topology.
The daisy-chain topology has numerous advantages. For instance, it allows a system to be built with low cabling cost (cable length is reduced in comparison with star topology) and simple wired topology. Another advantage of the daisy chain topology is the scalability: sensors may be easily added or removed.
Figure 1 illustrates an example of a multi-sensor network enabling collaborative processing between several synchronized sensors.
In the given example, the multi-sensor network 100 comprises a plurality of nodes 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 and 160 connected in cascade.
In this figure, three different types of node are represented:
- A processing device 160, such as a computing server, which performs post-processing on data received from sensors,
- An edge sensor 101 which is connected to the computing server,
- Back-end sensors 102 to 112 which are connected to the computing server through the edge sensor.
In addition, a control station (not illustrated) can perform control operations over the network. In a variant, these control operations may be handled by the processing device.
Each sensor (edge and back-end) is connected to its neighbors through a full duplex link 120. The edge sensor is connected via a full duplex link 130 to the computing server 160.
In order to allow data exchanges and thus collaborative processing between sensors 101 to 112, a (bidirectional) processing path 140 is defined. This processing path is limited to the multi-sensor network and has no predefined direction (one direction is defined during the initialization of the system depending on bandwidth occupancy).
For data transmission performed by each node towards the computing server 160, a collecting path 150 is defined.
Figures 2 and 3 illustrate different exemplary processing stages executed by the sensors (101 to 112) of the multi-sensor cascade network 100 shown in Figure 1. The exemplified processing supports transmission control using the data type supplied by a token.
Embodiments of the present invention are not limited thereto and other kinds of processing than those exemplified may be used. Notably, the present invention is also useful when there is no collaborative processing between nodes.
More specifically, Figure 2 illustrates an example of collaborative processing (i.e. the different transformations of data) between the nodes of a cascade network as shown in Figure 1.
In this example, a set of successive nodes (e.g. 102, 103, 104, 105 and 106) each processes raw data 201 captured during a sensing period or capture period 200. Four stages of processing are illustrated.
A particularity of the multi-sensor network is that each sensor may capture raw data at a given shared frequency (for instance raw video frames at 60 frames per second). Thereby, all the sensors trigger their capture processes simultaneously. To achieve a synchronous capture process among all the sensors of the multi-sensor network, a packet-based synchronization protocol (e.g. the Precision Time Protocol) is implemented in each sensor. This synchronization protocol allows the same global time to be shared by all the sensors of the multi-sensor system and so allows the capture to be triggered at the same time in all the sensors.
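Purely as an illustration of this shared-clock triggering, the following sketch derives identical capture instants on every sensor from the PTP-synchronized global time; the ptp_now argument and the 60 Hz period are assumptions, not part of the present description:

```python
import math

CAPTURE_PERIOD = 1.0 / 60.0  # e.g. raw video frames at 60 frames per second

def next_capture_instant(ptp_now: float) -> float:
    # Every sensor rounds the shared global time up to the same period
    # boundary, so all captures are triggered simultaneously.
    return math.ceil(ptp_now / CAPTURE_PERIOD) * CAPTURE_PERIOD
```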
For instance, in the given example, the sensor k captures raw data 201 corresponding to the capture period n 200. A new raw data capture is performed for each new capture period n+1, n+2, n+3, n+4, n+5 and n+6 (respectively 210, 220, 230, 240, 250 and 260). Next, the sensor k transmits its raw data to the next sensor of the processing path (sensor k+1) during the period 202. This period is called Rx from the receiver point of view (sensor k+1) and Tx from the emitter point of view (sensor k).
At the end of the reception of the raw data, the sensor k+1 performs the first data processing 203, which results in the first fusion output 271.
It is recalled that so-called data fusion is the process of integrating multiple data and knowledge representing the same real-world object into a consistent, accurate and useful representation. The goal of data fusion is to combine relevant information from two or more data sources into a single one that provides a more accurate description than any of the individual data sources on its own.
In the given example, the notation used for the different fusion results of the multi-sensor cascade network is the following:
output[n, k]x
where:
- n is the capture period (or sensing period, or cycle index);
- k is the origin sensor;
- x is the fusion stage.
Thereby, using this notation, the result 271 of the first processing stage 203 is output [n,k]1. This result is sent during the transmission period 204 to the following sensor k+2 which performs the second stage of data processing 205 based on the raw data produced by the sensor k during the capture period n. The result 272 of the processing stage 205 is output [n,k]2. This second output is transmitted from the sensor k+2 to the sensor k+3 during transmission period 206. This data is used by the sensor k+3 for the third data transformation 207 for the production of a third fusion output 273 (output [n,k]3) based on raw data of the sensor k captured during period n. This third output 273 is transmitted from the sensor k+3 to the sensor k+4 during transmission period 208. This data is used by sensor k+4 for the fourth data transformation 209 for the production of a fourth and last fusion output 274 (output [n,k]4) relating to raw data of the sensor k captured during period n. The collaborative processing ends at the beginning of the period n+6 (260).
It is worth noting that, to achieve real-time operation, each processing stage (203, 205, 207 and 209) has to last at most one capture period. Otherwise, several computational processors should be allocated to parallelize the processing stage.
As a result, in this example, a new entire distributed processing cycle (capture, 1st, 2nd, 3rd and 4th processing) is completed every capture period with an offset lasting 6 capture periods 200.
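The pipeline mapping of Figure 2 can be summarized as follows: fusion stage x (1 to 4) on the raw data captured by sensor k is executed by sensor k+x. A trivial sketch of this mapping (the function name is illustrative only):

```python
def stage_executor(k: int, x: int) -> int:
    """Sensor that runs fusion stage x (1..4) on sensor k's raw capture."""
    return k + x

# output[n,k]1 is produced by sensor k+1, ..., output[n,k]4 by sensor k+4.
for x in range(1, 5):
    print(f"output[n,k]{x} is produced by sensor k+{x}")
```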
Figure 3 focuses on the different processing operations performed at a considered node.
The top of Figure 3 shows the same processing as described with reference to Figure 2 but this time from the point of view of a considered sensor, here the sensor k+4 of Figure 2.
During capture period n (300), the sensor k+4 captures data (301) simultaneously with the other sensors.
It then receives data from the sensor k+3 during the period 302 and then performs first processing on them during the period 303.
It then receives, during the period 304, data captured by the sensor k+2 and firstly processed by the sensor k+3. The sensor k+4 then performs second processing on them during the period 305.
It then receives, during the period 306, data captured by the sensor k+1 that have been firstly processed by sensor k+2 and secondly processed by sensor k+3. The sensor k+4 then performs third processing on them during the period 307.
It then receives, during the period 308, data captured by the sensor k that have been firstly processed by sensor k+1, secondly processed by sensor k+2 and thirdly processed by sensor k+3. The sensor k+4 then performs a fourth processing on them during the period 309.
For these purposes, each sensor (i.e. the sensor itself or the communication device linked to this sensor) has to embed the capability to execute all the processing stages in parallel; alternatively, the processing may be fully or partially serialized if the processor is fast enough.
The bottom of Figure 3 illustrates the transmission time slots of the processing path and the different transmission periods 302, 304, 306, 308. The right part is a detailed view of the transmission period 308 for a particular embodiment of the collaborative processing executed by the multi-sensor cascade network.
In this example, the fourth processing stage needs the third fusion output of four different sensors. For example, the sensor k+4 needs its own third fusion output and the third fusion output of three adjacent sensors.
Consequently, the transmission period 308 is used to transmit output[n,k]3 308a, output[n,k-1]3 308b and output[n,k-2]3 308c. The sensor k+3 transmits the output[n,k]3, the sensor k+2 sends the output[n,k-1]3 which is forwarded by the sensor k+3 to reach the sensor k+4 and the sensor k+1 sends the output[n,k-2]3 which is forwarded by the sensors k+2 and k+3 to reach the sensor k+4.
Thus, these data are used by intermediary sensors for their own processing but are also forwarded to adjacent sensors. The same principle is applied to the other processing stages.
It is worth noting that the processing path transmission period 390 varies according to the size of the data to transmit during the transmission period (e.g. 302, 304, 306 and 308) and the duration of the processing which outputs the data.
As explained above, the processed data are exchanged between sensors to perform the different processing stages. The processing path bandwidth requirement is quasi constant and mainly depends on the depth of the pipeline processing i.e. the number of processing operations to carry out (four processing stages in Figures 2 and 3). The bandwidth requirement for the processing path also depends on the number of intermediate results necessary to compute the next processing operation. For instance, as aforementioned at the top of Figure 3, the fourth processing stage needs the third fusion output of four different sensors.
Moreover, the full set of processed data, i.e. output[x,y]1, output[x,y]2, output[x,y]3 and output[x,y]4, has to be delivered to the computing server 160 through the collecting path 150 by the different sensors for every period. Consequently, the set of processed data produced by each sensor is aggregated through the collecting path and the overall traffic increases at each new sensor, reaching a maximum at the arrival at the computing server.
Figures 4a to 4j schematically illustrate the transmission of data by sensors (communication devices) of the multi-sensor network, according to embodiments of the invention. These figures also show the change in occupancy of the communication device buffers during operation of the multi-sensor network, according to embodiments of the invention.
More specifically, each of these ten figures (Figures 4a to 4j) represents the occupancy of the communication device (sensor) buffers at a given time of the transmission process. In this example, a computing server (not shown) is connected to the initial sensor 101. This initial sensor 101 is itself connected to a second sensor 102, which is itself connected to a third sensor 103, and so on. In this example, there are N sensors connected in series. Obviously, this is only for illustration purposes and the multi-sensor network is not limited to this example.
In this example, the communication devices wait for the end of the complete data processing (here, the end of the fourth processing stage shown in Figure 2) to start the transmission of the full set of processed data towards the computing server according to a transmission sequence indicating the transmission order of data by communication devices of the network. An example of such a transmission sequence is described with reference to Figure 10b.
All communication devices of the multi-sensor network, each transmitting in turn, transmit their first fusion output data relative to the capture period n.
Next, all communication devices of the multi-sensor network, each transmitting in turn, transmit their second fusion output data relative to the capture period n and then their third fusion output data relative to the capture period n, each still transmitting in turn.
And finally, all communication devices of the multi-sensor network, each transmitting in turn, transmit their fourth fusion output data relative to the capture period n.
When the transmission of the full set of processed data relative to the capture period n is completed, all communication devices of the multi-sensor network, each transmitting in turn, transmit their first fusion output data relative to the next capture period (n+1) and so on. In this example, there are 4 steps in the transmission sequence to transmit all data corresponding to a cycle index (i.e. a sensing period).
The aggregated traffic for m-stage distributed processing (in the example of Figure 2, m = 4) over the multi-sensor network comprising N sensors may be computed according to the following formula, for the capture period n:
$$\text{Aggregated traffic} = \sum_{\text{stage}=1}^{m} \sum_{\text{sensor}=1}^{N} \text{output}[n, \text{sensor}]_{\text{stage}}$$
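A small numeric sketch of this double sum, iterated stage-major (all sensors send their stage-1 outputs, then their stage-2 outputs, and so on) to mirror the transmission sequence of Figures 4a to 4j; the per-output sizes are dummy values:

```python
N, m = 12, 4  # 12 sensors and 4 fusion stages, as in Figures 1 and 2
# size[(sensor, stage)]: bytes produced by 'sensor' at 'stage' for period n (dummy values)
size = {(s, x): 1500 for s in range(1, N + 1) for x in range(1, m + 1)}

aggregated_traffic = sum(size[(sensor, stage)]
                         for stage in range(1, m + 1)     # outer sum: stages
                         for sensor in range(1, N + 1))   # inner sum: sensors
print(aggregated_traffic)  # total bytes reaching the computing server for period n
```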
Figure 4a shows the beginning of the transmission of the processed data relative to the capture period n through the collecting path.
As shown, at this stage, it is assumed that the first sensor 101 (“sensor 1”) owns the token, i.e. the authorization to transmit its own data, and so transmits its local data (output [n,1]1). The data are extracted from a buffer 411 dedicated to storing local data and are packetized. The packet 451 is sent to the computing server. The other sensors of the network wait for the token.
Figure 4b shows the buffer occupancy of the first three sensors when the sensor 101 has just ended the transmission of its local data by sending the packet 452, that is to say its buffer 411 no longer contains data from the current step of the transmission sequence (i.e. first fusion output data relative to the capture period n), and it has sent the token XON 490 to the next sensor, i.e. the sensor 102 in the given example.
In the shown example, the buffer does not contain other data (i.e. data relative to other capture periods or to other processing outputs), but it could. Upon reception of the token, the sensor 102 starts the transmission of its local data (output[n,2]1) towards the computing server. A first packet 453 corresponding to the local data of the sensor 102 relative to the capture period n is sent over the link. The sensor 101 is thus in forward mode.
Figure 4c shows the buffer occupancy of the first three sensors when the second sensor 102 has finished transmitting its local data and has thus sent the token XON 490 to the sensor 103. The sensor 103 thus starts its local data transmission. The sensors 101 and 102 are in forward mode, i.e. they only forward data from other sensors (but not their own data).
Figure 4d shows the buffer occupancy of the first three sensors when the three sensors 101, 102 and 103 have completed their transmission and are in forward mode. The data packets are forwarded successively by the different sensors. From the point of view of the computing server, the packet arrival is ordered.
One can note that, in order to minimize the gap between packets from the computing server point of view, knowledge of the residence time of one packet in a sensor, as well as of the duration required to analyse the token and start the transmission of data, would be useful. In order to compensate for those durations, the token may be sent to the next sensor of the transmission sequence before the end of the local data transmission.
For instance, the token may be sent with an offset equal to the residence time $T_{\text{residence}}$ of a packet in a sensor plus the analysis time $T_{\text{analysisXON}}$, which corresponds to the time needed by a sensor to receive the token XON, analyse its content and start its transmission of local data. In this case, the time $T_{\text{sendXON}}$ at which the token may be sent is computed according to the following formula:

$$T_{\text{sendXON}} = T_{\text{last packet transmission end}} - T_{\text{residence}} - T_{\text{analysisXON}}$$

Thus, the token is preferably sent at a time equal to the time $T_{\text{last packet transmission end}}$ at which the last packet transmission ends, minus the residence time $T_{\text{residence}}$ and the analysis time $T_{\text{analysisXON}}$.
In a variant, the offset for sending the token may be rounded to one packet duration, i.e. the token is sent to the next sensor at the end of the penultimate packet transmission if the sum of the residence time $T_{\text{residence}}$ and the analysis time $T_{\text{analysisXON}}$ is below the transmission duration of the last packet.
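A minimal sketch of this early token emission, assuming all times are expressed in the same clock base (the variable names are illustrative):

```python
def token_send_time(t_last_packet_end: float, t_residence: float,
                    t_analysis_xon: float) -> float:
    # Send the token early enough that the next sensor starts transmitting
    # exactly when the last local packet has drained through the chain.
    return t_last_packet_end - t_residence - t_analysis_xon

def send_after_penultimate_packet(t_residence: float, t_analysis_xon: float,
                                  last_packet_duration: float) -> bool:
    # Rounded variant: emit the token at the end of the penultimate packet
    # if the compensation fits within the last packet's transmission time.
    return t_residence + t_analysis_xon < last_packet_duration
```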
Figures 4e, 4f and 4g take place during the transition between transmission of the first fusion output data relative to the capture period n and the transmission of the second fusion output data relative to the capture period n.
Figure 4e shows the buffer occupancy of the first three sensors when the packets resulting from the first fusion output of the two last sensors N-1 and N (respectively 454 and 455 for the sensor N-1, and 456, 457 and 458 for the sensor N) are in transit towards the computing server.
Although the second fusion output data are ready to be sent in the buffer of each sensor, and more particularly in the buffer of the sensor 101, this sensor checks whether the first fusion output data of every sensor have been transmitted and waits if not. Thus, Figure 4e shows the sensor 101 waiting for the last packet of the last sensor (N) before starting a new transmission sequence or a new step of the current transmission sequence.
More generally, the sensor which initiates the transmission sequence has to receive an acknowledgment from the last sensor. This acknowledgment may be the token itself, i.e. the reception of the token by the initiator sensor may be the acknowledgment 459. In a variant, the acknowledgment may be implicit; for instance, it may be signalled by the reception, by the sequence initiator, of the last packet 458 of the last sensor. This variant is only applicable if the initiator node is also the edge node, i.e. every packet of each node is forwarded by the initiator node.
In a variant, the initiator node could be the computing server (processing device), meaning that the edge node receives a token from the computing server. As a consequence, the computing server receives an acknowledgment from the multi-sensor network signalling that the data captured during a given period or step have been transmitted.
As shown in Figure 4f, upon reception of the explicit acknowledgment 459 or of the last packet 458 if there is no explicit acknowledgment, the first sensor 101 starts the transmission of its second fusion output data 460 (local data). In this example, the explicit acknowledgment 459 is optional.
Figure 4g shows the occupancy of the buffers when the sensor 101 has just ended the transmission of its second fusion output local data by sending the packet 461, i.e. its buffer 411 becomes empty or at least it does not contain any more second fusion output data relative to the capture period n.
The sensor 101 thus sends the token XON 490 to the next sensor i.e. the sensor 102 in the given example. Upon reception of the token, the sensor 102 starts the transmission of its local data (output[n,2]2) towards the computing server. The first packet 462 corresponding to the local data of the sensor 102 relative to the capture period n is sent over the link.
Figures 4h, 4i and 4j take place during the transition between the transmission of data relative to the capture period n and the transmission of data relative to the capture period n+1. This set of figures illustrates the case in which the aggregated data relative to the capture period n represent a bandwidth greater than the link capacity or, in other words, the case in which the transmission of the full set of processed data relative to the capture period n lasts more than one capture period.
Consequently, the beginning of the transmission of the data relative to the capture period n+1 is delayed with respect to the optimal transmission opportunity, i.e. the transmission opportunity of the previous transmission sequence plus one capture period duration.
Figure 4h shows the buffers of the first three sensors when the packets of fourth fusion output data of the two last sensors N-1 and N (respectively 470 and 471 for the sensor N-1, and 472, 473 and 474 for the sensor N) are in transit towards the computing server.
Although the first fusion output data relative to the capture period n+1 are ready to be sent in the buffers of each sensor, and more particularly in the buffer of the sensor 101, this sensor checks whether the full set of processed data relative to the capture period n has been transmitted and waits if not. Thus, the sensor 101 waits for the last packet 474 of the last sensor before starting a new transmission sequence.
As shown in Figure 4i, upon reception of the last packet 474 of the last sensor N or upon reception of an explicit acknowledgment 475 (optional), the first sensor 101 starts the transmission of its local data 476, i.e. here the local first fusion output data (output[n+1,1]1).
As previously described, Figure 4j shows the buffer occupancy when the sensor 101 has just finished the transmission of its local data by sending the last packet 477. At this time, its buffer 411 is empty or at least it no longer contains data of the first fusion relative to the capture period n+1. The sensor 101 thus sends the token XON 490 to the next sensor 102. Upon reception of the token, the sensor 102 starts the transmission of its local data (output[n+1,2]1) towards the computing server with a first packet 478.
In a variant, the sensors can wait for the end of the complete data processing (i.e. here the end of the four-stage processing) before starting the transmission of the full set of processed data towards the computing server. In this case, each sensor may transmit its four fusion outputs before giving the token to the next sensor.
In this variant, the aggregated traffic for m-stage distributed processing (in the example of Figure 2, m = 4) over the multi-sensor network comprising N sensors may be computed according to the following formula, for the capture period n:
$$\text{Aggregated traffic} = \sum_{\text{sensor}=1}^{N} \sum_{\text{stage}=1}^{m} \text{output}[n, \text{sensor}]_{\text{stage}}$$
In another variant, the different output data may be transmitted at each intermediate end of processing.
The control method described above supports every transmission sequence: for instance, odd nodes first and even nodes later, overlapping data of different capture periods, and every configuration for mixing the different fusion outputs.
The sequence of data transmission may be defined by factory settings or may be pushed by the computing server or another server connected to the multi-sensor cascade network. Moreover, the transmission sequence may be dynamically updated.
The exemplary embodiment described with reference to Figures 4a to 4j allows advantageous distribution of the data buffering through all the communication devices of the multi-sensor cascade network. At any given time, only one communication device is able to transmit data towards the computing server. As a result, there is no concurrency between the forward traffic and the local traffic. Data to forward are not stored in the intermediate communication devices and each communication device only stores its local production of data.
Moreover, embodiments of the invention enable improvement in the coherency of the data stored in each sensor. This is because, at the beginning of a new transmission sequence, each sensor owns the same kind of data in its local buffer (note: the size of those data may be variable from one node to another). For instance, as shown in Figure 4h, each sensor has sent its full set of processed data relative to the capture period n and each sensor has its full set of processed data relative to the capture period n+1 in its local buffer.
As aforementioned, the set of Figures 4h, 4i and 4j illustrates the case where the duration needed to transmit all data relative to the capture period n is longer than a capture period. Consequently, this delay leads to extra memory consumption in each communication device to store its local data. This extra memory consumption may be compensated for if the following transmission periods are shorter than the capture period or reference period. On the contrary, if the successive transmissions last more than the reference period, the quantity of data in the local buffer of each node increases and the latency between the capture and the transmission of the data relative to this capture cycle also increases.
In the example of Figure 2, six reference periods elapse between the trigger of the capture and the availability of the complete processed data relative to this capture, and hence the beginning of their transmission. Each communication device can monitor the latency in terms of reference periods and react if necessary to avoid exceeding the latency constraints of the whole network. A mechanism of data dropping based on this monitoring is described hereafter with reference to Figures 11a and 11b.
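As an illustration of such monitoring, the following sketch drops the oldest buffered data when the measured drift or start-time delay exceeds a bound, in the spirit of the mechanism of Figures 11a and 11b; the thresholds and helper names are assumptions, not taken from the present description:

```python
MAX_DRIFT_PERIODS = 2       # hypothetical drift threshold, in reference periods
MAX_START_DELAY = 0.050     # hypothetical maximum start-time delay, in seconds

def maybe_drop(local_buffer, drift_periods, expected_start, actual_start):
    # Drop part of the buffered media data if latency constraints are at risk.
    if (drift_periods > MAX_DRIFT_PERIODS
            or (actual_start - expected_start) > MAX_START_DELAY):
        local_buffer.drop_oldest_sensing_period()  # hypothetical buffer operation
```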
Figure 5 is a flowchart illustrating steps of a transmission method according to embodiments of the invention. This algorithm is typically executed by the CPU 611 of each sensor or of the associated communication device if the sensor has no computing facilities.
The algorithm starts with an initialization step 501 during which a transmission sequence is retrieved. The transmission sequence may be a default transmission sequence stored in an internal memory of the sensor. Otherwise, the transmission sequence may be received from the computing server.
The retrieved transmission sequence indicates a transmission order of media data by communication devices (sensors) of the network. It may define both the communication device order and the traffic order (which type of data is sent, for instance first output data or all output data, and/or of which sensing period). An example of format for the transmission sequence is described with reference to Figure 10b.
Once the initialization is complete, the sensor waits for an internal trigger. This internal trigger may be the end of one of the intermediate processing stages or the end of the last processing stage, i.e. when the full set of processed data relative to a capture period is ready to be sent, as shown in Figures 2 and 3.
Upon detection of this internal trigger (step 502), the system checks (step 503) whether the current sequence has been acknowledged. A sequence is acknowledged when all the data to be sent according to the transmission sequence (or the current step of the transmission sequence) have been transmitted.
The acknowledgement may be detected by receiving an explicit acknowledgment (ACK). In a variant, it may be detected by analyzing the data packets and more specifically the forwarding of the last packet of data of the last node of the current transmission sequence by the initiator of the transmission sequence.
If the test 503 is negative, the sensor maintains priority for the forward traffic (step 504) and performs packet analysis (step 505) to detect the acknowledgment.
If the test 503 is positive, it is checked whether the identification number of the sensor corresponds to that of the sensor which initiates the transmission sequence (step 510).
If the test 510 is negative, the sensor waits for the reception of the token. Consequently, in this case, the forward traffic has priority (i.e. is priority traffic).
Upon reception of the token at step 511, the sensor analyses the token (step 512) to update the transmission sequence according to information comprised in the token (for instance “data types” field 1004 shown in Figure 10a) if necessary.
Then, at step 520, the sensor starts the transmission of its local data.
If the test 510 is positive, the sensor directly starts the transmission 520 of its local data according to the initialization results for the traffic order. For instance, the transmission sequence may impose that each sensor transmits the first fusion output or the second fusion output or the third fusion output or every fusion output.
For each transmitted packet, the sensor checks (step 521) whether the packet is the last for the current transmission sequence.
When the test 521 is negative, the sensor keeps sending local data packets (step 520).
When the test 521 is positive, the sensor sends the token 522 to the next sensor of the transmission sequence and the process loops back to step 502.
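The Figure 5 flow can be transcribed, step for step, into the following loop; every primitive (wait_internal_trigger, receive_token, etc.) is a hypothetical helper standing in for the behaviour described above:

```python
def run(sensor):
    sequence = sensor.retrieve_transmission_sequence()   # step 501 (initialization)
    while True:
        sensor.wait_internal_trigger()                   # step 502 (end of processing)
        while not sensor.sequence_acknowledged():        # step 503
            sensor.prioritize_forward_traffic()          # step 504
            sensor.analyze_forwarded_packets()           # step 505 (detect the ACK)
        if sequence.initiator != sensor.identifier:      # step 510
            token = sensor.receive_token()               # step 511
            sequence.update_from(token)                  # step 512
        while not sensor.last_local_packet_sent():       # steps 520-521
            sensor.send_local_packet()
        sensor.send_token(sequence.next_after(sensor.identifier))  # step 522
```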
Figure 6a schematically illustrates a communication device 600 of a network 650 (e.g. the multi-sensor cascade network 100 shown in Figure 1), configured to implement at least one embodiment of the present invention. The communication device 600 may be a device such as a micro-computer, a workstation or a light portable device. The communication device 600 comprises a communication bus 613 to which there are preferably connected:
- a central processing unit 611, such as a microprocessor, denoted CPU;
- a read only memory 607, denoted ROM, for storing computer programs for implementing the invention;
- a random access memory 612, denoted RAM, for storing the executable code of methods according to embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
- at least one communication interface 602 connected to the communication network 650 over which digital data packets or frames are transmitted, for example a communication network according to the 802.3 protocol. The data frames are written from a FIFO sending memory in RAM 612 to the network interface for transmission or are read from the network interface for reception and writing into a FIFO receiving memory in RAM 612 under the control of a software application running in the CPU 611.
Optionally, the communication device 600 may also include the following components:
- a data storage means 604 such as a hard disk, for storing computer programs for implementing methods according to one or more embodiments of the invention;
- a disk drive 605 for a disk 606, the disk drive being adapted to read data from the disk 606 or to write data onto said disk.
The communication device 600 can be connected to various peripherals, for example such as a digital sensor 608, each being connected to an input/output card (not shown) so as to supply data to the communication device 600.
The communication bus provides communication and interoperability between the various elements included in the communication device 600 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the communication device 600 directly or by means of another element of the communication device 600.
The disk 606 can be replaced by any information medium for example such as a compact disk (CD-ROM), rewritable or not, a ZIP disk, a USB key or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables a method according to the invention to be implemented.
The executable code may be stored either in read only memory 607, on the hard disk 604 or on a removable digital medium such as for example a disk 606 as described previously. According to a variant, the executable code of the programs can be received by means of the communication network 650, via the interface 602, in order to be stored in one of the storage means of the communication device 600, such as the hard disk 604, before being executed.
The central processing unit 611 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a nonvolatile memory, for example on the hard disk 604 or in the read only memory 607, are transferred into the random access memory 612, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC). Alternatively, a mixed implementation (part in hardware and part in software) may also be contemplated.
Figure 6b is a block diagram schematically illustrating logical blocks of the communication device 600 shown in Figure 6a.
As illustrated, the communication device 600 comprises:
- an Application Layer block 610,
- a Packetizer block 620,
- a Routing block 630,
- a Communication Interface block 640 comprising two Ports 641a and 641b, and
- a Scheduler block 660.
The Application Layer block 610 is configured to handle at least part of the processing stage described with reference to Figure 2 and to generate and receive data packets issued from the processing.
The Application Layer block 610 is configured to send to the Scheduler block 660 a message EOP 661 (corresponding to the internal trigger 502 shown in Figure 5) to inform it about the quantity of data produced for each data type, the corresponding cycle index and the address at which the data payload is stored in memory.
For these purposes, as shown in Figure 6c, the EOP message 661 comprises a field “Data Type” 671, a field “Size” 672 indicating the quantity of data produced, a field “Cycle Index” 673 which indicates the reference cycle for the data produced (processing period index) and a field “Address” 674 indicating the address at which the data are stored in memory.
The Packetizer block 620 is configured to format data coming from the Application Layer block 610 into packets of predetermined size upon reception of a trigger request “Start_Packetizer” 662 from the Scheduler block.
As shown in Figure 6c, the trigger “Start_Packetizer” 662 comprises a field “Data Type” 675 indicating the type of data to packetize, a field “Nb_bytes” 676 indicating the number of bytes to packetize, a field “Cycle Index” 677 indicating the reference cycle for the data produced (processing period index) and a field “Address” 678 indicating the address at which the data payload is stored in memory.
The Packetizer block 620 is also configured to create multiple packets (called “formatted packets”) of predetermined size according to the request 662 of the Scheduler block 660. The Packetizer block 620 is also configured to send back a “Ready_to_Send” acknowledgment 663 to the Scheduler block 660 in order to inform it of the packet size and data type of the formatted packets. The “Ready_to_Send” acknowledgment 663 comprises similar fields to the “Start_Packetizer” trigger 662.
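As an illustration, the Scheduler-side messages can be modelled as plain records; the field names follow Figure 6c, while the Python types are assumptions:

```python
from dataclasses import dataclass

@dataclass
class EOP:                    # Application Layer -> Scheduler (message 661)
    data_type: int
    size: int                 # quantity of data produced, in bytes
    cycle_index: int          # reference cycle (processing period index)
    address: int              # memory address of the data payload

@dataclass
class StartPacketizer:        # Scheduler -> Packetizer (trigger 662)
    data_type: int
    nb_bytes: int
    cycle_index: int
    address: int

# The "Ready_to_Send" acknowledgment 663 carries similar fields to 662.
ReadyToSend = StartPacketizer
```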
The Packetizer block 620 is also configured to add a header 625 to each formatted packet.
As shown in Figure 6c, the header 625 may comprise the following fields:
- a field “Sensor Index” 687 which comprises the number of the sensor;
- a field “Cycle Index” 688 which comprises the processing period index;
- a field “Data Type” 689 which indicates the data type of the payload, for instance “output of the fusion stage 1”;
- a field “Traffic Type” 690 which indicates whether the packet has to be sent over the collecting or processing path;
- a field “Size” 691 which indicates the size of the packet in bytes;
- a field “Remaining Hop” 692 which is used to stop packet propagation along the processing path;
- a field “Fragment Offset” 693 which indicates the offset of the first byte of the packet payload relative to the beginning of the data payload 695 generated by the Application Layer block 610;
- a field “Last Fragment” 694 which indicates that the packet payload is the last part needed to reconstruct the data payload.
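Purely for illustration, the header 625 can be serialized as a fixed layout; the field widths below are assumptions, since the present description does not specify them:

```python
import struct

# Hypothetical widths: sensor index (1 byte), cycle index (2), data type (1),
# traffic type (1), size (2), remaining hop (1), fragment offset (4), last fragment (1).
HEADER_FMT = "!BHBBHBIB"  # network byte order, no padding

def pack_header(sensor_index, cycle_index, data_type, traffic_type,
                size, remaining_hop, fragment_offset, last_fragment):
    return struct.pack(HEADER_FMT, sensor_index, cycle_index, data_type,
                       traffic_type, size, remaining_hop,
                       fragment_offset, 1 if last_fragment else 0)
```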
The Packetizer block 620 is configured to deliver the payload of packets received from the Routing block 630 to the Application Layer block 610.
The Routing block 630 is configured to transmit frames from the Application Layer block 610 to other communication devices of the network 650 for pipelined processing or to a computing server for post-processing.
The Routing block 630 is configured to route packets from the Packetizer block 620 to the output ports 641a or 641b, and from the input ports 641a or 641b to the Packetizer block 620 and/or to the output ports 641a or 641b.
The Routing block 630 is configured to receive and operate upon reception of a request “Send” 664 coming from the Scheduler block 660. The request “Send” 664 aims at informing the Routing block 630 of the next packet(s) to transmit.
It comprises a field “Data Type” 679, a field “Traffic Type” 680, a field “Cycle Index” 681, a field “Token Insertion” 682, a field “Address” 683 comprising the address where the data are stored in memory and a field “Local/Fwd” 684 indicating whether the packets are local, meaning from the Application Layer block 610 and Packetizer block 620, or from another communication device and have to be forwarded.
According to embodiments of the present invention, the routing decision, i.e. the decision to route a considered packet, is based on the fields “TrafficType” and “RemainingHop” of the header 625 inserted by the Packetizer block 620.
The Routing block 630 is configured to analyze these fields when receiving a packet from the Packetizer block 620 or from one of the input ports 641a/641b.
More specifically, when the “TrafficType” field of the packet indicates the collecting path, the Routing block 630 sends it to the output port 641a/641b which is connected, directly or through the other communication devices of the multi-sensor network, to the computing server. The Routing block 630 may choose the output port according to an internal routing table which indicates the output port to choose to reach a specific destination.
Otherwise, in the case of a packet received from the Packetizer block 620 and having a “TrafficType” field indicating the processing path, the Routing block 630 sends it to the port 641a/641b which is connected to the next communication device of the multi-sensor network.
In the case of a packet received from one of the input ports 641a/641b and having a “TrafficType” field indicating the processing path, the Routing block 630 first transmits the packet to the Packetizer block 620 and decrements the “RemainingHop” field by one. The packet is then forwarded to the next communication device of the multi-sensor network unless the “RemainingHop” field comprises a zero value. For each packet to forward, the Routing block 630 sends a message “Ready_to_Forward” 666 to the Scheduler block 660 to inform it that a packet is ready to be forwarded.
The “Ready_to_Forward” message 666 comprises similar fields to the “Start_Packetizer” trigger 662 and to the “Ready_to_Send” acknowledgment 663.
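The routing decision described in the preceding paragraphs could be sketched as follows (the helper names, the port objects and the encoding of the “Traffic Type” field are hypothetical, chosen only for illustration):

```python
COLLECTING, PROCESSING = 0, 1  # assumed encoding of the "Traffic Type" field 690

def route(packet, from_packetizer, server_port, next_device_port, deliver_up):
    """Route one packet based on its "TrafficType" and "RemainingHop" fields.

    `packet` is assumed to expose the header 625 fields as attributes;
    `server_port`/`next_device_port` stand for whichever of ports 641a/641b
    the internal routing table selects for each direction; `deliver_up`
    hands a packet to the Packetizer block 620.
    """
    if packet.traffic_type == COLLECTING:
        # Collecting path: towards the computing server.
        server_port.send(packet)
    elif from_packetizer:
        # Local processing-path packet: to the next device of the chain.
        next_device_port.send(packet)
    else:
        # Processing-path packet received from an input port: deliver it
        # locally, then forward unless the hop budget is exhausted.
        deliver_up(packet)
        packet.remaining_hop -= 1
        if packet.remaining_hop > 0:
            next_device_port.send(packet)
```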
The Routing block 630 is also configured to insert/modify/delete a MAC header 635.
As shown in Figure 6c, the MAC header 635 comprises a field “Dest Address” 696 indicating the destination address of the packet, a field “Src Address” 697 indicating the source address of the packet, and a field “Type/Length” 698 which indicates the type of Ethernet frame (e.g. 1588, IPv4,...).
The Routing block 630 is also configured to handle control frames (having a specific “DataType” field) which are directly delivered to the Scheduler block 660 using a control message “Ctrl” 665. For example, the control frames may comprise a Token or an acknowledgement.
As shown in Figure 6c, the “Ctrl” message 665 comprises a field “Ctrl payload” 685 and a field “Insertion Index” 686. For example, the field “Ctrl payload” 685 may comprise a token and field “Insertion Index” 686 may indicate the token insertion position.
The Communication Interface block 640 is configured to format and send/receive frames over the network 650.
The Scheduler block 660 is configured to control data transmission in interaction with the Application Layer block 610, the Packetizer block 620 and the Routing block 630.
Figure 7a illustrates steps performed by a communication device as shown in Figures 6a and 6b for transmitting local data (steps 701 to 707) or forwarding data from other communication devices (steps 711 to 714) over the processing path.
Transmission of local frames over the processing path is now described.
At step 701, the Scheduler block 660 receives a message EOP 661 (corresponding to the internal trigger 502 shown in Figure 5) from the Application Layer block 610. As already mentioned, this message provides information about the quantity of data produced for each data type, the corresponding cycle index and the address at which the data payload is stored in memory. The Scheduler block 660 generates and maintains an internal table in which information of the EOP messages is stored. More specifically, the Scheduler block 660 creates a new row in the table for each data type indicated in each new EOP message received. Each row comprises four columns: remaining bytes to send, cycle index, data type and current address.
At step 702, the Scheduler block 660 analyses the data type of the EOP message and if the data type is related to the processing path, the Scheduler block 660 sends a “Start_Packetizer” message 662 to the Packetizer block 620. As already mentioned, this message comprises data type to packetize, the number of bytes to packetize, the cycle index and the address at which the data payload is stored in memory.
At step 703, the Packetizer block 620 reads the data payload at the address indicated in the “Start_Packetizer” message 662 and formats data into packets (called “formatted packets”) having the number of bytes indicated in the “Start_Packetizer” message 662 from the Scheduler block 660. For each new formatted packet, the Packetizer block 620 sends back a “Ready_to_Send” message 663 to the Scheduler block 660.
At step 704, upon reception of a “Ready_to_Send” message 663, the Scheduler block 660 updates the row of its internal table which corresponds to the data type of the packet just formatted by the Packetizer block 620 (Remaining bytes to send and address of the first data). When the “Remaining bytes to send” field becomes equal to zero the row is deleted from the table.
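For illustration, the internal table of the Scheduler block 660 could be maintained as follows (a sketch assuming data are packetized front-to-back from a contiguous memory area; the method names are hypothetical):

```python
class SchedulerTable:
    """Tracks, per data type, the data still to packetize (steps 701 and 704)."""

    def __init__(self):
        self.rows = {}  # keyed by (cycle_index, data_type)

    def on_eop(self, data_type, cycle_index, nb_bytes, address):
        # Step 701: one new row per data type announced in the EOP message.
        self.rows[(cycle_index, data_type)] = {
            "remaining_bytes": nb_bytes,
            "cycle_index": cycle_index,
            "data_type": data_type,
            "current_address": address,
        }

    def on_ready_to_send(self, data_type, cycle_index, packet_size):
        # Step 704: account for the packet just formatted by the Packetizer.
        row = self.rows[(cycle_index, data_type)]
        row["remaining_bytes"] -= packet_size
        row["current_address"] += packet_size
        if row["remaining_bytes"] <= 0:
            del self.rows[(cycle_index, data_type)]  # all data packetized
```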
At step 705, for each “Ready_to_Send” message 663, the Scheduler block 660 delivers a “Send” message 664 to the Routing block 630. As already mentioned, this message comprises the “Traffic Type” (processing or collecting path) to be handled by the Routing block 630 and indicates whether the frames are from the Packetizer block 620 (local frames) or not (frames to forward). Thereby, in our example of transmission of local frames, the Scheduler block 660 sets the “TrafficType” field 680 to processing and the “Local/fwd” field 684 to local in the message 664.
At step 706, upon reception of the “Send” message 664 from the Scheduler block 660, the Routing block 630 reads the packet from the Packetizer block 620.
At step 707, the Routing block 630 transmits the packet to the output port 641a or 641b according to the “TrafficType” field of the header and an internal routing table.
Handling of frames to forward over the processing path is now described.
At step 711, the Routing block 630 receives a frame from one input port and analyses the “Traffic Type” field and the “Remaining Hop” field of the header in order to determine whether the packet has to be forwarded or only distributed to the Application Layer block 610 through the Packetizer block 620.
At step 712, if the frame has to be forwarded, the Routing block 630 sends a “Ready_to_Forward” message 666 to the Scheduler block 660.
At step 713, for each “Ready_to_Forward” message 666, the Scheduler block 660 delivers a “Send” message 664 to the Routing block 630. As already mentioned, this message indicates the “Traffic Type” (processing or collecting path) to be handled by the Routing block 630 and indicates whether the frames are from the Packetizer block 620 (local frames) or not (frames to forward). Thereby, in our example of forwarding of frames, the Scheduler block 660 sets the “TrafficType” field 680 to processing and the “Local/fwd” field 684 to forward in the message 664.
At step 714, upon reception of the “Send” message 664 from the Scheduler block 660, the Routing block 630 forwards the packet to the output port 641a or 641b according to the “TrafficType” field of the header and an internal routing table.
Figure 7b illustrates steps performed by a communication device as shown in Figures 6a and 6b for transmitting local data (steps 721 to 730) or forwarding data from other communication devices (steps 731 to 734) through the collecting path according to first embodiments.
Transmission of local frames through the collecting path is now described.
During an initialization phase (not shown), the communication device loads a predefined transmission sequence. This transmission sequence indicates the order of the different communication devices of the multi-sensor cascade network for the transmission, as well as the order of the data types for this transmission.
For instance, by default, the communication device may transmit all the data types in the same transmission opportunity and in rising order, i.e. output of the first fusion then output of the second fusion and so on.
Concerning the order of the different communication devices, by default, the transmission sequence may start with the communication device the closest to the computing server and end with the communication device the farthest from the computing server.
During the runtime, the computing server or control station may modify the transmission sequence by sending specific messages through the network 650 to each communication device. This new configuration is then taken into account at the end of the current transmission sequence.
During the initialization, the communication device also computes the position for the insertion of the token in the data flow. The computation of the position aims at minimizing the inter-frame gap during the transition between the transmission of local data and of forwarded data in a communication device, and at minimizing the buffering required to store the forwarded data when the transmission of local data is not finished.
In practice, the token is inserted at the penultimate packet (if the residence time in a communication device is negligible), so that the following communication device has time to receive the token, analyse it and start sending its own local data.
At step 721, the Scheduler block 660 receives a message EOP 661 (corresponding to the internal trigger 502 shown in Figure 5) from the Application Layer block 610. This step is similar to step 701 of Figure 7a.
At step 722, the Routing block 630 receives a control message and provides it to the Scheduler block 660. This control message comprises a token which allows the communication device to transmit its local data. The token indicates which data type has to be transmitted and for which cycle number. The format of the control message is further described with reference to Figure 10a.
At step 723, upon reception of the token, the Scheduler block 660 sends one or several “Start_Packetizer” messages 662 to the Packetizer block 620 in order to handle the different data types marked in the token (reference 1004 in the Figure 10a) for the cycle index set by the token in the field 1005. As already mentioned, this message indicates the data type(s) to packetize, the number of bytes to packetize and the address at which the data payload is stored in memory.
At step 724, the Packetizer block 620 reads the data payload at the address indicated in the “Start_Packetizer” message 662 and formats data into packets. For each new formatted packet, the Packetizer block 620 sends back a “Ready_to_Send” message 663 to the Scheduler block 660. This step is similar to step 703 of Figure 7a.
At step 725, upon reception of a “Ready_to_Send” message, the Scheduler block 660 updates the row of its internal table which corresponds to the data type of the packet just formatted by the Packetizer block 620 (remaining bytes to send and address of the first data). When the “Remaining Bytes to Send” field becomes equal to zero, the row is deleted from the table.
In this step, the Scheduler block 660 also checks whether the packet belongs to the last data type indicated in the token. In this case, the Scheduler block 660 has to send the token to the next communication device of the transmission sequence or it has to transmit the acknowledgement of the transmission sequence if the communication device is the last one of the transmission sequence.
At step 726, for each “Ready_to_Send” message, the Scheduler block 660 delivers a “Send” message 664 to the Routing block 630. As already mentioned, this message comprises the “Traffic Type” (processing or collecting path) to be handled by the Routing block 630 and indicates whether the frames are from the Packetizer block 620 (local frames) or not (frames to forward). Thereby, in our example of transmission of local frames, the Scheduler block 660 sets the “TrafficType” field 680 to collecting and the “Local/fwd” field 684 to local in the message 664.
Moreover, when in step 725, the Scheduler block 660 has detected that the token has to be sent to the next communication device of the transmission sequence, the field “Token Insertion” 682 is set to “true” in the “Send” message 664 at step 726. At the same time, the Scheduler block 660 sends a control message “Ctrl” 665 which comprises the token 685 and the token insertion position 686 which has been computed during the initialization phase.
At step 727, upon reception of the “Send” message 664 from the Scheduler block 660, the Routing block 630 reads the packet from the Packetizer block 620.
At step 728, the Routing block 630 transmits the packet to the output port 641a or 641b according to the “TrafficType” field 679 and an internal routing table.
At step 729, when the “Token Insertion” field 682 of the “Send” message 664 is equal to “true”, the Routing block 630 reads the control message 665 comprising the token and the position at which the token should be inserted into the data flow.
At step 730, when the Routing block 630 sends the packet corresponding to the token insertion position, the Routing block 630 also transmits the token to the output port 641a or 641b according to the destination of the token.
Handling of frames to forward through the collecting path is now described.
At step 731, the Routing block 630 receives a frame from one input port.
At step 732, the Routing block 630 sends a “Ready_to_Forward” message 666 to the Scheduler block 660.
At step 733, for each “Ready_to_Forward” message 666, if the communication device is not the token owner, the Scheduler block 660 delivers a “Send” message 664 to the Routing block 630. In our example of forwarding of frames through the collecting path, the Scheduler block 660 sets the “TrafficType” field 680 to collecting and the “Local/fwd” field 684 to forward in the “Send” message 664. When the communication device has the token, the forwarding of frames on the collecting path is delayed until local transmission ends.
At step 734, upon reception of the “Send” message 664 from the Scheduler block 660, the Routing block 630 forwards the packet to the output port 641a or 641b according to the “TrafficType” field of the header and an internal routing table.
Figure 8a illustrates general steps performed by a communication device for transmitting data over the collecting path according to embodiments. Figure 8b illustrates steps performed by a communication device for transmitting local data over the collecting path, Figure 8c illustrates steps performed by a communication device for forwarding data from other communication devices over the collecting path and Figure 8d illustrates an alternative embodiment of Figure 8c.
The embodiments described with reference to Figures 8a to 8c differ from the embodiments described with reference to Figure 7b in that only one “Ready_to_Send” message, one “Send” message and one “Ready_to_Forward” message are sent per data type, while in the first embodiments these messages are sent for each packet, whatever its data type.
As shown in Figure 8a, the algorithm starts with an initialization step 800 during which the transmission sequence is retrieved from the computing server, or the default transmission sequence is retrieved from an internal memory. This step is similar to step 501 of Figure 5. The format of the transmission sequence is described hereafter with reference to Figure 10b.
As already mentioned, this transmission sequence indicates the order of the different communication devices of the multi-sensor cascade network for the transmission, as well as the order of the data type(s) for this transmission.
For instance, by default, the communication device may transmit all the data types in the same transmission opportunity and in the rising order, i.e. output of the first fusion then output of the second fusion and so on.
Concerning the order of the different communication devices, by default, the transmission sequence may start with the communication device the closest to the computing server and end with the communication device the farthest from the computing server.
At step 801, the next state of the communication device is selected. This state may be either “local” or “forward”. Each communication device located between the initiator node (first communication device of the transmission sequence) and the computing server sets its next state to “forward”. The other nodes including the initiator node set their next state to “local”.
At step 802, it is checked whether the next state is “local” or “forward”. If the next state is “forward”, the next step is step 851, otherwise the next step is step 811.
As shown in Figure 8b, when the next state is “local”, the communication device waits for the token (step 811). An exemplary format for the token is described hereafter with reference to Figure 10a.
In the case where the communication device is the first of the transmission sequence (initiator sensor), this waiting step is skipped because the initiator sensor owns the token by default at the beginning of the transmission sequence.
At step 812, the communication device receives the token. It analyses the content of the token and sends a “Start_Packetizer” message (step 813) to trigger operation of the Packetizer block 620.
If the token specifies that several data types have to be sent for the current transmission sequence, several “Start_Packetizer” messages are sent to the Packetizer block 620, one per data type.
The “Start_Packetizer” message also specifies the cycle index of the data to packetize. This allows the control of the data presentation at the computing server.
The token is stored after modification of its “source” field (reference 1001 on Figure 10a) and its “destination” field (reference 1002 on Figure 10a) for future use.
The initiator communication device builds the token based on the transmission sequence information stored in memory and from the previous transmission sequence. For instance, the cycle index is inferred from the cycle index value of the previous transmission sequence incremented by the value of the “Cycle Index increment” field (reference 1015 on Figure 10b).
The communication device then waits for a “Ready_to_Send” message from the Packetizer block 620. As already mentioned, the “Ready_to_Send” message is used by the Packetizer block 620 for signalling the formatting of a packet and signalling that it is ready for transmission.
At step 814, the communication device receives the “Ready_to_Send” message and analyses its content at step 815. More specifically, it checks the “Data Type” field and accordingly whether this data type is for the collecting path or for the processing path (test 816).
If the data type corresponds to the processing path, the communication device sends a “Send” message 664 to the Routing block 630 at step 817. The “Send” message 664 comprises a “Traffic Type” field 680 set to processing and a “Local/fwd” field 684 set to local. Then, the algorithm goes back to step 814.
If the data type corresponds to the collecting path, the communication device checks whether the data type corresponds to the last data type for the current transmission sequence (test 818).
If the answer is no, a “Send” message is transmitted to the Routing block 630 at step 819. This Send message 664 comprises a “Traffic Type” field 680 set to collecting and a “Local/fwd” field 684 set to local. Next, the algorithm goes back to step 814.
If the result of the test 818 is positive, the number of packets produced by the Packetizer block 620 for the last collecting data type is retrieved at step 820.
The number of packets is either computed by the Scheduler block 660 based on the number of bytes 676 of the “Start_Packetizer” message for this data type, or received from the Packetizer block 620 through an extra field in the “Ready_to_Send” message.
At step 821, the communication device computes the position for the token insertion based on the number of packets retrieved at step 820. For instance, this position is the number of packets minus one if the token has to be inserted at the penultimate position.
Next, the communication device checks whether the communication device number corresponds to the last communication device of the transmission sequence (test 822).
If the test 822 is positive, the communication device modifies, at step 823, the message type (reference 1006 in Figure 10a) in the token stored at step 812 from “token” to “ack”. Then, the communication device goes to step 824.
If the test 822 is negative, the communication device goes directly to step 824.
At step 824, the communication device sends a control message 665 to the Routing block 630. The control message 665 comprises the token stored at step 812 in the “Ctrl payload” field 685 and the result of the step 821 in the “Insertion Index” field 686.
At step 825, the communication device sends a “Send” message to the Routing block 630 with the “Token Insertion” flag set to “true”.
At step 826, the communication device sets the next state to “forward” before going back to step 810.
According to an alternative embodiment without acknowledgement, steps 822 and 823 may be skipped. The end of the transmission sequence is thus based on the header analysis. This part is described hereafter with reference to Figure 8d.
As shown in Figure 8c, when the next state is “forward”, at step 851, the communication device waits for a “Ready_to_Forward” message, a Token or an acknowledgment message. As already mentioned, a “Ready_to_Forward” message is used by the Routing block 630 to signal that a packet to forward has been received and is ready for transmission.
In case of reception of a “Ready_to_Forward” message at step 852, the communication device analyses its content at step 853. More specifically, the communication device checks the “Data Type” field and accordingly whether this data type is for the collecting path or for the processing path (test 854).
If the data type corresponds to the processing path, then the communication device sends a “Send” message to the Routing block 630 at step 855. This “Send” message 664 comprises a “Traffic type” field 680 set to processing and a “Local/fwd” field 684 set to forward. Then, the communication device goes back to step 851.
If the data type corresponds to the collecting path, then the communication device sends a “Send” message to the Routing block 630 at step 856. This “Send” message 664 comprises a “Traffic type” field 680 set to collecting and a “Local/fwd” field 684 set to forward. Then, the communication device goes back to step 851.
In case of reception of a token at step 857, the communication device sets the next state to “local” at step 859 and goes back to step 810.
In case of reception of an acknowledgment at step 858, the communication device sets the next state to “local” at step 859 and goes back to step 810.
Figure 8d illustrates an alternative embodiment of Figure 8c, in which steps 822 and 823 of Figure 8b are skipped (without acknowledgement).
In this alternative embodiment, the “Ready_to_Forward” message received at step 852 comprises an extra field: the sensor number extracted from the header “Sensor Index” field 687 of the received packet to forward.
Steps 851, 852, 853, 854, 855 and 856 are similar to those of Figure 8c. However, after step 856, the communication device reads the data type and the sensor number from the “Ready_to_Forward” message at step 860.
Based on those values, the communication device checks whether the “Ready_to_Forward” message corresponds to the last data type for the current transmission sequence of the last communication device of the transmission sequence or the last step of the transmission sequence (test 861).
If the test 861 is positive, the communication device sets the next state to “local” at step 863 and goes back to step 810.
If the test 861 is negative, the communication device checks whether the “Ready_to_Forward” message corresponds to the last data type of the current transmission sequence for the communication device just before the local communication device in the transmission sequence, i.e. whether the local communication device is the next to transmit in the transmission sequence (test 862).
If the test 862 is positive, the communication device sets the next state to “local” at step 863 and goes back to step 810.
If the test 862 is negative, the communication device goes back to step 852.
Figure 9 illustrates a possible mechanism for insertion of the token (or acknowledgement) in the data flow. This mechanism is typically performed by the communication device 600. Typically, this mechanism is carried out at least partially by the Routing block 630.
This mechanism aims at ensuring that the instant of the insertion is complied with whatever the direction of transmission (through the collecting path or through the processing path), and thus that the computed token insertion position is respected. It thereby makes it possible to precisely control the inter-frame gap during the transition from local to forward transmission (between the last packet of the local transmission and the first packet to forward) and to minimize the buffering required to store the packets to forward before their transmission.
According to embodiments in which the captured data are packetized and stored in a transmission buffer (typically the buffer of port 641b) of the sensor, a packet counter may be used for inserting the token at the right time in the data flow.
This packet counter may be initialized based on the number of packets stored in the transmission buffer and may be decremented each time a packet stored in the transmission buffer is transmitted towards the processing device (computing server). The sensor may insert the token (XON) in a second buffer (typically the buffer of port 641a) so that the token is transmitted when the packet counter has elapsed. An alternative to the packet counter could be a timer based on the number of packets and the physical link speed.
For the last sensor indicated in the transmission sequence, instead of the token, an acknowledgment indicative of the transmission of the last packet authorized by the transmission sequence may be inserted in the transmission buffer (typically the buffer of port 641b) at a predetermined position, typically after the last packet.
An exemplary implementation is now described.
The flow chart starts with an initialization step 901.
At step 902, the Routing block 630 receives a token insertion request i.e. a “Send” message 664 that comprises a “Token Insertion” field 682 set to “true”.
At step 903, the communication device retrieves the position at which the token has to be inserted in the control message 665, from the “Insertion Index” field.
At step 904, the communication device sends the packets to the port 641b (corresponding to the collecting path in our example) and increments an internal counter to track the number of packets sent.
Next, it is checked whether the counter value reaches the position to insert the token (test 905).
When this is the case, the communication device goes to step 906 in which it determines the direction of the destination of the token. For instance, the communication devices having a number higher than the local communication device are on the processing direction (port 641a) while the communication devices having a number lower than the local communication device are on the collecting direction (port 641b). More generally, during initialization step 901, a learning phase may be performed to learn which destination is on which port.
If the next communication device of the transmission sequence is in the collecting direction, the communication device inserts the token message in the transmission FIFO of the port 641b at step 907 and goes back to step 902.
If the next communication device of the transmission sequence is in the processing direction, the communication device reads, at step 908, the number of elements to transmit that are in the transmission FIFO of the port 641b.
At step 909, the communication device computes the time required to transmit this quantity of frames as a function of the link speed.
At step 910, the communication device schedules an event for sending the token in the processing direction (port 641a).
When the timer has elapsed at step 911, the communication device inserts the token in the first position of the transmission FIFO corresponding to the processing path for an immediate transmission (step 912).
The communication device then goes back to step 902.
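The mechanism of Figure 9 could be sketched as follows (the FIFOs are modelled as plain Python lists and packets as byte strings; these representations, like the helper names, are assumptions made for illustration only):

```python
import threading

def send_with_token(packets, token, insertion_index, collecting_fifo,
                    processing_fifo, next_is_on_collecting_path,
                    link_speed_bps):
    """Send `packets` on the collecting port and release `token` so that it
    leaves the device right after the packet at `insertion_index`.
    """
    for count, pkt in enumerate(packets, start=1):
        collecting_fifo.append(pkt)                    # step 904
        if count != insertion_index:                   # test 905
            continue
        if next_is_on_collecting_path:                 # steps 906-907
            collecting_fifo.append(token)              # token follows this packet
        else:
            # Steps 908-911: the token must leave on the processing port, so
            # release it only once the frames queued on the collecting port
            # have had time to be transmitted at the physical link speed.
            pending_bits = 8 * sum(len(p) for p in collecting_fifo)
            delay_s = pending_bits / link_speed_bps    # step 909
            threading.Timer(
                delay_s, lambda: processing_fifo.insert(0, token)).start()
```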
Figure 10a illustrates an example of format for a Token XON used for the transmission control according to embodiments of the present invention.
In this example, the packet comprises six fields.
The “Source” field 1001 indicates the sensor which previously had the token and the “Destination” field 1002 indicates the addressee of the token.
These fields allow detection of a problem with a sensor, for instance a sensor which has broken down and whose transmission part is in bypass mode. In this case, the communication interface 640 detects that the sensor has failed (for instance with a watchdog mechanism) and, with some relay, connects the port 641a directly to the port 641b. The communication device thereby acts as a cable. Consequently, the sensor next to the sensor in bypass mode detects that the token has gone beyond its current destination, and the sensor which detects the problem may take action to correct it. For instance, either the sensor having detected the problem warns the source of the token in order to make it produce a token with a new addressee, or the new destination is chosen directly by the sensor which has detected the problem.
The “TS id” field 1003 provides the identifier of the transmission sequence. It is useful since several transmission sequences may coexist in the same network. For instance, several transmission sequences may be stored by a factory setting and this identifier thus allows the choice of the appropriate one. This field also enables several transmission sequences to be used at the same time. For instance, one transmission sequence is for a first set of sensors while another transmission sequence is for a second set of sensors.
The “Data types” field 1004 is used to enable the transmission only for a specific type of data. For instance, in the example of Figure 4, the token enables only the first fusion output to be transmitted, then only the second fusion output and so on. In a different way, the token could instead enable all data types (first, second, third and fourth fusion output) to be transmitted at the same time.
The “Cycle number” field 1005 indicates the cycle number (i.e. the sensing period number). It may be supplied by the application through an EOP message. This field, in combination with the “Data types” field 1004, specifies with accuracy which data are targeted by the token and have to be transmitted.
The “Message type” field 1006 defines the type of message, e.g. a token, an acknowledgement or a drop message. The token is for a specific sensor defined by the field 1002 and the acknowledgement is a broadcast message which is addressed to all the sensors. The drop message may be a broadcast message addressed to all the sensors or a message addressed to a specific sensor. It informs the node(s) that part of the data should be dropped from their internal memory prior to transmission. The data to drop are identified based on the “Data types” and “Cycle number” fields.
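By way of illustration, the token of Figure 10a could be represented as follows (a sketch; the Python types and the bitmask encoding of the “Data types” field are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Token:
    source: int        # 1001: sensor that previously held the token
    destination: int   # 1002: addressee of the token
    ts_id: int         # 1003: identifier of the transmission sequence
    data_types: int    # 1004: bitmask of the data types enabled for transmission
    cycle_number: int  # 1005: sensing period targeted by the token
    message_type: str  # 1006: "token", "ack" or "drop"
```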
Figure 10b illustrates an example of format for a transmission sequence.
This exemplary format may be used for the transmission sequence sent by a control server as well as for a default transmission sequence stored in memory.
The “TS id” field 1010 is similar to the field 1003 of Figure 10a.
The “Nb step of TS” field 1011 comprises the number of steps for this transmission sequence. For instance, if the transmission sequence enables all data types to be transmitted, the transmission sequence has only one step. Otherwise, if the transmission sequence enables only one data type to be transmitted at a time, the transmission sequence has four steps (in the example of Figures 2 and 3).
The “Step order” field 1012 defines the step order of the transmission sequence and defines, for each step, the data type(s) to enable. This field is duplicated according to the number of steps indicated in the field 1011. For instance, in the example of Figures 2 and 3 in which there are 4 data types, the “Step order” field 1012 is a 4-bit field. The bit of a data type is set to 1 if the data type has to be transmitted in this step, otherwise the bit is set to 0.
The “Nb sensor of TS” field 1013 indicates the number of sensors targeted by the transmission sequence.
The “Sensor order” field 1014 gives the sensor order of the transmission sequence. This field is duplicated according to the number of sensors indicated in the field 1013. The first one is the initiator sensor which owns the token at the system start up (or initialisation) and the last sensor has to handle the acknowledgement.
The “Cycle index increment” field 1015 indicates the increment of the cycle index at each new transmission sequence. This is useful for the initiator sensor which has to infer the cycle index of the next transmission sequence according to the previous one. By default, this value is set to 1.
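An illustrative representation of this format is sketched below (the list-based encoding of the duplicated fields 1012 and 1014 is an assumption; the example instantiates the four-step, one-data-type-per-step case described above):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TransmissionSequence:
    ts_id: int               # 1010: sequence identifier
    step_order: List[int]    # 1012: one data-type bitmask per step;
                             #       len(step_order) gives field 1011
    sensor_order: List[int]  # 1014: sensor numbers, initiator first, the
                             #       last one handles the acknowledgement;
                             #       len(sensor_order) gives field 1013
    cycle_index_increment: int = 1  # 1015: cycle index step between sequences

# Four data types (as in Figures 2 and 3), one type enabled per step:
sequence = TransmissionSequence(ts_id=0,
                                step_order=[0b0001, 0b0010, 0b0100, 0b1000],
                                sensor_order=[1, 2, 3, 4])
```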
Figures 11a and 11b illustrate a dropping mechanism based on latency monitoring, according to embodiments of the present invention.
According to embodiments, a sensor can compute a time elapsed between two sensing periods and reduce the computed time by a predetermined processing time. The result corresponds to the drift value in the sensor. The sensor can then drop part of the data it stores depending on the drift value and for instance the capacity of its buffer. For instance, when data captured during a given sensing period have to be dropped due to a too large drift value in one of the sensors, these may be dropped also in all the other sensors concerned by the current transmission sequence. For example, all communication devices will drop data relative to the same frame.
In another embodiment, the decision of dropping is taken by a specific node, typically the initiator node or the control node, and a drop message is sent to the other nodes (e.g. the drop message defined in the description of Figure 10a).
An example of implementation is now described.
As shown in Figure 11a, during an initialization step 1101, an initial gap is obtained. It corresponds to the time for processing all the media data generated during one sensing period.
In a first example, it is computed according to the maximum duration set for the different processing stages. In a second example, the initial gap is merely retrieved at the first end of processing of the last processing stage. For instance, in the example of Figure 2, the initial gap is equal to six sensing periods.
Next, the communication device waits for a new transmission sequence 1102. A new transmission sequence begins with the acknowledgement sent by the last communication device to the sequence initiator communication device.
At step 1103, the communication device retrieves the reference cycle of the new transmission sequence. This information is embedded in the token message or may be inferred from the previous cycle value by using the “Cycle index increment” field 1015 of the transmission sequence shown in Figure 10b, which defines the cycle index increment between successive transmission sequences. The cycle index retrieved in this step represents the oldest data.
At step 1104, the communication device computes the gap between the current capture cycle and the transmission cycle retrieved at step 1103. The current capture cycle corresponds to the cycle index of the current capture performed by the communication device.
At step 1105, a difference is calculated between this value of the gap computed at step 1104 and the initial gap obtained at step 1101.
At step 1106, the communication device compares the difference to a threshold. The threshold may be equal to the maximum acceptable drift to keep the multi-sensor network within the latency constraint. In a variant, the threshold may be set according to memory capacities (buffer size) in the communication devices.
If the result of the test 1106 is positive, meaning that the current gap is acceptable, the algorithm goes back to step 1102.
Otherwise, if the test is negative (too large gap), some data are dropped at step 1107. Advantageously, since data are dropped before transmission, no bandwidth is wasted with obsolete data.
After data deletion, the algorithm goes back to step 1102.
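The latency monitoring loop of Figure 11a could be sketched as follows (the `device` object and its methods are hypothetical placeholders for the cycle counters and buffer described above):

```python
def monitor_latency(initial_gap, threshold, device):
    """Drift-based dropping of Figure 11a (illustrative sketch)."""
    while True:
        device.wait_for_new_transmission_sequence()             # step 1102
        reference_cycle = device.reference_cycle()              # step 1103
        gap = device.current_capture_cycle() - reference_cycle  # step 1104
        drift = gap - initial_gap                               # step 1105
        if drift > threshold:                                   # test 1106
            # Too large a drift: drop the oldest data before transmission,
            # so that no bandwidth is wasted on obsolete data (step 1107).
            device.drop_oldest_data()
```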
As shown in Figure 11b, each communication device has detected the acknowledgement (implicit or explicit) and has executed steps 1102, 1103, 1104 and 1105. The difference computed at step 1105 is equal to 4. The threshold value used at step 1106 is 3. Hence, the result of the test 1106 is negative and the communication devices have to drop (step 1107) the oldest produced data 117.
Figure 12 is a chronogram illustrating message exchanges between the communication devices (sensors) and the processing device (computing server) according to embodiments of the invention.
In this example, four sensors are represented: sensor 1, sensor 2, sensor i and sensor n. Sensors 1 and 2 are the closest to the server, sensor i is an intermediate sensor and sensor n is the last sensor of the daisy-chain. The present invention is not limited to this example.
After the initialization phase (not shown) during which the system synchronizes the different sensors of the multi-sensor network thanks to a synchronization method (for instance Precision Time Protocol), the server sends a “StartCapture” command 1201 to the different sensors belonging to the multi-sensor network.
This command 1201 comprises a “Ts” parameter 1202 which is a target time at which the sensors are expected to start the capture, and a “ΔTs” parameter 1203 which is the initial processing duration, i.e. the duration before the availability of the first processed data.
During the initial processing (ΔTs), sensors exchange data for collaborative processing in order to perform the different fusion processing stages (as depicted in Figures 2, 3a and 3b). The initial processing also corresponds to the initial waiting time for the first sensor of the transmission sequence to start its transmission of data towards the server.
The “StartCapture” command 1201 also comprises a “δTx” parameter which represents the maximum time delay tolerated by the server for starting data transmission (also called jitter) by each sensor, e.g. T1 for the sensor 1, T2 for the sensor 2 and so on.
Upon reception of the “StartCapture” command 1201, the sensors wait up to the target time Ts to start the capture (“starting event”).
After the target time Ts, the first sensor of the transmission sequence waits for the ΔTs duration 1203 before starting the transmission of its data towards the computing server.
At T1, the sensor 1 sends its data 1224 to the computing server. The first packet of the data frame 1224 comprises information useful for the server for controlling the transmission sequence.
For instance, this first packet indicates at least a parameter “δTxn” computed by the sensor. This parameter corresponds to the time delay between an expected start time and an actual start time of transmission of the captured data.
For the sensor 1, the δTx1 measured for the data 1224 is equal to 0 because this is the first transmission, so there is no delay compared to the predefined transmission time T1. The subsequent δTxn values measured by the sensors and sent to the server will be synchronized with the reception of the token 1210 and will consequently depend on the quantity of data to transmit by the previous sensors. The packet may also contain information about the quality of the data and its complexity (e.g. compression rate).
At the end of its transmission (more generally, at the time defined by the token insertion algorithm), the sensor 1 sends the token 1210 to the sensor 2. Upon reception of the token 1210 coming from the sensor 1, the sensor 2 calculates the time delay δTx2 of the transmission of its data 1225 towards the computing server compared to the predefined transmission time T2 1205.
The predefined transmission time T2 is for instance determined according to the following formula, where the sensor index is 2:

T_sensor_index = T1 + (sensor_index − 1) × (capture period / number of sensors)
The computed delay is compared to the δTx supplied as parameter in the “StartCapture” command 1201.
If the computed delay is higher than the δTx, the sensor sends the token to the following sensor without transmitting its own data. In other words, the sensor drops some of the captured data from its buffer when the computed time delay is above the maximum time delay. Advantageously, since data are dropped before transmission, no bandwidth is wasted with obsolete data. In a variant embodiment, the sensor drops its data and sends only a packet including the δTx, the quality parameter and so on. These information items will be used by the server to update the transmission sequence.
Thereby, the system remains within the limit fixed by the server.
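The check performed by a sensor upon token reception could be sketched as follows (a hypothetical function; T1, the capture period and the maximum delay δTx are assumed to be known from the “StartCapture” command):

```python
def on_token_received(now, sensor_index, t1, capture_period, nb_sensors,
                      max_delay):
    """Drop-or-transmit decision upon token reception (illustrative sketch)."""
    # Predefined transmission time for this sensor (formula above):
    expected_start = t1 + (sensor_index - 1) * capture_period / nb_sensors
    delta_tx = now - expected_start  # measured start-time delay (jitter)
    if delta_tx > max_delay:
        # Over budget: drop the obsolete data and pass the token on, so that
        # the system stays within the limit fixed by the server.
        return "drop_and_forward_token", delta_tx
    return "transmit", delta_tx
```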
When sensor 2 sends its data, sensor 1 forwards them to the computing server.
At the end of its data transmission, the sensor 2 transmits the token 1210 to the sensor 3 which starts its data transmission to the server.
Thereby, the token controls the transmission sequence of the different sensors.
Next, around Ti, the sensor i receives the token 1210, computes the delay δTxi and sends its data 1226.
The transmission sequence corresponding to the capture period 0 finishes by the transmission of data 1227 of the last sensor of the transmission sequence (sensor n in this example).
When the sensor n transmits its data, each sensor of the multi-sensor network forwards these data to the next sensor until the data reaches the computing server.
Afterwards, the sensor 1 begins a new transmission sequence corresponding to the capture period 1.
This new transmission starts when the sensor 1 detects the end flag (or trailer packet or acknowledgement) of the data coming from the sensor n.
Upon detection of the end flag, the sensor 1 computes the δTx1 1214 and sends its data 1234 relative to the capture period 1.
To guarantee the sustainability of the application, the transmission duration of the data corresponding to one capture period should on average be equal to or less than the capture period.
The server or a control station may monitor the change in the transmission accuracy through the analysis of the δTxn values received from the sensors of the multi-sensor network.
Based on the result of this monitoring, the server or control station may send a command message (e.g. 1250) to specify a new δTx with more or less constraints for the transmission sequence or to specify a new transmission sequence which disables some sensors. The sensors disabled by the control station may be selected according to the quality and/or complexity of their data. So the server or control station decreases the number of sensors involved in the transmission sequence by keeping the sensors that are more important from an application point of view.
Figure 13 is a flow chart comprising steps carried out by the processing device according to embodiments of the invention.
In a particular embodiment, the edge sensor may act as a control station.
The algorithm starts with an initialization step 1301. During this initialization, the server computes the Ts parameter (target time), the ΔTs and the δTx.
Next, at step 1302, the server sends a “StartCapture” command to the different sensors belonging to the multi-sensor network.
After the beginning of the data transmission by the multi-sensor network, the server retrieves information from each sensor at step 1303. The information typically indicates the transmission time delay δTx, the data quality, and the data complexity.
Next, at step 1304, the server computes the average time delay based on the transmission time delays received from the different sensors.
At step 1305, the system compares the average time delay computed at step 1304 to a threshold.
If the test 1305 is positive, the process goes back to step 1303.
Otherwise, if the test is negative, the server sends a new request to the multi-sensor network at step 1306.
This new request may be a modification of the maximum time delay δTx or a modification of the transmission sequence to include or exclude some sensors from the multi-sensor network.
The choice of the sensors to keep is for instance based on the data quality and/or the data complexity. Even if a sensor is excluded from the transmission sequence, it continues to send its information (δTx, data quality and data complexity) to the server so that the relevance of the sensor can be re-evaluated at step 1307.
Next, at step 1308, it is checked whether the transmission sequence has to be modified in order to integrate new sensors more relevant for the application.
If yes, the modified transmission sequence is sent at step 1306.
If no, the process loops back to step 1303.
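One iteration of this server-side monitoring could be sketched as follows (the report structure and the helper names are assumptions; reports are modelled as dictionaries):

```python
def monitor_step(sensors, threshold, build_request):
    """One monitoring iteration of Figure 13 (illustrative sketch)."""
    # Step 1303: per-sensor reports (time delay, data quality, complexity).
    reports = [s.latest_report() for s in sensors]
    # Step 1304: average transmission time delay over the network.
    avg_delay = sum(r["delay"] for r in reports) / len(reports)
    # Test 1305: if within budget, nothing to do until the next reports.
    if avg_delay <= threshold:
        return None
    # Steps 1306-1308: otherwise relax the delay budget or modify the
    # transmission sequence (include/exclude sensors based on the quality
    # and complexity of their data).
    return build_request(reports)
```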
Figure 14 is a block diagram schematically illustrating an exemplary architecture for a processing device according to embodiments of the invention.
As illustrated, the processing device 1400 comprises a communication interface block 1440, a Routing block 1430, a Controller block 1460, a Packetizer block 1420 and an Application Layer block 1410. The communication interface block 1440 comprises one port 1441. This block has the task of formatting and sending or receiving frames over the network. The server mainly receives the data coming from the sensors of the multi-sensor network.
The Routing block 1430 allows receiving frames from the sensors for post-processing. The Routing block 1430 is in charge of routing packets to the Packetizer block 1420.
The Routing block is also in charge of inserting/deleting the MAC header. The MAC header is made up of three fields: Destination address 696, Source address 697 and Type/Length 698 which indicates the type of the Ethernet frame (1588, IPv6, etc.).
The Routing block also handles control frames.
In reception, the control frames (specific DataType) are directly delivered to the Controller block through the Ctrl message 1462. In transmission, the Routing block sends a Ctrl message 1461 from the Controller block to the sensors of the multi-sensor network (e.g. broadcast MAC address is used).
The Packetizer block 1420 is in charge of unpacketizing the packets coming from the Routing block 1430 and of delivering the payload of received packets to the Application Layer block 1410.
The Application Layer block 1410 is tasked with storing the data coming from the multi-sensor network and with performing the post-processing stage.
The Controller block 1460 is tasked with analyzing the information coming from the different sensors (see Figure 13) and controlling the parameters of the data transmission by creating or modifying the set of sensors involved in the transmission sequence.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention not being restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in putting into practice (i.e. performing) the claimed invention, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.
Any reference signs in the claims should not be construed as limiting the scope of the invention.

Claims (27)

1. A method for transmitting media data from one communication device towards a processing device via a network, the method comprising the following steps:
obtaining media data captured by a sensor device during a sensing period; and in response to detection of a first authorization associated with the sensing period:
transmitting at least part of the obtained media data towards the processing device; and transmitting, to another communication device which is connected to the network, an item of information characterizing a second authorization to transmit media data obtained by said another communication device.
2. The method of claim 1, wherein the media data comprise at least one of: image data generated by an image sensor device and audio data generated by an audio sensor device.
3. The method of claim 1 or 2, wherein the first or second authorization is indicative of the sensing period during which the obtained media data to be transmitted by said another communication device have been captured.
4. The method of any one of claims 1 to 3, comprising a step of initialization during which a transmission sequence is retrieved, the transmission sequence indicating a transmission order of media data by communication devices of the network.
5. The method of claim 4, wherein the detection of the first authorization comprises determining that the one communication device is the first communication device in the transmission order indicated in the transmission sequence and detecting a starting event.
6. The method of any one of claims 4 or 5, wherein the first communication device in the transmission order indicated in the transmission sequence is the processing device.
7. The method of any one of claims 1 to 6, wherein the detection of the first authorization comprises determining that all media data captured during a given sensing period and obtained by communication devices, have been transmitted towards the processing device.
8. The method of claim 7, wherein the first authorization comprises an acknowledgment transmitted by a communication device once all media data captured during the given sensing period and obtained by communication devices have been transmitted towards the processing device.
9. The method of any one of claims 1 to 8, wherein at least one of the first and the second authorizations is comprised within a specific message explicitly identifying media data to be transmitted.
10. The method of any one of claims 1 to 9, wherein the second authorization comprises an acknowledgment indicating that the step of transmitting at least part of the obtained media data towards the processing device in response to detection of the first authorization, is complete.
11. The method of any one of claims 1 to 8, wherein the item of information characterizing the second authorization corresponds to the last media data of the at least part of the obtained media data transmitted towards the processing device in response to detection of the first authorization.
12. The method of any one of claims 1 to 11, further comprising the following steps:
computing a time elapsed between a second sensing period and the sensing period during which the obtained media data have been captured;
reducing the computed time by a predetermined processing time to obtain a drift value;
comparing the drift value with a predefined threshold;
dropping at least part of media data captured during the oldest sensing period from among said sensing period and the second sensing period from a buffer of the one communication device when the drift value is above the threshold.
13. The method of claim 12, further comprising a step of sending a drop message to at least one other communication device of the network, indicating that media data captured during said oldest sensing period by said at least one other communication device have to be dropped from its buffer before any transmission.
14. The method of any one of claims 1 to 13, further comprising the following steps:
receiving a message comprising an indication of a maximum time delay for starting media data transmission;
computing a time delay between an expected start time and an actual start time of transmission of obtained media data by the one communication device;
comparing the computed time delay with the maximum time delay;
dropping at least part of the obtained media data from a buffer of the one communication device when the computed time delay is above the maximum time delay.
15. The method of claim 14, wherein the message comprising an indication of a maximum time delay is received from the processing device.
16. The method of claim 14 or 15, further comprising a step of transmitting the computed time delay towards the processing device.
17. The method of any one of claims 1 to 16, wherein the communication devices of the plurality are connected in cascade.
18. The method of claim 17, further comprising a step of packetizing the at least part of obtained media data before transmission and storing the packetized media data in a transmission buffer of the one communication device, wherein the step of transmitting an item of information characterizing a second authorization comprises inserting an acknowledgment indicative of the transmission of a last packet of the media data captured during a given sensing period and obtained by the one communication device, in the transmission buffer, at a predetermined position.
19. The method of claim 17, further comprising a step of packetizing the at least part of obtained media data before transmission and storing the packetized media data in a first transmission buffer of the one communication device, wherein the step of transmitting an item of information characterizing a second authorization comprises inserting a specific message in a second transmission buffer of the one communication device so that the specific message is transmitted when a packet counter has elapsed, the packet counter being initialized based on the number of packets stored in the first transmission buffer and being decremented each time a packet stored in the first transmission buffer is transmitted towards the processing device.
20. A communication device for transmitting media data from the communication device towards a processing device via a network, the communication device being configured to perform the following steps:
obtaining media data captured by a sensor device during a sensing period; and in response to detection of a first authorization associated with the sensing period:
transmitting at least part of the obtained media data towards the processing device;
transmitting, to another communication device which is connected to the network, an item of information characterizing a second authorization to transmit media data obtained by said another communication device.
21. The communication device of claim 20, wherein the communication device is configured to retrieve a transmission sequence indicating a transmission order of media data by communication devices of the network, and wherein the first communication device in the transmission order indicated in the transmission sequence is the processing device.
22. A network comprising a plurality of communication devices according to claim 20 or 21 and a processing device configured to control the transmission of media data over the network.
23. The network according to claim 22, wherein the processing device is configured to perform the following steps:
transmitting a message to the communication devices, the message comprising an indication of a maximum time delay tolerated by the processing device between an expected start time and an actual start time of transmission of media data by one communication device;
receiving data from the communication devices, the received data comprising an indication of a time delay computed by the communication devices; and transmitting another message based on the computed time delays received from the communication devices.
24. The network according to claim 23, wherein the processing device is further configured to perform a step of updating the maximum time delay tolerated by the processing device based on the computed time delays received from the communication devices, wherein the other message comprises an indication of the updated maximum time delay.
25. The network according to claim 23 or 24, wherein the processing device is further configured to perform a step of generating a transmission sequence indicating a transmission order of data by communication devices of the network.
26. The network according to claim 25, wherein the processing device is further configured to perform the following steps:
selecting at least one communication device to be disabled, based on at least one of the quality and the complexity of the media data sent by the at least one communication device;
updating the transmission sequence based on the selected communication device.
27. The network according to any one of claims 22 to 26, wherein the communication devices are connected in cascade and are each configured to perform steps of a method according to any one of claims 1 to 19.
GB1709613.2A 2017-06-16 2017-06-16 Transmission method, communication device and communication network Active GB2563438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1709613.2A GB2563438B (en) 2017-06-16 2017-06-16 Transmission method, communication device and communication network

Publications (3)

Publication Number Publication Date
GB201709613D0 (en) 2017-08-02
GB2563438A (en) 2018-12-19
GB2563438B GB2563438B (en) 2020-04-15

Family

ID=59462228

Country Status (1)

Country Link
GB (1) GB2563438B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466259B1 (en) * 1999-01-04 2002-10-15 Unisys Corporation Collection of video images correlated with spacial and attitude information
JP2006295907A (en) * 2005-03-14 2006-10-26 Mitsubishi Materials Corp Wireless sensor network system, base station, wireless sensor and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2595887A (en) * 2020-06-09 2021-12-15 Canon Kk Method, device, and computer program for improving transmission of data in daisy-chain networks
GB2595884A (en) * 2020-06-09 2021-12-15 Canon Kk Method, device and computer program for robust data transmission in daisy-chain networks
GB2595884B (en) * 2020-06-09 2023-04-19 Canon Kk Method, device and computer program for robust data transmission in daisy-chain networks
GB2595887B (en) * 2020-06-09 2023-08-23 Canon Kk Method, device, and computer program for improving transmission of data in daisy-chain networks
GB2616735A (en) * 2020-06-09 2023-09-20 Canon Kk Method, device, and computer program for robust data transmission in daisy-chain networks
GB2616735B (en) * 2020-06-09 2024-05-29 Canon Kk Method, device, and computer program for robust data transmission in daisy-chain networks

Similar Documents

Publication Publication Date Title
CN102123073B (en) Packet reordering method and device
US7447164B2 (en) Communication apparatus, transmission apparatus and reception apparatus
CN111343039B (en) Communication synchronization method and node in distributed acquisition system
US10560383B2 (en) Network latency scheduling
GB2563438A (en) Transmission method, communication device and communication network
US10491418B2 (en) Can controller and data transmission method using the same
JP5800032B2 (en) Computer system, communication control device, and computer system control method
KR101565093B1 (en) Time synchronous method for avb in vehicle and system thereof
CN111343228B (en) Distributed time synchronization protocol for asynchronous communication system
CN112804157A (en) Programmable congestion control
JP6265058B2 (en) Network transmission system, its master node, slave node
US20210067459A1 (en) Methods, systems and appratuses for optimizing time-triggered ethernet (tte) network scheduling by using a directional search for bin selection
US20200322909A1 (en) Method and Arrangement for Deterministic Delivery of Data Traffic Over Wireless Connection
US10686897B2 (en) Method and system for transmission and low-latency real-time output and/or processing of an audio data stream
US10292181B2 (en) Information communication method and information processing apparatus
CN112714081B (en) Data processing method and device
US8306065B2 (en) Data distribution apparatus, relay apparatus and data distribution method
GB2569808A (en) Transmission method, communication device and communication network
US20200052926A1 (en) Methods and systems for transmitting and receiving data packets through a bonded connection
US10555350B2 (en) Bluetooth connection establishing method
US9210093B2 (en) Alignment circuit and receiving apparatus
KR20180064274A (en) Can controller and method for transmission of data using the same
JP7060026B2 (en) Communication device and communication method
GB2595884A (en) Method, device and computer program for robust data transmission in daisy-chain networks
WO2023017657A1 (en) Network relay device, network transmission device, packet relay method, and packet transmission method