GB2595884A - Method, device and computer program for robust data transmission in daisy-chain networks

Method, device and computer program for robust data transmission in daisy-chain networks

Info

Publication number
GB2595884A
GB2595884A GB2008736.7A GB202008736A GB2595884A GB 2595884 A GB2595884 A GB 2595884A GB 202008736 A GB202008736 A GB 202008736A GB 2595884 A GB2595884 A GB 2595884A
Authority
GB
United Kingdom
Prior art keywords
data
processing device
information
media data
item
Legal status: Granted
Application number
GB2008736.7A
Other versions
GB2595884B (en)
GB202008736D0 (en)
Inventor
Visa Pierre
Le Houerou Brice
Le Scolan Lionel
Lorgeoux Mickael
Morvan Isabelle
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to GB2008736.7A
Priority to GB2303256.8A
Publication of GB202008736D0
Publication of GB2595884A
Application granted
Publication of GB2595884B
Status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W40/00Communication routing or communication path finding
    • H04W40/02Communication route or path selection, e.g. power-based or shortest path routing
    • H04W40/22Communication route or path selection, e.g. power-based or shortest path routing using selective relaying for reaching a BTS [Base Transceiver Station] or an access point
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/407Bus networks with decentralised control
    • H04L12/417Bus networks with decentralised control with deterministic access, e.g. token passing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/42Loop networks
    • H04L12/427Loop networks with decentralised control
    • H04L12/433Loop networks with decentralised control with asynchronous transmission, e.g. token ring, register insertion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4247Bus transfer protocol, e.g. handshake; Synchronisation on a daisy chain bus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/70Admission control; Resource allocation
    • H04L47/72Admission control; Resource allocation using reservation actions during connection setup
    • H04L47/724Admission control; Resource allocation using reservation actions during connection setup at intermediate nodes, e.g. resource reservation protocol [RSVP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Small-Scale Networks (AREA)

Abstract

A method of transmitting media data (e.g. video) in a communication network 100 comprising synchronised processing devices (e.g. sensors 105) connected in cascade (e.g. a daisy-chain topology), the processing devices acquiring media data at different acquisition rates, the method comprising, at a first processing device: receiving, from a second, upstream processing device, an item of information characterizing an end of transmission of media data from the second processing device; determining whether the first processing device has locally-acquired media data to transmit and, if so, updating the received item of information and forwarding the updated item of information to a third, downstream processing device; otherwise forwarding the received item of information to the third processing device without updating the received item of information. Each device is allowed to transmit media data during a transmitting period. The first device may receive a group identifier along with the item of information, compare the identifier with its associated identifier and update the received information depending on the result. The network may comprise servers 110 with the media data of devices associated with different group identifiers being directed to different servers.

Description

METHOD, DEVICE, AND COMPUTER PROGRAM FOR ROBUST DATA TRANSMISSION IN DAISY-CHAIN NETWORKS
FIELD OF THE INVENTION
The present invention relates to a method, a device, and a computer program for robust transmission of data in daisy-chain networks, for example for transmitting media data captured by a sensor device during a capturing period over a network such as a high-speed network comprising a plurality of sensor devices and processing devices.
BACKGROUND OF THE INVENTION
Nowadays, many applications involve a large number of sensor devices (for instance wide video surveillance systems and multi-camera capture systems). Most of them provide real-time services that are highly appreciated by users of these systems. However, the high number of sensors in sensor networks and the amount of data generated by each sensor raise several problems, such as the bandwidth consumed to transport media data to a centralized processing point.
A prior art document entitled "A system for distributed multi-camera capture and processing" by Jim Easterbrook, Oliver Grau, and Peter Schubel of BBC Research and Development describes a distributed multi-camera capture and processing system for real-time media production applications. In this document, the communication between sensors and a server device is performed in push mode, i.e., the transmission of data is controlled independently by each sensor, which schedules the transmission as soon as new data are available for transmission.
Although the distributed processing allows a decrease in the bandwidth requirement and the load at server device side by executing part of the processing inside each sensor, some problems remain.
Since data may be transmitted towards the server device (i.e., the final destination of these data is the server device) as soon as they are available for transmission, they arrive at the server device in an unmanaged way. Hence, data presentation is disordered, both spatially and temporally.
From the point of view of the server device, the reception of data out of order causes excessive memory consumption and sub-optimal processing durations.
Moreover, traffic burden or network congestion may cause loss of important data and uncontrolled delay, which are incompatible with high quality real-time applications.
Finally, in case of congestion and if the dropping of data becomes necessary due to the limited buffering capabilities of the sensor, the dropping is unmanaged and data belonging to different data types or different capturing periods may be deleted in the different sensors. As a consequence, post processing at the computing server may be impacted since important data may have been dropped.
A solution to the aforementioned problems is to set up a transmission scheme based on fixed and predefined time slots with one time slot allocated to each device (also called node). This transmission scheme relies on the synchronization of nodes on a precise clock synchronization signal aligned (i.e., locked) onto a shared reference time, from which each node can determine the beginning of its time slot. The shared reference time is obtained using a specific item of equipment such as that known as a PTP Time Server (PTP standing for Precision Time Protocol), with a high-precision clock (e.g., an atomic clock or a global positioning system clock). The time server acts as a master node to generate synchronization information relating to its local clock (frequency and/or time of day) and sends this information to the other network devices (acting as slave nodes) within synchronization packets according to a synchronization protocol. Examples of known synchronization protocols are the "Network Time Protocol" (NTP) and the IEEE 1588-2008 protocol (also known as "Precision Time Protocol" (PTP)), defined in the document entitled "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems" (IEEE Instrumentation and Measurement Society, Jul. 24, 2008).
With a time-slot based transmission scheme, the sensor devices can transmit their data successively. However, this transmission scheme is not adapted to cases where the amount of data captured and transmitted by the sensor devices is variable over time. This leads to dimensioning each time slot with a duration able to support the peak data rate, leading to high wastage of bandwidth when the amount of data is low, or when a sensor has no data to transmit for some time.
In particular, the prior art solutions are not adapted, in terms of use of bandwidth, to systems wherein a sensor device has no local data to transmit for some time. Having no local data to transmit may be simply due to the absence of external events over some time. It may also result from a mixture of sensor devices in the same daisy-chain network, having different capture rates. For the sake of illustration, some sensor devices may capture data periodically at a rate of 60 Hz (i.e., 60 captures per second), while others may capture data periodically at a rate of 30 Hz or 1 Hz. Moreover, the capture time for a sensor device operating at 1 Hz may not be aligned with a capture time for a sensor device operating at 60 Hz. In such situations, when a sensor device has no local data to transmit over some time, it does not transmit local data or generate a token at the end of a local transmission. Thus, the next downstream node has to wait for some time to resume the transmission sequence, which leads to loss of bandwidth. In addition, multiple timeout conditions may be triggered in the downstream nodes, creating a mixture of data from different sensor devices in the reception server.
Consequently, there is a need to improve transmission of data captured by sensor devices in daisy-chain networks.
The present invention has been devised to address one or more of the foregoing concerns.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a method for transmitting media data in a communication network comprising a first, a second, and a third processing device connected in cascade, the first, the second, and the third processing device being configured to acquire media data at different acquisition rates, the method comprising, at the first processing device: receiving, from the second processing device, an item of information characterizing an end of transmission of media data from the second processing device; determining whether the first processing device has locally-acquired media data to transmit and, as a function of the determination, updating the received item of information and forwarding the updated item of information to the third processing device if the first processing device has media data to transmit or forwarding the received item of information to the third processing device, without updating the received item of information, if the first processing device has no locally-acquired media data to transmit.
Accordingly, the method of the invention makes it possible to improve transmission of data captured by sensor devices in a daisy-chain network, in particular to improve the use of bandwidth.
Optional features of the invention are further defined in the dependent appended claims.
According to a second aspect of the invention, there is provided a device for transmitting media data, the device comprising a processing unit configured for carrying out each of the steps of the method described above. The second aspect of the present invention has optional features and advantages similar to the first above-mentioned aspect.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system".
Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g., a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 illustrates an example of a multi-capture system comprising a plurality of synchronized sensors that are connected in cascade, forming a daisy-chain network;
Figure 2 is a block diagram illustrating schematically an example of architecture of a sensor according to some embodiments of the invention;
Figure 3 illustrates an example of a functional block diagram of the sensor illustrated in Figure 2, illustrating functions according to some embodiments of the invention;
Figure 4 is a flowchart illustrating an example of steps for generating a first trigger for transmitting local data within a daisy-chain network;
Figure 5 is a flowchart illustrating an example of steps for generating a second trigger for transmitting local data within a daisy-chain network;
Figure 6 is a flowchart illustrating an example of steps to control transmission of local data in a sensor; and
Figure 7 is a flowchart illustrating an example of steps for managing forwarding of packets according to some embodiments of the invention.
DETAILED DESCRIPTION OF THE INVENTION
According to a first embodiment, data obtained (or acquired) by interconnected sensor devices (also referred to as sensors or nodes hereafter), forming a daisy-chain network, are transmitted according to a predetermined transmission order along the daisy-chain network, from the sensor device the farthest from the destination (e.g., the first sensor device in the daisy chain) to the sensor device the closest to the destination (e.g., the last sensor device in the daisy chain). In this embodiment, all the sensor devices are considered altogether in the same transmission sequence.
A new transmission sequence is started at the beginning of each capturing period, by the first node of the daisy chain. To that end, each node in the daisy chain waits for the upstream nodes to have completed their data transmission before starting the transmission of its local data. Accordingly, the right of transmitting local data is granted by detecting the end of transmission of data from upstream nodes which may be done by identifying a particular item of information in the received data, referred to as a first trigger. After receiving and detecting this particular item of information, the considered node can start transmitting its local data.
If the node receiving this particular item of information has no local data to transmit, it forwards the received particular item of information without changing it to the next downstream node. This enables the next downstream node to start immediately transmitting its own local data. On the contrary, if the node receiving this particular item of information has local data to transmit, it processes the received particular item of information and generates a new particular item of information once the transmission of the local data is completed. Processing the received particular item of information means either to retain it (i.e., to not forward the particular item of information before transmitting local data) or to forward it after changing it (i.e., modifying its content). This makes it possible to prevent a downstream node from starting to transmit its own local data at the same time.
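For the sake of illustration, the decision described above may be sketched as follows in C; the helper functions (forward_item_unchanged, transmit_local_data, send_end_of_transmission_item) are hypothetical placeholders for the forwarding, transmission, and signalling operations, not part of the described embodiments:

#include <stdbool.h>

/* Hypothetical helpers, not part of the described embodiments. */
void forward_item_unchanged(void);
void transmit_local_data(void);
void send_end_of_transmission_item(void);

/* Decision taken by a node when the end-of-transmission item arrives
 * from the upstream neighbour (first embodiment). */
void on_end_of_upstream_transmission(bool has_local_data)
{
    if (!has_local_data) {
        /* Forward unchanged so the next downstream node can start
         * transmitting immediately. */
        forward_item_unchanged();
    } else {
        /* Retain (or invalidate) the received item, transmit the
         * local data, then signal our own end of transmission
         * downstream. */
        transmit_local_data();
        send_end_of_transmission_item();
    }
}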
A timer may be used to handle loss of the particular item of information (e.g., a loss of the particular item of information due to link down or node failure) so that a node may transmit its local data without waiting indefinitely for a particular item of information. For the sake of illustration, such a timer may start at the beginning of a new capturing period and may stop if a packet has been forwarded, meaning that the item of information should be received later. Accordingly, the right to transmit local data may be granted by detecting that the timer has elapsed, referred to as the second trigger, before identifying the first trigger.
For bandwidth efficiency, the value of the timer is preferably set to zero in the first node in the daisy chain starting the transmission sequence. For the other nodes, the value of the timer is a non-zero value. In order to comply with the successive, node-by-node transmission of the local data, the value shall be different in each node. For a given node, the value shall be set higher than the value in the previous upstream node and lower than the value in the subsequent downstream node.
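For the sake of illustration, a possible timeout assignment satisfying this ordering constraint is sketched below in C; the base and step values echo the illustrative figures used later in the description (500 µs for the second node, with a 20 µs increment per node) and are not mandated values:

#include <stdint.h>

/* Example timeout assignment: the first node in the transmission
 * sequence (position 0) starts immediately; each subsequent node
 * receives a strictly larger value than its upstream neighbour. */
uint32_t timeout_value_us(unsigned position_in_sequence)
{
    const uint32_t base_us = 500; /* second node in the sequence */
    const uint32_t step_us = 20;  /* per-node increment          */
    if (position_in_sequence == 0)
        return 0;
    return base_us + (position_in_sequence - 1) * step_us;
}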
According to a second embodiment, different groups of sensor devices are considered, formed according to their capture rate. For selecting the sensor devices of the same group, a group identifier, for example a data type, may be associated with each capture rate. Therefore, the sensor devices belonging to the same group (i.e., having the same group identifier, e.g., the same capture rate) can be selected based on such a group identifier, for example as a function of a received group identifier to be considered, that is compared with the local group identifier of the sensor device (e.g., with the capture rate of the sensor device). The local data may then be transmitted, group by group, after having defined a transmission sequence for each group. To that end, when a node receives a particular item of information indicating the end of transmission of upstream nodes, it checks whether the node belongs to the group of nodes transmitting their local data. This may be done by comparing a type of data contained within the received particular item of information with the local type of data of the node. If the node does not belong to the group of nodes concerned by the received particular item of information, the particular item of information is forwarded without change. On the contrary, if the node belongs to the group of nodes concerned by the received particular item of information, the received particular item of information is forwarded or updated depending on the availability of local data to transmit, as described in the first embodiment. In order to avoid data packets of different groups of sensor devices being mixed within the destination devices, the data packets may be directed to different computing servers (one computing server per group of sensor devices).
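For the sake of illustration, the group check of this second embodiment may be combined with the local-data test of the first embodiment as in the following C sketch; the helper names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

void forward_item_unchanged(void);
void process_item_and_transmit_local(void); /* as in the first embodiment */

/* Second embodiment: react to the received item only when the group
 * identifier it carries (here, a data type) matches the node's own. */
void on_received_item(uint8_t received_type, uint8_t local_type,
                      bool has_local_data)
{
    if (received_type != local_type || !has_local_data) {
        /* Not our group, or nothing local to send: pass it on. */
        forward_item_unchanged();
    } else {
        process_item_and_transmit_local();
    }
}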
According to a third embodiment directed to systems comprising only a few sensor devices having a capture rate lower than the others, only the sensor devices having the highest capture rate achieve the successive transmission one-by-one, using a particular item of information as described in reference to the first embodiment. The other sensor devices, having a lower capture rate, ignore the received item of information and forward it without changing it. According to some embodiments, these sensor devices transmit their local data as soon as they are ready after the capture.
Whatever the embodiments, the particular item of information that may be used to trigger the local data transmission may be embedded in a specific data packet transmitted by each node once all the local data packets have been generated and transmitted for the current capturing period. Alternatively, the particular item of information may be embedded in the last data packet transmitted for the current capturing period. An advantage of this last solution is to decrease the overhead. In this case, the particular item of information may be one bit in the data packet header, denoted end flag, set to '1' in the packet header of the last transmitted data packet. If a node receiving a packet with the end flag set to '1' has data to transmit, then it forwards the received packet after overwriting the value '0' in the end flag field. In a variant, the end flag bit may be associated with a hop counter also present in the data packet header. The hop counter is initialized to zero when a node generates a data packet, then it is incremented by one each time the data packet is forwarded by a downstream node having local data to transmit. Hence, the last data packet received by a node contains a header with the end flag bit equal to '1' and the hop counter equal to '0'.
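For the sake of illustration, a possible header layout and the corresponding last-packet test are sketched below in C; the field names are hypothetical, as the description only requires a data type, an end flag, and optionally a hop counter:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical header layout; the description does not fix one. */
typedef struct {
    uint8_t data_type;   /* group identifier (e.g., type 1, 2 or 3) */
    uint8_t end_flag;    /* '1' on the last packet of the period    */
    uint8_t hop_counter; /* '0' when generated, +1 per forwarding   */
} app_packet_header_t;

/* A received packet marks the end of upstream transmission when it
 * is the last packet of its originator (end flag set) and no node
 * with local data has forwarded it yet (hop counter still '0'). */
bool is_end_of_upstream_transmission(const app_packet_header_t *h)
{
    return h->end_flag == 1 && h->hop_counter == 0;
}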
In addition, whatever the embodiments, in order to strictly comply with the successive transmission node by node, the forwarding of data packets may be suspended in a node (no longer read from the reception queue) when the local transmission has started. Then, the forwarding of data packets is resumed when the local transmission is completed. This function may be necessary in case a node has not completed its local transmission before the end of the capturing period.
Nodes and daisy-chain network
Figure 1 illustrates an example of a multi-capture system 100 comprising a plurality of synchronized sensors (here, N sensors), denoted 105-1 to 105-N, that are connected in cascade, forming a daisy-chain network. A daisy-chain network topology or a ring topology are the preferred topologies to apply the invention. However, the invention may be carried out within networks conforming to other topologies such as common bus, star, mesh, or tree.
As illustrated, multi-capture system 100 further comprises additional nodes:
- several processing servers (or computing servers), here three processing servers denoted 110-1, 110-2, and 110-3 (generically denoted 110), which perform processing on data received from the sensors,
- time server 115 that distributes information on the current time in accordance with a protocol such as NTP or IEEE 1588 PTP. For the sake of illustration, time server 115 may obtain the current time from a GPS, a standard radio wave, or an atomic clock, and distributes the obtained current time to the other nodes of the multi-capture system, and
- interconnecting device 120 to interconnect the daisy-chain network, processing servers 110, and time server 115.
In addition, a control station (not illustrated) may be added to perform control operations. In a variant, these control operations may be handled by processing servers 110.
For the sake of illustration, the links between the sensors (also referred to as nodes) may be full duplex links based on 10 Gigabit wired Ethernet technology as defined by the IEEE 802.3ae-2002 standard, and the interconnecting device 120 may be a 10 Gigabit Ethernet switch.
Still for the sake of illustration, sensors 105-1 to 105-N may be image capturing devices that perform image capturing synchronously according to an internal synchronization signal generated thanks to the PTP protocol. A video captured by an image capturing device may be either a moving image or a still image, and may include data such as a sound. The video captured by an image capturing device can be saved in a storage incorporated in the image capturing device. The videos captured by the image capturing devices may be used to generate a free point-of-view content in real time with the data processed by processing servers 110. A free point-of-view content is a virtual viewpoint image (or a virtual viewpoint video) based on a plurality of captured images (or captured videos) obtained by performing image capturing from a plurality of directions by the plurality of image capturing devices. The synchronization signal is periodic with a period corresponding to the video frame rate (e.g., 60 frames per second). According to some embodiments, the period of the synchronization signal is referred to as the capturing period.
Still for the sake of illustration, it is assumed that the sensor devices 105-1, 105-5, and 105-7 operate at a frame rate of 60 fps (frames per second), the sensor devices 105-2, 105-4, 105-6, and 105-8 to 105-N operate at a frame rate of 30 fps, and the sensor device 105-3 operates at a frame rate of 1 fps. The data captured by sensor devices 105-1, 105-5, and 105-7 are directed to processing server 110-1, the data captured by sensor devices 105-2, 105-4, 105-6, and 105-8 to 105-N are directed to processing server 110-2, and the data captured by sensor device 105-3 are directed to processing server 110-3. The data captured by sensor devices 105-1, 105-5, and 105-7 are considered to be data of type 1, the data captured by sensor devices 105-2, 105-4, 105-6, and 105-8 to 105-N are considered to be data of type 2, and the data captured by sensor device 105-3 are considered to be data of type 3.
Data captured by the sensor devices 105-1 to 105-N are transmitted periodically, at each of their capturing periods. The data captured during a capturing period may be accumulated in storage means of the corresponding sensor device and may be transmitted over their next capturing period.
According to some embodiments, data transmission is organized so as to avoid network congestion and to avoid transmitting a mixture of data captured from different sensors when they are delivered to processing servers 110. Network congestion may occur when the bandwidth required for the data transmission exceeds the capacity of the communication link. In system 100, this is likely to occur when a node (e.g., one of the nodes 105-1 to 105-N) transmits its local data while forwarding data from the upstream nodes. The consequence of network congestion is to introduce delay and to waste bandwidth due to the flow control mechanism that may be triggered in each node to pause the transmission until the congestion disappears. The data transmission is therefore performed node by node: a node transmits its own local data when the previous node in the transmission sequence has finished forwarding the data from its previous nodes in the sequence and has finished transmitting its own local data. This also presents the advantage of avoiding mixing data from different nodes and of reducing the complexity of data reception and processing in processing servers 110 (i.e., in the processing servers receiving data from multiple sensor devices).
As described above and according to some embodiments of the invention, the transmission sequence along the daisy-chain network is predetermined, from the sensor the farthest from switch 120 (i.e., sensor 105-N) to the sensor device the closest to switch 120 (i.e., sensor 105-1). In each node, the local data transmission is granted after the determination of the end of transmission from the upstream nodes. For instance, node 105-4 may start transmitting its local data when it has established that nodes 105-N to 105-5 (associated with the same type of data if a type of data is to be used) have completed their own local transmission.
As also described above, there are two possible conditions to assert that the upstream nodes have completed their local data transmission or will not complete their local data transmission. The first condition, corresponding to a trigger for local data transmission, is directed to identifying a particular item of information in the received data. This particular item of information is transmitted by each node when it has finished transmitting its local data. The second condition, corresponding to the second trigger for local data transmission, is directed to determining that a timer has elapsed before detecting the first trigger. The timer makes it possible to trigger transmission of local data in case the first condition is not fulfilled. The second condition provides robustness to failure in the system (i.e., a link or a node down) preventing data reception in one node. According to some embodiments of the invention and according to the example illustrated in Figure 1, two transmission sequences may be carried out in parallel. The first transmission sequence is directed to the data of type 1, that are obtained from nodes 105-7, 105-5, and 105-1 (node 105-7 being the first node to transmit data in the group and node 105-1 being the last node of the sequence to transmit data in the group). The second transmission sequence is directed to the data of type 2, that are obtained from nodes 105-N to 105-8, 105-6, 105-4, and 105-2 (node 105-N being the first node to transmit data in the group and node 105-2 being the last node to transmit data in the group). Regarding the data of type 3 that are obtained from node 105-3, the latter transmits its local data as soon as they are ready after their capture.
Figure 2 is a block diagram illustrating schematically an example of architecture of a sensor 200 according to some embodiments of the invention. Sensor 200 may correspond to one, several or all of sensor devices 105-1 to 105-N in Figure 1.
As illustrated, sensor 200 comprises communication bus 205 to which are connected:
- central processing unit 210, such as a microprocessor, denoted CPU;
- read-only memory 215, denoted ROM, for storing computer programs for implementing the invention;
- random access memory 220, denoted CPU RAM, for storing the code executable by CPU 210, as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
- interface module 225, denoted I/O, providing interfaces with user devices (for example an interface module complying with the USB norm).
According to the illustrated example, sensor 200 further comprises communication bus 230 to which are connected:
- communication controller 235 to receive and to transmit data packets through two full duplex links referenced 240 and 245 that may comply, for example, with the 802.3 protocol. Communication controller 235 may include programmable logic to implement hardware accelerated functions according to embodiments of the invention;
- digital sensor 250 to capture local data and to make them available for transmission by the communication controller 235;
- random access memory 255, denoted application RAM, which can be used by digital sensor 250 to store the captured data to transmit. The stored data may be read by communication controller 235 to generate and to transmit data packets on the network; and
- hard disk drive 260, or other storage means, for storing computer programs executable by CPU 210. Hard disk 260 may also be used as storage means for the captured data.
At power up, the programs that are stored in a non-volatile memory, for example in read-only memory 215 and/or in hard disk 260, are transferred into random access memory 220 which then contains the executable code of the programs, as well as registers for storing the variables and parameters necessary for implementing the invention. Also at power-up, CPU 210 may program the programmable logic of communication controller 235.
To carry out the daisy-chain connection, communication link 240 is for instance connected to the previous upstream node and communication link 245 is connected to the subsequent downstream node. For the last node in the daisy chain (for example node 105-N in Figure 1), link 240 is left unconnected. From the synchronization messages exchanged with other nodes and with time server 115 (e.g., PTP protocol), communication controller 235 is able to generate a periodic synchronization signal (not represented), corresponding to the capturing period. This signal is transferred to digital sensor 250. As this signal is synchronized in phase and frequency between all the nodes having the same capture rate, the capture of data by the digital sensors is accurately synchronized.
Forwarding and transmitting data packets
Figure 3 illustrates an example of a functional block diagram of the sensor illustrated in Figure 2, illustrating functions according to some embodiments of the invention.
According to the illustrated example, sensor 200 comprises:
- application layer 310,
- packetizer module 320,
- routing module 330,
- communication interface 340, and
- transmission controller 300.
For the sake of illustration, application layer 310 may be implemented in digital sensor 250 while packetizer module 320, routing module 330, communication interface 340, and transmission controller 300 may be implemented in hardware in communication controller 235. However, other mappings of the functions are possible.
In particular, some of these functions may be implemented in software in CPU 210.
Application layer 310 generates application data (through digital sensor 250), at the beginning of each capturing period, and stores the generated data, for example in application RAM 255. Also, at the beginning of each capturing period, application layer 310 sends a notification to transmission controller 300, to indicate that new data are available. According to the illustrated example, such a notification is referenced 311 and denoted capturingPeriod. This notification may include the amount of captured data and the address where they are stored in application RAM 255 (or elsewhere).
Upon reception of a transmission request from transmission controller 300, referenced 321 and denoted txReq, packetizer module 320 formats the application data issued by application layer 310 into data packets having preferably predetermined sizes. According to some embodiments, such a request includes the amount of captured data and their address in application RAM 255. For each data packet to generate, packetizer module 320 reads an amount of application data stored in application RAM 255 (the amount of read data corresponding to the data packet payload size). Next, the packetizer module 320 appends a packet header to the packet data payload and transfers the generated data packet to routing module 330. The payload size of data packets may be fixed by default, and only the last data packet is shorter (or padded) as the total amount of data to transmit may not be a multiple of the default packet payload size.
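For the sake of illustration, this packetisation arithmetic may be expressed as follows in C (a sketch; the type and function names are hypothetical):

#include <stdint.h>

typedef struct {
    uint32_t full_packets; /* packets with the default payload size */
    uint32_t last_payload; /* payload size of the final packet      */
} packet_plan_t;

/* Fixed payload per packet, with a shorter final packet when the
 * total amount of data is not a multiple of the default size. */
packet_plan_t plan_packets(uint32_t total_bytes, uint32_t payload_size)
{
    packet_plan_t p;
    p.full_packets = total_bytes / payload_size;
    p.last_payload = total_bytes % payload_size;
    if (p.last_payload == 0 && p.full_packets > 0) {
        /* Exact multiple: the last full-size packet is the final one. */
        p.full_packets -= 1;
        p.last_payload = payload_size;
    }
    return p;
}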
Along with the last data packet generated for a capturing period, packetizer module 320 generates a particular item of information indicating the end of the local transmission. It may be embedded within the header of the last data packet transmitted by a node. In addition to the usual fields in an Ethernet packet header (source address, destination address, EtherType), the packet header may include useful information for processing servers 110. For the sake of illustration, the header may comprise:
- a type of payload data conveyed within the packet,
- a particular item of information for signalling that the packet is the last packet for the capturing period (e.g., one bit such as an end flag set to '1' for the last data packet and to '0' for the other data packets), and
- an indication of the number of times the data packet has been forwarded (e.g., a hop counter set to '0' when the data packet is generated and incremented by one each time the data packet is forwarded by a node). Alternatively, it may be a counter set to a predefined initial value when the packet is generated and decremented by one each time the packet is forwarded by a node. If the value reaches '0', the packet is discarded.
Hence, the particular item of information indicating the last packet is the end flag bit equal to '1'.
According to some embodiments, the end flag is associated with a hop counter. Therefore, a received data packet is to be considered as a last data packet if its end flag bit is equal to '1' and its hop counter is equal to '0'.
Alternatively, all the nodes generate their local data packets with the same initial hop counter value and the last data packet forwarded by a node has a header with the end flag bit equal to '1' and the hop counter equal to the initial value minus one.
Still as a variant, a node may indicate the end of its local transmission by transmitting a specific packet to the next node in the transmission sequence. The specific packet may be identified by another field in the header. In this case, the header of a data packet does not need to include the end flag, or the hop counter for the number of times the packet has been forwarded.
Once the generation of data packets for a capturing period is completed, packetizer module 320 notifies the completion to transmission controller 300, for example using the signal referenced 322 and denoted txDone.
Routing module 330 receives the data packets generated by the packetizer module 320. It stores them in the transmission queue 331 and notifies communication interface 340 that some data packets are ready to be transmitted. If the transmission queue 331 is full, routing module 330 requests packetizer module 320 to interrupt the generation of data packets. When the transmission queue empties, routing module 330 notifies packetizer module 320 to resume the generation of data packets.
Routing module 330 forwards the data packets received by the communication interface 340 from other nodes, that are temporarily stored in reception queue 332. This step, that aims at determining whether the header of a received data packet should be updated before being transmitted, is described in more detail by reference to Figure 7. According to this step, when a received data packet is available in reception queue 332, routing module 330 reads the data packet from the reception queue, extracts the header, and transfers it (or a part of it) to transmission controller 300 using the signal referenced 334 and denoted forwardHeader. Depending on the value of the signal referenced 337 and denoted enableUpdate that is driven by transmission controller 300, routing module 330 updates the packet header or not. If signal enableUpdate 337 indicates that the packet header is to be updated, for the type of data conveyed in the packet header, routing module 330 resets the end flag (e.g., resets the end flag bit to '0') and increments the hop counter by one if a hop counter is used. On the contrary, if signal enableUpdate 337 does not indicate that the packet header is to be updated, the end flag in the packet header is left unchanged. Next, routing module 330 writes the data packet in transmission queue 331 and notifies communication interface 340 to trigger transmission of the data packet.
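For the sake of illustration, the conditional header rewrite performed on the forwarding path may be sketched as follows in C; update_enabled_for stands for the per-data-type state carried by signal enableUpdate 337, and all names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t data_type;
    uint8_t end_flag;
    uint8_t hop_counter;
} fwd_header_t;

/* Driven by transmission controller 300 (signal enableUpdate 337). */
bool update_enabled_for(uint8_t data_type);

/* Header rewrite on the forwarding path: when updating is enabled
 * for the packet's data type, clear the end flag (a no-op unless
 * this was the last packet) and increment the hop counter if one
 * is used. */
void rewrite_forwarded_header(fwd_header_t *h)
{
    if (update_enabled_for(h->data_type)) {
        h->end_flag = 0;
        h->hop_counter += 1;
    }
    /* The packet is then written to transmission queue 331 and
     * communication interface 340 is notified (omitted here). */
}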
As a variant, if a node indicates the end of its local transmission by transmitting a specific packet to the next node in the transmission sequence, the specific packet is detected by routing module 330 and transferred to transmission controller 300. According to items of information contained within signal enableUpdate 337, the packet is forwarded or discarded. If the node has local data to transmit and if the type of data referenced within the specific packet corresponds to that of the node, the specific packet is discarded. On the contrary, if the node does not have local data to transmit or if the type of data referenced within the specific packet does not correspond to that of the node, the specific packet is forwarded to the next node in the transmission sequence.
Routing module 330 comprises an arbiter referenced 333 to manage contention between local data packets to be transmitted and received data packets to be forwarded. When a data packet is being stored in transmission queue 331, the transaction cannot be interrupted until the data packet is fully stored. A priority value may be assigned by transmission controller 300 to arbiter 333, through the signal referenced 336 and denoted routingPriority. If the priority is set to "forward", the arbiter blocks the path from packetizer module 320 so that only the data packets to be forwarded are stored in transmission queue 331. Conversely, if the priority is set to "local", the arbiter blocks the forwarding path so that only the local data packets received from packetizer module 320 are stored in transmission queue 331. When the priority is "none", arbiter 333 only ensures there is no mixture of data between a local data packet to be transmitted and a received data packet to be forwarded.
Communication interface 340 is used to receive data packets from the network and to transmit data packets to the network. For the sake of illustration, it may comprise an Ethernet physical layer and an Ethernet medium access control module (MAC). According to the illustrated example, communication interface 340 comprises two ports referenced 341 and 342, port 341 being connected to link 240 and port 342 being connected to link 245. A specific EtherType may be assigned to the data packets containing application data captured by digital sensor 250. All data packets received with this EtherType may be stored in reception queue 332. The other received packets may be stored in other reception queues (not represented), to be handled by CPU 210.
When application data packets are ready for transmission in transmission queue 331, communication interface 340 reads the data packets from transmission queue 331 and handles the access to the medium. Other packets to transmit (e.g., PTP packets), that are stored in other transmission queues (not represented), are managed by CPU 210.
For instance, according to the daisy-chain cabling, application data packets may be received on port 341 from the upstream nodes and they may be transmitted through port 342 to the downstream nodes.
If reception queue 332 is full, communication interface 340 may temporarily store the packets in an internal buffer. In turn, if this buffer becomes full, a flow control mechanism may be triggered to request the previous upstream node to refrain from transmitting additional packets until the network congestion disappears.
Transmission controller 300 is in charge of starting the local transmission according to the forwarding traffic, in order to achieve the transmission sequence node by node. For this purpose, transmission controller 300 may execute algorithms such as the ones described by reference to Figures 4 to 6.
Generating a first trigger
Figure 4 is a flowchart illustrating an example of steps for generating a first trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. According to some embodiments, these steps aim at monitoring the header of received data packets to be forwarded, in order to identify a particular item of information indicating the end of transmission of data packets from the upstream nodes.
During an initialization step (step 400), transmission controller 300 gets a list of types of data to monitor. According to some embodiments, all the types of data are monitored by all the nodes (i.e., the transmission sequence includes all the nodes).
According to some other embodiments, each node of a group of nodes monitors only one type of data, for example nodes 105-7, 105-5, and 105-1 monitor the data type 1 only (corresponding to a first capture rate) and ignore the other data types (other capture rates), nodes 105-N to 105-8, 105-6, 105-4, and 105-2 monitor the data type 2 only and ignore the other data types, and node 105-3 ignores all the data types.
As illustrated, after the initialization step, transmission controller 300 waits for a packet header from routing module 330 (step 402). When a packet header is received, the transmission controller extracts the values of the useful fields (step 404) to detect the end of data packet transmission from the upstream nodes. According to the illustrated example, the useful fields are the type of data, the end flag, and the hop counter. Next, it is checked whether the received type of data belongs to the list of types of data to monitor and whether the value of the end flag is equal to '1' and the value of the hop counter is equal to '0' (step 406). If the received type of data does not belong to the list of types of data to monitor, if the value of the end flag is not equal to '1', or if the value of the hop counter is not equal to '0', the algorithm returns to step 402 to wait for a new header. On the contrary, if the received type of data belongs to the list of types of data to monitor, if the value of the end flag is equal to '1', and if the value of the hop counter is equal to '0', transmission controller 300 generates a trigger signal denoted "first trigger" (step 408) to indicate that the last data packet to be forwarded has been received. This signal triggers transmission of local data, as described by reference to Figure 6.
If a node indicates the end of its local transmission by transmitting a specific packet to the next node in the transmission sequence, step 406 consists in checking that the specific packet is associated with a type of data belonging to the list of types of data to monitor.
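For the sake of illustration, the monitoring loop of Figure 4 may be sketched as follows in C, with hypothetical blocking helpers standing for steps 402 and 408:

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t data_type, end_flag, hop_counter; } hdr_t;

bool type_is_monitored(uint8_t data_type); /* list from step 400 */
hdr_t wait_for_forwarded_header(void);     /* step 402, blocking */
void assert_first_trigger(void);           /* step 408 */

/* First-trigger monitor (Figure 4): one iteration per forwarded
 * packet header. */
void first_trigger_task(void)
{
    for (;;) {
        hdr_t h = wait_for_forwarded_header();      /* steps 402-404 */
        if (type_is_monitored(h.data_type) &&
            h.end_flag == 1 && h.hop_counter == 0)  /* step 406 */
            assert_first_trigger();                 /* step 408 */
    }
}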
Generating a second trigger
Figure 5 is a flowchart illustrating an example of steps for generating a second trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. According to some embodiments, these steps aim at triggering transmission of local data if no data packet for the types of data to monitor has been forwarded after a predefined time since the beginning of the capturing period.
During an initialization step (step 520), transmission controller 300 obtains configuration parameters, in particular a configuration parameter corresponding to the types of data to monitor and to the timer value (for example, it may represent a duration of 500 µs expressed in system clock cycles).
Next, transmission controller 300 waits for the beginning of a new capturing period (step 521). When a notification is received signaling that a new capturing period has begun (for example when signal capturingPeriod 311 is received), transmission controller 300 initializes the timer with the configuration value and starts the timer to be decremented at each system clock cycle (step 523).
Next, the timer is monitored to determine when the waiting time elapses (step 524), i.e., when the timer reaches the value '0'.
When the waiting time elapses, transmission controller 300 generates a trigger signal called "second trigger" (step 528) and the algorithm returns to step 521, waiting for a new capturing period. As described by reference to Figure 6, this signal is used to enable transmission of local data. For the first node in the transmission sequence, the configuration value of the timer is preferably set to '0' (since there is no data packet to forward before transmitting local data, the local transmission may start immediately after the beginning of the capturing period).
If the waiting time has not elapsed (i.e., the timer has not reached the value '0', step 524), transmission controller 300 decrements the timer (step 525) and then checks the status of the forwarding path (step 526). If a data packet with a type of data corresponding to a type of data to monitor has been forwarded, the timer is stopped (step 527), and the algorithm returns to step 521 to wait for a new capturing period. If no data packet with a type of data corresponding to a type of data to monitor has been forwarded, the algorithm returns to step 524 to determine when the waiting time elapses.
According to this variant, it is assumed that the configuration timer value is lower than the capturing period. To avoid several nodes generating a second trigger at the same time, the configuration value of the timer is different in each node. For the sake of illustration, the configuration value may be set to 500 µs in the second node in the transmission sequence, then it may be set to 520 µs in the third node, then 540 µs in the fourth node, and so on.
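For the sake of illustration, the countdown of Figure 5 may be sketched as follows in C, one loop iteration standing for one system clock cycle; the helper names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

void wait_for_capturing_period(void);     /* step 521, blocking  */
bool monitored_packet_forwarded(void);    /* checked at step 526 */
void assert_second_trigger(void);         /* step 528 */

/* Second-trigger countdown (Figure 5): armed at each capturing
 * period, stopped if a monitored packet is forwarded, fired when
 * the timer reaches zero. A configuration value of 0 (first node
 * in the sequence) fires immediately. */
void second_trigger_task(uint32_t config_value_cycles)
{
    for (;;) {
        wait_for_capturing_period();              /* step 521 */
        uint32_t timer = config_value_cycles;     /* step 523 */
        bool stopped = false;
        while (timer != 0) {                      /* step 524 */
            timer--;                              /* step 525 */
            if (monitored_packet_forwarded()) {   /* step 526 */
                stopped = true;                   /* step 527 */
                break;
            }
        }
        if (!stopped)
            assert_second_trigger();              /* step 528 */
    }
}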
Controlling transmission of local data and forwarding of data packets
Figure 6 is a flowchart illustrating an example of steps to control transmission of local data in a sensor. For the sake of illustration, these steps may be executed in transmission controller 300 in sensor 200, using the trigger signals generated in the flowcharts described by reference to Figure 4 and to Figure 5.
During an initialization step (step 600), transmission controller 300 obtains configuration parameters, in particular a list of types of data to monitor.
Next, further to the initialization step, transmission controller 300 disables the local transmission path and disables updating headers of data packets for all the types of data (step 601). This step may comprise de-asserting signal txReq 321 sent to packetizer module 320 and de-asserting signal enableUpdate 337 sent to routing module 330.
Next, transmission controller 300 waits for the beginning of a new capturing period (step 602). When a notification signaling that a new capturing period has begun is received (for example when signal capturingPeriod 311 is received), transmission controller 300 checks whether there are local data to transmit, i.e., whether the amount of local data to transmit (payload size) is not zero (step 603). If the amount of local data to transmit is zero, the algorithm returns to step 602 to wait for a new capturing period: there are no local data to transmit and the headers of the packets to forward should not be updated.
On the contrary, if there are local data to transmit, transmission controller 300 asserts signal enableUpdate 337 to enable updating of packet headers (step 604), depending on the list of types of data to monitor.
Next, transmission controller 300 checks whether a first trigger signal has been generated (step 605), e.g., if a particular item of information indicating the end of transmission of upstream nodes has been identified.
If no first trigger signal has been generated, transmission controller 300 checks whether a second trigger signal has been generated (step 606), e.g., whether no data packet having the type of data to monitor has been forwarded during a predefined time. If no second trigger signal has been generated, transmission controller 300 returns to step 605.
On the contrary, if a first or a second trigger signal has been generated, transmission controller 300 enables transmission of local data (step 607). This step may comprise asserting signal txReq 321 sent to packetizer module 320.
Optionally, transmission controller 300 may disable the forwarding path (step 608). This may comprise setting signal routingPriority 336 (sent to routing module 330) to the value "local" (so that arbiter 333 blocks the forwarding data path). By default, the value of routingPriority 336 may be "none".
Next, transmission controller 300 checks whether the local transmission has been completed (step 609). When a corresponding notification is received (e.g., when signal txDone 322 is received), the transmission controller returns to step 601.
Optionally, the forwarding path may be enabled by setting signal routingPriority 336 to the value "none" or "forward" (step 610), the value "forward" causing arbiter 333 to block the data path from packetizer module 320.
If a node has not completed its local transmission at the beginning of a new capturing period, the current transmission may continue until completion. The captured data, associated with the new capturing period, are not processed and are not transmitted. The steps of disabling and enabling the forwarding path (steps 608 and 610) may be useful to avoid a mixture of data packets from different nodes.
As a variant, the transmission of current local data may be aborted if a new capturing period starts (in such a case, the transmission controller returns to step 603 to process the new capturing period).
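For the sake of illustration, the control flow of Figure 6 may be sketched as follows in C; the helper names mirror the signals of Figure 3 (txReq 321, enableUpdate 337, routingPriority 336, txDone 322) but are hypothetical:

#include <stdbool.h>
#include <stddef.h>

void wait_for_capturing_period(void);          /* step 602 */
size_t local_payload_size(void);               /* checked at step 603 */
void set_enable_update(bool on);               /* enableUpdate 337    */
bool first_trigger(void);                      /* step 605 */
bool second_trigger(void);                     /* step 606 */
void set_tx_req(bool on);                      /* txReq 321           */
void set_routing_priority(const char *value);  /* routingPriority 336 */
void wait_for_tx_done(void);                   /* txDone 322, step 609 */

/* Local-transmission control (Figure 6). */
void transmission_control_task(void)
{
    for (;;) {
        set_tx_req(false);
        set_enable_update(false);               /* step 601 */
        wait_for_capturing_period();            /* step 602 */
        if (local_payload_size() == 0)          /* step 603 */
            continue;                           /* nothing to transmit */
        set_enable_update(true);                /* step 604 */
        while (!first_trigger() && !second_trigger())
            ;                                   /* steps 605-606 */
        set_routing_priority("local");          /* optional step 608 */
        set_tx_req(true);                       /* step 607 */
        wait_for_tx_done();                     /* step 609 */
        set_routing_priority("none");           /* optional step 610 */
    }
}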
Figure 7 is a flowchart illustrating an example of steps for managing forwarding of packets within the sensor illustrated in Figure 2, based on the functional block diagram illustrated in Figure 3.
As illustrated, further to an initialization step (step 700), routing module 330 waits for a new packet available in the reception queue 332 (step 701). Once a packet is available, routing module 330 reads the packet from the reception queue, extracts the header, and transfers it (or part of it) to transmission controller 300 using signal forwardHeader 334 (step 702). Next, a test is carried out to determine whether routing module 330 should update the packet header (step 703). As described above, such a determination is based on determining whether local data are to be transmitted. It may also be based on types of data to monitor. The result of the determination may be transmitted to routing module 330 as the value of signal enableUpdate 337 driven by transmission controller 300.
If signal enableUpdate 337 indicates that an update is requested for the type of data conveyed in the data packet, routing module 330 clears the end flag bit (step 704), if appropriate (i.e., if the data packet is the last data packet of the data packets to be transmitted by upstream nodes), for example by resetting its value to '0', and increments the hop counter by one if a hop counter is used. Otherwise, this field in the packet header is left unchanged. After this conditional operation, routing module 330 writes the data packet in the transmission queue 331 (step 705) and notifies the transmission to be carried out to communication interface 340.
Although the present invention has been described herein above with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a person skilled in the art which lie within the scope of the present invention.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in claims not dependent upon each other does not indicate that a combination of these features cannot be advantageously used.

Claims (17)

  1. CLAIMS1. A method for transmitting media data in a communication network comprising a first, a second, and a third processing device connected in cascade, the first, the second, and the third processing device being configured to acquire media data at different acquisition rates, the method comprising, at the first processing device: receiving, from the second processing device, an item of information characterizing an end of transmission of media data from the second processing device; determining whether the first processing device has locally-acquired media data to transmit and, as a function of the determination, updating the received item of information and forwarding the updated item of information to the third processing device if the first processing device has media data to transmit or forwarding the received item of information to the third processing device, without updating the received item of information, if the first processing device has no locally-acquired media data to transmit.
  2. The method of claim 1, wherein each processing device of the communication network is allowed to transmit media data during a transmitting period, wherein an item of information is generated by a processing device of the communication network at each beginning of its transmitting period.
  3. The method of claim 1 or claim 2, wherein the item of information comprises an indication of the last packet of a set of packets obtained from media data acquired during a same media data capturing period.
  4. The method of claim 3, wherein the item of information further comprises a value of a hop counter, the value of the hop counter of a packet representing a number of times the packet has been forwarded.
  5. The method of claim 1 or claim 2, wherein the item of information is a specific packet.
  6. The method of any one of claims 1 to 5, further comprising receiving a group identifier and comparing the received group identifier with at least one group identifier associated with the first processing device, the step of updating the received item of information being carried out as a function of a result of the comparison.
  7. The method of claim 6, further comprising a step of obtaining a list of at least one group identifier associated with the first processing device.
  8. The method of claim 6 or claim 7, wherein the received group identifier is received along with the item of information.
  9. The method of any one of claims 6 to 8, wherein a group identifier is a type of data.
  10. The method of any one of claims 6 to 9, wherein the communication network further comprises a plurality of computing servers, the locally-acquired media data of processing devices associated with different group identifiers being directed to different computing servers.
  11. The method of any one of claims 1 to 10, further comprising initializing and starting a timer upon detecting the beginning of a media data capturing period, the timer being used for triggering transmission of locally-acquired media data.
  12. The method of claim 11, further comprising stopping the timer upon forwarding a data packet received from the second processing device.
  13. The method of claim 11 or claim 12, further comprising determining a value of the timer, values of the timers in the processing devices being such that the values increase with the position of the processing devices along a transmission sequence of processing devices.
  14. The method of any one of claims 1 to 13, wherein the first processing device determines whether it has locally-acquired media data to transmit and whether it has to update the received item of information upon detecting the beginning of a media data capturing period.
  15. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing each of the steps of the method according to any one of claims 1 to 14 when loaded into and executed by the programmable apparatus.
  16. A non-transitory computer-readable storage medium storing instructions of a computer program for implementing each of the steps of the method according to any one of claims 1 to 14.
  17. A device for transmitting media data, the device comprising a processing unit configured for carrying out each of the steps of the method according to any one of claims 1 to 14.
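By way of illustration only, the following C sketch shows one possible realization of the timer mechanism recited in claims 11 to 13 (a silence timeout staggered along the daisy chain). All identifiers and the linear timeout formula are assumptions made for this sketch; they are not prescribed by the claims or the description above.

#include <stdbool.h>
#include <stdint.h>

#define BASE_TIMEOUT_US 100u  /* hypothetical per-position guard interval */

typedef struct {
    bool     running;
    uint32_t timeout_us;
} silence_timer_t;

/* Claims 11 and 13: on detecting the beginning of a media data
 * capturing period, start a timer whose value grows with the position
 * of the device along the transmission sequence, so that a device
 * transmits on timeout only if every upstream device stayed silent. */
static void on_capture_period_start(silence_timer_t *t, uint32_t position)
{
    t->timeout_us = (position + 1u) * BASE_TIMEOUT_US;
    t->running = true;
}

/* Claim 12: forwarding a data packet received from the upstream device
 * shows that the normal transmission sequence is in progress, so the
 * local fallback timer is stopped. */
static void on_upstream_packet_forwarded(silence_timer_t *t)
{
    t->running = false;
}

/* Claim 11: on expiry, trigger transmission of locally-acquired
 * media data. */
static void on_timer_expired(silence_timer_t *t, void (*send_local_data)(void))
{
    if (t->running) {
        t->running = false;
        send_local_data();
    }
}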
GB2008736.7A 2020-06-09 2020-06-09 Method, device and computer program for robust data transmission in daisy-chain networks Active GB2595884B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2008736.7A GB2595884B (en) 2020-06-09 2020-06-09 Method, device and computer program for robust data transmission in daisy-chain networks
GB2303256.8A GB2616735B (en) 2020-06-09 2020-06-09 Method, device, and computer program for robust data transmission in daisy-chain networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2008736.7A GB2595884B (en) 2020-06-09 2020-06-09 Method, device and computer program for robust data transmission in daisy-chain networks

Publications (3)

Publication Number Publication Date
GB202008736D0 GB202008736D0 (en) 2020-07-22
GB2595884A true GB2595884A (en) 2021-12-15
GB2595884B GB2595884B (en) 2023-04-19

Family

ID=71616116

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2008736.7A Active GB2595884B (en) 2020-06-09 2020-06-09 Method, device and computer program for robust data transmission in daisy-chain networks

Country Status (1)

Country Link
GB (1) GB2595884B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112882978B (en) * 2021-03-02 2023-09-15 北京伟兴彦科技有限公司 Serial data transmission device, method and data processing equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170118141A1 (en) * 2013-03-15 2017-04-27 Innovasic, Inc. Packet data traffic management apparatus
GB2563438A (en) * 2017-06-16 2018-12-19 Canon Kk Transmission method, communication device and communication network
GB2569808A (en) * 2017-12-22 2019-07-03 Canon Kk Transmission method, communication device and communication network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurements and Control Systems", 24 July 2008, IEEE INSTRUMENTATION AND MEASUREMENTS SOCIETY
JIM EASTERBROOK, OLIVER GRAU, PETER SCHUBEL, "A System for Distributed Multi-Camera Capture and Processing"

Also Published As

Publication number Publication date
GB2595884B (en) 2023-04-19
GB202008736D0 (en) 2020-07-22

Similar Documents

Publication Publication Date Title
US9699091B2 (en) Apparatus and method for time aware transfer of frames in a medium access control module
US7730230B1 (en) Floating frame timing circuits for network devices
US11050501B2 (en) Performing PHY-level hardware timestamping and time synchronization in cost-sensitive environments
US7483448B2 (en) Method and system for the clock synchronization of network terminals
US8699646B2 (en) Media clock negotiation
CN105723657B (en) Switch, controller, system and link quality detection method
JP6157760B2 (en) Communication device, time correction method, and network system
WO2013124782A2 (en) Precision time protocol offloading in a ptp boundary clock
JP7393530B2 (en) Packet forwarding methods, devices, and systems
US11316654B2 (en) Communication device and method for operating a communication system for transmitting time critical data
CN112929117B (en) Compatible definable deterministic communication Ethernet
GB2595884A (en) Method, device and computer program for robust data transmission in daisy-chain networks
US8804770B2 (en) Communications system and related method for reducing continuity check message (CCM) bursts in connectivity fault management (CFM) maintenance association (MA)
JP2011040895A (en) Information processing apparatus, control method thereof and program
GB2616735A (en) Method, device, and computer program for robust data transmission in daisy-chain networks
JP2014032055A (en) Communication system
GB2595887A (en) Method, device, and computer program for improving transmission of data in daisy-chain networks
CN117220810A (en) Asynchronous data transmission method and system based on POWERLINK protocol
GB2563438A (en) Transmission method, communication device and communication network
CN110958072B (en) Multi-node audio and video information synchronous sharing display method
JP2021093695A (en) Synchronous control device, control method thereof, and program
JP2022028488A (en) Synchronization control unit, control method, and program
JP2024074327A (en) Communication device, transmission system, control method for communication device, and program
JP2023078537A (en) Communication device