GB2595887A - Method, device, and computer program for improving transmission of data in daisy-chain networks

Info

Publication number: GB2595887A
Application number: GB2008750.8A
Authority: GB (United Kingdom)
Prior art keywords: packet, transmission, processing device, data, timer
Legal status: Granted, Active
Other versions: GB2595887B (en), GB202008750D0 (en)
Inventors: Visa Pierre, Lorgeoux Mickaël, Le Scolan Lionel, Le Houerou Brice
Assignee (current and original): Canon Inc

Application filed by Canon Inc
Priority to GB2008750.8A
Publication of GB202008750D0
Publication of GB2595887A
Application granted
Publication of GB2595887B


Classifications

    • H04L47/525 — Traffic control in data switching networks; queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • H04L47/562 — Queue scheduling implementing delay-aware scheduling; attaching a time tag to queues
    • H04L47/564 — Queue scheduling implementing delay-aware scheduling; attaching a deadline to packets, e.g. earliest due date first
    • H04L47/6225 — Queue scheduling characterised by scheduling criteria; queue service order; fixed service order, e.g. Round Robin
    • H04L47/6285 — Queue scheduling characterised by scheduling criteria; provisions for avoiding starvation of low priority queues
    • H04L12/417 — Bus networks with decentralised control with deterministic access, e.g. token passing
    • H04L12/433 — Loop networks with decentralised control with asynchronous transmission, e.g. token ring, register insertion
    • G06F13/4247 — Bus transfer protocol, e.g. handshake; synchronisation on a daisy chain bus
    • G06F13/4256 — Bus transfer protocol on a daisy chain bus using a clocked protocol

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Small-Scale Networks (AREA)

Abstract

A method for transmitting media data in a network comprising first, second, and third processing devices (105, fig. 1) connected in cascade comprises, at the first device: initializing and starting a timer at the beginning of a media data capturing period 602; receiving at least one packet of media data from the second device and forwarding it to the third device; determining with a timer whether a time delay has elapsed since receiving the last packet from the second device (i.e. a second trigger) 604; and, if it has, transmitting to the third device a packet of locally obtained media data 605. Thus sensor data may be transmitted according to a predetermined sequence. Each device may wait for upstream devices to complete their transmission before starting transmission of local data, and may identify the end of upstream transmission by identifying an information item in the last packet of a set (i.e. a first trigger) 603. The invention may be relevant to applications with many sensors, such as wide video surveillance and multi-camera capture systems, and may improve the use of bandwidth in systems where each device has an allocated transmission time slot and a variable bit rate.

Description

METHOD, DEVICE, AND COMPUTER PROGRAM FOR IMPROVING TRANSMISSION OF DATA IN DAISY-CHAIN NETWORKS
FIELD OF THE INVENTION
The present invention relates to a method, a device, and a computer program for improving transmission of data in daisy-chain networks, for example for transmitting media data captured by a sensor device during a capturing period over a network such as a high-speed network comprising a plurality of sensor devices and a processing device.
BACKGROUND OF THE INVENTION
Nowadays, a lot of applications involve a large number of sensor devices (for instance wide video surveillance systems and multi-camera capture systems). Most of them provide real-time services that are highly appreciated by users of these systems. However, the high number of sensors in sensor networks and the amount of data generated by each sensor raise several problems such as bandwidth consumption to transport media data to a centralized processing point.
A prior art document entitled "A system for distributed multi-camera capture and processing", by Jim Easterbrook, Oliver Grau, and Peter Schubel of BBC Research and Development, describes a distributed multi-camera capture and processing system for real-time media production applications. In this document, the communication between sensors and a server device is performed in push mode, i.e. the transmission of data is controlled independently by each sensor, which schedules the transmission as soon as new data are available for transmission.
Although the distributed processing allows a decrease in the bandwidth requirement and the load at server device side by executing part of the processing inside each sensor, some problems remain.
Since data may be transmitted towards the server device (i.e. the final destination of these data is the server device) as soon as they are available for transmission, they arrive at the server device in an unmanaged way. Hence, data presentation is uncontrolled, both spatially and temporally.
From the point of view of the server device, the reception of data out of order causes over-consumption of memory and sub-optimal processing duration.
Moreover, traffic burden or network congestion may cause significant data loss and uncontrolled delay, which are incompatible with high-quality real-time applications.
Finally, in case of congestion and if the dropping of data becomes necessary due to the buffering capabilities of the sensor, the dropping is unmanaged and data belonging to different data types or different capturing periods may be deleted in the different sensors. As a consequence, post processing at the computing server may be impacted since important data may have been dropped.
A solution to the aforementioned problems is to set up a transmission scheme based on fixed and predefined time slots with one time slot allocated to each device (also called node). This transmission scheme relies on the synchronization of nodes on a precise clock synchronization signal aligned (i.e., locked) onto a shared reference time, from which each node can determine the beginning of its time slot. The shared reference time is obtained using a specific item of equipment such as that known as a PTP Time Server (PTP standing for Precision Time Protocol), with a high-precision clock (e.g., an atomic clock or a global positioning system clock). The time server acts as a master node to generate synchronization information related to its local clock (frequency and/or time of day) and sends this information to the other network devices (acting as slave nodes) within synchronization packets according to a synchronization protocol. Examples of known synchronization protocols are the "Network Time Protocol" (NTP) and the IEEE 1588-2008 protocol (also known as "Precision Time Protocol" (PTP)), defined in the document entitled "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems" (IEEE Instrumentation and Measurement Society, Jul. 24, 2008).
With a time-slot based transmission scheme, the sensor devices can transmit their data successively. However, this transmission scheme is not well suited when the amount of data captured and transmitted by the sensor devices varies over time. Each time slot must then be dimensioned with a duration able to support the peak data rate, which wastes a lot of bandwidth when the amount of data is low, or when a sensor has no data to transmit during some time.
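The cost of peak-rate dimensioning can be made concrete with a rough, purely illustrative calculation (the figures below are assumptions, not values taken from any system described here): if each slot must absorb a peak rate while the average rate is much lower, the mean slot utilization is

\[
\eta \;=\; \frac{R_{\text{avg}}}{R_{\text{peak}}}\,,\qquad
\text{e.g. } R_{\text{avg}} = 2\ \text{Gb/s},\ R_{\text{peak}} = 8\ \text{Gb/s}
\;\Rightarrow\; \eta = 25\%,
\]

so on average three quarters of every fixed time slot would be reserved but unused, which is the bandwidth waste that variable-length transmission seeks to reclaim.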
A solution to take advantage of a variable bit rate is to request each node to indicate the end of its transmission to the subsequent node in the transmission sequence. Examples of such transmission schemes are Token bus (standard IEEE 802.4) and Token ring (standard IEEE 802.5). However, these transmission schemes have weaknesses, in particular when a link or a node failure occurs or when errors occur during the token transmission, which may require the generation of new tokens under particular circumstances.
Consequently, there is a need for improving transmission of data captured by sensor devices in daisy-chain networks.
The present invention has been devised to address one or more of the foregoing concerns.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a method for transmitting media data in a communication network comprising a first, a second, and a third processing device connected in cascade, the method comprising, at the first processing device: initializing and starting a timer upon detecting the beginning of a media data capturing period; receiving at least one packet of media data from the second processing device and forwarding the at least one received packet to the third processing device; determining whether a time delay has elapsed since receiving the last packet from the second processing device, the timer being used for determining whether the time delay has elapsed; and in response to determining that a time delay has elapsed since receiving the last packet from the second processing device, transmitting to the third processing device at least one packet of media data obtained locally by the first processing device.
Accordingly, the method of the invention makes it possible to improve transmission of data captured by sensor devices in a daisy-chain network, in particular to improve the use of bandwidth.
Optional features of the invention are further defined in the dependent appended claims.
According to a second aspect of the invention, there is provided a device for transmitting media data, the device comprising a processing unit configured for carrying out each of the steps of the method described above. The second aspect of the present invention has optional features and advantages similar to the first above-mentioned aspect.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g., a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 illustrates an example of a multi-capture system comprising a plurality of synchronized sensors that are connected in cascade, forming a daisy-chain network;
Figure 2 is a block diagram illustrating schematically an example of architecture of a sensor according to some embodiments of the invention;
Figure 3 illustrates an example of a functional block diagram of the sensor illustrated in Figure 2, illustrating functions according to some embodiments of the invention;
Figure 4 is a flowchart illustrating an example of steps for generating a first trigger for transmitting local data within a daisy-chain network;
Figures 5a, 5b, and 5c are flowcharts illustrating examples of steps for generating a second trigger for transmitting local data within a daisy-chain network; and
Figure 6 is a flowchart illustrating an example of steps to control transmission of local data in a sensor.
DETAILED DESCRIPTION OF THE INVENTION
According to a first embodiment, data obtained by interconnected sensor devices (also referred to as sensors or nodes hereafter), forming a daisy-chain network, are transmitted according to a predetermined transmission sequence along the daisy-chain network, from the sensor device the farthest from the destination (e.g., the first sensor device in the daisy chain) to the sensor device the closest to the destination (e.g., the last sensor device in the daisy chain).
A new transmission sequence is started at the beginning of each capturing period. To that end, each node in the daisy chain waits for the upstream nodes to have completed their data transmission before starting the transmission of its local data. Accordingly, the right to transmit local data is granted by detecting the end of transmission of data from the upstream nodes, which is done either by identifying a particular item of information in the received data, referred to as the first trigger, or by detecting that a timer has elapsed before identifying the first trigger, referred to as the second trigger. In other words, the timer permits the local data transmission to be triggered in case the particular item of information has not been received.
There exist several ways for managing the timer.
In a first variant, described by reference to Figure 5a, the timer is started after the beginning of the capturing period, when a transmission queue is detected as empty (i.e., when there is no data packet to forward). It is then stopped when data are stored in the transmission queue and restarted when the transmission queue becomes empty again. If the timer elapses, this means that no data have been forwarded during the time defined by the initial timer value. Accordingly, it may be considered that there will be no more data to be forwarded for this capturing period.
According to this alternative, a node can start its local transmission even if the particular item of information indicating the end of data transmission from the upstream nodes was not received (e.g., because of a link or node failure). It is to be noted that a reception queue may be monitored instead of the transmission queue. The value determining the expiry of the timer is minimized to avoid wasting bandwidth with a long period without data transmission. However, the value is higher than the latency to receive a data packet from an upstream node and to forward it to the subsequent downstream node. In addition, the value is higher than a transmission pause introduced by a flow control mechanism requesting the previous upstream node to refrain from transmitting new data packets during a specified time. Moreover, in order to respect the successive data transmission, node by node, the value is different in each node. For a given node, the value is set higher than the value in the previous upstream node and lower than the value in the subsequent downstream node. For the sake of illustration and in order to avoid wasting bandwidth, the value is set to zero in the first node in the daisy chain starting the transmission sequence. The value may be a fixed value which is part of the node's configuration at initialization. In a variant, the value is automatically selected according to the position of the node in the transmission sequence (i.e., in the daisy chain).
In another variant, described by reference to Figure 5b, the timer is started after the beginning of the capturing period and it is stopped when data are stored in the transmission queue. According to this variant, the timer makes it possible to trigger the local data transmission in case no data packet has been forwarded since the beginning of the capturing period.
Still in another variant, described by reference to Figure 5c, the timer is started after the beginning of the capturing period and it is not stopped. According to this embodiment, a node starts transmitting local data even though there may still be some data packets to forward.
The particular item of information that may be used to trigger the local data transmission (first trigger) may be embedded in a specific data packet transmitted by each node once all the local data packets have been generated and transmitted for the current capturing period. Alternatively, the particular item of information may be embedded in the last data packet transmitted for the current capturing period. An advantage of this last solution is to decrease the overhead. In this case, the particular item of information may be one bit in the data packet header, denoted end flag, set to '1' for the last transmitted data packet, and reset to '0' when it is forwarded by the subsequent node. Optionally, the end flag is associated with a hop counter also present in the data packet header. The hop counter is initialized to zero when a node generates a data packet, then it is incremented by one each time the data packet is forwarded by a downstream node. When the end flag is associated with a hop counter, the end flag is not reset to '0' when forwarded. Hence, the last data packet forwarded by a node contains a header with the end flag bit equal to '1' and the hop counter equal to '1'. Instead of using a hop counter, a time-to-live field in the data packet header may be used to identify the data packets from the previous upstream node.
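As a minimal sketch of these header rules, the following C fragment shows how the end flag, hop counter, and time-to-live could interact. The struct layout, field widths, and function names are illustrative assumptions; only the flag and counter semantics come from the description above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed header layout; only the end_flag, hop_count, and ttl
 * semantics are taken from the description above. */
struct pkt_header {
    uint8_t end_flag;   /* '1' on the last packet of a capturing period */
    uint8_t hop_count;  /* 0 at generation, +1 per forwarding node      */
    uint8_t ttl;        /* decremented per hop; packet dropped at 0     */
};

/* Called by the node that generates the packet locally
 * (initial_ttl is assumed to be greater than zero). */
void init_header(struct pkt_header *h, bool is_last, uint8_t initial_ttl)
{
    h->end_flag  = is_last ? 1 : 0;
    h->hop_count = 0;
    h->ttl       = initial_ttl;
}

/* Called by each node that forwards the packet downstream.
 * Returns false when the packet must be discarded (TTL exhausted).
 * In this hop-counter variant the end flag is NOT reset on forwarding. */
bool update_header_on_forward(struct pkt_header *h)
{
    h->hop_count++;
    return --h->ttl != 0;
}

/* First-trigger test: the last packet of the previous upstream node
 * carries the end flag and has been forwarded exactly once. */
bool is_upstream_end_of_transmission(const struct pkt_header *h)
{
    return h->end_flag == 1 && h->hop_count == 1;
}
```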
In order to strictly respect the successive transmission node by node, the forwarding of data packets may be suspended in a node (no more packets are read from the reception queue) when the local transmission has started. Then, the forwarding of data packets is resumed when the local transmission is completed. This function may be necessary in case a node has not completed its local transmission before the end of the capturing period.
According to a second embodiment, data obtained by sensor devices forming a daisy-chain network are transmitted according to a predetermined transmission sequence along the daisy-chain network, according to any order, that is to say according to an order that may be different from the order from the sensor device the farthest from the destination (e.g., the first sensor device in the daisy chain) to the sensor device the closest to the destination (e.g., the last sensor device in the daisy chain). In such a case, the particular item of information for the first trigger may be embedded in a specific packet with a destination address or a destination identifier corresponding to the next node in the transmission sequence. Also, the timer for the second trigger is preferably different in each node. For a given node, the value is set higher than the value in the previous node in the transmission sequence, and lower than the value in the next node in the transmission sequence.
Nodes and daisy-chain network
Figure 1 illustrates an example of a multi-capture system 100 comprising a plurality of synchronized sensors (here, N sensors), denoted 105-1 to 105-N, that are connected in cascade, forming a daisy-chain network. A daisy-chain network topology or a ring topology are the preferred topologies to apply the invention. However, the invention may be carried out within networks conforming to other topologies like common bus, star, mesh, or tree.
As illustrated, multi-capture system 100 further comprises three additional nodes:
- processing server 110 (or computing server), which performs processing on data received from the sensors,
- time server 115, which distributes information on the current time in accordance with a protocol such as NTP or IEEE 1588 PTP. For the sake of illustration, time server 115 may obtain the current time from a GPS, a standard radio wave, or an atomic clock, and distributes the obtained current time to the other nodes of the multi-capture system, and
- interconnecting device 120 to interconnect the daisy-chain network, processing server 110, and time server 115.
In addition, a control station (not illustrated) may be added to perform control operations. In a variant, these control operations may be handled by processing server 110. For the sake of illustration, the links between the sensors (also referred to as nodes) may be full duplex links based on 10 Gigabit wired Ethernet technology as defined by the IEEE 802.3ae-2002 standard, and the interconnecting device 120 may be a 10 Gigabit Ethernet switch.
Still for the sake of illustration, sensors 105-1 to 105-N may be image capturing devices that perform image capturing synchronously according to an internal synchronization signal generated thanks to the PTP protocol. A video captured by an image capturing device may be either a moving image or a still image, and may include data such as a sound. The video captured by an image capturing device can be saved in a storage incorporated in the image capturing device. The videos captured by the image capturing devices may be used to generate a free point-of-view content in real time within processing server 110. A free point-of-view content is a virtual viewpoint image (or a virtual viewpoint video) based on a plurality of captured images (or captured videos) obtained by performing image capturing from a plurality of directions by the plurality of image capturing devices. The synchronization signal is periodic with a period corresponding to the video frame rate (e.g., 60 frames per second).
According to some embodiments, the period of the synchronization signal is referred to as the capturing period.
The transmission of data captured by the sensors is carried out periodically, at each capturing period. The data captured during a capturing period may be accumulated in storage means of the sensor and may be transmitted along the next capturing period.
According to some embodiments of the invention, data transmission is organized so as to avoid network congestion and to avoid transmitting a mixture of data captured from different sensors when they are delivered to processing server 110. A network congestion may occur when the bandwidth required for the data transmission exceeds the capacity of the communication link. In system 100, it is likely to happen when a node (e.g., one of the nodes 105-1 to 105-N) transmits its local data while forwarding data from the upstream nodes. The consequence of network congestion is to introduce some delay and to waste bandwidth due to the flow control mechanism that may be triggered in each node to pause the transmission until the congestion disappears.
The data transmission is therefore performed node by node: a node transmits its local data when the previous node in the transmission sequence has finished forwarding the data from its previous nodes in the sequence and has finished transmitting its own data. This also presents the advantage of avoiding mixing data from different nodes and of reducing the complexity of data reception and processing in processing server 110.
As described above and according to a first embodiment of the invention, the transmission sequence along the daisy-chain network is predetermined, from the sensor the farthest from processing server 110 (i.e., sensor 105-N) to the sensor device the closest to processing server 110 (i.e., sensor 105-1). In each node, the local data transmission is granted after the determination of the end of transmission from the upstream nodes. For instance, node 105-(N-5) may start transmitting its local data when it has established that nodes 105-N to 105-(N-4) have completed their own local transmission.
As described above and according to a second embodiment of the invention, the transmission sequence along the daisy-chain network is fixed and predefined but does not follow the position of the nodes (i.e., not from the farthest to the closest nodes from the destination). In this case, the local data transmission is granted after the determination of the end of transmission from the previous nodes.
As also described above, there exist two possible conditions to assert that the upstream nodes have completed their local data transmission or will not complete their local data transmission. The first condition, corresponding to the first trigger for local data transmission, is directed to identifying a particular item of information in the received data. This particular item of information is transmitted by each node when it has finished transmitting its local data. The second condition, corresponding to the second trigger for local data transmission, is directed to determining that a timer has elapsed before detecting the first trigger. The timer makes it possible to trigger transmission of local data in case the first condition is not fulfilled. The second condition provides robustness to failure in the system (i.e., a link or a node down) preventing the correct data reception in one node.
Figure 2 is a block diagram illustrating schematically an example of architecture of a sensor 200 according to some embodiments of the invention. Sensor 200 may correspond to one, several or all of sensors 105-1 to 105-N in Figure 1.
As illustrated, sensor 200 comprises communication bus 205 to which are connected:
- central processing unit 210, such as a microprocessor, denoted CPU;
- read-only memory 215, denoted ROM, for storing computer programs for implementing the invention;
- random access memory 220, denoted CPU RAM, for storing the code executable by CPU 210, as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
- interface module 225, denoted I/O, providing interfaces with user devices (for example an interface module complying with the USB standard).
According to the illustrated example, sensor 200 further comprises communication bus 230 to which are connected:
- communication controller 235 to receive and to transmit data packets through two full duplex links referenced 240 and 245 that may comply, for example, with the 802.3 protocol. Communication controller 235 may include programmable logic to implement hardware accelerated functions;
- digital sensor 250 to capture local data and to make them available for transmission by communication controller 235;
- random access memory 255, denoted application RAM, which can be used by digital sensor 250 to store the captured data to transmit. The stored data may be read by communication controller 235 to generate and to transmit data packets on the network; and
- hard disk drive 260, or other storage means, for storing computer programs executable by CPU 210. Hard disk 260 may also be used as storage means for the captured data.
At power up, the programs that are stored in a non-volatile memory, for example in read-only memory 215 and/or in hard disk 260, are transferred into random access memory 220 which then contains the executable code of the programs, as well as registers for storing the variables and parameters necessary for implementing the invention. Also at power-up, CPU 210 may program the programmable logic of communication controller 235.
To carry out the daisy-chain connection, communication link 240 is for instance connected to the previous upstream node and communication link 245 is connected to the subsequent downstream node. For the last node in the daisy chain (for example node 105-N in Figure 1), link 240 is left unconnected. From the synchronization messages exchanged with other nodes and with time-server 115 (e.g., PTP protocol), communication controller 235 is able to generate a periodic synchronization signal (not represented), corresponding to the capturing period. This signal is transferred to digital sensor 250. As this signal is synchronized in phase and frequency between all the nodes, the capture of data by the digital sensors is accurately synchronized.
Forwarding and transmitting data packets
Figure 3 illustrates an example of a functional block diagram of the sensor illustrated in Figure 2, illustrating functions according to some embodiments of the invention.
According to the illustrated example, sensor 200 comprises:
- application layer 310,
- packetizer module 320,
- routing module 330,
- communication interface 340, and
- transmission controller 300.
For the sake of illustration, application layer 310 may be implemented in digital sensor 250 while packetizer module 320, routing module 330, communication interface 340, and transmission controller 300 may be implemented in hardware in communication controller 235. However, other mappings of the functions are possible.
In particular, some of these functions may be implemented in software in CPU 210.
Application layer 310 generates application data (through digital sensor 250), at the beginning of each capturing period, and stores the generated data, for example in application RAM 255. Also, at the beginning of each capturing period, application layer 310 sends a notification to transmission controller 300, to indicate that new data are available. According to the illustrated example, such a notification is referenced 311 and denoted capturingPeriod. This notification may include the amount of captured data and the address where they are stored in application RAM 255 (or elsewhere).
Upon reception of a transmission request from transmission controller 300, referenced 321 and denoted txReq, packetizer module 320 formats the application data issued by application layer 310 into data packets having preferably predetermined sizes.
According to some embodiments, such a request includes the amount of captured data and their address in application RAM 255. For each data packet to generate, packetizer module 320 reads an amount of application data stored in application RAM 255 (the amount of read data corresponding to the data packet payload size). Then, packetizer module 320 appends a packet header to the packet data payload and transfers the generated data packet to routing module 330. The payload size of data packets may be fixed by default, and only the last data packet is shorter (or padded) as the total amount of data to transmit may not be a multiple of the default packet payload size.
Along with the last data packet generated for a capturing period, packetizer module 320 generates a particular item of information indicating the end of the local transmission. It may be embedded in the header of the last data packet transmitted by a node. In addition to the usual fields in an Ethernet packet header (source address, destination address, EtherType), the packet header may include useful information. For the sake of illustration, the header may comprise a payload size, a time-to-live indication for the data packet (e.g., a counter set to a predefined value when the packet is generated, that is decremented by one each time the data packet is forwarded by a node, such that if the value reaches '0', the data packet is discarded to avoid an infinite loop), an indication of the last data packet for the capturing period (e.g., one bit such as an end flag set to '1' for the last data packet and to '0' for the other data packets), and an indication of the number of times the data packet has been forwarded (e.g., a hop counter set to '0' when the data packet is generated and incremented by one each time the data packet is forwarded by a node). In a variant, only the end flag indication is used.
According to some embodiments, the particular item of information corresponds to the values of an end flag and of a hop counter: when generating a data packet that is to be considered as a last data packet, the bit of the end flag is set to '1' and the value of the hop counter is set to '0'. Accordingly, a received data packet is to be considered as a last data packet when its end flag bit is equal to '1' and its hop counter is equal to '1'. Alternatively, the time-to-live field may be used instead of the hop counter.
Assuming all the nodes generate their local data packets with the same initial time-to-live value, the last data packet forwarded by a node has a header with the end flag bit equal to '1' and the time-to-live field equal to the initial value minus one.
Still as a variant, a node may indicate the end of its local transmission by transmitting a specific packet to the next node in the transmission sequence. The specific packet may be identified by another field in the header. In this case, the header of a data packet does not need to include the end flag, nor the hop counter for the number of times the packet has been forwarded.
Once the generation of data packets for a capturing period is completed, packetizer module 320 notifies the completion to transmission controller 300, for example using the signal referenced 322 and denoted txDone.
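The segmentation logic can be sketched as follows, under the assumption of a fixed default payload size. The constant, the emit_packet helper, and the demonstration buffer are hypothetical; only the rule that the final packet may be shorter and carries the end indication is taken from the description.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PAYLOAD_SIZE 1024  /* assumed default payload size */

/* Hypothetical sink standing in for the routing module's transmission
 * queue; a real packetizer would prepend a header and enqueue the
 * packet instead of printing. */
static void emit_packet(const unsigned char *payload, size_t len, bool last)
{
    (void)payload;  /* payload bytes would be copied into the packet here */
    printf("packet: %zu bytes, end_flag=%d\n", len, last ? 1 : 0);
}

/* Split the captured data of one capturing period into fixed-size
 * packets; only the final packet may be shorter, and it carries the
 * end indication. After the loop, the packetizer would notify txDone
 * to the transmission controller. */
void packetize(const unsigned char *data, size_t total)
{
    size_t offset = 0;
    while (offset < total) {
        size_t chunk = total - offset;
        if (chunk > PAYLOAD_SIZE)
            chunk = PAYLOAD_SIZE;
        bool last = (offset + chunk == total);
        emit_packet(data + offset, chunk, last);
        offset += chunk;
    }
}

int main(void)
{
    unsigned char frame[2500];
    memset(frame, 0, sizeof frame);
    packetize(frame, sizeof frame);  /* -> 1024, 1024, and 452 bytes */
    return 0;
}
```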
Routing module 330 receives the data packets generated by packetizer module 320. It stores them in transmission queue 331 and notifies communication interface 340 that some data packets are ready to be transmitted. In case transmission queue 331 is full, routing module 330 requests packetizer module 320 to interrupt the generation of data packets. When the transmission queue empties, routing module 330 notifies packetizer module 320 to resume the generation of data packets. Routing module 330 routes the data packets generated by packetizer module 320 and the data packets received by communication interface 340 from other nodes.
The latter are temporarily stored in reception queue 332. Routing module 330 analyses the packet headers to check the validity of the data packets (for instance it checks the value of the time-to-live field), then it transfers (i.e., forwards) the valid data packets to transmission queue 331, and it notifies communication interface 340 that some data packets are ready to be transmitted. Before storing the data packets in transmission queue 331, routing module 330 updates the packet header by incrementing by one the hop counter (if present), and by decrementing by one the time-to-live field. Then, the header (or a part of the header) is copied and transferred to transmission controller 300 through the signal referenced 334 and denoted forwardHeader.
In the case where an upstream node indicates the end of its local transmission, by transmitting a specific packet to the next node in the transmission sequence, the specific packet is identified, upon reception, by routing module 330. In such a case, if the node processing the received packet is the destination node of this specific packet, the latter is not forwarded. Only the header of this packet is then provided to transmission controller 300.
Routing module 330 comprises an arbiter referenced 333 to manage contention between the local data packets to be transmitted and the received data packets to be forwarded. When a data packet is being stored in transmission queue 331, the operation cannot be interrupted until the data packet is fully stored. A priority value may be assigned by transmission controller 300 to arbiter 333, through the signal referenced 336 and denoted routingPriority. If the priority is set to "forward", the arbiter blocks the path from packetizer module 320 so that only the data packets to be forwarded are stored in transmission queue 331. Conversely, if the priority is set to "local", the arbiter blocks the forwarding path so that only the local data packets received from packetizer module 320 are stored in transmission queue 331. When the priority is "none", arbiter 333 only ensures there is no mixture of data between a local data packet to be transmitted and a received data packet to be forwarded.
According to some embodiments, routing module 330 provides a status of transmission queue 331 to transmission controller 300, for example an empty status, through signal 335 denoted emptyStatus. This signal may be sent when transmission queue 331 is empty.
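The arbitration policy can be summarized by a small decision function. This is a behavioural sketch only: the enum names and the tie-breaking order under "none" (here, forwarding first) are assumptions, not specified behaviour.

```c
#include <stdbool.h>

/* Possible values of the routingPriority signal (336). */
enum routing_priority { PRIO_NONE, PRIO_FORWARD, PRIO_LOCAL };

/* Which path may store the next complete packet in transmission
 * queue 331; SRC_NONE means neither path has an eligible packet. */
enum source { SRC_NONE, SRC_LOCAL, SRC_FORWARD };

/* Sketch of arbiter 333: one whole packet is selected at a time, so
 * local and forwarded data are never interleaved inside a packet. */
enum source arbitrate(enum routing_priority prio,
                      bool local_pending, bool forward_pending)
{
    switch (prio) {
    case PRIO_FORWARD:            /* block the path from the packetizer */
        return forward_pending ? SRC_FORWARD : SRC_NONE;
    case PRIO_LOCAL:              /* block the forwarding path          */
        return local_pending ? SRC_LOCAL : SRC_NONE;
    case PRIO_NONE:               /* no blocking; whole packets only    */
    default:
        if (forward_pending) return SRC_FORWARD;  /* assumed tie-break */
        if (local_pending)   return SRC_LOCAL;
        return SRC_NONE;
    }
}
```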
Communication interface 340 is used to receive data packets from the network and to transmit data packets to the network. For the sake of illustration, it may comprise an Ethernet physical layer and an Ethernet medium access control module (MAC). According to the illustrated example, communication interface 340 comprises two ports referenced 341 and 342, port 341 being connected to link 240 and port 342 being connected to link 245. A specific EtherType may be assigned to the data packets containing application data captured by digital sensor 250. All data packets received with this EtherType may be stored in reception queue 332. The other received packets may be stored in other reception queues (not represented), to be handled by CPU 210.
When application data packets are ready for transmission in transmission queue 331, communication interface 340 reads the data packets from transmission queue 331 and handles the access to the medium. Other packets to transmit (e.g. PTP packets), that are stored in other transmission queues (not represented), are managed by CPU 210.
For instance, according to the daisy-chain cabling, application data packets may be received on port 341 from the upstream nodes and they may be transmitted through port 342 to the downstream nodes.
In case reception queue 332 is full, communication interface 340 may temporarily store the packets in an internal buffer. In turn, if this buffer becomes full, a flow control mechanism may be triggered to request the previous upstream node to refrain from transmitting additional packets until the network congestion disappears.
Transmission controller 300 is in charge of starting the local transmission according to the forwarding traffic, in order to achieve the transmission sequence node by node. For this purpose, transmission controller 300 may execute algorithms such as the ones described by reference to Figures 4 to 6.
Generating a first trigger
Figure 4 is a flowchart illustrating an example of steps for generating a first trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. According to some embodiments, these steps aim at monitoring the header of received data packets to be forwarded, in order to identify a particular item of information indicating the end of transmission of data packets from the upstream nodes.
As illustrated, after an initialization step (step 400), transmission controller 300 waits for a packet header from routing module 330 (step 402). When a packet header is received, the transmission controller extracts the values of the useful fields (step 404) to detect the end of data packet transmission from the upstream nodes. According to the illustrated example, the useful fields are the end flag and the hop counter. Next, it is checked whether the value of the end flag is equal to '1' and the value of the hop counter is equal to '1' (step 406). If the value of the end flag is not equal to '1' or the value of the hop counter is not equal to '1', the algorithm returns to step 402 to wait for a new header.
On the contrary, if the value of the end flag is equal to '1' and the value of the hop counter is equal to '1', transmission controller 300 generates a trigger signal denoted "first trigger" (step 408) to indicate that the last data packet to be forwarded has been received. This signal is used for transmitting local data, as described by reference to Figure 6.
Alternatively, the time-to-live field in the header may be used instead of the hop counter. In such a case, transmission controller 300 needs to get a configuration parameter in the initialization step 400. In case the time-to-live field is used, transmission controller 300 should know the initial value when a packet is generated so as to determine the time period at which received data packets should be ignored.
In case a node indicates the end of its local transmission by transmitting a specific packet to the next node in the transmission sequence, step 406 consists in identifying the specific header within the received packets.
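The Figure 4 loop can be sketched as follows in C. The wait_for_forwarded_header and raise_first_trigger primitives are assumed stand-ins for the forwardHeader signal and the trigger notification, not actual interfaces of the described controller.

```c
#include <stdint.h>

/* Fields copied from the forwardHeader signal (334). */
struct fwd_header {
    uint8_t end_flag;
    uint8_t hop_count;
};

/* Assumed blocking primitive: returns the next header copied by the
 * routing module when a data packet is forwarded. */
extern struct fwd_header wait_for_forwarded_header(void);

/* Assumed notification toward the transmission-control loop. */
extern void raise_first_trigger(void);

/* Sketch of the Figure 4 loop (steps 402-408). */
void first_trigger_task(void)
{
    for (;;) {
        struct fwd_header h = wait_for_forwarded_header();  /* step 402 */
        /* steps 404-406: is this the last packet forwarded exactly
         * once, i.e. the last packet of the previous upstream node? */
        if (h.end_flag == 1 && h.hop_count == 1) {
            raise_first_trigger();                          /* step 408 */
        }
    }
}
```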
Generating a second trigger
Figure 5a is a flowchart illustrating a first example of steps for generating a second trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. According to some embodiments, these steps aim at monitoring the status of transmission queue 331 in routing module 330 of sensor 200 to detect whether or not it remains empty during a predefined time.
In an initialization step (step 510), transmission controller 300 gets configuration parameters, in particular a configuration parameter corresponding to the timer value (for example, it may represent a duration of 500 µs expressed in system clock cycles).
Next, transmission controller 300 waits for the beginning of a new capturing period (step 511). When a notification signaling that a new capturing period begins is received (for example when signal capturingPeriod 311 is received), transmission controller 300 checks whether the transmission queue, for example transmission queue 331, is empty (step 512). This can be done using signal emptyStatus 335. If the transmission queue is empty, transmission controller 300 initializes the timer with the configuration value and starts the timer to be decremented at each system clock cycle (step 513).
Then, the timer is monitored to determine when the waiting time elapses (step 514), i.e., when the timer reaches the value '0'.
When the waiting time elapses, transmission controller 300 generates a trigger signal called "second trigger" (step 519) to indicate that the transmission queue has been empty during the predefined time and the algorithm returns to step 511, waiting for a new capturing period. As described by reference to Figure 6, this signal is used to enable transmission of local data. For the first node in the transmission sequence, the configuration value of the timer is preferably set to '0' (since there is no data packet to forward before transmitting local data, the local transmission may start immediately after the beginning of the capturing period).
In the case where the waiting time has not elapsed (i.e., the timer has not reached the value '0', step 514), transmission controller 300 decrements the timer (step 515) and then checks if a new capturing period starts (step 516). If a new capturing period starts, the timer is stopped (step 518) and the algorithm returns to step 512. On the contrary, if no new capturing period starts, transmission controller 300 checks the status of the transmission queue (step 517). If the transmission queue is still empty, the algorithm returns to step 514 to check the timer value. On the contrary, if the transmission queue is not empty, this means that a new data packet is being forwarded. Accordingly, the timer is stopped (step 518), and the algorithm returns to step 512 to wait for the empty status.
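As a software sketch of the Figure 5a flowchart (in the described system this logic may well live in the programmable hardware of the communication controller), the following C task reproduces the stop/re-arm behaviour. The four extern hooks are assumed stand-ins for the capturingPeriod and emptyStatus signals and the system clock.

```c
#include <stdbool.h>
#include <stdint.h>

extern bool new_capturing_period(void);   /* capturingPeriod 311 */
extern bool tx_queue_empty(void);         /* emptyStatus 335     */
extern void raise_second_trigger(void);
extern void wait_one_clock_cycle(void);

/* Sketch of the Figure 5a algorithm (steps 511-519): the timer runs
 * only while the transmission queue is empty and is re-armed each
 * time the queue empties again. timer_init is the per-node
 * configuration value, expressed in system clock cycles. */
void second_trigger_task_5a(uint32_t timer_init)
{
    for (;;) {
        while (!new_capturing_period())          /* step 511 */
            wait_one_clock_cycle();

        bool fired = false;
        while (!fired) {
            while (!tx_queue_empty())            /* step 512 */
                wait_one_clock_cycle();

            uint32_t timer = timer_init;         /* step 513 */
            while (timer > 0) {                  /* step 514 */
                wait_one_clock_cycle();
                timer--;                         /* step 515 */
                if (new_capturing_period() ||    /* step 516 */
                    !tx_queue_empty())           /* step 517 */
                    break;                       /* step 518: stop, re-arm */
            }
            if (timer == 0) {
                raise_second_trigger();          /* step 519 */
                fired = true;                    /* back to step 511 */
            }
        }
    }
}
```

For the first node in the transmission sequence, calling this task with timer_init set to 0 fires the trigger as soon as the queue is seen empty, matching the immediate start described above.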
In case of a link or node failure in the daisy chain, the downstream nodes no longer receive data packets to forward. In such a case, the expiration of the timer generates the second trigger so that a node may transmit local data. To prevent several nodes from generating the second trigger at the same time, the configuration value of the timer is different in each node. Except for the first node in the transmission sequence, whose configuration value is set to 0, the other nodes have non-null configuration values slightly different from each other according to their position in the transmission sequence. For instance, the configuration value may be set to 500 µs in the second node in the transmission sequence, then to 520 µs in the third node, then 540 µs in the fourth node, and so on. In case of a transmission sequence from the farthest node to the closest node from the destination server, the setting may be done automatically by monitoring the hop counter. To that end, each node may memorize the highest hop counter value observed in the forwarded packets, and set its timer to 500+20*(max_hop_counter-1) in µs.
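The automatic setting suggested above condenses into a small helper. The base value and increment follow the 500 µs / 20 µs example, which is an illustration rather than a prescribed configuration.

```c
#include <stdint.h>

#define BASE_TIMEOUT_US 500u  /* assumed base value from the example  */
#define STEP_US          20u  /* assumed per-position increment       */

/* Per-node second-trigger timeout: 0 for the first node in the
 * transmission sequence (it forwards nothing, so it never observes a
 * hop counter), then 500 us, 520 us, 540 us, ... max_hop_counter is
 * the highest hop-counter value observed in forwarded packets, which
 * reflects the node position when the sequence follows the chain. */
uint32_t second_trigger_timeout_us(uint32_t max_hop_counter)
{
    if (max_hop_counter == 0)
        return 0;
    return BASE_TIMEOUT_US + STEP_US * (max_hop_counter - 1u);
}
```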
Figure 5b is a flowchart illustrating a second example of steps for generating a second trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. They are a variant of the ones illustrated in Figure 5a. According to this variant, the timer enables transmission of local data to be triggered in case no data packet has been forwarded since the beginning of the capturing period.
In an initialization step (step 520), transmission controller 300 obtains configuration parameters, in particular a configuration parameter corresponding to the timer value (for example, it may represent a duration of 500 µs expressed in system clock cycles).
Next, transmission controller 300 waits for the beginning of a new capturing period (step 521). When a notification is received signaling that a new capturing period has begun (for example when signal capturingPeriod 311 is received), transmission controller 300 checks whether the transmission queue, for example transmission queue 331, is empty (step 522). This can be done using signal emptyStatus 335. If the transmission queue is empty, transmission controller 300 initializes the timer with the configuration value and starts the timer to be decremented at each system clock cycle (step 523).
Next, the timer is monitored to determine when the waiting time elapses (step 524), i.e., when the timer reaches the value '0'.
When the waiting time elapses, transmission controller 300 generates a trigger signal called "second trigger" (step 528) to indicate that the transmission queue has been empty for the predefined time and the algorithm returns to step 521, waiting for a new capturing period. As described by reference to Figure 6, this signal is used to enable transmission of local data. For the first node in the transmission sequence, the configuration value of the timer is preferably set to '0' (since there is no data packet to forward before transmitting local data, the local transmission may start immediately after the beginning of the capturing period).
If the waiting time has not elapsed (i.e., the timer has not reached the value '0', step 524), transmission controller 300 decrements the timer (step 525) and then checks the status of the transmission queue (step 526). If the transmission queue is still empty, the algorithm returns to step 524 to check the timer value. On the contrary, if the transmission queue is not empty, this means that a new data packet is being forwarded.
Accordingly, the timer is stopped (step 527), and the algorithm returns to step 521 to wait for a new capturing period.
According to this variant, it is assumed that the configuration timer value is lower than the capturing period. As for the case described by reference to Figure 5a, the configuration value of the timer is different in each node to avoid several nodes generating the second trigger at the same time. Again, the configuration value may be set to 500 µs in the second node in the transmission sequence, then to 520 µs in the third node, then 540 µs in the fourth node, and so on.
Figure 5c is a flowchart illustrating a third example of steps for generating a second trigger for transmitting local data within a daisy-chain network. These steps may be carried out in a transmission controller such as transmission controller 300 of sensor 200. They are a variant of the ones illustrated in Figures 5a and 5b. According to this variant, the timer starts after the beginning of the capturing period and is not stopped or restarted afterward. Therefore, a node may start the transmission of local data despite there still being some data packets to forward.
In an initialization step (step 530), transmission controller 300 gets configuration parameters, in particular a configuration parameter corresponding to the timer value (for example, it may represent a duration of 500 µs expressed in system clock cycles).
Next, transmission controller 300 waits for the beginning of a new capturing period (step 531). When a notification is received signaling that a new capturing period has begun (for example when signal capturingPeriod 311 is received), transmission controller 300 initializes the timer with the configuration value and starts the timer to be decremented at each system clock cycle (step 532).
Next, the timer is monitored to determine when the waiting time elapses (step 533), i.e., when the timer reaches the value '0'.
When the waiting time elapses, transmission controller 300 generates a trigger signal called "second trigger" (step 535) to indicate that the transmission queue has been empty for the predefined time and the algorithm returns to step 531, waiting for a new capturing period. As described by reference to Figure 6, this signal is used to enable transmission of local data. For the first node in the transmission sequence, the configuration value of the timer is preferably set to '0' (since there is no data packet to forward before transmitting local data, the local transmission may start immediately after the beginning of the capturing period).
If the waiting time has not elapsed (i.e., the timer has not reached the value '0', step 533), transmission controller 300 decrements the timer (step 534) and then returns to step 533 to check the timer value.
According to this variant, it is assumed that the configuration timer value is lower than the capturing period. As for the case described by reference to Figures 5a and 5b, the configuration value of the timer is different in each node to avoid several nodes generating the second trigger at the same time. For example, the configuration value may be set to 1 ms in the second node in the transmission sequence, then it may be set to 2 ms in the third node, and so on.
It is noted that the monitoring of the transmission queue (e.g., transmission queue 331) may be replaced by the monitoring of the reception queue (e.g., reception queue 332), in the algorithms described by reference to Figures 5a, 5b, and 5c.
Controlling transmission of local data
Figure 6 is a flowchart illustrating an example of steps to control transmission of local data in a sensor. For the sake of illustration, these steps may be executed in transmission controller 300 in sensor 200, using the trigger signals generated in the flowcharts described by reference to Figure 4 and to Figures 5a, 5b, or 5c.
Further to an initialization step (step 600), transmission controller 300 disables the local transmission path (step 601). This step may consist in de-asserting signal txReq 321 sent to packetizer module 320.
Next, transmission controller 300 waits for the beginning of a new capturing period (step 602). When a notification signaling that a new capturing period has begun is received (for example when signal capturingPeriod 311 is received), transmission controller 300 checks whether a first trigger signal has been generated (step 603), e.g., if a particular item of information indicating the end of transmission of upstream nodes has been identified.
If no first trigger signal has been generated, transmission controller 300 checks whether a second trigger signal has been generated (step 604), e.g., if no data packet has been forwarded during a predefined time. If no second trigger signal has been generated, transmission controller 300 returns to step 603.
On the contrary, if a first or a second trigger signal has been generated, transmission controller 300 enables transmission of local data (step 605). This step may consist in asserting signal txReq 321 sent to packetizer module 320.
Optionally, transmission controller 300 may disable the forwarding path (step 606). This may consist in setting signal routingPriority (sent to routing module 330) to the value "local" (so that arbiter 333 blocks the forwarding data path). By default, the value of routingPriority may be "none".
Next, transmission controller 300 checks whether the local transmission is completed (step 607). When a corresponding notification is received (e.g., when signal txDone 322 is received), the transmission controller returns to step 601.
Optionally, the forwarding path may be enabled by setting signal routingPriority to the value "none" or "forward" (step 608); with the value "forward", arbiter 333 blocks the data path from packetizer module 320.
In case a node has not completed its local transmission at the beginning of a new capturing period, then the current transmission may continue until completion. The captured data, associated with the new capturing period, are not processed and are not transmitted. The steps of disabling and enabling the forwarding path (steps 606 and 608) may be useful to avoid a mixture of data packets from different nodes.
As a variant, the transmission of the current local data may be aborted if a new capturing period starts (in such a case, the transmission controller returns to step 603 to wait for a trigger to transmit the newly captured data).
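The overall control flow of Figure 6 (steps 600 to 608) may be summarized by the following sketch. The helper functions are hypothetical stand-ins for the hardware interface signals described above (txReq 321, routingPriority, capturingPeriod 311, and txDone 322), not a definitive implementation.

/*
 * Minimal sketch, assuming a polled environment, of the Figure 6
 * control loop in transmission controller 300.
 */
#include <stdbool.h>

enum routing_priority { ROUTING_NONE, ROUTING_LOCAL, ROUTING_FORWARD };

extern void set_tx_req(bool asserted);                /* drives txReq 321 */
extern void set_routing_priority(enum routing_priority p);
extern bool new_capturing_period(void);               /* capturingPeriod 311 */
extern bool first_trigger_raised(void);               /* end of upstream data */
extern bool second_trigger_raised(void);              /* waiting time lapsed */
extern bool local_transmission_done(void);            /* txDone 322 */

void transmission_controller_loop(void)
{
    for (;;) {
        set_tx_req(false);                    /* step 601: disable local path */

        while (!new_capturing_period())       /* step 602 */
            ;
        while (!first_trigger_raised() &&     /* step 603 */
               !second_trigger_raised())      /* step 604 */
            ;

        set_tx_req(true);                     /* step 605: enable local data */
        set_routing_priority(ROUTING_LOCAL);  /* step 606 (optional) */

        while (!local_transmission_done())    /* step 607 */
            ;

        set_routing_priority(ROUTING_NONE);   /* step 608 (optional) */
    }
}

In this sketch the optional steps 606 and 608 are always executed; an implementation may omit them, or return to step 603 when a new capturing period aborts the current transmission, as in the variant above.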
Although the present invention has been described herein above with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a person skilled in the art which lie within the scope of the present invention.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in claims not dependent upon each other does not indicate that a combination of these features cannot be advantageously used.

Claims (18)

1. A method for transmitting media data in a communication network comprising a first, a second, and a third processing device connected in cascade, the method comprising, at the first processing device:
initializing and starting a timer upon detecting the beginning of a media data capturing period;
receiving at least one packet of media data from the second processing device and forwarding the at least one received packet to the third processing device;
determining whether a time delay has elapsed since receiving the last packet from the second processing device, the timer being used for determining whether the time delay has elapsed; and
in response to determining that a time delay has elapsed since receiving the last packet from the second processing device, transmitting to the third processing device at least one packet of media data obtained locally by the first processing device.
2. The method of claim 1, wherein a packet of media data received from the second processing device, that is to be forwarded to the third processing device, is temporarily stored in a buffer, the timer being initialized and started once determining that the buffer is empty, after having detected the beginning of a media data capturing period, wherein the buffer is a transmission queue or a reception queue.
3. The method of claim 1 or claim 2, wherein the timer is stopped when receiving a packet of media data from the second processing device and forwarding the received packet to the third processing device.
4. The method of claim 3 depending on claim 2, wherein the timer is restarted when it is determined that the buffer is empty.
5. The method of claim 4, wherein restarting the timer further comprises reinitializing the timer.
6. The method of any one of claims 1 to 5, further comprising determining a value of the timer, values of the timers in the processing devices being such that the values increase with the position of the processing devices along a transmission sequence of processing devices.
7. The method of claim 6, wherein the value of the timer is determined as a function of an item of information received from the second processing device within a packet.
8. The method of claim 7, wherein the received item of information is related to the position of the second processing device along the transmission sequence.
9. The method of any one of claims 1 to 8, further comprising identifying the last packet of a set of packets of media data received from the second processing device, at least one packet of media data obtained locally by the first processing device being transmitted to the third processing device upon identifying the last packet of the set of packets.
10. The method of claim 9, wherein identifying the last packet comprises identifying a particular item of information within a received packet.
11. The method of claim 10, wherein the particular item of information comprises an indication of the last packet of a set of packets obtained locally by a processing device.
12. The method of claim 11, wherein the particular item of information further comprises a value of a hop counter, the value of a hop counter of a packet representing a number of times the packet has been forwarded.
13. The method of claim 11 or claim 12, wherein the particular item of information further comprises a packet time-to-live indicator.
14. The method of claim 9, wherein identifying the last packet comprises identifying a specific packet.
15. The method of any one of claims 7 to 14, wherein identifying the last packet comprises determining an identifier of a destination processing device, identifying the last packet further comprising comparing an identifier of a processing device and an identifier of a destination processing device.
16. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing each of the steps of the method according to any one of claims 1 to 15 when loaded into and executed by the programmable apparatus.
17. A non-transitory computer-readable storage medium storing instructions of a computer program for implementing each of the steps of the method according to any one of claims 1 to 15.
18. A device for transmitting media data, the device comprising a processing unit configured for carrying out each of the steps of the method according to any one of claims 1 to 15.
GB2008750.8A 2020-06-09 2020-06-09 Method, device, and computer program for improving transmission of data in daisy-chain networks Active GB2595887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2008750.8A GB2595887B (en) 2020-06-09 2020-06-09 Method, device, and computer program for improving transmission of data in daisy-chain networks

Publications (3)

Publication Number Publication Date
GB202008750D0 GB202008750D0 (en) 2020-07-22
GB2595887A true GB2595887A (en) 2021-12-15
GB2595887B GB2595887B (en) 2023-08-23

Family

ID=71615874

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2008750.8A Active GB2595887B (en) 2020-06-09 2020-06-09 Method, device, and computer program for improving transmission of data in daisy-chain networks

Country Status (1)

Country Link
GB (1) GB2595887B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020083233A1 (en) * 2000-12-21 2002-06-27 Owen Jonathan M. System and method of allocating bandwidth to a plurality of devices interconnected by a plurality of point-to-point communication links
US9594434B1 (en) * 2014-06-27 2017-03-14 Amazon Technologies, Inc. Autonomous camera switching
GB2563438A (en) * 2017-06-16 2018-12-19 Canon Kk Transmission method, communication device and communication network
GB2569808A (en) * 2017-12-22 2019-07-03 Canon Kk Transmission method, communication device and communication network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurements and Control Systems", 24 July 2008, IEEE INSTRUMENTATION AND MEASUREMENTS SOCIETY

Also Published As

Publication number Publication date
GB2595887B (en) 2023-08-23
GB202008750D0 (en) 2020-07-22

Similar Documents

Publication Publication Date Title
US7730230B1 (en) Floating frame timing circuits for network devices
US9699091B2 (en) Apparatus and method for time aware transfer of frames in a medium access control module
US11050501B2 (en) Performing PHY-level hardware timestamping and time synchronization in cost-sensitive environments
US10158444B1 (en) Event-driven precision time transfer
US11552871B2 (en) Receive-side timestamp accuracy
JP6157760B2 (en) Communication device, time correction method, and network system
JP2002517132A (en) Time stamp synchronization method for reservation-based TDMA protocol
US20030142696A1 (en) Method for ensuring access to a transmission medium
JPH0373636A (en) Data synchronizing transmission system
JP5127482B2 (en) Timing synchronization method, synchronization apparatus, synchronization system, and synchronization program
JP2006101539A (en) Network transfer arrangement
EP2817902A2 (en) Method and network node for processing a precision time protocol
US12047300B2 (en) Packet forwarding method, device, and system
CN112929117A (en) Compatible definable deterministic communication Ethernet
GB2595884A (en) Method, device and computer program for robust data transmission in daisy-chain networks
JP2011040895A (en) Information processing apparatus, control method thereof and program
EP2628274A1 (en) Reducing continuity check message (ccm) bursts in connectivity fault management (cfm) maintenance association (ma)
GB2595887A (en) Method, device, and computer program for improving transmission of data in daisy-chain networks
CN117220810A (en) Asynchronous data transmission method and system based on POWERLINK protocol
GB2616735A (en) Method, device, and computer program for robust data transmission in daisy-chain networks
CN110958072B (en) Multi-node audio and video information synchronous sharing display method
JP2023061877A (en) Time-division schedule adjustment method, communication device, and time-division schedule adjustment method
WO2024054912A1 (en) System and methods for network data processing