GB2557613A - System and method for transmitting data and system and method for receiving data - Google Patents


Info

Publication number
GB2557613A
GB2557613A (application GB1621076.7A)
Authority
GB
United Kingdom
Prior art keywords
data
protocol stacks
protocol
control information
transmitting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1621076.7A
Other versions
GB201621076D0 (en)
GB2557613B (en)
Inventor
Closset Arnaud
Caillerie Alain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to GB1621076.7A
Publication of GB201621076D0
Publication of GB2557613A
Application granted
Publication of GB2557613B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/166 IP fragmentation; TCP segmentation

Abstract

A system for transmitting data from a user application 400T to a remote device comprises: a plurality of protocol stacks 420T, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks, a segmenting module 410T configured for segmenting data originating from the user application into data blocks, and for dispatching the data blocks to the protocol stacks, a combining module 430T configured for ordering data originating from the plural protocol stacks in order to generate a data flow, based on control information originating from the segmenting module, the control information being relative to the order in which data originating from the user application is segmented, and a network interface controller configured for receiving data flow from the combining module and for transmitting the data flow over a communication network.

Description

(71) Applicant(s):
Canon Kabushiki Kaisha, 30-2 Shimomaruko 3-Chome, Ohta-ku, 146-8501 Tokyo, Japan
(72) Inventor(s):
Arnaud Closset; Alain Caillerie
(74) Agent and/or Address for Service:
Santarelli, 49, Avenue des Champs-Elysees, Paris 75008, France (including Overseas Departments and Territories)
(56) Documents Cited:
CN 103532955 A; US 20150295782 A; US 20150281407 A; US 20150281112 A; US 20140019982 A; US 20090097480 A
(58) Field of Search:
INT CL G06F, H04L; Other: ONLINE: WPI, EPODOC, TXTE, INSPEC
(54) Title of the Invention: System and method for transmitting data and system and method for receiving data
(57) Abstract Title: Communication system with plural independent protocol stacks
[Figure GB2557613A_D0001 — Fig. 4T: Stream transmission]
[Figure GB2557613A_D0002 — TCP/IP protocol architecture]
[Figures GB2557613A_D0003 and GB2557613A_D0004 — Multipath TCP (MPTCP) architecture]
[Figure GB2557613A_D0005 — Standard SMP hardware architecture (processing cores 1 to n)]
[Figures GB2557613A_D0006 to GB2557613A_D0009 — Figs. 4T and 4R: Stream transmission / Stream reception (Tx Application 400T, Rx Application 400R)]
[Figure GB2557613A_D0010 — Tx Applications 400T, 400T']
[Figure GB2557613A_D0011 — Fig. 5b: Rx Applications 400R, 400R', NIC, standard remote device]
[Figure GB2557613A_D0012 — Tx Application 400T (600T, 600T'), standard remote device]
[Figure GB2557613A_D0013]
[Figure GB2557613A_D0014 — Fig. 7a]
[Figure GB2557613A_D0015 — Tx scheduler application status (Block ID, Block size, Tx request time, Expected Rx time, Stack ID)]
[Figure GB2557613A_D0016]
[Figure GB2557613A_D0017]
[Figures GB2557613A_D0018 and GB2557613A_D0019 — Fig. 8: Send_vector_struct / Ack synthesis status table]
[Figure GB2557613A_D0020]
[Figure GB2557613A_D0021]
[Figure GB2557613A_D0022]
[Figures GB2557613A_D0023 and GB2557613A_D0024]
[Figure GB2557613A_D0025]
[Figure GB2557613A_D0026]
[Figure GB2557613A_D0027 — Fig. 13]
System and method for transmitting data and system and method for receiving data
The present invention concerns a system and a method for transmitting data over a communication network.
The invention also concerns a system and a method for receiving data.
Data transmission speeds over Ethernet connections continue to increase, typically from a few gigabits per second (Gbps) to tens of Gbps and beyond, and host central processing units (CPUs) are not capable of processing data at such high rates.
In order to improve throughput, processing resources are added, with network protocol stacks executed in software over multiple processing cores.
Processing overheads experienced during protocol stack execution, mainly regarding the TCP/IP (“Transmission Control Protocol/Internet Protocol”) based protocol family, are mainly generated by intermediate memory copies made between Network Interface Controller (NIC) devices and the end application layers (in particular application buffering), and by the synchronization required between applications, NIC devices, and network protocol stacks.
It may be noted that parallelization of processing is a way to theoretically extend processing performance. However, when using a single protocol stack instance (for example the TCP/IP protocol family), the same data structures are shared, requiring scheduling of the use of data structures and creating processing overheads.
In this respect, it may be noted that as the number of cores increases, the relative time each individual CPU spends waiting on the protocol increases, while the relative individual CPU utilization for protocol execution decreases.
Three types of parallelisms may be applied for protocol processing scalability:
In a pipeline parallelism approach, the processing of each protocol layer is assigned to a dedicated processor core. Even though this type of parallelism is simple to implement, the overall achievable throughput is limited by the slowest layer processor.
In a packet-based parallelism approach, each incoming data packet is assigned to one processor core from a pool of processor cores. Even though the levels of parallelism and scalability may be high, this approach is not adapted to stateful protocols, such as the TCP/IP protocol.
In a flow parallelism approach, all data packets belonging to a same flow are routed to a dedicated processor core. Even though the implementation may be easy and the scalability high, the balancing of processing resource utilization is limited.
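The flow parallelism approach above can be sketched in a few lines of Python. This is an illustrative model, not taken from the patent: a hash of the connection 4-tuple pins every packet of a flow to the same core, which is what a stateful protocol such as TCP needs; the function name and the choice of CRC32 are assumptions.

```python
import zlib

def flow_to_core(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 num_cores: int) -> int:
    """Map a flow's 4-tuple to a fixed processor core.

    Hashing the 4-tuple keeps every packet of one flow on the same core,
    so per-connection state never needs to be shared between cores.
    """
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_cores

# All packets of the same flow land on the same core:
core_a = flow_to_core("10.0.0.1", 40000, "10.0.0.2", 80, num_cores=4)
core_b = flow_to_core("10.0.0.1", 40000, "10.0.0.2", 80, num_cores=4)
assert core_a == core_b
```

The trade-off noted in the text follows directly: the mapping is fixed per flow, so a single heavy flow cannot be spread across cores, limiting load balance.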
A standard TCP/IP network protocol processing architecture is represented by Figure 1.
The TCP/IP (“Transmission Control Protocol/Internet Protocol”) protocol is used to transfer application data between two remote applications 400 through a communication network 1020. The TCP/IP protocol operates in connected mode, i.e. a logical connection between end systems is established before the transmission of data.
Data originating from application 400 is transferred to TCP/IP protocol stack 420 by a socket 405.
TCP/IP protocol stack 420 implements transport layer 1010, routing layer 1011 and is associated with a Network Interface Controller (NIC) 440 via a driver 1014.
It may be noted that a socket 405 is an end-point in a communication through a network, and that a NIC device 440 is used to access the network 1020.
To operate TCP/IP protocols, a plurality of socket buffers 1013 are necessary to sustain application data (network packet headers and payload). Also, a transmission control block (Tcb) 1012 is a data structure which contains information about the connection such as connection states, flow control and congestion control information.
It may be noted that data processed by a network using Ethernet are organized in packets. Each packet mainly comprises a header and payload.
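The header/payload structure of a packet can be illustrated with a minimal framing sketch. The field layout (a 32-bit sequence number and a 16-bit payload length) is an assumption chosen for the example, not a format defined by the patent:

```python
import struct

# Minimal packet layout: a fixed-size header followed by the payload.
HEADER = struct.Struct("!IH")  # 32-bit sequence number, 16-bit payload length

def build_packet(seq: int, payload: bytes) -> bytes:
    """Prepend a header carrying the sequence number and payload length."""
    return HEADER.pack(seq, len(payload)) + payload

def parse_packet(packet: bytes):
    """Split a packet back into its sequence number and payload."""
    seq, length = HEADER.unpack_from(packet)
    return seq, packet[HEADER.size:HEADER.size + length]

pkt = build_packet(7, b"hello")
assert parse_packet(pkt) == (7, b"hello")
```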
In order to minimize synchronization and communication between processing cores, the use of flow parallelism is preferred.
Multipath TCP (MPTCP) is a development of TCP. MPTCP is directed to allowing a Transmission Control Protocol (TCP) connection to use multiple paths to maximize resource usage and increase redundancy.
It may be noted that MPTCP protocol allows a plurality of paths to be used between a pair of hosts, while presenting a single TCP connection to the application layer.
Therefore, MPTCP creates multiple sub-flows from a single flow originating from an application. The TCP/IP protocol is executed to generate network packets for each sub-flow. Sub-flow packets are transmitted over different network interface controllers (NICs), increasing the global bandwidth from a network perspective.
A structural representation of a Multipath TCP/IP network protocol processing architecture is represented by Figure 2.
In MPTCP architecture, multiple instances of TCP/IP protocol stacks 420a - 420n are used for the TCP/IP protocol execution of an application 400. Each TCP/IP protocol stack 420a - 420n individually executes TCP/IP protocol for a TCP/IP connection associated with a Network Interface Controller (NIC) 440a - 440n.
Within an MPTCP layer 210, a packet scheduler and control sending function divides the byte-stream originating from application 400 in order to build TCP segments for transmission on sub-flows.
For transmission, the MPTCP layer 210 and the individual TCP/IP protocol stacks 420a - 420n share a same transmit buffer 240, a same Meta socket 220 and a same MPTCP control block 230. The transmit buffer 240 contains in particular a list of TCP segments to send and data sequence information, or metadata, that will be used for reconstructing the flow originating from the application.
The Meta socket 220 provides an interface between the application and the plurality of individual TCP/IP protocol stacks 420a - 420n, and communicates with the transmit buffer 240, which is used to store segments to be sent, with the necessary metadata (“Data Seq”), for the lower TCP layers.
It may be noted that metadata is necessary in particular to allow the reassembly of segments arriving in multiple sub-flows with differing network delays.
Individual TCP/IP protocol stacks 420a - 420n are modified with respect to conventional TCP/IP protocol stacks (as shown in Figure 1) to handle the “Data Seq” and “TCP Seq” association within TCP header extensions, and to handle Meta socket 220 and MPTCP control block 230 access. “Data Seq” is required by MPTCP to keep track of the byte ordering sequence of application data when TCP segments are multiplexed over the different TCP/IP sub-flows.
TCP retransmission queues are managed outside the sub-flow level to allow retransmission over any sub-flow.
As for the transmission path, in the reception path, MPTCP layer 210 and individual TCP/IP protocol stacks 420a - 420n share a same receive buffer 240, a same Meta socket 220 and a same control block 230.
Received TCP segments and metadata information (“Data Seq”) are stored in the Meta socket buffer 240. Packets are re-ordered by MPTCP layer 210, according to “Data Seq” values within Meta socket buffer 220.
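The re-ordering of segments by “Data Seq” values described above can be sketched as follows. This is a simplified, illustrative model (class and method names are hypothetical): segments arriving out of order from different sub-flows are buffered until the hole at the head of the byte stream is filled.

```python
import heapq

class DataSeqReassembler:
    """Reassemble segments arriving out of order from several sub-flows,
    using their data sequence number (byte offset in the original flow)."""

    def __init__(self):
        self._pending = []   # min-heap of (data_seq, payload)
        self._next_seq = 0   # next byte offset expected by the application

    def push(self, data_seq: int, payload: bytes) -> bytes:
        """Buffer a segment; return any bytes now deliverable in order."""
        heapq.heappush(self._pending, (data_seq, payload))
        out = b""
        while self._pending and self._pending[0][0] == self._next_seq:
            seq, chunk = heapq.heappop(self._pending)
            out += chunk
            self._next_seq += len(chunk)
        return out

r = DataSeqReassembler()
assert r.push(5, b"world") == b""            # hole at offset 0: buffered
assert r.push(0, b"hello") == b"helloworld"  # hole filled: both delivered
```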
Thus, MPTCP architecture uses some data structures which are shared by the plurality of individual TCP/IP protocol stacks and by the MPTCP layer 210. Sharing data structures prevents taking full advantage of the parallelization. In particular, due to sharing data structures, overheads are experienced during protocol execution.
In addition, in MPTCP, as two levels of sequence numbers are managed and shared between devices (“Data Seq” and “TCP Seq”), MPTCP is required to be used on both ends of the communication and not only at one of them.
The present invention is directed to providing a system for transmitting and for receiving data, making it possible to achieve a high throughput, regardless of the use of multipath TCP/IP by the communication end receiving or transmitting the data respectively.
To that end, according to a first aspect, the present invention concerns a system for transmitting data originating from at least one user application to a remote device, said transmitting system comprising for a user application:
- a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- a segmenting module configured for segmenting data originating from said user application into a plurality of data blocks, and for dispatching said plurality of data blocks to the plurality of protocol stacks,
- a combining module configured for ordering data originating from at least one of the plurality of protocol stacks in order to generate a data flow, based on first control information originating from said segmenting module, the first control information being relative to the order in which data originating from said user application is segmented, and
- a network interface controller configured for receiving said data flow from the combining module and for transmitting said data flow over a communication network.
Thus, the processing of data originating from the user application is parallelized over the plurality of protocol stacks enabling increased throughput, while data blocks originating from the protocol stacks are ordered to generate a unique data flow to be transmitted over a communication network.
Therefore, the processing of data over a plurality of protocol stacks, or multipath processing, is internal to the transmitting system, and data may be received and processed by any type of remote device, not exclusively by a remote device using the MPTCP protocol.
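The interplay between the segmenting module, the per-stack queues, and the combining module can be sketched in Python. This is an illustrative model under assumed policies (round-robin dispatch, and control records carrying the stack identifier and block size), not the patent's implementation:

```python
from collections import deque

def segment(data: bytes, block_size: int, num_stacks: int):
    """Split application data into blocks, dispatch them round-robin to
    per-stack queues, and record the first control information: for each
    block in order, the target stack id and the block size."""
    per_stack = [deque() for _ in range(num_stacks)]
    control = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        stack_id = (i // block_size) % num_stacks
        per_stack[stack_id].append(block)
        control.append((stack_id, len(block)))
    return per_stack, control

def combine(per_stack, control) -> bytes:
    """Re-order blocks popped from the per-stack queues into one data
    flow, using only the control information from the segmenting module."""
    flow = b""
    for stack_id, size in control:
        block = per_stack[stack_id].popleft()
        assert len(block) == size
        flow += block
    return flow

stacks, ctrl = segment(b"abcdefghij", block_size=3, num_stacks=2)
assert combine(stacks, ctrl) == b"abcdefghij"
```

Because each stack only sees its own queue, no data structure is shared between stacks; the combining step needs nothing beyond the control records to restore the original byte order.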
According to a feature, the data flow originating from said combining module is organized in packets, the packets being generated by said plurality of protocol stacks from said data blocks generated by said segmenting module.
It may be noted that a packet is a unit of data to be transferred in a packet-switched communication network, a packet mainly comprising a header and payload.
Thus, each protocol stack is configured to generate packets to be transferred over a communication network from data blocks originating from the segmenting module, and the combining module is configured to order the packets originating from at least one of the plurality of protocol stacks.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
Thus, a single device comprises the plurality of processing resources, and data originating from at least one user application is transmitted from this single device to a remote device.
According to another feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
Thus, a user application may use processing resources in different devices, increasing the modularity of the data processing parallelization.
According to a feature, the segmenting module is configured to dispatch the plurality of data blocks into the plurality of protocol stacks taking into account second control information originating from the combining module, said second control information representing the status of the processing load of at least one of the plurality of protocol stacks.
Thus, the data processing parallelization is optimized according to the processing load of protocol stacks, and the throughput may be increased.
According to a feature, the combining module is further configured for generating data acknowledgment synthesis for the plurality of protocol stacks from data acknowledgment received from the remote device.
According to a feature, the first control information originating from the segmenting module comprises for each data block generated by said segmenting module, the size of the data block and a parameter identifying a protocol stack to which the data block is dispatched.
Thus, the combining module orders data blocks originating from protocol stacks and generates a unique data flow by using the data block size and the information concerning the protocol stack processing the data blocks.
According to a feature, a socket is associated with each protocol stack, said plurality of data blocks being dispatched to the plurality of protocol stacks respectively through a plurality of sockets.
Thus, the segmenting module pushes data blocks into a protocol stack through an associated socket.
According to a second aspect, the present invention concerns a system for receiving data originating from a user application in a remote device, comprising:
- a network interface controller for receiving a data flow originating from a remote device, said data flow being organized in data units,
- a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- a multiplexing module for dispatching the data units forming the received data flow into the plurality of protocol stacks, and
- an assembling module for ordering data blocks originating from at least one of the plurality of protocol stacks, based on third control information originating from said multiplexing module, and for transmitting said ordered data blocks to a user application, said third control information comprising information relative to the order in which the data units are dispatched.
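The multiplexing/assembling pair described for the receiving system can be sketched along the same lines. Again this is an illustrative model with hypothetical names; round-robin dispatch stands in for whatever policy the multiplexing module actually applies, and the third control information is modelled as (stack id, unit size) records in dispatch order:

```python
from collections import deque

def multiplex(units, num_stacks):
    """Dispatch received data units to per-stack queues, recording the
    order in which the units are dispatched (third control information)."""
    per_stack = [deque() for _ in range(num_stacks)]
    control = []
    for i, unit in enumerate(units):
        stack_id = i % num_stacks
        per_stack[stack_id].append(unit)
        control.append((stack_id, len(unit)))
    return per_stack, control

def assemble(per_stack, control) -> bytes:
    """Pull processed blocks from the stacks in dispatch order and
    deliver a single ordered byte stream to the user application."""
    data = b""
    for stack_id, size in control:
        block = per_stack[stack_id].popleft()
        assert len(block) == size
        data += block
    return data

stacks, ctrl = multiplex([b"ab", b"cd", b"ef"], num_stacks=2)
assert assemble(stacks, ctrl) == b"abcdef"
```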
According to a feature, the data units are packets, the packets being processed by said plurality of protocol stacks in order to generate data blocks.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
According to a feature, the multiplexing module is configured to dispatch the data units forming said data flow into the plurality of protocol stacks taking into account fourth control information originating from the assembling module, said fourth control information representing the status regarding the processing load of protocol stacks.
According to a feature, the multiplexing module is further configured to schedule dispatch of data units forming said data flow into the plurality of protocol stacks based on the fourth control information, the fourth control information further comprising a reception window size for one protocol stack.
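Scheduling dispatch on a per-stack reception window can be illustrated as follows; the helper name and the largest-window-first policy are assumptions made for the example:

```python
def pick_stack(windows: list[int], unit_size: int):
    """Choose a protocol stack whose advertised reception window can
    still accept a data unit of `unit_size` bytes, preferring the
    stack with the largest remaining window."""
    candidates = [(w, i) for i, w in enumerate(windows) if w >= unit_size]
    if not candidates:
        return None  # back-pressure: no stack can accept the unit right now
    w, i = max(candidates)
    windows[i] -= unit_size  # consume part of the chosen stack's window
    return i

windows = [1000, 200]
assert pick_stack(windows, 500) == 0      # only stack 0 has room for 500 bytes
assert windows == [500, 200]
assert pick_stack(windows, 1000) is None  # no window large enough: wait
```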
According to a feature, the multiplexing module is further configured for generating data acknowledgment synthesis from acknowledgment received from the plurality of protocol stacks.
According to a feature, the third control information comprises for each data block originating from a protocol stack, a data block size and a parameter identifying the protocol stack processing said data block.
According to a feature, a socket is associated with each protocol stack, said plurality of data blocks being pulled from the plurality of protocol stacks to the multiplexing module respectively through a plurality of sockets.
According to a third aspect, the present invention concerns a method for transmitting data originating from at least one user application to a remote device, the transmitting method comprising for a user application:
- processing said data in a plurality of protocol stacks, each protocol stack being processed on a processing resource, data processing in a protocol stack being independent from the other protocol stacks,
- segmenting data originating from said user application into a plurality of data blocks,
- dispatching said plurality of data blocks to the plurality of protocol stacks,
- ordering data originating from at least one of the plurality of protocol stacks in order to generate a data flow, based on first control information relative to the order in which data originating from said user application is segmented, and
- transmitting said data flow over a communication network.
According to a feature, the data flow is organized in packets, the packets being generated by said plurality of protocol stacks from said data blocks.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
According to a feature, the dispatching of the plurality of data blocks into the plurality of protocol stacks takes into account second control information representing the status regarding the processing load of at least one of the plurality of protocol stacks.
According to a feature, the dispatching of data blocks into the plurality of protocol stacks is further scheduled based on the second control information, the second control information further comprising an acknowledgment of a data block processed by one protocol stack.
According to a feature, the dispatching of data blocks into the plurality of protocol stacks is further scheduled based on the second control information, the second control information further comprising an acknowledgment of several data packets associated with several data blocks processed by several protocol stacks.
According to a feature, the transmitting method further comprises generating data acknowledgment synthesis to the plurality of protocol stacks from data acknowledgment received from said remote device.
According to a feature, the first control information comprises for each generated data block, the size of the data block and a parameter identifying a protocol stack to which the data block is dispatched.
According to a feature, data blocks are dispatched to the plurality of protocol stacks respectively through a plurality of sockets associated respectively with the plurality of protocol stacks.
According to a fourth aspect, the present invention concerns a method for receiving data comprising:
- receiving a data flow originating from a remote device, said data flow being organized in data units,
- dispatching the data units forming the received data flow into a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- ordering data blocks originating from at least one of the plurality of protocol stacks, based on third control information originating from a multiplexing module, and
- transmitting said ordered data blocks to a user application, said third control information comprising information relative to the order in which the data units are dispatched.
According to a feature, the data units are packets, the packets being processed by said plurality of protocol stacks in order to generate data blocks.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
According to a feature, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
According to a feature, the dispatching the data units forming said data flow into the plurality of protocol stacks takes into account fourth control information originating from an assembling module, said fourth control information representing the status regarding the processing load of protocol stacks.
According to a feature, the dispatching the data units forming said data flow into the plurality of protocol stacks is further scheduled based on the fourth control information, the fourth control information further comprising a reception window size for one protocol stack.
According to a feature, the dispatching of data units forming said data flow into the plurality of protocol stacks is further scheduled based on the fourth control information, the fourth control information further comprising a reception window size for several protocol stacks.
According to a feature, the receiving method further comprises generating data acknowledgment synthesis from acknowledgment received from the plurality of protocol stacks.
According to a feature, the third control information comprises for each data block originating from a protocol stack, a data block size and a parameter identifying the protocol stack processing said data block.
According to a feature, the plurality of data blocks are pulled from the plurality of protocol stacks to the multiplexing module respectively through a plurality of sockets associated respectively with the plurality of protocol stacks.
According to a fifth aspect, the present invention concerns a system for processing data comprising a system for transmitting data according to the invention and a system for receiving data from said system for transmitting data.
According to a feature, the system for receiving data is according to any one of claims 12 to 19.
According to a feature, the system for receiving data is a system operating according to a TCP/IP network protocol or MPTCP/IP network protocol.
According to a sixth aspect, the present invention concerns a system for processing data comprising a system for receiving data according to the invention and a system for transmitting data to said system for receiving data.
According to a feature, the system for transmitting data is according to the invention.
According to a feature, the system for transmitting data is a system operating according to a TCP/IP network protocol or MPTCP/IP network protocol.
According to a seventh aspect of the invention there is provided a means for storing information which can be read by a computer or a microprocessor holding instructions of a computer program, for implementing a method for processing data according to the invention, when said information is read by said computer or said microprocessor.
The means for storing information may be partially or totally removable.
According to an eighth aspect of the invention there is provided a computer program product which can be loaded into a programmable apparatus, comprising a sequence of instructions for implementing a method for processing data according to the invention, when said computer program product is loaded into and executed by said programmable apparatus.
The objects according to the second, third, fourth, fifth, sixth, seventh and eighth aspects of the invention provide at least the same advantages as those provided by the transmitting system according to the first aspect.
Still other particularities and advantages of the invention will appear in the following description, made with reference to the accompanying drawings which are given by way of non-limiting example, and in which:
- Figure 1 represents a functional block diagram of a standard TCP/IP network protocol processing architecture as implemented in the prior art,
- Figure 2 represents a functional block diagram of a Multipath TCP/IP network protocol processing architecture as implemented in the prior art,
- Figure 3 represents an internal architecture organization of a Symmetric Multi-Processing (SMP) device,
- Figures 4T and 4R illustrate a functional block diagram of a multipath architecture for network protocol processing architecture in accordance with embodiments,
- Figure 5a illustrates a functional block diagram of the multi-path architecture presented by Figure 4T according to a first embodiment,
- Figure 5b illustrates a functional block diagram of the multi-path architecture presented by Figure 4R according to a first embodiment,
- Figure 6a illustrates a functional block diagram of the multi-path architecture presented by Figure 4T according to a second embodiment,
- Figure 6b illustrates a functional block diagram of the multi-path architecture presented by Figure 4R according to a second embodiment,
- Figure 7a represents a flowchart of block based segmenting operated within the architecture represented by Figure 4T for TCP/IP protocol transmission path according to an embodiment,
- Figure 7b represents data structure and storage means used by an alternative block based segmenting, the flowchart of which is represented by Figure 7d. This alternative block based segmenting manages flow control and allows reception of data blocks in their sequence order,
- Figure 7c illustrates a selection of transmission stacks so as to receive blocks in their sequence order,
- Figure 7d represents a flowchart of the alternative block based segmenting represented in Figure 7b which prevents head of line blocking and allows the receiver to obtain the transmitted blocks of data in their sequence order,
- Figure 8 represents data structure and storage means used by the architecture represented by Figure 4T for TCP/IP protocol transmission path according to an embodiment,
- Figure 9a represents a flowchart of a packet transmission algorithm used by the architecture represented by Figure 4T for TCP/IP transmission path according to an embodiment,
- Figure 9b represents a flowchart of a packet acknowledgment algorithm used by the architecture presented in Figure 4T for TCP/IP protocol transmission path according to an embodiment,
- Figure 10 represents data structure and storage means used by the architecture represented by Figure 4R for TCP/IP protocol reception path according to an embodiment,
- Figure 11a illustrates a flowchart of a packet reception algorithm used by the architecture represented by Figure 4R for TCP/IP protocol reception path according to an embodiment,
- Figure 11b and Figure 11c illustrate flowcharts of a packet acknowledgment algorithm used by the architecture presented in Figure 4R for TCP/IP protocol reception path according to an embodiment,
- Figure 12 illustrates a flowchart of block based assembling carried out within the architecture represented by Figure 4R for TCP/IP reception path according to an embodiment, and
- Figure 13 illustrates a flowchart of the re-assembling control algorithm using the table 1070 of Figure 10.
The invention applies to transmitting systems and receiving systems using multiple processors or multiple cores of a processor. An example of the architecture of a device using multiple processors (Symmetric Multi-Processing (SMP)) that may be used by the invention is represented by Figure 3.
It may be noted that this architecture scheme is widely employed in embedded systems, due to the easy programming model and the capability of modern operating systems to handle multiprocessing scheduling.
SMP architecture is well adapted for executing relatively independent tasks, but the shared memory can become a bottleneck, in particular due to the heavy data content synchronization which is necessary between processing cores.
Protocol processing parallelization over an SMP architecture therefore avoids shared memory bottlenecks, thus allowing performance scalability.
It may be noted that the invention applies to transmitting systems according to the invention transmitting information to either a receiver system with standard TCP/IP (or MPTCP/IP) protocol architecture or a receiver system according to the invention.
Similarly, the invention applies to receiving systems according to the invention receiving information from either a transmitting system with standard TCP/IP (or MPTCP/IP) protocol architecture or a transmitting system according to the invention.
Figure 3 represents the internal architecture organization of a Symmetric Multi-Processing (SMP) device according to an embodiment.
By construction, symmetric multi-processing architectures comprise a plurality of processing cores 3001a to 3001n. The processing cores 3001a to 3001n are similar. Symmetric multi-processing architectures further comprise a plurality of individual Level 1 cache memories 3002a to 3002n, each Level 1 cache memory 3002a to 3002n being associated with a respective processing core 3001a to 3001n. The Level 1 cache memories 3002a to 3002n share a common interconnection bus 3010.
An I/O controller 3005 is connected to the interconnection bus 3010 and is used to handle the transfer of a data flow (in particular network packets) issued to or from a Network Interface Controller (NIC) device 440 over the interconnection bus 3010, with either Level 1 cache memories 3002a to 3002n, or with a memory 3007. The memory 3007 is connected to the interconnection bus 3010 through a centralized memory controller 3006.
In flow based parallelism, such as is the case in the invention, each processing core 3001a to 3001n is allocated to the execution of a TCP/IP protocol stack.
As will be described below, in the invention, upper layer functions associated with multipath protocol processing management are implemented on a same processing core 3001a to 3001n, while lower layer packet based decision functions are implemented in the I/O controller 3005.
Of course, the invention applies to other types of architecture using multiple processors.
Figures 4T and 4R illustrate functional block diagrams of a multipath architecture for network protocol processing architecture according to an embodiment.
Figure 4T represents, according to an embodiment, a functional block diagram of a multi-path architecture for network protocol processing architecture for the transmission data path.
Transmission paths are mainly managed by two functions, a first function called “block-based traffic segmentation” and a second function called “Tx packet-based engine”.
The first function “block-based traffic segmentation” is implemented by the segmenting module 410T and the second function “Tx packet-based engine” is implemented by the combining module 430T.
The first function “block-based traffic segmentation” is in charge of splitting data originating from user application 400T to be transmitted to the different TCP/IP protocol stacks or TCP/IP protocol stack instances 420Ta to 420Tc. Each TCP/IP protocol stack processes several data blocks or blocks of payload.
It may be noted that each protocol stack instance 420Ta to 420Tc is processed on a processing resource, in particular in a processor core.
Data originating from the user application 400T is transmitted to the segmenting module 410T through a transmission socket 405T.
This first function “block-based traffic segmentation” is configured for segmenting data originating from said user application 400T into a plurality of data blocks, and for dispatching said plurality of data blocks to the plurality of protocol stacks 420Ta to 420Tc.
Data blocks or blocks of payload generated by the segmenting module 410T are transmitted to the TCP/IP protocol stacks 420Ta to 420Tc using sockets 425Ta to 425Tc. Each socket 425Ta to 425Tc is respectively associated with one TCP/IP protocol stack 420Ta to 420Tc.
A data block has a predetermined size, for example 64kBytes.
When data blocks are pushed into sockets 425Ta to 425Tc, the segmenting module 410T generates first control information. The first control information originating from the segmenting module 410T comprises information relative to the order in which data originating from said user application 400T has been segmented.
According to an embodiment, first control information comprises for each data block the time each data block has been requested for transmission, data block size, and a TCP/IP protocol stack instance identifier. The first control information is stored at a signalling means 460T which will be further used by the second function “Tx packet-based engine”. According to the described embodiment, and as will be described with reference to Figure 8, the first control information comprises a chained list.
The second function “Tx packet-based engine” orders data originating from the protocol stacks 420Ta to 420Tc in order to generate a data flow. The ordering is based on the first control information stored at the signalling means 460T.
In the described embodiment, data originating from the protocol stacks are packets (TCP segments). It may be noted that each protocol stack 420Ta to 420Tc generates packets or TCP segments from data blocks originating from the segmenting module 410T. For example, a TCP segment has a size of 1.5 Kbytes.
This second function “Tx packet-based engine” is implemented by the combining module 430T. This combining module 430T recovers packets from the TCP/IP protocol stack instances, analyzes the header of packets and generates a data flow which is transmitted to a network interface controller (NIC) device 440T. The NIC device 440T is configured for receiving said data flow from the combining module 430T and for transmitting said data flow over a communication network.
The second function “Tx packet-based engine” or combining module 430T also performs packet acknowledgement synthesis to TCP/IP protocol stacks 420Ta to 420Tc according to packet acknowledgement received from a remote device.
According to an embodiment, the segmenting module 410T is further configured to dispatch the plurality of data blocks into the plurality of protocol stacks 420Ta to 420Tc taking into account second control information originating from the combining module 430T, said second control information representing the status regarding the processing load of protocol stacks 420Ta to 420Tc.
Thus, segmenting data originating from the user application 400T may be modulated by signalling information or second control information originating from the combining module 430T. This signalling information or second control information is fed by the combining module 430T and stored at second signalling means 450T where it is refreshed.
In the described embodiment, the signalling information or second control information comprises the status of flow control and congestion feedback retrieved within received packets from a remote node (e.g. TCP Ack packets) and/or status regarding the processing load of individual processing cores. In particular, congestion feedback comprises the quantity of payload which has been acknowledged by the remote node.
This is determined based on “Ack Seq” information in acknowledgment packets indicating the index of the last byte acknowledged from among the TCP segments or data flow.
In addition, the status of flow control comprises information about the remaining free space in a reception buffer of the remote node, and the sequences of bytes which have been acknowledged by the remote node.
This information is extracted from the acknowledgment TCP packets (TCP Ack packets).
According to an example, the processing load of individual processing cores may be computed based on the elapsed time between the time at which a packet became available at an output queue of a TCP/IP protocol stack and the time at which a data block or payload entered a socket associated with the protocol stack.
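The load estimation described above may be sketched, for illustration only, as follows. The class and method names (StackLoadEstimator, record_socket_entry, record_output_ready) are illustrative assumptions, not part of the specification; the only idea taken from the text is that a stack whose block-entry-to-packet-ready latency is small is lightly loaded.

```python
# Hypothetical sketch: per-stack processing load estimated as the elapsed
# time between the instant a data block entered the stack's socket and the
# instant the resulting packet became available at the stack's output queue.
class StackLoadEstimator:
    def __init__(self):
        self._entry_times = {}  # (stack_id, block_id) -> socket entry time
        self._latencies = {}    # stack_id -> last measured latency

    def record_socket_entry(self, stack_id, block_id, timestamp):
        self._entry_times[(stack_id, block_id)] = timestamp

    def record_output_ready(self, stack_id, block_id, timestamp):
        entry = self._entry_times.pop((stack_id, block_id), None)
        if entry is not None:
            self._latencies[stack_id] = timestamp - entry

    def least_loaded_stack(self):
        # The stack with the smallest latency is assumed to be least loaded.
        return min(self._latencies, key=self._latencies.get)
```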
Steps of a method for transmitting data implemented by the architecture represented by Figure 4T will be described with reference to Figures 7, 8, 9a and 9b.
Figure 4R represents, according to an embodiment, a functional block diagram of a multi-path architecture for network protocol processing architecture for the reception data path.
The reception data path is mainly managed by two functions, a first function called “Rx packet-based engine” and a second function called “block-based re-assembling”.
The first function “Rx packet-based engine” is implemented by a multiplexing module 430R and the second function “block-based re-assembling” is implemented by an assembling module 410R.
The system for receiving data comprises a Network Interface Controller (NIC) device 440R for receiving a data flow originating from a communication network. In the described embodiment, the data flow originating from the communication network is organized in packets or TCP segments.
The multiplexing module 430R receives the packets from the NIC device 440R and has the task of delivering those packets to the different TCP/IP protocol stack instances 420Ra to 420Rc.
As for the transmitting system, each protocol stack 420Ra to 420Rc is processed on a processing resource (in particular on a processing core) and is configured to process data independently from the other protocol stacks 420Ra to 420Rc.
The multiplexing module 430R analyzes the packet headers and modifies them before dispatching the packets forming the received data flow into the plurality of protocol stacks 420Ra to 420Rc.
The assembling module 410R implements data block recovery from TCP/IP protocol stacks using sockets 425Ra to 425Rc, each socket 425Ra to 425Rc being associated respectively with a protocol stack 420Ra to 420Rc. Once the data blocks have been recovered, the assembling module 410R orders data blocks originating from at least one of the plurality of protocol stacks 420Ra to 420Rc and releases them to a user application 400R using a reception socket 405R.
The multiplexing module 430R generates control information (third control information) which is stored at third signalling means 450R and is used by the assembling module 410R for ordering the data blocks into a data flow which is transmitted to the user application 400R.
According to an embodiment, third control information in 450R comprises an amount corresponding to the payload computed for the packets pushed into a same TCP/IP protocol stack 420Ra to 420Rc by the multiplexing module 430R.
According to an embodiment, the multiplexing module 430R is further configured for generating data acknowledgment synthesis from acknowledgment received from the plurality of protocol stacks 420Ra to 420Rc. The generated data acknowledgment synthesis is destined for a remote device transmitting the data flow that has been received by the reception system.
According to an embodiment, the multiplexing module 430R is further configured to dispatch the data units (packets) forming said data flow into the plurality of protocol stacks 420Ra to 420Rc taking into account control information (fourth control information) originating from the assembling module 410R, said fourth control information representing the status regarding the processing load of protocol stacks.
Signalling information or fourth control information originating from the assembling module 410R is stored at fourth signalling means 460R and comprises flow control information which is within TCP/IP packet headers.
Steps of a method for receiving data implemented by the architecture represented by Figure 4R will be described with reference to Figures 10, 11a, 11b and 12.
Figure 5a illustrates a functional block diagram of the multi-path architecture presented by Figure 4T according to a first embodiment.
According to this embodiment, a plurality of user applications runs within a same device 600T.
A segmenting module (e.g., “block-based traffic segmentation”) 410T, 410T’, a combining module (“Tx packet-based decision engine” means) 430T, 430T’ and signalling means 450T, 450T’ and 460T, 460T’ are associated with each user application 400T, 400T’.
Thus, the transmission system represented by Figure 5a comprises two user applications 400T, 400T’, and for each user application 400T, 400T’, the transmission system comprises a segmenting module 410T, 410T’, a combining module 430T, 430T’, two signalling means 450T, 460T, 450T’, 460T’ and a plurality of protocol stacks 420Ta to 420Tc, 420T’a to 420T’c.
Of course the number of user applications may be different.
It may be noted that no interaction exists between transmission modules when several applications are used.
Figure 5b illustrates a functional block diagram of the multi-path architecture presented by Figure 4R according to a first embodiment.
According to this embodiment, a plurality of user applications runs within a same device 600R.
An assembling module (e.g. “block-based traffic reassembling” module) 410R, 410R’, a multiplexing module (“Rx packet-based decision engine” means) 430R, 430R’ and signalling means 450R, 460R, 450R’, 460R’ are associated with each user application 400R, 400R’.
Thus, the reception system represented by Figure 5b comprises two user applications 400R, 400R’, and for each user application 400R, 400R’ the reception system comprises an assembling module 410R, 410R’, a multiplexing module 430R, 430R’, two signalling means 450R, 460R, 450R’, 460R’ and a plurality of protocol stacks 420Ra to 420Rc, 420R’a to 420R’c.
Of course the number of user applications may be different.
It may be noted that no interaction exists between reception modules when several applications are used.
According to the described embodiment, the NIC device 440R recognizes for what user application the data units or packets forming the received data flow are destined. Thus, data units or packets are delivered to the appropriate multiplexing module 430R, 430R’.
If the NIC device 440R does not support this recognition functionality, an intermediate function (not represented in Figure 5b) is necessary to manage such operations.
Figure 6a illustrates a functional block diagram of the multi-path architecture represented by Figure 4T according to a second embodiment.
According to this embodiment, a plurality of user applications runs within multiple devices 600T, 600T’.
In this embodiment, the transmission system comprises different TCP/IP protocol stack instances 420Ta to 420Tc running in different devices 600T, 600T’.
In the embodiment described in Figure 6a, two protocol stacks 420Ta, 420Tb run in a first device 600T and an additional protocol stack 420Tc runs in a second device 600T’.
It may be noted that the transmission system comprises a unique segmenting module 410T, a unique combining module 430T, and unique first and second signalling means 450T, 460T. Also, the additional protocol stack instance 420Tc is not intrusive to the way in which remaining multi-core processing resources within the second device 600T’ are used.
Figure 6b illustrates a functional block diagram of the multi-path architecture represented by Figure 4R according to a second embodiment.
According to this embodiment, a plurality of user applications runs within multiple devices 600R, 600R’.
In this embodiment, the reception system comprises different TCP/IP protocol stack instances 420Ra to 420Rc running in different devices 600R, 600R’.
In the embodiment described in Figure 6b, two protocol stacks 420Ra, 420Rb run in a first device 600R and an additional protocol stack 420Rc runs in a second device 600R’.
It may be noted that the reception system comprises a unique assembling module 410R, a unique multiplexing module 430R and unique signalling means 450R, 460R. Also, the additional protocol stack instance 420Rc is not intrusive to the way in which remaining multi-core processing resources within the second device 600R’ are used.
The functionality of the modules in the architecture represented by Figure 4T for the TCP/IP protocol transmission path will be described with reference to Figures 7, 9a and 9b. Data structures and storage means used by the architecture illustrated by Figure 4T for a TCP/IP protocol transmission path are described with reference to Figure 8.
First signalling means 460T in Figure 4T comprises signalling information or first control information. In the described embodiment, the first control information is organized in a linked structure or chained list. This linked structure or chained list comprises a plurality of elements 800, 810, 820 and is named “Send_vector_struct”. Each element 800, 810, 820 is associated with a data block.
Of course, the number of elements in the linked structure may be different from the number of elements represented by this Figure.
The linked structure “Send_vector_struct” is used to signal the segmenting which has been carried out by the segmenting module 410T (the segmenting will be described with reference to Figure 7) on the data flow originating from the user application 400T.
According to an example, each element 800, 810, 820 comprises a plurality of fields. A first field or “Block size” field 802 contains the size of the applicative payload data in the corresponding data block.
A second field or “Sub-flow ID” field 803 comprises a parameter identifying the protocol stack to which the data block is pushed.
A third field 804 comprises a pointer to the next element. Thus, third field 804 in a first element 800 points to a second element 810, third field in the second element 810 points to a third element 820, etc.
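The chained list described above may be sketched, for illustration only, as follows. The Python class names (SendVectorElement, SendVector) are illustrative assumptions; the field names mirror reference numerals 802, 803 and 804 of Figure 8.

```python
# Illustrative sketch of the "Send_vector_struct" chained list (elements
# 800, 810, 820): each element records a block size, the identifier of the
# protocol stack the block was pushed to, and a pointer to the next element.
class SendVectorElement:
    def __init__(self, block_size, sub_flow_id):
        self.block_size = block_size    # field 802: applicative payload size
        self.sub_flow_id = sub_flow_id  # field 803: protocol stack identifier
        self.next = None                # field 804: pointer to next element

class SendVector:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, block_size, sub_flow_id):
        # New elements are appended in segmentation order, so walking the
        # list from head to tail recovers the order of the original data flow.
        elem = SendVectorElement(block_size, sub_flow_id)
        if self.tail is None:
            self.head = elem
        else:
            self.tail.next = elem
        self.tail = elem
```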
A second data structure 840 (“Tx Seq_num translation” table) is used by the combining module 430T (Figure 4T) to store information related to packets previously transmitted to the NIC device 440T. Each row of this data structure stores for a packet:
- the packet size 848 (“PK size”),
- a parameter “Sub-flow ID” 844 identifying the TCP/IP protocol stack instance which has processed the packet,
- an initial sequence number 846 called “Local_Seq_num” which is generated by the protocol stack instance. Each TCP/IP protocol stack instance manages its TCP sequence numbering individually. According to an embodiment, the Local Sequence Number (“Local_Seq_num”) is contained in the packet header of TCP segments, and
- an effective sequence number 842 or Global Sequence Number called “Global_Seq_num”. This Global Sequence Number (“Global_Seq_num”) is written by the combining module 430T in order to replace the initial value of the Local Sequence Number 846 (“Local_Seq_num”) in the TCP segment header.
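The sequence number translation described above may be sketched, for illustration only, as follows. The class name and the running-counter policy (advancing the global sequence number by the packet payload size, in the manner of TCP byte sequence numbering) are assumptions; the row layout mirrors table 840.

```python
# Hedged sketch of the "Tx Seq_num translation" step: the combining module
# keeps a running global sequence counter and, for each packet pulled from
# a stack, records a row of table 840 and returns the global sequence
# number to be written into the TCP segment header in place of the local one.
class SeqNumTranslator:
    def __init__(self, initial_global_seq=0):
        self.global_seq = initial_global_seq
        self.table = []  # rows of data structure 840

    def translate(self, sub_flow_id, local_seq, pk_size):
        row = {"Global_Seq_num": self.global_seq,   # field 842
               "Sub-flow ID": sub_flow_id,          # field 844
               "Local_Seq_num": local_seq,          # field 846
               "PK size": pk_size}                  # field 848
        self.table.append(row)
        self.global_seq += pk_size  # TCP-style byte sequence numbering
        return row["Global_Seq_num"]
```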
A third data structure (“Last Tx Pk info per sub-flow” table) 830 is used by the combining module 430T in order to keep track, for each TCP/IP protocol stack instance 420Ta to 420Tc, of the initial Local Sequence Number 846 associated with the last packet generated by the protocol stacks 420Ta to 420Tc. This information is used in particular to identify retransmitted packets issued from any TCP/IP protocol stack instance 420Ta to 420Tc. Each row of this third data structure 830 comprises a parameter “Sub_flow ID” 832 identifying a TCP/IP protocol stack instance 420Ta to 420Tc and an associated Local Sequence Number (“Local_Seq_Num”).
A fourth data structure 850 (“Ack synthesis status” table) is used by the combining module 430T in order to support the synthesis, for the plurality of TCP/IP protocol stacks, of acknowledgment packets, from acknowledgment packets originating from a remote device through the NIC device 440T. Each row of this fourth structure 850 comprises a parameter “Sub-flow ID” 852 identifying a TCP/IP protocol stack instance 420Ta to 420Tc and a flag 854, indicative of an acknowledgement packet already synthesized for the corresponding protocol stack.
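A minimal, illustrative sketch of table 850: one flag per sub-flow recording whether an acknowledgement packet has already been synthesized for the corresponding protocol stack. The API names and the reset policy are assumptions not stated in the specification.

```python
# Illustrative sketch of the "Ack synthesis status" table 850:
# fields 852 ("Sub-flow ID") and 854 (flag) become a simple dict.
class AckSynthesisStatus:
    def __init__(self, sub_flow_ids):
        self.flags = {sid: False for sid in sub_flow_ids}

    def needs_synthesis(self, sub_flow_id):
        # True while no acknowledgement packet has been synthesized yet.
        return not self.flags[sub_flow_id]

    def mark_synthesized(self, sub_flow_id):
        self.flags[sub_flow_id] = True

    def reset(self, sub_flow_id):
        # Assumed to be called when new data for this sub-flow is
        # acknowledged by the remote device.
        self.flags[sub_flow_id] = False
```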
The functionality of the segmenting module 410T or steps of a method for transmitting data implemented by the segmenting module 410T in the architecture represented by Figure 4T for TCP/IP protocol transmission path is represented by Figure 7a.
The segmenting module 410T is configured for:
- segmenting data originating from the user application 400T into data blocks of payload,
- signalling the order of segmenting and the size of generated data blocks to the combining module 430T (signalling information or first control information), the signalling information being stored in this embodiment, in first signalling means 460T, and
- controlling the dispatching of the generated data blocks into the plurality of protocol stacks by taking into account feedback from the combining module 430T. The feedback comprises signalling information or second control information representing the status of the processing load of protocol stacks 420Ta to 420Tc. In this embodiment, the second control information is stored in second signalling means 450T.
When a transfer request is issued by the user application 400T at step 700, the segmenting module 410T divides the data flow originating from the user application 400T into data blocks, which are dispatched to different protocol stacks 420Ta to 420Tc. Thus, the data flow is divided into sub-flows, the execution of the plurality of sub-flows being respectively parallelized on the plurality of protocol stacks 420Ta to 420Tc.
The segmenting module 410T keeps track of first control information relative to data blocks. For example, the first control information relative to a data block comprises a sequence number in the segmenting, the size of the different blocks within the applicative payload and a parameter representing the associated protocol stack instances 420Ta to 420Tc. According to an embodiment, the segmenting module 410T maintains a chained list or linked list (800, 810, 820 in Figure 8) for the data blocks that have been requested to be processed. Thus, each list element 800, 810, 820 contains the control information and a pointer to the next element in the chain.
It may be noted that for each protocol stack 420Ta to 420Tc, the order in which the data blocks are processed is irrelevant, i.e. it is irrelevant whether processed blocks are successive or not from a user application perspective, since control information necessary to assemble the packets to be transmitted by the combining module 430T is kept.
When a transfer request is issued by the user application 400T at step 700, the segmenting module 410T computes the size of a data block to be processed at a computing step 720.
According to an embodiment, the size is computed according to a typical predetermined data block size and the remaining applicative data or remaining data.
In the described embodiment, the quantity of data transferred from the application 400T to the socket 405T is initialized with the remaining applicative data at an initializing step 710.
According to another embodiment, the processing load of the protocol stacks is also taken into account when computing the size of the data block to process.
Next, the segmenting module 410T selects a protocol stack instance to process the data block at a selecting step 730, and at a step 740 a new element is added into the chained list associated with the processed data block. This new element contains the identifier of the protocol stack which has been selected at the selecting step 730 and the data block size computed at the computing step 720.
According to an embodiment, an identifier of the NIC device 440T is also included into the element 800, 810, 820.
At an update step 750, an element representing the quantity of applicative data or payload originating from the user application is updated by decreasing the quantity of the size of the processed data block.
At a verification step 760 it is verified whether the processed data block was the last data block. In an affirmative case, the segmenting module 410T returns to the initial step 700 in order to wait for a new user application request. In a negative case, the next data block is selected and steps 720 to 760 are reiterated.
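The loop of steps 700 to 760 may be sketched, for illustration only, as follows. The round-robin stack selection is only a stand-in for the selection policy of step 730 (which, in other embodiments, takes the processing load into account); the function and dictionary keys are illustrative assumptions.

```python
# Non-authoritative sketch of the segmenting loop of Figure 7a:
# the remaining applicative data is split into blocks of at most a
# predetermined size, a stack is selected for each block, and an element
# is appended to the chained list for each block.
def segment(total_size, block_size, stack_ids):
    remaining = total_size        # step 710: initialize remaining data
    chained_list = []
    i = 0
    while remaining > 0:          # step 760: loop until the last block
        size = min(block_size, remaining)       # step 720: block size
        stack = stack_ids[i % len(stack_ids)]   # step 730: select a stack
        chained_list.append(                    # step 740: add element
            {"Block size": size, "Sub-flow ID": stack})
        remaining -= size         # step 750: update remaining payload
        i += 1
    return chained_list
```

For example, 150 000 bytes of payload with 64-Kbyte blocks over three stacks yields three elements whose sizes sum to the original payload.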
As a scheduling alternative implemented by the segmenting module 410T, an improved algorithm is depicted in Figure 7d. This algorithm permits reception of data blocks in their sequence order, i.e., the order in which data blocks are received by the remote device corresponds to the order in which data blocks are stored in the application buffer of the transmission device. In this embodiment, the segmenting module 410T works in association with the data structures represented by Figure 7b. These data structures are the following:
- A “Tx scheduler application status” table 7200 reflects for each application 7205 the maximum allowed data rate 7210, the cumulated size of payload transmitted 7215 (to the remote device), and the associated time 7220 at which the payload size measurement started (these data enable estimation of the effective throughput of the application);
- A “Tx scheduler path status” table 7100 reflects the status of each stack. This status table includes, for each stack 7105, the available memory size for transmission (local Tx FIFO, 7110), the available memory size for reception (remote reception window, 7115), and the effective data rate for this stack 7120. The computation of the effective data rate is based on the transmitted block size, the time at which the transmission request occurred (“Tx request time”), and the time of reception of the acknowledgement transmitted by the reception device or remote device. For this scheduling alternative, the transmission and remote devices comprise a same number of protocol stacks;
- A “Tx scheduler block status” table 7000 reflects the status of each transmitted block. This status table includes for each block: its identifier “block ID” 7005, its transmitted size 7010, the time at which the transmission request occurred “Tx request time” 7015, the identifier of the stack used for the transmission “stack ID” 7025, and the expected time 7020 for completion of the reception of this block. The expected time is estimated from the effective throughput of the stack, the “Tx request time” and the size of the block to be transmitted.
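One possible in-memory layout of the three tables of Figure 7b is sketched below, for illustration only. The dict-based representation, the key names and the numeric values are assumptions; the expected reception time follows the estimate described for table 7000 (request time plus block size divided by the stack's effective data rate).

```python
# Illustrative layout of the scheduler tables; keys mirror reference
# numerals 7000-7220 of Figure 7b but are not part of the specification.
def expected_rx_time(tx_request_time, block_size, effective_rate):
    # "Rx expected time" (column 7020): transmission request time plus the
    # time needed to move the block at the stack's effective data rate.
    return tx_request_time + block_size / effective_rate

app_status = {          # "Tx scheduler application status" table 7200
    "app1": {"max_rate": 1e6, "tx_payload": 0, "start_time": 0.0},
}
path_status = {         # "Tx scheduler path status" table 7100
    "stack1": {"tx_fifo_free": 1 << 20, "remote_window": 1 << 20,
               "effective_rate": 5e5},
}
block_status = []       # "Tx scheduler block status" table 7000
block_status.append({
    "block ID": 1, "block size": 64_000, "Tx request time": 10.0,
    "stack ID": "stack1",
    "Rx expected time": expected_rx_time(
        10.0, 64_000, path_status["stack1"]["effective_rate"]),
})
```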
The alternative scheduler, the workflow of which is depicted in Figure 7d, operates as explained below in steps 7400 to 7475:
First an initialization of the data structures represented by Figure 7b is performed at step 7400:
As long as the applications are not running, the “Tx scheduler application status” table 7200 has to be purged (empty). When a transfer request is issued by the user application 400T, the corresponding predetermined maximum authorized data rate 7210 for the corresponding application 7205 is updated in the “Tx scheduler application status” table 7200. In case this value is unknown, the parameter in the table has to be set at a value allowing fairness between all the applications for sharing the global data rate, for example by sharing the available bandwidth equally. According to other embodiments, this parameter could also be set after running trials, and/or adjusted dynamically based on the requirements of the applications.
In addition, the “Tx scheduler path status” table 7100 has to be set for each existing TCP protocol stack 7105. According to an embodiment, the status of a local transmission FIFO 7110 and of a remote reception FIFO 7115 have to be set at a maximum size.
Finally, the “Tx scheduler block status” table 7000 is cleared so that the columns corresponding to the “block ID”, the “block size”, the “Tx request time”, the “Rx expected time”, and the associated “stack ID” are empty.
Next, at step 7405, blocks in course of processing (transmitted and not yet acknowledged) are checked in the “Tx scheduler block status” table 7000. According to an embodiment, it is checked whether the “expected Rx time” at column 7020, which was set at the moment of the block transmission request, has elapsed. If the “expected Rx time” has elapsed, a retransmission process for the concerned block(s) (from step 7450) is started as soon as possible. Otherwise, the normal process proceeds (starting from step 7410).
At step 7410, the segmenting module 410T verifies whether the applications (in the “Tx scheduler application status” table 7200) have remaining data to be transmitted, and whether their current data rate (e.g., the cumulated transmitted volume of data stored at column 7215, divided by the corresponding transmission time, which is equal to the current time minus the time of starting volume measurement stored at column 7220) does not exceed the predetermined maximum data rate (stored at column 7210), in order to proceed with the transmission (from step 7415) or jump back to step 7405.
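By way of a non-limiting illustration, the admission check of step 7410 can be sketched as follows (the field names and table layout below are hypothetical and do not appear in the Figures):

```python
# Hypothetical sketch of the step 7410 admission check: an application may
# transmit only while its measured data rate does not exceed the authorized
# maximum (column 7210). Field names are illustrative only.

def may_transmit(app_status: dict, now: float) -> bool:
    """Return True if the application still has data to send and its
    current data rate does not exceed the predetermined maximum."""
    if app_status["remaining_bytes"] <= 0:
        return False
    # Transmission time = current time minus start of volume measurement
    elapsed = now - app_status["volume_measurement_start"]  # column 7220
    if elapsed <= 0:
        return True  # no measurement window yet: allow transmission
    current_rate = app_status["cumulated_tx_bytes"] / elapsed  # column 7215
    return current_rate <= app_status["max_rate"]              # column 7210

app = {
    "remaining_bytes": 10_000,
    "cumulated_tx_bytes": 500_000,      # bytes transmitted so far
    "volume_measurement_start": 100.0,  # seconds
    "max_rate": 125_000.0,              # bytes per second
}
print(may_transmit(app, 105.0))  # 100000 B/s <= 125000 B/s -> True
print(may_transmit(app, 103.0))  # ~166667 B/s > 125000 B/s -> False
```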
The head of line margin is computed at step 7415. It consists in estimating the space that must be reserved in the transmitter FIFO and the receiver FIFO to allow a potential retransmission of the largest transmitted block not yet acknowledged. The computed head of line margin is used in the next step 7420 to compute the minimum and maximum block size values.
At step 7420, the segmenting module 410T determines, for each protocol stack, whether a given stack has available space in transmission and reception. Thus, at this step 7420 the segmenting module 410T estimates, for each protocol stack, the minimum and maximum possible block sizes.
The minimum block size is the minimum value among:
- the block size that allows the end of a new data transmission to occur just after the transmission completion of the block previously transmitted (refer to the description of Figure 7c)
- the available space in reception (remote reception window 7115) for the corresponding remote protocol stack minus the head of line margin computed in step 7415, and
- the available space in a transmission protocol stack (local transmission FIFO), minus the head of line margin computed in step 7415.
The maximum block size is the minimum value of:
- the size of the remaining data to be transmitted from the application buffer,
- the default block size (configuration parameter),
- the available space in reception (remote reception window 7115) for the corresponding remote protocol stack, minus the head of line margin computed in step 7415, and
- the available space in a transmission protocol stack (local transmission FIFO), minus the head of line margin computed in step 7415.
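The minimum and maximum block size criteria above may be condensed, by way of a non-limiting sketch, as follows (a simplified model; the field names are hypothetical):

```python
def block_size_bounds(stack: dict, remaining_app_bytes: int,
                      default_block_size: int, hol_margin: int):
    """Estimate the (min, max) block sizes for one protocol stack,
    per the criteria of step 7420. hol_margin is the head of line
    margin computed at step 7415."""
    min_size = min(
        stack["just_in_time_size"],              # Figure 7c estimate
        stack["remote_rx_window"] - hol_margin,  # column 7115 minus margin
        stack["local_tx_fifo"] - hol_margin,     # column 7110 minus margin
    )
    max_size = min(
        remaining_app_bytes,                     # data left in app buffer
        default_block_size,                      # configuration parameter
        stack["remote_rx_window"] - hol_margin,
        stack["local_tx_fifo"] - hol_margin,
    )
    return min_size, max_size

stack = {"just_in_time_size": 3000, "remote_rx_window": 8000,
         "local_tx_fifo": 10000}
print(block_size_bounds(stack, 50000, 4096, 2000))  # (3000, 4096)
```

In this example the stack is eligible for transmission, since its minimum block size (3000) is below its maximum (4096).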
Figure 7c illustrates, according to a non-limiting example, the reception of blocks in their sequence order. In this figure, a protocol stack A is currently transmitting a block 7300 at time 7350. The reception of this block 7300 by the remote device is expected to end at time 7360.
When new data 7310 have to be transmitted, a computation is carried out for each protocol stack (for example A, B and C) to estimate the protocol stack that is the most appropriate to transmit these new data. The transmission of the new data 7310 by the protocol stack A starts after the end of reception of a block of the currently on-going transmission, i.e. after the time 7360. Thus, the estimated minimum block size 7315 for the protocol stack A is equal to zero.
The same data or data of the same size 7320 could be transmitted immediately (e.g., at the current time) by the protocol stack B, and the corresponding minimum block size of data which could be received at the end of the current block transmission by the protocol stack A is estimated 7325 based on the latest known transmission conditions for the protocol stack B (data rate, etc.).
The same data or data of the same size 7330 could be transmitted immediately by the protocol stack C, and the corresponding minimum block size of data which could be received at the end of the current block transmission by the protocol stack A is estimated 7335 based on the latest known transmission conditions for the stack C (data rate, etc.).
The difference of end of transmission by the protocol stacks B and C is related to the different data rates on the paths linking the transmission and reception devices.
In the Figure 7c example, data is transmitted faster through protocol stack B than through protocol stack C. Thus, protocol stack B is able to transmit a higher quantity of payload 7325 once the data initially stored by protocol stack A has actually been received 7360.
Based on the expected end of reception time 7360, it is possible to adjust the data block size to match these requirements, or to postpone the transmission.
Before selecting a protocol stack for the new transmission, it is also necessary to check whether the protocol stack has enough available space at the transmission and reception sides.
Coming back to the scheduler algorithm of Figure 7d, at step 7425 it is checked whether there is at least one protocol stack having enough space at the transmission and reception sides and allowing the reception by the remote device of a new data block after the complete reception of a data block previously transmitted. In particular, at step 7425 it is checked whether the corresponding stacks have their minimum block size value below their maximum block size value.
In the negative, it is necessary to wait for a change of the status of any one of the stacks (step 7460).
Step 7430 is implemented if at step 7425 it is verified that at least one protocol stack exists for which its minimum block size value is lower than its maximum block size value. In step 7430, the protocol stack having the highest minimum block size is selected from among the protocol stacks determined in step 7425 as allowing the reception by the remote device of a new data block. The selected stack may be able to transmit the highest quantity of data before completion of the previous reception. After selecting the protocol stack to be used for the transmission, the size of the data block to be transmitted is set, this size being the maximum data block size already determined for this protocol stack at step 7420.
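The selection criterion of steps 7425 and 7430 (retain only stacks whose minimum block size is below their maximum, then pick the stack with the highest minimum) may be sketched as follows; all identifiers are illustrative:

```python
def select_stack(candidates: list):
    """Return (stack_id, block_size) for the stack with the highest
    minimum block size among eligible stacks, or None if no stack is
    eligible (the step 7460 wait case)."""
    # Step 7425: a stack is eligible if its minimum size is below its maximum
    eligible = [s for s in candidates if s["min_size"] < s["max_size"]]
    if not eligible:
        return None  # wait for a status change (step 7460)
    # Step 7430: highest minimum size, i.e. the stack able to transmit the
    # most data before completion of the previous reception
    best = max(eligible, key=lambda s: s["min_size"])
    return best["stack_id"], best["max_size"]  # transmit the maximum size

stacks = [
    {"stack_id": "A", "min_size": 0,    "max_size": 4096},
    {"stack_id": "B", "min_size": 2000, "max_size": 4096},
    {"stack_id": "C", "min_size": 1200, "max_size": 4096},
]
print(select_stack(stacks))  # ('B', 4096)
```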
Next, at step 7435, the effective data block transmission (or retransmission) is carried out through the selected protocol stack and the scheduler data structures tables (represented by Figure 7b) are updated accordingly as follows:
- In the “Tx scheduler block status” table 7000, the “block ID” 7005, “block size” 7010, “Tx request time” 7015, “Rx expected time” 7020, and associated “stack ID” 7025 fields are updated.
- “Tx scheduler path status” table 7100 is updated with the available size of the transmission FIFO 7110 and the estimated available size for reception by the remote device 7115 for the protocol stack 7105 concerned.
- The “Tx scheduler application status” table 7200 is updated by adding the current transmitted block size to the cumulated transmitted volume of data 7215 for the application 7205 concerned. In addition, the “time of starting volume measurement” field 7220 is set to the current time if it is the first transmission of a data block for the protocol stack.
In order to prevent flooding of block transmissions, the scheduling is performed periodically.
Thus, at step 7470 it is checked whether there are available protocol stacks which are able to transmit new blocks of data during the current scheduling period. Such protocol stacks are typically protocol stacks which are not currently transmitting data, or protocol stacks for which the current transmission is estimated to end before the end of the scheduling period. If such protocol stacks are available, the process goes back to step 7405 for potentially preparing the transmission of a new block of data. Otherwise, at step 7475 the system waits for the end of the scheduling period.
At step 7450 (implemented in case of an affirmative response at step 7405) parameters are prepared for the retransmission of a data block, which may be processed with the protocol stack having the highest data rate capability and enough available space on the transmission and reception sides. In step 7455, it is checked whether such a protocol stack exists. If a protocol stack is available, step 7435 is implemented, wherein the transmission is performed and the tables represented by Figure 7b are updated. Otherwise, the system waits for an available stack at step 7460.
It should be noted that step 7460 is an intermediate step where it is checked whether events modifying the working context are occurring (such as an information notification to the scheduler coming from step 943, which will be described with reference to Figure 9b).
At step 7465 the event or information notification received by the scheduler is analyzed. For example, if an acknowledgement for an entire block of transmitted payload is received, the corresponding buffer may be released, the information about this data block in the “Tx scheduler block status” table 7000 may be released, and the “Tx scheduler path status” table 7100 may be updated for the corresponding stack (available size in FIFOs 7110, 7115, Tx rate 7120).
Alternatively, an acknowledgement of a group of packets belonging to several blocks is received. After these updates, the process jumps to step 7405.
Within the individual TCP/IP protocol stack instances 420Ta to 420Tc, TCP packets are generated from the processed data blocks. In a protocol stack, the sequence number of TCP packets is incremented as soon as a packet is generated. This sequence number is named “Local_Seq_num”.
It may be noted that for each protocol stack instance 420Ta to 420Tc, the payload of the last packet from a data block cannot be aggregated with the payload of a first packet of a following data block to be processed by the protocol stack.
The functionality of the combining module 430T or the steps of the method for transmitting data implemented by the combining module 430T in the architecture represented by Figure 4T for a TCP/IP protocol transmission path, according to an embodiment, is represented by Figure 9a.
The combining module 430T pulls a sequence of packets from an output queue of the plurality of protocol stack instances 420Ta to 420Tc. To that end, at an extracting step 900, the combining module 430T extracts an element 800, 810, 820 from the chained list “Send_vector_struct”.
From the extracted element 800, 810, 820, the combining module 430T knows from which protocol stack instance a next packet to be transmitted to the NIC device 440T has to be pulled. According to the described embodiment, the “Sub-flow_ID” field in the data structure field 803 (in Figure 8) is used.
At an initialization step 902, a variable “Remaining payload” is initialized with the size of a data block.
The size of the applicative payload data in the block is given by the “Block size” field 802.
It may be noted that, from among the plurality of TCP segments formed from a data block, the last TCP segment is identified, for example by subtracting the size of a TCP segment payload from the value of the variable “Remaining payload”.
Next, at a reading step 905, a packet is read from the packet queue of the protocol stack instance associated with the identifier Sub-flow ID.
At a verifying step 907, it is checked whether the Local_Seq_num associated with the pulled packet is lower than the Local_Seq_num of the last packet in the previous element associated with the same protocol stack instance 420Ta to 420Tc.
If the response is negative, which is the case when the pulled packet is a new packet, the latest value of the Local Sequence Number “Local_Seq_num” is stored at step 910 for each TCP/IP stack instance 420Ta to 420Tc. This step favours interoperability with a remote device that is a standard device (using the TCP/IP protocol and MPTCP/IP protocol).
It may be noted that, from an individual TCP/IP protocol stack instance perspective, each TCP/IP protocol stack locally manages its own local sequence numbering, independently from the others. Thus, the local sequence number generated by a TCP/IP protocol stack instance (i.e., the “Local Sequence Number”) has to be modified, and the Global Sequence Number “Global_Seq_num” has to be stored into the data structure 840. Thus, at step 912, a new row is added to the data structure 840 (Figure 8) with the newly generated local sequence number associated with the protocol stack instance 420Ta to 420Tc, and the global sequence number is incremented by adding the packet size to the previous global sequence number.
Next, at a step 915, the local sequence number in the header of the transmitted packet is replaced by the global sequence number. Thus, the header of the packet delivered to the NIC device 440T contains the Global sequence number.
At an updating step 917, the data block size in the block size field 802 is decremented by the packet payload size if the packet is a new packet.
At a verification step 920, it is verified whether additional packets are available for the current data block. In other words, since a data block comprises a plurality of packets, at the verification step 920 it is verified whether all packets related to the current data block have been sent to the NIC device 440T or not.
In the negative case, the next packet is read from the same output queue at step 905 and steps 907 to 920 are reiterated. In the affirmative case, the element is removed from the chained list “Send_vector_struct” (step 922) and the combining module 430T returns to the initial step 900 in order to wait for a new available packet.
If the response is affirmative at the verification step 907, which is the case when the packet is a “retransmitted packet” (i.e., a packet to be retransmitted because of a previous transmission error), the “Block size” field 802 in the element 800, 810, 820 is not updated.
According to an embodiment, the data structure 830 (Figure 8) is used to check whether a packet is a retransmitted packet, since retransmitted packets have a Local Sequence Number value lower than or equal to the Local Sequence Number of the last newly transmitted packet (step 907).
At step 925, the global sequence number value is retrieved from the corresponding row in the data structure 840. Then, at step 927, the local sequence number of the transmitted packet is replaced by the global sequence number (as in step 915).
It may be noted that the Global Sequence Number of a retransmitted packet is the same Global Sequence Number as when the packet was transmitted for the first time.
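The local-to-global sequence number bookkeeping of steps 910 to 927 can be sketched as follows (an illustrative condensation of data structure 840; the class and field names are hypothetical):

```python
class SeqTranslator:
    """Minimal sketch of data structure 840: maps each packet's local
    sequence number to the single global numbering seen by the NIC."""

    def __init__(self):
        self.rows = []       # one row per transmitted packet (steps 910/912)
        self.global_seq = 0

    def on_new_packet(self, sub_flow_id: str, local_seq: int, pk_size: int) -> int:
        # Step 912: record the mapping and advance the global numbering
        self.rows.append({"sub_flow": sub_flow_id, "local": local_seq,
                          "global": self.global_seq, "size": pk_size})
        g = self.global_seq
        self.global_seq += pk_size  # incremented by the packet size
        return g                    # step 915: written into the packet header

    def on_retransmit(self, sub_flow_id: str, local_seq: int) -> int:
        # Steps 925/927: a retransmitted packet reuses the global number
        # assigned when it was first transmitted
        for row in self.rows:
            if row["sub_flow"] == sub_flow_id and row["local"] == local_seq:
                return row["global"]
        raise KeyError("unknown packet")

t = SeqTranslator()
print(t.on_new_packet("A", 0, 100))  # 0
print(t.on_new_packet("B", 0, 200))  # 100
print(t.on_retransmit("A", 0))       # 0: same global number as the first time
```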
The steps implemented by the combining module 430T described above ensure that packets are pulled from the different TCP/IP protocol stack instances 420Ta to 420Tc as if they were generated by a single TCP/IP instance and delivered to NIC device 440T.
The combining module 430T manages the packet acknowledgments originating from a remote device. The packet acknowledgment management used by the architecture presented in Figure 4T for a TCP/IP protocol transmission path according to an embodiment is represented by Figure 9b.
During execution of packet acknowledgment management, three alternative signalling policies can be considered to trigger flow control and congestion signalling feedback 450T to the segmenting module 410T. A first alternative is to feed back information each time an acknowledgment packet is received from the remote device. According to the second and third alternatives, information is fed back only each time the cumulative amount of acknowledged payload reaches a block size value. This condition is reached either without considering how the amount of acknowledged payload is spread among the different TCP/IP protocol stack instances 420Ta to 420Tc (second alternative), or when a block size is entirely acknowledged for a particular TCP/IP protocol stack instance 420Ta to 420Tc (third alternative). These alternatives are less disruptive from a segmenting module 410T perspective.
As described above, each protocol stack instance 420Ta to 420Tc generates packets containing payload which is not necessarily adjacent to the payload of the data blocks generated by the segmenting module 410T from the data flow originating from the user application 400T. It may be noted that a last packet of a data block and a first packet of a following data block are in general processed in parallel by different protocol stack instances.
In addition, the packets generated by a same protocol stack instance 420Ta to 420Tc may belong to multiple data blocks processed by this instance.
In order to ensure proper processing of the data blocks, each protocol stack instance 420Ta to 420Tc has to receive acknowledgment of its generated packets. In this respect, it may be noted that an “Ack Sequence Number” field in the packet acknowledgment header indicates the next expected byte index in the associated stream. Therefore, the reception of all previous bytes within the stream is intrinsically acknowledged.
As a consequence, when the remote device acknowledges a group of received packets using an acknowledgment sequence number “Ack_Seq_num” (or “Global Ack_Seq_num”), the acknowledged packets may have been generated by different protocol stack instances.
The combining module 430T has to identify to which protocol stack instance 420Ta to 420Tc a packet acknowledgment refers.
Data structures 840, 850 (Figure 8) are used to manage synthesis of packet acknowledgements to the different protocol stacks according to acknowledgment packets received from NIC device 440T.
When a new packet acknowledgment (“Ack Pk”) is received from the NIC device 440T at a reception step 930, the variable 854, which indicates whether an acknowledgement has been synthesized for the corresponding protocol stack, is set at a step 932 to the status “NO” for all the protocol stacks in the data structure “Ack synthesis status” table 850.
Next, at step 935, information is retrieved from the data structure “Tx Seq-num translation table” 840 concerning a transmitted packet whose acknowledged payload contains the last acknowledged byte position.
In the described embodiment, the retrieved information corresponds to the information contained in the row of the data structure “Tx Seq-num translation table” 840 in which the global sequence number “Global Seq_num” value incremented by the packet size “Pk_size” value is equal to the global acknowledgment sequence number of the received packet acknowledgment “Global Ack Seq_num”.
At a verification step 937, the value of the variable 854 in the data structure “Ack synthesis status” table 850, which indicates whether an acknowledgement has been synthesized for the corresponding protocol stack 420Ta to 420Tc, is verified. An iterative process is implemented in order to generate a synthesis of packet acknowledgments destined for the protocol stack instances. This iterative process runs from the row retrieved at step 935 up to the last row within the data structure “Tx Seq-num translation table” 840.
If the value of the status is “NO” at the verification step 937, at a synthesizing step 940 the synthesis of the packet acknowledgment is implemented for the protocol stack instance 420Ta to 420Tc corresponding to the “sub-flow ID” identifier stored at field 844 in the retrieved row. The packet acknowledgment synthesis is generated for all the protocol stack instances 420Ta to 420Tc having generated at least one packet whose payload is part of the acknowledged payload. According to an embodiment, synthesizing a packet acknowledgment comprises setting the acknowledgment sequence number value (Local Ack Sequence Number) to the local sequence number incremented by the packet size.
Then, according to the signalling policy of the signalling means 450T, the test implemented at step 941 proceeds as follows:
- According to a first alternative, information within a synthesized acknowledgment packet is systematically delivered to the signalling means 450T at step 943.
- According to a second alternative, if the cumulated amount of acknowledged payload for all TCP/IP protocol stacks 420Ta to 420Tc is equal to or greater than a pre-determined block size value, information within a synthesized acknowledgment packet is delivered to the signalling means 450T at step 943.
- According to a third alternative, if the cumulated amount of acknowledged payload for a corresponding sub-flow_ID is equal to or greater than a pre-determined block size value, information within a synthesized acknowledgment packet is delivered to signalling means 450T at step 943.
Next, at a step 942, the acknowledgment synthesized status is set to the value “YES” for the corresponding protocol stack in the data structure “Ack synthesis status” table 850.
If the value of the status is “YES” at the verification step 937 or after implementing step 942, the row retrieved at step 935 is deleted at a step 945.
At step 950, it is verified whether the acknowledgment synthesis has been implemented for the totality of the rows in the data structure “Tx Seq-num translation table” 840. In the affirmative case, the combining module 430T returns to the initial step 930. In the negative case, the next row of the data structure 840 is selected at step 955 and step 937 is next implemented.
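The acknowledgment synthesis loop of steps 930 to 955 may be condensed, by way of a non-limiting sketch, into a single pass over the translation table (the per-row deletion and status flags of Figure 9b are folded into the return values; all names are hypothetical):

```python
def synthesize_acks(translation_rows: list, global_ack: int):
    """For every sub-flow that produced payload covered by the global
    acknowledgment, synthesize one local acknowledgment (local sequence
    number incremented by the packet size, per step 940). Rows whose
    payload is fully acknowledged are dropped (step 945)."""
    acks = {}       # sub_flow -> synthesized Local Ack Sequence Number
    remaining = []  # rows not yet fully acknowledged
    for row in translation_rows:
        if row["global"] + row["size"] <= global_ack:
            local_ack = row["local"] + row["size"]
            acks[row["sub_flow"]] = max(acks.get(row["sub_flow"], 0), local_ack)
        else:
            remaining.append(row)
    return acks, remaining

rows = [
    {"sub_flow": "A", "local": 1000, "global": 0,   "size": 100},
    {"sub_flow": "B", "local": 5000, "global": 100, "size": 200},
    {"sub_flow": "A", "local": 1100, "global": 300, "size": 100},
]
acks, remaining = synthesize_acks(rows, 300)
print(acks)            # {'A': 1100, 'B': 5200}
print(len(remaining))  # 1: the last packet is not yet acknowledged
```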
The functionality of the modules in the architecture represented by Figure 4R for a TCP/IP protocol reception path, or the method for receiving data, will be described with reference to Figures 11a, 11b and 12. Data structures and storage means used by the architecture illustrated by Figure 4R for the TCP/IP protocol reception path are described with reference to Figure 10.
Signalling means 460R in Figure 4R comprises signalling information or fourth control information. In the described embodiment, the fourth control information is organized in a linked structure or chained list. This linked structure or chained list is named “Receive_vector_struct” and comprises a plurality of elements 1000, 1010, 1020. Each element 1000, 1010, 1020 is associated with a data block.
Of course, the number of elements in the linked structure may be different from the number of elements represented by this Figure.
Thus, the linked structure “Receive_vector_struct” is used to signal the data block construction which has been carried out by the assembling module 410R (the data block construction will be described with reference to Figure 11a).
According to an example, each element 1000, 1010, 1020 comprises a plurality of fields. A first field or “block size” field 1002 contains the size of the applicative payload data in the corresponding data block.
A second field or “Sub-flow ID” field 1003 comprises a parameter identifying the protocol stack 420Ra to 420Rc from which the data block is pulled.
A third field or “Flag” field 1005 comprises a binary value indicating whether acknowledged blocks are in their sequence order.
A fourth field comprises a pointer to the next element. Thus, the fourth field in a first element 1000 points to a second element 1010, the fourth field in the second element 1010 points to a third element 1020, etc.
A second data structure “Rx Seq-num translation table” 1040 is used by the multiplexing module 430R (Figure 4R) to store information related to packets previously received by the NIC device 440R. Each row of this data structure 1040 stores, for a packet:
- the packet size 1048,
- a parameter “Sub-flow ID” 1044 identifying the TCP/IP protocol stack instance which has processed the packet,
- an initial “Global sequence number” 1042 which is generated by the remote device and carried by the TCP segments delivered from the NIC device 440R to the protocol stack instances 420Ra to 420Rc. According to an embodiment, the “Global Sequence Number” is contained in the packet header of received TCP segments, and
- an effective sequence number 1046 called “Local_Seq_num”. This “Local_Seq_num” is used by the TCP/IP protocol stack instances 420Ra to 420Rc and, as will be described, it is written by the multiplexing module 430R in order to replace the initial value of the Global Sequence Number 1042. According to an embodiment, the “Local Sequence Number” (“Local_Seq_num”) 1046 is contained in the packet header of TCP segments when forwarded by the multiplexing module 430R to the different stack instances 420Ra to 420Rc.
A third data structure “Last Ack Pk info per sub-flow table” 1030 is used by the multiplexing module 430R in order to keep track of the last packet acknowledged by each TCP/IP protocol stack instance 420Ra to 420Rc. Each row of this data structure “Last Ack Pk info per sub-flow table” 1030 stores, for each TCP/IP protocol stack instance identified by a “Sub-flow ID” identifier 1032, the local acknowledgment sequence number “Local Ack Seq-num” 1034 and the “Rx window size” 1036.
The information in the second and third data structures 1030, 1040 is used to support packet acknowledgment from the TCP/IP protocol stack instances 420Ra to 420Rn and packet header sequence numbering management.
A fourth data structure “Last Rx Pk info per sub-flow table” 1050 is used by the multiplexing module 430R in order to keep track, for each TCP/IP protocol stack instance 420Ra to 420Rn, of the sequence number (“Local_Seq_num”) 1054 associated with the last packet delivered to the protocol stack by the module 430R. Each row of this fourth structure “Last Rx Pk info per sub-flow table” 1050 comprises a parameter “Sub-flow ID” 1052 identifying a TCP/IP protocol stack instance, the sequence number (“Local_Seq_num”) 1054 associated with the last packet delivered to the protocol stack, and the packet size (“Pk_size”) 1056.
The information in this fourth data structure 1050 is used in particular to identify the last acknowledged packet for each TCP/IP protocol stack instance.
A fifth data structure 1070 is used by the re-assembling module 430R in order to keep track, for each protocol stack instance 420Ra to 420Rn, of the number of acknowledged blocks of payload that are not in order from an application data perspective, i.e. that cannot be released to the application even though the corresponding data is acknowledged. This structure is used by the reassembling algorithm that will be described with reference to Figure 13.
A last structure 1060 keeps track of the next expected Global Sequence number that will be released to the application 400T, once acknowledged. This structure is used by the algorithm described in Figure 11b.
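By way of illustration, the reception-side structures of Figure 10 may be modelled as follows (a hypothetical condensation; the actual tables are identified only by their reference numerals, and the class and field names below are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class RxSeqRow:            # one row of the "Rx Seq-num translation table" 1040
    global_seq: int        # field 1042: number carried in the received header
    sub_flow_id: int       # field 1044: local stack processing the packet
    local_seq: int         # field 1046: number used inside that stack
    pk_size: int           # field 1048

@dataclass
class RxState:
    translation: list = field(default_factory=list)  # table 1040
    last_ack: dict = field(default_factory=dict)     # table 1030: sub-flow ->
                                                     # (Local Ack Seq-num 1034,
                                                     #  Rx window size 1036)
    last_rx: dict = field(default_factory=dict)      # table 1050: sub-flow ->
                                                     # (Local_Seq_num 1054,
                                                     #  Pk_size 1056)
    out_of_order_blocks: dict = field(default_factory=dict)  # structure 1070
    next_expected_global: int = 0                    # structure 1060

state = RxState()
state.translation.append(RxSeqRow(global_seq=0, sub_flow_id=1,
                                  local_seq=0, pk_size=1460))
print(state.translation[0].pk_size)  # 1460
print(state.next_expected_global)    # 0
```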
The functionality of the multiplexing module 430R or the steps of the method for receiving data implemented by the multiplexing module 430R in the architecture represented by Figure 4R for a TCP/IP protocol reception path is represented by Figure 11a.
A packet received by the multiplexing module 430R from the NIC device 440R may be either:
- a new data packet in sequence with previously received data packets, i.e. having a Global Seq_num which is consecutive to the Global Seq_num of a previously received data packet, or
- a new data packet out of sequence (i.e. having a global sequence number non-consecutive to a global sequence number of a previously received data packet), or
- a retransmitted packet, i.e. a data packet which has already been delivered to a protocol stack instance.
Thus, the data flow is divided into sub-flows, the execution of the plurality of sub-flows being respectively parallelized on the plurality of protocol stacks 420Ra to 420Rc.
The multiplexing module 430R is configured to push the packets received from the NIC device 440R to an input queue of a protocol stack instance 420Ra to 420Rc. The pushed packets have consecutive Local Sequence Numbers. The multiplexing module 430R is configured to dispatch the received packets to a plurality of protocol stack instances in order to parallelize the processing of the received packets. Thus, each protocol stack 420Ra to 420Rc individually processes a part of the packets received by the NIC device 440R.
According to an embodiment, a plurality of successive packets received from the NIC device 440R are delivered to the same protocol stack instance before switching to another protocol stack instance. According to an embodiment, the decision of switching to another protocol stack instance (step 1106) is based on the accumulated value of packet payload. In this embodiment, a switching decision is taken when the accumulated value of packet payload reaches a predetermined amount of data called the data block size, or is forced when an “out of sequence” condition (sequence rupture) is detected for a newly received packet.
It may be noted that the packets received from the NIC device have a sequence number field in their packet header (called “Global Seq_num”). When a packet is delivered to a protocol stack instance, the “Global Seq_num” is replaced by a “Local Seq_num” which keeps packet sequence numbering coherency within each individual protocol stack instance. To perform those translation operations, the Global sequence numbers and Local sequence numbers associated with each protocol stack instance are kept (as described with reference to Figure 10).
When a new packet is received at initial step 1100, it is verified at a verification step 1102 whether the packet is a new one or a retransmitted one, and, if the packet is a new one, whether the packet is in sequence or out of sequence. According to an embodiment, at the verification step 1102 it is verified whether the global sequence number in the packet header is equal to the previous global sequence number incremented by the previous packet size, the previous global sequence number and the previous packet size being stored in the data structure 1040 (“Rx Seq-num translation table” in Figure 10). If the response is affirmative, the received packet is a new packet in sequence with the packet previously received.
When a new packet in sequence is received by the multiplexing module 430R, at a test step 1104 it is checked whether enough space is available within the protocol stack instance 420Ra to 420Rc to which previously received new packets were delivered. In particular, it is checked whether enough space is available in the data block formed by the received packets, the data block being next delivered to the assembling module 410R. According to an embodiment, it is checked whether the remaining space in the data block for the protocol stack instance is sufficient, the predetermined data block size being, for example, 64 KBytes.
It may be noted that if two received packets are in the sequence of packets, they are pushed to the same protocol stack instance 420Ra to 420Rc until the accumulated payload is greater than the predetermined data block size.
If two received packets are out of sequence, they are pushed to different protocol stack instances 420Ra to 420Rc.
If the response is positive, the received packet is pushed into the current protocol stack instance 420Ra to 420Rc. If the response is negative, another protocol stack instance is used (and consequently a block with an adapted new block size) to process the new packets received in sequence. Thus, at a step 1106 a next protocol stack instance 420Ra to 420Rc is selected and the remaining space allocated for the protocol stack is set to a predetermined block size.
Either once step 1106 has been implemented or when the response at the test step 1104 is positive, a step 1108 is implemented for retrieving the local sequence number for the last received packet from the data structure “Last Ack Pk info per sub-flow table” 1030.
Next, at step 1110, a new row is added in the data structure “Rx Seq-num translation table” 1040 containing the sub-flow ID identifier, the packet size, the global sequence number and the local sequence number incremented by the packet size. The remaining space value is next decreased by the packet size at a step 1112 and, in the data structure 1050 (“Last Rx Pk info per sub-flow table”), the local sequence number and the packet size associated with the current protocol stack are updated at a step 1114.
When a new packet is detected “out of sequence”, a new stack instance is also used, to initiate a new block.
It may be noted that when the response is negative at the verification step 1102, it is verified at a second verification step 1120 whether the global sequence number in the received packet header is higher than the global sequence number incremented by the packet size, the global sequence number and the packet size being stored in the data structure 1040 (“Rx Seq-num translation table” in Figure 10). If the response is positive, the newly received packet is a packet “out of sequence”. In such a case, steps 1106, 1108, 1110, 1112 and 1116 are implemented as for the case of a new packet received “in sequence”.
If the response is negative at the second verification step 1120, the newly received packet is a retransmitted packet, and it is delivered to the protocol stack to which that same packet was previously delivered. According to an embodiment, at a step 1122, the row of the data structure “Rx Seq-num translation table” 1040 having the global sequence number of the received packet is retrieved.
Following either the step 1122 for the retransmitted packets or the step 1114 for the new packets, the global sequence number is replaced in the packet header by the local sequence number at step 1116. Next, at step 1118, the packet is delivered to the input queue of the current protocol stack of the “Sub-flow ID” identifier associated with the local sequence number.
Next, the multiplexing module 430R returns to the initial step 1100 in order to wait for the reception of a new packet from the NIC device 440R.
To sum up, when a new received packet is delivered to a given stack instance:
- the local Sequence Number in the packet header is set according to the Local Sequence Number of a previous packet delivered to the same stack instance, and to the packet size (steps 1108 and 1116),
- the remaining size in the data block containing the received packets is updated with the packet size (step 1112),
- a new row is created in the data structure 1040 (in Figure 10) in order to keep track of the association between the Global Sequence Number and the Local Sequence Number, the packet size and the protocol stack instance identifier (step 1110),
- the local Sequence Number and the packet size are stored separately in the data structure 1050 for the protocol stack instance (step 1114).
When a retransmitted packet is received from the NIC device 440R, the corresponding row is retrieved from data structure 1040 using the Global Sequence Number (step 1122), and the packet is delivered to the protocol stack instance having previously received the packet, with the same previous Local Sequence Number (step 1116).
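The dispatch path summarized above might be sketched roughly as follows (a Python sketch under stated assumptions; the `RxDispatcher` class and its dictionaries are hypothetical stand-ins for structures 1040 and 1050, and the selection of the stack instance for new packets is taken as given):

```python
class RxDispatcher:
    """Sketch of the 430R dispatch path: translates the Global Sequence
    Number of each received packet into a per-stack Local Sequence Number."""

    def __init__(self):
        self.translation = {}   # like structure 1040: global_seq -> (stack_id, local_seq, size)
        self.last_rx = {}       # like structure 1050: stack_id -> (local_seq, size)

    def dispatch(self, global_seq, size, stack_id):
        if global_seq in self.translation:
            # Retransmitted packet: reuse the previous row (step 1122).
            stack_id, local_seq, size = self.translation[global_seq]
        else:
            # New packet: derive the local sequence number from the last
            # packet delivered to the same stack (steps 1108/1110/1114).
            prev_seq, prev_size = self.last_rx.get(stack_id, (0, 0))
            local_seq = prev_seq + prev_size
            self.translation[global_seq] = (stack_id, local_seq, size)
            self.last_rx[stack_id] = (local_seq, size)
        # Steps 1116/1118: the packet header would carry local_seq and the
        # packet would be queued to the stack identified by stack_id.
        return stack_id, local_seq
```

A retransmission of an already-seen global sequence number thus maps back to the same stack and the same local sequence number.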
The multiplexing module 430R manages the packet acknowledgments originating from the protocol stacks in the device. The packet acknowledgment management used by the architecture presented in Figure 4R for the TCP/IP protocol reception path, or steps of the method for receiving data according to an embodiment, is represented by Figure 11b.
The multiplexing module 430R is configured to:
- synthesize the acknowledgment of packets for a remote device, such as a standard TCP/IP device, from the packet acknowledgments generated by the plurality of protocol stack instances which parallelize the protocol processing, and
- compute an amount of payload aggregated by a same stack, for as long as packet acknowledgments are received from individual protocol stack instances 420Ra to 420Rc.
It may be noted that the individual protocol stack instances 420Ra to 420Rc generate packet acknowledgments having an acknowledgment Sequence Number (Local Ack Sequence Number) that depends on the Local Sequence Number in the headers of packets delivered by the Rx packet engine 430R (Figure 4R).
Further, it is necessary to generate packet acknowledgments back to the remote device, with an acknowledgment Sequence Number (Global Ack Sequence Number) which is consistent with the Global Sequence Number in the headers of the packets received from the NIC device 440R.
Nevertheless, a Global Ack Sequence Number value within an acknowledgement packet pushed to the NIC device 440R is generated only if the corresponding packets have been acknowledged by individual stack instance(s) 420Ra to 420Rc, while the packets may have been dispatched to and processed by different stack instances.
Therefore, when a protocol stack instance generates a packet acknowledgement containing a Local Ack Sequence Number, the data structure “Last Ack Pk info per sub-flow table” 1030 (Figure 10) is updated for the concerned protocol stack instance.
At an initial step 1180, the re-assembling module 430R sets the “Next_expected_Global_Seq” variable 1060 to the lowest value of “Global_Seq_num” 1042 among all rows of structure 1040.
At a subsequent step 1150, the re-assembling module 430R waits for the reception of a new packet acknowledgment originating from any one of the plurality of protocol stack instances.
As indicated above, at step 1152, the data structure “Last Ack Pk info per sub-flow table” 1030 (Figure 10) is updated with the acknowledgment number and the “Rx window size” received from the protocol stack instance.
Next, at step 1154 the block payload size is set to zero, and a variable “Flag” 1005 is set to zero. This variable “Flag” 1005 represents a sequence order property of the next block of payload that may be acknowledged by any stack instance 420Ra to 420Rc. At step 1156 the lowest row of the data structure “Rx_Seq-num translation table” 1040 (Figure 10) containing the same “Sub_flow ID” 1044 as the stack instance having generated the Ack packet is reached.
At a verifying step 1158, it is verified whether the local sequence number incremented by the packet size is lower than the local acknowledge number in the data structure “Last Ack Pk info per sub-flow table 1030 for a protocol stack instance.
If the protocol stack instance in the row of the data structure “Rx_Seq-num translation table” 1040 is the same as the protocol stack instance which generated the packet acknowledgment, and if the Local Sequence Number in the data structure “Rx_Seq-num translation table” 1040 is lower than the Local Sequence Number value in the data structure “Last Ack Pk info per sub-flow table” 1030 at step 1158, the payload of the packet can be acknowledged relative to the global sequence.
In this case, at step 1182, a test is performed to check whether the “Global_Seq_num” 1042 in the corresponding row has the same content as “Next_expected_Global_Ack_Seq” 1060.
If this is the case, it means that the acknowledged payload is ordered from an application data perspective, because the first byte sequence from the Global Sequence numbering equals the next expected byte position designated by structure 1060. Next, at step 1162, the “Flag” variable is set to 1, “Next_expected_Global_Seq” 1060 is refreshed from the previous value incremented by the packet size, and the corresponding row in the data structure “Rx_Seq-num translation table” 1040 is removed, a next row of the data structure “Rx_Seq-num translation table” 1040 being reached. Then, at step 1160 the block payload size is incremented with the packet size, the packet size being stored in the data structure “Rx_Seq-num translation table” 1040. Thus, the accumulated packet payload size is computed.
If test 1182 is negative, step 1160 is directly reached without implementing step 1162.
At a verification step 1164, it is verified whether the last row of the data structure “Rx_Seq-num translation table” 1040 corresponds to the same protocol stack instance.
If the protocol stack instance is the same, the multiplexing module 430R returns to step 1156, and the verifying step 1158 is implemented again.
If the protocol stack instance is not the same, at a step 1170 a new element is inserted in the chained list “Receive_vector_list” (450R in Figure 4R) by using the block size and the “Sub-flow ID” identifier of the last row of the data structure 1040, the element comprising an associated block size field 1002 and a protocol stack instance identifier 1003.
When a data block has been received and is acknowledged (i.e. a data block is ready for transmission to the user application), the multiplexing module 430R informs the assembling module 410R about it. Thus, a new element is inserted in the chained list “Receive_vector_list” when a new data block has been acknowledged. When a protocol stack instance 420Ra to 420Rc acknowledges a sequence (from a local sequence point of view), the data structure 1040 is analyzed in order to verify whether the sequence which has been acknowledged comprises a portion which is in sequence from a global sequence point of view. If that is the case, the portion in sequence corresponds to a data block to be transmitted to the assembling module 410R.
The payload of the packet is not acknowledged relative to the local sequence if, at the verifying step 1158, the protocol stack instance in the row of the data structure “Rx_Seq-num translation table” 1040 is the same as the protocol stack instance which generated the packet acknowledgment, but the Local Sequence Number value for the protocol stack instance in that row is greater than the Local Sequence Number value in the data structure “Last Ack Pk info table” 1030.
In both cases, when the block payload size is zero the multiplexing module 430R returns to the initial step 1180. When the block payload size is not zero, the step 1170 is implemented.
Once the step 1170 is implemented, the block payload size is set to zero at step 1172, and at step 1174, the packet acknowledgment is generated and addressed to the NIC device 440R. The generated acknowledgement packet contains a Global Acknowledgment Sequence Number and an Rx window size. According to an embodiment, the Global Acknowledgment Sequence Number is computed by incrementing the Global Sequence Number by the packet size of the last packet for which the response at the verifying step 1158 was positive. The “Rx window size” is retrieved from the “Last Ack Pk info” table 1030.
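The acknowledgment-synthesis walk of steps 1156 to 1182 can be illustrated in simplified form (a hedged Python sketch; the row layout and function name are assumptions, and, unlike the flowchart, acknowledged rows are dropped regardless of global order to keep the example short):

```python
def synthesize_global_ack(rows, local_ack, stack_id, next_expected_global):
    """Walk the 'Rx Seq-num translation table' rows for one stack instance
    and return (block_payload_size, in_order_flag, new_next_expected_global).
    Each row is (global_seq, stack_id, local_seq, size); local_ack is the
    Local Ack Sequence Number reported by the stack instance (table 1030)."""
    block_payload = 0
    flag = 0                                          # step 1154
    remaining = []
    for g_seq, s_id, l_seq, size in sorted(rows):     # step 1156: lowest row first
        if s_id == stack_id and l_seq + size <= local_ack:   # test of step 1158
            if g_seq == next_expected_global:         # test of step 1182
                flag = 1                              # step 1162: in order
                next_expected_global = g_seq + size
            block_payload += size                     # step 1160: accumulate payload
        else:
            remaining.append((g_seq, s_id, l_seq, size))
    rows[:] = remaining                               # acknowledged rows removed
    return block_payload, flag, next_expected_global
```

A non-zero `block_payload` would then feed step 1170 (a new element in the chained list) and step 1174 (the acknowledgment pushed to the NIC device).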
It may be noted that for each protocol stack instance 420Ra to 420Rc, the payload of received packets is aggregated in the queue available for the assembling module 410R. It means that the totality of the payload containing applicative data has been processed by the protocol stack instances 420Ra to 420Rc. The payload containing applicative data is thus managed by the multiplexing module 430R, which delivers ordered block sizes and protocol stack instance identifiers to successively recover payload from the different output queues of the stack instances, for example by using the chained list “Receive_vector_struct” in the signalling means 460R.
Figure 11c describes an example of behaviour of a multiplexing module (430R or 430R’) when receiving notifications from the assembling module 410R, 410R’.
According to an embodiment, during execution of the assembling module 410R, 410R’ algorithm, an “Rx window size” for a given protocol stack instance 420Ra to 420Rc may need to be reduced to limit delivery of received segments to this given stack (see the head-of-line avoidance algorithm described with reference to Figure 13). Each time the assembling module 410R, 410R’ notifies of a new “Rx window size” value (at step 1190) for a given sub-flow ID, using the fourth signalling means 460R, 460R’, this new “Rx window size” replaces the existing value in the “Last Ack Pk info per sub-flow” table (at step 1191) for the associated sub-flow ID.
The functionality of the assembling module 410R in the architecture represented by Figure 4R for the TCP/IP protocol reception path is represented by Figure 12.
The assembling module 410R is configured in particular for:
- ordering and combining the data blocks originating from the protocol stack instances into blocks of payload,
- providing flow control feedback from the packet combining function, and
- using signalling information containing the packet combining order and block sizes.
At an initial step 1200, an element from the chained list “Receive_vector_struct” is extracted. Each element indicates, at step 1210, by reading the sub-flow ID and the size of this payload portion, where the next part of the applicative payload has to be recovered.
The payload is then pushed into an application socket at step 1220 and the element in the chained list is removed at step 1230.
Steps 1200 to 1230 are implemented until the chained list “Receive_vector_struct” is empty. Thus, until the chained list is empty, the next element of the chained list is extracted at the initial step 1200.
It may be noted that since the number of bytes read from the application socket is not known in advance, it may happen that a number of bytes greater than the block size is retrieved from the application socket. In this case, the additional bytes are added to a next block to be retrieved from a protocol stack instance.
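Steps 1200 to 1230 amount to draining the chained list element by element; a minimal sketch, assuming byte-string queues (`stack_queues` and `app_socket` are hypothetical stand-ins for the stack output queues and the application socket):

```python
from collections import deque

def reassemble(receive_vector, stack_queues, app_socket):
    """Drain the 'Receive_vector_struct' chained list (steps 1200-1230):
    each element names the stack queue holding the next payload portion."""
    while receive_vector:                       # until the chained list is empty
        sub_flow_id, block_size = receive_vector.popleft()   # steps 1200/1210
        queue = stack_queues[sub_flow_id]
        # Take block_size bytes from the head of the corresponding queue.
        payload, stack_queues[sub_flow_id] = queue[:block_size], queue[block_size:]
        app_socket.extend(payload)              # step 1220; element removed (1230)
```

With blocks interleaved across two sub-flows, the output is the payload restored to application order.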
Head-of-line blocking (due to flow control or a congestion issue) could originate at the reception side if one protocol stack cannot correctly receive blocks of data and if, simultaneously, buffers are no longer available to communicate through the other protocol stacks.
To anticipate such an issue with a remote device transmitting according to the MPTCP standard, it may be necessary at the reception side to artificially reduce the remote reception window size for the stacks still able to communicate. This can be done by computing a new remote reception window size corrected with a margin.
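As an illustration of such a correction, the reduced window might be computed along these lines (hypothetical sketch; the margin policy and ratio are not specified by the text):

```python
def corrected_rx_window(advertised_window, margin_ratio=0.25):
    """Artificially reduce the reception window advertised to the remote
    transmitter for the stacks still able to communicate, keeping a safety
    margin so buffers remain available (hypothetical margin policy)."""
    margin = int(advertised_window * margin_ratio)
    return max(0, advertised_window - margin)
```

For a 64 KByte advertised window and a 25% margin, the corrected window would be 48 KBytes.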
Figure 13 illustrates a flowchart of a re-assembling control algorithm implemented by the assembling module 410R, 410R’.
The re-assembling operates in association with the data structure 1070 (represented in Figure 10), the “Rx re-assembling path status” table, which stores for each protocol stack (“Stack ID”) the number of out-of-order received blocks pending in the reception queue (equivalent to the number of flags at zero (“Nb_flag_0”) for this protocol stack).
When the algorithm in the assembling module 410R is started at step 1300, the availability of any notification from the packet Rx decision (made at the step 1170) is checked at step 1310. It should be noted that, at the step 1170, a new element was inserted in the chained list “Receive_vector_list” (450R in Figure 4R).
Upon detection of a notification, the process moves to a step 1320 to update the “Rx re-assembling path status” table 1070 for the corresponding stack. Thus, “Nb_flag_0” is incremented if the value of the received “flag” is 0 (i.e. the block is out of sequence order), or “Nb_flag_0” is decremented if the value of the received “flag” is 1 (i.e. the block is in sequence), considering that “Nb_flag_0” must be equal to or greater than 0. For a “flag” value of 1, the associated block can be released to the user application 400R.
Then at step 1330, it is checked for each protocol stack whether the value of “Nb_flag_0” is equal to or greater than a pre-determined threshold. In the affirmative case, at step 1340, a flow control limitation for this protocol stack is requested by using the signalling means 460R or 460R’ (Figure 5b) in order for the remote transmitter to reduce its transmissions and thus avoid head-of-line blocking.
In the negative case, there is no need to request a flow control limitation and so the system waits for a new notification from the packet Rx decision at step 1310.
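The per-stack counting of steps 1320 to 1340 could be sketched as follows (Python, with an invented `HolMonitor` name; the threshold value is illustrative):

```python
class HolMonitor:
    """Track out-of-order blocks per stack (table 1070) and request flow
    control when 'Nb_flag_0' reaches a threshold (steps 1320-1340)."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.nb_flag_0 = {}                      # stack_id -> pending count

    def on_block(self, stack_id, flag):
        count = self.nb_flag_0.get(stack_id, 0)
        if flag == 0:                            # block out of sequence order
            count += 1                           # step 1320: increment
        else:                                    # in sequence: block released
            count = max(0, count - 1)            # Nb_flag_0 stays >= 0
        self.nb_flag_0[stack_id] = count
        return count >= self.threshold           # step 1330: request limitation?
```

The boolean returned would trigger, via signalling means 460R, the flow control limitation of step 1340.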

Claims (48)

1. A system for transmitting data originating from at least one user application to a remote device, said transmitting system comprising for a user application:
- a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- a segmenting module configured for segmenting data originating from said user application into a plurality of data blocks, and for dispatching said plurality of data blocks to the plurality of protocol stacks,
- a combining module configured for ordering data originating from at least one of the plurality of protocol stacks in order to generate a data flow, based on first control information originating from said segmenting module, the first control information being relative to the order in which data originating from said user application is segmented, and
- a network interface controller configured for receiving said data flow from the combining module and for transmitting said data flow over a communication network.
2. Transmitting system according to claim 1, wherein said data flow originating from said combining module is organized in packets, the packets being generated by said plurality of protocol stacks from said data blocks generated by said segmenting module.
3. Transmitting system according to any one of claims 1 or 2, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
4. Transmitting system according to any one of claims 1 or 2, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
5. Transmitting system according to any one of claims 1 to 4, wherein said segmenting module is configured to dispatch the plurality of data blocks into the plurality of protocol stacks taking into account second control information originating from the combining module, said second control information representing the status of the processing load of at least one of the plurality of protocol stacks.
6. Transmitting system according to claim 5, wherein said segmenting module is further configured to schedule dispatch of data blocks into the plurality of protocol stacks based on the second control information, the second control information further comprising an acknowledgment of a data block processed by one protocol stack.
7. Transmitting system according to claim 5, wherein said segmenting module is further configured to schedule dispatch of data blocks into the plurality of protocol stacks based on the second control information, the second control information further comprising an acknowledgment of several data packets associated to several data blocks processed by several protocol stacks.
8. Transmitting system according to any one of claims 1 to 7, wherein the combining module is further configured for generating data acknowledgment synthesis for the plurality of protocol stacks from data acknowledgment received from said remote device.
9. Transmitting system according to any one of claims 1 to 8, wherein said first control information originating from the segmenting module comprises for each data block generated by said segmenting module, the size of the data block and a parameter identifying a protocol stack to which the data block is dispatched.
10. Transmitting system according to any one of claims 1 to 8, wherein a socket is associated with each protocol stack, said plurality of data blocks being dispatched to the plurality of protocol stacks respectively through a plurality of sockets.
11. Transmitting system according to any one of claims 1 to 4, wherein said communication network comprises a plurality of transmission paths, each transmission path associated respectively to each protocol stack, wherein the segmenting module is further configured for dispatching said plurality of data blocks based on previous transmission conditions of data blocks on the plurality of transmission paths.
12. System for receiving data, comprising:
- a network interface controller for receiving a data flow originating from a remote device, said data flow being organized in data units,
- a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- a multiplexing module for dispatching the data units forming the received data flow into the plurality of protocol stacks, and
- an assembling module for ordering data blocks originating from at least one of the plurality of protocol stacks, based on third control information originating from said multiplexing module, and for transmitting said ordered data blocks to a user application, said third control information comprising information relative to the order in which the data units are dispatched.
13. Receiving system according to claim 12, wherein said data units are packets, the packets being processed by said plurality of protocol stacks in order to generate data blocks.
14. Receiving system according to any one of claims 12 or 13, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
15. Receiving system according to any one of claims 12 or 13 wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
16. Receiving system according to any one of claims 12 to 15, wherein said multiplexing module is configured to dispatch the data units forming said data flow into the plurality of protocol stacks taking into account fourth control information originating from the assembling module, said fourth control information representing the status regarding the processing load of protocol stacks.
17. Receiving system according to claim 16, wherein said multiplexing module is further configured to schedule dispatch of data units forming said data flow into the plurality of protocol stacks based on the fourth control information, the fourth control information further comprising a reception window size for one protocol stack.
18. Receiving system according to any one of claims 12 to 17, wherein the multiplexing module is further configured for generating data acknowledgment synthesis from acknowledgment received from the plurality of protocol stacks.
19. Receiving system according to any one of claims 12 to 18, wherein said third control information comprises for each data block originating from a protocol stack, a data block size and a parameter identifying the protocol stack processing said data block.
20. Receiving system according to any one of claims 12 to 19, wherein a socket is associated with each protocol stack, said plurality of data blocks being pulled from the plurality of protocol stacks to the multiplexing module respectively through a plurality of sockets.
21. A method for transmitting data originating from at least one user application to a remote device, said transmitting method comprising for a user application:
- processing said data in a plurality of protocol stacks, each protocol stack being processed on a processing resource, data processing in a protocol stack being independent from the other protocol stack,
- segmenting data originating from said user application into a plurality of data blocks,
- dispatching said plurality of data blocks to the plurality of protocol stacks,
- ordering data originating from at least one of the plurality of protocol stacks in order to generate a data flow, based on first control information relative to the order in which data originating from said user application is segmented, and
- transmitting said data flow over a communication network.
22. Transmitting method according to claim 21, wherein said data flow is organized in packets, the packets being generated by said plurality of protocol stacks from said data blocks.
23. Transmitting method according to any one of claims 21 or 22, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
24. Transmitting method according to any one of claims 21 or 22, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
25. Transmitting method according to any one of claims 21 to 24, wherein said dispatching of the plurality of data blocks into the plurality of protocol stacks takes into account second control information representing the status regarding the processing load of at least one of the plurality of protocol stacks.
26. Transmitting method according to claim 25, wherein said dispatching of data blocks into the plurality of protocol stacks is further scheduled based on the second control information, the second control information further comprising an acknowledgment of a data block processed by one protocol stack.
27. Transmitting method according to claim 25, wherein said dispatching of data blocks into the plurality of protocol stacks is further scheduled based on the second control information, the second control information further comprising an acknowledgment of several data packets associated to several data blocks processed by several protocol stacks.
28. Transmitting method according to any one of claims 21 to 27, further comprising generating data acknowledgment synthesis to the plurality of protocol stacks from data acknowledgment received from said remote device.
29. Transmitting method according to any one of claims 21 to 28, wherein said first control information comprises for each generated data block, the size of the data block and a parameter identifying a protocol stack to which the data block is dispatched.
30. Transmitting method according to any one of claims 21 to 29, wherein data blocks are dispatched to the plurality of protocol stacks respectively through a plurality of sockets associated respectively with the plurality of protocol stacks.
31. A method for receiving data comprising:
- receiving a data flow originating from a remote device, said data flow being organized in data units,
- dispatching the data units forming the received data flow into a plurality of protocol stacks, each protocol stack being processed on a processing resource and configured to process data independently from the other protocol stacks,
- ordering data blocks originating from at least one of the plurality of protocol stacks, based on third control information originating from a multiplexing module, and
- transmitting said ordered data blocks to a user application, said third control information comprising information relative to the order in which the data units are dispatched.
32. Receiving method according to claim 31, wherein said data units are packets, the packets being processed by said plurality of protocol stacks in order to generate data blocks.
33. Receiving method according to any one of claims 31 or 32, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources being located in a same device.
34. Receiving method according to any one of claims 31 or 32, wherein, the plurality of the protocol stacks are respectively processed on a plurality of processing resources, said plurality of processing resources comprising at least two groups of processing resources, each group of processing resources being located in a different device.
35. Receiving method according to any one of claims 31 to 34, wherein said dispatching the data units forming said data flow into the plurality of protocol stacks takes into account fourth control information originating from an assembling module, said fourth control information representing the status regarding the processing load of protocol stacks.
36. Receiving method according to claim 35, wherein said dispatching the data units forming said data flow into the plurality of protocol stacks is further scheduled based on the fourth control information, the fourth control information further comprising a reception window size for one protocol stack.
37. Receiving method according to claim 35, wherein said dispatching of data units forming said data flow into the plurality of protocol stacks is further scheduled based on the fourth control information, the fourth control information further comprising a reception window size for several protocol stacks.
38. Receiving method according to any one of claims 31 to 37, further comprising generating data acknowledgment synthesis from acknowledgment received from the plurality of protocol stacks.
39. Receiving method according to any one of claims 31 to 38, wherein said third control information comprises for each data block originating from a protocol stack, a data block size and a parameter identifying the protocol stack processing said data block.
40. Receiving method according to any one of claims 31 to 39, wherein said plurality of data blocks are pulled from the plurality of protocol stacks to the multiplexing module respectively through a plurality of sockets associated respectively with the plurality of protocol stacks.
41. System for processing data comprising a system for transmitting data according to any one of claims 1 to 11 and a system for receiving data from said system for transmitting data.
42. System for processing data according to claim 41, wherein said system for receiving data is according to any one of claims 12 to 20.
43. System for processing data according to claim 41, wherein said system for receiving data is a system operating according to a TCP/IP network protocol or MPTCP/IP network protocol.
44. System for processing data comprising a system for receiving data according to any one of claims 12 to 20 and a system for transmitting data to said system for receiving data.
45. System for processing data according to claim 44, wherein said system for transmitting data is according to any one of claims 1 to 11.
46. System for processing data according to claim 44, wherein said system for transmitting data is a system operating according to a TCP/IP network protocol or MPTCP/IP network protocol.
47. Means for storing information which can be read by a computer or a microprocessor holding instructions of a computer program, for implementing a method for transmitting data according to any one of claims 21 to 30 and/or a method for receiving data according to any one of claims 31 to 40, when said information is read by said computer or said microprocessor.
48. Computer program product which can be loaded into a programmable apparatus, comprising a sequence of instructions for implementing a method for transmitting data according to any one of claims 21
5 to 30 and/or a method for receiving data according to any one of claims 31 to 40, when said computer program product is loaded into and executed by said programmable apparatus.
Intellectual Property Office
Application No: GB1621076.7
Claims searched: 1-48
GB1621076.7A 2016-12-12 2016-12-12 System and method for transmitting data and system and method for receiving data Active GB2557613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1621076.7A GB2557613B (en) 2016-12-12 2016-12-12 System and method for transmitting data and system and method for receiving data

Publications (3)

Publication Number Publication Date
GB201621076D0 GB201621076D0 (en) 2017-01-25
GB2557613A true GB2557613A (en) 2018-06-27
GB2557613B GB2557613B (en) 2020-03-25

Family

ID=58222056


Country Status (1)

GB: GB2557613B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114285918A (en) * 2021-12-30 2022-04-05 湖北天融信网络安全技术有限公司 Shunting method and device based on protocol analysis, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090097480A1 (en) * 2007-10-10 2009-04-16 Sun Microsystems, Inc. Parallelizing the tcp behavior of a network connection
US20140019982A1 (en) * 2012-07-13 2014-01-16 Rekesh John Core-affine processing on symmetric multiprocessing systems
CN103532955A (en) * 2013-10-18 2014-01-22 苏州斯凯迪网络科技有限公司 Embedded multi-protocol mobile network data acquisition probe equipment
US20150281112A1 (en) * 2014-03-31 2015-10-01 Nicira, Inc. Using different tcp/ip stacks with separately allocated resources
US20150281407A1 (en) * 2014-03-31 2015-10-01 Nicira, Inc. Using different tcp/ip stacks for different tenants on a multi-tenant host
US20150295782A1 * 2014-04-09 2015-10-15 Hcl Technologies Ltd Efficient mechanism to improve data speed between systems by MPTCP and MIMO combination



Similar Documents

Publication Publication Date Title
EP3707882B1 (en) Multi-path rdma transmission
US10116574B2 (en) System and method for improving TCP performance in virtualized environments
JP4583383B2 (en) Method for improving TCP retransmission process speed
JP4508195B2 (en) Reduced number of write operations for delivery of out-of-order RDMA transmission messages
US9888048B1 (en) Supporting millions of parallel light weight data streams in a distributed system
US8238350B2 (en) Message batching with checkpoints systems and methods
US5961659A (en) Independent simultaneous queueing of message descriptors
CN113874848A (en) System and method for facilitating management of operations on accelerators in a Network Interface Controller (NIC)
US9049218B2 (en) Stateless fibre channel sequence acceleration for fibre channel traffic over Ethernet
US7840682B2 (en) Distributed kernel operating system
US7835359B2 (en) Method and apparatus for striping message payload data over a network
US7664112B2 (en) Packet processing apparatus and method
US9118478B2 (en) Fault-tolerant data transmission system for networks with non-full-duplex or asymmetric transport
US8788576B2 (en) High speed parallel data exchange with receiver side data handling
KR101242338B1 (en) Multi-stream acknowledgement scheduling
US20190044875A1 (en) Communication of a large message using multiple network interface controllers
JP4979823B2 (en) Data transfer error check
US8566833B1 (en) Combined network and application processing in a multiprocessing environment
Kokshenev et al. Comparative analysis of the performance of selective and group repeat transmission modes in a transport protocol
GB2557613A (en) System and method for transmitting data and system and method for receiving data
US7830901B2 (en) Reliable network packet dispatcher with interleaving multi-port circular retry queue
CA2985674A1 (en) Method and computer product for operating a memory buffer system
US9069625B2 (en) Method of parallel processing of ordered data streams
JP5761193B2 (en) Communication apparatus, communication system, packet retransmission control method, and packet retransmission control program
CN114095402A (en) RAFT distributed system transmission delay analysis method considering channel quality