WO2021078232A1 - A relay device based on multi-path scheduling (一种基于多路径调度的中继设备) - Google Patents

A relay device based on multi-path scheduling (一种基于多路径调度的中继设备)

Info

Publication number
WO2021078232A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
path
user
scheduling
processing module
Prior art date
Application number
PCT/CN2020/123085
Other languages
English (en)
French (fr)
Inventor
许辰人
倪蕴哲
钱风
Original Assignee
北京大学
Priority date
Filing date
Publication date
Application filed by 北京大学
Priority to US17/754,925 (published as US20230276483A1)
Publication of WO2021078232A1

Classifications

    • H04L67/14 Session management (H04L67/00 Network arrangements or protocols for supporting network services or applications)
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61 Scheduling or organising the servicing of application requests taking into account QoS or priority requirements
    • H04L69/14 Multichannel or multilink protocols
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/193 Flow control; Congestion control at the transport layer, e.g. TCP related
    • H04B7/15 Active relay systems
    • H04W24/02 Arrangements for optimising operational condition
    • H04W28/08 Load balancing or load distribution
    • H04W28/0992 Management thereof based on the type of application
    • H04W40/22 Communication route or path selection using selective relaying for reaching a BTS [Base Transceiver Station] or an access point
    • H04W72/54 Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/542 Allocation or scheduling based on quality criteria using measured or perceived quality
    • H04W72/543 Allocation or scheduling based on requested quality, e.g. QoS
    • H04W72/56 Allocation or scheduling criteria for wireless resources based on priority criteria
    • H04W72/566 Allocation or scheduling based on priority criteria of the information or information source or recipient
    • H04W72/569 Allocation or scheduling based on priority criteria of the traffic information
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Definitions

  • the present invention relates to the field of mobile communication technology, in particular to a relay device based on multi-path scheduling.
  • Heterogeneous networks include Wireless Personal Area Networks (WPAN) represented by Bluetooth and Zigbee, Wireless Local Area Networks (WLAN) represented by Wi-Fi and WiGig, Wireless Metropolitan Area Networks (WMAN) represented by WiMAX, mobile communication networks represented by 3G, 4G, and 5G, satellite networks, ad hoc networks, wireless sensor networks, etc.
  • existing communication terminals usually have multiple network interfaces.
  • notebook computers are equipped with wired network interfaces and wireless Wi-Fi network adapters.
  • Smart phones can connect to cellular networks (UMTS, 3G, 4G, 5G, etc.) and can also access Wi-Fi networks.
  • network operators usually configure backup links and equipment on the access link and the backhaul link to play a role when the network fails.
  • the idea of using multiple paths at the same time appeared to improve the robustness and transmission performance of the end-to-end connection.
  • Such a multi-path connection can balance load, dynamically switch, and automatically transfer services from the most congested and most easily interrupted path to a better path.
  • heterogeneous network integration includes: a) realizing reasonable selection and service switching between networks of multiple standards through multi-access interoperability; b) using a variety of connection methods to form flexible and reliable networks, reducing time and system overhead and improving energy efficiency; c) letting a complex network with multiple standards, multiple levels and multiple connections achieve unified self-organization and self-optimization, reducing capital expenditure and operating costs.
  • in 2011 the Internet Engineering Task Force (IETF) released the Multi-Path Transmission Control Protocol (MPTCP), which is compatible with TCP and with current applications and enables mobile terminal devices to use heterogeneous wireless network technologies for multi-path data transmission, with the goal of maximizing network throughput and reducing transmission delay.
  • the international patent document with the publication number WO2016144224A1 discloses a method and arrangement for multi-path service aggregation using the multi-path transmission control protocol MPTCP proxy.
  • a method of relaying data between a wireless device with MPTCP capability and a server is implemented in a multi-path transmission control protocol MPTCP agent configured with a unique Internet Protocol IP address. It includes: establishing an MPTCP session between the MPTCP agent and the wireless device, the MPTCP session including the first MPTCP sub-flow mapped on the first network path to the wireless device using the default service flow tuple; and establishing a TCP session with the server.
  • the method further includes initiating another MPTCP sub-flow in the MPTCP session between the MPTCP proxy and the wireless device, the additional MPTCP sub-flow being mapped to a second network path to the wireless device using a filtered traffic flow tuple that includes the unique IP address configured for the MPTCP proxy.
  • the data between the MPTCP agent and the server is exchanged in the TCP session.
  • MPTCP relies on kernel modification and requires both the client and the server to support MPTCP; therefore, the probability that every host on the network supports MPTCP is very low.
  • the method disclosed in that patent relays data between an MPTCP-capable wireless device and a server, with the MPTCP proxy placed between them: multi-path transmission is realized between the MPTCP-capable wireless device and the MPTCP proxy, while the MPTCP proxy and the server communicate over a normal single-path TCP connection.
  • This requires that one of the two communicating hosts supports MPTCP, with the MPTCP proxy placed on the side of the other host that does not. It therefore does not solve the problem of realizing multi-path transmission when neither communicating host supports MPTCP.
  • the Chinese patent document with the publication number CN108075987A discloses a multi-path data transmission method and equipment, in which at least two multi-path data sub-streams are established, via a first Internet Protocol (IP) address, between a multi-path proxy client and a multi-path proxy gateway, and data is transmitted over these sub-streams. Between the multi-path proxy gateway and the application server to be accessed by the multi-path proxy client, a TCP link is established and TCP data transmission is performed. Through the proxying of the multi-path proxy client and the multi-path proxy gateway, MPTCP multi-path data transmission based on the IP address information of the multi-path proxy client is realized.
  • This patent establishes an MPTCP multi-path transmission connection between a multi-path proxy client and a multi-path proxy gateway, in which data transmission between the multi-path proxy gateway and the server is based on TCP.
  • this kind of proxy gateway or proxy client requires its kernel to support the MPTCP protocol, which makes software programming difficult.
  • that patent addresses the problems that the proxy server's IP address is visible to both the user side and the network side and is therefore vulnerable to attacks, and that the network side cannot obtain the terminal's IP address and thus cannot collect statistics on or control terminal traffic. However, because MPTCP lives in the kernel, passing network-state data into the kernel incurs a large clock overhead, which not only causes performance loss but also turns the kernel into a potential data attack node and a source of system software security vulnerabilities.
  • the Chinese patent document with the publication number CN107979833A discloses a multi-state information fusion intelligent terminal based on heterogeneous network interconnection, including a dispatching system and hardware equipment; the dispatching system includes a service generation system, a signal management system, and a signal interface.
  • the hardware equipment includes PDT communication module card reader, LTE communication module card reader, Beidou system communication chip card reader, ad hoc network communication module card reader, shortwave satellite communication module card reader, volume control buttons , Network selection button, screen start button, touch control screen and built-in antenna.
  • the communication device disclosed in this patent adaptively selects the current optimal communicable network through the unified scheduling of the scheduling system based on the current network environment.
  • the scheduling method adopted by that patent only implements priority scheduling based on the signal strength of the different networks and does not take into account the round-trip time (RTT) or bandwidth (BW) of the current network; in fact, under dynamic mobility the correlation between signal strength and network performance is not as high as expected, so scheduling different data streams onto networks of different standards based on signal strength is unreliable.
  • a multi-path transmission packet scheduling scheme based on buffer overflow probability guarantee at the receiving end is disclosed. It can effectively reduce the disorder of data packets at the receiving end and reduce the occupancy of the buffer at the receiving end. It solves the problem that the existing packet allocation strategy only considers the average delay of the transmission path and ignores the random time-varying characteristics of the delay. With dynamically changing path delays, packets arriving at the receiving end will be out of order, prone to buffer overflow.
  • the Chinese patent document with the publication number CN107682258A discloses a virtualization-based multi-path network transmission method and device.
  • the method includes: the control node obtains the network status of the data transmission network in real time, and obtains the topological structure of the data transmission network;
  • the path information of the first target data is determined through a preset path selection algorithm;
  • the control node sends the path information to the source node; after the source node splits the first target data into multiple pieces of second target data according to the path information, it sends the pieces of second target data to the destination node over multiple paths, and the destination node receives them.
  • the pieces of second target data are then reassembled into the first target data, realizing multi-path transmission of the first target data without a complex cross-layer cooperation mechanism and reducing the complexity of the control and scheduling process.
  • the present invention provides a relay device based on multi-path scheduling, which is used to deploy on the routing path of multiple user terminals communicating with multiple servers to converge multiple wireless access networks.
  • the relay device at least includes: at least one communication receiving module, configured to receive user data from a plurality of the user terminals.
  • At least one hardware interface module is used to access a network that provides at least two independent communication paths, so as to distribute the user data through the at least two communication paths.
  • At least one data processing module is used to map the received user data to the respective interfaces of at least two mutually independent communication paths.
  • based on a specific type of target data, the data processing module diverts the user data received via the communication receiving module to a first processing path that processes data in a predetermined manner, and to a second processing path that is independent of the first processing path and bypasses the kernel protocol stack.
  • kernel modification can be avoided and it is compatible with the existing network middleware deployed on a large scale, thereby constructing a multi-path transmission framework at the user space level.
  • the data processing module can work in a separate network namespace to avoid conflicts with the kernel configuration used by other programs.
  • the first processing path sends data to the user space or the kernel protocol stack.
  • the second processing path directly transmits data conforming to a specific type of target data to the user space on the data processing module by bypassing the kernel protocol stack.
  • the data processing module inversely multiplexes the data in the second processing path onto multiple interfaces.
  • Inverse multiplexing refers to segmenting and encapsulating data streams into packets, and distributing the packets to multiple communication paths.
  • the present invention also provides a relay device based on multi-path scheduling, which at least includes a data processing module.
  • the data processing module is configured to transmit the data sent by at least one client/server in the following multi-stage scheduling manner: a first stage that determines the connection sequence of a plurality of clients/servers; a second stage that realizes queue-jumping transmission; and a third stage that maps the data streams of the connection sequence determined in the first and second stages onto at least one communication path.
  • the data processing module drives the user data in the second processing path to be transmitted over at least two mutually independent communication paths accessed by at least one of the hardware interface modules in a multi-stage continuous scheduling manner, thereby realizing multi-path transmission between multiple clients and multiple servers.
  • the data processing module is configured to implement the first stage of scheduling in the following manner: the user data in the second processing path, or the data transmitted by the clients/servers, is divided into at least two first groups with different first priorities, and the groups are always scheduled in order of first priority.
  • within a first group, a second priority is allocated to each connection related to a client/server in a completely fair scheduling manner, and the connections in the first group are scheduled according to the order of their second priorities.
  • the data processing module assigns the highest second priority to a newly appearing user flow, and implements scheduling in a cyclic manner.
  • scheduling in a cyclic manner means that when the transmission time of the new user flow exceeds a first time threshold, or its transmitted data volume exceeds a first data threshold, the second priority of the new user flow is reduced to the lowest, after which the second priority of the user flow is dynamically increased according to the completely fair scheduling method until the highest second priority is reached.
  • the data processing module is configured to implement the third stage of scheduling in the following manner: based on the connection sequence of the plurality of clients/servers determined in the first and second stages, the user flows corresponding to the clients/servers are mapped to at least one interface in sequence. Based on a second data threshold, the user flow of each client/server is divided into a first data flow before the second data threshold and a second data flow after the second data threshold; the first data flow is bound to one of the multiple interfaces, and the second data flow is mapped to the remaining interfaces.
  • the data processing module is configured to map the second data flow to the remaining interfaces in the following manner: the first scheduling behavior of multi-communication-path transmission over the remaining interfaces and the second scheduling behavior of cross-communication-path retransmission are unified, so that the best interface is allocated to each data packet in the second data flow to provide the best quality of service.
  • the data processing module is configured to implement the second stage of scheduling in the following manner: when the second scheduling behavior of the third stage triggers retransmission of a data packet, the retransmitted data packet is transmitted preferentially and is mapped to an available interface other than the original transmission interface, thereby realizing cross-communication-path retransmission.
  • when, in the first stage, one of the clients has opened multiple connections to the same resource in a server, the data processing module groups those connections of that client into a second group.
  • the network resources obtained by the second group are reallocated within the second group to the other user flows whose sessions have not terminated, thereby reducing the transmission time of those flows and balancing the overhead of connections to different servers whose data flows differ greatly in size.
  • the present invention also provides a relay device based on multi-path scheduling, which at least includes a data processing module.
  • the data processing module is configured to perform transmission/scheduling according to the order, from high to low, of the second priorities assigned to the data streams of the client/server connections.
  • the new data stream is assigned the highest second priority, and the second priority of the data stream is dynamically increased based on the time of existence of the data stream.
  • when a new data stream appears in a client/server connection, the data processing module is configured to assign the highest second priority to the new data stream; when the transmission time of the new data stream exceeds the first time threshold or its transmitted data volume exceeds the first data threshold, the second priority of the new data stream is reduced to the lowest.
  • when the second priority of the new data stream has been minimized, or there is a data stream that has not completed transmission for a long time, the data processing module is configured to increase, based on completely fair scheduling, the second priority of the data stream with the lowest priority level or of the data stream that has not completed transmission for a long time.
  • when a client in the first stage opens multiple connections to resources in the same server, the data processing module is configured to group those connections of that client into a second group.
  • the network resources obtained by the second group are reallocated within the second group to the other data flows whose sessions have not terminated.
  • the present invention also provides a relay method based on multi-path scheduling.
  • the method includes: deploying, on the routing paths over which a plurality of user terminals communicate with a plurality of servers, a first multi-path data processing module that communicates with the plurality of user terminals and converges the plurality of access networks.
  • based on an editable specific data type, the first multi-path data processing module diverts the received data to a first processing path that processes data in a predetermined manner and to a second processing path that is independent of the first processing path and bypasses the kernel protocol stack.
  • the data of the second processing path is mapped onto multiple mutually independent communication paths and communicated to a second multi-path data processing module in an inverse multiplexing manner, thereby realizing multi-path transmission between the multiple user terminals and the multiple servers.
  • Fig. 1 is a schematic diagram of a module of a preferred embodiment of the relay device of the present invention
  • Fig. 2 is a schematic block diagram of a preferred embodiment of the method of the invention.
  • 202: Hardware interface module; 203: Data processing module; 210: First processing path; 220: Second processing path; 401: Remote server; 501: Interface
  • User side: it can be various forms of User Equipment (UE), such as handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to wireless modems, mobile stations (MS), terminal equipment, etc.; it can also be an application on the user equipment, and "user terminal" can also denote a physical computer as opposed to a virtual machine.
  • the physical machine that provides the hardware environment for a virtual machine is sometimes also called the host machine or host.
  • Proxy server It can refer to a network proxy, which provides a special network service, allowing a network terminal to make an indirect connection with another network terminal through this service.
  • Bandwidth aggregation Multi-path expects that parallel transmission on multiple available paths can double the available bandwidth of the network. If this method can be used to achieve effective bandwidth aggregation, multi-homed devices will achieve good network performance.
  • Stream A sequence of data that can only be read once in a pre-defined order, specifically a complete TCP/IP link, containing multiple data packets.
  • User flow Refers to the data flow about users.
  • Packet Corresponds to the network layer of TCP/IP and refers to the data unit of TCP/IP protocol communication and transmission; it can also be called a data packet. In scheduling it is usually simply called a packet and is the granularity at which the scheduling strategy forwards data.
  • Fairness requires that the flow that is being multipathed and the flow of the same level share the same network resources in the bottleneck link. If the endpoint obtains more resources through multi-path transmission, it may cause network congestion and collapse.
  • Path starvation After a path has been starved of data (left idle), it cannot continue to transmit data normally.
  • User Space The running space of the user program.
  • Kernel Space The running space of the operating system kernel.
  • Context The context is simply an environmental parameter.
  • Environmental parameters refer to parameters such as network performance and transmission time and bytes when scheduling user streams.
  • Network namespace The Linux kernel provides namespaces, which wrap global system resources in an abstraction that is visible only to processes inside the namespace, thereby providing resource isolation; a network namespace gives all processes in the namespace a brand-new network stack, including network interfaces, routing tables, etc.
  • This embodiment discloses a relay device based on multi-path scheduling, which is configured to be deployed on a routing path for communication between multiple user terminals 100 and multiple servers 400 to converge multiple wireless access networks.
  • the multiple wireless access networks include wireless wide area networks (WWAN), wireless metropolitan area networks (WMAN), wireless local area networks (WLAN), wireless personal area networks (WPAN), mobile ad hoc networks (MANET), mobile communication networks (such as 3G, 4G, 5G, etc.), satellite networks, wireless sensor networks and other access technologies, i.e. heterogeneous networks running on different protocols.
  • the mobile communication network includes mobile communication networks of different mobile communication technologies used by different operators.
  • Mobile communication technologies referred to include GSM (Global System for Mobile Communication), WCDMA (Wideband Code Division Multiple Access), CDMA (Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), 3G (third-generation mobile communication technology), LTE (Long-Term Evolution) and SAE (System Architecture Evolution).
  • convergence refers to the construction of a complete multi-user multi-path transmission framework in a heterogeneous network scenario where multiple user terminals 100 are placed in the coexistence of multiple access technologies.
  • the integration can also perform comprehensive and unified resource management for heterogeneous networks that combine multiple access technologies such as GSM, WCDMA, LTE, Wi-Fi, etc., and provide multiple options for the access network of the user end 100, so that the user terminal 100 can select one or several access networks from the variety provided by the heterogeneous network and switch from one network to another, thereby using multiple coexisting access technologies to transmit data to the server 400 over multiple paths in parallel and provide aggregated bandwidth resources for the client 100.
  • the existing multi-path transmission scheme can use at least one of the multiple networks as the main transmission path, and other paths as the backup transmission path.
  • the main path and the backup path are switched to maintain the transmission continuity.
  • the data stream of the user end 100 can also be distributed over multiple communication paths 500, improving the utilization of the communication paths so as to achieve load balancing and maximize the aggregate bandwidth, while reliability can be improved through the multiple end-to-end connections.
  • the round-robin (RR) algorithm is the simplest and most direct scheduling management algorithm and can prevent path starvation.
  • However, the round-robin algorithm causes out-of-order packets to appear at the receiving end.
  • These out-of-sequence packets are also reordered at the receiving end and consume cache resources and processor resources of the receiving end. However, these resources are precisely what the mobile user terminal 100 lacks most.
  • the disorder at the receiving end is generally caused by packet loss and path changes.
  • this method is not suitable for multi-path transmission scenarios: packets arrive from different paths, and when differences in the bandwidth and delay of the paths cause disorder, the "missing" packet may still be in flight end-to-end, or its acknowledgement may simply not have arrived yet. Therefore, simply discarding out-of-order packets, or immediately retransmitting them and reducing the congestion window, will adversely affect performance.
  • the prior art can use explicit congestion notification (ECN) to distinguish congestion from packet loss, use forward error correction coding to recover from errors, and at the same time use a packet dynamic matching algorithm to select an appropriate path for each packet.
  • most prior art scheduling schemes combine throughput and fairness to perform packet scheduling, but they do not consider maintaining and improving the user's quality of experience (QoE), in particular how to optimize the aggregated multi-user QoE in application scenarios with heavy traffic.
  • most existing multi-path transmission methods or devices focus on the transport layer to achieve end-to-end multi-path transmission, rely on kernel modification, require simultaneous support from both the user end 100 and the server 400, and are not compatible with the network middleware already deployed at large scale.
  • the present invention provides a relay device based on multi-path scheduling.
  • the relay device can establish a complete multi-user multi-path transmission framework in user space and lift the packet scheduling logic to the application layer, so that the communication/relay device is compatible with existing network middleware without modifying the kernel or the applications; because the system design of the relay device provided by this embodiment sits at the user space layer, it is highly extensible and can integrate new packet scheduling strategies, facilitating deployment and performance optimization.
  • the relay device includes at least: at least one communication receiving module 201 for receiving user data from multiple user terminals 100; at least one hardware interface module 202 for accessing a network that provides at least two mutually independent communication paths 500, so as to distribute the user data over the at least two communication paths 500; and at least one data processing module 203 for mapping the received user data onto the respective interfaces 501 of the at least two mutually independent communication paths 500.
  • the communication receiving module 201 may be a gateway, a signal receiver, or a signal receiving circuit, which can receive data in various formats and analyze the received data.
  • the hardware interface module 202 may be a baseband module with a SIM card interface corresponding to different operators, or may be a Wi-Fi module, a Bluetooth module, a Zigbee module, or the like.
  • the data processing module 203 at least includes a processor and a memory.
  • the memory is used to store instructions.
  • the processor is configured to execute instructions stored in the memory.
  • the processor can be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array ( Field Programmable Gate Array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof.
  • the data processing module 203 carries an operating system, such as a Linux system.
  • a heterogeneous network can be accessed through the hardware interface module 202.
  • the heterogeneous network includes multiple wireless access networks. Different wireless access networks may use different communication paths for transmission. And the same wireless access technology may use different communication paths for communication.
  • the frequency bands used by China Mobile's TD-LTE are 1880-1890MHz, 2320-2370MHz, and 2575-2635MHz.
  • the frequency bands used by China Unicom's TD-LTE are 2300-2320MHz and 2555-2575MHz.
  • the frequency band used by China Unicom's FDD-LTE is: the uplink frequency band 1755-1765MHz, and the downlink frequency band: 1850-1860MHz.
  • the frequency bands used by China Telecom's TD-LTE are 2370-2390MHz and 2635-2665MHz.
  • the frequency band used by China Telecom's FDD-LTE is: uplink frequency band 1765-1780MHz.
  • the data processing module 203 diverts the user data received via the communication receiving module 201 to the first processing path 210 and the second processing path 220 based on the specific type of target data.
  • the target data refers to user-defined data. Users can edit specific types of target data as needed.
  • the specific type of target data refers to the frame structure of the target data and whether the source address of the data is the client 100 or the server 400 directly served by the relay device.
  • the data processing module 203 classifies the data received by the communication receiving module 201 based on the specific type of the target data. Data of a specific type belonging to the target data is diverted to the first processing path 210.
  • the first processing path 210 processes data in a predetermined manner.
  • the predetermined manner refers to processing methods other than multi-path transmission, such as forwarding control data used for accounting/settlement, or forwarding other information.
  • the first processing path 210 can send data to the user space or the kernel protocol stack.
  • the second processing path 220 and the first processing path 210 are independent of each other.
  • the second processing path 220 directly transmits data of a specific type conforming to the target data to the data processing module 203 by bypassing the kernel.
  • the second processing path 220 may transmit data to the user space of the operating system carried by the data processing module 203.
  • the way to bypass the kernel protocol stack is that the data passes through the raw socket instead of the standard socket.
  • the raw socket can send and receive data packets that have not passed through the kernel protocol stack, so that the second processing path 220 can directly enter user space without passing through the kernel protocol stack.
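  • As an illustration of the kernel bypass described above, the following minimal Python sketch (not part of the patent text; the interface name "eth0" and the classification rule are assumptions) opens a raw packet socket so frames reach user space without traversing the kernel TCP/IP stack, and diverts them to a first or second processing path.

```python
import socket

ETH_P_ALL = 0x0003  # receive frames of every Ethernet protocol

def first_processing_path(frame: bytes) -> None:
    """Predetermined handling (e.g. hand the frame to control logic / kernel stack). Stub."""
    pass

def second_processing_path(frame: bytes) -> None:
    """User-space multi-path pipeline described in the text. Stub."""
    pass

def is_target_type(frame: bytes) -> bool:
    # Placeholder rule: the patent classifies by frame structure and by whether the
    # source is a directly served client/server; here we only check for an IPv4 payload.
    return len(frame) > 14 and frame[12:14] == b"\x08\x00"

# Raw packet socket: frames are delivered to user space without traversing the
# kernel TCP/IP protocol stack (Linux, requires root). "eth0" is an assumed interface.
raw = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
raw.bind(("eth0", 0))

while True:
    frame = raw.recv(65535)
    (second_processing_path if is_target_type(frame) else first_processing_path)(frame)
```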
  • the data processing module 203 drives the user data in the second processing path 220 to be transmitted in at least two independent communication paths 500 provided by the at least one hardware interface module 202 by means of multi-stage continuous scheduling, thereby realizing multiple user terminals. Multi-path transmission between 100 and multiple servers 400.
  • the data processing module 203 allocates user data in the second processing path 220 in a multi-stage continuous scheduling manner.
  • the data in the second processing path 220 are respectively distributed to the interfaces 501 of at least two communication paths 500 independent of each other.
  • the present invention uses raw sockets in the second processing path 220 to process, in user space, the data streams sent by the clients 100 or the server 400, achieving kernel bypass so that the data of multiple clients 100 is handled at the user space level. The design of multi-path transmission is therefore entirely in user space and requires neither kernel modification nor changes to applications. Moreover, the multi-path packet scheduling logic is lifted into the user space of the application, so packets can be scheduled from a global perspective that combines dynamic network changes with application characteristics, optimizing the aggregate QoE. In addition, because multi-path transmission and packet scheduling are all implemented in user space, contextual data about network performance can easily be integrated and exploited, and the system can be extended with new packet scheduling strategies, which facilitates deployment and performance optimization.
  • the data processing module 203 transmits and schedules user data in the second processing path 220 in a separate network namespace.
  • with this arrangement, conflicts with the kernel configuration used by other programs can be avoided, and potential security issues can be mitigated.
  • disabling reverse path filtering on the proxy server provided in this embodiment allows the virtual Ethernet device used to forward data packets to the real network card to accept packets with any source IP generated by the relay device; the resulting security risk is isolated from normally running programs and managed by the data processing module 203 within its own network namespace.
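  • A minimal sketch of this isolation, assuming standard iproute2/sysctl tools; the namespace and interface names ("mproxy", "veth-host", "veth-mp") are placeholders, not names used by the patent.

```python
import subprocess

commands = [
    "ip netns add mproxy",                               # isolated network namespace
    "ip link add veth-host type veth peer name veth-mp", # veth pair used for forwarding
    "ip link set veth-mp netns mproxy",                  # move one end into the namespace
    "ip netns exec mproxy ip link set veth-mp up",
    # Disable reverse-path filtering inside the namespace so forwarded packets carrying
    # arbitrary source IPs are not dropped; the risk stays confined to the namespace.
    "ip netns exec mproxy sysctl -w net.ipv4.conf.all.rp_filter=0",
]
for cmd in commands:
    subprocess.run(cmd.split(), check=True)
```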
  • the deployment mode of the relay device provided in this embodiment is shown in FIG. 1.
  • the user terminal 100 may be connected to the communication receiving module 201 in a wireless or wired manner.
  • it may be connected to the communication receiving module 201 of the relay device of the present invention through a wireless access point (Wireless Access Point, AP).
  • the relay device provided in this embodiment accesses networks of different access technologies through at least one hardware interface module 202.
  • different core networks can be accessed through base stations (such as BTS, NodeB, eNodeB, etc.), and the core network can access shortwave communication networks, GPS, satellite communication networks, cellular mobile networks, PSTN, ISDN, Internet, etc.
  • the relay device provided in this embodiment can directly establish a multi-path data link with the server 400.
  • the multi-path data link can be established through a remote server 401 supporting multi-path transmission that establishes a communication session with the server 400.
  • a device with the same function as the data processing module 203 disclosed in this embodiment can also be selected, and the device can establish a session with the server 400, so that the relay device provided in this embodiment can establish a multi-path data link with the device.
  • the relay device provided in this embodiment can be deployed on the server 400 side, that is, the relay device provided in this embodiment can establish a session with the server 400.
  • the user terminal 100 can establish a multi-path data link with the relay device provided in this embodiment.
  • the relay device provided in this embodiment can implement different functions according to different deployment locations.
  • the relay device provided in this embodiment can be deployed on subways, trains, and high-speed trains, with the passengers' user equipment accessing it through an AP, so that the mobile networks of different operators on different frequency bands can be used to connect to the remote server 401 deployed on the server 400 side, or to a server 400 with multi-path transmission capability, to realize multi-path transmission.
  • a device with the same function as the data processing module 203 disclosed in this embodiment, or a remote server 401 with multi-path transmission capability can be deployed in the backbone network of a content delivery network (Content Delivery Network, CDN) provider.
  • Multi-channel transmission technology can be used to improve the efficiency of users' access to the CDN.
  • deploying the remote server 401, or a device with the same function as the data processing module 203 disclosed in this embodiment, inside a specific intranet can achieve an effect similar to a virtual private network (VPN): while accessing the data in the intranet from outside, the transmission efficiency gain provided by multi-path transmission can also be obtained.
  • the relay device provided in this embodiment can be deployed on the user side of a base station (such as BTS, NodeB, eNodeB), and can also be deployed on the 3G core network element SGSN (Serving GPRS Support Node) and GGSN (Gateway GPRS Support Node).
  • It can also be deployed on the Serving Gateway (SGW) and PDN Gateway (PGW) of the LTE (Long Term Evolution) Evolved Packet Core (EPC).
  • it can also be deployed on the User Plane Function (User Plane Function) of the 5G core network.
  • it can also be deployed on CPE (Customer Premise Equipment), that is, the customer premise equipment.
  • the communication path 500 between the relay device provided in this embodiment and the remote server 401, or a device or agent with multi-path transmission capability able to establish a session with the server 400, or a server 400 with multi-path transmission capability, may also be referred to as a pipe.
  • the pipe can be implemented flexibly in different ways; for example, a TCP connection can be used as a pipe.
  • the data processing module 203 inversely multiplexes the data in the second processing path 220 onto the multiple interfaces 501, so that the data in the second processing path 220 can be transmitted through the TCP socket on each pipe.
  • Both TCP payload and control data are encapsulated in the transport layer.
  • the TCP BBR congestion control algorithm can be selected, which can reduce end-to-end delay and packet loss.
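  • The sketch below illustrates opening such TCP pipes with BBR congestion control requested per socket; the addresses are placeholders and the snippet assumes Linux with the BBR module available (a sketch, not the patent's implementation).

```python
import socket

def open_pipe(remote: tuple, local_ip: str) -> socket.socket:
    """Open one 'pipe' (a plain TCP connection) toward the remote multi-path peer,
    bound to a specific local interface address, and request BBR congestion control."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))                                     # pin the pipe to one interface
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    s.connect(remote)
    return s

# e.g. one pipe per cellular interface of the relay device (placeholder addresses)
pipes = [open_pipe(("203.0.113.10", 9000), "192.0.2.11"),
         open_pipe(("203.0.113.10", 9000), "192.0.2.12")]
```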
  • inverse multiplexing refers to the data processing module 203 segmenting and encapsulating the data stream from the second processing path 220 into packets, and then distributing the packets to the pipeline.
  • Each message has a header, which contains the ID, length, and serial number of the application connection.
  • a remote server 401 supporting multi-path transmission, or an agent with multi-path transmission capability, or a server 400 with multi-path transmission capability, reassembles the inverse-multiplexed data stream by extracting the data and forwards it to the server 400 according to the connection ID.
  • the remote server 401, or a device or agent with multi-path transmission capability, or a server 400 with multi-path transmission capability, transmits, in user space and by inverse multiplexing, the data fed back to the user terminal 100 to the relay device of this embodiment.
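  • A compact sketch of this inverse multiplexing, using the header fields named in the text (application-connection ID, length, serial number); the header layout, MTU value and round-robin spreading are illustrative assumptions, not the patent's exact scheduler.

```python
import struct
from itertools import cycle

HEADER = struct.Struct("!IHI")   # connection ID, payload length, serial number

def inverse_multiplex(conn_id: int, stream: bytes, pipes, mtu: int = 1400) -> None:
    """Segment one user stream into packets, prepend the header, spread over the pipes."""
    next_pipe = cycle(pipes)                      # round-robin stand-in for the scheduler
    for seq, off in enumerate(range(0, len(stream), mtu)):
        payload = stream[off:off + mtu]
        next(next_pipe).sendall(HEADER.pack(conn_id, len(payload), seq) + payload)

def reassemble(packets):
    """Peer side: group by connection ID, order by serial number, strip headers."""
    streams = {}
    for pkt in packets:
        conn_id, length, seq = HEADER.unpack_from(pkt)
        streams.setdefault(conn_id, []).append((seq, pkt[HEADER.size:HEADER.size + length]))
    return {cid: b"".join(p for _, p in sorted(chunks)) for cid, chunks in streams.items()}
```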
  • the multi-stage continuous scheduling includes at least the first stage, the second stage and the third stage.
  • the first stage is used to establish the connection sequence of users, that is, the sending sequence of each user connection is scheduled to ensure the fairness of all users and maintain QoE.
  • the second stage is about queue insertion scheduling, that is, priority is given to the transmission of emergency packets and retransmission across the communication path 500 in the third stage.
  • the third stage is interface scheduling, that is, mapping the traffic of a user connection to different interfaces 501 (corresponding to different communication paths 500) according to relevant contextual information to improve end-to-end performance.
  • in the multi-stage continuous scheduling, the data processing module 203 drives the flows (data streams) in the second processing path 220 to be mapped to different interfaces through the successively coordinated scheduling of the first, second and third stages.
  • mapping to a different interface 501 means that the data stream is mapped to a different communication path 500.
  • the continuous in the multi-stage continuous scheduling manner may refer to the realization of the scheduling according to the first stage, the second stage, and the third stage in sequence.
  • Coordination can refer to the way the first, second and third stages are interconnected and proceed in sequence.
  • the first stage may be to determine the connection sequence of multiple clients 100 or servers 400.
  • the second stage can be used to achieve queue-jumping transmission.
  • the third stage may be used to map the data flow of the connection sequence determined in the first stage and the second stage to at least one interface 501.
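  • A compact skeleton of the three coordinated stages, assuming simplified data structures (dicts with priority fields); it illustrates only the ordering and hand-off between stages, not the patent's implementation.

```python
def stage1_connection_order(connections):
    """First stage: order client/server connections by first priority (group), then by
    second priority inside the group (smaller value = higher priority)."""
    return sorted(connections, key=lambda c: (c["first_prio"], c["second_prio"]))

def stage2_queue_jump(ordered_packets, urgent_packets):
    """Second stage: urgent packets (e.g. cross-path retransmissions triggered by the
    third stage) jump to the head of the send queue."""
    return list(urgent_packets) + list(ordered_packets)

def stage3_map_to_interfaces(packets, interfaces):
    """Third stage: map every scheduled packet to a communication path/interface
    (trivially round-robin here; the detailed rules follow in the text)."""
    return [(pkt, interfaces[i % len(interfaces)]) for i, pkt in enumerate(packets)]
```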
  • the data processing module 203 is configured to implement the first-stage scheduling in the following manner:
  • the data processing module 203 is configured to implement user flow scheduling in the following steps:
  • the multiple user streams are divided into at least two first groups with different first priorities based on a user-specified hierarchical manner, and the connections of the at least two first groups related to the user end 100 are scheduled in order of first priority.
  • the user-specified hierarchical manner is that the user can divide all user flows related to the connection of the user terminal 100 into several first groups in a customized manner according to the usage scenario.
  • Each first group has a different first priority.
  • the scheduling priority of the user connection can be divided according to the order in which the user terminal 100 requests to establish a session, the QoS of the user service, and the characteristics of the user data packet service.
  • the data processing module 203 divides all the connections of the client 100 into different first groups according to its self-defined division basis, each first group having a different first priority. The data processing module 203 always selects a user connection from the first group with the highest first priority first; if and only if the connections with higher first priority have no data packets to send is a connection from a first group with lower first priority selected.
  • the second priority of each connection with the user end 100 in the first group is allocated in a completely fair scheduling manner.
  • Each connection about the user end 100 in the first group is scheduled according to the order of the second priority.
  • the first group has a plurality of connections about the user end 100, and these connections have the same first priority.
  • the second priority refers to the scheduling priority of each connection within the same first group, i.e. within the same first priority.
  • the main idea of Completely Fair Scheduler (CFS) is to maintain fairness in scheduling time and allocate certain resources to each connection in the first group. When a certain connection in the first group runs for a long time, its second priority is lowered.
  • the data processing module 203 is configured so that new user flows appearing in a connection are assigned the highest second priority, and scheduling is implemented in a round-robin manner.
  • data streams that have not yet reached the first data threshold (for example, 330 KB) are assigned a higher priority, while a bias is also given to data streams that have existed for a long time so that they can complete their transmission as soon as possible.
  • the newly appearing data stream is assigned the highest second priority, that is, the new user stream preempts other data streams within the same first priority that have not completed transmission for a long time, so that QoE can be satisfied as soon as possible.
  • scheduling in a cyclic manner means that when the transmission time of the new user stream exceeds the first time threshold, or the transmission data volume of the new user stream exceeds the first data threshold, the second priority of the new user stream is reduced to lowest.
  • the second priority of the user flow is dynamically increased until the highest second priority is reached.
  • the first time threshold or the first data threshold can be customized by the user according to the application scenario.
  • the first time threshold can be selected as 3 seconds
  • the first data threshold can be selected as 330KB.
  • when the new user flow exceeds the first time threshold or the first data threshold, the new flow is considered a long flow and loses the right to preempt via the second priority.
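  • The following sketch, assuming the example thresholds given above (3 seconds, 330 KB) and an assumed numeric priority scale, illustrates the demote-then-fairly-boost rule; it is an illustration of the described behavior, not the patent's exact algorithm.

```python
import time

FIRST_TIME_THRESHOLD = 3.0        # seconds, example value given in the text
FIRST_DATA_THRESHOLD = 330_000    # bytes (~330 KB), example value given in the text
HIGHEST, LOWEST = 0, 7            # assumed encoding: smaller number = higher second priority

class UserFlow:
    def __init__(self, flow_id):
        self.flow_id = flow_id
        self.start = time.monotonic()
        self.sent_bytes = 0
        self.second_priority = HIGHEST   # a new flow preempts long-lived flows

def demote_if_long(flow: UserFlow) -> None:
    """Once a flow exceeds the time or data threshold it is treated as a long flow
    and loses its preemption right (second priority drops to the lowest)."""
    if (time.monotonic() - flow.start > FIRST_TIME_THRESHOLD
            or flow.sent_bytes > FIRST_DATA_THRESHOLD):
        flow.second_priority = LOWEST

def fair_boost(flows) -> None:
    """Completely-fair-style step: the longest-waiting unfinished flow has its second
    priority raised one level toward HIGHEST so that long flows are not starved."""
    waiting = [f for f in flows if f.second_priority > HIGHEST]
    if waiting:
        min(waiting, key=lambda f: f.start).second_priority -= 1
```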
  • when a client 100 in the first stage opens multiple connections to resources in the same server 400, the data processing module 203 groups those connections of the user terminal 100 into a second group.
  • the network resources acquired by the second group are reallocated to the other user flows in the second group whose sessions have not terminated, thereby reducing the transmission time of those user flows and balancing the overhead of connections to different servers 400 whose data flows differ greatly in size.
  • since the size of each user stream is not known a priori, network resources for the transmission of each user stream cannot be allocated in advance. Therefore, by grouping the multiple connections of a client 100 to resources in the same server 400 into a second group, if one connection in the group completes before the others, its resources are reallocated within the same group of connections, speeding up the remaining connections; that is, the QoE requirement of a single session is mapped onto the network resource pool and balanced there. This arrangement also improves fairness between different web pages: two web pages are allocated the same amount of network resources to load, regardless of their number of sub-streams.
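  • A tiny sketch of this "second group" reallocation; the per-connection rate budget (Mbit/s) is an assumption used only for illustration of the redistribution rule.

```python
def reallocate(group_rates: dict, finished_conn: str) -> dict:
    """When one connection in the group finishes, split its freed share evenly among
    the group's remaining, unterminated connections."""
    freed = group_rates.pop(finished_conn, 0.0)
    if group_rates:
        bonus = freed / len(group_rates)
        group_rates = {conn: rate + bonus for conn, rate in group_rates.items()}
    return group_rates

# three sub-flows loading one web page, 2 Mbit/s each; one completes early:
print(reallocate({"c1": 2.0, "c2": 2.0, "c3": 2.0}, "c1"))   # {'c2': 3.0, 'c3': 3.0}
```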
  • the data processing module 203 is configured to implement the third-stage scheduling as follows:
  • the connection sequence of the multiple client terminals 100 is determined based on the first stage and the second stage, and the user streams corresponding to the client terminals 100 are sequentially mapped to the at least one interface 501 provided by the at least one hardware interface module 202.
  • the user flow of each client 100 is divided into a first data flow before the second data threshold and a second data flow after the second data threshold based on the second data threshold.
  • the first data stream is bound to one of the multiple interfaces 501, and the second data stream is mapped to the remaining multiple interfaces 501.
  • binding the first data flow to one interface 501, so that all of it is mapped through that interface onto the corresponding communication path 500 for transmission, not only reduces out-of-order delay but also reduces the transmission delay incurred across heterogeneous communication paths 500; that is, even though the first data flow may be arranged to send packets through an interface 501 with a higher RTT and continue to be scheduled on the communication path 500 corresponding to that interface, this frees up capacity on the other, lower-RTT paths, which can benefit other connections.
  • the second data threshold is user editable. The user can set the size of the second data threshold according to the current application environment.
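  • a minimal Python sketch of this head/tail split is given below; SECOND_DATA_THRESHOLD, the helper signature, and the assumption that a flow object carries a sent_bytes counter (as in the scheduling sketch above) are illustrative, and pick_best stands for the utility-based interface chooser described in the following items:

```python
SECOND_DATA_THRESHOLD = 330 * 1024   # bytes, user-editable

def route_packet(flow, packet_len, pinned_iface, other_ifaces, pick_best):
    """Return the interface the next packet of `flow` should be sent on."""
    if flow.sent_bytes < SECOND_DATA_THRESHOLD:
        iface = pinned_iface                # first data stream: stay on one path
    else:
        iface = pick_best(other_ifaces)     # second data stream: multi-path
    flow.sent_bytes += packet_len
    return iface
```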
  • the data processing module 203 is configured to map the second data stream onto the remaining interfaces 501 as follows: the first scheduling behavior of multi-communication-path transmission over the remaining interfaces 501 and the second scheduling behavior of retransmission across communication paths are unified, so that each data packet of the second data stream is assigned the best interface 501 and receives the best quality of service.
  • the second data stream may be transmitted over multiple communication paths 500 to obtain a multiplexing gain, and timely retransmission across communication paths 500 helps reduce both packet loss and out-of-order delay.
  • unifying the first scheduling behavior of multi-communication-path transmission and the second scheduling behavior of cross-path retransmission can significantly improve bandwidth utilization.
  • the data processing module 203 abstracts the unification of the first scheduling behavior and the second scheduling behavior into a functional representation: the quality of an interface 501 can be expressed by the utility function f = RTT^(-1) + α·BW, where RTT and the bandwidth BW are current network performance parameters obtained by the program running on the data processing module 203.
  • α is a scale factor that normalizes RTT and BW to the same unit, and the expected utility of interface i is u_i = (1 - L_i)·f_i, where L_i denotes the loss rate on interface i.
  • formula (1) is the objective function that unifies the first scheduling behavior and the second scheduling behavior (a numeric sketch of this interface scoring follows).
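  • the following numeric Python sketch is a hedged illustration of this scoring; it only uses what the text states (f = RTT^(-1) + α·BW, u_i = (1 - L_i)·f_i, the residual loss of a redundant copy as the product of per-path loss rates, and F as the price the scheduler is willing to pay for losslessness), and the way these are combined below is an assumption rather than a reproduction of formula (1):

```python
def utility(rtt_s, bw_bps, loss, alpha=1e-9):
    """u_i = (1 - L_i) * f_i with f_i = 1/RTT_i + alpha * BW_i."""
    f = 1.0 / rtt_s + alpha * bw_bps
    return (1.0 - loss) * f

def choose_interfaces(ifaces, F=0.0):
    """ifaces: list of dicts with keys 'name', 'rtt', 'bw', 'loss'."""
    scored = sorted(ifaces,
                    key=lambda i: utility(i["rtt"], i["bw"], i["loss"]),
                    reverse=True)
    if F == 0.0:
        return [scored[0]["name"]]           # ordinary packet: single best path
    # Retransmitted packet: keep adding paths while the value F assigns to the
    # removed loss probability exceeds the bandwidth spent on one more copy.
    chosen = [scored[0]["name"]]
    residual_loss = scored[0]["loss"]
    for cand in scored[1:]:
        gain = F * residual_loss * (1.0 - cand["loss"])
        cost = utility(cand["rtt"], cand["bw"], 0.0)
        if gain <= cost:
            break
        chosen.append(cand["name"])
        residual_loss *= cand["loss"]
    return chosen

paths = [{"name": "wifi", "rtt": 0.015, "bw": 80e6, "loss": 0.10},
         {"name": "lte",  "rtt": 0.060, "bw": 30e6, "loss": 0.02}]
print(choose_interfaces(paths))            # -> ['wifi']
print(choose_interfaces(paths, F=1000.0))  # -> ['wifi', 'lte'] (cross-path copy)
```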
  • the data processing module 203 is configured to implement the second-stage scheduling as follows: when the second scheduling behavior of the third stage triggers the retransmission of a data packet, the retransmitted data packet is transmitted preferentially.
  • a backup interface 501 should be selected for the retransmission across communication paths 500.
  • by prioritizing cross-path retransmissions, in-order delivery of the data stream can be restored as soon as possible, which alleviates out-of-order delay and packet loss.
  • This embodiment discloses a relay method based on multi-path scheduling.
  • the method includes: deploying, on the routing path between the multiple clients 100 and the multiple servers 400, a first multi-path data processing module 200 that communicates with the multiple clients 100 and converges multiple access networks, and a second multi-path data processing module 300 that communicates with the servers 400.
  • the multiple wireless access networks include heterogeneous networks such as wireless wide area networks (Wireless Wide Area Network, WWAN), wireless metropolitan area networks (Wireless Metropolitan Area Network, WMAN), wireless local area networks (Wireless Local Area Network, WLAN), wireless personal area networks (Wireless Personal Area Network, WPAN), mobile ad hoc networks (Mobile Ad Hoc Network, MANET), mobile communication networks (such as 3G, 4G, 5G), satellite networks, and wireless sensor networks.
  • the mobile communication network includes mobile communication networks of different mobile communication technologies used by different operators.
  • convergence refers to constructing a complete multi-user multi-path transmission framework in a heterogeneous-network scenario in which multiple clients 100 are placed where multiple access technologies coexist, performing comprehensive and unified resource management over heterogeneous networks that converge access technologies such as GSM, WCDMA, LTE and Wi-Fi.
  • this provides multiple options for the access network of the client 100, so that the client 100 can select one or several of the networks offered by the heterogeneous network and switch from one network to another; the coexisting access technologies can thus be used to transmit data to the server 400 in parallel over multiple paths, providing aggregated bandwidth resources for the client 100.
  • the first multi-path data processing module 200 diverts the received data, based on the editable specific data type, to a first processing path 210 that processes data in a predetermined manner and to a second processing path 220 that is independent of the first processing path 210 and bypasses the kernel protocol stack.
  • the target data refers to user-defined data. Users can edit specific types of target data as needed.
  • the specific type of target data refers to the frame structure of the target data and whether the source address of the data is the user terminal 100 directly served by the relay device.
  • the first multi-path data processing module 200 classifies the received data based on the specific type of target data. Data of a specific type belonging to the target data is diverted to the first processing path 210.
  • the first processing path 210 processes data in a predetermined manner.
  • the predetermined manner refers to processing other than multi-path transmission, such as forwarding control data used for settlement, or forwarding other information.
  • the first processing path 210 can send data to the user space or the kernel protocol stack.
  • the second processing path 220 and the first processing path 210 are independent of each other.
  • the second processing path 220 directly transmits data of a specific type conforming to the target data to the user space of the operating system carried by the first multi-path data processing module 200 by bypassing the kernel protocol stack.
  • bypassing the kernel means that the data passes through a raw socket instead of a standard socket.
  • a raw socket can send and receive data packets that have not passed through the kernel protocol stack, so the second processing path 220 can deliver data directly into user space without traversing the kernel protocol stack (a raw-socket sketch follows).
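  • a minimal raw-socket sketch of this kernel bypass is shown below; a Linux AF_PACKET socket is one possible realization (it requires CAP_NET_RAW/root), and the interface name is illustrative:

```python
import socket

ETH_P_ALL = 0x0003   # capture all protocols

def open_bypass_socket(ifname):
    # A raw socket delivers frames that have not been consumed by the kernel
    # TCP/IP protocol state machines, so the relay process sees them directly
    # in user space and can apply its own multi-path scheduling.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    return s

if __name__ == "__main__":
    sock = open_bypass_socket("eth0")        # illustrative interface name
    frame, _addr = sock.recvfrom(65535)      # one raw Ethernet frame
    print(len(frame), "bytes received without going through the kernel TCP stack")
```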
  • the first multi-path data processing module 200 maps the data of the second processing path 220 onto a plurality of independent communication paths 500 in user space and communicates with the second multi-path data processing module 300 in an inverse-multiplexing manner; in this way, multi-path transmission between the multiple clients 100 and the multiple servers 400 is realized.
  • the first multi-path data processing module 200 can access multiple networks of different standards through the hardware interface module 202, thereby distributing the user data onto the multiple independent communication paths 500 corresponding to those networks.
  • the present invention uses raw sockets on the second processing path 220 to process, in user space, the data streams sent by the clients 100 or the servers 400, achieving kernel bypass, so that the data of multiple clients 100 is transmitted at the user-space level.
  • the design of multi-path transmission is completely in the user space, and there is no need to modify the kernel of the system, nor does it involve the modification of application programs.
  • the logic of multi-path packet scheduling is lifted into the application's user space, so that packets can be scheduled from a global perspective that combines the dynamic changes of the network with application specifications, optimizing the aggregate QoE.
  • the deployment mode of the first multi-path data processing module 200 and the second multi-path data processing module 300 provided in this embodiment is shown in FIG. 2.
  • Multiple user terminals 100 may be connected to the first multi-path data processing module 200 in a wireless or wired manner.
  • it may be connected to the first multi-path data processing module 200 through a wireless access point (Wireless Access Point, AP).
  • the first multi-path data processing module 200 can be integrated into the user equipment of the client 100, placed on a user-side router, intermediate device or CPE, or deployed on the user side as stand-alone hardware; it can then connect through base stations (such as BTS, NodeB, eNodeB, etc.) to different core networks, and the core networks in turn connect to shortwave communication networks, GPS, satellite communication networks, cellular mobile networks, the PSTN, ISDN, the Internet, and so on.
  • if the server 400 supports multi-path transmission, for example if an application supporting multi-path transmission is installed, the first multi-path data processing module 200 provided in this embodiment can directly establish a multi-path data link with the server 400.
  • if the server 400 does not support multi-path transmission, the multi-path data link may be established through the second multi-path data processing module 300, which establishes a communication session with the server 400.
  • when the client 100 itself has multi-path communication capability, the first multi-path data processing module 200 or the second multi-path data processing module 300 provided in this embodiment can be deployed on the server 400 side, so that the client 100 establishes a multi-path data link with that first multi-path data processing module 200 or second multi-path data processing module 300.
  • the first multi-path data processing module 200 provided in this embodiment can be deployed on the user side of a base station (for example, BTS, NodeB, eNodeB).
  • the first multi-path data processing module 200 and the second multi-path data processing module 300 can also be deployed on the 3G core network elements SGSN (Serving GPRS Support Node) and GGSN (Gateway GPRS Support Node).
  • it can also be deployed in a 4G core network, for example on the network elements of the all-IP packet core network EPC (Evolved Packet Core) of LTE (Long Term Evolution), such as the SGW (Serving Gateway) and the PGW (PDN Gateway).
  • it can also be deployed on the User Plane Function (UPF) of the 5G core network.
  • it can also be deployed on CPE (Customer Premise Equipment), i.e., customer premises equipment.
  • the first multi-path data processing module 200 and the second multi-path data processing module 300 provided in this embodiment can be combined with a virtual network device.
  • the application program opens a socket and sends the corresponding data packets toward the server 400.
  • the VPN service system forwards all data packets to virtual network devices by using Network Address Translation (NAT).
  • by reading the data of the virtual network device, the first multi-path data processing module 200 can obtain all the data packets forwarded to the virtual network device, so that data in the internal network can be accessed from the outside while the transmission-efficiency gain of multi-path transmission is also obtained (a TUN-device read sketch follows).
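  • a hedged sketch of reading such a virtual network device on Linux is shown below; the TUN/TAP ioctl constants are the standard Linux interface, while the device name and the surrounding relay logic are assumptions:

```python
import fcntl
import os
import struct

TUNSETIFF = 0x400454ca   # standard Linux TUN/TAP ioctl
IFF_TUN   = 0x0001       # IP-level tunnel device
IFF_NO_PI = 0x1000       # no extra packet-information header

def open_tun(name="tun0"):
    # Open the virtual network device that the VPN/NAT service forwards
    # application packets into, so the relay can read them in user space.
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TUN | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd

if __name__ == "__main__":
    fd = open_tun()                 # requires root; "tun0" is illustrative
    packet = os.read(fd, 65535)     # one IP packet redirected to the device
    print(len(packet), "bytes captured from the virtual network device")
```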
  • the communication path 500 may also be referred to as a pipe.
  • the pipeline can be implemented flexibly in different ways; for example, a TCP connection can be used as a pipe.
  • the data in the second processing path 220 can be inversely multiplexed onto the multiple interfaces 501, so that it is transmitted through a TCP socket on each pipe; both the TCP payload and the control data are encapsulated at the transport layer.
  • the TCP BBR congestion control algorithm can be selected, which can reduce end-to-end delay and packet loss.
  • inverse multiplexing refers to segmenting and encapsulating the data stream from the second processing path 220 into packets, and then distributing the packets to the pipeline.
  • each message has a header containing the ID of the application connection, the length, and the sequence number (see the header sketch below).
  • the second multi-path data processing module 300 reassembles the inversely multiplexed data stream by extracting data, and forwards it to the server 400 according to the connection ID.
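  • a Python sketch of such a per-message header is shown below; the field widths, byte order, and helper names are illustrative assumptions, not the wire format defined by the patent:

```python
import struct

HEADER = struct.Struct("!IHI")   # conn_id: uint32, length: uint16, seq: uint32

def encapsulate(conn_id, seq, payload):
    """Wrap one segment of an application connection for transmission on a pipe."""
    return HEADER.pack(conn_id, len(payload), seq) + payload

def decapsulate(message):
    """Recover (conn_id, seq, payload) so the stream can be reassembled."""
    conn_id, length, seq = HEADER.unpack_from(message)
    payload = message[HEADER.size:HEADER.size + length]
    return conn_id, seq, payload

# Example: segment one user flow and spread the messages over two pipes.
pipes = [[], []]
for seq, chunk in enumerate([b"GET /index.html", b" HTTP/1.1\r\n\r\n"]):
    pipes[seq % len(pipes)].append(encapsulate(conn_id=7, seq=seq, payload=chunk))
```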
  • the first multi-path data processing module 200 and/or the second multi-path data processing module 300 drives the data in the second processing path 220 by performing the first stage, the second stage, and the third stage in sequence.
  • the first stage is to determine the connection sequence of multiple client 100s.
  • the second stage is used to realize queue-jumping (preemptive) transmission.
  • the third stage is used to map the user flow whose connection sequence is determined in the first stage and the second stage to at least one communication path 500.
  • the first multi-path data processing module 200 and/or the second multi-path data processing module 300 drives the data in the second processing path 220 in a continuous coordinated scheduling manner in the first stage, the second stage, and the third stage.
  • the continuous scheduling of the first stage, the second stage, and the third stage adopted in this embodiment is the same as in Embodiment 1 and is not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Computer Security & Cryptography (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a relay device based on multi-path scheduling, comprising at least: at least one communication receiving module, at least one hardware interface module, and at least one data processing module. Based on the specific type of target data, the data processing module diverts the user data received via the communication receiving module to a first processing path that processes data in a predetermined manner and to a second processing path that is independent of the first processing path and bypasses the kernel protocol stack. By means of multi-stage continuous scheduling, the data processing module drives the user data in the second processing path to be transmitted over at least two mutually independent communication paths, thereby realizing multi-path transmission between the multiple clients and the multiple servers. With this arrangement, not only is the packet-scheduling logic lifted to the application layer, but path quality, QoE and fairness are also considered simultaneously in a multi-connection, multi-path and application-agnostic setting.

Description

一种基于多路径调度的中继设备 技术领域
本发明涉及移动通信技术领域,尤其涉及一种基于多路径调度的中继设备。
背景技术
在无线网络技术快速发展、多样化网络设备广泛部署以及多网络接口移动终端日益普及的背景下,基于多个无线接入技术(Radio Access Technology,RAT)的融合实现并行数据传输将会是未来通信领域的研究热点之一。一方面层出不穷的无线通信系统为用户提供了异构网络(Heterogeneous Network)环境,异构网络包括以Bluetooth和Zigbee为代表的无线个域网(Wireless Personal Area Network,WPAN)、以Wi-Fi和WiGig为代表的无线局域网(Wireless Local Area Network,WLAN)、以WiMAX为代表的无线城域网(Wireless Metropolitan Area Network,WMAN)、以3G、4G以及5G为代表的移动通信网络、卫星网络、Ad Hoc以及无线传感网络等。另一方面,现有的通信终端通常具有多种网络接口,比如笔记本电脑同时配置有有线网络接口和无线Wi-Fi网络适配器,智能手机既可以接入蜂窝网络(UMTS、3G、4G、5G等),又可以接入Wi-Fi网络。而且,网络运营商通常会在接入链路和回程链路配置备用链路和设备,在网络失效时发挥作用。由此以来,两个通信端点之间就有可能存在多条路径。自然而然,出现了同时使用多条路径的想法,以此提升端到端连接的稳健性和传输性能。这样的多路径连接可以均衡负载、动态切换,自动将业务从最拥塞、最易中断的路径上转到较好的路径上。
目前,基于蜂窝系统内的多种无线接入技术、多层次的覆盖技术、多链路之间的紧密融合和协同工作,可以给用户提供更多、更优的服务。其中,异构网络融合主要研究方向包括:a)通过多接入技术互操作实现多种制式网络之间的合理选择和业务切换;b)通过多种连接方式时间灵活可靠的网络,降低时间以及系统开销,提高能耗效率;c)多种制式、多层次和多连接的复杂网络实现统一的自组织、自优化,降低资本输出和运营成本。但当前使用的传输控制协议(Transmission Control Protocol,TCP)只能支持单路径传输数据,为了改变这种不利局面,2011年互联网工程任务组(Internet Engineering Task Force,IETF)发布了一种同时兼容TCP协议以及当前应用程序的多路径传输协议(Multi-Path Transmission Control Protocol,MPTCP)使得移动终端设备能够利用异构无线网络技术进行多路径数据传输,从而实现网络吞吐量最大化和减少传输延时的目标。
例如,公开号为WO2016144224A1的国际专利文献公开了一种使用多路径传输控制协议MPTCP代理的用于多路径业务聚合的方法和布置。在用独特因特网协议IP地址来配置的多路径传输控制协议MPTCP代理中执行具有MPTCP能力的无线装置与服务器之间中继数据的方法。包括:在MPTCP代理与无线装置之间建立MPTCP会话,MPTCP会话包括使用默认业务流元组被映射在对于无线装置的第一网络路径上的第一MPTCP子流;并且建立与服务器的TCP会话。该方法进一步包括基于使用包括配置用于MPTCP代理的独特IP地质的过滤业务流元组进行的而另外MPTCP子流到对于无线装置的第二网络路径的映射,在MPTCP代理与无线装置之间的 MPTCP会话中发起另外MPTCP子流。在无线装置与服务器之间中继数据,其中MPTCP代理与无线装置之间的数据在包括第一网络路径上第一MPTCP子流的和第二网络路径上另外MPTCP子流的MPTCP会话中被交换,并且其中MPTCP代理与服务器之间的数据在TCP会话中被交换。该专利解决的技术问题是:MPTCP依赖于内核改造,需要客户端和服务器同时支持MPTCP,因此网络上的每个主机都支持MPTCP的可能性非常低。为了在两个通信主机都不支持MPTCP的情况下受益于MPTCP的多路径传输,该专利采用在PGW(Packet Data Network Gateway)处部署MPTCP代理,将来自服务器的常规TCP协议转换成传输向用户设备(User Equipment,UE)的MPTCP协议。但是,该专利公开的方法和布置并没有解决如何在两个通信主机都不支持MPTCP的情况下受益于MPTCP的问题。具体而言,该专利公开的方法是在具有MPTCP能力的无线装置与服务器之间中继数据,其中MPTCP代理设置在具有MPTCP能力的无线装置与服务器之间,在具有MPTCP能力的无线装置与MPTCP代理之间实现多路径传输,MPTCP代理与服务器是正常的单路径TCP连接。这就要求两个通信的通信主机,其中一方必须支持MPTCP协议,而在另一个不支持MPTCP的通信主机侧设置MPTCP代理,实质上并没有解决在两个通信主机都不支持MPTCP协议的情况下实现多路径传输的问题。
例如,公开号为CN108075987A的中国专利文献公开了一种多路径数据传输方法及设备,其中,多路径代理客户端和多路径代理网关之间通过第一网际互联协议IP地址建立至少两个多路径数据子流,并进行多路径数据子流数据传输。所述多路径代理网关与所述多路径代理客户端待访问的应用服务器之间,依据多路径代理客户端和多路径代理网关之间建立至少两个多路径数据子流的第一IP地址,建立TCP链接并进行TCP数据传输。通过多路径代理客户端和多路径代理网关的代理,实现基于多路径代理客户端的IP地址信息进行MPTCP多路径数据传输。该专利在多路径代理客户端和多路径代理网关之间建立MPTCP多路径传输连接,其中多路径代理网关与服务器之间基于TCP进行数据传输。然而,这种代理网关或者代理客户端需要其内核支持MPTCP协议,软件编程难度大。而且尽管该专利解决了代理服务器的IP地址对于用户侧和网络侧均可见导致容易受到安全攻击以及网络侧并不能获取终端的IP地址导致不能实现对终端流量的统计与控制的问题,但是因为MPTCP存在于内核之中,故将网络健康状况数据发送至内核时需要大量的时钟开销,不仅会造成性能损失,而且依然会成为内核的潜在数据攻击节点和系统软件的安全漏洞。
例如,公开号为CN107979833A的中国专利文献公开了一种基于异构网络互联互通的多态信息融合智能终端,包括调度系统和硬件设备;所述调度系统包括服务生成系统、信号管理系统和信号接入系统;所述硬件设备包括PDT通信模块读卡口、LTE通信模块读卡口、北斗系统通信芯片读卡口、自组网通信模块读卡口、短波卫星通信模块读卡口、音量调控按键、网络选择按键、屏幕启动按键、触摸控制屏幕和内置天线。该专利公开的通信设备是通过基于当前网络的环境通过调度系统的统一调度,自适应选择当前最优的可通信网络。但是该专利采用的调度方法,只是单纯的根据不同制式网络信号的强弱判断来实现优先级调度,没有考虑到当前网络的往返时延(Round-trip Time,RTT)以及带宽(Band width,BW),然而事实上,在动态移动的状态下,信号强度与网络性能的关联性并不如预期的那样高,因此基于信号强度来将不同的数据流调度至不同的制式的网络是不可靠的。
因此,除了需要解决多路径传输设备能够与传统的中间件兼容而大规模部署以及安全性等问 题外,还需要考虑多路径传输产生的如何利用各种网络资源的优势,为用户提供可靠、高效便捷的网络服务,例如需要考虑连接质量、体验质量(Quality of Experience,QoE)以及服务质量(Quality of Service,QoS),实现业务在不同接入技术网络间动态分流和汇聚。通过多路径并行传输能够提高吞吐量,但是在多路径传输过程中,不同传输路径性能的差异(RTT和丢包率)会导致接收端发生数据包乱序,造成接收缓存阻塞,降低整体传输的吞吐量。此外,由于传输丢包的不可避免性,丢失数据选择哪条路径进行重传,也会直接影响接收端缓存拥塞的程度。
现有技术,例如文献[1]周冬梅.泛在网络多路径并行传输机制研究[D].西安电子科技大学,2014.公开了一种基于接收端缓存溢出概率保障的多路径传输分组调度方案,能够有效地减少接收端数据包乱序,降低接收端缓存的占用,其解决了现有分组分配策略仅考虑传输路径的平均时延,忽略了时延的随机时变特性的问题,即面对动态变化的路径时延,分组到达接收端会乱序,容易出现缓存溢出现象。
例如,公开号为CN107682258A的中国专利文献公开了一种基于虚拟化的多路径网络传输方法及装置,方法包括:控制节点实时获取数据传输网络的网络状态,并获得数据传输网络的拓扑结构;当第一目标数据通过数据传输网络进行传输时,根据数据传输网络的拓扑结构和网络状态,通过预设的路径选择算法,确定出第一目标数据的路径信息;控制节点向源节点发送路径信息,以使源节点根据路径信息,将第一目标数据拆分为多份第二目标数据后,通过多条路径向目标节点发送多份第二目标数据,目的节点接收到该多份第二目标数据后,再将多份第二目标数据恢复为第一目标数据,实现了第一目标数据的多路径传输,不需要采用复杂的跨层协作机制,降低了控制及调度过程的复杂度。
例如,文献[2]王燃,谢东亮.基于估计交付时间的多路径调度算法优化[J].2016.公开了一种基于估计交付时间的多路径调度算法优化,在结合网络编码与多路径传输协议的基础上,通过确认字符(Acknowledgement,ACK)路径的RTT和丢包率进行指数平滑计算,并以此为基础较准确地预估出个路径的编码达到时间,使得编码块能按序解码。
但是以上公开的多路径调度方法和装置,均没有考虑到移动通信设备应该优化聚合多个用户的体验质量QoE而不是针对任何单个用户的体验质量QoE。而且目前没有实现这一目标的实际解决方案,尤其是从应用程序无关的角度在兼容现有传统中间件的多路径传输框架下实现聚合多个用户的体验质量QoE的多路径调度。
此外,一方面由于对本领域技术人员的理解存在差异;另一方面由于发明人做出本发明时研究了大量文献和专利,但篇幅所限并未详细罗列所有的细节与内容,然而这绝非本发明不具备这些现有技术的特征,相反本发明已经具备现有技术的所有特征,而且申请人保留在背景技术中增加相关现有技术之权利。
发明内容
针对现有技术至不足,本发明提供一种基于多路径调度的中继设备,用于部署在多个用户端与多个服务器通信的路由路径上以融合多个无线接入网络,所述中继设备至少包括:至少一个通信接收模块,用于接收多个所述用户端的用户数据。至少一个硬件接口模块,用于接入提供包括至少两个彼此独立的通信路径的网络,从而通过至少两个所述通信路径分发所述用户数据。至少一个数据处理模块,用于将接收的用户数据映射至至少两个彼此独立的通信路径各自的接口上。所述数据处理模块基于目标数据的特定类型而将经由所述通信接收模块接收的用户数据分流至 按既定方式处理数据的第一处理路径和与所述第一处理路径彼此独立且绕过内核协议栈的第二处理路径。通过该设置方式,能够避免内核修改且与大规模部署的现有网络中间件兼容,从而在用户空间层面构建了多路径传输框架。同时使得数据处理模块可以在单独的网路命名空间工作,以避免与其他程序使用的内核配置冲突。
根据一种优选实施方式,所述第一处理路径将数据发送至用户空间或内核协议栈。所述第二处理路径以绕过内核协议栈的方式将符合目标数据特定类型的数据直接传输至所述数据处理模块上的用户空间。
根据一种优选实施方式,所述数据处理模块将所述第二处理路径内的数据反向复用至多个接口上。反向复用是指将数据流分段并封装成报文,并将报文分发到多个通信路径上。
本发明还提供一种基于多路径调度的中继设备,至少包括数据处理模块。所述数据处理模块配置为通过以下多阶段调度的方式传输至少一个用户端/服务器发送的数据:确定多个所述用户端/服务器的连接顺序的第一阶段;用于实现插队传输的第二阶段;用于将所述第一阶段和第二阶段确定的连接顺序的数据流映射至至少一个通信路径的第三阶段连续。所述数据处理模块通过多阶段连续调度的方式驱动所述第二处理路径内的用户数据在至少一个所述硬件接口模块接入的至少两个彼此独立的通信路径传输,从而实现多个所述用户端与多个所述服务器之间的多路径传输。通过该设置方式,不仅将分组调度的逻辑提升到应用层,还在多连接、多路径和应用程序无关的设置同时考虑路径质量、QoE和公平性。
根据一种优选实施方式,所述数据处理模块配置为按照如下方式实现第一阶段的调度:基于所述第二处理路径中的用户数据或者所述用户端/服务器传输的数据划分为始终优先调度的用于内部控制消息传递的控制流以及对应每个所述用户端/服务器的用户流;基于用户指定的分级方式将多个所述用户流划分为至少两个具有不同第一优先级的第一分组,并按照第一优先级的高低顺序调度至少两个所述第一分组关于所述用户端/服务器的连接。
根据一种优选实施方式,第一分组内的每个关于所述用户端/服务器的连接的第二优先级采用完全公平调度的方式分配,并根据所述第二优先级的高低顺序调度所述第一分组内每个关于所述用户端/服务器的连接。
根据一种优选实施方式,在所述第一阶段调度内的第一分组内的多个关于所述用户端/服务器的连接出现新用户流的情况下,所述数据处理模块为出现的新用户流分配最高的第二优先级,并且以循环的方式实现调度。所述循环的方式实现调度是指在所述新用户流的传输时间超过第一时间阈值,或所述新用户流的传输数据量超过第一数据阈值的情况下,所述新用户流的第二优先级降至最低,并根据所述完全公平调度的方式动态增加该用户流的第二优先级直至达到最高第二优先级。
根据一种优选实施方式,所述数据处理模块配置为按照如下方式实现所述第三阶段的调度:基于所述第一阶段和第二阶段确定多个所述用户端/服务器的连接顺序将对应所述用户端/服务器的用户流按顺序映射至至少一个接口上。基于第二数据阈值将每个所述用户端/服务器的用户流分为位于所述第二数据阈值前的第一数据流以及位于所述第二数据阈值后的第二数据流。所述第一数据流绑定至多个所述接口的其中一个,并且所述第二数据流映射至其余多个接口上。
根据一种优选实施方式,所述数据处理模块配置为按照如下方式将所述第二数据流映射至其余多个接口上:统一基于其余多个接口的多通信路径传输的第一调度行为以及跨通信路径重传的 第二调度行为,从而为所述第二数据流内的每个数据包分配最佳接口以提供最佳服务质量。
根据一种优选实施方式,所述数据处理模块配置为按照如下方式实现所述第二阶段的调度:基于所述第三阶段中重传数据包的第二调度行为的触发而优先传输所述重传数据包。所述重传数据包映射至在原传输接口之外的可用接口上,以实现跨通信路径重传。
根据一种优选实施方式,在所述第一阶段内的一个所述用户端打开了多个连接以连接同一所述服务器内资源的情况下,所述数据处理模块将连接相同所述服务器内资源的多个关于所述用户端的连接划分为第二分组。在所述第二分组内的至少一个关于所述用户端的连接的用户流终止会话的情况下,所述第二分组获取的网络资源在所述第二分组内重新分配于未终止会话的其他用户流,从而减少未终止会话的其他的用户流的传输时间以平衡连接数据流差异较大的不同服务器的开销。
本发明还提供一种基于多路径调度的中继设备,至少包括数据处理模块。所述数据处理模块配置为基于每个用户端/服务器连接的数据流分配的第二优先级的高低顺序进行传输/调度。在所述用户端/服务器的连接出现新数据流的情况下,为所述新数据流分配最高的第二优先级,并基于数据流存在的时间动态增加数据流的第二优先级。
根据一种优选实施方式,在所述用户端/服务器的连接出现新数据流的情况下,所述数据处理模块配置为所述新数据流分配最高的第二优先级,并在所述新数据流的传输时间超过第一时间阈值或所述新数据流的传输数据量超过第一数据阈值的情况下,将所述新数据流的第二优先级降至最低。
根据一种优选实施方式,在所述新数据流的第二优先级降至最低或者存在长时间未完成传输的数据流的情况下,所述数据处理模块配置为基于完全公平调度的方式增加第二优先级降至最低的数据流或者是长时间未完成传输的数据流的第二优先级。
根据一种优选实施方式,在所述第一阶段内的一个用户端打开了多个连接以连接同一服务器内资源的情况下,所述数据处理模块配置为将连接相同服务器内资源的多个关于所述用户端的连接划分为第二分组。在所述第二分组内的至少一个关于所述用户端的连接的数据流终止会话的情况下,所述第二分组获取的网络资源在所述第二分组内重新分配于未终止会话的其他数据流。
本发明还提供一种基于多路径调度的中继方法,所述方法包括:在多个用户端与多个服务器通信的路由路径上分别部署与多个所述用户端通信且融合多个接入网络的第一多路径数据处理模块以及与所述服务器通信的第二多路径数据处理模块。所述第一多路径数据处理模块基于可编辑的特定数据类型而将接收的数据分流至按既定方式处理数据的第一处理路径和与所述第一处理路径彼此独立且绕过内核协议栈的第二处理路径。在用户空间将所述第二处理路径的数据映射至多个彼此独立的通信路径且以反向复用的方式与所述第二多路径数据处理模块通信,从而实现多个所述用户端与多个所述服务器之间的多路径传输。
附图说明
图1是本发明中继设备的一个优选实施方式的模块示意图;
图2是本发明方法的一个优选实施方式的模块示意图。
附图标记列表
100:用户端   200:第一多路径数据处理模块  300:第二多路径数据处理模块
400:服务器   500:通信路径                201:通信接收模块
202:硬件接口模块  203:数据处理模块  210:第一处理路径
220:第二处理路径  401:远程服务器    501:接口
具体实施方式
下面结合附图1和附图2进行详细说明。首先,对本发明使用的部分术语进行定义。
用户端:可以是各种形式的用户设备(User Equipment,UE),可以是具有无线通信功能的手持设备、车载设备、可穿戴设备、计算设备或连接到无线调制解调器的其它处理设备,移动台(Mobile Station,MS),终端设备(Terminal Equipment)等等,也可以是用户设备上的应用程序,同时用户端也可以是相对于虚拟机而言的对实体计算机的称呼。用户端提供给虚拟机硬件环境,有时也称为寄主或宿主。
代理服务器:可以是指网络代理,提供一种特殊的网络服务,允许一个网络终端通过这个服务与另一个网络终端进行非直接的连接。
带宽聚合:多路径期望在多条可用路径上的并行传输可以成倍地增加网络可用带宽。若能使用这种方式实现有效的带宽聚合,多宿主设备将获得好的网络性能。
流:只能以事先规定好的顺序被读取一次的数据的一个序列,具体而言是完整的一次TCP/IP链接,包含多个数据包。
用户流:指关于用户的数据流。
分组(Packet):对应于TCP/IP的网络层,指的是TCP/IP协议通信传输的数据单位,也可以称为数据包,通常在调度中称为分组,指的是调度策略转发数据的粒度。
公平性:公平性要求正在进行多路径传输的流与同级别的流在瓶颈链路中享有相同的网络资源。如果端点通过多路径传输获得了更多资源,将可能引起网络拥塞崩溃的问题。
资源池(Resource Pool,RP):RP大部分指带宽,其改变了公平的概念,使得多路径传输在真实网络中的实施成为了可能。RP原则不是独立地处理每个路径资源,而是将多个路径视为一个大型资源池,进而改变对多个路径资源的调度。
路径饿死:路径解除拥塞之后不能继续正常传输数据。
用户空间(User Space):用户程序的运行空间。
内核协议栈(Kernel Space):操作系统内核的运行空间。
上下文(Context):上下文简单说来就是一个环境参数。环境参数是关于网络性能以及调度用户流时的传输时间和字节等参数。
网络命名空间:Linux内核提供了命名空间,命名空间将全局系统资源包装到一个抽象中,该抽象只会与命名空间中的进程绑定,从而提供资源隔离;网络命名空间为命名空间中的所有进程提供了全新的网络堆栈,包括网络接口、路由表等。
实施例1
本实施例公开了一种基于多路径调度的中继设备,用于部署在多个用户端100与多个服务器400通信的路由路径上以融合多个无线接入网络。优选地,多个无线接入网络是指包括无线广域网(Wireless Wide Area Network,WWAN)、无线城域网(Wireless Metropolitan Area Network,WMAN)、无线局域网(Wireless Local Area Network,WLAN)、无线个域网(Wireless Personal Area Network,WPAN)、移动自组织网络(Mobile Ad Hoc Network,MANET)、移动通信网络(例如3G、4G、5G等)、卫星网络、无线传感网络等多种接入技术、运行在不同的协议上的异 构网络。优选地,移动通信网络包括不同运营商使用的不同移动通信技术的移动通信网络。不同的移动通信技术包括全球移动通信(Global System for Mobile Communication,GSM)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、码分多址(Code Division Multiple Access,CDMA)、时分-同步码分多址(Time Division–Synchronous Code Division Multiple Access,TD-SCDMA)、第三代移动通信技术(The 3rd-Generation Mobile Communication,3G)长期演进(Long term Evolution,LTE)、符合第四代移动通信技术(The 4th-Generation Mobile Communication,4G)标准的LTE-Advanced、具备第四代移动通信技术特征的系统架构演进项目(System Architecture Evolution,SAE)以及第五代移动通信技术(The 5th-Generation Mobile Communication,5G)。
优选地,融合是指在多个用户端100置于多种接入技术共存的异构网络场景下,构建一种完整多用户多路径传输框架。优选地,融合还可以对融合GSM、WCDMA、LTE、Wi-Fi等多种接入技术的异构网络进行全面统一的资源管理,为用户端100的接入网络提供多种选择性,使得用户端100能够从异构网络提供的多种网络中选择其中一个或几个接入网络,并且可以从一个网络切换至另一个网络,从而利用多种并存的接入技术,在多个路径上并行传输数据至服务器400,为用户端100提供聚合的带宽资源。优选地,现有的多路径传输方案,例如MPTCP可以将多个网络中的至少一个作为主要传输路径,将其他路径作为备用传输路径。为了保持数据传输的连续性,在用户端100移动和网络动态变化的情况下,进行主要路径和备用路径之间的切换,保持传输连续性。同时,也可以将用户端100的数据流分发到多条通信路径500上,从而提高通信路径的利用率,以达到负载均衡和最大化聚合带宽的目标,而且还可以通过多个端到端的连接来提高其可靠性。但是,现有多路径传输调度方案大多都是采用简单的轮询(Round Robin,RR)算法,即当发送端发送分组时,分组被依次分配给每条路径进行传输。轮询算法是一种最简单、最直接的调度管理算法,能够防止路径饿死,但是由于不同路径带宽、时延、丢包率等网络性能方面的差异,轮询算法会导致接收端乱序分组的出现。这些乱序分组还在接收端进行重排序,并且消耗接收端的缓存资源和处理器资源。然而,这些资源恰恰是移动用户端100最缺乏的。优选地,传统的单路径传输模式中,接收端的乱序一般是由于分组丢失和路径的变化造成的,通过简单丢弃乱序分组或者缓存乱序分组,并及时重传丢失分组来降低缓存占用和重排序的开销。显然多路径传输的场景并不适用这个方法,因为分组来自不同的路径,各个路径在带宽、时延等方面的差异造成乱序时,缺失的分组可能正在端到端的发送中,或者对它的确认还尚未到达。因此简单丢弃乱序分组或者立即重发,并降低拥塞窗口的大小,都会对性能产生不利影响。优选地,现有技术可以使用显示拥塞通知区分拥塞和丢包,并使用前向纠错编码恢复错误,同时使用分组动态匹配算法,为每一个分组选择合适的路径。优选地,事实上,现有技术的调度方案大多采用基于吞吐量和公平性的结合来实现分组调度,但没有考虑到保持和改善用户的体验质量QoE,尤其是没有考虑到在人流量较大的应用场景下,如何优化聚合多用户的体验质量QoE的问题。而且,现有的大多数多路径传输方法或设备均是集中于传输层实现到端到端的多路径传输,依赖内核改造,需要用户端100和服务器400端同时支持,因此与现有网络中大规模部署的中间件不兼容。
综上,本发明针对上述问题,提供一种基于多路径调度的中继设备。该中继设备能够在用户空间建立完整的多用户多路径传输框架,并将分组调度逻辑提升到应用层,使得通信/中继设备不需要修改内核或应用程序而能够与现有的网络中间件兼容,并且由本实施例提供的中继设备其系 统设计是在用户空间层,因此具有高度扩展的能力,能够集成新的分组调度策略,从而有利于部署和性能优化。
优选地,如图1所示,中继设备至少包括:至少一个通信接收模块201,用于接收多个用户端100的用户数据;至少一个硬件接口模块202,用于接入提供包括至少两个彼此独立的通信路径500的网络,从而通过至少两个通信路径500分发用户数据;至少一个数据处理模块203,用于将接收的用户数据映射至至少两个彼此独立的通信路径500各自的接口501上。优选地,通信接收模块201可以是网关、信号接收器、信号接收电路,能够接收各种格式的数据,并对接收到的数据进行解析。优选地,硬件接口模块202可以是对应不同运营商带有SIM卡接口的基带模块,也可以是Wi-Fi模块、蓝牙模块、Zigbee模块等。优选地,数据处理模块203至少包括处理器和存储器。存储器用于存储指令。处理器被配置为通过执行存储器存储的指令。处理器可以是中央处理器(Central Processing Unit,CPU),通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application-Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。优选地,数据处理模块203承载有操作系统,例如Linux系统。优选地,通过硬件接口模块202能够接入异构网络。异构网络内包括多个无线接入网络。不同的无线接入网络可能使用不同的通信路径进行传输。而且相同的无线接入技术可能使用不同的通信路径进行通信。例如,中国移动的TD-LTE使用的频段为1880-1890MHz、2320-2370MHz以及2575-12635MHz。中国联通的TD-LTE使用的频段为2300-2320MHz以及2555-2575MHz。中国联通的FDD-LTE使用的频段为:上行频段1755-1765MHz,下行频段:1850-1860MHz。中国电信的TD-LTE使用的频段为2370-2390MHz、2635-2665MHz。中国电信的FDD-LTE使用的频段为:上行频段1765-1780MHz。
优选地,数据处理模块203基于目标数据的特定类型而将经由通信接收模块201接收的用户数据分流至第一处理路径210和第二处理路径220。优选地,目标数据是指用户定义的数据。用户可根据需要编辑目标数据的特定类型。优选地,目标数据的特定类型是指目标数据的帧结构以及该数据的源地址是否是中继设备直接服务的用户端100或服务器400。优选地,数据处理模块203基于目标数据的特定类型对通信接收模块201接收到的数据进行分类。属于目标数据的特定类型的数据分流至第一处理路径210。第一处理路径210按既定方式处理数据。既定方式是指除了多路径传输之外的处理方式,例如转发结算用的控制数据,或者是转发其他信息等。优选地,第一处理路径210可以将数据发送至用户空间或内核协议栈。优选地,第二处理路径220与第一处理路径210彼此独立。第二处理路径220以绕过内核的方式将符合目标数据的特定类型的数据直接传输至数据处理模块203。第二处理路径220可以将数据传输至数据处理模块203承载的操作系统的用户空间。绕过内核协议栈的方式是指数据通过原始套接字而不是标准套接字。原始套接字可以收发没有经过内核协议栈的数据包,从而第二处理路径220能够实现以不经过内核协议栈的方式直接进入用户空间。优选地,数据处理模块203通过多阶段连续调度的方式驱动第二处理路径220内的用户数据在至少一个硬件接口模块202提供的至少两个彼此独立的通信路径500传输,从而实现多个用户端100与多个服务器400之间的多路径传输。优选地,数据处理模块203采用多阶段连续调度的方式分配第二处理路径220内的用户数据。优选地,第二处理路径220内的数据分别分配至至少两个彼此独立的通信路径500的接口501上。 通过该设置方式,本发明通过第二处理路径220使用原始套接字从而用户空间中处理用户端100或服务器400发送的数据流以实现内核绕过,使得在用户空间层面传输多个用户端100的数据,从而多路径传输的设计完全在用户空间,不需要对系统的内核进行改造,也不涉及对应用程序的修改。而且将多路径分组调度的逻辑提升到应用于应用程序的用户空间,从而能够从全局的角度结合网络的动态变化和应用规范来调度分组实现聚合QoE的优化。此外,多路径传输以及分组调度全部都是在用户空间实现,不仅有利于集成和驱动有关网络性能的上下文数据,并且可以高度扩展以集成新的分组调度策略,有利于部署和性能优化。
优选地,数据处理模块203在单独的网络命名空间传输和调度第二处理路径220内的用户数据。通过该设置方式,能够避免与其他程序使用的内核配置冲突,并减轻潜在的安全问题。例如,在本实施例提供的在代理服务器上需要禁用反向路径过滤的操作,让用于将数据包转发到真实网卡的虚拟以太网设备接收本实施例提供的中继设备生成的任何源IP的数据包,由此产生的安全风险将与正常运行的程序隔离开来,并由数据处理模块203在自己的网络命名空间中管理。
优选地,本实施例提供的中继设备其部署方式如图1所示。用户端100可以通过无线或有线的方式与通信接收模块201连接。例如,可以通过无线访问接入点(Wireless Access Point,AP)与本发明的中继设备的通信接收模块201连接。本实施例提供的中继设备通过至少一个硬件接口模块202接入不同接入技术的网络。例如,可以通过基站(例如BTS、NodeB、eNodeB等)接入不同的核心网,并由核心网接入短波通信网、GPS、卫星通信网、蜂窝移动网、PSTN、ISDN、Internet等。在服务器400侧,如果服务器400支持多路径传输,例如安装有支持多路径传输的应用程序等,则本实施例提供的中继设备可以直接与服务器400建立多路径数据链接。在服务器400不支持多路径传输的情况下,可以通过与服务器400建立通信会话的支持多路径传输的远程服务器401建立多路径数据链接。优选地,还可以选择使用具有本实施例公开的数据处理模块203相同功能的设备,该设备能够与服务器400建立会话,从而本实施例提供的中继设备与该设备能够建立多路径数据链接。优选地,在用户端100具有多路径通信功能的情况下,可以将本实施例提供的中继设备部署在服务器400侧,即本实施例提供的中继设备能够与服务器400建立会话。而用户端100能够与本实施例提供的中继设备建立多路径数据链接。通过以上的部署方式,能够在用户端100、服务器400其中一方不支持多路径传输,或者是均不支持多路径传输的情况下实现用户端100和服务器400之间的多路径传输。
优选地,本实施例提供的中继设备根据部署的位置不同,可以实现不同的功能。例如在高速移动的地铁、火车、动车上,由于快速移动以及复杂地形的限制,乘客的移动网络处于频繁的网络中断状态。因此可以将本实施例提供的中继设备部署在地铁、火车、动车上,通过AP将乘客的用户设备接入,从而可以利用不同运营商的不同频段的移动网络与部署在服务器400侧的运程服务器401连接,或者是有多路径传输功能的服务器400连接,实现多路径传输。例如,在车站、机场等人流量较大的公共场所中,可以与为用户提供网络服务的设备连接,从而为大量的用户提供基于多路传输的网络访问功能。优选地,可以将具有本实施例公开的数据处理模块203相同功能的设备,或者是具有多路径传输能力的远程服务器401部署于内容分发网络(Content Delivery Network,CDN)提供商的骨干网络中,可以利用多路传输技术提高用户访问该CDN的效率。也可以将远程服务器401或具有本实施例公开的数据处理模块203相同功能的设备部署于特定的内网中,可以取得类似虚拟专用网络VPN的效果,即在外部访问该内网中数据的同时 也可以获得多路传输提供的传输效率增益。优选地,在实际部署中,本实施例提供的中继设备除了可以部署在基站(例如BTS、NodeB、eNodeB)的用户侧,也还可以部署在3G核心网元SGSN(Serving GPRS Support Node)以及GGSN(Gateway GPRS Support Node)上。优选地,还可以部署在4G核心网络中,例如,部署在LTE(Long Term Evolution)的全IP分组核心网EPC(Evolved Packet Core)的网元上,比如SGW(Serving Gateway)和PGW(PDN Gateway)。优选地,还可以部署在5G的核心网的用户面功能(User Plane Function)上。优选地,还可以部署在CPE(Customer Premise Equipment),即客户端前置设备上。
优选地,如图1所示,在本实施例提供的中继设备与远程服务器401,或者能够与服务器400建立会话的具有多路径传输能力的设备、代理,或者是具有多路径传输能力的服务器400之间具有多个彼此独立的可用通信路径500。优选地,通信路径500也可以称为管道。管道可以以不同的方式灵活实现。例如,可以使用TCP链接作为管道。优选地,数据处理模块203将第二处理路径220内的数据反向复用到多个接口501上,从而可以通过每个管道上的TCP套接字来传输第二处理路径220内的数据。TCP有效载荷和控制数据都被封装到传输层中。优选地,可以选择TCP BBR拥塞控制算法,能够减少端到端延迟和丢包。优选地,反向复用是指在数据处理模块203将来自第二处理路径220的数据流分段并封装成报文,然后将报文分发到管道上。每个报文都有一个报头,其中包含应用程序连接的ID、长度和序列号。优选地,在服务器400一侧,支持多路径传输的远程服务器401,或者具有多路径传输能力的代理,或者是具有多路径传输能力的服务器400通过提取数据来重新组合反向复用的数据流,并根据连接ID将其转发到服务器400上。优选地,远程服务器401,或者具有多路径传输能力的设备、代理,或者是具有多路径传输能力的服务器400将反馈至用户端100的数据在用户空间反向复用的方式传输至本实施例的中继设备。通过利用反向多路复用管道和使用TCP链接作为管道,能够通过消除连接建立的开销(例如,慢启动)立即使得短流受益,而且使得每个管道上的流量更加密集,从而带来更好的宽带利用率。
优选地,多阶段连续调度至少包括第一阶段、第二阶段和第三阶段。第一阶段用于建立关于用户的连接顺序,即每个用户连接的发送顺序进行调度,以确保所有用户的公平性并保持QoE。第二阶段是关于插队调度,即优先考虑传输紧急分组以及第三阶段中的跨通信路径500重传。第三阶段是接口调度,即根据上下文的相关信息,将一个用户连接的流量映射到不同的接口501上(对应不同的通信路径500)以提高端到端的性能。优选地,多阶段连续调度的方式为:数据处理模块203以依次第一阶段、第二阶段以及第三阶段连续协调调度的方式驱动第二处理路径220内的流(数据流)映射至不同的接口501上。优选地,映射至不同的接口501上表示数据流映射至不同的通信路径500上。多阶段连续调度的方式中的连续可以是指依次按照第一阶段、第二阶段和第三阶段实现调度。协调可以是指第一阶段、第二阶段和第三阶段你彼此之间的相互联系和连接顺序的递进。优选地,第一阶段可以是确定多个用户端100或服务器400的连接顺序。第二阶段可以是用于实现插队传输。第三阶段可以是用于将第一阶段和第二阶段确定的连接顺序的数据流映射至至少一个接口501上。。
优选地,数据处理模块203配置为按照如下方式实现所述第一阶段的调度:
基于第二处理路径220中的用户数据划分为始终优先调度的用于内部控制消息传递的控制流以及对应每个用户端100的用户流。优选地,数据处理模块203配置为如下步骤实现用户流 的调度:
基于用户指定的分级方式将多个用户流划分为至少两个不同第一优先级的第一分组,并按照第一优先级的高低顺序调度至少两个第一分组关于用户端100的连接。优选地,用户指定的分级方式是用户可根据使用场景义自定义的方式将所有关于用户端100的连接的用户流划分为几个第一分组。每个第一分组都具有不同的第一优先级。例如,可以根据用户端100请求建立会话的顺序、用户业务的QoS、用户数据分组业务的特性等来划分用户连接的调度优先级。通过以上设置,数据处理模块203根据其自定义的划分依据将所有用户端100的连接划分为不同第一分组。每个不同的第一分组具有不同的第一优先级。数据处理模块203始终选择第一优先级最高的第一分组内的用户连接作为第一个连接。当且仅当较高的第一优先级的连接没有要发送的数据包时,才会选择较低第一优先级的第一分组内的连接。
优选地,第一分组内的每个关于用户端100的连接的第二优先级采用完全公平调度的方式分配。根据第二优先级的高低顺序调度第一分组内每个关于用户端100的连接。优选地,第一分组内具有多个关于用户端100的连接,这些连接具有相同的第一优先级。第二优先级指的在相同第一优先级内的第一分组内的每个连接的调度优先级。优选地,完全公平调度(Completely Fair Scheduler,CFS)的主要思想是维护调度时间方面的公平性,为第一分组内的每个连接分配一定的资源。当第一分组内的某个连接运行时间较长时,其第二优先级降低。
根据一种优选实施方式,在第一阶段调度内第一分组内多个关于用户端100的连接出现新用户流的情况下,数据处理模块203配置为:为第一分组内所有用户端100的连接出现的新用户流分配最高的第二优先级,并且以循环的方式实现调度。优选地,对于某些应用程序,例如网页,尽快的获取前第一数据阈值(例如330KB)的数据可能会完成一半的用户请求,或者在某种程度上已经满足了QoE,因此应该为较短的数据流分配更高的优先级。另一方面,对于已经存在很长时间的数据流应当给予偏置以使其尽快地完成数据流的传输。因此对于相同的第一分组内的连接,为出现的新数据流分配最高的第二优先级,即新用户流将会抢占同一第一优先级内的其他长时间未完成传输的数据流的第二优先级,从而能够尽快地满足QoE。优选地,循环的方式实现调度是指在新用户流的传输时间超过第一时间阈值,或新用户流的传输数据量超过第一数据阈值的情况下,新用户流的第二优先级降至最低。根据完全公平调度的方式动态增加该用户流的第二优先级直至达到最高第二优先级。优选地,第一时间阈值或第一数据阈值可以用户根据应用的场景自定义。例如对于高铁、列车、火车以及车站这样的应用场景,用户端100大部分请求的Web网页,因此第一时间阈值可以选择为3秒,第一数据阈值可以选择为330KB。优选地,新用户流如果超过第一时间阈值或第一数据阈值则认为新流是长流,将失去抢占第二优先级的权限。通过以上设置方式,用户能够根据不同应用场景的要求以进度感知的方式,在满足公平性的前提下进一步优化QoE。
根据一种优选实施方式,在第一阶段内的一个用户端100打开了多个连接以连接同一服务器400内资源的情况下,数据处理模块203将连接相同服务器400内资源的多个关于该用户端100的连接划分为第二分组。在第二分组内的至少一个关于用户端100的连接的用户流终止会话的情况下,第二分组获取的网络资源在第二分组内重新分配于未终止会话的其他用户流,从而减少未终止会话的其他的用户流的传输时间以平衡连接数据流差异较大的不同服务器400的开销。优选地,由于我们没有将每个用户流的大小作为先验知识,因此我们无法预先为每个用户流 的传输分配网络资源。因此通过将连接相同服务器400内资源的多个关于该用户端100的连接划分为第二分组,如果组内的一个连接先于其他连接之前完成,则其资源将在同一组连接中重新分配,从而加速其余连接,即通过将单个会话的QoE要求映射到网络资源池以及完成平衡优化。通过该设置方式,还可以提高不同网页之间的公平性。两个网页,无论其子流数量如何,都被分配相同数量的网络资源来加载。同时,传输更多字节的流将动态分配更多带宽,这将有利于加快页面下载时间。因此,我们将优先级与同一页面的连接数相乘,这样既可以保持用户公平性又可以尽最大努力减少页面下载时间。
根据一种优选实施方式,数据处理模块203配置为按照如下方式实现所述第三阶段的调度:
基于第一阶段和第二阶段确定多个用户端100的连接顺序将对应用户端100的用户流按顺序映射至至少一个硬件接口模块202提供的至少一个接口501上。优选地,基于第二数据阈值将每个用户端100的用户流分为位于第二数据阈值前的第一数据流以及位于第二数据阈值后的第二数据流。第一数据流绑定至多个接口501的其中一个,并且第二数据流映射至其余多个接口501上。优选地,基于对于某些应用程序,例如Web网页,用户流的前330KB能够完成一半的用户请求的认知,通过第一数据流与接口501绑定,使得第一数据流全部经由该接口501映射到该接口501对应的通信路径500上进行传输,不仅能够减少乱序延迟,还能够减少异构路径下的跨通信路径500传输延迟,即尽管第一数据流被安排通过RRT较高的接口501发送数据包,继续在该接口501对应的通信路径500上进行调度,从而在其他低RTT的路径上腾出空间,以潜在地有益于其他连接。优选地,第二数据阈值是用户可编辑的。用户能够根据当前的应用环境而设定第二数据阈值的大小。
根据一种优选实施方式,数据处理模块203配置为按照如下方式将所述第二数据流映射至其余多个所述接口501上:统一基于其余多个接口501的多通信路径传输的第一调度行为以及跨通信路径重传的第二调度行为,从而为第二数据流内的每个数据包分配最佳接口501以提供最佳服务质量。优选地,对于第二数据流可以采用利用多个通信路径500传输获取多路复用的增益。通过及时的跨通信路径500重传能够平衡减少数据包的丢失和乱序延迟。优选地,通过统一多通信路径传输的第一调度行为以及跨通信路径重传的第二调度行为能够显著地提高宽带利用率。优选地,数据处理模块203将第一调度行为和第二调度行为的统一抽象为函数表示。优选地,可以使用效用函数f=RTT -1+α·BW来表示接口501质量,其中RTT和带宽BW可以是承载在数据处理模块203上的程序来获取的当前网络性能参数。α是将RTT和BW归一化为相同单位的比例因子。我们使用L i表示接口i上的丢失率,并且
以（原文公式图 appb-000001）表示该数据包的丢失率,其中S是选择的一组接口集合。那么（原文公式图 appb-000002）可以表示为接口i的丢包率的贡献率。而u_i=(1-L_i)·f_i是接口实用程序的期望值。为了在路径之间分配流,我们使用（原文公式图 appb-000003）,其中buf_i是TCP缓冲区总数据包的大小。此外,我们使用F来量化特定数据包的无损需求的重要性,或调度程序愿意支付的额外带宽成本,并在目标函数中添加（原文公式图 appb-000004）。我们的目标函数设计如下:
式(1):（原文公式图 appb-000005）, s.t. Q_i(Q_i-1)=0,（原文公式图 appb-000006）(式(2))。
优选地,式(1)为统一第一调度行为和第二调度行为的目标函数。式(2)为目标函数的受限条件。为了简单起见,令0/0=0。其中Q_i表示是否选择接口i,（原文公式图 appb-000007）表示该第二数据流分配的接口501数量。β为比例因子。式(1)和式(2)所表示的是:如果F为0,则只选择一个具有最大（原文公式图 appb-000008）的接口;如果（原文公式图 appb-000009）与u_i的数量级相同或更高时,数据处理模块203会选择多个接口501。因此,可以通过把F设置为0来传输普通的第二数据流,把F设置为较大值用于重传数据包。
根据一种优选实施方式,数据处理模块203配置为按照如下方式实现所述第二阶段的调度:基于第三阶段中重传数据包的第二调度行为的触发而优先传输重传数据包。优选地,对跨通信路径500重传应该选用备用的接口501。优选地,通过优先调度跨通信路径500的重传数据包,能够尽快恢复该数据流正常顺序的交付,从而缓解乱序延迟和丢包。
实施例2
本实施例公开了一种基于多路径调度的中继方法,方法包括:在多个用户端100与多个服务器400通信的路由路径上分别部署与多个用户端100通信且融合多个接入网络的第一多路径数据处理模块200以及与服务器400通信的第二多路径数据处理模块300。优选地,多个无线接入网络是指包括无线广域网(Wireless Wide Area Network,WWAN)、无线城域网(Wireless Metropolitan Area Network,WMAN)、无线局域网(Wireless Local Area Network,WLAN)、无线个域网(Wireless Personal Area Network,WPAN)、移动自组织网络(Mobile Ad Hoc Network,MANET)、移动通信网络(例如3G、4G、5G等)、卫星网络、无线传感网络等的异构网络。优选地,移动通信网络包括不同运营商使用的不同移动通信技术的移动通信网络。优选地,融合是指在多个用户端100置于多种接入技术共存的异构网络场景下,构建一种完整多用户多路径传输框架,对融合GSM、WCDMA、LTE、Wi-Fi等多种接入技术的异构网络进行全面统一的资源管理,为用户端100的接入网络提供多种选择性,使得用户端100能够从异构网络提供的多种网络中选择其中一个或几个接入网络,并且可以从一个网络切换至另一个网络,从而利用多种并存的接入技术,在多个路径上并行传输数据至服务器400,为用户端100提供聚合的带宽资源。
优选地,第一多路径数据处理模块200基于可编辑的特定数据类型而将接收的数据分流至按既定方式处理数据的第一处理路径210和与第一处理路径210彼此独立且绕过内核协议栈的第二处理路径220。优选地,目标数据是指用户定义的数据。用户可根据需要编辑目标数据的特定类型。优选地,目标数据的特定类型是指目标数据的帧结构以及该数据的源地址是否是中继设备直接服务的用户端100。优选地,第一多路径数据处理模块200基于目标数据的特定类型对接收到的数据进行分类。属于目标数据的特定类型的数据分流至第一处理路径210。第一处理路径210按既定方式处理数据。既定方式是指除了多路径传输之外的处理方式,例如转发结算用的控制数据,或者是转发其他信息等。优选地,第一处理路径210可以将数据发送至用户空间或内核协议栈。优选地,第二处理路径220与第一处理路径210彼此独立。第二处理路径220以绕过内核协议栈的方式将符合目标数据的特定类型的数据直接传输至第一多路径数据处理模块200承载的操作系统的用户空间。绕过内核的方式是指数据通过原始套接字而不是标准套接字,原始套接字可以收发没有经过内核协议栈的数据包,从而第二处理路径210能够实现以不经过内核协议栈的方式直接进入用户空间。
优选地,第一多路径数据处理模块200在用户空间将第二处理路径220的数据映射至多个彼此独立的通信路径500且以反向复用的方式与第二多路径数据处理模块300通信,从而实现多个用户端100与多个服务器400之间的多路径传输。优选地,第一多路径数据处理模块220 可以通过硬件接口模块20接入多个不同制式的网络,从而将用户数据分配至多个不同制式网络对应的多个彼此独立的通信路径500。通过该设置方式,本发明通过第二处理路径220使用原始套接字从而用户空间中处理用户端100或服务器400发送的数据流以实现内核绕过,使得在用户空间层面传输多个用户端100的数据,从而多路径传输的设计完全在用户空间,不需要对系统的内核进行改造,也不涉及对应用程序的修改。而且将多路径分组调度的逻辑提升到应用于应用程序的用户空间,从而能够从全局的角度结合网络的动态变化和应用规范来调度分组实现聚合QoE的优化。
优选地,本实施例提供的第一多路径数据处理模块200和第二多路径数据处理模块300的部署方式如图2所示。多个用户端100可以通过无线或有线的方式与第一多路径数据处理模块200连接。例如,可以通过无线访问接入点(Wireless Access Point,AP)与第一多路径数据处理模块200连接。第一多路径数据处理模块200可以集成至用户端100的用户设备上,或设置在用户侧的路由、中级以及CPE上,或独立的硬件方式设置在用户侧,从而可以通过基站(例如BTS、NodeB、eNodeB等)接入不同的核心网,并由核心网接入短波通信网、GPS、卫星通信网、蜂窝移动网、PSTN、ISDN、Internet等。在服务器400侧,如果服务器400支持多路径传输,例如安装有支持多路径传输的应用程序等,则本实施例提供的第一多路径数据处理模块200可以直接与服务器400建立多路径数据链接。在服务器400不支持多路径传输的情况下,可以通过与服务器400建立通信会话的第二多路径数据处理模块300建立多路径数据链接。优选地,在用户端100具有多路径通信功能的情况下,可以将本实施例提供的第一多路径数据处理模块200或第二多路径数据处理模块200部署在服务器400侧,从而而用户端100能够与一多路径数据处理模块200或第二多路径数据处理模块200建立多路径数据链接。通过以上的部署方式,能够在用户端100、服务器400其中一方不支持多路径传输,或者是均不支持多路径传输的情况下实现用户端100和服务器400之间的多路径传输。
优选地,在实际部署中,本实施例提供的第一多路径数据处理模块200可以部署在基站(例如BTS、NodeB、eNodeB)的用户侧。优选地,第一多路径数据处理模块200和第二多路径数据处理模块300也还可以部署在3G核心网元SGSN(Serving GPRS Support Node)以及GGSN(Gateway GPRS Support Node)上。优选地,还可以部署在4G核心网络中,例如,部署在LTE(Long Term Evolution)的全IP分组核心网EPC(Evolved Packet Core)的网元上,比如SGW(Serving Gateway)和PGW(PDN Gateway)。优选地,还可以部署在5G的核心网的用户面功能(User Plane Function)上。优选地,还可以部署在CPE(Customer Premise Equipment),即客户端前置设备上。
优选地,本实施例提供的第一多路径数据处理模块200和第二多路径数据处理模块300可以与虚拟网络设备结合。例如,应用程序调用端口(Socket),将相应的数据包发送至服务器400上。VPN服务系统通过使用网络地址转换(Network Address Translation,NAT),将所有的数据包转发到虚拟网络设备上。第二多路径数据处理模块200通过读取虚拟网络设备的数据,可以获取所有转发到虚拟网络设备上的数据包,从而不仅能能够在外部访问内网中的数据,同时也可获得多路径传输提供的传输效率增益。
优选地,如图2所示,在第一多路径数据处理模块200和第二多路径数据处理模块300之间具有多个彼此独立的可用通信路径500。优选地,通信路径500也可以称为管道。管道可以以 不同的方式灵活实现。例如,可以使用TCP链接作为管道。优选地,可以将第二处理路径220内的数据反向复用到多个接口501上,从而可以通过每个管道上的TCP套接字来传输第二处理路径220内的数据。TCP有效载荷和控制数据都被封装到传输层中。优选地,可以选择TCP BBR拥塞控制算法,能够减少端到端延迟和丢包。优选地,反向复用是指将来自第二处理路径220的数据流分段并封装成报文,然后将报文分发到管道上。每个报文都有一个报头,其中包含应用程序连接的ID、长度和序列号。优选地,在服务器400一侧,第二多路径数据处理模块300通过提取数据来重新组合反向复用的数据流,并根据连接ID将其转发到服务器400上。通过利用反向多路复用管道和使用TCP链接作为管道,能够通过消除连接建立的开销(例如,慢启动)立即使得短流受益,而且使得每个管道上的流量更加密集,从而带来更好的宽带利用率。
根据一种优选实施方式,第一多路径数据处理模块200和/或第二多路径数据处理模块300以依次进行第一阶段、第二阶段、第三阶段的方式驱动第二处理路径220内的数据。第一阶段为确定多个用户端100的连接顺序。第二阶段用于实现插队传输。第三阶段用于将第一阶段和第二阶段确定连接顺序的用户流映射至至少一个通信路径500。第一多路径数据处理模块200和/或第二多路径数据处理模块300以第一阶段、第二阶段和第三阶段连续协调调度的方式驱动第二处理路径220内的数据。优选地,本实施例采用的第一阶段、第二阶段和第三阶段连续调度的方式同实施例1的内容相同,重复的内容不再赘述。
需要注意的是,上述具体实施例是示例性的,本领域技术人员可以在本发明公开内容的启发下想出各种解决方案,而这些解决方案也都属于本发明的公开范围并落入本发明的保护范围之内。本领域技术人员应该明白,本发明说明书及其附图均为说明性而并非构成对权利要求的限制。本发明的保护范围由权利要求及其等同物限定。

Claims (15)

  1. 一种基于多路径调度的中继设备,用于部署在多个用户端(100)与多个服务器(400)通信的路由路径上以融合多个无线接入网络,所述中继设备至少包括:
    至少一个通信接收模块(201),用于接收多个所述用户端(100)的用户数据;
    至少一个硬件接口模块(202),用于接入提供包括至少两个彼此独立的通信路径(500)的网络,从而通过至少两个所述通信路径(500)分发所述用户数据;
    至少一个数据处理模块(203),用于将接收的用户数据映射至至少两个彼此独立的通信路径(500)各自的接口(501)上;
    其特征在于,
    所述数据处理模块(203)基于目标数据的特定类型而将经由所述通信接收模块(201)接收的用户数据分流至按既定方式处理数据的第一处理路径(210)和与所述第一处理路径(210)彼此独立且绕过内核协议栈的第二处理路径(220)。
  2. 根据权利要求1所述的中继设备,其特征在于,
    所述第一处理路径(210)将数据发送至用户空间或内核协议栈;
    所述第二处理路径(220)以绕过内核协议栈的方式将符合目标数据特定类型的数据直接传输至所述数据处理模块(203)上的用户空间。
  3. 根据权利要求1所述的中继设备,其特征在于,所述数据处理模块(203)将所述第二处理路径(220)内的数据反向复用至多个接口(501)上,其中,
    反向复用是指将数据流分段并封装成报文,并将报文分发到多个通信路径(500)上。
  4. 一种基于多路径调度的中继设备,至少包括数据处理模块(203),其特征在于,所述数据处理模块(203)配置为通过以下多阶段调度的方式传输至少一个用户端(100)/服务器(400)发送的数据:
    确定多个所述用户端(100)/服务器(400)的连接顺序的第一阶段;
    用于实现插队传输的第二阶段;
    用于将所述第一阶段和第二阶段确定的连接顺序的数据流映射至至少一个通信路径(500)的第三阶段连续。
  5. 根据权利要求1至4任一所述的中继设备,其特征在于,所述数据处理模块(203)配置为按照如下方式实现第一阶段的调度:
    基于所述第二处理路径(220)中的用户数据或者所述用户端(100)/服务器(400)传输的数据划分为始终优先调度的用于内部控制消息传递的控制流以及对应每个所述用户端(100)/服务器(400)的用户流;
    基于用户指定的分级方式将多个所述用户流划分为至少两个具有不同第一优先级的第一分组,并按照第一优先级的高低顺序调度至少两个所述第一分组关于所述用户端(100)/服务器(400)的连接。
  6. 根据权利要求5所述的中继设备,其特征在于,第一分组内的每个关于所述用户端(100)/服务器(400)的连接的第二优先级采用完全公平调度的方式分配,并根据所述第二优先级的高低顺序调度所述第一分组内每个关于所述用户端(100)/服务器(400)的连接。
  7. 根据权利要求6所述的中继设备,其特征在于,在所述第一阶段调度内的第一分组内的 多个关于所述用户端(100)/服务器(400)的连接出现新用户流的情况下,所述数据处理模块(203)为出现的新用户流分配最高的第二优先级,并且以循环的方式实现调度,其中,
    所述循环的方式实现调度是指在所述新用户流的传输时间超过第一时间阈值,或所述新用户流的传输数据量超过第一数据阈值的情况下,所述新用户流的第二优先级降至最低,并根据所述完全公平调度的方式动态增加该用户流的第二优先级直至达到最高第二优先级。
  8. 根据权利要求1至4任一所述的中继设备,其特征在于,所述数据处理模块(203)配置为按照如下方式实现所述第三阶段的调度:
    基于所述第一阶段和第二阶段确定多个所述用户端(100)/服务器(400)的连接顺序将对应所述用户端(100)/服务器(400)的用户流按顺序映射至至少一个接口(501)上,其中,
    基于第二数据阈值将每个所述用户端(100)/服务器(400)的用户流分为位于所述第二数据阈值前的第一数据流以及位于所述第二数据阈值后的第二数据流,其中,
    所述第一数据流绑定至多个所述接口(501)的其中一个,并且所述第二数据流映射至其余多个接口(501)上。
  9. 根据权利要求8所述的中继设备,其特征在于,所述数据处理模块(203)配置为按照如下方式将所述第二数据流映射至其余多个接口(501)上:
    统一基于其余多个接口(501)的多通信路径传输的第一调度行为以及跨通信路径重传的第二调度行为,从而为所述第二数据流内的每个数据包分配最佳接口以提供最佳服务质量。
  10. 根据权利要求9所述的中继设备,其特征在于,所述数据处理模块(203)配置为按照如下方式实现所述第二阶段的调度:
    基于所述第三阶段中重传数据包的第二调度行为的触发而优先传输所述重传数据包,其中,
    所述重传数据包映射至在原传输接口之外的可用接口上,以实现跨通信路径重传。
  11. 根据权利要求5所述的中继设备,其特征在于,在所述第一阶段内的一个所述用户端(100)打开了多个连接以连接同一所述服务器(400)内资源的情况下,所述数据处理模块(203)将连接相同所述服务器(400)内资源的多个关于所述用户端(100)的连接划分为第二分组,其中,
    在所述第二分组内的至少一个关于所述用户端(100)的连接的用户流终止会话的情况下,所述第二分组获取的网络资源在所述第二分组内重新分配于未终止会话的其他用户流,从而减少未终止会话的其他的用户流的传输时间以平衡连接数据流差异较大的不同服务器(400)的开销。
  12. 一种基于多路径调度的中继设备,至少包括数据处理模块(203),其特征在于,所述数据处理模块(203)配置为基于每个用户端(100)/服务器(400)连接的数据流分配的第二优先级的高低顺序进行传输/调度,其中,
    在所述用户端(100)/服务器(400)的连接出现新数据流的情况下,为所述新数据流分配最高的第二优先级,并基于数据流存在的时间动态增加数据流的第二优先级。
  13. 根据权利要求12所述的中继设备,其特征在于,在所述用户端(100)/服务器(400)的连接出现新数据流的情况下,所述数据处理模块(203)配置为所述新数据流分配最高的第二优先级,并在所述新数据流的传输时间超过第一时间阈值或所述新数据流的传输数据量超过第一数据阈值的情况下,将所述新数据流的第二优先级降至最低。
  14. 根据权利要求12所述的中继设备,其特征在于,在所述新数据流的第二优先级降至最 低或者存在长时间未完成传输的数据流的情况下,所述数据处理模块(203)配置为基于完全公平调度的方式增加第二优先级降至最低的数据流的第二优先级或者是长时间未完成传输的数据流的第二优先级。
  15. 根据权利要求12所述的中继设备,其特征在于,在所述第一阶段内的一个用户端(100)打开了多个连接以连接同一服务器(400)内资源的情况下,所述数据处理模块(203)配置为将连接相同服务器(400)内资源的多个关于所述用户端(100)的连接划分为第二分组,其中,
    在所述第二分组内的至少一个关于所述用户端(100)的连接的数据流终止会话的情况下,所述第二分组获取的网络资源在所述第二分组内重新分配于未终止会话的其他数据流。
PCT/CN2020/123085 2019-10-24 2020-10-23 一种基于多路径调度的中继设备 WO2021078232A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/754,925 US20230276483A1 (en) 2019-10-24 2020-10-23 Multipath-scheduling-based relay device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911015828.3A CN110730470B (zh) 2019-10-24 2019-10-24 一种融合多接入技术的移动通信设备
CN201911015828.3 2019-10-24

Publications (1)

Publication Number Publication Date
WO2021078232A1 true WO2021078232A1 (zh) 2021-04-29

Family

ID=69222996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123085 WO2021078232A1 (zh) 2019-10-24 2020-10-23 一种基于多路径调度的中继设备

Country Status (3)

Country Link
US (1) US20230276483A1 (zh)
CN (2) CN110730470B (zh)
WO (1) WO2021078232A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116419363A (zh) * 2023-05-31 2023-07-11 深圳开鸿数字产业发展有限公司 数据传输方法、通信设备和计算机可读存储介质
WO2023213281A1 (zh) * 2022-05-06 2023-11-09 阿里巴巴(中国)有限公司 多路径冗余传输方法、用户设备、网络实体及存储介质
WO2023226730A1 (zh) * 2022-05-23 2023-11-30 中兴通讯股份有限公司 数据传输方法及其装置、存储介质、程序产品

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110730470B (zh) * 2019-10-24 2020-10-27 北京大学 一种融合多接入技术的移动通信设备
CN111642022B (zh) * 2020-06-01 2022-07-15 重庆邮电大学 一种支持数据包聚合的工业无线网络确定性调度方法
CN112333091B (zh) * 2020-11-05 2022-11-11 中国联合网络通信集团有限公司 路由系统、方法及装置
CN112491906B (zh) * 2020-12-01 2022-07-15 中山职业技术学院 一种并行网络入侵检测系统及其控制方法
CN113055907B (zh) * 2021-02-02 2023-12-26 普联国际有限公司 一种组网方法、装置及网络设备
CN113038530B (zh) * 2021-03-22 2021-09-28 军事科学院系统工程研究院网络信息研究所 卫星移动通信系统QoS保障的分组业务高效传输方法
CN113225670B (zh) * 2021-03-31 2022-11-08 中国船舶重工集团公司第七二二研究所 一种基于分组交换的异构无线网络及其垂直切换方法
CN113473486B (zh) * 2021-07-13 2023-04-07 蒋溢 一种端边协同的网络覆盖增强系统及方法
CN113301632B (zh) * 2021-07-27 2021-11-02 南京中网卫星通信股份有限公司 一种融合网络多模终端的网络接入方法和装置
CN114567356B (zh) * 2022-03-08 2023-03-24 中电科思仪科技股份有限公司 一种mu-mimo空时数据流分配方法及系统
CN115549954B (zh) * 2022-08-16 2023-05-30 北京连山科技股份有限公司 一种基于异构的碎片化网络资源安全拼接通信系统
CN115550976B (zh) * 2022-08-17 2023-05-30 北京连山科技股份有限公司 基于移动公网的碎片化网络资源融合通信系统
CN117112237B (zh) * 2023-10-23 2023-12-29 湖南高至科技有限公司 基于纯实物多路并发的实时数据采集方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130194923A1 (en) * 2012-01-28 2013-08-01 International Business Machines Corporation Converged enhanced ethernet network
CN105262696A (zh) * 2015-09-01 2016-01-20 上海华为技术有限公司 一种多路径分流的方法及相关设备
CN105553872A (zh) * 2015-12-25 2016-05-04 浪潮(北京)电子信息产业有限公司 一种多路径数据流量负载均衡方法
CN106559840A (zh) * 2016-11-16 2017-04-05 北京邮电大学 一种多协议混合通信方法及系统
CN110650089A (zh) * 2019-10-24 2020-01-03 北京大学 一种支持多路聚合通信的中间设备
CN110730470A (zh) * 2019-10-24 2020-01-24 北京大学 一种融合多接入技术的移动通信设备

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6055564A (en) * 1998-03-11 2000-04-25 Hewlett Packard Company Admission control where priority indicator is used to discriminate between messages
US7023803B2 (en) * 2001-09-28 2006-04-04 Nokia Corporation Apparatus, and associated method, for selectably controlling packet data flow in a packet radio communication system
US7376879B2 (en) * 2001-10-19 2008-05-20 Interdigital Technology Corporation MAC architecture in wireless communication systems supporting H-ARQ
ATE539533T1 (de) * 2004-01-12 2012-01-15 Alcatel Lucent Verfahren und systeme zur ressourcenbündelung in einem kommunikationsnetz
US7475335B2 (en) * 2004-11-03 2009-01-06 International Business Machines Corporation Method for automatically and dynamically composing document management applications
CN100407707C (zh) * 2006-03-03 2008-07-30 北京大学 用于承载高速数据业务的网络系统及其传送方法
CN101141339A (zh) * 2007-02-09 2008-03-12 江苏怡丰通信设备有限公司 基于嵌入式SoC芯片的无线网络工业监控管理系统
CN102043676B (zh) * 2010-12-08 2012-09-05 北京航空航天大学 虚拟化数据中心调度方法及系统
CN104247331B (zh) * 2012-01-30 2017-06-16 瑞典爱立信有限公司 用于管理网络资源的方法和节点以及相应的系统和计算机程序
CN102611622B (zh) * 2012-02-28 2014-09-24 清华大学 一种弹性云计算平台下工作负载的调度方法
US10368255B2 (en) * 2017-07-25 2019-07-30 Time Warner Cable Enterprises Llc Methods and apparatus for client-based dynamic control of connections to co-existing radio access networks
WO2015133941A1 (en) * 2014-03-04 2015-09-11 Telefonaktiebolaget L M Ericsson (Publ) Method and wireless device for managing probe messages
WO2015168909A1 (zh) * 2014-05-08 2015-11-12 华为技术有限公司 数据传输控制节点、通信系统及数据传输管理方法
US10405234B2 (en) * 2014-12-23 2019-09-03 Hughes Network Systems, Llc Load balancing of committed information rate service sessions on TDMA inroute channels
CN105656798A (zh) * 2016-01-08 2016-06-08 努比亚技术有限公司 数据传输方法、装置、多通道路由方法及用户设备
CN105228179B (zh) * 2015-09-06 2018-09-25 深圳优克云联科技有限公司 历史网络kpi数据集合生成、用户识别卡分配方法、装置及系统
CN106559806B (zh) * 2015-09-25 2020-08-21 努比亚技术有限公司 双通道数据传输方法、装置、网络节点及移动终端
CN105791054B (zh) * 2016-04-22 2018-10-19 西安交通大学 一种基于流分类实现的自主可控可靠组播传输方法
CN107592186A (zh) * 2016-07-08 2018-01-16 电信科学技术研究院 一种进行数据重传的方法和设备
EP3297322B1 (en) * 2016-09-15 2022-06-22 Alcatel Lucent Multiple path transmission of data
CN106533981B (zh) * 2016-12-19 2019-05-03 北京邮电大学 一种基于多属性的大数据流量调度方法及装置
CN107529186A (zh) * 2017-07-13 2017-12-29 深圳天珑无线科技有限公司 多通道传输上行数据的方法及系统、客户端、服务器
CN107819701B (zh) * 2017-12-15 2020-06-02 中广热点云科技有限公司 流媒体应用快速缓冲的带宽分配方法及服务器
CN109343977B (zh) * 2018-09-21 2021-01-01 新华三技术有限公司成都分公司 跨态通信方法和通道驱动装置
CN109639595A (zh) * 2018-09-26 2019-04-16 北京云端智度科技有限公司 一种基于时延的cdn动态优先级调度算法
CN110275765B (zh) * 2019-06-14 2021-02-26 中国人民解放军国防科技大学 基于分支dag依赖的数据并行作业调度方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130194923A1 (en) * 2012-01-28 2013-08-01 International Business Machines Corporation Converged enhanced ethernet network
CN105262696A (zh) * 2015-09-01 2016-01-20 上海华为技术有限公司 一种多路径分流的方法及相关设备
CN105553872A (zh) * 2015-12-25 2016-05-04 浪潮(北京)电子信息产业有限公司 一种多路径数据流量负载均衡方法
CN106559840A (zh) * 2016-11-16 2017-04-05 北京邮电大学 一种多协议混合通信方法及系统
CN110650089A (zh) * 2019-10-24 2020-01-03 北京大学 一种支持多路聚合通信的中间设备
CN110730470A (zh) * 2019-10-24 2020-01-24 北京大学 一种融合多接入技术的移动通信设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023213281A1 (zh) * 2022-05-06 2023-11-09 阿里巴巴(中国)有限公司 多路径冗余传输方法、用户设备、网络实体及存储介质
WO2023226730A1 (zh) * 2022-05-23 2023-11-30 中兴通讯股份有限公司 数据传输方法及其装置、存储介质、程序产品
CN116419363A (zh) * 2023-05-31 2023-07-11 深圳开鸿数字产业发展有限公司 数据传输方法、通信设备和计算机可读存储介质
CN116419363B (zh) * 2023-05-31 2023-08-29 深圳开鸿数字产业发展有限公司 数据传输方法、通信设备和计算机可读存储介质

Also Published As

Publication number Publication date
US20230276483A1 (en) 2023-08-31
CN112243253A (zh) 2021-01-19
CN112243253B (zh) 2022-07-08
CN110730470B (zh) 2020-10-27
CN110730470A (zh) 2020-01-24

Similar Documents

Publication Publication Date Title
WO2021078232A1 (zh) 一种基于多路径调度的中继设备
US11696202B2 (en) Communication method, base station, terminal device, and system
US10721754B2 (en) Data transmission method and apparatus
KR101517471B1 (ko) 다중경로 전송을 이용한 협력적 대역폭 애그리게이션
JP2022095657A (ja) ロングタームエボリューション通信システムのためのマルチテクノロジアグリゲーションアーキテクチャ
US11096109B2 (en) Methods and network elements for multi-connectivity control
WO2020073971A1 (zh) 用于无线回传网络的数据传输方法和装置
CN110635988B (zh) 一种用于多路径传输的数据转发方法及设备
CN110035449B (zh) 一种数据量报告的发送方法和装置
CN108307450A (zh) 一种数据传输方法、装置和系统
JP6504608B2 (ja) 通信装置及びその制御方法並びにプログラム、並びに通信システム
CN108390746A (zh) 无线通信方法、用户设备、接入网设备和网络系统
WO2021078231A1 (zh) 基于位置感知的网络中间设备
Kucera et al. Latency as a service: Enabling reliable data delivery over multiple unreliable wireless links
KR20120065867A (ko) 지연-대역폭의 곱이 큰 네트워크를 위한 단대단 에너지 절약 기법을 사용한 다중경로 tcp 네트워크
CN110730479B (zh) 一种用于多路径通信的方法及装置
CN210899219U (zh) 基于多路径传输的网络中间设备及多路径传输网络架构
De Schepper et al. ORCHESTRA: Supercharging wireless backhaul networks through multi-technology management
WO2021078233A1 (zh) 一种多路径传输设备及架构
AU2008267742B2 (en) Method of communicating a data stream over a communication network
EP4080836B1 (en) System and method for multipath transmission
Fahmi et al. Understanding MPTCP in Multi-WAN Routers: Measurements and System Design.
US20240056885A1 (en) Multi-access traffic management
RU2782866C2 (ru) Архитектура с агрегированием технологий для систем связи стандарта долгосрочного развития
Fahmi et al. BOOST: Transport-Layer Multi-Connectivity Solution for Multi-Wan Routers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20879697

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20879697

Country of ref document: EP

Kind code of ref document: A1