WO2022160143A1 - Bandwidth adjustment method, apparatus, and system - Google Patents

Bandwidth adjustment method, apparatus, and system

Info

Publication number
WO2022160143A1
WO2022160143A1 PCT/CN2021/074009 CN2021074009W WO2022160143A1 WO 2022160143 A1 WO2022160143 A1 WO 2022160143A1 CN 2021074009 W CN2021074009 W CN 2021074009W WO 2022160143 A1 WO2022160143 A1 WO 2022160143A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
delay
data stream
network node
degree
Prior art date
Application number
PCT/CN2021/074009
Other languages
English (en)
French (fr)
Inventor
唐德智
杨光宇
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2021/074009 priority Critical patent/WO2022160143A1/zh
Priority to CN202180091669.4A priority patent/CN116982304A/zh
Publication of WO2022160143A1 publication Critical patent/WO2022160143A1/zh


Definitions

  • the present application relates to the field of communication technologies, and in particular, to a bandwidth adjustment method, apparatus, and system.
  • DCN data center network
  • a distributed application refers to a working mode in which application programs are distributed on different computers and jointly complete a task through a network. Such applications exchange a large amount of traffic between the computers of a data center network, so realizing load balancing in the data center network is of great significance for improving network bandwidth utilization.
  • the sending end device can divide the data to be sent into multiple data streams according to a fixed granularity (such as 16KB) and send the multiple data streams to the receiving end device through multiple transmission paths. This avoids the situation in which a single transmission path carries a heavy load while other transmission paths are idle or lightly loaded, thereby achieving load balancing. However, because the network environment of each transmission path is different, the transmission time required by the data stream on each transmission path differs, so the multiple data streams may arrive at the receiving end device out of order; that is, some data streams arrive at the receiving end device with a delay. The receiving end device can solve this problem by reordering the out-of-order data streams in a buffer.
  • the present application provides a bandwidth adjustment method, apparatus, and system, which are used to reduce the computing pressure and the buffering pressure of the receiving end device for reordering multiple data streams.
  • the present application provides a bandwidth adjustment method, which may be performed by a first network node, or performed by a component of the first network node (eg, a chip or a chip system, etc.).
  • the first network node may receive multiple data streams from the second network node through multiple transmission paths, where the data of each of the multiple data streams occupies a different position in the ordering of the first data. The multiple data streams may include a first data stream and a second data stream, the data of the first data stream is ordered in the first data before the data of the second data stream, the data of the first data stream and the data of the second data stream are adjacent in the first data, and each data stream includes at least one data packet.
  • the first network node may record the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream; if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, the first network node determines the delay degree of the first data.
  • the first network node may then send a first message to the second network node, where the first message includes information about the delay degree and indication information instructing the second network node to adjust bandwidth resources.
  • the first network node receives the multiple data streams of the first data through the multiple transmission paths, that is, the first data is transmitted over multiple transmission paths. This avoids the problem that transmitting the first data over a single transmission path places a heavy load on that path while other transmission paths are idle or lightly loaded, thereby achieving load balancing.
  • the first network node may count the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream.
  • the first network node can determine the degree of delay of the first data.
  • the delay degree can reflect the network environment. For example, the greater the delay degree, the worse the network environment.
  • the first network node can send the delay degree to the second network node, so that the second network node can adjust the bandwidth resources according to the delay degree.
  • the second network node can appropriately reduce the bandwidth resources of each transmission path to adapt to a poor network environment, thereby reducing the number of data streams that arrive at the first network node with a delay, reducing the computing pressure and buffering pressure on the first network node for reordering the multiple data streams, and improving overall network performance.
  • the greater the delay degree, the fewer bandwidth resources are allocated to each transmission path among the adjusted multiple transmission paths.
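  • as a minimal sketch of this relationship, assuming a hypothetical table of scaling factors (the concrete mapping is not specified in this application), the adjustment could look like the following:

```python
# Sketch only: "the greater the delay degree, the fewer bandwidth resources per
# transmission path". The scaling factors below are assumptions for illustration.

BANDWIDTH_SCALE = {0: 1.0, 1: 0.75, 2: 0.5, 3: 0.25}  # assumed per-degree factors

def adjust_path_bandwidth(bandwidth_per_path, delay_degree):
    """Return the adjusted per-path bandwidth for the reported delay degree."""
    scale = BANDWIDTH_SCALE.get(delay_degree, BANDWIDTH_SCALE[3])
    return {path: bw * scale for path, bw in bandwidth_per_path.items()}

# Example: three transmission paths at 100 Mbit/s each, delay degree 2 reported.
print(adjust_path_bandwidth({"path1": 100.0, "path2": 100.0, "path3": 100.0}, 2))
# {'path1': 50.0, 'path2': 50.0, 'path3': 50.0}
```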
  • the first network node determines the delay degree of the first data, which may be: the first network node determines a delay duration, where the delay duration is the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream; and the first network node determines the delay degree according to the delay duration, where the longer the delay duration, the greater the delay degree.
  • the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, that is, the first data stream arrives at the first network node with a delay.
  • the first network node may compute the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream to obtain the delay duration, and then determine the delay degree according to the delay duration.
  • the delay duration reflects the network environment, so the delay degree determined from the delay duration also reflects the network environment. Adjusting bandwidth resources appropriately can reduce the number of data streams that arrive at the first network node with a delay and/or the delay duration of such data streams.
  • when the number of delay durations is multiple, the method may further include: the first network node performs an averaging operation on the multiple delay durations to obtain the average delay duration of the first data; and determining the delay degree according to the delay duration may be: the first network node determines the delay degree according to the average delay duration, where the longer the average delay duration, the greater the delay degree.
  • the first network node can perform an averaging operation (such as a mean value operation) on the multiple delay durations to obtain the average delay duration, and then determine the delay degree according to the average delay duration. Since the average delay duration reflects the network environment as a whole, the delay degree determined from it also reflects the network environment as a whole; for example, the longer the average delay duration, the greater the delay degree and the worse the network environment.
  • the second network node appropriately adjusts the bandwidth resources according to the delay degree determined from the average delay duration, which can reduce the number of data streams that arrive at the first network node with a delay and/or the delay duration of such data streams.
  • the method may further include: the first network node determines a first resource, where the first resource is the buffer resource occupied when reordering the multiple data streams; and the first network node determines the delay degree according to the first resource, where the more first resources are occupied, the greater the delay degree.
  • the first network node can store the multiple data streams in the buffer and reorder them according to the ordering of the data of each data stream in the first data, so as to obtain the first data.
  • the first network node may count the buffer resources (ie, the first resources) occupied when the multiple data streams are reordered, and determine the degree of delay according to the first resources.
  • for example, the second data stream arrives at the first network node before the first data stream and is stored in the buffer.
  • the first network node needs to wait for the first data stream to arrive before it can reorder the first data stream and the second data stream, deliver the reordered first data stream and second data stream to the upper layer (such as the application layer), and then release the buffer resources occupied by the two data streams. If the delay of the first data stream is long, the second data stream occupies buffer resources for a long period of time; since the buffer resources of the first network node are limited, this increases the buffering pressure on the first network node.
  • the first network node sends the delay degree determined by the first resource to the second network node, so that the second network node can adjust bandwidth resources based on the delay degree determined by the first resource to reduce the buffer pressure of the first network node.
  • the method may further include: the first network node reorders the multiple data streams according to the sequence of the data of each data stream in the first data to obtain the first data.
  • each transmission path is located in a different network environment, so the transmission duration of the data stream on each transmission path may be different, which makes the multiple data streams arrive at the first network node out of sequence.
  • the first network node may reorder the multiple data streams arriving out of sequence according to the sorting of the data of each data stream in the first data, so as to obtain the multiple data streams in the correct order, that is, obtain the first data.
  • the first message may be, but is not limited to, a NACK message or the like.
  • the present application provides a bandwidth adjustment method, which may be performed by a second network node, or performed by a component of the second network node (eg, a chip or a chip system, etc.).
  • the second network node may send multiple data streams to the first network node through multiple transmission paths, where the data of each of the multiple data streams occupies a different position in the ordering of the first data. The multiple data streams include a first data stream and a second data stream, the data of the first data stream is ordered in the first data before the data of the second data stream, the data of the first data stream and the data of the second data stream are adjacent in the first data, and each data stream includes at least one data packet. The second network node receives a first message from the first network node, where the first message includes information about the delay degree and indication information instructing the second network node to adjust bandwidth resources, the delay degree is determined according to the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream. The second network node then adjusts the bandwidth resources according to the delay degree.
  • the greater the delay degree, the fewer bandwidth resources are allocated to each transmission path among the adjusted multiple transmission paths.
  • the degree of delay may be determined according to a delay duration, where the delay duration is the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream, and the longer the delay duration, the greater the delay degree.
  • the delay degree may be determined according to an average delay duration obtained by performing an average operation on multiple delay durations, wherein the larger the average delay duration, the greater the delay degree.
  • the delay degree may be determined according to a first resource, where the first resource is a buffer resource occupied when reordering multiple data streams, wherein the more first resources, the greater the delay degree.
  • the method may further include: the second network node dividing the first data into the data of the plurality of data streams.
  • the present application provides a bandwidth adjustment apparatus, which may include a processing module and a communication module, and these modules may perform corresponding functions performed by the first network node in any of the design examples of the first aspect.
  • the communication module may be configured to receive multiple data streams from the second network node through multiple transmission paths, where the data of each of the multiple data streams occupies a different position in the ordering of the first data, the multiple data streams include a first data stream and a second data stream, the data of the first data stream is ordered in the first data before the data of the second data stream, the data of the first data stream and the data of the second data stream are adjacent in the first data, and each data stream includes at least one data packet.
  • the processing module can be configured to record the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and, if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, determine the delay degree of the first data;
  • the communication module may also be configured to send a first message to the second network node, where the first message includes information about the degree of delay and indication information instructing the second network node to adjust bandwidth resources.
  • the present application provides a bandwidth adjustment device, which may include a processing module and a communication module, and these modules may perform corresponding functions performed by the second network node in any of the design examples of the second aspect.
  • the communication module may be configured to send multiple data streams to the first network node through multiple transmission paths, where the data of each of the multiple data streams occupies a different position in the ordering of the first data, the multiple data streams include a first data stream and a second data stream, the data of the first data stream is ordered in the first data before the data of the second data stream, the data of the first data stream and the data of the second data stream are adjacent in the first data, and each data stream includes at least one data packet; and to receive a first message from the first network node, where the first message includes information about the delay degree and indication information instructing the second network node to adjust bandwidth resources, the delay degree is determined according to the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream.
  • the processing module can be used to adjust the bandwidth resource according to the delay degree.
  • the present application provides a communication device, where the communication device may be a first network node or a device in the first network node.
  • the communication apparatus may include a processor for implementing the method performed by the first network node in the above-mentioned first aspect.
  • the communication apparatus may also include memory for storing program instructions and data.
  • the memory is coupled to the processor, and the processor can call and execute program instructions stored in the memory, so as to implement any one of the methods performed by the first network node in the first aspect above.
  • the communication apparatus may further include a transceiver, and the transceiver is used for the communication apparatus to communicate with other devices.
  • the present application provides a communication device, where the communication device may be a second network node or a device in the second network node.
  • the communication apparatus may include a processor for implementing the method performed by the second network node in the above-mentioned second aspect.
  • the communication apparatus may also include memory for storing program instructions and data. The memory is coupled to the processor, and the processor can call and execute program instructions stored in the memory, so as to implement any one of the methods performed by the second network node in the second aspect above.
  • the communication apparatus may further include a transceiver, and the transceiver is used for the communication apparatus to communicate with other devices.
  • the present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed, the method performed by the first network node in any of the design examples of the first aspect can be implemented.
  • the present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed, the method performed by the second network node in any of the design examples of the second aspect can be implemented.
  • the present application provides a computer program product, comprising instructions, when the instructions are run on a computer, the computer executes the method performed by the first network node in any one of the design examples of the first aspect.
  • the present application provides a computer program product, comprising instructions, when the instructions are run on a computer, the computer executes the method performed by the second network node in any of the design examples of the second aspect.
  • the present application further provides a chip system, where the chip system includes a processor, and may further include a memory, for implementing the method executed by the first network node in any one of the design examples of the first aspect.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • the present application further provides a chip system, where the chip system includes a processor, and may further include a memory, for implementing the method executed by the second network node in any one of the design examples of the second aspect.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • the present application further provides a communication system, which includes the communication device in any of the design examples of the fifth aspect and the communication device in any of the design examples of the sixth aspect.
  • FIG. 1 is a schematic diagram of a communication system to which an embodiment of the application is applied;
  • FIG. 2 is a schematic flowchart of a bandwidth adjustment method provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of determining a delay duration provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of data stream 1 arriving at network node 1 before data stream 2 according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the data stream 2 arriving at the network node 1 before the data stream 1 according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of multiple data streams provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of reordering multiple data streams according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a bandwidth adjustment apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a communication apparatus according to an embodiment of the present application.
  • the present application provides a bandwidth adjustment method, apparatus, and system, which are used to reduce the computing pressure and the buffering pressure of a receiving end device for reordering multiple data streams, and can improve network performance.
  • the method and the apparatus are based on the same technical concept. Since the principles by which the method and the apparatus solve the problem are similar, their implementations may refer to each other, and repeated descriptions are omitted.
  • FIG. 1 is a schematic diagram of a communication system to which this embodiment of the present application is applied.
  • the communication system 100 may include network nodes and intermediate nodes.
  • the communication system 100 may include a plurality of network nodes, and FIG. 1 takes the network node 1 and the network node 2 as an example.
  • the communication system 100 may include multiple intermediate nodes, and FIG. 1 takes the intermediate node 1 , the intermediate node 2 , the intermediate node 3 , and the intermediate node 4 as an example.
  • the network node 1 and the network node 2 may communicate through one or more intermediate nodes.
  • multiple transmission paths may exist between network node 1 and network node 2, wherein each transmission path may include one or more intermediate nodes.
  • FIG. 1 takes transmission path 1, transmission path 2 and transmission path 3 as examples.
  • network node 1 can send data to network node 2 through transmission path 1, that is, network node 1 first sends the data to intermediate node 1, and intermediate node 1 then forwards the data to network node 2.
  • alternatively, network node 1 can send data to network node 2 through transmission path 2, that is, network node 1 first sends the data to intermediate node 2, intermediate node 2 forwards the received data to intermediate node 3, and intermediate node 3 then forwards it to network node 2; or network node 1 can send the data to network node 2 through transmission path 3, that is, network node 1 first sends the data to intermediate node 4, and intermediate node 4 then forwards the data to network node 2.
  • the network node may be a device with data and/or message sending and receiving functions.
  • the network node may be a server, a network device, or a terminal device.
  • the intermediate node may be a device with data and/or message forwarding function.
  • the intermediate node may be a router, a switch, or a relay terminal device, or the like. It can be understood that, the embodiment of the present application does not limit the specific form of the network node or the intermediate node.
  • the server may be a device with data processing functions, for example, the server may be a server in a data center network, or a component in a server in a data center network, such as a processor, a chip, or a chip system.
  • the network device may be an access network device, such as a radio access network (radio access network, RAN) device, which is a device that provides a wireless communication function for a terminal device.
  • the access network equipment includes, but is not limited to, a next-generation base station (generation NodeB, gNB) in the fifth generation (5th generation, 5G) system, an evolved NodeB (evolved NodeB, eNB), a remote radio unit (RRU), and the like.
  • the access network device may also be a wireless controller, a central unit (central unit, CU), and/or a distributed unit (distributed unit, DU) in a cloud radio access network (cloud radio access network, CRAN) scenario, or the network device may be a relay station, a vehicle-mounted device, a network device in a future evolved network, or the like.
  • a terminal device may be referred to as a terminal for short, such as user equipment, which is a device with a wireless transceiver function.
  • Terminal equipment can be deployed on land (for example, in vehicles, on high-speed rail, or in motor vehicles); it can also be deployed on water (for example, on ships); and it can also be deployed in the air (for example, on aircraft, drones, balloons, and satellites).
  • the terminal equipment can be a mobile phone, a tablet computer, a computer with wireless transceiver function, virtual reality terminal equipment, augmented reality terminal equipment, wireless terminal equipment in industrial control, wireless terminal equipment in unmanned driving, and wireless terminal equipment in telemedicine.
  • the communication system shown in FIG. 1 is taken as an example, and does not constitute a limitation on the communication system to which the method provided by the embodiment of the present application is applicable.
  • the methods provided in the embodiments of the present application are applicable to various communication systems in which there are multiple transmission paths between the transmitting end device and the receiving end, such as a data center network system.
  • the embodiments of the present application may also be applied to communication systems of various types and standards, for example: the 5th generation (5G) communication system, the long term evolution (LTE) communication system, vehicle to everything (V2X), long term evolution-vehicle (LTE-V), vehicle to vehicle (V2V), the Internet of Vehicles, machine type communications (MTC), the Internet of Things (IoT), long term evolution-machine to machine (LTE-M), machine to machine (M2M), and enterprise LTE discrete spectrum aggregation (eLTE-DSA) systems, which are not limited in the embodiments of the present application.
  • the distributed application refers to a working mode in which application programs are distributed on different computers and jointly complete a task through a network.
  • the traffic inside the data center network exhibits the characteristics of being "mainly east-west" and "a mixture of elephant flows and mouse flows".
  • the elephant flow can refer to a data flow with a relatively large number of bytes and high throughput requirements, such as a transmission control protocol (TCP) flow or a remote direct memory access over converged Ethernet (RoCE) flow.
  • a mouse stream can refer to a data stream with a relatively small amount of bytes and low latency requirements. Therefore, realizing data center network load balancing is of great significance to improve network bandwidth utilization and meet the needs of large and small flows.
  • the data center network Clos topology provides multiple parallel transmission paths with equal hops between any two network nodes.
  • Equal-cost multipath (ECMP) technology is widely used for load balancing in data center networks.
  • In complex traffic scenarios, however, ECMP technology does not achieve the expected effect. This is because the processing granularity of ECMP is at the flow level: the result of each load balancing decision is applied to all data packets of a data flow, and the large number of bytes in an elephant flow means that when the elephant flow is transmitted in the network, a transmission path may be occupied for a long time.
  • to address this, the sending end device can divide the elephant flow into multiple data flows according to a fixed granularity (such as 16KB, 32KB, or 64KB) and send the multiple data streams to the receiving end device through the multiple transmission paths between them. In this way, it avoids the problem that transmitting the elephant flow over a single transmission path places a heavy load on that path while other transmission paths are idle or lightly loaded, so as to achieve a better load balancing effect. However, since the network environment of each transmission path is different, for example, the transmission duration required by the data stream on each transmission path is different, the multiple data streams may arrive at the receiving end device out of sequence.
  • for example, the sending end device sends three data streams to the receiving end device in the order of data stream 1, data stream 2, and data stream 3, but the order in which the three data streams arrive at the receiving end device is data stream 1, data stream 3, and data stream 2.
  • the receiving device can solve this problem by reordering multiple data streams arriving out of order in the buffer.
  • a data stream that was sent later but arrives earlier needs to wait in the buffer for the data stream that was sent earlier.
  • the sending end device sends the three data streams to the receiving end device according to the sending order of data stream 1, data stream 2 and data stream 3.
  • data stream 3 arrives at the receiving end device before data stream 2, that is, data stream 2 arrives with a delay, so data stream 3 needs to wait in the buffer for the arrival of data stream 2.
  • the embodiments of the present application provide a bandwidth adjustment method, apparatus, and system, which can reduce the computing pressure and cache pressure of the receiving end device for reordering multiple data streams, and improve network performance.
  • FIG. 2 is a schematic flowchart of a bandwidth adjustment method provided by an embodiment of the present application, and the method may be applied to the communication system 100 shown in FIG. 1 .
  • the first network node may be the network node 1 shown in FIG. 1
  • the second network node may be the network node 2 shown in FIG. 1 .
  • the steps performed by the first network node may also be performed by a module or component of the first network node, for example, by a chip or chip system in the first network node; similarly, the steps performed by the second network node may also be performed by a module or component of the second network node, for example, by a chip or chip system in the second network node.
  • the method provided by this embodiment will be described in detail below with reference to the schematic flowchart of the bandwidth adjustment method shown in FIG. 2 .
  • S201 The network node 2 divides the first data into data of multiple data streams.
  • the first data may be data to be sent by network node 2 to network node 1.
  • for example, after network node 2 receives a message in which network node 1 requests the first data, network node 2 sends the first data to network node 1; alternatively, network node 2 can also actively push the first data to network node 1. It can be understood that network node 2 can send data to network node 1, and can also send messages to network node 1.
  • the embodiment of the present application does not limit the specific form of the content sent by the network node 2 to the network node 1.
  • the first data is used as an example to introduce the embodiment of the present application.
  • the multiple data streams may include a first data stream and a second data stream, the data of the first data stream and the data of the second data stream are adjacent in the first data, and the data of the first data stream is ordered in the first data before the data of the second data stream.
  • the order of the data of the first data stream in the first data can be understood as the sending order of the first data stream in the multiple data streams.
  • the sequence of the data of the second data stream in the first data can be understood as the sending sequence of the second data stream in the multiple data streams.
  • multiple data streams include data stream 1 and data stream 2.
  • the data of data stream 1 is ordered in the first data before the data of data stream 2; it can be understood that, when sending the first data, network node 2 sends data stream 1 before sending data stream 2.
  • the sequence of the data of the data stream in the first data is represented by the number of the data stream hereinafter.
  • the numbers of multiple data streams may be denoted as #0 to #i, where i is a positive integer greater than 0.
  • the data of data stream #0 is the data ordered first in the first data, data stream #0 is the first data stream to be sent among the multiple data streams, and the data of data stream #0 is adjacent to the data of data stream #1 in the first data; the data of data stream #i is the data ordered last in the first data, data stream #i is the last data stream to be sent among the multiple data streams, and the data of data stream #i is adjacent to the data of data stream #(i-1) in the first data. It should be understood that the specific form of the ordering of the data of each data stream in the first data is not limited in this embodiment of the present application.
  • the data of the first data stream and the data of the second data stream are adjacent in the first data, that is, the number of the first data stream is adjacent to the number of the second data stream; the data of the first data stream is ordered in the first data before the data of the second data stream, that is, the number of the first data stream is smaller than the number of the second data stream. For example, the first data stream is data stream #0 and the second data stream is data stream #1.
  • each of the multiple data streams may include one or more data packets.
  • the number of the data stream may be included in each of the one or more data packets.
  • in this way, the network node 1 can determine the ordering of the data of each data stream in the first data, so that it can reorder the received multiple data streams based on the ordering of the data of each data stream in the first data to obtain the first data.
  • the network node 2 may divide the first data into data of multiple data streams according to a fixed granularity. Wherein, the data of each of the multiple data streams has a different order in the first data, and the multiple data streams can be used to determine the first data.
  • the fixed granularity may be defined by a protocol, or agreed between the network node 2 and the network node 1, or defined by a system, etc., which is not limited in this embodiment of the present application.
  • the fixed granularity may be 16 kilobytes (kilobyte, KB), 32KB, 64KB, or the like.
  • for example, the size of the first data is 48KB, the fixed granularity is 16KB, and there are three transmission paths between network node 1 and network node 2. After network node 2 divides the first data according to the fixed granularity, three data streams can be obtained, and network node 2 sends the first data to network node 1 in the manner of transmitting one 16KB data stream per transmission path, that is, each transmission path transmits one data stream. For another example, the size of the first data is 64KB, the fixed granularity is 16KB, and there are three transmission paths between network node 1 and network node 2. After network node 2 divides the first data according to the fixed granularity, four data streams can be obtained, and network node 2 sends the first data to network node 1 in the manner that one transmission path transmits two 16KB data streams and each of the remaining two transmission paths transmits one 16KB data stream, that is, one of the three transmission paths transmits two data streams and each of the remaining two transmission paths transmits one data stream.
  • in this way, the network node 2 divides the first data evenly into the data of multiple data streams according to a fixed granularity and sends the multiple data streams to network node 1 through multiple transmission paths, instead of transmitting the first data to network node 1 through a single transmission path. This avoids the problem that a single transmission path is overloaded while other transmission paths are idle or lightly loaded, thereby achieving load balancing.
  • the network node 2 may divide the first data into data of multiple data streams according to the number of transmission paths existing between itself and the network node 1 .
  • the data of each of the multiple data streams has a different order in the first data, and the multiple data streams can be used to determine the first data.
  • the network node 2 may determine the number of transmission paths with the network node 1 according to the network topology information.
  • the network topology information may be configured by a base station, or configured by a system, or the like.
  • the size of the first data is 48KB, and there are three transmission paths between network node 1 and network node 2. After network node 2 divides the first data according to the number of transmission paths, three data streams can be obtained. Each transfer path can transfer 16KB of data.
  • for another example, the size of the first data is 64KB, and there are three transmission paths between network node 1 and network node 2. After network node 2 divides the first data according to the number of transmission paths, three data streams can be obtained, and each transmission path transfers approximately 21.3KB of data (64KB divided equally among the three paths).
  • in this way, the network node 2 can, according to the number of transmission paths, divide the data amount corresponding to the first data equally among the transmission paths for transmission, so that the multiple transmission paths transmit data of the same size. This avoids the situation in which a single transmission path is overloaded while other transmission paths are idle or lightly loaded, thereby achieving load balancing.
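  • as a minimal sketch of the two division manners above (fixed granularity, or equal division by the number of transmission paths), assuming a simple byte-level representation of the first data (the helper names are illustrative, not from the application):

```python
# Sketch: divide the first data into numbered data streams, either at a fixed
# granularity (e.g. 16KB) or equally according to the number of transmission paths.
# Streams #0, #1, ... follow the ordering of the data in the first data.

def split_fixed_granularity(first_data: bytes, granularity: int = 16 * 1024):
    """Divide first_data into chunks of `granularity` bytes."""
    return [first_data[i:i + granularity]
            for i in range(0, len(first_data), granularity)]

def split_by_path_count(first_data: bytes, num_paths: int):
    """Divide first_data into num_paths chunks of (almost) equal size."""
    chunk = -(-len(first_data) // num_paths)  # ceiling division
    return [first_data[i:i + chunk] for i in range(0, len(first_data), chunk)]

print([len(s) for s in split_fixed_granularity(bytes(48 * 1024))])  # -> [16384, 16384, 16384]
print([len(s) for s in split_by_path_count(bytes(64 * 1024), 3)])   # -> three chunks of ~21.3KB
```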
  • S202 The network node 2 sends the multiple data streams to the network node 1 through the multiple transmission paths; correspondingly, the network node 1 receives the multiple data streams through the multiple transmission paths.
  • each transmission path in the multiple transmission paths may be used to carry data of one or more data streams, and the data of one data stream is transmitted in one transmission path.
  • for example, there are three transmission paths between network node 1 and network node 2 and the number of data streams is three; each of the three transmission paths can be used to carry one data stream.
  • for another example, there are three transmission paths between network node 1 and network node 2 and the number of data streams is four; one of the three transmission paths can be used to carry two data streams, and each of the remaining two transmission paths can be used to carry one data stream.
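  • a minimal sketch of such an assignment, assuming a simple round-robin policy (the application does not mandate a particular assignment rule):

```python
# Sketch: assign numbered data streams to transmission paths. With 4 streams and
# 3 paths, one path carries two streams and the other two carry one stream each.
# Round-robin is an assumed policy used only for illustration.

def assign_streams_to_paths(num_streams: int, paths: list) -> dict:
    assignment = {path: [] for path in paths}
    for stream_id in range(num_streams):
        assignment[paths[stream_id % len(paths)]].append(stream_id)
    return assignment

print(assign_streams_to_paths(4, ["path1", "path2", "path3"]))
# {'path1': [0, 3], 'path2': [1], 'path3': [2]}
```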
  • S203 The network node 1 determines the delay degree of the first data.
  • the network node 1 may record the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream; if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, the network node 1 can determine the delay degree of the first data.
  • the number of the first data stream is smaller than the number of the second data stream, the network node 2 sends the first data stream before sending the second data stream, but the second data stream arrives at the network node 1 before the first data stream.
  • the network environment of the transmission path for transmitting the first data stream is poor, so that the first data stream arrives (or arrives out of sequence) at the network node 1 with a delay.
  • the network node 1 may determine the delay degree of the first data, and the greater the delay degree, the worse the network environment between the network node 1 and the network node 2 is.
  • the reception time of the last data packet of the first data stream may refer to the timestamp extracted when the last data packet of the first data stream is received, or may refer to the time corresponding to receiving the last data packet of the first data stream, and the like.
  • the reception time of the first data packet of the second data stream may refer to the timestamp extracted when the first data packet of the second data stream is received, or may refer to the time corresponding to receiving the first data packet of the second data stream, and the like.
  • each data stream may include one or more data packets.
  • Each data packet may include the number of the data stream to determine which data stream the data packet belongs to, and the sequence of the data of the data stream in the first data. It can be understood that when a data stream includes only one data packet, the first time of the data stream is the same as the second time.
  • in the following, the case in which each data stream includes two or more data packets is taken as an example.
  • the network node 1 may use one or more bits to indicate the delay degree of the first data, so that the resource overhead of the network node 1 sending information to the network node 2 can be reduced, and the resource utilization rate can be improved.
  • for example, network node 1 may use 2 bits to indicate the delay degree of the first data. As shown in Table 1, 00 may represent delay degree 0, 01 may represent delay degree 1, 10 may represent delay degree 2, and 11 may represent delay degree 3. It can be understood that the data in Table 1 are only examples, and the specific form of the delay degree is not limited in the embodiments of the present application.
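  • a minimal sketch of this 2-bit encoding, assuming a hypothetical one-byte layout for the first message (the actual NACK format is not specified here):

```python
# Sketch: encode/decode the delay degree in 2 bits, as in Table 1
# (0b00 -> degree 0, 0b01 -> degree 1, 0b10 -> degree 2, 0b11 -> degree 3).
# The one-byte layout and the "adjust bandwidth" flag bit are assumptions.

ADJUST_FLAG = 0x80  # assumed: highest bit = "adjust bandwidth resources"

def encode_first_message(delay_degree: int) -> bytes:
    if not 0 <= delay_degree <= 3:
        raise ValueError("delay degree must fit in 2 bits")
    return bytes([ADJUST_FLAG | delay_degree])

def decode_first_message(msg: bytes):
    return bool(msg[0] & ADJUST_FLAG), msg[0] & 0b11

print(decode_first_message(encode_first_message(2)))  # (True, 2)
```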
  • the network node 1 may determine the degree of delay of the first data in the following three ways.
  • in one manner, the network node 1 may determine the delay degree of the first data according to the delay duration. For example, the network node 1 may determine the delay duration, where the delay duration may be the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream, and then determine the delay degree of the first data according to the delay duration, where the longer the delay duration, the greater the delay degree. It can be understood that the number of delay durations may be one or more. For example, if the number of delay durations is one, the delay degree can be determined according to that one delay duration; for another example, if the number of delay durations is multiple, the delay degree may be determined according to the multiple delay durations.
  • FIG. 3 is a schematic flowchart of a method for determining a delay duration provided by an embodiment of the present application. As shown in Figure 3, the method flow may include the following contents.
  • S31 The network node 1 determines the first time and the second time of each data stream.
  • the network node 1 may determine the first time and the second time of each of the multiple data streams.
  • the first time may be the receiving time of the first data packet in each data stream.
  • the second time may be the reception time of the last data packet in each data stream.
  • the network node 1 may record and store the reception time of the first data packet and the last data packet in each data flow.
  • the network node 1 may determine which data flow the data packet belongs to according to the number of the data flow included in the data packet.
  • the network node 1 can determine whether the data packet is the first data packet or the last data packet in the data flow according to the number (or sequence number, or identification, etc.) of the data packet included in the data packet.
  • the number (or sequence number, or identification, etc.) of the data packets may be used to indicate the ordering of the data packets in the data flow.
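  • a minimal sketch of this step, assuming each data packet carries its stream number, a per-stream packet sequence number, and a last-packet indication (the field names are illustrative):

```python
# Sketch: record, per data stream, the reception time of its first data packet
# (the "first time") and of its last data packet (the "second time").
# stream_id, seq and is_last are assumed packet fields used for illustration.

import time

first_time = {}   # stream number -> reception time of the first data packet
second_time = {}  # stream number -> reception time of the last data packet

def on_packet_received(stream_id: int, seq: int, is_last: bool) -> None:
    now = time.monotonic()
    if seq == 0:                 # first data packet of this data stream
        first_time[stream_id] = now
    if is_last:                  # last data packet of this data stream
        second_time[stream_id] = now
```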
  • S32 The network node 1 reorders the received multiple data streams.
  • multiple data streams may be used to determine the first data.
  • the network node 1 may reorder the multiple data streams in the reordering buffer area (referred to as the buffer area for short) according to the ordering of the data of each data stream in the first data to obtain the first data; that is, the network node 1 can reorder the multiple data streams in the reordering buffer according to the number of each data stream to obtain the first data. Since the network environment of each of the multiple transmission paths between network node 1 and network node 2 is different, for example in the number of intermediate nodes or the degree of network congestion, the transmission duration required by the data stream on each transmission path is different, so there may be data streams among the multiple data streams that arrive at network node 1 with a delay.
  • the network node 1 will reorder the multiple data streams received out of order in the reordering buffer area to obtain multiple data streams in the correct order, that is, obtain the first data. For example, data flow #1, data flow #0, data flow #3 and data flow #2 arrive at network node 1 successively, and network node 1 reorders these four data flows to obtain four data flows in the correct order, namely Data stream #0, data stream #1, data stream #2, and data stream #3.
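  • a minimal sketch of such a reordering buffer, using the arrival order from the example above (#1, #0, #3, #2); the in-order delivery policy shown is an assumption for illustration:

```python
# Sketch: buffer out-of-order data streams by their number and deliver them to
# the upper layer only as a contiguous in-order prefix, so a stream sent later
# may have to wait in the buffer for an earlier, delayed stream.

class ReorderBuffer:
    def __init__(self):
        self.buffer = {}         # stream number -> stream data
        self.next_expected = 0   # next stream number to deliver

    def on_stream_complete(self, stream_id, data):
        self.buffer[stream_id] = data
        delivered = []
        while self.next_expected in self.buffer:   # deliver the in-order prefix
            delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return delivered  # handed to the upper layer (e.g. the application layer)

rb = ReorderBuffer()
for sid in (1, 0, 3, 2):  # arrival order from the example
    print(sid, rb.on_stream_complete(sid, f"stream#{sid}"))
# #1 waits; #0 releases #0 and #1; #3 waits; #2 releases #2 and #3.
```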
  • it should be noted that the network node 1 may determine, one by one according to the correct sequence of the multiple data streams, which data streams arrive at the network node 1 with a delay; the data streams arriving at network node 1 with a delay may be determined based on the number of each data stream before the streams are reordered or at the same time as the reordering.
  • the network node 1 determines one or more data streams that arrive at the network node 1 with a delay among the multiple data streams.
  • the network node 1 may, according to multiple data streams in correct sequence, one by one determine whether the data stream with the smaller number among the two adjacent numbers in the multiple data streams arrives at the network node 1 with a delay.
  • the network node 1 may determine, according to the numbers of the multiple data streams, one by one whether the data stream with the smaller number among the two adjacent numbers arrives at the network node 1 with a delay.
  • the network node 1 may determine whether the data flow with the smaller number arrives at the network node 1 with a delay according to the second time of the data flow with the smaller number and the first time of the data flow with the larger number among the two adjacent numbers. For example, if the second time of the data flow with the smaller number among the two adjacent numbers is greater than the first time of the data flow with the larger number, the network node 1 may determine that the data flow with the smaller number arrives at the network node 1 with a delay; if the second time of the data flow with the smaller number among the two adjacent numbers is less than or equal to the first time of the data flow with the larger number, the network node 1 can determine that the data flow with the smaller number arrives at the network node 1 without delay.
  • if the second time of the data stream with the smaller number among the two adjacent numbers is greater than the first time of the data stream with the larger number, that is, the reception time of the last data packet of the data stream with the smaller number is later than the reception time of the first data packet of the data stream with the larger number, it means that part or all of the data of the data stream with the smaller number has not yet arrived at network node 1 while the data stream with the larger number has already begun to arrive at network node 1. Because network node 2 sends the data stream with the smaller number before sending the data stream with the larger number, this indicates that part or all of the data of the data stream with the smaller number arrives at network node 1 with a delay, that is, the data stream with the smaller number arrives at network node 1 with a delay.
  • if the reception time of the last data packet of the data stream with the smaller number among the two adjacent numbers is not later than the reception time of the first data packet of the data stream with the larger number, it means that the data stream with the smaller number arrives at network node 1 before the data stream with the larger number; since network node 2 sends the data stream with the smaller number before sending the data stream with the larger number, this indicates that the data stream with the smaller number arrives at network node 1 in the correct order.
  • for example, network node 1 may determine whether data flow #1 is a delayed arriving data flow according to the second time of data flow #1 and the first time of data flow #2. The number of data flow #1 is smaller than the number of data flow #2; that is to say, if data flow #1 and data flow #2 are received in the correct order, network node 1 first receives data flow #1 and then receives data flow #2, i.e., the reception time of the last data packet of data flow #1 should be earlier than the reception time of the first data packet of data flow #2. If the second time of data flow #1 is less than the first time of data flow #2, network node 1 can determine that data flow #1 arrives at network node 1 in the correct order, as shown in FIG. 4; if the second time of data flow #1 is greater than the first time of data flow #2, network node 1 may determine that data flow #1 arrives at network node 1 with a delay, as shown in FIG. 5.
  • the network node 1 determines the delay duration of each data stream delayed to arrive at the network node 1.
  • the network node 1 may determine a delay duration (also referred to as an out-of-order duration) of each data stream delayed to arrive at the network node 1 .
  • the network node 1 may determine the delay duration as the difference between the second time of the data stream with the smaller number and the first time of the data stream with the larger number among two adjacent numbers, where the second time of the data stream with the smaller number is greater than the first time of the data stream with the larger number.
  • for example, network node 1 may perform a difference operation between the second time of data flow #1 and the first time of data flow #2 to obtain the delay duration of data flow #1.
  • the network node 2 may divide the first data into data of 5 data streams, and the 5 data streams are respectively recorded as data stream #0, data stream #1, data stream #2, data stream #3 and data stream Stream #4.
  • the network node 2 sends the first data to the network node 1 through a plurality of transmission paths in the order of data flow #0, data flow #1, data flow #2, data flow #3 and data flow #4.
  • when the network node 1 starts to receive the first data stream, the network node 1 starts timing in order to record the reception time of the first data packet (i.e., the first time) and the reception time of the last data packet (i.e., the second time) of each of the 5 data streams.
  • the first data stream here refers to the data stream that first reaches the network node 1 among the five data streams.
  • for example, the first time of data stream #0 is 1 millisecond (ms) (it can be understood that network node 1 receives the first data packet of data stream #0 at the 1st ms after timing starts), and its second time is 4 ms (it can be understood that network node 1 receives the last data packet of data stream #0 at the 4th ms after timing starts); the first time of data stream #3 is 10 ms and its second time is 13 ms; the first time of data stream #1 is 14 ms and its second time is 20 ms; the first time of data stream #2 is 22 ms and its second time is 29 ms; and the first time of data stream #4 is 30 ms and its second time is 34 ms, as shown in FIG. 6.
  • since the second time of data stream #0 (i.e., 4 ms) is less than the first time of data stream #1 (i.e., 14 ms), network node 1 can determine that data stream #0 arrives at network node 1 in the correct order;
  • since the second time of data stream #1 (i.e., 20 ms) is less than the first time of data stream #2 (i.e., 22 ms), network node 1 can determine that data stream #1 also arrives at network node 1 in the correct order;
  • since the second time of data stream #2 (i.e., 29 ms) is greater than the first time of data stream #3 (i.e., 10 ms), network node 1 can determine that data stream #2 arrives at network node 1 with a delay; further, according to the second time of data stream #2 and the first time of data stream #3, network node 1 can determine that the delay duration of data stream #2 is 19 ms;
  • since the second time of data stream #3 (i.e., 13 ms) is less than the first time of data stream #4 (i.e., 30 ms), network node 1 can determine that data stream #3 also arrives at network node 1 in the correct order.
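  • the checks and the delay duration in the example above can be reproduced with a short sketch (values in milliseconds, taken from FIG. 6 as described):

```python
# Sketch: detect which data streams arrive with a delay and compute their delay
# durations from the recorded (first time, second time) pairs, in ms.
# For each pair of adjacent numbers, the lower-numbered stream is delayed if its
# second time is greater than the first time of the next stream.

times = {  # stream number: (first time, second time)
    0: (1, 4), 1: (14, 20), 2: (22, 29), 3: (10, 13), 4: (30, 34),
}

def find_delayed_streams(times: dict) -> dict:
    delays = {}
    for k in sorted(times)[:-1]:
        second_k = times[k][1]
        first_next = times[k + 1][0]
        if second_k > first_next:
            delays[k] = second_k - first_next  # delay duration of stream #k
    return delays

print(find_delayed_streams(times))  # {2: 19} -> data stream #2 delayed by 19 ms
```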
  • since the delay duration is the length of time by which a data stream is delayed in reaching the network node 1, it can reflect that the network environment of the transmission path transmitting that data stream is poor, for example, the transmission delay is high or the network is congested.
  • for example, data stream #2 arrives at network node 1 with a delay, and the first time of data stream #3 is earlier than the second time of data stream #2, indicating that the network environment of the transmission path transmitting data stream #2 is poor; the longer the delay duration of data stream #2, the worse the network environment of the transmission path transmitting data stream #2.
  • the number of delay durations may be one or more, and the network node 1 may count the total duration of one or more delay durations within a preset duration, and determine the delay degree according to the total duration. Wherein, the larger the total duration of one or more delay durations within the preset duration, the greater the delay degree and the worse the network environment.
  • the network node 1 may determine the delay degree according to the total duration and the corresponding relationship between the total duration and the delay degree. It should be noted that the corresponding relationship between the total duration and the delay degree may be predefined, or may be pre-agreed by the network node 1 and the network node 2, etc., which is not limited in this embodiment of the present application.
  • for example, if the total duration is less than or equal to T1, the network node 1 can determine that the delay degree of the first data is delay degree 0; if the total duration is greater than T1 and less than or equal to T2, the network node 1 may determine that the delay degree of the first data is delay degree 1; if the total duration is greater than T2 and less than or equal to T3, the network node 1 may determine that the delay degree of the first data is delay degree 2; and if the total duration is greater than T3 and less than T4, the network node 1 may determine that the delay degree of the first data is delay degree 3. It can be understood that the data in Table 2 are only examples, and the present application is not limited thereto.
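  • a minimal sketch of this Table 2 style mapping, with T1 < T2 < T3 < T4 as placeholder thresholds (the concrete threshold values are not given in this excerpt and are assumptions):

```python
# Sketch: map the total delay duration counted within the preset window to a
# delay degree using thresholds T1..T4, mirroring the Table 2 description.
# The threshold values below are placeholders, not taken from the application.

T1, T2, T3, T4 = 5.0, 15.0, 30.0, 60.0  # assumed thresholds, in ms

def delay_degree_from_duration(total_duration_ms: float) -> int:
    if total_duration_ms <= T1:
        return 0
    if total_duration_ms <= T2:
        return 1
    if total_duration_ms <= T3:
        return 2
    return 3  # greater than T3 (and, per Table 2, less than T4)

print(delay_degree_from_duration(19.0))  # -> 2 with these assumed thresholds
```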
  • as another example, when the number of delay durations is more than one, network node 1 may perform an averaging operation (such as computing the mean) on the multiple delay durations to obtain an average delay duration, and determine the delay degree according to the average delay duration.
  • the network node 1 may count multiple delay durations within a preset time length, remove the maximum value and the minimum value among the multiple delay durations, and perform an average operation on the remaining delay durations. Among them, the longer the average delay time is, the greater the delay degree is, and the worse the network environment is.
  • the network node 1 may determine the delay degree according to the average delay duration and the corresponding relationship between the average delay duration and the delay degree.
  • the corresponding relationship between the average delay duration and the delay degree may refer to Table 2, which will not be repeated here. It should be noted that the correspondence between the average delay duration and the delay degree may be predefined, or may be pre-agreed by the network node 1 and the network node 2, etc., which is not limited in this embodiment of the present application.
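One possible way to compute the average delay duration described above (discard the maximum and the minimum among the delay durations collected within the preset time length, then average the rest) is sketched below; the fallback for fewer than three samples is an assumption added for robustness, not something the embodiment specifies.

```python
def average_delay(delay_durations_ms):
    """Trimmed mean: remove one maximum and one minimum, average the rest."""
    samples = sorted(delay_durations_ms)
    if len(samples) <= 2:
        # Too few samples to trim; fall back to a plain mean.
        return sum(samples) / len(samples)
    trimmed = samples[1:-1]
    return sum(trimmed) / len(trimmed)

print(average_delay([19, 7, 42, 11]))  # (19 + 11) / 2 = 15.0 ms
```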
  • in the above manner one, the delay duration can reflect the network environment, such as high transmission delay or network congestion, and the delay degree determined from the delay duration can also reflect the network environment: the larger the delay duration, the greater the delay degree and the worse the network environment. In this way, network node 1 may send the delay degree to network node 2, so that network node 2 adjusts the bandwidth resources according to the delay degree determined from the delay duration, to reduce the number of data streams arriving at network node 1 with a delay and/or to reduce the delay duration of the data streams that arrive at network node 1 with a delay, so that the buffer resources required by network node 1 for reordering the multiple data streams can be reduced and the network performance can be improved.
  • manner two: network node 1 may determine the delay degree of the first data according to the buffer resources occupied when the multiple data streams are reordered. For example, network node 1 may obtain the buffer resources occupied when reordering the multiple data streams, such as counting the buffer resources occupied from the time the first of the multiple data streams arrives at network node 1 until network node 1 completes the reordering of the multiple data streams, and determine the delay degree according to the buffer resources occupied when the multiple data streams are reordered. For ease of description, the buffer resource occupied when reordering the multiple data streams is simply referred to as the first resource hereinafter.
  • when the number of data flows that are delayed in reaching network node 1 is large, or the delay duration of the data flows that are delayed in reaching network node 1 is large, or both, the demand of network node 1 for buffer resources when reordering the multiple data streams increases significantly; however, the buffer resources of network node 1 are limited, which increases the buffering pressure on network node 1 and affects its performance. Therefore, the demand of network node 1 for the first resource can reflect the network environment, and the delay degree determined from the first resource can also reflect the network environment: the more the first resource, the worse the network environment and the greater the delay degree.
  • the network node 1 may determine the delay degree according to the first resource and the corresponding relationship between the first resource and the delay degree. It should be noted that the correspondence between the first resource and the delay degree may be predefined, or may be pre-agreed by the network node 1 and the network node 2, etc., which is not limited in this embodiment of the present application.
  • as shown in Table 3, if the first resource is greater than resource 0 and less than or equal to resource 1, network node 1 can determine that the delay degree of the first data is delay degree 0; if the first resource is greater than resource 1 and less than or equal to resource 2, network node 1 can determine that the delay degree of the first data is delay degree 1; if the first resource is greater than resource 2 and less than or equal to resource 3, network node 1 can determine that the delay degree of the first data is delay degree 2; if the first resource is greater than resource 3 and less than resource 4, network node 1 may determine that the delay degree of the first data is delay degree 3. It can be understood that the data in Table 3 are only examples, and the present application is not limited thereto.
  • in the above manner two, the first resource is the buffer resource required by network node 1 when reordering the multiple data streams. When more data streams are delayed in reaching network node 1 and/or the delay durations of the data streams that are delayed in reaching network node 1 are larger, network node 1 needs more of the first resource; that is, the first resource can reflect the network environment, and the delay degree determined from the first resource can also reflect the network environment, for example, the more the first resource, the greater the delay degree and the worse the network environment. In this way, network node 1 can send the delay degree to network node 2, so that network node 2 adjusts the bandwidth resources according to the delay degree determined from the first resource, so as to reduce the buffering pressure on network node 1 and improve the network performance.
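To make the first resource concrete, the sketch below replays an out-of-order arrival and records the peak number of buffered bytes between the first arrival and the completion of reordering. It deliberately simplifies by treating each data stream as a single block of bytes; the 16 KB stream size and the arrival order are taken from the earlier examples, everything else is an assumption.

```python
def reorder_and_measure(arrival_order, sizes):
    """arrival_order: stream numbers in the order they reach network node 1.
    sizes: bytes carried by each stream, indexed by stream number.
    A stream is held in the reordering buffer until every lower-numbered
    stream has been delivered; returns (delivery order, peak buffered bytes)."""
    buffer = {}           # stream number -> buffered bytes
    delivered = []        # streams handed to the upper layer, in order
    next_expected = 0
    peak = 0
    for s in arrival_order:
        buffer[s] = sizes[s]
        peak = max(peak, sum(buffer.values()))
        # Flush every stream that is now in order and release its buffer.
        while next_expected in buffer:
            delivered.append(next_expected)
            del buffer[next_expected]
            next_expected += 1
    return delivered, peak

# Five streams of 16 KB each, arriving in the order of FIG. 6 (#0, #3, #1, #2, #4).
order, first_resource = reorder_and_measure([0, 3, 1, 2, 4], [16384] * 5)
print(order, first_resource)   # [0, 1, 2, 3, 4] 32768 (two streams buffered at the peak)
```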
  • manner three: network node 1 may determine the delay degree of the first data according to the delay duration and the first resource, where the larger the delay duration and the more the first resource, the worse the network environment and the greater the delay degree. For example, network node 1 can count the total duration of one or more delay durations and the first resource within the preset time length, and determine the delay degree according to the total duration and the first resource, for example according to the correspondence between the total duration, the first resource and the delay degree.
  • as another example, network node 1 may count the average delay duration and the first resource within the preset time length, and determine the delay degree according to the average delay duration and the first resource, for example according to the correspondence between the average delay duration, the first resource and the delay degree. For the specific implementation of manner three, reference may be made to the foregoing descriptions of manner one and manner two, which are not repeated here. It should be noted that the correspondence between the total duration (or the average delay duration), the first resource and the delay degree may be predefined, or may be pre-agreed by network node 1 and network node 2, etc., which is not limited in this embodiment of the present application.
  • taking the case where the delay degree is determined from the average delay duration and the first resource as an example: if the first resource is greater than resource 0 and less than or equal to resource 1, and the average delay duration is greater than T0 and less than or equal to T1, network node 1 can determine that the delay degree of the first data is delay degree 0; if the first resource is greater than resource 1 and less than or equal to resource 2, and the average delay duration is greater than T1 and less than or equal to T2, network node 1 can determine that the delay degree of the first data is delay degree 1; if the first resource is greater than resource 2 and less than or equal to resource 3, and the average delay duration is greater than T2 and less than or equal to T3, network node 1 can determine that the delay degree of the first data is delay degree 2; if the first resource is greater than resource 3 and less than resource 4, and the average delay duration is greater than T3 and less than T4, network node 1 may determine that the delay degree of the first data is delay degree 3. It can be understood that the data in Table 4 are only examples, and the present application is not limited thereto.
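A combined determination in the spirit of Table 4 might look like the following sketch. The duration and resource bounds are placeholders, and the policy of reporting the larger of the two individual degrees when they disagree is one possible interpretation chosen for the illustration; Table 4 itself only defines the case where both criteria fall into the same row.

```python
# Placeholder bounds standing in for (T0..T4) and (resource 0..resource 4).
DURATION_BOUNDS_MS = [0, 10, 20, 40, 80]
RESOURCE_BOUNDS_BYTES = [0, 16_384, 32_768, 65_536, 131_072]

def degree_from_bounds(value, bounds):
    """Return the row index (0..3) whose (lower, upper] range contains value."""
    for degree, upper in enumerate(bounds[1:]):
        if value <= upper:
            return degree
    return len(bounds) - 2        # clamp to the highest degree

def combined_delay_degree(avg_delay_ms, first_resource_bytes):
    """One possible combination rule: report the larger of the two degrees."""
    return max(degree_from_bounds(avg_delay_ms, DURATION_BOUNDS_MS),
               degree_from_bounds(first_resource_bytes, RESOURCE_BOUNDS_BYTES))

print(combined_delay_degree(15.0, 32_768))  # -> 1 with these placeholder bounds
```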
  • in the above manner three, the first resource and the delay duration reflect the network environment from different perspectives: the first resource reflects the network environment from the perspective of the capability range of network node 1 (such as its buffer resources), while the delay duration infers the overall network environment from the network environment of the transmission paths. The delay degree determined from the first resource and the delay duration can therefore reflect the network environment accurately; for example, the more the first resource, the greater the delay degree and the worse the network environment. In this way, network node 1 can send the delay degree to network node 2, so that network node 2 adjusts the bandwidth resources according to the delay degree determined from the first resource and the delay duration, so as to reduce the number of data streams that arrive at network node 1 with a delay and/or the delay duration of those data streams, reduce the buffering pressure on network node 1, and improve the network performance.
  • S204: Network node 1 sends the first message to network node 2; correspondingly, network node 2 receives the first message from network node 1.
  • the first message may include information about the degree of delay and indication information instructing the network node 2 to adjust the bandwidth resources.
  • the first message may only include information about the degree of delay, in which case the first message may implicitly instruct the network node 2 to adjust the bandwidth resources.
  • the first message may be, but is not limited to, a negative acknowledgement (NACK) message and the like.
  • the information about the delay degree may include the delay degree, for example, 2 bits are used to indicate the delay degree, as shown in Table 1.
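The 2-bit delay degree of Table 1 can be carried in a very small feedback message. The sketch below packs it, together with a one-bit adjust-bandwidth flag, into a single byte; the field layout, the flag, and the framing are assumptions made purely for illustration, since the embodiment only says the first message may be, for example, a NACK-style message.

```python
import struct

def build_first_message(delay_degree, request_adjust=True):
    """Pack a 2-bit delay degree (0..3, Table 1) and a 1-bit adjust flag
    into one byte: bit 2 = adjust flag, bits 1..0 = delay degree."""
    if not 0 <= delay_degree <= 3:
        raise ValueError("delay degree must fit in 2 bits")
    flags = (int(request_adjust) << 2) | delay_degree
    return struct.pack("!B", flags)

def parse_first_message(payload):
    (flags,) = struct.unpack("!B", payload)
    return {"delay_degree": flags & 0b11, "adjust": bool(flags & 0b100)}

msg = build_first_message(2)
print(parse_first_message(msg))   # {'delay_degree': 2, 'adjust': True}
```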
  • S205: Network node 2 adjusts the bandwidth resources according to the delay degree.
  • after receiving the first message, network node 2 may adjust the bandwidth resources according to the delay degree. The delay degree reflects the network environment and/or the buffering situation of network node 1. When adjusting the bandwidth resources according to the delay degree, network node 2 may adjust the bandwidth resources of each transmission path between network node 2 and network node 1 by the same amount (for example, reduce every path by the same amount or increase every path by the same amount), so that the adjusted bandwidth resources of each transmission path can adapt to the network environment, thereby reducing the number of data streams that arrive at network node 1 with a delay and the delay duration of the data streams that arrive at network node 1 with a delay. The greater the delay degree, the less the adjusted bandwidth resources of each of the multiple transmission paths.
  • the network node 2 may adjust the current bandwidth resources, such as reducing the current bandwidth resources, according to the delay degree and the corresponding relationship between the delay degree and the bandwidth resources.
  • the corresponding relationship between the delay degree and the bandwidth resources may be predefined, or may be pre-agreed by network node 1 and network node 2, etc., which is not limited in this embodiment of the present application.
  • as shown in Table 5, if the delay degree of the first data is delay degree 0, network node 2 can reduce the bandwidth resource of each transmission path by bandwidth resource 0; if the delay degree of the first data is delay degree 1, network node 2 can reduce the bandwidth resource of each transmission path by bandwidth resource 1; if the delay degree of the first data is delay degree 2, network node 2 can reduce the bandwidth resource of each transmission path by bandwidth resource 2; if the delay degree of the first data is delay degree 3, network node 2 can reduce the bandwidth resource of each transmission path by bandwidth resource 3.
  • the bandwidth resource 0 is smaller than the bandwidth resource 1, the bandwidth resource 1 is smaller than the bandwidth resource 2, and the bandwidth resource 2 is smaller than the bandwidth resource 3.
  • the bandwidth resource 0 corresponding to the delay degree 0 may be greater than 0, or less than 0, or equal to 0.
  • when bandwidth resource 0 is greater than 0, network node 2 can reduce the bandwidth resource of each transmission path by bandwidth resource 0; when bandwidth resource 0 is equal to 0, network node 2 can keep the bandwidth resource of each transmission path unchanged, for example, when the delay duration is less than a first threshold and/or the first resource is less than a second threshold, the buffering pressure and the computing pressure of network node 1 are within its capability range, and in this case network node 2 can keep the bandwidth resource of each transmission path unchanged to maintain the current data transmission efficiency; when bandwidth resource 0 is less than 0, network node 2 can increase the bandwidth resource of each transmission path by the absolute value of bandwidth resource 0, for example, when the delay duration is less than the first threshold and/or the first resource is less than the second threshold, the buffering pressure and the computing pressure of network node 1 are within its capability range, and network node 2 can appropriately increase the bandwidth resources of each transmission path so as to appropriately improve the data transmission efficiency within the capability range of network node 1.
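On the sending side, a Table 5 style adjustment could be applied as in the sketch below: every transmission path is adjusted by the same amount, and a non-positive decrement for delay degree 0 models the keep-unchanged or slight-increase cases discussed above. The decrement values, the Mbit/s rate representation, and the minimum-rate floor are assumptions for the illustration.

```python
# Hypothetical per-degree decrements in Mbit/s standing in for Table 5.
# A negative entry for degree 0 means "increase each path slightly".
DECREMENT_BY_DEGREE = {0: -5, 1: 10, 2: 20, 3: 40}
MIN_PATH_RATE = 10   # floor so a path is never throttled to zero

def adjust_paths(path_rates, delay_degree):
    """Apply the same adjustment to every transmission path, since network
    node 2 adjusts each path by the same amount."""
    delta = DECREMENT_BY_DEGREE[delay_degree]
    return [max(MIN_PATH_RATE, rate - delta) for rate in path_rates]

print(adjust_paths([100, 100, 100], 2))  # -> [80, 80, 80]
print(adjust_paths([100, 100, 100], 0))  # -> [105, 105, 105] (slight increase)
```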
  • the first threshold may be predefined, or may be pre-agreed by the network node 1 and the network node 2, etc., which is not limited in this embodiment of the present application.
  • the second threshold may be predefined, or may be pre-agreed by the network node 1 and the network node 2, etc., which is not limited in this embodiment of the present application.
  • as another example, bandwidth resource 0 corresponding to delay degree 0 is greater than 0. When network node 1 determines that the delay duration is less than the first threshold and/or the first resource is less than the second threshold, network node 1 may send first indication information to network node 2, and the first indication information may be used to instruct network node 2 to keep the bandwidth resources of each transmission path unchanged, or to instruct network node 2 to appropriately increase the bandwidth resources of each transmission path.
  • in the above embodiments of the present application, network node 2 sends the multiple data streams of the first data to network node 1 through multiple transmission paths, which avoids the problem that transmitting the first data over a single transmission path leaves that transmission path heavily loaded while other transmission paths are idle or lightly loaded, thereby achieving load balancing.
  • after receiving the multiple data streams, network node 1 can count, according to the first time and the second time of the two data streams corresponding to two adjacent numbers, the delay duration of the data streams among the multiple data streams that are delayed in reaching network node 1, determine the delay degree of the first data according to the delay duration, and send the delay degree to network node 2.
  • since the number and/or size of the delay durations can reflect the network environment, the delay degree determined from the delay durations can also reflect the network environment, so network node 2 can adjust the bandwidth resources according to the delay degree; for example, the greater the delay degree, the worse the network environment and the less the adjusted bandwidth resources of each transmission path. In this way, the number of data streams that are delayed in reaching network node 1 and/or the delay duration of such data streams can be reduced, thereby reducing the computing pressure and buffering pressure on network node 1 for reordering the multiple data streams and improving the overall network performance.
  • in a possible implementation, in the above steps S203 to S205, network node 1 may determine the delay duration (such as the total duration or the average delay duration) and carry the delay duration in the first message sent to network node 2.
  • the network node 2 may determine the delay degree according to the delay duration, and adjust the bandwidth resources according to the determined delay degree.
  • alternatively, network node 2 can directly adjust the bandwidth resources according to the delay duration, for example according to the delay duration and the correspondence between the delay duration and the bandwidth resources: the larger the delay duration, the less the adjusted bandwidth resources of each transmission path. For the correspondence between the delay duration and the bandwidth resources, reference may be made to the aforementioned correspondence between the delay degree and the bandwidth resources, which is not repeated here.
  • the network node 1 may acquire the first resource, and send the first resource to the network node 2 by carrying the first resource in the first message.
  • the network node 2 may determine the delay degree according to the first resource, and adjust the bandwidth resource according to the determined delay degree.
  • the network node 2 may directly adjust the bandwidth resource according to the first resource.
  • for example, network node 2 may adjust the bandwidth resources according to the first resource and the correspondence between the first resource and the bandwidth resources: the more the first resource, the less the adjusted bandwidth resources of each transmission path. For the correspondence between the first resource and the bandwidth resources, reference may be made to the aforementioned correspondence between the delay degree and the bandwidth resources, which is not repeated here.
  • the network node 1 may obtain the delay duration and the first resource, and send the delay duration and the first resource in the first message to the network node 2.
  • the network node 2 may determine the delay degree according to the delay duration and the first resource, and adjust the bandwidth resource according to the determined delay degree.
  • the network node 2 can directly adjust the bandwidth resources according to the delay time and the first resource.
  • for example, network node 2 can adjust the bandwidth resources according to the correspondence between the delay duration, the first resource and the bandwidth resources: the larger the delay duration and the more the first resource, the less the adjusted bandwidth resources of each transmission path. For the correspondence between the delay duration, the first resource and the bandwidth resources, reference may be made to the aforementioned correspondence between the delay degree and the bandwidth resources, which is not repeated here.
  • the above embodiments introduce the methods provided in the embodiments of the present application from the perspective of the interaction between network node 1 and network node 2. In order to implement the functions in these methods, network node 1 and network node 2 may include hardware structures and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and the design constraints of the technical solution.
  • FIG. 8 shows a schematic structural diagram of a bandwidth adjustment apparatus 800 .
  • the bandwidth adjustment apparatus 800 may be network node 1 (or network node 2) in any of the embodiments shown in the foregoing FIG. 2 to FIG. 7, and can implement the functions of network node 1 (or network node 2) in the methods provided in the embodiments of the present application; the bandwidth adjustment apparatus 800 may also be an apparatus capable of supporting network node 1 (or network node 2) in implementing the functions of network node 1 (or network node 2) in the methods provided in the embodiments of the present application.
  • the bandwidth adjustment apparatus 800 may be a hardware structure, a software module, or a hardware structure plus a software module.
  • the bandwidth adjustment device 800 may be implemented by a chip system. In this embodiment of the present application, the chip system may be composed of chips, or may include chips and other discrete devices.
  • the bandwidth adjustment apparatus 800 may include a processing module 801 and a communication module 802 .
  • taking the bandwidth adjustment apparatus 800 being network node 1 as an example, the communication module 802 can be configured to receive multiple data streams from the second network node through multiple transmission paths, where the data of each of the multiple data streams has a different order in the first data, the multiple data streams can be used to determine the first data, the multiple data streams include a first data stream and a second data stream, the data of the first data stream precedes the data of the second data stream in the ordering of the first data, and the data of the first data stream and the data of the second data stream are adjacent in the first data.
  • the processing module 801 may be configured to count the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream. If the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, the processing module 801 is configured to determine the degree of delay of the first data.
  • the communication module 802 may also be configured to send a first message to the second network node, where the first message includes information about the delay degree and indication information instructing the second network node to adjust bandwidth resources.
  • in a possible implementation, the greater the delay degree, the less the bandwidth resources of each of the adjusted multiple transmission paths.
  • the processing module 801 may be specifically configured to: determine a delay duration, where the delay duration is the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream, and determine the delay degree according to the delay duration, where the larger the delay duration, the greater the delay degree.
  • the processing module 801 may be further configured to: perform an averaging operation on multiple delay durations to obtain the average delay duration of the first data, and determine the delay degree according to the average delay duration, where the larger the average delay duration, the greater the delay degree.
  • the processing module 801 may be further configured to: determine a first resource, where the first resource is a cache resource occupied when reordering multiple data streams, and determine a delay degree according to the first resource, Wherein, the more the first resource, the greater the degree of delay.
  • the processing module 801 may be further configured to: reorder the multiple data streams according to the sequence of the data of each data stream in the first data to obtain the first data.
  • taking the bandwidth adjustment apparatus 800 being network node 2 as an example, the communication module 802 can be used to send multiple data streams to the first network node through multiple transmission paths, where the multiple data streams include a first data stream and a second data stream, the data of the first data stream precedes the data of the second data stream in the ordering of the first data, and the data of the first data stream and the data of the second data stream are adjacent in the first data; and to receive a first message from the first network node, where the first message includes information about the delay degree and indication information instructing network node 2 to adjust bandwidth resources, the delay degree is determined according to the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream.
  • the processing module 801 may be configured to adjust the bandwidth resources according to the delay duration; for example, the larger the delay duration, the less the adjusted bandwidth resources of each transmission path.
  • the processing module 801 may be further configured to divide the first data into data of multiple data streams.
  • the communication module 802 is used for the bandwidth adjustment device 800 to communicate with other modules, and it can be a circuit, a device, an interface, a bus, a software module, a transceiver or any other device that can implement communication.
  • the division of modules in the embodiments of the present application is schematic, and is only a logical function division. In actual implementation, there may be other division methods.
  • the functional modules in the various embodiments of the present application may be integrated into one processor, or may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • FIG. 9 shows a communication apparatus 900 provided in an embodiment of the present application, where the communication apparatus 900 may be network node 1 (or network node 2) in any of the embodiments shown in FIG. 2 to FIG. 7 and can implement the functions of network node 1 (or network node 2) in the methods provided in the embodiments of the present application; the communication apparatus 900 may also be an apparatus capable of supporting network node 1 (or network node 2) in implementing those functions.
  • the communication apparatus 900 may be a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the above-mentioned communication module 802 may be a transceiver, and the transceiver is integrated into the communication device 900 to form a communication interface 910 .
  • the communication apparatus 900 may include at least one processor 920, configured to implement or support the communication apparatus 900 to implement the function of the network node 1 or the network node 2 in the method provided in this embodiment of the present application.
  • the processor 920 may determine the delay degree according to the delay duration. For details, please refer to the detailed description in the method example, which will not be repeated here.
  • the processor 920 may adjust the bandwidth resource according to the degree of delay. For details, please refer to the detailed description in the method example, which will not be repeated here.
  • Communication apparatus 900 may also include at least one memory 930 for storing program instructions and/or data.
  • Memory 930 is coupled to processor 920 .
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • Processor 920 may cooperate with memory 930 .
  • Processor 920 may execute program instructions stored in memory 930 . At least one of the at least one memory may be included in the processor.
  • the communication apparatus 900 may further include a communication interface 910 for communicating with other devices through a transmission medium, so that the devices used in the communication apparatus 900 may communicate with other devices.
  • the communication apparatus 900 is the network node 1, and the other device may be the network node 2; or, the communication apparatus 900 is the network node 2, and the other device may be the network node 1.
  • the processor 920 may use the communication interface 910 to send and receive data.
  • the communication interface 910 may specifically be a transceiver.
  • the specific connection medium between the communication interface 910 , the processor 920 , and the memory 930 is not limited in the embodiments of the present application.
  • the memory 930, the processor 920, and the communication interface 910 are connected through a bus 940 in FIG. 9.
  • the bus is represented by a thick line in FIG. 9.
  • the connection manner between the other components is only schematically illustrated and is not limited thereto.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 9, but it does not mean that there is only one bus or one type of bus.
  • the processor 920 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement Alternatively, each method, step, and logic block diagram disclosed in the embodiments of the present application are executed.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory 930 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), etc., or a volatile memory (volatile memory), Such as random-access memory (random-access memory, RAM).
  • the memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
  • Embodiments of the present application further provide a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute the method executed by the first network node or the second network node in the foregoing embodiments.
  • Embodiments of the present application further provide a computer program product, including instructions, which when run on a computer, cause the computer to execute the method executed by the first network node or the second network node in the foregoing embodiments.
  • An embodiment of the present application provides a chip system, where the chip system includes a processor, and may further include a memory, for implementing the function of the first network node or the second network node in the foregoing method.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • An embodiment of the present application provides a communication system, where the communication system includes the first network node and the second network node in the foregoing embodiments.
  • the methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, the methods can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, network equipment, user equipment, or other programmable apparatus.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
  • a computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media.
  • the available media can be magnetic media (eg, floppy disks, hard disks, magnetic tape), optical media (eg, digital video disc (DVD)), or semiconductor media (eg, SSD), and the like.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application discloses a bandwidth adjustment method, apparatus, and system. The method may be performed by a first network node or by a component of the first network node. In the method, the first network node receives, through multiple transmission paths, multiple data streams including a first data stream and a second data stream, where the data of the first data stream precedes the data of the second data stream in the ordering of the first data, and the data of the first data stream and the data of the second data stream are adjacent in the first data; if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, a delay degree is determined; and a first message is sent to a second network node, the first message including information about the delay degree and indication information instructing the second network node to adjust bandwidth resources. In this way, the second network node adjusts the bandwidth resources according to the delay degree, which can reduce the computing pressure and buffering pressure of the first network node for reordering the multiple data streams.

Description

一种带宽调整方法、装置以及系统 技术领域
本申请涉及通信技术领域,尤其涉及一种带宽调整方法、装置以及系统。
背景技术
在云计算、大数据等技术的带动下,分布式应用成为数据中心网络(data center network,DCN)的主要工作方式。该分布式应用是指应用程序分布在不同的计算机上,通过网络来共同完成一项任务的工作方式。所以,实现数据中心网络的负载均衡对提升网络带宽利用率具有重要意义。
目前,发送端设备与接收端设备之间可以存在多个传输路径。发送端设备可以按照固定粒度(如16KB)将待发送的数据均分为多个数据流,并通过该多个传输路径向接收端设备发送该多个数据流,这样可以避免单个传输路径上负载较大,其它传输路径处于空闲状态或负载较小的问题,从而实现负载均衡。由于每个传输路径的网络环境不同,即数据流在每个传输路径上所需的传输时长不同,这就导致该多个数据流是乱序到达接收端设备的,也即是多个数据流中存在延迟到达接收端设备的数据流。接收端设备可以对缓存区中乱序到达的多个数据流进行重排序来解决这一问题。
当延迟到达接收端设备的数据流的数量较多时,会增大接收端设备对该多个数据流进行重排序的运算压力和缓存压力,不利于整网性能的提升。
发明内容
本申请提供一种带宽调整方法、装置以及系统,用以减少接收端设备对多个数据流进行重排序的运算压力和缓存压力。
第一方面,本申请提供一种带宽调整方法,该方法可以由第一网络节点执行,或者由第一网络节点的部件(如芯片或芯片系统等)执行。在该方法中,第一网络节点可以通过多个传输路径接收来自第二网络节点的多个数据流,其中,多个数据流中的每个数据流的数据在第一数据中的排序不同,该多个数据流可以包括第一数据流和第二数据流,第一数据流中的数据在第一数据中的排序先于第二数据流中的数据,且第一数据流中的数据与第二数据流的数据在第一数据流中相邻,每个数据流包括至少一个数据包;第一网络节点可以统计第一数据流的最后一个数据包的接收时间以及第二数据流的第一个数据包的接收时间,若第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间,则确定第一数据的延迟程度;以及,第一网络节点可以向第二网络节点发送第一消息,该第一消息中包括关于延迟程度的信息以及指示第二网络节点调整带宽资源的指示信息。
在上述实施例中,第二网络节点与第一网络节点之间存在多个传输路径,第一网络节点可以通过该多个传输路径接收第一数据的多个数据流,即使用该多个传输路径来一起承载第一数据,这样可以避免使用单个传输路径传输第一数据导致该传输路径负载大,其它传输路径处于空闲状态或负载较小的问题,达到负载均衡的效果。第一网络节点在接收到多个数据流后,可以统计第一数据流的最后一个数据包的接收时间和第二数据流的第一个 数据包的接收时间。若第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间,该第一数据流的数据在第一数据中的排序先于第二数据流的数据,这就意味着第一数据流延迟到达第一网络节点,在此情况下,第一网络节点可以确定第一数据的延迟程度。该延迟程度可以反映网络环境,如延迟程度越大,网络环境越差,第一网络节点可以将延迟程度发送给第二网络节点,以使得第二网络节点根据该延迟程度对带宽资源进行调整。例如,第二网络节点可以适当性的减少每个传输路径的带宽资源以适应较差的网络环境,从而可以减少延迟到达网络节点的数据流的数量,减少了第一网络节点重排序多个数据流的运算压力和缓存压力,可以整体提高网络性能。
在一种可能的设计中,延迟程度越大,调整后的多个传输路径中每个传输路径的带宽资源越少。
通过该设计,延迟程度越大,意味着网络环境越差,相应地,调整后的多个传输路径中每个传输路径的带宽资源也就越少,以适应较差的网络环境。
在一种可能的设计中,第一网络节点确定第一数据的延迟程度,可以为:第一网络节点确定延迟时长,该延迟时长可以为第二数据流的第一个数据包的接收时间与第一数据流的最后一个数据包的接收时间的差值;以及,第一网络节点根据延迟时长确定延迟程度,其中,延迟时长越大,延迟程度越大。
通过该设计,第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间,即第一数据流为延迟到达第一网络节点的数据流。第一网络节点可以对第二数据流的第一个数据包的接收时间和第一数据流的最后一个数据包的接收时间进行差值运算,得到延迟时长,以及根据该延迟时长确定延迟时长可以反映网络环境。该延迟时长可以反映网络环境,由延迟时长确定的延迟程度也可以反映网络环境,如延迟时长越大,延迟程度越大,网络环境越差,所以第二网络节点根据由延迟时长确定的延迟程度对带宽资源进行适当性的调整,可以减少延迟到达第一网络节点的数据流的数量和/或延迟到达第一网络节点的数据流的延迟时长。
在一种可能的设计中,延迟时长的数量为多个,该方法还可以包括:第一网络节点对多个延迟时长进行均值运算,得到第一数据的平均延迟时长;第一网络节点根据延迟时长确定延迟程度,可以为:第一网络节点根据平均延迟时长确定延迟程度,其中,平均延迟时长越大,延迟程度越大。
通过该设计,当延迟到达第一网络节点的数据流的数量为多个时,第一网络节点可以对多个延迟时长进行均值运算(如平均值运算等),得到平均延迟时长,再根据该平均延迟时长确定延迟程度。由于平均延迟时长可以从整体上反映网络环境,由平均延迟时长确定的延迟程度也可以从整体上反映网络环境,如平均延迟时长越大,延迟程度越大,网络环境越差,所以第二网络节点根据由平均延迟时长确定的延迟程度对带宽资源进行适当性的调整,可以减少延迟到达第一网络节点的数据流的数量和/或延迟到达第一网络节点的数据流的延迟时长。
在一种可能的设计中,该方法还可以包括:第一网络节点确定第一资源,第一资源为重排序多个数据流时占用的缓存资源;以及,第一网络节点根据第一资源,确定延迟程度,其中,第一资源越多,延迟程度越大。
通过该设计,多个数据流乱序到达第一网络节点,为了得到第一数据,第一网络节点可以在缓存区中根据每个数据流的数据在第一数据中的排序对该多个数据流进行重排序 以得到第一数据。第一网络节点可以统计重排序该多个数据流时所占用的缓存资源(即第一资源),以及根据第一资源确定延迟程度。例如,第二数据流优先达到第一网络节点,并存储在缓存区内,第一网络节点需等待第一数据流到达后,对第一数据流和第二数据流进行重排序,将重排序后的第一数据流和第二数据流发往上层(如应用层等),然后释放第二数据流和第一数据流占用的缓存资源。若第一数据流的延迟时长较大,第二数据流会在较长时间段内占用缓存资源,而第一网络节点的缓存资源有限,这就增大了第一网络节点的缓存压力。第一网络节点将由第一资源确定的延迟程度发送给第二网络节点,这样第二网络节点可以基于由第一资源确定的延迟程度调整带宽资源,以减少第一网络节点的缓存压力。
在一种可能的设计中,该方法还可以包括:第一网络节点根据每个数据流的数据在第一数据中的排序,对多个数据流进行重排序,得到第一数据。
通过该设计,第一网络节点与第二网络节点之间存在多个传输路径,每个传输路径所处的网络环境不同,那么数据流在每个传输路径上的传输时长可能不同,这就使得多个数据流乱序到达第一网络节点。第一网络节点可以根据每个数据流的数据在第一数据中的排序对乱序到达的多个数据流进行重排序,从而得到正确顺序的多个数据流,即得到第一数据。
在一种可能的设计中,第一消息可以但不限于为NACK消息等。
第二方面,本申请提供一种带宽调整方法,该方法可以由第二网络节点执行,或者由第二网络节点的部件(如芯片或芯片系统等)执行。在该方法中,第二网络节点可以通过多个传输路径向第一网络节点发送多个数据流,其中,多个数据流中的每个数据流的数据在第一数据中的排序不同,多个数据流包括第一数据流和第二数据流,第一数据流中的数据在第一数据中的排序先于第二数据流中的数据,第一数据流中的数据与第二数据流中的数据在第一数据中相邻,每个数据流包括至少一个数据包;接收来自第一网络节点的第一消息,第一消息中包括关于延迟程度的信息,以及指示第二网络节点调整带宽资源的指示信息,其中,延迟程度是根据第一数据流的最后一个数据包的接收时间以及第二数据流的第一个数据包的接收时间确定的,第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间;以及,根据延迟程度调整带宽资源。
在一种可能的设计中,延迟程度越大,调整后的多个传输路径中每个传输路径的带宽资源越少。
在一种可能的设计中,延迟程度可以是根据延迟时长确定的,该延迟时长为第二数据流的第一个数据包的接收时间与第一数据流的最后一个数据包的接收时间的差值,其中,延迟时长越大,延迟程度越大。
在一种可能的设计中,延迟程度可以是根据平均延迟时长确定的,该平均延迟时长是对多个延迟时长进行均值运算得到的,其中,平均延迟时长越大,延迟程度越大。
在一种可能的设计中,延迟程度可以是根据第一资源确定的,该第一资源为重排序多个数据流时占用的缓存资源,其中,第一资源越多,延迟程度越大。
在一种可能的设计中,该方法还可以包括:第二网络节点将第一数据划分为多个数据流的数据。
第三方面,本申请提供一种带宽调整装置,该装置可以包括处理模块和通信模块,这些模块可以执行上述第一方面任一种设计示例中第一网络节点所执行的相应功能。
示例性的,通信模块,可以用于通过多个传输路径接收来自第二网络节点的多个数据流,其中,多个数据流中的每个数据流的数据在第一数据中的排序不同,多个数据流包括第一数据流和第二数据流,第一数据流中的数据在第一数据中的排序先于第二数据流中的数据,第一数据流中的数据与第二数据流中的数据在第一数据中相邻,每个数据流包括至少一个数据包。
处理模块,可以用于统计第一数据流的最后一个数据包的接收时间,以及第二数据流的第一个数据包的接收时间,若第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间,则确定第一数据的延迟程度;
该通信模块,还可以用于向第二网络节点发送第一消息,第一消息中包括关于延迟程度的信息,以及指示第二网络节点调整带宽资源的指示信息。
第四方面,本申请提供一种带宽调整装置,该装置可以包括处理模块和通信模块,这些模块可以执行上述第二方面任一种设计示例中第二网络节点所执行的相应功能。
示例性的,通信模块,可以用于通过多个传输路径向第一网络节点发送多个数据流,其中,多个数据流中的每个数据流的数据在第一数据中的排序不同,多个数据流包括第一数据流和第二数据流,第一数据流中的数据在第一数据中的排序先于第二数据流中的数据,第一数据流中的数据与第二数据流中的数据在第一数据中相邻,每个数据流包括至少一个数据包;以及,接收来自第一网络节点的第一消息,第一消息中包括关于延迟程度的信息,以及指示第二网络节点调整带宽资源的指示信息,其中,延迟程度是根据第一数据流的最后一个数据包的接收时间以及第二数据流的第一个数据包的接收时间确定的,第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间。
处理模块,可以用于根据延迟程度调整带宽资源。
第五方面,本申请提供一种通信装置,该通信装置可以是第一网络节点,也可以是第一网络节点中的装置。该通信装置可以包括处理器,用于实现上述第一方面中第一网络节点所执行的方法。该通信装置还可以包括存储器,用于存储程序指令和数据。该存储器与该处理器耦合,该处理器可以调用并执行该存储器中存储的程序指令,用于实现上述第一方面中第一网络节点所执行的任意一种方法。
可选的,该通信装置还可以包括收发器,该收发器用于该通信装置与其它设备进行通信。
第六方面,本申请提供一种通信装置,该通信装置可以是第二网络节点,也可以是第二网络节点中的装置。该通信装置可以包括处理器,用于实现上述第二方面中第二网络节点所执行的方法。该通信装置还可以包括存储器,用于存储程序指令和数据。该存储器与该处理器耦合,该处理器可以调用并执行该存储器中存储的程序指令,用于实现上述第二方面中第二网络节点所执行的任意一种方法。
可选的,该通信装置还可以包括收发器,该收发器用于该通信装置与其它设备进行通信。
第七方面,本申请提供一种计算机可读存储介质,该存储介质中存储有计算机程序或指令,当计算机程序或指令被执行时,可实现上述第一方面任一种设计示例中第一网络节点所执行的方法。
第八方面,本申请提供一种计算机可读存储介质,该存储介质中存储有计算机程序或指令,当计算机程序或指令被执行时,可实现上述第二方面任一种设计示例中第二网络节 点所执行的方法。
第九方面,本申请提供一种计算机程序产品,包括指令,当指令在计算机上运行时,使得计算机执行上述第一方面任一种设计示例中第一网络节点所执行的方法。
第十方面,本申请提供一种计算机程序产品,包括指令,当指令在计算机上运行时,使得计算机执行上述第二方面任一种设计示例中第二网络节点所执行的方法。
第十一方面,本申请还提供一种芯片系统,该芯片系统包括处理器,还可以包括存储器,用于实现上述第一方面任一种设计示例中第一网络节点执行的方法。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
第十二方面,本申请还提供一种芯片系统,该芯片系统包括处理器,还可以包括存储器,用于实现上述第二方面任一种设计示例中第二网络节点执行的方法。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
第十三方面,本申请还提供一种通信系统,该通信系统包括上述第五方面任一种设计示例中的通信装置以及上述第六方面任一种设计示例中的通信装置。
上述第二方面至第十三方面及其实现方式的有益效果可以参考对第一方面及其实现方式的有益效果的描述。
附图说明
图1为本申请实施例适用的一种通信系统的示意图;
图2为本申请实施例提供的一种带宽调整方法的流程示意图;
图3为本申请实施例提供的一种确定延迟时长的流程示意图;
图4为本申请实施例提供的数据流1先于数据流2到达网络节点1的一种示意图;
图5为本申请实施例提供的数据流2先于数据流1到达网络节点1的一种示意图;
图6为本申请实施例提供的多个数据流的一种示意图;
图7为本申请实施例提供的重排序多个数据流的一种示意图;
图8为本申请实施例提供的一种带宽调整装置的示意图;
图9为本申请实施例提供的一种通信装置的示意图。
具体实施方式
本申请提供一种带宽调整方法、装置以及系统,用于减少接收端设备对多个数据流进行重排序的运算压力和缓存压力,可以提高网络性能。其中,方法和装置是基于同一技术构思的,由于方法及装置解决问题的原理相似,因此方法与装置的实施可以相互参见,重复之处不再赘述。
需要说明的是,本申请实施例中的“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
需要说明的是,本申请中所涉及的多个,是指两个或两个以上。至少一个,是指一个或多个。至少两个,是指两个或两个以上。
另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
下面将结合附图,对本申请实施例进行详细描述。
请参见图1,图1所示为本申请实施例适用的一种通信系统的示意图。如图1所示,该通信系统100可以包括网络节点和中间节点。例如,通信系统100可以包括多个网络节点,图1以网络节点1和网络节点2为例。通信系统100可以包括多个中间节点,图1以中间节点1、中间节点2、中间节点3以及中间节点4为例。其中,网络节点1与网络节点2之间可以通过一个或多个中间节点进行通信。例如,网络节点1与网络节点2之间可以存在多个传输路径,其中,每个传输路径可以包括一个或多个中间节点,图1以传输路径1、传输路径2和传输路径3为例。
以网络节点1向网络节点2发送数据为例,网络节点1可以通过传输路径1将数据发送给网络节点2,即网络节点1可以先将数据发送给中间节点1,再由中间节点1转发给网络节点2;或者,网络节点1可以通过传输路径2将数据发送给网络节点2,即网络节点1可以先将数据发送给中间节点2,中间节点2将接收的数据转发给中间节点3,再由中间节点3转发给网络节点2;或者,网络节点1可以通过传输路径3将数据发送给网络节点2,即网络节点1可以先将数据发送给中间节点4,再由中间节点4转发给网络节点2。
其中,网络节点,可以为具有数据和/或消息收发功能的设备。例如,该网络节点可以为服务器、网络设备或终端设备等。中间节点,可以为具有数据和/或消息转发功能的设备。例如,该中间节点可以为路由器,交换机或中继终端设备等。可以理解的是,本申请实施例对网络节点或中间节点的具体形式并不限定。
例如,服务器可以为具备数据处理功能的设备,例如服务器可以为数据中心网络的服务器,或者为数据中心网络的服务器中的部件,如处理器、芯片或芯片系统等。
例如,网络设备可以是接入网设备,例如无线接入网(radio access network,RAN)设备,是一种为终端设备提供无线通信功能的设备。接入网设备例如包括但不限于:第五代(5 th generation,5G)中的下一代基站(generation nodeB,gNB)、演进型节点B(evolved node B,eNB)、远端射频单元(remote radio unit,RRU)、基带单元(baseband unit,BBU)、收发点(transmitting and receiving point,TRP)、发射点(transmitting point,TP)、未来移动通信系统中的基站或WiFi系统中的接入点等。接入网设备还可以是云无线接入网络(cloud radio access network,CRAN)场景下的无线控制器、集中单元(central unit,CU),和/或分布单元(distributed unit,DU),或者网络设备可以为中继站、车载设备以及未来演进的网络中的网络设备等。
例如,终端设备可以简称为终端,例如用户设备,是一种具有无线收发功能的设备。终端设备可以部署在陆地上(如车载、车辆、高铁或动车等);也可以部署在水面上(如轮船等);还可以部署在空中(例如飞机、无人机、气球和卫星上等)。所述终端设备可以是手机、平板电脑、带无线收发功能的电脑、虚拟现实终端设备、增强现实终端设备、工业控制中的无线终端设备、无人驾驶中的无线终端设备、远程医疗中的无线终端设备、智能电网中的无线终端设备、运输安全中的无线终端设备、智慧城市中的无线终端设备、智慧家庭中的无线终端设备。
需要说明的是,图1所示的通信系统作为一个示例,并不对本申请实施例提供的方法适用的通信系统构成限定。总之,本申请实施例提供的方法,适用于各种发送端设备与接收端之间存在多个传输路径的通信系统中,例如数据中心网络系统。本申请实施例还可以应用于各种类型和制式的通信系统,例如:第五代(the 5th generation,5G)通信系统、长 期演进(long term evolution,LTE)通信系统、车到万物(vehicle to everything,V2X)、长期演进-车联网(LTE-vehicle,LTE-V)、车到车(vehicle to vehicle,V2V)、车联网、机器类通信(machine type communications,MTC)、物联网(internet of things,IoT)、长期演进-机器到机器(LTE-machine to machine,LTE-M)、机器到机器(machine to machine,M2M)、企业LTE离散窄带聚合(enterprise LTE discrete spectrum aggregation,eLTE-DSA)系统等,本申请实施例不予限定。为了便于表述,本申请实施例中以数据中心网络系统为例进行说明。
下面对本申请实施例涉及的一些技术特征进行介绍。
在云计算、大数据等技术的带动下,分布式应用成为数据中心网络的主要工作方式。该分布式应用是指应用程序分布在不同的计算机上,通过网络来共同完成一项任务的工作方式。这样,数据中心网络内部的流量呈现“东西向为主”、“象鼠流混合”的特点。其中,大象流可以指字节量比较大、有高吞吐需求的数据流,如传输控制协议(transmission control protocol,TCP)流、或聚合以太网上的远程直接内存访问(remote direct memory access over converged ethernet,RoCE)流等。老鼠流可以指字节量比较小、有低时延需求的数据流。因此,实现数据中心网络负载均衡对提升网络带宽利用率、满足大小流需求具有重大意义。
数据中心网络克洛斯(clos)拓扑结构对任意两个网络节点之间提供了多个并行的相等跳数的传输路径,等价多路径(equal-cost multipath,ECMP)技术被广泛用于数据中心网络的负载均衡。在复杂的流量场景下,ECMP技术并未达到预期的效果。这是因为ECMP技术的处理粒度为流级别,每次负载均衡决策的结果会执行到一个数据流中所有的数据包,而大象流字节量大,意味着大象流在网络内传输时可能会较长时间占用一条传输路径。这样就可能存在以下两种情况:情况1,多个大象流被均衡到同一个传输路径上,但该传输路径的带宽有限无法同时满足多个大象流的吞吐需求,导致网络拥塞;情况2,老鼠流和大象流被均衡到同一个传输路径上,老鼠流的数据包会在大象流的数据包后排队较长时间,从而无法满足老鼠流的低时延需求,导致老鼠流阻塞。
目前,为了解决因大象流的突发导致网络拥塞从而网络整体性能下降的问题,发送端设备可以将大象流按照固定粒度(比如16KB,32KB或64KB等)均分为多个数据流,并通过与接收端设备之间的多个传输路径,向接收端设备发送该多个数据流。这样可以避免大象流在单个传输路径传输使得该传输路径负载大,其它传输路径处于空闲状态或负载较小的问题,从而达到最佳的负载均衡效果。由于每个传输路径的网络环境不同,如数据流在每个传输路径上所需的传输时长不同,这就导致该多个数据流是乱序到达接收端设备的。例如,发送端设备按照数据流1、数据流2以及数据流3的发送顺序将该3个数据流发送给接收端设备,该3个数据流到达接收端设备的顺序是数据流1、数据流3以及数据流2。接收端设备可以对缓存区中乱序到达的多个数据流进行重排序来解决这一问题。
当发送顺序靠后的数据流先于发送顺序靠前的数据流到达接收端设备时,该发送顺序靠后的数据流需要在缓存区内等待该发送顺序靠前的数据流。例如,发送端设备按照数据流1、数据流2以及数据流3的发送顺序将该3个数据流发送给接收端设备,数据流3先于数据流2到达接收端设备,即数据流2延迟到达接收端设备,数据流3需要在缓存区内等待数据流2的到达。这样,当延迟到达接收端设备的数据流的数量较多和/或延迟到达接收端设备的数据流的延迟时长较长时,会增大接收端设备对该多个数据流进行重排序的运算压力和缓存压力,不利于整网性能的提升。
鉴于此,本申请实施例提供一种带宽调整方法、装置以及系统,可以减少接收端设备对多个数据流进行重排序的运算压力和缓存压力,提高网络性能。
图2为本申请实施例提供的一种带宽调整方法的流程示意图,该方法可以应用于图1所示的通信系统100。其中,第一网络节点可以为图1所示的网络节点1,第二网络节点可以为图1所示的网络节点2。可以理解的是,由第一网络节点执行的步骤也可以具体由第一网络节点的一个模块或部件执行,如可以由该第一网络节点中的芯片或芯片系统执行;由第二网络节点执行的步骤也可以具体由第二网络节点的一个模块或部件执行,如可以由该第二网络节点中的芯片或芯片系统执行。下面请参阅图2所示的带宽调整方法的流程示意图,对本实施例提供的方法进行详细说明。
S201:网络节点2将第一数据划分为多个数据流的数据。
其中,第一数据可以为网络节点2待发送给网络节点1的数据,例如,网络节点2接收到网络节点1请求第一数据的消息后,向网络节点1发送第一数据,或者,网络节点2也可以主动向网络节点1推送第一数据。可以理解的是,网络节点2可以向网络节点1发送数据,也可以向网络节点1发送消息,本申请实施例对网络节点2向网络节点1发送的内容的具体形式并不限定。在下文中以第一数据为例介绍本申请实施例。
示例性的,多个数据流中可以包括第一数据流和第二数据流,第一数据流的数据与第二数据流的数据在第一数据中的排序相邻,第一数据流的数据在第一数据中的排序先于第二数据流的数据。其中,第一数据流的数据在第一数据中的排序,可以理解为第一数据流在该多个数据流中的发送顺序。同理,第二数据流的数据在第一数据中的排序,可以理解为第二数据流在该多个数据流中的发送顺序。例如,多个数据流包括数据流1和数据流2,数据流1的数据在第一数据中的排序先于数据流2的数据,可以理解为在发送第一数据时,网络节点2会在发送数据流2之前发送数据流1。
为了便于理解本申请实施例,在下文中以数据流的编号来表示数据流的数据在第一数据中的排序。其中,数据流的编号越小,该数据流的数据在第一数据中的排序越靠前,该数据流在多个数据流中的发送顺序越靠前(即网络节点2越优先发送该数据流)。例如,多个数据流的编号可以记为#0~#i,i为大于0的正整数。其中,数据流#0的数据为第一数据中排序最靠前的数据,且该数据流#0是多个数据流中第一个被发送的数据流,数据流#0的数据与数据流#1的数据在第一数据中相邻;数据流#i的数据为第一数据中排序最靠后的数据,且该数据流#i是多个数据流中最后一个被发送的数据流,数据流#i的数据与数据流#(i-1)的数据在第一数据中相邻。应理解的是,本申请实施例对每个数据流的数据在第一数据中的排序的具体形式并不限定。
那么,第一数据流的数据与第二数据流的数据在第一数据中相邻,也即是第一数据流的编号与第二数据流的编号相邻;第一数据流的数据在第一数据中的排序先于第二数据流的数据,也即是第一数据流的编号小于第二数据流的编号。例如,第一数据流为数据流#0,第二数据流为数据流#1。
示例性的,多个数据流的每个数据流中可以包括一个或多个数据包。该一个或多个数据包中的每个数据包中可以包括数据流的编号。这样,网络节点1可以确定每个数据流的数据在第一数据中排序,从而可以基于每个数据流的数据在第一数据中的排序,对接收到的多个数据流进行重排序,以得到第一数据。
作为一个示例,网络节点2可以按照固定粒度将第一数据划分为多个数据流的数据。 其中,多个数据流中每个数据流的数据在第一数据中的排序不同,该多个数据流可以用于确定第一数据。该固定粒度可以是协议定义的,或者是网络节点2与网络节点1约定的,或者是系统定义的等,本申请实施例对此不作限定。例如,该固定粒度可以是16千字节(kilobyte,KB)、也可以是32KB、还可以是64KB等。例如,第一数据的大小为48KB,固定粒度为16KB,网络节点1与网络节点2之间存在三个传输路径,网络节点2按照固定粒度对第一数据进行划分后,可以得到3个数据流,并以每个传输路径传输一个16KB的数据的方式向网络节点1发送第一数据,即每个传输路径传输一个数据流。再例如,第一数据的大小为64KB,固定粒度为16KB,网络节点1与网络节点2之间存在三个传输路径,网络节点2按照固定粒度对第一数据进行划分后,可以得到4个数据流,并以一个传输路径传输两个16KB的数据、剩余两个传输路径中的每一个传输一个16KB的数据的方式向网络节点1发送第一数据,即三个传输路径中的一个传输路径传输两个数据流,剩余两个传输路径的中每一个传输一个数据流。
通过上述方式,网络节点2可以按照固定粒度将第一数据均分为多个数据流的数据,然后将该多个数据流通过多个传输路径发送给网络节点1,而不是通过一个传输路径将第一数据发送网络节点1,这样,可以避免单个传输路径负载过重,其它传输路径处于空闲状态或负载较小的问题,从而达到负载均衡的效果。
作为另一个示例,网络节点2可以根据自身与网络节点1之间存在的传输路径的数量,将第一数据划分为多个数据流的数据。其中,多个数据流中每个数据流的数据在第一数据中的排序不同,该多个数据流可以用于确定第一数据。例如,网络节点2可以根据网络拓扑信息确定与网络节点1之间的传输路径的数量。其中,网络拓扑信息可以是基站配置、或系统配置的等。例如,第一数据的大小为48KB,网络节点1与网络节点2之间存在三3个传输路径,网络节点2根据传输路径的数量将第一数据进行划分后,可以得到3个数据流,每个传输路径可以传输16KB的数据。再例如,第一数据的大小为64KB,网络节点1与网络节点2之间存在三个传输路径,网络节点2根据传输路径的数量将第一数据进行划分后,可以得到3个数据流,每个传输路径可以传输23KB的数据。
通过上述方式,网络节点2可以按照传输路径的数量将第一数据对应的数据量均分到每个传输路径上进行传输,使得多个传输路径可以传输相同大小的数据,且可以避免单个传输路径负载过重,其它传输路径处于空闲状态或负载较小的问题,从而达到负载均衡的效果。
S202:网络节点2通过多个传输路径向网络节点1发送多个数据流;相应地,网络节点1通过该多个传输路径接收该多个数据流。
示例性的,网络节点2与网络节点1之间存在两个或两个以上的传输路径。网络节点2可以通过与网络节点1之间的部分或全部的传输路径,向网络节点1发送多个数据流。其中,多个传输路径中的每个传输路径可以用于承载一个或多个数据流的数据,且一个数据流的数据在一个传输路径中传输。例如,网络节点1与网络节点2之间存在三个传输路径,多个数据流的数量为3个,该三个传输路径中的每个传输路径可以用于承载一个数据流。再例如,网络节点1与网络节点2之间存在三传输路径,多个数据流的数量为4个,该三个传输路径中的一个传输路径可以用于承载两个数据流,剩余两个传输路径中的每个传输路径可以用于承载一个数据流。
S203:网络节点1确定第一数据的延迟程度。
示例性的,网络节点1可以统计第一数据流的最后一个数据包的接收时间和第二数据流的第一个数据包的接收时间,若第二数据流的第一个数据包的接收时间早于第一数据流的第一个数据包的接收时间,则网络节点1可以确定第一数据的延迟程度。其中,第一数据流的编号小于第二数据流的编号,网络节点2在发送第二数据流之前发送第一数据流,但第二数据流先于第一数据流达到网络节点1,说明用于传输第一数据流的传输路径的网络环境较差,使得第一数据流延迟到达(或乱序到达)网络节点1。进一步,网络节点1可以确定第一数据的延迟程度,该延迟程度越大,反应了网络节点1与网络节点2之间的网络环境越差。
需要说明的是,第一数据流的最后一个数据包的接收时间,可以指接收第一数据流的最后一个数据包时提取到的时间戳、或者可以指接收第一数据流的最后一个数据包时对应的时刻等。类似的,第二数据流的第一个数据包的接收时间,可以指接收第二数据流的第一个数据包时提取到的时间戳、或者可以指接收第二数据流的第一个数据包时对应的时刻等。
为了便于表述,在下文可以将数据流的第一个数据包的接收时间称为第一时间,将数据流的最后一个数据包的接收时间称为第二时间。其中,每个数据流中可以包括一个或多个数据包。每个数据包中可以包括数据流的编号,以确定该数据包为哪个数据流,以及该数据流的数据在第一数据中的排序。可以理解的是,当一个数据流中仅包括一个数据包时,该数据流的第一时间与第二时间相同。在下文中以每个数据流中包括两个或两个以上的数据包为例进行描述。
示例性的,网络节点1可以使用一个或多个比特指示第一数据的延迟程度,这样可以减少网络节点1向网络节点2发送信息的资源开销,提高资源利用率。例如,网络节点1可以使用2比特指示第一数据的延迟程度,如表1所示,00可以表示延迟程度0,01可以表示延迟程度1,10可以表示延迟程度2,11可以表示延迟程度3。可以理解的是,表1中的各项数据仅为示例,本申请实施例对延迟程度的具体形式并不限定。
表1:延迟程度的指示方式
延迟程度 比特
延迟程度0 00
延迟程度1 01
延迟程度2 10
延迟程度3 11
示例性的,网络节点1可以通过如下三种方式确定第一数据的延迟程度。
方式一,网络节点1可以根据延迟时长确定第一数据的延迟程度。例如,网络节点1可以确定延迟时长,该延迟时长可以为第二数据流的第一个数据包的接收时间与第一数据流的最后一个数据包的接收时间的差值,然后根据延迟时长确定第一数据的延迟程度。其中,该延迟时长越大,延迟程度越大。可以理解的是,该延迟时长的数量可以为一个或多个。例如,延迟时长的数量为一个,延迟程度可以根据该一个延迟时长确定。再例如,延迟时长的数量为多个,延迟程度可以根据该多个延迟时长确定。
下面结合图3对网络节点1确定延迟时长的具体实现过程进行介绍。
图3为本申请实施例提供的一种确定延迟时长的方法流程示意图。如图3所示,该方 法流程可以包括如下内容。
S31:网络节点1确定每个数据流的第一时间和第二时间。
示例性的,网络节点1可以确定多个数据流中每个数据流的第一时间和第二时间。其中,第一时间可以为每个数据流中的第一个数据包的接收时间。第二时间可以为每个数据流中的最后一个数据包的接收时间。例如,网络节点1可以记录每个数据流中的第一个数据包和最后一个数据包的接收时间,并存储。例如,网络节点1可以根据数据包中包括的数据流的编号确定该数据包属于哪个数据流。以及,网络节点1可以根据数据包中包括的数据包的编号(或序号、或标识等)确定该数据包是否为数据流中的第一个数据包或最后一个数据包。其中,数据包的编号(或序号、或标识等)可以用于指示该数据包在数据流中的排序。
S32:网络节点1对接收到的多个数据流进行重排序。
示例性的,多个数据流可用于确定第一数据。网络节点1可以根据每个数据流的数据在第一数据中的排序,对重排序缓存区(可简称为缓存区)中的多个数据流进行重排序,得到第一数据。也即是,网络节点1可以根据每个数据流的编号对重排序缓存区中的多个数据流进行重排序,得到第一数据。由于网络节点1与网络节点2之间的多个传输路径中每个传输路径的网络环境不同,如中间节点的数量、或网络堵塞等,这就导致数据流在每个传输路径所需的传输时长不同,所以,多个数据流中可能存在延迟到达网络节点1的数据流。为了得到第一数据,网络节点1会在重排序缓存区对乱序接收到的多个数据流进行重排序,得到正确顺序的多个数据流,即得到第一数据。例如,数据流#1、数据流#0、数据流#3以及数据流#2先后到达网络节点1,网络节点1对这4个数据流进行重排序,得到正确顺序的4个数据流,即数据流#0、数据流#1、数据流#2以及数据流#3。
可选的,网络节点1可以在对多个数据流进行重排序后,按照多个数据流的正确顺序逐一确定延迟到达网络节点1的数据流;或者,网络节点1也可以在对多个数据流进行重排序之前或同时,根据每个数据流的编号确定延迟到达网络节点1的数据流。
S33:网络节点1确定多个数据流中延迟到达网络节点1的一个或多个数据流。
示例性的,网络节点1可以根据正确顺序的多个数据流,逐一确定该多个数据流中的相邻两个编号中编号较小的数据流是否延迟到达网络节点1。或者,网络节点1可以根据多个数据流的编号,逐一确定相邻两个编号中编号较小的数据流是否延迟到达网络节点1。
示例性的,网络节点1可以根据相邻两个编号中编号较小的数据流的第二时间和编号较大的数据流的第一时间,确定编号较小的数据流是否延迟到达网络节点1。例如,若相邻两个编号中编号较小的数据流的第二时间大于编号较大的数据流的第一时间,则网络节点1可以确定该编号较小的数据流延迟到达网络节点1;若相邻两个编号中编号较小的数据流的第二时间小于或等于编号较大的数据流的第一时间,则网络节点1可以确定该编号较小的数据流没有延迟到达网络节点1。相邻两个编号中编号较小的数据流的第二时间大于编号较大的数据流的第一时间,也即是相邻两个编号中编号较小的数据流的最后一个数据包的接收时间晚于编号较大的数据流的第一个数据包的接收时间,意味着相邻两个编号中编号较小的数据流的部分或全部数据还未到达网络节点1,编号较大的数据流的部分或全部数据的数据就已经到达网络节点1,但网络节点2是在发送编号较大的数据流之前向网络节点1发送编号较小的数据流的,说明编号较小的数据流的部分或全部数据延迟到达网络节点1,也即是编号较小的数据流延迟到达网络节点1。反之,相邻两个编号中编号 较小的数据流的最后一个数据包的接收时间不晚于编号较大的数据流的第一个数据包的接收时间,意味着相邻两个编号中编号较小的数据流先于编号较大的数据流到达网络节点1,并且网络节点2是在发送编号较大的数据流之前向网络节点1发送编号较小的数据流的,说明编号较小的数据流正确顺序到达网络节点1。
以数据流#1和数据流#2为例,网络节点1可以根据数据流#1的第二时间和数据流#2的第一时间,确定数据流#1是否为延迟到达的数据流。其中,数据流#1的编号小于数据流#2的编号,也即是说,在正确顺序接收数据流#1和数据流#2的情况下,网络节点1先接收数据流#1再接收数据流#2,即,数据流#1的最后一个数据包的接收时间应该早于数据流#2的第一个数据包的接收时间。即,如果数据流#1的第二时间大于或等于数据流#2的第一时间,则网络节点1可以确定数据流#1正确顺序到达网络节点1,如图4所示;如果数据流#1的第二时间小于数据流#2的第一时间,则网络节点1可以确定数据流#1延迟到达网络节点1,如图5所示。
S34:网络节点1确定每个延迟到达网络节点1的数据流的延迟时长。
示例性的,网络节点1可以确定每个延迟到达网络节点1的数据流的延迟时长(又可称为乱序时长)。例如,网络节点1可以确定延迟时长为相邻两个编号中编号较小的数据流的第二时间与编号较大的数据流的第一时间的差值。其中,相邻两个编号中编号较小的数据流的第二时间大于编号较大的数据流的第一时间。例如,数据流#1的第二时间大于数据流#2的第一时间,则网络节点1可以对数据流#1的第二时间和数据流#2的第一时间进行差值运算,得到数据流#1的延迟时长。
举例而言,网络节点2可以将第一数据划分为5个数据流的数据,该5个数据流分别记为数据流#0、数据流#1、数据流#2、数据流#3以及数据流#4。网络节点2按照数据流#0、数据流#1、数据流#2、数据流#3以及数据流#4的顺序通过多个传输路径向网络节点1发送第一数据。当网络节点1开始接收第一个数据流时,网络节点1开始计时,以记录该5个数据流中的每个数据流的第一个数据包的接收时间(即第一时间)和最后一个数据包的接收时间(即第二时间),其中,第一个数据流是指该5个数据流中第一个到达网络节点1的数据流。例如,数据流#0的第一时间为1毫秒(可以理解为,网络节点1在开始计时后的第1ms接收到数据流#0的第一个数据包)、第二时间为4毫秒(可以理解为,网络节点1在开始计时后的第4ms接收到数据流#0的最后一个数据包),数据流#3的第一时间为10毫秒(ms)、第二时间为13ms,数据流#1的第一时间为14ms、第二时间为20ms,数据流#2的第一时间为22ms、第二时间为29ms,数据流#4的第一时间为30ms、第二时间为34ms,如图6所示。
数据流#0的第二时间(即4ms)小于数据流#1的第一时间(即14ms),则网络节点1可以确定数据流#0是正确顺序到达网络节点1的;数据流#1的第二时间(即20ms)小于数据流#2的第一时间(即22ms),则网络节点1可以确定数据流#1也是正确顺序到达网络节点1的;数据流#2的第二时间(即29ms)大于数据流#3的第一时间(即10ms),则网络节点1可以确定数据流#2是延迟到达网络节点1的,进一步,网络节点1可以根据数据流#2的第二时间和数据流#3的第一时间,确定数据流#3的延迟时长为19ms;数据流#3的第二时间(即13ms)小于数据流#4的第一时间(即30ms),则网络节点1可以确定数据流#4也是正确顺序到达网络节点1的,如图7所示。
至此,网络节点1确定延迟时长的流程结束。
由于延迟时长为数据流延迟到达网络节点1的时长,可以反映传输该数据流的传输路径的网络环境较差,如传输时延高、或网络堵塞等问题。以图7为例,数据流#2延迟到达网络节点1,数据流#3的第一时间早于数据流#2的第二时间,说明传输数据流#2的传输路径的网络环境差,且数据流#2的延迟时长越大,传输数据流#2的传输路径的网络环境越差。
作为一个示例,延迟时长的数量可以为一个或多个,网络节点1可以统计预设时间长度内的一个或多个延迟时长的总时长,以及根据该总时长确定延迟程度。其中,预设时间长度内的一个或多个延迟时长的总时长越大,延迟程度越大,网络环境越差。例如,网络节点1可以根据该总时长、以及总时长与延迟程度的对应关系,确定延迟程度。需要说明的是,总时长与延迟程度的对应关系可以是预先定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例对此不作限定。
如表2所示,若总时长大于T0、且小于或等于T1,则网络节点1可以确定第一数据的延迟程度为延迟程度0;若总时长大于T1、且小于或等于T2,则网络节点1可以确定第一数据的延迟程度为延迟程度1;若总时长大于T2、且小于或等于T3,则网络节点1可以确定第一数据的延迟程度为延迟程度2;若总时长大于T3、且小于T4,则网络节点1可以确定第一数据的延迟程度为延迟程度3。可以理解的是,表2中的各项数据仅为示例,本申请并不限定于此。
表2:总时长与延迟程度的对应关系
延迟程度 比特 总时长
延迟程度0 00 (T0,T1]
延迟程度1 01 (T1,T2]
延迟程度2 10 (T2,T3]
延迟程度3 11 (T3,T4)
作为另一个示例,延迟时长的数量为多个,网络节点1可以对多个延迟时长进行均值运算,如平均值运算,得到平均延迟时长,以及根据该平均延迟时长确定延迟程度。例如,网络节点1可以统计预设时间长度内的多个延迟时长,去掉多个延迟时长中的最大值和最小值,对剩余的延迟时长进行均值运算。其中,平均延迟时长越大,延迟程度越大,网络环境越差。例如,网络节点1可以根据该平均延迟时长、以及平均延迟时长与延迟程度的对应关系,确定延迟程度。其中,平均延迟时长与延迟程度的对应关系可以参考表2,在此不再赘述。需要说明的是,平均延迟时长与延迟程度的对应关系可以是预先定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例对此不作限定。
在上述方式一中,延迟时长可以反应网络环境,如传输时延高、或网络堵塞等,由延迟时长确定的延迟程度也可以反映网络环境,如延迟时长越大,延迟程度越大,网络环境越差。这样,网络节点1可以将延迟程度发送给网络节点2,以使网络节点2根据由延迟时长确定的延迟程度对带宽资源进行调整,以减少延迟到达网络节点1的数据流的数量,和/或减少延迟到达网络节点1的数据流的延迟时长等,从而可以减少网络节点1重排序多个数据流时所需的缓存资源,提高网络性能。
方式二,网络节点1可以根据重排序该多个数据流时占用的缓存资源确定第一数据的延迟程度。例如,网络节点1可以获取重排序该多个数据流时占用的缓存资源,如统计该 多个数据流中第一个到达网络节点1至网络节点1完成对多个数据流的重排序期间所占用的缓存资源,以及根据该重排序该多个数据流时占用的缓存资源确定延迟程度。为了便于描述,在下文中将重排序多个数据流时占用的缓存资源简称为第一资源。
当延迟到达网络节点1的数据流的数量较多,或者延迟到达网络节点1的数据流的延迟时长较大,或延迟到达网络节点1的数据流的数量较多且延迟时长较大时,网络节点1在对多个数据流进行重排序时对缓存资源的需求会大幅度增加,但网络节点1的缓存资源有限,这就会增大网络节点1的缓存压力,影响网络节点1的性能。所以,网络节点1对第一资源的需求可以反映网络环境,由第一资源确定的延迟程度也可以反映网络环境。例如,第一资源越多,网络环境越差,延迟程度越大。
示例性的,网络节点1可以根据第一资源、以及第一资源与延迟程度的对应关系,确定延迟程度。需要说明的是,第一资源与延迟程度的对应关系可以是预先定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例对此不作限定。
如表3所示,若第一资源大于资源0、且小于或等于资源1,则网络节点1可以确定第一数据的延迟程度为延迟程度0;若第一资源大于资源1、且小于或等于资源2,则网络节点1可以确定第一数据的延迟程度为延迟程度1;若第一资源大于资源2、且小于或等于资源3,则网络节点1可以确定第一数据的延迟程度为延迟程度2;若第一资源大于资源3、且小于资源4,则网络节点1可以确定第一数据的延迟程度为延迟程度3。可以理解的是,表3中的各项数据仅为示例,本申请并不限定于此。
表3:延迟程度与第一资源的对应关系
延迟程度 比特 第一资源
延迟程度0 00 (资源0,资源1]
延迟程度1 01 (资源1,资源2]
延迟程度2 10 (资源2,资源3]
延迟程度3 11 (资源3,资源4)
在上述方式二中,第一资源为网络节点1对多个数据流重排序时需要的缓存资源,当延迟到达网络节点1的数据流的数量较多和/或延迟到达网络节点1的数据流的延迟时长越大时,网络节点1对该第一资源的需求也越多,也即是该第一资源可以反映网络环境,那么由第一资源确定的延迟程度也可以反应网络环境,如第一资源越多,延迟程度越大,网络环境越差。这样,网络节点1可以将延迟程度发送给网络节点2,以使网络节点2根据由第一资源确定的延迟程度对带宽资源进行调整,以减少网络节点1的缓存压力,提高网络性能。
方式三,网络节点1可以根据延迟时长和第一资源确定第一数据的延迟程度。其中,延迟时长越大,第一资源越多,网络环境越差,延迟程度越大。例如,网络节点1可以统计预设时长内的一个或多个延迟时长的总时长以及第一资源,根据该总时长和第一资源确定延迟程度,如根据总时长、第一资源与延迟程度的对应关系确定延迟程度。再例如,网络节点1可以统计预设时长内的平均延迟时长以及第一资源,根据该平均延迟时长以及第一资源确定延迟程度,如根据平均延迟时长、第一资源与延迟程度的对应关系确定延迟程度。其中,方式三的具体实现过程可以参考前述对方式一和方式二的描述,在此不再赘述。需要说明的是,总时长(或平均延迟时长)、第一资源与延迟程度的对应关系可以是预先 定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例对此不作限定。
以延迟程度由平均延迟时长和第一资源确定的为例,若第一资源大于资源0、小于或等于资源1,且平均延迟时长大于T0、小于或等于T1,则网络节点1可以确定第一资源的延迟程度为延迟程度0;若第一资源大于资源1、小于或等于资源2,且平均延迟时长大于T1、小于或等于T2,则网络节点1可以确定第一资源的延迟程度为延迟程度1;若第一资源大于资源2、小于或等于资源3,且平均延迟时长大于T2、小于或等于T3,则网络节点1可以确定第一资源的延迟程度为延迟程度2;若第一资源大于资源3、小于资源4,且平均延迟时长大于T3、小于T4,则网络节点1可以确定第一资源的延迟程度为延迟程度3。可以理解的是,表4中的各项数据仅为示例,本申请并不限定于此。
表4:延迟程度与第一资源、平均延迟时长的对应关系
延迟程度 比特 平均延迟时长 第一资源
延迟程度0 00 (T0,T1] (资源0,资源1]
延迟程度1 01 (T1,T2] (资源1,资源2]
延迟程度2 10 (T2,T3] (资源2,资源3]
延迟程度3 11 (T3,T4) (资源3,资源4)
在上述方式三中,第一资源和延迟时长可以从不同的角度反映网络环境,如第一资源从网络节点1的能力范围(如缓存资源)的角度反映网络环境,延迟时长是从传输路径所在的网络环境来推测总体的网络环境,那么由第一资源和延迟时长确定的延迟程度可以准确地反应网络环境,如第一资源越多,延迟程度越大,网络环境越差。这样,网络节点1可以将延迟程度发送给网络节点2,以使网络节点2根据由第一资源和延迟时长确定的延迟程度对带宽资源进行调整,以减少延迟到达网络节点1的数据流的数量和/或延迟到达网络节点1的数据流的延迟时长,以及减少网络节点1的缓存压力,提高网络性能。
S204:网络节点1向网络节点2发送第一消息;相应地,网络节点2接收来自网络节点1的第一消息。
示例性的,第一消息中可以包括关于延迟程度的信息以及指示网络节点2调整带宽资源的指示信息。或者,第一消息中可以仅包括关于延迟程度的信息,在此情况下,该第一消息可以隐式指示网络节点2调整带宽资源。例如,第一消息可以但不限定于否定应答(negative acknowledgement,NACK)消息等。其中,关于延迟程度的信息可以包括延迟程度,如使用2比特指示延迟程度,如表1所示。
S205:网络节点2根据延迟程度调整带宽资源。
示例性的,网络节点2接收到第一消息后,可以根据延迟程度调整带宽资源。延迟程度反映了网络环境和/或网络节点1的缓存情况,网络节点2在根据延迟程度调整带宽资源时,可以对网络节点2与网络节点1之间的每个传输路径的带宽资源进行同幅度的调整(如同幅度减少或同幅度增加等),从而实现对带宽资源的调整,使得调整后的每个传输路径的带宽资源能够适应网络环境,减少延迟到达网络节点1的数据流的数量以及延迟到达网络节点1的数据流的延迟时长。其中,延迟程度越大,调整后多个传输路径中每个传输路径的带宽资源越少。例如,网络节点2可以根据延迟程度、以及延迟程度与带宽资源的对应关系,对当前的带宽资源进行调整,如减少当前的带宽资源。需要说明的是,延迟程度与带宽资源的对应关系可以是预先定义的,或者可以是网络节点1和网络节点2预先约定 的等,本申请实施例对此不作限定。
如表5所示,若第一数据的延迟程度为延迟程度0,则网络节点2可以将每个传输路径的带宽资源减少带宽资源0;若第一数据的延迟程度为延迟程度1,则网络节点2可以将每个传输路径的带宽资源减少带宽资源1;若第一数据的延迟程度为延迟程度2,则网络节点2可以将每个传输路径的带宽资源减少带宽资源2;若第一数据的延迟程度为延迟程度3,则网络节点2可以将每个传输路径的带宽资源减少带宽资源3。其中,带宽资源0小于带宽资源1,带宽资源1小于带宽资源2,带宽资源2小于带宽资源3。可以理解的是,表5中的各项数据仅为示例,本申请并不限定于此。
表5:延迟程度与带宽资源的对应关系
延迟程度 比特 带宽资源
延迟程度0 00 带宽资源0
延迟程度1 01 带宽资源1
延迟程度2 10 带宽资源2
延迟程度3 11 带宽资源3
作为一个示例,延迟程度0对应的带宽资源0可以大于0,或者小于0,或者等于0。其中,当带宽资源大于0时,网络节点2可以将每个传输路径的带宽资源减少带宽资源0;当带宽资源0等于0时,网络节点2可以保持每个传输路径的带宽资源不变,例如,当延迟时长小于第一阈值和/或第一资源小于第二阈值时,网络节点1的缓存压力和运算压力在其能力范围内,在此情况下,网络节点2可以保持每个传输路径的带宽资源不变,以保持当前的数据传输效率;当带宽资源0小于0时,网络节点2可以将每个传输路径的带宽资源增加带宽资源0的绝对值,例如,当延迟时长小于第一阈值和/或第一资源小于第二阈值时,网络节点1的缓存压力和运算压力在其能力范围内,网络节点2可以适当地增加每个传输路径的带宽资源,以在网络节点1的能力范围内适当地提高数据传输效率。
需要说明的是,第一阈值可以是预定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例不作限定。类似的,第二阈值可以是预定义的,或者可以是网络节点1和网络节点2预先约定的等,本申请实施例不作限定。
作为另一个示例,延迟程度0对应的带宽资源0大于0,当网络节点1确定延迟时长小于第一阈值和/或第一资源小于第二阈值时,网络节点1可以向网络节点2发送第一指示信息,该第一指示信息可以用于指示网络节点2保持每个传输路径的带宽资源不变,或者指示网络节点2适当地增加每个传输路径的带宽资源。
在本申请的上述实施例中,网络节点2通过多个传输路径向网络节点1发送第一数据的多个数据流,可以避免使用单个传输路径传输第一数据导致该传输路径负载大,其它传输路径处于空闲状态或负载较小的问题,达到负载均衡的效果。网络节点1在接收到多个数据流之后,可以根据相邻两个编号对应的两个数据流的第一时间和第二时间统计该多个数据流中延迟到达网络节点1的数据流的延迟时长,以及根据延迟时长确定第一数据的延迟程度,并将延迟程度发送给网络节点2。由于延迟时长的数量和/或大小可以反映网络环境,相应的由延迟时长确定的延迟程度也可以反映网络环境,所以网络节点2可以根据延迟程度对带宽资源进行调整,例如,延迟程度越大,网络环境越差,调整后的每个传输路径的带宽资源越少。这样,可以减少延迟到达网络节点1的数据流的数量、和/或延迟到达 网络节点1的数据流的延迟时长,从而可以减少网络节点1重排序多个数据流的运算压力和缓存压力,可以整体提高网络性能。
在一种可能的实现方式中,在上述步骤S203至步骤S205中,网络节点1可以确定延迟时长(如总时长或平均延迟时长),以及将延迟时长携带在第一消息中发送给网络节点2。网络节点2可以根据延迟时长确定延迟程度,以及根据确定出的延迟程度调整带宽资源。或者,网络节点2可以根据延迟时长直接调整带宽资源,如网络节点2可以根据延迟时长、以及延迟时长与带宽资源的对应关系调整带宽资源,延迟时长越大,调整后的每个传输路径的带宽资源越少。其中,延迟时长与带宽资源的对应关系可参考上述延迟程度与带宽资源的对应关系,在此不再赘述。
在另一种可能的实现方式中,在步骤S203至步骤S205中,网络节点1可以获取第一资源,以及将第一资源携带在第一消息中发送给网络节点2。网络节点2可以根据第一资源确定延迟程度,以及根据确定出的延迟程度调整带宽资源。或者,网络节点2可以直接根据第一资源调整带宽资源,如网络节点2可以根据第一资源、以及第一资源与带宽资源的对应关系调整带宽资源,第一资源越多,调整后的每个传输路径的带宽资源越少。其中,第一资源与带宽资源的对应关系可参考上述延迟程度与带宽资源的对应关系,在此不再赘述。
在另一种可能的实现方式中,在步骤S203至步骤S205中,网络节点1可以获取延迟时长和第一资源,以及将延迟时长和第一资源携带在第一消息中发送给网络节点2。网络节点2可以根据延迟时长和第一资源确定延迟程度,以及根据确定出的延迟程度调整带宽资源。或者,网络节点2可以直接根据延迟时长和第一资源调整带宽资源,如网络节点2可以根据延迟程度、第一资源与带宽资源的对应关系调整带宽资源,延迟时长越大,第一资源越多,调整后的每个传输路径的带宽资源越少。其中,延迟程度、第一资源与带宽资源的对应关系可参考上述延迟程度与带宽资源的对应关系,在此不再赘述。
上述本申请提供的实施例中,分别从网络节点1与网络节点2两者之间交互的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,网络节点1、网络节点2可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
图8示出了一种带宽调整装置800的结构示意图。其中,带宽调整装置800可以是上述图2~图7中任一个所示的实施例中的网络节点1(或网络节点2),能够实现本申请实施例提供的方法中网络节点1(或网络节点2)的功能;带宽调整装置800也可以是能够支持网络节点1(或网络节点2)实现本申请实施例提供的方法中网络节点1(或网络节点2)的功能的装置。带宽调整装置800可以是硬件结构、软件模块、或硬件结构加软件模块。带宽调整装置800可以由芯片系统实现。本申请实施例中,芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
带宽调整装置800可以包括处理模块801和通信模块802。
以带宽调整装置800为网络节点1为例,通信模块802,可以用于通过多个传输路径接收来自第二网络节点的多个数据流,其中,多个数据流中的每个数据流的数据在第一数据中的排序不同,且该多个数据流可以用于确定第一数据,多个数据流包括第一数据流和 第二数据流,第一数据流的数据在第一数据中的排序先于第二数据流的数据,且第一数据流的数据与第二数据流的数据在第一数据中相邻。
处理模块801,可以用于统计第一数据流的最后一个数据包的接收时间以及第二数据流的第一个数据包的接收时间。若第二数据流的第一个数据包的接收时间早于第一数据流的最后一个数据包的接收时间,则处理模块801用于确定第一数据的延迟程度。
通信模块802,还可以用于向第二网络节点发送第一消息,第一消息中包括关于延迟程度的信息以及指示第二网络节点调整带宽资源的指示信息。
在一种可能的实现方式中,延迟程度越大,调整后的多个传输路径中每个传输路径的带宽资源越少。
在一种可能的实现方式中,处理模块801,具体可以用于:确定延迟时长,该延迟时长为第二数据流的第一个数据包的接收时间与第一数据流的最后一个数据包的接收时间的差值,以及,根据该延迟时长确定延迟程度,其中,延迟时长越大,延迟程度越大。
在一种可能的实现方式中,处理模块801,进一步可以用于:对多个延迟时长进行均值运算,得到第一数据的平均延迟时长,以及根据平均延迟时长确定延迟程度,其中,平均延迟时长越大,延迟程度越大。
在一种可能的实现方式中,处理模块801,进一步可以用于:确定第一资源,该第一资源为重排序多个数据流时占用的缓存资源,以及根据第一资源,确定延迟程度,其中,第一资源越多,延迟程度越大。
在一种可能的实现方式中,处理模块801,进一步可以用于:根据每个数据流的数据在第一数据中的排序,对该多个数据流进行重排序,得到第一数据。
以带宽调整装置800为网络节点2为例,通信模块802,可以用于通过多个传输路径向第一网络节点发送多个数据流,其中,多个数据流包括第一数据流和第二数据流,该第一数据流的数据在第一数据中的排序先于第二数据流的数据,第一数据流的数据与第二数据流的数据在所述第一数据中相邻。以及,接收第一网络节点的第一消息,该第一消息中包括关于延迟程度的信息,以及指示网络节点2调整带宽资源的指示信息,其中,延迟程度是根据第一数据流的最后一个数据包的接收时间以及第二数据流的第一个数据包的接收时间确定的,第二数据流的第一个数据包的接收时间早于所述第一数据流的最后一个数据包的接收时间。
处理模块801,可以用于根据延迟时长调整带宽资源。例如,延迟时长越大,调整后的每个传输路径的带宽资源越少。
在一种可能的实现方式中,处理模块801,可以进一步用于将第一数据划分为多个数据流的数据。
通信模块802用于带宽调整装置800和其它模块进行通信,其可以是电路、器件、接口、总线、软件模块、收发器或者其它任意可以实现通信的装置。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
FIG. 9 shows a communication apparatus 900 provided in an embodiment of the present application. The communication apparatus 900 may be network node 1 (or network node 2) in any of the embodiments shown in FIG. 2 to FIG. 7 and can implement the functions of network node 1 (or network node 2) in the methods provided in the embodiments of the present application; alternatively, the communication apparatus 900 may be an apparatus capable of supporting network node 1 (or network node 2) in implementing those functions. The communication apparatus 900 may be a chip system. In the embodiments of the present application, a chip system may consist of a chip, or may include a chip and other discrete components.
In terms of hardware implementation, the foregoing communication module 802 may be a transceiver, and the transceiver is integrated in the communication apparatus 900 to form a communication interface 910.
The communication apparatus 900 may include at least one processor 920, configured to implement, or to support the communication apparatus 900 in implementing, the functions of network node 1 or network node 2 in the methods provided in the embodiments of the present application. For example, the processor 920 may determine the delay degree based on the delay duration; for details, refer to the detailed description in the method examples, which is not repeated here. For another example, the processor 920 may adjust the bandwidth resources based on the delay degree; for details, refer to the detailed description in the method examples, which is not repeated here.
The communication apparatus 900 may further include at least one memory 930, configured to store program instructions and/or data. The memory 930 is coupled to the processor 920. The coupling in the embodiments of the present application is an indirect coupling or communication connection between apparatuses, units, or modules, which may be electrical, mechanical, or in another form, and is used for information exchange between the apparatuses, units, or modules. The processor 920 may operate in cooperation with the memory 930 and may execute the program instructions stored in the memory 930. At least one of the at least one memory may be included in the processor.
The communication apparatus 900 may further include a communication interface 910, configured to communicate with other devices through a transmission medium, so that the communication apparatus 900 can communicate with other devices. For example, when the communication apparatus 900 is network node 1, the other device may be network node 2; when the communication apparatus 900 is network node 2, the other device may be network node 1. The processor 920 may send and receive data through the communication interface 910. The communication interface 910 may specifically be a transceiver.
The specific connection medium among the communication interface 910, the processor 920, and the memory 930 is not limited in the embodiments of the present application. In FIG. 9, the memory 930, the processor 920, and the communication interface 910 are connected through a bus 940; the bus is represented by a thick line in FIG. 9, and the connection manner between other components is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in FIG. 9, but this does not mean that there is only one bus or only one type of bus.
In the embodiments of the present application, the processor 920 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules in a processor.
In the embodiments of the present application, the memory 930 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may alternatively be a circuit or any other apparatus capable of implementing a storage function, configured to store program instructions and/or data.
An embodiment of the present application further provides a computer-readable storage medium, including instructions that, when run on a computer, cause the computer to perform the method performed by the first network node or the second network node in the foregoing embodiments.
An embodiment of the present application further provides a computer program product, including instructions that, when run on a computer, cause the computer to perform the method performed by the first network node or the second network node in the foregoing embodiments.
An embodiment of the present application provides a chip system. The chip system includes a processor and may further include a memory, configured to implement the functions of the first network node or the second network node in the foregoing methods. The chip system may consist of a chip, or may include a chip and other discrete components.
An embodiment of the present application provides a communication system. The communication system includes the first network node and the second network node in the foregoing embodiments.
The methods provided in the embodiments of the present application may be implemented entirely or partly by software, hardware, firmware, or any combination thereof. When software is used for implementation, the methods may be implemented entirely or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated entirely or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, an SSD), or the like.
It is obvious that a person skilled in the art can make various modifications and variations to the present application without departing from the scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalent technologies, the present application is also intended to cover these modifications and variations.

Claims (27)

  1. A bandwidth adjustment method, applied to a first network node, comprising:
    receiving multiple data streams from a second network node through multiple transmission paths, wherein the data of each of the multiple data streams has a different order in first data, the multiple data streams comprise a first data stream and a second data stream, the data in the first data stream precedes the data in the second data stream in the first data, the data in the first data stream is adjacent to the data in the second data stream in the first data, and each data stream comprises at least one data packet;
    recording the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, determining a delay degree of the first data;
    sending a first message to the second network node, wherein the first message comprises information about the delay degree and indication information instructing the second network node to adjust bandwidth resources.
  2. The method according to claim 1, wherein
    the larger the delay degree, the fewer the bandwidth resources of each of the multiple transmission paths after adjustment.
  3. The method according to claim 1 or 2, wherein determining the delay degree of the first data comprises:
    determining a delay duration, wherein the delay duration is the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream;
    determining the delay degree based on the delay duration, wherein the larger the delay duration, the larger the delay degree.
  4. The method according to claim 3, wherein there are multiple delay durations, and the method further comprises:
    averaging the multiple delay durations to obtain an average delay duration of the first data;
    wherein determining the delay degree based on the delay duration comprises:
    determining the delay degree based on the average delay duration, wherein the larger the average delay duration, the larger the delay degree.
  5. The method according to any one of claims 1 to 4, further comprising:
    determining a first resource, wherein the first resource is a buffer resource occupied when reordering the multiple data streams;
    determining the delay degree based on the first resource, wherein the more the first resource, the larger the delay degree.
  6. The method according to any one of claims 1 to 5, further comprising:
    reordering the multiple data streams according to the order of the data of each data stream in the first data, to obtain the first data.
  7. A bandwidth adjustment method, applied to a second network node, comprising:
    sending multiple data streams to a first network node through multiple transmission paths, wherein the data of each of the multiple data streams has a different order in first data, the multiple data streams comprise a first data stream and a second data stream, the data in the first data stream precedes the data in the second data stream in the first data, the data in the first data stream is adjacent to the data in the second data stream in the first data, and each data stream comprises at least one data packet;
    receiving a first message from the first network node, wherein the first message comprises information about a delay degree and indication information instructing the second network node to adjust bandwidth resources, the delay degree is determined based on the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream;
    adjusting the bandwidth resources according to the delay degree.
  8. The method according to claim 7, wherein
    the larger the delay degree, the fewer the bandwidth resources of each of the multiple transmission paths after adjustment.
  9. The method according to claim 7 or 8, wherein
    the delay degree is determined based on a delay duration, the delay duration being the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream, and the larger the delay duration, the larger the delay degree.
  10. The method according to claim 9, wherein
    the delay degree is determined based on an average delay duration, the average delay duration being obtained by averaging multiple delay durations, and the larger the average delay duration, the larger the delay degree.
  11. The method according to any one of claims 7 to 10, wherein
    the delay degree is determined based on a first resource, the first resource being a buffer resource occupied when reordering the multiple data streams, and the more the first resource, the larger the delay degree.
  12. The method according to any one of claims 7 to 11, further comprising:
    dividing the first data into the data of the multiple data streams.
  13. A bandwidth adjustment apparatus, comprising:
    a communication module, configured to receive multiple data streams from a second network node through multiple transmission paths, wherein the data of each of the multiple data streams has a different order in first data, the multiple data streams comprise a first data stream and a second data stream, the data in the first data stream precedes the data in the second data stream in the first data, the data in the first data stream is adjacent to the data in the second data stream in the first data, and each data stream comprises at least one data packet;
    a processing module, configured to record the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and, if the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream, to determine a delay degree of the first data;
    the communication module being further configured to send a first message to the second network node, wherein the first message comprises information about the delay degree and indication information instructing the second network node to adjust bandwidth resources.
  14. The apparatus according to claim 13, wherein
    the larger the delay degree, the fewer the bandwidth resources of each of the multiple transmission paths after adjustment.
  15. The apparatus according to claim 13 or 14, wherein the processing module is specifically configured to:
    determine a delay duration, wherein the delay duration is the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream;
    determine the delay degree based on the delay duration, wherein the larger the delay duration, the larger the delay degree.
  16. The apparatus according to claim 15, wherein there are multiple delay durations, and the processing module is further configured to:
    average the multiple delay durations to obtain an average delay duration of the first data;
    determine the delay degree based on the average delay duration, wherein the larger the average delay duration, the larger the delay degree.
  17. The apparatus according to any one of claims 13 to 16, wherein the processing module is further configured to:
    determine a first resource, wherein the first resource is a buffer resource occupied when reordering the multiple data streams;
    determine the delay degree based on the first resource, wherein the more the first resource, the larger the delay degree.
  18. The apparatus according to any one of claims 13 to 17, wherein the processing module is further configured to:
    reorder the multiple data streams according to the order of the data of each data stream in the first data, to obtain the first data.
  19. A bandwidth adjustment apparatus, comprising:
    a communication module, configured to send multiple data streams to a first network node through multiple transmission paths, wherein the data of each of the multiple data streams has a different order in first data, the multiple data streams comprise a first data stream and a second data stream, the data in the first data stream precedes the data in the second data stream in the first data, the data in the first data stream is adjacent to the data in the second data stream in the first data, and each data stream comprises at least one data packet; and to receive a first message from the first network node, wherein the first message comprises information about a delay degree and indication information instructing the second network node to adjust bandwidth resources, the delay degree is determined based on the reception time of the last data packet of the first data stream and the reception time of the first data packet of the second data stream, and the reception time of the first data packet of the second data stream is earlier than the reception time of the last data packet of the first data stream;
    a processing module, configured to adjust the bandwidth resources according to the delay degree.
  20. The apparatus according to claim 19, wherein
    the larger the delay degree, the fewer the bandwidth resources of each of the multiple transmission paths after adjustment.
  21. The apparatus according to claim 19 or 20, wherein
    the delay degree is determined based on a delay duration, the delay duration being the difference between the reception time of the first data packet of the second data stream and the reception time of the last data packet of the first data stream, and the larger the delay duration, the larger the delay degree.
  22. The apparatus according to claim 21, wherein
    the delay degree is determined based on an average delay duration, the average delay duration being obtained by averaging multiple delay durations, and the larger the average delay duration, the larger the delay degree.
  23. The apparatus according to any one of claims 19 to 22, wherein
    the delay degree is determined based on a first resource, the first resource being a buffer resource occupied when reordering the multiple data streams, and the more the first resource, the larger the delay degree.
  24. The apparatus according to any one of claims 19 to 23, wherein the processing module is further configured to:
    divide the first data into the data of the multiple data streams.
  25. A communication apparatus, comprising a processor, wherein the processor is coupled to a memory, the memory is configured to store a program or instructions, and when the program or instructions are executed by the processor, the apparatus is caused to perform the method according to any one of claims 1 to 6 or the method according to any one of claims 7 to 12.
  26. A communication system, comprising the apparatus according to any one of claims 13 to 18 and/or the apparatus according to any one of claims 19 to 24.
  27. A computer-readable storage medium, wherein the storage medium stores a computer program or instructions, and when the computer program or instructions are executed by a communication apparatus, the method according to any one of claims 1 to 12 is implemented.
PCT/CN2021/074009 2021-01-27 2021-01-27 Bandwidth adjustment method, apparatus, and system WO2022160143A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/074009 WO2022160143A1 (zh) 2021-01-27 2021-01-27 Bandwidth adjustment method, apparatus, and system
CN202180091669.4A CN116982304A (zh) 2021-01-27 2021-01-27 Bandwidth adjustment method, apparatus, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/074009 WO2022160143A1 (zh) 2021-01-27 2021-01-27 Bandwidth adjustment method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2022160143A1 WO2022160143A1 (zh)

Family

ID=82654009

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074009 WO2022160143A1 (zh) 2021-01-27 2021-01-27 一种带宽调整方法、装置以及系统

Country Status (2)

Country Link
CN (1) CN116982304A (zh)
WO (1) WO2022160143A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180063748A1 (en) * 2016-09-01 2018-03-01 Alcatel-Lucent Usa Inc. Estimating bandwidth in a heterogeneous wireless communication system
CN109644156A (zh) Network bandwidth measurement device, system, method, and program
CN111211936A (zh) Data processing method and apparatus based on network status
CN111953618A (zh) Out-of-order resolution method, apparatus, and system under a multi-stage parallel switching architecture

Also Published As

Publication number Publication date
CN116982304A (zh) 2023-10-31

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180091669.4

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21921763

Country of ref document: EP

Kind code of ref document: A1