CN114501544A - Data transmission method, device and storage medium - Google Patents


Info

Publication number
CN114501544A
CN114501544A
Authority
CN
China
Prior art keywords
service flow
flow
traffic
service
bandwidth
Prior art date
Legal status
Pending
Application number
CN202011146700.3A
Other languages
Chinese (zh)
Inventor
杜宗鹏
耿亮
刘鹏
姚惠娟
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202011146700.3A
Publication of CN114501544A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W28/065 Optimizing the usage of the radio link using assembly or disassembly of packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/16 Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/26 Resource reservation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a data transmission method, a data transmission apparatus and a storage medium. The method includes: aggregating at least one first service flow to obtain a second service flow, where each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement; and sending the second service flow.

Description

Data transmission method, device and storage medium
Technical Field
The present invention relates to the field of networks, and in particular, to a data transmission method, apparatus, communication device, and storage medium.
Background
In specific scenarios of future Beyond fifth Generation mobile communication (B5G, Beyond 5th Generation)/sixth Generation mobile communication (6G, 6th Generation), strict deterministic service capability requirements arise, e.g., industrial control, telemedicine and holographic communication, which conventional IP forwarding cannot meet. A Deterministic Internet Protocol (DIP) network is therefore an important development trend for future networks, and these requirements place new demands on packet scheduling in the DIP network.
Disclosure of Invention
In view of the above, the main object of the present invention is to provide a data transmission method, apparatus, communication device and storage medium.
To achieve this object, the technical solution of the invention is realized as follows:
the embodiment of the invention provides a data transmission method, which is applied to a first communication node and comprises the following steps:
aggregating at least one first service flow to obtain a second service flow; each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement;
and sending the second service flow.
In the above scheme, the method further comprises:
sending a bandwidth request; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
the reserved bandwidth adopts one of the following:
a first sum of the bandwidths of the respective first service flows in the at least one first service flow;
a preset multiple of the first sum.
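The two reservation options above can be sketched as follows; this is a minimal illustrative Python fragment (the function name and the example multiple are hypothetical, not from the patent):

```python
def reserved_bandwidth(flow_bandwidths, multiple=1.0):
    """Reserved bandwidth for the aggregated (second) service flow.

    multiple=1.0 reserves exactly the first sum of the component
    flows' bandwidths; multiple > 1.0 over-provisions by a preset
    factor, as the second option allows.
    """
    first_sum = sum(flow_bandwidths)
    return first_sum * multiple
```

For three component flows of 10, 20 and 30 Mbit/s, the first option reserves 60 Mbit/s, and a preset multiple of 1.2 reserves 72 Mbit/s.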
In the foregoing scheme, the number of reserved bandwidths is one or more;
the method further includes:
when the number of reserved bandwidths is one, evenly distributing the reserved bandwidth for the corresponding DIP into a plurality of cycles according to its bandwidth parameter;
when the number of reserved bandwidths is multiple, allocating the reserved bandwidths to multiple cycles respectively.
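The two allocation cases above can be sketched as follows (illustrative Python; the one-value-per-cycle policy for the multi-value case is an assumption):

```python
def allocate_to_cycles(reserved, num_cycles):
    """Distribute reserved bandwidth over DIP cycles.

    A single reserved-bandwidth value is spread evenly across the
    cycles; a list of values is assigned to the cycles one by one.
    """
    if isinstance(reserved, (int, float)):
        return [reserved / num_cycles] * num_cycles
    return [reserved[i % len(reserved)] for i in range(num_cycles)]
```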
In the foregoing scheme, the aggregating at least one first service flow to obtain a second service flow includes:
traffic shaping each of the at least one first traffic flow; the traffic shaping is performed based on the bandwidth requirements of each first traffic flow;
arranging the at least one first service flow according to the time at which packet shaping completes in the shaping queue corresponding to each first service flow, to obtain the second service flow;
the arrangement means that, for the shaping queue corresponding to each first service flow, packets enter the queue corresponding to the second service flow in order of the time at which each first service flow's packet is ready to dequeue.
In the above scheme, the method further comprises:
and performing traffic shaping on the second service flow according to the rate sum of the at least one first service flow.
In the foregoing scheme, the sending the second service flow includes:
causing the packets in the second service flow to enter the egress queues corresponding to at least three cycles in a packet-by-packet polling manner;
causing the packets in the second service flow to enter the egress queues corresponding to the at least three cycles evenly according to a byte-counting manner;
or causing the packets in the second service flow to enter the at least three egress cycle queues in a weighted packet-by-packet polling manner or a weighted byte-counting polling manner, according to preset weights of the egress queues corresponding to the at least three cycles.
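The first two dispatch modes above can be sketched as follows (illustrative Python; the byte-counting policy of always filling the least-loaded queue is an assumption, and the weighted variants would simply scale these rules by the preset weights):

```python
from itertools import cycle

def dispatch_rr(packets, n_queues=3):
    """Packet-by-packet polling into n egress cycle queues."""
    queues = [[] for _ in range(n_queues)]
    for pkt, q in zip(packets, cycle(range(n_queues))):
        queues[q].append(pkt)
    return queues

def dispatch_bytes(packets, n_queues=3):
    """Byte counting: each (name, size) packet goes to the egress
    queue that currently holds the fewest bytes, evening out load."""
    queues, counts = [[] for _ in range(n_queues)], [0] * n_queues
    for pkt in packets:
        q = counts.index(min(counts))
        queues[q].append(pkt)
        counts[q] += pkt[1]
    return queues
```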
The embodiment of the invention provides a data transmission method, which is applied to a second communication node and comprises the following steps:
receiving a second traffic flow from the first communication node; the second traffic flow is obtained by aggregating at least one first service flow of the same path, the at least one first service flow having the same quality-of-service guarantee requirement.
An embodiment of the present invention provides a data transmission apparatus, where the apparatus includes:
a processing module, configured to aggregate at least one first service flow to obtain a second service flow, where each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement;
and the sending module is used for sending the second service flow.
In the above scheme, the sending module is further configured to send a bandwidth request; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
the reserved bandwidth adopts one of the following:
a first sum of the bandwidths of the respective first service flows in the at least one first service flow;
a preset multiple of the first sum.
In the foregoing scheme, the number of reserved bandwidths is one or more;
the processing module is further configured to, when the number of reserved bandwidths is one, evenly distribute the reserved bandwidth for the corresponding DIP into a plurality of cycles according to its bandwidth parameter;
and, when the number of reserved bandwidths is multiple, allocate the reserved bandwidths to multiple cycles respectively.
In the foregoing solution, the processing module is configured to perform traffic shaping on each first traffic flow in the at least one first traffic flow; the traffic shaping is performed based on the bandwidth requirements of each first traffic flow;
arranging the at least one first service flow according to the time at which packet shaping completes in the shaping queue corresponding to each first service flow, to obtain the second service flow;
the arrangement means that, for the shaping queue corresponding to each first service flow, packets enter the queue corresponding to the second service flow in order of the time at which each first service flow's packet is ready to dequeue.
In the foregoing solution, the processing module is further configured to perform traffic shaping on the second service flow according to a sum of the rates of the at least one first service flow.
In the above scheme, the sending module is configured to cause the packets in the second service flow to enter the egress queues corresponding to at least three cycles in a packet-by-packet polling manner;
to cause the packets in the second service flow to enter the egress queues corresponding to the at least three cycles evenly according to a byte-counting manner;
or to cause the packets in the second service flow to enter the at least three egress cycle queues in a weighted packet-by-packet polling manner or a weighted byte-counting polling manner, according to preset weights of the egress queues corresponding to the at least three cycles.
An embodiment of the present invention provides a data transmission apparatus, where the apparatus includes:
a receiving module, configured to receive a second traffic flow from a first communication node; the second traffic flow is obtained by aggregating at least one first service flow of the same path, the at least one first service flow having the same quality-of-service guarantee requirement.
The embodiment of the invention provides a data transmission device, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the steps of any of the above data transmission methods on the first communication node side; or,
the processor, when executing the program, implements the steps of the data transmission method of any of the above second communication node sides.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the above data transmission methods on the first communication node side; or,
the computer program realizes the steps of the data transmission method of any of the above second communication node sides when executed by a processor.
The embodiments of the invention provide a data transmission method, apparatus and storage medium. The method includes: the first communication node aggregates at least one first service flow to obtain a second service flow, where each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement, and sends the second service flow; correspondingly, the second communication node receives the second service flow from the first communication node, the second service flow including at least one aggregated first service flow of the same path with the same quality-of-service guarantee requirement. In this way, the perception of per-flow state at the intermediate node (i.e., the second communication node) is minimized.
Drawings
Fig. 1 is a schematic diagram of a relationship between a scheduling mechanism of an existing IP and a scheduling mechanism of a DIP;
fig. 2 is a schematic diagram of a CQF scheduling method;
fig. 3 is a diagram illustrating the effect of forwarding delay of a CQF;
FIG. 4 is a schematic diagram of a DIP mechanism;
fig. 5 is a schematic flowchart of a data transmission method according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of another data transmission method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a resource reservation method according to an embodiment of the present invention;
fig. 8 is a schematic diagram of traffic shaping according to an embodiment of the present invention;
fig. 9 is a schematic diagram of interleaving and arranging a message according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a data transmission apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another data transmission apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a communication device according to an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to embodiments; the related art is described first.
As mentioned above, DIP networks are an important development trend for future networks. Because traditional IP forwarding is based on statistical multiplexing and best effort, Service Level Agreement (SLA) indicators such as delay, bandwidth and packet loss cannot be guaranteed; specific B5G/6G scenarios, however, impose strict deterministic service capability requirements that traditional IP forwarding cannot meet. For example: telemedicine requires an end-to-end delay below 50 ms and jitter below 200 us; to ensure the accuracy of relay protection in smart grid scenarios, the one-way delay difference must be below 200 us and jitter below 50 us.
In the related art, the Time Sensitive Networking (TSN) technology standardized by the Institute of Electrical and Electronics Engineers (IEEE) provides a series of standards for congestion control and queue scheduling, but it is designed for Ethernet local area networks; preconditions such as time synchronization and per-flow identification are unrealistic on a large network, and scalability and maintainability are the greatest challenges for determinism in a Layer-3 (L3) network.
Therefore, the related art proposes the idea of DIP based on periodic scheduling, in the hope of providing determinism for large-scale backbone networks, but problems remain in terms of lack of deployment practice, ease of maintenance, and so on. Fig. 1 shows the relationship between the existing IP scheduling techniques, the related TSN mechanisms, and the proposed DIP mechanism.
To explain the DIP mechanism, the related-art Cyclic Queuing and Forwarding (CQF) is described first. CQF is a queue management method applied to deterministic networks in the related art.
CQF inherits the gating concept of Time Aware Shaping (TAS) and introduces a cyclic-queue mechanism for handling data streams with strict delay requirements (Critical streams). As shown in the schematic diagram of the CQF scheduling method in fig. 2, two queues, queue2 and queue3, are used as an example; queue2 and queue3 open and close alternately, i.e., while queue2 is open to transmit data, queue3 is closed and receives data, after which queue3 opens to transmit and queue2 closes to receive. In this mechanism, frames of the Critical stream are sent time window by time window (a time window is also called a cycle); for example, the white frames in the figure must arrive within a certain time window and enter a specific queue (queue2 or queue4 in the figure), and within the same time window queue3 or queue5 must finish sending, so that the whole mechanism operates normally. If the link delay and processing delay are negligible relative to the cycle duration, then for a given deterministic network the mechanism works well with two buffers (e.g., only queue2 and queue3 in the figure); otherwise more queues are needed. If all devices in the network support CQF, a packet of a Critical flow enters a cycle at the network edge node and then stays about one cycle at each intermediate node, so the packet is transmitted deterministically and reaches the edge node of the peer network in a fixed cycle. In the CQF mechanism, from the perspective of an individual packet, the stay at each hop is around one cycle (close to 0 at the shortest, close to two cycles at the longest), but from the perspective of the overall CQF system, the per-node delay of the set of packets entering in one cycle is one cycle, as illustrated in fig. 3, a schematic diagram of the forwarding delay of CQF.
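The alternating open/close behaviour of the two CQF queues can be sketched as follows (illustrative Python, not from the patent):

```python
def cqf_states(num_cycles):
    """Per-cycle roles of queue2 and queue3 in two-queue CQF:
    the two queues swap sending and receiving every cycle."""
    states = []
    for c in range(num_cycles):
        if c % 2 == 0:
            states.append({"send": "queue2", "recv": "queue3"})
        else:
            states.append({"send": "queue3", "recv": "queue2"})
    return states
```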
This CQF scheme has problems: it requires time synchronization of all devices in the network, and the distance between two nodes cannot be too long (i.e., the link delay must be short).
A related-art DIP scheduling scheme is explained as follows. For DIP scheduling, whole-network time synchronization is no longer assumed; only whole-network frequency synchronization is assumed. Because fiber delay cannot be ignored in a large-scale, long-distance network, more queues are configured, e.g., three. Unlike CQF, which rotates between one sending queue and one receiving queue, here a time of two cycles is used to guarantee that the packets of one cycle are fully received before being sent; that is, three queues rotate in concert. For example, with three cycles denoted red, green and yellow, the queues are set as follows:
red period: queue1 sends status, queue2 receives status, queue3 receives status;
green period: queue1 receives status, queue2 sends status, queue3 receives status;
yellow period: queue1 receives status, queue2 receives status, queue3 sends status;
and the process is circulated.
When this method is applied, a packet is assigned a cycle at the network edge node and then stays about two cycles at each intermediate node it passes, so that packets of the Critical flow are transmitted deterministically and reach the edge node of the peer network in a fixed cycle.
As shown in fig. 4, which depicts the control flow of the control plane and the data plane in the DIP mechanism: on the one hand, the data plane supports aggregation of Critical streams; after different Critical streams enter a DIP node, if the outgoing interface and the outgoing cycle are the same, they are all delayed at the node by about 2T (where T denotes the duration of one cycle) and then sent to the next hop. The DIP node only needs to process the label/destination address to find the outgoing interface and look up the cycle mapping table to match the appropriate outgoing cycle, without identifying each Critical stream. Each Critical flow should conform to the relevant traffic bandwidth limit, e.g., the formula in fig. 4: for one Critical flow (flow i in the figure), the amount of data sent should be bounded by the sum of the traffic corresponding to the reserved bandwidth and the permitted burst traffic. The sender's deterministic request should be submitted to the network through a User Network Interface (UNI), and an Operation, Administration and Maintenance (OAM) tool set should also be provided in the network for performance monitoring of determinism-related functions. On the other hand, the control plane supports aggregated resource reservation.
In the related art, the common Resource ReSerVation Protocol-Traffic Engineering (RSVP-TE) reservation method requires per-Critical-flow state and soft-state maintenance, and is therefore unsuitable for a large-scale DIP network scenario. In the conventional resource reservation mechanism, the head node sends a Path message to the tail node, probing the resource condition; after receiving the Path message, the tail node feeds back a reservation (Resv) message along the determined path, completing resource reservation and optionally carrying an allocated path label (in Multi-Protocol Label Switching (MPLS) scenarios). Although this scheme reserves bandwidth, the soft state must be refreshed, e.g., every 30 s or 90 s, and failure to refresh before timeout releases the reserved resources.
Compared with the existing RSVP-TE mechanism, the DIP network introduces a new mechanism in the hope of overcoming soft state, with the intermediate node maintaining less state; that is, the intermediate node need not maintain the flow ID of each flow, but only a time window recording the bandwidth reservation of the node's link over the next K cycles. However, this time window is still updated according to the requirements of each Critical flow, so the intermediate node's reservation is continuously refreshed; if one cycle is 10 us, the update frequency is very high, and by the design philosophy of the Internet, the intermediate node should not need to perceive the resource condition of each service flow.
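The K-cycle time window kept by an intermediate node can be sketched as a per-link table of aggregate reserved bandwidth with no per-flow IDs (illustrative Python; the class shape and admission rule are assumptions, not from the patent):

```python
class CycleWindow:
    """Per-link reservation state for the next K cycles: the node
    tracks only aggregate reserved bandwidth per cycle, not flows."""

    def __init__(self, k, capacity):
        self.reserved = [0.0] * k  # reserved bandwidth per cycle
        self.capacity = capacity   # link capacity per cycle

    def try_reserve(self, cycle, bw):
        """Admit a reservation if the cycle still has capacity."""
        if self.reserved[cycle] + bw <= self.capacity:
            self.reserved[cycle] += bw
            return True
        return False
```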
In summary, it is not advisable for the intermediate node to perceive the specific situation of each deterministic service flow (a Critical flow requiring resource reservation), whether in the control plane or the data plane; however, the current mechanism requires the state of each deterministic service flow to be conveyed in the control plane.
Based on this, in the method provided in the embodiment of the present invention, the first communication node aggregates at least one first service flow to obtain a second service flow, where each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement, and sends the second service flow. Correspondingly, the second communication node receives the second service flow from the first communication node; the second service flow includes at least one aggregated first service flow of the same path, the at least one first service flow having the same quality-of-service guarantee requirement.
The present invention will be described in further detail with reference to examples.
Fig. 5 is a schematic flowchart of a data transmission method according to an embodiment of the present invention; as shown in fig. 5, the method is applied to a first communication node, which may be an operator's provider edge node (PE), and includes:
step 501, aggregating at least one first service flow to obtain a second service flow; each first service flow in the at least one first service flow follows the same path in the first network and has the same quality-of-service guarantee requirement;
step 502, sending the second service flow.
Wherein the first network refers to a deterministic network, i.e., a DIP network.
The first service flow is a data flow with a strict delay requirement (called a Critical flow) in the DIP network.
Having the same quality-of-service guarantee requirement may include having the same low-latency requirement class, for example, requiring the delay to be below a certain threshold such as 2.5 ms or 3 ms. Note that even if the numerical delay requirements of two services differ (e.g., one requires less than 2.8 ms and the other less than 2.5 ms), both belong to the same low-latency class, i.e., have the same QoS guarantee requirement.
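The class-based comparison above (same low-latency class despite different numeric bounds) can be sketched as follows (illustrative Python; the 5 ms class boundary is an assumed value, not from the patent):

```python
LOW_LATENCY_MS = 5.0  # assumed class boundary, not from the patent

def same_aggregation_class(flow_a, flow_b):
    """Two flows qualify for aggregation when they share a path and
    both fall in the same low-latency class, even if their exact
    delay bounds differ (e.g. < 2.5 ms vs < 2.8 ms)."""
    return (flow_a["path"] == flow_b["path"]
            and flow_a["max_delay_ms"] <= LOW_LATENCY_MS
            and flow_b["max_delay_ms"] <= LOW_LATENCY_MS)
```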
In an embodiment, the method further comprises:
sending a bandwidth request; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
the reserved bandwidth adopts one of the following:
a first sum of the bandwidths of the respective first service flows in the at least one first service flow;
a preset multiple of the first sum.
Here, when the first communication node applies to the second communication node for resource reservation, it reserves a moderate amount of extra resources in view of future growth; that is, the reserved bandwidth is a preset multiple of the first sum. In other words, the method provided by the embodiment of the invention allows a moderate over-allocation of the reserved bandwidth.
In an embodiment, the number of reserved bandwidths is one or more;
when the number of reserved bandwidths is one, the reserved bandwidth for the corresponding DIP is evenly distributed into a plurality of cycles according to its bandwidth parameter;
when the number of reserved bandwidths is multiple, the reserved bandwidths are allocated to multiple cycles respectively.
In an embodiment, the aggregating at least one first service flow of the same path to obtain a second service flow includes:
traffic shaping (also called Pacing) each of the at least one first traffic flow; the traffic shaping is performed based on the bandwidth requirements of each first traffic flow;
arranging the at least one first service flow according to the time at which packet shaping completes in the shaping queue corresponding to each first service flow, to obtain the second service flow;
the arrangement means that, for the shaping queue corresponding to each first service flow, packets enter the queue corresponding to the second service flow in order of the time at which each first service flow's packet is ready to dequeue.
Here, a packet refers to a packet of a first service flow after traffic shaping;
the shaping queue refers to the queue in which a first service flow is traffic-shaped;
and the dequeue-ready time of a first service flow's packet may be the time at which shaping of that packet completes.
In an embodiment, the method further comprises:
and performing traffic shaping on the second service flow according to the rate sum of the at least one first service flow.
Specifically, after the aggregated second service flow is obtained, it may be further traffic-shaped (Pacing) according to the sum of the rates of the first service flows (specifically, the sum of the packet traffic rates of the first service flows).
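Pacing the aggregated flow at the sum rate of its components can be sketched as computing evenly spaced departure times (illustrative Python, not the patent's implementation):

```python
def pace_departures(packet_sizes, rate):
    """Departure time of each packet when the aggregated (second)
    flow is paced at the sum rate of its component flows.

    packet_sizes: bytes per packet; rate: bytes per second.
    """
    t, departures = 0.0, []
    for size in packet_sizes:
        t += size / rate  # each packet occupies size/rate seconds
        departures.append(t)
    return departures
```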
In an embodiment, the sending the second traffic flow includes:
causing the packets in the second service flow to enter the egress queues corresponding to at least three cycles in a packet-by-packet polling manner;
causing the packets in the second service flow to enter the egress queues corresponding to the at least three cycles evenly according to a byte-counting manner;
or causing the packets in the second service flow to enter the at least three egress cycle queues in a weighted packet-by-packet polling manner or a weighted byte-counting polling manner, according to preset weights of the egress queues corresponding to the at least three cycles.
Here, the packet-by-packet polling manner means that the packets in the second service flow enter the egress queues corresponding to the at least three cycles in turn, cyclically.
The byte-counting manner means that packets are distributed evenly by packet size among the egress queues corresponding to the at least three cycles.
Each of the at least three cycles may correspond to a different egress queue, and different egress queues are preset with different weights, the weights determining the proportion of packets entering each queue. If the weight ratio of queue1, queue2 and queue3 is 1:1:2, the traffic is first divided into 4 equal parts, of which 1 part enters queue1, 1 part enters queue2, and 2 parts enter queue3; the equal division may be done per packet or, taking packet size into account, according to the number of bytes sent by the flow.
Correspondingly, the embodiment of the invention provides another data transmission method. Fig. 6 is a schematic flowchart of a data transmission method according to an embodiment of the present invention; as shown in fig. 6, the data transmission method is applied to a second communication node, where the second communication node may be an intermediate node (denoted as a P node) of an operator, and the method includes:
step 601, receiving a second service flow from a first communication node; the second service flow includes at least one aggregated first service flow of the same path.
In practical application, the second communication node can communicate with one or more first communication nodes;
in the same ingress PE (a first communication node), traffic flows destined for other egress PEs are also aggregated into other flows and enter a particular path.
If multiple aggregate flows (i.e., multiple second service flows) share the same egress PE, they are all scheduled onto at least three queues (typically three or four) of the same link; to distribute the aggregate flows across the at least three queues as evenly as possible, each aggregate flow (i.e., the interleaved traffic) may enter the queues packet by packet in round-robin (RR) fashion or in a byte-counting manner.
The method further comprises the following steps:
step 602, sending the second service flow to the corresponding next hop P node (an intermediate node).
In an embodiment, the method further comprises:
receiving a bandwidth request from a first communication node; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
reserving a reserved bandwidth for the second traffic flow.
In an embodiment, after receiving the second service flow, the method further includes:
forwarding the second service flow according to the cycle mapping mechanism of DIP.
The method provided by the embodiment of the invention is applied in a DIP network to implement resource reservation. On the data plane, the method aggregates (interleaves) the service flows of the same path at the PE (i.e., the first communication node); the interleaved traffic is forwarded as an aggregate flow in the DIP network, and the reserved bandwidth supports a moderate amount of over-allocation, so as to avoid frequent modification of resource reservations. The over-allocated bandwidth does not cause much resource waste, because resources not used by the critical traffic can be used by best-effort (BE) traffic. Moreover, the interleaved flows occupy the bandwidth of several cycles evenly at the PE, avoiding the problem of traffic congestion in some cycles while other cycles sit idle.
The embodiment of the invention provides a resource reservation method. The method comprises the following steps:
performing flow-by-flow shaping (i.e., shaping each traffic flow individually) on the traffic flows (equivalent to the first traffic flows) from an ingress edge node (PE) to an egress PE; specifically, according to the destination address of each service flow, service flows with the same destination address are aggregated in an interleaving manner to obtain an aggregate flow (equivalent to the second service flow).
The aggregate flow appears as a single flow and enters the network through an RSVP-based session maintained with the peer, thereby reducing the number of service flows in the network. In this way, the intermediate nodes (i.e., P nodes) do not perceive the arrival and departure of individual flows; these appear only as changes in the reserved bandwidth and do not affect the operation of the forwarding plane.
Fig. 7 is a schematic diagram of another data transmission method according to an embodiment of the present invention. As shown in fig. 7, each ingress PE receives first service flows sent by clients; for the first service flows it receives, each ingress PE aggregates those having the same path in the first network (i.e., the deterministic network) and the same quality-of-service guarantee requirements to obtain a second service flow, then sends the second service flow to an intermediate node (P), which forwards it to an egress PE.
The aggregation is performed in an interleaving manner, specifically: traffic-shape each of the at least one first service flow; then arrange the at least one first service flow according to the time at which each packet finishes shaping in its flow's shaping queue, obtaining the second service flow. Here, "arranging" means that the packets of the first service flows enter the queue corresponding to the second service flow in the order of the times at which they become ready to dequeue from their respective shaping queues.
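The interleaved arrangement by shaping-completion time can be sketched as a k-way merge of per-flow shaping queues, each already ordered by the time its packets become ready to dequeue. The data layout (flow id mapped to sorted `(ready_time, packet)` pairs) is an assumption for illustration:

```python
import heapq

def interleave_flows(shaped_flows):
    """Merge per-flow shaping queues into one aggregate flow, ordered by
    the time each packet finishes shaping (is ready to dequeue).
    `shaped_flows`: flow id -> list of (ready_time, packet), each sorted."""
    merged = heapq.merge(*shaped_flows.values())  # k-way merge by ready_time
    return [pkt for _, pkt in merged]
```

`heapq.merge` consumes the already-sorted per-flow queues lazily, which mirrors a shaper releasing packets into the common aggregate queue as their dequeue times arrive.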
Table 1 is a schematic diagram comparing a current PE scheduling scheme of DIP with a PE scheduling scheme of DIP provided in an embodiment of the present invention.
TABLE 1
A specific procedure of a control plane in the communication method provided in the embodiment of the present invention is described below.
Deterministic tunnels or Segment Routing (SR) policies are built between the edge nodes of a deterministic network, either traffic-driven or pre-configured.
When multiple deterministic traffic flows (corresponding to the first traffic flows described above) have the same path in the deterministic network (i.e., the same head and tail nodes, with all flows being deterministic traffic), these deterministic traffic flows are aggregated (e.g., by interleaving).
In some embodiments, for example after the related tunnel or SR policy is established, the aggregated packets of the deterministic traffic flow (corresponding to the packets of the second traffic flow) may be encapsulated with a new MPLS label header (pushed onto the MPLS label stack) or a new IPv6 header (a new IPv6 header is generated, with the original packet as the payload).
The aggregated deterministic traffic flow applies for a total bandwidth-reservation value in the network, for example through RSVP signaling. The total value may be the sum of the bandwidths of the member flows before aggregation, or that sum plus an additional reservation margin, e.g., 10% of the sum.
While the network is running, new deterministic traffic may enter and old deterministic traffic may leave, so the bandwidth sum of the corresponding aggregate flow changes. Conventional RSVP would immediately trigger the related bandwidth-reservation update. In the method of this embodiment, if the new bandwidth value stays within a certain threshold (for example, the previous reservation plus 10%), the updated reservation value need not be propagated immediately; the reserved amount in the network is updated at the next RSVP soft-state refresh. If the threshold (which may be preset by network management or by developers) is exceeded, an immediate update is triggered.
This avoids frequent RSVP state updates. It is particularly applicable to scenarios where individual deterministic flows need little bandwidth but are numerous and are created and terminated frequently.
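The decision between waiting for the soft-state refresh and triggering an immediate update can be sketched as a simple threshold test. The 10% margin is the example value from the text; the function name is hypothetical:

```python
def needs_immediate_update(reserved_bw, new_total_bw, threshold=0.10):
    """Return True if the new aggregate bandwidth exceeds the current
    reservation plus the over-allocation margin, so an RSVP update must
    be sent at once; otherwise the change waits for the soft-state refresh."""
    return new_total_bw > reserved_bw * (1 + threshold)
```

For a 100-unit reservation, a rise to 105 units stays inside the 10% margin and can wait, while a rise to 115 units triggers an immediate update.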
When resource reservations are signaled through RSVP, the service flow may occupy multiple cycles on the data plane. In one possible implementation, three RSVP reserved bandwidths are set, one per cycle; alternatively, a single RSVP reserved bandwidth is set, but the RSVP bandwidth parameter carries a flag indicating that the bandwidth is reserved for DIP traffic and is divided equally among three cycles.
The following describes a specific flow of a data plane in the communication method provided by the embodiment of the present invention.
At an edge node of the deterministic network, traffic shaping (also called pacing) is performed on each deterministic traffic flow, so that the packets of the flow are distributed as uniformly as possible in time. The per-flow shaping process is shown in fig. 8.
When the paths of the deterministic traffic flows (corresponding to the first traffic flows) in the deterministic network (such as the DIP network described above) are the same (i.e., the same head and tail nodes, with all flows being deterministic traffic), the deterministic traffic flows are interleaved. The interleaving method is shown in fig. 9: during per-flow shaping, when a packet of a service flow finishes shaping, it enters a common large queue in the order of completion time, thereby realizing the interleaved arrangement of the packets.
In an embodiment, the aggregated packet of the deterministic traffic flow (corresponding to the second traffic flow) may be subjected to a second traffic shaping according to a sum of flow rates of the packets corresponding to the deterministic traffic flow.
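The second traffic shaping at the sum of the member flows' rates can be illustrated with a simple virtual-finish-time shaper. This is a sketch under assumed units (seconds, bytes, bits per second), not the embodiment's exact mechanism:

```python
def shape_departure_times(arrivals, rate_bps):
    """Earliest departure times for an aggregate flow shaped at rate_bps
    (the sum of the member flows' rates).
    `arrivals`: list of (arrival_time_s, size_bytes) in arrival order."""
    departures = []
    next_free = 0.0  # time at which the shaper can next start serving
    for t, size in arrivals:
        start = max(t, next_free)          # wait for link token / prior packet
        next_free = start + size * 8 / rate_bps
        departures.append(next_free)
    return departures
```

At 8 000 bit/s, each 1 000-byte packet takes one second to serve, so two back-to-back arrivals at t=0 depart at 1 s and 2 s; a later arrival at t=5 s passes through undelayed.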
The aggregated deterministic traffic flow appears in the deterministic network as a single flow until it leaves the deterministic network.
The aggregated packets of the deterministic traffic may be scheduled evenly to three egress periodic queues, for example by having the interleaved traffic enter the three queues by packet-by-packet polling or by byte count; alternatively, the packets may be scheduled to the three egress periodic queues according to certain weights.
Fig. 10 is a schematic structural diagram of a data transmission apparatus according to an embodiment of the present invention; as shown in fig. 10, the apparatus is applied to a first communication node, and the apparatus includes:
the processing module is used for converging at least one first service flow to obtain a second service flow; the path of each first service flow in the at least one first service flow in the first network is the same and has the same service quality guarantee requirement;
and the sending module is used for sending the second service flow.
Specifically, the sending module is further configured to send a bandwidth request; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
the reserved bandwidth adopts one of the following:
a first sum, being the sum of the bandwidths of the respective first service flows in the at least one first service flow;
a preset multiple of the first sum.
Specifically, the number of reserved bandwidths is one or more;
the processing module is configured to, when there is one reserved bandwidth, evenly allocate the reserved bandwidth for the corresponding DIP across a plurality of cycles according to the bandwidth parameter of the reserved bandwidth;
when there are multiple reserved bandwidths, allocate the reserved bandwidths to the plurality of cycles respectively.
Specifically, the processing module is configured to perform traffic shaping on each of the at least one first traffic flow; the traffic shaping is performed based on the bandwidth requirements of each first traffic flow;
arranging the at least one first service flow according to the time for finishing message shaping of the shaping queue corresponding to each first service flow to obtain a second service flow;
and the arrangement representation sequentially enters the queue corresponding to the second service flow according to the time for preparing dequeuing corresponding to the message of each first service flow aiming at the shaping queue corresponding to each first service flow.
Specifically, the processing module is further configured to perform traffic shaping on the second traffic flow according to a sum of rates of the at least one first traffic flow.
Specifically, the sending module is configured to perform one of the following:
cause the packets of the second service flow to enter the egress queues corresponding to at least three cycles by packet-by-packet polling;
cause the packets of the second service flow to enter the egress queues corresponding to at least three cycles evenly by byte count;
cause the packets of the second service flow to enter the at least three egress cycle queues by weighted packet-by-packet polling or weighted byte-count polling, according to weights preset for the egress queues corresponding to the at least three cycles.
It should be noted that: in the data transmission device provided in the above embodiment, when implementing the corresponding data transmission method, only the division of the program modules is taken as an example, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the server is divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided by the above embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is described in the method embodiment, which is not described herein again.
Fig. 11 is a schematic structural diagram of another data transmission apparatus according to an embodiment of the present invention; as shown in fig. 11, the apparatus is applied to a second communication node, and the apparatus includes:
a receiving module, configured to receive a second traffic flow from a first communication node; the second service flow includes: and at least one first service flow of the same path is converged, and the at least one first service flow has the same service quality guarantee requirement.
Specifically, the apparatus may further include a forwarding module configured to forward the aggregated second traffic flow, such as sending the second traffic flow to a corresponding next-hop P node (an intermediate node).
It should be noted that: in the data transmission device provided in the above embodiment, when implementing the corresponding data transmission method, only the division of the program modules is taken as an example, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the server is divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided by the above embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is described in the method embodiment, which is not described herein again.
Fig. 12 is a schematic structural diagram of a communication device according to an embodiment of the present invention, and as shown in fig. 12, the communication device 120 includes: a processor 1201 and a memory 1202 for storing computer programs operable on the processor;
the processor 1201 is configured to, when running the computer program, perform, when the communication device is applied to a first communication node: converging at least one first service flow to obtain a second service flow; the path of each first service flow in the at least one first service flow in the first network is the same and has the same service quality guarantee requirement; and sending the second service flow.
When the processor runs the computer program, it implements the corresponding flow of the first communication node in the methods of the embodiments of the present invention, and for brevity, details are not described here again.
When the communication device is applied to a second communication node, the processor 1201 is configured to execute, when running the computer program, the following steps: receiving a second traffic flow from the first communication node; the second traffic flow includes: and at least one first service flow of the same path is converged, and the at least one first service flow has the same service quality guarantee requirement.
When the processor runs the computer program, the corresponding process of the second communication node in each method according to the embodiment of the present invention is implemented, and for brevity, no further description is given here.
In practical applications, the communication device 120 may further include: at least one network interface 1203. The various components of the communication device 120 are coupled together by a bus system 1204. It is understood that the bus system 1204 is used to enable connective communication between these components. The bus system 1204 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 1204 in fig. 12. The number of the processors 1201 may be at least one. The network interface 1203 is used for communication between the communication device 120 and other devices in a wired or wireless manner.
The memory 1202 in embodiments of the present invention is used to store various types of data to support the operation of the communication device 120.
The method disclosed by the embodiment of the invention can be applied to the processor 1201 or implemented by the processor 1201. The processor 1201 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1201. The processor 1201 may be a general-purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The processor 1201 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 1202; the processor 1201 reads the information in the memory 1202 and performs the steps of the aforementioned methods in conjunction with its hardware.
In an exemplary embodiment, the communication device 120 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components for performing the aforementioned methods.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored;
the computer program, when executed by a processor, performs: converging at least one first service flow to obtain a second service flow; the path of each first service flow in the at least one first service flow in the first network is the same and has the same service quality guarantee requirement; and sending the second service flow.
When the computer program is executed by the processor, the corresponding process implemented by the first communication node in the methods according to the embodiments of the present invention is implemented, and for brevity, no further description is given here.
When the stored computer program is run by a processor, corresponding to the application of the stored computer program to a second communication node, the computer program performs: receiving a second traffic flow from the first communication node; the second traffic flow includes: and at least one first service flow of the same path is converged, and the at least one first service flow has the same service quality guarantee requirement.
When the computer program is executed by the processor, the corresponding process implemented by the second communication node in the methods according to the embodiments of the present invention is implemented, and for brevity, no further description is given here.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
It should be noted that: "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The technical means described in the embodiments of the present application may be arbitrarily combined without conflict.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. A data transmission method applied to a first communication node, the method comprising:
converging at least one first service flow to obtain a second service flow; the path of each first service flow in the at least one first service flow in the first network is the same and has the same service quality guarantee requirement;
and sending the second service flow.
2. The method of claim 1, further comprising:
sending a bandwidth request; the bandwidth request is used for requesting to allocate reserved bandwidth for the second service flow;
the bandwidth reservation adopts one of the following steps:
a first sum of bandwidths of respective ones of the at least one first traffic flow;
a preset multiple of the first sum.
3. The method of claim 2, wherein the number of the reserved bandwidths is one or more;
the method further comprises the following steps:
when the number of the reserved bandwidths is one, the reserved bandwidth for the corresponding deterministic Internet Protocol (DIP) is evenly distributed among a plurality of cycles according to the bandwidth parameter of the reserved bandwidth;
when the number of the reserved bandwidth is multiple, the reserved bandwidth is respectively allocated to multiple periods.
4. The method of claim 1, wherein aggregating at least one first traffic flow to obtain a second traffic flow comprises:
traffic shaping each of the at least one first traffic flow; the traffic shaping is performed based on the bandwidth requirements of each first traffic flow;
arranging the at least one first service flow according to the time for finishing message shaping of the shaping queue corresponding to each first service flow to obtain a second service flow;
and the arrangement representation sequentially enters the queue corresponding to the second service flow according to the time for preparing dequeuing corresponding to the message of each first service flow aiming at the shaping queue corresponding to each first service flow.
5. The method of claim 4, further comprising:
and performing traffic shaping on the second service flow according to the rate sum of the at least one first service flow.
6. The method of claim 2, wherein the sending the second service flow comprises one of:
causing the packets of the second service flow to enter the egress queues corresponding to at least three cycles by packet-by-packet polling;
causing the packets of the second service flow to enter the egress queues corresponding to at least three cycles evenly by byte count;
causing the packets of the second service flow to enter the at least three egress cycle queues by weighted packet-by-packet polling or weighted byte-count polling, according to weights preset for the egress queues corresponding to the at least three cycles.
7. A data transmission method, applied to a second communication node, the method comprising:
receiving a second traffic flow from the first communication node; the second traffic flow includes: and at least one first service flow of the same path is converged, and the at least one first service flow has the same service quality guarantee requirement.
8. A data transmission apparatus, characterized in that the apparatus comprises:
the processing module is used for converging at least one first service flow to obtain a second service flow; the path of each first service flow in the at least one first service flow in the first network is the same and has the same service quality guarantee requirement;
and the sending module is used for sending the second service flow.
9. A data transmission apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive a second traffic flow from a first communication node; the second traffic flow includes: and at least one first service flow of the same path is converged, and the at least one first service flow has the same service quality guarantee requirement.
10. A communication device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 6 are implemented when the program is executed by the processor;
alternatively, the processor implements the steps of the method of claim 7 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6; alternatively,
which computer program, when being executed by a processor, carries out the steps of the method as claimed in claim 7.
CN202011146700.3A 2020-10-23 2020-10-23 Data transmission method, device and storage medium Pending CN114501544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011146700.3A CN114501544A (en) 2020-10-23 2020-10-23 Data transmission method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011146700.3A CN114501544A (en) 2020-10-23 2020-10-23 Data transmission method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114501544A true CN114501544A (en) 2022-05-13

Family

ID=81470796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011146700.3A Pending CN114501544A (en) 2020-10-23 2020-10-23 Data transmission method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114501544A (en)

Similar Documents

Publication Publication Date Title
US11316795B2 (en) Network flow control method and network device
US20230388239A1 (en) Packet sending method, network node, and system
Ishimori et al. Control of multiple packet schedulers for improving QoS on OpenFlow/SDN networking
US10333779B2 (en) System and method for providing a software defined protocol stack
US7903553B2 (en) Method, apparatus, edge router and system for providing QoS guarantee
US11394646B2 (en) Packet sending method, network node, and system
US20210083970A1 (en) Packet Processing Method and Apparatus
WO2021057447A1 (en) Method for determining required bandwidth for data stream transmission, and devices and system
WO2021185208A1 (en) Packet processing method and apparatus, device, and storage medium
JP7231749B2 (en) Packet scheduling method, scheduler, network device and network system
US11310167B2 (en) Resource reservation method and related device
JP7288980B2 (en) Quality of Service in Virtual Service Networks
US20220006744A1 (en) Service forwarding method and network device
US20230142425A1 (en) Virtual dual queue core stateless active queue management (agm) for communication networks
WO2016177058A1 (en) Method and apparatus for data stream scheduling
US11646978B2 (en) Data communication method and apparatus
US20230336486A1 (en) Service flow scheduling method and apparatus, and system
US20150109922A1 (en) Method and apparatus for bandwidth adjustment in network virtualization system
Shin et al. Flit scheduling for cut-through switching: Towards near-zero end-to-end latency
US11805071B2 (en) Congestion control processing method, packet forwarding apparatus, and packet receiving apparatus
CN114501544A (en) Data transmission method, device and storage medium
KR101587379B1 (en) Method of dynamic control for queue size and apparatus thereof
WO2022237860A1 (en) Packet processing method, resource allocation method and related device
WO2020238875A1 (en) Method and apparatus for determining port attribute
WO2024011907A1 (en) Data scheduling method, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination