CN116235435A - Bandwidth adjustment method based on FlexE service and network equipment

Info

Publication number: CN116235435A
Application number: CN202080104796.9A
Inventors: 孙洪亮, 朱澍
Assignee: Huawei Technologies Co Ltd
Legal status: Pending

Classifications

    • H04L1/00 Arrangements for detecting or preventing errors in the information received
Abstract

The application discloses a bandwidth adjustment method based on the FlexE service and a network device, which are used to ensure that, when the transmitting device side adjusts the sending channel, the boundary of the service data sent over a first channel is aligned with the boundary of the service data sent over a second channel, so that no part of the service data sent over the second channel is lost. The method includes: at a first time point before the current sending channel is adjusted from the first channel to the second channel at a second time point, starting to write the service data to be sent into a buffer at a rate greater than the first bandwidth, and filling the service data in the buffer into the first channel and the second channel; at the second time point, starting to send, through the second channel, the service data filled into the second channel; where the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period.

Description

Bandwidth adjustment method based on FlexE service and network equipment

Technical Field
The present invention relates to the field of communications technologies, and in particular, to a bandwidth adjustment method and a network device based on FlexE service.
Background
The Optical Internetworking Forum (OIF) has published the flexible Ethernet (FlexE) technology standard. FlexE is a generic technology that supports multiple medium access control (MAC) layer rates. In a time division multiplexing (TDM) manner, FlexE divides each 100 Gigabit Ethernet (100GE) physical (PHY) interface in the time domain into data-carrying channels of 20 slots, with a 5G bandwidth as the granularity. For simplicity, the data-carrying channel of each slot is referred to as a transmission channel; that is, the bandwidth of each transmission channel is 5G, which realizes hard isolation of transmission channel bandwidth. One service data flow can be distributed over one or more transmission channels, so that services of various rates can be matched.
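To make the slot arithmetic above concrete, the following sketch (illustrative only; the constants and client rate are assumed example values, not figures from the patent) shows how one 100GE PHY maps to twenty 5G transmission channels and how many slots a client of a given rate would occupy.

# Illustrative sketch (assumed values): one 100GE PHY divided into 5G slots.
PHY_RATE_G = 100          # one 100GE physical interface
SLOT_GRANULARITY_G = 5    # FlexE slot granularity

slots_per_phy = PHY_RATE_G // SLOT_GRANULARITY_G
print(f"{slots_per_phy} transmission channels of {SLOT_GRANULARITY_G}G each")        # 20

service_rate_g = 25       # hypothetical client rate
slots_needed = service_rate_g // SLOT_GRANULARITY_G
print(f"a {service_rate_g}G client occupies {slots_needed} transmission channels")   # 5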
FlexE can adjust the bandwidth of the transmission channel carrying each service, so as to meet the different bandwidth requirements of the services. For example, FlexE may reduce the bandwidth of the transmission channel carrying a certain service when that service requires only a small bandwidth, or increase the bandwidth of that transmission channel when the service requires a large bandwidth.
At present, when FlexE adjusts, on the transmitting device side, from a large transmission channel (a transmission channel with a larger bandwidth) to a small transmission channel (a transmission channel with a smaller bandwidth), the transmission rate of the large transmission channel is greater than that of the small transmission channel. As a result, at the switching boundary point the service data transmitted by the large transmission channel and the service data transmitted by the small transmission channel are easily misaligned, and part of the service data transmitted by the small transmission channel is lost, so that service data transmission is impaired.
On the receiving device side, when FlexE adjusts from the small transmission channel to the large transmission channel, the receiving rate of the small transmission channel is smaller than that of the large transmission channel. As a result, for a period of time after the switching boundary point, the service data received through the small transmission channel and the service data received through the large transmission channel are easily output simultaneously, so that the receiving order of the service data is disrupted, the content of the service data becomes disordered, and the received service data is erroneous.
Disclosure of Invention
The embodiments of the present application provide a bandwidth adjustment method based on the FlexE service and a network device, which are used to solve the problem that service data transmission is easily impaired, or service data reception is easily erroneous, when the bandwidth of the transmission channel carrying the service data is adjusted.
In a first aspect, an embodiment of the present application provides a method for adjusting the bandwidth of a FlexE service. The method may be applied to a network device located on the service data sending side, and may include: according to the requirement of sending data, determining that the current sending channel needs to be adjusted from a first channel to a second channel at a second time point, where the first bandwidth of the first channel is greater than the second bandwidth of the second channel; at a first time point before the second time point, starting to write the service data to be sent into a buffer at a rate greater than the first bandwidth, and filling the service data in the buffer into the first channel and the second channel; further, between the first time point and the second time point, sending, through the first channel, the service data filled into the first channel; further, at the second time point, starting to send, through the second channel, the service data filled into the second channel; where the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period.
With this design, the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period. Therefore, when the sending channel is adjusted from the first channel to the second channel at the second time point, the second channel can immediately start sending service data, the boundary of the service data sent by the first channel is aligned with the boundary of the service data sent by the second channel, no part of the service data sent by the second channel is lost, and lossless sending of the service data can be achieved.
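The following sketch is a minimal, time-stepped simulation of the sender-side rule described above. All rates, the period length and the tick size are assumed example values, not figures from the patent; it only illustrates that writing into the buffer at a rate greater than the first bandwidth lets the second channel accumulate exactly one period of data between the first time point T1 and the second time point T2.

FIRST_BW = 5.0    # Gbps, first (larger) channel
SECOND_BW = 1.0   # Gbps, second (smaller) channel
WRITE_RATE = 6.0  # Gbps, rate of writing into the buffer, chosen > FIRST_BW
PERIOD_US = 10.0  # assumed length of one sending period on the second channel
TICK_US = 0.1     # simulation step

one_period_bits = SECOND_BW * 1e3 * PERIOD_US   # 1 Gbps = 1e3 bits per microsecond

buffer_bits = 0.0
second_channel_bits = 0.0
t = 0.0
# Run from T1 until the second channel holds one period of data; that instant is T2.
while second_channel_bits < one_period_bits:
    buffer_bits += WRITE_RATE * 1e3 * TICK_US                # write into the buffer
    sent_first = min(buffer_bits, FIRST_BW * 1e3 * TICK_US)
    buffer_bits -= sent_first                                # first channel keeps sending
    fill_second = min(buffer_bits, SECOND_BW * 1e3 * TICK_US)
    buffer_bits -= fill_second
    second_channel_bits += fill_second                       # pre-fill the second channel
    t += TICK_US

print(f"lead time T2 - T1 ~= {t:.1f} us")   # about one period of the second channel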
In one possible design, after starting to send, through the second channel, the service data filled into the second channel at the second time point, the method may further include: when the remaining service data in the buffer has been sent through the second channel, stopping writing the service data that subsequently needs to be sent into the buffer, and continuing to send, through the second channel, the service data that subsequently needs to be sent.
With this design, once the service data in the buffer has been sent through the second channel, the subsequent service data to be sent no longer needs to be buffered, so the power consumption of the sending device for processing data can be reduced.
In one possible design, between the first time point and the second time point, the sending channel currently corresponding to the data inlet includes the first channel and the second channel, and the sending channel currently corresponding to the data outlet is the first channel; after the second time point, the sending channel currently corresponding to both the data inlet and the data outlet is the second channel.
With this design, when the first time point is reached, the service data to be sent can be written into the buffer at a rate greater than the first bandwidth, and the service data in the buffer can be filled into the first channel and the second channel.
In a second aspect, an embodiment of the present application provides a method for adjusting the bandwidth of a FlexE service. The method may be applied to a network device located on the service data receiving side, and may include: according to the requirement of receiving data, determining that the current receiving channel needs to be adjusted from a second channel to a first channel at a second time point, where the first bandwidth of the first channel is greater than the second bandwidth of the second channel; at the second time point, starting to write the service data that needs to be received through the first channel into a first buffer, and continuing to receive the service data remaining in the second channel; at a third time point after the second time point, starting to receive, through the first channel, the service data in the first buffer; or, at the second time point, starting to write the service data received through the first channel into a second buffer, and continuing to receive the service data remaining in the second channel; and starting to output the service data in the second buffer at the third time point; where the third time point is equal to or later than the time point at which the service data remaining in the second channel has been received.
With this design, the duration from the second time point to the third time point is sufficient for receiving the service data remaining in the second channel. Therefore, when the service data in the first buffer starts to be received through the first channel, or the service data in the second buffer starts to be output, at the third time point, the service data received through the first channel is processed only after the service data received through the second channel has been processed. The order of the service data received through the first channel and the service data received through the second channel is thus preserved, the received service data does not become disordered, and lossless reception of the service data can be achieved.
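A minimal sketch of the timing constraint above follows. The function name and the example figures are assumptions made here for illustration; the only point it demonstrates is that the third time point T3 must not come earlier than the time needed to drain the data still in flight on the second channel at T2.

def third_time_point(t2_us: float, remaining_bits: float, second_bw_gbps: float) -> float:
    """T3 must not be earlier than the instant at which the service data
    remaining in the second channel has been received."""
    # 1 Gbps corresponds to 1e3 bits per microsecond
    drain_time_us = remaining_bits / (second_bw_gbps * 1e3)
    return t2_us + drain_time_us

# Example: 4000 bits still queued on a 1G second channel at T2 = 0 us
t3 = third_time_point(t2_us=0.0, remaining_bits=4_000, second_bw_gbps=1.0)
print(f"release the first-channel buffer no earlier than T3 = {t3:.1f} us")   # 4.0 us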
In one possible design, starting to output the service data in the second buffer at the third time point may include: at the third time point, starting to output the service data in the second buffer at a rate greater than the first bandwidth.
With this design, the service data in the second buffer can be emptied, so that the subsequent service data received through the first channel no longer needs to be buffered, and the power consumption of the receiving device for processing data can be reduced.
In one possible design, starting to receive, through the first channel, the service data in the first buffer at a third time point after the second time point may include: at the third time point, starting to read the service data in the first buffer at a rate greater than the first bandwidth, and filling the read service data into the first channel; and receiving the service data filled into the first channel.
With this design, the service data in the first buffer can be emptied, so that the subsequent service data that needs to be received through the first channel no longer needs to be buffered, and the power consumption of the receiving device for processing data can be reduced.
In one possible design, after starting to output the service data in the second buffer at the third time point, the method may further include: when the service data in the second buffer has been output, stopping writing the service data received through the first channel into the second buffer.
With this design, the subsequent service data received through the first channel no longer needs to be buffered, so the power consumption of the receiving device for processing data can be reduced.
In one possible design, after starting to receive, through the first channel, the service data in the first buffer at a third time point after the second time point, the method may further include: when the service data in the first buffer has been received through the first channel, stopping writing the service data that needs to be received through the first channel into the first buffer, and continuing to receive the service data that needs to be received through the first channel.
With this design, the service data that needs to be received through the first channel no longer needs to be buffered, so the power consumption of the receiving device for processing data can be reduced.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet comprises the first channel and the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
With this design, the service data received through the first channel and the second channel is not output simultaneously between the second time point and the third time point, and the duration between the second time point and the third time point is sufficient for receiving the service data remaining in the second channel. Therefore, when the service data in the second buffer starts to be output at the third time point, the service data received through the first channel is processed only after the service data received through the second channel has been processed, the received service data does not become disordered, and lossless reception of the service data can be achieved.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet is the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
With this design, the service data received through the first channel and the second channel is not output simultaneously between the second time point and the third time point, and the duration between the second time point and the third time point is sufficient for receiving the service data remaining in the second channel. Therefore, when the service data in the first buffer is received through the first channel at the third time point, the service data received through the first channel is processed only after the service data received through the second channel has been processed, the received service data does not become disordered, and lossless reception of the service data can be achieved.
In a third aspect, an embodiment of the present application provides a method for adjusting bandwidth of a FlexE service, where the method may be applied to a network device located on a service data sending side, and the method may include: according to the requirement of sending data, determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel; starting to fill the second channel with preset format data or idle data at a first time point before the second time point; starting to send the preset format data or the idle data filled into the second channel through the second channel at the second time point; wherein the amount of data of the preset format data or the idle data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel for one period; further, at a third time point after the second time point, starting to transmit service data through the second channel; the time length from the second time point to the third time point is equal to the time length required for sending the preset format data or the idle data filled into the second channel.
With this design, the amount of the preset format data or idle data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period, so the second channel can start sending data when the sending channel is adjusted from the first channel to the second channel at the second time point, and the boundary of the data sent by the first channel is aligned with the boundary of the data sent by the second channel. Furthermore, the duration between the second time point and the third time point is equal to the time required to send the preset format data or idle data filled into the second channel, so when the service data starts to be sent through the second channel at the third time point, no complete service data packet is truncated, no part of the data sent by the second channel is lost, and lossless sending of the service data can be achieved.
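The sketch below works through the two quantities this third aspect fixes: how much idle or preset format data is pre-filled into the second channel between T1 and T2, and how long after T2 the real service data may start (T3). The bandwidth and period values are assumed examples, not values from the patent.

SECOND_BW_GBPS = 1.0   # assumed bandwidth of the second (smaller) channel
PERIOD_US = 10.0       # assumed length of one sending period on the second channel

# amount of preset format data or idle data filled between T1 and T2
prefill_bits = SECOND_BW_GBPS * 1e3 * PERIOD_US        # 1 Gbps = 1e3 bits per us

# T3 - T2: time needed to send that pre-fill through the second channel
t3_minus_t2_us = prefill_bits / (SECOND_BW_GBPS * 1e3)

print(f"pre-fill {prefill_bits:.0f} bits between T1 and T2; "
      f"start service data {t3_minus_t2_us:.1f} us after T2")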
In one possible design, before the second point in time, the transmission channels currently corresponding to the data entry and the data exit are both the first channel; and after the second time point, the sending channels currently corresponding to the data inlet and the data outlet are the second channels.
By adopting the design, the preset format data or idle data filled into the second channel can be sent through the second channel at the second time point, the aim of aligning the boundaries of the data sent by the first channel and the service data sent by the second channel can be achieved, further, when the service data is sent through the second channel at the third time point, the complete service data packet is not cut off, the phenomenon that part of the service data sent by the second channel is lost can be avoided, and the lossless effect of the service data sending can be achieved.
In a fourth aspect, an embodiment of the present application provides a method for adjusting bandwidth of a FlexE service, where the method may be applied to a network device located on a service data receiving side, and the method may include: according to the requirement of the received data, determining that the current receiving channel is required to be adjusted from the first channel to the second channel at a second time point; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel; at the second time point, deleting the preset format data or idle data received through the second channel; beginning to receive traffic data over the second channel at a third point in time subsequent to the second point in time; the duration from the second time point to the third time point is equal to the duration required by deleting the preset format data or the idle data received through the second channel.
With the above design, the duration from the second time point to the third time point is equal to the time required to delete the preset format data or idle data received through the second channel, and the receiving rate of the first channel is greater than that of the second channel, so the receiving device has finished processing the service data received through the first channel by the time the third time point is reached. Therefore, when the service data starts to be received through the second channel at the third time point, the service data received through the second channel is processed only after the service data received through the first channel has been processed. The order of the service data received through the first channel and the service data received through the second channel is thus preserved, the received service data does not become disordered, and lossless reception of the service data can be achieved.
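A minimal sketch of the receiver-side rule in this fourth aspect follows; the block representation and names are assumptions made for illustration. It simply discards the leading idle / preset format blocks that arrive on the second channel after T2 and passes service data through from T3 onward.

IDLE = object()   # stands for an idle / preset format block

def drop_prefill(blocks):
    """Discard the leading idle/preset blocks received on the second channel
    after T2; the first service block marks T3, after which data passes through."""
    it = iter(blocks)
    for blk in it:
        if blk is not IDLE:
            yield blk            # first real service block (T3 reached)
            break
    yield from it                # everything received after T3

received = [IDLE, IDLE, IDLE, "svc-0", "svc-1"]
print(list(drop_prefill(received)))   # ['svc-0', 'svc-1']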
In one possible design, before the second point in time, the currently corresponding receiving channels of the data inlet and the data outlet are both the first channel; and after the second time point, the receiving channels currently corresponding to the data inlet and the data outlet are both the second channels.
By adopting the design, the preset format data or idle data received through the second channel can be deleted at the second time point, further, when the service data is received through the second channel at the third time point, the purpose of starting to process the service data received by the second channel after the service data received by the first channel is processed can be achieved, the effect of preserving the sequence of the service data received through the first channel and the service data received through the second channel can be achieved, the phenomenon of disorder of the received service data can not occur, and further, the lossless effect of the service data receiving can be achieved.
In a fifth aspect, embodiments of the present application provide a network device, including: a processing unit and a sending unit;
the processing unit is configured to determine, according to the requirement of sending data, that the current sending channel needs to be adjusted from a first channel to a second channel at a second time point, where the first bandwidth of the first channel is greater than the second bandwidth of the second channel; and, at a first time point before the second time point, start writing the service data to be sent into a buffer at a rate greater than the first bandwidth, and fill the service data in the buffer into the first channel and the second channel;
the sending unit is configured to send, through the first channel, the service data filled into the first channel between the first time point and the second time point; and start sending, through the second channel, the service data filled into the second channel at the second time point; where the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period.
In one possible design, the processing unit may be further configured to: stop writing the service data that subsequently needs to be sent into the buffer when the remaining service data in the buffer has been sent through the second channel; the sending unit may be further configured to: continue to send, through the second channel, the service data that subsequently needs to be sent.
In one possible design, between the first time point and the second time point, the transmission channel currently corresponding to the data entry includes the first channel and the second channel, and the transmission channel currently corresponding to the data exit is the first channel; and after the second time point, the sending channels currently corresponding to the data inlet and the data outlet are the second channels.
The advantages of the fifth aspect and possible designs thereof described above may be referred to the description of the advantages of the method described in the first aspect and any of the possible designs thereof described above.
In a sixth aspect, embodiments of the present application provide a network device, including: a processing unit and a receiving unit;
the processing unit is used for determining that the current receiving channel is required to be adjusted from a second channel to a first channel at a second time point according to the requirement of the received data, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel; at the second time point, starting to write the service data required to be received through the first channel into a first buffer, or starting to write the service data required to be received through the first channel into a second buffer;
the receiving unit is configured to continuously receive, at the second time point, service data remaining in the second channel; starting to receive the service data in the first buffer through the first channel or starting to output the service data in the second buffer at a third time point after the second time point; wherein the third time point is equal to or later than a time point when the service data remaining in the second channel is received.
In one possible design, the receiving unit may be specifically configured to: and at the third time point, starting to output the service data in the second buffer memory at a rate greater than the first bandwidth.
In one possible design, the processing unit may be specifically configured to: at the third time point, start reading the service data in the first buffer at a rate greater than the first bandwidth, and fill the read service data into the first channel; the receiving unit may be specifically configured to: receive the service data filled into the first channel.
In one possible design, the processing unit may also be configured to: and stopping writing the service data received by the first channel into the second buffer memory when the receiving unit outputs the service data in the second buffer memory.
In one possible design, the processing unit may also be configured to: when the receiving unit receives the service data in the first buffer memory through the first channel, stopping writing the service data which is required to be received through the first channel into the first buffer memory; the receiving unit may be further configured to: and continuing to receive the service data which is required to be received through the first channel.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet comprises the first channel and the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet is the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
The advantages of the sixth aspect and possible designs thereof described above may be referred to the description of the advantages of the method of the second aspect and any of the possible designs thereof described above.
In a seventh aspect, embodiments of the present application provide a network device, including: a processing unit and a transmitting unit;
the processing unit is used for determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point according to the requirement of sending data, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel; starting to fill the second channel with preset format data or idle data at a first time point before the second time point;
The sending unit is configured to start sending, at the second time point, the preset format data or the idle data filled into the second channel through the second channel; wherein the amount of data of the preset format data or the idle data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel for one period; beginning to transmit traffic data over the second channel at a third point in time subsequent to the second point in time; the time length from the second time point to the third time point is equal to the time length required for sending the preset format data or the idle data filled into the second channel.
In one possible design, before the second point in time, the transmission channels currently corresponding to the data entry and the data exit are both the first channel; and after the second time point, the sending channels currently corresponding to the data inlet and the data outlet are the second channels.
The advantages of the seventh aspect and possible designs thereof described above may be referred to the description of the advantages of the method of the third aspect and any of the possible designs thereof described above.
In an eighth aspect, embodiments of the present application provide a network device, including: a processing unit and a receiving unit;
the processing unit is used for determining that the current receiving channel is required to be adjusted from the first channel to the second channel at the second time point according to the requirement of the received data; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel; starting to delete the preset format data or idle data received by the receiving unit through the second channel at the second time point;
the receiving unit is configured to start receiving service data through the second channel at a third time point after the second time point; the duration from the second time point to the third time point is equal to the duration required by deleting the preset format data or the idle data received through the second channel.
In one possible design, before the second point in time, the currently corresponding receiving channels of the data inlet and the data outlet are both the first channel; and after the second time point, the receiving channels currently corresponding to the data inlet and the data outlet are the second channel.
The advantages of the above-mentioned eighth aspect and possible designs thereof may be referred to the above description of the advantages of the method of the fourth aspect and any of the possible designs thereof.
In a ninth aspect, embodiments of the present application provide a network device, where the network device includes one or more processors and one or more memories or non-volatile storage media, where the one or more processors are connected to the one or more memories or non-volatile storage media, and one or more computer instructions or computer programs are stored in the one or more memories or non-volatile storage media, which when executed by the one or more processors, cause the network device to perform the methods related to the first aspect to the fourth aspect.
In a tenth aspect, embodiments of the present application provide a chip, including: at least one processor and an interface, which may be a code/data read-write interface, for providing computer instructions (computer instructions stored in memory, possibly read directly from memory, or possibly via other devices) to the at least one processor; the at least one processor is configured to execute the computer instructions to implement the method according to any one of the first to fourth aspects.
In an eleventh aspect, embodiments of the present application provide a computer-readable storage medium or a non-volatile storage medium, in which computer instructions or a computer program are stored which, when invoked by a computer, cause the computer to perform the method of any of the above first to fourth aspects, or when run on one or more processors, cause a network device comprising the one or more processors to perform the method of any of the above first to fourth aspects.
In a twelfth aspect, embodiments of the present application provide a computer program product for storing a computer program for causing a computer to perform the method as described in any one of the first to fourth aspects above, when the computer program is run on the computer.
In a thirteenth aspect, embodiments of the present application provide a network system, where the network system includes two network devices; a network device (located on the side of sending service data) for performing the steps performed by the network device in the first aspect or the third aspect, or in the solution provided in the embodiments of the present application; the other network device (located on the side of receiving service data) is configured to perform the steps performed in the second aspect or the fourth aspect, or performed by the network device in the solution provided in the embodiments of the present application.
Drawings
fig. 1 is a schematic diagram of a conventional communication system based on a flexible Ethernet protocol;
fig. 2 is a schematic diagram of a conventional data transmission by a binding function of FlexE;
fig. 3 is a schematic diagram of a conventional data transmission by FlexE sub-rate function;
FIG. 4 is a schematic diagram of a conventional data transmission by a FlexE tunneling function;
fig. 5 is a schematic diagram of a data transmission flow at a transmitting device side in the prior art;
fig. 6 is a schematic diagram of a data receiving flow at a receiving device side in the prior art;
fig. 7 is a flow chart of a bandwidth adjustment method based on FlexE service according to an embodiment of the present application;
fig. 8 is a schematic diagram of bandwidth sizes of a first channel and a second channel according to an embodiment of the present application;
fig. 9 is a schematic diagram of a process of adjusting bandwidth at a transmitting device side according to an embodiment of the present application;
fig. 10 is a schematic diagram of a data transmission flow at a transmitting device side according to an embodiment of the present application;
fig. 11 is a schematic diagram of a process of adjusting bandwidth at a transmitting device side according to an embodiment of the present application;
fig. 12 is a flow chart of a bandwidth adjustment method based on FlexE service according to an embodiment of the present application;
fig. 13 is a schematic diagram of a process of adjusting bandwidth at a receiving device side according to an embodiment of the present application;
fig. 14 is a schematic process diagram of bandwidth adjustment at a receiving device side according to an embodiment of the present application;
fig. 15 is a schematic diagram of a data receiving flow at a receiving device side according to an embodiment of the present application;
fig. 16 is a schematic diagram of a data receiving flow at a receiving device side according to an embodiment of the present application;
fig. 17 is a schematic process diagram of bandwidth adjustment at a receiving device side according to an embodiment of the present application;
fig. 18 is a schematic diagram of a process for adjusting bandwidth at a receiving device according to an embodiment of the present application;
fig. 19 is a flow chart of a bandwidth adjustment method based on FlexE service according to an embodiment of the present application;
fig. 20 is a schematic diagram of a process of adjusting bandwidth at a transmitting device side according to an embodiment of the present application;
fig. 21 is a flow chart of a bandwidth adjustment method based on FlexE service according to an embodiment of the present application;
fig. 22 is a schematic process diagram of bandwidth adjustment at a receiving device side according to an embodiment of the present application;
fig. 23 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 24 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 25 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 26 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 27 is a schematic structural diagram of a network device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before describing the technical solutions provided in the embodiments of the present application, the data transmission principle in an existing FlexE group is first described with reference to fig. 1 to fig. 4, so that those skilled in the art can easily understand the technical solutions provided in the embodiments of the present application.
As shown in fig. 1, which is a schematic diagram of a conventional communication system based on the flexible Ethernet protocol, one FlexE Group may contain one or more physical link interfaces (PHYs); the FlexE Group in fig. 1 includes 4 PHYs. A flexible Ethernet protocol client (FlexE Client) is a client data stream transmitted over one or more specified transmission channels of a FlexE Group. One FlexE Group may carry multiple FlexE Clients, one FlexE Client corresponds to one user service data stream (which may be referred to as a MAC Client), and the flexible Ethernet protocol functional layer (FlexE Shim) provides data adaptation and conversion between FlexE Clients and MAC Clients.
FlexE can bind a plurality of 100GE PHY interfaces and divides each 100GE port in the time domain into 20 transmission channels with a 5G bandwidth granularity. FlexE can support the following functions:
A. Binding function.
As shown in fig. 2, FlexE can support MAC traffic at a rate greater than that of a single PHY by bundling multiple PHYs into one link group. For example, take FlexE Client a transmitting a 200G service as an example: because the rate of the service is greater than that of a single PHY, a single PHY cannot carry the 200G service; by bundling PHY a and PHY b into one 200G link group, FlexE can support FlexE Client a in transmitting the 200G service.
B. Sub-rate function.
As shown in fig. 3, FlexE can support MAC traffic at a rate smaller than the link group bandwidth, or smaller than the rate of a single PHY, by allocating transmission channels to the traffic. For example, take FlexE Client a transmitting a 75G service as an example: because the rate of the 75G service is smaller than that of a single PHY, part of the transmission channels of a single PHY are sufficient to carry the 75G service; by allocating part of the transmission channels of one PHY to FlexE Client a and treating the remaining transmission channels not occupied by the 75G service as idle transmission channels, FlexE can support FlexE Client a in transmitting the 75G service.
C. Channelization function.
As shown in fig. 4, FlexE can support simultaneous transmission of multiple MAC traffic flows in a link group by allocating transmission channels to the traffic, for example one 125G MAC flow and one 75G MAC flow in a link group consisting of two PHYs. For example, suppose FlexE Client a transmits a 75G service and FlexE Client b transmits a 125G service. The rate of the service transmitted by FlexE Client a is smaller than that of a single PHY, so part of the transmission channels of a single PHY can carry the 75G service; the rate of the service transmitted by FlexE Client b is greater than that of a single PHY, so a single PHY cannot carry the 125G service, while two PHYs can carry it but would leave part of their transmission channels unused. To increase the usage of the transmission channels of the PHYs, FlexE can support FlexE Client a and FlexE Client b by allocating transmission channels of the link group to FlexE Client a and FlexE Client b in different proportions, as the sketch after this paragraph illustrates.
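The slot arithmetic behind this channelization example can be sketched as follows (illustrative only; the dictionary-based bookkeeping is an assumption made here, not how a FlexE implementation represents its calendar).

SLOT_G = 5                                     # slot granularity in Gbit/s
total_slots = 2 * (100 // SLOT_G)              # two 100GE PHYs -> 40 slots

clients = {"FlexE Client a": 75, "FlexE Client b": 125}   # service rates in Gbit/s
allocation = {name: rate // SLOT_G for name, rate in clients.items()}

print(allocation)                              # {'FlexE Client a': 15, 'FlexE Client b': 25}
print("slots used:", sum(allocation.values()), "of", total_slots)   # 40 of 40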
The process of data transmission by the existing FlexE is described in detail below with reference to fig. 5 and fig. 6.
Fig. 5 illustrates a data transmission flow on the transmitting device side. As shown in fig. 5, a FlexE group formed by N PHYs may include one or more FlexE Clients, for example FlexE Client #1 and FlexE Client #2 through FlexE Client #M shown in fig. 5. The data corresponding to each FlexE Client is processed, for example 64B/66B encoded, and sent to the FlexE Shim. For example, the data corresponding to FlexE Client #1 is 64B/66B encoded and sent to the FlexE Shim. In fig. 5 the encoding mode is 64B/66B encoding; those skilled in the art will recognize that other encoding modes may also be applied.
In FlexE, the clock frequencies of the client clock domain and the FlexE clock domain may deviate from each other, and the overhead (OH) and the alignment code blocks (AM) inserted on the FlexE interface require a certain bandwidth overhead (an AM is a code block used for performing the alignment operation). Rate adaptation between a FlexE Client and the FlexE Group can therefore be achieved by IDLE code block insertion/deletion. For example, rate adaptation between FlexE Client #1 and the FlexE Group can be achieved by inserting idle code blocks into, or deleting idle code blocks from, the encoded code block stream of FlexE Client #1. Alternatively, rate adaptation may be performed with other code blocks, for example by deleting ordered set code blocks; fig. 5 is illustrated only with IDLE code block insertion/deletion as an example.
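A minimal sketch of this idle-block rate adaptation is given below; the list-of-strings representation of the code block stream is an assumption for illustration. Only idle blocks are ever inserted or removed, so the service data itself is unchanged.

IDLE_BLOCK = "IDLE"

def rate_adapt(blocks, target_len):
    """Pad with, or strip, idle blocks so the stream carries target_len blocks."""
    out = list(blocks)
    while len(out) < target_len:               # client slower than the group: insert
        out.append(IDLE_BLOCK)
    while len(out) > target_len and IDLE_BLOCK in out:
        out.remove(IDLE_BLOCK)                 # client faster than the group: delete
    return out

stream = ["D0", "D1", IDLE_BLOCK, "D2"]
print(rate_adapt(stream, 6))   # ['D0', 'D1', 'IDLE', 'D2', 'IDLE', 'IDLE']
print(rate_adapt(stream, 3))   # ['D0', 'D1', 'D2']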
Further, the code block stream of each FlexE Client, after IDLE code block insertion/deletion, may be allocated to the transmission channels corresponding to that FlexE Client by the mapping table (Calendar). For example, the FlexE Shim operates at an N x 100GE rate and, in a time division multiplexing manner, schedules the service data sent by FlexE Clients with different MAC-layer transmission rates onto 5G transmission channels, distributing the service data into N FlexE instance frames each with a transmission rate of 100G, such as the FlexE #1 instance and the FlexE #2 instance through the FlexE #N instance shown in fig. 5.
According to the predefined frame format, overhead insertion is performed on the 64B/66B code block streams carried by the 20 transmission channels corresponding to each PHY, and according to the correspondence between the transmission channels and each PHY, the code block streams with the inserted overhead are distributed to each PHY for transmission. The sub-mapping table (Sub-calendar) of each PHY may represent the allocation relationship of the 20 transmission channels of that PHY, and each PHY may correspond to a FlexE instance frame with a transmission rate of 100G. After each code block stream is mapped to each PHY, it passes from the FlexE-defined sublayers to the physical layer interface, such as the 802.3-defined sublayers shown in fig. 5; the 802.3-defined sublayers can also be understood as the standard physical layer content defined by 802.3. Optionally, the 802.3-defined sublayers provide a number of functions, such as a scrambling function defined by the 802.3 standard; optionally, lane distribution, alignment code block insertion (AM insertion), the physical medium attachment (PMA) sublayer, the physical medium dependent (PMD) sublayer, and so on may also be included to process the received data accordingly.
During data transmission on the transmitting device side, FlexE may provide a transmission channel adjustment (reconfiguration) mechanism for each FlexE Client, so as to dynamically adjust the transmission channels carrying the service data of each FlexE Client. Specifically, taking FlexE Client #1 as an example, the sub-mapping table (Sub-calendar) of the PHY corresponding to FlexE Client #1 may represent the allocation relationship between the 20 transmission channels of that PHY and FlexE Client #1. In the FlexE Group, the sub-mapping table of a PHY may have two sub-calendars, sub-calendar A (Sub-calendar A, indicated by the bit 0) and sub-calendar B (Sub-calendar B, indicated by the bit 1), where sub-calendar B serves as the backup mapping table. Switching between the two mapping tables may be achieved by a request/acknowledge (request/ack) mechanism embedded in the overhead management channel. In the process of switching from sub-calendar A to sub-calendar B, the number of transmission channels allocated to FlexE Client #1 is increased or decreased, so that the transmission channel capacity occupied by FlexE Client #1 in FlexE changes. This achieves dynamic adjustment of the transmission channels carrying the service data of FlexE Client #1 and, in turn, dynamic adjustment of the bandwidth.
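A minimal sketch of this A/B sub-calendar handshake is shown below; the class, method names and slot lists are assumptions made here for illustration, not the overhead frame format defined by the FlexE standard.

class CalendarSwitch:
    """Toy model of the sub-calendar A/B switch: program the backup calendar,
    send a request in the overhead, switch on acknowledgement."""

    def __init__(self, cal_a_slots, cal_b_slots):
        self.calendars = {0: list(cal_a_slots), 1: list(cal_b_slots)}  # 0 = A, 1 = B
        self.active = 0          # sub-calendar A in use
        self.pending = None

    def request(self, new_slots):
        backup = 1 - self.active
        self.calendars[backup] = list(new_slots)   # program the backup calendar
        self.pending = backup                      # request carried in the overhead
        return backup

    def acknowledge(self):
        if self.pending is not None:               # acknowledgement received
            self.active = self.pending
            self.pending = None

switch = CalendarSwitch(cal_a_slots=[0, 1, 2, 3, 4], cal_b_slots=[])
switch.request(new_slots=[0])    # e.g. shrink FlexE Client #1 from 5 slots to 1
switch.acknowledge()
print(switch.calendars[switch.active])   # [0]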
Fig. 6 illustrates a data receiving flow on the receiving device side. Data from the transmitting device side is transmitted to the receiving device side through the standard physical interface, and the 802.3-defined sublayers on the receiving device side perform a series of processing steps on the data received through each PHY, such as the physical medium dependent (PMD) sublayer, physical medium attachment (PMA) sublayer, physical coding sublayer (PCS), channel alignment (lane deskew), alignment code block deletion (AM remove), and descrambling, as shown in fig. 6.
Further, the data stream corresponding to each PHY undergoes overhead locking after descrambling, and an overhead extraction operation is performed after the overhead is locked. After the overhead extraction operation is performed on the data stream corresponding to each PHY, the data stream may be delivered to the mapping table (Calendar) according to the correspondence between each PHY and the transmission channels. In the Calendar, the code block stream corresponding to each FlexE Client can be recovered according to the allocation relationship between each FlexE Client and the transmission channels. The sub-mapping table (Sub-calendar) of each PHY may represent the allocation relationship of the 20 transmission channels of that PHY, and each PHY may correspond to a FlexE instance frame with a transmission rate of 100G; as shown in fig. 6, the FlexE #1 instance and the FlexE #2 instance through the FlexE #N instance each correspond to one PHY, with a transmission rate of 100G. Specifically, the N FlexE instance frames may be obtained by the FlexE Shim, at the 100GE rate and in a time division multiplexing manner, scheduling and distributing the service data of FlexE Clients with different MAC-layer transmission rates onto 5G transmission channels.
Further, since in FlexE there is a deviation of clock frequencies in the Client clock domain (Client clock domain) and FlexE clock domain (FlexE clock domain), and since inserting OH and AM on FlexE interfaces requires a certain bandwidth overhead, rate adaptation of FlexE clients and FlexE groups can be achieved by IDLE code block insertion/deletion (IDLE insert/delete). For example, the code block stream corresponding to FlexE Client #1 recovered by the Calendar may be inserted/deleted through the idle code block to implement rate adaptation of FlexE Client #1 and FlexE Group. Alternatively, rate adaptation may be performed by other code blocks, such as rate adaptation by deleting ordered-set code blocks (ordered set block) to FlexE Client and FlexE Group, and the like, and is schematically illustrated in fig. 6 by way of example only with the insertion/deletion of IDLE code blocks.
The code block stream corresponding to each FlexE Client recovered by the Calendar may sequentially undergo idle code block insertion/deletion and decoding, and the result is sent to the corresponding FlexE Client. For example, the code block stream corresponding to FlexE Client #1 recovered by the Calendar may sequentially undergo idle code block insertion/deletion and 64B/66B decoding, and be sent to FlexE Client #1. In fig. 6 the decoding mode is 64B/66B decoding; those skilled in the art will recognize that other decoding modes may also be used.
In the process of receiving data at the receiving device side, the FlexE may provide a transmission channel adjustment (reconfiguration) mechanism corresponding to each FlexE Client, so as to implement dynamic adjustment of a transmission channel carrying service data of each FlexE Client. The specific process may refer to the above process of implementing the dynamic adjustment of the transmission channel carrying the service data of each FlexE Client on the transmitting device side, which is not described herein again.
From the foregoing, it can be seen that FlexE can adjust the bandwidth of the transmission channel carrying each service, so as to meet the different bandwidth requirements of the services. However, when FlexE adjusts from a large transmission channel to a small transmission channel, or vice versa, the paths of the two transmission channels are switched directly at the switching boundary point, which easily causes part of the service data to be lost on the transmitting device side, or causes the received service data to be out of order on the receiving device side. For example:
Example 1: in a scenario in which FlexE adjusts a large transmission channel to a small transmission channel, consider a FlexE Client whose bandwidth varies (hereinafter referred to as the first FlexE Client), where the bandwidth of the large transmission channel currently carrying the service data of the first FlexE Client is 5G and the bandwidth of the small transmission channel is 1G. When the transmission rate of the first FlexE Client needs to be switched from 5G to 1G, FlexE may divide each large transmission channel that subsequently needs to carry the service data of the first FlexE Client into 5 small transmission channels with 1G granularity, and then, when the switching boundary point is reached, adjust the large transmission channel currently carrying the service data of the first FlexE Client to the small transmission channels and transmit the service data of the first FlexE Client through the small transmission channels. However, because the transmission rate of the large transmission channel is greater than that of the small transmission channel, on the transmitting device side shown in fig. 5, when the switching boundary point is reached the large transmission channel is likely to have sent its service data while the small transmission channel has not yet started sending. The service data sent by the large transmission channel and the service data sent by the small transmission channel are therefore not boundary-aligned, the FlexE instance frames corresponding to the first FlexE Client become discontinuous, and part of the service data sent by the small transmission channel is lost, so that the service data of the first FlexE Client is damaged. On the receiving device side shown in fig. 6, after the switching boundary point is reached, processing of the service data received through the small transmission channel starts only after the processing of the service data received through the large transmission channel is completed, so the order of the service data received through the large and small transmission channels remains continuous (i.e. the service data stays in order) during the adjustment of the transmission channel, and the service data of the first FlexE Client is received without loss.
Example 2: in a scenario in which FlexE adjusts a small transmission channel to a large transmission channel, consider the case where the bandwidth of the small transmission channel currently carrying the service data of the first FlexE Client is 1G and the bandwidth of the large transmission channel is 5G. When the transmission rate of the first FlexE Client needs to be switched from 1G to 5G, FlexE may combine the small transmission channels that subsequently need to carry the service data of the first FlexE Client into large transmission channels with 5G granularity, and then, when the switching boundary point is reached, adjust the small transmission channel currently carrying the service data of the first FlexE Client to the large transmission channels and transmit the service data of the first FlexE Client through the large transmission channels. Because the transmission rate of the small transmission channel is smaller than that of the large transmission channel, on the transmitting device side shown in fig. 5, the small transmission channel can finish sending its service data by the time the switching boundary point is reached, sending through the large transmission channel can then start, and the boundaries of the service data sent by the large and small transmission channels are aligned, so the problem of discontinuous FlexE instance frames that exists on the transmitting device side in example 1 does not occur. On the receiving device side shown in fig. 6, however, when the switching boundary point is reached the receiving device has not yet finished processing the service data received through the small transmission channel but has already started processing the service data received through the large transmission channel. As a result, for a period of time after the transmission channel is switched, the service data received through the small and large transmission channels are output simultaneously, data disorder occurs in the FlexE instance frames corresponding to the first FlexE Client, the service data becomes discontinuous in order (i.e. the service data is no longer in order) and disordered in content, and the service data of the first FlexE Client is erroneous.
In the following, the large transmission channel is referred to as the first channel, the small transmission channel as the second channel, a transmission channel on the transmitting device side as a sending channel, and a transmission channel on the receiving device side as a receiving channel; on this basis, the technical solutions provided in the embodiments of the present application are described in detail.
In the first embodiment of the present application, in order to solve the problem in example 1 that part of the service data is easily lost when the transmitting device (the network device that sends the service data) side is adjusted from the first channel to the second channel, the transmitting device may, at a first time point before the switching boundary point (hereinafter referred to as the second time point) is reached, start writing the service data to be sent into a buffer at a rate greater than the first bandwidth of the first channel, and fill the service data in the buffer into the first channel and the second channel; optionally, the service data in the buffer may be filled into the first channel and the second channel in parallel. Further, between the first time point and the second time point, the service data filled into the first channel is sent through the first channel, and at the second time point the service data filled into the second channel starts to be sent through the second channel. The amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period.
In the first embodiment, the service data to be sent is written into the buffer starting at the first time point before the second time point, and the amount of service data filled into the second channel between the first time point and the second time point can be made equal to the amount of data sent on the second channel in one period. Therefore, when the sending channel is adjusted from the first channel to the second channel at the second time point, the second channel can immediately start sending service data, the boundaries of the service data sent by the first channel and by the second channel are aligned, no part of the service data sent by the second channel is lost, and lossless sending of the service data is achieved.
In the second embodiment of the present application, in order to solve the problem in example 2 that the service data is easily disordered when the receiving channel on the receiving device (the network device that receives the service data) side is adjusted from the second channel to the first channel, the receiving device may, when the second time point is reached, start to write the service data that needs to be received through the first channel into a first buffer and continue to receive the service data remaining in the second channel; further, at a third time point after the second time point, it may start to receive the service data in the first buffer through the first channel. Alternatively, the receiving device may, when the second time point is reached, start to write the service data received through the first channel into a second buffer and continue to receive the service data remaining in the second channel, and then start to output the service data in the second buffer when the third time point is reached. The third time point is equal to or later than (i.e., not earlier than) the time point at which the service data remaining in the second channel has been completely received.
In the second embodiment, the service data that needs to be received through the first channel starts to be written into the first buffer at the second time point, or the service data received through the first channel starts to be written into the second buffer, while the remaining data in the second channel continues to be processed; at the third time point after the second time point, the service data in the first buffer starts to be received through the first channel, or the service data in the second buffer starts to be output. Because the duration between the second time point and the third time point is sufficient for receiving the service data remaining in the second channel, the effect that the service data received through the first channel is processed only after the service data received through the second channel has been processed can be achieved. In this way, the order of the service data received through the first channel and the service data received through the second channel is preserved, no disorder occurs in the received service data, and the lossless effect of service data reception can be achieved.
In the third embodiment of the present application, in order to solve the problem in example 1 that part of the service data is easily lost when the transmitting channel on the transmitting device side is adjusted from the first channel to the second channel, the transmitting device may start to fill the second channel with preset format data or idle data at a first time point before the second time point is reached, and then, when the second time point is reached, start to send, through the second channel, the preset format data or idle data filled into the second channel. Further, the sending of the service data through the second channel may start when the third time point is reached. The amount of preset format data or idle data filled into the second channel between the first time point and the second time point may be equal to the amount of data sent on the second channel in one period, and the duration between the second time point and the third time point may be equal to the duration required for sending the preset format data or idle data filled into the second channel.
Accordingly, on the receiving device side, since the beginning of the data sent by the transmitting device through the second channel consists of the preset format data or idle data, the receiving device may start to delete the preset format data or idle data received through the second channel when the current receiving channel is adjusted from the first channel to the second channel at the second time point. Further, the receiving device may start to receive the service data through the second channel when the third time point is reached. The duration between the second time point and the third time point may be equal to the duration required for sending the preset format data or idle data filled into the second channel.
In the third embodiment of the present application, the transmitting device starts to fill the second channel with the preset format data or idle data at the first time point. Since the amount of preset format data or idle data filled into the second channel between the first time point and the second time point is equal to the amount of data sent on the second channel in one period, the second channel can start to send data when the first channel is adjusted to the second channel at the second time point, so the boundary between the service data sent by the first channel and the data sent by the second channel is aligned. Further, since the duration between the second time point and the third time point can be equal to the duration required for sending the preset format data or idle data filled into the second channel, no complete service data packet is cut off when the service data starts to be sent through the second channel at the third time point, and no part of the service data sent through the second channel is lost, so the lossless effect of service data transmission can be achieved.
In the third embodiment of the present application, when the current receiving channel is adjusted from the first channel to the second channel at the second time point, the receiving device deletes the preset format data or idle data received through the second channel, and starts to process the service data received through the second channel only at the third time point. In this way, the service data received through the first channel and the service data received through the second channel are kept in order, and the lossless effect of service data reception can be achieved.
In a specific implementation, the first to third embodiments may be used in combination, which is not limited in this application.
It should be noted that, in addition to the first to third embodiments, the present application may also solve the problems of example 1 and example 2 above in other ways. For example, in some other embodiments, in order to solve the problem in example 1 that part of the service data is easily lost when the transmitting device side is adjusted from the first channel to the second channel, the transmitting device may uniformly transmit the service data through the second channel; that is, the transmitting device may transmit the service data through the second channel all along without adjusting the transmitting channel, thereby avoiding the loss of part of the service data when the transmitting device side is adjusted from the first channel to the second channel and achieving the lossless effect of service data transmission. Accordingly, in order to solve the problem in example 2 that the service data is easily disordered when the receiving device side is adjusted from the second channel to the first channel, the receiving device may uniformly receive the service data through the second channel; that is, the receiving device receives the service data through the second channel all along without adjusting the receiving channel, so the disorder that may occur when the receiving device side is adjusted from the second channel to the first channel can be avoided and the lossless effect of service data reception can be achieved.
Before describing embodiments of the present application, some of the terms in the present application are explained first to facilitate understanding by those skilled in the art.
The switching boundary point (second time point) in the embodiments of the present application refers to: on the transmitting device side, when the transmitting channel is to be adjusted, the time point corresponding to the next OH frame header after the FlexE decides to send the C bits (a field segment in an OH frame) to the receiving device side; and on the receiving device side, when the receiving channel is to be adjusted, the time point corresponding to the next OH frame header after the FlexE receives the C bits sent by the transmitting device side.
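As a minimal illustrative sketch (not taken from the patent text), the switching boundary point can be modelled as the first OH frame header strictly after the moment at which the C-bit decision is made, assuming a fixed OH frame period; the period value and function name below are assumptions.

```python
import math

def next_oh_frame_header(decision_time_us: float, oh_frame_period_us: float) -> float:
    """Time of the next OH frame header strictly after decision_time_us (hypothetical model)."""
    return (math.floor(decision_time_us / oh_frame_period_us) + 1) * oh_frame_period_us

# Example: with a hypothetical 100 us OH frame period, a decision made at t = 250 us
# gives a switching boundary point (second time point) at t = 300 us.
print(next_oh_frame_header(250.0, 100.0))  # 300.0
```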
The period referred to in the embodiments of the present application may refer to a period of time formed by a plurality of time slots obtained by dividing a 100G PHY in a TDM manner by using different bandwidths as granularity, for example, when the FlexE divides a 100G PHY in a TDM manner by using a 1G bandwidth as granularity to obtain 100 time slots, the 100 time slots may form a period, where when the second bandwidth of the second channel is 1G, the amount of data sent in one period on the second channel may be understood as the amount of data sent in 100 time slots on the second channel. Alternatively, when the FlexE divides the 100G PHY into 20 slots in the time domain with the 5G bandwidth as granularity in the TDM manner, the 20 slots may form a period, where when the first bandwidth of the first channel is 5G, the amount of data received in one period on the first channel may be understood as the amount of data received in 20 slots on the first channel.
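The following numeric sketch illustrates "the amount of data sent on a channel in one period"; all values are illustrative assumptions rather than figures from the patent. The data amount is simply the channel bandwidth multiplied by the period duration, and it is also the amount that must be pre-filled into the second channel between the first time point and the second time point in the first embodiment.

```python
def data_per_period_bits(bandwidth_gbps: float, period_duration_us: float) -> float:
    """Bits carried by a channel of the given bandwidth during one period."""
    return bandwidth_gbps * 1e9 * period_duration_us * 1e-6

# Hypothetical example: a 1G second channel with a 50 us period carries 50,000 bits
# per period, so about 50,000 bits must be filled into it before the switch.
print(data_per_period_bits(1.0, 50.0))  # 50000.0
```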
The term "plurality" in the embodiments of the present application refers to two or more. The character "/" generally indicates that the context-dependent object is an "or" relationship. The singular expressions "a", "an", "the" and "the" are intended to include, for example, also "one or more" such expressions, unless the context clearly indicates the contrary. And, unless specified to the contrary, the embodiments of the present application refer to the ordinal terms "first," "second," etc., as used to distinguish between multiple objects, and are not to be construed as limiting the order, timing, priority, or importance of the multiple objects. For example, the first channel and the second channel are only for distinguishing between different transmission channels, and are not indicative of the difference in priority or importance of the two transmission channels.
Reference in this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. In addition, the terms "comprising", "including", "having", and variations thereof herein mean "including but not limited to", unless specifically emphasized otherwise.
In addition, it should be understood that the embodiments of the present application take adjusting the bandwidth size of the transmission channel that transmits the service data of a single FlexE Client as an example. When the bandwidth sizes of the transmission channels that transmit the service data of a plurality of FlexE Clients need to be adjusted, the adjustment may be performed in the same manner as adjusting the bandwidth size of the transmission channel that transmits the service data of a single FlexE Client. The bandwidths of the transmission channels that transmit the service data of the plurality of FlexE Clients may be adjusted simultaneously, or may be adjusted in a corresponding order, which is not limited in the embodiments of the present application.
The procedure of adjusting bandwidth at the transmitting device side in the first embodiment of the present application will be specifically described with reference to fig. 7 to 10.
Fig. 7 is a schematic flow chart of a bandwidth adjustment method for FlexE service according to an embodiment of the present application. Fig. 7 illustrates an example in which an execution body is a transmitting device. As shown in fig. 7, the method may include the steps of:
S101, according to the requirement of transmitting data, determining that the current transmission channel needs to be adjusted from a first channel to a second channel at a second time point, wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel.
As an example, when the first channel is a 5G channel and the second channel is a 1G channel, the schematic diagrams of the first channel and the second channel may be as shown in fig. 8, and the size of the first bandwidth may be 5 times the size of the second bandwidth.
In a specific implementation process, when the requirement of each FlexE Client for the transmission cost changes, the requirement for transmitting data may also change. For example, if the first FlexE Client needs to reduce the transmission cost, the bandwidth for transmitting the service data is reduced. In other words, the transmitting apparatus can reduce the transmission cost of the service data by reducing the bandwidth of transmitting the service data.
In the first embodiment, when the requirement of the first FlexE Client for transmitting data decreases, the transmitting device may determine that the current transmission channel needs to be adjusted from the first channel to the second channel at a second future time point according to the requirement of the first FlexE Client for transmitting data.
S102, at a first time point before the second time point, starting to write the service data to be transmitted into a cache at a rate greater than the first bandwidth, and filling the service data in the cache into the first channel and the second channel.
In the first embodiment, when the transmitting device determines that the current transmission channel needs to be adjusted from the first channel to the second channel at the second time point in the future, the transmitting device may start to write the service data to be sent into the buffer at the first time point at a rate greater than the first bandwidth. For example, when the first bandwidth is 5G, the transmitting device may write the service data to be sent into the buffer at a rate greater than 5G. For example, referring to fig. 9, before the first time point t0, the transmitting device sends the service data through the first channel; when the first time point t0 is reached, it starts to write the service data to be sent into a buffer at a rate greater than the first bandwidth. For example, referring to fig. 10, the transmitting device may write the service data to be sent into a buffer provided on the data-entry side of the FlexE shim. In the data sending process shown in fig. 10, except for buffering the service data to be sent in the buffer, the remaining processes may refer to the above description of the data sending process shown in fig. 5 and are not repeated here.
In a specific implementation process, between a first time point and a second time point, the sending device may fill the service data in the buffer into the first channel and the second channel. Specifically, the sending device may fill the buffered service data in parallel to the first channel and the second channel.
In the first embodiment, since the rate of writing into the buffer is greater than the first bandwidth, the service data in the buffer can be filled into the first channel and the second channel in parallel between the first time point and the second time point, so that the second channel can start to send service data when the second time point is reached. For example, referring to fig. 9, because the rate of writing into the buffer is greater than the first bandwidth, the transmitting device may fill the service data in the buffer into the first channel and the second channel in parallel; for example, between the first time point t0 and the second time point t1, the transmitting device may fill the service data of the white area in the buffer shown in fig. 9 into the first channel and, in parallel, fill the service data of the gray area into the second channel.
S103, transmitting the service data filled into the first channel through the first channel between the first time point and the second time point.
In the first embodiment, the transmitting device may transmit, through the first channel, the service data filled into the first channel between the first time point and the second time point, and may keep the state of the first channel transmitting the service data unchanged, i.e. may keep the service data flow unchanged.
S104, starting to send service data filled into the second channel through the second channel at the second time point; wherein the amount of traffic data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel for one cycle.
In the first embodiment, since the amount of traffic data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel for one period, that is, the amount of traffic data filled into the second channel may reach the amount of data transmitted on the second channel for one period before or when the second time point is reached, the transmitting apparatus may transmit the traffic data filled into the second channel through the second channel when the second time point is reached, at which time the transmitting apparatus has transmitted the traffic data filled into the first channel through the first channel.
For example, when the first channel is a 5G transmission channel and the second channel is a 1G transmission channel, as shown in fig. 9, the amount of service data filled into the second channel between the first time point t0 and the second time point t1 may be equal to the amount of data sent on the second channel in one period, so that the transmitting device may start to send, through the second channel, the service data filled into the second channel at the second time point t1.
In the first embodiment, the service data to be transmitted is written into the buffer memory in advance at the first time point before the second time point, and the service data in the buffer memory can meet the requirement that the service data filled into the second channel is equal to the data amount transmitted in one period on the second channel, so that when the first channel is adjusted to the second channel at the second time point, the second channel can start to transmit the service data filled into the second channel, the aim of aligning the service data transmitted by the first channel and the service data boundary transmitted by the second channel can be achieved, the phenomenon of partial data loss transmitted by the second channel can be avoided, and further the lossless effect of service data transmission can be achieved.
In the first embodiment, when the second channel has finished sending the remaining service data in the buffer, the transmitting device may stop writing the service data to be sent subsequently into the buffer and continue to send the service data to be transmitted subsequently through the second channel. As shown in fig. 9, when the second channel finishes sending the service data in the buffer at the third time point t2, the service data to be transmitted subsequently may continue to be sent through the second channel at and after t2; in other words, the service data to be sent subsequently is no longer written into the buffer. For example, in combination with fig. 9 and fig. 10, at and after t2, the service data to be sent subsequently is no longer written into the buffer for caching.
It should be noted that, when t1 is reached, a small amount of service data (one or two pieces of service data) may remain in the buffer and has not yet been sent. In order to send out the service data buffered in the buffer, that is, to empty the buffer, between t1 and t2 the transmitting device may stop writing the service data to be sent subsequently into the buffer, continue to read the remaining service data in the buffer, and fill the read service data into the second channel.
In the first embodiment of the present application, when the second channel finishes sending the service data in the buffer, the transmitting device stops writing the service data to be sent subsequently into the buffer and continues to send the service data to be transmitted subsequently through the second channel; that is, the service data to be sent subsequently is no longer buffered, which can reduce the power consumption of the transmitting device for processing data.
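To make the transmit-side timing of the first embodiment more concrete, the following is a minimal Python sketch; the class name, the byte granularity, and the fill policy are assumptions for illustration, the parallel fill of the two channels is modelled sequentially, and the rates are abstracted away.

```python
from collections import deque

class TxBandwidthSwitch:
    """Sketch of the transmit-side behaviour between t0 and t1 in the first embodiment."""

    def __init__(self, period_bytes_second_channel: int):
        self.buffer = deque()        # buffer on the data-entry side of the FlexE shim
        self.first_channel = []      # data handed to the first (wider) channel
        self.second_channel = []     # data handed to the second (narrower) channel
        self.period_bytes = period_bytes_second_channel

    def between_t0_and_t1(self, incoming: bytes) -> None:
        # Write the service data to be sent into the buffer at a rate greater than
        # the first bandwidth (modelled here simply as writing all of it at once),
        # then fill both channels; the second channel is topped up until it holds
        # exactly one period's worth of data, and the rest goes to the first channel.
        self.buffer.extend(incoming)
        while self.buffer:
            if len(self.second_channel) < self.period_bytes:
                self.second_channel.append(self.buffer.popleft())
            else:
                self.first_channel.append(self.buffer.popleft())

    def can_switch_at_t1(self) -> bool:
        # The switch at the second time point is lossless only if the second channel
        # already holds one period of data, so it can start sending immediately.
        return len(self.second_channel) >= self.period_bytes

switch = TxBandwidthSwitch(period_bytes_second_channel=8)
switch.between_t0_and_t1(b"0123456789ABCDEF")
print(switch.can_switch_at_t1())  # True: the second channel can start sending at t1
```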
In a specific implementation process, the FlexE shim of the transmitting device has ports on two sides: for example, one side port is a data inlet, the other side port is a data outlet, and data is transmitted between the two side ports through the transmitting channel. Filling the second channel with an amount of service data equal to the amount of data sent on the second channel in one period takes a certain duration, and the smaller the bandwidth of the transmitting channel, the longer this duration. Therefore, when the transmitting channel currently corresponding to the data outlet is adjusted from the first channel to the second channel at the second time point, if the first channel has finished sending the service data but the amount of service data filled into the second channel has not yet reached the amount of data sent on the second channel in one period, no service data can be sent through the second channel; that is, when the second time point is reached, the second channel cannot send service data at the data outlet, so the boundary between the service data sent by the first channel and the service data sent by the second channel is not aligned, and part of the service data sent through the second channel is lost.
In the first embodiment, in order to avoid the phenomenon that the second channel cannot transmit the service data at the data outlet when reaching the second time point, the transmitting device may adjust the transmitting channels currently corresponding to the data inlet and the data outlet at different time points in a corresponding manner.
The process of bandwidth adjustment at the transmitting device side in the first embodiment of the present application will be described in detail with reference to fig. 7 to 11.
Referring to fig. 11, before the first time point t0, the transmitting channels currently corresponding to the data inlet and the data outlet are both the first channel; that is, before t0, the transmitting device sends the service data through the first channel.
When t0 is reached, the transmitting device starts to write the service data to be sent into the buffer at a rate greater than the first bandwidth, and reads the service data from the buffer to fill the first channel and the second channel. That is, when t0 is reached, the transmitting device adds the second channel at the data inlet; in other words, the transmitting channels currently corresponding to the data inlet include the first channel and the second channel, while the transmitting channel currently corresponding to the data outlet is the first channel. Thus, when t0 is reached, the transmitting device sends, through the first channel, the service data filled into the first channel, and may keep the state in which the first channel sends the service data unchanged until the second time point t1 is reached, at which point it stops sending the service data through the first channel.
At the second time point t1, the transmitting device adjusts the transmitting channel currently corresponding to the data outlet from the first channel to the second channel. Because the amount of service data filled into the second channel is equal to the amount of data sent on the second channel in one period, the transmitting device can start to send, through the second channel, the service data filled into the second channel. At this time, the transmitting device has completed sending, through the first channel, the service data filled into the first channel, and no longer fills the first channel with service data to be sent; that is, at the second time point t1, the transmitting channels currently corresponding to the data inlet and the data outlet are both the second channel, and the transmitting device sends the service data filled into the second channel through the second channel. Optionally, between t0 and t1, when the amount of service data filled into the second channel reaches the amount of data sent on the second channel in one period, the transmitting device may stop writing the service data to be sent into the buffer at a rate greater than the first bandwidth and resume writing it at the rate of the first bandwidth. When t1 is reached, a small amount of service data may remain in the buffer and has not yet been sent; in order to empty the buffered service data, after t1 the transmitting device may stop writing the service data to be sent subsequently into the buffer, that is, no longer buffer it, and continue to fill the remaining service data buffered in the buffer into the second channel.
At the third time point t2, when the remaining service data buffered in the buffer has been filled into the second channel, the transmitting device may fill the service data to be sent subsequently directly into the second channel and send it through the second channel. At and after t2, the service data to be sent subsequently continues to be transmitted through the second channel; that is, at t2 and thereafter, the transmitting device continues to send the service data through the second channel.
In the first embodiment, the transmitting device starts to write the service data to be sent into the buffer at a rate greater than the first bandwidth when the first time point is reached, and fills the service data in the buffer into the first channel and the second channel. Because the amount of service data filled into the second channel between the first time point and the second time point can be equal to the amount of data sent on the second channel in one period, the second channel can start to send service data when the first channel is adjusted to the second channel at the second time point. In this way, the boundary between the service data sent by the first channel and the service data sent by the second channel is aligned, the phenomenon that the second channel cannot output data at the data outlet when the second time point is reached is avoided, and the lossless effect of service data transmission can be achieved.
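For reference, the following sketch summarizes which transmitting channels the data inlet and the data outlet use in each interval described above for fig. 11; the interval labels and the data structure are assumptions, while the channel assignments follow the description above.

```python
# Illustrative summary of the transmit-side channel states around the switch in the
# first embodiment (not an excerpt of the patent figures).
tx_schedule = [
    # (interval,      channels at the data inlet,   channels at the data outlet)
    ("before t0",     {"first"},                    {"first"}),
    ("t0 .. t1",      {"first", "second"},          {"first"}),
    ("t1 .. t2",      {"second"},                   {"second"}),  # buffer being emptied
    ("t2 and after",  {"second"},                   {"second"}),  # no more buffering
]
for interval, inlet, outlet in tx_schedule:
    print(f"{interval:>12}: inlet={sorted(inlet)}, outlet={sorted(outlet)}")
```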
The following describes the procedure of bandwidth adjustment on the receiving device side in the second embodiment of the present application in detail with reference to fig. 8, 12 to 18.
Fig. 12 is a schematic flow chart of a bandwidth adjustment method for FlexE service according to an embodiment of the present application. Fig. 12 illustrates an execution subject as a receiving apparatus. As shown in fig. 12, the method may include the steps of:
S201, according to the requirement of receiving data, determining that the current receiving channel needs to be adjusted from a second channel to a first channel at a second time point, wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel.
As an example, when the first channel is a 5G channel and the second channel is a 1G channel, the schematic diagrams of the first channel and the second channel may be as shown in fig. 8, and the size of the first bandwidth may be 5 times the size of the second bandwidth.
In the second embodiment, when the requirement of the first FlexE Client for receiving data increases, the receiving device may determine, according to the requirement of the first FlexE Client for receiving data, to adjust the current receiving channel from the second channel to the first channel at a second time point in the future.
S202, starting to write the service data which is required to be received through the first channel into a first buffer memory at the second time point, and continuously receiving the residual service data in the second channel; starting to receive the service data in the first cache through the first channel at a third time point after the second time point; or, at the second time point, starting to write the service data received through the first channel into a second buffer memory, and continuing to receive the residual service data in the second channel; starting to output the service data in the second cache at the third time point; wherein the third time point is equal to or later than a time point when the service data remaining in the second channel is received.
In the second embodiment, when the receiving device adjusts the current receiving channel from the second channel to the first channel at the second time point, some service data still remains in the second channel, and the receiving device may continue to receive the service data remaining in the second channel. At this time, the receiving device may start to write the service data that needs to be received through the first channel into the first buffer. For example, referring to fig. 13, before the second time point t1, the receiving device receives the service data through the second channel; when the second time point t1 is reached, the receiving device starts to write the service data that needs to be received through the first channel into the first buffer and continues to receive the service data remaining in the second channel. Alternatively, as shown in fig. 14, before the second time point t1, the receiving device receives the service data through the second channel; when the second time point t1 is reached, the receiving device starts to write the service data received through the first channel into the second buffer and continues to receive the service data remaining in the second channel.
In the second embodiment, as shown in fig. 13 and fig. 15, when the first buffer is set on the data entry side of the FlexE shim, the receiving device may start writing the service data that needs to be received through the first channel into the first buffer at the second time point. In the data receiving process shown in fig. 15, the rest of the processes except for buffering the service data that needs to be received through the first channel in the first buffer may be referred to the above description about the data receiving process shown in fig. 6, which is not repeated here. Alternatively, as shown in fig. 14 and 16, when the second buffer is set on the data outlet side of the FlexE shim, the receiving device may start writing the service data received through the first channel into the second buffer at the second time point. In the data receiving process shown in fig. 16, the rest of the processes except for buffering the service data received through the first channel in the second buffer may be referred to the above description about the data receiving process shown in fig. 6, which is not repeated here.
In the second embodiment, the service data that needs to be received through the first channel starts to be written into the first buffer at the second time point, or the service data received through the first channel starts to be written into the second buffer, while the remaining service data in the second channel continues to be received. This avoids the phenomenon in which the service data received through the first channel and the service data received through the second channel are output simultaneously at the second time point, so that the order of the service data received through the first channel and the service data received through the second channel is preserved, no error occurs in the received service data, and the lossless effect of service data reception can be achieved.
In the second embodiment, the receiving device starts to receive the service data in the first buffer through the first channel at the third time point after the second time point, or starts to output the service data in the second buffer at the third time point. Thereafter, the receiving device may process the service data received through the first channel, for example, as shown in fig. 15 or fig. 16, perform insertion/deletion of idle code blocks and 64B/66B decoding on the data of the first channel. The third time point may be equal to or later than the time point at which the service data remaining in the second channel has been completely received. That is, when the third time point is reached, the service data remaining in the second channel has been received, i.e., has been processed, so the service data in the first buffer may start to be received through the first channel, or the service data in the second buffer may start to be output. For example, referring to fig. 13, after the service data remaining in the second channel has been received, the receiving device may start to receive the service data in the first buffer through the first channel at the third time point t2. Alternatively, referring to fig. 14, after the service data remaining in the second channel has been received, the receiving device may start to output the service data in the second buffer at the third time point t2.
In the second embodiment of the present application, the service data that needs to be received through the first channel starts to be written into the first buffer at the second time point, or the service data received through the first channel starts to be written into the second buffer; at a third time point after the second time point (where the duration between the second time point and the third time point is sufficient for the receiving device to finish receiving the service data remaining in the second channel), the service data in the first buffer starts to be received through the first channel, or the service data in the second buffer starts to be output. In this way, the service data received through the first channel is processed only after the service data received through the second channel has been processed, the order of the service data received through the first channel and the service data received through the second channel is preserved, no out-of-order errors occur in the received service data, and the lossless effect of service data reception can be achieved.
In the second embodiment, in order to empty the service data in the first buffer, the receiving device may, when the third time point is reached, start to read the service data in the first buffer at a rate greater than the first bandwidth, fill the read service data into the first channel, and receive the service data filled into the first channel, until the fourth time point, at which the service data in the first buffer has been completely read, is reached; it may then stop writing the service data that needs to be received subsequently through the first channel into the first buffer and continue to receive the subsequent service data directly through the first channel. As shown in fig. 13, between the third time point t2 and the fourth time point t3, the receiving device can read the service data in the first buffer at a rate greater than the first bandwidth and receive the service data in the first buffer through the first channel. When the fourth time point t3, at which the service data in the first buffer has been completely read, is reached, the service data that needs to be received through the first channel is received directly through the first channel; in other words, it is no longer buffered, for example, as shown in fig. 13 and 15, the service data that needs to be received through the first channel is no longer buffered through the first buffer.
Alternatively, in the second embodiment, in order to empty the service data in the second buffer, the receiving device may start to output the service data in the second buffer at a rate greater than the first bandwidth when the third time point is reached, and stop writing the service data subsequently received through the first channel into the second buffer when the fourth time point, at which the service data in the second buffer has been completely output, is reached. As shown in fig. 14, between the third time point t2 and the fourth time point t3, the receiving device may output the service data in the second buffer at a rate greater than the first bandwidth. When the fourth time point t3, at which the output of the service data in the second buffer is completed, is reached, the service data subsequently received through the first channel is no longer written into the second buffer, that is, it is no longer buffered; for example, as shown in fig. 14 and 16, the service data subsequently received through the first channel is no longer buffered through the second buffer.
In the second embodiment of the present application, the service data in the first buffer is read at a rate greater than the first bandwidth at the third time point, and the subsequent service data that needs to be received through the first channel is stopped to be written into the first buffer at a fourth time point when the service data in the first buffer is read, or the service data in the second buffer is output at a rate greater than the first bandwidth at the third time point, and the subsequent service data that needs to be received through the first channel is stopped to be written into the second buffer at a fourth time point when the service data in the second buffer is output, that is, the subsequent service data that needs to be received does not perform buffer processing any more, so that the power consumption of the receiving device for processing the data can be reduced.
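The ordering guarantee of the second embodiment can be sketched as follows; the class and method names are assumptions, the first-buffer variant of fig. 13/15 is used, and the rates are abstracted away (draining "at a rate greater than the first bandwidth" is modelled as draining the buffer completely).

```python
from collections import deque

class RxBandwidthSwitch:
    """Sketch of the receive-side behaviour around t1 and t2 in the second embodiment."""

    def __init__(self):
        self.first_buffer = deque()  # buffer on the data-entry side of the FlexE shim
        self.output = []             # data delivered, in order, at the data outlet

    def from_t1_to_t2(self, first_channel_data, second_channel_remaining) -> None:
        # From the second time point t1: park the data arriving on the first channel
        # in the first buffer, and keep receiving what is left on the second channel.
        self.first_buffer.extend(first_channel_data)
        self.output.extend(second_channel_remaining)

    def from_t2(self) -> None:
        # From the third time point t2 the second channel is empty, so the first
        # buffer is drained and its content is delivered after the second-channel data.
        while self.first_buffer:
            self.output.append(self.first_buffer.popleft())

rx = RxBandwidthSwitch()
rx.from_t1_to_t2(first_channel_data=["f1", "f2"], second_channel_remaining=["s9", "s10"])
rx.from_t2()
print(rx.output)  # ['s9', 's10', 'f1', 'f2'] -- second-channel data finishes first
```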
In a specific implementation process, the receiving device has ports on two sides: for example, one side port is a data inlet, the other side port is a data outlet, and data is transmitted between the two side ports through the receiving channel. Receiving, on the first channel, the service data received in one period takes a certain duration, and the larger the bandwidth of the receiving channel, the shorter this duration. Therefore, when the receiving device adjusts the receiving channel currently corresponding to the data inlet from the second channel to the first channel at the second time point, the remaining service data in the second channel has not yet been processed but the first channel has already started to output service data, so the data outlet cannot achieve the effect of processing the service data received through the first channel only after the service data received through the second channel has been processed; as a result, the service data received through the first channel and the service data received through the second channel are output simultaneously at the data outlet within a period of time after the receiving channel is adjusted.
In the second embodiment, in order to avoid the phenomenon that the service data received through the first channel and through the second channel are output simultaneously at the data outlet within a period of time after the second time point, the receiving device may adjust the receiving channels currently corresponding to the data inlet and the data outlet at different time points in a corresponding manner.
The process of bandwidth adjustment at the receiving device side in the second embodiment of the present application will be described in detail with reference to fig. 12 to 18.
Referring to fig. 17 or fig. 18, before the second time point t1, the receiving channels currently corresponding to the data inlet and the data outlet are both the second channel, and the receiving device receives the service data through the second channel; that is, before t1, the receiving device receives the service data through the second channel.
Referring to fig. 17 or fig. 18, when the second time point t1 is reached, the receiving device adjusts the receiving channel currently corresponding to the data inlet from the second channel to the first channel. At this time, as shown in fig. 17, the receiving device may start to write the service data that needs to be received through the first channel into the first buffer and continue to receive the remaining service data in the second channel; the receiving channel currently corresponding to the data outlet is the second channel. Alternatively, as shown in fig. 18, the receiving device may start to write the service data received through the first channel into the second buffer and continue to receive the remaining service data in the second channel; the receiving channels currently corresponding to the data outlet include the first channel and the second channel. That is, at and after t1, the receiving device no longer fills the second channel with service data to be received, and continues to receive the service data remaining in the second channel until the remaining service data in the second channel has been received, in other words, until the receiving device has processed the service data remaining in the second channel.
At the third time point t2, the receiving device has received the remaining service data in the second channel. At this time, as shown in fig. 17, the receiving device may start to read the service data in the first buffer at a rate greater than the first bandwidth, fill the read service data into the first channel, and receive, through the first channel, the service data filled into the first channel; the receiving channels currently corresponding to the data inlet and the data outlet are both the first channel. Alternatively, as shown in fig. 18, the receiving device may start to output the service data in the second buffer at a rate greater than the first bandwidth; the receiving channels currently corresponding to the data inlet and the data outlet are both the first channel. Thereafter, the receiving device may perform corresponding processing on the service data received through the first channel, for example, as shown in fig. 15 or fig. 16, insertion/deletion of idle code blocks, 64B/66B decoding processing, and the like. At and after t2, the receiving device continues to receive the service data in the first buffer through the first channel, or continues to output the service data in the second buffer. After the third time point t2, the receiving channels currently corresponding to the data inlet and the data outlet are both the first channel.
At the fourth time point t3, the receiving device has finished reading the service data in the first buffer; it may then stop writing the service data that needs to be received through the first channel into the first buffer and receive the subsequent service data directly through the first channel. As shown in fig. 17, when t3 is reached, the service data that needs to be received through the first channel is received directly through the first channel and is no longer buffered. Alternatively, at t3, the receiving device has finished outputting the service data in the second buffer and may stop writing the service data subsequently received through the first channel into the second buffer. As shown in fig. 18, when t3 is reached, the service data received through the first channel is no longer buffered.
In the second embodiment, the receiving device starts to write the service data that needs to be received through the first channel into the first buffer, or starts to write the service data received through the first channel into the second buffer, at the second time point, and continues to receive the service data remaining in the second channel; at a third time point by which the service data remaining in the second channel has been completely received, it starts to receive the service data in the first buffer through the first channel, or starts to output the service data in the second buffer. In this way, the service data received through the first channel is processed only after the service data received through the second channel has been processed, the order of the service data received through the first channel and the service data received through the second channel is preserved, no out-of-order errors occur in the received service data, and the lossless effect of service data reception can be achieved.
The following describes the procedure of bandwidth adjustment at the transmitting device side in the third embodiment of the present application in detail with reference to fig. 5, 8, and 19 to 20.
Fig. 19 is a schematic flow chart of a bandwidth adjustment method for FlexE service according to an embodiment of the present application. Fig. 19 illustrates an example in which an execution body is a transmitting device. As shown in fig. 19, the method may include the steps of:
S301, according to the requirement of transmitting data, determining that the current transmission channel needs to be adjusted from a first channel to a second channel at a second time point, wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel.
As an example, when the first channel is a 5G channel and the second channel is a 1G channel, the schematic diagrams of the first channel and the second channel may be as shown in fig. 8, and the size of the first bandwidth may be 5 times the size of the second bandwidth.
In the third embodiment, when the requirement of the first FlexE Client for transmitting data decreases, the transmitting device may determine that the current transmission channel needs to be adjusted from the first channel to the second channel at a second future time point according to the requirement of the first FlexE Client for transmitting data.
S302, starting to fill the second channel with preset format data or idle data at a first time point before the second time point.
In the third embodiment, in order to be distinguishable from the special-format coded data used by FlexE, the preset format data may refer to coded data in a format other than the special-format coded data used by FlexE, which is not limited in this application.
In the third embodiment, when the transmitting apparatus determines that the current transmission channel is adjusted from the first channel to the second channel at a second point in time in the future, the transmitting apparatus may start filling the second channel with the preset format data or the idle data at the first point in time.
For example, as shown in fig. 20, the FlexE shim of the transmitting device has ports on two sides. Before the first time point t0 is reached, the transmitting device sends the service data through the first channel; at this time, the transmitting channels currently corresponding to the data inlet and the data outlet are both the first channel. When t0 is reached, the transmitting device starts to fill the second channel with the preset format data or idle data, and still sends the service data through the first channel.
S303, starting to send the preset format data or the idle data filled into the second channel through the second channel at the second time point; wherein the data amount of the preset format data or the idle data filled into the second channel between the first time point and the second time point is equal to the data amount transmitted on the second channel for one period.
In the third embodiment, since the data amount of the preset format data or the idle data filled into the second channel between the first time point and the second time point may be equal to the data amount transmitted in one period on the second channel, when the second time point is reached, the transmitting device may start to transmit the preset format data or the idle data filled into the second channel through the second channel, so that the purpose of aligning the boundaries of the service data transmitted by the first channel and the data transmitted by the second channel may be achieved.
For example, as shown in fig. 20, since the data amount of the preset format data or the idle data filled into the second channel between the first time point and the second time point may be equal to the data amount transmitted on the second channel for one period, the transmitting device may transmit the preset format data or the idle data filled into the second channel through the second channel. At this time, the transmission channels currently corresponding to the data inlet and the data outlet are the second channel, and the transmission device may not adopt the preset format data or the idle data to fill the second channel, but adopts the service data to fill the second channel.
In the third embodiment, the second channel is filled with the preset format data or the idle data at the first time point, and the preset format data or the idle data filled into the second channel is transmitted through the second channel at the second time point, so that the transmitting device can start transmitting data through the second channel when the second time point is reached, and the aim of aligning the boundaries of the service data transmitted by the first channel and the data transmitted by the second channel can be achieved.
S304, starting to send service data through the second channel at a third time point after the second time point; the time length from the second time point to the third time point is equal to the time length required for sending the preset format data or the idle data filled into the second channel.
For example, referring to fig. 20, the transmitting device may start to send the service data through the second channel at the third time point t2 and thereafter.
In the third embodiment, the transmitting device fills the second channel with the preset format data or idle data at the first time point, and starts to send, through the second channel, the preset format data or idle data filled into the second channel at the second time point, so that the transmitting device can start to send data through the second channel at the second time point and the boundary between the data sent by the first channel and the service data sent by the second channel is aligned. Further, since the duration between the second time point and the third time point can be equal to the duration required for sending the preset format data or idle data filled into the second channel, no complete service data packet is cut off when the service data starts to be sent through the second channel at the third time point, no part of the service data sent through the second channel is lost, and the lossless effect of service data transmission can be achieved.
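A minimal sketch of the padding idea on the transmitting side follows; the padding value, the unit used for "one period", and the function name are assumptions for illustration.

```python
# Between t0 and t1 the second channel is pre-filled with idle/preset-format data
# equal to one period, which is sent from t1 to t2; the real service data follows
# only from t2, so no service data packet is cut at the switch.
IDLE = b"\x00"  # placeholder for idle or preset-format data (assumed encoding)

def build_second_channel_stream(period_units: int, service_data: bytes) -> bytes:
    padding = IDLE * period_units      # filled between t0 and t1, sent from t1 to t2
    return padding + service_data      # service data is sent from t2 onwards

stream = build_second_channel_stream(period_units=4, service_data=b"SERVICE")
print(stream)  # b'\x00\x00\x00\x00SERVICE'
```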
The following describes the procedure of bandwidth adjustment at the receiving device side in the third embodiment of the present application in detail with reference to fig. 6, 8, 21-22.
Fig. 21 is a schematic flow chart of a bandwidth adjustment method for FlexE service according to an embodiment of the present application. Fig. 21 illustrates an execution subject as a receiving apparatus. As shown in fig. 21, the method may include the steps of:
S401, according to the requirement of receiving data, determining that the current receiving channel needs to be adjusted from the first channel to the second channel at a second time point; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel.
In the third embodiment, when the requirement of the first FlexE Client for receiving data decreases, the receiving device may determine, according to the requirement of the first FlexE Client for receiving data, to adjust the current receiving channel from the first channel to the second channel at a second point in the future.
S402, starting to delete the preset format data or idle data received through the second channel at the second time point; beginning to receive traffic data over the second channel at a third point in time subsequent to the second point in time; the duration from the second time point to the third time point is equal to the duration required by deleting the preset format data or the idle data received through the second channel.
As shown in fig. 22, the FlexE shim of the receiving device has ports on two sides. Before the second time point t1 is reached, the receiving device receives the service data through the first channel; at this time, the receiving channels currently corresponding to the data inlet and the data outlet are both the first channel. When t1 is reached, the receiving device receives data through the second channel, the receiving channels currently corresponding to the data inlet and the data outlet are both the second channel, and the receiving device may delete the preset format data or idle data received through the second channel.
In a specific implementation, since the receiving rate of the first channel is greater than the receiving rate of the second channel, the receiving device has completed processing the service data received through the first channel when the second time point is reached. For example, referring to fig. 22, before the second time point t1, the receiving device receives the service data through the first channel; when the second time point t1 is reached, the receiving device has processed the service data received through the first channel and no longer receives service data through the first channel. At and after t1, the receiving channels currently corresponding to the data inlet and the data outlet are both the second channel.
For example, referring to fig. 22, the receiving device may start to receive the service data through the second channel at the third time point t2 and thereafter.
In the third embodiment, when the current receiving channel is adjusted from the first channel to the second channel at the second time point, the receiving device deletes the preset format data or idle data received through the second channel. Since the duration between the second time point and the third time point is equal to the duration required for deleting the preset format data or idle data received through the second channel, and the receiving rate of the first channel is greater than the receiving rate of the second channel, when the receiving device starts to receive the service data through the second channel at the third time point, the service data received through the second channel is processed only after the service data received through the first channel has been processed. In this way, the order of the service data received through the first channel and the service data received through the second channel is preserved, no disorder occurs in the received service data, and the lossless effect of service data reception can be achieved.
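The matching receive-side sketch, under the same assumptions as the transmit-side sketch above: the receiver discards exactly one period of idle/preset-format data from t1 and delivers the service data from t2, so nothing sent on the second channel is lost or reordered.

```python
def strip_leading_padding(second_channel_stream: bytes, period_units: int) -> bytes:
    # Delete the preset-format/idle data received between t1 and t2, then deliver
    # the service data that follows from t2 onwards.
    return second_channel_stream[period_units:]

print(strip_leading_padding(b"\x00\x00\x00\x00SERVICE", period_units=4))  # b'SERVICE'
```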
It should be understood that, in the embodiments of the present application, the transmitting device and the receiving device may perform some or all of the steps in the embodiments of the present application. These steps are merely examples, and the embodiments of the present application may also perform other steps or variations of the various steps. Furthermore, the various steps may be performed in an order different from that presented in the embodiments of the present application, and not all of the steps need to be performed.
In the various embodiments of this application, unless otherwise specified or logically conflicting, terms and descriptions in the various embodiments are consistent and may be referenced by one another, and technical features in the various embodiments may be combined according to their inherent logical relationships to form new embodiments.
The above description has been presented mainly in terms of interaction between a transmitting device and a receiving device. It should be understood that, in order to implement the above functions, the sending device and the receiving device include corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiments of the present application may divide the functional modules of the transmitting device or the receiving device according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners may be used in actual implementation.
In the case of using an integrated unit (module), fig. 23 shows a schematic structural diagram of a network device according to an embodiment of the present application. As shown in fig. 23, the network device 500 is a network device located on the service data transmitting side, and may include: a processing unit 501 and a transmitting unit 502.
The sending unit 502 is configured to support communication between the network device 500 and other devices, for example, communication with the network device located on the service data receiving side. The processing unit 501 is configured to control and manage the actions of the network device 500; for example, the processing unit 501 is configured to support the network device 500 in performing the processes S101-S104 in fig. 7 and the processes in fig. 9-11, and/or other processes for the techniques described herein. In particular, reference may be made to the following description:
The processing unit 501 is configured to determine, according to a requirement for sending data, that the current sending channel needs to be adjusted from a first channel to a second channel at a second time point, where a first bandwidth of the first channel is greater than a second bandwidth of the second channel; and, at a first time point before the second time point, start writing the service data to be transmitted into a cache at a rate greater than the first bandwidth, and fill the service data in the cache into the first channel and the second channel;
the sending unit 502 is configured to send, between the first time point and the second time point, the service data filled into the first channel through the first channel; and start sending the service data filled into the second channel through the second channel at the second time point; wherein the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle.
In one possible design, the processing unit 501 may be further configured to: stop writing the service data that needs to be transmitted subsequently into the cache when the remaining service data in the cache has been transmitted through the second channel; and the sending unit 502 may be further configured to: continue to send, through the second channel, the service data that needs to be transmitted subsequently.
In one possible design, between the first time point and the second time point, the transmission channel currently corresponding to the data entry includes the first channel and the second channel, and the transmission channel currently corresponding to the data exit is the first channel; and after the second time point, the sending channels currently corresponding to the data inlet and the data outlet are the second channels.
It should be understood that the operations and/or functions of the respective modules in the network device 500 are respectively for implementing the respective flows of the bandwidth adjustment method of the FlexE service shown in fig. 7 and 9-11, and are not described herein for brevity.
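As a hedged illustration of the quantities handled by the processing unit 501 and the sending unit 502, the sketch below derives the first time point from the condition that the service data filled into the second channel between the first and second time points equals one cycle's worth of data on that channel, assuming the second channel is pre-filled at its nominal rate; the cache write rate is only required to exceed the first bandwidth, and the 5% margin shown is an arbitrary example. Names, units and example values are assumptions, not part of the method.

```python
# Minimal sketch, not the patented implementation: pick the first time point
# t1 so that the service data filled into the second channel between t1 and
# the switch time t2 equals one transmission cycle's worth of data on the
# second channel. Assumes the second channel is pre-filled at its nominal
# rate during [t1, t2); names and units are illustrative only.

def first_time_point(t_second_s: float,
                     second_bw_bps: float,
                     cycle_data_bits: float) -> float:
    """Return t1 such that second_bw_bps * (t2 - t1) == cycle_data_bits."""
    return t_second_s - cycle_data_bits / second_bw_bps


def cache_write_rate(first_bw_bps: float, headroom: float = 1.05) -> float:
    """Rate at which service data is written into the cache.

    The method only requires the rate to be greater than the first
    bandwidth; the 5% headroom here is an arbitrary example value.
    """
    return first_bw_bps * headroom


if __name__ == "__main__":
    t2 = 2.0                   # switch time in seconds (example)
    second_bw = 5e9            # 5 Gbit/s slot
    cycle_bits = 5e9 * 20e-6   # hypothetical 20 µs cycle on the second channel
    print(first_time_point(t2, second_bw, cycle_bits))  # ~1.99998 s
    print(cache_write_rate(10e9))                       # 1.05e10 bit/s
```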
In the case of using an integrated unit (module), fig. 24 shows a schematic structural diagram of a network device provided in an embodiment of the present application. As shown in fig. 24, the network device 600 is a network device located on the service data receiving side, and may include: a processing unit 601 and a receiving unit 602.
The receiving unit 602 is configured to support communication between the network device 600 and other devices, for example, communication with the network device located on the service data transmitting side. The processing unit 601 is configured to control and manage the actions of the network device 600; for example, the processing unit 601 is configured to support the network device 600 in performing the processes S201-S202 in fig. 12 and the processes in fig. 13-18, and/or other processes for the techniques described herein. In particular, reference may be made to the following description:
The processing unit 601 is configured to determine, according to a requirement of received data, that a current receiving channel needs to be adjusted from a second channel to a first channel at a second time point, where a first bandwidth of the first channel is greater than a second bandwidth of the second channel; at the second time point, starting to write the service data required to be received through the first channel into a first buffer, or starting to write the service data required to be received through the first channel into a second buffer;
the receiving unit 602 is configured to continue receiving, at the second time point, service data remaining in the second channel; starting to receive the service data in the first buffer through the first channel or starting to output the service data in the second buffer at a third time point after the second time point; wherein the third time point is equal to or later than a time point when the service data remaining in the second channel is received.
In one possible design, the receiving unit 602 may be specifically configured to: at the third time point, start outputting the service data in the second buffer at a rate greater than the first bandwidth.
In one possible design, the processing unit 601 may be specifically configured to: at the third time point, start reading the service data in the first buffer at a rate greater than the first bandwidth, and fill the read service data into the first channel; and the receiving unit 602 may be specifically configured to: receive the service data filled into the first channel.
In one possible design, the processing unit 601 may also be configured to: and stopping writing the service data received by the first channel into the second buffer when the receiving unit 602 outputs the service data in the second buffer.
In one possible design, the processing unit 601 may be further configured to: stop writing, into the first buffer, the service data that needs to be received through the first channel when the receiving unit receives the service data in the first buffer through the first channel; and the receiving unit 602 may be further configured to: continue to receive the service data that needs to be received through the first channel.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet comprises the first channel and the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
In one possible design, between the second time point and the third time point, the receiving channel currently corresponding to the data inlet is the first channel, and the receiving channel currently corresponding to the data outlet is the second channel; and after the third time point, the receiving channels currently corresponding to the data inlet and the data outlet are the first channel.
It should be understood that the operations and/or functions of the respective modules in the network device 600 are respectively for implementing the respective flows of the bandwidth adjustment method of the FlexE service shown in fig. 12-18, and are not described herein for brevity.
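To make the first-buffer handover of the network device 600 easier to follow, here is a small event-sequence sketch. It is an assumption-laden illustration rather than the patented implementation: data arriving on the first channel is staged in a FIFO from the second time point, the residual data of the second channel is drained unchanged, and only once that drain has completed (the third time point) is the staged data read out and further buffering stopped. Class, method and flag names are hypothetical.

```python
# Illustrative sketch of the first-buffer handover on the receiving side
# (assumptions throughout, not the patented implementation). From the second
# time point, data arriving on the (larger) first channel is staged in a FIFO
# while the residual data in the (smaller) second channel is drained; from
# the third time point, which must not precede the end of that drain, the
# FIFO is emptied and the first channel is then consumed directly.
from __future__ import annotations

from collections import deque


class ReceiveHandover:
    def __init__(self) -> None:
        self.first_channel_buffer: deque[bytes] = deque()
        self.second_channel_drained = False
        self.buffer_bypassed = False

    def on_second_channel_data(self, block: bytes, is_last: bool) -> bytes:
        # Residual data of the second channel is delivered in order, untouched.
        if is_last:
            self.second_channel_drained = True
        return block

    def on_first_channel_data(self, block: bytes) -> None:
        # Between the second and third time points: stage first-channel data.
        if not self.buffer_bypassed:
            self.first_channel_buffer.append(block)

    def drain_buffer_at_third_time_point(self) -> list[bytes]:
        # Only valid once the second channel's residual data has been received.
        assert self.second_channel_drained, "third time point precedes the drain"
        out = list(self.first_channel_buffer)
        self.first_channel_buffer.clear()
        self.buffer_bypassed = True  # stop staging once the buffer is read out
        return out
```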
In the case of using an integrated unit (module), fig. 25 shows a schematic structural diagram of a network device provided in an embodiment of the present application. As shown in fig. 25, the network device 700 is a network device located on the service data transmitting side, and may include: a processing unit 701 and a transmitting unit 702.
The sending unit 702 is configured to support communication between the network device 700 and other devices, for example, communication with the network device located on the service data receiving side. The processing unit 701 is configured to control and manage the actions of the network device 700; for example, the processing unit 701 is configured to support the network device 700 in performing the processes S301-S304 in fig. 19 and the processes in fig. 20, and/or other processes for the techniques described herein. In particular, reference may be made to the following description:
The processing unit 701 is configured to determine, according to a requirement for transmitting data, that a current transmission channel needs to be adjusted from a first channel to a second channel at a second time point, where a first bandwidth of the first channel is greater than a second bandwidth of the second channel; starting to fill the second channel with preset format data or idle data at a first time point before the second time point;
the sending unit 702 is configured to start sending, at the second time point, the preset format data or the idle data filled into the second channel through the second channel, where the amount of the preset format data or idle data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle; and to start sending service data through the second channel at a third time point after the second time point, where the duration from the second time point to the third time point is equal to the duration required to send the preset format data or idle data filled into the second channel.
In one possible design, before the second point in time, the transmission channels currently corresponding to the data entry and the data exit are both the first channel; and after the second time point, the sending channels currently corresponding to the data inlet and the data outlet are the second channels.
It should be understood that the operations and/or functions of the respective modules in the network device 700 are respectively for implementing the respective flows of the bandwidth adjustment method of the FlexE service shown in fig. 19-20, and are not described herein for brevity.
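The padding-based sending-side adjustment of the network device 700 can also be summarized numerically. The sketch below is illustrative only: it assumes the padding filled between the first and second time points equals one cycle's worth of data on the second channel and is sent at that channel's nominal rate, so the third time point follows directly; names and example values are hypothetical.

```python
# Minimal numeric sketch of the padding-based sending-side adjustment
# (illustrative assumptions, not the patented implementation): between t1
# and t2 the second channel is pre-filled with one cycle's worth of
# preset-format or idle data; from t2 that padding is sent on the second
# channel, and service data follows from t3 = t2 + time needed to send
# the padding at the second channel's rate.

def padding_schedule(t_second_s: float,
                     second_bw_bps: float,
                     cycle_data_bits: float) -> tuple[float, float]:
    """Return (t_first, t_third) for the padding-based adjustment."""
    fill_duration_s = cycle_data_bits / second_bw_bps   # assumes nominal-rate fill
    send_duration_s = cycle_data_bits / second_bw_bps   # time to send the padding
    return t_second_s - fill_duration_s, t_second_s + send_duration_s


if __name__ == "__main__":
    # 5 Gbit/s slot, hypothetical 20 µs cycle, switch at t = 2.0 s.
    print(padding_schedule(2.0, 5e9, 5e9 * 20e-6))  # ~(1.99998, 2.00002)
```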
In the case of using an integrated unit (module), fig. 26 shows a schematic structural diagram of a network device provided in an embodiment of the present application. As shown in fig. 26, the network device 800 is a network device located on the service data receiving side, and may include: a processing unit 801 and a receiving unit 802.
The receiving unit 802 is configured to support communication between the network device 800 and other devices, for example, communication with the network device located on the service data transmitting side. The processing unit 801 is configured to control and manage the actions of the network device 800; for example, the processing unit 801 is configured to support the network device 800 in performing the processes S401-S402 in fig. 21 and the process in fig. 22, and/or other processes for the techniques described herein. In particular, reference may be made to the following description:
The processing unit 801 is configured to determine, according to a requirement of receiving data, that a current receiving channel needs to be adjusted from a first channel to a second channel at a second time point; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel; starting to delete the preset format data or idle data received by the receiving unit through the second channel at the second time point;
the receiving unit 802 is configured to start receiving service data through the second channel at a third time point after the second time point; wherein the duration from the second time point to the third time point is equal to the duration required to delete the preset format data or the idle data received through the second channel.
In one possible design, before the second point in time, the currently corresponding receiving channels of the data inlet and the data outlet are both the first channel; and after the second time point, the receiving channels currently corresponding to the data inlet and the data outlet are the second channel.
It should be understood that the operations and/or functions of the respective modules in the network device 800 are respectively for implementing the respective flows of the bandwidth adjustment method of the FlexE service shown in fig. 21-22, and are not described herein for brevity.
In the case of using an integrated unit (module), fig. 27 shows a schematic structural diagram of a network device according to an embodiment of the present application. As shown in fig. 27, the network device 900 may include at least one processor 901 and a memory 902; the memory 902 is configured to store one or more computer programs required by the network device 900. The at least one processor 901 is configured to support the network device 900 in implementing the above bandwidth adjustment method of the FlexE service; for example, when the one or more computer programs stored in the memory 902 are executed by the at least one processor 901, the network device 900 may implement any one of the embodiments of the bandwidth adjustment method of the FlexE service shown in fig. 7, 12, 19 and 21, and/or other embodiments described herein. The network device 900 may further comprise a communication interface 903, which may be used for communication with other devices or communication networks, such as a FlexE network.
Based on the same conception as the above method embodiments, the present application further provides a network device comprising one or more processors and one or more memories or non-volatile storage media, the one or more processors being connected to the one or more memories or non-volatile storage media, and the one or more memories or non-volatile storage media storing computer instructions or a computer program which, when executed by the one or more processors, implement any one of the possible implementations of the embodiments of the bandwidth adjustment method of the FlexE service shown in fig. 7, 12, 19 and 21, and/or other embodiments described herein.
Based on the same conception as the above method embodiments, the present application further provides a computer-readable storage medium or non-volatile storage medium storing computer instructions or a computer program. When the computer instructions or the computer program are run on a computer, the computer is caused to perform any one of the possible implementations of the above method embodiments; or, when they are run on one or more processors, a network device comprising the one or more processors is caused to perform any one of the possible implementations of the above method embodiments, for example, any steps of the bandwidth adjustment method embodiments of the FlexE service shown in fig. 7, 12, 19 and 21, and/or other processes of the techniques described herein.
Based on the same conception as the above method embodiments, the present application further provides a program product storing a computer program. When the computer program is run on a computer, the computer is caused to perform any one of the possible implementations of the above method embodiments, for example, any steps of the bandwidth adjustment method embodiments of the FlexE service shown in fig. 7, 12, 19 and 21, and/or other procedures of the techniques described herein.
Based on the same concept as the above method embodiments, an embodiment of the present application further provides a chip. The chip may include at least one processor and an interface; the interface may be a code/data read-write interface for providing computer-executable instructions (stored in a memory, possibly read directly from the memory, or possibly obtained via other means) to the at least one processor; and the at least one processor is configured to execute the computer-executable instructions to perform any one of the possible implementations of the above method embodiments, for example, any steps of the bandwidth adjustment method embodiments of the FlexE service shown in fig. 7, 12, 19 and 21, and/or other processes of the techniques described herein.
Based on the same concept as the above method embodiments, an embodiment of the present application further provides a network system including two network devices: one network device (located on the service data transmitting side) is configured to perform the steps shown in fig. 7 or fig. 19, or the steps performed by the transmitting-side network device in the solutions provided in the embodiments of the present application; and the other network device (located on the service data receiving side) is configured to perform the steps shown in fig. 12 or fig. 21, or the steps performed by the receiving-side network device in the solutions provided in the embodiments of the present application.
It should be appreciated that the processor or processing unit in the embodiments of the present application (e.g., the processors or processing units shown in fig. 23-27) may be an integrated circuit chip with signal processing capability. In implementation, the steps of the embodiments of the bandwidth adjustment method of the FlexE service shown in fig. 7, 12, 19 and 21 may be completed by an integrated logic circuit of hardware in the processor or processing unit, or by instructions in the form of software. The processor or processing unit may be a programmable logic device, such as a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof; it may also be a combination implementing a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
It should be understood that the memory or storage units in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The various illustrative logical blocks and circuits described in the embodiments of the present application may be implemented or performed with a digital signal processor, application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments of the present application may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium; alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a communication device (for example, a network device); the processor and the storage medium may also reside in different components of a communication device.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program or instructions may be stored in or transmitted over a computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments of the present application have been described in connection with specific features, it will be apparent that various modifications and combinations thereof can be made without departing from the spirit and scope of the embodiments of the application. Accordingly, the specification and drawings are merely exemplary illustrations of embodiments of the application defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents of the embodiments of the application.

Claims (18)

  1. A bandwidth adjustment method for FlexE service, comprising:
    according to the requirement of sending data, determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel;
    at a first time point before the second time point, starting to write the service data to be transmitted into a cache at a rate greater than the first bandwidth, and filling the service data in the cache into the first channel and the second channel;
    transmitting service data filled into the first channel through the first channel between the first time point and the second time point;
    starting to send service data filled into the second channel through the second channel at the second time point; wherein the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle.
  2. The method of claim 1, wherein after the starting to send service data filled into the second channel through the second channel at the second time point, the method further comprises:
    stopping writing the service data that needs to be transmitted subsequently into the cache when the remaining service data in the cache has been transmitted through the second channel, and continuing to transmit, through the second channel, the service data that needs to be transmitted subsequently.
  3. A bandwidth adjustment method for FlexE service, comprising:
    according to the requirement of received data, determining that a current receiving channel is required to be adjusted from a second channel to a first channel at a second time point, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel;
    starting to write the service data which is required to be received through the first channel into a first buffer at the second time point, and continuing to receive the remaining service data in the second channel; and beginning to receive the service data in the first buffer through the first channel at a third time point after the second time point; or
    starting to write the service data received through the first channel into a second buffer at the second time point, and continuing to receive the remaining service data in the second channel; and starting to output the service data in the second buffer at the third time point;
    wherein the third time point is equal to or later than a time point at which the service data remaining in the second channel has been received.
  4. The method of claim 3, wherein the starting to output the service data in the second buffer at the third time point comprises:
    at the third time point, starting to output the service data in the second buffer at a rate greater than the first bandwidth.
  5. The method of claim 3, wherein the beginning to receive the service data in the first buffer through the first channel at a third time point after the second time point comprises:
    at the third time point, starting to read the service data in the first buffer at a rate greater than the first bandwidth, and filling the read service data into the first channel;
    and receiving the service data filled into the first channel.
  6. The method of claim 3 or 4, further comprising, after the starting to output the service data in the second buffer at the third time point:
    stopping writing the service data received through the first channel into the second buffer when the service data in the second buffer is output.
  7. The method of claim 3 or 5, further comprising, after the beginning to receive the service data in the first buffer through the first channel at a third time point after the second time point:
    stopping writing the service data that is required to be received through the first channel into the first buffer when the service data in the first buffer is received through the first channel, and continuing to receive the service data that is required to be received through the first channel.
  8. A bandwidth adjustment method for FlexE service, comprising:
    according to the requirement of sending data, determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel;
    starting to fill the second channel with preset format data or idle data at a first time point before the second time point;
    starting to send the preset format data or the idle data filled into the second channel through the second channel at the second time point; wherein the amount of the preset format data or the idle data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle;
    beginning to send service data through the second channel at a third time point after the second time point; wherein the duration from the second time point to the third time point is equal to the duration required to send the preset format data or the idle data filled into the second channel.
  9. A bandwidth adjustment method for FlexE service, comprising:
    according to the requirement of the received data, determining that the current receiving channel is required to be adjusted from the first channel to the second channel at a second time point; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel;
    at the second time point, starting to delete the preset format data or idle data received through the second channel; and beginning to receive service data through the second channel at a third time point after the second time point; wherein the duration from the second time point to the third time point is equal to the duration required to delete the preset format data or the idle data received through the second channel.
  10. A network device, comprising: a processing unit and a transmitting unit;
    The processing unit is used for determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point according to the requirement of sending data, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel; at a first time point before the second time point, starting to write the service data to be transmitted into a cache at a rate greater than the first bandwidth, and filling the service data in the cache into the first channel and the second channel;
    the sending unit is used for sending the service data filled into the first channel through the first channel between the first time point and the second time point; starting to send service data filled into the second channel through the second channel at the second time point; wherein the amount of service data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle.
  11. The device of claim 10, wherein the processing unit is further to:
    stopping writing the service data that needs to be transmitted subsequently into the cache when the remaining service data in the cache has been transmitted through the second channel;
    The transmitting unit is further configured to:
    and continuing to send the service data which needs to be transmitted subsequently through the second channel.
  12. A network device, comprising: a processing unit and a receiving unit;
    the processing unit is configured to determine that a current receiving channel is required to be adjusted from a second channel to a first channel at a second time point according to a requirement of receiving data, where a first bandwidth of the first channel is greater than a second bandwidth of the second channel; at the second time point, starting to write the service data required to be received through the first channel into a first buffer, or starting to write the service data required to be received through the first channel into a second buffer;
    the receiving unit is configured to continuously receive, at the second time point, service data remaining in the second channel; starting to receive the service data in the first buffer through the first channel or starting to output the service data in the second buffer at a third time point after the second time point; wherein the third time point is equal to or later than a time point when the service data remaining in the second channel is received.
  13. The apparatus of claim 12, wherein the receiving unit is specifically configured to:
    at the third time point, starting to output the service data in the second buffer at a rate greater than the first bandwidth.
  14. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
    at the third time point, starting to read the service data in the first buffer at a rate greater than the first bandwidth, and filling the read service data into the first channel;
    the receiving unit is specifically configured to:
    and receiving the service data filled into the first channel.
  15. The apparatus of claim 12 or 13, wherein the processing unit is further to:
    and stopping writing the service data received through the first channel into the second buffer when the receiving unit outputs the service data in the second buffer.
  16. The apparatus of claim 12 or 14, wherein the processing unit is further to:
    when the receiving unit receives the service data in the first buffer through the first channel, stopping writing the service data that is required to be received through the first channel into the first buffer;
    the receiving unit is further configured to:
    And continuing to receive the service data which is required to be received through the first channel.
  17. A network device, comprising: a processing unit and a transmitting unit;
    the processing unit is used for determining that a current sending channel is required to be adjusted from a first channel to a second channel at a second time point according to the requirement of sending data, wherein the first bandwidth of the first channel is larger than the second bandwidth of the second channel; starting to fill the second channel with preset format data or idle data at a first time point before the second time point;
    the sending unit is configured to start sending, at the second time point, the preset format data or the idle data filled into the second channel through the second channel; wherein the amount of the preset format data or the idle data filled into the second channel between the first time point and the second time point is equal to the amount of data transmitted on the second channel in one cycle; and to start sending service data through the second channel at a third time point after the second time point; wherein the duration from the second time point to the third time point is equal to the duration required to send the preset format data or the idle data filled into the second channel.
  18. A network device, comprising: a processing unit and a receiving unit;
    the processing unit is used for determining that the current receiving channel is required to be adjusted from the first channel to the second channel at the second time point according to the requirement of the received data; wherein the first bandwidth of the first channel is greater than the second bandwidth of the second channel; starting to delete the preset format data or idle data received by the receiving unit through the second channel at the second time point;
    the receiving unit is configured to start receiving service data through the second channel at a third time point after the second time point; wherein the duration from the second time point to the third time point is equal to the duration required to delete the preset format data or the idle data received through the second channel.
CN202080104796.9A 2020-07-31 2020-07-31 Bandwidth adjustment method based on FlexE service and network equipment Pending CN116235435A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/106477 WO2022021451A1 (en) 2020-07-31 2020-07-31 Flexe service-based bandwidth adjustment method and network device

Publications (1)

Publication Number Publication Date
CN116235435A true CN116235435A (en) 2023-06-06

Family

ID=80037433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080104796.9A Pending CN116235435A (en) 2020-07-31 2020-07-31 Bandwidth adjustment method based on FlexE service and network equipment

Country Status (2)

Country Link
CN (1) CN116235435A (en)
WO (1) WO2022021451A1 (en)


Also Published As

Publication number Publication date
WO2022021451A1 (en) 2022-02-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination