WO2019165908A1 - Service sending method and apparatus, and service receiving method and apparatus - Google Patents

Service sending method and apparatus, and service receiving method and apparatus

Info

Publication number
WO2019165908A1
Authority
WO
WIPO (PCT)
Prior art keywords
rate
service
service flow
block
overhead
Prior art date
Application number
PCT/CN2019/075393
Other languages
English (en)
French (fr)
Inventor
刘峰
吴炜
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to US16/967,791 priority Critical patent/US20210399992A1/en
Priority to EP19761658.4A priority patent/EP3737012A4/en
Publication of WO2019165908A1 publication Critical patent/WO2019165908A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J 3/1605 Fixed allocated frame structures
    • H04J 3/1652 Optical Transport Network [OTN]
    • H04J 3/1658 Optical Transport Network [OTN] carrying packets or ATM cells
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B 10/50 Transmitters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B 10/60 Receivers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 Systems characterized by the type of code used
    • H04L 1/0071 Use of interleaving
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/40 Flow control; Congestion control using split connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/41 Flow control; Congestion control by acting on aggregated flows or links
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 2203/00 Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J 2203/0001 Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J 2203/0073 Services, e.g. multimedia, GOS, QOS
    • H04J 2203/0082 Interaction of SDH with non-ATM protocols
    • H04J 2203/0085 Support of Ethernet

Definitions

  • the present disclosure relates to communication technologies, and in particular, to a service transmission method and apparatus, and a service reception method and apparatus.
  • The rapid increase of user network traffic has driven a rapid increase in the transmission bandwidth of communication networks.
  • The interface bandwidth of communication equipment has risen from 10M (unit: bit/second, the same below) to 100M, 1G and 10G, and has now reached 100G; 100G optical modules are already in large-scale commercial use on the market.
  • 400G optical modules have been developed, but they are expensive, costing more than four 100G optical modules, so 400G optical modules lack commercial economic value.
  • To carry 400G services over 100G optical modules, the international standards organization defined the FlexE (Flexible Ethernet) protocol.
  • The FlexE protocol bundles multiple 100G optical modules to form a high-speed transmission channel.
  • At least one embodiment of the present disclosure provides a service sending method and apparatus, and a service receiving method and apparatus, which implement service transmission between members of different rates.
  • At least one embodiment of the present disclosure provides a service sending method, including:
  • mapping a customer service to one or more service flows of a first rate; dividing at least one service flow of the first rate into multiple service flows of other rates, and filling overhead blocks in the service flows of the other rates; and sending the service flows through channels of the corresponding rates.
  • At least one embodiment of the present disclosure provides a service sending apparatus, including a memory and a processor, where the memory stores a program which, when read and executed by the processor, implements the service sending method according to any of the embodiments.
  • At least one embodiment of the present disclosure provides a service receiving method, including:
  • interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate; filling overhead block content in the service flow of the first rate; and recovering a customer service from the service flow of the first rate.
  • At least one embodiment of the present disclosure provides a service receiving apparatus, including a memory and a processor, where the memory stores a program which, when read and executed by the processor, implements the service receiving method according to any of the embodiments.
  • At least one embodiment of the present disclosure maps a customer service to one or more service flows of a first rate; divides at least one service flow of the first rate into multiple service flows of other rates, where overhead blocks are filled in the service flows of the other rates; and sends the service flows through channels of the corresponding rates.
  • the solution provided in this embodiment implements service transmission between members of different rates.
  • FIG. 1 is a schematic diagram of a FlexE protocol application in the related art.
  • Figure 2 is a schematic diagram of the FlexE protocol (100G rate) overhead block and data block arrangement location.
  • Figure 3 is a schematic diagram of the distribution of FlexE protocol (100G rate) services over multiple physical channels.
  • FIG. 4 is a schematic diagram of a FlexE protocol frame (100G rate) structure.
  • FIG. 5 is a schematic diagram of a FlexE protocol multiframe (100G rate) structure.
  • FIG. 6 is a schematic diagram of a FlexE protocol client service bearer access.
  • Figure 7 is a schematic diagram of the recovery of the FlexE protocol client service bearer.
  • FIG. 8 is a schematic diagram of a FlexE protocol (50G rate) overhead block and a data block arrangement position.
  • FIG. 9 is a schematic diagram of the Client calendar of the FlexE protocol (50G rate) multiframe structure.
  • Figure 10 is a schematic diagram of a FlexE protocol (50G rate) multiframe structure PHY map.
  • Figure 11 is a schematic diagram of the FlexE protocol (25G rate) overhead block and data block arrangement position.
  • FIG. 12 is a schematic diagram of the Client calendar of the FlexE protocol (25G rate) multiframe structure.
  • Figure 13 is a schematic diagram of a FlexE protocol (25G rate) multiframe structure PHY map.
  • FIG. 14 is a flowchart of a service sending method according to an embodiment of the present disclosure.
  • FIG. 15 is a flowchart of a service receiving method according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram of a bundle group composed of 50G rate and 100G rate members according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic diagram of an implementation scheme of a bundle group composed of 50G rate and 100G rate members according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic diagram of an implementation scheme of a bundle group composed of 25G rate and 100G rate members according to an embodiment of the present disclosure.
  • FIG. 19 is a schematic diagram of an implementation scheme of a bundle group composed of members of the three rates 25G, 50G and 100G according to an embodiment of the present disclosure.
  • FIG. 20 is a schematic structural diagram of deinterleaving a 100G rate service into 50G rate services according to an embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of interleaving 50G rate services into a 100G rate service according to an embodiment of the present disclosure.
  • FIG. 22 is a schematic structural diagram of deinterleaving a 100G rate service into 25G rate services according to an embodiment of the present disclosure.
  • FIG. 23 is a schematic structural diagram of interleaving 25G rate services into a 100G rate service according to an embodiment of the present disclosure.
  • FIG. 24 is a block diagram of a service sending apparatus according to an embodiment of the present disclosure.
  • FIG. 25 is a block diagram of a service receiving apparatus according to an embodiment of the present disclosure.
  • the FlexE protocol is defined according to the physical layer 100G rate.
  • In an optical module, before a 100G data packet is sent, the packet is 64/66 encoded: each 64-bit data block is expanded into a 66-bit data block, and the 2 added bits are placed at the very front of the 66-bit block as its start flag; the data is then sent out of the optical port as 66-bit blocks.
  • On reception, the optical port identifies the 66-bit blocks in the received data stream, recovers the original 64-bit data from the 66-bit blocks, and reassembles the data packet.
  • The FlexE protocol sits at the 64-bit to 66-bit block conversion layer.
  • Before the 66-bit data blocks are transmitted, they are ordered and planned. As shown in FIG. 2, for a 100G service, every 20 66-bit data blocks form a data block group; each 66-bit data block represents one time slot, each data block group represents 20 time slots, and each time slot represents a service speed of 5G bandwidth.
  • When 66-bit data blocks are sent, a FlexE overhead block (the block filled with slashes in FIG. 2) is inserted after every 1023 data block groups (i.e., 1023*20 data blocks). After inserting an overhead block, data blocks continue to be sent; after the next 1023*20 data blocks have been transmitted, the next overhead block is inserted, and so on. Overhead blocks are thus inserted periodically, and the interval between two adjacent overhead blocks is 1023*20 data blocks.
  • When four 100G physical layers are bundled into a 400G logical service bandwidth, as shown in FIG. 3, each physical layer still forms a data block group from 20 data blocks and inserts an overhead block every 1023 data block groups.
  • In the FlexE master calendar (the main schedule, located in the shim layer), the four groups of 20 data blocks are assembled into a data block group of 80 data blocks, which provides 80 time slots.
  • Customer services are carried in these 80 time slots, where each time slot has a bandwidth of 5G and the 80 time slots provide a total service bandwidth of 400G.
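  • The round-robin assembly of the 80-slot master calendar from the per-PHY slot groups can be illustrated with a minimal sketch (assumptions: blocks are modelled as Python strings, only the slot ordering is shown, and the helper name is illustrative rather than from the patent):

```python
def assemble_master_calendar(phy_groups):
    """phy_groups: 4 lists of 20-block groups; returns 80-slot master block groups."""
    master = []
    for groups in zip(*phy_groups):          # take one 20-block group from each PHY
        block_group = []
        for g in groups:
            assert len(g) == 20, "each sub-calendar group carries 20 slots"
            block_group.extend(g)            # 4 x 20 = 80 slots per master group
        master.append(block_group)
    return master

# two block groups per PHY, slots labelled "phy:slot"
phys = [[[f"{p}:{s}" for s in range(20)] for _ in range(2)] for p in range(4)]
assert len(assemble_master_calendar(phys)[0]) == 80
```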
  • the FlexE overhead block is a 66-bit-long overhead block.
  • an overhead block is inserted every 1023*20 blocks.
  • The overhead block serves as a positioning reference in the entire service flow: once the position of an overhead block is determined, the position of the first data block group in the service, and of the subsequent data block groups, can be known.
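  • As a toy illustration of this cadence (an assumption-laden sketch in which blocks are modelled as strings rather than 66-bit words), one overhead block is emitted per 1023*20 data blocks, so the overhead positions anchor every following data block group:

```python
GROUP_SIZE = 20             # data blocks (time slots) per data block group
GROUPS_PER_OVERHEAD = 1023  # data block groups between two overhead blocks

def insert_overhead(data_blocks):
    """Yield the transmitted stream with one 'OH' block per 1023*20 data blocks."""
    period = GROUP_SIZE * GROUPS_PER_OVERHEAD
    for i, block in enumerate(data_blocks):
        if i % period == 0:
            yield "OH"      # FlexE overhead block position
        yield block

stream = list(insert_overhead(f"D{i}" for i in range(2 * 1023 * 20)))
assert stream[0] == "OH" and stream[1023 * 20 + 1] == "OH"
```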
  • In the FlexE protocol, eight overhead blocks are defined to form a frame, called an overhead frame, as shown in FIG. 4. One overhead block consists of a 2-bit block flag and 64 bits of block content; the block flag occupies the first two columns and the following 64 columns are the block content.
  • The block flag of the first overhead block in the overhead frame is 10, and the block flags of the next 7 overhead blocks are 01 or SS (SS indicates that the content is undetermined).
  • When the specified positions of an overhead block contain 4B (hexadecimal, denoted 0x4B) and 05 (hexadecimal, denoted 0x5), the overhead block is the first overhead block of an overhead frame and, together with the following seven overhead blocks, constitutes an overhead frame.
  • The first overhead block contains the following information: 0x4B (8 bits, hexadecimal 4B), the C bit (1 bit, indicating adjustment control), the OMF bit (1 bit, overhead frame multiframe indication), the RPF bit (1 bit, remote defect indication), the RES bit (1 bit, reserved), the FlexE group number (20 bits, indicating the number of the bundle group), 0x5 (4 bits, hexadecimal 5), and 000000 (28 bits, all 0).
  • The 0x4B and 0x5 are the flag indications of the first overhead block. On reception, when the corresponding positions in an overhead block are found to be 0x4B and 0x5, that overhead block is the first overhead block of an overhead frame and, together with the following 7 consecutive overhead blocks, forms an overhead frame.
  • In the overhead frame, the reserved portion is reserved content that has not yet been defined.
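  • A hedged sketch of packing the 64-bit content of the first overhead block from the fields listed above (0x4B, C, OMF, RPF, RES, the 20-bit group number, 0x5 and 28 zero bits). The field order and MSB-first packing used here are assumptions made for illustration only; consult the FlexE standard for the normative bit layout:

```python
def build_first_overhead_block(c: int, omf: int, rpf: int, group_number: int) -> int:
    """Pack the 64-bit content of the first overhead block (illustrative layout)."""
    assert 0 <= group_number < (1 << 20)
    value = 0x4B                              # 8-bit marker
    for field, width in ((c, 1), (omf, 1), (rpf, 1), (0, 1),   # C, OMF, RPF, RES
                         (group_number, 20), (0x5, 4), (0, 28)):
        value = (value << width) | (field & ((1 << width) - 1))
    return value

def is_first_overhead_block(value: int) -> bool:
    # The receiver checks the 0x4B and 0x5 markers at their specified positions.
    return (value >> 56) == 0x4B and ((value >> 28) & 0xF) == 0x5

blk = build_first_overhead_block(c=0, omf=1, rpf=0, group_number=42)
assert is_first_overhead_block(blk)
```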
  • The PHY number indicates the number of this member PHY (physical layer) in the FlexE group; the number ranges from 0 to 255.
  • The PHY map (bitmap) indicates the presence of each PHY in the FlexE group.
  • The PHY map has 8 bits per frame, so there are 256 bits in the multiframe composed of 32 frames, indicating whether each of 256 PHY members is in the group: if a member is present, the corresponding bit is set to "1", otherwise it is set to "0".
  • There are 20 time slots in the 100G rate FlexE frame, and each time slot can carry client information; the Client calendar indicates the client name carried in each time slot.
  • One frame carries the client name of one time slot, and one multiframe can carry 32 time slots; since there are actually only 20 time slots, the first 20 entries are valid and the last 16 are reserved.
  • The client names carried in the time slots are represented by Client calendar A and Client calendar B.
  • In normal operation, only one set of indications is in the working state (which set is indicated by the C bit), and the other set is in the standby state.
  • The two sets of indications are used for dynamic time slot adjustment.
  • When the time slot configuration is changed, only the time slot content of the standby set is modified, and then both ends switch to the new configuration table simultaneously.
  • For other bytes in the overhead frame, please refer to the related art.
  • In the first overhead block, the OMF field is a multiframe indication signal, as shown in FIG. 5.
  • OMF is a single-bit value. In the 100G frame structure, the OMF is 0 for 16 consecutive frames, then 1 for 16 consecutive frames, then 0 for another 16 consecutive frames, then 1 for another 16 consecutive frames, repeating every 32 frames. In this way a multiframe is composed of 32 frames, and there are 8*32 overhead blocks in one multiframe.
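  • A small sketch of the OMF pattern described above (an illustrative helper, not patent text): for a 100G member the OMF bit is 0 for the first half of the multiframe and 1 for the second half, so 32 frames form a multiframe; the same helper models the 16-frame (50G) and 8-frame (25G) multiframes by changing the frame count:

```python
def omf_bit(frame_index: int, frames_per_multiframe: int = 32) -> int:
    """OMF bit carried in overhead frame `frame_index` (0 then 1, half a multiframe each)."""
    half = frames_per_multiframe // 2
    return (frame_index % frames_per_multiframe) // half

assert [omf_bit(n) for n in range(32)] == [0] * 16 + [1] * 16      # 100G: 32-frame multiframe
assert [omf_bit(n, 16) for n in range(16)] == [0] * 8 + [1] * 8    # 50G: 16-frame multiframe
```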
  • Figure 6 is a schematic diagram of the process of the FlexE protocol carrying the customer service.
  • As shown in FIG. 6, the client service first undergoes 64/66 encoding: the client traffic is cut into 64-bit (8-byte) blocks, and each 64-bit block of data is encoded into a 66-bit data block.
  • After 64/66 encoding, the traffic flow becomes a stream of 66-bit data blocks.
  • Idle blocks are inserted into or deleted from this data block stream to adjust the speed and adapt it to the rate of the master calendar in the FlexE protocol.
  • According to the slot configuration, the 66-bit data blocks are placed in the master calendar of the FlexE protocol.
  • The slot planning table structure is shown in FIG. 5: in the FlexE protocol, each member is divided into 20 slots (each slot is a 66-bit data block and represents 5G of service bandwidth), so if there are 4 members there are 80 time slots in total in the slot planning table. Configuration determines which time slots each customer service selects for bearing.
  • The slot planning table groups all time slots, 20 per group, and sends each group to a member defined by the FlexE protocol; each member inserts FlexE overhead blocks on top of these time slots (an overhead block is also a 66-bit block, and one overhead block is inserted every 20*1023 time slot blocks, as shown in FIG. 2).
  • Each member in the figure is a sub calendar, which is carried on one PHY. After the FlexE overhead blocks are inserted, each PHY scrambles the traffic it carries and sends it out through the PMA (Physical Medium Attachment).
  • At the receiving end, as shown in FIG. 7, the PMA (Physical Medium Attachment) receives the signal and recovers the 66-bit blocks by descrambling.
  • Among the 66-bit blocks, each PHY searches for the overhead blocks of the FlexE protocol, restores the FlexE frame structure using the overhead blocks as reference positions, and obtains the sub calendar.
  • The time slots of all members are arranged in order, and the master calendar structure is restored.
  • According to the configuration information, the service flow is taken out of the corresponding time slots of the master calendar, the idle blocks are deleted, and 66/64-bit decoding is performed to recover the original customer service.
  • For a 50G rate PHY member, the relationship between data blocks and overhead blocks is shown in FIG. 8.
  • The number of time slots is 10, and one FlexE overhead block is inserted every 1023*2*10 data blocks, so the ratio of overhead blocks to data blocks is 1:(1023*2*10), exactly the same as the 1:(1023*20) ratio of overhead blocks to data blocks for a 100G rate member.
  • As at the 100G rate, eight overhead blocks form a FlexE frame; as shown in FIG. 9, the frame structure content is consistent with the frame structure of the 100G FlexE rate. The differences lie in the OMF, PHY map and Client calendar fields.
  • The OMF field (consisting of 8 consecutive "0"s and 8 consecutive "1"s) indicates the multiframe structure; the number of frames in a multiframe is changed from 32 to 16, i.e., 16 frames form a multiframe, and there are 8*16 overhead blocks in a multiframe.
  • In this multiframe mode, the maximum number of members in the FlexE group is halved from 256 to 128, so the total number of PHY map bits is reduced from 256 to 128, as shown in FIG. 9. Since the number of time slots is 10, there are 16 Client calendar entries.
  • The first 10 entries represent the client identifiers of the 10 time slots, and the remaining 6 entries serve as reserved fields, as shown in FIG. 10.
  • For a 25G rate PHY member, the relationship between data blocks and overhead blocks is shown in FIG. 11.
  • The number of time slots is five, and one FlexE overhead block is inserted every 1023*4*5 data blocks, so the ratio of overhead blocks to data blocks is 1:(1023*4*5), exactly the same as the 1:(1023*20) ratio of overhead blocks to data blocks for a 100G rate member.
  • Eight overhead blocks form a FlexE frame.
  • The frame structure content is consistent with the frame structure of the 100G FlexE rate.
  • The OMF field (consisting of four consecutive "0"s and four consecutive "1"s) indicates the multiframe structure, and the multiframe composition is changed from 32 frames to 8 frames, i.e., 8 overhead frames form a multiframe, with a total of 8*8 overhead blocks in a multiframe.
  • The maximum number of members in the FlexE group is reduced from 256 to 64, so the total number of PHY map bits is reduced from 256 to 64.
  • Since the number of time slots is five, there are eight Client calendar entries; the first five represent the client identifiers of the five time slots, and the last three are reserved fields, as shown in FIG.
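  • For reference, the structural parameters described above for the three member rates can be collected in one table (a convenience summary only; the dictionary layout is not a structure defined by the patent):

```python
FLEXE_MEMBER_PARAMS = {
    # rate: slots per block group, data blocks between overhead blocks,
    #       frames per multiframe, maximum members in the FlexE group
    "100G": dict(slots=20, data_blocks_per_overhead=1023 * 20,     frames_per_multiframe=32, max_members=256),
    "50G":  dict(slots=10, data_blocks_per_overhead=1023 * 2 * 10, frames_per_multiframe=16, max_members=128),
    "25G":  dict(slots=5,  data_blocks_per_overhead=1023 * 4 * 5,  frames_per_multiframe=8,  max_members=64),
}

# The overhead-to-data ratio is identical for all three rates (1 : 1023*20).
assert len({p["data_blocks_per_overhead"] for p in FLEXE_MEMBER_PARAMS.values()}) == 1
```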
  • An embodiment of the present disclosure provides a service sending method, as shown in FIG. 14, including:
  • Step 1401: Map the customer service to one or more service flows of a first rate.
  • For example, the customer service is mapped to the master calendar for bearing, and the master calendar is then divided into several sub-calendar members of the 100G rate according to the protocol standard;
  • or the customer service is mapped to the master calendar for bearing, and the master calendar is then divided into several sub-calendar members of the 50G rate.
  • Step 1402: Divide at least one service flow of the first rate into multiple service flows of other rates, and fill overhead blocks in the service flows of the other rates.
  • Step 1403 Send the service flow through a channel corresponding to the rate.
  • the 100G service flow is sent through the 100G optical module
  • the 50G service flow is sent through the 50G optical module.
  • In an embodiment, dividing the at least one service flow of the first rate into multiple service flows of other rates includes:
  • dividing at least one 100G rate service flow into two 50G rate service flows, and dividing one of the 50G rate service flows into two 25G rate service flows; that is, a 100G service flow is first divided into 50G rate service flows, and
  • at least one 50G rate service flow is then split into two 25G rate service flows.
  • In an embodiment, dividing the at least one service flow of the first rate into multiple service flows of other rates includes:
  • distributing the data blocks of the first-rate service flow among the lower-rate service flows, and reserving a blank overhead block in each lower-rate service flow at the position corresponding to an overhead block position of the original service flow; here one data block is 66 bits, the first rate is, for example, 100G, and the second rate is, for example, 50G. For example, when dividing into two flows, the even data blocks form one service flow and the odd data blocks form the other; when dividing into four flows, the 0th, 4th, 8th, 12th, ... data blocks form one service flow, the 1st, 5th, 9th, 13th, ... data blocks form a second service flow, the 2nd, 6th, 10th, 14th, ... data blocks form a third service flow, and the 3rd, 7th, 11th, 15th, ... data blocks form a fourth service flow.
  • In another example, the first rate is 100G, the second rate is 50G, and the third rate is 25G.
  • The even data blocks of the 100G rate service flow are used as one 50G rate service flow, and the odd data blocks are used as the other 50G rate service flow.
  • Further, the even data blocks of a 50G rate service flow are used as one 25G rate service flow, and the odd data blocks of that 50G rate service flow are used as the other 25G rate service flow.
  • Equivalently, when dividing the 100G rate flow directly into four 25G rate flows, the 4N, 4N+1, 4N+2 and 4N+3 data blocks are used as the four 25G rate service flows respectively.
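  • A minimal sketch of this splitting step under the assumptions above (blocks modelled as tagged tuples; the helper name is illustrative): data blocks are distributed round-robin to the lower-rate flows, while every overhead block position is kept as a blank overhead position in each output flow:

```python
def deinterleave(flow, n):
    """flow: list of ('OH', x) or ('D', x) items; returns n lower-rate flows."""
    outputs = [[] for _ in range(n)]
    data_index = 0
    for kind, payload in flow:
        if kind == "OH":
            for out in outputs:                     # reserve a blank overhead block
                out.append(("OH", None))
        else:
            outputs[data_index % n].append((kind, payload))
            data_index += 1
    return outputs

flow = [("OH", 0)] + [("D", i) for i in range(8)]
first, second = deinterleave(flow, 2)
assert [p for _, p in first[1:]] == [0, 2, 4, 6]    # even data blocks
assert [p for _, p in second[1:]] == [1, 3, 5, 7]   # odd data blocks
```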
  • In an embodiment, filling the overhead blocks in the service flows of the other rates comprises at least one of the following:
  • the Client calendar in the first 50G service flow is taken from the Client calendar of the 0th, 2nd, 4th, 6th, ... frames in the original multiframe, and the Client calendar in the second 50G service flow is taken from the Client calendar of the 1st, 3rd, 5th, 7th, ... frames in the original multiframe;
  • the remaining field content of the overhead blocks in the first 25G rate service flow is taken from the corresponding content in the overhead blocks of the 4Nth frames in the 100G rate service flow, where N is an integer greater than or equal to 0; the remaining field content of the overhead blocks in the second 25G rate service flow is taken from the corresponding fields in the overhead blocks of the 4N+1 frames in the 100G service flow; the remaining field content of the overhead blocks in the third 25G rate service flow is taken from the corresponding fields in the overhead blocks of the 4N+2 frames; and the remaining field content of the overhead blocks in the fourth 25G rate service flow is taken from the corresponding fields in the overhead blocks of the 4N+3 frames in the 100G rate service flow;
  • the Client calendar in the first 25G service flow is taken from the Client calendar of the 0th, 4th, 8th, 12th, ... frames in the original multiframe, the Client calendar in the second 25G service flow from the 1st, 5th, 9th, 13th, ... frames, the Client calendar in the third 25G service flow from the 2nd, 6th, 10th, 14th, ... frames, and the Client calendar in the fourth 25G service flow from the 3rd, 7th, 11th, 15th, ... frames in the original multiframe.
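  • The frame-level sourcing rule above can be summarised in a small helper (an illustrative sketch; the function name is not from the patent): overhead frame k of sub-flow i takes its per-frame fields from frame n*k + i of the original multiframe, where n is the number of sub-flows:

```python
def source_frame(sub_flow_index: int, frame_index: int, n_sub_flows: int) -> int:
    """Original multiframe frame feeding frame `frame_index` of sub-flow `sub_flow_index`."""
    return n_sub_flows * frame_index + sub_flow_index

# Two 50G flows: frames 0,2,4,... feed the first flow and frames 1,3,5,... the second.
assert [source_frame(0, k, 2) for k in range(4)] == [0, 2, 4, 6]
assert [source_frame(1, k, 2) for k in range(4)] == [1, 3, 5, 7]
# Four 25G flows: flow i is fed by the 4N+i frames of the original multiframe.
assert [source_frame(2, k, 4) for k in range(4)] == [2, 6, 10, 14]
```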
  • an embodiment of the present disclosure provides a service receiving method, including:
  • Step 1501 Interleave a plurality of service flows having a lower rate than the first rate to form a traffic flow of a first rate;
  • the sum of the rates of the multiple service flows is the first rate
  • Step 1502 Fill the overhead block content in the service flow of the first rate
  • Step 1503 Recover the customer service from the service flow of the first rate.
  • the service flow of the first rate is a sub calendar member, and one or more sub calendar members recover the master calendar, and the client calendar is extracted and restored from the master calendar.
  • In an embodiment, interleaving the multiple service flows whose rates are lower than the first rate to form the service flow of the first rate includes:
  • interleaving two 25G rate service flows to form a 50G rate service flow, and interleaving that 50G rate service flow with another 50G rate service flow into a 100G rate service flow.
  • The interleaving may be performed in units of one bit, or in units of one data block.
  • In an embodiment, interleaving the multiple service flows whose rates are lower than the first rate to form the service flow of the first rate includes:
  • performing the interleaving in units of one data block, and dispersing the overhead blocks obtained after the interleaving so that adjacent overhead blocks are separated by 1023*20 data blocks.
  • After interleaving, only the overhead block positions are retained, and the consecutive overhead blocks are then spread out evenly. For example, if there are two consecutive overhead blocks, one of them is moved backward by 1023*20 data block positions; if there are four consecutive overhead blocks, one is moved backward by 1023*20 data block positions, one by 2*1023*20 data block positions, and one by 3*1023*20 data block positions. After the moves, all overhead blocks are equally spaced by 1023*20 data blocks.
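  • A simplified sketch of this spreading step (an assumption-laden model: positions are list indices, only a single multiframe-aligned run of consecutive overhead positions at the start of the stream is handled, and real buffering is ignored):

```python
PERIOD = 1023 * 20 + 1   # one overhead block followed by 1023*20 data blocks

def spread_overhead(blocks):
    """blocks: a leading run of 'OH' entries followed by data blocks; spread the 'OH's evenly."""
    out = [b for b in blocks if b != "OH"]          # keep the data blocks
    run = sum(1 for b in blocks if b == "OH")       # consecutive overhead positions to re-place
    for k in range(run):                            # the k-th overhead moves back by k periods
        out.insert(k * PERIOD, "OH")
    return out

merged = ["OH", "OH"] + [f"D{i}" for i in range(2 * 1023 * 20)]
spaced = spread_overhead(merged)
assert spaced[0] == "OH" and spaced[PERIOD] == "OH"   # 1023*20 data blocks apart
```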
  • In an embodiment, filling the overhead block content in the service flow of the first rate includes at least one of the following:
  • starting from the multiframe boundary, the overhead block content of the first 50G service flow multiframe is sequentially filled into the overhead block positions of the even frames in the 100G service flow multiframe, and the overhead block content of the second 50G service flow multiframe is sequentially filled into the corresponding overhead block positions of the odd frames in the 100G service flow multiframe; the highest bit of the PHY number field is cleared. For example, the overhead block content of the first 50G service flow multiframe is sequentially filled into the corresponding overhead block positions of the 0th, 2nd, 4th, 6th, 8th, ..., 30th frames of the 100G service flow multiframe, and the overhead block content of the second 50G service flow multiframe is sequentially filled into the corresponding overhead block positions of the 1st, 3rd, 5th, 7th, 9th, ..., 31st frames of the 100G service flow multiframe;
  • starting from the multiframe boundary, the overhead block content of the first 25G service flow multiframe is filled into the corresponding overhead block positions of the 4Nth frames in the 100G service flow multiframe, where N is an integer greater than or equal to 0; that is, the content of the 8 overhead blocks of each frame in the first 25G service flow multiframe is filled into the 0th, 4th, 8th, 12th, 16th, ..., 28th frames of the 100G service flow multiframe. The overhead block content of the second 25G service flow multiframe is filled into the corresponding overhead block positions of the 4N+1 frames of the 100G service flow multiframe, i.e., the 1st, 5th, 9th, 13th, ..., 29th frames; the overhead block content of the third 25G service flow multiframe is filled into the corresponding overhead block positions of the 4N+2 frames, i.e., the 2nd, 6th, 10th, ..., 30th frames; and the overhead block content of the fourth 25G service flow multiframe is filled into the corresponding overhead block positions of the 4N+3 frames, i.e., the 3rd, 7th, 11th, ..., 31st frames;
  • the overhead block content of the first 25G service flow multiframe is filled into the corresponding overhead block positions of the even frames of the 50G service flow multiframe, that is, the content of the 8 overhead blocks of each frame in the first 25G service flow multiframe is filled into the corresponding overhead block positions of the 0th, 2nd, 4th, 6th, 8th, ..., 14th frames of the 50G service flow multiframe; the overhead block content of the second 25G service flow multiframe is sequentially filled into the corresponding overhead block positions of the odd frames of the 50G service flow multiframe, that is, into the 1st, 3rd, 5th, 7th, 9th, ..., 15th frames; the highest two bits of the PHY number field are cleared.
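  • As a sketch of this receive-side filling rule (the nested-dict layout and helper name are illustrative assumptions): the 8 overhead blocks of frame k of sub-flow i are copied into frame n*k + i of the combined multiframe, which is simply the inverse of the sourcing rule used when splitting:

```python
def fill_combined_overhead(sub_multiframes):
    """sub_multiframes: list of n dicts mapping (frame, block) -> overhead content."""
    n = len(sub_multiframes)
    combined = {}
    for i, mf in enumerate(sub_multiframes):
        for (frame, block), content in mf.items():
            combined[(n * frame + i, block)] = content
    return combined

a = {(k, j): f"A{k}:{j}" for k in range(16) for j in range(8)}   # first 50G flow multiframe
b = {(k, j): f"B{k}:{j}" for k in range(16) for j in range(8)}   # second 50G flow multiframe
combined = fill_combined_overhead([a, b])
assert combined[(0, 0)] == "A0:0" and combined[(1, 0)] == "B0:0" and combined[(2, 0)] == "A1:0"
```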
  • Because the overhead frame and multiframe structures of members of different rates differ, PHYs of different rates cannot be directly bundled to form a FlexE group, as shown in FIG. 16.
  • After two 50G rate PHYs are processed into a 100G rate PHY member structure, a FlexE group can be formed with the other 100G rate PHY members according to the existing standard, so that the 50G rate and 100G rate PHY members are bundled into one group, as shown in FIG. 17.
  • When 25G PHYs and 100G rate PHYs are bundled, four 25G PHYs are first processed into a 100G rate PHY member structure and then grouped with the other 100G rate PHY members, as shown in FIG. 18.
  • Alternatively, two 25G PHYs are first processed into a 50G PHY member structure, which is then processed together with another 50G PHY member into a 100G rate PHY member structure and combined with the other 100G rate PHY members to form a FlexE group, as shown in FIG. 19.
  • In an embodiment, a 100G service flow is deinterleaved into two 50G service flows.
  • In the 100G rate FlexE service flow there is one overhead block every 1023*20 data blocks, every 8 overhead blocks constitute one frame, and every 32 frames constitute a multiframe.
  • There are 8*32 overhead blocks in a multiframe; these overhead blocks are denoted by an array (frame number : overhead block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ...
  • The frame structure composed of the 8 overhead blocks of each frame is shown in FIG. 4, and the multiframe structure composed of 32 frames is shown in FIG. 5.
  • The data blocks of the time slot portion are deinterleaved (split according to the inverse of the interleaving process) in units of 66-bit blocks and divided into two sets of service flows. Only the data content of the slot portion is deinterleaved; the FlexE overhead block portion keeps its positions idle, as in the middle portion of FIG. 20. There are 20 time slots in the 100G FlexE service, numbered 0 to 19.
  • After deinterleaving, the first service flow retains the even time slots of the original service flow (0, 2, 4, ..., 18), and the second service flow retains the odd time slots of the original service flow (1, 3, 5, ..., 19).
  • The overhead block content is then filled in the two separated service flows. The method of filling the overhead block content is as follows:
  • the first overhead block position in the original multiframe (overhead array 0:0) is used as the first overhead block position in each of the two new service flow multiframes;
  • overhead content is filled only at the even overhead block positions of the original overhead blocks, and the odd positions are not filled, i.e., at positions 0:0, 0:2, 0:4, 0:6, 1:0, 1:2, 1:4, 1:6, ..., 31:0, 31:2, 31:4, 31:6.
  • Each 50G service flow thus has 128 overhead blocks filled in one multiframe, half the number of overhead blocks in a multiframe of a 100G service flow. The other overhead block positions, whose content is not filled, are deleted directly.
  • The C bit, RPF, CR and CA fields directly copy the corresponding content of the original overhead blocks; the lower 7 bits (bits 0-6) of the PHY number field directly copy the lower bits 0-6 of the PHY number field in the original overhead blocks.
  • The highest bit of the PHY number field indicates the service flow: for example, bit 7 (the highest bit) of the PHY number field in the first 50G service flow is set to 0, and bit 7 of the PHY number field in the second 50G service flow is set to 1, to distinguish the two 50G service flows.
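  • A small helper sketch for this PHY number handling (illustrative only; the function name is an assumption): the low bits of the 8-bit PHY number are copied and the high bit(s) mark which split flow the overhead belongs to, one marker bit for a two-way (50G) split and two bits for a four-way (25G) split:

```python
def mark_phy_number(phy_number: int, flow_index: int, n_flows: int) -> int:
    """Copy the low bits of an 8-bit PHY number and mark the flow index in the high bits."""
    flow_bits = (n_flows - 1).bit_length()          # 1 bit for 2 flows, 2 bits for 4 flows
    low_mask = (1 << (8 - flow_bits)) - 1
    return (flow_index << (8 - flow_bits)) | (phy_number & low_mask)

assert mark_phy_number(0x15, flow_index=1, n_flows=2) == 0x95   # highest bit set to 1
assert mark_phy_number(0x15, flow_index=3, n_flows=4) == 0xD5   # top two bits set to "11"
```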
  • Client calendar A and Client calendar B are located in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks.
  • The overhead fields Client calendar A and Client calendar B in the first 50G service flow come from the corresponding content in the overhead blocks of the even frames of the original 100G service flow, i.e., the 0th, 2nd, 4th, ..., 30th frames;
  • the overhead fields Client calendar A and Client calendar B in the second 50G service flow come from the corresponding content in the odd frames of the original 100G service flow, i.e., the 1st, 3rd, 5th, ..., 31st frames.
  • The PHY map, Manage channel-section and Manage channel-shim to shim field contents are processed similarly to the Client calendar:
  • the PHY map, Manage channel-section and Manage channel-shim to shim fields of the first 50G service flow come from the corresponding content in the overhead blocks of the even frames of the original 100G service flow (the 0th, 2nd, 4th, ..., 30th frames);
  • the overhead fields PHY map, Manage channel-section and Manage channel-shim to shim in the second 50G service flow come from the corresponding content in the odd frames of the original 100G service flow (the 1st, 3rd, 5th, ..., 31st frames).
  • In this way, two 50G rate FlexE service flows are obtained (the overhead block positions whose content is not filled are deleted directly) and are transmitted over two 50G rate lines.
  • The interleaving process of interleaving two 50G service flows into one 100G service flow is as follows. As shown in FIG. 21, there is one overhead block for every 1023*2*10 data blocks in a 50G rate FlexE service flow. The two 50G rate FlexE service flows are aligned with reference to the FlexE overhead block positions (multiframe boundaries) and then interleaved in units of 66-bit blocks to form a 100G service flow. After interleaving, each data block position carries the interleaving result of the data blocks of the two 50G service flows, while each overhead block position only retains its position and is not yet filled with overhead content; FIG. 21 shows the result of the interleaving.
  • The 8 overhead blocks of each frame of the first 50G service flow are filled into the 8 overhead block positions of the even frames of the 100G service flow, and the 8 overhead blocks of each frame of the second 50G service flow are filled into the 8 overhead block positions of the odd frames of the 100G service flow.
  • The specific process is as follows: the 8 overhead blocks of the 0th frame in the first 50G service flow multiframe are filled into the 8 overhead block positions of the 0th frame in the 100G service flow (for example, A 0:0 is filled to 0:0, A 0:1 to 0:1, A 0:3 to 0:3, ..., A 0:7 to 0:7); then the 8 overhead blocks of the 0th frame in the second 50G service flow multiframe are filled into the 8 overhead block positions of the 1st frame in the 100G service flow (for example, B 0:0 is filled to 1:0, B 0:1 to 1:1, B 0:3 to 1:3, ..., B 0:7 to 1:7).
  • The second round of filling then starts: the 8 overhead blocks of the 1st frame in the first 50G service flow multiframe are filled into the 8 overhead block positions of the 2nd frame in the 100G service flow multiframe, the 8 overhead blocks of the 1st frame in the second 50G service flow multiframe are filled into the 8 overhead block positions of the 3rd frame in the 100G service flow multiframe, and so on.
  • In the 50G member definition, the maximum number of members in the FlexE group is 128, so only the lower 7 bits of the PHY number field are valid, and the highest bit may be used to represent the service flow.
  • Alternatively, the highest bit of the PHY number field may be set to "0" and used for other purposes.
  • In an embodiment, a 100G service flow is deinterleaved into four 25G service flows.
  • The structure of the 100G rate FlexE service is shown in the upper part of FIG. 22: there is one overhead block every 1023*20 data blocks, every 8 overhead blocks form an overhead frame, and every 32 frames form a multiframe.
  • The overhead blocks are denoted by an array (frame number : overhead block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ..., 31:0, 31:1, 31:2, 31:3, 31:4, 31:5, 31:6, 31:7, a total of 256 overhead blocks.
  • The deinterleaving process operates in units of 66-bit blocks: the data blocks of the slot portion are deinterleaved (split according to the inverse of the interleaving process) and divided into four sets of service flows.
  • The FlexE overhead block portion retains the overhead block positions, as in the middle portion of FIG. 22.
  • The 0th, 4th, 8th, 12th and 16th time slots of the original service flow are retained in the first service flow;
  • the 1st, 5th, 9th, 13th and 17th time slots of the original service flow are retained in the second service flow;
  • the 2nd, 6th, 10th, 14th and 18th time slots of the original service flow are retained in the third service flow;
  • the 3rd, 7th, 11th, 15th and 19th time slots of the original service flow are retained in the fourth service flow.
  • The first overhead block position in the original multiframe (overhead array 0:0) is used as the first overhead block position of each new service flow multiframe.
  • Overhead content is filled at every fourth overhead block position of the original overhead block positions, i.e., the 0th, 4th, 8th, 12th, 16th, 20th, 24th, 28th, ... positions are filled; the multiframe of each 25G service flow is composed of 8 frames.
  • The other overhead block positions, whose content is not filled, are deleted directly.
  • The C bit, RPF, CR and CA fields directly copy the corresponding content of the original overhead blocks; the lower bits 0-5 of the PHY number field directly copy the lower bits 0-5 of the PHY number field in the original overhead blocks.
  • The highest two bits of the PHY number field indicate the service flow: for example, the highest two bits of the PHY number field in the first 25G service flow are set to "00", in the second 25G service flow to "01", in the third 25G service flow to "10", and in the fourth 25G service flow to "11"; the highest two bits thus distinguish the four 25G service flows.
  • Client calendar A and Client calendar B are located in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks: the overhead fields Client calendar A and Client calendar B in the first 25G service flow come from the corresponding content in the overhead blocks of the 0th, 4th, 8th, 12th, 16th, 20th, 24th and 28th frames of the original 100G service flow;
  • the overhead fields Client calendar A and Client calendar B in the second 25G service flow come from the corresponding content in the overhead blocks of the 1st, 5th, 9th, 13th, 17th, 21st, 25th and 29th frames; those in the third 25G service flow come from the 2nd, 6th, 10th, 14th, 18th, 22nd, 26th and 30th frames;
  • and those in the fourth 25G service flow come from the corresponding content in the overhead blocks of the 3rd, 7th, 11th, 15th, 19th, 23rd, 27th and 31st frames of the original 100G service flow.
  • The PHY map, Manage channel-section and Manage channel-shim to shim field contents are processed similarly to the Client calendar:
  • the PHY map, Manage channel-section and Manage channel-shim to shim fields of the first 25G service flow come from the corresponding content in the overhead blocks of the 4Nth frames (the 0th, 4th, 8th, ..., 28th frames) of the original 100G service flow, and the corresponding fields of the second, third and fourth 25G service flows come from the 4N+1, 4N+2 and 4N+3 frames respectively.
  • the implementation of four 25G service flows interleaved into one 100G service flow is shown in Figure 23.
  • In the 25G rate FlexE service flow there is one overhead block for every 1023*4*5 data blocks. The four 25G rate FlexE service flows are aligned with reference to the FlexE overhead block positions (multiframe boundaries) and then interleaved in units of 66-bit blocks to form a 100G service flow.
  • After interleaving, the data block portion is the interleaving result of the data blocks of the four 25G services.
  • The overhead block portion retains only the overhead block positions after interleaving and is not yet filled with content, as shown in FIG. 23.
  • A service flow with a rate of 100G is thus obtained, in which four consecutive overhead block positions appear every 4*1023*4*5 data blocks.
  • Three of the four consecutive overhead blocks are then moved backward by 1023*4*5, 2*1023*4*5 and 3*1023*4*5 data block positions respectively, so that a FlexE overhead block position appears every 1023*4*5 data blocks at equal intervals; then only the contents of the overhead blocks need to be filled, and the resulting structure is exactly the same as that defined in the FlexE V1.0 standard.
  • The method of filling the content of the overhead block positions is as follows:
  • the 8 overhead blocks of each frame of the first 25G service flow multiframe are filled into the 8 overhead block positions of the 0th, 4th, 8th, 12th, 16th, 20th, 24th and 28th frames of the 100G service flow multiframe; the 8 overhead blocks of each frame of the second 25G service flow multiframe are filled into the 8 overhead block positions of the 1st, 5th, 9th, 13th, 17th, 21st, 25th and 29th frames; the 8 overhead blocks of each frame of the third 25G service flow multiframe are filled into the 8 overhead block positions of the 2nd, 6th, 10th, 14th, 18th, 22nd, 26th and 30th frames;
  • and the 8 overhead blocks of each frame of the fourth 25G service flow multiframe are filled into the 8 overhead block positions of the 3rd, 7th, 11th, 15th, 19th, 23rd, 27th and 31st frames of the 100G service flow multiframe.
  • Specifically, the 8 overhead blocks of the 0th frame of the first 25G service flow multiframe are filled into the 8 overhead block positions of the 0th frame of the 100G service flow (for example, A 0:0 is filled to 0:0, A 0:1 to 0:1, A 0:3 to 0:3, ..., A 0:7 to 0:7); then the 8 overhead blocks of the 0th frame of the second 25G service flow multiframe
  • are filled into the 8 overhead block positions of the 1st frame of the 100G service flow (for example, B 0:0 is filled to 1:0, B 0:1 to 1:1, B 0:3 to 1:3, ..., B 0:7 to 1:7);
  • and the 8 overhead blocks of the 0th frame of the third 25G service flow multiframe are filled into the 8 overhead block positions of the 2nd frame of the 100G service flow multiframe, and so on for the fourth flow and the 3rd frame.
  • The second round of filling then starts: the 8 overhead blocks of the 1st frame of the first 25G service flow multiframe are filled into the 8 overhead block positions of the 4th frame of the 100G service flow multiframe,
  • the 8 overhead blocks of the 1st frame of the second 25G service flow multiframe are filled into the 8 overhead block positions of the 5th frame of the 100G service flow multiframe, and so on.
  • In the 25G member definition, the maximum number of members in the FlexE group is 64, so only the lower 6 bits of the PHY number field are valid, and the highest two bits may be used to indicate the service flow.
  • Alternatively, the highest two bits of the PHY number field may be set to "0" and used for other purposes.
  • In this way, a 100G member FlexE protocol information flow is formed, which can form a FlexE group with other 100G rate members according to the FlexE standard definitions for service recovery and processing.
  • In an embodiment, a 50G service flow is deinterleaved into two 25G service flows, and two 25G service flows are interleaved into one 50G service flow.
  • These processes are similar to deinterleaving one 100G service flow into two 50G service flows and interleaving two 50G service flows into one
  • 100G service flow, except that the multiframe composed of 32 frames is halved to a multiframe composed of 16 frames, and the multiframe composed of 16 frames is halved to a multiframe composed of 8 frames.
  • In the 50G rate FlexE service flow there is one overhead block every 1023*2*10 data blocks, every 8 overhead blocks form one frame, and every 16 frames form a multiframe.
  • These overhead blocks are denoted by an array (frame number : overhead block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ..., 15:0, 15:1, 15:2, 15:3, 15:4, 15:5, 15:6, 15:7, a total of 128 overhead blocks.
  • The data blocks of the time slot portion are deinterleaved (split according to the inverse of the interleaving process) in units of 66-bit blocks and divided into two sets of service flows. Only the data content of the slot portion is deinterleaved; the FlexE overhead block portion keeps its positions idle. There are 10 slots in the 50G FlexE service, numbered 0 to 9. After deinterleaving, the first service flow retains the even time slots of the original service flow (0, 2, 4, 6 and 8), and the second service flow retains the odd time slots of the original service flow (1, 3, 5, 7 and 9).
  • The overhead block content is then filled in the two separated service flows. The method of filling the overhead block content is as follows:
  • the first overhead block position in the original multiframe (overhead array 0:0) is used as the first overhead block position in each of the two new service flow multiframes;
  • overhead content is filled only at the even overhead block positions of the original overhead blocks, and the odd positions are not filled, i.e., at positions 0:0, 0:2, 0:4, 0:6, 1:0, 1:2, 1:4, 1:6, ..., 15:0, 15:2, 15:4, 15:6.
  • Each 25G service flow thus has 64 overhead blocks filled in one multiframe, half the number of overhead blocks in a multiframe of a 50G service flow. The other overhead block positions, whose content is not filled, are deleted directly.
  • The C bit, RPF, CR and CA fields directly copy the corresponding content of the original overhead blocks; the lower 6 bits (bits 0-5) of the PHY number field directly copy the lower bits 0-5 of the PHY number field in the original overhead blocks.
  • The upper 2 bits of the PHY number field indicate the service flow.
  • Client calendar A and Client calendar B are located in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks.
  • The overhead fields Client calendar A and Client calendar B in the first 25G service flow come from the corresponding content in the overhead blocks of the even frames of the original 50G service flow, i.e., the 0th, 2nd, 4th, ..., 14th frames;
  • the overhead fields Client calendar A and Client calendar B in the second 25G service flow come from the corresponding content in the odd frames of the original 50G service flow, i.e., the 1st, 3rd, 5th, ..., 15th frames.
  • The PHY map, Manage channel-section and Manage channel-shim to shim field contents are processed similarly to the Client calendar:
  • the PHY map, Manage channel-section and Manage channel-shim to shim fields of the first 25G service flow come from the corresponding content in the overhead blocks of the even frames of the original 50G service flow (the 0th, 2nd, 4th, ..., 14th frames), and the overhead fields PHY map, Manage channel-section and Manage channel-shim to shim in the second 25G service flow come from the corresponding content in the odd frames of the original 50G service flow (the 1st, 3rd, 5th, ..., 15th frames).
  • In this way, two 25G rate FlexE service flows are obtained (the overhead block positions whose content is not filled are deleted directly) and are transmitted over two 25G rate lines.
  • the interleaving process of interleaving two 25G service flows into one 50G service flow is as follows.
  • In the 25G rate FlexE service flow, there is one overhead block for every 1023*4*5 data blocks.
  • Two 25G rate FlexE traffic flows are aligned with FlexE overhead block locations (multiframe boundaries) and then interleaved in units of 66-bit blocks to form a 50G traffic stream.
  • the data block is the interleaving result of the data blocks in the two 25G service flows.
  • the overhead block only retains the position of the overhead block, and does not fill the content of the overhead block.
  • a service stream with a rate of 50G is obtained, and two overhead block positions appear at intervals of 2*1023*2*10 data blocks.
  • One of the consecutive two overhead blocks is moved backward by 1023*2*10 data blocks, so that a FlexE overhead block position appears every 1023*2*10 data blocks.
  • the rules for filling the content of the overhead block in the 50G service flow are as follows:
  • The 8 overhead blocks of each frame of the first 25G service flow are filled into the 8 overhead block positions of the even frames of the 50G service flow, and the 8 overhead blocks of each frame of the second 25G service flow are filled into the 8 overhead block positions of the odd frames of the 50G service flow.
  • The specific process is as follows: the 8 overhead blocks of the 0th frame in the first 25G service flow multiframe are filled into the 8 overhead block positions of the 0th frame in the 50G service flow (for example, A 0:0 is filled to 0:0, A 0:1 to 0:1, A 0:3 to 0:3, ..., A 0:7 to 0:7); then the 8 overhead blocks of the 0th frame in the second 25G service flow multiframe
  • are filled into the 8 overhead block positions of the 1st frame in the 50G service flow (for example, B 0:0 is filled to 1:0, B 0:1 to 1:1, B 0:3 to 1:3, ..., B 0:7 to 1:7).
  • The second round of filling then starts: the 8 overhead blocks of the 1st frame in the first 25G service flow multiframe are filled into the 8 overhead block positions of the 2nd frame in the 50G service flow multiframe, the 8 overhead blocks of the 1st frame in the second 25G service flow multiframe are filled into the 8 overhead block positions of the 3rd frame in the 50G service flow multiframe, and so on.
  • In the 25G member definition, the maximum number of members in the FlexE group is 64, so only the lower 6 bits of the PHY number field are valid, and the highest 2 bits may be used to represent the service flow.
  • Alternatively, the highest 2 bits of the PHY number field may be set to "0" and used for other purposes.
  • In this way, a 50G member FlexE protocol information flow is formed, which can form a FlexE group with other 50G rate members to perform customer service recovery.
  • In the above embodiments, interleaving and deinterleaving are performed in units of one data block, but the present disclosure is not limited thereto; in other embodiments, interleaving and deinterleaving may be performed in units of multiple data blocks or in other units of one or more bits.
  • an embodiment of the present disclosure provides a service transmitting apparatus 2400, which includes a memory 2410 and a processor 2420.
  • the memory 2410 stores a program, and when the program is read and executed by the processor 2420, the following operations are performed:
  • mapping a customer service to one or more service flows of a first rate; dividing at least one service flow of the first rate into multiple service flows of other rates, and filling overhead blocks in the service flows of the other rates; and sending the service flows through channels of the corresponding rates.
  • the program when read and executed by the processor, further performs the service sending method described in any of the above embodiments.
  • An embodiment of the present disclosure provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the foregoing A method for transmitting a service as described in an embodiment.
  • an embodiment of the present disclosure provides a service receiving apparatus 2500, which includes a memory 2510 and a processor 2520.
  • the memory 2510 stores a program, and when the program is read and executed by the processor 2520, the following operations are performed: interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate; filling overhead block content in the service flow of the first rate; and recovering a customer service from the service flow of the first rate.
  • In an embodiment, the program, when read and executed by the processor, further performs the service receiving method described in any of the above embodiments.
  • An embodiment of the present disclosure provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the foregoing A service receiving method according to an embodiment.
  • the computer readable storage medium includes: a USB flash disk (U disk), a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program code.
  • the present disclosure relates to the field of communication technology.
  • the technical solution of the present disclosure maps a customer service to one or more service flows of a first rate; divides at least one service flow of the first rate into multiple service flows of other rates, and fills overhead blocks in the service flows of the other rates; and sends the service flows through channels of the corresponding rates.
  • the solution provided by the present disclosure implements service transmission between members of different rates.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Communication Control (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Disclosed are a service sending method and apparatus, and a service receiving method and apparatus. The service sending method includes: mapping a customer service to one or more service flows of a first rate; dividing at least one service flow of the first rate into multiple service flows of other rates, and filling overhead blocks in the service flows of the other rates; and sending the service flows through channels of the corresponding rates. The solution provided in the embodiments implements service transmission between members of different rates.

Description

Service sending method and apparatus, and service receiving method and apparatus
Technical Field
The present disclosure relates to communication technologies, and in particular, to a service sending method and apparatus, and a service receiving method and apparatus.
Background
The rapid increase of user network traffic has driven a rapid increase in the transmission bandwidth of communication networks. The interface bandwidth of communication equipment has risen from 10M (unit: bit/second, the same below) to 100M, 1G and 10G, and has now reached 100G; 100G optical modules are already in large-scale commercial use on the market. 400G optical modules have been developed, but they are expensive, costing more than four 100G optical modules, so 400G optical modules lack commercial economic value. To carry 400G services over 100G optical modules, the international standards organization defined the FlexE (Flexible Ethernet) protocol. The FlexE protocol bundles multiple 100G optical modules to form a high-speed transmission channel. As shown in FIG. 1, four 100G optical modules are bundled through the FlexE protocol to form a 400G transmission channel, equivalent to the transmission speed of one 400G optical module, which meets the need to carry 400G services without increasing cost. The physical layer defined in the current FlexE protocol V1.0 standard is 100G, with 20 time slots defined on the 100G physical layer. The V2.0 draft only defines application methods for physical-layer members at the 200G and 400G rates. 50G optical module technology is now mature and economically viable, and the market has raised a demand for FlexE application scenarios with 25G and 50G rate PHY (physical layer) members; frame structures for 25G and 50G rate FlexE members have been defined by drawing on the frame structure of 100G rate FlexE members. However, there is no specific method for how services are transmitted between members of different rates.
Summary
At least one embodiment of the present disclosure provides a service sending method and apparatus, and a service receiving method and apparatus, which implement service transmission between members of different rates.
To achieve the purpose of the present disclosure, at least one embodiment of the present disclosure provides a service sending method, including:
mapping a customer service to one or more service flows of a first rate;
dividing at least one service flow of the first rate into multiple service flows of other rates, and filling overhead blocks in the service flows of the other rates;
and sending the service flows through channels of the corresponding rates.
At least one embodiment of the present disclosure provides a service sending apparatus, including a memory and a processor, where the memory stores a program which, when read and executed by the processor, implements the service sending method described in any of the embodiments.
At least one embodiment of the present disclosure provides a service receiving method, including:
interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate;
filling overhead block content in the service flow of the first rate;
and recovering a customer service from the service flow of the first rate.
At least one embodiment of the present disclosure provides a service receiving apparatus, including a memory and a processor, where the memory stores a program which, when read and executed by the processor, implements the service receiving method described in any of the embodiments.
Compared with the related art, at least one embodiment of the present disclosure maps a customer service to one or more service flows of a first rate; divides at least one service flow of the first rate into multiple service flows of other rates, and fills overhead blocks in the service flows of the other rates; and sends the service flows through channels of the corresponding rates. The solution provided by the embodiments implements service transmission between members of different rates.
Other features and advantages of the present disclosure will be set forth in the following description, and will partly become apparent from the description or be understood by implementing the present disclosure. The objectives and other advantages of the present disclosure can be realized and obtained by the structures particularly pointed out in the description, the claims and the drawings.
Brief Description of the Drawings
The drawings are provided for a further understanding of the technical solution of the present disclosure and constitute a part of the description; together with the embodiments of the present application, they serve to explain the technical solution of the present disclosure and do not limit it.
FIG. 1 is a schematic diagram of a FlexE protocol application in the related art.
FIG. 2 is a schematic diagram of the arrangement of overhead blocks and data blocks in the FlexE protocol (100G rate).
FIG. 3 is a schematic diagram of the distribution of FlexE protocol (100G rate) services over multiple physical channels.
FIG. 4 is a schematic diagram of the FlexE protocol frame (100G rate) structure.
FIG. 5 is a schematic diagram of the FlexE protocol multiframe (100G rate) structure.
FIG. 6 is a schematic diagram of FlexE protocol client service bearer access.
FIG. 7 is a schematic diagram of FlexE protocol client service bearer recovery.
FIG. 8 is a schematic diagram of the arrangement of overhead blocks and data blocks in the FlexE protocol (50G rate).
FIG. 9 is a schematic diagram of the Client calendar of the FlexE protocol (50G rate) multiframe structure.
FIG. 10 is a schematic diagram of the PHY map of the FlexE protocol (50G rate) multiframe structure.
FIG. 11 is a schematic diagram of the arrangement of overhead blocks and data blocks in the FlexE protocol (25G rate).
FIG. 12 is a schematic diagram of the Client calendar of the FlexE protocol (25G rate) multiframe structure.
FIG. 13 is a schematic diagram of the PHY map of the FlexE protocol (25G rate) multiframe structure.
FIG. 14 is a flowchart of a service sending method according to an embodiment of the present disclosure.
FIG. 15 is a flowchart of a service receiving method according to an embodiment of the present disclosure.
FIG. 16 is a schematic diagram of a bundle group composed of 50G rate and 100G rate members according to an embodiment of the present disclosure.
FIG. 17 is a schematic diagram of an implementation scheme of a bundle group composed of 50G rate and 100G rate members according to an embodiment of the present disclosure.
FIG. 18 is a schematic diagram of an implementation scheme of a bundle group composed of 25G rate and 100G rate members according to an embodiment of the present disclosure.
FIG. 19 is a schematic diagram of an implementation scheme of a bundle group composed of members of the three rates 25G, 50G and 100G according to an embodiment of the present disclosure.
FIG. 20 is a schematic structural diagram of deinterleaving a 100G rate service into 50G rate services according to an embodiment of the present disclosure.
FIG. 21 is a schematic structural diagram of interleaving 50G rate services into a 100G rate service according to an embodiment of the present disclosure.
FIG. 22 is a schematic structural diagram of deinterleaving a 100G rate service into 25G rate services according to an embodiment of the present disclosure.
FIG. 23 is a schematic structural diagram of interleaving 25G rate services into a 100G rate service according to an embodiment of the present disclosure.
FIG. 24 is a block diagram of a service sending apparatus according to an embodiment of the present disclosure.
FIG. 25 is a block diagram of a service receiving apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in detail below with reference to the drawings. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other arbitrarily.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that given here.
The current FlexE protocol is defined for a 100G physical-layer rate. In an optical module, before a 100G data packet is sent, the packet is 64B/66B encoded: each 64-bit data block is expanded into a 66-bit block, with the two added bits placed at the very front of the 66-bit block as its start flag, and the data is then sent from the optical port as 66-bit blocks. On reception, the optical port identifies the 66-bit blocks in the received data stream, recovers the original 64-bit data from them, and reassembles the data packets. The FlexE protocol sits at the 64-bit-to-66-bit block conversion layer; before the 66-bit data blocks are sent, they are ordered and scheduled. As shown in Fig. 2, for a 100G service every 20 66-bit data blocks form a data block group; each 66-bit data block represents one time slot, each data block group represents 20 time slots, and each time slot represents 5G of service bandwidth. When the 66-bit data blocks are sent, after every 1023 data block groups (i.e. 1023*20 data blocks) one FlexE overhead block is inserted (the hatched block in Fig. 2). After the overhead block is inserted, data blocks continue to be sent; after the next 1023*20 data blocks the next overhead block is inserted, and so on. Overhead blocks are inserted periodically while data blocks are being sent, with an interval of 1023*20 data blocks between two adjacent overhead blocks.
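The 1:(1023*20) overhead cadence described above can be made concrete with a short sketch. The following Python fragment is purely illustrative and not part of the patent text; blocks are modeled as simple tuples and emit_with_overhead is a hypothetical helper name.

    BLOCKS_PER_GROUP = 20          # 66-bit data blocks per block group (100G: 20 slots)
    GROUPS_PER_OVERHEAD = 1023     # block groups between two overhead blocks

    def emit_with_overhead(data_blocks):
        """Yield data blocks, inserting one overhead placeholder every
        1023*20 data blocks (the 100G FlexE cadence described above)."""
        interval = BLOCKS_PER_GROUP * GROUPS_PER_OVERHEAD
        for i, block in enumerate(data_blocks):
            if i % interval == 0:
                yield ("OH", None)         # overhead block position
            yield ("DATA", block)

    # usage: count overhead positions in a short synthetic stream
    stream = list(emit_with_overhead(range(2 * 1023 * 20)))
    print(sum(1 for kind, _ in stream if kind == "OH"))   # -> 2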
When four 100G physical layers are bonded into one 400G logical service bandwidth, as shown in Fig. 3, each physical layer still forms data block groups of 20 data blocks and inserts one overhead block every 1023 data block groups. In the FlexE master calendar (located in the shim layer), the four sets of 20 data blocks are assembled into one data block group of 80 data blocks, containing 80 time slots. Client services are carried in these 80 time slots, where each time slot provides 5G of bandwidth, giving a total service bandwidth of 400G over the 80 slots.
The FlexE overhead block is a 66-bit block; when the service data stream is sent, one overhead block is inserted every 1023*20 data blocks. The overhead block provides a positioning function in the whole service stream: once the overhead block position is determined, the position of the first data block group in the service, and of the subsequent data block groups, is known. In the FlexE protocol, 8 overhead blocks are defined to form a frame, called an overhead frame, as shown in Fig. 4. An overhead block consists of a 2-bit block flag and 64 bits of block content; the block flag occupies the first 2 columns and the following 64 columns are the block content. The block flag of the first overhead block in an overhead frame is 10, and the block flags of the following 7 overhead blocks are 01 or SS (SS means the content is not determined). When the designated positions of an overhead block contain 4B (hexadecimal, denoted 0x4B) and 05 (hexadecimal, denoted 0x5), that overhead block is the first overhead block of an overhead frame and forms an overhead frame together with the following 7 overhead blocks. The first overhead block contains the following information: 0x4B (8 bits, hexadecimal 4B), the C bit (1 bit, adjustment control indication), the OMF bit (1 bit, overhead multiframe indication), the RPF bit (1 bit, remote defect indication), the RES bit (1 bit, reserved), the FlexE group number (20 bits, the number of the bonding group), 0x5 (4 bits, hexadecimal 5), and 000000 (28 bits, all zeros). The 0x4B and 0x5 are the flag indication of the first overhead block; on reception, when an overhead block is found whose corresponding positions are 0x4B and 0x5, that overhead block is the first overhead block of an overhead frame and forms an overhead frame together with the following 7 consecutive overhead blocks. In the overhead frame, the reserved part is reserved content that has not yet been defined. PHY number indicates the number of this member PHY (physical layer) in the group, in the range 0-255. PHY map indicates the presence of each PHY in the group; there are 8 PHY map bits per frame, giving 256 bits in the multiframe of 32 frames, which indicate whether each of 256 PHY members is in the group: if so, the corresponding bit is set to "1", otherwise it is set to "0". A 100G-rate FlexE frame has 20 time slots, each of which can carry client information; the Client calendar indicates the client name carried in each time slot. One frame carries the client name of one time slot, so one multiframe can carry 32 time slots, but only 20 time slots actually exist: the first 20 entries are valid and the remaining ones are reserved. The time-slot-to-client mapping is indicated by Client calendar A and Client calendar B; in normal operation only one of the two is active (the C bit indicates which one) and the other is standby. The two indications are used for dynamic time slot adjustment: when a time slot is reconfigured, only the standby calendar content is changed, and then both ends switch to the new configuration table simultaneously. For the other bytes of the overhead frame, refer to the related art.
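As an illustration of the 0x4B/0x5 check and the field order listed above, the following Python sketch parses the 64 content bits of a candidate first overhead block. It assumes the content bits are supplied as an integer with the first transmitted bit as the most significant bit; the function names and this bit-offset mapping are inferred from the field order given in the text and are not taken from any standard document.

    def is_first_overhead_block(content: int) -> bool:
        """Check whether a 64-bit overhead content word starts an overhead
        frame: the first byte must be 0x4B and the 4-bit field after the
        20-bit group number must be 0x5 (content holds 64 bits, MSB first)."""
        first_byte = (content >> 56) & 0xFF        # bits 0-7: 0x4B marker
        marker_0x5 = (content >> 28) & 0xF         # 4-bit field after group number
        return first_byte == 0x4B and marker_0x5 == 0x5

    def parse_first_overhead_block(content: int) -> dict:
        """Extract the fields of the first overhead block, layout as above."""
        return {
            "C":            (content >> 55) & 0x1,      # adjustment control
            "OMF":          (content >> 54) & 0x1,      # overhead multiframe indication
            "RPF":          (content >> 53) & 0x1,      # remote defect indication
            "RES":          (content >> 52) & 0x1,      # reserved bit
            "group_number": (content >> 32) & 0xFFFFF,  # 20-bit FlexE group number
        }

    # example: build a conforming word and check it
    word = (0x4B << 56) | (1 << 54) | (0x12345 << 32) | (0x5 << 28)
    assert is_first_overhead_block(word)
    print(parse_first_overhead_block(word))   # OMF=1, group_number=0x12345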
In the first overhead block, the OMF field is the multiframe indication signal, as shown in Fig. 5. OMF is a single-bit value; in the 100G frame structure, OMF is 0 for 16 consecutive frames, then 1 for 16 consecutive frames, then 0 for the next 16 frames, then 1 for the next 16, repeating every 32 frames. Thus 32 frames form one multiframe, and one multiframe contains 8*32 overhead blocks.
Fig. 6 is a schematic diagram of the process of carrying a client service with the FlexE protocol. As shown in Fig. 6, the client service is first 64B/66B encoded: the client service stream is cut into 64-bit (8-byte) blocks, and each 64-bit block is encoded into a 66-bit data block. After 64B/66B encoding, the service stream becomes a stream of 66-bit data blocks. Idle blocks are inserted into or deleted from this data block stream for rate adjustment, to adapt to the rate of the master calendar in the FlexE protocol. The 66-bit data blocks are placed into the FlexE master calendar according to the time slot configuration. The time slot schedule structure is shown in Fig. 5; in the FlexE protocol each member is divided into 20 time slots (each time slot is one 66-bit data block and represents 5G of service bandwidth), so with 4 members the schedule contains 80 time slots in total. Configuration determines which time slots carry each client service. The schedule groups all time slots, 20 per group, and delivers them to each member defined by the FlexE protocol; each member inserts FlexE overhead blocks on top of these time slots (the overhead block is also a 66-bit block, one inserted every 20*1023 time slot blocks, see Fig. 2). In the figure each member is one sub calendar, carried and transmitted on one PHY. After the FlexE overhead blocks are inserted, each PHY scrambles the carried service stream and sends it out through the PMA (Physical Medium Attachment).
At the receiving end, as shown in Fig. 7, the PMA receives the signal and descrambling recovers the 66-bit blocks. Among the 66-bit blocks, each PHY searches for the FlexE overhead blocks and, using the overhead block as the reference position, recovers the FlexE frame structure and obtains the sub calendar. The time slots of all members are arranged in order to re-establish the master calendar structure. According to the configuration information, the service stream is taken from the corresponding time slots of the master calendar, the idle blocks are removed, and 66B/64B decoding is performed to recover the original client service.
For a 50G-rate PHY member, the relationship between data blocks and overhead blocks is shown in Fig. 8: the number of time slots is 10, and one FlexE overhead block is inserted every 1023*2*10 data blocks, so the ratio of overhead blocks to data blocks is 1:(1023*2*10), exactly the same as the 1:(1023*20) ratio of a 100G-rate member. As at 100G, 8 overhead blocks form one FlexE frame, as shown in Fig. 9, and the frame structure content is consistent with the 100G FlexE frame structure. The differences lie in the OMF field, the PHY map and the Client calendar: the OMF field (consisting of 8 consecutive "0"s followed by 8 consecutive "1"s) indicates the multiframe structure, and the number of frames per multiframe changes from 32 to 16, i.e. 16 frames form one multiframe containing 8*16 overhead blocks. In this multiframe mode the maximum number of group members is halved from 256 to 128, so the total number of PHY map bits is reduced from 256 to 128, as shown in Fig. 9. Since there are 10 time slots, the Client calendar field has 16 entries: the first 10 indicate the client identifiers of the 10 time slots and the remaining 6 are reserved, as shown in Fig. 10.
For a 25G-rate PHY member, the relationship between data blocks and overhead blocks is shown in Fig. 11: the number of time slots is 5, and one FlexE overhead block is inserted every 1023*4*5 data blocks, so the ratio of overhead blocks to data blocks is 1:(1023*4*5), exactly the same as the 1:(1023*20) ratio of a 100G-rate member. As at 100G, 8 overhead blocks form one FlexE frame, as shown in Fig. 12, and the frame structure content is consistent with the 100G FlexE frame structure. The differences lie in the OMF field, the PHY map and the Client calendar: the OMF field (consisting of 4 consecutive "0"s followed by 4 consecutive "1"s) indicates the multiframe structure, and the multiframe composition changes from 32 to 8, i.e. 8 overhead frames form one multiframe containing 8*8 overhead blocks. In this multiframe the maximum number of group members is reduced from 256 to 64, so the total number of PHY map bits is reduced from 256 to 64. Since there are 5 time slots, the Client calendar field has 8 entries: the first 5 indicate the client identifiers of the 5 time slots and the remaining 3 are reserved, as shown in Fig. 13.
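For reference, the structural parameters quoted above for the three member rates can be gathered in one place. The sketch below simply restates the figures from the text (slot count, overhead spacing, frames per multiframe, PHY map width, Client calendar entries) in Python; it is not an API of any FlexE implementation, and the closing assertion checks the point made above that the overhead-to-data ratio is identical at all three rates.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlexEMemberProfile:
        rate_gbps: int
        slots: int                   # 5G time slots per member
        overhead_spacing: int        # data blocks between two overhead blocks
        frames_per_multiframe: int
        phy_map_bits: int            # equals the maximum number of group members
        client_calendar_entries: int

    PROFILES = {
        100: FlexEMemberProfile(100, 20, 1023 * 20,     32, 256, 32),
        50:  FlexEMemberProfile(50,  10, 1023 * 2 * 10, 16, 128, 16),
        25:  FlexEMemberProfile(25,   5, 1023 * 4 * 5,   8,  64,  8),
    }

    # the same overhead-to-data ratio at every rate is what makes it possible to
    # block-interleave two 50G (or four 25G) members into a 100G member structure
    assert len({p.overhead_spacing for p in PROFILES.values()}) == 1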
An embodiment of the present disclosure provides a service sending method, as shown in Fig. 14, including:
Step 1401: mapping a client service onto, and dividing it into, one or more service flows of a first rate.
For example, the client service is mapped into the master calendar for carrying, and the master calendar is then divided into several 100G-rate sub calendar members according to the protocol standard.
As another example, the client service is mapped into the master calendar for carrying, and the master calendar is then divided into several 50G-rate sub calendar members.
Step 1402: dividing at least one first-rate service flow into multiple service flows of other rates, and filling overhead blocks into the other-rate service flows.
For example, one 100G sub calendar member is divided into two 50G-rate service flows.
Step 1403: sending the service flows through channels of the corresponding rates.
For example, a 100G service flow is sent through a 100G optical module, a 50G service flow through a 50G optical module, and a 25G service flow through a 25G optical module.
In an embodiment, dividing at least one first-rate service flow into multiple service flows of other rates includes:
dividing at least one 100G-rate service flow into two 50G-rate service flows;
or dividing at least one 100G-rate service flow into four 25G-rate service flows;
or dividing at least one 100G-rate service flow into two 50G-rate service flows and dividing one of the 50G-rate service flows into two 25G-rate service flows, i.e. the 100G service flow is divided into one 50G-rate service flow and two 25G-rate service flows;
or dividing at least one 50G-rate service flow into two 25G-rate service flows.
One 100G service flow may be divided, or multiple 100G service flows may be divided.
In an embodiment, dividing at least one first-rate service flow into multiple service flows of other rates includes:
dividing the data blocks of the first-rate service flow into multiple second-rate service flows in interleaved order, one data block at a time, and reserving blank overhead blocks in the second-rate service flows at the positions corresponding to the overhead block positions of the first-rate service flow; where one data block is 66 bits, the first rate is for example 100G and the second rate is for example 50G. For example, when dividing into two flows, the even-numbered data blocks form one service flow and the odd-numbered data blocks form the other; when dividing into four flows, data blocks 0, 4, 8, 12, ... form one service flow, data blocks 1, 5, 9, 13, ... form one service flow, data blocks 2, 6, 10, 14, ... form one service flow, and data blocks 3, 7, 11, 15, ... form one service flow.
Alternatively, the data blocks of the first-rate service flow are divided into multiple second-rate service flows in interleaved order, one data block at a time, with blank overhead blocks reserved in the second-rate service flows at the positions corresponding to the overhead block positions of the first-rate service flow; and the data blocks of one second-rate service flow are divided into multiple third-rate service flows in interleaved order, one data block at a time, with blank overhead blocks reserved in the third-rate service flows at the positions corresponding to the overhead block positions of the second-rate service flow. The first rate is for example 100G, the second rate 50G, and the third rate 25G. For example, the even-numbered data blocks are taken as one 50G-rate service flow and the odd-numbered data blocks as another 50G-rate service flow; then one of the 50G-rate service flows is selected, its even-numbered data blocks are taken as one 25G-rate service flow and its odd-numbered data blocks as another 25G-rate service flow. It should be noted that the 100G-rate service flow may also be divided directly into one 50G-rate service flow and two 25G-rate service flows: for example, the even-numbered data blocks are taken as a 50G-rate service flow, the 4N+1 data blocks as one 25G-rate service flow, and the 4N+3 data blocks as another 25G-rate service flow.
Only the data blocks are divided during splitting, not the overhead blocks; the overhead block positions are retained in the divided service flows, and every service flow keeps the same overhead block positions.
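A minimal sketch of this splitting rule, in Python and purely for illustration: blocks are modeled as ('OH', x) or ('DATA', x) tuples, data blocks are dealt out round-robin one 66-bit block at a time, and a blank overhead position is kept in every output flow wherever the input flow had an overhead block. The helper name is hypothetical.

    def split_flow(blocks, n):
        """Split one FlexE-like block stream into n lower-rate streams.
        Data blocks are dealt out round-robin, one block at a time; every
        overhead block position is kept (as a blank) in all n outputs."""
        outputs = [[] for _ in range(n)]
        data_index = 0
        for kind, payload in blocks:
            if kind == "OH":
                for out in outputs:                  # same overhead position in every flow
                    out.append(("OH_BLANK", None))
            else:
                outputs[data_index % n].append(("DATA", payload))
                data_index += 1
        return outputs

    # example: 100G-style stream -> 2 x 50G-style streams
    flow_100g = [("OH", None)] + [("DATA", i) for i in range(8)]
    a, b = split_flow(flow_100g, 2)
    print(a)   # even data blocks 0,2,4,6 plus the blank overhead position
    print(b)   # odd data blocks 1,3,5,7 plus the blank overhead position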
In an embodiment, filling overhead blocks into the other-rate service flows includes at least one of the following:
When one 100G-rate service flow is divided into two 50G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, overhead block content is filled only into the overhead blocks at even overhead block positions, i.e. only at positions 0, 2, 4, 6, ..., 30; overhead blocks not filled with content are discarded. The C bit field, RPF field, CR field, CA field and the low 7 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the 100G-rate service flow; the most significant bit of the PHY number field is filled with indication information for distinguishing the service flows. The remaining field content of the overhead blocks of the first 50G-rate service flow comes from the corresponding fields of the overhead blocks of the even-numbered frames of the 100G-rate service flow, and the remaining field content of the second 50G-rate service flow comes from the corresponding fields of the overhead blocks of the odd-numbered frames of the 100G-rate service flow. For example, if 100G is divided into two 50G flows, the Client calendar of the first 50G service flow comes from the Client calendar of frames 0, 2, 4, 6, ... of the original multiframe, and the Client calendar of the second 50G service flow comes from the Client calendar of frames 1, 3, 5, 7, ... of the original multiframe.
When one 50G-rate service flow is divided into two 25G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, overhead block content is filled only into the overhead blocks at even overhead block positions, i.e. only at positions 0, 2, 4, 6, ..., 14 (even overhead blocks); overhead blocks not filled with content are discarded. The C bit field, RPF field, CR field, CA field and the low 7 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the 50G-rate service flow; the most significant bit of the PHY number field is filled with indication information for distinguishing the service flows. The remaining field content of the overhead blocks of the first 25G-rate service flow comes from the corresponding fields of the overhead blocks of the even-numbered frames of the 50G-rate service flow, and the remaining field content of the second 25G-rate service flow comes from the corresponding fields of the overhead blocks of the odd-numbered frames of the 50G service flow.
When one 100G-rate service flow is divided into four 25G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, overhead block content is filled only into the overhead blocks at positions that are integer multiples of 4, i.e. only at positions 0, 4, 8, 12, ..., 28 (every 4 overhead blocks); overhead blocks not filled with content are discarded. The C bit field, RPF field, CR field, CA field and the low 6 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the first-rate service flow; the top 2 bits of the PHY number field are filled with indication information for distinguishing the service flows. The remaining field content of the overhead blocks of the first 25G-rate service flow comes from the corresponding content of the overhead blocks of frames 4N of the 100G-rate service flow, where N is an integer greater than or equal to 0; the remaining field content of the overhead blocks of the second 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+1 of the 100G service flow; the remaining field content of the third 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+2 of the 100G service flow; and the remaining field content of the fourth 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+3 of the 100G-rate service flow. For example, if a 100G service flow is divided into four 25G service flows, the Client calendar of the first 25G service flow comes from the Client calendar of frames 0, 4, 8, 12, ... of the original multiframe; the Client calendar of the second 25G service flow comes from the Client calendar of frames 1, 5, 9, 13, ...; the Client calendar of the third comes from frames 2, 6, 10, 14, ...; and the Client calendar of the fourth comes from frames 3, 7, 11, 15, ... of the original multiframe.
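The frame-to-flow bookkeeping described in the cases above follows one pattern: frame j of sub-flow k is built from frame n*j+k of the original multiframe, and the top bits of the PHY number field are overwritten with k. The sketch below is illustrative only; it assumes each overhead frame is represented as a Python dict of the fields named in the text (the helper name and dict keys are assumptions) and shows only this bookkeeping, not the bit-exact overhead layout.

    def overhead_frames_for_subflow(orig_multiframe, n, k, phy_number_bits=8):
        """Build the overhead frames of sub-flow k (0 <= k < n) when one flow
        is split into n flows: frame j of sub-flow k is derived from frame
        n*j + k of the original multiframe, and the top bits of PHY number
        are replaced by k to distinguish the sub-flows."""
        low_bits = phy_number_bits - (n - 1).bit_length()    # 7 for n=2, 6 for n=4
        sub_frames = []
        for j in range(len(orig_multiframe) // n):
            src = dict(orig_multiframe[n * j + k])            # calendar, PHY map, etc.
            low = src["phy_number"] & ((1 << low_bits) - 1)   # keep low PHY number bits
            src["phy_number"] = (k << low_bits) | low         # tag the sub-flow
            sub_frames.append(src)
        return sub_frames

    # example: 32-frame 100G multiframe split into 2 x 16-frame 50G multiframes
    mf = [{"frame": i, "phy_number": 5} for i in range(32)]
    flow0 = overhead_frames_for_subflow(mf, 2, 0)
    flow1 = overhead_frames_for_subflow(mf, 2, 1)
    print([f["frame"] for f in flow0][:4])   # [0, 2, 4, 6] -> even source frames
    print([f["frame"] for f in flow1][:4])   # [1, 3, 5, 7] -> odd source frames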
As shown in Fig. 15, an embodiment of the present disclosure provides a service receiving method, including:
Step 1501: interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate;
where the sum of the rates of the multiple service flows equals the first rate;
Step 1502: filling overhead block content into the first-rate service flow;
Step 1503: recovering the client service from the first-rate service flow.
Here, one first-rate service flow is one sub calendar member; one or more sub calendar members restore the master calendar, and the client service is extracted and recovered from the master calendar.
In an embodiment, interleaving multiple service flows whose rates are lower than the first rate to form one first-rate service flow includes:
interleaving two 50G-rate service flows to form one 100G-rate service flow;
or interleaving four 25G-rate service flows to form one 100G-rate service flow;
or interleaving two 25G-rate service flows to form one 50G-rate service flow, and interleaving that 50G-rate service flow with another 50G-rate service flow to form one 100G-rate service flow.
The interleaving may be performed bit by bit or data block by data block. In an embodiment, interleaving multiple service flows whose rates are lower than the first rate to form one first-rate service flow includes:
aligning the multiframe boundaries of the service flows and then interleaving them block by block, with one data block as the unit, and dispersing the resulting overhead blocks so that adjacent overhead blocks are spaced 1023*20 data blocks apart. After interleaving, the overhead block positions are first retained, and the consecutive overhead blocks are then moved so as to spread them out evenly. For example, for 2 consecutive overhead blocks, one overhead block is moved back by 1023*20 data block positions; for 4 consecutive overhead blocks, one overhead block is moved back by 1023*20 data block positions, one by 2*1023*20, and one by 3*1023*20; after the moves, all overhead blocks are equally spaced 1023*20 data blocks apart.
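A sketch of this alignment, block interleave and overhead spreading, for illustration only: the flows are lists of ('OH', x)/('DATA', x) tuples already aligned at a multiframe boundary, spacing is the per-flow overhead interval (1023*20 data blocks for the rates discussed here), and the demo at the end uses a tiny spacing so the result can be read at a glance. The function name is hypothetical.

    def interleave_flows(flows, spacing=1023 * 20):
        """Interleave n aligned lower-rate flows into one higher-rate flow.
        Data blocks are merged block by block in round-robin order; the n
        co-located overhead positions of each cycle are spread out so that
        one overhead position appears every `spacing` data blocks."""
        n = len(flows)
        data = [[blk for kind, blk in f if kind == "DATA"] for f in flows]
        cycles = len(data[0]) // spacing           # whole overhead intervals per flow
        merged = []
        for c in range(cycles):
            # block-by-block round-robin interleave of this cycle's data
            chunk = []
            for i in range(spacing):
                for d in data:
                    chunk.append(("DATA", d[c * spacing + i]))
            # n co-located overhead positions, spread evenly: one every `spacing` blocks
            for i in range(n):
                merged.append(("OH_BLANK", None))
                merged.extend(chunk[i * spacing:(i + 1) * spacing])
        return merged

    # tiny demo with spacing=3 instead of 1023*20, two flows
    f0 = [("OH", None)] + [("DATA", f"a{i}") for i in range(3)]
    f1 = [("OH", None)] + [("DATA", f"b{i}") for i in range(3)]
    out = interleave_flows([f0, f1], spacing=3)
    print(out)   # OH, a0, b0, a1, OH, b1, a2, b2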
In an embodiment, filling overhead block content into the first-rate service flow includes at least one of the following:
When two 50G-rate service flows are interleaved to form one 100G-rate service flow, starting from the multiframe boundary, the overhead block content of the multiframe of the first 50G service flow is filled in sequence into the corresponding positions of the overhead blocks of the even-numbered frames of the 100G service flow multiframe, and the overhead block content of the multiframe of the second 50G service flow is filled in sequence into the corresponding positions of the overhead blocks of the odd-numbered frames of the 100G service flow multiframe; the most significant bit of the PHY number field is cleared to zero. For example, the overhead block content of the first 50G service flow multiframe is filled in sequence into the corresponding overhead block positions in frames 0, 2, 4, 6, 8, ..., 30 of the 100G service flow multiframe, and the overhead block content of the second 50G service flow multiframe is filled in sequence into the corresponding overhead block positions in frames 1, 3, 5, 7, 9, ..., 31 of the 100G service flow multiframe.
When four 25G-rate service flows are interleaved to form one 100G-rate service flow, starting from the multiframe boundary, the overhead block content of the multiframe of the first 25G service flow is filled into the corresponding overhead block positions in frames 4N of the 100G service flow multiframe, where N is an integer greater than or equal to 0, i.e. the 8 overhead blocks of each frame of the first 25G service flow multiframe are filled into the corresponding overhead block positions in frames 0, 4, 8, 12, 16, ..., 28 of the 100G service flow multiframe; the overhead block content of the second 25G service flow multiframe is filled into the corresponding overhead block positions in frames 4N+1, i.e. frames 1, 5, 9, 13, 17, ..., 29; the overhead block content of the third 25G service flow multiframe is filled into the corresponding overhead block positions in frames 4N+2, i.e. frames 2, 6, 10, 14, 18, ..., 30; and the overhead block content of the fourth 25G service flow multiframe is filled into the corresponding overhead block positions in frames 4N+3, i.e. frames 3, 7, 11, 15, 19, ..., 31; the top two bits of the PHY number field are cleared to zero.
When two 25G service flows are interleaved into one 50G service flow, the overhead block content of the multiframe of the first 25G service flow is filled in sequence into the corresponding overhead block positions in the even-numbered frames of the 50G service flow multiframe, i.e. the 8 overhead blocks of each frame of the first 25G service flow multiframe are filled into the corresponding overhead block positions in frames 0, 2, 4, 6, 8, ..., 14 of the 50G service flow multiframe; the overhead block content of the multiframe of the second 25G service flow is filled in sequence into the corresponding overhead block positions in the odd-numbered frames of the 50G service flow multiframe, i.e. into frames 1, 3, 5, 7, 9, ..., 15; the top two bits of the PHY number field are cleared to zero.
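The overhead filling in all three cases is the inverse of the splitting bookkeeping sketched earlier: frame n*j+k of the recombined multiframe is taken from frame j of sub-flow k, and the PHY number bits that tagged the sub-flows are cleared. The following minimal Python sketch is for illustration only; frames are again dicts and the helper name is an assumption.

    def merge_overhead_frames(sub_multiframes, phy_number_bits=8):
        """Rebuild the overhead frames of the recombined higher-rate multiframe
        from the multiframes of n lower-rate flows: frame n*j + k of the result
        comes from frame j of sub-flow k, and the top PHY number bits that were
        used to tag the sub-flows are cleared."""
        n = len(sub_multiframes)
        low_bits = phy_number_bits - (n - 1).bit_length()     # 7 for n=2, 6 for n=4
        frames_per_sub = len(sub_multiframes[0])
        merged = []
        for j in range(frames_per_sub):
            for k in range(n):
                frame = dict(sub_multiframes[k][j])
                frame["phy_number"] &= (1 << low_bits) - 1    # clear the flow tag bits
                merged.append(frame)
        return merged

    # example: two 16-frame 50G multiframes -> one 32-frame 100G multiframe
    sub0 = [{"src": ("A", j), "phy_number": 0x05} for j in range(16)]
    sub1 = [{"src": ("B", j), "phy_number": 0x85} for j in range(16)]
    mf = merge_overhead_frames([sub0, sub1])
    print(mf[0]["src"], mf[1]["src"], mf[2]["src"])   # ('A', 0) ('B', 0) ('A', 1)
    print(hex(mf[1]["phy_number"]))                   # 0x5 (tag bit cleared)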
Because the number of time slots and the multiframe structure of a 50G-rate PHY differ from those of 100G, PHYs of different rates cannot be bonded directly into one group, as shown in Fig. 16. If two 50G-rate PHY members are first processed into the structure of one 100G-rate PHY member, they can then form a group with other 100G-rate PHY members according to the existing standard content, realizing the bonding of 50G-rate and 100G-rate PHY members into a group, as shown in Fig. 17. By the same reasoning, when 25G PHYs are bonded with 100G-rate PHYs, four 25G PHYs are first processed into the structure of one 100G-rate PHY member and then grouped with other 100G-rate PHY members, as shown in Fig. 18; when 25G PHYs are bonded with 50G- and 100G-rate PHYs, two 25G PHYs are first processed into the structure of one 50G PHY member, which is then processed together with another 50G PHY member into the structure of one 100G-rate PHY member and grouped with other 100G-rate PHY members, as shown in Fig. 19.
The interleaving and de-interleaving of different rates are described in detail below by way of example.
De-interleaving one 100G service flow into two 50G service flows is shown in Fig. 20. In the 100G-rate FlexE service flow there is one overhead block every 1023*20 data blocks, every 8 overhead blocks form one frame, and every 32 frames form one multiframe. One multiframe contains 8*32 overhead blocks, denoted by the pair (frame number : overhead block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ..., 31:0, 31:1, 31:2, 31:3, 31:4, 31:5, 31:6, 31:7, 256 overhead blocks in total. The frame structure of 8 overhead blocks per frame is shown in Fig. 4, and the multiframe structure of 32 frames is shown in Fig. 5. Using the 66-bit block as the unit, the data blocks of the time slot portion are de-interleaved (divided according to the inverse of the interleaving process) into two groups of service flows. Only the content of the data portion of the time slots is de-interleaved; the FlexE overhead block portion keeps idle positions, as in the middle part of Fig. 20. The 100G FlexE service has 20 time slots, numbered 0 to 19. After de-interleaving, the first service flow retains the even time slots of the original service flow, 0, 2, 4, ..., 18, and the second service flow retains the odd time slots, 1, 3, 5, ..., 19. Overhead block content is filled into the two separated service flows as follows:
The first overhead block position of the original multiframe (overhead array 0:0) serves as the first overhead block position in the multiframes of the two new service flows.
Overhead blocks are filled only at the even block positions of the original overhead blocks; the odd block positions are not filled. For example 0:0, 0:2, 0:4, 0:6, 1:0, 1:2, 1:4, 1:6, ..., 31:0, 31:2, 31:4, 31:6. Each 50G service flow has 128 overhead blocks filled per multiframe, half the number of overhead blocks in one multiframe of the 100G service flow. The other overhead blocks, whose content is not filled in, are simply deleted.
In the overhead blocks of the 50G service flows, the C bit, RPF, CR and CA fields are copied directly from the corresponding content of the original overhead block; the low 7 bits (bits 0-6) of the PHY number field are copied directly from the low bits 0-6 of the PHY number field of the original overhead block. The high bit of the PHY number field indicates the service flow: for example, bit 7 (the most significant bit) of the PHY number field is set to 0 in the first 50G service flow and to 1 in the second 50G service flow, to distinguish the two 50G service flows. Client calendar A and Client calendar B are in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks: the overhead fields Client calendar A and Client calendar B of the first 50G service flow come from the corresponding content of the overhead blocks of the even-numbered frames of the original 100G service flow, such as frame 0, frame 2, frame 4, ..., frame 30; those of the second 50G service flow come from the corresponding content of the odd-numbered frames of the original 100G service flow, such as frame 1, frame 3, frame 5, ..., frame 31. The PHY map, Manage channel-section and Manage channel-shim to shim field content is handled similarly to the Client calendar: for the first 50G service flow these fields come from the overhead blocks of the even-numbered frames of the original 100G service flow, such as frame 0, frame 2, frame 4, ..., frame 30; for the second 50G service flow they come from the odd-numbered frames of the original 100G service flow, such as frame 1, frame 3, frame 5, ..., frame 31.
After the FlexE overhead blocks are filled, 50G-rate FlexE services are formed (the overhead block positions whose content is not filled are simply deleted) and transmitted on two 50G-rate lines.
The process of interleaving two 50G service flows into one 100G service flow is as follows, as shown in Fig. 21. In a 50G-rate FlexE service flow there is one overhead block every 1023*2*10 data blocks. The two 50G-rate FlexE service flows are aligned using the FlexE overhead block position (multiframe boundary) as the reference, and are then interleaved with the 66-bit block as the unit to form one 100G service flow. After interleaving, the data blocks are the interleaved result of the data blocks of the two 50G service flows. After interleaving, only the overhead block positions are retained and the overhead block content is not yet filled; the interleaved result is shown in Fig. 21. The interleaving yields a 100G-rate service flow in which two overhead block positions appear every 2*1023*2*10 data blocks. One of the two consecutive overhead blocks is moved back by 1023*2*10 data blocks, so that one FlexE overhead block position appears every 1023*2*10 data blocks; once the overhead block content is filled in, the result is exactly the content defined by the FlexE V1.0 standard. The rules for filling content into the overhead block positions in the 100G service flow are as follows:
Taking the multiframe as the unit, the 8 overhead blocks of each frame of the first 50G service flow are filled into the 8 overhead block positions of the even-numbered frames of the 100G service flow, and the 8 overhead blocks of each frame of the second 50G service flow are filled into the 8 overhead block positions of the odd-numbered frames of the 100G service flow. The specific process is as follows: the 8 overhead blocks of frame 0 of the first 50G service multiframe are filled into the 8 overhead block positions of frame 0 of the 100G service flow (for example: A 0:0 is filled into 0:0, A 0:1 into 0:1, A 0:3 into 0:3, ..., A 0:7 into 0:7), and then the 8 overhead blocks of frame 0 of the second 50G service flow multiframe are filled into the 8 overhead block positions of frame 1 of the 100G service flow (for example: B 0:0 is filled into 1:0, B 0:1 into 1:1, B 0:3 into 1:3, ..., B 0:7 into 1:7). After this round of filling, the second round begins: the 8 overhead blocks of frame 1 of the first 50G service flow multiframe are filled into the 8 overhead block positions of frame 2 of the 100G service flow multiframe, and the 8 overhead blocks of frame 1 of the second 50G service flow multiframe are filled into the 8 overhead block positions of frame 3 of the 100G service flow multiframe, and so on.
For a 50G FlexE frame, the maximum number of group members is 128, so only the low 7 bits of the PHY number field are valid, and the most significant bit may be used to indicate the service flow. When the overhead blocks of the 50G FlexE frames are filled into the 100G service flow, the most significant bit of the PHY number field is set to "0", clearing any other use.
Once the overhead blocks are filled, a 100G-member FlexE protocol information flow is formed, which can be grouped with other 100G-rate members using the content defined by the FlexE standard to recover the client service.
De-interleaving one 100G service flow into four 25G service flows is shown in Fig. 22. The structure of the 100G-rate FlexE service is shown in the upper part of Fig. 22: there is one overhead block every 1023*20 data blocks, every 8 overhead blocks form one overhead frame, and every 32 frames form one multiframe. One multiframe contains 8*32 overhead blocks, denoted by (frame number : block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ..., 31:0, 31:1, 31:2, 31:3, 31:4, 31:5, 31:6, 31:7, 256 overhead blocks in total. The de-interleaving process uses the 66-bit block as the unit and de-interleaves the data blocks of the time slot portion (divided according to the inverse of the interleaving process) into four groups of service flows. Only the time slot portion is de-interleaved; the FlexE overhead block portion keeps the overhead block positions, as in the middle part of Fig. 22. The 100G FlexE service has 20 time slots, numbered 0 to 19. After de-interleaving, the first service flow retains time slots 0, 4, 8, 12, 16 of the original service flow; the second retains time slots 1, 5, 9, 13, 17; the third retains time slots 2, 6, 10, 14, 18; and the fourth retains time slots 3, 7, 11, 15, 19. Overhead block content is filled into the four separated service flows as follows:
The first overhead block position of the original multiframe (overhead array 0:0) is the first overhead block position of the multiframes of the new service flows.
Overhead content is filled every 4 overhead block positions of the original overhead blocks, i.e. at positions 0, 4, 8, ..., 28. Each 25G service flow multiframe consists of 8 frames, and a total of 8*8 overhead blocks are filled, a quarter of the number of overhead blocks in the 100G service multiframe. The other overhead blocks, whose content is not filled in, are simply deleted.
When the overhead block content is filled, the C bit, RPF, CR and CA fields are copied directly from the corresponding content of the original overhead block; the low bits 0-5 of the PHY number field are copied directly from the low bits 0-5 of the PHY number field of the original overhead block. The high bits of the PHY number field indicate the service flow: for example, the top 2 bits of the PHY number field are set to "00" in the first 25G service flow, "01" in the second, "10" in the third and "11" in the fourth; the top two bits distinguish the four 25G service flows. Client calendar A and Client calendar B are in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks: the overhead fields Client calendar A and Client calendar B of the first 25G service flow come from the corresponding content of the overhead blocks of frames 0, 4, 8, 12, 16, 20, 24, 28 of the original 100G service flow; those of the second come from frames 1, 5, 9, 13, 17, 21, 25, 29; those of the third come from frames 2, 6, 10, 14, 18, 22, 26, 30; and those of the fourth come from frames 3, 7, 11, 15, 19, 23, 27, 31. The PHY map, Manage channel-section and Manage channel-shim to shim field content is handled similarly to the Client calendar: for the first 25G service flow these fields come from the overhead blocks of frames 0, 4, 8, 12, 16, 20, 24, 28 of the original 100G service flow; for the second, from frames 1, 5, 9, 13, 17, 21, 25, 29; for the third, from frames 2, 6, 10, 14, 18, 22, 26, 30; and for the fourth, from frames 3, 7, 11, 15, 19, 23, 27, 31.
After the FlexE overhead block content is filled, 25G-rate FlexE services are formed (the overhead block positions whose content is not filled are simply deleted) and transmitted on four 25G-rate lines.
Interleaving four 25G service flows into one 100G service flow is shown in Fig. 23. In a 25G-rate FlexE service there is one overhead block every 1023*4*5 data blocks. The four 25G-rate FlexE services are aligned using the FlexE overhead block position (multiframe boundary) as the reference and are then interleaved with the 66-bit block as the unit to form one 100G service flow. After interleaving, the data block portion is the interleaved result of the data blocks of the four 25G services. After interleaving, only the overhead block positions are retained in the overhead block portion and the content is not yet filled, as in the interleaved result in Fig. 23. The interleaving yields a 100G-rate service flow in which four overhead block positions appear every 4*1023*4*5 data blocks. Three of the four consecutive overhead blocks are moved backwards, equally spaced, by successive multiples of 1023*4*5 data block positions, so that one FlexE overhead block position appears every 1023*4*5 data blocks; once the overhead block content is filled in, the result is exactly the content defined by the FlexE V1.0 standard. The method of filling content into the overhead block positions is as follows:
The filling takes the multiframe as the unit: the 8 overhead blocks of each frame of one multiframe of the first 25G service flow are filled into the 8 overhead block positions of frames 0, 4, 8, 12, 16, 20, 24, 28 of one multiframe of the 100G service flow; the 8 overhead blocks of each frame of the second 25G service flow multiframe are filled into the 8 overhead block positions of frames 1, 5, 9, 13, 17, 21, 25, 29 of the 100G service flow multiframe; the 8 overhead blocks of each frame of the third 25G service flow multiframe are filled into the 8 overhead block positions of frames 2, 6, 10, 14, 18, 22, 26, 30; and the 8 overhead blocks of each frame of the fourth 25G service flow multiframe are filled into the 8 overhead block positions of frames 3, 7, 11, 15, 19, 23, 27, 31. For example: the 8 overhead blocks of frame 0 of the first 25G service flow multiframe are filled into the 8 overhead block positions of frame 0 of the 100G service flow (e.g. A 0:0 is filled into 0:0, A 0:1 into 0:1, A 0:3 into 0:3, ..., A 0:7 into 0:7); then the 8 overhead blocks of frame 0 of the second 25G service flow multiframe are filled into the 8 overhead block positions of frame 1 of the 100G service flow (e.g. B 0:0 into 1:0, B 0:1 into 1:1, B 0:3 into 1:3, ..., B 0:7 into 1:7); the 8 overhead blocks of frame 0 of the third 25G service flow multiframe are filled into the 8 overhead block positions of frame 2 of the 100G service flow multiframe (e.g. C 0:0 into 2:0, C 0:1 into 2:1, C 0:2 into 2:2, ..., C 0:7 into 2:7); and the 8 overhead blocks of frame 0 of the fourth 25G service flow multiframe are filled into the 8 overhead block positions of frame 3 of the 100G service flow multiframe (e.g. D 0:0 into 3:0, D 0:1 into 3:1, D 0:2 into 3:2, ..., D 0:7 into 3:7). After one round of filling, the second round begins: the 8 overhead blocks of frame 1 of the first 25G service flow multiframe are filled into the 8 overhead block positions of frame 4 of the 100G service flow multiframe, the 8 overhead blocks of frame 1 of the second 25G service flow multiframe are filled into the 8 overhead block positions of frame 5 of the 100G service flow multiframe, and so on.
For a 25G FlexE frame, the maximum number of group members is 64, so only the low 6 bits of the PHY number field are valid, and the top two bits may be used to indicate the service flow. When the overhead blocks of the 25G FlexE frames are filled into the 100G service flow, the top two bits of the PHY number field are set to "0", clearing any other use.
Once the overhead blocks are filled, a 100G-member FlexE protocol information flow is formed, which can be grouped with other 100G-rate members using the content defined by the FlexE standard for service recovery and processing.
The process of de-interleaving one 50G service into two 25G services, and of interleaving two 25G services into one 50G service, is similar to the process of de-interleaving one 100G service into two 50G services and of interleaving two 50G services into one 100G service, except that the multiframe of 32 frames is halved to a multiframe of 16 frames, and the multiframe of 16 frames is halved to a multiframe of 8 frames.
When one 50G service flow is de-interleaved into two 25G flows, the 50G-rate FlexE service flow has one overhead block every 1023*2*10 data blocks, every 8 overhead blocks form one frame, and every 16 frames form one multiframe. One multiframe contains 8*16 overhead blocks, denoted by (frame number : overhead block number within the frame): 0:0, 0:1, 0:2, 0:3, 0:4, 0:5, 0:6, 0:7, 1:0, 1:1, 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, ..., 15:0, 15:1, 15:2, 15:3, 15:4, 15:5, 15:6, 15:7, 128 overhead blocks in total. Using the 66-bit block as the unit, the data blocks of the time slot portion are de-interleaved (divided according to the inverse of the interleaving process) into two groups of service flows. Only the content of the data portion of the time slots is de-interleaved; the FlexE overhead block portion keeps idle positions. The 50G FlexE service has 10 time slots, numbered 0 to 9; after de-interleaving, the first service flow retains the even time slots of the original service flow, 0, 2, 4, 6, 8, and the second service flow retains the odd time slots, 1, 3, 5, 7, 9. Overhead block content is filled into the two separated service flows as follows:
The first overhead block position of the original multiframe (overhead array 0:0) serves as the first overhead block position in the multiframes of the two new service flows.
Overhead blocks are filled only at the even block positions of the original overhead blocks; the odd block positions are not filled. For example 0:0, 0:2, 0:4, 0:6, 1:0, 1:2, 1:4, 1:6, ..., 15:0, 15:2, 15:4, 15:6. Each 25G service flow has 64 overhead blocks filled per multiframe, half the number of overhead blocks in one multiframe of the 50G service flow. The other overhead blocks, whose content is not filled in, are simply deleted.
In the overhead blocks of the 25G service flows, the C bit, RPF, CR and CA fields are copied directly from the corresponding content of the original overhead block; the low 6 bits (bits 0-5) of the PHY number field are copied directly from the low bits 0-5 of the PHY number field of the original overhead block. The top 2 bits of the PHY number field indicate the service flow. Client calendar A and Client calendar B are in the third block of a frame, and this part comes from the corresponding content of the original overhead blocks: the overhead fields Client calendar A and Client calendar B of the first 25G service flow come from the corresponding content of the overhead blocks of the even-numbered frames of the original 50G service flow, such as frame 0, frame 2, frame 4, ..., frame 14; those of the second 25G service flow come from the corresponding content of the odd-numbered frames of the original 50G service flow, such as frame 1, frame 3, frame 5, ..., frame 15. The PHY map, Manage channel-section and Manage channel-shim to shim field content is handled similarly to the Client calendar: for the first 25G service flow these fields come from the overhead blocks of the even-numbered frames of the original 50G service flow, such as frame 0, frame 2, frame 4, ..., frame 14; for the second 25G service flow they come from the odd-numbered frames of the original 50G service flow, such as frame 1, frame 3, frame 5, ..., frame 15.
After the FlexE overhead blocks are filled, 25G-rate FlexE services are formed (the overhead block positions whose content is not filled are simply deleted) and transmitted on two 25G-rate lines.
The process of interleaving two 25G service flows into one 50G service flow is as follows. In a 25G-rate FlexE service flow there is one overhead block every 1023*4*5 data blocks. The two 25G-rate FlexE service flows are aligned using the FlexE overhead block position (multiframe boundary) as the reference and are then interleaved with the 66-bit block as the unit to form one 50G service flow. After interleaving, the data blocks are the interleaved result of the data blocks of the two 25G service flows. After interleaving, only the overhead block positions are retained and the overhead block content is not yet filled. The interleaving yields a 50G-rate service flow in which two overhead block positions appear every 2*1023*2*10 data blocks. One of the two consecutive overhead blocks is moved back by 1023*2*10 data blocks, so that one FlexE overhead block position appears every 1023*2*10 data blocks. The rules for filling content into the overhead block positions in the 50G service flow are as follows:
Taking the multiframe as the unit, the 8 overhead blocks of each frame of the first 25G service flow are filled into the 8 overhead block positions of the even-numbered frames of the 50G service flow, and the 8 overhead blocks of each frame of the second 25G service flow are filled into the 8 overhead block positions of the odd-numbered frames of the 50G service flow. The specific process is as follows: the 8 overhead blocks of frame 0 of the first 25G service multiframe are filled into the 8 overhead block positions of frame 0 of the 50G service flow (for example: A 0:0 is filled into 0:0, A 0:1 into 0:1, A 0:3 into 0:3, ..., A 0:7 into 0:7), and then the 8 overhead blocks of frame 0 of the second 25G service flow multiframe are filled into the 8 overhead block positions of frame 1 of the 50G service flow (for example: B 0:0 into 1:0, B 0:1 into 1:1, B 0:3 into 1:3, ..., B 0:7 into 1:7). After this round of filling, the second round begins: the 8 overhead blocks of frame 1 of the first 25G service flow multiframe are filled into the 8 overhead block positions of frame 2 of the 50G service flow multiframe, and the 8 overhead blocks of frame 1 of the second 25G service flow multiframe are filled into the 8 overhead block positions of frame 3 of the 50G service flow multiframe, and so on.
For a 25G FlexE frame, the maximum number of group members is 64, so only the low 6 bits of the PHY number field are valid, and the top 2 bits may be used to indicate the service flow. When the overhead blocks of the 25G FlexE frames are filled into the 50G service flow, the top 2 bits of the PHY number field are set to "0", clearing any other use.
Once the overhead blocks are filled, a 50G-member FlexE protocol information flow is formed, which can be grouped with other 50G-rate members to recover the client service.
The cases above are several specific implementations of the present disclosure; for client services of different rates and PHYs of different rates, various combinations and specific implementations are possible. It should be noted that, in the above embodiments, interleaving and de-interleaving are performed with one data block as the unit, but the present disclosure is not limited to this; in other embodiments, interleaving and de-interleaving may also be performed in units of multiple data blocks or of other numbers of bits.
As shown in Fig. 24, an embodiment of the present disclosure provides a service sending apparatus 2400, including a memory 2410 and a processor 2420, where the memory 2410 stores a program which, when read and executed by the processor 2420, performs the following operations:
mapping a client service onto one or more service flows of a first rate;
dividing at least one first-rate service flow into multiple service flows of other rates, and filling overhead blocks into the other-rate service flows; and
sending the service flows through channels of the corresponding rates.
In other embodiments, the program, when read and executed by the processor, further performs the service sending method described in any of the above embodiments.
An embodiment of the present disclosure provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the service sending method according to any of the above embodiments.
As shown in Fig. 25, an embodiment of the present disclosure provides a service receiving apparatus 2500, including a memory 2510 and a processor 2520, where the memory 2510 stores a program which, when read and executed by the processor 2520, performs the following operations:
interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate;
filling overhead block content into the first-rate service flow; and
recovering the client service from the first-rate service flow.
In other embodiments, the program, when read and executed by the processor, further performs the service receiving method described in any of the above embodiments.
An embodiment of the present disclosure provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the service receiving method according to any of the above embodiments.
The computer readable storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Although the embodiments of the present disclosure are described above, the content described is only an implementation adopted to facilitate understanding of the present disclosure and is not intended to limit it. Any person skilled in the art to which the present disclosure belongs may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present disclosure, but the patent protection scope of the present disclosure shall still be subject to the scope defined by the appended claims.
Industrial Applicability
The present disclosure relates to the field of communication technology. The technical solution of the present disclosure maps a client service onto one or more service flows of a first rate, divides at least one first-rate service flow into multiple service flows of other rates, fills overhead blocks into the other-rate service flows, and sends the service flows through channels of the corresponding rates. The solution provided by the present disclosure implements service transmission between members of different rates.

Claims (12)

  1. A service sending method, comprising:
    mapping a client service onto one or more service flows of a first rate;
    dividing at least one first-rate service flow into multiple service flows of other rates, and filling overhead blocks into the other-rate service flows; and
    sending the service flows through channels of the corresponding rates.
  2. The service sending method according to claim 1, wherein dividing at least one first-rate service flow into multiple service flows of other rates comprises:
    dividing at least one 100G-rate service flow into two 50G-rate service flows;
    or dividing at least one 100G-rate service flow into four 25G-rate service flows;
    or dividing at least one 100G-rate service flow into two 50G-rate service flows, and dividing one of the 50G-rate service flows into two 25G-rate service flows;
    or dividing at least one 50G-rate service flow into two 25G-rate service flows.
  3. The service sending method according to claim 1 or 2, wherein dividing at least one first-rate service flow into multiple service flows of other rates comprises:
    dividing the data blocks of the first-rate service flow into multiple second-rate service flows in interleaved order with one data block as the unit, and reserving blank overhead blocks in the second-rate service flows at the positions corresponding to the overhead block positions of the first-rate service flow;
    or dividing the data blocks of the first-rate service flow into multiple second-rate service flows in interleaved order with one data block as the unit, and reserving blank overhead blocks in the second-rate service flows at the positions corresponding to the overhead block positions of the first-rate service flow; and dividing the data blocks of one second-rate service flow into multiple third-rate service flows in interleaved order with one data block as the unit, and reserving blank overhead blocks in the third-rate service flows at the positions corresponding to the overhead block positions of the second-rate service flow.
  4. The service sending method according to claim 3, wherein filling overhead blocks into the other-rate service flows comprises at least one of the following:
    when one 100G-rate service flow is divided into two 50G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, filling overhead block content only into the overhead blocks at even overhead block positions, and discarding the overhead blocks not filled with content; wherein the C bit field, RPF field, CR field, CA field and the low 7 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the 100G-rate service flow; the most significant bit of the PHY number field is filled with indication information for distinguishing the service flows; the remaining field content of the overhead blocks of the first 50G-rate service flow comes from the corresponding fields of the overhead blocks of the even-numbered frames of the 100G-rate service flow, and the remaining field content of the second 50G-rate service flow comes from the corresponding fields of the overhead blocks of the odd-numbered frames of the 100G-rate service flow;
    when one 50G-rate service flow is divided into two 25G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, filling overhead block content only into the overhead blocks at even overhead block positions, and discarding the overhead blocks not filled with content; wherein the C bit field, RPF field, CR field, CA field and the low 7 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the 50G-rate service flow; the most significant bit of the PHY number field is filled with indication information for distinguishing the service flows; the remaining field content of the overhead blocks of the first 25G-rate service flow comes from the corresponding fields of the overhead blocks of the even-numbered frames of the 50G-rate service flow, and the remaining field content of the second 25G-rate service flow comes from the corresponding fields of the overhead blocks of the odd-numbered frames of the 50G service flow; and
    when one 100G-rate service flow is divided into four 25G-rate service flows, starting from the first, i.e. the zeroth, overhead block position of the multiframe, filling overhead block content only into the overhead blocks at positions that are integer multiples of 4, and discarding the overhead blocks not filled with content; wherein the C bit field, RPF field, CR field, CA field and the low 6 bits of the PHY number field filled into an overhead block come from the corresponding content of the overhead block at the corresponding overhead block position in the first-rate service flow; the top 2 bits of the PHY number field are filled with indication information for distinguishing the service flows; the remaining field content of the overhead blocks of the first 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N of the 100G-rate service flow, where N is an integer greater than or equal to 0; the remaining field content of the overhead blocks of the second 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+1 of the 100G service flow; the remaining field content of the overhead blocks of the third 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+2 of the 100G service flow; and the remaining field content of the overhead blocks of the fourth 25G-rate service flow comes from the corresponding fields of the overhead blocks of frames 4N+3 of the 100G-rate service flow.
  5. A service sending apparatus, comprising a memory and a processor, wherein the memory stores a program which, when read and executed by the processor, implements the service sending method according to any one of claims 1 to 4.
  6. A computer storage medium storing instructions which, when executed by a processor, implement the service sending method according to any one of claims 1 to 4.
  7. A service receiving method, comprising:
    interleaving multiple service flows whose rates are lower than a first rate to form one service flow of the first rate;
    filling overhead block content into the first-rate service flow; and
    recovering a client service from the first-rate service flow.
  8. The service receiving method according to claim 7, wherein interleaving multiple service flows whose rates are lower than the first rate to form one first-rate service flow comprises:
    interleaving two 50G-rate service flows to form one 100G-rate service flow;
    or interleaving four 25G-rate service flows to form one 100G-rate service flow;
    or interleaving two 25G-rate service flows to form one 50G-rate service flow, and interleaving the 50G-rate service flow with another 50G-rate service flow to form one 100G-rate service flow.
  9. The service receiving method according to claim 7 or 8, wherein interleaving multiple service flows whose rates are lower than the first rate to form one first-rate service flow comprises:
    aligning the multiframe boundaries of the service flows to be interleaved and then interleaving them block by block with one data block as the unit, and dispersing the resulting overhead blocks so that adjacent overhead blocks are spaced 1023*20 data blocks apart.
  10. The service receiving method according to claim 9, wherein filling overhead block content into the first-rate service flow comprises at least one of the following:
    when two 50G-rate service flows are interleaved to form one 100G-rate service flow, starting from the multiframe boundary, filling the overhead block content of the multiframe of the first 50G service flow in sequence into the corresponding positions of the overhead blocks of the even-numbered frames of the 100G service flow multiframe, filling the overhead block content of the multiframe of the second 50G service flow in sequence into the corresponding positions of the overhead blocks of the odd-numbered frames of the 100G service flow multiframe, and clearing the most significant bit of the PHY number field to zero;
    when four 25G-rate service flows are interleaved to form one 100G-rate service flow, starting from the multiframe boundary, filling the overhead block content of the multiframe of the first 25G service flow in sequence into the corresponding overhead block positions in frames 4N of the 100G service flow multiframe, where N is an integer greater than or equal to 0; filling the overhead block content of the second 25G service flow multiframe in sequence into the corresponding overhead block positions in frames 4N+1 of the 100G service flow multiframe; filling the overhead block content of the third 25G service flow multiframe in sequence into the corresponding overhead block positions in frames 4N+2 of the 100G service flow multiframe; filling the overhead block content of the fourth 25G service flow multiframe in sequence into the corresponding overhead block positions in frames 4N+3 of the 100G service flow multiframe; and clearing the top two bits of the PHY number field to zero; and
    when two 25G service flows are interleaved into one 50G service flow, filling the overhead block content of the multiframe of the first 25G service flow in sequence into the corresponding overhead block positions in the even-numbered frames of the 50G service flow multiframe, filling the overhead block content of the multiframe of the second 25G service flow in sequence into the corresponding overhead block positions in the odd-numbered frames of the 50G service flow multiframe, and clearing the top two bits of the PHY number field to zero.
  11. A service receiving apparatus, comprising a memory and a processor, wherein the memory stores a program which, when read and executed by the processor, implements the service receiving method according to any one of claims 7 to 10.
  12. A computer storage medium storing instructions which, when executed by a processor, implement the service receiving method according to any one of claims 7 to 10.
PCT/CN2019/075393 2018-03-01 2019-02-18 一种业务发送方法及装置、业务接收方法及装置 WO2019165908A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/967,791 US20210399992A1 (en) 2018-03-01 2019-02-18 Service transmitting method and device, and service receiving method and device
EP19761658.4A EP3737012A4 (en) 2018-03-01 2019-02-18 SERVICE TRANSFER METHOD AND DEVICE AND SERVICE RECEIVING METHOD AND DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810170556.3 2018-03-01
CN201810170556.3A CN110224946B (zh) 2018-03-01 2018-03-01 一种业务发送方法及装置、业务接收方法及装置

Publications (1)

Publication Number Publication Date
WO2019165908A1 true WO2019165908A1 (zh) 2019-09-06

Family

ID=67805958

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075393 WO2019165908A1 (zh) 2018-03-01 2019-02-18 一种业务发送方法及装置、业务接收方法及装置

Country Status (4)

Country Link
US (1) US20210399992A1 (zh)
EP (1) EP3737012A4 (zh)
CN (1) CN110224946B (zh)
WO (1) WO2019165908A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118197B (zh) * 2019-06-19 2021-07-09 深圳市中兴微电子技术有限公司 一种开销监控方法和装置、计算机可读存储介质
CN112311510B (zh) * 2019-07-26 2024-04-09 华为技术有限公司 业务数据传输的方法和通信装置


Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100596043C (zh) * 2004-08-26 2010-03-24 华为技术有限公司 实现低速信号在光传输网络中透明传送的方法和装置
CN100589365C (zh) * 2007-09-14 2010-02-10 中兴通讯股份有限公司 一种光传输网中光净荷单元的时隙划分与开销处理的方法
JP5835059B2 (ja) * 2012-03-29 2015-12-24 富士通株式会社 データ伝送装置及びデータ伝送方法
WO2013185327A1 (zh) * 2012-06-14 2013-12-19 华为技术有限公司 传送、接收客户信号的方法和装置
US9590756B2 (en) * 2013-09-16 2017-03-07 Applied Micro Circuits Corporation Mapping a plurality of signals to generate a combined signal comprising a higher data rate than a data rate associated with the plurality of signals
JP6412158B2 (ja) * 2014-11-28 2018-10-24 日本電信電話株式会社 フレーマ、及びフレーミング方法
US9838290B2 (en) * 2015-06-30 2017-12-05 Ciena Corporation Flexible ethernet operations, administration, and maintenance systems and methods
CN110719143A (zh) * 2015-07-30 2020-01-21 华为技术有限公司 用于数据传输的方法、发送机和接收机
CN106559141B (zh) * 2015-09-25 2020-01-10 华为技术有限公司 一种信号发送、接收方法、装置及系统
CN107105355B (zh) * 2016-02-23 2020-05-05 中兴通讯股份有限公司 一种交换方法及交换系统
CN107566074B (zh) * 2016-06-30 2019-06-11 华为技术有限公司 光传送网中传送客户信号的方法及传送设备
CN106911426B (zh) * 2017-02-16 2020-07-28 华为技术有限公司 一种灵活以太网中传输数据的方法及设备
CN109802742B (zh) * 2017-11-16 2020-05-19 华为技术有限公司 一种传输数据的方法、设备及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102439995A (zh) * 2011-08-24 2012-05-02 华为技术有限公司 一种传送超高速以太网业务的方法和装置
CN102820951A (zh) * 2012-07-30 2012-12-12 华为技术有限公司 光传送网中传送、接收客户信号的方法和装置
US20150104186A1 (en) * 2013-10-15 2015-04-16 Nec Laboratories America, Inc. FLEXIBLE 400G AND 1 Tb/s TRANSMISSION OVER TRANSOCEANIC DISTANCE
WO2017070851A1 (en) * 2015-10-27 2017-05-04 Zte Corporation Channelization for flexible ethernet
CN106803814A (zh) * 2015-11-26 2017-06-06 中兴通讯股份有限公司 一种灵活以太网路径的建立方法、装置及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3737012A4 *

Also Published As

Publication number Publication date
EP3737012A4 (en) 2021-10-13
CN110224946B (zh) 2022-05-27
EP3737012A1 (en) 2020-11-11
CN110224946A (zh) 2019-09-10
US20210399992A1 (en) 2021-12-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19761658

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019761658

Country of ref document: EP

Effective date: 20200805

NENP Non-entry into the national phase

Ref country code: DE