WO2019128664A1 - 一种数据传输方法、通信设备及存储介质 - Google Patents

一种数据传输方法、通信设备及存储介质 Download PDF

Info

Publication number
WO2019128664A1
WO2019128664A1 PCT/CN2018/119412 CN2018119412W WO2019128664A1 WO 2019128664 A1 WO2019128664 A1 WO 2019128664A1 CN 2018119412 W CN2018119412 W CN 2018119412W WO 2019128664 A1 WO2019128664 A1 WO 2019128664A1
Authority
WO
WIPO (PCT)
Prior art keywords
code block
stream
code
data
block stream
Prior art date
Application number
PCT/CN2018/119412
Other languages
English (en)
French (fr)
Inventor
钟其文
徐小飞
张小俊
牛乐宏
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to KR1020207021706A priority Critical patent/KR102331530B1/ko
Priority to JP2020536670A priority patent/JP7026802B2/ja
Priority to EP18893906.0A priority patent/EP3726758A4/en
Publication of WO2019128664A1 publication Critical patent/WO2019128664A1/zh
Priority to US16/913,691 priority patent/US11316545B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605Fixed allocated frame structures
    • H04J3/1652Optical Transport Network [OTN]
    • H04J3/1658Optical Transport Network [OTN] carrying packets or ATM cells
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/02Transmitters
    • H04B1/04Circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605Fixed allocated frame structures
    • H04J3/1652Optical Transport Network [OTN]
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J14/00Optical multiplex systems
    • H04J14/02Wavelength-division multiplex systems
    • H04J14/0227Operation, administration, maintenance or provisioning [OAMP] of WDM networks, e.g. media access, routing or wavelength allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J14/00Optical multiplex systems
    • H04J14/02Wavelength-division multiplex systems
    • H04J14/0227Operation, administration, maintenance or provisioning [OAMP] of WDM networks, e.g. media access, routing or wavelength allocation
    • H04J14/0254Optical medium access
    • H04J14/0272Transmission of OAMP information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605Fixed allocated frame structures
    • H04J3/1652Optical Transport Network [OTN]
    • H04J3/1664Optical Transport Network [OTN] carrying hybrid payloads, e.g. different types of packets or carrying frames and packets in the paylaod
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0006Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format
    • H04L1/0007Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format by modifying the frame length
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0073Services, e.g. multimedia, GOS, QOS
    • H04J2203/0082Interaction of SDH with non-ATM protocols
    • H04J2203/0085Support of Ethernet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0089Multiplexing, e.g. coding, scrambling, SONET
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04Protocols for data compression, e.g. ROHC
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • H04Q2011/0086Network resource allocation, dimensioning or optimisation

Definitions

  • the embodiments of the present invention relate to the field of communications, and in particular, to a data transmission method, a communication device, and a storage medium.
  • FlexE Flexible Ethernet
  • MAC medium access control
  • FlexE can support the following functions: Binding, multiple Ethernet ports Bind as a link group to support medium access control (MAC) services at a rate greater than a single Ethernet port; sub-rates, by assigning time slots to support traffic, the rate is less than the link group bandwidth or less than a single Ethernet port Bandwidth MAC service; channelization, which supports simultaneous transmission of multiple MAC services in a link group by allocating time slots for services, for example, supporting simultaneous transmission of one 150G and two 25G MAC services in a 2x 100GE link group.
  • MAC medium access control
  • FlexE divides time slots by Time Division Multiplexing (TDM) to achieve hard isolation of transmission pipeline bandwidth.
  • a service data stream can be allocated to one or more time slots to achieve matching of various rate services.
  • a FlexE group (also known as FlexE Group in English) can contain one or more physical link interfaces (English can be written as PHY).
  • FIG. 1 exemplarily shows a schematic diagram of a communication system based on a flexible Ethernet protocol. As shown in FIG. 1, the FlexE Group includes four PHYs.
  • a flexible Ethernet protocol client (FlexE Client) represents a customer data stream that is transmitted over a specified time slot (one time slot or multiple time slots) on the FlexE Group.
  • One FlexE Group can host multiple FlexE Clients, and one FlexE Client corresponds to one user.
  • Service data flow typically referred to as Medium Access Control (MAC) Client
  • MAC Medium Access Control
  • FlexE Shim flexible Ethernet protocol functional layer
  • Ethernet layer is a new generation of switching networking technology with deterministic ultra-low latency characteristics.
  • Bit Block sequences such as unscrambled 64B/66B Bit Block sequences, or equivalent 8/10b Bit Block sequences, Ethernet media.
  • FIG. 2 exemplarily shows a schematic diagram of an X-E communication system architecture.
  • the communication system may include two types of communication devices, such as communication device one 1011 and communication device two 1012 in FIG.
  • the communication device 1011 can also be described as a communication device at the edge of the carrier network (hereinafter referred to as the network).
  • the English can be called a Provider Edge node, which can be simply referred to as a PE node.
  • the communication device 21012 may also be described as a communication device in a carrier network (hereinafter referred to as a network).
  • the English may be referred to as a Provider node, and may be simply referred to as a P node.
  • One side of the communication device 1011 may be connected to the user equipment or may be connected to the customer network device.
  • the interface connected to the user equipment or the client network device may be referred to as a user network interface (UNI), and may also be described as an interface between the network and the user.
  • the other side of the communication device 1011 is connected to the communication device 21012.
  • the other side of the communication device 1011 and the communication device 210 are connected through an inter-network interface 1112 (Network to Network Interface, NNI). .
  • the inter-network interface 1112 can also be described as an interface between networks or communication devices within the network.
  • the communication device 21012 can be connected to other communication devices (such as other communication devices 2 or communication devices 1). Only one communication device 2 is schematically shown in the figure. Those skilled in the art can know that in two One or more connected communication devices may be included between one communication device.
  • the adapter can be configured on the interface side of the communication device (English can be called adaptor).
  • the UNI side adapter (English can be called U-adaptor) 1113 configured on the UNI1111 side, and the adapter configured on the NNI1112 side. Called N-adaptor) 1114.
  • the X-E switching module 1115 (which may be referred to as an X-ESwitch in English) may be configured in the first communication device and the second communication device.
  • a schematic diagram of an end-to-end path 1116 is shown by way of example in FIG.
  • X-E is currently based on the end-to-end networking of the FlexE interface and is a flat, non-hierarchical networking exchange.
  • OIF FlexE currently defines 5Gbps rate slot (SLOT) particles based on 64B/66B Bit Block (hereafter referred to as 64B/66Bb).
  • SLOT 5Gbps rate slot
  • 64B/66Bb 64B/66B Bit Block
  • Any FlexE Client can allocate a total bandwidth rate of Q*5Gbps on a FlexE-based NNI or UNI. (Sequences of Q are in the range of integers greater than or equal to 1) are carried in a number of time slots.
  • the P node of the X-E network needs to parse and extract each FlexE Clieng and exchange it for processing, which lacks hierarchical multiplexing considerations.
  • FIG 3 exemplarily shows a communication diagram of an X-Ethernet flat networking technology applied to an end-to-end networking of a metropolitan area and a backbone network, where tens of thousands of dedicated line services need to be scheduled between multiple cities.
  • the aggregation node (aggregation as shown in Figure 3) and the backbone node (the backbone shown in Figure 3) manage hundreds of thousands of end-to-end cross-connections, which have difficulties in management and operation.
  • Core nodes (such as aggregation nodes and backbone nodes) have difficulties and pressures in dealing with the large number of cross-connections on the data side.
  • the embodiment of the present application provides a data transmission method, a communication device, and a storage medium, which are used to alleviate the pressure brought by the number of cross-connections of intermediate nodes in the network to the intermediate node, and also reduce the pressure on network management and operation and maintenance.
  • the embodiment of the present application provides a data transmission method, in which a Q first code block stream is obtained, where Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bits. Encoding, M1 is a positive integer, N1 is an integer not less than M1; the bit corresponding to the code block in the Q first code block stream is placed in the second code block stream to be transmitted; wherein, the coding of the second code block stream The type is M2/N2 bit coding; the code block corresponding bits in the Q first code block stream are carried in the payload area of the code block in the second code block stream; wherein, M2 is a positive integer, and the second code block stream is The number of bits carried by the payload area of one code block is not greater than M2; N2 is an integer not less than M2.
  • the solution provided by the embodiment of the present application multiplexes and demultiplexes the code block stream at the granularity of the code block.
  • the second code block stream traverses at least one intermediate node to reach the communication device on the demultiplexing side, and the intermediate node does not
  • the second code block stream is demultiplexed, thereby reducing the number of cross-connections of intermediate nodes in the network, and reducing the pressure on network management and operation and maintenance.
  • the bit corresponding to the code block in the Q first code block stream is placed in the second code block stream to be sent, and may be a code in the Q first code block stream.
  • the sync header area and the asynchronous header area of the block are sequentially placed in the payload area of the code block of the second code block stream. In this way, the synchronization header area and the non-synchronization area of the code block in the first code block stream can be demultiplexed sequentially.
  • all the bits corresponding to the synchronization header area and the asynchronous header area of one of the Q first code block streams are corresponding to at least two code blocks of the second code block stream. Payload area.
  • the first code block stream can be implemented in this manner. Multiplexing of code blocks. For example, if the coding methods of the first code block stream and the second code block stream are both 64B/66B codes, if the first code block stream is not compressed, the two code blocks of the second code block stream may be net.
  • the bearer region carries bits corresponding to one code block of the first code block stream.
  • the second code block stream corresponds to at least one data unit; one of the at least one data unit includes a header code block and at least one data code block; or one of the at least one data unit
  • the data unit includes a header code block, at least one data code block, and a tail code block; or, one of the at least one data unit includes at least one data code block and a tail code block.
  • the boundary division of the data unit can be implemented by the header block and/or the tail code block, so that the communication device recognizes the boundary of each data unit in the second code block stream, and demultiplexes the Q strip.
  • a block flow lays the foundation.
  • the at least one data code block includes at least one first type of data code block; and the code block corresponding bit in the Q first code block stream is carried in at least one of the second code block stream.
  • the code block in the first code block stream can be carried in the second code block stream, thereby implementing code block stream multiplexing based on code block granularity, thereby improving data transmission efficiency.
  • the header code block is an S code block and/or the tail code block is a T code block in order to be compatible with the prior art.
  • the second code block stream further includes identifier indication information corresponding to the code block;
  • the identifier indication information is used to indicate a first code block stream corresponding to the code block.
  • the identifier of the first code block stream corresponding to the code block taken from the first code block stream carried in the second code block stream can be indicated by the identifier indicating information to the communication device on the demultiplexing side, thereby The communication device on the side can demultiplex the Q first block flow to lay the foundation.
  • placing the bit corresponding to the code block in the Q first code block stream into the second code block stream to be sent includes: performing code blocks in the Q first code block stream Performing code block-based time division multiplexing to obtain a code block sequence to be processed; wherein each of the first code block streams in the Q first code block streams corresponds to at least one time slot; the code block included in the code block sequence to be processed Sorting, matching the order of the time slots corresponding to the code blocks included in the code block sequence to be processed; and placing the bits corresponding to the code block sequence to be processed into the second code block stream to be transmitted.
  • the demultiplexing side may determine, according to the order of the code blocks and the ordering relationship of the time slots, the time slots corresponding to the code blocks in the first code block stream of the Q code block to be processed, and further The corresponding relationship between the time slot and the Q first code block stream determines the first code block stream corresponding to each code block, and further recovers the Q first code block streams carried by the second code block stream.
  • the preset code block of the second code block stream carries the time slot allocation indication information; the time slot allocation indication information is used to indicate the correspondence between the Q first code block flow and the time slot.
  • the corresponding relationship between the demultiplexing side slot and the first code block stream is notified by means of the time slot allocation indication information, so that the communication device on the multiplexing side can more flexibly allocate time slots for the Q first code block streams.
  • placing the bit corresponding to the code block sequence to be processed into the second code block stream to be sent includes: compressing consecutive R code blocks in the code block sequence to be processed, and obtaining the compressed a code block sequence; wherein R is a positive integer; and the bit corresponding to the compressed code block sequence is placed in the second code block stream to be transmitted. In this way, the number of bits corresponding to the first code block stream carried in the second code block stream can be reduced, thereby improving data transmission efficiency.
  • R is greater than 1
  • at least two code blocks are included in consecutive R code blocks, and two first code block streams that are taken out of two code blocks are two different first codes.
  • Block flow That is, in the embodiment of the present application, multiple code blocks from different first code block streams may be compressed, thereby implementing compression and improvement of multiple code blocks in the code block multiplexing and demultiplexing scheme. The effect of transmission efficiency.
  • the coded sequence of the compressed code block sequence is M3/N3; M3 is a positive integer, and N3 is an integer not less than M3; and at least one data unit included in the second code block stream
  • the number of the first type of data code blocks included in one data unit is determined according to a common multiple of N3 and M2 and M2; or the first one included in one of the at least one data unit included in the second code block stream
  • the number of class data code blocks is determined according to the least common multiple of N3 and M2 and M2. This allows a data block of the second code block stream to be loaded into a code block of an integer number of first code block streams (this form can also be described as boundary alignment).
  • the method further includes: For the first code block stream in the Q first code block stream, performing: performing an idle IDLE code on the first code block stream according to the bandwidth of the first code block stream and the total bandwidth of the time slot corresponding to the first code block stream; Block addition and deletion processing; wherein, the total bandwidth of the time slot corresponding to the first code block stream is based on the number of time slots corresponding to the first code block stream, and the bandwidth allocated for each time slot corresponding to the first code block stream definite. In this way, the rate of the first code block stream can be adapted to the total rate corresponding to the time slot assigned to it.
  • an embodiment of the present application provides a data transmission method, where a second code block stream is received, where a code block corresponding bit in a Q first code block stream is carried in a second code block stream.
  • the payload area of the code block, Q is an integer greater than 1;
  • the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, N1 is an integer not less than M1; and the coding type of the second code block stream M2/N2 bit coding;
  • M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2;
  • N2 is an integer not less than M2; demultiplexing Q strip A block flow.
  • the solution provided by the embodiment of the present application multiplexes and demultiplexes the code block stream at the granularity of the code block.
  • the second code block stream traverses at least one intermediate node to reach the communication device on the demultiplexing side, and the intermediate node does not
  • the second code block stream is demultiplexed, thereby reducing the pressure on the intermediate nodes caused by the number of cross-connections of intermediate nodes in the network, and also reducing the pressure on network management and operation and maintenance.
  • the synchronization header area and the asynchronous header area of one of the Q first code block streams are sequentially placed into the payload area of the code block of the second code block stream. In this way, the synchronization header area and the non-synchronization area of the code block in the first code block stream can be demultiplexed sequentially.
  • all the bits corresponding to the synchronization header area and the asynchronous header area of one of the Q first code block streams are corresponding to at least two code blocks of the second code block stream. Payload area.
  • the first code block stream can be implemented in this manner. Multiplexing of code blocks. For example, if the coding methods of the first code block stream and the second code block stream are both 64B/66B codes, if the first code block stream is not compressed, the two code blocks of the second code block stream may be net.
  • the bearer region carries bits corresponding to one code block of the first code block stream.
  • the second code block stream corresponds to at least one data unit; one of the at least one data unit includes a header code block and at least one data code block; or one of the at least one data unit
  • the data unit includes a header code block, at least one data code block, and a tail code block; or, one of the at least one data unit includes at least one data code block and a tail code block.
  • the boundary division of the data unit can be implemented by the header block and/or the tail code block, so that the communication device recognizes the boundary of each data unit in the second code block stream, and demultiplexes the Q strip.
  • a block flow lays the foundation.
  • the at least one data code block includes at least one first type of data code block; and the code block corresponding bit in the Q first code block stream is carried in at least one of the second code block stream.
  • the code block in the first code block stream can be carried in the second code block stream, thereby implementing code block stream multiplexing based on code block granularity, thereby improving data transmission efficiency.
  • the header code block is an S code block and/or the tail code block is a T code block in order to be compatible with the prior art.
  • the second code block stream further includes identifier indication information corresponding to the code block;
  • the identifier indication information is used to indicate a first code block stream corresponding to the code block.
  • the identifier of the first code block stream corresponding to the code block taken from the first code block stream carried in the second code block stream can be indicated by the identifier indicating information to the communication device on the demultiplexing side, thereby The communication device on the side can demultiplex the Q first block flow to lay the foundation.
  • demultiplexing the Q first code block streams includes: acquiring bits corresponding to the code blocks in the Q first code block streams carried by the payload area of the second code block stream. And obtaining a sequence of the code block to be decompressed; and demultiplexing the Q first code block streams according to the sequence of the code block to be decompressed.
  • one code block in the sequence of the code block to be decompressed is obtained by compressing at least two code blocks, at least two code blocks correspond to two different first code block streams. . That is, in the embodiment of the present application, multiple code blocks from different first code block streams may be compressed, thereby implementing compression and improvement of multiple code blocks in the code block multiplexing and demultiplexing scheme. The effect of transmission efficiency.
  • the preset code block of the second code block stream carries the time slot allocation indication information; the time slot allocation indication information is used to indicate the correspondence between the Q first code block flow and the time slot.
  • the corresponding relationship between the demultiplexing side slot and the first code block stream is notified by means of the time slot allocation indication information, so that the communication device on the multiplexing side can more flexibly allocate time slots for the Q first code block streams.
  • demultiplexing the Q first code block streams according to the code block sequence to be decompressed includes: decompressing the code block sequence to be decompressed to obtain a code block sequence to be restored; Determining, according to the code block sequence to be recovered, a first code block stream corresponding to each code block in the code block sequence to be recovered, and obtaining Q first code block streams; wherein each of the Q first code block streams A code block stream corresponds to at least one time slot; the order of the code blocks included in the code block sequence to be recovered matches the order of the time slots corresponding to the code blocks included in the code block sequence to be recovered.
  • the demultiplexing side may determine, according to the order of the code blocks and the ordering relationship of the time slots, the time slots corresponding to the code blocks in the first code block stream of the Q code to be recovered, and further The corresponding relationship between the time slot and the Q first code block stream determines the first code block stream corresponding to each code block, and further recovers the Q first code block streams carried by the second code block stream.
  • the coded sequence of the compressed code block sequence is M3/N3; M3 is a positive integer, and N3 is an integer not less than M3; and at least one data unit included in the second code block stream
  • the number of the first type of data code blocks included in one data unit is determined according to a common multiple of N3 and M2 and M2; or the first one included in one of the at least one data unit included in the second code block stream
  • the number of class data code blocks is determined according to the least common multiple of N3 and M2 and M2. This allows a data block of the second code block stream to be loaded into a code block of an integer number of first code block streams (this form can also be described as boundary alignment).
  • the embodiment of the present application provides a communication device, where the communication device includes a memory, a transceiver, and a processor, where: the memory is used to store an instruction; the processor is configured to control the transceiver to perform signal reception according to an instruction to execute the memory storage. And signaling, the communication device is configured to perform the method of any of the first aspect or the first aspect described above when the processor executes the instruction stored in the memory.
  • the embodiment of the present application provides a communication device, where the communication device includes a memory, a transceiver, and a processor, where: the memory is used to store an instruction; the processor is configured to control the transceiver to perform signal reception according to an instruction to execute the memory storage. And signaling, when the processor executes the instruction stored in the memory, the communication device is configured to perform the method of any of the above second aspect or the second aspect.
  • the embodiment of the present application provides a communication device, which is used to implement any one of the foregoing first aspect or the first aspect, including a corresponding functional module, which is used to implement the steps in the foregoing method.
  • the functions can be implemented in hardware or in hardware by executing the corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the structure of the communication device includes a multiplexing demultiplexing unit and a transceiver unit, and the units can perform corresponding functions in the foregoing method examples.
  • the units can perform corresponding functions in the foregoing method examples. For details, refer to the detailed description in the method example, which is not described herein.
  • the embodiment of the present application provides a communication device, which is used to implement the method of any one of the foregoing second aspect or the second aspect, and includes a corresponding functional module, which is used to implement the steps in the foregoing method.
  • the functions can be implemented in hardware or in hardware by executing the corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the structure of the communication device includes a multiplexing demultiplexing unit and a transceiver unit, and the units can perform corresponding functions in the foregoing method examples.
  • the units can perform corresponding functions in the foregoing method examples. For details, refer to the detailed description in the method example, which is not described herein.
  • the embodiment of the present application provides a computer storage medium, where the computer storage medium stores instructions, when the computer is running on the computer, causing the computer to perform the first aspect or the method in any possible implementation manner of the first aspect. .
  • an embodiment of the present application provides a computer storage medium, where the computer storage medium stores an instruction, when the computer is running on the computer, causing the computer to perform the method in any possible implementation manner of the second aspect or the second aspect. .
  • an embodiment of the present application provides a computer program product comprising instructions, which when executed on a computer, cause the computer to perform the method of the first aspect or any possible implementation of the first aspect.
  • the embodiment of the present application provides a computer program product comprising instructions, when executed on a computer, causing a computer to perform the method in any of the possible implementations of the second aspect or the second aspect.
  • FIG. 1 is a schematic diagram of a communication system based on a flexible Ethernet protocol
  • FIG. 2 is a schematic diagram of an X-E communication system architecture
  • Figure 3 is a schematic diagram of end-to-end communication
  • FIG. 4 is a schematic structural diagram of a communication system according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another communication system according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a network system according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a data transmission method according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a code block according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of another code block according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a code block according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a data code block according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a T7 code block according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an IDLE code block according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of another code block according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a FlexE frame according to an embodiment of the present application.
  • 16 is a schematic structural diagram of a second code block stream transmission time slot allocation indication information according to an embodiment of the present disclosure
  • FIG. 17 is a schematic structural diagram of code block stream multiplexing according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic structural diagram of a first code block stream according to an embodiment of the present disclosure.
  • FIG. 19 is a schematic structural diagram of a second code block stream according to an embodiment of the present disclosure.
  • FIG. 20 is a schematic structural diagram of another second code block stream according to an embodiment of the present disclosure.
  • FIG. 21 is a schematic diagram of a compression processing manner according to an embodiment of the present disclosure.
  • FIG. 22 is a schematic diagram of a compression processing manner according to an embodiment of the present disclosure.
  • FIG. 23 is a schematic flowchart of a data transmission method according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of a data transmission structure according to an embodiment of the present application.
  • FIG. 25 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of another communication device according to an embodiment of the present disclosure.
  • FIG. 27 is a schematic structural diagram of another communication device according to an embodiment of the present disclosure.
  • FIG. 28 is a schematic structural diagram of another communication device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram showing the architecture of a communication system to which the embodiment of the present application is applied.
  • the communication system includes a plurality of communication devices, and a code block stream is transmitted between the communication devices.
  • the communication device in the embodiment of the present application may be a network device, such as a communication device called a PE node, which may be a network edge in the XE network, or a communication device called a P node in the network in the XE network. It can be accessed as a client device to other bearer networks, such as Optical Transport Network (OTN) or Wavelength Division Multiplexing (WDM).
  • OTN Optical Transport Network
  • WDM Wavelength Division Multiplexing
  • the communication device provided in the embodiment of the present application has a multiplexing demultiplexing unit, such as multiplexing in the multiplexing demultiplexing unit 3301 and the communication device 3107 in the communication device 3105 shown in FIG.
  • the communication device with the multiplexing demultiplexing unit can implement multiplexing of the received multiple code streams (the multiplexing in the embodiment of the present application can also be referred to as multiplexing in some documents), and can also implement the received Demultiplexing of a code stream (demultiplexing in the embodiment of the present application may also be referred to as demultiplexing in some documents), which will be exemplified below with reference to FIG.
  • the communication device 3101 outputs a code block stream 3201 to a communication device 3105
  • the communication device 3102 outputs a code block stream 3202 to the communication device 3105
  • the communication device 3103 outputs a code block stream 3203 to the communication device 3105
  • the communication device 3105 includes a multiplexing solution.
  • the multiplexing unit 3301, the communication device 3105 may multiplex the received code block stream 3201, the code block stream 3202, and the code block stream 3203 into one code block stream 3205 for transmission.
  • multi-stage multiplexing can be implemented.
  • the communication device 3105 in FIG. 4 can output the code block stream 3205 to the communication device 3107, and the code block stream 3205 is already a multiplexed code block stream, and the communication device 3107
  • the code block stream 3204 output by the communication device 3104, the code block stream 3206 output by the communication device 3106, and the multiplexed code block stream 3205 output by the communication device 3105 may be multiplexed again by the multiplexing demultiplexing unit 3302, and output.
  • the code block stream 3207 is multiplexed. It can also be described that the communication device 3107 multiplexes the code block stream 3204, the multiplexed code block stream 3205, and the code block stream 3206 into one code block stream 3207.
  • the multiplexed code block stream 3207 can be transmitted between the communication device 3107 and the communication device 3108 and the communication device 3109.
  • the multiplexing demultiplexing unit in the communication device may also have a demultiplexing function, and the multiplexing demultiplexing unit 3303 in the communication device 3109 shown in FIG. 4 may demultiplex the received code block stream 3207, and The demultiplexed code block stream is sent to the corresponding communication device.
  • the demultiplexed code block stream 3204 is sent to the communication device 3110 in FIG. 4, and the demultiplexed code block stream 3201 is sent to the communication device 3111.
  • the demultiplexed code block stream 3202 is transmitted to the communication device 3112
  • the demultiplexed code block stream 3203 is transmitted to the communication device 3113
  • the demultiplexed code block stream 3206 is transmitted to the communication device 3114.
  • the multiplexing and demultiplexing unit 3303 may first demultiplex the code block stream 3207 into a code block stream 3204, a code block stream 3205, and a code block stream 3206, and further multiplex the demultiplexing unit 3303.
  • the code block stream 3205 is then demultiplexed into a code block stream 3201, a code block stream 3202, and a code block stream 3203.
  • the multiplexing demultiplexing unit 3303 of the communication device 3109 in FIG. 4 may include two sub-multiplex demultiplexing units, where one sub-multiplex demultiplexing unit is used to stream the code block.
  • 3207 is demultiplexed into a code block stream 3204, a code block stream 3205, and a code block stream 3206, and the code block stream 3205 is sent to another sub-multiplex demultiplexing unit, and the other sub-multiplex demultiplexing unit sets the code
  • the block stream 3205 is demultiplexed into a code block stream 3201, a code block stream 3202, and a code block stream 3203.
  • FIG. 5 exemplarily provides another schematic diagram of a communication system architecture applicable to the embodiment of the present application.
  • the process of the communication device 3109 receiving the code block stream 3207 is the same as that in FIG. 4, and details are not described herein again.
  • the scheme shown in FIG. 4 is different in that the multiplexing demultiplexing unit 3303 in the communication device 3109 in FIG. 5 demultiplexes the received code block stream 3207 into a code block stream 3204, a code block stream 3205, and a code block stream 3206.
  • the code block stream 3204 is transmitted to the communication device 3110
  • the code block stream 3205 is transmitted to the communication device 3115
  • the code block stream 3206 is transmitted to the communication device 3114.
  • the multiplexing demultiplexing unit 3304 in the communication device 31105 demultiplexes the received code block stream 3205 into a code block stream 3201, a code block stream 3202, and a code block stream 3203, and transmits the code block stream 3201 to the communication device 3111.
  • the code block stream 3202 is transmitted to the communication device 3112, and the code block stream 3203 is transmitted to the communication device 3113.
  • both the multiplexing side and the demultiplexing side can be flexibly configured.
  • the multiplexing demultiplexing unit 3301 and the multiplexing demultiplexing unit 3302 are performed.
  • the two-stage multiplexing obtains the code block stream 3207, and on the demultiplexing side, the code block stream can be demultiplexed into the code block stream 3204 and the code by the multiplexing demultiplexing unit 3303 as shown in FIG. Block stream 3201, code block stream 3202, code block stream 3203, and code block stream 3206.
  • the received code block stream 3207 is first demultiplexed into a code block stream 3204, a code block stream 3205, and a code block stream 3206 by a multiplexing demultiplexing unit 3303, and then a multiplexing solution is obtained.
  • the multiplexing unit 3304 demultiplexes the received code block stream 3205 into a code block stream 3201, a code block stream 3202, and a code block stream 3203.
  • FIG. 6 is a schematic diagram showing a network system architecture provided by an embodiment of the present application.
  • X-Ethernet can be based on traditional Ethernet interface, Fibre Channel (Fibre Channel, FC) Fibre Channel interface, Common Public Radio Interface (CPRI), Synchronous Digital System SDH/SONET, Optical Transport Network OTN and FlexE interfaces
  • the generic data unit sequence stream is cross-connected to provide a specific protocol-independent end-to-end networking technology in which the objects to be exchanged are general data unit sequence streams.
  • the rate adaptation of the sequence of data units to the FlexE time slot or the corresponding physical interface can be achieved by additions and deletions to the accompanying idle (IDLE).
  • the 64B/66B code block stream may be cross-connected based on the 64B/66B code block stream, or may be cross-connected based on the decoded general data unit stream.
  • multiple types of data can be accessed on the access side of the two ends, such as mobile preamble CPRI, mobile backhaul Ethernet and enterprise SDH, and Ethernet private line.
  • the aggregation node of the XE (such as the aggregation shown in FIG. 6) can implement multiplexing (multiplexing) of the Q service code streams to one code stream, thereby reducing the aggregation node.
  • FIG. 6 shows that the solution provided by the embodiment of the present application can effectively reduce the number of cross-connections of the core node (such as the aggregation node and the backbone node of FIG. 6) in the data plane, and mitigate the core node.
  • the core node such as the aggregation node and the backbone node of FIG. 6
  • pressure * in the embodiment of the present application means the meaning of multiplication.
  • the embodiment of the present application provides a data transmission method, where the multiplexing side of the data transmission method can be performed by the communication device 3105 and the communication device 3107 in FIG. 4 and FIG. 5 above, and the data transmission method is solved.
  • the use side can be performed by the communication device 3109 in Fig. 4 described above and the communication device 3205 in Fig. 5.
  • the communication device on the multiplexing side may also be referred to as a first communication device, and the communication device on the demultiplexing side may be referred to as a second communication device.
  • one communication device may have multiplexing capability.
  • FIG. 7 is a schematic flowchart diagram of a data transmission method provided by an embodiment of the present application. As shown in FIG. 7, the method includes:
  • Step 4101 The first communications device obtains the Q first code block stream, where Q is an integer greater than 1.
  • the encoding type of the first code block stream is M1/N1 bit encoding, M1 is a positive integer, and N1 is not less than M1. Integer
  • Step 4102 The first communication device puts the bit corresponding to the code block in the Q first code block stream into the second code block stream to be sent, where the coding type of the second code block stream is M2/N2 bit coding.
  • the code block corresponding bits in the Q first code block stream are carried in the payload area of the code block in the second code block stream; wherein M2 is a positive integer, and the payload area of one code block in the second code block stream
  • M2 is a positive integer
  • N2 is an integer not less than M2.
  • Placing the bit corresponding to the code block in the Q first code block stream into the second code block stream to be transmitted may also be described as multiplexing (or interleaving) the bit corresponding to the code block in the Q first code block stream. In, English can also be written as Interleaving) the second code block stream to be sent.
  • the coding mode of the first code block stream and the coding mode of the second code block stream may be the same. It may be said that M1 may be the same or different from M2, and N1 may be the same as or different from N2.
  • the coding mode of the first code block stream adopts the 8B/10B coding mode
  • the second code block flow adopts the 64B/66B coding mode
  • the coding mode of the first code block flow adopts the 64B/65B coding mode
  • the two code block stream uses 64B/66B encoding.
  • the first communication device 3107 and the first communication device 3109 include at least one first communication device, and the first communication device receives the code block stream.
  • the code block stream 3207 is not demultiplexed, that is, the second code block stream traverses at least one intermediate node to reach the second communication device on the demultiplexing side, and the intermediate node does not need to solve the second code block stream.
  • the second code block stream may be sequentially transmitted into the bearer pipeline formed by the time slot combination in the flexible Ethernet interface group of the current node and the next node, and traversed the network.
  • the second communication device on the demultiplexing side is reached.
  • the intermediate node may reuse the second code block stream and other code block streams again, which is not limited in this embodiment.
  • the solution provided by the embodiment of the present application multiplexes and demultiplexes the code block stream at the granularity of the code block.
  • multiplexing of multiple first code block streams can be implemented. Therefore, the plurality of first code block streams are multiplexed into one second code block stream for transmission, thereby reducing the number of cross-connections that the intermediate node needs to process, and also reducing the pressure on network management and operation and maintenance.
  • the intermediate node in the embodiment of the present application refers to a communication device between the first communication device on the multiplexing side and the second communication device on the demultiplexing side on the transmission path.
  • the step 4102 may be that the synchronization header area and the asynchronous header area of one of the Q first code block streams are sequentially placed into the code block of the second code block stream.
  • Payload area That is, the information carried by the synchronization header area of one code block and the information carried by the asynchronous header area are sequentially placed into the payload of the code block of the second code block stream according to their order in the first code block stream. region.
  • An embodiment of the present application further provides an optional implementation manner, where all the bits corresponding to the synchronization header area and the non-synchronized header area of one of the Q first code block streams are correspondingly placed in the second code block stream.
  • the total code block in the first code block stream is The number of bits is 66 bits, and the total number of bits of one code block of the second code block stream is 66 bits, but the payload area of one code block of the second code block stream is 64 bits, so one of the first code block streams
  • the 66 bits of the code block need to be placed in the payload area of at least two code blocks of the second code block stream.
  • the first code block stream in the embodiment of the present application may also be a multiplexed code block stream.
  • the first communication device 3105 multiplexes the code block stream 3201, the code block stream 3202, and the code block stream 3203. Thereafter, after the multiplexed 3205 is output, the first communication device 3107 can multiplex the code block stream 3204, the code block stream 3206, and the multiplexed code block stream 3205 again. That is to say, the nested application is supported in the embodiment of the present application.
  • the multiplexed code block will be transmitted.
  • the pipeline of the flow is called a high-order pipeline.
  • the pipeline carrying the code block stream 3201, the code block stream 3202, and the code block stream 3203 in FIG. 4 is called a low-order pipeline
  • the pipeline carrying the multiplexed code block stream 3205 is called a pipeline.
  • the pipeline carrying the code block stream 3207 is referred to as a higher-order pipeline.
  • the code blocks of the low-order pipeline can be loaded into the high-order pipeline, and the code blocks of the high-order pipeline can be loaded. Enter a higher level of pipeline to achieve nested multiplexing of higher order pipelines to higher order pipelines.
  • the first communication device in the embodiment of the present application may include multiple interfaces, and may be divided into an interface on the input side and an interface on the output side according to the data transmission direction, the interface on the input side includes multiple, and the interface on the output side includes one or more interfaces.
  • the interface of the first communication device may be configured in advance, and multiple code block streams received by part or all of the interfaces on the input side are multiplexed into one of the plurality of code block streams on one interface on the output side. In the code block stream.
  • the first communication device includes an interface on the input side, including an interface 1, an interface 2, and an interface 3.
  • the output interface includes an interface 4 and an interface 5, and the Q1 and Q2 barcode blocks received by the interface 1 and the interface 2 can be configured to flow through.
  • the multiplexed into a code block stream is output through the interface 4, and the Q3 barcode block stream received by the interface 3 is multiplexed into a code block stream and output through the interface 5.
  • the Q4 barcode blocks in Q1, Q2, and Q3 are multiplexed into one code block stream and output through the interface 4.
  • the Q5 barcode blocks in Q1, Q2, and Q3 are multiplexed into one code block stream and output through the interface 5.
  • the configuration information multiplexed between the interfaces of the first communications device may be adjusted periodically or irregularly, or may be statically fixed.
  • any one of the Q first code block stream and the second code block stream involved in the embodiment of the present application and one of the Q first code block stream and the second code block stream
  • the mentioned code block stream refers to any one of the Q first code block stream and the second code block stream.
  • Block flow except for one code block in the first code block stream and one code block in the second code block stream, the mentioned code blocks refer to the Q first code block stream and the second code block. Any code block in the stream.
  • a code block stream (such as a first code block stream and a second code block stream) defined in the embodiments of the present application may refer to a data stream in units of code blocks.
  • the English of the code block (such as the code block in the first code block stream and the code block in the second code block stream) may be written as a Bit Block, or written in English as a Block.
  • a preset number of bits in a bit stream (the bit stream may be encoded or pre-encoded) may be referred to as a code block (the code block may also be referred to as a bit group or a bit block).
  • one bit may be referred to as one code block, and for example, two bits may be referred to as one code block.
  • the code block defined in the embodiment of the present application may be a code block obtained by encoding a bit stream using an encoding type.
  • some coding modes are defined, such as M1/N1 bit coding, M2/N2 bit coding, and M3/N3 bit coding.
  • the description of the M/N bit coding in the embodiment of the present application may be applicable to any one or more of M1/N1 bit coding, M2/N2 bit coding, and M3/N3 bit coding; that is, M1 applies where M is described and N1 correspondingly applies where N is described;
  • M2 applies where M is described and N2 correspondingly applies where N is described; and M3 applies where M is described and N3 correspondingly applies where N is described.
  • M is a positive integer and N is an integer not less than M.
  • M may be equal to N. In that case, if a code block is divided into a synchronization header area and a non-synchronization header area, it may be understood that the synchronization header area carries 0 bits; or it may be understood that a preset number of bits is referred to as one code block, and the boundaries of the code blocks are determined by other technical means.
  • N can be greater than M.
  • for example, for a code block obtained by 8B/10B coding, which performs DC balancing after encoding, the number of possible code block samples of the 10-bit coded length is 1024, which is much larger than the 256 code block samples required for the 8-bit information length.
  • therefore, 8B/10B code block synchronization can be implemented by means of the reserved code block samples, so as to identify the boundary of an 8B/10B code block.
  • the 8B/10B code block includes only the unsynchronized header area.
  • FIG. 8 is a schematic diagram showing the structure of a code block provided by an embodiment of the present application. As shown in FIG. 8, the synchronization header area included in the code block 4200 carries 0 bits, and all the bits included in the code block 4200 are bits carried by the non-synchronization header area 4201.
  • the M/N bit coding may be, for example, the 64B/66B coding defined in 802.3 (64B/66B coding may also be written as 64/66 bit coding); in this case,
  • the code blocks may include a synchronization header area and a non-synchronization header area.
  • the code block obtained by using M/N bit coding in the embodiment of the present application may be a code block whose non-synchronization header area includes M bits and whose total number of bits after coding is N; the code block obtained by M/N bit
  • coding can also be described as a code block composed of the M bits of the non-synchronization header area and the bits of the synchronization header area.
  • the code block 4200 includes a synchronization header area 4301 and a non-synchronization header area 4302; optionally, the number of bits carried by the non-synchronization header
  • area 4302 is M,
  • and the number of bits carried by the synchronization header area 4301 is (N-M).
  • the information carried by the synchronization header area 4301 in the embodiment of the present application may be used to indicate the type of the code block, and the type of the code block may include a control type, a data type, some other types, and the like.
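  • the role of the 2-bit synchronization header can be illustrated with a small sketch; assuming the 64B/66B convention in which "01" marks a data code block and "10" marks a control code block, a decoder might classify blocks as follows (the function name is illustrative, not from the standard).

```python
def classify_64b66b_block(block_bits: str) -> str:
    """Classify a 66-bit block string by its 2-bit synchronization header.

    Assumes the 64B/66B convention: sync header '01' -> data code block,
    '10' -> control code block; anything else is an invalid header.
    """
    if len(block_bits) != 66:
        raise ValueError("a 64B/66B code block is 66 bits long")
    sync_header = block_bits[:2]
    if sync_header == "01":
        return "data"
    if sync_header == "10":
        return "control"
    return "invalid"

print(classify_64b66b_block("01" + "0" * 64))  # -> data
```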
  • the code block stream obtained by M/N bit coding can be transmitted on the Ethernet physical layer link.
  • for example, the M/N bit coding may be the 8B/10B coding used in 1G Ethernet, that is, a code block stream of the
  • 8B/10B coding type is transmitted on the 1GE physical layer link (a code block stream may also be called a Block stream in English);
  • the M/N bit coding may also be the 64B/66B coding used in 10GE, 40GE and/or 100GE, that is, a 64B/66B code block stream is transmitted on the 10GE, 40GE and/or 100GE physical layer link.
  • other coding and decoding may occur.
  • the M/N bit coding in the embodiment of the present application may also be some coding types that appear in the future, such as 128B/130B coding, 256B/257B coding, and the like.
  • the code block may be a code block obtained by using the 8B/10B coding of the Ethernet Physical Coding Sublayer (PCS) standardized by IEEE 802.3 (also referred to as an 8B/10B code block), or a code block obtained by 64B/66B coding (also referred to as a 64B/66B code block).
  • PCS Physical Coding Sublayer
  • the code block in the embodiment of the present application may be a code block obtained by using the 256B/257B encoding (transcoding) of the 802.3 Ethernet Forward Error Correction (FEC) sublayer (which may be referred to as a 256B/257B
  • code block), or a code block obtained by the 64B/65B transcoding of 64B/66B code blocks in ITU-T G.709 (also referred to as a 64B/65B code block), a 512B/514B code block, etc.
  • the code block in the embodiment of the present application may be a code block (also referred to as a 64B/67B code block) obtained by using the 64B/67B encoding of the Interlaken bus specification.
  • FIG. 10 is a schematic diagram showing the structure of an O code block of the type field 0x4B provided by the embodiment of the present application.
  • the code block 4200 in the embodiment of the present application is an O code block, and the O code block 4200
  • the information carried by the included sync header area 4301 is "SH10", and the "SH10” means that the type of the code block 4200 is a control type.
  • the non-synchronization header area 4302 includes a payload area 4303 and a non-payload area 4304, and the non-payload area 4304 can be used to carry the type field "0x4B", the "O0" field, and the reserved fields "C4 to C7"; the reserved fields "C4 to C7" can all be filled with "0x00".
  • "O0" may be filled with a feature command word already used in the prior art, such as "0x0", "0xF" or "0x5", or with a feature command word not used in the prior art, such as "0xA", "0x9" or "0x3",
  • so as to be distinguished from the prior art; the content filled in the "O0" field can thus indicate some information.
  • the first code block in the embodiment of the present application may also be a code block including S in a character of the code block, or may be a new code block such as a newly defined O code block.
  • in addition to the O code block whose type field is 0x4B shown in FIG. 10,
  • the first code block may be an S code block whose type field is 0x33 or an S code block whose type field is 0x66; corresponding to the standard 64B/66B code,
  • the S code block has only one type, whose type field is 0x78 and which contains 7 bytes of data payload.
  • in other words, the S code block may include code blocks of type 0x78, 0x33, and 0x66, and may also include other code blocks whose characters include an S character; such an S code block may include 4 bytes of data payload.
  • SFD Start of Frame Delimiter
  • for example, the synchronization header area 4301 is "10",
  • the type field of the non-payload area 4304 is "0x78",
  • the subsequent payload area 4303 is all filled with "0x55",
  • and the non-payload area 4304 after the payload area 4303 is all filled with "0x55" except that the last byte is filled with "0xD5".
  • the code block in the embodiment of the present application may be a data code block.
  • FIG. 11 is a schematic structural diagram of a data code block provided by an embodiment of the present application.
  • the information carried by the sync header area 4301 included in the code block 4200 is "SH01", and the "SH01" means that the type of the code block 4200 is a data type.
  • the payload area 4303 is included in the asynchronous header area 4302.
  • the non-synchronized header areas of the data code blocks are all payload areas, as shown in the D0 to D7 payload areas.
  • the code block in this embodiment of the present application may be a T code block.
  • the T code block may be a code block including T in a character of the code block, and the T code block may include any one of T0 to T7, such as a T0 code block whose type field is 0x87, a T1 code block whose type field is 0x99, and The T7 code block of type field is 0xFF and so on.
  • FIG. 12 is a schematic diagram showing the structure of a T7 code block according to an embodiment of the present application. As shown in FIG. 12, the code block 4200 in the embodiment of the present application is a T7 code block, and the synchronization header area 4301 included in the code block 4200 is included.
  • the information carried is "SH10", and "SH10" means that the type of the code block 4200 is a control type.
  • the unsynchronized header area 4302 includes a payload area 4303 and a non-payload area 4304.
  • the non-payload area 4304 can be used to carry the type field "0xFF".
  • the type fields of the T0 to T7 code blocks are 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1 and 0xFF respectively, and the T0 to T7 code blocks can be applied to various Ethernet interfaces using 64B/66B encoding. It should be noted that the T1 to T7 code blocks respectively include a payload area of 1 to 7 bytes.
  • the payload area in the T code block may be used to carry the bit corresponding to the code block taken from the first code block stream; or may not be used to carry the bit corresponding to the code block taken from the first code block stream, such as It can be filled in with 0 or used to carry other indication information.
  • the C1 to C7 bytes in the T0 to T6 code blocks can be processed according to existing Ethernet technology, that is, as the IDLE control bytes (C1 to C7) following the T character, each coded as the 7-bit value 0x00. For example, for the T code block of type 0xFF, the fields D0 to D6 can each be filled with the 8-bit value "0x00" and treated as reserved.
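  • the T0 to T7 type fields listed above can be captured in a small lookup table; the sketch below (names illustrative) also records how many payload bytes each terminator type carries, following the 1-to-7-byte note for T1 to T7 above.

```python
# Type fields of the T0..T7 terminator code blocks as listed above.
T_BLOCK_TYPE_FIELDS = [0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, 0xFF]

def t_block_payload_bytes(type_field: int) -> int:
    """Number of data payload bytes carried by a Tn code block.

    Per the text above, T1..T7 carry 1..7 payload bytes and T0 carries none.
    """
    n = T_BLOCK_TYPE_FIELDS.index(type_field)  # n is the terminator index
    return n

assert t_block_payload_bytes(0xFF) == 7  # T7 carries 7 payload bytes
assert t_block_payload_bytes(0x87) == 0  # T0 carries none
```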
  • the code block in the embodiment of the present application may be an IDLE code block.
  • FIG. 13 exemplarily shows a structure of an IDLE code block provided in the embodiment of the present application.
  • the code block 4200 in the embodiment of the present application is shown in FIG.
  • the information carried by the sync header area 4301 included in the code block 4200 is "SH10"
  • the "SH10” means that the type of the code block 4200 is the control type.
  • the asynchronous header area 4302 is used to carry the type field "0x1E", and the other fields "C0-C7" of the asynchronous header area 4302 carry the content of "0x00".
  • the second code block stream includes at least one data unit, and the IDLE code block may be added inside one data unit or may be added between the data units.
  • the indication information may be carried in the second code block stream.
  • the indication information mentioned in this embodiment may be the identifier indication information, the time slot allocation indication information, the multiplexing indication information, and the like mentioned in the subsequent content, so that the egress side performs demultiplexing in a manner consistent with the ingress side, or, in the case where the multiplexing side and the demultiplexing side have already agreed on the multiplexing/demultiplexing relationship, it is used for verifying that relationship.
  • the code block carrying the indication information may be referred to as an Operations, Administration, and Maintenance (OAM) code block.
  • OAM Operations, Administration, and Maintenance
  • the OAM code block needs a specific type field to form a distinction with the idle code block.
  • FIG. 14 is a schematic structural diagram showing another structure of a code block according to an embodiment of the present application.
  • the information carried by the synchronization header area 4301 included in the code block 4200 of the embodiment of the present application is “SH10”, “ SH10” means that the type of the code block 4200 is a control type.
  • the unsynchronized header area 4302 includes a payload area 4303 and a non-payload area 4304, and the non-payload area can be used to carry the type field "0x00".
  • the OAM code block may be the code block shown in FIG. 14.
  • for example, the identifiers of the first code block streams corresponding to four time slots may be carried in four consecutive preset fields of the OAM code block, so that the correspondence between the time slots and the first code block streams is sent to the opposite end.
  • the four preset fields may be the last four fields of the OAM code block, and the remaining fields may be reserved fields, for example, may be padded with zeros.
  • the OAM code block may replace the IDLE code block in the data unit of the second code block stream, or may be inserted between the data units.
  • the second code block stream corresponds to at least one data unit.
  • a data unit may include multiple structural forms, such as the first type, and one data unit corresponding to the second code block stream may include a header code block and at least one data code block.
  • one data unit corresponding to the second code block stream may include a header code block and at least one data code block. And the end block.
  • one data unit corresponding to the second code block stream may include at least one data code block and a tail code block. The header block and the tail block can be used to carry some information, and can also function to divide a data unit.
  • the header block and the tail block function to define a boundary for a data unit.
  • a data unit corresponding to the second code block stream may include at least one data code block, for example, the number of data code blocks included in one data unit may be set.
  • the bits corresponding to the code blocks in the Q first code block streams are carried in the payload area of any one or more of the header code block, the tail code block and the data code blocks in the second code block stream.
  • for example, the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the header code block and the data code blocks of the second code block stream.
  • optionally, the data code blocks in one data unit of the second code block stream may include at least one first type data code block; and the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the first type data code blocks in the at least one first type data code block in the second code block stream; wherein the number of bits carried in the payload area of one first type data code block in the second code block stream is M2.
  • the data code block in one of the second code block streams may include at least one first type of data code block and at least one second type Data block.
  • the bits corresponding to the code blocks of the first code block streams are carried on the first type data code blocks, and the header code block, the tail code block and the second type data code blocks can be used to carry some other information (such as any one or more of the subsequent time slot allocation indication information, identifier indication information, and multiplexing indication information). It can also be described that the bits corresponding to the code blocks corresponding to each time slot in all the divided time slots are carried in the payload areas of the first type data code blocks.
  • the number of second type of data code blocks may or may not be zero.
  • the header code block and the tail code block in one data unit of the second code block stream in the embodiment of the present application may be newly defined code blocks in a fixed format; the header code block and the tail code block can act as the boundaries of the data unit and can carry some information.
  • the header code block may be an O code block, and the O code block may be a code block of the type field 0x4B shown in FIG. 10 above.
  • the header code block may also be an S code block in which other characters defined in the prior art include S characters.
  • the first code block may be an S code block of type 0x33 or an S code block of type 0x78.
  • when the header code block is an O code block, information may be added in a preset field of the O code block to distinguish it from prior art forms; the preset field may be the feature command word in the O code block,
  • using a feature command word not used in the prior art, such as 0xA, 0x9 or 0x3; of course, it is also possible to use an unused code block of type 0x00.
  • the header block may include a sync header area and a non-synchronized header area, and the non-synchronized header area includes a non-payload area and a payload area.
  • the trailing code block may be a T code block.
  • the T code block may be a T7 code block whose type field is 0xFF as shown in FIG. 12 above, and may be other T code blocks defined in other prior art, such as any one of the above T0 to T6 code blocks.
  • the S code block and the T code block are used to encapsulate the data unit of the second code block stream, which can be compatible with the prior art, so that the second code block stream carrying the multiple first code block streams can traverse currently supported network nodes,
  • such as X-Ethernet and FlexE Client switch nodes.
  • one of the data units in the second code block stream may optionally include some IDLE code blocks, and the location of the IDLE code blocks in the data unit may be pre-configured or random.
  • some other code blocks may be configured between adjacent data units of the second code block stream, such as a control code block, a data code block, or a code block of another code block type.
  • code blocks may be configured between adjacent data units of the second code block stream.
  • any one or more of the IDLE code block, the S code block, and the code block shown in FIG. 14 described above are disposed between adjacent data units of the second code block stream.
  • One or more IDLE code blocks may be spaced between adjacent data units of the second code block stream.
  • the number of IDLE code blocks between adjacent data units of the second code block stream may be a variable, which may be adjusted according to a specific application scenario.
  • at least two sets of adjacent data units may exist in the second code block stream, and the number of IDLE code blocks spaced between the two sets of adjacent data units is not equal.
  • the IDLE code blocks spaced between adjacent data units of the second code block stream may be added or deleted adaptively, that is, rate adaptation (which in the embodiment of the present application may also be frequency adaptation) is implemented through such adaptive increase or decrease. For example, if the bandwidth of the pipe carrying the second code block stream is too small, the IDLE code blocks between the data units in the second code block stream may be appropriately reduced;
  • the IDLE code blocks between adjacent data units may even be reduced to zero, that is, there is no IDLE code block between two adjacent data units.
  • conversely, if the bandwidth of the pipe carrying the second code block stream is large, the IDLE code blocks between the data units in the second code block stream may be appropriately increased.
  • in theory, an idle code block may be inserted at any position of the second code block stream to implement rate adaptation; however, when the rate/bandwidth difference is small, it is recommended to insert the IDLE code blocks between two data units, for example, the number of IDLE code blocks between data units can be increased from one to two or more.
  • when IDLE code blocks are added between adjacent data units, the IDLE code blocks may be evenly distributed.
  • a sufficient IDLE code block margin (more than 200 parts per million (ppm)) can be reserved between the data units of the second code block stream to support the Ethernet link rate difference in extreme cases (+/-100 ppm); accordingly, there is an upper limit on the number of code blocks in one data unit of the second code block stream and on the total number of bits of the payload area included in one data unit, and it is recommended to take the maximum value allowed by that upper limit.
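  • the reserved IDLE margin can be sanity-checked with simple arithmetic; a minimal sketch, assuming a data unit of a given number of code blocks with a given number of IDLE code blocks between units (the numbers below are only the 1-header + 33-data + 1-IDLE example used later in the text).

```python
def idle_margin_ppm(idle_blocks: int, blocks_per_unit: int) -> float:
    """Proportion of IDLE code blocks, expressed in parts per million."""
    return idle_blocks / blocks_per_unit * 1_000_000

# Example from the text: 1 header + 33 data + 1 IDLE = 35 code blocks per unit.
margin = idle_margin_ppm(idle_blocks=1, blocks_per_unit=35)
print(f"{margin:.0f} ppm")  # ~28571 ppm, i.e. a proportion of 1/35
print(margin > 200)         # comfortably above the 200 ppm margin
```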
  • a number of idle code blocks are added between the data units of the second code block stream, so that the IDLE addition and deletion of the second code block stream may be supported, so that the second code block stream is adapted to the rate difference of the pipeline.
  • the rate difference of the pipeline may be 100 ppm, so that when the bandwidth of the pipeline carrying the second code block stream is small, the rate adaptation may be implemented by deleting the IDLE code block between the data units in the second code block stream.
  • one data unit of the second code block stream includes a header code block, 33 data code blocks, and an IDLE code block.
  • in this case the IDLE code block accounts for a proportion of 1/35, which is much larger than 100 ppm (one ten-thousandth), so optionally some IDLE code blocks can be replaced with Operation Administration and Maintenance (OAM) code blocks.
  • OAM Operation Administration and Maintenance
  • the structure of the OAM code block may be the structural form of the code block shown in FIG.
  • Such a code block may be used to carry indication information (the indication information may be any one or any of slot allocation indication information, multiplexing indication information, and identification indication information).
  • the bits corresponding to the code blocks in the first code block streams are correspondingly carried in the second code block stream.
  • an agreement may be made between the first communication device on the multiplexing side and the second communication device on the demultiplexing side, so that the second communication device on the demultiplexing side demultiplexes the Q first code block streams from the second code block stream according to the agreement.
  • the second code block stream further includes identifier indication information corresponding to the code block; The identifier indication information is used to indicate a first code block stream corresponding to the code block.
  • in this way, the demultiplexing side can determine, for each code block taken from the Q first code block streams and carried in the second code block stream, the first code block stream to which that code block corresponds, thereby demultiplexing each of the first code block streams.
  • the identifier indication information corresponding to one code block taken from a first code block stream of the Q first code block streams may be the identifier of the first code block stream corresponding to that code block, or other information indicating the correspondence, such as the location information of the code block in the second code block stream together with the identifier of the first code block stream.
  • a possible data transmission mode is provided in the embodiment of the present application, so that the second communication device on the demultiplexing side can determine, according to this mode, the first code block stream corresponding to each code block taken from the Q first code block streams and carried in the second code block stream, thereby demultiplexing each of the first code block streams.
  • specifically, slot division is performed first, and there is a sorting relationship between all the slots; then at least one slot is allocated to each first code block stream in the Q first code block streams;
  • the code blocks in the Q first code block streams are then subjected to code-block-based time division multiplexing to obtain a code block sequence to be processed, and the bits corresponding to the code block sequence to be processed are placed into the second
  • code block stream to be sent; wherein each first code block stream in the Q first code block streams corresponds to at least one time slot, and the order of the code blocks included in the code block sequence to be processed matches the sorting of the time slots corresponding to those code blocks.
  • all the time slots divided in the embodiment of the present application may be only partially allocated to the Q first code block streams, or may all be allocated to the Q first code block streams. For example, 32 time slots are divided and there are 2 first code block streams; three of the 32 time slots can be allocated to the two first code block streams, and the remaining 29 time slots are not allocated to
  • the first code block streams and may, for example, be used to carry other code blocks, such as IDLE code blocks or the OAM code blocks described above.
  • the network interface in the embodiment of the present application may perform time slot division, and one or more of the divided time slots constitute a pipeline to carry the code block flow.
  • the division of the interface time slot can be flexibly configured in combination with a specific application scenario.
  • a time slot division scheme is provided.
  • the following content in the embodiment of the present application is described by taking the FlexE technology as an example.
  • the FlexE interface is described by taking 64B/66B encoding as an example.
  • FlexE draws on the Synchronous Digital Hierarchy (SDH)/Optical Transport Network (OTN) technology to construct a fixed frame format for physical interface transmission and time division multiplexing (Time Division Multiplexing (TDM)) Different from SDH/OTN, FlexE's TDM slot division granularity can be 66 bits, interspersed with 66 bits between slots, and one 66 bits can carry one 64B/66B code block.
  • Figure 15 A schematic diagram of a structure of a FlexE frame provided by an embodiment of the present application is shown. As shown in FIG. 15, a FlexE frame may include 8 rows, and the location of the first code block in each row is an area carrying FlexE overhead (bearing FlexE). The area of the overhead may also be referred to as a frame header area.
  • the code block carried by the area for carrying the FlexE overhead may be referred to as an overhead code block.
  • there is 1 overhead code block per row, and the 8 overhead code blocks included in the 8 rows constitute a FlexE overhead frame; 32 FlexE overhead frames form a FlexE overhead multiframe.
  • the area other than the FlexE overhead can be divided into TDM time slots.
  • taking 64B/66B coded code blocks as an example, when the area other than the overhead is divided for slot allocation at a granularity of 66 bits, each row corresponds to a bearing space of 20*1023 blocks of 66 bits, and the interface may be divided into 20 slots.
  • the bandwidth of the interface and the number of time slots can be combined to determine the corresponding bandwidth of the single time slot.
  • for example, the 100GE bandwidth is 100 Gbps (1 Gbps = 1000 megabits per second),
  • and the bandwidth of each time slot can be approximately the 100 Gbps bandwidth divided by 20, that is, approximately 5 Gbps.
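  • a minimal sketch of the per-slot bandwidth arithmetic described above (the values are the 100GE / 20-slot example; the helper name is illustrative).

```python
def slot_bandwidth_gbps(interface_bandwidth_gbps: float, num_slots: int) -> float:
    """Approximate bandwidth of a single slot: interface bandwidth / slot count."""
    return interface_bandwidth_gbps / num_slots

print(slot_bandwidth_gbps(100, 20))  # -> 5.0 Gbps per slot
```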
  • a FlexE Group can contain at least one interface, such as t 100Gbps interfaces, and the total number of slots in the FlexE Group as NNI is t*20.
  • optionally, different time slots may have different bandwidths. For example, one slot has a bandwidth of 5 Gbps, another slot has a bandwidth of 10 Gbps, and so on.
  • the manner of dividing the time slot and the manner of determining the bandwidth of each time slot are not limited in the embodiment of the present application.
  • a time slot is allocated for any code block stream, and may also be described as allocating time slots for the pipe carrying the code block stream.
  • the number of time slots allocated for the pipe may be determined according to the service bandwidth of the pipe carrying the code block flow and the bandwidth corresponding to each time slot.
  • the number of time slots allocated for the pipe may be determined according to the traffic rate of the pipe carrying the code block flow and the rate corresponding to each time slot.
  • any multiple time slots in all time slots of the FlexE Group may jointly carry one Ethernet logical port.
  • the bandwidth of a single time slot is 5 Gbps
  • the first code block stream with a bandwidth of 10 GE requires two time slots
  • the first code block stream with a bandwidth of 25 GE requires 5 time slots
  • a first code block stream with a bandwidth of 150 GE requires 30 time slots. If the 64B/66B encoding method is used, what is transmitted sequentially on the Ethernet logical port is still a 66-bit code block stream.
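  • the slot counts quoted above follow from dividing the client bandwidth by the 5 Gbps slot bandwidth and rounding up; a sketch (the helper name is illustrative) that also checks that the configured total slot bandwidth is not less than the effective bandwidth, as required below.

```python
import math

def slots_required(client_bandwidth_gbps: float, slot_bandwidth_gbps: float = 5.0) -> int:
    """Smallest number of slots whose total bandwidth covers the client bandwidth."""
    return math.ceil(client_bandwidth_gbps / slot_bandwidth_gbps)

for bw in (10, 25, 150):
    n = slots_required(bw)
    total = n * 5.0
    assert total >= bw  # total slot bandwidth must not fall below the effective bandwidth
    print(f"{bw}GE client -> {n} slots ({total} Gbps total)")
# 10GE -> 2 slots, 25GE -> 5 slots, 150GE -> 30 slots
```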
  • the total time slot bandwidth (eg, the product of the number of time slots and the bandwidth corresponding to the time slot having the same bandwidth) configured by one code block stream is not less than the effective bandwidth of the code block stream.
  • the effective bandwidth of the code block stream may be the total bandwidth occupied by the data block and the control class code block other than the idle code block of the code block stream. That is to say, the code block stream needs to include a certain reserved code block, such as idle (IDLE), etc., so that the code block stream can be adapted into the allocated time slot (or pipe) by adding or deleting the idle code block.
  • the total bandwidth of the time slot configured by one code block stream is not less than the effective bandwidth of the code block stream; or, optionally, the number of time slots configured by one code block stream is The product of the bandwidth corresponding to a single time slot is not less than the effective bandwidth of the code block stream.
  • each time slot in the divided time slot may be marked with an identifier, and there is a sort relationship between the divided time slots.
  • for example, the 20 time slots in FIG. 15 may be sequentially identified as time slot 1, time slot 2, ..., time slot 20, and so on.
  • the time slots allocated to a certain code block stream in the 20 time slots can be flexibly configured.
  • the allocation of 20 time slots can be identified according to the code block flow identifier to which the time slot belongs.
  • the allocated multiple time slots may be continuous or discontinuous, for example, may be one code.
  • the block stream allocates two slots, slot 0 and slot 1, and may also allocate two slots, slot 0 and slot 3, for the code block stream, which is not limited in this embodiment.
  • in this embodiment, for the first code block streams corresponding to the data units in the second code block stream, the total slot bandwidth configured for a first code block stream (for example, the product of the number of allocated slots and the bandwidth corresponding to a slot, the slots having the same bandwidth) is not less than the effective bandwidth of that first code block stream.
  • the effective bandwidth of the first code block stream may be the total bandwidth occupied by the data block and the control class code block of the first code block stream except the idle code block. That is to say, the first code block stream needs to include a certain reserved code block, such as idle (IDLE), etc., so that the code block stream can be adapted into the allocated time slot (or pipe) by adding or deleting the idle code block.
  • the total bandwidth of the time slot configured by the first code block stream is not less than the effective bandwidth of the first code block stream; or, optionally, a first code block stream configuration
  • the product of the number of time slots and the bandwidth corresponding to a single time slot is not less than the effective bandwidth of the first code block stream.
  • each time slot may be marked, and there may be a certain ordering among the divided time slots; for example, the 20 time slots in FIG. 15 may be sequentially identified as time slot 1, time slot 2, ..., time slot 20, and so on.
  • the time slots allocated to a certain code block stream among the 20 time slots can be flexibly configured. For example, the allocation of the 20 time slots can be identified according to the identifier of the first code block stream to which each time slot belongs.
  • the multiple time slots allocated may be continuous or discontinuous; for example, time slot 0 and time slot 1 may be allocated to a first code block stream, or time slot 0 and time slot 3 may be allocated to the first code block stream, which is not limited in this embodiment.
  • the total bandwidth of the time slot corresponding to the first code block stream may be determined according to the number of time slots corresponding to the first code block stream and the bandwidth allocated for each time slot corresponding to the first code block stream.
  • the total bandwidth of the time slot corresponding to the first code block stream may be the product of the number of time slots corresponding to the first code block stream and the bandwidth allocated for each time slot corresponding to the first code block stream.
  • IDLE's addition and deletion processing is an effective means to achieve rate adaptation.
  • in FlexE, each logical port can carry an Ethernet Media Access Control (MAC) packet data unit sequence stream.
  • each packet of the MAC packet data unit sequence stream has a start and an end,
  • and between packets is an Inter-Packet Gap (IPG).
  • Idle Idle
  • the MAC packet data unit sequence stream and the IDLE are generally processed after encoding and scrambling, such as the 8B/10B encoding used by 1GE; for 10GE, 25GE, 40GE, 50GE, 100GE, 200GE and 400GE, 64B
  • /66B encoding is generally used, and the encoded MAC packet data unit sequence stream and IDLE are converted into 64B/66B code blocks.
  • the coded code blocks may include a start code block corresponding to a MAC packet data unit (Start code block in English; the start code block may be an S code block), a data code block
  • (Data code block in English, which can be abbreviated as D code block),
  • an end code block (Termination code block in English; the end code block can be a T code block),
  • and an idle code block (IDLE code block in English, which can be abbreviated as I code block).
  • when the bandwidth is divided into 20 time slots, two time slots can carry a code block stream of 10GE bandwidth.
  • FlexE can perform FlexE Client rate adaptation through addition and deletion of IDLE code blocks. For example, if the bandwidth of the code block stream containing idle code blocks is 11GE, but its effective bandwidth is less than the 10G bandwidth of the two FlexE time slots (the two 5G time slots allocated for the first code block stream have a total bandwidth of 10G),
  • some of the IDLE code blocks of the code block stream may be deleted; when the bandwidth of the first code block stream is 9G and the total bandwidth of the time slots allocated for the code block stream is 10G, more IDLE code blocks may be added to the first code block stream.
  • the IDLE addition and deletion may operate directly on the code blocks, or may operate on the decoded service packet flow and IDLE.
  • the second code block stream needs to be configured with a certain number of idle code blocks in advance.
  • the IDLE may be added or deleted according to the bandwidth of the pipe carrying the second code block stream and the rate difference of the second code block stream.
  • the IDLE code block may be added or deleted for the IDLE code block between adjacent data units of the second code block stream to match the bandwidth of the second code block stream with the pipe carrying the second code block stream.
  • for example, when the rate of the second code block stream is less than the bandwidth of the pipe carrying the second code block stream, some IDLE code blocks may be added between the data units of the second code block stream; when the rate of the second code block stream is not less than the bandwidth of the pipe carrying the second code block stream, the IDLE code blocks pre-configured between the data units of the second code block stream may be deleted.
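  • a rough sketch of IDLE-based rate adaptation between data units, under the simplifying assumption that the stream is a list of data units and that IDLE code blocks are only inserted or removed between units (the representation is illustrative, not the on-wire format).

```python
IDLE = "IDLE"

def adapt_rate(units: list[list[str]], stream_rate: float, pipe_bandwidth: float,
               preconfigured_idles: int = 1) -> list[str]:
    """Flatten data units into a code block stream, adding or deleting IDLE
    code blocks between adjacent data units to match the pipe bandwidth.

    If the stream rate is below the pipe bandwidth, keep the pre-configured
    IDLEs between units; otherwise delete them.
    """
    idles_between_units = preconfigured_idles if stream_rate < pipe_bandwidth else 0
    out: list[str] = []
    for i, unit in enumerate(units):
        out.extend(unit)
        if i < len(units) - 1:
            out.extend([IDLE] * idles_between_units)
    return out

units = [["S", "D", "D", "T"], ["S", "D", "T"]]
print(adapt_rate(units, stream_rate=9.0, pipe_bandwidth=10.0))   # IDLE kept between units
print(adapt_rate(units, stream_rate=10.5, pipe_bandwidth=10.0))  # IDLE removed
```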
  • the correspondence between the time slots and the first code block streams in the second code block stream in the embodiment of the present application may be divided in advance and configured on the first communication device on the multiplexing side and the
  • second communication device on the demultiplexing side; it may also be sent by the multiplexing side to the demultiplexing side, or sent by the demultiplexing side to the multiplexing side, or a centralized server may determine the correspondence between the time slots and the first code block streams
  • and send the correspondence between the time slots and the first code block streams to the first communication device on the multiplexing side and the second communication device on the demultiplexing side.
  • the correspondence between the transmission slot and the first code block stream may be sent periodically.
  • the first preset code block of the second code block stream carries the time slot allocation indication information; the time slot allocation indication information is used to indicate the correspondence between the Q first code block flow and the time slot. . That is to say, the time slot allocation indication information is used to indicate the identifier of the time slot allocated by each of the first code block streams in the Q first code block stream.
  • FIG. 16 is a schematic structural diagram showing a second code block stream carrying time slot allocation indication information provided by an embodiment of the present application.
  • the header code block is an O code block
  • the structure of the O code block is shown in FIG.
  • the slot allocation indication information may be carried in the three available bytes D1 to D3 of the O code block of type 0x4B; for example, for a header code block whose block type is 0x4B and whose O code is 0xA, as shown in FIG. 16, each of the two bytes D2 and D3 of each such code block corresponds to the identifier of the first code block stream corresponding to one time slot.
  • for example, time slot 0 is used to carry the first first code block stream, so the D2 field of the corresponding
  • code block is filled with 0x08; in this example, time slot 1 and time slot 2 are allocated to and identified as the same first code block stream.
  • the code block or bit of the first code block stream is transmitted in the same order as it is transmitted in the second code block stream.
  • if a time slot is not allocated, it may be indicated by 0x00 or 0xFF.
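  • a sketch of how the per-slot stream identifiers could be packed into the D2/D3 bytes of successive header code blocks, following the example above (one byte per slot, 0x00 marking an unallocated slot); the packing helper and the stream identifier 0x09 are illustrative assumptions, not taken from the standard.

```python
UNALLOCATED = 0x00  # 0x00 (or 0xFF) marks a slot with no first code block stream

def pack_slot_allocation(slot_to_stream_id: dict[int, int], num_slots: int) -> list[tuple[int, int]]:
    """Return (D2, D3) byte pairs, one pair per header code block.

    Each byte carries the identifier of the first code block stream assigned
    to one slot, two slots per header code block, in slot order.
    """
    ids = [slot_to_stream_id.get(slot, UNALLOCATED) for slot in range(num_slots)]
    return [(ids[i], ids[i + 1]) for i in range(0, num_slots, 2)]

# Example: slot 0 carries stream 0x08; slots 1 and 2 carry the same stream (0x09 assumed).
pairs = pack_slot_allocation({0: 0x08, 1: 0x09, 2: 0x09}, num_slots=8)
print([(hex(a), hex(b)) for a, b in pairs])
```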
  • the time slot allocation indication information may also be transmitted on a code block between adjacent data units, such as a code block of a control type included in an adjacent data code block.
  • optionally, an integer number of code blocks of the first code block streams may be loaded into one data unit of the second code block stream (this form may also be described as boundary alignment, or as determining each time slot boundary and code block boundary by the data unit of the second code block stream); the number of first type data code blocks included in one data unit of the second code block stream used to carry the Q first code block streams may be determined in advance by calculation.
  • the embodiment of the present application provides a solution, where the number of first type data code blocks included in one of the at least one data unit included in the second code block stream is determined based on a common multiple of N1 and M2, and M2;
  • for example, the number of first type data code blocks included in one data unit is at least the quotient of a common multiple of N1 and M2 divided by M2, and the number of first type data code blocks may also be greater than that quotient.
  • optionally, the number of first type data code blocks included in one of the at least one data unit included in the second code block stream is determined according to the least common multiple of N1 and M2, and M2; for example, the number of first type data code blocks included in one data unit
  • is at least the quotient of the least common multiple of N1 and M2 divided by M2; when the number of first type data code blocks included in one data unit is greater than that quotient, the first type data code blocks may also carry bits of code blocks corresponding to time slots not allocated to any first code block stream; for example, if a time slot is not allocated, the first type data code blocks may carry, for that time slot,
  • the bits corresponding to a preset code block (for example, an IDLE code block or an Error code block).
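  • the counting rule above reduces to simple integer arithmetic; a minimal sketch, assuming a first code block stream coded with N1-bit code blocks and a second code block stream whose first type data code blocks each carry M2 payload bits (the helper names are illustrative).

```python
import math

def first_type_blocks_per_unit(n1: int, m2: int, multiple: int = 1) -> int:
    """Minimum (or a multiple of the minimum) number of first type data code
    blocks per data unit, based on the least common multiple of N1 and M2.
    """
    return math.lcm(n1, m2) // m2 * multiple

def code_blocks_carried(n1: int, m2: int, multiple: int = 1) -> int:
    """How many complete N1-bit code blocks those payload bits carry exactly."""
    return math.lcm(n1, m2) // n1 * multiple

print(first_type_blocks_per_unit(66, 64))  # 33 first type data code blocks
print(code_blocks_carried(66, 64))         # carrying 32 complete 66-bit code blocks
```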
  • the first type data code block defined in the embodiment of the present application may be a data code block that carries the code blocks corresponding to the time slots, and the second type data code block may be used to carry bits of other information
  • (such as any one or more of time slot allocation indication information, identifier indication information, and multiplexing indication information).
  • the location of the second type of data code block in one data unit may be fixed, or may be notified to the communication device on the multiplexing side and the communication device on the demultiplexing side after configuration.
  • the coding manner of the first code block stream and the coding manner of the second code block stream may be the same or different.
  • the following description takes an example in which both the first code block stream and the second code block stream are of the 64B/66B coding type.
  • FIG. 17 is a schematic structural diagram of a code block stream multiplexing according to an embodiment of the present application.
  • as shown in FIG. 17, the first code block stream 5201 and the first code block stream 5301 are multiplexed into a second code block stream 5401.
  • the pipe 5101 carrying the first code block stream 5201 and the pipe 5102 carrying the first code block stream 5301 in FIG. 17 are multiplexed into the pipe 5103 carrying the second code block stream 5401.
  • the pipeline carrying the first code block stream is referred to as a low-order pipeline
  • the pipeline carrying the second code block stream is referred to as a high-order pipeline
  • that is, two lower-order pipes (the pipe 5101 carrying the first code block stream 5201 and the pipe 5102 carrying the first code block stream 5301) are multiplexed into the higher-order pipe (the pipe 5103 carrying the second code block stream 5401).
  • the coding type of the first code block stream may be multiple, for example, it may be an M/N coding type or a non-M/N coding type.
  • in this example, the first code block stream is of the 64B/66B coding type.
  • a plurality of code blocks 5202 are included in the first code block stream 5201, and each code block 5202 includes a sync header area 5206 and an asynchronous head area 5207.
  • FIG. 18 is a schematic structural diagram showing a first code block stream provided by an embodiment of the present application. As shown in FIG. 17 and FIG. 18, a plurality of data units 5208 may be included in the first code block stream 5201, and one data unit 5208 is exemplarily shown in FIG. 18.
  • data unit 5208 can include a header code block 5202, one or more data code blocks 5203, and a tail end code block 5204. That is to say, the code block 5205 included in the first code block stream 5201 may be a control code block (such as a header code block 5202 and a tail code block 5204), or may be a data code block 5203 or an IDLE code block.
  • the code block of the first code block stream in the embodiment of the present application may also refer to a code block included between adjacent data units of the first code block stream, such as included between adjacent data units of the first code block stream. IDLE code block.
  • the sync header area 5206 of the code block 5205 can carry type indication information of the code block.
  • the type indication information of the code block carried by the synchronization header area 5206 in the code block 5205 may be 01, indicating that the code block 5205 is a data code block; when the code block 5205 is the header code block 5202 or the tail code block 5204, the type indication information of the code block carried by the synchronization header area 5206 in the code block 5205 may be 10, indicating that the code block 5205 is a control code block.
  • FIG. 17 a plurality of code blocks 5302 are included in the first code block stream 5301, and each code block 5302 includes a sync header area 5306 and an asynchronous header area 5307.
  • FIG. 18 exemplarily shows a structural implementation of a first code block stream.
  • a plurality of data units 5308 may be included in the first code block stream 5301, and only one data unit 5308 is exemplarily shown in FIG. 18.
  • data unit 5308 can include a header code block 5302, one or more data code blocks 5303, and a tail end code block 5304.
  • the code block 5305 included in the first code block stream 5301 may be a control code block (such as a header code block 5302 and a tail code block 5304), or may be a data code block 5303 or an IDLE code block.
  • the code block of the first code block stream in the embodiment of the present application may also refer to a code block included between adjacent data units of the first code block stream, such as included between adjacent data units of the first code block stream. IDLE code block.
  • the sync header area 5306 of the code block 5305 can carry type indication information of the code block.
  • the type indication information of the code block carried by the synchronization header area 5306 in the code block 5305 may be 01, indicating that the code block 5305 is a data code block; for example, when the code block 5305 is the header code block 5302 or the tail code block 5304, the type indication information of the code block carried by the synchronization header area 5306 in the code block 5305 may be 10 for indicating that the code block 5305 is a control code block.
  • the first code block stream 5201 is assigned a time slot (English can be written as a slot) 0, and the first code block stream 5301 is assigned a time slot 1 and a time slot 2.
  • a total of 32 time slots are divided, and the remaining time slots 4 to 31 are not allocated.
  • Unallocated time slots can be filled with fixed pattern code blocks. For example, for a 64/66b code block, it may be filled with an IDLE code block, an Error code block, or other defined pattern code blocks that define a code block.
  • FIG. 18 exemplarily shows a schematic structural diagram of a code block extracted from a first code block stream according to a correspondence relationship between a time slot and a first code block stream, and as shown in FIG. 18, the order of time slot 0 to time slot 31 is based on The identifiers of the time slots are sorted, and the identifiers of the time slots are 0 to 31. Therefore, the first communication device sequentially acquires the code blocks corresponding to the time slot 0 to the time slot 31 according to the order of the time slot 0 to the time slot 31, as shown in FIG.
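  • a minimal sketch of the slot-ordered, code-block-based time division multiplexing described above: in each round, one code block is taken per slot in slot order, either from the first code block stream assigned to that slot or, for unassigned slots, as a fixed-pattern filler (the data structures, names and 4-slot count are illustrative, not the 32-slot example itself).

```python
from collections import deque

FILLER = "IDLE"  # fixed-pattern code block for unallocated slots

def tdm_round(slot_to_stream: dict[int, deque], num_slots: int) -> list[str]:
    """Produce one to-be-processed code block per slot, in slot order 0..num_slots-1."""
    sequence = []
    for slot in range(num_slots):
        stream = slot_to_stream.get(slot)
        sequence.append(stream.popleft() if stream else FILLER)
    return sequence

stream_5201 = deque(["A0", "A1", "A2"])        # assigned slot 0
stream_5301 = deque(["B0", "B1", "B2", "B3"])  # assigned slots 1 and 2
slots = {0: stream_5201, 1: stream_5301, 2: stream_5301}
print(tdm_round(slots, num_slots=4))  # ['A0', 'B0', 'B1', 'IDLE']
```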
  • FIG. 19 is a schematic structural diagram showing a second code block stream provided by an embodiment of the present application.
  • the second code block stream 5401 entering the pipe 5103 carrying the second code block stream 5401 may include one or more data units 5408; a schematic structural diagram of one data unit 5408 is exemplarily shown in FIG. 19.
  • data unit 5408 can include a plurality of code blocks 5405, which can include a sync header area 5406 and an asynchronous header area 5407.
  • data unit 5408 can include a header code block 5402, one or more data code blocks 5403, and a tail end code block 5404.
  • the code block 5405 included in the second code block stream 5401 may be a control code block (such as a header code block 5402 and a tail code block 5404), or may be a data code block 5403.
  • the sync header area 5406 of the code block 5405 can carry type indication information of the code block.
  • the type indication information of the code block carried by the synchronization header area 5406 in the code block 5405 may be 01, indicating that the code block 5405 is a data code block; for example, when the code block 5405 is the header code block 5402 or the tail code block 5404, the type indication information of the code block carried by the synchronization header area 5406 in the code block 5405 may be 10 for indicating that the code block 5405 is a control code block.
  • the code block corresponding to each time slot that is taken out or generated is placed in the payload area of the second code block stream, and can be placed in the first code block, the last code block, and the first type. a payload area of any one or more of the data code block and the second type of data code block.
  • the code block corresponding to each time slot taken or generated is placed into the second code block stream.
  • the following takes placing them into the payload area of the first type data code blocks as an example.
  • the number of data code blocks included in one data unit of the second code block stream in the embodiment of the present application may be flexibly determined, and the first code block stream and the second code block stream are both 64B/66B codes as an example.
  • suppose the number of first type data code blocks included in one data unit of the second code block stream for carrying the code blocks corresponding to all time slots is Hb, where the payload area of one first type data code block carries H bits
  • and the total number of bits of the payload areas of the Hb first type data code blocks is Hp;
  • the code blocks corresponding to the time slots are carried in some or all Hlcm of those bits, where Hlcm is less than or equal to Hp.
  • the TDM time slot is divided into a plurality of low-order time slot particles, and combinations of the divided time slot particles are used as low-order pipes (a low-order pipe is a pipe carrying a first code block stream);
  • such a pipe is configured to carry the 64B/66B code blocks in the first code block stream, or code blocks obtained by compressing the code blocks in the first code block stream.
  • the TDM time slot division of the Hlcm bits is equivalent to the TDM time slot division of the to-be-processed code block sequence obtained after step 4101.
  • for the high-order bearer pipe (the high-order pipe is the pipe carrying the second code block stream), some or all Hlcm bits of the bits corresponding to the payload areas of the Hb first type data code blocks in one data unit of the second code block stream (the payload area of one first type data code block carries H bits,
  • and the total number of bits of the payload areas of the Hb first type data code blocks is Hp, with Hlcm less than or equal to Hp) correspond to g 66b particles, which can be divided into p time slots, where g is divisible by p, and both g and p are positive integers.
  • equivalently, for the high-order bearer pipe, the payload areas of the Hb first type data code blocks in one data unit of the second code block stream (the payload area of one first type data code block carries H bits) have some or all Hlcm of their corresponding bits used (the total number of bits of the payload areas of the Hb first type data code blocks is Hp, and Hlcm is less than or equal to Hp).
  • Hp corresponds to g1 M2/N2 bit payload particles
  • g1*N2 is the total number of bits of the payload area of all the first type of data code blocks in one data unit in the second code block stream.
  • alternatively, the Hlcm bits, i.e. g3*N3 bits, correspond to g3 M3/N3 bit blocks (for example, 512B/514B coded bit blocks), and the g3 M3/N3 code block particles equivalently correspond to g3*k 66b particles of the to-be-processed code block stream (for example, one 512B
  • /514B coded bit block is equivalent to four 66b particles); the equivalently corresponding code block stream is divided into p time slots, where the number of particles is divisible by p, and p is a positive integer.
  • the embodiment of the present application provides an option for determining the number of data code blocks (or the first type of data code blocks used to carry the first code block stream) included in one data unit in the second code block stream.
  • the first code block stream is in the M1/N1 bit coding mode
  • the second code block stream is in the M2/N2 bit coding mode
  • the compression process is not considered in this example. Since each code block in the first code block stream is N1 bits and needs to be loaded into the payload area of the second code block stream, and the payload area of a data code block of the second code block stream is M2 bits, a common multiple of N1 and M2 is calculated;
  • the number of data code blocks included in one data unit of the second code block stream may then be an integral multiple of the quotient of the common multiple of N1 and M2 divided by M2.
  • for example, the number of data code blocks included in one data unit of the second code block stream may be an integral multiple of the quotient of the least common multiple of N1 and M2 divided by M2.
  • the value of lcm(66, 64) is 2112, where lcm(66, 64) denotes the least common multiple of 66 and 64.
  • the number of data code blocks included in one data unit of the second code block stream may be an integer multiple of 33 (33 is the quotient of the common multiple 2112 of 66 and 64 divided by the 64 payload-area bits of a data code block of the second code block stream).
  • the 33 data code blocks of the second code block stream then carry the code blocks corresponding to 32 time slots (32 is the quotient of the common multiple 2112 of 66 and 64 divided by the 66 bits of one
  • code block of the first code block stream); when a time slot is allocated to a first code block stream, the code block corresponding to the time slot refers to the code block taken out from the first code block stream corresponding to the time slot; when a time slot is not allocated to a first code block stream, the code block corresponding to the time slot refers to a code block of a determined pattern.
  • the embodiment of the present application provides a possible implementation manner, in which the bits of the payload area of the data code block in one data unit in the second code block stream are calculated.
  • the number of bits of the payload areas of the data code blocks in one data unit of the second code block stream is 2112 (2112 is the product of the number of data code blocks, 33, included in one data unit of the second code block stream
  • and the 64 bits of the non-synchronization header area of each data code block).
  • when these 2112 bits are all used to carry code blocks of the first code block streams, they can carry up to 32 64B/66B code blocks, so 32 time slots can be divided, and the number of time slots can also be a value by which 32 is divisible, such as 16 time slots, 8 time slots, or 4 time slots.
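  • the worked numbers above can be verified directly; a short arithmetic check of the 2112-bit common multiple and of slot counts by which 32 is divisible (an assumption drawn from the 16/8/4 examples in the text).

```python
import math

lcm = math.lcm(66, 64)
assert lcm == 2112          # least common multiple of 66 and 64
assert lcm // 64 == 33      # 33 data code blocks of 64 payload bits each
assert lcm // 66 == 32      # carrying 32 complete 66-bit code blocks
assert 33 * 64 == 2112      # total payload bits in one data unit

# Slot counts that divide the 32 carried code blocks evenly (per the 16/8/4 examples).
print([p for p in (32, 16, 8, 4) if 32 % p == 0])
```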
  • the total number of bits of the payload areas of all the first type data code blocks in one data unit of the second code block stream may also not be constrained by the above common multiple relationship; for example, in the above example,
  • the total number of bits of the payload areas of all the first type data code blocks included in one data unit of the second code block stream may be greater than 2112 bits, so that when 2112 bits are used to carry the bits corresponding to the code blocks of the first code block streams, the redundant bits may be left unused or used to carry some other indication information.
  • when determining the number of bits of the payload areas of all data code blocks (including all first type data code blocks and all second type data code blocks) included in one data unit of the second code block stream, both transmission efficiency and reserved IDLE may be considered: the larger the total number of payload bits of all data code blocks of one data unit in the second code block stream, the longer the data unit and the lower the relative overhead.
  • for example, the code block 5205 is encoded with the 64B/66B encoding type, so the total number of bits of the code block 5205 is 66 bits, and the number of bits of the non-synchronization header area 5407 of one data code block 5403 of the second code block stream 5401 is 64 bits; therefore, one data code block 5403 of the second code block stream carries the first 64 bits of the code block 5205 corresponding to slot 0, and another data code block 5403 of the second code block stream carries the remaining 2 bits of the code block 5205 corresponding to slot 0.
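  • a minimal sketch of this bit packing: the 66 bits of each extracted code block are written consecutively into successive 64-bit payload areas, so one payload area may end mid-code-block and the next continues it (uncompressed case; the representation and tail padding are illustrative assumptions).

```python
def pack_into_payload_areas(code_blocks_66b: list[str], payload_bits: int = 64) -> list[str]:
    """Concatenate 66-bit code blocks and cut the bit string into 64-bit payload areas."""
    bitstream = "".join(code_blocks_66b)
    # Pad the tail so the last payload area is complete (padding is illustrative).
    if len(bitstream) % payload_bits:
        bitstream += "0" * (payload_bits - len(bitstream) % payload_bits)
    return [bitstream[i:i + payload_bits] for i in range(0, len(bitstream), payload_bits)]

blocks = ["01" + "1" * 64, "10" + "0" * 64]  # two 66-bit code blocks
areas = pack_into_payload_areas(blocks)
print(len(areas))    # 3 payload areas (64 + 64 + 4 used bits)
print(areas[1][:2])  # last 2 bits of the first code block spill into the second area
```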
  • the embodiment of the present application provides another optional data transmission scheme.
  • in this scheme, placing the bits corresponding to the code block sequence to be processed into the second code block stream to be sent includes: compressing every R consecutive code blocks in the code block sequence to be processed to obtain a compressed code block sequence, wherein R is a positive integer; and placing the bits corresponding to the compressed code block sequence into the second code block stream to be sent.
  • FIG. 20 is a schematic structural diagram of another second code block stream provided by an embodiment of the present application. As shown in FIG. 20, FIG. 20 is an improvement made on the basis of FIG. 19: the sequence of code blocks corresponding to the respective time slots is called the code block sequence to be processed, the code block sequence to be processed is compressed to obtain a compressed code block sequence, and the compressed code block sequence is then placed into the second code block stream. Optionally, the compressed code block sequence may be placed into the payload areas of the first type of data code blocks of the second code block stream.
  • The synchronization header area and the non-synchronization header area of one code block of the first code block stream may be placed, as corresponding bits, consecutively into the payload area of the second code block stream. If the code block sequence to be processed is put into the second code block stream directly without compression, this means that, for every code block in the code block sequence to be processed, all the bits of its synchronization header area and non-synchronization header area are placed consecutively into the second code block stream. If the code block sequence to be processed is compressed and then put into the second code block stream, this means that, for every code block of the first code block streams represented in the compressed code block sequence, the bits corresponding to that code block in the compressed code block sequence are placed consecutively into the second code block stream.
  • The following takes as an example a code block of the compressed code block sequence obtained by compressing the code block sequence to be processed. If the code blocks of the code block sequence to be processed are put into the second code block stream directly without compression, the handling of one code block of the code block sequence to be processed is similar to the handling, in this example, of one code block of the compressed code block sequence. In this example, illustrated in conjunction with FIG. 20, all bits of the code block 5205 corresponding to slot 0 in the compressed code block sequence (for example, if the code block includes a synchronization header area and a non-synchronization header area, all the bits of the code block refer to all the bits of its synchronization header area and non-synchronization header area) are placed consecutively into the payload areas of the first type of data code blocks of the second code block stream. In other words, considering only the payload areas of all the first type of data code blocks in one data unit of the second code block stream, that is, considering only the payload-area sequence formed by the sequence of first type of data code blocks included in one data unit of the second code block stream, all the bits of all the code blocks corresponding to the 32 time slots included in the compressed code block sequence occupy one or more of the payload areas in that payload-area sequence.
  • Some other code blocks, such as a control code block or a second type of data code block, may appear between two adjacent first type of data code blocks included in one data unit of the second code block stream; that is to say, the payload-area sequence of the sequence of first type of data code blocks does not include the payload areas of code blocks other than the first type of data code blocks. The above description takes placing the bits into the payload areas of the first type of data code blocks as an example; the bits corresponding to the code block sequence to be processed may also be put into the header code block and the tail code block. In that case, the above payload-area sequence can be described as the payload-area sequence formed by the payload areas of all code blocks, included in one data unit of the second code block stream, that are used to carry the bits corresponding to the code block sequence to be processed.
  • When a code block is compressed, the time slot corresponding to each bit remains the same as the time slot corresponding to that bit in the code block sequence to be processed. For example, if the code block sequence to be processed is 64B/66B coded and the compressed code block sequence is 64B/65B coded, and a 64B/66B code block in the code block sequence to be processed corresponds to time slot 2, then the 64B/65B code block corresponding to that code block in the compressed code block sequence also corresponds to time slot 2; in other words, slot 2 corresponds to a 64B/66B code block in the code block sequence to be processed and to a 64B/65B code block in the compressed code block sequence. Optionally, each code block in the code block sequence to be processed may be compressed separately, for example by compressing the synchronization header area of each code block from 2 bits to 1 bit: "10" is compressed to "1" and "01" is compressed to "0". In this way, if the coding of the code block sequence to be processed is 64B/66B, the coded form of the compressed code block sequence becomes 64B/65B.
  • A code block whose synchronization header area is "10" is a code block of the control type. The type field values of the currently widely used control-type code blocks include 0x1E, 0x2D, 0x33, 0x4B, 0x55, 0x66, 0x78, 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1 and 0xFF; other values such as 0x00 are left unused. The type field of a code block occupies 1 byte, so the type field of a control-type code block can be compressed from 8 bits to 4 bits, for example "0x1E" is compressed to "0x1" and "0x2D" is compressed to "0x2", and so on.
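A minimal sketch of the per-block synchronization-header compression described above, assuming a code block is represented as a bit string with a 2-bit synchronization header followed by 64 further bits (the representation is an assumption of this sketch, not part of the original disclosure; the separate type-field nibble compression is not reproduced here):

```python
def compress_66_to_65(block66: str) -> str:
    """Compress one 64B/66B code block (bit string) into a 64B/65B code block."""
    sync, rest = block66[:2], block66[2:]
    if sync == "01":          # data code block: header '01' -> '0'
        return "0" + rest
    if sync == "10":          # control code block: header '10' -> '1'
        return "1" + rest
    raise ValueError("invalid synchronization header")

data_block = "01" + "1" * 64
ctrl_block = "10" + format(0x1E, "08b") + "0" * 56
print(len(compress_66_to_65(data_block)), len(compress_66_to_65(ctrl_block)))  # 65 65
```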
  • FIG. 21 is a schematic diagram of a compression processing manner provided by an embodiment of the present application. As shown in FIG. 21, the first bit of the 256B/257B code block being 1 indicates that the 256B/257B code block does not include any control-type code block from the code block sequence to be processed, i.e. all four 64B/66B code blocks taken from the code block sequence to be processed are of the data type, so the total of 8 synchronization-header bits of the four 64B/66B code blocks of the code block sequence to be processed can be compressed into one bit.
  • FIG. 22 is a schematic diagram of a compression processing manner provided by an embodiment of the present application. As shown in FIG. 22, the first bit of the 256B/257B code block being 0 indicates that the 256B/257B code block includes at least one control-type code block from the code block sequence to be processed. In that case, 4 bits of the type field of the first 64B/66B code block included in the 256B/257B code block may be used to indicate, in order, the types of the four 64B/66B code blocks from the code block sequence to be processed included in the 256B/257B code block. For example, if the four 64B/66B code blocks from the code block sequence to be processed included in the 256B/257B code block are all of the control type, the 4 bits may be "0000" in order. In this way, the synchronization header areas of the four 64B/66B code blocks from the code block sequence to be processed included in the 256B/257B code block can be compressed; that is, the 4-bit type-field space saved for a code block can be used as a combined, ordered type identification of the plurality of code blocks, as sketched below.
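A deliberately simplified illustration of packing four 64B/66B code blocks into 257 bits; only the all-data case is worked out, and the handling of the mixed data/control case (which reuses the type-field bits as noted above) is omitted. This is a sketch under those assumptions, not the exact transcoding rule of the embodiment:

```python
def pack_4_blocks(blocks66):
    """Simplified illustration: pack four 64B/66B code blocks into one 257-bit block.

    Leading bit 1 -> all four blocks are data blocks: 1 + 4*64 payload bits = 257 bits.
    Leading bit 0 -> at least one control block; the scheme described above then reuses
    4 type-field bits as an ordered per-block type flag (not reproduced in this sketch).
    """
    assert len(blocks66) == 4 and all(len(b) == 66 for b in blocks66)
    if all(b[:2] == "01" for b in blocks66):            # all data blocks
        return "1" + "".join(b[2:] for b in blocks66)   # 257 bits
    raise NotImplementedError("mixed data/control packing omitted from this sketch")

print(len(pack_4_blocks(["01" + "0" * 64] * 4)))        # 257
```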
  • When consecutive R code blocks in the code block sequence to be processed are compressed, if R is greater than 1, the consecutive R code blocks include at least two code blocks, and the two first code block streams from which two of those code blocks are taken are two different first code block streams. For example, R is 4; therefore, when 4 consecutive code blocks in the code block sequence to be processed are compressed, there are at least two code blocks among the 4 consecutive code blocks whose corresponding first code block streams are different. For example, the first code block stream corresponding to one of the code blocks is the first code block stream 5201 in FIG. 18 above, and the first code block stream corresponding to another code block is the first code block stream 5301 in FIG. 18 above.
  • The number of the first type of data code blocks included in one data unit of the second code block stream is not limited in the embodiment of the present application and may be determined according to the actual situation. If the code block sequence to be processed is compressed, then in order to achieve alignment of the second code block stream with the compressed code block sequence (that is, so that one data unit of the second code block stream can carry an integer number of code blocks of the compressed code block sequence, or so that the time slot boundaries and code block boundaries can be determined from the data unit of the second code block stream), the number of the first type of data code blocks included in one data unit of the second code block stream needs to be calculated from the coded form of the compressed code block sequence. The specific calculation method is to replace, in the above calculation method, the parameters of the coded form of the code block sequence to be processed with the parameters of the coded form of the compressed code block sequence. For example, the coded form of the compressed code block sequence is M3/N3, where M3 is a positive integer and N3 is an integer not less than M3.
  • The embodiment of the present application provides a solution in which the number of the first type of data code blocks included in one of the at least one data unit included in the second code block stream is determined according to a common multiple of N3 and M2 and according to M2; for example, the number of first type of data code blocks included in one data unit is at least the quotient of a common multiple of N3 and M2 and M2. The number of the first type of data code blocks may be greater than that quotient, and the number of first type of data code blocks in one data unit may be an integer multiple of the quotient of the common multiple of N3 and M2 and M2. Alternatively, the number of the first type of data code blocks included in one of the at least one data unit included in the second code block stream is determined according to the least common multiple of N3 and M2 and according to M2; for example, the number of the first type of data code blocks included in one data unit is at least the quotient of the least common multiple of N3 and M2 and M2, it may be greater than that quotient, and it may be an integer multiple of the quotient of the least common multiple of N3 and M2 and M2.
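To make the relationship concrete, a small calculation under assumed values (N3 = 65 for a 64B/65B compressed sequence and M2 = 64 payload bits, following the examples in this description):

```python
from math import lcm

def first_type_blocks_per_unit(N3: int, M2: int) -> tuple[int, int]:
    """Minimum first-type data code blocks per data unit, and carried code blocks."""
    common = lcm(N3, M2)
    return common // M2, common // N3

print(first_type_blocks_per_unit(65, 64))  # (65, 64): 65 data code blocks carry 64 compressed blocks
print(first_type_blocks_per_unit(66, 64))  # (33, 32): the uncompressed 64B/66B case above
```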
  • The first type of data code block defined among the data code blocks in the embodiment of the present application may be a data code block that carries the code blocks corresponding to the respective time slots, and the second type of data code block may be used to carry bits of other information (such as any one or more of the time slot allocation indication information, the identification indication information, and the multiplexing indication information). The location of the second type of data code block within one data unit may be fixed, or may be notified, after configuration, to the first communication device on the multiplexing side and the second communication device on the demultiplexing side.
  • Multiplexing indication information may be carried in the second code block stream, where the multiplexing indication information is used to indicate that multiplexed code blocks are carried in the data unit, that is, that the receiving side needs to perform a demultiplexing operation after receiving the code blocks in the data unit. The multiplexing indication information may be carried in a data unit of the second code block stream, for example in any one or more of the header code block, the second type of data code block, and the tail code block; in that case, the multiplexing indication information may indicate only that the data unit containing the multiplexing indication information carries multiplexed code blocks. Alternatively, the multiplexing indication information may be carried in a code block between adjacent data units; for example, an O code block may be configured between adjacent data units, and the multiplexing indication information may be carried in the payload area of the O code block. After the multiplexing indication information is received, it may be determined that the data units received after the multiplexing indication information carry multiplexed code blocks and need to be demultiplexed, until non-multiplexing indication information is received; the non-multiplexing indication information may indicate that the code blocks carried by the data units following it do not need to be demultiplexed.
  • Each third data stream of Q third data streams may be transcoded, converting each third data stream into a first code block stream encoded with M1/N1 bit coding. The third data stream may be, for example, a Synchronous Digital Hierarchy (SDH) service signal. Service mapping processing may be performed, for example by encapsulating the third data stream into the payload of the data units of the first code block stream and adding the necessary encapsulation overhead, OAM code blocks, and idle code blocks, to obtain the first code block stream corresponding to the third data stream; adding preset idle code blocks to the first code block stream enables the first code block stream to be adapted to the corresponding pipe rate by adding and deleting idle code blocks. For example, the 8-byte service signal D0-D7 of the SDH service may be mapped into the payload area of one 64B/66B data code block and the synchronization header '01' added, thereby converting the 8-byte service signal D0-D7 into the form of a 64B/66B code block.
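A minimal sketch of this byte-to-code-block mapping (the bit-string representation is chosen here only for illustration):

```python
def sdh_bytes_to_66b_block(d: bytes) -> str:
    """Map 8 service bytes (D0..D7) into one 64B/66B data code block."""
    assert len(d) == 8
    payload = "".join(format(byte, "08b") for byte in d)   # 64 payload bits
    return "01" + payload                                   # data-block sync header '01'

block = sdh_bytes_to_66b_block(bytes(range(8)))
print(len(block), block[:2])   # 66 '01'
```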
  • X-Ethernet/FlexE allocates time slots with a granularity of 5 Gbps to the second code block stream, that is, the time slot bandwidth (which can also be called the time slot rate) is 5 Gbps. The structure of one data unit in the second code block stream is [1 header code block (the header code block can also be called an overhead code block) + 33 data code blocks + 1 idle code block], i.e. the payload areas of 33 64B/66B data code blocks completely carry 32 64B/66B code blocks (a 64B/66B code block may be a header code block, a tail code block, or a data code block); if compression processing is performed, the 32 64B/66B code blocks form the compressed code block sequence.
  • Here 5G is the nominal rate of one time slot, that is, the bit rate of the 64B/66B coded stream excluding the synchronization headers.
  • the native rate bandwidth of SDH STM-1 is 155.52 Mbps.
  • The operating clock frequency of a device may deviate by some tens of ppm, depending on the specific service signal, for example ±100 ppm or ±20 ppm; for instance, the looser Ethernet frequency deviation of ±100 ppm may be used. The allowable frequency offset of the optical transport network (OTN) is ±20 ppm, and the allowable frequency offset of the Synchronous Digital Hierarchy (SDH) is smaller than the previous two, being ±4.6 ppm in synchronous operation. Using the loose ±100 ppm deviation, the maximum encapsulated bandwidth of SDH STM-1 is 160.7096177 Mbps (+100 ppm on a nominal encapsulated bandwidth of 160.6935484 Mbps). The low-order bearer pipeline has a bandwidth of 160.9579176 Mbps (160.9418218 Mbps at -100 ppm), which is still greater than the 160.7096177 Mbps maximum encapsulated bandwidth, i.e. the pipeline remains larger even when the low-order bearer pipeline rate is 100 ppm low and the service signal is 100 ppm high. Therefore, after the SDH STM-1 service signal is encapsulated as described above, the encapsulated SDH STM-1 signal can be carried in the low-order pipeline, with the padding effect of the idle code blocks absorbing the rate difference.
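The worst-case comparison above can be expressed as a short check; the figures are taken from the example in this description, and the simple multiplicative ppm model is an assumption of this sketch:

```python
def with_ppm(rate_mbps: float, ppm: float) -> float:
    """Apply a clock deviation expressed in parts per million to a nominal rate."""
    return rate_mbps * (1 + ppm * 1e-6)

service_nominal = 160.6935484    # encapsulated SDH STM-1, nominal
pipe_nominal = 160.9579176       # low-order bearer pipeline, nominal

worst_service = with_ppm(service_nominal, +100)   # ~160.7096 Mbps
worst_pipe = with_ppm(pipe_nominal, -100)         # ~160.9418 Mbps
print(worst_pipe > worst_service)                 # True: idle padding absorbs the gap
```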
  • For example, one 5G time slot corresponds to one X-Ethernet high-order pipeline, which can be divided into 31 time slots, each corresponding to one low-order pipeline, and each able to carry one SDH STM-1 service after encapsulation. Since STM-N has N times the rate of STM-1, service signals such as STM-4 and STM-16 can be transparently encapsulated in the same way and carried in a low-order bearer pipeline formed by N of the above time slots. An OTN signal is handled similarly to an SDH signal, only at a different rate. By allocating an appropriate number of time slots, the bandwidth of the low-order bearer pipeline is always greater than or equal to the bandwidth of the encapsulated service signal, and rate-filling adaptation is implemented by the idle addition and deletion operation.
  • FIG. 23 is a schematic flowchart diagram of a data transmission method provided by an embodiment of the present application. As shown in FIG. 23, the method includes:
  • Step 7201: Receive a second code block stream, where the bits corresponding to code blocks in Q first code block streams are carried in the payload areas of code blocks in the second code block stream, and Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; the coding type of the second code block stream is M2/N2 bit coding, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2.
  • Step 7202: Demultiplex the Q first code block streams.
  • The bits corresponding to the code blocks of the Q first code block streams carried by the second code block stream may be extracted from the second code block stream, and the first code block stream corresponding to each code block is then determined, thereby recovering each first code block stream.
  • If the method performed by the first communication device on the multiplexing side is as shown in FIG. 19 above, i.e. the code block sequence to be processed is not compressed, then in an optional implementation manner the bits corresponding to the code blocks of the Q first code block streams carried in the payload areas of the second code block stream are obtained to form the code block sequence to be decompressed, and the Q first code block streams are demultiplexed according to the code block sequence to be decompressed. Specifically, the code blocks corresponding to the respective time slots can be extracted from the payload areas of the first type of data code blocks of the second code block stream to obtain the code block sequence to be decompressed, and the code blocks of this sequence can then be associated, in order, with the respective time slots. For example, there are 32 time slots in total, and the second communication device on the demultiplexing side knows the locations of the first type of data code blocks carrying the code blocks corresponding to the time slots (these locations can be configured in advance, sent to the second communication device on the demultiplexing side by a centralized control unit or management unit, or sent by the first communication device on the multiplexing side to the second communication device on the demultiplexing side).
  • In one data unit of the second code block stream, the first code block corresponds to time slot 0, the second code block corresponds to time slot 1, the third code block corresponds to time slot 2, and so on in order; after the code block corresponding to time slot 31, the next code block is again determined to be the code block corresponding to time slot 0, and the code block after that to be the code block corresponding to time slot 1. The second communication device on the demultiplexing side acquires the identifiers of the time slots corresponding to each of the Q first code block streams, that is, it obtains the correspondence between the Q first code block streams and the time slots. For example, if a first code block stream is allocated time slot 0, the code blocks in the code block sequence to be decompressed that correspond to time slot 0 are determined to be code blocks of that first code block stream, and that first code block stream is thereby recovered.
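A minimal sketch of this slot-ordered demultiplexing, assuming the carried code blocks have already been extracted in order and the slot-to-stream mapping is known (all names are illustrative assumptions):

```python
from collections import defaultdict

def demultiplex_by_slot(carried_blocks, num_slots, slot_to_stream):
    """Assign the i-th carried code block to slot i % num_slots, then to its stream."""
    streams = defaultdict(list)
    for i, block in enumerate(carried_blocks):
        slot = i % num_slots
        stream_id = slot_to_stream.get(slot)   # None -> unallocated slot (pattern block)
        if stream_id is not None:
            streams[stream_id].append(block)
    return streams

blocks = [f"blk{i}" for i in range(64)]        # two data units' worth of 32-slot blocks
recovered = demultiplex_by_slot(blocks, 32, {0: "streamA", 1: "streamA", 2: "streamB"})
print(len(recovered["streamA"]), len(recovered["streamB"]))   # 4 2
```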
  • If the code block sequence to be processed was compressed, then in an optional implementation manner the code blocks corresponding to the respective time slots are taken from the payload areas of the first type of data code blocks of the second code block stream to obtain the code block sequence to be decompressed; the code block sequence to be decompressed is decompressed to obtain the code block sequence to be recovered; and the first code block stream corresponding to each code block in the code block sequence to be recovered is determined, thereby obtaining the Q first code block streams. Each of the Q first code block streams corresponds to at least one time slot, and the ordering of the code blocks included in the code block sequence to be recovered matches the ordering of the time slots corresponding to those code blocks. The code blocks of the code block sequence to be recovered can therefore be associated, in order, with the respective time slots. For example, there are 32 time slots in total, and the second communication device on the demultiplexing side knows the locations of the first type of data code blocks carrying the code blocks corresponding to the time slots.
  • The first code block corresponds to time slot 0, the second code block corresponds to time slot 1, the third code block corresponds to time slot 2, and so on in order; after the code block corresponding to time slot 31, the next code block is again determined to be the code block corresponding to time slot 0, and the code block after that to be the code block corresponding to time slot 1. The second communication device on the demultiplexing side acquires the identifiers of the time slots corresponding to each of the Q first code block streams, that is, it obtains the correspondence between the Q first code block streams and the time slots. For example, if a first code block stream is allocated time slot 0, the code blocks in the code block sequence to be recovered that correspond to time slot 0 are determined to be code blocks of that first code block stream, and that first code block stream is thereby recovered.
  • For example, the compressed code block sequence is 64B/65B coded and the code block sequence to be processed is 64B/66B coded. The second communication device on the demultiplexing side may obtain boundary information of a data unit of the second code block stream, such as the boundary of an idle code block of the second code block stream, the boundary of the header code block of a data unit (the header code block may also be referred to as an overhead code block), and the boundary information of the payload areas of the first type of data code blocks, so that, starting from the first bit of the first first type of data code block in a data unit of the second code block stream, one 64B/65B code block can be delimited every 65 bits. Each delimited 64B/65B code block is a code block of the code block sequence to be decompressed; each code block of the code block sequence to be decompressed can then be decompressed according to its first bit, thereby recovering the 64B/66B code blocks of the code block sequence to be recovered.
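A minimal sketch of the corresponding delimiting and decompression step (the inverse of the compression sketch above; the bit-string representation is again an illustrative assumption):

```python
def decompress_65_to_66(block65: str) -> str:
    """Expand one 64B/65B code block back into a 64B/66B code block."""
    lead, rest = block65[0], block65[1:]
    return ("01" if lead == "0" else "10") + rest

def delimit_and_decompress(payload_bits: str):
    """Delimit 65-bit code blocks from concatenated payload-area bits, then expand them."""
    return [decompress_65_to_66(payload_bits[i:i + 65])
            for i in range(0, len(payload_bits) // 65 * 65, 65)]

stream = ("0" + "1" * 64) + ("1" + "0" * 64)   # one data block, one control block
blocks = delimit_and_decompress(stream)
print([b[:2] for b in blocks])                  # ['01', '10']
```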
  • FIG. 24 is a schematic diagram of a data transmission structure provided by an embodiment of the present application. The first communication device 4304 is the multiplexing side and the second communication device 4306 is the demultiplexing side. The first communication device 4304 multiplexes the first code block stream 4301 and the first code block stream 4302 into a second code block stream 4303; the second code block stream passes through at least one intermediate node 4305 (two intermediate nodes 4305 are marked in the figure; a communication device between the first communication device on the multiplexing side and the second communication device on the demultiplexing side may be referred to as an intermediate node); and the second communication device 4306 demultiplexes the received second code block stream to obtain the first code block stream 4301 and the first code block stream 4302.
  • The solution provided by the embodiment of the present application solves the problem of multiplexing and transporting multiple service signals over a code-block-stream (64B/66B coded) based service signal; for example, multiple service signals are multiplexed into one 64B/66B service signal, and cross-connection and scheduling in the network are performed per 64B/66B service signal, which can simplify the network operation and maintenance and the data plane of the X-Ethernet and SPN technologies, thereby perfecting X-Ethernet and SPN so that these two technologies can be applied to backbone and long-distance networks. The solution provided by the embodiment of the present application further allows, on the devices at the ingress and egress of the second code block stream, at least two low-order pipelines carrying two first code block streams to be independently mapped into and demapped from the high-order pipeline carrying the second code block stream. An intermediate node (a communication device between the first communication device on the multiplexing side and the second communication device on the demultiplexing side may be referred to as an intermediate node) only needs to process and switch the high-order pipeline and does not need to process the low-order pipelines, so the number of pipes to be handled is converged and the cross-connection processing at the intermediate nodes is simplified. The multiplexing efficiency can be further improved by optionally transcoding and compressing the low-order pipeline signals. The solution is also compatible with existing networks and technologies, so that the multiplexed high-order pipeline can traverse existing network nodes and networks that support flat networking, giving good forward and backward compatibility.
  • The present application provides a communication device 8101 for performing the multiplexing-side method among the above methods. FIG. 25 is a schematic structural diagram of a communication device provided by the present application. The communication device 8101 includes a processor 8103, a transceiver 8102, a memory 8105, and a communication interface 8104; the processor 8103, the transceiver 8102, the memory 8105, and the communication interface 8104 are connected to each other through a bus 8106. The communication device 8101 in this example may be the first communication device in the above content and may perform the scheme corresponding to FIG. 7 above; the communication device 8101 may be the communication device 3105 in FIG. 4 and FIG. 5 described above, or may be the communication device 3107.
  • the bus 8106 can be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 25, but it does not mean that there is only one bus or one type of bus.
  • The memory 8105 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 8105 may also include a combination of the above types of memories.
  • the communication interface 8104 can be a wired communication access port, a wireless communication interface, or a combination thereof, wherein the wired communication interface can be, for example, an Ethernet interface.
  • the Ethernet interface can be an optical interface, an electrical interface, or a combination thereof.
  • the wireless communication interface can be a WLAN interface.
  • the processor 8103 can be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 8103 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • The memory 8105 can also be used to store program instructions, and the processor 8103, by invoking the program instructions stored in the memory 8105, can perform one or more of the steps in the embodiments shown in the above schemes, or optional implementation manners thereof, so that the communication device 8101 implements the functions of the communication device in the above methods. The processor 8103 is configured to execute the instructions stored in the memory and to control the transceiver 8102 to receive and send signals. When the processor 8103 executes the instructions stored in the memory, the processor 8103 in the communication device 8101 is configured to: acquire Q first code block streams, where Q is an integer greater than 1, the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; and place the bits corresponding to the code blocks in the Q first code block streams into the second code block stream to be sent, where the coding type of the second code block stream is M2/N2 bit coding, the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2. The transceiver 8102 is configured to send the second code block stream.
  • In an optional implementation manner, the processor 8103 is configured to perform code-block-based time division multiplexing on the code blocks in the Q first code block streams to obtain a code block sequence to be processed, where each of the Q first code block streams corresponds to at least one time slot and the ordering of the code blocks included in the code block sequence to be processed matches the ordering of the time slots corresponding to those code blocks, and to place the bits corresponding to the code block sequence to be processed into the second code block stream to be sent. In an optional implementation manner, the processor 8103 is configured to compress consecutive R code blocks in the code block sequence to be processed to obtain a compressed code block sequence, where R is a positive integer, and to place the bits corresponding to the compressed code block sequence into the second code block stream to be sent. In an optional implementation manner, the processor 8103 is further configured to perform, for a first code block stream in the Q first code block streams: performing idle (IDLE) code block addition and deletion processing on the first code block stream according to the bandwidth of the first code block stream and the total bandwidth of the time slots corresponding to the first code block stream, where the total bandwidth of the time slots corresponding to the first code block stream is determined based on the number of time slots corresponding to the first code block stream and the bandwidth allocated to each time slot corresponding to the first code block stream.
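A minimal sketch of this idle-based rate adaptation decision, assuming bandwidths are compared per fixed adaptation interval (the interval model, function name, and example figures below are illustrative assumptions, the rates being taken from the earlier STM-1 example):

```python
def idle_adjustment(stream_rate_mbps: float, slots: int, slot_rate_mbps: float,
                    idle_block_bits: int = 66, interval_s: float = 1e-3) -> int:
    """Return how many idle code blocks to add (>0) or delete (<0) per interval."""
    pipe_rate = slots * slot_rate_mbps                        # total allocated slot bandwidth
    surplus_bits = (pipe_rate - stream_rate_mbps) * 1e6 * interval_s
    return round(surplus_bits / idle_block_bits)              # pad when the pipe is faster

print(idle_adjustment(160.7096177, 1, 160.9579176))  # positive: idles inserted to fill the pipe
```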
  • the data structure of the second code block stream in the embodiment of the present application may be various. For specific examples, refer to the foregoing embodiment, and details are not described herein again.
  • other information carried in the second code block stream such as the identifier indication information, the time slot allocation indication information, the multiplexing indication information, and the like, may be referred to the content of the foregoing embodiment, and details are not described herein again.
  • FIG. 26 exemplarily shows a schematic structural diagram of a communication device provided by the present application.
  • the communication device 8201 includes a processor 8203, a transceiver 8202, a memory 8205, and a communication interface 8204; wherein, the processor 8203, The transceiver 8202, the memory 8205, and the communication interface 8204 are connected to each other through a bus 8206.
  • The communication device 8201 in this example may be the second communication device in the above content and may perform the foregoing scheme corresponding to FIG. 23; the communication device 8201 may be the communication device 3109 in FIG. 4 above, or may be the communication device 3109 in FIG. 5 described above, or may be the communication device 3115 in FIG. 5 described above.
  • the bus 8206 can be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in FIG. 26, but it does not mean that there is only one bus or one type of bus.
  • The memory 8205 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 8205 may also include a combination of the above types of memories.
  • the communication interface 8204 can be a wired communication access port, a wireless communication interface, or a combination thereof, wherein the wired communication interface can be, for example, an Ethernet interface.
  • the Ethernet interface can be an optical interface, an electrical interface, or a combination thereof.
  • the wireless communication interface can be a WLAN interface.
  • the processor 8203 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 8203 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • The memory 8205 can also be used to store program instructions, and the processor 8203, by invoking the program instructions stored in the memory 8205, can perform one or more of the steps in the embodiments shown in the above schemes, or optional implementation manners thereof, so that the communication device 8201 implements the functions of the communication device in the above methods. The processor 8203 is configured to execute the instructions stored in the memory and to control the transceiver 8202 to receive and send signals.
  • When the processor 8203 executes the instructions stored in the memory, the transceiver 8202 in the communication device 8201 is configured to receive the second code block stream, where the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, and Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; the coding type of the second code block stream is M2/N2 bit coding, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2. The processor 8203 is configured to demultiplex the Q first code block streams.
  • the processor 8203 is configured to: obtain, according to the code block in the Q first code block stream carried by the payload area of the second code block stream, to obtain the to-be-decompressed code block. a sequence; demultiplexing the Q first code block streams according to the sequence of the code block to be decompressed.
  • In an optional implementation manner, if one code block in the code block sequence to be decompressed is obtained by compressing at least two code blocks, the at least two code blocks correspond to two different first code block streams.
  • the processor 8203 is configured to: decompress the sequence of the code block to be decompressed to obtain a sequence of code blocks to be recovered; and determine, according to the sequence of the code block to be recovered, the sequence of the code block to be recovered. a first code block stream corresponding to each code block, wherein Q first code block streams are obtained; wherein each of the first code block streams in the Q first code block streams corresponds to at least one time slot; the code block sequence to be recovered The ordering of the included code blocks matches the order of the time slots corresponding to the code blocks included in the code block sequence to be recovered.
  • the data structure of the second code block stream in the embodiment of the present application may be various. For specific examples, refer to the foregoing embodiment, and details are not described herein again.
  • other information carried in the second code block stream such as the identifier indication information, the time slot allocation indication information, the multiplexing indication information, and the like, may be referred to the content of the foregoing embodiment, and details are not described herein again.
  • FIG. 27 exemplarily shows a schematic structural diagram of a communication device according to an embodiment of the present application.
  • the communication device 8301 includes a transceiver unit 8302 and a multiplexing demultiplexing unit 8303.
  • the communication device 8301 in this example may be the first communication device in the above content, and may perform the solution corresponding to FIG. 7 above.
  • The communication device 8301 may be the communication device 3105 in FIG. 4 and FIG. 5 described above, or may be the communication device 3107.
  • The multiplexing and demultiplexing unit 8303 is configured to: acquire Q first code block streams, where Q is an integer greater than 1, the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; and place the bits corresponding to the code blocks in the Q first code block streams into the second code block stream to be sent, where the coding type of the second code block stream is M2/N2 bit coding, the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2. The transceiver unit 8302 is configured to send the second code block stream.
  • the transceiver unit 8302 can be implemented by the transceiver 8102 of FIG. 25, and the multiplexing and demultiplexing unit 8303 can be implemented by the processor 8103 of FIG. 25 described above. That is, the transceiver unit 8302 in the embodiment of the present application may perform the implementation of the transceiver 8102 of FIG. 25, and the multiplexing and demultiplexing unit 8303 may perform the execution of the processor 8103 of FIG. 25 described above. For the rest of the content, refer to the above content, and details are not described herein again.
  • the division of the units of the foregoing first communication device and the second communication device is only a division of a logical function, and may be integrated into one physical entity in whole or in part, or may be physically separated.
  • the memory 8105 included in the communication device 8101 can be used to store a code when the processor 8103 included in the communication device 8101 executes a scheme, and the code can be a program/code pre-installed when the communication device 8101 is shipped.
  • FIG. 28 exemplarily shows a schematic structural diagram of a communication device according to an embodiment of the present application.
  • the communication device 8401 includes a transceiver unit 8402 and a multiplexing and demultiplexing unit 8403.
  • The communication device 8401 in this example may be the second communication device in the above content and may perform the foregoing scheme corresponding to FIG. 23; the communication device 8401 may be the communication device 3109 in FIG. 4 above, or may be the communication device 3109 in FIG. 5 described above, or may be the communication device 3115 in FIG. 5 described above.
  • The transceiver unit 8402 is configured to receive the second code block stream, where the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, and Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; the coding type of the second code block stream is M2/N2 bit coding, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2. The multiplexing and demultiplexing unit 8403 is configured to demultiplex the Q first code block streams.
  • the transceiver unit 8402 can be implemented by the transceiver 8202 of FIG. 26, and the multiplexing and demultiplexing unit 8403 can be implemented by the processor 8203 of FIG. 26 described above. That is, the transceiver unit 8402 in the embodiment of the present application may perform the implementation of the transceiver 8202 of FIG. 26, and the multiplexing and demultiplexing unit 8403 may perform the execution of the processor 8203 of FIG. 26 described above. For the rest of the content, refer to the above content, and details are not described herein again.
  • the division of the units of the foregoing first communication device and the second communication device is only a division of a logical function, and may be integrated into one physical entity in whole or in part, or may be physically separated.
  • the memory 8205 included in the communication device 8201 can be used to store a code when the processor 8203 included in the communication device 8201 executes a scheme, and the code can be a program/code pre-installed when the communication device 8201 is shipped.
  • a computer program product includes one or more instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The instructions may be stored on a computer storage medium or transferred from one computer storage medium to another computer storage medium; for example, the instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over coaxial cable, optical fiber, or a digital subscriber line (DSL)) or in a wireless manner (for example, by infrared, radio, or microwave).
  • the computer storage medium can be any available media that can be accessed by the computer or a data storage device such as a server, data center, or the like, including one or more available media.
  • The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape, or a magneto-optical disk (MO)), an optical medium (for example, a CD, DVD, BD, or HVD), or a semiconductor medium (for example, a ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), or solid-state drive (SSD)).
  • embodiments of the present application can be provided as a method, system, or computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • the instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.


Abstract

Embodiments of the present application provide a data transmission method, a communication device, and a storage medium, for reducing the number of cross-connections at intermediate nodes in a network. In the embodiments of the present application, Q acquired first code block streams are multiplexed into one second code block stream for transmission, where the coding type of the first code block stream is M1/N1 bit coding and the coding type of the second code block stream is M2/N2 bit coding, and the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream. In other words, the solution provided by the embodiments of the present application multiplexes and demultiplexes code block streams at the granularity of code blocks. In this way, the second code block stream traverses at least one intermediate node to reach the communication device on the demultiplexing side, and the intermediate nodes do not demultiplex the second code block stream, so the number of cross-connections at intermediate nodes in the network can be reduced.

Description

Data transmission method, communication device, and storage medium
This application claims priority to Chinese Patent Application No. 201711489045.X, filed with the Chinese Patent Office on December 29, 2017 and entitled "Data transmission method, communication device, and storage medium", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the field of communications, and in particular, to a data transmission method, a communication device, and a storage medium.
Background
The Optical Internetworking Forum (OIF) has published Flexible Ethernet (FlexE), a general-purpose technology that supports multiple Ethernet MAC-layer rates. By bonding multiple 100GE physical (PHY) ports and dividing each 100GE port in the time domain into 20 time slots at a granularity of 5G, FlexE supports the following functions: bonding, in which multiple Ethernet ports are bonded into one link group to support Medium Access Control (MAC) services whose rate is greater than that of a single Ethernet port; sub-rate, in which time slots are allocated to services to support MAC services whose rate is smaller than the link-group bandwidth or smaller than the bandwidth of a single Ethernet port; and channelization, in which time slots are allocated to services to support the simultaneous transmission of multiple MAC services in a link group, for example the simultaneous transmission of one 150G MAC service and two 25G MAC services in a 2x100GE link group.
FlexE divides time slots by time division multiplexing (TDM), achieving hard isolation of transmission pipe bandwidth; one service data flow can be allocated to one or more time slots, so that services of various rates can be matched. One FlexE group (FlexE Group) may contain one or more physical link interfaces (PHY). FIG. 1 exemplarily shows a schematic diagram of a communication system based on the Flexible Ethernet protocol; as shown in FIG. 1, the FlexE Group includes 4 PHYs for illustration. A Flexible Ethernet protocol client (FlexE Client) represents a client data flow transmitted on specified time slots (one time slot or multiple time slots) of a FlexE Group; one FlexE Group can carry multiple FlexE Clients, one FlexE Client corresponds to one user service data flow (typically referred to as a Medium Access Control (MAC) Client), and the Flexible Ethernet protocol functional layer (FlexE Shim) provides data adaptation and conversion between the FlexE Client and the MAC Client.
At the ITU-T IMT-2020 workshop in December 2016, Huawei released a new technology, which may be referred to for short as ubiquitous Ethernet (X-Ethernet or X-E), a new generation of switching and networking technology based on the Ethernet physical layer and featuring deterministic, ultra-low latency. One of its ideas is switching and networking based on bit block (Bit Block) sequences, such as an unscrambled 64B/66B Bit Block sequence, an equivalent 8/10b Bit Block sequence, or the 9-bit block sequence, containing a 1-bit out-of-band control indication and an 8-bit character, on the Ethernet media-independent interface xMII (for example GMII, XGMII, 25GMII); however, it lacks consideration of hierarchical multiplexing and is not suitable for large-scale networking applications. FIG. 2 exemplarily shows a schematic architecture diagram of an X-E communication system; as shown in FIG. 2, the communication system may include two types of communication devices, such as communication device one 1011 and communication device two 1012 in FIG. 2. Communication device one 1011 may also be described as a communication device at the edge of an operator network (hereinafter referred to as the network), which may be called a Provider Edge node, or PE node for short. Communication device two 1012 may also be described as a communication device inside the operator network, which may be called a Provider node, or P node for short.
One side of communication device one 1011 may be connected to a user device or to a customer network device. The interface connected to the user device or customer network device may correspondingly be called the user-side interface 1111 (User network interface, UNI), which may also be described as the interface at which the network connects to the user. The other side of communication device one 1011 is connected to communication device two 1012; as shown in FIG. 2, the other side of communication device one 1011 and communication device two 1012 are connected through the network-to-network interface 1112 (Network to Network interface, NNI). The network-to-network interface 1112 may also be described as an interface between networks or between communication devices within a network. Optionally, communication device two 1012 may be connected to other communication devices (for example, another communication device two or communication device one); only one communication device two is schematically shown in the figure, and those skilled in the art will understand that one or more connected communication devices may be included between two communication devices one.
As shown in FIG. 2, adaptors may be configured on the interface side of the communication devices, such as the UNI-side adaptor (U-adaptor) 1113 configured on the UNI 1111 side and the adaptor (N-adaptor) 1114 configured on the NNI 1112 side. When network devices perform end-to-end networking based on X-E interfaces, an X-E switching module 1115 (X-E Switch) may be configured in the first communication device and the second communication device. FIG. 2 exemplarily shows a schematic diagram of an end-to-end path 1116.
X-E currently performs end-to-end networking based on FlexE interfaces, which is flat, non-hierarchical networking and switching. OIF FlexE currently defines time slot (SLOT) granules at a rate of 5 Gbps based on 64B/66B Bit Blocks (hereinafter 64B/66Bb); any FlexE Client can be carried by allocating, on a FlexE-based NNI or UNI, a number of time slots with a total bandwidth rate of Q*5 Gbps (Q being an integer greater than or equal to 1). The P nodes of an X-E network need to parse, extract, and switch every FlexE Client, lacking consideration of hierarchical multiplexing. FIG. 3 exemplarily shows a communication diagram of end-to-end networking in which the X-Ethernet flat networking technology is applied to metropolitan and backbone networks: tens of thousands of private-line services between multiple cities need to be scheduled, the aggregation nodes (the aggregation shown in FIG. 3) and backbone nodes (the backbone shown in FIG. 3) have to manage hundreds of thousands or millions of end-to-end cross-connections, creating difficulties in management and operation and maintenance, and each core node (such as an aggregation node or a backbone node) faces difficulty and pressure in processing an enormous number of cross-connections in the data plane.
Summary
Embodiments of the present application provide a data transmission method, a communication device, and a storage medium, for relieving the pressure that the number of cross-connections at intermediate nodes in a network places on those intermediate nodes, and also relieving the pressure on network management and operation and maintenance.
In a first aspect, an embodiment of the present application provides a data transmission method, in which Q first code block streams are acquired, where Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; the bits corresponding to the code blocks in the Q first code block streams are placed into a second code block stream to be sent, where the coding type of the second code block stream is M2/N2 bit coding, the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2. The solution provided by the embodiments of the present application multiplexes and demultiplexes code block streams at the granularity of code blocks; in this way, the second code block stream traverses at least one intermediate node to reach the communication device on the demultiplexing side, and the intermediate nodes do not demultiplex the second code block stream, which can reduce the number of cross-connections at intermediate nodes in the network and also relieve the pressure on network management and operation and maintenance.
In an optional implementation manner, placing the bits corresponding to the code blocks in the Q first code block streams into the second code block stream to be sent may be placing the synchronization header area and the non-synchronization header area of one code block in the Q first code block streams, in order, into the payload areas of the code blocks of the second code block stream. In this way, the synchronization header area and the non-synchronization header area of the code blocks of the first code block streams can be demultiplexed in order.
In an optional implementation manner, all the bits corresponding to the synchronization header area and the non-synchronization header area of one code block in the Q first code block streams are correspondingly placed into the payload areas of at least two code blocks of the second code block stream. In this way, when the total number of bits carried by one code block of a first code block stream is greater than the number of bits carried by the payload area of one code block of the second code block stream, this manner can be adopted to multiplex the code blocks of the first code block stream. For example, if both the first code block stream and the second code block stream are 64B/66B coded and the first code block stream is not compressed, one code block of the first code block stream can be carried by the payload areas of two code blocks of the second code block stream.
In an optional implementation manner, the second code block stream corresponds to at least one data unit; one data unit of the at least one data unit includes a header code block and at least one data code block; or one data unit includes a header code block, at least one data code block, and a tail code block; or one data unit includes at least one data code block and a tail code block. In this way, the boundaries of the data units can be delimited by the header code block and/or the tail code block, so that the communication device can identify the boundary of each data unit in the second code block stream, laying a foundation for demultiplexing the Q first code block streams.
In an optional implementation manner, the at least one data code block includes at least one first type of data code block; the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the first type of data code blocks of the at least one first type of data code block in the second code block stream, where the number of bits carried by the payload area of one first type of data code block in the second code block stream is M2. In this way, the code blocks of the first code block streams can be carried in the second code block stream, implementing code-block-granularity multiplexing of code block streams and improving data transmission efficiency.
In an optional implementation manner, for compatibility with the prior art, the header code block is an S code block and/or the tail code block is a T code block.
In an optional implementation manner, for one code block of the Q first code block streams carried in the second code block stream, the second code block stream further includes identification indication information corresponding to the code block, where the identification indication information is used to indicate the first code block stream to which the code block corresponds. In this way, the identification indication information can indicate to the communication device on the demultiplexing side the identifier of the first code block stream from which a code block carried in the second code block stream is taken, laying a foundation for the communication device on the demultiplexing side to demultiplex the Q first code block streams.
In an optional implementation manner, placing the bits corresponding to the code blocks in the Q first code block streams into the second code block stream to be sent includes: performing code-block-based time division multiplexing on the code blocks in the Q first code block streams to obtain a code block sequence to be processed, where each of the Q first code block streams corresponds to at least one time slot and the ordering of the code blocks included in the code block sequence to be processed matches the ordering of the time slots corresponding to those code blocks; and placing the bits corresponding to the code block sequence to be processed into the second code block stream to be sent. In this way, the demultiplexing side can determine, from the relationship between the ordering of the code blocks and the ordering of the time slots, the time slot corresponding to each code block taken from the Q first code block streams, then determine the first code block stream corresponding to each code block from the correspondence between the time slots and the Q first code block streams, and thereby recover the Q first code block streams carried by the second code block stream.
In an optional implementation manner, time slot allocation indication information is carried in preset code blocks of the second code block stream; the time slot allocation indication information is used to indicate the correspondence between the Q first code block streams and the time slots. Notifying the demultiplexing side of this correspondence by means of time slot allocation indication information makes it possible for the communication device on the multiplexing side to allocate time slots to the Q first code block streams more flexibly.
In an optional implementation manner, placing the bits corresponding to the code block sequence to be processed into the second code block stream to be sent includes: compressing consecutive R code blocks in the code block sequence to be processed to obtain a compressed code block sequence, where R is a positive integer; and placing the bits corresponding to the compressed code block sequence into the second code block stream to be sent. In this way, the number of bits corresponding to the first code block streams carried in the second code block stream can be reduced, improving data transmission efficiency.
In an optional implementation manner, if R is greater than 1, the consecutive R code blocks include at least two code blocks, and the two first code block streams from which two of those code blocks are taken are two different first code block streams. In other words, in the embodiments of the present application, multiple code blocks from different first code block streams can be compressed, so that, within this code block multiplexing and demultiplexing scheme, multiple code blocks are compressed to improve transmission efficiency.
In an optional implementation manner, the coded form of the compressed code block sequence is M3/N3, where M3 is a positive integer and N3 is an integer not less than M3; the number of first type of data code blocks included in one data unit of the at least one data unit included in the second code block stream is determined according to a common multiple of N3 and M2 and according to M2; or it is determined according to the least common multiple of N3 and M2 and according to M2. In this way, an integer number of code blocks of the first code block streams can be loaded into one data unit of the second code block stream (this form can also be described as boundary alignment).
In an optional implementation manner, after the Q first code block streams are received and before the bits corresponding to the code blocks in the Q first code block streams are placed into the second code block stream to be sent, the method further includes: for a first code block stream of the Q first code block streams, performing idle (IDLE) code block addition and deletion processing on the first code block stream according to the bandwidth of the first code block stream and the total bandwidth of the time slots corresponding to the first code block stream, where the total bandwidth of the time slots corresponding to the first code block stream is determined according to the number of time slots corresponding to the first code block stream and the bandwidth allocated to each of those time slots. In this way, the rate of the first code block stream can be adapted to the total rate corresponding to the time slots allocated to it.
In a second aspect, an embodiment of the present application provides a data transmission method, in which a second code block stream is received, where the bits corresponding to the code blocks in Q first code block streams are carried in the payload areas of the code blocks in the second code block stream, and Q is an integer greater than 1; the coding type of the first code block stream is M1/N1 bit coding, M1 is a positive integer, and N1 is an integer not less than M1; the coding type of the second code block stream is M2/N2 bit coding, M2 is a positive integer, the number of bits carried in the payload area of one code block in the second code block stream is not greater than M2, and N2 is an integer not less than M2; and the Q first code block streams are demultiplexed. The solution provided by the embodiments of the present application multiplexes and demultiplexes code block streams at the granularity of code blocks; in this way, the second code block stream traverses at least one intermediate node to reach the communication device on the demultiplexing side, and the intermediate nodes do not demultiplex the second code block stream, which can relieve the pressure that the number of cross-connections at intermediate nodes places on those nodes, and also relieve the pressure on network management and operation and maintenance.
In an optional implementation manner, the synchronization header area and the non-synchronization header area of one code block in the Q first code block streams are placed, in order, into the payload areas of the code blocks of the second code block stream. In this way, the synchronization header area and the non-synchronization header area of the code blocks of the first code block streams can be demultiplexed in order.
In an optional implementation manner, all the bits corresponding to the synchronization header area and the non-synchronization header area of one code block in the Q first code block streams are correspondingly placed into the payload areas of at least two code blocks of the second code block stream. In this way, when the total number of bits carried by one code block of a first code block stream is greater than the number of bits carried by the payload area of one code block of the second code block stream, this manner can be adopted to multiplex the code blocks of the first code block stream. For example, if both the first code block stream and the second code block stream are 64B/66B coded and the first code block stream is not compressed, one code block of the first code block stream can be carried by the payload areas of two code blocks of the second code block stream.
In an optional implementation manner, the second code block stream corresponds to at least one data unit; one data unit of the at least one data unit includes a header code block and at least one data code block; or one data unit includes a header code block, at least one data code block, and a tail code block; or one data unit includes at least one data code block and a tail code block. In this way, the boundaries of the data units can be delimited by the header code block and/or the tail code block, so that the communication device can identify the boundary of each data unit in the second code block stream, laying a foundation for demultiplexing the Q first code block streams.
In an optional implementation manner, the at least one data code block includes at least one first type of data code block; the bits corresponding to the code blocks in the Q first code block streams are carried in the payload areas of the first type of data code blocks of the at least one first type of data code block in the second code block stream, where the number of bits carried by the payload area of one first type of data code block in the second code block stream is M2. In this way, the code blocks of the first code block streams can be carried in the second code block stream, implementing code-block-granularity multiplexing of code block streams and improving data transmission efficiency.
In an optional implementation manner, for compatibility with the prior art, the header code block is an S code block and/or the tail code block is a T code block.
In an optional implementation manner, for one code block of the Q first code block streams carried in the second code block stream, the second code block stream further includes identification indication information corresponding to the code block, where the identification indication information is used to indicate the first code block stream to which the code block corresponds. In this way, the identification indication information can indicate to the communication device on the demultiplexing side the identifier of the first code block stream from which a code block carried in the second code block stream is taken, laying a foundation for demultiplexing the Q first code block streams.
In an optional implementation manner, demultiplexing the Q first code block streams includes: obtaining the bits corresponding to the code blocks of the Q first code block streams carried in the payload areas of the second code block stream to obtain a code block sequence to be decompressed; and demultiplexing the Q first code block streams according to the code block sequence to be decompressed. The bits corresponding to the code blocks of the Q first code block streams are obtained from the payload areas of the second code block stream and organized at code block granularity to form the code block sequence to be decompressed, the identifier of the first code block stream corresponding to each code block in that sequence is then determined, and the Q first code block streams are thereby demultiplexed, implementing demultiplexing based on code block granularity.
In an optional implementation manner, if one code block in the code block sequence to be decompressed is obtained by compressing at least two code blocks, the at least two code blocks correspond to two different first code block streams. In other words, in the embodiments of the present application, multiple code blocks from different first code block streams can be compressed, so that, within this code block multiplexing and demultiplexing scheme, multiple code blocks are compressed to improve transmission efficiency.
In an optional implementation manner, time slot allocation indication information is carried in preset code blocks of the second code block stream; the time slot allocation indication information is used to indicate the correspondence between the Q first code block streams and the time slots. Notifying the demultiplexing side of this correspondence by means of time slot allocation indication information makes it possible for the communication device on the multiplexing side to allocate time slots to the Q first code block streams more flexibly.
In an optional implementation manner, demultiplexing the Q first code block streams according to the code block sequence to be decompressed includes: decompressing the code block sequence to be decompressed to obtain a code block sequence to be recovered; and determining, according to the code block sequence to be recovered, the first code block stream corresponding to each code block in the code block sequence to be recovered, thereby obtaining the Q first code block streams, where each of the Q first code block streams corresponds to at least one time slot and the ordering of the code blocks included in the code block sequence to be recovered matches the ordering of the time slots corresponding to those code blocks. In this way, the demultiplexing side can determine, from the relationship between the ordering of the code blocks and the ordering of the time slots, the time slot corresponding to each code block taken from the Q first code block streams, then determine the first code block stream corresponding to each code block from the correspondence between the time slots and the Q first code block streams, and thereby recover the Q first code block streams carried by the second code block stream.
In an optional implementation manner, the coded form of the compressed code block sequence is M3/N3, where M3 is a positive integer and N3 is an integer not less than M3; the number of first type of data code blocks included in one data unit of the at least one data unit included in the second code block stream is determined according to a common multiple of N3 and M2 and according to M2; or it is determined according to the least common multiple of N3 and M2 and according to M2. In this way, an integer number of code blocks of the first code block streams can be loaded into one data unit of the second code block stream (this form can also be described as boundary alignment).
In a third aspect, an embodiment of the present application provides a communication device including a memory, a transceiver, and a processor, where the memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory and control the transceiver to receive and send signals; when the processor executes the instructions stored in the memory, the communication device is configured to perform the first aspect or any one of the methods of the first aspect.
In a fourth aspect, an embodiment of the present application provides a communication device including a memory, a transceiver, and a processor, where the memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory and control the transceiver to receive and send signals; when the processor executes the instructions stored in the memory, the communication device is configured to perform the second aspect or any one of the methods of the second aspect.
In a fifth aspect, an embodiment of the present application provides a communication device for implementing the first aspect or any one of the methods of the first aspect, including corresponding functional modules respectively configured to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible design, the structure of the communication device includes a multiplexing and demultiplexing unit and a transceiver unit, and these units can perform the corresponding functions in the above method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
In a sixth aspect, an embodiment of the present application provides a communication device for implementing the second aspect or any one of the methods of the second aspect, including corresponding functional modules respectively configured to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible design, the structure of the communication device includes a multiplexing and demultiplexing unit and a transceiver unit, and these units can perform the corresponding functions in the above method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
In a seventh aspect, an embodiment of the present application provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation manner of the first aspect.
In an eighth aspect, an embodiment of the present application provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the second aspect or any possible implementation manner of the second aspect.
In a ninth aspect, an embodiment of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation manner of the first aspect.
In a tenth aspect, an embodiment of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of the second aspect or any possible implementation manner of the second aspect.
附图说明
图1为一种基于灵活以太网协议的通信系统示意图;
图2为一种X-E通信系统架构示意图;
图3为一种端到端的通信示意图;
图4为本申请实施例适用的一种通信系统架构示意图;
图5为本申请实施例适用的另一种通信系统架构示意图;
图6为本申请实施例提供的一种网络系统架构示意图;
图7为本申请实施例提供的一种数据传输方法的流程示意图;
图8为本申请实施例提供的一种码块的结构示意图;
图9为本申请实施例提供的另一种码块的结构示意图;
图10为本申请实施例提供的一种码块的结构示意图;
图11为本申请实施例提供的一种数据码块的结构示意图;
图12为本申请实施例提供的一种T7码块的结构示意图;
图13为本申请实施例提供的一种IDLE码块的结构示意图;
图14为本申请实施例提供另一种码块的结构示意图;
图15为本申请实施例提供的一种FlexE帧的结构示意图;
图16为本申请实施例提供的一种第二码块流传输时隙分配指示信息的结构示意图;
图17为本申请实施例提供的一种码块流复用的结构示意图;
图18为本申请实施例提供的一种第一码块流的结构示意图;
图19为本申请实施例提供的一种第二码块流的结构示意图;
图20为本申请实施例提供的另一种第二码块流的结构示意图;
图21为本申请实施例提供的一种压缩处理方式的示意图;
图22为本申请实施例提供的一种压缩处理方式的示意图;
图23为本申请实施例提供的一种数据传输方法的流程示意图;
图24为本申请实施例提供的一种数据传输结构示意图;
图25为本申请实施例提供的一种通信设备的结构示意图;
图26为本申请实施例提供的另一种通信设备的结构示意图;
图27为本申请实施例提供的另一种通信设备的结构示意图;
图28为本申请实施例提供的另一种通信设备的结构示意图。
具体实施方式
应理解,本申请实施例的技术方案可以应用于各种通信系统,例如:移动承载前传或回传领域、城域多业务承载、数据中心互联、工业通讯等基于以太网技术的通讯系统,以及工业或通讯设备内不同元器件或模块之间的通讯系统。
图4示例性示出了本申请实施例适用的一种通信系统架构示意图。如图4所示，该通信系统包括多个通信设备，通信设备之间传输码块流。
本申请实施例中的通信设备可以是网络设备，比如可以是X-E网络中的网络边缘的称为PE节点的通信设备，也可以是X-E网络中的网络内的称为P节点的通信设备，还可以是作为客户设备接入到光传送网(Optical Transport Network,OTN)或波分复用(Wavelength Division Multiplexing,WDM)等其他承载网络的通信设备。
如图4所示,本申请实施例中提供的通信设备具有复用解复用单元,比如图4中示出的通信设备3105中的复用解复用单元3301、通信设备3107中的复用解复用单元3302和通信设备3109中的复用解复用单元3303。具有复用解复用单元的通信设备可以实现对接收到的多条码流的复用(本申请实施例中的复用在有些文献中也可称为复接),也可以实现对接收到的一条码流的解复用(本申请实施例中的解复用在有些文献中也可称为解复接),下面结合图4进行举例说明。
图4中通信设备3101输出码块流3201至通信设备3105,通信设备3102输出码块流3202至通信设备3105,通信设备3103输出码块流3203至通信设备3105,通信设备3105中包括复用解复用单元3301,通信设备3105可以将接收到的码块流3201、码块流3202和码块流3203复用为一条码块流3205进行传输。
进一步,本申请实施例中可以实现多级复用,比如图4中通信设备3105可以将码块流3205输出给通信设备3107,码块流3205已经是复用后的码块流,通信设备3107可以 通过复用解复用单元3302对通信设备3104输出的码块流3204、通信设备3106输出的码块流3206,以及通信设备3105输出的复用后的码块流3205再次进行复用,输出复用后码块流3207。也可以描述为,通信设备3107将码块流3204、复用后的码块流3205和码块流3206复用为一条码块流3207。
通信设备3107和通信设备3108以及通信设备3109之间可以传输复用后的码块流3207。通信设备中的复用解复用单元还可以具有解复用功能,图4所示的通信设备3109中的复用解复用单元3303可以将接收到的码块流3207解复用,并将解复用后的码块流发送给对应的通信设备,比如图4中将解复用后的码块流3204发送给通信设备3110,将解复用后的码块流3201发送给通信设备3111,将解复用后的码块流3202发送给通信设备3112,将解复用后的码块流3203发送给通信设备3113,将解复用后的码块流3206发送给通信设备3114。
一种可选地实施方案中,复用解复用单元3303可以先将码块流3207解复用为码块流3204、码块流3205和码块流3206,进一步复用解复用单元3303再将码块流3205解复用为码块流3201、码块流3202和码块流3203。一种可选地实施方式中,图4中的通信设备3109的复用解复用单元3303可以包括两个子复用解复用单元,其中一个子复用解复用单元用于将码块流3207解复用为码块流3204、码块流3205和码块流3206,并将码块流3205发给另外一个子复用解复用单元,该另外一个子复用解复用单元将码块流3205解复用为码块流3201、码块流3202和码块流3203。
图5示例性提供了本申请实施例适用的另一种通信系统架构示意图，如图5所示，通信设备3109接收到码块流3207的过程与上述图4中的一致，不再赘述，与图4所示方案不同的是，图5中通信设备3109中的复用解复用单元3303将接收到的码块流3207解复用为码块流3204、码块流3205和码块流3206，并将码块流3204发送给通信设备3110，将码块流3205发送给通信设备3115，将码块流3206发送给通信设备3114。通信设备3115中的复用解复用单元3304将接收到的码块流3205解复用为码块流3201、码块流3202和码块流3203，并将码块流3201发送给通信设备3111，将码块流3202发送给通信设备3112，将码块流3203发送给通信设备3113。
也就是说,本申请实施例中,在复用的一侧和解复用的一侧都可以灵活配置,比如图4中,通过复用解复用单元3301和复用解复用单元3302进行了两级复用,得到码块流3207,而在解复用一侧,既可以如图4中所示,通过复用解复用单元3303将码块流解复用为码块流3204、码块流3201、码块流3202、码块流3203和码块流3206。也可以如图5所示的,先通过复用解复用单元3303将接收到的码块流3207解复用为码块流3204、码块流3205和码块流3206,再通过复用解复用单元3304将接收到的码块流3205解复用为码块流3201、码块流3202和码块流3203。
通过图4和图5所示的方案,可以看出,通信设备3107和通信设备3108以及通信设备3109之间仅仅传输一条码块流,该传输路径上的通信设备仅处理复用后的一条码块流即可,无需对被复用的多条码块流进行解析,可见,应用本申请实施例提供的方案,可以减少中间节点(中间节点比如为图4中的通信设备3108等)的交叉连接数量,减轻网络管理和运维方面工作量。
图6示例性示出了本申请实施例提供的一种网络系统架构示意图。X-Ethernet可以基于传统以太网接口、光纤通道技术(Fiber Channel,FC)光纤通道接口、通用公共无线电 接口(Common Public Radio Interface,CPRI)、同步数字体系SDH/SONET、光传送网OTN和FlexE接口上的通用的数据单元序列流进行交叉连接,提供了一种与具体协议无关的端到端组网技术,其中交换的对象是通用的数据单元序列流。可以通过对伴随的空闲(IDLE)的增删实现对数据单元序列流到FlexE时隙或者相应物理接口的速率适配。具体地可以基于64B/66B码块流进行交叉连接,也可以基于其解码后的通用数据单元流进行交叉连接。如图6所示,在两端的接入侧可以接入多种类型的数据,比如移动前传CPRI、移动后传以太网和企业SDH、以太网专线等。图6的示例中,采用本申请实施例以后,X-E的汇聚节点(如图6中所示的汇聚)可以实现Q条业务码流到一条码流的复接(复用),从而减少汇聚节点和骨干节点需要处理的交叉连接数量。通过图3和图6的对比,可以看出,应用本申请实施例提供的方案,可以有效减少核心节点(比如图6的汇聚节点和骨干节点)在数据面处理的交叉连接数量,减轻核心节点的压力。本申请实施例中的*表示乘的意思。
基于上述描述，本申请实施例提供一种数据传输方法，该数据传输方法的复用侧可以由上述图4和图5中的通信设备3105和通信设备3107来执行，该数据传输方法的解复用侧可以由上述图4中的通信设备3109，以及图5中的通信设备3109和通信设备3115来执行。本申请实施例中也可以将复用侧的通信设备称为第一通信设备，将解复用侧的通信设备称为第二通信设备，可选地，一个通信设备可以具有复用能力，也可以有解复用的能力，也就是说同一个通信设备在一个数据传输链路中可能是复用侧的第一通信设备，在另外一个数据传输链路过程中也可能是解复用侧的第二通信设备。图7示例性示出了本申请实施例提供的一种数据传输方法的流程示意图，如图7所示，该方法包括：
步骤4101,第一通信设备获取Q条第一码块流,其中,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;
步骤4102,第一通信设备将Q条第一码块流中的码块对应的比特放入待发送的第二码块流;其中,第二码块流的编码类型为M2/N2比特编码;Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域;其中,M2为正整数,第二码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数。将Q条第一码块流中的码块对应的比特放入待发送的第二码块流也可以描述为将Q条第一码块流中的码块对应的比特复用入(或交织入,英文也可以写为Interleaving)待发送的第二码块流。
本申请实施例中可选地，第一码块流的编码方式和第二码块流的编码方式可以相同，也可以不同。也就是说，M1可以与M2相同或不同，N1可以与N2相同或不同，比如第一码块流的编码方式采用8B/10B编码方式，第二码块流采用64B/66B编码方式；或者，第一码块流的编码方式采用64B/65B编码方式，第二码块流采用64B/66B编码方式。
可选地，本申请实施例提供的方案应用于上述图4中时，通信设备3107和通信设备3109之间包括至少一个中间节点（比如图4中的通信设备3108），该中间节点接收到码块流3207时不会将码块流3207解复用，也就是说，第二码块流穿越至少一个中间节点到达解复用侧的第二通信设备，中间节点不需对第二码块流进行解复用。将本申请实施例应用到X-E中时，也可以描述为，第二码块流依次进入本节点与下一节点的灵活以太网接口组中的时隙组合所构成的承载管道中传输，穿越网络到达解复用侧的第二通信设备。可选地，中间节点可以对第二码块流与其它码块流再次进行复用，本申请实施例不做限制。本申请实施例所提供的方案在码块的粒度上对码块流进行复用和解复用，如此，通过上述步骤4101和步骤4102提供的方案，可以实现多条第一码块流的复用，从而将多条第一码块流复用为一条第二码块流进行传输，从而可以减少中间节点需处理的交叉连接数量，也可以减轻网络管理和运维方面的压力。可选地，本申请实施例中的中间节点是指在传输路径上复用侧的第一通信设备和解复用侧的第二通信设备之间的通信设备。
一种可选地实施方式中，上述步骤4102可以是将Q条第一码块流中的一个码块的同步头区域和非同步头区域依序放入第二码块流的码块的净荷区域。也就是说，将该一个码块的同步头区域承载的信息和非同步头区域承载的信息依据它们在第一码块流中的排序依序放入第二码块流的码块的净荷区域。
本申请实施例中还提供一种可选地实施方式,Q条第一码块流中的一个码块的同步头区域和非同步头区域对应的所有比特对应放入第二码块流的至少两个码块的净荷区域。举个例子,比如Q条第一码块流中的每条第一码块流和第二码块流使用的编码方式都是64B/66B编码,则第一码块流中一个码块的总比特数是66比特,而第二码块流的一个码块总比特数是66比特,但第二码块流的一个码块的净荷区域是64比特,所以第一码块流中的一个码块的66比特需要放入第二码块流的至少两个码块的净荷区域。
本申请实施例中第一码块流也可以是复用后的码块流,比如上述图4中,第一通信设备3105对码块流3201、码块流3202和码块流3203进行复用后,输出复用后的3205,第一通信设备3107可以对码块流3204、码块流3206,以及复用后的码块流3205再次进行复用。也就是说,本申请实施例中支持嵌套应用。本申请实施例中针对第一通信设备的输入侧和输出侧的码块流的承载管道,若将传输复用前的码块流的管道称为低阶管道,将传输复用后的码块流的管道称为高阶管道,比如将图4中承载码块流3201、码块流3202和码块流3203的管道称为低阶管道,将承载复用后的码块流3205的管道称为高阶管道,将承载码块流3207的管道称为更高阶管道,则本申请实施例中可以将低阶管道的码块装入高阶管道中,而高阶管道的码块可以装入更高一级的管道中,从而实现高阶管道到更高阶管道的嵌套复接。
本申请实施例中的第一通信设备可以包括多个接口,按数据传输方向可以分为输入侧的接口和输出侧的接口,输入侧的接口包括多个,输出侧的接口包括一个或多个。可选地,可以预先对第一通信设备的接口进行配置,将输入侧的部分或全部接口收到的多个码块流复用到输出侧的一个接口上的多个码块流中的一个码块流中。举个例子,第一通信设备包括输入侧的接口包括接口1、接口2和接口3,输出的接口包括接口4和接口5,可以配置接口1和接口2收到的Q1和Q2条码块流经复用成为一个码块流通过接口4输出,接口3收到的Q3条码块流流经复用成为一个码块流通过接口5输出。也可以是Q1、Q2、Q3中的Q4条码块流经复用成为一个码块流通过接口4输出,Q1、Q2、Q3中的Q5条码块流经复用成为一个码块流通过接口5输出,可选地,第一通信设备的接口之间复用的配置信息可以周期性或者不定时的进行调整,也可以为静态固定配置。
下面对本申请实施例中所涉及到的Q条第一码块流和第二码块流中的任一条码块流,以及Q条第一码块流和第二码块流中的一个码块进行介绍,下文中除特别提到第一码块流和第二码块流外,所提到的码块流均是指Q条第一码块流和第二码块流中的任一条码块流。下文中除特别提到第一码块流中的一个码块和第二码块流中的一个码块外,所提到的码块均是指Q条第一码块流和第二码块流中的任一个码块。
本申请实施例中所定义的码块流(比如第一码块流和第二码块流)可以指以码块为单位的数据流。本申请实施例中，码块(比如第一码块流中的码块和第二码块流中的码块)的英文可以写为Bit Block，或者英文写为Block。本申请实施例中可以将比特流(该比特流可以是编码后的或编码前的)中预设数量的比特称为一个码块(该码块也可以称为一个比特组或比特块)。比如本申请实施例中可以将1个比特称为一个码块，再比如可以将2个比特称为一个码块。另一种可选地实施方式中，本申请实施例中所定义的码块可以是使用编码类型对比特流进行编码之后得到的码块。本申请实施例中定义了一些编码方式，比如M1/N1比特编码、M2/N2比特编码和M3/N3比特编码，本申请实施例中将这些编码方式统称为M/N编码方式，也就是说本申请实施例中对M/N比特编码的描述可以适用于M1/N1比特编码、M2/N2比特编码和M3/N3比特编码中的任一项或任多项，即当M1适用于对M的描述时，N1对应适用于对N的描述；当M2适用于对M的描述时，N2对应适用于对N的描述；当M3适用于对M的描述时，N3对应适用于对N的描述；其中，M为正整数，N为不小于M的整数。
一种可选地实施方式中,M可以等于N,如此,若一个码块分为同步头区域和非同步头区域,则可以理解同步头区域承载的比特位为0。或者也可以理解为将预设数量的比特称为一个码块。通过其他技术手段来确定码块的边界。
另一种可选地实施方式中,N可以大于M。但并没有明确的同步头。比如,使用8B/10B编码进行编码后实现直流均衡后得到的码块,10比特信息长度的8B/10B码块样本为1024个,远高于8比特信息长度需要的256个码块样本数量。可以通过预订的码块样本实现8B/10B码块同步,识别8B/10B码块的边界。该8B/10B码块仅包括非同步头区域。图8示例性示出了本申请实施例提供的一种码块的结构示意图,如图8所示,码块4200包括的同步头区域承载的比特为0,码块4200包括的全部比特都为非同步头区域4201承载的比特。
在N可以大于M的可选地实施方式中,比如,M/N比特编码可以是在802.3中定义的使用64B/66B编码(64B/66B编码也可以写为64/66比特编码),如该标准中定义的,码块可以包括同步头区域和非同步头区域。本申请实施例中使用M/N比特编码编码后得到的码块可以是指非同步头区域包括M个比特,编码后的一个码块的总比特数是N比特的码块;M/N比特编码编码后得到的码块也可以描述为:非同步头区域的M比特+同步头区域的若干个比特构成的码块。图9示例性示出了本申请实施例提供的另一种码块的结构示意图,如图9所示,码块4200包括同步头区域4301和非同步头区域4302,可选地,非同步头区域4302承载的比特的数量为M,同步头区域4301承载的比特的数量为(N-M)。本申请实施例中的同步头区域4301承载的信息可以用于指示码块的类型,码块的类型可以包括控制类型、数据类型以及一些其它类型等等。
实际应用中,可以在Ethernet物理层链路传递经过M/N比特编码所得到的码块流,M/N比特编码可以是1G Ethernet中采用8B/10B编码,即1GE物理层链路传递的为8B/10B编码类型的码块流(码块流的英文也可以称为Block流);M/N比特编码可以是10GE、40GE和/或100GE中采用的64B/66B编码,即10GE、40GE和/或100GE物理层链路传递64B/66B的码块流。未来随Ethernet技术发展,可能会出现其他编码解码,本申请实施例中的M/N比特编码也可以是未来出现的一些编码类型,比如可能出现128B/130B编码、256B/257B编码等等。实际应用中,码块可以是根据IEEE 802.3已经规范的以太网物理编码子层(Physical Coding Sublayer,PCS)子层编码所得到的使用8B/10B编码所得到的码块(也可以称为8B/10B码块)、以及使用64B/66B编码所得到的码块(也可以称为64B/66B码块) 等。再比如本申请实施例中的码块可以是802.3以太网前向纠错码(Forward Error Correction,FEC)子层使用256B/257B编码(转码)所得到的码块(可以称为256B/257B码块),再比如本申请实施例中的码块可以是ITU-T G.709中使用基于64B/66B转码得到的64B/65B码块所得到的码块(也可以称为64B/65B码块)、512B/514B码块等。再比如本申请实施例中的码块可以是Interlaken总线规范的使用64B/67B编码所得到的码块(也可以称为64B/67B码块)等。
现有技术中规定了一些码块的结构形式,比如S码块、数据码块、T码块和IDLE码块。本申请实施例中的码块(比如第一码块流中的码块和第二码块流中的码块)可以是现有技术中规定的这些码块。图10示例性示出了本申请实施例提供的一种类型字段为0x4B的O码块的结构示意图,如图10所示,本申请实施例的码块4200为O码块,O码块4200包括的同步头区域4301承载的信息为“SH10”,“SH10”是指该码块4200的类型为控制类型。非同步头区域4302中包括净荷区域4303和非净荷区域4304,非净荷区域4304可以用于承载类型字段“0x4B”、“O0”和预留字段“C4~C7”,预留字段“C4~C7”可以全部填充“0x00”。可选地,“O0”可以填充为“0x0”、“0xF”或“0x5”等现有技术涉及的特征命令字以及“0xA”、“0x9”或“0x3”等未被现有技术所使用的特征命令字,从而与现有技术形成区别,可以用该“O0”字段填充的内容指示一些信息。可选地,本申请实施例中头码块也可以是指码块的字符中包括S的码块,也可以是新定义的O码块等新码块。比如图10中所示的类型字段为0x4B的O码块,再比如头码块可以为标准64B/66B编码对应的类型字段为0x33的S码块或类型字段为0x66的S码块。对部分高速以太网例如100GE/200GE/400GE,其S码块仅为一种,类型字段为0x78,包含7个字节的数据净荷。但对一些低速以太网,例如10GE/25GE,S码块可以包括类型字段为0x78、0x33和0x66的码块,也可以包括其它字符中包括S字符的码块,S码块可以包含4字节的数据净荷。一种可选地实施方式中,由于传统以太网S码恰好由7字节前导码和1字节帧开始定界符(Start of Frame Delimiter,SFD)编码获得,因此,一种S码块的可能的比特图案中,同步头区域4301为“10”,非净荷区域4304的类型字段为“0x78”,后续净荷区域4303全部填充为“0x55”,净荷区域4303之后的非净荷区域4304中除了最后一个字节填充“0xD5”之外,全部填充“0x55”。
本申请实施例中的码块可以为数据码块,图11示例性示出了本申请实施例提供的一种数据码块的结构示意图,如图11所示,本申请实施例的码块4200为数据码块,码块4200包括的同步头区域4301承载的信息为“SH01”,“SH01”是指该码块4200的类型为数据类型。非同步头区域4302中包括净荷区域4303。数据码块的非同步头区域全部是净荷区域,如图D0~D7所示的净荷区域。
本申请实施例中的码块可以为T码块。T码块可以是码块的字符中包括T的码块,T码块可以包括T0~T7中的任一个码块,比如类型字段为0x87的T0码块、类型字段为0x99的T1码块和类型字段为0xFF的T7码块等等。图12示例性示出了本申请实施例提供的一种T7码块的结构示意图,如图12所示,本申请实施例的码块4200为T7码块,码块4200包括的同步头区域4301承载的信息为“SH10”,“SH10”是指该码块4200的类型为控制类型。非同步头区域4302中包括净荷区域4303和非净荷区域4304。非净荷区域4304可以用于承载类型字段“0xFF”。T0~T7码块的类型字段分别为0x87、0x99、0xAA、0xB4、0xCC、0xD2、0xE1和0xFF,T0~T7码块均可适用于各种采用64B/66B编码的以太网接口。 需要注意的是,T1~T7码块分别包括1个至7个字节的净荷区域。可选地,T码块中的净荷区域可以用于承载取自第一码块流的码块对应的比特;也可以不用于承载取自第一码块流的码块对应的比特,比如可以全部填0处理,或者用于承载其它指示信息。T0~T6码块中的其C1~C7可以按照现有以太网技术处理,即T字符后的7个IDLE控制字节(C1~C7字节),编码后均为7比特0x00。例如对0xFF的T码类型,可以在其D0~D6全部填8比特“0x00”,保留不用。
本申请实施例中的码块可以为IDLE码块,图13示例性示出了本申请实施例提供的一种IDLE码块的结构示意图,如图13所示,本申请实施例的码块4200为IDLE码块,码块4200包括的同步头区域4301承载的信息为“SH10”,“SH10”是指该码块4200的类型为控制类型。非同步头区域4302用于承载类型字段“0x1E”,非同步头区域4302的其它字段“C0~C7”承载的内容为“0x00”。本申请实施例中第二码块流中包括至少一个数据单元,IDLE码块可以添加于一个数据单元的内部,也可以添加在数据单元之间。
可选地,可以在第二码块流中承载一些指示信息(本申请实施例中提到的指示信息可以是后续内容中提到的标识指示信息、时隙分配指示信息和复用指示信息等等),以便出口侧按照与入口侧一致的方式进行解复用,或者在复用与解复用侧已经约定好复用解复用关系的情况下,用于验证复用和解复用关系。承载该指示信息的码块可以称为操作维护管理(Operations,Administration,and Maintenance,OAM)码块。可选地,OAM码块需要特定的类型字段从而与空闲码块形成区分,本申请实施例中可以类型字段为0x00的保留块类型为例,作为OAM码块类型,用于实现与其他码块形成区分。图14示例性示出了本申请实施例提供另一种码块的结构示意图,如图14所示,本申请实施例的码块4200包括的同步头区域4301承载的信息为“SH10”,“SH10”是指该码块4200的类型为控制类型。非同步头区域4302包括净荷区域4303和非净荷区域4304,非净荷区域可用于承载类型字段“0x00”。OAM码块可以是图14所示的码块,在图14中的“0x00”之后的字段填充“0x00”,该字段可以称为OAM码块的Type域,也可以写为OAMType,如一共四个时隙,则在OAM码块的连续4个预设字段中承载该四个时隙对应的第一码块流的标识,从而向对端发送时隙和第一码块流的对应关系,该4个预设字段可以是OAM码块的最后4个字段,其余字段可以为保留字段,比如可以填充0。可选地,OAM码块可以替换第二码块流的数据单元中的IDLE码块,也可以在数据单元之间插入。
基于上述内容,本申请实施例中提供一种第二码块流的可能的结构形式,本领域技术人员可知,第一码块流的结构形式可以是现有技术中已经定义的结构形式,也可以与本申请实施例中第二码块流的结构形式类似或相同,本申请实施例中不做限制。下面对第二码块流可能的几种结构形式进行介绍。可选地,第二码块流对应至少一个数据单元。一个数据单元可能包括多种结构形式,比如第一种,第二码块流对应的一个数据单元可以包括头码块和至少一个数据码块。第二种,考虑兼容和重用现有以太网的帧定界格式,即保有典型的以太网前导码和其对应开始码块(开始码块也称为S码块),以及帧结束符、间隙空闲字节和其对应的结束码块(结束码块可以是T码块)和IDLE码块,可选地,第二码块流对应的一个数据单元可以包括头码块、至少一个数据码块和尾端码块。第三种,第二码块流对应的一个数据单元可以包括至少一个数据码块和尾端码块。头码块和尾端码块可以用于承载一些信息,还可以起到划分一个数据单元的作用,比如头码块和尾端码块起到为一个数据单元限定边界的作用。还有一种可能的结构形式,第二码块流对应的一个数据单 元可以包括至少一个数据码块,比如可以设置一个数据单元中包括的数据码块的数量。上述步骤4102中,Q条第一码块流中的码块对应比特承载于第二码块流中的头码块、尾端码块和数据码块中的任一项或任多项的净荷区域。举个例子,比如Q条第一码块流中的码块对应比特承载于第二码块流的头码块和数据码块的净荷区域中。
一种可选地实施方式中,上述示例的多种结构形式中,第二码块流中的一个数据单元中的数据码块可以包括至少一个第一类数据码块;Q条第一码块流中的码块对应比特承载于第二码块流中的至少一个第一类数据码块中的第一类数据码块的净荷区域;其中,第二码块流中的一个第一类数据码块的净荷区域承载的比特的数量为M2。另一种可选地实施方式中,上述示例的多种结构形式中,第二码块流中的一个数据单元中的数据码块可以包括至少一个第一类数据码块和至少一个第二类数据码块。也就是说,在该实施例中,第一码块流的码块对应的比特都承载在第一类数据码块上,而头码块、尾端码块和第二类数据码块可以用于承载一些其它信息(比如后续的时隙分配指示信息、标识指示信息和复用指示信息中的任一项或任多项)。也可以描述为划分的所有时隙中各个时隙对应的码块所对应的比特都承载在第一类数据码块的净荷区域。第二类数据码块的数量可能为0也可能不为0。
可选地,本申请实施例中第二码块流中的一个数据单元中的头码块和尾端码块可以是一些新设置的存在固定格式的码块,该头码块和尾端码块可以起到数据单元的界限的作用,也可以承载一些信息。可选地,为了兼容技术,可选地,头码块可以为O码块,O码块可以是上述图10所示的类型字段为0x4B的码块。可选地,头码块也可以是其它现有技术中定义的字符包括S字符的S码块,比如头码块可以为类型字段为0x33的S码块或类型字段为0x78的S码块。进一步,可选地,头码块为O码块时,可以在O码块的预设字段增加信息,以便跟现有技术形式区别,预设字段可以是O码块中的特征命令字O=0xA或者0x9或者0x3等未被使用的特征命令字,当然也可以使用至今仍保留未用的0x00类型的码块。如图14所示,头码块可以包括同步头区域和非同步头区域,非同步头区域包括非净荷区域和净荷区域。
另一种可选地实施方式中,尾端码块可以为T码块。T码块可以是上述图12所示的类型字段为0xFF的T7码块,也可以是其它现有技术中定义的其它T码块,比如上述T0~T6码块中的任一个码块。使用S码和T码进行第二码块流的数据单元的封装,能够兼容现有技术,承载多个第一码块流的第二码块流可以穿越已经部署的当前支持扁平化组网的X-Ethernet和FlexE Client交换节点。
另外,第二码块流中的一个数据单元中可选地,还可以包括一些IDLE码块,IDLE码块在数据单元中的位置可以是预先配置,也可以是随机的。
可选地,在第二码块流的相邻数据单元之间也可以配置一些其它码块,比如可以是控制码块,也可以是数据码块,也可以是其它码块类型的码块。比如在在第二码块流的相邻数据单元之间配置一些IDLE码块、S码块和上述图14中所示的码块中的任一项或任多项。在第二码块流的相邻数据单元之间可以间隔一个或多个IDLE码块。第二码块流的相邻数据单元之间间隔IDLE码块的数量可以是个变量,可以依据具体应用场景进行调整。一种可选地实施方式中,第二码块流中可以存在至少两组相邻的数据单元(每组相邻的数据单元包括两个相邻的数据单元),该两组相邻的数据单元之间间隔的IDLE码块的数量不相等。可选地,第二码块流的相邻的数据单元之间所间隔的IDLE码块进行适当的增删,也 就是适应性的增加或者减少,用于实现速率适配(本申请实施例中也可以是实现频率适配)。比如,若承载第二码块流的管道的带宽偏小,则可以适当减少第二码块流中的数据单元之间的IDLE码块,一种可能地实现方式中,相邻的数据单元之间的IDLE码块被减少至零,即相邻两个数据单元之间无IDLE码块。再比如,若承载第二码块流的管道的带宽偏大,则可以适当增加第二码块流中的数据单元之间的IDLE码块。另一种可能地实施方式中,可以在第二码块流的任意位置插入空闲码块,以便实现速率适配,但是相对应速率带宽差异较小的情况,可以推荐在两个数据单元之间插入IDLE码块,比如可以将数据单元之间的IDLE码块的数量从1个增加至两个或多个。
在上述在第二码块流的相邻数据单元之间增加IDLE码块的示例中,比如可以在相邻数据单元之间平均增加一个IDLE码块,这种情况下IDLE可以分布较均匀,第二码块流的数据单元之间可以预留足够的IDLE码块余量(200百万分之一(parts per million,ppm)以上,以支持极端情况下的以太网的链路速率差异:+/-100ppm),则第二码块流的一个数据单元中的码块的数量和一个数据单元中包括的净荷区域的比特的总数量存在一个上限。建议在上限允许的基础上取最大值。
可选地,在第二码块流的数据单元之间增加若干空闲码块,从而可以支持后续对该第二码块流进行IDLE增删,从而使该第二码块流适配管道的速率差异,比如管道的速率差异可以是100ppm,从而在承载第二码块流的管道的带宽偏小的时候,可以通过删除第二码块流中数据单元之间的IDLE码块来实现速率适配。
一种可选地实施方式中，第二码块流的一个数据单元包括一个头码块、33个数据码块和一个IDLE码块。IDLE码块的比重为1/35，远大于100ppm(万分之一)，因此可选地，还可以将部分IDLE码块替换为操作维护管理(Operations Administration and Maintenance,OAM)码块，从而在第二码块流中承载一些OAM的信息，OAM码块的结构形式可以为上述图14所示的码块的结构形式。本申请实施例中此类码块可以用以承载指示信息(该指示信息可以是时隙分配指示信息、复用指示信息和标识指示信息中的任一项或任多项)。
本申请实施中,第一码块流中的码块对应的比特对应承载在第二码块流中,一种可选地实施方式中,可以在复用侧的第一通信设备和解复用侧的第二通信设备之间进行约定,从而使解复用侧的第二通信设备根据约定从第二码块流中解复用出Q条第一码块流。另一种可选地实施方式中,针对第二码块流中承载的Q条第一码块流中的一个码块:第二码块流中还包括码块对应的标识指示信息;其中,标识指示信息用于指示码块对应的第一码块流。如此,通过将标识指示信息发送给解复用侧的第二通信设备的方式,可以使解复用侧确定出第二码块流中承载的取自Q条第一码块流的每个码块所对应的第一码块流,从而解复用出各个第一码块流。第二码块流中承载的Q条第一码块流中的一个码块对应的标识指示信息,可以是该码块对应的第一码块流的标识,也可以是其它能够指示出这个信息的其它信息,比如该码块在第二码块流中的位置信息和第一码块流的标识。
本申请实施例中提供一种可能的数据传输方式,可使解复用侧的第二通信设备根据该方式确定出第二码块流中承载的取自Q条第一码块流的每个码块所对应的第一码块流,从而解复用出各个第一码块流。该传输方式中,先进行时隙划分,所有时隙之间存在排序关系,之后为Q条第一码块流中的每条第一码块流分配至少一个时隙。上述步骤4202中,可以将Q条第一码块流中的码块进行基于码块的时分复用,得到待处理码块序列;将待处理码块序列对应的比特放入待发送的第二码块流。其中,Q条第一码块流中的每条第一码 块流对应至少一个时隙;待处理码块序列包括的码块的排序,与待处理码块序列包括的码块所对应的时隙的排序匹配。
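下面给出一段示意性的Python草图(非本申请的规范实现，其中的函数名与数据结构均为说明用的假设)，用于演示上述按时隙排序对Q条第一码块流做基于码块的时分复用、得到待处理码块序列的思路：

# 示意性草图: 按时隙排序对Q条第一码块流做基于码块的时分复用得到待处理码块序列。
# 假设 slot_map[i] 给出时隙 i 对应的第一码块流编号(未分配为None)，且各第一码块流
# 已经过IDLE增删适配到其时隙速率，码块总是足够取用。
IDLE_BLOCK = object()  # 占位: 代表未分配时隙填充的确定图案码块(如IDLE码块)

def tdm_interleave(first_streams, slot_map, num_rounds):
    """first_streams: {流编号: 码块迭代器}; slot_map: 长度为时隙数的列表。
    返回按时隙0,1,...,N-1,0,1,...顺序依序取码块得到的待处理码块序列。"""
    iters = {sid: iter(blocks) for sid, blocks in first_streams.items()}
    sequence = []
    for _ in range(num_rounds):
        for slot, sid in enumerate(slot_map):
            if sid is None:                         # 时隙未分配
                sequence.append(IDLE_BLOCK)
            else:
                sequence.append(next(iters[sid]))   # 从该时隙对应的第一码块流取一个码块
    return sequence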
本申请实施例中划分的所有时隙可以仅给Q条第一码块流分配部分时隙,也可以将划分的全部时隙分配给Q条第一码块流。比如划分了32个时隙,而存在2条第一码块流,可以将32个时隙中的三个时隙分配给该2条第一码块流,其余的29个时隙可以不分配给第一码块流,比如可以分配给其它码块,比如可以是IDLE码块或者上述的OAM码块等等。
在步骤4101之前,本申请实施例中的网络接口可进行时隙的划分,以划分后的时隙的一个或者多个构成管道进行码块流的承载。具体来说,接口时隙的划分可以结合具体的应用场景灵活配置,本申请实施例中提供一种时隙划分方案。为方便介绍,本申请实施例中下述内容以FlexE技术为例进行介绍,该示例中FlexE接口以64B/66B编码为例进行介绍。FlexE借鉴了同步数字体系(Synchronous Digital Hierarchy,SDH)/光传送网(OpticalTransportNetwork,OTN)技术,对物理接口传输构建固定帧格式,并进行时分复用((Time Division Multiplexing,TDM)的时隙划分。与SDH/OTN不同的是,FlexE的TDM时隙划分粒度可以是66比特的,时隙之间按照66比特进行间插,一个66比特可以对应承载一个64B/66B码块。图15示例性示出了本申请实施例提供的一种FlexE帧的结构示意图,如图15所示,一个FlexE帧可以包含8行,每行第一个码块的位置是承载FlexE的开销的区域(承载FlexE的开销的区域也可以称为帧头区域)(用于承载FlexE开销的区域所承载的码块可以称为开销码块)。每行1个开销码块,8行中包括的8个开销码块构成一个FlexE开销帧,32个FlexE开销帧又构成一个FlexE开销复帧。如图15所示,承载FlexE开销以外区域,可以进行TDM时隙划分。比如以64B/66B编码进行编码的码块为例,对开销以外的区域进行时隙划分时,以66比特为粒度进行划分,每行对应20*1023个66比特的承载空间,接口可以为划分20个时隙。
对时隙划分之后，可以结合接口的带宽和时隙的数量，确定单个时隙对应带宽。结合图15所示的时隙的划分，针对100吉比特以太网(Gigabit Ethernet,GE)的接口，100GE接口的带宽为100Gbps(Gbps为单位，为每秒1000兆位)，则每个时隙的带宽可以约为100Gbps带宽除以20，约为5Gbps。一个FlexE Group可以包含至少一个接口，例如t个100Gbps接口，则FlexE Group作为NNI时总的时隙数为t*20。
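下面用一段示意性的Python草图复述上述单时隙带宽与FlexE Group总时隙数的计算(数值沿用上文100GE接口、20个时隙的示例，仅为算术示意)：

# 示意性计算: 接口带宽除以时隙数得到单时隙带宽; t 个100G接口捆绑成FlexE Group时总时隙数为 t*20。
def slot_bandwidth_gbps(interface_gbps=100, slots_per_interface=20):
    return interface_gbps / slots_per_interface      # 100/20 = 5 Gbps

def group_total_slots(t, slots_per_interface=20):
    return t * slots_per_interface                   # t 个接口捆绑后的总时隙数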
上述示例中仅示例性示出了一种时隙的划分方式,本领域技术人员可知,也可以存在其它时隙划分方式,当划分多个时隙时,该多个时隙中可以存在至少两个时隙,该两个时隙中每个时隙对应的带宽不同。比如,一个时隙的带宽为5Gbps,另一个时隙的带宽为10Gbps等等。针对时隙的划分方式,以及每个时隙的带宽的确定方式,本申请实施例中不做限制。
在时隙划分之后,本申请实施例中可以建立任一第一码块流和第二码块流中的每条码块流和时隙的对应关系。可选地,为任一码块流分配时隙,也可以描述为,为承载码块流的管道分配时隙。一种可选地实施方式中,可以根据承载码块流的管道的业务带宽,以及每个时隙对应的带宽,确定为该管道分配的时隙的数量。可选地,也可以描述为可以根据承载码块流的管道的业务速率,以及每个时隙对应的速率,确定为该管道分配的时隙的数量。
可选地,在FlexE系统架构中,若干个物理接口可以级联捆绑构成FlexE Group,FlexE  Group所有时隙中的任意多个时隙可以组合承载一个以太网逻辑端口。例如,当单个时隙的带宽为5Gbps时,带宽为10GE的第一码块流需要两个时隙,带宽为25GE的第一码块流需要5个时隙,带宽为150GE的第一码块流需要30个时隙。若编码方式采用64B/66B编码,则以太网逻辑端口上可见的仍为顺序传输的66比特码块流。
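下面给出一段示意性的Python草图(仅为按上述原则的估算示意，假设各时隙带宽相同)，用于演示根据码块流带宽估算所需时隙数量：

# 示意性草图: 按"时隙总带宽不小于码块流带宽"的原则估算需要分配的时隙数量, 向上取整。
import math

def slots_needed(stream_bandwidth_gbps, slot_bandwidth_gbps=5):
    return math.ceil(stream_bandwidth_gbps / slot_bandwidth_gbps)

# 例如: slots_needed(10) == 2, slots_needed(25) == 5, slots_needed(150) == 30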
关于接口上的时隙分配,一条码块流配置的时隙总带宽(例如时隙数量与具有相同带宽的时隙对应的带宽的乘积)不小于该码块流的有效带宽。码块流的有效带宽可以是码块流的的除空闲码块以外的其他数据码块和控制类码块占用的总带宽。也就是说,码块流中需要包含一定预留码块,比如空闲(IDLE)等,以便通过空闲码块的增删可以将码块流适配到分配的时隙(或管道)中。基于此,本申请实施例中,可选地,一条码块流配置的时隙总带宽不小于该码块流的有效带宽;或者,可选地,一条码块流配置的时隙的数量与单个时隙对应的带宽的乘积不小于该码块流的有效带宽。
如图15所示,划分的时隙中每个时隙可以带有标识,划分的时隙之间存在排序关系,比如图15中的20个时隙,可以依次标识为时隙1、时隙2…时隙20等等。20个时隙中分配给某一码块流的时隙可以灵活配置,比如可以根据时隙所属的码块流标识对20个时隙的分配进行标识。本申请实施例中,若为一个码块流分配属于该码块流的多个时隙时,所分配的多个时隙的可以是连续的,也可以是不连续的,比如可以为一个码块流分配时隙0和时隙1这两个时隙,也可以为该码块流分配时隙0和时隙3这两个时隙,本申请实施例不做限制。
一种可选地实施方式中,本申请实施例的第二码块流中的数据单元对应的第一码块流的承载时隙,一条第一码块流配置的时隙总带宽(例如时隙数量与具有相同带宽的时隙对应的带宽的乘积)不小于该第一码块流的有效带宽。第一码块流的有效带宽可以是第一码块流的的除空闲码块以外的其他数据码块和控制类码块占用的总带宽。也就是说,第一码块流中需要包含一定预留码块,比如空闲(IDLE)等,以便通过空闲码块的增删可以将码块流适配到分配的时隙(或管道)中。基于此,本申请实施例中,可选地,一条第一码块流配置的时隙总带宽不小于该第一码块流的有效带宽;或者,可选地,一条第一码块流配置的时隙的数量与单个时隙对应的带宽的乘积不小于该第一码块流的有效带宽。
如图15所示,本申请实施例的第二码块流中的数据单元对应的第一码块流的承载所划分的时隙中,每个时隙可以带有标识,划分的时隙可以存在确定排序,比如图15中的20个时隙,可以依次标识为时隙1、时隙2…时隙20等等。20个时隙中分配给某一码块流的时隙可以灵活配置,比如可以根据时隙所属的第一码块流标识对20个时隙的分配进行标识。本申请实施例中,若为一个第一码块流分配属于该码块流的多个时隙时,所分配的多个时隙的可以是连续的,也可以是不连续的,比如可以为一个第一码块流分配时隙0和时隙1这两个时隙,也可以为该第一码块流分配时隙0和时隙3这两个时隙,本申请实施例不做限制。
第一码块流对应的时隙的总带宽可以是根据第一码块流对应的时隙的数量,以及为第一码块流对应的每个时隙所分配的带宽确定的。比如,第一码块流对应的时隙的总带宽可以是第一码块流对应的时隙的数量和为第一码块流对应的每个时隙所分配的带宽的乘积。在上述步骤4101之后,在上述步骤4102之前,针对包含了预设比例空闲码块的Q条第一码块流中的第一码块流,执行:根据第一码块流的带宽与第一码块流对应的时隙的总带宽,对第一码块流执行空闲IDLE码块的增删处理。
IDLE的增删处理是实现速率适配的一个有效手段。下面以FlexE为例进行介绍，每个逻辑端口可以承载一个以太网媒体访问控制(Medium Access Control,MAC)报文数据单元序列流。在传统以太网接口上，MAC报文数据单元序列流的报文可以有起始和结束。报文间为分组间隙(Inter-Packet Gap,IPG)，可选地，可以在间隙中填充空闲(IDLE)。MAC报文数据单元序列流及IDLE一般都要经过编码扰码等处理后再进行传输，例如1GE采用的8B/10B编码；10GE、25GE、40GE、50GE、100GE、200GE和400GE等则一般采用64B/66B编码，编码后MAC报文数据单元序列流及IDLE转化为64B/66B码块。
一种可能地实现方式中,编码后的码块可以包括与MAC报文数据单元对应的起始码块(英文为Start码块,起始码块可以为S码块)、数据码块(英文为Data码块,可以简写为D码块)、结束码块(英文为Termination码块,结束码块可以为T码块),以及与空闲码块(英文为IDLE码块,可以简写为I码块)。
结合图15的示例,100GE接口基于64B/66B编码块引入FlexE开销后,剩余带宽再划分20个时隙,2个时隙还能确保装下一个10GE的带宽的码块流,一种可能地实现方式中,FlexE可以通过IDLE码块的增删进行FlexE client速率适配。例如,包含空闲码块的码块流的带宽为11GE,但有效带宽小于两个FlexE时隙的10G带宽时,为该第一码块流分配的两个5G时隙,总带宽为10G,则可以删除码块流中的部分的IDLE码块;第一码块流的带宽为9G时,为该码块流分配的时隙的总带宽为10G,则可以在第一码块流中增加更多的IDLE码块。可选地,在FlexE中,可以直接操作码块,也可以操作解码以后的业务报文流和IDLE。
本申请实施例中,可选地,第二码块流需要预先配置一定数量的空闲码块。第二码块流的传输过程中,可选地,也可以根据承载第二码块流的管道的带宽和第二码块流的速率差异,对第二码块流进行IDLE的增删。具体来说,可以针对第二码块流的相邻的数据单元之间的IDLE码块进行IDLE码块的增删,以使第二码块流与承载第二码块流的管道的带宽匹配。比如,第二码块流的速率小于承载第二码块流的管道的带宽时,可以在第二码块流的数据单元之间增加一些IDLE码块,当第二码块流的速率不小于承载第二码块流的管道的带宽时,可以将预先配置在第二码块流的数据单元之间的IDLE码块删除。
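下面用一段示意性的Python草图概括上述基于带宽比较的IDLE码块增删方向的判断(仅为思路示意，非本申请的规范实现)：

# 示意性草图: 根据码块流的带宽与其时隙总带宽的差额, 决定增删IDLE码块的方向,
# 其中 时隙总带宽 = 时隙数量 * 单时隙带宽。
def idle_adjustment(stream_gbps, slot_count, slot_gbps):
    pipe_gbps = slot_count * slot_gbps          # 为该码块流分配的时隙总带宽
    if stream_gbps > pipe_gbps:
        return "delete_idle"                    # 码块流偏快: 删除部分IDLE码块
    elif stream_gbps < pipe_gbps:
        return "insert_idle"                    # 码块流偏慢: 在数据单元之间增加IDLE码块
    return "no_change"

# 例如: idle_adjustment(11, 2, 5) -> "delete_idle"; idle_adjustment(9, 2, 5) -> "insert_idle"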
可选地,本申请实施例中的第二码块流承载第一码块流的时隙和第一码块流的对应关系可以是事先划分,并配置在复用侧的第一通信设备和解复用侧的第二通信设备中,也可以由复用侧发送给解复用侧,或者解复用侧发送给复用侧,或者由集中服务器确定了时隙和第一码块流的对应关系后,把时隙和第一码块流的对应关系发送给复用侧的第一通信设备和解复用侧的第二通信设备。发送时隙和第一码块流的对应关系可以是周期性发送的。一种可选地实施方式中,第二码块流的第一预设码块中承载时隙分配指示信息;时隙分配指示信息用于指示Q条第一码块流和时隙的对应关系。也就是说时隙分配指示信息用于指示Q条第一码块流中每条第一码块流所分配的时隙的标识。
图16示例性示出了本申请实施例提供的一种第二码块流传输时隙分配指示信息的结构示意图，如图16所示，当头码块为O码块时，O码块的结构可以参见上述图10所示的内容，可以在类型字段为0x4B的O码块的3个可用字节D1~D3承载时隙分配指示信息，比如图16示出的块类型为0x4B且O码为0xA的头码块的码字的D1~D3承载时隙对应的第一码块流的标识，如图16所示，每个码块的D2和D3两个字节中的每个字节分别对应承载一个时隙对应的第一码块流的标识。
D2字节和D3字节中8个比特有256个ID标识空间,0x00或者0xFF可以用于标识该时隙未分配,则剩余的254个数值标记可以任意取其中的部分用于32个时隙的组合分配标记。可选地,D1字节中的前4比特用作复帧指示,连续的16个第二码块流的数据单元的封装开销块,复帧指示MFI数值从0~15,(十六进制从0~F),其中,MFI=0的块可以指示时隙slot0对应的第一码块流的标识和时隙slot1对应的第一码块流的标识;MFI=1的块可以指示时隙slot2对应的第一码块流的标识和时隙slot3对应的第一码块流的标识,以此类推。
如图16所示，时隙0用于承载第一条第一码块流(一种可选地实施方式中，第一条第一码块流也可以写为client1)，如果该第一条第一码块流的标识为0x01，则在图16的MFI=0的码块的D2字段填入0x01；时隙1用于承载第二条第一码块流(一种可选地实施方式中，第二条第一码块流也可以写为client 2)，如果其ID标识为0x08，则在图16的MFI=0的码块的D3字段填入0x08；时隙2也用于承载该第二条第一码块流(client 2)，则在图16的MFI=1的码块的D2字段同样填入0x08，该示例中，时隙1和时隙2被分配和标识为同一个第一码块流。一个第一码块流被分配多个时隙时，该第一码块流的码块或者比特的发送顺序与其在第二码块流中的发送顺序一致。可选地，若时隙未分配，则可以用0x00或者0xFF指示，比如时隙4未分配，则MFI=2的块中指示时隙slot4对应的第一码块流的标识的字段可以填充0x00或者0xFF。可选地，时隙分配指示信息也可以在相邻数据单元之间的码块上进行传输，比如相邻数据单元之间包括的控制类型的码块等。
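下面给出一段示意性的Python草图(字段布局沿用上文示例，其中的数据结构为说明用的假设)，用于演示解复用侧如何从连续开销码块的MFI、D2、D3字段恢复时隙与第一码块流标识的对应关系：

# 示意性草图: 从连续的开销/OAM码块中恢复"时隙 -> 第一码块流标识"的对应关系。
# 假设每个码块给出 (MFI, D2, D3), MFI=k 的码块指示时隙 2k 和 2k+1 的流标识,
# 0x00 或 0xFF 表示该时隙未分配。字段布局以上文示例为准, 并非标准定义。
UNASSIGNED = {0x00, 0xFF}

def parse_slot_allocation(oam_blocks):
    slot_map = {}
    for mfi, d2, d3 in oam_blocks:
        for offset, client_id in ((0, d2), (1, d3)):
            slot = 2 * mfi + offset
            slot_map[slot] = None if client_id in UNASSIGNED else client_id
    return slot_map

# 例如: parse_slot_allocation([(0, 0x01, 0x08), (1, 0x08, 0x00)])
# -> {0: 1, 1: 8, 2: 8, 3: None}, 即时隙1和时隙2承载同一条第一码块流(标识0x08)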
本申请实施例中,一种可选地实施方式中,为了可以使第二码块流的一个数据单元中装入整数个第一码块流中的码块(这种形式也可以描述为边界对齐,或者说可以由第二码块流的数据单元可以确定每个时隙边界和码块边界),可以预先通过计算确定第二码块流中的一个数据单元中包括的第一类数据码块的数量,用于承载Q条第一码块流。可选地,本申请实施例提供一种方案,第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据N1和M2的公倍数与M2确定的;比如,一个数据单元中包括的第一类数据码块的数量至少是N1和M2的公倍数与M2的商。第一类数据码块的数量可以大于N1和M2的公倍数与M2的商。或者,第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据N2和M2的最小公倍数与M2确定的;比如,一个数据单元中包括的第一类数据码块的数量至少是N1和M2的最小公倍数与M2的商,一个数据单元中包括的第一类数据码块的数量大于N1和M2的最小公倍数与M2的商,从而可以在第一类数据码块中承载其它未分配给第一码块流的时隙所对应的码块的比特,比如一个时隙未被分配,则可以在第一类数据码块中承载该时隙对应的预置码块(例如IDLE码块或者Error码块)对应的比特。可选地,本申请实施例中数据码块所定义的第一类数据码块可以是指承载各个时隙对应的码块的数据码块,第二类数据码块可以用于承载其它的信息比特(比如时隙分配指示信息、标识指示信息和复用指示信息等中的任一项或任多项)。第二类数据码块在一个数据单元中的位置可以是固定的,或者配置后通知给复用侧的通信设备和解复用侧的通信设备的。
本申请实施例中可选地,第一码块流的编码方式和第二码块流的编码方式可以相同,也可以不同。下述内容中为了介绍方便,以第一码块流和第二码块流都采用64B/66BB编 码方式为例进行介绍。下面以第一码块流为64B/66BB编码类型和第二码块流为64B/66BB编码类型为例进行举例说明。
图17示例性示出了本申请实施例提供的一种码块流复用的结构示意图,如图17所示,将第一码块流5201和第一码块流5301复用到第二码块流5401。也可以描述为图17中的将承载第一码块流5201的管道5101和承载第一码块流5301的管道5102复用到承载第二码块流5401的管道5103中。若把承载第一码块流的管道称为低阶管道,把承载第二码块流的管道称为高阶管道,则图17中则是将两个低阶管道(承载第一码块流5201的管道5101和承载第一码块流5301的管道5102)复用到高阶管道(承载第二码块流5401的管道5103)。
第一码块流的编码类型可以是多种,比如可以是M/N编码类型,也可以是非M/N编码类型,该示例中以第一码块流为64B/66BB编码类型为例进行介绍。如图17所示,第一码块流5201中包括多个码块5202,每个码块5202包括同步头区域5206和非同步头区域5207。图18示例性示出了本申请实施例提供的一种第一码块流的结构示意图,如图17和图18所示,第一码块流5201中可以包括多个数据单元5208,图18中仅示例性示出了第一码块流5201中的一个数据单元5208的结构示意图。如图18所示,数据单元5208可以包括头码块5202、一个或多个数据码块5203,以及尾端码块5204。也就是说,第一码块流5201包括的码块5205可以是控制码块(比如头码块5202和尾端码块5204),也可以是数据码块5203,也可以是IDLE码块。本申请实施例中的第一码块流的码块也可以是指第一码块流的相邻数据单元之间包括的码块,比如第一码块流的相邻数据单元之间包括的IDLE码块。码块5205的同步头区域5206可以承载码块的类型指示信息。比如当码块5205是数据码块5203时,该码块5205中的同步头区域5206承载的码块的类型指示信息可以是01,用于指示该码块5205是数据码块;再比如,当码块5205是头码块5202或尾端码块5204时,该码块5205中的同步头区域5206承载的码块的类型指示信息可以是10,用于指示该码块5205是控制码块。
如图17所示,第一码块流5301中包括多个码块5302,每个码块5302包括同步头区域5306和非同步头区域5307。图18示例性示出了一种第一码块流的结构实现,如图17和图18所示,第一码块流5301中可以包括多个数据单元5308,图18中仅示例性示出了第一码块流5301中的一个数据单元5308的结构示意图。如图18所示,数据单元5308可以包括头码块5302、一个或多个数据码块5303,以及尾端码块5304。也就是说,第一码块流5301包括的码块5305可以是控制码块(比如头码块5302和尾端码块5304),也可以是数据码块5303,也可以是IDLE码块。本申请实施例中的第一码块流的码块也可以是指第一码块流的相邻数据单元之间包括的码块,比如第一码块流的相邻数据单元之间包括的IDLE码块。码块5305的同步头区域5306可以承载码块的类型指示信息。比如当码块5305是数据码块5303时,该码块5305中的同步头区域5306承载的码块的类型指示信息可以是01,用于指示该码块5305是数据码块;再比如,当码块5305是头码块5302或尾端码块5304时,该码块5305中的同步头区域5306承载的码块的类型指示信息可以是10,用于指示该码块5305是控制码块。
该示例中，比如为第一码块流5201分配了时隙(英文可以写为slot)0，为第一码块流5301分配了时隙1和时隙2。该示例中一共划分32个时隙，其余时隙3至时隙31均未被分配。未分配的时隙可以用固定图案码块加以填充。例如对64/66b码块，可以使用空闲(IDLE)码块、错误(Error)码块或者其他定义码块的确定图案码块加以填充。
图18示例性示出了根据时隙和第一码块流的对应关系从第一码块流取出的码块的结构示意图,如图18所示,时隙0至时隙31的排序是根据时隙的标识进行排序的,时隙的标识是0至31。因此第一通信设备根据时隙0至时隙31的排序,依序循环的获取时隙0至时隙31对应的码块,如图18所示,先获取时隙0对应的码块,由于时隙0被分配给第一码块流5201,因此从第一码块流5201中获取一个码块5205;接着获取时隙1对应的码块,由于时隙1被分配给第一码块流5301,因此从第一码块流5301中获取一个码块5305;接着获取时隙2对应的码块,由于时隙2被分配给第一码块流5301,因此从第一码块流5301中获取一个码块5305;接着获取时隙3对应的码块,由于时隙3至时隙31均未被分配,因此可以全部填充IDLE码块等确定图案码块。之后再循环获取时隙0至时隙31对应的码块。本申请实施例中可以将图18中各个时隙对应的码块对应的序列称为待处理码块序列。
图19示例性示出了本申请实施例提供的一种第二码块流的结构示意图,如图19所示,进入承载第二码块流5401的管道5103的第二码块流5401可以包括一个或多个数据单元5408,图19中示例性示出了一个数据单元5408的结构示意图。如图19所示,数据单元5408可包括多个码块5405,码块5405可以包括同步头区域5406和非同步头区域5407。如图19所示,数据单元5408可以包括头码块5402、一个或多个数据码块5403,以及尾端码块5404。也就是说,第一码块流5401包括的码块5405可以是控制码块(比如头码块5402和尾端码块5404),也可以是数据码块5403。码块5405的同步头区域5406可以承载码块的类型指示信息。比如当码块5405是数据码块5403时,该码块5405中的同步头区域5406承载的码块的类型指示信息可以是01,用于指示该码块5405是数据码块;再比如,当码块5405是头码块5402或尾端码块5404时,该码块5405中的同步头区域5406承载的码块的类型指示信息可以是10,用于指示该码块5405是控制码块。
如图19所示,本申请实施例中将取出或生成的各个时隙对应的码块放置入第二码块流的净荷区域,可以放置在头码块、尾端码块、第一类数据码块和第二类数据码块中的任一项或任多项的净荷区域,该示例中,以将取出或生成的各个时隙对应的码块放置入第二码块流的第一类数据码块为例进行介绍。
本申请实施例中第二码块流的一个数据单元中包括的数据码块的数量可以灵活的确定,以第一码块流和第二码块流都是64B/66B编码为例进行说明,本申请实施例中提供的方案中,第二码块流的一个数据单元中包括的用于承载所有时隙对应的码块的第一类数据码块的数量为Hb个,则可以基于该Hb个第一类数据码块的净荷区域(一个第一类数据码块的净荷区域承载H比特)对应的比特的部分或者全部Hlcm比特(Hb个第一类数据码块的净荷区域的全部比特数量为Hp,Hlcm小于等于Hp),进行TDM时隙划分成若干低阶时隙颗粒,基于所划分的时隙颗粒的组合,作为低阶管道(低阶管道为承载第一码块流的管道)用于承载第一码块流中的64B/66B码块或者对第一码块流中的码块进行压缩后的码块。此处对Hlcm比特的TDM时隙划分与步骤4101后得到的待处理码块序列的TDM时隙划分等效对应。例如,当第一码块流的编码类型为64B/66B且不采取压缩处理时(压缩处理也可以称为转码压缩处理),高阶承载管道(高阶管道为承载第二码块流的管道)第二码块流的数据单元中的Hb个第一类数据码块的净荷区域(一个第一类数据码块的净荷区域承载H比特)对应的比特的部分或者全部Hlcm比特(Hb个第一类数据码块的净荷区域的全部比特数量为Hp,Hlcm小于等于Hp),对应g个66b颗粒可以划分p个时隙,p 可以被g整除,g和p均为正整数。当采取压缩处理时,高阶承载管道(高阶管道为承载第二码块流的管道)第二码块流的数据单元中的Hb个第一类数据码块的净荷区域(一个第一类数据码块的净荷区域承载H比特)对应的比特的部分或者全部Hlcm比特(Hb个第一类数据码块的净荷区域的全部比特数量为Hp),Hlcm小于等于Hp。可选地,Hp与g1个M2/N2比特净荷颗粒对应,g1*N2为第二码块流中的一个数据单元中的所有第一类数据码块的净荷区域的总比特数的全部。Hlcm比特g3*N3比特对应g3个M3/N3比特块(例如512B/514B编码比特块),一个M3/N3码块颗粒等效对应待处理码块流的g3*k个66b颗粒(例如512B/514B编码比特块等效于4个66b颗粒),等效对应待处理码块流划分p个时隙,p可以被g整除,g和p均为正整数。
本申请实施例提供一种可选地用于确定第二码块流中一个数据单元中包括的数据码块(或者说用于承载第一码块流的第一类数据码块)的数量的实施方式。在该实施方式中以第一码块流为M1/N1比特编码方式,第二码块流为M2/N2比特编码方式,且不考虑压缩处理为例进行解释。由于第一码块流中每个码块是N1比特,需要装入第二码块流的净荷区域,第二码块流的数据码块的净荷区域是M2比特,则计算N1和M2的公倍数,第二码块流的一个数据单元中包括的数据码块的数量可以是N1和M2的公倍数与N2的商的整数倍。一种可选地实施方式中,第二码块流的一个数据单元中包括的数据码块的数量可以是N1和M2的最小公倍数与N2的商的整数倍。
结合图19举个例子,比如,若第一码块流和第二码块流的编码类型都是64B/66B编码,则lcm(66,64)的值为2112,lcm(66,64)表示求66和64的最小公倍数。第二码块流的一个数据单元中包括的数据码块的数量可以是33(33是66和64的公倍数2112和第二码块流的数据码块的净荷区域的比特64的商)的整数倍。假设第二码块流中的一个数据单元包括33个数据码块,则表示第二码块流的33个数据码块承载32(32是66和64的公倍数2112和第一码块流的一个码块的比特66的商)个时隙对应的码块;当为一个时隙分配了第一码块流时,该时隙对应的码块是指从该时隙对应的第一码块流中取出的码块;当没有为该时隙分配第一码块流时,该时隙对应的码块是指确定图案码块。
时隙数量的划分可以存在多种方式，本申请实施例提供一种可能地实施方式，该实施方式中，计算第二码块流中的一个数据单元中的数据码块的净荷区域的比特，比如上述结合图19所举的示例中，第二码块流中的一个数据单元中的数据码块的净荷区域的比特是2112(2112是第二码块流的一个数据单元中包括的数据码块的数量33与该数据码块中的非同步头区域的比特64的乘积)比特，当该2112比特全部用于承载第一码块流的码块时，最多可以承载32个64B/66B码块，因此时隙最多可以划分为32个，时隙的数量也可以是能整除32的数值，比如划分16个时隙、8个时隙或4个时隙等等。
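下面用一段示意性的Python代码(需Python 3.9及以上版本提供的math.lcm)复述上述基于公倍数的对齐关系，仅作算术验证：

# 示意性计算: lcm(66,64)=2112, 2112/64=33个第一类数据码块的净荷恰好装下 2112/66=32 个
# 64B/66B码块, 时隙数量可取能整除32的值。
import math

N1, M2 = 66, 64
lcm = math.lcm(N1, M2)                 # 2112
blocks_per_unit = lcm // M2            # 33 个第一类数据码块
slots_per_unit = lcm // N1             # 对应 32 个以时隙为粒度的64B/66B码块
assert blocks_per_unit * M2 == slots_per_unit * N1 == 2112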
可选地，第二码块流的一个数据单元中所有第一类数据码块的净荷区域的总比特数也可以不受上述公倍数关系的约束，比如在上述示例中第二码块流的一个数据单元中包括的所有第一类数据码块的净荷区域的总比特数大于2112个比特，如此当其中的2112比特用于承载第一码块流的码块对应的比特时，多余的比特可以保留不用，或者用于承载一些其它指示信息。实际应用中，可选地，在确定第二码块流的一个数据单元中包括的所有数据码块(包括所有第一类数据码块和所有第二类数据码块)的净荷区域的比特数量的时候可以考虑传输效率和预留的IDLE。第二码块流一个数据单元的所有数据码块的净荷区域的总比特数量越大，该数据单元则越长，开销越低。
如图19所示，将待处理码块序列中的码块对应的所有比特依序放入第二码块流的第一类数据码块的净荷区域，可以看出，时隙0对应的码块5205是以64B/66B编码类型编码的，得到的码块5205的总比特数是66比特，而第二码块流5401的一个数据码块5403的非同步头区域5407占的比特数是64比特，因此，第二码块流的一个数据码块5403承载时隙0对应的码块5205的前64比特，第二码块流的另一个数据码块5403承载时隙0对应的码块5205的后2比特，以及时隙1对应的码块5305的前62比特，以此类推。这个实施例中可以看出，当第一码块流的一个码块的总比特数量大于第二码块流的一个第一类数据码块的净荷区域承载的比特数量时，第一码块流中的一个码块对应的所有比特可以承载在第二码块流的两个数据码块的净荷区域。
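下面给出一段示意性的Python草图(码块以比特列表表示，仅为装载方式的示意)，用于演示将若干66比特码块的比特依序连续装入64比特净荷区域：

# 示意性草图: 将若干66比特码块(含同步头)的比特依序连续装入64比特净荷区域,
# 对应上文"一个66比特码块跨两个数据码块净荷承载"的装载方式。
def pack_into_payloads(blocks_66bit, payload_bits=64):
    bit_stream = [b for block in blocks_66bit for b in block]   # 依序拼接所有比特
    payloads = [bit_stream[i:i + payload_bits]
                for i in range(0, len(bit_stream), payload_bits)]
    return payloads

# 32个66比特码块共2112比特, 恰好装入33个64比特净荷区域:
# len(pack_into_payloads([[0] * 66] * 32)) == 33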
为了进一步提高数据传输效率,提高封装效率,避免逐层封装引入过度的带宽膨胀,本申请实施例提供另外一种可选地数据传输方案,在上述步骤4102中,将待处理码块序列对应的比特放入待发送的第二码块流,包括:将待处理码块序列中连续R个码块进行压缩,得到压缩后码块序列;其中,R为正整数;将压缩后码块序列对应的比特放入待发送的第二码块流。图20示例性示出了本申请实施例提供的另一种第二码块流的结构示意图,如图20所示,图20是在图19的基础上进行的改进,图20中,将获取的各个时隙对应的码块组成的序列称为待处理码块序列,将待处理码块序列进行压缩处理,得到压缩后码块序列,之后将压缩后码块序列放置入第二码块流,可选地,可以将压缩后码块序列放入到第二码块流的第一类数据码块的净荷区域。
一种可选地实施方式中,可以将第一码块流的一个码块的同步头区域和非同步头区域对应的比特连续放入第二码块流的净荷区域。若待处理码块序列未经压缩直接放入第二码块流,则是指待处理码块序列中的所有码块的同步头区域和非同步区域的所有比特是连续放入第二码块流中的。若待处理码块序列经过压缩放入第二码块流,则是指压缩后码块序列中的所有码块的同步头区域和非同步区域的所有比特是连续放入第二码块流中的。
也可以说,若待处理码块序列未经压缩直接放入第二码块流,则是指待处理码块序列中的取自第一码块流中的一个码块的同步头区域和非同步区域的所有比特是连续放入第二码块流中的。若待处理码块序列经过压缩放入第二码块流,则是指压缩后码块序列中的取自第一码块流中的一个码块的同步头区域和非同步区域的所有比特在压缩后码块序列中对应的比特是连续放入第二码块流中的。
下面以待处理码块序列压缩为压缩后码块序列中的一个码块为例说明,若待处理码块序列中的码块未经过压缩直接放入第二码块流中,则待处理码块序列中的一个码块的情况与该示例中待处理码块序列压缩为压缩后码块序列中的一个码块的情况类似。该例子中,结合图20进行举例说明,如图20所示,压缩后码块序列中的时隙0对应的码块5205包括的所有比特(比如若该码块包括同步头区域和非同步区域,则该码块对应的所有比特是指该码块的同步头区域和非同步头区域对应的所有比特)是连续放入第二码块流的第一类数据码块的净荷区域的。也就是说,仅针对第二码块流的一个数据单元中的所有第一类数据码块的净荷区域来说,比如可以单纯的仅看第二码块流中一个数据单元包括的第一类数据码块的序列,仅针对该第一类数据码块的序列中的净荷区域序列来说,压缩后码块序列中包括的一个时隙对应的码块的所有比特(可以是取自第一码块流的一个码块的同步头区域和非同步头区域)是连续放入第二码块流的一个数据单元中的第一类数据码块的序列中的净荷区域序列中的一个或多个净荷区域的。上述示例中,也可以描述为仅针对第二码块 流的一个数据单元中的所有第一类数据码块的净荷区域来说,比如可以单纯的仅看第二码块流中一个数据单元包括的第一类数据码块的序列,仅针对该第一类数据码块的序列中的净荷区域序列来说,压缩后码块序列中包括的32个时隙对应的所有码块的所有比特是连续放入第二码块流的一个数据单元中的第一类数据码块的序列中的净荷区域序列中的一个或多个净荷区域的。可选地,该示例中,第二码块流中的一个数据单元中包括的相邻的两个第一类数据码块之间可以包括一些其它的码块,比如控制码块、第二类数据码块等等,也就是说该第一类数据码块的序列中的净荷区域序列中是不包括除第一类数据码块之外的码块的净荷区域的。该示例中,是以待处理码块序列放入第一类数据码块的净荷区域为例进行说明的,若待处理码块序列对应的比特也可以放入到头码块、尾端码块等等上,则上述的净荷区域序列可以说是第二码块流中一个数据单元包括的用于承载待处理码块序列对应的比特的所有码块的净荷区域构成的净荷区域序列。
在图20中可以看出,本申请实施例中获取每个时隙对应的码块之后,对码块进行压缩,在压缩后码块序列中,每个比特对应的时隙与该比特在待处理码块序列中对应的时隙相同。举个例子,比如待处理码块序列为64B/66B编码,压缩后码块序列为64/65比特编码,则待处理码块序列中一个64B/66B码块对应时隙2,该64B/66B码块在压缩后码块序列中对应的64B/65B码块也对应时隙2。也可以说时隙2在待处理码块序列中对应一个64B/66B码块,在压缩后码块序列中对应一个64B/65B码块。
压缩处理的方式有多种,比如可以对待处理序列中的每个码块单独进行压缩,比如将待处理序列中每个码块的同步头区域由2比特压缩成为1比特,比如将“10”压缩为“1”,将“01”压缩为“0”。待处理码块序列中的码块编码为64B/66B时,压缩后码块序列的编码形式变为64/65比特编码。同步头区域为“10”的码块表示该码块的类型为控制类型。
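下面用一段示意性的Python草图演示上述最简单的同步头压缩方式，即64B/66B码块压缩为64B/65B形式(仅为思路示意，码块以比特字符串表示)：

# 示意性草图: 对单个64B/66B码块做同步头压缩(2比特->1比特), 得到64B/65B形式,
# 即"10"(控制)压为"1"、"01"(数据)压为"0"的压缩方式, 以及对应的解压。
def compress_66_to_65(block_66):
    sync, payload = block_66[:2], block_66[2:]        # 同步头2比特 + 非同步头64比特
    assert sync in ('01', '10')
    return ('1' if sync == '10' else '0') + payload   # 得到65比特

def decompress_65_to_66(block_65):
    return ('10' if block_65[0] == '1' else '01') + block_65[1:]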
另一种可选地压缩处理的方式中,由于目前广泛使用的控制类型的码块的类型字段包括0x1E、0x2D、0x33、0x4B、0x55、0x66、0x78、0x87、0x99、0xAA、0xB4、0xCC、0xD2、0xE1和0xFF。0x00等其他数值保留未用。码块的类型字段占用了1字节,因此可以将控制类型的码块的类型字段从8比特压缩为4比特,比如将“0x1E”压缩为“0x1”,将“0x2D”压缩为“0x2”等。如此,节省出的4比特空间可以用于多个码块的组合顺序识别,如此可以获得更高的映射效率。一个典型的例子,此类压缩处理方式之一,可以将待处理序列中连续的多个码块进行压缩。比如,一种可选地实施方式中,可以将待处理码块序列中的4个64B/66B码块转换为一个压缩后码块序列中的256B/257B码块,比如可以通过第1个比特来区分该256B/257B码块是否包含控制块。图21示例性示出了本申请实施例提供的一种压缩处理方式的示意图,如图21所示,该256B/257B码块的第1个比特为1则表示该256B/257B码块不包含待处理序列中的控制类型的码块,全部为待处理序列中的数据类型的码块,如此,待处理码块序列中的4个64B/66B码块的共8比特同步头可以压缩为1比特。图22示例性示出了本申请实施例提供的一种压缩处理方式的示意图,如图22所示,该256B/257B码块的第1个比特为0,则表示该256B/257B码块中包括至少一个待处理序列中的控制类型的码块,之后在该256B/257B码块中包括的第一个64B/66B码块的类型字段的4个比特可以用于依次指示该256B/257B码块中包括的4个来自待处理码块序列的4个64B/66B码块的类型,比如该256B/257B码块中包括的4个来自待处理码块序列的4个64B/66B码块的类型均为控制类型,则该4比特可以依次为“0000”,如此可以将该256B/257B码块中包括的4个来自待处理码块序列的4个64B/66B码块的同步头 区域压缩掉,也就是说,节省出的码块的类型字段的4比特空间可以用于多个码块的组合顺序识别。
一种可选地实施方式中,将待处理码块序列中连续R个码块进行压缩,若R大于1时,连续R个码块中至少包括两个码块,取出两个码块的两个第一码块流是两个不同的第一码块流。在这种可选地实施方式中,比如图21的示例,R为4,因此对待处理码块序列中连续4个进行压缩时,该连续的4个码块中存在至少两个码块,该两个码块对应的两个第一码块流不同,比如一个码块对应的第一码块流为上述图18中的第一码块流5201,另一个码块对应的第一码块流为上述图18中的第一码块流5301。
本申请实施例中第二码块流中一个数据单元中包括的第一类数据码块的数量不限定,可以根据实际情况确定,一种可选地实施方式中,由于将待处理码块序列进行了压缩,若要实现第二码块流和压缩后码块序列的对齐(即第二码块流中的一个数据单元中的可以承载整数个压缩后码块序列中的码块,或者说可以由第二码块流的数据单元可以确定每个时隙边界和码块边界),则第二码块流中的一个数据单元中包括的第一类数据码块的数量的计算方法需要根据压缩后码块序列的编码方式进行计算。具体计算方法是将上述计算方法中的待处理码块序列的编码形式的参数替换为压缩后码块序列的编码形式的参数。具体来说,压缩后码块序列的编码形式为M3/N3;M3为正整数,N3为不小于M3的整数。可选地,本申请实施例提供一种方案,第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据N3和M2的公倍数与M2确定的;比如,一个数据单元中包括的第一类数据码块的数量至少是N3和M2的公倍数与M2的商。第一类数据码块的数量可以大于N3和M2的公倍数与M2的商,一个数据单元中的第一类数据码块的数量可以是N3和M2的公倍数与M2的商的整数倍。或者,第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据N2和M2的最小公倍数与M2确定的;比如,一个数据单元中包括的第一类数据码块的数量至少是N3和M2的最小公倍数与M2的商,一个数据单元中包括的第一类数据码块的数量大于N3和M2的最小公倍数与M2的商,一个数据单元中包括的第一类数据码块的数量可以是N3和M2的最小公倍数与M2的商的整数倍。可选地,本申请实施例中数据码块所定义的第一类数据码块可以是指承载各个时隙对应的码块的数据码块,第二类数据码块可以用于承载其它的信息比特(比如时隙分配指示信息、标识指示信息和复用指示信息等中的任一项或任多项)。第二类数据码块在一个数据单元中的位置可以是固定的,或者配置后通知给复用侧的第一通信设备和解复用侧的第二通信设备的。
一种可选地实施方式中,可选地,可以在第二码块流中承载复用指示信息,复用指示信息用于指示数据单元中承载的是复用后的码块,即解复用侧收到该数据单元中的码块后需要进行解复用操作。复用指示信息可以承载在第二码块流的一个数据单元内部,比如承载在头码块、第二类数据码块和尾端码块中的任一项或任多项上,这种情况下,复用指示信息也可以仅指示包括该复用指示信息的数据单元承载的是复用后的码块。另一种可选地实施方式中,复用指示信息可以承载在相邻数据单元之间的码块上,比如相邻数据单元之间可以配置O码块,复用指示信息可以承载在该O码块的净荷区域,这种情况下,接收到复用指示信息之后,则可以确定该复用指示信息之后接收到的数据单元上承载的都是经过复用后的码块,都是需要进行解复用的,直至接收到非复用指示信息为止,非复用指示信息可以指示该非复用指示信息后续的数据单元承载的码块不需要解复用。
上述步骤4101中,一种可选地实施方式中,若获取的来自低阶管道的Q条第三数据流中的每条数据流的编码形式并非M1/N1比特编码,则可以对Q条第三数据流中的每条第三数据流进行编码转换,将每条第三数据流转换为编码形式为M1/N1比特编码的第一码块流。
具体实施中,第三数据流比如可以为同步数字体系(Synchronous Digital Hierarchy,SDH)业务信号,可以进行业务映射处理,比如可以将第三数据流封装到第一码块流的数据单元的净荷区中,再添加必要的封装开销、OAM码块和空闲码块,从而得到该第三数据流对应的第一码块流,在第一码块流中增加预置空闲码块可以使之能够通过空闲码块的增删来适配第一码块流与相应的管道速率的适配。比如,可以将SDH业务的8字节D0~D7的业务信号映射到一个64B/66B数据码块的净荷区,添加同步头‘01’,从而将该8字节D0~D7的业务信号转为64B/66B码块的形式。
下面举个示例,例如,X-Ethernet/FlexE以5Gbps颗粒的一个时隙,即一个时隙的带宽(也可以称速率)为5Gbps,将一个5Gbps时隙的分配个一个第二码块流,如果第二码块流中一个数据单元的结构形式为【1个头码块(头码块也可以称为开销码块)+1023个数据码块+1个空闲码块】。通过上述示例可以看出33个64B/66B数据码块的净荷区域来完整装载32个64B/66B码块(64B/66B码块可以是头码块、尾端码块或数据码块)(若进行了压缩处理,则32个64B/66B码块是压缩后码块序列,若没有进行压缩后处理,则32个64B/66B码块是待处理码块序列,该示例中以未进行压缩处理为例进行说明)。第二码块流中的一个数据单元可以包括t*33个64B/66B数据码块,该t*33个64B/66B数据码块用于承载t*33*64=t*2112比特,按照66比特,最多可以基于TDM划分成为t*32个时隙。本实施例以当t=31,划分31个时隙为例展开描述。31*33*64=31*32*66=65472。第二码块流中一个数据单元可以包括31*33=1023个第一类数据码块。
当划分31时隙时,5000000000*(16383/16384)*(20460/20461)*(1023/1025)*(1/31)=160.9579176Mbps(-100ppm:160.9418218Mbps)。其中5G为一个时隙的标称速率,即64B/66B编码的除去同步头的比特速率,5G信号经过编码后包含64B/66B同步头的总比特率还要提升66/64=3.125%;16383/16384为100GE以太网接口除去线向标(Alignment Marker,AM)对齐码字后的有效带宽;20460/20461表示进一步去除灵活以太网接口的开销后的有效信息带宽;1023/1025表示除去高阶数据单元封装开销和必要的空闲后剩余的有效数码码块比例;1/31表示进行31个时隙划分后,一个时隙的有效承载带宽。即划分出来的用于组合成低阶管道的带宽的一个时隙的带宽为160.95791767Mbps。(考虑工程实际,器件或者设备工作时钟频率可能偏低-100ppm,最小可用的低阶管道承载总带宽为160.9418218Mbps)。
下面看一下SDH STM-1信号，我们需要对该业务信号进行业务信号到低阶数据单元的封装映射。SDH STM-1的原生速率带宽为155.52Mbps，我们按照与高阶数据单元一致的方式对该信号进行封装，即SDH STM-1的信号装入低阶数据单元的64B/66B数据码块的净荷区中，然后再封装低阶数据单元头开销码块和必要的空闲码块。则相应的低阶数据单元和空闲码块数据流的带宽如下：155.52*(66/64)*(1025/1023)=160.6935484Mbps。可选地，考虑工程实际，器件或者设备工作时钟频率可能偏高若干ppm，由具体的业务信号而定，例如+100ppm或者+20ppm等，例如使用适用于以太网的宽松频偏差，即+100ppm来算，SDH STM-1的最大封装带宽为160.7096177Mbps，实际上光传送网(Optical Transport Network,OTN)的允许频偏为+/-20ppm；同步数字体系(Synchronous Digital Hierarchy,SDH)的允许频偏比前两者更小，同步情况下+/-4.6ppm。
160.9579176Mbps(-100ppm:160.9418218Mbps)的带宽大于160.6935484Mbps(+100ppm:160.7096177Mbps)的带宽,即使考虑了最极端的情况,低阶承载管道速率偏低100ppm,业务信号偏高100ppm。因此SDH STM-1业务信号经过上述封装后,通过按需空闲码块增加的填充作用,可以实现SDH STM-1封装信号在一个低阶管道中的传输。
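下面用一段示意性的Python代码复述上述带宽核对过程，数值与上文一致，仅作算术验证：

# 示意性计算: 31个时隙划分后单个低阶时隙的可用带宽, 与SDH STM-1封装后带宽(含+100ppm频偏)的比较。
slot_bw = 5e9 * (16383/16384) * (20460/20461) * (1023/1025) / 31   # ≈160.958 Mbps
slot_bw_low = slot_bw * (1 - 100e-6)                                # -100ppm ≈160.942 Mbps

stm1_bw = 155.52e6 * (66/64) * (1025/1023)                          # ≈160.694 Mbps
stm1_bw_high = stm1_bw * (1 + 100e-6)                               # +100ppm ≈160.710 Mbps

assert slot_bw_low > stm1_bw_high   # 最坏情况下低阶管道带宽仍大于封装后的业务带宽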
最后需要指出的是，按照相同的封装和开销，一个5G时隙对应一个X-Ethernet高阶管道，可以划分31个时隙，每个时隙可以对应一个低阶管道，可以传输封装后的1路SDH STM-1业务。由于STM-N是STM-1的N倍速率关系，STM-4、STM-16等业务信号经过相同的透明封装后，可以使用上述N个时隙构成的低阶承载管道进行承载。OTN信号的情况与SDH信号类似，只是速率有差异。在确定的业务带宽需求下，可以通过分配合适的时隙数量，使得低阶承载管道的带宽总是大于等于业务信号封装后的带宽，通过空闲增删操作来实现速率填充适配。
基于上述所论述的复用侧的第一通信设备所执行的方案和相同构思,本申请实施例还提供一种数据传输方法,该数据传输方法的解复用侧的第二通信设备所执行的方法。图23示例性示出了本申请实施例提供的一种数据传输方法的流程示意图,如图23所示,该方法包括:
步骤7201,接收第二码块流;其中,Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;第二码块流的编码类型为M2/N2比特编码;M2为正整数,第二码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数;
步骤7202,解复用出Q条第一码块流。
也就是说解复用侧的第二通信设备在接收到第二码块流时可以从第二码块流中取出第二码块流所承载的Q条第一码块流对应的码块,进一步确定每个码块对应的第一码块流,从而恢复出每个第一码块流。
一种可选地实施方式中,若复用侧的第一通信设备执行的方法如上述图19所示,并未对待处理码块序列进行压缩,则一种可选地实施方式中,获取第二码块流的净荷区域承载的Q条第一码块流中的码块对应的比特,得到待解压缩码块序列;根据待解压缩码块序列,解复用出Q条第一码块流。
也就是说可以从第二码块流的第一类数据码块的净荷区域取出各个时隙对应的码块,得到待解压缩码块序列,之后该待解压缩码块序列依据排序可以与各个时隙的排序对应,比如共划分32个时隙,解复用侧的第二通信设备知道承载时隙对应的码块的第一类数据码块的位置(可以提前配置,或者由集中控制单元或管理单元发送给解复用侧的第二通信设备,或者由复用侧的第一通信设备发送给解复用侧的第二通信设备),第二码块流中的一个数据单元中取出的所有时隙对应的码块组成的待解压缩码块序列中,第一个码块对应时隙0,第二个码块对应时隙1,第三个码块对应时隙2等等依次排序,直至排到时隙31对应的码块后,将下一个码块再次确定为时隙0对应的码块,把后续第二个码块确定为时隙1对应的码块。
进一步，解复用侧的第二通信设备获取Q条第一码块流中每个第一码块流对应的时隙的标识，即获取Q条第一码块流和时隙的对应关系，比如一条第一码块流分配的是时隙0，则把时隙0对应的待解压缩码块序列中的码块都确定为该第一码块流中的码块，则恢复出该第一码块流。
另一种可选地实施方式中,若复用侧的第一通信设备执行的方法如上述图20所示,对待处理码块序列进行压缩,则一种可选地实施方式中,可以从第二码块流的第一类数据码块的净荷区域取出各个时隙对应的码块,得到待解压缩码块序列。将待解压缩码块序列进行解压缩,得到待恢复码块序列;根据待恢复码块序列,确定出待恢复码块序列中每个码块对应的第一码块流,得到Q条第一码块流;其中,Q条第一码块流中的每条第一码块流对应至少一个时隙;待恢复码块序列包括的码块的排序,与待恢复码块序列包括的码块所对应的时隙的排序匹配。
之后该待恢复码块序列依据排序可以与各个时隙的排序对应,比如共划分32个时隙,解复用侧的第二通信设备知道承载时隙对应的码块的第一类数据码块的位置(可以提前配置,或者由集中控制单元或管理单元发送给解复用侧的第二通信设备,或者由复用侧的第一通信设备发送给解复用侧的第二通信设备),第二码块流中的一个数据单元中取出的所有时隙对应的码块组成的待恢复码块序列中,第一个码块对应时隙0,第二个码块对应时隙1,第三个码块对应时隙2等等依次排序,直至排到时隙31对应的码块后,将下一个码块再次确定为时隙0对应的码块,把后续第二个码块确定为时隙1对应的码块。
进一步，解复用侧的第二通信设备获取Q条第一码块流中每个第一码块流对应的时隙的标识，即获取Q条第一码块流和时隙的对应关系，比如一条第一码块流分配的是时隙0，则把时隙0对应的待恢复码块序列中的码块都确定为该第一码块流中的码块，则恢复出该第一码块流。
可选地,比如压缩后码块序列为64/65比特编码,待处理码块序列为64B/66B编码,则在具体实施中,解复用侧的第二通信设备可以获取第二码块流的数据单元的边界信息,比如第二码块流的空闲码块的边界信息、一个数据单元的头码块(头码块也可以称为开销码块)边界和一个第一类数据码块的净荷区域的边界信息,因此可以从第二码块流的一个数据单元中的第一个第一类数据码块的第一个比特开始,一次按照65比特,定界每个64B/65B码块,该定界出的64B/65B码块为待解压缩码块序列中的一个码块,之后可以根据首比特信息,对待解压缩码块序列中的码块进行解压缩,从而恢复出待恢复码块序列中的64B/66B码块。
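下面给出一段示意性的Python草图(沿用前文64B/65B解压示意中的函数，数据结构为说明用的假设)，用于演示解复用侧按65比特定界、解压缩并按时隙排序归属码块的过程：

# 示意性草图: 从第一类数据码块的净荷比特序列中按65比特定界出64B/65B码块,
# 解压缩恢复64B/66B码块, 并按排序依次对应时隙0,1,...,31。
def demux_payload_bits(payload_bits, num_slots=32, block_len=65):
    blocks_65 = [payload_bits[i:i + block_len]
                 for i in range(0, len(payload_bits), block_len)]
    per_slot = {s: [] for s in range(num_slots)}
    for idx, blk in enumerate(blocks_65):
        block_66 = decompress_65_to_66(blk)         # 见前文64B/65B解压示意
        per_slot[idx % num_slots].append(block_66)  # 第idx个码块对应时隙 idx mod 32
    return per_slot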
图24示例性示出了本申请实施例提供的一种数据传输结构示意图，如图24所示，若第一通信设备4304为复用侧，通信设备4306为解复用侧，则第一通信设备4304将第一码块流4301和第一码块流4302经过复用，复用至第二码块流4303，第二码块流在至少一个中间节点4305(图中标出两个中间节点4305，复用侧的第一通信设备和解复用侧的第二通信设备之间的通信设备可以称为中间节点)之间传输，通信设备4306对收到的第二码块流解复用，得到第一码块流4301和第一码块流4302。
结合上述内容以及图24,可以看出本申请实施例提供的方案解决了多个业务信号到一个基于码块流(64B/66B编码)的业务信号的复用传输问题,比如将多个业务信号复接成为一个64B/66B业务信号,按照一个64B/66B业务信号在网络中进行交叉连接和调度,可以简化X-Ethernet和SPN技术的网络运维和数据面,从而可以对X-Ethernet和SPN技术进行完善,使这两个技术可以应用于骨干和长途网。本申请实施例提供的方案可以在第二 码块流的入口和出口的设备上,在承载第二码块流的高阶管道里进一步提供了至少两个承载两个第一码块流的两个低阶管道,低阶管道分别独立进行业务的映射和解映射。中间节点(复用侧的第一通信设备和解复用侧的第二通信设备之间的通信设备可以称为中间节点)进行交换,只需要处理高阶管道,不需要处理低阶管道,从而可以实现管道数量的收敛,可以对中间节点的交叉处理得到简化。通过可选地对低阶管道信号的编码压缩,可以有效提升复接效率。通过采用S码和T码封装的高阶管道的承载数据单元,可以有效兼容现有网络和技术,使得复用后的高阶管道能够穿越既有的支持扁平化组网的网络节点和网络,可以具有良好的前向和后向兼容性。
基于上述内容和相同构思,本申请提供一种通信设备8101,用于执行上述方法中的复用侧的任一个方案。图25示例性示出了本申请提供的一种通信设备的结构示意图,如图25所示,通信设备8101包括处理器8103、收发器8102、存储器8105和通信接口8104;其中,处理器8103、收发器8102、存储器8105和通信接口8104通过总线8106相互连接。该示例中的通信设备8101可以是上述内容中的第一通信设备,可以执行上述图7对应的方案,该通信设备8101可以上述图4和图5中的通信设备3105,也可以是通信设备3107。
总线8106可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图25中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
存储器8105可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器8105还可以包括上述种类的存储器的组合。
通信接口8104可以为有线通信接入口,无线通信接口或其组合,其中,有线通信接口例如可以为以太网接口。以太网接口可以是光接口,电接口或其组合。无线通信接口可以为WLAN接口。
处理器8103可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。处理器8103还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
可选地,存储器8105还可以用于存储程序指令,处理器8103调用该存储器8105中存储的程序指令,可以执行上述方案中所示实施例中的一个或多个步骤,或其中可选地实施方式,使得通信设备8101实现上述方法中通信设备的功能。
处理器8103用于根据执行存储器存储的指令,并控制收发器8102进行信号接收和信号发送,当处理器8103执行存储器存储的指令时,通信设备8101中的处理器8103,用于:获取Q条第一码块流,其中,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;将Q条第一码块流中的码块对应的比特放入待发送的第二码块流;其中,第二码块流的编码类型为M2/N2比特编码;Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域;其中,M2为正整数,第二 码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数;收发器8102,用于发送第二码块流。
在一种可选地实施方式中,处理器8103,用于将Q条第一码块流中的码块进行基于码块的时分复用,得到待处理码块序列;其中,Q条第一码块流中的每条第一码块流对应至少一个时隙;待处理码块序列包括的码块的排序,与待处理码块序列包括的码块所对应的时隙的排序匹配;将待处理码块序列对应的比特放入待发送的第二码块流。
在一种可选地实施方式中,处理器8103,用于:将待处理码块序列中连续R个码块进行压缩,得到压缩后码块序列;其中,R为正整数;将压缩后码块序列对应的比特放入待发送的第二码块流。
在一种可选地实施方式中,若R大于1时,连续R个码块中至少包括两个码块,取出两个码块的两个第一码块流是两个不同的第一码块流。
在一种可选地实施方式中，处理器8103，还用于：针对Q条第一码块流中的第一码块流，执行：根据第一码块流的带宽与第一码块流对应的时隙的总带宽，对第一码块流执行空闲IDLE码块的增删处理；其中，第一码块流对应的时隙的总带宽是根据第一码块流对应的时隙的数量，以及为第一码块流对应的每个时隙所分配的带宽确定的。
本申请实施例中的第二码块流的数据结构可以有多种,具体示例可以参见上述实施例,在此不再赘述。
本申请实施例中,第二码块流中承载的其它信息,比如标识指示信息、时隙分配指示信息和复用指示信息等等都可以参见上述实施例的内容,在此不再赘述。
本申请实施例中从第一码块流中取出的码块在第二码块流中的放置方式,以及第二码块流中一个数据单元中包括的第一类数据码块的数量的确定方案都可以参见上述实施例,在此不再赘述。
基于相同构思,本申请提供一种通信设备8201,用于执行上述方法中的解复用侧的任一个方案。图26示例性示出了本申请提供的一种通信设备的结构示意图,如图26所示,通信设备8201包括处理器8203、收发器8202、存储器8205和通信接口8204;其中,处理器8203、收发器8202、存储器8205和通信接口8204通过总线8206相互连接。该示例中的通信设备8201可以是上述内容中的第二通信设备,可以执行上述图23对应的方案,该通信设备8201可以上述图4中的通信设备3109,也可以是上述图5中的通信设备3109,也可以是上述图5中的通信设备3115。
总线8206可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图26中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
存储器8205可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器8205还可以包括上述种类的存储器的组合。
通信接口8204可以为有线通信接入口,无线通信接口或其组合,其中,有线通信接口例如可以为以太网接口。以太网接口可以是光接口,电接口或其组合。无线通信接口可以为WLAN接口。
处理器8203可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。处理器8203还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
可选地,存储器8205还可以用于存储程序指令,处理器8203调用该存储器8205中存储的程序指令,可以执行上述方案中所示实施例中的一个或多个步骤,或其中可选地实施方式,使得通信设备8201实现上述方法中通信设备的功能。
处理器8203用于根据执行存储器存储的指令,并控制收发器8202进行信号接收和信号发送,当处理器8203执行存储器存储的指令时,通信设备8201中的收发器8202,用于接收第二码块流;其中,Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;第二码块流的编码类型为M2/N2比特编码;M2为正整数,第二码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数;处理器8203,用于解复用出Q条第一码块流。
在一种可选地实施方式中,处理器8203,用于:获取第二码块流的净荷区域承载的Q条第一码块流中的码块对应的比特,得到待解压缩码块序列;根据待解压缩码块序列,解复用出Q条第一码块流。
在一种可选地实施方式中,若待解压缩码块序列中的一个码块是对至少两个码块进行压缩得到的,则至少两个码块对应两个不同的第一码块流。
在一种可选地实施方式中,处理器8203,用于:将待解压缩码块序列进行解压缩,得到待恢复码块序列;根据待恢复码块序列,确定出待恢复码块序列中每个码块对应的第一码块流,得到Q条第一码块流;其中,Q条第一码块流中的每条第一码块流对应至少一个时隙;待恢复码块序列包括的码块的排序,与待恢复码块序列包括的码块所对应的时隙的排序匹配。
本申请实施例中的第二码块流的数据结构可以有多种,具体示例可以参见上述实施例,在此不再赘述。
本申请实施例中,第二码块流中承载的其它信息,比如标识指示信息、时隙分配指示信息和复用指示信息等等都可以参见上述实施例的内容,在此不再赘述。
本申请实施例中从第一码块流中取出的码块在第二码块流中的放置方式,以及第二码块流中一个数据单元中包括的第一类数据码块的数量的确定方案都可以参见上述实施例,在此不再赘述。
基于相同构思,本申请实施例提供一种通信设备,用于执行上述方法流程中的复用侧的任一个方案。图27示例性示出了本申请实施例提供的一种通信设备的结构示意图,如图27所示,通信设备8301包括收发单元8302和复用解复用单元8303。该示例中的通信设备8301可以是上述内容中的第一通信设备,可以执行上述图7对应的方案,该通信设备8301可以上述图4和图5中的通信设备3105,也可以是通信设备3107。
复用解复用单元8303,用于:获取Q条第一码块流,其中,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;将Q条 第一码块流中的码块对应的比特放入待发送的第二码块流;其中,第二码块流的编码类型为M2/N2比特编码;Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域;其中,M2为正整数,第二码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数;收发单元8302,用于发送第二码块流。
本申请实施例中,收发单元8302可以由上述图25的收发器8102实现,复用解复用单元8303可以由上述图25的处理器8103实现。也就是说,本申请实施例中收发单元8302可以执行上述图25的收发器8102所执行的方案,本申请实施例中复用解复用单元8303可以执行上述图25的处理器8103所执行的方案,其余内容可以参见上述内容,在此不再赘述。
应理解,以上各个第一通信设备和第二通信设备的单元的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。本申请实施例中,收发单元8302可以由上述图25的收发器8102实现,复用解复用单元8303可以由上述图25的处理器8103实现。如上述图25所示,通信设备8101包括的存储器8105中可以用于存储该通信设备8101包括的处理器8103执行方案时的代码,该代码可为通信设备8101出厂时预装的程序/代码。
基于相同构思,本申请实施例提供一种通信设备,用于执行上述方法流程中的解复用侧的任一个方案。图28示例性示出了本申请实施例提供的一种通信设备的结构示意图,如图28所示,通信设备8401包括收发单元8402和复用解复用单元8403。该示例中的通信设备8401可以是上述内容中的第二通信设备,可以执行上述图23对应的方案,该通信设备8401可以上述图4中的通信设备3109,也可以是上述图5中的通信设备3109,也可以是上述图5中的通信设备3115。
收发单元8402,用于接收第二码块流;其中,Q条第一码块流中的码块对应比特承载于第二码块流中的码块的净荷区域,Q为大于1的整数;第一码块流的编码类型为M1/N1比特编码,M1为正整数,N1为不小于M1的整数;第二码块流的编码类型为M2/N2比特编码;M2为正整数,第二码块流中的一个码块的净荷区域承载的比特的数量不大于M2;N2为不小于M2的整数;复用解复用单元8403,用于解复用出Q条第一码块流。
本申请实施例中,收发单元8402可以由上述图26的收发器8202实现,复用解复用单元8403可以由上述图26的处理器8203实现。也就是说,本申请实施例中收发单元8402可以执行上述图26的收发器8202所执行的方案,本申请实施例中复用解复用单元8403可以执行上述图26的处理器8203所执行的方案,其余内容可以参见上述内容,在此不再赘述。
应理解,以上各个第一通信设备和第二通信设备的单元的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。本申请实施例中,收发单元8402可以由上述图26的收发器8202实现,复用解复用单元8403可以由上述图26的处理器8203实现。如上述图26所示,通信设备8201包括的存储器8205中可以用于存储该通信设备8201包括的处理器8203执行方案时的代码,该代码可为通信设备8201出厂时预装的程序/代码。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现、当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按 照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。指令可以存储在计算机存储介质中,或者从一个计算机存储介质向另一个计算机存储介质传输,例如,指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带、磁光盘(MO)等)、光介质(例如,CD、DVD、BD、HVD等)、或者半导体介质(例如ROM、EPROM、EEPROM、非易失性存储器(NAND FLASH)、固态硬盘(Solid State Disk,SSD))等。
本领域内的技术人员应明白,本申请实施例可提供为方法、系统、或计算机程序产品。因此,本申请实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请实施例是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (30)

  1. 一种数据传输方法,其特征在于,包括:
    获取Q条第一码块流,其中,所述Q为大于1的整数;所述第一码块流的编码类型为M1/N1比特编码,所述M1为正整数,所述N1为不小于所述M1的整数;
    将所述Q条第一码块流中的码块对应的比特放入待发送的第二码块流;其中,所述第二码块流的编码类型为M2/N2比特编码;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的码块的净荷区域;其中,所述M2为正整数,所述第二码块流中的一个码块的净荷区域承载的比特的数量不大于所述M2;所述N2为不小于所述M2的整数。
  2. 如权利要求1所述的方法,其特征在于,所述第二码块流对应至少一个数据单元;
    所述至少一个数据单元中的一个数据单元包括头码块和至少一个数据码块;或者,所述至少一个数据单元中的一个数据单元包括头码块、至少一个数据码块和尾端码块;或者,所述至少一个数据单元中的一个数据单元包括至少一个数据码块和尾端码块;
    其中,所述至少一个数据码块包括至少一个第一类数据码块;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的所述至少一个第一类数据码块中的第一类数据码块的净荷区域;其中,所述第二码块流中的一个第一类数据码块的净荷区域承载的比特的数量为所述M2。
  3. 如权利要求2所述的方法,其特征在于,所述头码块为S码块,和/或,所述尾端码块为T码块。
  4. 如权利要求2或3所述的方法,其特征在于,所述将所述Q条第一码块流中的码块对应的比特放入待发送的第二码块流,包括:
    将所述Q条第一码块流中的码块进行基于码块的时分复用,得到待处理码块序列;其中,所述Q条第一码块流中的每条第一码块流对应至少一个时隙;所述待处理码块序列包括的码块的排序,与所述待处理码块序列包括的码块所对应的时隙的排序匹配;
    将所述待处理码块序列对应的比特放入待发送的所述第二码块流。
  5. 如权利要求4所述的方法,其特征在于,所述第二码块流的预设码块中承载时隙分配指示信息;
    所述时隙分配指示信息用于指示所述Q条第一码块流和时隙的对应关系。
  6. 如权利要求4或5所述的方法,其特征在于,所述将所述待处理码块序列对应的比特放入待发送的所述第二码块流,包括:
    将所述待处理码块序列中连续R个码块进行压缩,得到压缩后码块序列;其中,所述R为正整数;
    将压缩后码块序列对应的比特放入待发送的第二码块流。
  7. 如权利要求5或6所述的方法,其特征在于,所述压缩后码块序列的编码形式为M3/N3;所述M3为正整数,所述N3为不小于所述M3的整数;
    所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的公倍数与所述M2确定的;或者;所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的最小公倍数与所述M2确定的。
  8. 一种数据传输方法,其特征在于,包括:
    接收第二码块流;其中,Q条第一码块流中的码块对应比特承载于所述第二码块流中的码块的净荷区域,所述Q为大于1的整数;所述第一码块流的编码类型为M1/N1比特编码,所述M1为正整数,所述N1为不小于所述M1的整数;所述第二码块流的编码类型为M2/N2比特编码;所述M2为正整数,所述第二码块流中的一个码块的净荷区域承载的比特的数量不大于所述M2;所述N2为不小于所述M2的整数;
    解复用出所述Q条第一码块流。
  9. 如权利要求8所述的方法,其特征在于,所述第二码块流对应至少一个数据单元;
    所述至少一个数据单元中的一个数据单元包括头码块和至少一个数据码块;或者,所述至少一个数据单元中的一个数据单元包括头码块、至少一个数据码块和尾端码块;或者,所述至少一个数据单元中的一个数据单元包括至少一个数据码块和尾端码块;
    其中,所述至少一个数据码块包括至少一个第一类数据码块;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的所述至少一个第一类数据码块中的第一类数据码块的净荷区域;其中,所述第二码块流中的一个第一类数据码块的净荷区域承载的比特的数量为所述M2。
  10. 如权利要求9所述的方法,其特征在于,所述头码块为S码块,和/或,所述尾端码块为T码块。
  11. 如权利要求9或10所述的方法,其特征在于,所述解复用出所述Q条第一码块流,包括:
    获取所述第二码块流的净荷区域承载的所述Q条第一码块流中的码块对应的比特,得到待解压缩码块序列;
    根据所述待解压缩码块序列,解复用出所述Q条第一码块流。
  12. 如权利要求10或11所述的方法,其特征在于,所述第二码块流的预设码块中承载时隙分配指示信息;
    所述时隙分配指示信息用于指示所述Q条第一码块流和时隙的对应关系。
  13. 如权利要求10至12任一项所述的方法,其特征在于,所述根据所述待解压缩码块序列,解复用出所述Q条第一码块流,包括:
    将所述待解压缩码块序列进行解压缩,得到待恢复码块序列;
    根据所述待恢复码块序列,确定出所述待恢复码块序列中每个码块对应的第一码块流,得到所述Q条第一码块流;
    其中,所述Q条第一码块流中的每条第一码块流对应至少一个时隙;所述待恢复码块序列包括的码块的排序,与所述待恢复码块序列包括的码块所对应的时隙的排序匹配。
  14. 如权利要求11至13任一项所述的方法,其特征在于,所述压缩后码块序列的编码形式为M3/N3;所述M3为正整数,所述N3为不小于所述M3的整数;
    所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的公倍数与所述M2确定的;或者;所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的最小公倍数与所述M2确定的。
  15. 一种通信设备,其特征在于,包括:
    处理器,用于获取Q条第一码块流,其中,所述Q为大于1的整数;所述第一码块流的编码类型为M1/N1比特编码,所述M1为正整数,所述N1为不小于所述M1的整数;
    将所述Q条第一码块流中的码块对应的比特放入待发送的第二码块流;其中,所述第二码块流的编码类型为M2/N2比特编码;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的码块的净荷区域;其中,所述M2为正整数,所述第二码块流中的一个码块的净荷区域承载的比特的数量不大于所述M2;所述N2为不小于所述M2的整数;
    收发器,用于发送所述第二码块流。
  16. 如权利要求15所述的通信设备,其特征在于,所述第二码块流对应至少一个数据单元;
    所述至少一个数据单元中的一个数据单元包括头码块和至少一个数据码块;或者,所述至少一个数据单元中的一个数据单元包括头码块、至少一个数据码块和尾端码块;或者,所述至少一个数据单元中的一个数据单元包括至少一个数据码块和尾端码块;
    其中,所述至少一个数据码块包括至少一个第一类数据码块;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的所述至少一个第一类数据码块中的第一类数据码块的净荷区域;其中,所述第二码块流中的一个第一类数据码块的净荷区域承载的比特的数量为所述M2。
  17. 如权利要求16所述的通信设备,其特征在于,所述头码块为S码块,和/或,所述尾端码块为T码块。
  18. 如权利要求16或17所述的通信设备,其特征在于,所述处理器,用于:
    将所述Q条第一码块流中的码块进行基于码块的时分复用,得到待处理码块序列;其中,所述Q条第一码块流中的每条第一码块流对应至少一个时隙;所述待处理码块序列包括的码块的排序,与所述待处理码块序列包括的码块所对应的时隙的排序匹配;
    将所述待处理码块序列对应的比特放入待发送的所述第二码块流。
  19. 如权利要求18所述的通信设备,其特征在于,所述第二码块流的预设码块中承载时隙分配指示信息;
    所述时隙分配指示信息用于指示所述Q条第一码块流和时隙的对应关系。
  20. 如权利要求18或19所述的通信设备,其特征在于,所述处理器,用于:
    将所述待处理码块序列中连续R个码块进行压缩,得到压缩后码块序列;其中,所述R为正整数;
    将压缩后码块序列对应的比特放入待发送的第二码块流。
  21. 如权利要求19或20所述的通信设备,其特征在于,所述压缩后码块序列的编码形式为M3/N3;所述M3为正整数,所述N3为不小于所述M3的整数;
    所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的公倍数与所述M2确定的;或者;所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的最小公倍数与所述M2确定的。
  22. 一种通信设备,其特征在于,包括:
    收发器,用于接收第二码块流;其中,Q条第一码块流中的码块对应比特承载于所述第二码块流中的码块的净荷区域,所述Q为大于1的整数;所述第一码块流的编码类型为M1/N1比特编码,所述M1为正整数,所述N1为不小于所述M1的整数;所述第二码块流的编码类型为M2/N2比特编码;所述M2为正整数,所述第二码块流中的一个码块的净荷区域承载的比特的数量不大于所述M2;所述N2为不小于所述M2的整数;
    处理器,用于解复用出所述Q条第一码块流。
  23. 如权利要求22所述的通信设备,其特征在于,所述第二码块流对应至少一个数据单元;
    所述至少一个数据单元中的一个数据单元包括头码块和至少一个数据码块;或者,所述至少一个数据单元中的一个数据单元包括头码块、至少一个数据码块和尾端码块;或者,所述至少一个数据单元中的一个数据单元包括至少一个数据码块和尾端码块;
    其中,所述至少一个数据码块包括至少一个第一类数据码块;所述Q条第一码块流中的码块对应比特承载于所述第二码块流中的所述至少一个第一类数据码块中的第一类数据码块的净荷区域;其中,所述第二码块流中的一个第一类数据码块的净荷区域承载的比特的数量为所述M2。
  24. 如权利要求23所述的通信设备,其特征在于,所述头码块为S码块,和/或,所述尾端码块为T码块。
  25. 如权利要求23或24所述的通信设备,其特征在于,所述处理器,用于:
    获取所述第二码块流的净荷区域承载的所述Q条第一码块流中的码块对应的比特,得到待解压缩码块序列;
    根据所述待解压缩码块序列,解复用出所述Q条第一码块流。
  26. 如权利要求25所述的通信设备,其特征在于,若所述待解压缩码块序列中的一个码块是对至少两个码块进行压缩得到的,则所述至少两个码块对应两个不同的第一码块流。
  27. 如权利要求25或26所述的通信设备,其特征在于,所述第二码块流的预设码块中承载时隙分配指示信息;
    所述时隙分配指示信息用于指示所述Q条第一码块流和时隙的对应关系。
  28. 如权利要求25至27任一项所述的通信设备,其特征在于,所述处理器,用于:
    将所述待解压缩码块序列进行解压缩,得到待恢复码块序列;
    根据所述待恢复码块序列,确定出所述待恢复码块序列中每个码块对应的第一码块流,得到所述Q条第一码块流;
    其中,所述Q条第一码块流中的每条第一码块流对应至少一个时隙;所述待恢复码块序列包括的码块的排序,与所述待恢复码块序列包括的码块所对应的时隙的排序匹配。
  29. 如权利要求25至28任一项所述的通信设备,其特征在于,所述压缩后码块序列的编码形式为M3/N3;所述M3为正整数,所述N3为不小于所述M3的整数;
    所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的公倍数与所述M2确定的;或者;所述第二码块流中包括的至少一个数据单元中的一个数据单元中包括的第一类数据码块的数量是根据所述N3和所述M2的最小公倍数与所述M2确定的。
  30. 一种计算机存储介质,其特征在于,所述计算机存储介质存储有计算机可执行指令,所述计算机可执行指令在被计算机调用时,使所述计算机执行如权利要求1至14任一权利要求所述的方法。
