WO2023165412A1 - Data processing method, apparatus, device, system, and computer-readable storage medium

Data processing method, apparatus, device, system, and computer-readable storage medium

Info

Publication number
WO2023165412A1
WO2023165412A1 (PCT application PCT/CN2023/077958)
Authority
WO
WIPO (PCT)
Prior art keywords
overhead
code blocks
time slot
Prior art date
Application number
PCT/CN2023/077958
Other languages
English (en)
French (fr)
Inventor
何向 (He Xiang)
王心远 (Wang Xinyuan)
任浩 (Ren Hao)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority claimed from CN202210554595.XA (CN116743677A)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023165412A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received

Definitions

  • the present application relates to the technical field of communications, and in particular to a data processing method, apparatus, device, system, and computer-readable storage medium.
  • Flexible Ethernet (FlexE)
  • the FlexE protocol defines an adaptation layer (FlexE shim) between the media access control (MAC) layer and the physical coding sublayer (PCS), through which the Ethernet interface rate can flexibly match various business scenarios.
  • the present application proposes a data processing method, apparatus, device, system, and computer-readable storage medium for implementing FlexE based on code blocks encoded in a specific encoding manner.
  • a data processing method is provided, including: a first device acquires at least one service flow, where any service flow in the at least one service flow includes a plurality of code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content; then, based on the encoding method of the code blocks, the multiple code blocks included in the at least one service flow are mapped to at least one PHY link, and the at least one PHY link is used for transmitting the multiple code blocks.
  • the method can implement FlexE based on a code block including a data unit and a type, or based on a code block including a type, a type indication, and code block content.
  • any PHY link in the at least one PHY link includes at least one time slot
  • any PHY link in the at least one PHY link is also used to transmit s overhead multiframes
  • the format of the s overhead multiframes is determined based on the encoding method
  • the s overhead multiframes include a mapping relationship between the at least one time slot and the at least one service flow, and the mapping relationship is used for mapping the multiple code blocks to the PHY link
  • the s is determined based on the transmission rate of the PHY link.
  • the rate of the time slot is 5m gigabits per second
  • one time slot is used to transmit one code block
  • the m is an integer greater than 1.
  • the code blocks included in the at least one service flow can be transmitted at a relatively high rate.
  • any overhead multiframe in the s overhead multiframes includes multiple overhead frames, and any overhead frame in at least one of the multiple overhead frames includes a mapping relationship between a time slot and a service flow.
  • any overhead frame includes multiple overhead blocks, and at least one overhead block in the multiple overhead blocks includes the mapping relationship between a time slot and a service flow.
  • any overhead multiframe includes 32 overhead frames, and one overhead frame includes 2 overhead blocks.
  • any one of the PHY links includes k time slots, and any overhead multiframe in the s overhead multiframes corresponds to k/s of the time slots
  • the k is determined based on the ratio of the transmission rate of the PHY link to the rate of the time slot, and the k is an integer multiple of the s.
  • every s time slots in the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe in the s overhead multiframes include the i-th time slot in each time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • every k/s time slots among the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe among the s overhead multiframes include the time slots in the i-th time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • in this way, the k time slots included in the PHY link can be grouped in different ways into the k/s time slots corresponding to each overhead multiframe, making the correspondence between overhead multiframes and time slots more flexible.
  • any one of the PHY links is used to transmit s overhead blocks for every n*k code blocks transmitted, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multiframe among the s overhead multiframes, where n is a positive integer, and r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.
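  • The overhead cadence described above can be sketched as follows; `insert_overhead` is a hypothetical helper (not part of the application), and placing the s overhead blocks after (rather than before) each run of n*k code blocks is an assumption here:

```python
def insert_overhead(payload, n, k, s):
    # For every n*k code blocks sent on the PHY link, s overhead blocks are
    # sent; the r-th overhead block (0-based here) contributes to the r-th
    # overhead multiframe. "OH[r]" stands in for a real 257-bit overhead block.
    out, since_last = [], 0
    for block in payload:
        out.append(block)
        since_last += 1
        if since_last == n * k:          # a full run of n*k code blocks
            out.extend(f"OH[{r}]" for r in range(s))
            since_last = 0
    return out

# Toy numbers (n=2, k=3, s=2): two overhead blocks after every 6 code blocks.
stream = insert_overhead(list(range(12)), n=2, k=3, s=2)
```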
  • mapping the multiple code blocks to the at least one physical layer PHY link includes: obtaining an overhead multiframe corresponding to the at least one service flow; modifying the overhead multiframe; and mapping the multiple code blocks to the at least one PHY link based on the modified overhead multiframe.
  • in this way, the mapping relationship between the at least one time slot included in the PHY link and the at least one service flow can be controlled, so that the code blocks included in a service flow can be transmitted on specified time slots, and the transmission carrier of the code blocks of the service flow is more flexible.
  • the multiple code blocks include code blocks whose type is data and code blocks whose type is idle, and mapping the multiple code blocks to the at least one physical layer PHY link includes: replacing at least one code block whose type is idle among the multiple code blocks with a code block including operation, administration and maintenance (OAM) information, where the OAM information is used to manage the code blocks whose type is data among the multiple code blocks; and mapping the replaced multiple code blocks to the at least one PHY link.
  • any one of the s overhead multiframes further includes at least two management channels, where the management channels include management information for managing the at least one PHY link.
  • the m is 4 or 5.
  • in this way, any PHY can be applied to FlexE technology based on standards such as IA OIF-FLEXE-02.1/02.2, providing good compatibility.
  • both the overhead block and the code block are 257 bits.
  • the n is 639 or 1279.
  • a data processing method is provided, including: a second device acquires multiple code blocks transmitted over at least one physical layer PHY link, where the multiple code blocks include code blocks whose type is overhead and code blocks whose type is data, and a code block whose type is data includes a data unit and a type, or includes a type, a type indication, and code block content; the second device then demaps the code blocks whose type is data, based on their encoding method and the code blocks whose type is overhead, to obtain at least one service flow, and the at least one service flow includes the code blocks whose type is data.
  • the method can implement FlexE based on a code block including a data unit and a type, or based on a code block including a type, a type indication, and code block content.
  • a data processing device is provided, where the device is applied to the first device, and the device includes:
  • an acquisition module configured to acquire at least one service flow, where any one of the at least one service flow includes a plurality of code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content;
  • a mapping module configured to map the multiple code blocks to at least one physical layer PHY link based on the encoding manner of the code blocks, and the at least one PHY link is used to transmit the multiple code blocks.
  • any PHY link in the at least one PHY link includes at least one time slot
  • any PHY link in the at least one PHY link is also used to transmit s overhead multiframes
  • the format of the s overhead multiframes is determined based on the encoding method
  • the s overhead multiframes include a mapping relationship between the at least one time slot and the at least one service flow, and the mapping relationship is used for mapping the multiple code blocks to the PHY link
  • the s is determined based on the transmission rate of the PHY link.
  • the rate of the time slot is 5m gigabits per second
  • one time slot is used to transmit one code block
  • the m is an integer greater than 1.
  • any overhead multiframe in the s overhead multiframes includes multiple overhead frames, and any overhead frame in at least one of the multiple overhead frames includes a mapping relationship between a time slot and a service flow.
  • any overhead frame includes multiple overhead blocks, and at least one overhead block in the multiple overhead blocks includes the mapping relationship between a time slot and a service flow.
  • any overhead multiframe includes 32 overhead frames, and one overhead frame includes 2 overhead blocks.
  • any one of the PHY links includes k time slots, and any overhead multiframe in the s overhead multiframes corresponds to k/s of the time slots
  • the k is determined based on the ratio of the transmission rate of the PHY link to the rate of the time slot, and the k is an integer multiple of the s.
  • every s time slots in the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe in the s overhead multiframes include the i-th time slot in each time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • every k/s time slots among the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe among the s overhead multiframes include the time slots in the i-th time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • any one of the PHY links is used to transmit s overhead blocks for every n*k code blocks transmitted, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multiframe among the s overhead multiframes, where n is a positive integer, and r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.
  • the mapping module is configured to obtain an overhead multiframe corresponding to the at least one service flow, modify the overhead multiframe, and map the multiple code blocks to the at least one PHY link based on the modified overhead multiframe.
  • the multiple code blocks include code blocks whose type is data and code blocks whose type is idle, and the mapping module is configured to replace at least one code block whose type is idle among the multiple code blocks with a code block including operation, administration and maintenance (OAM) information, where the OAM information is used to manage the code blocks whose type is data among the multiple code blocks, and to map the replaced multiple code blocks to the at least one PHY link.
  • any one of the s overhead multiframes further includes at least two management channels, where the management channels include management information for managing the at least one PHY link.
  • the m is 4 or 5.
  • both the overhead block and the code block are 257 bits.
  • the n is 639 or 1279.
  • a data processing device is provided, where the device is applied to a second device, and the device includes:
  • an acquisition module configured to acquire a plurality of code blocks transmitted over at least one physical layer PHY link, where the plurality of code blocks include code blocks whose type is overhead and code blocks whose type is data, and a code block whose type is data includes a data unit and a type, or includes a type, a type indication, and code block content;
  • a demapping module configured to demap the code blocks whose type is data, based on the encoding method of the code blocks whose type is data and on the code blocks whose type is overhead, to obtain at least one service flow, where the at least one service flow includes the code blocks whose type is data.
  • a network device including a processor is provided, where the processor is coupled with a memory in which at least one program instruction or code is stored, and the at least one program instruction or code is loaded and executed by the processor so that the network device implements the data processing method of any one of the first aspect or the second aspect.
  • In a sixth aspect, a network system is provided, including a first device and a second device, where the first device is used to execute any data processing method in the first aspect, and the second device is used to execute any data processing method in the second aspect.
  • a computer-readable storage medium is provided, where at least one program instruction or code is stored in the storage medium; when the program instruction or code is loaded and executed by a processor, the computer implements the data processing method of any one of the first aspect or the second aspect.
  • In an eighth aspect, a communication device is provided, including a transceiver, a memory, and a processor.
  • the transceiver, the memory, and the processor communicate with each other through an internal connection path; the memory is used to store instructions, and the processor is used to execute the instructions stored in the memory to control the transceiver to receive and send signals; when the processor executes the instructions stored in the memory, the processor performs the data processing method of any one of the first aspect or the second aspect.
  • there are one or more processors, and one or more memories.
  • the memory may be integrated with the processor, or the memory may be disposed separately from the processor.
  • the memory may be a non-transitory memory, such as a read-only memory (ROM), which may be integrated with the processor on the same chip or arranged on different chips; this application does not limit the type of the memory or the arrangement of the memory and the processor.
  • a computer program product is provided, comprising computer program code; when the computer program code is run by a computer, the computer executes the data processing method of any one of the first aspect or the second aspect.
  • a chip is provided, including a processor configured to call and execute instructions stored in a memory, so that a communication device installed with the chip executes the data processing method of any one of the first aspect or the second aspect.
  • the chip further includes: an input interface, an output interface, and the memory, and the input interface, the output interface, the processor, and the memory are connected through an internal connection path.
  • FIG. 1 is a schematic structural diagram of a FlexE group provided by an embodiment of the present application;
  • FIG. 2 is a flow chart of a data processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a code block provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of another code block provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a data structure of a PHY link provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the data structure of another PHY link provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an overhead frame and an overhead multiframe provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a code block included in a PHY link transmission service flow provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a process of mapping multiple code blocks provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another process of mapping multiple code blocks provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another process of mapping multiple code blocks provided by an embodiment of the present application.
  • FIG. 12 is a flow chart of another data processing method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of another data processing device provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of another network device provided by an embodiment of the present application.
  • the FlexE protocol enables the rate of the Ethernet interface to flexibly match various business scenarios through the defined adaptation layer; when a higher-bandwidth network processor (NP) or forwarding device appears, there is no need to wait for a new fixed-rate Ethernet standard to be issued before the maximum performance of the equipment can be brought into play.
  • the basic function of FlexE is to map the service flows of p flexible Ethernet clients (FlexE clients) to a flexible Ethernet group (FlexE group) composed of q physical layer (PHY) links according to the time division multiplexing (TDM) mechanism of the FlexE shim, where p and q are both positive integers.
  • An embodiment of the present application provides a data processing method, the method is applied to a network device, and the method can implement FlexE based on a code block encoded in a specific encoding manner.
  • the method includes but is not limited to S201 and S202.
  • the first device acquires at least one service flow, where any one of the at least one service flow includes a plurality of code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content.
  • the manner for the first device to obtain at least one service flow includes: the first device generates at least one service flow, or the first device receives at least one service flow sent by other devices.
  • the content included in the code block corresponds to the encoding method of the code block, and the first device can determine the encoding method of the code block according to the content included in the code block.
  • for example, the encoding method of the code block is 256B/257B encoding, that is, the code block is a 257-bit code block.
  • the 0th bit of the code block indicates a type, and the type is used to indicate whether the code block is a code block including only data or a code block including a control word.
  • in the case that the code block is a code block including only data, the remaining 256 bits of the code block represent the data unit, which includes only data.
  • in the case that the code block is a code block including a control word, the 4th to 1st bits of the code block represent a type indication, the type indication is used to indicate the position of the control word in the code block content, and the remaining 252 bits of the code block represent the code block content, which includes the control word.
  • the content of the code block may also include data, which is not limited in this embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a code block provided in an embodiment of the present application.
  • in FIG. 3, b represents bits; for example, 1b represents 1 bit and 8b represents 8 bits.
  • the 0th bit of the code block indicates the type, and the 256th to 1st bits of the code block indicate the data unit.
  • when the 0th bit is 1, the 64th to 1st bits are represented as D0, the 128th to 65th bits as D1, the 192nd to 129th bits as D2, and the 256th to 193rd bits as D3; D0 to D3 all belong to the data unit, and D0 to D3 are all used to represent data.
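  • As a rough illustration of this layout, the following sketch (a hypothetical helper, not part of the application) unpacks D0 to D3 from a 257-bit value; the little-endian bit numbering is an assumption:

```python
MASK64 = (1 << 64) - 1

def parse_data_block(block: int) -> list[int]:
    # Treats a 257-bit integer as the FIG. 3 layout: bit 0 is the type
    # (1 = data-only), bits 1..256 are the data unit split into D0..D3.
    if block >> 257:
        raise ValueError("code block wider than 257 bits")
    if block & 1 != 1:
        raise ValueError("type bit is 0: not a data-only code block")
    data_unit = block >> 1                     # drop the type bit
    return [(data_unit >> (64 * j)) & MASK64 for j in range(4)]  # D0..D3

# A block carrying 0xABCD in D0 and zeros elsewhere:
d0, d1, d2, d3 = parse_data_block((0xABCD << 1) | 1)
```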
  • FIG. 4 is a schematic structural diagram of another code block provided by the embodiment of the present application.
  • b represents bits.
  • the 0th bit of the code block indicates the type, the 4th to 1st bits of the code block indicate the type indication, and the 256th to 5th bits indicate the code block content.
  • the structure of the code block is the structure corresponding to any case in FIG. 4 .
  • when the code block has the structure corresponding to case 1, the 0th bit of the code block is 0, and the 4th to 1st bits are all 0; the 8th to 5th bits are represented as f_0 and the 64th to 9th bits as C0, where f_0 and C0 belong to the code block content and correspond to the 1st position; the 72nd to 65th bits are represented as BTF1 and the 128th to 73rd bits as C1, where BTF1 and C1 belong to the code block content and correspond to the 2nd position; the 136th to 129th bits are represented as BTF2 and the 192nd to 137th bits as C2, where BTF2 and C2 belong to the code block content and correspond to the 3rd position; the 200th to 193rd bits are represented as BTF3 and the 256th to 201st bits as C3, where BTF3 and C3 belong to the code block content and correspond to the 4th position.
  • f_0 and C0 are used to represent a control word
  • BTF1 and C1 are used to represent a control word
  • BTF2 and C2 are used to represent a control word
  • BTF3 and C3 are used to represent a control word.
  • when the code block has the structure corresponding to case 2, the 0th bit is 0, and the 4th to 1st bits are 1, 0, 0, 0 respectively; the 68th to 5th bits are represented as D0, which belongs to the code block content and corresponds to the 1st position; the 72nd to 69th bits are represented as f_1 and the 128th to 73rd bits as C1, where f_1 and C1 belong to the code block content and correspond to the 2nd position; the 136th to 129th bits are represented as BTF2 and the 192nd to 137th bits as C2, where BTF2 and C2 belong to the code block content and correspond to the 3rd position; the 200th to 193rd bits are represented as BTF3 and the 256th to 201st bits as C3, where BTF3 and C3 belong to the code block content and correspond to the 4th position.
  • D0 is used to represent data
  • f_1 and C1 are used to represent a control word
  • BTF2 and C2 are used to represent a control word
  • BTF3 and C3 are used to represent a control word. The other situations in FIG. 4 will not be repeated here.
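  • The two layouts can be told apart from the low-order bits alone; the following sketch (a hypothetical helper, not part of the application) extracts the type and, for control blocks, the 4-bit type indication, without decoding the per-case meanings enumerated in FIG. 4:

```python
def classify_block(block: int):
    # Bit 0 is the type: 1 means a FIG. 3 data block, 0 means a FIG. 4
    # control block whose bits 4..1 carry the type indication.
    if block & 1:
        return ("data", None)
    indication = (block >> 1) & 0xF   # bits 4..1 as a 4-bit value
    return ("control", indication)

kind, ind = classify_block(0)         # an all-zero control block
```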
  • the first device maps the multiple code blocks to at least one PHY link based on the encoding manner of the code blocks, and the at least one PHY link is used to transmit the multiple code blocks.
  • specifically, the first device determines the encoding mode of the multiple code blocks, and maps the multiple code blocks to at least one PHY link based on the encoding mode, so that the multiple code blocks are transmitted over the at least one PHY link. For example, based on the data unit and type included in a code block, the first device determines that the encoding method of the multiple code blocks is 256B/257B encoding, and based on the 256B/257B encoding, maps the multiple code blocks to the at least one PHY link.
  • alternatively, the first device determines that the encoding method of the multiple code blocks is 256B/257B encoding based on the type, the type indication, and the code block content included in a code block, and based on the 256B/257B encoding, maps the multiple code blocks to the at least one PHY link.
  • the at least one PHY link is called a flexible Ethernet group. That is to say, based on the encoding mode, the first device maps multiple code blocks to a flexible Ethernet group, and the flexible Ethernet group is used to transmit the multiple code blocks.
  • any PHY link includes at least one time slot, where the rate of the time slot is 5m Gbps, one time slot is used to transmit one code block, and m is an integer greater than 1.
  • 5m means 5 times m, which can also be expressed as 5*m.
  • m is 4 or 5, that is, any PHY link includes at least one time slot with a rate of 20 Gbps, or any PHY link includes at least one time slot with a rate of 25 Gbps.
  • for example, when any PHY link includes at least one time slot with a rate of 20 Gbps, the PHY link can be applied to FlexE technology implemented based on standards such as IA OIF-FLEXE-02.1/02.2, with good compatibility.
  • the number of time slots included in the PHY link is determined based on the ratio of the transmission rate of the PHY link to the rate of the time slot.
  • the number of time slots included in the PHY link is represented by k, that is, the PHY link includes k time slots, and k is determined based on the ratio of the transmission rate of the PHY link to the rate of the time slot.
  • TE refers to terabit Ethernet; for example, 1.6TE denotes 1.6 terabit Ethernet.
  • the k time slots are called a cycle period, and the k time slots repeat cyclically to form a bearer channel for code blocks.
  • for example, the 32 time slots included in the 800GE PHY are used as one cycle period, and the bearer channel for code blocks is formed by cycling through the 32 time slots.
  • similarly, the 64 time slots included in the 1.6TE PHY are used as one cycle period, and the bearer channel for code blocks is formed by cycling through the 64 time slots.
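  • The slot count k described above is just the rate ratio; a sketch (function and variable names are ours, not from the application):

```python
def slot_count(phy_rate_gbps: int, slot_rate_gbps: int) -> int:
    # k = transmission rate of the PHY link / rate of one time slot.
    if phy_rate_gbps % slot_rate_gbps:
        raise ValueError("PHY rate must be an integer multiple of the slot rate")
    return phy_rate_gbps // slot_rate_gbps

k_800ge = slot_count(800, 25)    # 800GE PHY with 25 Gbps slots -> 32
k_16te = slot_count(1600, 25)    # 1.6TE PHY with 25 Gbps slots -> 64
```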
  • any PHY link in the at least one PHY link is also used to transmit s overhead multiframes (overhead multiframe), and the format of the s overhead multiframes is determined based on the encoding method of the code blocks.
  • the s overhead multiframes include a mapping relationship between at least one time slot included in the PHY link and at least one service flow, and the mapping relationship is used to map multiple code blocks to the at least one PHY link, where s is determined based on the transmission rate of the PHY link. For example, s is determined based on the correspondence between the transmission rate of the PHY link and the number of overhead multiframes.
  • the s overhead multiframes include a mapping relationship between the k time slots and at least one service flow.
  • One time slot corresponds to one service flow, and multiple time slots may correspond to the same service flow or different service flows. That is to say, when s overhead multiframes include a mapping relationship between k time slots and all service flows in at least one service flow, the mapping relationship is used to map multiple code blocks included in all service flows to on the PHY link.
  • when the s overhead multiframes include a mapping relationship between the k time slots and only part of the at least one service flow, the mapping relationship is used to map the multiple code blocks included in that part of the service flows to the PHY link.
  • k is divisible by s, that is, k is an integer multiple of s.
  • for example, the correspondence between the transmission rate of the PHY link and the number of overhead multiframes may be determined based on the ratio of the transmission rate of the PHY link to a reference rate.
  • the transmission rate of the PHY link is w times the reference rate, and the PHY link is used to transmit w overhead multiframes.
  • the reference rate is 800Gbps.
  • for example, the correspondence between the transmission rate of the PHY link and the number of overhead multiframes includes: when the PHY link is an 800GE PHY, s is 1; when the PHY link is a 1.6TE PHY, s is 2. That is to say, for an 800GE PHY, when the rate of the time slot is 25Gbps, the 800GE PHY is used to transmit 1 overhead multiframe, and the 1 overhead multiframe includes the mapping relationship between 32 time slots and at least one service flow. For a 1.6TE PHY, when the rate of the time slot is 25Gbps, the 1.6TE PHY is used to transmit 2 overhead multiframes, and the 2 overhead multiframes include the mapping relationship between 64 time slots and at least one service flow.
  • the overhead multiframe includes a mapping relationship between k/s timeslots and at least one service flow.
  • the 1.6TE PHY is used to transmit 2 overhead multiframes, and each overhead multiframe includes a mapping relationship between 32 time slots and at least one service flow.
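  • Under the 800 Gbps reference rate described above, s works out to the rate multiple w; a sketch (names are ours, not from the application):

```python
def multiframe_count(phy_rate_gbps: int, reference_rate_gbps: int = 800) -> int:
    # s overhead multiframes per PHY link: the multiple w of the reference rate.
    if phy_rate_gbps % reference_rate_gbps:
        raise ValueError("PHY rate must be an integer multiple of the reference rate")
    return phy_rate_gbps // reference_rate_gbps

s_800ge = multiframe_count(800)     # 800GE PHY  -> 1 overhead multiframe
s_16te = multiframe_count(1600)     # 1.6TE PHY  -> 2 overhead multiframes
```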
  • the k/s time slots corresponding to the i-th overhead multiframe include but are not limited to the following cases 1 and 2, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • in case 1, every s time slots in the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe in the s overhead multiframes include the i-th time slot in each time slot group.
  • for example, the 1.6TE PHY includes 64 time slots, and every 2 time slots in the 64 time slots form a time slot group, so the 1.6TE PHY includes 32 time slot groups.
  • in this case, the 32 time slots corresponding to the 1st overhead multiframe include the 1st time slot in each of the 32 time slot groups, and the 32 time slots corresponding to the 2nd overhead multiframe include the 2nd time slot in each of the 32 time slot groups.
  • for example, when the 64 time slots included in the 1.6TE PHY are slot0 to slot63, the 32 time slots corresponding to the 1st overhead multiframe include slot0, slot2, slot4, ..., slot62, and the 32 time slots corresponding to the 2nd overhead multiframe include slot1, slot3, slot5, ..., slot63.
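  • Case 1 can be sketched as follows (hypothetical helper, not part of the application):

```python
def interleaved_slots(k: int, s: int, i: int) -> list[int]:
    # Case 1: every s consecutive slots form a group; overhead multiframe i
    # (0-based here) takes the i-th slot of each group: i, i+s, i+2s, ...
    return [g * s + i for g in range(k // s)]

mf0 = interleaved_slots(64, 2, 0)   # slot0, slot2, ..., slot62
mf1 = interleaved_slots(64, 2, 1)   # slot1, slot3, ..., slot63
```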
  • in case 2, every k/s time slots in the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe in the s overhead multiframes include the time slots in the i-th time slot group.
  • for example, the 1.6TE PHY includes 64 time slots, and every 32 time slots in the 64 time slots form a time slot group, so the 1.6TE PHY includes 2 time slot groups.
  • in this case, the 32 time slots corresponding to the 1st overhead multiframe include the time slots in the 1st time slot group, and the 32 time slots corresponding to the 2nd overhead multiframe include the time slots in the 2nd time slot group.
  • the 64 time slots included in the 1.6TE PHY are respectively slot0 to slot63
  • the 32 time slots corresponding to the first overhead multiframe include slot0, slot1, slot2, ..., slot31, and the 32 time slots corresponding to the second overhead multiframe include slot32, slot33, slot34, ..., slot63.
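  • The two grouping schemes above can be sketched as follows. This is a minimal illustrative Python sketch; the function name and the `interleaved` flag are assumptions introduced here, not terminology from this application. The example uses the 1.6TE PHY figures (k = 64 time slots, s = 2 overhead multiframes).

```python
def slots_for_multiframe(k: int, s: int, i: int, interleaved: bool) -> list:
    """Return the k/s time slots corresponding to the i-th overhead multiframe.

    Case 1 (interleaved=True): every s slots form a time slot group, and the
    i-th multiframe takes the i-th slot of each group.
    Case 2 (interleaved=False): every k/s slots form a time slot group, and
    the i-th multiframe takes all slots of the i-th group.
    """
    if interleaved:
        return [g * s + i for g in range(k // s)]
    group_size = k // s
    return list(range(i * group_size, (i + 1) * group_size))

# 1.6TE PHY: k = 64, s = 2
print(slots_for_multiframe(64, 2, 0, True)[:4])   # first slots: 0, 2, 4, 6
print(slots_for_multiframe(64, 2, 1, False)[:4])  # first slots: 32, 33, 34, 35
```

  • Running the sketch with i = 0 and i = 1 reproduces the slot0/slot2/.../slot62 and slot32/.../slot63 examples given above.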
  • the k time slots included in the PHY link can be combined in different ways into k/s time slots respectively corresponding to the i-th overhead multiframe, and the correspondence between overhead multiframes and time slots is more flexible.
  • the any overhead multiframe includes a plurality of overhead frames (overhead frames), and for at least one overhead frame in the plurality of overhead frames, any overhead frame in the at least one overhead frame includes a mapping relationship between a time slot and a service flow.
  • the plurality of overhead frames include idle overhead frames, or overhead frames used as reserved bit fields, and the overhead frames used as reserved bit fields may carry information for extended protocols. Since there are various types of overhead frames, the information included in the overhead multiframes is more flexible. In the case that the overhead multiframe includes an overhead frame used as a reserved bit field, the overhead frame can carry richer information, and the overhead multiframe has better scalability.
  • the arbitrary overhead frame may include a plurality of overhead blocks (overhead block, OH block), and at least one overhead block in the plurality of overhead blocks includes an The mapping relationship between a slot and a service flow.
  • the number of overhead frames included in the overhead multiframe is equal to the number of time slots corresponding to the overhead multiframe, that is, an overhead multiframe corresponding to 32 time slots consists of 32 overhead frames.
  • the overhead frame includes 2 overhead blocks.
  • the overhead block may be 257 bits.
  • the number of overhead frames included in the overhead multiframe may also be greater than the number of time slots corresponding to the overhead multiframe.
  • the overhead multiframe includes 34 overhead frames, wherein 32 overhead frames are overhead frames including the mapping relationship between time slots and service flows, and the remaining two overhead frames are overhead frames used as reserved bit fields.
  • the first device maps multiple code blocks to at least one PHY link based on the encoding method of the code block, so that the multiple code blocks are transmitted through the at least one PHY link, and the at least one PHY link may also be used to transmit s overhead multiframes, wherein any overhead multiframe may include 32 overhead frames, and any overhead frame may include 2 overhead blocks.
  • the transmission order of code blocks and overhead multiframes on the any PHY link is: every time the PHY link transmits n*k code blocks, it transmits s overhead blocks, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multi-frame in the s overhead multi-frames, n is a positive integer, and r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.
  • the 800GE PHY includes 32 time slots, and the 800GE PHY transmits one overhead block every time n*32 code blocks are transmitted, and the overhead block is used to form an overhead multiframe transmitted by the 800GE PHY.
  • the 1.6TE PHY includes 64 time slots, and the 1.6TE PHY transmits 2 overhead blocks every time n*64 code blocks are transmitted, wherein the first overhead block is used to form the first overhead Multiframe, the second overhead block is used to form the second overhead multiframe.
  • n is 639 or 1279.
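  • The cadence described above (s overhead blocks inserted after every n*k code blocks) can be sketched as a generator. This is an illustrative Python sketch with hypothetical names; the toy figures n = 2, k = 3 are chosen only so the pattern is visible, whereas the application uses n = 639 or 1279.

```python
def interleave_overhead(code_blocks, n, k, s):
    """Yield ('data', block) items, and after every n*k code blocks yield
    s ('oh', r) items, where r identifies which of the s overhead
    multiframes the overhead block contributes to."""
    for idx, block in enumerate(code_blocks, start=1):
        yield ('data', block)
        if idx % (n * k) == 0:
            for r in range(s):
                yield ('oh', r)

# toy figures: one group of s=2 overhead blocks per 2*3=6 code blocks
stream = list(interleave_overhead(range(12), n=2, k=3, s=2))
print(stream[6], stream[7])  # the first pair of overhead blocks
```

  • With s = 1 (800GE PHY) the generator inserts a single overhead block per n*32 code blocks; with s = 2 (1.6TE PHY) it inserts two, matching the behaviour described above.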
  • the k time slots included in the PHY link correspond to a time slot table (calendar), and the time slot table includes a mapping relationship between at least one time slot included in the s overhead multiframes transmitted by the PHY link and at least one service flow. For example, for 800GE PHY, the 32 time slots included in the 800GE PHY correspond to a calendar, and the calendar includes a mapping relationship between the 32 time slots included in one overhead multiframe transmitted by the 800GE PHY and at least one service flow.
  • the 64 time slots included in the 1.6TE PHY correspond to a calendar
  • the calendar includes the mapping relationship between the 64 time slots included in the two overhead multiframes transmitted by the 1.6TE PHY and at least one service flow .
  • the 32 time slots included in the 800GE PHY transmit 32 code blocks per cycle, and the calendar corresponding to the 32 time slots included in the 800GE PHY is called a 32-block time slot table (32-block calendar);
  • the 64 time slots included in the 1.6TE PHY transmit 64 code blocks per cycle, and the calendar corresponding to the 64 time slots included in the 1.6TE PHY is called a 64-block time slot table (64-block calendar). That is to say, for 800GE PHY, every time the 32-block calendar corresponding to the 800GE PHY is repeated n times, the 800GE PHY transmits 1 overhead block.
  • the time slot table (n repetitions of 32-block calendar between FlexE overheads blocks) is repeated n times between the overhead blocks of Flexible Ethernet, wherein the overhead block of Flexible Ethernet (FlexE overhead, FlexE OH) is the above overhead block.
  • every 2 overhead blocks form an overhead frame
  • every 32 overhead frames form an overhead multiframe.
  • every two OH0s form an overhead frame
  • every 32 overhead frames form the first of the two overhead multiframes
  • every two OH1s form an overhead frame
  • every 32 overhead frames form the second of the two overhead multiframes.
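  • The assembly described above (2 overhead blocks per overhead frame, 32 overhead frames per overhead multiframe) can be sketched as a simple grouping function. This is an illustrative Python sketch with hypothetical names, operating on an abstract sequence of overhead blocks belonging to one multiframe stream (e.g. all OH0s, or all OH1s).

```python
def assemble_multiframes(oh_blocks, blocks_per_frame=2, frames_per_mf=32):
    """Group a flat sequence of overhead blocks into overhead frames
    (2 blocks each), then group the frames into overhead multiframes
    (32 frames each)."""
    frames = [oh_blocks[i:i + blocks_per_frame]
              for i in range(0, len(oh_blocks), blocks_per_frame)]
    return [frames[i:i + frames_per_mf]
            for i in range(0, len(frames), frames_per_mf)]

# 128 overhead blocks -> 64 overhead frames -> 2 overhead multiframes
mfs = assemble_multiframes(list(range(128)))
print(len(mfs), len(mfs[0]), mfs[0][0])
```

  • For a 1.6TE PHY, the OH0 stream and the OH1 stream would each be assembled this way, yielding the two overhead multiframes.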
  • FIG. 7 is a schematic structural diagram of an overhead frame and an overhead multiframe provided by an embodiment of the present application. As shown in Figure 7, the overhead frame and overhead multiframe include but are not limited to the following 16 items:
  • Client time slot table A (client calendar A) and client time slot table B (client calendar B)
  • the overhead frame includes two kinds of time slot table configuration information, which are client calendar A and client calendar B respectively.
  • This client time slot table A can be referred to as time slot table A (calendar A) for short
  • this client time slot table B can be referred to as time slot table B (calendar B) for short.
  • the first overhead frame includes: the client carried by slot 0 when time slot table A is configured as the time slot table in use (client carried calendar A slot 0), and the client carried by slot 0 when time slot table B is configured as the time slot table in use (client carried calendar B slot 0).
  • the client carried calendar A slot 0 refers to the mapping relationship between slot 0 and one service flow in the at least one service flow when time slot table A is configured as the time slot table in use, that is, the service flow mapped to slot 0 for transmission.
  • the client carried calendar A slot 0 can be simply referred to as corresponding to the slot table A, the client carried by slot 0.
  • the client carried calendar B slot 0 refers to the service flow mapped to slot 0 when the slot table B is configured for the slot table in use.
  • the client carried calendar B slot 0 can be simply referred to as corresponding to the slot table B, the client carried by slot 0.
  • the principles of the remaining overhead frames in the overhead multiframe are the same as those of the first overhead frame, and will not be repeated here.
  • the mapping relationship between time slot 0 to time slot 31 and at least one service flow differs between calendar A and calendar B, so by switching calendar A and calendar B as the time slot table configuration information in use, the mapping relationship between the multiple time slots and at least one service flow can be changed. Therefore, in the case that at least one service flow changes, by switching calendar A and calendar B, the mapping relationship between multiple time slots included in at least one PHY link and at least one service flow can be adjusted accordingly, so as to ensure the transmission of the at least one service flow. For example, when the bandwidth of at least one service flow increases, by switching calendar A and calendar B, it can be ensured that the multiple code blocks included in the at least one service flow are mapped to multiple time slots included in at least one PHY link for transmission, avoiding traffic loss.
  • calendar A and calendar B also have the following characteristics:
  • the initiator of time slot negotiation is TX (the transmit end), while RX (the receive end) is in a passive receiving state.
  • TX will refresh the changed calendar B to RX through FlexE OH.
  • TX will initiate a time slot table switching request (calendar switch request, CR) time slot negotiation, requesting to switch the time slot configuration information in use to calendar B.
  • TX will trigger both TX and RX to switch the time slot configuration information in use to calendar B.
  • a FlexE OH time slot negotiation will also be triggered to ensure that the time slot configuration information in use at both ends is consistent.
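  • The negotiation sequence above can be sketched as a toy state update. This is an illustrative Python sketch only; the dictionary fields and function name are hypothetical and do not correspond to bit fields of the overhead frame.

```python
def calendar_switch(tx, rx):
    """Toy model of the calendar A/B switch: tx and rx are dicts with an
    'in_use' table ('A' or 'B') and a 'calendar_b' configuration."""
    # 1. TX refreshes the changed calendar B to RX through FlexE OH.
    rx['calendar_b'] = tx['calendar_b']
    # 2. TX initiates a calendar switch request (CR) time slot negotiation.
    # 3. On completion, both TX and RX switch the in-use table to calendar B,
    #    keeping the configuration in use consistent at both ends.
    tx['in_use'] = rx['in_use'] = 'B'
    return tx, rx

tx = {'in_use': 'A', 'calendar_b': {0: 'client-2'}}
rx = {'in_use': 'A', 'calendar_b': {}}
calendar_switch(tx, rx)
print(tx['in_use'], rx['in_use'], rx['calendar_b'])
```

  • The key property the sketch illustrates is that RX never switches on its own: it first receives the refreshed calendar B, and both ends change `in_use` together.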
  • C: when the time slot table being used is configured as calendar A, C is 0, and when the time slot table being used is configured as calendar B, C is 1.
  • Time slot table switching request (calendar switch request, CR)
  • Overhead multiframe indicator (overhead multiframe indicator, OMF)
  • This OMF is used to indicate the boundaries of overhead multiframes.
  • the bit field numbered 9 in the first block of the overhead frame as shown in FIG. 7 carries the OMF.
  • the OMF value of the first 16 overhead frames is 0, and the OMF value of the last 16 overhead frames is 1.
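  • The OMF pattern above (0 for the first 16 overhead frames of a multiframe, 1 for the last 16) means a 1-to-0 transition marks a multiframe boundary. A minimal Python sketch with an illustrative function name:

```python
def omf_bit(frame_index: int, frames_per_mf: int = 32) -> int:
    """OMF value for the overhead frame at frame_index within its overhead
    multiframe: 0 for the first half (frames 0-15), 1 for the second half
    (frames 16-31)."""
    return 0 if (frame_index % frames_per_mf) < frames_per_mf // 2 else 1

bits = [omf_bit(i) for i in range(64)]  # two consecutive multiframes
print(bits[15], bits[16], bits[31], bits[32])
```

  • A receiver watching this bit can realign to the multiframe boundary whenever the bit falls from 1 back to 0.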
  • Remote PHY fault (remote PHY fault, RPF): this RPF is used to indicate a remote physical failure.
  • the bit field numbered 10 in the first block of the overhead frame shown in FIG. 7 carries the RPF.
  • Synchronization control (synchronization control, SC)
  • the SC is used to control overhead frames including synchronization information channels.
  • the bit field numbered 11 in the first block of the overhead frame as shown in FIG. 7 carries the SC.
  • the synchronization information channel is used to carry synchronization information, and the synchronization information is used to make the device that receives the synchronization information perform synchronization based on the synchronization information.
  • At least one PHY link used to transmit code blocks included in the at least one service flow is called a FlexE group.
  • the any PHY link includes s flexible Ethernet instances (FlexE instances), and one FlexE instance includes k/s time slots corresponding to one overhead multiframe.
  • the 800GE PHY includes 1 FlexE instance
  • the 1.6TE PHY includes 2 FlexE instances, where the first FlexE instance includes the 32 time slots corresponding to the first overhead multiframe, and the second FlexE instance includes the 32 time slots corresponding to the second overhead multiframe.
  • the FlexE map includes a plurality of PHY link information, each bit of the FlexE map corresponds to a PHY link, and the value of each bit of the FlexE map is used to indicate whether the PHY link corresponding to the bit is in In the FlexE group. For example, if the value of the bit is the first value, for example, the first value is 1, it is considered that the PHY link corresponding to the bit belongs to the FlexE group. If the value of the bit is the second value, for example, the second value is 0, it is considered that the PHY link corresponding to the bit does not belong to the FlexE group.
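  • The bit-per-PHY membership rule above can be sketched as a small decoder. This is an illustrative Python sketch with a hypothetical function name, using the first value 1 for "in the group" and the second value 0 for "not in the group" as described.

```python
def phys_in_group(flexe_map: int, num_phys: int) -> list:
    """Decode a FlexE map: bit j set to 1 means the PHY link corresponding
    to bit j belongs to the FlexE group; 0 means it does not."""
    return [j for j in range(num_phys) if (flexe_map >> j) & 1]

# example map: PHYs 0, 2 and 3 belong to the group
print(phys_in_group(0b1101, 4))  # -> [0, 2, 3]
```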
  • the FlexE group number is used to identify the FlexE group. As shown in Figure 7, the bit fields numbered 12 to 31 in the first block of the overhead frame carry the FlexE group number.
  • Synchronization head (synchronization head, SH)
  • the SH is the frame header of the FlexE overhead frame.
  • the bit field numbered 3 in the first block of the overhead frame shown in FIG. 7 and the field under SH in the second block carry the X.
  • the value of X corresponds to the content in the section management channel-1.
  • in one implementation, when the content in section management channel-1 is control-type content, X is 0, and when the content in section management channel-1 is data-type content, X is 1. In another implementation, when the content in section management channel-1 is control-type content, X is 1, and when the content in section management channel-1 is data-type content, X is 0.
  • the management channel includes management information for managing at least one PHY link.
  • the overhead frame includes two section management channels (management channel-section), which are respectively section management channel-1 and section management channel-2, both of which are 8 bytes (byte), and section management channel- Both 1 and section management channel-2 are used for section-to-section management.
  • the bit fields numbered 0 to 63 carry section management channel-2.
  • the overhead frame includes an adaptation layer to an adaptation layer management channel (management channel-shim to shim), the adaptation layer to adaptation layer management channel is used to carry adaptation layer to adaptation layer management information.
  • the management channel is also used to carry other packets.
  • the management channel is also used to carry a link layer discovery protocol (link layer discovery protocol, LLDP) message or a management message.
  • Cyclic redundancy check code-16 (cyclic redundancy check, CRC-16)
  • the CRC-16 is used to perform CRC protection on the content of the overhead block.
  • the CRC-16 is used to check the content preceding the bit field where the CRC-16 is located, excluding the first 9 bits and the 32nd to 35th bits.
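  • The coverage rule above can be expressed as a bit filter. This is an illustrative Python sketch of only the selection of protected bits (0-indexed positions); the CRC-16 polynomial itself is not specified in this passage, so no checksum computation is shown, and the function name is hypothetical.

```python
def protected_bits(bits_before_crc: list) -> list:
    """Select the bits covered by the CRC-16: all content preceding the
    CRC field, excluding the first 9 bits (positions 0-8) and the 32nd
    to 35th bits (positions 32-35)."""
    excluded = set(range(9)) | set(range(32, 36))
    return [b for pos, b in enumerate(bits_before_crc) if pos not in excluded]

bits = [1] * 40
print(len(protected_bits(bits)))  # 40 - 9 - 4 = 27 protected bits
```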
  • the reserved field is used as an extension field carrying information.
  • FIG. 8 is a schematic diagram of code blocks included in a PHY link transmission service flow provided by an embodiment of the present application.
  • the first device acquires two service flows, one service flow is a 25 gigabit (G) service flow, that is, the rate of the service flow is 25Gbps, and the other service flow is a 75G service flow, that is, the service flow The rate of the stream is 75Gbps.
  • the first device maps the code blocks included in the two service flows to a FlexE group composed of two 800GE PHYs, and the rate of the time slots of the two 800GE PHYs is 25Gbps. Then the 25G service flow corresponds to 1 time slot, and the 75G service flow corresponds to 3 time slots. Exemplarily, the code blocks included in the 25G service flow are mapped to slot4 of 800GE PHY1 for transmission, and the code blocks included in the 75G service flow are mapped to slot13 of 800GE PHY1, slot31 of 800GE PHY1 and slot3 of 800GE PHY2 for transmission.
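  • The slot counts in this example follow from dividing the flow rate by the slot rate and rounding up. A minimal Python sketch (illustrative function name) of that calculation:

```python
import math

def slots_needed(flow_rate_gbps: float, slot_rate_gbps: float = 25.0) -> int:
    """Number of time slots a service flow occupies, given the per-slot
    rate (25 Gbps in the example above)."""
    return math.ceil(flow_rate_gbps / slot_rate_gbps)

print(slots_needed(25))  # 25G flow -> 1 time slot
print(slots_needed(75))  # 75G flow -> 3 time slots
```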
  • the black blocks shown in FIG. 8 are overhead blocks.
  • the at least one service flow is a service flow transmitted by at least one PHY link.
  • Mapping multiple code blocks to at least one PHY link includes: obtaining an overhead multiframe corresponding to at least one service flow; and modifying the overhead multiframe and mapping the multiple code blocks to at least one PHY link based on the modified overhead multiframe.
  • the mapping relationship between the time slot included in at least one PHY link and at least one service flow can be modified, so that the code blocks included in the service flow can be switched to different PHY links for transmission.
  • FIG. 9 is a schematic diagram of a process of mapping multiple code blocks provided in an embodiment of the present application.
  • the first device receives the 100G service flow transmitted by the FlexE group consisting of 800GE PHY1, 800GE PHY2, 800GE PHY3 and 800GE PHY4, in which 800GE PHY1, 800GE PHY2, 800GE PHY3 and 800GE PHY4 each include 32 time slots, and code block A, code block B, code block C, and code block D included in the 100G service flow are transmitted on 800GE PHY1.
  • the first device acquires the overhead multiframe corresponding to the 100G service flow, modifies the overhead multiframe, and maps code block A, code block B, code block C, and code block D to the FlexE group composed of 800GE PHY5, 800GE PHY6, 800GE PHY7, and 800GE PHY8 based on the modified overhead multiframe.
  • 800GE PHY5, 800GE PHY6, 800GE PHY7, and 800GE PHY8 all include 32 time slots, and the first device maps code block A, code block B, code block C, and code block D to 800GE PHY7, so that code block A, code block B, code block C and code block D can be transmitted on 800GE PHY7.
  • the PHY links of the code blocks transmitting the service flow can be the same or different, and the code blocks included in the same service flow can be mapped to the same or different PHY links.
  • the mapping relationship between the multiple time slots included in at least one PHY link and at least one service flow is more flexible.
  • the first device receives the 100G service flow transmitted by the FlexE group composed of 800GE PHY1, 800GE PHY2, 800GE PHY3 and 800GE PHY4, and maps the code blocks included in the 100G service flow to the FlexE group composed of 1.6TE PHY1 and 1.6TE PHY2.
  • code block A, code block B, code block C, and code block D included in the 100G service flow are transmitted on 800GE PHY1, and the first device maps code block A, code block B, and code block C to 800GE PHY7 , and map code block D to 800GE PHY8.
  • code block A, code block B, and code block C included in the 100G service flow are transmitted on 800GE PHY1, code block D is transmitted on 800GE PHY2, and the first device maps code block A, code block B, and code block C to 800GE PHY7, and maps code block D to 800GE PHY8.
  • code block A, code block B, and code block C included in the 100G service flow are transmitted on 800GE PHY1, code block D is transmitted on 800GE PHY2, and the first device maps code block A, code block B, code block C, and code block D to 800GE PHY7.
  • FIG. 10 is a schematic diagram of another process of mapping multiple code blocks provided by the embodiment of the present application.
  • the first device receives a 100G service flow and a 75G service flow transmitted by the FlexE group composed of 800GE PHY1, 800GE PHY2, 800GE PHY3 and 800GE PHY4.
  • 800GE PHY1, 800GE PHY2, 800GE PHY3, and 800GE PHY4 all include 32 time slots, code block A, code block B, and code block D included in the 100G service flow are transmitted on 800GE PHY1, code block C included in the 100G service flow is transmitted on 800GE PHY2, and code block X, code block Y, and code block Z included in the 75G service flow are transmitted on 800GE PHY3.
  • the first device acquires the overhead multiframe corresponding to the 100G service flow and the overhead multiframe corresponding to the 75G service flow, modifies the two overhead multiframes, and, based on the modified overhead multiframes, maps code block A, code block B, code block C, code block D, code block X, code block Y, and code block Z to the FlexE group composed of 800GE PHY5, 800GE PHY6, 800GE PHY7, and 800GE PHY8.
  • 800GE PHY5, 800GE PHY6, 800GE PHY7, and 800GE PHY8 all include 32 time slots, and the first device maps code block A, code block B, code block C, code block D, code block X, code block Y, and code block Z to 800GE PHY8.
  • the multiple code blocks included in at least one service flow include code blocks whose type is data and code blocks whose type is idle, and mapping the multiple code blocks to at least one PHY link includes: replacing at least one code block whose type is idle among the plurality of code blocks with a code block including operation, administration and maintenance (OAM) information, the OAM information being used to manage code blocks whose type is data among the plurality of code blocks; and mapping the replaced multiple code blocks to at least one PHY link.
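  • The idle-replacement step above can be sketched as a pass over the code block stream. This is an illustrative Python sketch; the tuple representation of code blocks and the function name are assumptions introduced here (the sketch replaces every idle block, while the application only requires that at least one be replaced).

```python
def insert_oam(blocks, oam_info):
    """Replace code blocks whose type is idle with code blocks carrying
    OAM information; code blocks whose type is data pass through unchanged."""
    return [('oam', oam_info) if kind == 'idle' else (kind, payload)
            for kind, payload in blocks]

stream = [('data', 'A'), ('idle', None), ('data', 'B'), ('idle', None)]
print(insert_oam(stream, 'oam-0'))
```

  • The OAM blocks ride in slots that would otherwise carry idle blocks, so the data code blocks and the line rate are unaffected.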
  • the code block whose type is data includes the above code block including only data and the code block including control words.
  • a service flow is divided into multiple subflows and the code blocks included in the multiple subflows are operated on separately, or the overhead frame is further defined into multiple subframes based on OAM information and the time slots are multiplexed a second time based on the subframes. For example, a time slot is divided into multiple sub-slots based on subframes, and multiple code blocks included in at least one service flow are transmitted based on the multiple sub-slots.
  • the data processing method provided in the embodiment of the present application can also be combined with a slicing packet network (slicing packet network, SPN) technology.
  • the first device has a slice packet network channel overhead processor (SPN channel overhead processor, SCO) function.
  • FIG. 11 is a schematic diagram of another process of mapping multiple code blocks provided by the embodiment of the present application. As shown in FIG. 11 , the first device receives one 10G service flow and two 25G service flows, and each of the 10G service flow and the 25G service flow includes multiple code blocks.
  • the code block includes a data unit and a type with a single bit length, or the code block includes a type, a type indication, and a code block content.
  • the first device transmits code blocks including the 10G service flow and the 25G service flow on different SPN channels (SPN channel).
  • D represents a code block whose type is data
  • I represents a code block whose type is idle
  • O represents a code block including OAM management information.
  • the first device switches the slices. For example, as shown in FIG. 11, the first device switches the SPN channels transmitting the code blocks included in the two 25G service flows.
  • the first device obtains the SPN client based on the code blocks included in one of the 25G service flows, transmits the code blocks included in that 25G service flow to an Ethernet (ETH) interface, and maps the code blocks included in the other 25G service flow and the code blocks included in the 10G service flow to at least one PHY link, so as to transmit the multiple code blocks included in the 10G service flow and the multiple code blocks included in the 25G service flow on at least one PHY link.
  • the method further includes: S203, the first device transmits the multiple code blocks to the second device through at least one PHY.
  • the first device and the second device are devices at both ends of a FlexE group, and the first device transmits the multiple code blocks to the second device through at least one PHY included in the FlexE group.
  • the method provided by the embodiment of the present application can implement FlexE based on a code block including a data unit and a type, or implement FlexE based on a code block including a type, a type indication, and a code block content.
  • since the rate of the time slot included in the PHY link is 5m Gbps, the code blocks included in the at least one service flow can be transmitted at a higher rate.
  • the mapping relationship between at least one time slot included in the PHY link and at least one service flow can be controlled, so that the code blocks included in the service flow can be transmitted on the specified time slot, and the transmission carrier of the code blocks of the service flow is more flexible.
  • the method can realize more refined management of code blocks whose type is data, and the transmission mode of code blocks is more flexible.
  • An embodiment of the present application provides a data processing method. Taking the method applied to the second device as an example, see FIG. 12 , the method includes but is not limited to the following S204 and S205.
  • the second device acquires multiple code blocks transmitted by at least one PHY link.
  • the plurality of code blocks include overhead code blocks and data code blocks
  • the data code blocks include data units and types
  • or, the data code blocks include a type, a type indication, and a code block content.
  • the first device and the second device are devices at both ends of a FlexE group, that is, the second device receives multiple code blocks transmitted by the first device through the at least one PHY.
  • the code block whose type is data is a 256B/257B encoded code block, that is, the code block is 257 bits; the code block whose type is overhead is the overhead block described in the embodiment of FIG. 2 above, and the overhead block is 257 bits.
  • the second device demaps the code blocks of the data type based on the encoding method of the code blocks of the data type and the code blocks of the overhead type to obtain at least one service flow, and at least one service flow includes the code blocks of the data type .
  • the second device demaps the code block whose type is data based on the 256B/257B encoding manner and the overhead block, to obtain at least one service flow.
  • the second device restores the overhead frame based on the received overhead block, and restores the overhead multiframe based on the overhead frame.
  • the second device demaps the code block whose type is data from the time slot of at least one PHY link to obtain at least one service flow.
  • the method provided by the embodiment of the present application can implement FlexE based on a code block including a data unit and a type, or implement FlexE based on a code block including a type, a type indication, and a code block content.
  • FIG. 13 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present application, the apparatus is applied to a first device, and the first device is the first device shown in FIG. 2 and FIG. 12 above. Based on the following multiple modules shown in FIG. 13, the data processing apparatus shown in FIG. 13 can perform all or part of the operations performed by the first device. It should be understood that the apparatus may include more additional modules than those shown or omit some of the modules shown, which is not limited in this embodiment of the present application. As shown in FIG. 13, the apparatus includes:
  • An acquisition module 1301, configured to acquire at least one service flow, any one of the at least one service flow includes a plurality of code blocks, and the code block includes a data unit and type, or the code block includes a type, a type indication, and a code block content;
  • the mapping module 1302 is configured to map the multiple code blocks to at least one physical layer PHY link based on the coding mode of the code block, and the at least one PHY link is used to transmit the multiple code blocks.
  • any PHY link in at least one PHY link includes at least one time slot
  • any PHY link in at least one PHY link is also used to transmit s overhead multiframes
  • the format of the overhead multiframes is determined based on the encoding method.
  • the s overhead multiframes include a mapping relationship between at least one time slot and at least one service flow, the mapping relationship is used to map multiple code blocks to the PHY link, and s is determined based on the transmission rate of the PHY link.
  • the rate of a time slot is 5m gigabits per second, one time slot is used to transmit one code block, and m is an integer greater than 1.
  • any overhead multiframe in the s overhead multiframes includes multiple overhead frames, and for at least one overhead frame in the multiple overhead frames, any overhead frame in the at least one overhead frame It includes a mapping relationship between a time slot and a service flow.
  • the any overhead frame includes multiple overhead blocks, and at least one overhead block in the multiple overhead blocks includes a time slot and a service Flow mapping relationship.
  • any overhead multiframe includes 32 overhead frames, and one overhead frame includes 2 overhead blocks.
  • any PHY link includes k time slots, the overhead multiframe includes a mapping relationship between k/s time slots and at least one service flow, k is determined based on the ratio of the transmission rate of the PHY link to the rate of the time slot, and k is an integer multiple of s.
  • every s time slots in the k time slots is a time slot group
  • the k/s time slots corresponding to the i-th overhead multiframe in the s overhead multiframes include: the i-th time slot in each time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • every k/s time slots among the k time slots is a time slot group
  • the k/s time slots corresponding to the i-th overhead multiframe among the s overhead multiframes include: the time slots included in the i-th time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.
  • any PHY link is used to transmit s overhead blocks for every transmission of n*k code blocks, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multiframe in the s overhead multiframes, n is a positive integer, r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.
  • the mapping module 1302 is configured to obtain an overhead multiframe corresponding to at least one service flow, modify the overhead multiframe, and map multiple code blocks to at least one PHY link based on the modified overhead multiframe.
  • the multiple code blocks include code blocks whose type is data and code blocks whose type is idle, and the mapping module 1302 is configured to replace at least one code block whose type is idle among the multiple code blocks with A code block including OAM information for operation, maintenance and management, the OAM information is used to manage code blocks whose type is data among multiple code blocks; and the replaced multiple code blocks are mapped to at least one PHY link.
  • any one of the s overhead multiframes further includes at least two management channels, and the management channels include management information for managing at least one PHY link.
  • m is 4 or 5.
  • both the overhead block and the code block are 257 bits.
  • n is 639 or 1279.
  • the apparatus further includes: a sending module 1303, configured to transmit the multiple code blocks to the second device through at least one PHY.
  • the device provided by the embodiment of the present application can implement FlexE based on a code block including a data unit and a type, or implement FlexE based on a code block including a type, a type indication, and a code block content.
  • since the rate of the time slot included in the PHY link is 5m Gbps, the code blocks included in the at least one service flow can be transmitted at a higher rate.
  • the mapping relationship between at least one time slot included in the PHY link and at least one service flow can be controlled, so that the code blocks included in the service flow can be transmitted on the specified time slot, and the transmission carrier of the code blocks of the service flow is more flexible.
  • the device can implement finer management of code blocks whose type is data, and the transmission mode of code blocks is more flexible.
  • FIG. 14 is a schematic structural diagram of a data processing device provided by an embodiment of the present application, and the device is applied to a second device, which is the second device shown in FIG. 12 above. Based on the following multiple modules shown in FIG. 14 , the data processing apparatus shown in FIG. 14 can perform all or part of the operations performed by the second device. It should be understood that the device may include more additional modules than those shown or omit some of the modules shown therein, which is not limited in this embodiment of the present application. As shown in Figure 14, the device includes:
  • An acquisition module 1401 configured to acquire multiple code blocks transmitted by at least one PHY link, the multiple code blocks include code blocks whose type is overhead and code blocks whose type is data, and the code blocks whose type is data include data units and types , or, a code block whose type is data includes type, type indication and code block content;
  • the demapping module 1402 is configured to demap the data-type code blocks based on the encoding scheme of the data-type code blocks and the overhead-type code blocks to obtain at least one service flow, where the at least one service flow includes the data-type code blocks.
  • the device provided by the embodiment of the present application can implement FlexE based on a code block including a data unit and a type, or implement FlexE based on a code block including a type, a type indication, and a code block content.
  • the specific hardware structure of the device in the above embodiment is a network device 1500 as shown in FIG. 15 , which includes a transceiver 1501 , a processor 1502 and a memory 1503 .
  • the transceiver 1501 , the processor 1502 and the memory 1503 are connected through a bus 1504 .
  • the transceiver 1501 is used to obtain at least one service flow and transmit multiple code blocks
  • the memory 1503 is used to store instructions or program codes
  • the processor 1502 is used to call the instructions or program codes in the memory 1503 to make the device execute the above method embodiments Relevant processing steps of the first device or the second device.
  • the network device 1500 in the embodiment of the present application may correspond to the first device or the second device in the above method embodiments, and the processor 1502 in the network device 1500 reads the instructions or program codes in the memory 1503 , enabling the network device 1500 shown in FIG. 15 to perform all or part of the operations performed by the first device or the second device.
  • the network device 1500 may also correspond to the above-mentioned devices shown in FIG. 13 and FIG. 14.
  • the acquiring module 1301, sending module 1303, and acquiring module 1401 involved in FIG. 13 and FIG. 14 are equivalent to the transceiver 1501, and the mapping module 1302 and the demapping module 1402 are equivalent to the processor 1502.
  • FIG. 16 shows a schematic structural diagram of a network device 2000 provided by an exemplary embodiment of the present application.
  • the network device 2000 shown in FIG. 16 is configured to perform the operations involved in the above data processing methods shown in FIG. 2 and FIG. 12 .
  • the network device 2000 is, for example, a switch, a router, and the like.
  • a network device 2000 includes at least one processor 2001 , a memory 2003 and at least one communication interface 2004 .
  • the processor 2001 is, for example, a general-purpose central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), a network processor (network processor, NP), a graphics processing unit (graphics processing unit, GPU), A neural network processor (neural-network processing units, NPU), a data processing unit (data processing unit, DPU), a microprocessor, or one or more integrated circuits for implementing the solution of this application.
  • the processor 2001 includes an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • the PLD is, for example, a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), a general array logic (generic array logic, GAL) or any combination thereof. It can realize or execute various logical blocks, modules and circuits described in conjunction with the disclosure of the embodiments of the present invention.
  • the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the network device 2000 further includes a bus.
  • the bus is used to transfer information between the various components of the network device 2000 .
  • the bus may be a peripheral component interconnect standard (PCI for short) bus or an extended industry standard architecture (EISA for short) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is used in FIG. 16 , but it does not mean that there is only one bus or one type of bus.
  • the components of the network device 2000 may be connected in other ways besides the bus connection, and the embodiment of the present invention does not limit the connection mode of the components.
  • the memory 2003 is, for example, a read-only memory (read-only memory, ROM) or another type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium capable of carrying or storing desired program code in the form of instructions or data structures and capable of being accessed by a computer, but is not limited thereto.
  • the memory 2003 exists independently, for example, and is connected to the processor 2001 via a bus.
  • the memory 2003 can also be integrated with the processor 2001.
  • the communication interface 2004 uses any device such as a transceiver to communicate with other devices or a communication network.
  • the communication network can be Ethernet, radio access network (RAN) or wireless local area network (wireless local area networks, WLAN).
  • the communication interface 2004 may include a wired communication interface, and may also include a wireless communication interface.
  • the communication interface 2004 may be an Ethernet (ethernet) interface, a fast ethernet (fast ethernet, FE) interface, a gigabit ethernet (gigabit ethernet, GE) interface, an asynchronous transfer mode (asynchronous transfer mode, ATM) interface, a wireless local area network ( wireless local area networks, WLAN) interface, cellular network communication interface or a combination thereof.
  • the Ethernet interface can be an optical interface, an electrical interface or a combination thereof.
  • the communication interface 2004 may be used for the network device 2000 to communicate with other devices.
  • the processor 2001 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 16 .
  • Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the network device 2000 may include multiple processors, such as the processor 2001 and the processor 2005 shown in FIG. 16 .
  • processors can be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data such as computer program instructions.
  • the network device 2000 may further include an output device and an input device.
  • Output devices communicate with processor 2001 and can display information in a variety of ways.
  • the output device may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a cathode ray tube (cathode ray tube, CRT) display device, or a projector (projector).
  • the input device communicates with the processor 2001 and can receive user input in various ways.
  • the input device may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
  • the memory 2003 is used to store the program code 2010 for implementing the solution of the present application
  • the processor 2001 can execute the program code 2010 stored in the memory 2003 . That is, the network device 2000 can implement the data processing method provided by the method embodiment through the processor 2001 and the program code 2010 in the memory 2003 .
  • One or more software modules may be included in the program code 2010 .
  • the processor 2001 itself may also store program codes or instructions for executing the solutions of the present application.
  • the network device 2000 in the embodiment of the present application may correspond to the first device or the second device in the above method embodiments, and the processor 2001 in the network device 2000 reads the program code 2010 in the memory 2003, or the program codes or instructions stored in the processor 2001, enabling the network device 2000 shown in FIG. 16 to perform all or part of the operations performed by the first device or the second device.
  • the network device 2000 may also correspond to the above-mentioned devices shown in FIG. 13 and FIG. 14 , and each functional module in the device shown in FIG. 13 and FIG. 14 is implemented by software of the network device 2000 .
  • the functional modules included in the devices shown in FIG. 13 and FIG. 14 are generated after the processor 2001 of the network device 2000 reads the program code 2010 stored in the memory 2003 .
  • the obtaining module 1301 , the sending module 1303 and the obtaining module 1401 involved in FIG. 13 and FIG. 14 are equivalent to the communication interface 2004
  • the mapping module 1302 and the demapping module 1402 are equivalent to the processor 2001 and/or the processor 2005 .
  • each step of the method shown in FIG. 2 and FIG. 12 is completed by an integrated logic circuit of hardware in the processor of the network device 2000 or an instruction in the form of software.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
  • an embodiment of the present application further provides a network system, and the system includes: a first device and a second device.
  • the first device is the network device 1500 shown in FIG. 15 or the network device 2000 shown in FIG. 16
  • the second device is the network device 1500 shown in FIG. 15 or the network device 2000 shown in FIG. 16 .
  • processor may be a central processing unit (CPU), and may also be other general-purpose processors, digital signal processing (digital signal processing, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. It should be noted that the processor may be a processor supporting advanced RISC machines (ARM) architecture.
  • the above-mentioned memory may include a read-only memory and a random-access memory, and provide instructions and data to the processor.
  • Memory may also include non-volatile random access memory.
  • the memory may also store device type information.
  • the memory can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
  • the non-volatile memory can be read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically programmable Erases programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, many forms of RAM are available.
  • For example: static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the present application provides a computer program (product).
  • When the computer program is executed by a computer, it can cause a processor or the computer to execute the corresponding steps and/or processes in the above method embodiments.
  • a chip including a processor, configured to call from a memory and execute instructions stored in the memory, so that a communication device installed with the chip executes the methods in the above aspects.
  • the chip further includes: an input interface, an output interface, and the memory, and the input interface, the output interface, the processor, and the memory are connected through an internal connection path.
  • a device is also provided, which includes the above-mentioned chip.
  • the device is a network device.
  • the device is a router or a switch or a server.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)) and the like.
  • the computer program product includes one or more computer program instructions.
  • the methods of embodiments of the present application may be described in the context of machine-executable instructions, such as program modules, executed in a device on a target real or virtual processor.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data structures.
  • the functionality of the program modules may be combined or divided between the described program modules.
  • Machine-executable instructions for program modules may be executed locally or in distributed devices. In a distributed device, program modules may be located in both local and remote storage media.
  • Computer program codes for implementing the methods of the embodiments of the present application may be written in one or more programming languages. These computer program codes can be provided to processors of general-purpose computers, special-purpose computers, or other programmable data processing devices, so that when the program codes are executed by the computer or other programmable data processing devices, The functions/operations specified in are implemented.
  • the program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.
  • computer program codes or related data may be carried by any appropriate carrier, so that a device, apparatus or processor can perform various processes and operations described above.
  • Examples of carriers include signals, computer readable media, and the like.
  • Examples of signals may include electrical, optical, radio, sound, or other forms of propagated signals, such as carrier waves, infrared signals, and the like.
  • a machine-readable medium may be any tangible medium that contains or stores a program for or related to an instruction execution system, apparatus, or device.
  • a machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More detailed examples of machine-readable storage media include an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), optical storage, magnetic storage, or any suitable combination thereof.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division. In actual implementation, there may be other division methods.
  • multiple modules or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or modules, and may also be electrical, mechanical or other forms of connection.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or may be distributed to multiple network modules. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present application.
  • each functional module in each embodiment of the present application may be integrated into one processing module, each module may exist separately physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules.
  • If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disc, or other media that can store program codes.
  • the words "first", "second", and the like in this application are used to distinguish identical or similar items having basically the same functions and roles. It should be understood that there is no logical or chronological dependency among "first", "second", and "n-th", and no limitation on quantity or execution order. It should also be understood that although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device, without departing from the scope of the various described examples. Both the first device and the second device may be any type of network device, and in some cases, may be separate and distinct network devices.
  • the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting".
  • the phrases "if it is determined..." or "if [the stated condition or event] is detected" may be construed to mean "when determining..." or "in response to determining..." or "upon detection of [the stated condition or event]" or "in response to detection of [the stated condition or event]".
  • determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application discloses a data processing method, apparatus, device, system, and computer-readable storage medium, relating to the field of communications technologies. The method includes: a first device obtains at least one service flow, where any service flow in the at least one service flow includes multiple code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content; based on the encoding scheme of the code blocks, the first device maps the multiple code blocks onto at least one PHY link, and the at least one PHY link is used to transmit the multiple code blocks. The method enables FlexE based on code blocks including a data unit and a type, or FlexE based on code blocks including a type, a type indication, and code block content.

Description

Data processing method, apparatus, device, system, and computer-readable storage medium

This application claims priority to Chinese Patent Application No. 202210198049.7, filed on March 2, 2022 and entitled "Data processing method, apparatus and network system", which is incorporated herein by reference in its entirety; this application also claims priority to Chinese Patent Application No. 202210554595.X, filed on May 20, 2022 and entitled "Data processing method, apparatus, device, system and computer-readable storage medium", which is incorporated herein by reference in its entirety.

Technical Field

The present application relates to the field of communications technologies, and in particular, to a data processing method, apparatus, device, system, and computer-readable storage medium.

Background

With the diversification of internet protocol (IP) network applications and services, the trend of increasing network traffic has become more and more apparent. Since current Ethernet interface standards all specify fixed rates, the optical internetworking forum (OIF) initiated the flexible ethernet (FlexE) protocol on the basis of the Institute of Electrical and Electronics Engineers (IEEE) 802.3 protocol to meet the demand for higher bandwidth. The FlexE protocol defines an adaptation layer (FlexE shim) between the media access control (MAC) layer and the physical coding sublayer (PCS), through which the Ethernet interface rate can flexibly match various service scenarios.

Summary

The present application provides a data processing method, apparatus, device, system, and computer-readable storage medium for implementing FlexE based on code blocks encoded in a specific encoding scheme.
According to a first aspect, a data processing method is provided. The method includes: a first device obtains at least one service flow, where any service flow in the at least one service flow includes multiple code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content; then, based on the encoding scheme of the code blocks, the multiple code blocks included in the at least one service flow are mapped onto at least one PHY link, and the at least one PHY link is used to transmit the multiple code blocks. The method enables FlexE based on code blocks including a data unit and a type, or FlexE based on code blocks including a type, a type indication, and code block content.
In a possible implementation, any PHY link in the at least one PHY link includes at least one time slot, and any PHY link in the at least one PHY link is further used to transmit s overhead multiframes. The format of the s overhead multiframes is determined based on the encoding scheme. The s overhead multiframes include a mapping relationship between the at least one time slot and the at least one service flow, the mapping relationship is used to map the multiple code blocks onto the PHY link, and s is determined based on the transmission rate of the PHY link.

In a possible implementation, the rate of a time slot is 5m gigabits per second, one time slot is used to transmit one code block, and m is an integer greater than 1. When the rate of the time slots included in the PHY link is 5m Gbps, the code blocks included in the at least one service flow can be transmitted at a higher rate.

In a possible implementation, any overhead multiframe in the s overhead multiframes includes multiple overhead frames, and for at least one overhead frame in the multiple overhead frames, any overhead frame in the at least one overhead frame includes a mapping relationship between one time slot and one service flow.

In a possible implementation, for any overhead frame in the at least one overhead frame, the overhead frame includes multiple overhead blocks, and at least one overhead block in the multiple overhead blocks includes the mapping relationship between the one time slot and one service flow.

In a possible implementation, any overhead multiframe includes 32 overhead frames, and one overhead frame includes 2 overhead blocks.

In a possible implementation, any PHY link includes k time slots, and for any overhead multiframe in the s overhead multiframes, the overhead multiframe includes a mapping relationship between k/s time slots and the at least one service flow, where k is determined based on the ratio of the transmission rate of the PHY link to the rate of a time slot, and k is an integer multiple of s.

In a possible implementation, every s time slots of the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe of the s overhead multiframes include the i-th time slot in each time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.

In a possible implementation, every k/s time slots of the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe of the s overhead multiframes include the time slots in the i-th time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s. The k time slots included in the PHY link can thus be combined in different ways into the k/s time slots corresponding to the i-th overhead multiframe, so the correspondence between overhead multiframes and time slots is relatively flexible.
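The two slot groupings described above (interleaved: every s slots form a group; contiguous: every k/s slots form a group) can be sketched as follows. This is only an illustrative model, not the normative calendar layout; the function name and 0-based indexing of slots and multiframes are assumptions for the example.

```python
def slots_for_multiframe(i, k, s, contiguous=False):
    """Return the k/s slot indices carried by overhead multiframe i (0-based).

    contiguous=False: every s slots form a group; multiframe i owns the i-th
    slot of each group (interleaved grouping).
    contiguous=True: every k/s slots form a group; multiframe i owns all slots
    of the i-th group.
    """
    assert k % s == 0, "k must be an integer multiple of s"
    if contiguous:
        width = k // s
        return list(range(i * width, (i + 1) * width))
    return list(range(i, k, s))
```

For k = 64 and s = 2, multiframe 0 covers the even slots under the interleaved grouping, and slots 0 to 31 under the contiguous grouping.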
In a possible implementation, any PHY link is used to transmit s overhead blocks for every n*k code blocks transmitted, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multiframe in the s overhead multiframes, where n is a positive integer, and r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.
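The overhead cadence above can be modeled as interleaving s overhead blocks into the code-block stream once per n*k code blocks. This sketch assumes, for illustration only, that the s overhead blocks precede each batch of n*k code blocks; the patent does not fix that ordering here, and the `"OHr"` labels are placeholders for real overhead blocks.

```python
def interleave_overhead(code_blocks, n, k, s):
    """Build the PHY-link block sequence: s overhead blocks per n*k code blocks.

    The r-th overhead block of each batch ("OHr") feeds the r-th overhead
    multiframe (0-based r), per the cadence described in the text.
    """
    out = []
    for start in range(0, len(code_blocks), n * k):
        out.extend(f"OH{r}" for r in range(s))
        out.extend(code_blocks[start:start + n * k])
    return out
```

With n = 1, k = 4, s = 2, every 4 code blocks are preceded by overhead blocks OH0 and OH1.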
In a possible implementation, the mapping the multiple code blocks onto at least one physical layer PHY link includes: obtaining an overhead multiframe corresponding to the at least one service flow; and modifying the overhead multiframe, and mapping the multiple code blocks onto the at least one PHY link based on the modified overhead multiframe. By modifying the overhead multiframe, the mapping relationship between the at least one time slot included in the PHY link and the at least one service flow can be controlled, so that the code blocks included in a service flow can be transmitted in specified time slots, which makes the transmission carrier of the code blocks of the service flow more flexible.

In a possible implementation, the multiple code blocks include data-type code blocks and idle-type code blocks, and the mapping the multiple code blocks onto at least one physical layer PHY link includes: replacing at least one idle-type code block in the multiple code blocks with a code block including operation, administration and maintenance (OAM) information, where the OAM information is used to manage the data-type code blocks in the multiple code blocks; and mapping the replaced multiple code blocks onto the at least one PHY link. By replacing idle code blocks with code blocks including OAM information, the method enables finer-grained management of the data-type code blocks, and the code block transmission mode is more flexible.

In a possible implementation, any overhead multiframe in the s overhead multiframes further includes at least two management channels, and a management channel includes management information for managing the at least one PHY link.

In a possible implementation, m is 4 or 5. When m is 4, the PHY link can be used with FlexE technologies implemented based on standards such as IA OIF-FLEXE-02.1/02.2, providing good compatibility.

In a possible implementation, both the overhead blocks and the code blocks are 257 bits.

In a possible implementation, n is 639 or 1279.
According to a second aspect, a data processing method is provided. The method includes: a second device obtains multiple code blocks transmitted over at least one physical layer PHY link, where the multiple code blocks include overhead-type code blocks and data-type code blocks, and a data-type code block includes a data unit and a type, or a data-type code block includes a type, a type indication, and code block content; the second device demaps the data-type code blocks based on the encoding scheme of the data-type code blocks and the overhead-type code blocks to obtain at least one service flow, where the at least one service flow includes the data-type code blocks. The method enables FlexE based on code blocks including a data unit and a type, or FlexE based on code blocks including a type, a type indication, and code block content.
According to a third aspect, a data processing apparatus is provided. The apparatus is applied to a first device, and the apparatus includes:

an obtaining module, configured to obtain at least one service flow, where any service flow in the at least one service flow includes multiple code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content; and

a mapping module, configured to map, based on the encoding scheme of the code blocks, the multiple code blocks onto at least one physical layer PHY link, where the at least one PHY link is used to transmit the multiple code blocks.
In a possible implementation, any PHY link in the at least one PHY link includes at least one time slot, and any PHY link in the at least one PHY link is further used to transmit s overhead multiframes. The format of the s overhead multiframes is determined based on the encoding scheme. The s overhead multiframes include a mapping relationship between the at least one time slot and the at least one service flow, the mapping relationship is used to map the multiple code blocks onto the PHY link, and s is determined based on the transmission rate of the PHY link.

In a possible implementation, the rate of a time slot is 5m gigabits per second, one time slot is used to transmit one code block, and m is an integer greater than 1.

In a possible implementation, any overhead multiframe in the s overhead multiframes includes multiple overhead frames, and for at least one overhead frame in the multiple overhead frames, any overhead frame in the at least one overhead frame includes a mapping relationship between one time slot and one service flow.

In a possible implementation, for any overhead frame in the at least one overhead frame, the overhead frame includes multiple overhead blocks, and at least one overhead block in the multiple overhead blocks includes the mapping relationship between the one time slot and one service flow.

In a possible implementation, any overhead multiframe includes 32 overhead frames, and one overhead frame includes 2 overhead blocks.

In a possible implementation, any PHY link includes k time slots, and for any overhead multiframe in the s overhead multiframes, the overhead multiframe includes a mapping relationship between k/s time slots and the at least one service flow, where k is determined based on the ratio of the transmission rate of the PHY link to the rate of a time slot, and k is an integer multiple of s.

In a possible implementation, every s time slots of the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe of the s overhead multiframes include the i-th time slot in each time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.

In a possible implementation, every k/s time slots of the k time slots form a time slot group, and the k/s time slots corresponding to the i-th overhead multiframe of the s overhead multiframes include the time slots in the i-th time slot group, where i is an integer greater than or equal to 0 and less than s, or i is an integer greater than 0 and less than or equal to s.

In a possible implementation, any PHY link is used to transmit s overhead blocks for every n*k code blocks transmitted, and the r-th overhead block in the s overhead blocks is used to form the r-th overhead multiframe in the s overhead multiframes, where n is a positive integer, and r is an integer greater than or equal to 0 and less than s, or r is an integer greater than 0 and less than or equal to s.

In a possible implementation, the mapping module is configured to obtain an overhead multiframe corresponding to the at least one service flow, modify the overhead multiframe, and map the multiple code blocks onto the at least one PHY link based on the modified overhead multiframe.

In a possible implementation, the multiple code blocks include data-type code blocks and idle-type code blocks, and the mapping module is configured to replace at least one idle-type code block in the multiple code blocks with a code block including operation, administration and maintenance OAM information, where the OAM information is used to manage the data-type code blocks in the multiple code blocks, and to map the replaced multiple code blocks onto the at least one PHY link.

In a possible implementation, any overhead multiframe in the s overhead multiframes further includes at least two management channels, and a management channel includes management information for managing the at least one PHY link.

In a possible implementation, m is 4 or 5.

In a possible implementation, both the overhead blocks and the code blocks are 257 bits.

In a possible implementation, n is 639 or 1279.
According to a fourth aspect, a data processing apparatus is provided. The apparatus is applied to a second device, and the apparatus includes:

an obtaining module, configured to obtain multiple code blocks transmitted over at least one physical layer PHY link, where the multiple code blocks include overhead-type code blocks and data-type code blocks, and a data-type code block includes a data unit and a type, or a data-type code block includes a type, a type indication, and code block content; and

a demapping module, configured to demap the data-type code blocks based on the encoding scheme of the data-type code blocks and the overhead-type code blocks to obtain at least one service flow, where the at least one service flow includes the data-type code blocks.
According to a fifth aspect, a network device is provided, including a processor, where the processor is coupled to a memory, the memory stores at least one program instruction or code, and the at least one program instruction or code is loaded and executed by the processor, so that the network device implements the data processing method in any one of the first aspect or the second aspect.

According to a sixth aspect, a network system is provided. The system includes a first device and a second device, where the first device is configured to perform the data processing method in any one of the first aspect, and the second device is configured to perform the data processing method in any one of the second aspect.

According to a seventh aspect, a computer-readable storage medium is provided. The storage medium stores at least one program instruction or code, and when the program instruction or code is loaded and executed by a processor, a computer implements the data processing method in any one of the first aspect or the second aspect.

According to an eighth aspect, a communication apparatus is provided. The apparatus includes a transceiver, a memory, and a processor, which communicate with each other through an internal connection path. The memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory, to control the transceiver to receive signals and to control the transceiver to send signals. When the processor executes the instructions stored in the memory, the processor is enabled to perform the data processing method in any one of the first aspect or the second aspect.

Exemplarily, there are one or more processors and one or more memories.

Exemplarily, the memory may be integrated with the processor, or the memory and the processor may be disposed separately.

In a specific implementation process, the memory may be a non-transitory memory, for example, a read-only memory (read only memory, ROM), which may be integrated with the processor on the same chip or disposed on different chips. The type of the memory and the arrangement of the memory and the processor are not limited in this application.

According to a ninth aspect, a computer program product is provided. The computer program product includes computer program code, and when the computer program code is run by a computer, the computer is caused to perform the data processing method in any one of the first aspect or the second aspect.

According to a tenth aspect, a chip is provided, including a processor, configured to call from a memory and run instructions stored in the memory, so that a communication device installed with the chip performs the data processing method in any one of the first aspect or the second aspect.

Exemplarily, the chip further includes an input interface, an output interface, and the memory, where the input interface, the output interface, the processor, and the memory are connected through an internal connection path.

It should be understood that, for the beneficial effects of the technical solutions of the third aspect to the tenth aspect of this application and their corresponding possible implementations, reference may be made to the technical effects of the first aspect and the second aspect and their corresponding possible implementations, and details are not described herein again.
Brief Description of Drawings

FIG. 1 is a schematic structural diagram of a FlexE Group provided by an embodiment of the present application;

FIG. 2 is a flowchart of a data processing method provided by an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a code block provided by an embodiment of the present application;

FIG. 4 is a schematic structural diagram of another code block provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a data structure of a PHY link provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of a data structure of another PHY link provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of an overhead frame and an overhead multiframe provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of code blocks included in a service flow transmitted over a PHY link provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of a process of mapping multiple code blocks provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of another process of mapping multiple code blocks provided by an embodiment of the present application;

FIG. 11 is a schematic diagram of yet another process of mapping multiple code blocks provided by an embodiment of the present application;

FIG. 12 is a flowchart of another data processing method provided by an embodiment of the present application;

FIG. 13 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present application;

FIG. 14 is a schematic structural diagram of another data processing apparatus provided by an embodiment of the present application;

FIG. 15 is a schematic structural diagram of a network device provided by an embodiment of the present application;

FIG. 16 is a schematic structural diagram of a network device provided by an embodiment of the present application.
Detailed Description

The terms used in the implementation part of this application are only used to explain the embodiments of this application, and are not intended to limit this application. The embodiments of this application are described below with reference to the accompanying drawings.

Since Ethernet interface standard formulation and product development proceed in steps, and current Ethernet interface standards all specify fixed rates, there is a gap between transmission requirements and the actual interface capabilities of devices, and the demand for higher bandwidth needs to be met at the current Ethernet interface rate levels. To this end, the OIF initiated the FlexE protocol. Through its defined adaptation layer, the FlexE protocol enables the Ethernet interface rate to flexibly match various service scenarios, and when higher-bandwidth network processor (network processor, NP)/forwarding devices appear, the maximum performance of a device can be exploited without waiting for a new fixed-rate Ethernet standard to be released.

The basic function of FlexE is to map the service flows of p flexible Ethernet clients (FlexE clients) onto a flexible Ethernet group (FlexE group) composed of q physical layer (PHY) links according to the time division multiplexing (TDM) mechanism of the FlexE shim, where p and q are both positive integers. Exemplarily, the basic structure of FlexE may be as shown in FIG. 1. In FIG. 1, p is 6 and q is 4, that is, the FlexE shown in FIG. 1 maps the service flows of 6 FlexE clients onto a FlexE group composed of 4 PHY links according to the TDM mechanism of the FlexE shim.
An embodiment of the present application provides a data processing method. The method is applied to a network device and enables FlexE based on code blocks encoded in a specific encoding scheme. Taking the first device performing the data processing method as an example, as shown in FIG. 2, the method includes but is not limited to S201 and S202.

S201: The first device obtains at least one service flow, where any service flow in the at least one service flow includes multiple code blocks, and a code block includes a data unit and a type, or a code block includes a type, a type indication, and code block content.

In a possible implementation, the manner in which the first device obtains the at least one service flow includes: the first device generates the at least one service flow, or the first device receives the at least one service flow sent by another device. Exemplarily, the content included in a code block corresponds to the encoding scheme of the code block, and the first device can determine the encoding scheme of the code block according to the content included in the code block. Exemplarily, when a code block includes a type and a data unit, or a code block includes a type, a type indication, and code block content, the encoding scheme of the code block is 256B/257B encoding, that is, the code block is a 257-bit code block.

In a possible implementation, bit 0 of a code block indicates the type, which is used to indicate whether the code block is a code block including only data or a code block including control words. For example, when bit 0 is 1, the code block is a code block including only data, and the remaining 256 bits of the code block represent the data unit, which includes only data. When bit 0 is 0, the code block is a code block including control words, bits 4 to 1 of the code block represent the type indication, which indicates the positions of the control words in the code block content, and the remaining 252 bits of the code block represent the code block content, which includes the control words. Of course, when the code block is a code block including control words, the code block content may also include data, which is not limited in this embodiment of the present application.

Exemplarily, FIG. 3 is a schematic structural diagram of a code block provided by an embodiment of the present application, where b denotes bits; for example, 1b denotes 1 bit and 8b denotes 8 bits. As shown in FIG. 3, bit 0 of the code block indicates the type, and bits 256 to 1 of the code block represent the data unit. Bit 0 is 1, bits 64 to 1 are denoted D0, bits 128 to 65 are denoted D1, bits 192 to 129 are denoted D2, and bits 256 to 193 are denoted D3, where D0 to D3 all belong to the data unit. Exemplarily, D0 to D3 all represent data.
示例性地,图4为本申请实施例提供的另一种码块的结构示意图。其中,b表示比特。如图4所示,该码块的第0比特表示类型,该码块的第4比特至第1比特表示类型指示,第256比特至第5比特表示码块内容。示例性地,对于任一个业务流包括的任一个码块,该码块的结构为图4中的任一种情况对应的结构。例如,该码块为情况1对应的结构,该码块的第0比特为0,第4比特至第1比特均为0;第8比特至第5比特表示为f_0,第64比特至第9比特表示为C0,该f_0和C0属于码块内容,且与第1比特对应;第72比特至第65比特表示为BTF1,第128比特至第73比特表示为C1,该BTF1和C1属于码块内容,且与第2比特对应;第136比特至第129比特表示为BTF2,第192比特至第137比特表示为C2,该BTF2和C2属于码块内容,且与第3比特对应;第200比特至第193比特表示为BTF3,第256比特至第201比特表示为C3,该BTF3和C3属于码块内容,且与第4比特对应。示例性地,f_0和C0用于表示1个控制字,BTF1和C1用于表示一个控制字,BTF2和C2用于表示一个控制字,BTF3和C3用于表示一个控制字。
图4中的其余情况与上述情况1原理相同,例如,该码块为情况2对应的结构,第0比特为0,第4比特至第1比特分别为1、0、0、0;第68比特至第5比特表示为D0,该D0属于码块内容, 且与第1比特对应;第72比特至第69比特表示为f_1,第128比特至第73比特表示为C1,该f_1和C1属于码块内容,且与第2比特对应;第136比特至第129比特表示为BTF2,第192比特至第137比特表示为C2,该BTF2和C2属于码块内容,且与第3比特对应;第200比特至第193比特表示为BTF3,第256比特至第201比特表示为C3,该BTF3和C3属于码块内容,且与第4比特对应。示例性地,D0用于表示数据,f_1和C1用于表示一个控制字,BTF2和C2用于表示一个控制字,BTF3和C3用于表示一个控制字。此处不再对图4中的其他情况进行赘述。
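示例性地,上述码块结构的判别过程可以用如下Python代码示意。该示例及其中的函数名、返回的数据结构均为便于理解而作的假设,并非本申请或256B/257B编码标准定义的实现:

```python
def parse_block(bits):
    # 解析一个257比特码块(以0/1列表示意), bits[0]为类型
    assert len(bits) == 257
    if bits[0] == 1:
        # 类型为1: 仅包括数据的码块, 其余256比特为数据单元
        return {"kind": "data", "data_unit": bits[1:257]}
    # 类型为0: 包括控制字的码块,
    # 第4比特至第1比特为类型指示, 其余252比特为码块内容
    return {"kind": "control",
            "type_indication": bits[1:5],
            "content": bits[5:257]}
```

例如,类型位为0且类型指示为1、0、0、0的码块即对应图4中的情况2。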
S202,第一设备基于码块的编码方式,将多个码块映射到至少一条PHY链路上,该至少一条PHY链路用于传输该多个码块。
在一种可能的实现方式中,第一设备确定多个码块的编码方式,基于码块的编码方式,将该多个码块映射到至少一条PHY链路上,以使该多个码块通过该至少一条PHY链路进行传输。例如,第一设备基于码块包括的数据单元和类型,确定该多个码块的编码方式为256B/257B编码,基于该256B/257B编码,将该多个码块映射到至少一条PHY链路上。又例如,第一设备基于码块包括的类型、类型指示和码块内容,确定该多个码块的编码方式为256B/257B编码,基于该256B/257B编码,将该多个码块映射到至少一条PHY链路上。示例性地,该至少一条PHY链路称为一个灵活以太网组。也就是说,第一设备基于该编码方式,将多个码块映射到一个灵活以太网组上,该灵活以太网组用于传输该多个码块。
示例性地,对于至少一条PHY链路中的任一条PHY链路,该任一条PHY链路包括至少一个时隙,时隙的速率为5m Gbps,一个时隙用于传输一个码块,m为大于1的整数。其中,5m表示m的5倍,也可表示为5*m。在一种可能的实现方式中,m为4或5,也即该任一条PHY链路包括至少一个速率为20Gbps的时隙,或者该任一条PHY链路包括至少一个速率为25Gbps的时隙。示例性地,当m为4时,也即当该任一条PHY链路包括至少一个速率为20Gbps的时隙时,该任一条PHY链路可以适用于基于IA OIF-FLEXE-02.1/02.2等标准实现的FlexE技术,兼容性较好。
在一种可能的实现方式中,对于任一条PHY链路,该PHY链路包括的时隙的数量基于该PHY链路的传输速率与该时隙的速率的比值确定。例如,该PHY链路包括的时隙的数量以k表示,也即,该PHY链路包括k个时隙,k基于该PHY链路的传输速率与该时隙的速率的比值确定。示例性地,在该PHY链路为800吉比特以太网(gigabit ethernet,GE)PHY的情况下,也即该PHY链路的传输速率为800Gbps,k=800Gbps/25Gbps=32。在该PHY链路为1.6太比特以太网(terabit ethernet,TE)PHY的情况下,也即该PHY链路的传输速率为1.6太比特每秒(terabit per second,Tbps),k=1.6Tbps/25Gbps=64。
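上述时隙数量k的计算可以用如下Python代码示意(假设性示意,函数名为便于说明而假设):

```python
def slot_count(phy_rate_gbps, slot_rate_gbps=25):
    # k为PHY链路的传输速率与时隙的速率的比值
    k, rem = divmod(phy_rate_gbps, slot_rate_gbps)
    assert rem == 0, "传输速率应为时隙速率的整数倍"
    return k
```

例如,在时隙速率为25Gbps的情况下,800GE PHY的k为32,1.6TE PHY的k为64。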
在一种可能的实现方式中,对于任一条PHY链路包括的k个时隙,该k个时隙称为一个循环周期,通过该k个时隙的循环往复形成码块的承载通道。示例性地,对于800GE PHY,该800GE PHY包括的32个时隙作为一个循环周期,通过该32个时隙的循环往复形成码块的承载通道。对于1.6TE PHY,该1.6TE包括的64个时隙作为一个循环周期,通过该64个时隙的循环往复形成码块的承载通道。
在一种可能的实现方式中,该至少一条PHY链路中的任一条PHY链路还用于传输s个开销复帧(overhead multiframe),该s个开销复帧的格式基于码块的编码方式确定。开销复帧的格式请详见后文中的相关说明,此处暂不赘述。该s个开销复帧包括该PHY链路包括的至少一个时隙与至少一个业务流的映射关系,该映射关系用于将多个码块映射到该至少一条PHY链路上。其中,s基于PHY链路的传输速率确定。例如,基于PHY链路的传输速率与开销复帧的数量的对应关系确定s。示例性地,在该PHY链路包括k个时隙的情况下,该s个开销复帧包括k个时隙与至少一个业务流的映射关系。一个时隙对应于一个业务流,多个时隙可以对应于相同的业务流,也可以对应于不同的业务流。也就是说,在s个开销复帧包括k个时隙与至少一个业务流中的全部业务流的映射关系的情况下,该映射关系用于将该全部业务流包括的多个码块映射到该PHY链路上。在s个开销复帧包括k个时隙与至少一个业务流中的部分业务流的映射关系的情况下,该映射关系用于将该部分业务流包括的多个码块映射到该PHY链路上。
示例性地,k被s整除,也即k为s的整数倍。其中,PHY链路的传输速率与开销复帧的数量的对应关系可以是基于PHY链路的传输速率与参考速率的倍数确定的。例如,PHY链路的传输速率为参考速率的w倍,该PHY链路用于传输w个开销复帧。示例性地,参考速率为800Gbps。
在一种可能的实现方式中,PHY链路的传输速率与开销复帧的数量的对应关系包括:在PHY链路为800GE PHY的情况下,s为1;在PHY链路为1.6TE PHY的情况下,s为2。也就是说,对于一条800GE PHY,在时隙的速率为25Gbps的情况下,该800GE PHY用于传输1个开销复帧,该1个开销复帧包括32个时隙与至少一个业务流的映射关系。对于一条1.6TE PHY,在时隙的速率为25Gbps的情况下,该1.6TE PHY用于传输2个开销复帧,该2个开销复帧包括64个时隙与至少一个业务流的映射关系。
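上述开销复帧数量s的确定方式可以用如下Python代码示意(假设性示意,假设s为PHY链路的传输速率相对800Gbps参考速率的倍数):

```python
def multiframe_count(phy_rate_gbps, reference_rate_gbps=800):
    # s为PHY链路的传输速率相对参考速率(800Gbps)的倍数w
    s, rem = divmod(phy_rate_gbps, reference_rate_gbps)
    assert rem == 0 and s >= 1
    return s
```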
示例性地,对于s个开销复帧中的任一个开销复帧,该开销复帧包括k/s个时隙与至少一个业务流的映射关系。例如,对于上述1.6TE PHY,该1.6TE PHY用于传输2个开销复帧,各个开销复帧包括32个时隙与至少一个业务流的映射关系。在一种可能的实现方式中,对于s个开销复帧中的第i个开销复帧,该第i个开销复帧对应的k/s个时隙包括但不限于如下情况一和情况二。其中,i为大于等于0且小于s的整数,或者i为大于0且小于等于s的整数。
情况一,k个时隙中每s个时隙为一个时隙组,s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:各个时隙组中的第i个时隙。
例如,对于上述1.6TE PHY,该1.6TE PHY包括64个时隙,该64个时隙中每2个时隙为一个时隙组,则该1.6TE PHY包括32个时隙组。该1.6TE PHY用于传输2个开销复帧,其中该2个开销复帧中的第i个开销复帧对应的32个时隙包括该32个时隙组中的第i个时隙。例如,i=0或1,则第0个开销复帧对应的32个时隙包括该32个时隙组中的第0个时隙,该第1个开销复帧对应的32个时隙包括该32个时隙组中的第1个时隙。又例如,i=1或2,则第1个开销复帧对应的32个时隙包括该32个时隙组中的第1个时隙,该第2个开销复帧对应的32个时隙包括该32个时隙组中的第2个时隙。示例性地,该1.6TE PHY包括的64个时隙分别为slot0至slot63,则该第1个开销复帧对应的32个时隙包括slot0、slot2、slot4、…、slot62,该第2个开销复帧对应的32个时隙包括slot1、slot3、slot5、…、slot63。
情况二,k个时隙中每k/s个时隙为一个时隙组,s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:第i个时隙组包括的时隙。
例如,对于上述1.6TE PHY,该1.6TE PHY包括64个时隙,该64个时隙中每32个时隙为一个时隙组,则该1.6TE PHY包括2个时隙组。该1.6TE PHY用于传输2个开销复帧,其中该2个开销复帧中的第i个开销复帧对应的32个时隙包括该2个时隙组中的第i个时隙组包括的时隙。例如,i=0或1,则第0个开销复帧对应的32个时隙包括第0个时隙组包括的时隙,该第1个开销复帧对应的32个时隙包括第1个时隙组包括的时隙。又例如,i=1或2,则第1个开销复帧对应的32个时隙包括第1个时隙组包括的时隙,该第2个开销复帧对应的32个时隙包括第2个时隙组包括的时隙。示例性地,该1.6TE PHY包括的64个时隙分别为slot0至slot63,则该第1个开销复帧对应的32个时隙包括slot0、slot1、slot2、…、slot31,该第2个开销复帧对应的32个时隙包括slot32、slot33、slot34、…、slot63。
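上述情况一和情况二中第i个开销复帧对应的时隙可以用如下Python代码示意(假设性示意,i从0开始计数,函数名为假设):

```python
def slots_case1(k, s, i):
    # 情况一: 每s个时隙为一个时隙组,
    # 第i个开销复帧对应各个时隙组中的第i个时隙
    return [g * s + i for g in range(k // s)]

def slots_case2(k, s, i):
    # 情况二: 每k/s个时隙为一个时隙组,
    # 第i个开销复帧对应第i个时隙组包括的时隙
    n = k // s
    return list(range(i * n, (i + 1) * n))
```

对于1.6TE PHY(k=64,s=2),情况一下第0个开销复帧对应slot0、slot2、…、slot62,情况二下第0个开销复帧对应slot0至slot31。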
本申请实施例中,PHY链路包括的k个时隙可以以不同方式组合成分别对应于第i个开销复帧的k/s个时隙,开销复帧与时隙的对应关系较为灵活。
在一种可能的实现方式中,对于s个开销复帧中的任一个开销复帧,该任一个开销复帧包括多个开销帧(overhead frame),对于该多个开销帧中的至少一个开销帧,该至少一个开销帧中的任一个开销帧包括一个时隙与一个业务流的映射关系。示例性地,该多个开销帧中包括闲置的开销帧,或者用于作为保留位域的开销帧,该用于作为保留位域的开销帧可以携带用于扩展协议的信息。由于开销帧的类型较为多样,开销复帧包括的信息较为灵活。在开销复帧包括用于作为保留位域的开销帧的情况下,该开销帧能够携带更为丰富的信息,该开销复帧的扩展性较好。
示例性地,对于上述至少一个开销帧中的任一个开销帧,该任一个开销帧可以包括多个开销块(overhead block,OH block),该多个开销块中的至少一个开销块包括一个时隙与一个业务流的映射关系。
在一种可能的实现方式中,对于任一个开销复帧,该开销复帧包括的开销帧的数量等于该开销复帧对应的时隙的数量。例如,在该开销复帧对应32个时隙的情况下,该开销复帧包括32个开销帧。示例性地,对于该32个开销帧中的任一个开销帧,该开销帧包括2个开销块。对于任一个开销块,该开销块可以为257比特。需要说明的是,该开销复帧包括的开销帧的数量也可以大于开销复帧对应的时隙的数量。例如,该开销复帧包括34个开销帧,其中,32个开销帧为包括时隙与业务流的映射关系的开销帧,其余2个开销帧为用于作为保留位域的开销帧。
基于上述内容可知,第一设备基于码块的编码方式,将多个码块映射到至少一条PHY链路上,以使该多个码块通过该至少一条PHY链路进行传输,该至少一条PHY链路还可以用于传输s个开销复帧,其中,任一个开销复帧可以包括32个开销帧,任一个开销帧可以包括2个开销块。则在一种可能的实现方式中,对于该至少一条PHY链路中的任一条PHY链路,该任一条PHY链路传输码块和开销复帧的传输顺序为:该PHY链路每传输n*k个码块传输s个开销块,该s个开销块中的第r个开销块用于组成该s个开销复帧中的第r个开销复帧,n为正整数,r为大于等于0且小于s的整数,或者r为大于0且小于等于s的整数。
例如,对于800GE PHY,该800GE PHY包括32个时隙,该800GE PHY每传输n*32个码块传输1个开销块,该开销块用于组成该800GE PHY传输的开销复帧。又例如,对于1.6TE PHY,该1.6TE PHY包括64个时隙,该1.6TE PHY每传输n*64个码块传输2个开销块,其中,第1个开销块用于组成第1个开销复帧,第2个开销块用于组成第2个开销复帧。示例性地,n为639或1279。
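上述"每传输n*k个码块传输s个开销块"的传输顺序可以用如下Python代码示意(假设性示意,序列元素仅用字符串标记码块与开销块):

```python
def transmit_sequence(k, s, n, cycles):
    # 每传输n*k个码块后传输s个开销块, 重复cycles次
    seq = []
    for _ in range(cycles):
        seq.extend(["block"] * (n * k))
        seq.extend(["OH%d" % r for r in range(s)])
    return seq
```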
在一种可能的实现方式中,对于任一条PHY链路,该PHY链路包括的k个时隙对应一个时隙表(calendar),该时隙表包括该PHY链路传输的s个开销复帧所包括的至少一个时隙与至少一个业务流的映射关系。例如,对于800GE PHY,该800GE PHY包括的32个时隙对应一个calendar,该calendar包括该800GE PHY传输的1个开销复帧包括的32个时隙与至少一个业务流的映射关系。又例如,对于1.6TE PHY,该1.6TE PHY包括的64个时隙对应一个calendar,该calendar包括该1.6TE PHY传输的2个开销复帧包括的64个时隙与至少一个业务流的映射关系。
示例性地,在一个时隙用于传输一个码块(block)的情况下,800GE PHY包括的32个时隙循环一次传输32个block,该800GE PHY包括的32个时隙对应的calendar称为32块的时隙表(32-block calendar);1.6TE PHY包括的64个时隙循环一次传输64个block,该1.6TE PHY包括的64个时隙对应的calendar称为64块的时隙表(64-block calendar)。也就是说,对于800GE PHY,该800GE PHY对应的32-block calendar每重复n次,该800GE PHY传输1个开销块。例如,如图5所示,对于800GE PHY,在灵活以太的开销块之间重复n次32块的时隙表(n repetitions of 32-block calendar between FlexE overheads blocks),其中,灵活以太的开销块(FlexE overhead,FlexE OH)即为上述开销块。对于1.6TE PHY,该1.6TE PHY对应的64-block calendar每重复n次,该1.6TE PHY传输2个开销块。例如,如图6所示,对于1.6TE PHY,在灵活以太的开销块之间重复n次64块的时隙表(n repetitions of 64-block calendar between FlexE overheads blocks),其中,OH0表示该2个开销块中在前的一个开销块,OH1表示该2个开销块中在后的一个开销块。
示例性地,对于800GE PHY,每2个开销块组成一个开销帧,每32个开销帧组成一个开销复帧。对于1.6TE PHY,每两个OH0组成一个开销帧,每32个开销帧组成2个开销复帧中的前一个开销复帧;每两个OH1组成一个开销帧,每32个开销帧组成2个开销复帧中的后一个开销复帧。
图7是本申请实施例提供的一种开销帧和开销复帧的结构示意图。如图7所示,该开销帧和开销复帧包括但不限于以下16项内容:
(1)客户时隙表A(client calendar A)和客户时隙表B(client calendar B)
示例性地,该开销帧中包括两种时隙表配置信息,分别为client calendar A和client calendar B。该客户时隙表A可以简称为时隙表A(calendar A),该客户时隙表B可以简称为时隙表B(calendar B)。如图7所示的开销帧的第一个块中编号为64*2+1=129至编号为64*2+16=144的比特位字段携带该client calendar A,编号为64*2+17=145至编号为64*2+32=160的比特位字段携带该client calendar B。
对于该开销复帧包括的第一个开销帧,该第一个开销帧包括:在时隙表A为正在使用的时隙表配置的情况下,时隙0携带的客户(client carried calendar A slot 0),和在时隙表B为正在使用的时隙表配置的情况下,时隙0携带的客户(client carried calendar B slot 0)。其中,该client carried calendar A slot 0是指在时隙表A为正在使用的时隙表配置的情况下,slot0与至少一个业务流中的一个业务流的映射关系,也即,映射到slot0传输的业务流。该client carried calendar A slot 0可以简称为对应于时隙表A,slot0携带的客户。该client carried calendar B slot 0是指在时隙表B为正在使用的时隙表配置的情况下,映射到slot0传输的业务流。该client carried calendar B slot 0可以简称为对应于时隙表B,slot0携带的客户。该开销复帧中的其余开销帧与该第一个开销帧原理相同,此处不再赘述。
对应于不同的时隙表配置,时隙0至时隙31与至少一个业务流的映射关系不同,由此通过切换calendar A和calendar B作为正在使用的时隙表配置信息,该多个时隙与至少一个业务流的映射关系能够被改变。从而在至少一个业务流发生变化的情况下,通过切换calendar A和calendar B,能够相应地调整至少一条PHY链路包括的多个时隙与至少一个业务流的映射关系,保证至少一个业务流的传输。例如,在至少一个业务流的带宽增加的情况下,通过切换calendar A和calendar B,能够保证该至少一个业务流包括的多个码块均被映射到至少一条PHY链路包括的多个时隙中进行传输,避免出现流量损失。
示例性地,该calendar A和calendar B还具有如下特点:
特点1,任意时间只有一个时隙表配置信息处于使用状态,也就是说,在任意时间,要么calendar A被使用,要么calendar B被使用。
特点2,在对接FlexE group的发送端(transmit end,TX)和接收端(receive end,RX)两端,会有FlexE OH的时隙协商机制来保障TX与RX使用的时隙表配置信息是一致的。例如,calendar A处于使用状态,那么calendar B则处于备用状态。
特点3,时隙协商的发起端是TX,而RX则处于被动接收状态。假设calendar A处于使用状态,那么TX会将变化的calendar B通过FlexE OH刷新给RX。随后TX会发起时隙表切换请求(calendar switch request,CR),要求将使用的时隙配置信息切换为calendar B。TX收到RX的回应后,触发TX和RX均将使用的时隙配置信息切换为calendar B。需要说明的是,在对接FlexE group的TX和RX两端,首次建立连接后,也会触发一次FlexE OH的时隙协商,以保证两端处于使用状态的时隙配置信息是一致的。
(2)正在使用的时隙表配置(calendar configuration in use,C)
如图7所示的开销帧的第一个块中编号为8的比特位字段、编号为64*1+0=64的比特位字段和编号为64*2+0=128的比特位字段均携带C。示例性地,当正在使用的时隙表配置为calendar A时,C为0,当正在使用的时隙表配置为calendar B时,C为1。
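上述calendar A/calendar B的切换及C字段的取值可以用如下Python代码示意。该示例为简化的假设性实现,省略了真实的FlexE OH编码与CR/CA协商报文:

```python
class CalendarState:
    # TX/RX两端时隙表配置切换的简化示意
    def __init__(self):
        self.in_use = "A"    # 正在使用的时隙表
        self.standby = "B"   # 备用的时隙表

    @property
    def c_bit(self):
        # calendar A在使用时C为0, calendar B在使用时C为1
        return 0 if self.in_use == "A" else 1

    def switch(self):
        # CR/CA协商成功后, 使用中的与备用的时隙表互换
        self.in_use, self.standby = self.standby, self.in_use
```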
(3)时隙表切换请求(calendar switch request,CR)
该CR用于请求切换使用的时隙表配置信息,如图7所示的开销帧的第一个块中编号为64*2+33=161的比特位字段携带CR。
(4)时隙表切换确认(calendar switch acknowledge,CA)
该CA用于确认切换使用的时隙表配置信息,如图7所示的开销帧的第一个块中编号为64*2+34=162的比特位字段携带CA。
(5)开销复帧指示符(overhead multiframe indicator,OMF)
该OMF用于指示开销复帧的边界。如图7所示的开销帧的第一个块中编号为9的比特位字段携带该OMF。其中,在一个复帧里,前16个开销帧的OMF的值为0,后16个开销帧的OMF的值为1,通过0和1之间的转换,能够确定开销复帧的边界。
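上述通过OMF的0和1之间的转换确定开销复帧边界的过程,可以用如下Python代码示意(假设性示意,函数名为假设):

```python
def omf_values():
    # 一个开销复帧内32个开销帧的OMF取值: 前16个为0, 后16个为1
    return [0] * 16 + [1] * 16

def multiframe_starts(omf_stream):
    # OMF由1变为0的位置即为新开销复帧的起点
    return [i for i in range(1, len(omf_stream))
            if omf_stream[i - 1] == 1 and omf_stream[i] == 0]
```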
(6)远程物理故障(remote PHY fault,RPF)
该RPF用于指示远程物理故障。如图7所示的开销帧的第一个块中编号为10的比特位字段携带该RPF。
(7)同步控制(synchronization control,SC)
该SC用于控制开销帧是否包括同步信息通道。如图7所示的开销帧的第一个块中编号为11的比特位字段携带该SC。当SC为0时,该开销帧的第二个块中编号为64*1+0=64至编号为64*1+63=127的比特位字段作为适配层至适配层管理通道,也即该开销帧不包括同步信息通道。当SC为1时,该开销帧的第二个块中编号为64*1+0=64至编号为64*1+63=127的比特位字段作为该同步信息通道。
(8)同步信息通道(synchronization messaging channel)
该同步信息通道用于携带同步信息,该同步信息用于使接收到该同步信息的设备基于该同步信息进行同步。如第(7)点中的说明,当SC为1时,图7示出的开销帧的第二个块中编号为64*1+0=64至编号为64*1+63=127的比特位字段作为该同步信息通道。
(9)灵活以太图(FlexE map)
示例性地,用于传输该至少一个业务流包括的码块的至少一条PHY链路称为一个FlexE group。对于任一条PHY链路,该任一条PHY链路包括s个灵活以太网实例(FlexE instance),一个FlexE instance包括对应于一个开销复帧的k/s个时隙。例如,对于800GE PHY,该800GE PHY包括1个FlexE instance,对于1.6TE PHY,该1.6TE PHY包括2个FlexE instance,其中,第1个FlexE instance包括对应于第1个开销复帧的32个时隙,第2个FlexE instance包括对应于第2个开销复帧的32个时隙。
该FlexE map用于控制哪些FlexE instance是此FlexE group的成员(control of which FlexE instances are members of this group)。如图7所示的开销帧的第一个块中编号为64*1+1=65至编号为64*1+8=72的比特位字段携带该FlexE map。
示例性地,该FlexE map包括多条PHY链路信息,FlexE map的每个比特位对应一条PHY链路,FlexE map的每个比特位的值用于表示该比特位对应的PHY链路是否在该FlexE group中。例如,如果比特位的值为第一值,例如该第一值为1,则认为该比特位对应的PHY链路属于该FlexE group。如果比特位的值为第二值,例如该第二值为0,则认为该比特位对应的PHY链路不属于该FlexE group。
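上述按比特位解析FlexE map的过程可以用如下Python代码示意(假设性示意,此处假设第一值为1、第二值为0):

```python
def members_of_group(flexe_map_bits, first_value=1):
    # 值为第一值的比特位对应的PHY链路属于该FlexE group
    return [i for i, bit in enumerate(flexe_map_bits)
            if bit == first_value]
```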
(10)灵活以太实例号(FlexE instance number)
该FlexE instance number用于标识该FlexE group中的FlexE instance(identity of this FlexE instance within the group)。如图7所示的开销帧的第一个块中编号为64*1+9=73至编号为64*1+16=80的比特位字段携带该FlexE instance number。示例性地,该FlexE group中的各个FlexE instance均具有唯一的标识。
(11)灵活以太网组号(FlexE group number)
该FlexE group number用于标识FlexE group。如图7所示的开销帧的第一个块中编号为12至编号为31的比特位字段携带该FlexE group number。
(12)同步头(synchronization head,SH)
该SH为FlexE开销帧的帧头。
(13)未知比特(X)
图7所示的开销帧的第一个块中编号为3的比特位字段和第二个块中的SH下的字段携带该X。示例性地,X的取值与节管理通道-1中的内容相对应。例如,节管理通道-1中的内容为控制类型的内容,X为0,节管理通道-1中的内容为数据类型的内容,X为1。又例如,节管理通道-1中的内容为控制类型的内容,X为1,节管理通道-1中的内容为数据类型的内容,X为0。
(14)管理通道(management channel)
示例性地,管理通道包括用于管理至少一条PHY链路的管理信息。例如,该开销帧包括两个节管理通道(management channel-section),分别为节管理通道-1和节管理通道-2,两个节管理通道均为8字节(byte),节管理通道-1和节管理通道-2均用于节到节的管理(section-to-section management)。如图7所示的开销帧的第一个块中编号为64*3+0=192至编号为64*3+63=255的比特位字段携带节管理通道-1,第二个块中编号为0至编号为63的比特位字段携带节管理通道-2。
示例性地,当SC为0时,该开销帧包括适配层至适配层管理通道(management channel-shim to shim),该适配层至适配层管理通道用于携带适配层至适配层管理信息。当SC为0时,图7示出的开销帧的第二个块中编号为64*1+0=64至编号为64*1+63=127的比特位字段作为适配层至适配层管理通道。
在一种可能的实现方式中,该管理通道还用于携带其他报文。例如,该管理通道还用于携带链路层发现协议(link layer discovery protocol,LLDP)报文或管理报文。
(15)循环冗余校验码-16(cyclic redundancy check,CRC-16)
该CRC-16用于对开销块的内容进行CRC保护。例如,该CRC-16用于校验该CRC-16所在比特位字段之前且除前9比特和第32比特至第35比特的内容以外的内容。如图7所示的开销帧的第一个块中编号为64*2+48=176至编号为64*2+63=191的比特位字段携带该CRC-16。
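作为理解CRC保护的一个通用示意,以下Python代码给出比特级CRC-16的一种计算方式。需要说明的是,本申请并未给出CRC-16的具体生成多项式,此处假设采用0x1021仅作演示,并非开销帧实际使用的校验算法:

```python
def crc16(bits, poly=0x1021, init=0xFFFF):
    # 比特级CRC-16示意: 将输入比特依次移入16比特寄存器,
    # 寄存器最高位移出为1时与生成多项式异或
    reg = init
    for bit in bits:
        msb = (reg >> 15) & 1
        reg = ((reg << 1) & 0xFFFF) | bit
        if msb:
            reg ^= poly
    return reg
```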
(16)预留(reserved)字段
如图7所示的开销帧的第一个块中编号为64*1+17=81至编号为64*1+63=127的比特位字段、编号为64*2+35=163至编号为64*2+47=175的比特位字段均为预留字段。示例性地,该预留字段用于作为携带信息的扩展字段。
通过将至少一个业务流包括的多个码块映射到至少一条PHY链路上,该多个码块能够通过该至少一条PHY链路包括的多个时隙进行传输。示例性地,对于任一个业务流,用于传输该业务流的码块的时隙的速率的总和等于该业务流的速率。例如,图8是本申请实施例提供的一种PHY链路传输业务流包括的码块的示意图。如图8所示,第一设备获取两个业务流,一个业务流为25吉(G)业务流,也即该业务流的速率为25Gbps,另一个业务流为75G业务流,也即该业务流的速率为75Gbps。第一设备将两个业务流包括的码块映射到一个由两条800GE PHY组成的FlexE group上,两条800GE PHY的时隙的速率均为25Gbps。则25G业务流对应1个时隙,75G业务流对应3个时隙。示例性地,该25G业务流包括的码块被映射在800GE PHY1的slot4中传输,该75G业务流包括的码块被映射在800GE PHY1的slot13、800GE PHY1的slot31和800GE PHY2的slot3中传输。图8示出的黑色块为开销块。
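上述"用于传输业务流的时隙的速率总和等于该业务流的速率"可以用如下Python代码示意(假设性示意,函数名为假设):

```python
def slots_needed(stream_rate_gbps, slot_rate_gbps=25):
    # 承载一个业务流所需的时隙数量:
    # 时隙速率总和等于业务流速率
    n, rem = divmod(stream_rate_gbps, slot_rate_gbps)
    assert rem == 0
    return n
```

例如,在时隙速率为25Gbps的情况下,25G业务流对应1个时隙,75G业务流对应3个时隙。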
在一种可能的实现方式中,该至少一个业务流为至少一条PHY链路传输的业务流。将多个码块映射到至少一条PHY链路上,包括:获取至少一个业务流对应的开销复帧;修改该开销复帧,基于修改后的开销复帧,将多个码块映射到至少一条PHY链路上。通过修改开销复帧,至少一条PHY链路包括的时隙与至少一个业务流的映射关系能够被修改,从而业务流包括的码块能够被切换到不同的PHY链路上传输。例如,图9是本申请实施例提供的一种映射多个码块的过程示意图。如图9所示,第一设备接收到由800GE PHY1、800GE PHY2、800GE PHY3和800GE PHY4组成的FlexE group传输的100G业务流,其中,800GE PHY1、800GE PHY2、800GE PHY3和800GE PHY4均包括32个时隙,该100G业务流包括的码块A、码块B、码块C和码块D在800GE PHY1上传输。第一设备获取该100G业务流对应的开销复帧,修改该开销复帧,基于修改后的开销复帧,将该码块A、码块B、码块C和码块D映射到由800GE PHY5、800GE PHY6、800GE PHY7和800GE PHY8组成的FlexE group上。具体地,800GE PHY5、800GE PHY6、800GE PHY7和800GE PHY8均包括32个时隙,第一设备将该码块A、码块B、码块C和码块D映射到800GE PHY7上,从而该码块A、码块B、码块C和码块D能够在800GE PHY7上传输。
需要说明的是,传输业务流的码块的PHY链路可以相同或不同,同一个业务流包括的码块可以映射到相同或不同的PHY链路上,至少一条PHY链路包括的多个时隙与至少一个业务流的映射关系较为灵活。例如,第一设备接收到由800GE PHY1、800GE PHY2、800GE PHY3和800GE PHY4组成的FlexE group传输的100G业务流,将该100G业务流包括的码块映射到由1.6TE PHY1和1.6TE PHY2组成的FlexE group上。又例如,该100G业务流包括的码块A、码块B、码块C和码块D在800GE PHY1上传输,第一设备将码块A、码块B和码块C映射到800GE PHY7上,将码块D映射到800GE PHY8上。又例如,该100G业务流包括的码块A、码块B和码块C在800GE PHY1上传输,码块D在800GE PHY2上传输,第一设备将码块A、码块B和码块C映射到800GE PHY7上,将码块D映射到800GE PHY8上。又例如,该100G业务流包括的码块A、码块B和码块C在800GE PHY1上传输,码块D在800GE PHY2上传输,第一设备将码块A、码块B、码块C和码块D均映射到800GE PHY7上。
示例性地,通过修改开销复帧,第一设备将不同业务流包括的码块映射到同一条PHY链路上。例如,图10是本申请实施例提供的另一种映射多个码块的过程示意图。如图10所示,第一设备接收到由800GE PHY1、800GE PHY2、800GE PHY3和800GE PHY4组成的FlexE group传输的一条100G业务流和一条75G业务流。其中,800GE PHY1、800GE PHY2、800GE PHY3和800GE PHY4均包括32个时隙,该100G业务流包括的码块A、码块B和码块D在800GE PHY1上传输,该100G业务流包括的码块C在800GE PHY2上传输,该75G业务流包括的码块X、码块Y和码块Z在800GE PHY3上传输。第一设备获取该100G业务流对应的开销复帧和该75G业务流对应的开销复帧,修改该100G业务流对应的开销复帧和75G业务流对应的开销复帧,基于修改后的开销复帧,将该码块A、码块B、码块C、码块D、码块X、码块Y和码块Z映射到由800GE PHY5、800GE PHY6、800GE PHY7和800GE PHY8组成的FlexE group上。具体地,800GE PHY5、800GE PHY6、800GE PHY7和800GE PHY8均包括32个时隙,第一设备将码块A、码块B、码块C、码块D、码块X、码块Y和码块Z均映射到800GE PHY8上。
在一种可能的实现方式中,至少一个业务流包括的多个码块包括类型为数据的码块和类型为空闲的码块,将多个码块映射到至少一条PHY链路上,包括:将多个码块中类型为空闲的至少一个码块替换为包括操作维护管理(operation administration and maintenance,OAM)信息的码块,该OAM信息用于管理多个码块中类型为数据的码块;将替换后的多个码块映射到至少一条PHY链路上。示例性地,该类型为数据的码块包括上述仅包括数据的码块和包括控制字的码块。通过将类型为空闲的至少一个码块替换为包括OAM信息的码块,能够对类型为数据的码块进行更细化的操作。例如,基于OAM信息,将一个业务流划分为多个子流,对多个子流包括的码块分别操作,或者基于OAM信息将开销帧进一步定义为多个子帧,基于子帧对时隙进行二次复用。例如,基于子帧将一个时隙划分为多个子时隙,基于多个子时隙传输至少一个业务流包括的多个码块。
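上述将类型为空闲的码块替换为包括OAM信息的码块的操作,可以用如下Python代码示意(假设性示意,码块以元组表示,并非真实的257比特编码):

```python
def replace_idle_with_oam(blocks, oam_info):
    # blocks为(类型, 内容)元组列表, 类型取"data"或"idle";
    # 将类型为空闲的码块替换为携带OAM信息的码块
    return [("oam", oam_info) if t == "idle" else (t, p)
            for t, p in blocks]
```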
在一种可能的实现方式中,本申请实施例提供的数据处理方法还能够与切片分组网(slicing packet network,SPN)技术结合。第一设备具有切片分组网络通道开销处理器(SPN channel overhead processor,SCO)的功能。例如,图11是本申请实施例提供的又一种映射多个码块的过程示意图。如图11所示,第一设备接收到一个10G业务流和两个25G业务流,该10G业务流和25G业务流均包括多个码块。其中,码块包括数据单元和单比特长度的类型,或者码块包括类型、类型指示和码块内容。在SPN通道层(SPN channel layer),第一设备在不同的SPN通道(SPN channel)传输10G业务流和25G业务流包括的码块。其中,D表示类型为数据的码块,I表示类型为空闲的码块,O表示包括OAM管理信息的码块。基于码块的编码方式,第一设备将切片进行交换,例如,如图11所示,基于码块的编码方式,第一设备交换传输两个25G业务流包括的码块的SPN通道。第一设备基于其中一条25G业务流包括的码块得到SPN客户(SPN client),将该25G业务流包括的码块传输给一个以太网(ethernet,ETH)接口,将另一条25G业务流包括的码块和10G业务流包括的码块映射到至少一条PHY链路上,以在至少一条PHY链路上传输该10G业务流包括的多个码块和该25G业务流包括的多个码块。
在一种可能的实现方式中,参见图12,该方法还包括:S203,第一设备通过至少一条PHY链路向第二设备传输该多个码块。
示例性地,第一设备和第二设备为一个FlexE group两端的设备,第一设备通过该FlexE group包括的至少一条PHY链路向第二设备传输该多个码块。
本申请实施例提供的方法能够实现基于包括数据单元和类型的码块来进行的FlexE,或者实现基于包括类型、类型指示和码块内容的码块来进行的FlexE。在PHY链路包括的时隙的速率为5m Gbps的情况下,该至少一个业务流包括的码块能够以较高的速率进行传输。再有,通过修改开销复帧,能够对PHY链路包括的至少一个时隙与至少一个业务流的映射关系进行控制,使得业务流包括的码块可以在指定的时隙上进行传输,业务流的码块的传输载体更为灵活。另外,通过将空闲码块替换为包括OAM管理信息的码块,该方法能够实现对类型为数据的码块更精细的管理,码块的传输方式更加灵活。
本申请实施例提供了一种数据处理方法,以该方法应用于第二设备为例,参见图12,该方法包括但不限于如下S204和S205。
S204,第二设备获取由至少一条PHY链路传输的多个码块。
在本申请示例性实施例中,该多个码块包括类型为开销的码块和类型为数据的码块,类型为数据的码块包括数据单元和类型,或者,类型为数据的码块包括类型、类型指示和码块内容。
在一种可能的实现方式中,第一设备和第二设备为一个FlexE group两端的设备,也即第二设备通过该至少一条PHY接收第一设备传输的多个码块。示例性地,类型为数据的码块为256B/257B编码的码块,也即该码块为257比特;类型为开销的码块为上述图2实施例中所述的开销块,该开销块为257比特。
S205,第二设备基于类型为数据的码块的编码方式和类型为开销的码块将类型为数据的码块进行解映射,得到至少一个业务流,至少一个业务流包括类型为数据的码块。
在一种可能的实现方式中,第二设备基于该256B/257B编码的编码方式和开销块将类型为数据的码块进行解映射,得到至少一个业务流。示例性地,第二设备基于接收到的开销块恢复开销帧,基于开销帧恢复开销复帧。示例性地,第二设备基于开销复帧包括的至少一个时隙和至少一个业务流的映射关系,将类型为数据的码块从至少一条PHY链路的时隙中解映射,得到至少一个业务流。
本申请实施例提供的方法能够实现基于包括数据单元和类型的码块来进行的FlexE,或者实现基于包括类型、类型指示和码块内容的码块来进行的FlexE。
图13是本申请实施例提供的一种数据处理装置的结构示意图,该装置应用于第一设备,该第一设备为上述图2和图12所示的第一设备。基于图13所示的如下多个模块,该图13所示的数据处理装置能够执行第一设备所执行的全部或部分操作。应理解到,该装置可以包括比所示模块更多的附加模块或者省略其中所示的一部分模块,本申请实施例对此并不进行限制。如图13所示,该装置包括:
获取模块1301,用于获取至少一个业务流,至少一个业务流中的任一个业务流包括多个码块,码块包括数据单元和类型,或者,码块包括类型、类型指示和码块内容;
映射模块1302,用于基于码块的编码方式,将该多个码块映射到至少一条物理层PHY链路上,至少一条PHY链路用于传输该多个码块。
在一种可能的实现方式中,至少一条PHY链路中的任一条PHY链路包括至少一个时隙,至少一条PHY链路中的任一条PHY链路还用于传输s个开销复帧,s个开销复帧的格式基于该编码方式确定,s个开销复帧包括至少一个时隙与至少一个业务流的映射关系,该映射关系用于将多个码块映射到PHY链路上,s基于PHY链路的传输速率确定。
在一种可能的实现方式中,时隙的速率为5m吉比特每秒,一个时隙用于传输一个码块,m为大于1的整数。
在一种可能的实现方式中,s个开销复帧中的任一个开销复帧包括多个开销帧,对于多个开销帧中的至少一个开销帧,该至少一个开销帧中的任一个开销帧包括一个时隙与一个业务流的映射关系。
在一种可能的实现方式中,对于该至少一个开销帧中的任一个开销帧,该任一个开销帧包括多个开销块,多个开销块中的至少一个开销块包括一个时隙与一个业务流的映射关系。
在一种可能的实现方式中,任一个开销复帧包括32个开销帧,一个开销帧包括2个开销块。
在一种可能的实现方式中,任一条PHY链路包括k个时隙,对于s个开销复帧中的任一个开销复帧,该开销复帧包括k/s个时隙与至少一个业务流的映射关系,k基于PHY链路的传输速率与时隙的速率的比值确定,且k为s的整数倍。
在一种可能的实现方式中,k个时隙中每s个时隙为一个时隙组,s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:各个时隙组中的第i个时隙,i为大于等于0且小于s的整数,或者i为大于0且小于等于s的整数。
在一种可能的实现方式中,k个时隙中每k/s个时隙为一个时隙组,s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:第i个时隙组包括的时隙,i为大于等于0且小于s的整数,或者i为大于0且小于等于s的整数。
在一种可能的实现方式中,任一条PHY链路用于每传输n*k个码块传输s个开销块,s个开销块中的第r个开销块用于组成s个开销复帧中的第r个开销复帧,n为正整数,r为大于等于0且小于s的整数,或者r为大于0且小于等于s的整数。
在一种可能的实现方式中,映射模块1302,用于获取至少一个业务流对应的开销复帧;修改开销复帧,基于修改后的开销复帧,将多个码块映射到至少一条PHY链路上。
在一种可能的实现方式中,多个码块包括类型为数据的码块和类型为空闲的码块,映射模块1302,用于将多个码块中类型为空闲的至少一个码块替换为包括操作维护管理OAM信息的码块,OAM信息用于管理多个码块中类型为数据的码块;将替换后的多个码块映射到至少一条PHY链路上。
在一种可能的实现方式中,s个开销复帧中的任一个开销复帧还包括至少两个管理通道,管理通道包括用于管理至少一条PHY链路的管理信息。
在一种可能的实现方式中,m为4或5。
在一种可能的实现方式中,开销块和码块均为257比特。
在一种可能的实现方式中,n为639或1279。
在一种可能的实现方式中,该装置还包括:发送模块1303,用于通过至少一条PHY链路向第二设备传输该多个码块。
本申请实施例提供的装置能够实现基于包括数据单元和类型的码块来进行的FlexE,或者实现基于包括类型、类型指示和码块内容的码块来进行的FlexE。在PHY链路包括的时隙的速率为5m Gbps的情况下,该至少一个业务流包括的码块能够以较高的速率进行传输。再有,通过修改开销复帧,能够对PHY链路包括的至少一个时隙与至少一个业务流的映射关系进行控制,使得业务流包括的码块可以在指定的时隙上进行传输,业务流的码块的传输载体更为灵活。另外,通过将空闲码块替换为包括OAM管理信息的码块,该装置能够实现对类型为数据的码块更精细的管理,码块的传输方式更加灵活。
图14是本申请实施例提供的一种数据处理装置的结构示意图,该装置应用于第二设备,该第二设备为上述图12所示的第二设备。基于图14所示的如下多个模块,该图14所示的数据处理装置能够执行第二设备所执行的全部或部分操作。应理解到,该装置可以包括比所示模块更多的附加模块或者省略其中所示的一部分模块,本申请实施例对此并不进行限制。如图14所示,该装置包括:
获取模块1401,用于获取由至少一条PHY链路传输的多个码块,多个码块包括类型为开销的码块和类型为数据的码块,类型为数据的码块包括数据单元和类型,或者,类型为数据的码块包括类型、类型指示和码块内容;
解映射模块1402,用于基于类型为数据的码块的编码方式和类型为开销的码块将类型为数据的码块进行解映射,得到至少一个业务流,至少一个业务流包括类型为数据的码块。
本申请实施例提供的装置能够实现基于包括数据单元和类型的码块来进行的FlexE,或者实现基于包括类型、类型指示和码块内容的码块来进行的FlexE。
应理解的是,上述图13和图14提供的装置在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
上述实施例中的设备的具体硬件结构如图15所示的网络设备1500,包括收发器1501、处理器1502和存储器1503。收发器1501、处理器1502和存储器1503之间通过总线1504连接。其中,收发器1501用于获取至少一个业务流和传输多个码块,存储器1503用于存放指令或程序代码,处理器1502用于调用存储器1503中的指令或程序代码使得设备执行上述方法实施例中第一设备或第二设备的相关处理步骤。在具体实施例中,本申请实施例的网络设备1500可对应于上述各个方法实施例中的第一设备或第二设备,网络设备1500中的处理器1502读取存储器1503中的指令或程序代码,使图15所示的网络设备1500能够执行第一设备或第二设备所执行的全部或部分操作。
网络设备1500还可以对应于上述图13和图14所示的装置,例如,图13和图14中所涉及的获取模块1301、发送模块1303和获取模块1401相当于收发器1501,映射模块1302和解映射模块1402相当于处理器1502。
参见图16,图16示出了本申请一个示例性实施例提供的网络设备2000的结构示意图。图16所示的网络设备2000用于执行上述图2和图12所示的数据处理方法所涉及的操作。该网络设备2000例如是交换机、路由器等。
如图16所示,网络设备2000包括至少一个处理器2001、存储器2003以及至少一个通信接口2004。
处理器2001例如是通用中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processor,DSP)、网络处理器(network processor,NP)、图形处理器(graphics processing unit,GPU)、神经网络处理器(neural-network processing units,NPU)、数据处理单元(data processing unit,DPU)、微处理器或者一个或多个用于实现本申请方案的集成电路。例如,处理器2001包括专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。PLD例如是复杂可编程逻辑器件(complex programmable logic device,CPLD)、现场可编程逻辑门阵列(field-programmable gate array,FPGA)、通用阵列逻辑(generic array logic,GAL)或其任意组合。其可以实现或执行结合本发明实施例公开内容所描述的各种逻辑方框、模块和电路。所述处理器也可以是实现计算功能的组合,例如包括一个或多个微处理器组合,DSP和微处理器的组合等等。
可选的,网络设备2000还包括总线。总线用于在网络设备2000的各组件之间传送信息。总线可以是外设部件互连标准(peripheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图16中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。图16中网络设备2000的各组件之间除了采用总线连接,还可采用其他方式连接,本发明实施例不对各组件的连接方式进行限定。
存储器2003例如是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,又如是随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,又如是电可擦可编程只读存储器(electrically erasable programmable read-only Memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备,或者是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器2003例如是独立存在,并通过总线与处理器2001相连接。存储器2003也可以和处理器2001集成在一起。
通信接口2004使用任何收发器一类的装置,用于与其它设备或通信网络通信,通信网络可以为以太网、无线接入网(RAN)或无线局域网(wireless local area networks,WLAN)等。通信接口2004可以包括有线通信接口,还可以包括无线通信接口。具体的,通信接口2004可以为以太(ethernet)接口、快速以太(fast ethernet,FE)接口、千兆以太(gigabit ethernet,GE)接口,异步传输模式(asynchronous transfer mode,ATM)接口,无线局域网(wireless local area networks,WLAN)接口,蜂窝网络通信接口或其组合。以太网接口可以是光接口,电接口或其组合。在本申请实施例中,通信接口2004可以用于网络设备2000与其他设备进行通信。
在具体实现中,作为一种实施例,处理器2001可以包括一个或多个CPU,如图16中所示的CPU0和CPU1。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
在具体实现中,作为一种实施例,网络设备2000可以包括多个处理器,如图16中所示的处理器2001和处理器2005。这些处理器中的每一个可以是一个单核处理器(single-CPU),也可以是一个多核处理器(multi-CPU)。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(如计算机程序指令)的处理核。
在具体实现中,作为一种实施例,网络设备2000还可以包括输出设备和输入设备。输出设备和处理器2001通信,可以以多种方式来显示信息。例如,输出设备可以是液晶显示器(liquid crystal display,LCD)、发光二级管(light emitting diode,LED)显示设备、阴极射线管(cathode ray tube,CRT)显示设备或投影仪(projector)等。输入设备和处理器2001通信,可以以多种方式接收用户的输入。例如,输入设备可以是鼠标、键盘、触摸屏设备或传感设备等。
在一些实施例中,存储器2003用于存储执行本申请方案的程序代码2010,处理器2001可以执行存储器2003中存储的程序代码2010。也即是,网络设备2000可以通过处理器2001以及存储器2003中的程序代码2010,来实现方法实施例提供的数据处理方法。程序代码2010中可以包括一个或多个软件模块。可选地,处理器2001自身也可以存储执行本申请方案的程序代码或指令。
在具体实施例中,本申请实施例的网络设备2000可对应于上述各个方法实施例中的第一设备或第二设备,网络设备2000中的处理器2001读取存储器2003中的程序代码2010或处理器2001自身存储的程序代码或指令,使图16所示的网络设备2000能够执行第一设备或第二设备所执行的全部或部分操作。
网络设备2000还可以对应于上述图13和图14所示的装置,图13和图14所示的装置中的每个功能模块采用网络设备2000的软件实现。换句话说,图13和图14所示的装置包括的功能模块为网络设备2000的处理器2001读取存储器2003中存储的程序代码2010后生成的。例如,图13和图14中所涉及的获取模块1301、发送模块1303和获取模块1401相当于通信接口2004,映射模块1302和解映射模块1402相当于处理器2001和/或处理器2005。
其中,图2和图12所示的方法的各步骤通过网络设备2000的处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤,为避免重复,这里不再详细描述。
基于上述图15及图16所示的网络设备,本申请实施例还提供了一种网络系统,该系统包括:第一设备及第二设备。可选的,第一设备为图15所示的网络设备1500或图16所示的网络设备2000,第二设备为图15所示的网络设备1500或图16所示的网络设备2000。
第一设备及第二设备所执行的方法可参见上述图2和图12所示实施例的相关描述,此处不再加以赘述。
应理解的是,上述处理器可以是中央处理器(central processing unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。值得说明的是,处理器可以是支持进阶精简指令集机器(advanced RISC machines,ARM)架构的处理器。
进一步地,在一种可选的实施例中,上述存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器还可以包括非易失性随机存取存储器。例如,存储器还可以存储设备类型的信息。
该存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用。例如,静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic random access memory,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
还提供了一种计算机可读存储介质,存储介质中存储有至少一条程序指令或代码,所述程序指令或代码由处理器加载并执行时以使计算机实现图2或图12中的数据处理方法。
本申请提供了一种计算机程序(产品),当计算机程序被计算机执行时,可以使得处理器或计算机执行上述方法实施例中对应的各个步骤和/或流程。
提供了一种芯片,包括处理器,用于从存储器中调用并运行所述存储器中存储的指令,使得安装有所述芯片的通信设备执行上述各方面中的方法。
示例性地,该芯片还包括:输入接口、输出接口和所述存储器,所述输入接口、输出接口、所述处理器以及所述存储器之间通过内部连接通路相连。
还提供了一种设备,该设备包括上述芯片。可选地,该设备为网络设备。示例性地,该设备为路由器或交换机或服务器。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
以上所述的具体实施方式,对本申请的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本申请的具体实施方式而已,并不用于限定本申请的保护范围,凡在本申请的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本申请的保护范围之内。
本领域普通技术人员可以意识到,结合本文中所公开的实施例中描述的各方法步骤和模块,能够以软件、硬件、固件或者其任意组合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各实施例的步骤及组成。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。本领域普通技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,该程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机程序指令。作为示例,本申请实施例的方法可以在机器可执行指令的上下文中被描述,机器可执行指令诸如包括在目标的真实或者虚拟处理器上的器件中执行的程序模块中。一般而言,程序模块包括例程、程序、库、对象、类、组件、数据结构等,其执行特定的任务或者实现特定的抽象数据结构。在各实施例中,程序模块的功能可以在所描述的程序模块之间合并或者分割。用于程序模块的机器可执行指令可以在本地或者分布式设备内执行。在分布式设备中,程序模块可以位于本地和远程存储介质二者中。
用于实现本申请实施例的方法的计算机程序代码可以用一种或多种编程语言编写。这些计算机程序代码可以提供给通用计算机、专用计算机或其他可编程的数据处理装置的处理器,使得程序代码在被计算机或其他可编程的数据处理装置执行的时候,引起在流程图和/或框图中规定的功能/操作被实施。程序代码可以完全在计算机上、部分在计算机上、作为独立的软件包、部分在计算机上且部分在远程计算机上或完全在远程计算机或服务器上执行。
在本申请实施例的上下文中,计算机程序代码或者相关数据可以由任意适当载体承载,以使得设备、装置或者处理器能够执行上文描述的各种处理和操作。载体的示例包括信号、计算机可读介质等等。
信号的示例可以包括电、光、无线电、声音或其它形式的传播信号,诸如载波、红外信号等。
机器可读介质可以是包含或存储用于或有关于指令执行系统、装置或设备的程序的任何有形介质。机器可读介质可以是机器可读信号介质或机器可读存储介质。机器可读介质可以包括但不限于电子的、磁的、光学的、电磁的、红外的或半导体系统、装置或设备,或其任意合适的组合。机器可读存储介质的更详细示例包括带有一根或多根导线的电气连接、便携式计算机磁盘、硬盘、随机存储存取器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或闪存)、光存储设备、磁存储设备,或其任意合适的组合。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、设备和模块的具体工作过程,可以参见前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、设备和方法,可以通过其它的方式实现。例如,以上所描述的设备实施例仅仅是示意性的,例如,该模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、设备或模块的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。
该作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本申请实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以是两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
该集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例中方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请中术语“第一”、“第二”等字样用于对作用和功能基本相同的相同项或相似项进行区分,应理解,“第一”、“第二”、“第n”之间不具有逻辑或时序上的依赖关系,也不对数量和执行顺序进行限定。还应理解,尽管以下描述使用术语第一、第二等来描述各种元素,但这些元素不应受术语的限制。这些术语只是用于将一元素与另一元素区别分开。例如,在不脱离各种所述示例的范围的情况下,第一设备可以被称为第二设备,并且类似地,第二设备可以被称为第一设备。第一设备和第二设备都可以是任一类型的网络设备,并且在某些情况下,可以是单独且不同的网络设备。
还应理解,在本申请的各个实施例中,各个过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本申请中术语“至少一个”的含义是指一个或多个,本申请中术语“多个”的含义是指两个或两个以上,例如,多个第二报文是指两个或两个以上的第二报文。本文中术语“系统”和“网络”经常可互换使用。
应理解,在本文中对各种所述示例的描述中所使用的术语只是为了描述特定示例,而并非旨在进行限制。如在对各种所述示例的描述和所附权利要求书中所使用的那样,单数形式“一个(“a”,“an”)”和“该”旨在也包括复数形式,除非上下文另外明确地指示。
还应理解,术语"包括"(也称"includes"、"including"、"comprises"和/或"comprising")当在本说明书中使用时指定存在所陈述的特征、整数、步骤、操作、元素、和/或部件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元素、部件、和/或其分组。
还应理解,术语“若”和“如果”可被解释为意指“当...时”(“when”或“upon”)或“响应于确定”或“响应于检测到”。类似地,根据上下文,短语“若确定...”或“若检测到[所陈述的条件或事件]”可被解释为意指“在确定...时”或“响应于确定...”或“在检测到[所陈述的条件或事件]时”或“响应于检测到[所陈述的条件或事件]”。
应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
还应理解,说明书通篇中提到的“一个实施例”、“一实施例”、“一种可能的实现方式”意味着与实施例或实现方式有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”、“一种可能的实现方式”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。

Claims (40)

  1. 一种数据处理方法,其特征在于,所述方法包括:
    第一设备获取至少一个业务流,所述至少一个业务流中的任一个业务流包括多个码块,所述码块包括数据单元和类型,或者,所述码块包括类型、类型指示和码块内容;
    所述第一设备基于所述码块的编码方式,将所述多个码块映射到至少一条物理层PHY链路上,所述至少一条PHY链路用于传输所述多个码块。
  2. 根据权利要求1所述的方法,其特征在于,所述至少一条PHY链路中的任一条PHY链路包括至少一个时隙,所述至少一条PHY链路中的任一条PHY链路还用于传输s个开销复帧,所述s个开销复帧的格式基于所述编码方式确定,所述s个开销复帧包括所述至少一个时隙与所述至少一个业务流的映射关系,所述映射关系用于将所述多个码块映射到所述PHY链路上,所述s基于所述PHY链路的传输速率确定。
  3. 根据权利要求2所述的方法,其特征在于,所述时隙的速率为5m吉比特每秒,一个时隙用于传输一个码块,所述m为大于1的整数。
  4. 根据权利要求2或3所述的方法,其特征在于,所述s个开销复帧中的任一个开销复帧包括多个开销帧,对于所述多个开销帧中的至少一个开销帧,所述至少一个开销帧中的任一个开销帧包括一个时隙与一个业务流的映射关系。
  5. 根据权利要求4所述的方法,其特征在于,对于所述至少一个开销帧中的任一个开销帧,所述任一个开销帧包括多个开销块,所述多个开销块中的至少一个开销块包括所述一个时隙与一个业务流的映射关系。
  6. 根据权利要求5所述的方法,其特征在于,所述任一个开销复帧包括32个开销帧,一个开销帧包括2个开销块。
  7. 根据权利要求2-6任一所述的方法,其特征在于,所述任一条PHY链路包括k个时隙,对于所述s个开销复帧中的任一个开销复帧,所述开销复帧包括k/s个时隙与所述至少一个业务流的映射关系,所述k基于所述PHY链路的传输速率与所述时隙的速率的比值确定,且所述k为所述s的整数倍。
  8. 根据权利要求7所述的方法,其特征在于,所述k个时隙中每s个时隙为一个时隙组,所述s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:各个时隙组中的第i个时隙,所述i为大于等于0且小于所述s的整数,或者所述i为大于0且小于等于所述s的整数。
  9. 根据权利要求7所述的方法,其特征在于,所述k个时隙中每k/s个时隙为一个时隙组,所述s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:第i个时隙组包括的时隙,所述i为大于等于0且小于所述s的整数,或者所述i为大于0且小于等于所述s的整数。
  10. 根据权利要求7-9任一所述的方法,其特征在于,所述任一条PHY链路用于每传输n*k个码块传输s个开销块,所述s个开销块中的第r个开销块用于组成所述s个开销复帧中的第r个开销复帧,所述n为正整数,所述r为大于等于0且小于所述s的整数,或者所述r为大于0且小于等于所述s的整数。
  11. 根据权利要求2-10任一所述的方法,其特征在于,所述将所述多个码块映射到至少一条物理层PHY链路上,包括:
    获取所述至少一个业务流对应的开销复帧;
    修改所述开销复帧,基于修改后的开销复帧,将所述多个码块映射到所述至少一条PHY链路上。
  12. 根据权利要求1-10任一所述的方法,其特征在于,所述多个码块包括类型为数据的码块和类型为空闲的码块,所述将所述多个码块映射到至少一条物理层PHY链路上,包括:
    将所述多个码块中类型为空闲的至少一个码块替换为包括操作维护管理OAM信息的码块,所述OAM信息用于管理所述多个码块中类型为数据的码块;
    将替换后的多个码块映射到至少一条PHY链路上。
  13. 根据权利要求2-12任一所述的方法,其特征在于,所述s个开销复帧中的任一个开销复帧还包括至少两个管理通道,所述管理通道包括用于管理所述至少一条PHY链路的管理信息。
  14. 根据权利要求3-13任一所述的方法,其特征在于,所述m为4或5。
  15. 根据权利要求5-14任一所述的方法,其特征在于,所述开销块和所述码块均为257比特。
  16. 根据权利要求10-15任一所述的方法,其特征在于,所述n为639或1279。
  17. 一种数据处理方法,其特征在于,所述方法包括:
    第二设备获取由至少一条物理层PHY链路传输的多个码块,所述多个码块包括类型为开销的码块和类型为数据的码块,所述类型为数据的码块包括数据单元和类型,或者,所述类型为数据的码块包括类型、类型指示和码块内容;
    所述第二设备基于所述类型为数据的码块的编码方式和所述类型为开销的码块将所述类型为数据的码块进行解映射,得到至少一个业务流,所述至少一个业务流包括所述类型为数据的码块。
  18. 一种数据处理装置,其特征在于,所述装置应用于第一设备,所述装置包括:
    获取模块,用于获取至少一个业务流,所述至少一个业务流中的任一个业务流包括多个码块,所述码块包括数据单元和类型,或者,所述码块包括类型、类型指示和码块内容;
    映射模块,用于基于所述码块的编码方式,将所述多个码块映射到至少一条物理层PHY链路上,所述至少一条PHY链路用于传输所述多个码块。
  19. 根据权利要求18所述的装置,其特征在于,所述至少一条PHY链路中的任一条PHY链路包括至少一个时隙,所述至少一条PHY链路中的任一条PHY链路还用于传输s个开销复帧,所述s个开销复帧的格式基于所述编码方式确定,所述s个开销复帧包括所述至少一个时隙与所述至少一个业务流的映射关系,所述映射关系用于将所述多个码块映射到所述PHY链路上,所述s基于所述PHY链路的传输速率确定。
  20. 根据权利要求19所述的装置,其特征在于,所述时隙的速率为5m吉比特每秒,一个时隙用于传输一个码块,所述m为大于1的整数。
  21. 根据权利要求19或20所述的装置,其特征在于,所述s个开销复帧中的任一个开销复帧包括多个开销帧,对于所述多个开销帧中的至少一个开销帧,所述至少一个开销帧中的任一个开销帧包括一个时隙与一个业务流的映射关系。
  22. 根据权利要求21所述的装置,其特征在于,对于所述至少一个开销帧中的任一个开销帧,所述任一个开销帧包括多个开销块,所述多个开销块中的至少一个开销块包括所述一个时隙与一个业务流的映射关系。
  23. 根据权利要求22所述的装置,其特征在于,所述任一个开销复帧包括32个开销帧,一个开销帧包括2个开销块。
  24. 根据权利要求19-23任一所述的装置,其特征在于,所述任一条PHY链路包括k个时隙,对于所述s个开销复帧中的任一个开销复帧,所述开销复帧包括k/s个时隙与所述至少一个业务流的映射关系,所述k基于所述PHY链路的传输速率与所述时隙的速率的比值确定,且所述k为所述s的整数倍。
  25. 根据权利要求24所述的装置,其特征在于,所述k个时隙中每s个时隙为一个时隙组,所述s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:各个时隙组中的第i个时隙,所述i为大于等于0且小于所述s的整数,或者所述i为大于0且小于等于所述s的整数。
  26. 根据权利要求24所述的装置,其特征在于,所述k个时隙中每k/s个时隙为一个时隙组,所述s个开销复帧中的第i个开销复帧对应的k/s个时隙包括:第i个时隙组包括的时隙,所述i为大于等于0且小于所述s的整数,或者所述i为大于0且小于等于所述s的整数。
  27. 根据权利要求24-26任一所述的装置,其特征在于,所述任一条PHY链路用于每传输n*k个码块传输s个开销块,所述s个开销块中的第r个开销块用于组成所述s个开销复帧中的第r个开销复帧,所述n为正整数,所述r为大于等于0且小于所述s的整数,或者所述r为大于0且小于等于所述s的整数。
  28. 根据权利要求19-27任一所述的装置,其特征在于,所述映射模块,用于获取所述至少一个业务流对应的开销复帧;修改所述开销复帧,基于修改后的开销复帧,将所述多个码块映射到所述至少一条PHY链路上。
  29. 根据权利要求18-27任一所述的装置,其特征在于,所述多个码块包括类型为数据的码块和类型为空闲的码块,所述映射模块,用于将所述多个码块中类型为空闲的至少一个码块替换为包括操作维护管理OAM信息的码块,所述OAM信息用于管理所述多个码块中类型为数据的码块;将替换后的多个码块映射到至少一条PHY链路上。
  30. 根据权利要求19-29任一所述的装置,其特征在于,所述s个开销复帧中的任一个开销复帧还包括至少两个管理通道,所述管理通道包括用于管理所述至少一条PHY链路的管理信息。
  31. 根据权利要求20-30任一所述的装置,其特征在于,所述m为4或5。
  32. 根据权利要求22-31任一所述的装置,其特征在于,所述开销块和所述码块均为257比特。
  33. 根据权利要求27-32任一所述的装置,其特征在于,所述n为639或1279。
  34. 一种数据处理装置,其特征在于,所述装置应用于第二设备,所述装置包括:
    获取模块,用于获取由至少一条物理层PHY链路传输的多个码块,所述多个码块包括类型为开销的码块和类型为数据的码块,所述类型为数据的码块包括数据单元和类型,或者,所述类型为数据的码块包括类型、类型指示和码块内容;
    解映射模块,用于基于所述类型为数据的码块的编码方式和所述类型为开销的码块将所述类型为数据的码块进行解映射,得到至少一个业务流,所述至少一个业务流包括所述类型为数据的码块。
  35. 一种网络设备,其特征在于,所述网络设备包括:处理器,所述处理器与存储器耦合,所述存储器中存储有至少一条程序指令或代码,所述至少一条程序指令或代码由所述处理器加载并执行,以使所述网络设备实现如权利要求1-17中任一所述的方法。
  36. 一种网络系统,其特征在于,所述网络系统包括第一设备和第二设备,所述第一设备用于执行如权利要求1-16任一所述的方法,所述第二设备用于执行如权利要求17所述的方法。
  37. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条程序指令或代码,所述程序指令或代码由处理器加载并执行时以使计算机实现如权利要求1-17中任一所述的方法。
  38. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码被计算机运行时,使得所述计算机实现如权利要求1-17中任一所述的方法。
  39. 一种芯片,其特征在于,所述芯片包括处理器,所述处理器用于从存储器中调用并运行所述存储器中存储的指令,使得安装有所述芯片的通信设备执行如权利要求1-17中任一所述的方法。
  40. 根据权利要求39所述的芯片,其特征在于,还包括:输入接口、输出接口和所述存储器,所述输入接口、所述输出接口、所述处理器以及所述存储器之间通过内部连接通路相连。
PCT/CN2023/077958 2022-03-02 2023-02-23 数据处理方法、装置、设备、系统及计算机可读存储介质 WO2023165412A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210198049.7 2022-03-02
CN202210198049 2022-03-02
CN202210554595.X 2022-05-20
CN202210554595.XA CN116743677A (zh) 2022-03-02 2022-05-20 数据处理方法、装置、设备、系统及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023165412A1 true WO2023165412A1 (zh) 2023-09-07

Family

ID=87882941

Country Status (2)

Country Link
TW (1) TW202338624A (zh)
WO (1) WO2023165412A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107612825A (zh) * 2017-08-04 2018-01-19 华为技术有限公司 建立灵活以太网路径的方法和网络设备
CN109981208A (zh) * 2017-12-27 2019-07-05 华为技术有限公司 基于灵活以太网FlexE传输业务流的方法和装置
CN111092686A (zh) * 2019-11-28 2020-05-01 中兴通讯股份有限公司 一种数据传输方法、装置、终端设备和存储介质
CN113497728A (zh) * 2020-04-03 2021-10-12 华为技术有限公司 一种业务流调整方法和通信装置
CN115276887A (zh) * 2021-04-29 2022-11-01 华为技术有限公司 一种灵活以太网的数据处理方法及相关装置

Also Published As

Publication number Publication date
TW202338624A (zh) 2023-10-01

Similar Documents

Publication Publication Date Title
US11824960B2 (en) Communication method, communications device, and storage medium
US10903929B2 (en) Flexible ethernet group establishment method and device
EP4178297A1 (en) Data transmission method, and device
US11412074B2 (en) Method and device for transparently transmitting service frequency
US11838181B2 (en) Flexible ethernet group management method, device, and computer-readable storage medium
US11799992B2 (en) Data transmission method in flexible ethernet and device
US11412508B2 (en) Data transmission method and device
WO2022052609A1 (zh) 时延补偿方法、装置、设备及计算机可读存储介质
US11750310B2 (en) Clock synchronization packet exchanging method and apparatus
WO2023165412A1 (zh) Data processing method, apparatus, device, system, and computer-readable storage medium
WO2023109424A1 (zh) Data transmission method and related apparatus
WO2020169009A1 (zh) Method and apparatus for configuring time slot container
CN116743677A (zh) Data processing method, apparatus, device, system, and computer-readable storage medium
US11438091B2 (en) Method and apparatus for processing bit block stream, method and apparatus for rate matching of bit block stream, and method and apparatus for switching bit block stream
WO2022100110A1 (zh) Network synchronization method, apparatus, device, system, and readable storage medium
WO2023083175A1 (zh) Packet transmission method and communication apparatus
US11902403B2 (en) Method for receiving code block stream, method for sending code block stream, and communications apparatus
WO2023082087A1 (zh) Control signaling transmission method, communication node, and base station
TW202333460A (zh) Encoding method, decoding method, apparatus, device, system, and readable storage medium
TW202333459A (zh) Encoding method, decoding method, apparatus, device, system, and readable storage medium
CN116455517A (zh) Encoding method, decoding method, apparatus, device, system, and readable storage medium
CN116346734A (zh) Packet loss rate statistics method, device, and computer-readable storage medium
CN117675101A (zh) Data transmission method, apparatus, system, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23762812

Country of ref document: EP

Kind code of ref document: A1