CN117221768A - Service data processing method and device - Google Patents


Info

Publication number
CN117221768A
Authority
CN
China
Prior art keywords
data
time slot
code block
block
code
Prior art date
Legal status
Pending
Application number
CN202210621039.XA
Other languages
Chinese (zh)
Inventor
郑述乾
刘翔
苏伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority application: CN202210621039.XA
PCT application: PCT/CN2023/097660 (WO2023232097A1)
Publication: CN117221768A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q: SELECTING
    • H04Q11/00: Selecting arrangements for multiplex systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Time-Division Multiplex Systems (AREA)

Abstract

Embodiments of this application provide a service data processing method and apparatus. The method includes: receiving a plurality of pieces of service data, performing time slot multiplexing on the plurality of pieces of service data based on a time slot multiplexing structure to obtain a first data stream, mapping the first data stream into a first data frame, and sending the first data frame. The time slot multiplexing structure includes m columns of code blocks, where the i-th column is a first time slot block and the j-th column includes k second time slot blocks. The first data frame includes N columns of code blocks, the first column being a first overhead code block that carries first indication information and second indication information: the first indication information indicates the starting position, within the first data frame, of the time slot multiplexing structure in the first data stream, and the second indication information indicates the mapping relationship between the first time slot block, the second time slot blocks, and the first data stream. By defining the frame structure and the time slot multiplexing structure on code blocks, the method supports time slot multiplexing of multiple services with different bandwidths, simplifies processing complexity, and reduces delay.

Description

Service data processing method and device
Technical Field
The present application relates to the field of optical communications, and more particularly, to a service data processing method and apparatus.
Background
The optical transport network (optical transport network, OTN) is widely deployed on the trunk line, metropolitan core and metropolitan edge, has the natural advantages of high quality, large capacity and wide coverage, and can realize flexible scheduling and management of large-capacity customer services.
The optical data unit 0 (ODU0) frame serves as the minimum-rate bearer container in current OTN technology; its rate is about 1.25 gigabits per second (Gbps), and it is used to carry 1 Gbps Ethernet service data. As the demand for low-rate service bearing in OTN grows, transmitting relatively low-rate services over such high-rate transmission frames usually requires mapping and multiplexing the existing low-rate service data into higher-rate signals, which are then carried by existing OTN optical bearer containers such as ODU0. In this implementation, however, the processing complexity of time slot multiplexing is high, and problems such as large delay and low bandwidth utilization also arise.
Disclosure of Invention
Embodiments of this application provide a service data processing method and apparatus that define a frame structure and a time slot multiplexing structure based on a unified code block, support time slot multiplexing of multiple pieces of service data with different bandwidths, simplify the complexity of service data processing, and reduce delay.
In a first aspect, a service data processing method is provided. The method may be performed by a transmitting end device or by a component of the transmitting end device (for example, a chip or a chip system); this is not limited in this application. The method includes: receiving first service data and second service data, performing time slot multiplexing on the first service data and the second service data based on a time slot multiplexing structure to obtain a first data stream, mapping the first data stream into a first data frame, and sending the first data frame.
The bandwidth of the first service data is less than or equal to 200 megabits per second (Mbps). The time slot multiplexing structure includes m columns of code blocks, where the i-th column is a first time slot block and the j-th column includes k second time slot blocks. The first data frame includes N columns of code blocks; the first column of the first data frame is a first overhead code block used to manage the data code blocks other than the first column. The first overhead code block includes first indication information and second indication information: the first indication information indicates the starting position of the first data stream in the first data frame, and the second indication information indicates the mapping relationship between the first time slot block, the second time slot blocks, and the first data stream. Here k, m, and N are integers greater than 1, and i and j are integers greater than or equal to 1 and less than or equal to m.
Based on this scheme, a time slot multiplexing structure and a frame structure for time slot multiplexing (namely, the first data frame) are defined on a unified code block, supporting time slot multiplexing of at least two pieces of service data. This simplifies the time slot multiplexing procedure and, especially for small-bandwidth service data, ensures bandwidth utilization and reduces transmission delay.
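As a concrete illustration of the structure just described, the sketch below builds an m-column layout in which designated columns each carry one large (first) time slot block and the remaining columns are each split into k small (second) time slot blocks. All names and sizes here (64-byte large blocks, 8-byte small blocks, m = 4) are illustrative assumptions, not values fixed by the application.

```python
# Illustrative sketch of the time slot multiplexing structure: m columns,
# where some columns are one first (large) time slot block each and the
# others are split into k second (small) time slot blocks.
LARGE = 64            # assumed bytes in a first (large) time slot block
SMALL = 8             # assumed bytes in a second (small) time slot block
M = 4                 # assumed number of columns in the structure
K = LARGE // SMALL    # small slot blocks per small-slot column

def build_structure(large_cols):
    """Return M columns: each is either one large-slot placeholder tuple
    or a list of K small-slot placeholder tuples."""
    cols = []
    for col in range(M):
        if col in large_cols:
            cols.append(("large", col))                       # 1 big slot
        else:
            cols.append([("small", col, s) for s in range(K)])
    return cols

structure = build_structure(large_cols={0, 2})
# Columns 0 and 2 carry one large slot each; columns 1 and 3 carry K small slots.
total_small = sum(len(c) for c in structure if isinstance(c, list))
print(total_small)  # 16 small slots available for low-rate services
```

With this mix, higher-rate services can be granted whole large slots while sub-200 Mbps services share the small slots of the remaining columns.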
With reference to the first aspect, in certain implementations of the first aspect, the code block includes first information, the first information being used to indicate a code block type of the code block.
With reference to the first aspect, in some implementations of the first aspect, a code block type of the code block is a data code block or a non-data code block, and when the code block type of the code block is a non-data code block, the code block further includes second information, where the second information is used to indicate that the code block type of the non-data code block is an overhead code block or a rate adaptation code block. Based on the above scheme, the type of the code block can be flexibly indicated by introducing the first information and the second information in the code block.
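The two-level type indication described above can be pictured as a small lookup: the first information distinguishes data from non-data code blocks, and, for non-data blocks only, the second information distinguishes overhead code blocks from rate adaptation code blocks. The field values used below are invented for illustration; the application does not specify them at this point.

```python
# Hedged sketch of the two-level code block type indication.
# first_info: 0 = data code block, 1 = non-data code block (assumed values).
# second_info (non-data only): 0 = overhead, 1 = rate adaptation (assumed).

def classify(first_info, second_info=None):
    if first_info == 0:
        return "data"
    # Non-data block: the second information is required to refine the type.
    return {0: "overhead", 1: "rate_adaptation"}[second_info]

print(classify(0))        # data
print(classify(1, 0))     # overhead
print(classify(1, 1))     # rate_adaptation
```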
With reference to the first aspect, in certain implementations of the first aspect, before performing time slot multiplexing on the first service data and the second service data based on the time slot multiplexing structure to obtain the first data stream, the method further includes: encapsulating and rate-matching the first service data to obtain a first sub-data stream, and encapsulating and rate-matching the second service data to obtain a second sub-data stream, where the first sub-data stream and the second sub-data stream are used for time slot multiplexing.
Based on this scheme, the sub-data streams obtained by encapsulating and rate-matching different service data can be mapped into the time slot multiplexing structure according to the size of a large or small time slot block. This ensures that different service data use the code block as the processing granularity during time slot multiplexing and simplifies subsequent time slot multiplexing.
With reference to the first aspect, in some implementations of the first aspect, encapsulating and rate-matching the first service data to obtain the first sub-data stream includes: mapping the first service data into a second data frame according to the size of the first time slot block, and rate-matching the second data frame according to the size of the second time slot block to obtain the first sub-data stream, where the second data frame includes N columns of code blocks, the first column of the second data frame is a second overhead code block used to manage the data code blocks other than the first column, and N is an integer greater than 1.
Based on this scheme, for a small-bandwidth service (such as the first service data), encapsulation is completed with a second data frame defined on the code block; the data is sliced and rate-matched according to the size of the second time slot block and mapped to the designated small time slot positions in the time slot multiplexing structure. This ensures that the first service data uses the code block as the processing granularity during time slot multiplexing, simplifies processing, supports time slot multiplexing of small-bandwidth services, reduces transmission delay, and improves bandwidth utilization.
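A minimal sketch of this small-bandwidth path follows, under assumed sizes (8-byte second time slot blocks) and invented block markers (`b"OH"` for the overhead code block, `b"RA"` for rate adaptation padding); none of these constants come from the application itself.

```python
# Sketch: encapsulate first service data into a second data frame (column 0
# reserved for the overhead code block), slice the payload into small-slot-
# sized units, then pad with rate adaptation blocks when the service is
# slower than the granted slot rate. All sizes/markers are illustrative.
SMALL = 8  # assumed size of a second (small) time slot block, bytes

def encapsulate(payload: bytes):
    frame = [b"OH"]                                  # second overhead code block
    frame += [payload[i:i + SMALL] for i in range(0, len(payload), SMALL)]
    return frame

def rate_match(frame, slots_granted):
    units = frame[1:]                                # data code blocks only
    # Pad up to the granted number of small slots with rate adaptation blocks.
    units += [b"RA"] * (slots_granted - len(units))
    return units

frame = encapsulate(b"A" * 24)                       # 24 B -> 3 small blocks
sub_stream = rate_match(frame, slots_granted=4)
print(sub_stream)  # three 8-byte data units followed by one b'RA' block
```

The resulting sub-data stream can then be placed into the designated small time slot positions one code block at a time, which is what keeps the multiplexer's processing granularity uniform.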
With reference to the first aspect, in some implementations of the first aspect, when the bandwidth of the second service data is greater than 200 Mbps, encapsulating and rate-matching the second service data to obtain the second sub-data stream includes: mapping the second service data into a second data frame according to the size of the first time slot block, and rate-matching the second data frame according to the size of the first time slot block to obtain the second sub-data stream, where the second data frame includes N columns of code blocks, the first column of the second data frame is a second overhead code block used to manage the data code blocks other than the first column, and N is an integer greater than 1.
Based on this scheme, for a large-bandwidth service (such as the second service data), encapsulation is completed with a second data frame defined on the code block; the data is sliced and rate-matched according to the size of the first time slot block and mapped to the designated large time slot position in the time slot multiplexing structure. This ensures that the second service data uses the code block as the processing granularity during time slot multiplexing and simplifies processing.
The technical solution of this application performs time slot multiplexing on service data with different bandwidths, realizing more flexible service data bearing. Meanwhile, with code blocks as the processing granularity, large and small time slot blocks are mixed for time slot multiplexing, so that service data at different rates can be processed in a targeted and flexible way, simplifying the service processing flow.
With reference to the first aspect, in certain implementations of the first aspect, a size of the first slot block is greater than or equal to 64 bytes, and a size of the second slot block is greater than or equal to 8 bytes and less than or equal to 64 bytes. For example, the second slot block has a size of 8, 16, 24, 32, or 64 bytes, and the first slot block has a size of 64, 128, 192, 256, 65, 129, 193, or 257 bytes. Based on the scheme, the first time slot block and the second time slot block can be code blocks with different byte sizes, so that the time slot multiplexing and bearing of service data with different rates can be realized, and the flexibility is high.
With reference to the first aspect, in certain implementations of the first aspect, the sizes of the first slot block and the second slot block satisfy:
X=k*p+c
where X is the size of the first time slot block, p is the size of the second time slot block, k is the number of second time slot blocks, and c is the space occupied by the first information.
Based on this scheme, the size of the first time slot block corresponds to an integer number of second time slot blocks plus the first information. That is, one time slot multiplexing structure can mix m×k second time slot blocks (small time slots) and m first time slot blocks (large time slots) in a given proportion for time slot multiplexing, providing higher flexibility and stronger adaptability.
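The relation X = k*p + c can be checked against the example sizes listed earlier (first time slot blocks of 65, 129, 193, and 257 bytes against second time slot blocks of 8, 16, 24, and 32 bytes), assuming k = 8 and that the first information occupies c = 1 byte per block. These parameter choices are inferences from the listed numbers, not values the application states normatively.

```python
# Worked check of X = k*p + c against the example slot block sizes,
# assuming k = 8 second slot blocks and c = 1 byte of first information.

def first_slot_size(k, p, c=1):
    return k * p + c

print(first_slot_size(k=8, p=8))    # 65:  eight 8-byte slots + 1 B header
print(first_slot_size(k=8, p=16))   # 129
print(first_slot_size(k=8, p=24))   # 193
print(first_slot_size(k=8, p=32))   # 257
```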
In a second aspect, a service data processing method is provided. The method may be performed by a receiving end device or by a component of the receiving end device (for example, a chip or a chip system); this is not limited in this application. The method includes: receiving a first data frame, demapping a first data stream from the first data frame, and performing time slot demultiplexing on the first data stream to obtain first service data and second service data.
The first data frame carries the first data stream, and the first data stream is obtained by performing time slot multiplexing on the first service data and the second service data based on a time slot multiplexing structure. The bandwidth of the first service data is less than or equal to 200 Mbps. The time slot multiplexing structure includes m columns of code blocks, where the i-th column is a first time slot block and the j-th column includes k second time slot blocks. The first data frame includes N columns of code blocks; the first column is a first overhead code block used to manage the data code blocks other than the first column. The first overhead code block includes first indication information and second indication information: the first indication information indicates the starting position of the first data stream in the first data frame, and the second indication information indicates the mapping relationship between the first time slot block, the second time slot blocks, and the first data stream. Here k, m, and N are integers greater than 1, and i and j are integers greater than or equal to 1 and less than or equal to m.
Based on this scheme, a time slot multiplexing structure and a frame structure for time slot multiplexing (namely, the first data frame) are defined on a unified code block, supporting time slot multiplexing of at least two pieces of service data. This simplifies the time slot multiplexing procedure and, especially for small-bandwidth service data, ensures bandwidth utilization and reduces transmission delay.
With reference to the second aspect, in certain implementations of the second aspect, the code block includes first information, the first information being used to indicate a code block type of the code block.
With reference to the second aspect, in some implementations of the second aspect, the code block type of the code block is a data code block or a non-data code block, and when the code block type of the code block is a non-data code block, the code block further includes second information, where the second information is used to indicate that the code block type of the non-data code block is an overhead code block or a rate adaptation code block. Based on the above scheme, the type of the code block can be flexibly indicated by introducing the first information and the second information in the code block.
With reference to the second aspect, in some implementations of the second aspect, after demapping the first data stream from the first data frame and performing time slot demultiplexing on the first data stream to obtain the first service data and the second service data, the method further includes: deleting rate matching code blocks from the first sub-data stream and decapsulating the first sub-data stream to obtain the first service data, and deleting rate matching code blocks from the second sub-data stream and decapsulating the second sub-data stream to obtain the second service data, where the first sub-data stream and the second sub-data stream are obtained through the time slot demultiplexing.
Based on this scheme, the rate matching code blocks are deleted from the different sub-data streams, and the sub-data streams are decapsulated to obtain the different service data. The different sub-data streams are associated with the designated large or small time slot block positions in the time slot multiplexing structure, and this association can further be determined from an existing time slot configuration table. This ensures that different service data use the code block as the processing granularity during time slot multiplexing and simplifies subsequent processing.
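The receiver-side steps can be sketched as follows; the `b"RA"` marker for rate matching code blocks and the byte-level reassembly are illustrative assumptions, not the application's actual code block format.

```python
# Receiver-side sketch: after time slot demultiplexing, each sub-data
# stream has its rate matching (rate adaptation) code blocks removed and
# is then decapsulated back into contiguous service data.

def strip_rate_matching(sub_stream):
    # Drop the padding blocks inserted during rate matching (assumed marker).
    return [unit for unit in sub_stream if unit != b"RA"]

def decapsulate(units):
    # Reassemble the payload carried by the data code blocks.
    return b"".join(units)

sub_stream = [b"AAAAAAAA", b"AAAAAAAA", b"AAAAAAAA", b"RA"]
service_data = decapsulate(strip_rate_matching(sub_stream))
print(len(service_data))  # 24
```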
With reference to the second aspect, in some implementations of the second aspect, deleting the rate matching code blocks from the first sub-data stream and decapsulating the first sub-data stream to obtain the first service data includes: deleting the rate matching code blocks from the first sub-data stream according to the size of the second time slot block to obtain a second data frame, and demapping the first service data from the second data frame according to the size of the first time slot block, where the second data frame includes N columns of code blocks, the first column of the second data frame is a second overhead code block used to manage the data code blocks other than the first column, and N is an integer greater than 1.
Based on this scheme, for a small-bandwidth service (such as the first service data), the rate matching code blocks are deleted according to the size of the second time slot block, and the first service data is demapped, according to the size of the first time slot block, from the second data frame defined on the code block. Because the small-bandwidth service is mapped to the designated small time slot positions in the time slot multiplexing structure, the first service data uses the code block as the processing granularity during time slot multiplexing, which simplifies processing, supports time slot multiplexing of small-bandwidth services, helps reduce transmission delay, and improves bandwidth utilization.
With reference to the second aspect, in some implementations of the second aspect, when the bandwidth of the second service data is greater than 200 Mbps, deleting the rate matching code blocks and decapsulating the second sub-data stream to obtain the second service data includes: deleting the rate matching code blocks from the second sub-data stream according to the size of the first time slot block to obtain a second data frame, and demapping the second service data from the second data frame according to the size of the first time slot block, where the second data frame includes N columns of code blocks, the first column of the second data frame is a second overhead code block used to manage the data code blocks other than the first column, and N is an integer greater than 1.
Based on this scheme, for a large-bandwidth service (such as the second service data), the rate matching code blocks are deleted according to the size of the first time slot block, and the second service data is demapped, according to the size of the first time slot block, from the second data frame defined on the code block. Because the large-bandwidth service is mapped to the designated large time slot position in the time slot multiplexing structure, the second service data uses the code block as the processing granularity during time slot multiplexing, which simplifies processing.
According to the technical scheme, the time slot multiplexing is carried out based on the service data with different bandwidths, so that more flexible service data bearing is realized. Meanwhile, the code blocks are used as processing granularity, and the large time slot blocks and the small time slot blocks are used for carrying out mixed time slot multiplexing, so that service data with different rates can be processed in a targeted and flexible way, and the service processing flow is simplified.
With reference to the second aspect, in some implementations of the second aspect, the first slot block is greater than or equal to 64 bytes in size, and the second slot block is greater than or equal to 8 bytes in size and less than or equal to 64 bytes in size. Illustratively, the second slot block is 8, 16, 24, 32, or 64 bytes in size and the first slot block is 64, 128, 192, 256, 65, 129, 193, or 257 bytes in size. Based on the scheme, the first time slot block and the second time slot block can be code blocks with different byte sizes, so that the time slot multiplexing and bearing of service data with different rates can be realized, and the flexibility is high.
With reference to the second aspect, in certain implementations of the second aspect, the first slot block and the second slot block have sizes that satisfy:
X=k*p+c
where X is the size of the first time slot block, p is the size of the second time slot block, k is the number of second time slot blocks, and c is the space occupied by the first information.
Based on this scheme, the size of the first time slot block corresponds to an integer number of second time slot blocks plus the first information; that is, one time slot multiplexing structure can mix m×k second time slot blocks (small time slots) and m first time slot blocks (large time slots) in a given proportion for time slot multiplexing, providing higher flexibility and stronger adaptability.
In a third aspect, a service data processing apparatus is provided. The apparatus is for performing the method provided in the first aspect above. In particular, the service data processing apparatus may comprise means and/or modules for performing the method provided by the first aspect or any of the implementations of the first aspect.
In one implementation, the service data processing apparatus is a transmitting end device. The transceiver module may be a transceiver or an input/output interface, and the processing module may be at least one processor. Optionally, the transceiver may be a transceiver circuit, and the input/output interface may be an input/output circuit.
In another implementation, the service data processing apparatus is a chip, a system-on-chip or a circuit in the transmitting device. The transceiver module may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, system on a chip or circuit, etc. The processing module may be at least one processor, processing circuit or logic circuit, etc.
For the advantages of the apparatus shown in the third aspect and its possible designs, reference may be made to the advantages of the first aspect and its possible designs.
In a fourth aspect, a service data processing apparatus is provided. The apparatus is for performing the method provided in the second aspect above. In particular, the service data processing apparatus may comprise means and/or modules for performing the method provided by the second aspect or any of the implementations of the second aspect.
In one implementation, the service data processing apparatus is a receiving end device. The transceiver module may be a transceiver or an input/output interface, and the processing module may be at least one processor. Optionally, the transceiver may be a transceiver circuit, and the input/output interface may be an input/output circuit.
In another implementation, the service data processing apparatus is a chip, a system-on-chip or a circuit in the receiving end device. The transceiver module may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, system on a chip or circuit, etc. The processing module may be at least one processor, processing circuit or logic circuit, etc.
For the advantages of the apparatus shown in the fourth aspect and its possible designs, reference may be made to the advantages of the second aspect and its possible designs.
In a fifth aspect, a processor is provided for performing the methods provided in the above aspects. Unless otherwise stated, or unless contradicted by the actual function or internal logic of the related description, operations such as transmitting, acquiring, and receiving that involve the processor may be understood as output and input operations of the processor, or as transmitting and receiving operations performed by a radio frequency circuit and an antenna; this is not limited in this application.
In a sixth aspect, a computer readable storage medium is provided. The computer readable storage medium stores program code for execution by a device, the program code comprising instructions for performing the method provided by any implementation of the first or second aspect described above.
In a seventh aspect, a computer program product comprising instructions is provided. The computer program product, when run on a computer, causes the computer to perform the method provided by any one of the implementations of the first or second aspects described above.
In an eighth aspect, a chip is provided, the chip including a processor and a communication interface. The processor reads instructions stored on the memory through the communication interface and performs the method provided by any implementation manner of the first aspect or the second aspect.
Optionally, in an implementation, the chip further includes a memory storing a computer program or instructions, and the processor is configured to execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the processor is configured to perform the method provided by the second aspect or any implementation of the second aspect.
In a ninth aspect, there is provided a communication system comprising: the service data processing apparatus according to the third aspect and the service data processing apparatus according to the fourth aspect.
Drawings
Fig. 1 is a schematic view of an application scenario to which the present application is applicable.
Fig. 2 is a schematic diagram of a hardware structure of a network device.
Fig. 3 is a schematic frame structure of an OTN frame.
Fig. 4 is a schematic structural diagram of different types of code blocks according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a frame structure of a data frame according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a frame structure for time slot multiplexing according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a timeslot multiplexing structure according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a combination of large and small code blocks according to an embodiment of the present application.
Fig. 9 is a schematic flow chart of a service data processing method provided by an embodiment of the present application.
Fig. 10 is a schematic flow chart of mapping a data stream to a data frame according to an embodiment of the present application.
Fig. 11 is a schematic flow chart of another service data processing method provided in an embodiment of the present application.
Fig. 12 is a schematic flow chart of a multi-channel service data processing provided in an embodiment of the present application.
Fig. 13 is a schematic structural diagram of a communication device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiment of the application is applicable to optical networks, such as: optical transport network OTN. An OTN is typically formed by connecting a plurality of devices through optical fibers, and may be configured to have different topology types such as linear, ring, mesh, etc., according to specific needs.
Fig. 1 is a schematic view of an application scenario to which the present application is applicable. As shown in fig. 1, the OTN 100 includes 8 interconnected OTN devices 101, i.e., devices a-H. Wherein 102 indicates an optical fiber for connecting two devices; 103 indicates a customer service interface for receiving or transmitting customer service data. As shown in fig. 1, the OTN 100 is used to transmit traffic data for the client devices 1-3. The client device is connected with the OTN device through a client service interface. For example, in FIG. 1, client devices 1-3 are connected to OTN devices A, H, and F, respectively.
Generally, OTN devices are classified into optical layer devices, electrical layer devices, and optical-electrical hybrid devices. An optical layer device is a device capable of processing optical layer signals, such as an optical amplifier (OA) or an optical add-drop multiplexer (OADM). An OA, also referred to as an optical line amplifier (OLA), is mainly used to amplify an optical signal so as to support longer transmission distances while ensuring specific performance of the optical signal. An OADM is used to spatially transform an optical signal so that it can be output from different output ports (sometimes also referred to as directions). An electrical layer device is a device capable of processing electrical layer signals, for example, a device capable of processing OTN signals. An optical-electrical hybrid device has the capability to process both optical layer signals and electrical layer signals. It should be noted that, depending on specific integration needs, one OTN device may integrate multiple different functions. The technical solution provided by this application applies to OTN devices of different forms and integration levels that include electrical layer functions.
Fig. 2 is a schematic diagram of a network device hardware structure 200, for example, one of the OTN devices A to H in Fig. 1. Specifically, the OTN device 200 includes a tributary board 201, a cross board 202, a line board 203, an optical layer processing board (not shown in the figure), and a system control and communication board 204. It should be noted that the types and numbers of boards included in a network device may differ according to specific needs. For example, a network device serving as a core node may have no tributary board 201. As another example, a network device serving as an edge node may have multiple tributary boards 201, or no cross board 202. As another example, a network device that supports only electrical layer functions may have no optical layer processing board.
The tributary board 201, the cross board 202, and the line board 203 are used to process electrical layer signals of the OTN (e.g., ODU frames). The tributary board 201 implements the reception and transmission of various customer services, such as synchronous digital hierarchy (SDH) services, packet services, Ethernet services, and forwarding services. Further, the tributary board 201 may be divided into a client-side transceiver module and a signal processor. The client-side transceiver module may also be referred to as an optical transceiver and is used to receive and/or send service data. The signal processor implements the mapping and demapping of service data into and out of data frames. The cross board 202 implements the exchange of data frames, completing the exchange of one or more types of data frames. The line board 203 mainly implements the processing of line-side data frames. Specifically, the line board 203 may be divided into a line-side optical module and a signal processor. The line-side optical module may also be referred to as an optical transceiver and is used to receive and/or send data frames. The signal processor implements the multiplexing and demultiplexing, or the mapping and demapping, of line-side data frames. The system control and communication board 204 implements system control. Specifically, it may collect information from different boards or send control instructions to the corresponding boards.
It should be noted that, unless otherwise specified, there may be one or more of each specific component (such as the signal processor), which is not limited in this application. This application also does not limit the types of boards, or the functional design and number of boards, included in the device. It should be further noted that, in a specific implementation, two of the boards may also be designed as a single board. In addition, the network device may further include a backup power supply, fans for heat dissipation, auxiliary boards for providing external alarms or accessing an external clock, and the like.
In order to facilitate understanding of the technical solution of the present application, some concepts and technologies related to the present application will be briefly described.
1. OTN frame
The data frame structure used by an OTN device is an OTN frame, which carries various service data and provides rich management and monitoring functions. An OTN frame may also be referred to as an OTN transport frame. Illustratively, the OTN frame may be an optical data unit frame (ODUk, ODUCn, or ODUflex), an optical transport unit frame (OTUk or OTUCn), a flexible OTN (FlexO) frame, a flexible optical service unit (OSUflex) frame, or the like. OSUflex may also be referred to as an OSU frame.
The difference between an ODU frame and an OTU frame is that the OTU frame includes an ODU frame plus OTU overhead. k represents a rate level; for example, k=1 represents 2.5 Gbps and k=4 represents 100 Gbps. Cn represents a variable rate, specifically a positive integer multiple of 100 Gbps. Unless otherwise specified, an ODU frame refers to any one of ODUk, ODUCn, or ODUflex, and an OTU frame refers to any one of OTUk, OTUCn, or FlexO.
It should be noted that, as OTN technology advances, new types of OTN frames may be defined, and the method provided by this application is also applicable to them.
2. OTN frame structure
Fig. 3 is a schematic diagram of the frame structure of an OTN frame. As shown in fig. 3, the OTN frame is a 4-row, multi-column frame structure including an overhead area, a payload area, and a forward error correction (FEC) area. For a detailed description of the OTN frame structure, refer to the relevant descriptions in the current protocols; they are not repeated in this application.
3. Multiplexing low rate traffic into high rate signals
When a 100 Mbps Fast Ethernet (FE) service needs to be transmitted over an OTN, the FE signal is first mapped into an ODU0 frame with a rate of about 1.25 Gbps and then transmitted in the OTN through OTU1. The transmission efficiency of this approach is low: the FE service occupies less than 10% of the ODU0 bandwidth. Low-rate E1 signals are handled similarly: a plurality of E1 signals are first mapped into a synchronous transport module-1 (STM-1) interface signal, where STM-1 is one of the SDH signals. The STM-1 interface signal is then mapped into ODU0 and transmitted over OTU1 in the OTN.
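The utilization figure quoted above can be checked with a line of arithmetic. The following sketch uses a nominal ODU0 rate of roughly 1.25 Gbps; the exact rate value used here is an assumption for illustration:

```python
# Illustrative arithmetic only: a single 100 Mbps FE client carried in an
# ODU0 container uses well under 10% of the container's bandwidth.
FE_RATE_MBPS = 100.0        # Fast Ethernet client rate
ODU0_RATE_MBPS = 1244.16    # approximate ODU0 rate (~1.25 Gbps)

utilization = FE_RATE_MBPS / ODU0_RATE_MBPS
print(f"ODU0 bandwidth used by one FE client: {utilization:.1%}")
```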
Multiplexing low-rate services into a high-rate signal may also be referred to as signal "multiplexing". In the present application it is understood as multiplexing a plurality of signals into an OTN signal according to a corresponding time slot arrangement.
With SDH technology gradually exiting the market and OTN technology developing rapidly, the scope of OTN technology has extended from backbone networks to metropolitan area networks and even access networks. In SDH technology, the virtual container (VC) carries various low-rate service data (such as 2 Mbps), whereas the minimum container in OTN technology has a rate of about 1.25 Gbps and carries various high-rate service data. OTN technology therefore faces a growing demand for carrying low-rate services. The current method for carrying low-rate services in an OTN is as follows: the OTN first maps and multiplexes the low-rate service data into a higher-rate signal, and then carries that signal in an existing high-rate carrier (e.g., an ODU0 frame). This implementation is complex and may also suffer from poor timeliness, low bandwidth utilization, and other problems.
Therefore, how to carry service data at different rates in an OTN while guaranteeing timeliness and bandwidth utilization is a problem that urgently needs to be solved.
In view of this, the present application provides a service data processing method and apparatus that uniformly define a code-block-based frame structure and time slot multiplexing structure, support time slot multiplexing of multiple service data flows with different bandwidths, simplify processing complexity, reduce delay, and improve bandwidth utilization.
The following description is made for the purpose of facilitating understanding of the embodiments of the present application.
First, in the embodiments of the present application, service data refers to a service carried by an optical transport network or a metropolitan transport network, such as, but not limited to, an Ethernet service, a packet service, or a wireless backhaul service. Service data may also be referred to as a service signal, customer data, or customer service data. It should be understood that the embodiments of the present application do not limit the type of service data.
Second, in the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. In the present application, the character "/" generally indicates an "or" relationship between the associated objects.
Third, in the embodiments of the present application, the "first", "second", and various numerical numbers (e.g., #1, # 2) in the embodiments shown below are merely for convenience of description and are not intended to limit the scope of the embodiments of the present application. The sequence numbers of the processes below do not mean the sequence of execution, and the execution sequence of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present application.
Fourth, the terms "comprises" and "comprising" and any variations thereof in the embodiments of the present application shown below are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements that are expressly listed or inherent to such process, method, article, or apparatus.
Fifth, in the embodiments of the present application, "exemplary", "such as", and the like are used to denote an example, illustration, or description. Any embodiment or design described as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present the relevant concepts in a concrete fashion to facilitate understanding.
Sixth, in the embodiment of the present application, the "protocol" may refer to a standard protocol in the OTN field, for example, including the g.709 standard protocol of ITU-T and related protocols applied in future OTN systems, which is not limited in the present application.
Seventh, in the embodiment of the present application, "for indication" includes direct indication and indirect indication. When describing a certain information for indicating a, it may be included that the information indicates a directly or indirectly, and does not necessarily represent that a is carried in the information.
Eighth, in the embodiment of the present application, the symbol "×" is an operation symbol, which represents a product.
Ninth, unless specifically stated otherwise, a specific description of some features in one embodiment may also be applied to explaining the corresponding features of other embodiments. For example, the definition of code blocks in one embodiment may be applied to other embodiments, and in other embodiments, a detailed description may be omitted.
The technical scheme provided by the application will be described in detail below with reference to the accompanying drawings. First, a code block, a data frame, a frame for slot multiplexing, and a slot multiplexing structure according to the present application will be exemplarily described with reference to fig. 4 to 8.
Fig. 4 is a schematic structural diagram of different types of code blocks according to an embodiment of the present application. Fig. 4 (a), (b), and (c) show, in code block units, a data code block (abbreviated as D code), an overhead (OH) code block (abbreviated as O code), and a rate adaptation (idle) code block (abbreviated as I code), respectively. The D code carries customer service data; the O code and the I code may be collectively referred to as control code blocks (abbreviated as C codes, i.e., non-data code blocks). The O code includes RES, multiplexing layer indication (MLI), overhead type (O_TYPE), and OH information fields, and the I code includes RES, MLI, O_TYPE, and AL (h55) information fields.
Specifically, each code block includes a c-bit field BLK_T (i.e., first information) indicating the type of the code block, where c is preferably 3 to 8 bits. For example, with c=3, "000" indicates that the code block is a D code, "111" indicates that it is a C code, and the other patterns indicate abnormal code blocks. Further, when the code block is a C code, its type may be further indicated by O_TYPE (i.e., second information): for example, O_TYPE=0 indicates an O code and O_TYPE=1 indicates an I code. Optionally, BLK_T supports a 1-bit error correction capability: "010", "100", and "001" are corrected to a D code, and "110", "101", and "011" are corrected to an O code or I code.
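As a minimal sketch of the BLK_T behaviour just described (the majority-vote decoding is an assumption consistent with the listed corrected patterns, not a normative algorithm):

```python
def decode_blk_t(bits: str) -> str:
    """Decode a 3-bit BLK_T field with the 1-bit error correction described
    above: '000' and its one-bit corruptions decode to a D code, '111' and
    its one-bit corruptions decode to a C code."""
    if len(bits) != 3 or any(b not in "01" for b in bits):
        raise ValueError("BLK_T must be a 3-bit string")
    # Majority vote over the 3 bits implements the correction table:
    # 000/001/010/100 -> D code; 111/110/101/011 -> C code.
    return "D" if bits.count("1") <= 1 else "C"
```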
It should be understood that the above bit numbers of the first information and the second information and the indication information are only examples, and should not constitute any limitation on the technical solution of the present application.
Illustratively, the sizes of the D code, O code, and I code may each be X bytes, where X is an integer greater than or equal to 64. For example, X may be 64, 128, 192, or 256; alternatively, X may be 65, 129, 193, or 257, which is not specifically limited in this application. It will be appreciated that the code block size defined in this application may take a number of possible values in order to support service data transmission at different rates.
It should be noted that the size of a code block in the embodiments of the present application may be understood as its bit width, i.e., the number of bits the code block occupies (possibly a non-integer number of bytes). Accordingly, the size of a code block may be an integer or non-integer number of bytes; for convenience of description, the following uses an integer number of bytes as an example. In the embodiments of the present application, "size" and "bit width" of a code block have the same meaning and may be used interchangeably; this is not repeated below.
Based on the different types of code blocks shown in fig. 4, two different frame structures are exemplarily described below with reference to fig. 5 and 6, respectively.
Fig. 5 is a schematic diagram of the frame structure of a data frame (which may be abbreviated OSU-n, i.e., the second data frame) according to an embodiment of the present application. As shown in fig. 5, the OSU-n is a data structure of N columns constructed from X-byte code blocks, each column being understood as one X-byte code block. The first column is an overhead code block and the other columns are data code blocks. The data code blocks carry customer service data, and the overhead code block manages the data code blocks in the columns other than the first. Y is less than X, and N is an integer greater than 1.
It should be appreciated that the head of the first column may define an OH of Y bytes, and that the remaining portion of the first column beyond the Y bytes occupied by OH (e.g., X-Y bytes minus the c bits occupied by BLK_T) may also be used to carry customer service data.
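The OSU-n layout lends itself to a very small model. The sketch below assembles one frame as a list of N columns; the column count N and code-block size X chosen here are illustrative values, not fixed by the text:

```python
X = 64   # code block size in bytes (one of the example values in the text)
N = 10   # number of columns; an illustrative choice, not fixed by the text

def build_osu_n(oh_block: bytes, data_blocks: list) -> list:
    """Assemble one OSU-n structure: N columns of X-byte code blocks, with
    the first column an overhead (O) code block and the remaining N-1
    columns data (D) code blocks."""
    assert len(oh_block) == X, "overhead code block must be X bytes"
    assert len(data_blocks) == N - 1, "an OSU-n carries N-1 data code blocks"
    assert all(len(b) == X for b in data_blocks)
    return [oh_block] + data_blocks

frame = build_osu_n(bytes(X), [bytes(X)] * (N - 1))
```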
Fig. 6 is a schematic diagram of a frame structure for time slot multiplexing (which may be abbreviated OSU-m, i.e., the first data frame) according to an embodiment of the present application. As shown in fig. 6, it is a data structure of N columns constructed from X-byte code blocks, each column being understood as one X-byte code block. The difference from the frame structure OSU-n shown in fig. 5 is that the first column of the OSU-m further includes a time slot pointer TS_PTR (i.e., the first indication information) and a multiplex structure identifier (MSI) (i.e., the second indication information); the combined bit width of TS_PTR and MSI is X-Y-c, where c is the number of bits occupied by BLK_T. Specifically, the TS_PTR indicates the start position, within the OSU-m, of the time slot multiplexing structure (shown in fig. 7) carrying the first data stream. The MSI is a time slot multiplexing overhead transferred over multiple frames; it indicates the mapping relationship between the first and second time slot blocks shown in fig. 7 and the first data stream (i.e., the data stream obtained by encapsulating and rate-matching the service data), for example, which time slots are occupied by which services.
Fig. 7 is a schematic diagram of a time slot multiplexing structure (which may be abbreviated OSTUG-m) according to an embodiment of the present application. As shown in fig. 7, it is an m-column data structure constructed from X-byte code blocks, each column being understood as one X-byte code block.
In one possible implementation, the i-th column of the OSTUG-m may be a first time slot block (e.g., a time slot #1, which may be referred to as a large time slot) with a bit width of X bytes and a defined slot bandwidth Rh; the m columns then form m time slots #1, with corresponding slot numbers TSH1 to TSHm. For example, with X=64 bytes, the data code block #1 corresponding to a time slot #1 may occupy 63 bytes and BLK_T may occupy 1 byte. As a concrete example, assuming a bit width X=256 bytes and a bandwidth Rh=100 Mbps, a 500 Mbps service can be transmitted through 5 time slots #1.
In another possible implementation, the j-th column of the OSTUG-m may be divided into k second time slot blocks (e.g., time slots #2, which may be referred to as small time slots) of p bytes each, with a bit width of p bytes and a defined slot bandwidth Rl; each column then includes k time slots #2, with corresponding slot numbers TSL1 to TSLk. By way of example, p may be 8, 16, 24, 32, 64, etc. For example, with X=64 bytes and p=8 bytes, each column of the OSTUG-m may include 7 data code blocks #2 (time slots #2) occupying 56 bytes in total, while a time slot overhead (TSOH) may occupy 7 bytes and BLK_T may occupy 1 byte. As a concrete example, assuming a bit width p=16 bytes, k=16, and a bandwidth Rl=100/16 Mbps, a 100 Mbps service requires 16 time slots #2 for transmission.
Wherein k and m are integers greater than 1, and i and j are integers greater than or equal to 1 and less than or equal to m.
It should be noted that the values of X, p, Rh, and Rl are merely examples and should not be construed as limiting the technical solution of the present application. In this implementation, X is an integer multiple of p and Rh is an integer multiple of Rl. For example, Rl=10, p=16, Rh=100, and X=256.
Therefore, one time slot multiplexing structure OSTUG-m can support mixing up to m first time slot blocks (time slots #1) and m×k second time slot blocks (time slots #2) in a certain proportion for time slot multiplexing, offering high flexibility and strong adaptability. For example, with Rh=100 Mbps and Rl=100/16 Mbps, the 500 Mbps service mentioned above may be transmitted through 4 time slots #1 plus 16 time slots #2, or alternatively through 3 time slots #1 plus 32 time slots #2, and so on.
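The slot-mix examples above can be reproduced with simple arithmetic. The sketch below uses the example bandwidths Rh=100 Mbps and Rl=100/16 Mbps from the text:

```python
RH_MBPS = 100.0        # large-slot (time slot #1) bandwidth, per the example
RL_MBPS = 100.0 / 16   # small-slot (time slot #2) bandwidth, per the example

def slot_mix_bandwidth(n_large: int, n_small: int) -> float:
    """Aggregate bandwidth offered by a mix of large and small time slots."""
    return n_large * RH_MBPS + n_small * RL_MBPS

# The 500 Mbps service of the example can be carried in several ways:
assert slot_mix_bandwidth(5, 0) == 500.0    # 5 large slots
assert slot_mix_bandwidth(4, 16) == 500.0   # 4 large + 16 small slots
assert slot_mix_bandwidth(3, 32) == 500.0   # 3 large + 32 small slots
```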
The relationship between the number of columns m of the time slot multiplexing structure shown in fig. 7 and the number of columns N of the frame structures shown in figs. 5 and 6 is not limited; that is, m may be greater than or less than N. For example, m=10 and N=100, or m=20 and N=10, which ensures the flexibility of time slot multiplexing.
Based on the time slots #1 and #2 involved in the OSTUG-m shown in fig. 7, the structures of the large and small code block combinations corresponding to the large and small time slots are described below with reference to fig. 8.
As shown in fig. 8 (a), a column of data code blocks of the OSTUG-m is divided into k second time slot blocks (time slots #2) of p bytes each; TSL1 to TSLk correspond in order to the k small code blocks, and the type of each small code block may be indicated by a 1-bit TSr_BLK_T (1≤r≤k). For example, TS1_BLK_T=0 indicates that the first small code block is a D code, and TS3_BLK_T=1 indicates that the third small code block is a C code (an O code or I code). The type indication bits TSr_BLK_T of the k small code blocks, together with an error correction code (ECC), may be collected into the BLK_T placed at the head of the column of data code blocks.
By way of example, X may be 65, 129, 193, 257, etc. For example, with X=65 bytes, p=8 bytes, and k=8, each column of the OSTUG-m may be divided into 8 small code blocks occupying 64 bytes in total, with BLK_T occupying 1 byte.
As shown in fig. 8 (b), a column of data code blocks of the OSTUG-m serves as a first time slot block (time slot #1), i.e., one large code block, whose head includes the type indication TS_BLK_T and an ECC. For example, TS_BLK_T=0 indicates that the code block is a D code, and TS_BLK_T=1 indicates that it is a C code (i.e., an O code or I code).
By way of example, X may be 65, 129, 193, 257, etc. For example, with X=65 bytes, the data code block occupies 64 bytes and BLK_T occupies 1 byte. It should be noted that the values of X, p, and k are merely examples and should not be construed as limiting the technical solution of the present application. In this implementation, X need not be an integer multiple of p.
It should be appreciated that the sizes of the large and small code blocks described above satisfy: X = k×p + c (with all quantities expressed in bits), where X is the bit width of the large code block (i.e., the size of the first time slot block), p is the bit width of the small code block (i.e., the size of the second time slot block), k is the number of small code blocks (i.e., the number of second time slot blocks), and c is the number of bits occupied by BLK_T.
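The relation X = k×p + c can be exercised with the X=65-byte example given above. The sketch below simplifies the field layout by assuming that BLK_T (the type bits plus ECC) occupies exactly the first byte of the column:

```python
def split_column(column: bytes, k: int, p: int):
    """Split one column into its BLK_T header and k small code blocks of
    p bytes each, checking the bit-level relation X = k*p + c."""
    c_bits = 8   # assumption: BLK_T (type bits + ECC) occupies the first byte
    assert len(column) * 8 == k * (p * 8) + c_bits, "X = k*p + c must hold"
    header, body = column[:1], column[1:]
    slots = [body[i * p:(i + 1) * p] for i in range(k)]
    return header, slots

# The X = 65 bytes, p = 8 bytes, k = 8 example from the text:
header, slots = split_column(bytes(65), k=8, p=8)
```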
Based on the basic code block structure, the frame structure, and the time slot multiplexing structure shown in fig. 4 to 8, a processing method for performing time slot multiplexing on service data of different rates will be described in detail with reference to fig. 9 and 11.
Fig. 9 shows a flowchart of a service data processing method 900 according to an embodiment of the present application. As shown in fig. 9, the method is performed by a transmitting-end device and a receiving-end device. The transmitting-end device may be an OTN device, or the method may be performed by a component of an OTN device (such as a chip or chip system); the same applies to the receiving-end device. Specifically, the method 900 includes the following steps.
S910, the sending terminal equipment receives the first service data and the second service data.
For example, the bandwidth of the first service data is less than or equal to 200 Mbps, e.g., 100 Mbps. The bandwidth of the second service data may be less than or equal to 200 Mbps, or greater than 200 Mbps, and there may be one or more items of second service data; this is not specifically limited in the present application. It should be understood that 200 Mbps here may be a pre-specified threshold value, and that the bandwidth range of the first service data is merely exemplary; for example, the bandwidth of the first service data may instead be less than or equal to 100 Mbps.
For convenience of description, the embodiments of the present application refer to a service whose service data bandwidth is less than or equal to 200 Mbps as a small-bandwidth service (e.g., the first service data), and a service whose service data bandwidth is greater than 200 Mbps as a large-bandwidth service. That is, the second service data may belong to either a small-bandwidth service or a large-bandwidth service.
Illustratively, the transmitting-end device is one of the OTN devices described above (e.g., OTN device A shown in fig. 1) and receives the service data (e.g., the first service data and the second service data) from a client device (e.g., the client device shown in fig. 1). Alternatively, the transmitting-end device is another device capable of implementing the functions of an OTN device. The embodiments of the present application do not limit the specific form of the transmitting-end device, as long as it can process the corresponding service data.
S920, the transmitting terminal device performs time slot multiplexing on the first service data and the second service data based on the time slot multiplexing structure to obtain a first data stream.
The definition of the time slot multiplexing structure is given in fig. 7 and is not repeated here. It should be noted that performing time slot multiplexing on the first service data and the second service data based on the time slot multiplexing structure to obtain the first data stream may be understood as follows: the first service data is placed, in the order given by a time slot configuration table, into the designated second time slot blocks in the time slot multiplexing structure (e.g., the small code blocks, i.e., time slots #2, of certain columns among the m columns of code blocks), and the second service data is placed, in the same table order, into the designated first time slot blocks (e.g., certain columns among the m columns of code blocks, i.e., time slots #1). The time slot configuration table is specified by the existing protocols and is not described here. That is, after the first service data and the second service data are time slot multiplexed, the resulting first data stream may include one or more time slot multiplexing structures.
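A highly simplified sketch of this placement step follows; the slot-configuration mapping and the service names are hypothetical, and one code block is taken from each sub-data stream per configured slot:

```python
def multiplex(slot_table: dict, sub_streams: dict, m: int) -> list:
    """Fill one m-column time slot multiplexing structure by placing the
    next code block of each service into its configured column, following
    the slot configuration table order."""
    structure = [None] * m   # unconfigured columns stay empty in this sketch
    for column, service in slot_table.items():
        structure[column] = sub_streams[service].pop(0)
    return structure

# Hypothetical configuration: service A owns columns 0 and 2, service B column 1.
streams = {"svc_a": [b"A0", b"A1"], "svc_b": [b"B0"]}
out = multiplex({0: "svc_a", 1: "svc_b", 2: "svc_a"}, streams, m=4)
```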
In one possible implementation, before performing time slot multiplexing on the first service data and the second service data based on the time slot multiplexing structure to obtain the first data stream, the transmitting-end device may further encapsulate and rate-match the first service data to obtain a first sub-data stream, and encapsulate and rate-match the second service data to obtain a second sub-data stream; the first sub-data stream and the second sub-data stream are then used for time slot multiplexing.
Illustratively, encapsulating and rate-matching the first service data to obtain the first sub-data stream includes: mapping the first service data into a second data frame according to the size of the first time slot block, and rate-matching the second data frame according to the size of the second time slot block to obtain the first sub-data stream.
Illustratively, encapsulating and rate-matching the second service data to obtain the second sub-data stream includes: mapping the second service data into a second data frame according to the size of the first time slot block, and, when the bandwidth of the second service data is greater than 200 Mbps, rate-matching the second data frame according to the size of the first time slot block to obtain the second sub-data stream.
The frame structure of the second data frame is defined in fig. 5, and will not be described here.
Specifically, the encapsulation of service data can be understood as follows: the service data is divided into one or more data code blocks in code block units, each of size X bytes, and the one or more data code blocks are encapsulated into a second data frame (e.g., OSU-n). That is, the size of the data code blocks equals the size of the code blocks in the second data frame: both are X bytes.
Specifically, rate-matching service data can be understood as follows: according to the bandwidth of the service data, the second data frame is segmented into time slot blocks of the appropriate size (the small-bandwidth service corresponds to the second time slot block; the large-bandwidth service corresponds to the first time slot block, of size X bytes), and rate adaptation code blocks are inserted among the data code blocks for rate matching. For example, for a small-bandwidth service, the data code blocks #1 are segmented in sequence according to the size of the second time slot block, and rate adaptation code blocks #1 are inserted among the data code blocks #1 for rate matching, finally yielding sub-data stream #1; the data code blocks #1 and rate adaptation code blocks #1 in sub-data stream #1 have the same size as the second time slot block, e.g., p bytes. As another example, for a large-bandwidth service, the data code blocks #2 are segmented column by column according to the size of the first time slot block, and rate adaptation code blocks #2 are inserted among the data code blocks #2 for rate matching, yielding sub-data stream #2; the data code blocks #2 and rate adaptation code blocks #2 in sub-data stream #2 have the same size as the first time slot block, e.g., X bytes.
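The rate-matching step can be sketched as follows; the idle-block encoding and the "slots per multiplexing cycle" parameter are simplifications for illustration, not part of the defined structures:

```python
P = 8                      # small time slot block size in bytes (example value)
IDLE_BLOCK = b"\x00" * P   # hypothetical stand-in for a rate adaptation (I) code block

def rate_match(data: bytes, slots_per_cycle: int) -> list:
    """Chop service data into P-byte data code blocks, pad the tail block,
    and append idle code blocks until the stream fills a whole number of
    allocated slots per multiplexing cycle."""
    blocks = [data[i:i + P] for i in range(0, len(data), P)]
    if blocks and len(blocks[-1]) < P:
        blocks[-1] = blocks[-1].ljust(P, b"\x00")   # pad the final data block
    while len(blocks) % slots_per_cycle != 0:
        blocks.append(IDLE_BLOCK)                   # insert rate adaptation blocks
    return blocks

blocks = rate_match(b"x" * 20, slots_per_cycle=4)   # 3 data blocks + 1 idle block
```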
It should be understood that the multiple sub-data streams obtained after encapsulation and rate matching are moved in sequence, according to the time slot configuration table, into the positions of the designated large or small time slots in the time slot multiplexing structure, thereby completing the placement of the sub-data streams of the multiple services into the time slot multiplexing structure for time slot multiplexing.
It should be noted that all of the processing performed by the transmitting-end device on the received service data (such as encapsulation, rate matching, and time slot multiplexing) is performed at code block granularity, which reduces the complexity of service data processing. Here, "code block granularity" may be understood as "processing at the granularity of a code block" or "processing in units of code blocks"; it indicates that the service data processing flow is performed on the basis of code blocks.
And S930, the transmitting end equipment maps the time slot multiplexing structure of the first data stream into a first data frame.
The frame structure of the first data frame is defined in fig. 6, and will not be described here.
The mapping of the time slot multiplexing structure of the first data stream into the first data frame is illustratively described with reference to fig. 10. Fig. 10 (a) shows an 8-column time slot multiplexing structure OSTUG-8, which includes 8 time slots TS#1 to TS#8. Fig. 10 (b) shows a 10-column frame structure OSU-8 for time slot multiplexing, which carries the 8 time slots TS#1 to TS#8 of the time slot multiplexing structure OSTUG-8. The first column of the OSU-8 is an overhead code block including indication information TUG-PTR, which indicates the start position at which the OSTUG-8 is mapped into the OSU-8. The definitions of the time slot multiplexing structure OSTUG-8 and the frame structure OSU-8 for time slot multiplexing can be found in figs. 6 and 7 and are not repeated here.
Specifically, during the mapping of the OSTUG-8 into the OSU-8: in the first OSU-8 frame, the TUG-PTR indicates that the start position (e.g., TS#1) of the OSTUG-8 in the first data stream is the second column of the OSU-8; in the second OSU-8 frame, the TUG-PTR indicates the seventh column; in the third OSU-8 frame, the TUG-PTR indicates the eighth column; and so on. The mapping of the OSTUG-8 in the first data stream into the OSU-8 thus proceeds with the number of columns of the OSU-8 as its period. It should be understood that the above numbers of columns of the time slot multiplexing structure and of the frame structure for time slot multiplexing are only examples and should not constitute any limitation on the technical solution of the present application.
The embodiments of the present application are described taking an OTN frame (i.e., the first data frame) as an example, which should not be construed as limiting the technical solution of the present application. It should be appreciated that, as technology develops, the present application is also applicable to other bearer data frames.
S940, the transmitting device sends the first data frame to the receiving device.
Correspondingly, the receiving end device receives the first data frame from the transmitting end device.
For example, the transmitting-end device may send the first data frame (e.g., OSU-m) directly to the receiving-end device, or may first encapsulate the first data frame into an ODU frame serving as a bearer container and then send the ODU frame to the receiving-end device; this is not limited in the present application.
S950, the receiving end device demaps the first data stream from the first data frame, and demultiplexes the first data stream into the first service data and the second service data.
In a possible implementation manner, after the first data stream is demapped from the first data frame and the first data stream is de-time-slot-multiplexed into the first service data and the second service data, the method further includes: deleting the rate matching code blocks from the first sub-data stream and decapsulating to obtain the first service data; and deleting the rate matching code blocks from the second sub-data stream and decapsulating to obtain the second service data, wherein the first sub-data stream and the second sub-data stream are obtained by performing the de-time-slot multiplexing.
Illustratively, deleting the rate matching code blocks from the first sub-data stream and decapsulating to obtain the first service data specifically includes: deleting the rate matching code blocks from the first sub-data stream according to the size of the second time slot block to obtain a second data frame, and demapping the first service data from the second data frame according to the size of the first time slot block.
Taking the second service data as a large bandwidth service (for example, a bandwidth greater than 200 Mbps) as an example, deleting the rate matching code blocks from the second sub-data stream and decapsulating to obtain the second service data specifically includes: deleting the rate matching code blocks from the second sub-data stream according to the size of the first time slot block to obtain a second data frame, and demapping the second service data from the second data frame according to the size of the first time slot block.
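The receiver-side step of deleting rate matching code blocks from a sub-data stream before decapsulation can be sketched minimally as follows. The tagged-tuple representation of code blocks is an assumption made purely for illustration; in the patent, rate adaptation code blocks are identified by the type information carried inside the code block itself.

```python
def strip_rate_matching(sub_stream):
    """Keep only data code blocks, in order; drop rate-adaptation blocks.

    Each element is a (kind, payload) pair, where `kind` is a hypothetical
    tag standing in for the code block type information."""
    return [payload for kind, payload in sub_stream if kind == "data"]

stream = [("data", b"\x01"), ("rate", b"\x00"), ("data", b"\x02")]
print(strip_rate_matching(stream))  # only the two data payloads survive
```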
It should be noted that the embodiment of the present application does not limit how the receiving end device demaps the first data stream from the received first data frame, or how it de-time-slot-multiplexes the first data stream to obtain the service data; reference may be made to the descriptions of demapping and de-time-slot multiplexing in the current related art, and details are not repeated here.
Fig. 11 is a schematic flow chart of another service data processing method 1100 according to an embodiment of the present application. As shown in fig. 11, the method may be performed by a transmitting end device, which may be an OTN device or a component of the OTN device (such as a chip or a chip system), and by a receiving end device, which may likewise be an OTN device or a component of the OTN device (such as a chip or a chip system). Specifically, the method 1100 includes the following steps.
S1110, the sending terminal equipment receives the first service data and the second service data.
S1120, the sending end equipment encapsulates and rate-matches the first service data to obtain a first sub-data stream.
Similarly, the sending end device encapsulates and rate-matches the second service data to obtain a second sub-data stream.
Wherein the first sub-data stream and the second sub-data stream are used for time slot multiplexing.
Illustratively, encapsulating and rate-matching the first service data to obtain the first sub-data stream includes: dividing the first service data into one or more data code blocks according to the size of the first time slot block, and encapsulating the one or more data code blocks into a second data frame. The size of the data code blocks is the same as the size of the code blocks in the second data frame, e.g., X bytes. Further, according to the size of the second time slot block, rate matching code blocks are inserted among the plurality of data code blocks in the second data frame to perform rate matching, so as to obtain the first sub-data stream. The data code blocks and the rate adaptation code blocks in the first sub-data stream have the same size as the second time slot block, e.g., p bytes. That is, for a small bandwidth service, encapsulation is performed at the granularity of X-byte code blocks (i.e., the first time slot block), and rate adaptation is performed at the granularity of p-byte code blocks (i.e., the second time slot block).
Illustratively, encapsulating and rate-matching the second service data to obtain the second sub-data stream includes: dividing the second service data into one or more data code blocks according to the size of the first time slot block, and encapsulating the one or more data code blocks into a second data frame, wherein the size of the data code blocks is the same as the size of the code blocks in the second data frame, e.g., X bytes. Further, when the second service data is a large bandwidth service, rate matching code blocks are inserted among the plurality of data code blocks in the second data frame according to the size of the first time slot block to perform rate matching, so as to obtain the second sub-data stream. The data code blocks and the rate adaptation code blocks in the second sub-data stream have the same size as the first time slot block, e.g., X bytes. That is, for a large bandwidth service, both encapsulation and rate adaptation are performed at the granularity of X-byte code blocks (i.e., the first time slot block).
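The two-step procedure above (encapsulation into X-byte code blocks, then rate matching at the chosen granularity) can be sketched in Python as follows. The concrete sizes (X = 64, p = 8) are example values consistent with the text; the zero padding of the final code block and the idle-block insertion policy are simplifying assumptions for illustration.

```python
X = 64  # first time slot block size in bytes (example value)
P = 8   # second time slot block size in bytes (example value)

def encapsulate(data, block=X):
    """Slice service data into X-byte data code blocks, zero-padding the tail."""
    return [data[i:i + block].ljust(block, b"\x00")
            for i in range(0, len(data), block)]

def rate_match(blocks, granularity, target):
    """Re-cut the encapsulated stream at `granularity` bytes and append idle
    (rate-adaptation) units until `target` units are reached."""
    stream = b"".join(blocks)
    units = [stream[i:i + granularity]
             for i in range(0, len(stream), granularity)]
    units += [b"\x00" * granularity] * (target - len(units))
    return units

# Small bandwidth service: encapsulate at X bytes, rate-match at p bytes.
units = rate_match(encapsulate(b"\x01" * 100), P, 20)
```

For a large bandwidth service, the same `rate_match` call would simply use the first time slot block size `X` as the granularity.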
The frame structure of the first data frame is defined in fig. 6, and will not be described here.
It should be understood that the multiple sub-data streams obtained after encapsulation and rate matching may be sequentially migrated to the positions of the designated large slots or small slots in the slot multiplexing structure according to the slot configuration table, so as to complete the movement of the sub-data streams of the multi-path service to the slot multiplexing structure for slot multiplexing.
And S1130, the transmitting end equipment performs time slot multiplexing on the first sub data stream and the second sub data stream based on the time slot multiplexing structure to obtain a first data stream.
The definition of the slot multiplexing structure can be seen in fig. 7, and will not be described here again.
Illustratively, the code blocks of the first sub-data stream corresponding to the small bandwidth service are sequentially placed at the small time slot position (i.e., the second time slot block) designated in the time slot multiplexing structure according to the time slot configuration table, for example, the time slot #2 in a certain column of code blocks in the m columns of code blocks of the time slot multiplexing structure shown in fig. 7. Similarly, the code blocks of the second sub-data stream corresponding to the large bandwidth service are sequentially placed at the designated large slot position (i.e., the first slot block) in the slot multiplexing structure according to the slot configuration table, for example, a certain whole column of code blocks, such as slot #1, in the m columns of code blocks of the slot multiplexing structure shown in fig. 7.
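The placement described above — an entire column for a large time slot, and k sub-blocks per column for small time slots — can be sketched as follows. The format of the slot configuration table and the stream representation are assumptions for illustration only.

```python
from collections import deque

def slot_multiplex(slot_table, streams):
    """Fill one time slot multiplexing structure column by column.

    slot_table: one entry per column, either ("large", sid) for a
    whole-column first time slot block, or ("small", [sid, ...]) listing
    the owning stream of each second time slot block in that column.
    streams: mapping from stream id to a deque of code blocks.
    """
    structure = []
    for kind, owners in slot_table:
        if kind == "large":
            # Whole column taken from one large bandwidth sub-data stream.
            structure.append([streams[owners].popleft()])
        else:
            # k small slots, each taken from its configured sub-data stream.
            structure.append([streams[s].popleft() for s in owners])
    return structure

table = [("large", "big1"), ("small", ["s1", "s2"])]
streams = {"big1": deque([b"B0"]), "s1": deque([b"a0"]), "s2": deque([b"b0"])}
cols = slot_multiplex(table, streams)
```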
It should be understood that the movement of the multi-channel bandwidth services into the time slot multiplexing structure is performed according to the time slot configuration table. In particular, the first data stream obtained after time slot multiplexing of the small bandwidth services may be carried through an OTN frame.
S1140, the transmitting device maps the first data stream into the first data frame.
S1150, the transmitting device transmits the first data frame to the receiving device.
Correspondingly, the receiving end device receives the first data frame from the transmitting end device.
S1160, the receiving end device demaps the first data stream from the first data frame, and demultiplexes the first data stream into the first service data and the second service data.
For the specific implementation of steps S1110, S1140, S1150, and S1160, reference may be made to steps S910, S930, S940, and S950 in the above method 900, which are omitted herein for brevity.
Based on the above-described processing methods of service data of different bandwidths shown in fig. 9 and 11, an exemplary description will be given below with respect to a flow of multi-channel service data processing (e.g., encapsulation, rate matching, and slot multiplexing) with reference to fig. 12.
Fig. 12 is a schematic flow chart of a multi-channel service data processing according to an embodiment of the present application. As shown in fig. 12, the transmitting end device receives the small bandwidth service #1, the small bandwidth service #2, and the large bandwidth service #1 in this order from the client device, and specific service data processing procedures are as follows.
First, the service data of the small bandwidth service #1 is sliced into one or more data code blocks #1 at the granularity of code blocks, each code block having a size of X bytes. Similarly, the small bandwidth service #2 is divided into one or more data code blocks #2, and the large bandwidth service #1 is divided into one or more data code blocks #3. The segmented data code block #1, data code block #2, and data code block #3 are respectively encapsulated into their corresponding OSU-n frame structures (such as OSU-n #1, OSU-n #2, and OSU-n #3). The details of the OSU-n frame structure are shown in fig. 5 and are not repeated here. It should be understood that the sizes of data code block #1, data code block #2, and data code block #3 are the same as the size of the code blocks in the OSU-n frame structure. For the specific implementation of the encapsulation processing of the data code blocks, reference may be made to the encapsulation processing of service data in the current OTN network, and details are not repeated here.
Secondly, according to the size (e.g., p bytes) of the second time slot block, the data code blocks #1 of OSU-n #1 and the data code blocks #2 of OSU-n #2 are sequentially intercepted, and rate adaptation code blocks are respectively inserted among the data code blocks #1 and the data code blocks #2 to perform rate matching, so as to obtain an OSTU sub-data stream #1 and an OSTU sub-data stream #2. Similarly, the data code blocks #3 of OSU-n #3 are intercepted according to the size (e.g., X bytes) of the first time slot block, and rate adaptation code blocks are inserted among the data code blocks #3 to perform rate matching, so as to obtain an OSTU sub-data stream #3. For the specific implementation of the interception and rate matching processing of the data code blocks, reference may be made to the interception and rate matching processing of service data in the current OTN network, and details are not repeated here.
Then, the OSTU sub-data streams are sequentially placed in the positions of the first slot block or the second slot block specified in OSTUG-m according to the slot configuration table. For example, the OSTU sub-stream #1 corresponding to the small bandwidth service #1 is placed at the position of TSL1 of columns 1 and 2 in OSTUG-m according to the slot configuration table. For another example, OSTU sub-stream #2 corresponding to small bandwidth service #2 is placed at the location of TSL2 of columns 1 and 2 in OSTUG-m according to the slot configuration table. For another example, the OSTU sub-data stream #3 corresponding to the large bandwidth service #1 is put into the 4 th column and the m-2 th column of the OSTUG-m according to the time slot configuration table, and occupies the whole column of the code block. Therefore, the service data with different bandwidths are sequentially subjected to the processes of segmentation, encapsulation, interception, rate matching, time slot multiplexing and the like, and the mapping of the multipath service into the m-column time slot multiplexing structure OSTUG-m is completed.
It should be appreciated that the slot configuration table may be protocol-specified, or pre-configured. It should also be understood that one bandwidth service may occupy one or more first slot blocks and second slot blocks, that is, based on the technical scheme of the present application, the multi-channel service data to be transmitted may be multiplexed by mixing slots through one or more first slot blocks and second slot blocks, which is more flexible.
Finally, one or more OSTUG-m included in the data stream are mapped into OSU-m frames (e.g., into the non-overhead columns of the OSU-m frames), and the OSU-m frames are sent to the receiving end device through the service layer pipe. For example, the TS-PTR in the first column of an OSU-m frame indicates that the starting position of the first OSTUG-m of the data stream in the first OSU-m frame is the 4th column.
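The final packing of OSTUG-m columns into OSU-m frames with a leading overhead column can be sketched as follows; the "OH" and "IDLE" placeholders are illustrative stand-ins for the overhead code block and any fill in a partially used last frame.

```python
def map_to_frames(columns, n=10):
    """Pack a flat list of OSTUG columns into OSU frames of n columns each;
    column 1 of every frame is the overhead code block (placeholder here)."""
    payload = n - 1
    frames = []
    for i in range(0, len(columns), payload):
        chunk = columns[i:i + payload]
        chunk += ["IDLE"] * (payload - len(chunk))  # pad the last frame
        frames.append(["OH"] + chunk)
    return frames

frames = map_to_frames([f"col{i}" for i in range(20)])
```

With 20 columns and 9 payload columns per frame, the sketch yields three frames, the last one partially filled.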
In fig. 9, fig. 11, and fig. 12, the mixed processing (e.g., encapsulation, rate matching, and time slot multiplexing) of large bandwidth services and small bandwidth services is taken as an example for description, which provides high flexibility in supporting time slot multiplexing of both large and small bandwidth services. Of course, the technical solution of the present application may also perform the processes of encapsulation, rate matching, time slot multiplexing, and the like on only a plurality of small bandwidth services, or may likewise be applied to the processes of encapsulation, rate matching, time slot multiplexing, and the like on a plurality of large bandwidth services, which is not specifically limited in the present application.
It should be understood that the specific examples illustrated in fig. 4-12 in the embodiments of the present application are only intended to help those skilled in the art to better understand the embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application.
It is also to be understood that in the various embodiments of the application, where no special description or logic conflict exists, the terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
It should also be understood that in some of the above embodiments, the devices in the existing network architecture are mainly used as examples to illustrate (e.g. OTN devices), and the present application is not limited to the specific form of the devices. For example, devices that can achieve the same function in the future are applicable to the present application.
The method for processing service data provided by the embodiment of the present application is described in detail above with reference to fig. 4 to 12. The method for processing service data is mainly introduced from the perspective of interaction between the receiving end device and the transmitting end device. It will be appreciated that, in order to achieve the above-described functions, the receiving end device and the transmitting end device include corresponding hardware structures and/or software modules that perform the respective functions.
The following describes in detail the communication device provided in the embodiment of the present application with reference to fig. 13. It should be understood that the descriptions of the apparatus embodiments and the descriptions of the method embodiments correspond to each other, and thus, descriptions of details not shown may be referred to the above method embodiments, and for the sake of brevity, some parts of the descriptions are omitted.
The embodiment of the application can divide the function modules of the sending end device or the receiving end device according to the method example, for example, each function module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. The following description will take an example of dividing each functional module into corresponding functions.
Fig. 13 is a schematic structural diagram of a service data processing device according to an embodiment of the present application. As shown in fig. 13, the apparatus 1300 includes a processor 1301, an optical transceiver 1302, and a memory 1303. Wherein the memory 1303 is optional. The apparatus 1300 may be applied to both a transmitting-side device (e.g., the transmitting-side device described above) and a receiving-side device (e.g., the receiving-side device described above).
When applied to a transmitting end device, the processor 1301 and the optical transceiver 1302 are configured to implement the method performed by the transmitting end device shown in fig. 9 or fig. 11. In implementation, each step of the processing flow may complete the method performed by the transmitting end device described in the above figures through an integrated logic circuit of hardware in the processor 1301 or an instruction in the form of software. The optical transceiver 1302 is configured to send the first data frame to a peer device (also referred to as the receiving end device).
When applied to a receiving-side device, the processor 1301 and the optical transceiver 1302 are configured to implement the method performed by the receiving-side device shown in fig. 9 or 11. In implementation, each step of the process flow may implement the method performed by the receiving device described in the foregoing figures through an integrated logic circuit of hardware in the processor 1301 or an instruction in software. The optical transceiver 1302 is configured to receive a first data frame sent by a peer device (also referred to as a transmitting device), and send the first data frame to the processor 1301 for further processing.
The memory 1303 is configured to store instructions so that the processor 1301 can perform the steps mentioned in the above figures. Optionally, the memory 1303 is also configured to store other instructions for configuring parameters of the processor 1301 to implement corresponding functions.
It should be noted that, in the hardware structure diagram of the network device illustrated in fig. 2, the processor 1301 and the memory 1303 may be located on a tributary board, or may be located on a board that combines tributary and line functions. Alternatively, there may be a plurality of processors 1301 and memories 1303, located on the tributary board and the line board respectively, with the two boards cooperating to perform the foregoing method steps.
It should be noted that the apparatus shown in fig. 13 may also be used to perform the method steps related to the embodiment modification shown in the aforementioned drawings, which are not described herein.
Based on the above embodiments, the present application further provides a computer-readable storage medium. The storage medium has stored therein a software program which, when read and executed by one or more processors, performs the methods provided by any one or more of the embodiments described above. The computer readable storage medium may include: various media capable of storing program codes, such as a U disk, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk.
Based on the above embodiments, the present application further provides a chip. The chip includes a processor for implementing the functions involved in any one or more of the embodiments described above, such as acquiring or processing OTN data frames involved in the methods described above. Optionally, the chip further comprises a memory for the necessary program instructions and data to be executed by the processor. The chip may be formed by a chip, or may include a chip and other discrete devices.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the scope of the embodiments of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is also intended to include such modifications and variations.
It should be appreciated that the processor referred to in the embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should also be understood that the memory referred to in the embodiments of the present application may be a volatile memory and/or a nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM). For example, the RAM may be used as an external cache. By way of example, and not limitation, the RAM may include the following forms: static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It should be noted that when the processor is a general purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, the memory (storage module) may be integrated into the processor.
The elements and steps of the examples described in the embodiments disclosed herein may be implemented in electronic hardware, or in combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, and such implementations are contemplated as falling within the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to realize the scheme provided by the application. In addition, each functional unit in each embodiment of the present application may be integrated in one unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus; for example, a personal computer, a server, or a network device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. For example, the usable medium may include, but is not limited to, various media that can store program code, such as a U disk, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, a magnetic tape, an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A method for processing service data, comprising:
receiving first service data and second service data, wherein the bandwidth of the first service data is less than or equal to 200 megabits per second (Mbps);
performing time slot multiplexing on the first service data and the second service data based on a time slot multiplexing structure to obtain a first data stream, wherein the time slot multiplexing structure comprises m column code blocks, an ith column code block in the m column code blocks is a first time slot block, a jth column code block in the m column code blocks comprises k second time slot blocks, k and m are integers greater than 1, and i and j are integers greater than or equal to 1 and less than or equal to m;
mapping the first data stream into a first data frame, wherein the first data frame comprises N columns of code blocks, a first column of the first data frame is a first overhead code block, the first overhead code block is used for managing data code blocks except the first column, the first overhead code block comprises first indication information and second indication information, the first indication information is used for indicating the starting position of the first data stream in the first data frame, the second indication information is used for indicating the mapping relation between the first time slot block, the second time slot block and the first data stream, and N is an integer greater than 1;
And sending the first data frame.
2. The method of claim 1, wherein the code block includes first information indicating a code block type of the code block.
3. The method of claim 2, wherein the code block type of the code block is the data code block or a non-data code block, and wherein when the code block type of the code block is the non-data code block, the code block further comprises second information for indicating that the code block type of the non-data code block is an overhead code block or a rate adaptation code block.
4. A method according to any of claims 1-3, characterized in that before said time slot multiplexing of said first traffic data and said second traffic data based on a time slot multiplexing structure to obtain a first data stream, the method further comprises:
packaging and rate matching the first service data to obtain a first sub-data stream;
packaging and rate matching the second service data to obtain a second sub-data stream;
wherein the first sub-data stream and the second sub-data stream are used for performing the time slot multiplexing.
5. The method of claim 4, wherein said encapsulating and rate matching the first traffic data to obtain a first sub-data stream comprises:
Mapping the first service data into a second data frame according to the size of the first time slot block, wherein the second data frame comprises N columns of code blocks, a first column of the second data frame is a second overhead code block, the second overhead code block is used for managing data code blocks except the first column, and N is an integer greater than 1;
and carrying out rate matching on the second data frame according to the size of the second time slot block so as to obtain the first sub-data stream.
6. The method of claim 4, wherein said encapsulating and rate matching the second traffic data to obtain a second sub-data stream comprises:
mapping the second service data into a second data frame according to the size of the first time slot block, wherein the second data frame comprises N columns of code blocks, a first column of the second data frame is a second overhead code block, the second overhead code block is used for managing data code blocks except the first column, and N is an integer greater than 1;
and when the bandwidth of the second service data is greater than 200Mbps, performing rate matching on the second data frame according to the size of the first time slot block to obtain the second sub-data stream.
7. The method of any of claims 1 to 6, wherein the size of the second time slot block is 8, 16, 24, 32 or 64 bytes, and the size of the first time slot block is 64, 128, 192, 256, 65, 129, 193 or 257 bytes.
8. The method according to any of claims 2 to 7, wherein the sizes of the first and second blocks of timeslots satisfy:
X=k*p+c
wherein X is the size of the first time slot block, p is the size of the second time slot block, k is the number of the second time slot blocks, and c is the bit occupied by the first information.
9. A method of traffic data processing, comprising:
receiving a first data frame, wherein the first data frame is used for carrying a first data stream, the first data stream is obtained by performing time slot multiplexing on first service data and second service data based on a time slot multiplexing structure, the bandwidth of the first service data is less than or equal to 200Mbps, the time slot multiplexing structure comprises m column code blocks, an ith column code block in the m column code blocks is a first time slot block, a jth column code block in the m column code blocks comprises k second time slot blocks, the first data frame comprises N column code blocks, a first column of the first data frame is a first overhead code block, the first overhead code block is used for managing data code blocks except the first column, the first overhead code block comprises first indication information and second indication information, the first indication information is used for indicating the starting position of the first data stream in the first data frame, the second indication information is used for indicating the mapping relation between the first time slot block, the second time slot block and the first data stream, k, m and N are integers greater than 1, and i and j are integers greater than or equal to 1 and less than or equal to m;
And demapping the first data stream from the first data frame, and performing de-slot multiplexing on the first data stream to obtain the first service data and the second service data.
10. The method of claim 9, wherein the code block includes first information indicating a code block type of the code block.
11. The method of claim 10, wherein the code block type of the code block is the data code block or a non-data code block, and wherein when the code block type of the code block is the non-data code block, the code block further comprises second information for indicating that the code block type of the non-data code block is an overhead code block or a rate adaptation code block.
12. The method according to any of claims 9 to 11, wherein after demapping the first data stream from the first data frame and de-slot multiplexing the first data stream to obtain the first traffic data and the second traffic data, the method further comprises:
deleting a rate matching code block from a first sub-data stream, and decapsulating to obtain the first service data; and
deleting a rate matching code block from a second sub-data stream, and decapsulating to obtain the second service data;
wherein the first sub-data stream and the second sub-data stream are obtained by performing the de-slot multiplexing.
13. The method of claim 12, wherein the deleting a rate matching code block from the first sub-data stream and decapsulating to obtain the first service data comprises:
deleting rate matching code blocks from the first sub-data stream according to the size of the second time slot block to obtain a second data frame, wherein the second data frame comprises N columns of code blocks, a first column of the second data frame is a second overhead code block, the second overhead code block is used for managing data code blocks except the first column, and N is an integer greater than 1;
and demapping the first service data from the second data frame according to the size of the first time slot block.
14. The method of claim 12, wherein the deleting a rate matching code block from the second sub-data stream and decapsulating to obtain the second service data comprises:
when the bandwidth of the second service data is greater than 200Mbps, deleting a rate matching code block from the second sub-data stream according to the size of the first time slot block to obtain a second data frame, wherein the second data frame comprises N columns of code blocks, a first column of the second data frame is a second overhead code block, the second overhead code block is used for managing data code blocks except the first column, and N is an integer greater than 1;
and demapping the second service data from the second data frame according to the size of the first time slot block.
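The rate matching code block deletion in claims 12 to 14 amounts to filtering one block type out of a sub-data stream before decapsulation. A toy version, in which modeling blocks as (type, payload) tuples and the type tags themselves are assumptions made only for illustration:

```python
# Toy sketch of the deletion step in claims 12-14: drop rate matching code
# blocks from a sub-data stream, leaving the code blocks that will be
# decapsulated into service data.

def delete_rate_matching(sub_data_stream):
    return [blk for blk in sub_data_stream if blk[0] != "rate_matching"]

sub_stream = [("data", 0x01), ("rate_matching", None), ("data", 0x02)]
kept = delete_rate_matching(sub_stream)
# kept == [("data", 0x01), ("data", 0x02)]
```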
15. The method of any of claims 9 to 14, wherein the second block of timeslots is 8, 16, 24, 32 or 64 bytes in size and the first block of timeslots is 64, 128, 192, 256, 65, 129, 193 or 257 bytes in size.
16. The method according to any one of claims 10 to 15, wherein the sizes of the first time slot block and the second time slot block satisfy:

X = k * p + c

wherein X is the size of the first time slot block, p is the size of the second time slot block, k is the number of second time slot blocks, and c is the number of bits occupied by the first information.
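The relation X = k * p + c ties the two size lists of claim 15 together: with p = 8 bytes and c = 0, the first-slot sizes 64, 128, 192 and 256 bytes are exact multiples of the second slot, while adding one unit of first information (c treated here in bytes rather than bits, purely as an arithmetic illustration) yields 65, 129, 193 and 257. A quick check:

```python
# Verify X = k*p + c against the size lists of claim 15, with p = 8 bytes.
# Treating c as whole bytes (0 or 1) is an assumption for illustration;
# the claim states c in bits occupied by the first information.
p = 8
for k in (8, 16, 24, 32):
    assert k * p + 0 in (64, 128, 192, 256)   # c = 0: no first information
    assert k * p + 1 in (65, 129, 193, 257)   # c = 1: one unit of first information
```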
17. An apparatus for processing service data, comprising:
a module for performing the method of any one of claims 1 to 8; or
a module for performing the method of any one of claims 9 to 16.
18. An apparatus for processing service data, comprising: a processor and a transceiver, wherein the transceiver is configured to receive signals from, or transmit signals to, devices other than the apparatus, and the processor is configured to implement the method of any one of claims 1 to 16 by logic circuitry or by executing code instructions.
19. A chip, comprising a processor and a communication interface, wherein the communication interface is configured to receive data frames and transmit them to the processor, or to send data frames to communication devices other than the device comprising the chip, and the processor is configured to perform the method of any one of claims 1 to 8 or the method of any one of claims 9 to 16.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210621039.XA CN117221768A (en) 2022-06-02 2022-06-02 Service data processing method and device
PCT/CN2023/097660 WO2023232097A1 (en) 2022-06-02 2023-05-31 Service data processing method and apparatus


Publications (1)

Publication Number Publication Date
CN117221768A true CN117221768A (en) 2023-12-12

Family

ID=89026963


Country Status (2)

Country Link
CN (1) CN117221768A (en)
WO (1) WO2023232097A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10225037B2 (en) * 2014-10-24 2019-03-05 Ciena Corporation Channelized ODUflex systems and methods
RU2759514C1 (en) * 2018-05-10 2021-11-15 Хуавей Текнолоджиз Ко., Лтд. System, apparatus and method for processing data of low-speed service in optical transport network
CN113645524A (en) * 2020-04-27 2021-11-12 华为技术有限公司 Method, device and equipment for processing service

Also Published As

Publication number Publication date
WO2023232097A1 (en) 2023-12-07


Legal Events

Date Code Title Description
PB01 Publication