CN114245241B - Time-delay self-adaptive BMP mapping realization method and system - Google Patents

Publication number
CN114245241B
CN114245241B
Authority
CN
China
Prior art keywords: data, OSU, payload, bit width, data block
Prior art date
Legal status
Active
Application number
CN202111581203.0A
Other languages
Chinese (zh)
Other versions
CN114245241A (en)
Inventor
李娜
李光瑜
海增强
王雅琴
Current Assignee
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Fiberhome Telecommunication Technologies Co Ltd
Priority application: CN202111581203.0A
Publication of CN114245241A
Application granted; publication of CN114245241B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q — SELECTING
    • H04Q11/00 — Selecting arrangements for multiplex systems
    • H04Q11/0001 — Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 — Network aspects
    • H04Q11/0067 — Provisions for optical access or distribution networks, e.g. Gigabit Ethernet Passive Optical Network (GE-PON), ATM-based Passive Optical Network (A-PON), PON-Ring
    • H04Q2011/0086 — Network resource allocation, dimensioning or optimisation


Abstract

The invention discloses a delay-adaptive BMP mapping implementation method and system, relating to the technical field of optical transport networks, and comprising the following steps: obtaining the payload amount corresponding to the client service bandwidth according to a preset adaptive strategy; and performing BMP mapping on the client service data to obtain an OSU data block, the payload area of the OSU data block being filled according to the payload amount during BMP mapping. The adaptive strategy is to determine the bandwidth range in which the client service bandwidth falls and to derive the payload amount from that range, where the larger the boundary values of the bandwidth range, the larger the payload amount. The mapping pattern of the invention changes flexibly with the service, so that different services can be given different delay characteristics.

Description

Time-delay self-adaptive BMP mapping realization method and system
Technical Field
The invention relates to the technical field of optical transport networks (optical transport network, OTN), and in particular to a method and system for implementing delay-adaptive bit-synchronous mapping procedure (BMP) mapping.
Background
A conventional OTN network has a minimum timeslot granularity of 1.25 Gbit/s and therefore cannot efficiently access and carry low-speed services, such as services with a data transmission rate of 2 Mbit/s. To address this problem, several vendors have proposed the concept of the optical service unit (Optical Service Unit, OSU): low-speed services are mapped into OSU containers, which are then multiplexed into optical channel data units (Optical Channel Data Unit, ODUk/ODUflex).
One method of mapping client services into OSU containers is BMP mapping, which uses a single fixed mapping pattern for a given service, i.e. the OSU data frame structure filled into the OSU block is fixed. The OSU data frame structure comprises an overhead area (OH), a payload area (payload) and a padding area (stuff); with the conventional BMP mapping method, the payload amount filled into the payload area never changes.
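As a sketch of the frame structure just described, the following models an OSU data block as the concatenation of three byte areas; the 192-byte block length and 7-byte OH are taken from the worked example later in the description, while the 64-byte payload is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class OsuFrame:
    oh: bytes       # overhead area (OH)
    payload: bytes  # payload area
    stuff: bytes    # padding area (stuff)

    def block(self) -> bytes:
        # A BMP-mapped OSU data block is the concatenation of the three areas.
        return self.oh + self.payload + self.stuff

# 192-byte block with a 7-byte OH (from the description's example);
# the 64-byte payload here is illustrative only.
frame = OsuFrame(oh=b"\x00" * 7, payload=b"\xab" * 64, stuff=b"\x00" * 121)
assert len(frame.block()) == 192
```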
However, because one block of an OSU container is large and the entire OSU block must be filled when a client service is mapped into it, a low-speed service takes a long time to generate one OSU block under the conventional OTN BMP mapping method, resulting in a large service delay.
Disclosure of Invention
In view of the defects in the prior art, the invention provides a delay-adaptive BMP mapping implementation method and system, which can flexibly adjust the size of the payload area of an OSU data block according to the client service bandwidth, thereby reducing the delay incurred in generating an OSU data block.
In order to achieve the above purpose, the technical scheme adopted is as follows:
a time delay self-adaptive BMP mapping realization method comprises the following steps:
obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adaptive strategy;
performing BMP mapping on the client service data to obtain an OSU data block, and filling a payload area of the OSU data block according to the payload amount when performing BMP mapping;
the self-adaptive strategy is to judge the bandwidth range of the client service bandwidth, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is.
Preferably, the method further comprises:
and sequentially caching the client service data into N-1 RAMs in the form of bit width data, wherein N is the number of bit width data which is contained in the maximum OSU data quantity transmitted in one transmission period, and the OSU data block comprises at least one OSU data quantity transmitted in one transmission period.
Preferably, the method further comprises:
obtaining a shift byte number, a high pointer and a low pointer according to the payload quantity and the bit width data;
When performing BMP mapping on the client service data, obtaining the current bit-width data, respectively obtaining the previous bit-width data from the N-1 RAMs, shifting each bit-width data according to the shift byte count to obtain shifted data, intercepting valid data from each shifted data according to the high pointer and the low pointer, and splicing n valid data and filling them into the payload area of an OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
Preferably, the method further comprises:
and filling the overhead area and the filling area of the OSU data block to obtain the OSU data block after filling the payload area of the OSU data block according to the payload quantity.
Preferably, the OSU data blocks corresponding to different client service bandwidths have the same block size.
A time-delay adaptive BMP mapping implementation system, the system comprising:
the delay self-adaptive unit is used for obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adaptive strategy;
a control unit for processing the payload and the customer service data to obtain a control signal;
the data processing unit is used for performing BMP mapping on the client service data according to the control signal to obtain an OSU data block, and filling a payload area of the OSU data block according to the payload amount when performing BMP mapping;
the self-adaptive strategy is to judge the bandwidth range of the client service bandwidth, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is.
Preferably, the system further comprises:
the RAM cache unit comprises N-1 RAMs, is used for caching the customer service data into the N-1 RAMs in a bit width data form, wherein N is the number of the bit width data which is contained in the maximum OSU data quantity transmitted in one transmission period, and the OSU data block comprises the OSU data quantity transmitted in at least one transmission period;
the client service data consists of a plurality of bit width data, and the transmission channels corresponding to the client service all transmit one bit width data to the RAM cache unit in the corresponding transmission period.
Preferably, the control signal further includes a shift byte count, a high pointer, and a low pointer;
the data processing unit is further configured to obtain current bit width data when BMP mapping is performed on customer service data, obtain bit width data from N-1 RAMs respectively, shift each bit width data according to a shift byte number to obtain shift data, intercept valid data from each shift data according to a high pointer and a low pointer, splice N valid data, and fill the N valid data into a payload area of an OSU data block to obtain an OSU data block, where N is a positive integer not greater than N.
Preferably, after the data processing unit fills the payload area of the OSU data block according to the payload amount, the overhead area and the filling area of the OSU data block are filled to obtain the OSU data block.
Preferably, the block sizes of OSU data blocks corresponding to different customer service bandwidths generated by the data processing unit are the same.
The invention has the beneficial effects that: the mapping pattern is flexible and changeable according to different services, so that different delay characteristics can be generated for different services.
Drawings
FIG. 1 is a flow chart of a delay-adaptive BMP mapping implementation method in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the functional modules of a delay-adaptive BMP mapping implementation system in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, an embodiment of the present invention provides a delay-adaptive BMP mapping implementation method, which comprises determining, according to a preset adaptive strategy, the payload amount of the payload area of the OSU data block (i.e. the mapping pattern) matching the client service bandwidth, and, when performing BMP mapping on client service data of different client service bandwidths, generating OSU data blocks with different payload amounts according to the corresponding mapping patterns. The adaptive strategy is to determine the bandwidth range in which the client service bandwidth falls and to derive the payload amount from that range; the larger the boundary values of the bandwidth range, the larger the payload amount. For a high-speed service the payload amount of the OSU data block is larger, and for a low-speed service it is smaller; the mapping pattern changes flexibly with the service, so that different services can be given different delay characteristics.
With continued reference to fig. 1, an embodiment provides a delay-adaptive BMP mapping implementation method, which comprises:
and S1, obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adaptive strategy.
And S2, processing according to the payload quantity and the bit width data to obtain a shift byte number, a high pointer and a low pointer.
And step S3, the client service data are sequentially cached in N-1 RAMs in the form of bit width data. Wherein N is the number of bit width data contained in the maximum OSU data amount transmitted in one transmission period, and the OSU data block includes the OSU data amount transmitted in at least one transmission period.
And S4, respectively acquiring bit width data from n-1 RAMs, shifting each bit width data according to the number of the shift bytes to obtain shift data, intercepting effective data from each shift data according to a high pointer and a low pointer, splicing n effective data, filling the n effective data into a payload area of an OSU data block, and filling an overhead area and a filling area of the OSU data block to obtain the OSU data block. Wherein N is a positive integer not greater than N.
The block sizes of the OSU data blocks corresponding to different customer service bandwidths are the same.
In this embodiment, the payload amount is the number of payload bytes in the OSU data block: the smaller the client service bandwidth, the fewer the payload bytes in the OSU data block; the larger the client service bandwidth, the more the payload bytes.
The adaptive strategy generates the payload amount from the client service bandwidth as follows: when the client service bandwidth is greater than bandwidth B1 and less than or equal to bandwidth B2, a specific payload amount D1 is generated; when the client service bandwidth is greater than bandwidth B2 and less than or equal to bandwidth B3, another specific payload amount D2 is generated; and so on, with bandwidth B1 < bandwidth B2 < bandwidth B3, a series of specific payload amounts D1, D2, … can be generated. The minimum payload amount is 1 and the maximum is the length of the OSU data block minus the length of its OH field.
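The threshold scheme above amounts to an interval lookup. The sketch below is one possible realization; the boundary values B and payload amounts D are hypothetical, since the patent leaves them configurable:

```python
def payload_for_bandwidth(bw_mbps, boundaries, payloads):
    """Interval lookup: a bandwidth in (B_i, B_{i+1}] gets payload D_i.

    boundaries: ascending values B1 < B2 < ... (Mbit/s)
    payloads:   payload amounts D1 <= D2 <= ..., one per interval
    """
    for upper, d in zip(boundaries[1:], payloads):
        if bw_mbps <= upper:
            return d
    return payloads[-1]  # above the last boundary: largest payload

# Hypothetical boundaries (Mbit/s) and payload amounts (bytes); the
# maximum is capped at block length minus OH length (192 - 7 = 185).
B = [0, 10, 100, 1000]
D = [64, 128, 185]

assert payload_for_bandwidth(2, B, D) == 64     # low-speed service
assert payload_for_bandwidth(500, B, D) == 185  # high-speed service
```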
The number of payload bytes in the OSU data block has a direct impact on the transmission delay of the client service. Specifically, assume the client service bandwidth is 2 Mbit/s, the OSU data block is 192 bytes long, and its OH field is 7 bytes. If the payload of the OSU data block is 64 bytes, the client service must transmit 64 bytes before one complete OSU data block can be generated, which takes 64 bytes × 8 bits / 2 Mbit/s = 256 µs. If the payload is 128 bytes, the client service must transmit 128 bytes, which takes 128 bytes × 8 bits / 2 Mbit/s = 512 µs.
It can be seen that with payloads of 64 bytes and 128 bytes, the two delays differ by 512 − 256 = 256 µs. Thus, the fewer the payload bytes in an OSU data block, the shorter the time needed to generate it and the smaller the client service delay; the more payload bytes, the longer the generation time and the larger the delay. For a low-speed service, adaptively reducing the number of payload bytes in the OSU data block effectively reduces the delay of mapping the client service into the OSU, giving better delay performance.
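The delay figures above follow from a one-line formula — payload bits divided by the client bandwidth:

```python
def osu_block_delay_us(payload_bytes: int, bandwidth_mbps: float) -> float:
    """Time (in microseconds) for a client service to supply the payload
    of one OSU data block: payload bits divided by the client bandwidth."""
    return payload_bytes * 8 / bandwidth_mbps  # bit / (bit/us) = us

# Reproduces the description's figures for a 2 Mbit/s client service:
assert osu_block_delay_us(64, 2) == 256.0
assert osu_block_delay_us(128, 2) == 512.0
assert osu_block_delay_us(128, 2) - osu_block_delay_us(64, 2) == 256.0
```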
Based on the above embodiment, and with continued reference to fig. 1, the operation of sequentially buffering the client service data, in the form of bit-width data, into the N-1 RAMs in step S3 is described through a specific embodiment:
The N-1 RAMs form a RAM cache unit 3 that buffers the client service data of different client services. The client services are transmitted over their corresponding transmission channels in the form of client service data, and the client service bandwidth is the bandwidth of the transmission channel. In a given transmission, each transmission channel transmits one bit-width data in its corresponding transmission period. The client service data of each client service comprises the current bit-width data, the previous first bit-width data, the previous second bit-width data, …, and the previous (N-1)th bit-width data; the current bit-width data can be used directly without being stored in the RAM cache unit 3, while RAM_1 stores the previous first bit-width data, RAM_2 the previous second bit-width data, …, and RAM_N-1 the previous (N-1)th bit-width data.
In each of RAM_1 through RAM_N-1, each client service occupies 2 address spaces, so the depth of each RAM is twice the number of client services. For example, channel 0 occupies addresses 0 and 1 of each RAM, channel 1 occupies addresses 2 and 3, and so on.
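The address layout above can be expressed directly:

```python
def ram_addresses(channel: int) -> tuple[int, int]:
    """Each client service (channel) owns two addresses in every RAM,
    so a RAM serving k services needs a depth of 2 * k."""
    return (2 * channel, 2 * channel + 1)

assert ram_addresses(0) == (0, 1)  # channel 0 -> addresses 0 and 1
assert ram_addresses(1) == (2, 3)  # channel 1 -> addresses 2 and 3
```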
The delay-adaptive function, which can generate a variety of OSU mapping patterns, is implemented with a RAM chain and pipelining; this optimizes the timing and allows the mapping patterns of a variety of OSU data blocks to be generated flexibly.
Based on the above embodiment, and with continued reference to fig. 1, the operation of obtaining the shift byte count, the high pointer and the low pointer from the payload amount and the bit-width data in step S2 is described through a specific embodiment:
assuming that the customer service data is transmitted 80 bytes in one transmission period, the OSU data is transmitted 128 bytes in one transmission period. Assume that the OSU block is 256 bytes in length, with an OH field of 8 bytes, a payload field of 210 bytes, and a stuff field of 38 bytes. One OSU data block requires 2 transfer cycles to complete, the first transfer cycle transferring an OH field of 8 bytes and a payload of 120 bytes, and the second transfer cycle transferring a payload of 90 bytes and a stuff of 38 bytes.
Taking transmission channel 0 as an example while the 1st to 6th client service data are input continuously, the read/write signals of each RAM behave as follows. The 1st to 5th client service data input on transmission channel 0 are written alternately into addresses 0 and 1 of RAM_1, and the read address of RAM_1 is obtained by inverting the write address. When RAM_1 performs a read operation, the read data is output with a delay of 2 transmission periods. Delaying the read/write signals of RAM_1 by 2 transmission periods gives the read/write signals of RAM_2, so the data written into RAM_2 is the data read out of RAM_1; after a further delay of 2 transmission periods, the output data of RAM_2 is obtained. The output data of RAM_1 is thus the client service data delayed by 2 transmission periods, and that of RAM_2 the data delayed by 4 transmission periods, so that all the data required for the payload amount of a BMP mapping becomes available in the same transmission period. From the payload amount and the bit-width data, combined with the storage capacity of each RAM, the shift byte count can be computed, and the high pointer and the low pointer are used to intercept the valid data from the bit-width data to fill the payload area of the OSU data block. The high pointer and the low pointer indicate, respectively, the upper and lower byte boundaries of the bit-width data from which the OSU data is generated.
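The RAM chain's delay behaviour described above — each stage adding 2 transmission periods — can be modelled as a pipeline of delay lines; the class and its interface are hypothetical, not the patent's hardware design:

```python
from collections import deque

class RamChain:
    """Each stage delays its input by `latency` transmission periods, so
    tap k of the output is the bit-width data from k * latency periods ago."""
    def __init__(self, stages: int, latency: int = 2):
        self.lines = [deque([None] * latency) for _ in range(stages)]

    def step(self, data):
        taps = [data]  # tap 0: current bit-width data, no delay
        for line in self.lines:
            line.append(taps[-1])
            taps.append(line.popleft())
        return taps

chain = RamChain(stages=2)          # models RAM_1 and RAM_2
for t in range(6):                  # feed bit-width data 0..5
    taps = chain.step(t)
assert taps == [5, 3, 1]            # delays of 0, 2 and 4 periods
```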
Based on the above embodiments, with continued reference to fig. 1, the specific operation of filling the payload area of the OSU data block in step S4 is described by a specific embodiment:
When performing BMP mapping on the client service data, the current bit-width data is obtained, the previous bit-width data are respectively obtained from the N-1 RAMs, each bit-width data is shifted according to the shift byte count to obtain shifted data, valid data is intercepted from each shifted data according to the high pointer and the low pointer, and n valid data are spliced and filled into the payload area of an OSU data block to obtain the OSU data block, where n is a positive integer not greater than N. The n valid data comprise the current bit-width data and the previous bit-width data obtained from the n-1 RAMs.
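One plausible reading of the shift-and-intercept step is sketched below; the patent does not pin down the exact shift semantics, so the byte rotation and `[low:high]` slicing used here are assumptions for illustration:

```python
def splice_payload(bitwidth_words, shifts, pointers):
    """Shift each bit-width data by its shift byte count (modelled here as
    a byte rotation), keep the bytes between the low and high pointers,
    and concatenate the n valid segments into the payload area."""
    payload = bytearray()
    for word, shift, (low, high) in zip(bitwidth_words, shifts, pointers):
        shifted = word[shift:] + word[:shift]  # "shifted data"
        payload += shifted[low:high]           # intercepted "valid data"
    return bytes(payload)

# Toy example: two 8-byte bit-width data words, all parameter values invented.
words = [bytes(range(0, 8)), bytes(range(8, 16))]
out = splice_payload(words, shifts=[0, 2], pointers=[(0, 8), (0, 6)])
assert out[:8] == bytes(range(0, 8))  # first word kept whole
assert len(out) == 14                 # 8 + 6 valid bytes spliced together
```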
Based on the above embodiments, with continued reference to fig. 1, the specific operation of generating OSU data blocks in step S4 is described by a specific embodiment:
and filling the overhead area and the filling area of the OSU data block to obtain the OSU data block after filling the payload area of the OSU data block according to the payload quantity.
In some other embodiments, only the overhead area of the OSU data block may be filled after the payload area has been filled according to the payload amount; whether to fill the padding area of the OSU data block can be customized.
As shown in fig. 2, an embodiment of the present invention provides a delay-adaptive BMP mapping implementation system, including:
and the delay self-adapting unit 1 is used for obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adapting strategy.
A control unit 2 for processing the control signals based on the payload and the customer service data.
The RAM buffer unit 3 includes N-1 RAMs for buffering the customer service data in the form of bit width data into the N-1 RAMs in sequence, where N is the number of bit width data that is most needed to generate one OSU data block. The client service data consists of a plurality of bit width data, and the transmission channels corresponding to the client service bandwidth all transmit one bit width data to the RAM cache unit 3 in the corresponding transmission period.
And the data processing unit 4 is used for performing BMP mapping on the client service data according to the control signal to obtain an OSU data block, and filling the payload area of the OSU data block according to the payload amount when performing BMP mapping.
The adaptive strategy is to determine the bandwidth range in which the client service bandwidth falls and to derive the payload amount from that range, where the larger the boundary values of the bandwidth range, the larger the payload amount.
Specifically, the input signal of the delay adaptive unit 1 is the customer service bandwidth, the output signal is the payload, and the output signal is connected to the control unit 2.
The control signals comprise a first signal and a second signal. The input signals of the control unit 2 are the client service data and the payload amount; the output signals are the first signal, sent to the RAM cache unit 3, and the second signal, sent to the data processing unit 4. The first signal controls the read operation of the RAM cache unit 3 and is its read enable. The second signal controls the operation of the data processing unit 4 and may comprise the shift byte count, high pointer and low pointer for the current bit-width data, for the previous first bit-width data, …, and for the previous (N-1)th bit-width data. The high pointer and the low pointer indicate, respectively, the upper and lower byte boundaries of the bit-width data from which the OSU data is generated.
The input signals of the RAM cache unit 3 are the client service data and the first signal; its output is a set of bit-width data comprising the current bit-width data, the previous first bit-width data, the previous second bit-width data, …, and the previous (N-1)th bit-width data, and this output is connected to the data processing unit 4.
The data processing unit 4 generates an OSU data block according to the set of bit width data and the second signal, the input signal of which is the set of bit width data output by the RAM buffer unit 3 and the second signal generated by the control unit 2, and the output signal of which is the OSU data block.
The data processing unit 4 shifts the current bit width data according to the number of bytes to be shifted of the current bit width data contained in the second signal to obtain shifted data 0; the data processing unit 4 shifts the first bit width data from the RAM cache unit 3 according to the number of bytes to be shifted of the first bit width data contained in the second signal to obtain shifted data 1; and so on, the previous N-1 th bit wide data from the RAM cache unit 3 is shifted according to the number of bytes to be shifted of the previous N-1 th bit wide data contained in the second signal, to obtain shifted data N-1.
The data processing unit 4 intercepts corresponding data from the shift data 0 according to the current bit width data high pointer and the current bit width data low pointer contained in the second signal; intercepting corresponding data from the shift data 1 according to the previous first bit width data high pointer and the previous first bit width data low pointer contained in the second signal; and by analogy, according to the previous N-1 bit wide data high pointer and the previous N-1 bit wide data low pointer contained in the second signal, intercepting corresponding data from the shift data N-1, and splicing the intercepted data to obtain the OSU data block.
For a high-speed service the payload amount of the OSU data block is larger, and for a low-speed service it is smaller; the mapping pattern changes flexibly with the service, so that different services can be given different delay characteristics.
Based on the above-described embodiments, with continued reference to fig. 2, specific operations of the data processing unit 4 are described by way of specific embodiments:
the control unit 2 obtains the byte number a corresponding to the payload amount of the OSU data block to be generated according to the input set of bit width data and the payload amount from the delay adaptive unit 1. The number b of bytes remaining in the set of bit-width data after the previous BMP mapping is obtained (b is 0 in the first BMP mapping). Comparing the byte number a to be generated with the rest byte number b of the set of the bit width data after the previous BMP mapping, if a > b, the first signal is 1, otherwise, the first signal is 0, specifically, the first signal is the read enable of the RAM cache unit 3, and when the first signal is 1, the first signal indicates that the RAM cache unit 3 is read, otherwise, the read operation is not performed. The second signal is derived from the number of bytes a and the number of bytes b.
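The read-enable decision described above reduces to a comparison; the shortfall value returned alongside it is this sketch's own addition, hinting at one way the second signal could be derived from a and b:

```python
def control_signals(a: int, b: int):
    """a: byte count of the payload area to be generated;
    b: bytes left in the bit-width data set after the previous mapping.
    Returns (first_signal, shortfall): the RAM read enable, plus how many
    bytes must still be fetched (one way the second signal could use a, b)."""
    first_signal = 1 if a > b else 0  # read the RAM cache only when short of data
    return first_signal, max(a - b, 0)

assert control_signals(128, 0) == (1, 128)  # first mapping: b = 0, must read
assert control_signals(64, 90) == (0, 0)    # enough residue: no read needed
```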
The invention is not limited to the embodiments described above; a person skilled in the art can make a number of modifications and adaptations without departing from the principle of the invention, and such modifications and adaptations are also considered to be within the scope of the invention. Anything not described in detail in this specification is prior art known to those skilled in the art.

Claims (8)

1. A method for implementing delay-adaptive BMP mapping, the method comprising:
obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adaptive strategy;
performing BMP mapping on the client service data to obtain an OSU data block, and filling a payload area of the OSU data block according to the payload amount when performing BMP mapping;
the self-adaptive strategy is to judge the bandwidth range of the client service bandwidth, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is;
sequentially caching the client service data into N-1 RAMs in the form of bit width data, wherein N is the number of bit width data which is contained in the maximum OSU data quantity transmitted in one transmission period, and an OSU data block comprises at least one OSU data quantity transmitted in one transmission period; n-1 RAMs are used for caching customer service data of different customer services.
2. The method for implementing BMP mapping in a time delay adaptive manner according to claim 1, wherein the method further comprises:
obtaining a shift byte number, a high pointer and a low pointer according to the payload quantity and the bit width data;
when performing BMP mapping on the client service data, obtaining the current bit-width data, respectively obtaining the previous bit-width data from the N-1 RAMs, shifting each bit-width data according to the shift byte count to obtain shifted data, intercepting valid data from each shifted data according to the high pointer and the low pointer, and splicing n valid data and filling them into the payload area of an OSU data block to obtain the OSU data block, wherein n is a positive integer not greater than N.
3. The method for implementing BMP mapping in a time delay adaptive manner according to claim 1, wherein the method further comprises:
and filling the overhead area and the filling area of the OSU data block to obtain the OSU data block after filling the payload area of the OSU data block according to the payload quantity.
4. The method of claim 1, wherein the OSU data blocks corresponding to different customer service bandwidths have the same block size.
5. A time-delay adaptive BMP mapping implementation system, the system comprising:
the delay self-adaptive unit is used for obtaining the payload corresponding to the service bandwidth of the client according to a preset self-adaptive strategy;
a control unit for processing the payload and the customer service data to obtain a control signal;
the data processing unit is used for performing BMP mapping on the client service data according to the control signal to obtain an OSU data block, and filling a payload area of the OSU data block according to the payload amount when performing BMP mapping;
the RAM cache unit comprises N-1 RAMs, is used for caching the customer service data into the N-1 RAMs in a bit width data form, wherein N is the number of the bit width data which is contained in the maximum OSU data quantity transmitted in one transmission period, and the OSU data block comprises the OSU data quantity transmitted in at least one transmission period; n-1 RAMs are used for caching client service data of different client services;
the client business data consists of a plurality of bit width data, and the transmission channels corresponding to the client business all transmit one bit width data to the RAM cache unit in the corresponding transmission period;
the self-adaptive strategy is to judge the bandwidth range of the client service bandwidth, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is.
6. The delay-adaptive BMP mapping implementation system of claim 5, wherein the control signal further comprises a shift byte count, a high pointer, and a low pointer;
the data processing unit is further configured to, when performing BMP mapping on the customer service data, obtain the current bit width data and the bit width data read from the N-1 RAMs respectively, shift each bit width data according to the shift byte count to obtain shift data, intercept valid data from each shift data according to the high pointer and the low pointer, and splice n valid data and fill them into the payload area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
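The shift-and-splice step of claim 6 can be illustrated as follows, under the assumption that each bit width data is modeled as a byte string, the shift is a byte rotation, and the high/low pointers delimit the valid byte span. The function name and the interpretation of the pointers are assumptions for this sketch.

```python
def splice_payload(words, shift_bytes, high_ptr, low_ptr):
    """Sketch of claim 6: rotate each bit-width word by the shift byte
    count, intercept the valid bytes between the high and low pointers,
    and splice the pieces into one payload-area byte string."""
    payload = bytearray()
    for word in words:
        # Shift (rotate) the word left by the shift byte count.
        shifted = word[shift_bytes:] + word[:shift_bytes]
        # Intercept the valid bytes in [high_ptr, low_ptr).
        payload += shifted[high_ptr:low_ptr]
    return bytes(payload)

# Two 8-byte bit-width words, shifted by 2 bytes, keeping bytes [0, 4).
result = splice_payload([b"ABCDEFGH", b"12345678"], 2, 0, 4)
```

Here `b"ABCDEFGH"` rotates to `b"CDEFGHAB"` and contributes `b"CDEF"`; the second word contributes `b"3456"`; splicing yields `b"CDEF3456"`.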
7. The system of claim 5, wherein, after filling the payload area of the OSU data block according to the payload amount, the data processing unit fills the overhead area and the filling area of the OSU data block to obtain the complete OSU data block.
8. The BMP mapping implementation system of claim 5, wherein the OSU data blocks generated by the data processing unit for different customer service bandwidths have the same block size.
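Claims 7 and 8 together imply that whatever payload amount the adaptive strategy selects, the block is padded out to one fixed size: overhead area, then payload area, then filling area. The sketch below shows this, with the block size and overhead length as illustrative assumptions (the patent does not disclose concrete values).

```python
BLOCK_SIZE = 192      # total OSU data block size in bytes (assumed)
OVERHEAD_LEN = 8      # overhead area length in bytes (assumed)

def build_block(payload: bytes) -> bytes:
    """Assemble a fixed-size OSU data block: overhead area, payload
    area (variable per claim 5), and a filling area that pads the
    block out to the constant BLOCK_SIZE of claim 8."""
    if len(payload) > BLOCK_SIZE - OVERHEAD_LEN:
        raise ValueError("payload exceeds block capacity")
    overhead = b"\x00" * OVERHEAD_LEN                          # overhead area (stub)
    filling = b"\xff" * (BLOCK_SIZE - OVERHEAD_LEN - len(payload))  # filling area
    return overhead + payload + filling
```

Because the block size is constant, downstream framing and scheduling need no per-service configuration; only the payload/filling split varies with the customer service bandwidth.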
CN202111581203.0A 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system Active CN114245241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111581203.0A CN114245241B (en) 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system


Publications (2)

Publication Number Publication Date
CN114245241A CN114245241A (en) 2022-03-25
CN114245241B true CN114245241B (en) 2023-11-03

Family

ID=80761332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111581203.0A Active CN114245241B (en) 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system

Country Status (1)

Country Link
CN (1) CN114245241B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118074856A (en) * 2022-11-22 2024-05-24 华为技术有限公司 Data mapping method, data demapping method and related equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111201728A (en) * 2017-10-09 2020-05-26 华为技术有限公司 Data transmission method in optical network and optical network equipment
WO2020156287A1 (en) * 2019-01-28 2020-08-06 中兴通讯股份有限公司 Method, device and system for customer business transfer, and computer readable storage medium
CN111865887A (en) * 2019-04-30 2020-10-30 华为技术有限公司 Data transmission method and device in optical transport network
CN112865911A (en) * 2021-01-18 2021-05-28 中兴通讯股份有限公司 Customer service bearing method and device
WO2021180007A1 (en) * 2020-03-11 2021-09-16 华为技术有限公司 Service bearing method, apparatus and system
WO2021190000A1 (en) * 2020-03-27 2021-09-30 中兴通讯股份有限公司 Service processing method and apparatus in optical transport network, and electronic device
CN113542934A (en) * 2020-04-21 2021-10-22 中兴通讯股份有限公司 Service processing method, device, network equipment and storage medium
CN113542933A (en) * 2020-04-15 2021-10-22 华为技术有限公司 Bandwidth adjusting method and related equipment
CN113573174A (en) * 2020-04-28 2021-10-29 华为技术有限公司 Service processing method and device
CN113630206A (en) * 2020-05-08 2021-11-09 中兴通讯股份有限公司 Service processing method, device, network equipment and storage medium
CN113726679A (en) * 2020-05-25 2021-11-30 华为技术有限公司 Dynamically configurable data transmission method, device, equipment and storage medium
WO2021244489A1 (en) * 2020-06-03 2021-12-09 中兴通讯股份有限公司 Method and apparatus for transmitting encryption control overhead in optical transport network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OTN Technology Shows Its Strength as Government-Enterprise Private Lines Become an Important Operator Market; Diao Xingling; Mei Yaxin; Communications World (24); full text *


Similar Documents

Publication Publication Date Title
US20050265357A1 (en) Memory caching
US6967951B2 (en) System for reordering sequenced based packets in a switching network
EP1865632B1 (en) A method and apparatus for signal splitting and synthesizing
US7944931B2 (en) Balanced bandwidth utilization
KR20210086420A (en) Neural network data processing apparatus, method and electronic device
US20030043851A1 (en) Transmit virtual concatenation processor
CN114245241B (en) Time-delay self-adaptive BMP mapping realization method and system
US6717960B1 (en) Method for reconstructing an aggregate ATM cell stream and related device
US7362751B2 (en) Variable length switch fabric
US20070089030A1 (en) Configurable bandwidth allocation for data channels accessing a memory interface
CN114915375A (en) Method and device for realizing service mapping
US7379467B1 (en) Scheduling store-forwarding of back-to-back multi-channel packet fragments
EP3528442B1 (en) Method, device, computer storage medium for mapping optical channel data unit frames
KR20010102399A (en) Data communications
US7065628B2 (en) Increasing memory access efficiency for packet applications
CN101656586B (en) Method and device for improving virtual concatenation delay compensation caching efficiency in synchronous digital hierarchy
US7496109B1 (en) Method of maximizing bandwidth efficiency in a protocol processor
US6891863B1 (en) Device and methods for processing channels in a data stream
RU2530298C1 (en) Method, system and device for transmission of signals of synchronous digital hierarchy as to microwave
US7724781B1 (en) Receive virtual concatenation processor
CN103731225A (en) Data transmission method and device
CN112788442B (en) Method and system for bearing low-speed service in OTN (optical transport network)
US7116659B2 (en) Data transmission memory
JP4673697B2 (en) Digital delay buffer and related method
CN110493322A (en) High-performance data packet caching method and its device based on Hierarchical storage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant