CN114245241A - Time-delay self-adaptive BMP mapping implementation method and system - Google Patents

Time-delay self-adaptive BMP mapping implementation method and system

Info

Publication number
CN114245241A
CN114245241A
Authority
CN
China
Prior art keywords
data
osu
payload
bit width
client service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111581203.0A
Other languages
Chinese (zh)
Other versions
CN114245241B (en)
Inventor
李娜
李光瑜
海增强
王雅琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fiberhome Telecommunication Technologies Co Ltd filed Critical Fiberhome Telecommunication Technologies Co Ltd
Priority to CN202111581203.0A priority Critical patent/CN114245241B/en
Publication of CN114245241A publication Critical patent/CN114245241A/en
Application granted granted Critical
Publication of CN114245241B publication Critical patent/CN114245241B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q: SELECTING
    • H04Q 11/00: Selecting arrangements for multiplex systems
    • H04Q 11/0001: Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062: Network aspects
    • H04Q 11/0067: Provisions for optical access or distribution networks, e.g. Gigabit Ethernet Passive Optical Network (GE-PON), ATM-based Passive Optical Network (A-PON), PON-Ring
    • H04Q 2011/0086: Network resource allocation, dimensioning or optimisation

Abstract

The invention discloses a delay-adaptive BMP mapping implementation method and system, relating to the technical field of optical transport networks, comprising: obtaining the payload amount corresponding to the client service bandwidth according to a preset adaptive strategy; and performing BMP mapping on the client service data to obtain an OSU data block, filling the payload area of the OSU data block according to the payload amount during BMP mapping. The adaptive strategy judges the bandwidth range to which the client service bandwidth belongs and determines the payload amount accordingly, wherein the larger the boundary value of the bandwidth range, the larger the payload amount. Because the mapping pattern of the invention varies flexibly with the service, different services obtain different delay characteristics.

Description

Time-delay self-adaptive BMP mapping implementation method and system
Technical Field
The present invention relates to the field of Optical Transport Network (OTN) technology, and in particular to a method and system for implementing delay-adaptive bit-synchronous mapping procedure (BMP) mapping.
Background
The conventional OTN network uses a minimum timeslot granularity of 1.25 Gbit/s, which cannot efficiently access and carry low-speed traffic, such as traffic with a data rate of 2 Mbit/s. To solve this problem, several vendors have proposed the concept of an Optical Service Unit (OSU): low-speed services are mapped into OSU containers, and multiple OSU containers are multiplexed into an Optical Channel Data Unit (ODUk/flex).
One of the methods for mapping client services into an OSU container is BMP mapping. Conventional BMP mapping uses a single fixed mapping pattern for a given service, i.e., the OSU frame structure filled into the OSU data block is fixed. The OSU frame structure includes an overhead area (OH), a payload area (payload), and a padding area (stuff); with the conventional BMP mapping method, the payload amount filled into the payload area does not change.
However, because one block of the OSU container is large, the entire OSU block must be filled when a client service is mapped into the OSU container. If the traditional OTN BMP mapping method with its single fixed mapping pattern is used, a low-speed service takes a long time to generate one OSU block, and the service delay is large.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a delay-adaptive BMP mapping implementation method and system, which can flexibly adjust the size of the payload area of an OSU data block according to the client service bandwidth, thereby reducing the delay incurred in generating the OSU data block.
In order to achieve the above purpose, the technical scheme is as follows:
a delay self-adaptive BMP mapping implementation method comprises the following steps:
obtaining the net load corresponding to the client service bandwidth according to a preset self-adaptive strategy;
carrying out BMP mapping on the client service data to obtain an OSU data block, and filling a payload area of the OSU data block according to the payload amount when carrying out BMP mapping;
the self-adaptive strategy is to judge the bandwidth range to which the client service bandwidth belongs, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is.
Preferably, the method further comprises:
and sequentially caching the client service data into N-1 RAMs in a form of bit width data, wherein N is the number of bit width data which are mostly contained in OSU data volume transmitted in one transmission cycle, and the OSU data block comprises OSU data volume transmitted in at least one transmission cycle.
Preferably, the method further comprises:
processing the payload amount and the bit-width data to obtain a shift byte number, a high pointer, and a low pointer;
when performing BMP mapping on the client service data, obtaining the current bit-width data and obtaining the preceding bit-width data from the N-1 RAMs respectively, shifting each bit-width data by the shift byte number to obtain shifted data, intercepting valid data from each shifted data according to the high pointer and the low pointer, and splicing n pieces of valid data and filling them into the payload area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
Preferably, the method further comprises:
and filling the overhead area and the filling area of the OSU data block to obtain the OSU data block after filling the payload area of the OSU data block according to the payload quantity.
Preferably, the block sizes of the OSU data blocks corresponding to different client service bandwidths are the same.
A delay-adaptive BMP mapping implementation system, the system comprising:
a delay adaptive unit, used for obtaining the payload amount corresponding to the client service bandwidth according to a preset adaptive strategy;
a control unit, used for processing the payload amount and the client service data to obtain a control signal;
a data processing unit, used for performing BMP mapping on the client service data according to the control signal to obtain an OSU data block, and filling the payload area of the OSU data block according to the payload amount during BMP mapping;
wherein the adaptive strategy judges the bandwidth range to which the client service bandwidth belongs and determines the payload amount accordingly, and the larger the boundary value of the bandwidth range, the larger the payload amount.
Preferably, the system further comprises:
the RAM cache unit comprises N-1 RAMs and is used for caching the client service data into the N-1 RAMs in a bit width data mode in sequence, N is the number of bit width data which are contained in the OSU data volume transmitted in one transmission cycle at most, and the OSU data block comprises the OSU data volume transmitted in at least one transmission cycle;
the client service data is composed of a plurality of bit width data, and the transmission channels corresponding to the client services transmit one bit width data to the RAM cache unit in corresponding transmission cycles.
Preferably, the control signal further comprises a shift byte number, a high pointer, and a low pointer;
the data processing unit is further configured to, when performing BMP mapping on the client service data, obtain the current bit-width data and obtain bit-width data from the N-1 RAMs respectively, shift each bit-width data by the shift byte number to obtain shifted data, intercept valid data from each shifted data according to the high pointer and the low pointer, and splice n pieces of valid data and fill them into the payload area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
Preferably, after the data processing unit fills the payload area of the OSU data block according to the payload amount, it fills the overhead area and the padding area of the OSU data block to obtain the OSU data block.
Preferably, the OSU data blocks generated by the data processing unit for different client service bandwidths have the same block size.
The invention has the beneficial effect that the mapping pattern varies flexibly with the service, so that different services obtain different delay characteristics.
Drawings
Fig. 1 is a flowchart of a method for implementing delay adaptive BMP mapping in an embodiment of the present invention.
Fig. 2 is a functional block diagram of a delay adaptive BMP mapping implementation system in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, an embodiment of the present invention provides a delay-adaptive BMP mapping implementation method, which includes determining, according to a preset adaptive strategy, the payload size of the payload area of the OSU data block matching the client service bandwidth, i.e., the mapping pattern, and, when performing BMP mapping on client service data of different client service bandwidths, generating OSU data blocks with different payload amounts according to the corresponding mapping patterns. The adaptive strategy judges the bandwidth range to which the client service bandwidth belongs and determines the payload amount accordingly, wherein the larger the boundary value of the bandwidth range, the larger the payload amount. For a high-speed service the payload amount of the OSU data block is large; for a low-speed service it is small. The mapping pattern can be flexibly changed for different services, so that different services obtain different delay characteristics.
Continuing to refer to fig. 1, an embodiment of the delay-adaptive BMP mapping implementation method is provided, where the method includes:
Step S1: obtain the payload amount corresponding to the client service bandwidth according to the preset adaptive strategy.
Step S2: process the payload amount and the bit-width data to obtain the shift byte number, the high pointer, and the low pointer.
Step S3: sequentially cache the client service data into N-1 RAMs in the form of bit-width data, wherein N is the maximum number of bit-width data units contained in the OSU data volume transmitted in one transmission cycle, and the OSU data block comprises the OSU data volume transmitted in at least one transmission cycle.
Step S4: respectively acquire bit-width data from n-1 RAMs, shift each bit-width data by the shift byte number to obtain shifted data, intercept valid data from each shifted data according to the high pointer and the low pointer, splice the n pieces of valid data and fill them into the payload area of the OSU data block, and fill the overhead area and the padding area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
The block sizes of the OSU data blocks corresponding to different client service bandwidths are the same.
In this embodiment, the payload amount is the number of payload bytes in the OSU data block: the smaller the client service bandwidth, the fewer payload bytes in the OSU data block; the larger the client service bandwidth, the more payload bytes.
The method by which the adaptive strategy derives the payload amount from the client service bandwidth is as follows: when the client service bandwidth is greater than bandwidth B1 and less than or equal to bandwidth B2, a specific payload amount D1 is generated; when the client service bandwidth is greater than bandwidth B2 and less than or equal to bandwidth B3, another specific payload amount D2 is generated; and so on, where bandwidth B1 < bandwidth B2 < bandwidth B3, so that a series of specific payload amounts D1, D2, ... can be generated. The minimum payload amount is 1, and the maximum is the length of the OSU data block minus the length of its OH field.
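As an illustration only, the threshold-based strategy above can be sketched in software; the bandwidth boundaries and payload amounts below are hypothetical stand-ins for B1..B3 and D1..D3, not values from the patent:

```python
# Hypothetical adaptive strategy: map the bandwidth range a client service
# falls into onto a payload amount for the OSU data block. Thresholds and
# payload sizes are illustrative only.
BANDWIDTH_RANGES = [
    (2_000_000, 64),     # bandwidth <= 2 Mbit/s   -> 64-byte payload
    (10_000_000, 128),   # <= 10 Mbit/s            -> 128-byte payload
    (100_000_000, 185),  # <= 100 Mbit/s           -> 185 bytes (192 - 7 OH)
]

def payload_for_bandwidth(bandwidth_bps: int) -> int:
    """Larger bandwidth ranges yield larger payload amounts."""
    for upper_bound, payload in BANDWIDTH_RANGES:
        if bandwidth_bps <= upper_bound:
            return payload
    return BANDWIDTH_RANGES[-1][1]  # clamp above the last boundary
```

The monotonic ordering of the table encodes the rule that a larger bandwidth-range boundary value yields a larger payload amount.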
The number of payload bytes in the OSU data block directly affects the transmission delay of the client service. Specifically, assume the client service bandwidth is 2 Mbit/s, the OSU data block is 192 bytes long, and the OH field in the OSU data block is 7 bytes long. If the OSU data block carries 64 payload bytes, the client service must transmit 64 bytes before a complete OSU data block can be generated, which takes 64 bytes × 8 bits ÷ 2 Mbit/s = 256 µs. If the OSU data block carries 128 payload bytes, the client service must transmit 128 bytes first, which takes 128 bytes × 8 bits ÷ 2 Mbit/s = 512 µs.
Therefore, between an OSU data block carrying 64 payload bytes and one carrying 128 payload bytes, the delay difference is 512 µs − 256 µs = 256 µs. Thus, the fewer the payload bytes in the OSU data block, the less time is needed to generate an OSU data block and the smaller the client service delay; the more payload bytes, the more time is needed and the larger the delay. For a low-speed service, adaptively reducing the number of payload bytes in the OSU data block effectively reduces the delay of mapping the client service into the OSU, yielding better delay performance.
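The delay arithmetic in the example above can be checked with a short sketch (a back-of-the-envelope calculation, not part of the patented method):

```python
def mapping_delay_us(payload_bytes: int, bandwidth_bps: int) -> float:
    """Time for the client service to supply one OSU block's payload bytes,
    i.e. the dominant delay in generating an OSU data block."""
    return payload_bytes * 8 * 1_000_000 / bandwidth_bps

# Figures from the description: a 2 Mbit/s client service.
assert mapping_delay_us(64, 2_000_000) == 256.0   # 64-byte payload
assert mapping_delay_us(128, 2_000_000) == 512.0  # 128-byte payload
```

Halving the payload amount halves the block-generation delay, which is exactly the lever the adaptive strategy pulls for low-speed services.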
Based on the foregoing embodiment, with continued reference to fig. 1, the specific operation of sequentially caching the client service data in the form of bit-width data into the N-1 RAMs in step S3 is described through a specific embodiment:
the N-1 RAMs can form an RAM cache unit 3 for caching client service data of different client services, the client services are transmitted in corresponding transmission channels in the form of the client service data, and the client service bandwidth is the bandwidth of the transmission channels. During specific transmission, each transmission channel transmits one bit wide data in a corresponding transmission cycle. The client service data of each client service comprises current bit width data, previous first bit width data and previous N-1 bit width data of the previous second bit width data … …, the current bit width data can be directly used without being stored in the RAM cache unit 3, the RAM _1 stores the previous first bit width data, and the RAM _2 stores the previous second bit width data … …, the RAM _ N-1 stores the previous N-1 bit width data.
In each of RAM_1 through RAM_N-1, each client service occupies 2 address spaces, and the size of each RAM is 2 × the number of client services. For example, channel 0 occupies addresses 0 and 1 of each RAM, channel 1 occupies addresses 2 and 3, and so on.
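The 2-addresses-per-channel layout can be sketched as below; the ping-pong `toggle` parameter is an assumption consistent with the alternate-write / inverted-read scheme described later:

```python
# Each channel owns two consecutive addresses in every RAM of the chain.
def write_address(channel: int, toggle: int) -> int:
    """Writes alternate between the channel's two addresses via `toggle`."""
    return 2 * channel + (toggle & 1)

def read_address(channel: int, toggle: int) -> int:
    """The read address is obtained by inverting the write address's low bit."""
    return 2 * channel + ((toggle ^ 1) & 1)
```

With this layout, channel 1 indeed occupies addresses 2 and 3, matching the example in the description.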
The delay-adaptive function can generate various OSU mapping patterns. It is implemented with a RAM chain and a pipeline method, which optimizes timing and allows the mapping patterns of various OSU data blocks to be generated flexibly.
Based on the foregoing embodiment, with continued reference to fig. 1, the specific operation of obtaining the shift byte number, the high pointer, and the low pointer from the payload amount and the bit-width data in step S2 is described through a specific embodiment:
Assume the client service data is transmitted at 80 bytes per transmission cycle and the OSU data at 128 bytes per transmission cycle. Assume the OSU block is 256 bytes long, with an 8-byte OH field, a 210-byte payload field, and a 38-byte stuff field. An OSU data block then requires 2 transmission cycles to complete: the first transmission cycle carries the 8-byte OH field and 120 bytes of payload, and the second carries 90 bytes of payload and the 38-byte stuff field.
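The example layout above, a 256-byte block (8-byte OH, 210-byte payload, 38-byte stuff) carried in two 128-byte transmission cycles, can be verified with a small sketch; the byte values used to mark the three fields are placeholders:

```python
OH_LEN, PAYLOAD_LEN, STUFF_LEN = 8, 210, 38
CYCLE_BYTES = 128

def split_into_cycles(block: bytes, cycle: int = CYCLE_BYTES) -> list[bytes]:
    """Slice an OSU block into the fixed-size transmission cycles that carry it."""
    return [block[i:i + cycle] for i in range(0, len(block), cycle)]

# Placeholder byte values: 0 = OH, 1 = payload, 2 = stuff.
block = b"\x00" * OH_LEN + b"\x01" * PAYLOAD_LEN + b"\x02" * STUFF_LEN
cycles = split_into_cycles(block)
# cycles[0] carries the 8 OH bytes plus 120 payload bytes;
# cycles[1] carries the remaining 90 payload bytes plus 38 stuff bytes.
```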
Taking the case where transmission channel 0 continuously inputs the 1st to 6th client service data as an example, the read/write signals of each RAM are as follows. The 1st to 5th client service data input by transmission channel 0 are written alternately into the address 0/1 space of RAM_1, and the read address of RAM_1 is obtained by inverting the write address. After RAM_1 performs a read operation, the read data is output with a delay of 2 transmission cycles. Delaying the read/write signals of RAM_1 by 2 transmission cycles yields the read/write signals of RAM_2, and the data written into RAM_2 is the data read out of RAM_1. After a further delay of 2 transmission cycles, the output data of RAM_2 is obtained: it lags the output of RAM_1 by 2 transmission cycles and the client service data by 4 transmission cycles, so all the data needed for the payload amount of the subsequent BMP mapping is available in the same transmission cycle. From the payload amount and the bit-width data, combined with the storage capacity of each RAM, the shift byte number can be computed, and the high pointer and low pointer can be used to intercept valid data from the bit-width data to fill the payload area of the OSU data block. The high pointer indicates at which byte of the bit-width data generation of the OSU data starts, and the low pointer indicates at which byte it ends.
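The shift-then-intercept step can be illustrated in software as follows; the byte-wise rotation standing in for the hardware shifter, and the inclusive pointer convention, are assumptions made for illustration:

```python
def shift_bytes(word: bytes, shift: int) -> bytes:
    """Rotate a bit-width data word left by `shift` bytes (software stand-in
    for the hardware byte shifter)."""
    shift %= len(word)
    return word[shift:] + word[:shift]

def intercept(word: bytes, high_ptr: int, low_ptr: int) -> bytes:
    """Cut the valid bytes: high_ptr marks where OSU data generation starts,
    low_ptr where it ends (inclusive)."""
    return word[high_ptr:low_ptr + 1]

def splice_payload(valid_pieces: list[bytes]) -> bytes:
    """Concatenate the n intercepted pieces into the payload-area contents."""
    return b"".join(valid_pieces)
```

Each of the n bit-width data words is shifted, intercepted, and spliced in turn, which is the per-word operation the data processing unit repeats.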
Based on the foregoing embodiment, with continued reference to fig. 1, the specific operation of filling the payload area of the OSU data block in step S4 is described through a specific embodiment:
When performing BMP mapping on the client service data, the current bit-width data is obtained and bit-width data is obtained from n-1 RAMs respectively; each bit-width data is shifted by the shift byte number to obtain shifted data; valid data is intercepted from each shifted data according to the high pointer and the low pointer; and the n pieces of valid data are spliced and filled into the payload area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N. The n pieces of valid data comprise the current bit-width data and the preceding bit-width data obtained from the n-1 RAMs.
Based on the foregoing embodiment, with continuing reference to fig. 1, a specific operation of generating the OSU data block in step S4 is described by a specific embodiment:
and filling the overhead area and the filling area of the OSU data block to obtain the OSU data block after filling the payload area of the OSU data block according to the payload quantity.
In some other embodiments, after the payload area of the OSU data block is filled according to the payload amount, only the overhead area of the OSU data block may be filled; whether to fill the padding area of the OSU data block can be customized.
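A sketch of the block-assembly step, using the 192-byte block and 7-byte OH sizes from the earlier example; the overhead and stuff byte values are placeholders, and the `fill_stuff` switch mirrors the customizable padding described above:

```python
def build_osu_block(payload: bytes, block_len: int = 192, oh_len: int = 7,
                    fill_stuff: bool = True) -> bytes:
    """Assemble an OSU data block: overhead area + payload area + optional
    padding (stuff) area. Field sizes follow the 192/7-byte example."""
    oh = b"\xaa" * oh_len  # placeholder overhead bytes
    stuff_len = block_len - oh_len - len(payload)
    if stuff_len < 0:
        raise ValueError("payload exceeds block capacity")
    stuff = b"\x00" * stuff_len if fill_stuff else b""
    return oh + payload + stuff
```

With `fill_stuff=True` every block comes out the same length regardless of payload amount, matching the claim that blocks for different bandwidths have the same block size.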
As shown in fig. 2, an embodiment of the present invention provides a delay adaptive BMP mapping implementation system, including:
and the delay self-adapting unit 1 is used for obtaining the net load corresponding to the client service bandwidth according to a preset self-adapting strategy.
And the control unit 2 is used for processing and obtaining a control signal according to the payload and the client service data.
And the RAM cache unit 3 comprises N-1 RAMs and is used for caching the client service data into the N-1 RAMs in sequence in a form of bit width data, wherein N is the number of the bit width data which is needed at most for generating one OSU data block. The client service data is composed of a plurality of bit width data, and the transmission channels corresponding to the client service bandwidth all transmit one bit width data to the RAM cache unit 3 in corresponding transmission cycles.
And the data processing unit 4 is configured to perform BMP mapping on the client service data according to the control signal to obtain an OSU data block, and fill a payload area of the OSU data block according to a payload amount when performing BMP mapping.
The self-adaptive strategy is to judge the bandwidth range to which the client service bandwidth belongs, and determine the payload according to the bandwidth range, wherein the larger the boundary value of the bandwidth range is, the larger the payload is.
Specifically, the input signal of the delay adaptive unit 1 is the client service bandwidth, its output signal is the payload amount, and that output is connected to the control unit 2.
The control signal comprises a first signal and a second signal. The input signals of the control unit 2 are the client service data and the payload amount; its outputs are the first signal, sent to the RAM cache unit 3, and the second signal, sent to the data processing unit 4. The first signal controls the read operation of the RAM cache unit 3, serving as its read enable. The second signal controls the operation of the data processing unit 4 and may comprise the shift byte number required for the current client service data with its high pointer and low pointer, the shift byte number required for the preceding first bit-width data with its high pointer and low pointer, ..., up to the shift byte number required for the preceding (N-1)-th bit-width data with its high pointer and low pointer. The high pointer indicates at which byte of the bit-width data generation of the OSU data starts, and the low pointer indicates at which byte it ends.
The input signals of the RAM cache unit 3 are the client service data and the first signal. Its output is a set of bit-width data comprising the current bit-width data, the preceding first bit-width data, the preceding second bit-width data, ..., up to the preceding (N-1)-th bit-width data, and this output is connected to the data processing unit 4.
The data processing unit 4 generates an OSU data block from the set of bit-width data and the second signal: its input signals are the set of bit-width data output by the RAM cache unit 3 and the second signal generated by the control unit 2, and its output signal is the OSU data block.
The data processing unit 4 shifts the current bit-width data by the shift byte number for the current bit-width data contained in the second signal to obtain shifted data 0; it shifts the preceding first bit-width data from the RAM cache unit 3 by the shift byte number for the preceding first bit-width data contained in the second signal to obtain shifted data 1; and so on, the preceding (N-1)-th bit-width data from the RAM cache unit 3 is shifted by the shift byte number for the preceding (N-1)-th bit-width data contained in the second signal to obtain shifted data N-1.
The data processing unit 4 intercepts the corresponding data from shifted data 0 according to the current bit-width data's high pointer and low pointer contained in the second signal; it intercepts the corresponding data from shifted data 1 according to the preceding first bit-width data's high pointer and low pointer contained in the second signal; and so on, it intercepts the corresponding data from shifted data N-1 according to the preceding (N-1)-th bit-width data's high pointer and low pointer contained in the second signal, then splices the intercepted data to obtain the OSU data block.
For a high-speed service the payload amount of the OSU data block is large; for a low-speed service it is small. The mapping pattern can be flexibly changed for different services, so that different services obtain different delay characteristics.
Based on the above embodiment, with continuing reference to fig. 2, the specific operation of the data processing unit 4 is described by a specific embodiment:
the control unit 2 obtains the byte number a corresponding to the payload amount of the OSU data block to be generated according to the input bit width data set and the payload amount from the delay adaptive unit 1. Acquiring the residual byte number b of the set of bit-width data after the previous BMP mapping (when the first BMP mapping is performed, b is 0). Comparing the byte number a to be generated with the residual byte number b of the set of bit-width data after the previous BMP mapping, if a > b, the first signal is 1, otherwise, the first signal is 0, specifically, the first signal is the read enable of the RAM cache unit 3, when the first signal is 1, the read operation is performed on the RAM cache unit 3, otherwise, the read operation is not performed. According to the byte number a and the byte number b, a second signal can be obtained.
The present invention is not limited to the above-described embodiments, and it will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements are also considered to be within the scope of the present invention. Those not described in detail in this specification are within the skill of the art.

Claims (10)

1. A delay-adaptive BMP mapping implementation method, characterized by comprising the following steps:
obtaining the payload amount corresponding to the client service bandwidth according to a preset adaptive strategy;
performing BMP mapping on the client service data to obtain an OSU data block, and filling the payload area of the OSU data block according to the payload amount during BMP mapping;
wherein the adaptive strategy judges the bandwidth range to which the client service bandwidth belongs and determines the payload amount accordingly, and the larger the boundary value of the bandwidth range, the larger the payload amount.
2. The delay-adaptive BMP mapping implementation method of claim 1, further comprising:
sequentially caching the client service data into N-1 RAMs in the form of bit-width data, wherein N is the maximum number of bit-width data units contained in the OSU data volume transmitted in one transmission cycle, and the OSU data block comprises the OSU data volume transmitted in at least one transmission cycle.
3. The delay-adaptive BMP mapping implementation method of claim 2, further comprising:
processing the payload amount and the bit-width data to obtain a shift byte number, a high pointer, and a low pointer;
when performing BMP mapping on the client service data, obtaining the current bit-width data and obtaining the preceding bit-width data from the N-1 RAMs respectively, shifting each bit-width data by the shift byte number to obtain shifted data, intercepting valid data from each shifted data according to the high pointer and the low pointer, and splicing n pieces of valid data and filling them into the payload area of the OSU data block to obtain the OSU data block, where n is a positive integer not greater than N.
4. The delay-adaptive BMP mapping implementation method of claim 2, further comprising:
after filling the payload area of the OSU data block according to the payload amount, filling the overhead area and the padding area of the OSU data block to obtain the OSU data block.
5. The method of claim 1, wherein the OSU data blocks corresponding to different client service bandwidths have the same block size.
6. A delay-adaptive BMP mapping implementation system, characterized in that the system comprises:
a delay adaptive unit, used for obtaining the payload amount corresponding to the client service bandwidth according to a preset adaptive strategy;
a control unit, used for processing the payload amount and the client service data to obtain a control signal;
a data processing unit, used for performing BMP mapping on the client service data according to the control signal to obtain an OSU data block, and filling the payload area of the OSU data block according to the payload amount during BMP mapping;
wherein the adaptive strategy judges the bandwidth range to which the client service bandwidth belongs and determines the payload amount accordingly, and the larger the boundary value of the bandwidth range, the larger the payload amount.
7. The delay-adaptive BMP mapping implementation system of claim 6, wherein the system further comprises:
a RAM cache unit comprising N-1 RAMs, used for sequentially caching the client service data into the N-1 RAMs in the form of bit-width data, wherein N is the maximum number of bit-width data units contained in the OSU data volume transmitted in one transmission cycle, and the OSU data block comprises the OSU data volume transmitted in at least one transmission cycle;
wherein the client service data is composed of a plurality of bit-width data units, and the transmission channel corresponding to each client service transmits one bit-width data unit to the RAM cache unit per transmission cycle.
8. The system for implementing delay adaptive BMP mapping of claim 7, wherein the control signals further comprise a number of shift bytes, a high pointer, and a low pointer;
the data processing unit is further configured to, when performing BMP mapping on the client service data, obtain current bit width data, and obtain bit width data from N-1 RAMs respectively, shift each bit width data according to the number of shift bytes to obtain shift data, intercept effective data from each shift data according to a high pointer and a low pointer, splice N effective data, and fill a payload area of the OSU data block to obtain the OSU data block, where N is a positive integer not greater than N.
9. The delay-adaptive BMP mapping implementation system of claim 6, wherein the data processing unit, after filling the payload area of the OSU data block according to the payload amount, fills the overhead area and the padding area of the OSU data block to obtain the complete OSU data block.
10. The system of claim 6, wherein the OSU data blocks generated by the data processing unit for different client service bandwidths have the same block size.
CN202111581203.0A 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system Active CN114245241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111581203.0A CN114245241B (en) 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system

Publications (2)

Publication Number Publication Date
CN114245241A true CN114245241A (en) 2022-03-25
CN114245241B CN114245241B (en) 2023-11-03

Family

ID=80761332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111581203.0A Active CN114245241B (en) 2021-12-22 2021-12-22 Time-delay self-adaptive BMP mapping realization method and system

Country Status (1)

Country Link
CN (1) CN114245241B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111201728A (en) * 2017-10-09 2020-05-26 华为技术有限公司 Data transmission method in optical network and optical network equipment
WO2020156287A1 (en) * 2019-01-28 2020-08-06 中兴通讯股份有限公司 Method, device and system for customer business transfer, and computer readable storage medium
CN111865887A (en) * 2019-04-30 2020-10-30 华为技术有限公司 Data transmission method and device in optical transport network
CN112865911A (en) * 2021-01-18 2021-05-28 中兴通讯股份有限公司 Customer service bearing method and device
WO2021180007A1 (en) * 2020-03-11 2021-09-16 华为技术有限公司 Service bearing method, apparatus and system
WO2021190000A1 (en) * 2020-03-27 2021-09-30 中兴通讯股份有限公司 Service processing method and apparatus in optical transport network, and electronic device
CN113542933A (en) * 2020-04-15 2021-10-22 华为技术有限公司 Bandwidth adjusting method and related equipment
CN113542934A (en) * 2020-04-21 2021-10-22 中兴通讯股份有限公司 Service processing method, device, network equipment and storage medium
CN113573174A (en) * 2020-04-28 2021-10-29 华为技术有限公司 Service processing method and device
CN113630206A (en) * 2020-05-08 2021-11-09 中兴通讯股份有限公司 Service processing method, device, network equipment and storage medium
CN113726679A (en) * 2020-05-25 2021-11-30 华为技术有限公司 Dynamically configurable data transmission method, device, equipment and storage medium
WO2021244489A1 (en) * 2020-06-03 2021-12-09 中兴通讯股份有限公司 Method and apparatus for transmitting encryption control overhead in optical transport network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DIAO Xingling; MEI Yaxin: "Government and Enterprise Private Lines Become a Key Carrier Market as OTN Technology Shows Its Strength" (政企专线成运营商重要市场OTN技术大显身手), Communications World (通信世界), no. 24 *

Also Published As

Publication number Publication date
CN114245241B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
US20050265357A1 (en) Memory caching
US6967951B2 (en) System for reordering sequenced based packets in a switching network
EP1192753B1 (en) Method and apparatus for shared buffer packet switching
US7944931B2 (en) Balanced bandwidth utilization
US20080013535A1 (en) Data Switch and Switch Fabric
US6128715A (en) Asynchronous transmit packet buffer
WO1999056421A1 (en) A method and apparatus for dynamic allocation of bandwidth to data with varying bit rates
US20140247835A1 (en) System and method for modifying, in a processing pipeline, a length of a data packet in a data block without modifying a length of the data block
US7000073B2 (en) Buffer controller and management method thereof
US7522606B1 (en) Passive packet re-ordering and packet loss detection
CN111526169B (en) Method, medium, server and computer device for transmitting data through network
CN114245241B (en) Time-delay self-adaptive BMP mapping realization method and system
US7379467B1 (en) Scheduling store-forwarding of back-to-back multi-channel packet fragments
EP3528442B1 (en) Method, device, computer storage medium for mapping optical channel data unit frames
CN101656586B (en) Method and device for improving virtual concatenation delay compensation caching efficiency in synchronous digital hierarchy
US7496109B1 (en) Method of maximizing bandwidth efficiency in a protocol processor
CN114915375A (en) Method and device for realizing service mapping
US6914901B1 (en) System and method for communicating using multiple memory banks
US6377586B1 (en) Time switching circuit of synchronous super high speed transmission apparatus and controlling method thereof
CN112788442B (en) Method and system for bearing low-speed service in OTN (optical transport network)
RU2530298C1 (en) Method, system and device for transmission of signals of synchronous digital hierarchy as to microwave
CN103731225A (en) Data transmission method and device
CN114337899B (en) Method, device and system for crossing signal time slots based on packet switching
JP4673697B2 (en) Digital delay buffer and related method
US7724781B1 (en) Receive virtual concatenation processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant