WO2021017726A1 - Method for improving transmission rate, processor, network device, and network system - Google Patents

Method for improving transmission rate, processor, network device, and network system

Info

Publication number
WO2021017726A1
WO2021017726A1 (PCT/CN2020/099226)
Authority
WO
WIPO (PCT)
Prior art keywords
data
rate
fec
fec code
ratio
Prior art date
Application number
PCT/CN2020/099226
Other languages
English (en)
French (fr)
Inventor
何向
王心远
乐伟军
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910731452.XA (CN112291077A)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20848134.1A (EP3996330A4)
Publication of WO2021017726A1
Priority to US17/584,911 (US20220149988A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/06 Synchronising arrangements
    • H04J 3/07 Synchronising arrangements using pulse stuffing for systems with different or fluctuating information rates or bit rates
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J 3/1605 Fixed allocated frame structures
    • H04J 3/1652 Optical Transport Network [OTN]
    • H04J 3/1658 Optical Transport Network [OTN] carrying packets or ATM cells
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 Systems characterized by the type of code used
    • H04L 1/0067 Rate matching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 25/00 Baseband systems
    • H04L 25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L 25/0262 Arrangements for detecting the data rate of an incoming signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 2203/00 Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J 2203/0001 Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J 2203/0073 Services, e.g. multimedia, GOS, QOS
    • H04J 2203/0082 Interaction of SDH with non-ATM protocols
    • H04J 2203/0085 Support of Ethernet

Definitions

  • This application relates to the field of communication technology, and in particular to a method, device, processor, network device, and system for improving transmission rate.
  • Communication equipment is costly, so a smooth-evolution approach is usually adopted when equipment is expanded and upgraded, that is, higher performance and newer features are first obtained by upgrading modules and line cards.
  • As a result, the backplane has become one of the biggest bottlenecks limiting the upgrade of communication equipment.
  • The performance of the backplane often determines the upgrade prospects of the communication equipment and the life cycle of the equipment.
  • However, as hardware, backplanes are sometimes difficult to adapt to future performance requirements.
  • the embodiments of the present application provide methods, devices, processors, network equipment, and systems for improving transmission rate.
  • A method for improving the transmission rate is provided, including: obtaining first data at a first rate; adding additional data in a certain proportion to the first data to obtain second data; and sending the second data at a second rate, the second rate being greater than the first rate.
  • the second rate is not an integer multiple of the first rate.
  • Sending the second data at the second rate includes: using a physical channel to send the second data at the second rate, where the data transmission rate of the physical channel is determined by bit multiplexing based on expanded virtual channels, and the number of expanded virtual channels is determined based on the number of virtual channels transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate.
  • For the case where the second rate is not an integer multiple of the first rate, the number of virtual channels is adjusted so that the number of physical channels at the second rate can be supported.
  • the additional data is located in the first part of the second data.
  • the additional data can be added to the first data as a whole.
  • The first part of the second data can be the position before or after the alignment marker (AM) character.
  • The first part of the additional data is located in the first part of the second data, the second part of the additional data is located in the second part of the second data, and a portion of the first data is included between the first part and the second part of the additional data.
  • In this way, the additional data is inserted into the first data in segments.
  • the first data can be divided into multiple parts, and additional data segments are added to different parts of the first data.
  • When the first data includes AM characters, adding additional data in a certain proportion to the first data includes: using the AM characters in the first data as boundaries and inserting additional data into the first data in a certain proportion. Since the AM character provides an existing marker for data recognition, it can be used as a reference point for inserting the additional data, which facilitates subsequent identification of the inserted data.
  • Adding additional data in a certain proportion to the first data to obtain the second data includes: when the first data is media access control (MAC) layer data, inserting first additional data into the MAC layer data at a first ratio to obtain the second data; or, when the first data is data transmitted on the virtual lanes (VLs) after distribution by the forward error correction (FEC) sublayer, inserting second additional data into the data transmitted on the VLs after FEC sublayer distribution at a second ratio to obtain the second data; or, when the first data is data after VL remapping and before entering the physical link, inserting third additional data into the data after VL remapping and before entering the physical link at a third ratio to obtain the second data; or, when the first data is data transmitted on the physical link, inserting fourth additional data into the data transmitted on the physical link at a fourth ratio to obtain the second data; or, when the first data is original data, inserting fifth additional data into the original data at a fifth ratio to obtain the second data.
  • Adding additional data in a certain proportion to the first data to obtain the second data includes: based on the second rate, encoding the first data with a forward error correction (FEC) code to obtain the second data.
  • Encoding the first data with an FEC code based on the second rate to obtain the second data includes: when the first data is data transmitted on the VLs after FEC sublayer distribution and encoded with a first FEC code type, performing secondary encoding on that data with a second FEC code type matching the rate ratio to obtain the second data, where the rate ratio is the ratio of the second rate to the first rate; or, when the first data is data after VL remapping and before entering the physical link that is encoded with the first FEC code type, performing secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data transmitted on a physical link and encoded with the first FEC code type, performing secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data encoded with the first FEC code type, decoding that data to obtain original data and encoding the original data with a third FEC code type matching the second rate to obtain the second data, where the overhead of the third FEC code type is greater than the overhead of the first FEC code type; or, when the first data is original data, encoding the original data with the third FEC code type matching the second rate to obtain the second data, where the overhead of the third FEC code type is greater than the overhead of the first FEC code type.
  • An apparatus for improving the transmission rate is provided, including: an acquisition module, configured to acquire first data at a first rate; a processing module, configured to add additional data in a certain proportion to the first data to obtain second data; and a sending module, configured to send the second data at a second rate, the second rate being greater than the first rate.
  • the second rate is not an integer multiple of the first rate.
  • The sending module is configured to use a physical channel to send the second data at the second rate, where the data transmission rate of the physical channel is determined by bit multiplexing based on expanded virtual channels, and the number of expanded virtual channels is determined based on the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate.
  • the additional data is located in the first part of the second data.
  • The first part of the additional data is located in the first part of the second data, the second part of the additional data is located in the second part of the second data, and a portion of the first data is included between the first part and the second part of the additional data.
  • The first data includes an alignment marker (AM) character, and the processing module is configured to use the AM character in the first data as a boundary and insert additional data into the first data in a certain proportion.
  • The processing module is configured to: when the first data is media access control (MAC) layer data, insert first additional data into the MAC layer data at a first ratio to obtain the second data; or, when the first data is data transmitted on the virtual lanes (VLs) after distribution by the forward error correction (FEC) sublayer, insert second additional data into the data transmitted on the VLs after FEC sublayer distribution at a second ratio to obtain the second data; or, when the first data is data after VL remapping and before entering the physical link, insert third additional data into the data after VL remapping and before entering the physical link at a third ratio to obtain the second data; or, when the first data is data transmitted on the physical link, insert fourth additional data into the data transmitted on the physical link at a fourth ratio to obtain the second data; or, when the first data is original data, insert fifth additional data into the original data at a fifth ratio to obtain the second data.
  • the processing module is configured to use a forward error correction FEC code to encode the first data based on the second rate to obtain the second data.
  • The processing module is configured to: when the first data is data transmitted on the VLs after FEC sublayer distribution and encoded with a first FEC code type, perform secondary encoding on that data with a second FEC code type matching the rate ratio to obtain the second data, the rate ratio being the ratio of the second rate to the first rate; or, when the first data is data after VL remapping and before entering the physical link that is encoded with the first FEC code type, perform secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data transmitted on the physical link and encoded with the first FEC code type, perform secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data encoded with the first FEC code type, decode that data to obtain original data and encode the original data with a third FEC code type matching the second rate to obtain the second data, the overhead of the third FEC code type being greater than that of the first FEC code type; or, when the first data is original data, encode the original data with the third FEC code type matching the second rate to obtain the second data, the overhead of the third FEC code type being greater than that of the first FEC code type.
  • a processor is also provided, and the processor can be used to execute any of the above-mentioned methods.
  • a network device is also provided, and the network device includes the above-mentioned processor.
  • the network device includes a line card, and the line card includes the aforementioned processor.
  • the network device further includes a backplane.
  • the network device further includes a CDR circuit located between the line card and the backplane, and the line card communicates with the backplane through the CDR circuit.
  • a network system is also provided.
  • the network system includes one or more network devices, and the network devices are any one of the aforementioned network devices.
  • A device for improving the transmission rate is provided, comprising a memory and a processor, where the memory stores at least one instruction or program, and the at least one instruction or program is loaded and executed by the processor to implement any of the above methods for improving the transmission rate.
  • a computer-readable storage medium is also provided, and at least one instruction or program is stored in the storage medium, and the instruction or program is loaded and executed by a processor to implement the method for improving the transmission rate as described above.
  • the device includes a transceiver, a memory, and a processor.
  • the transceiver, the memory, and the processor communicate with each other through an internal connection path
  • the memory is used to store instructions or programs
  • The processor is used to execute the instructions or programs stored in the memory, to control the transceiver to receive signals and to control the transceiver to send signals; when the processor executes the instructions or programs stored in the memory, the processor is caused to perform the method in any of the foregoing possible implementations.
  • the processor, memory, and transceiver may communicate through a bus.
  • There are one or more processors, and one or more memories.
  • the memory may be integrated with the processor, or the memory and the processor may be provided separately.
  • The memory can be a non-transitory memory, such as a read-only memory (ROM), which can be integrated with the processor on the same chip or arranged on different chips; the embodiments of the present application do not limit the type of memory or the arrangement of the memory and the processor.
  • a computer program (product) is provided, the computer program (product) includes: computer program code, when the computer program code is executed by a computer, the computer executes the methods in the above aspects.
  • A chip is provided, including a processor configured to call, from a memory, and run instructions or programs stored in the memory, so that a communication device on which the chip is installed performs the methods in the foregoing aspects.
  • Another chip is provided, including an input interface, an output interface, a processor, and a memory.
  • The input interface, the output interface, the processor, and the memory are connected through an internal connection path; the processor is configured to execute the code in the memory, and when the code is executed, the processor is configured to perform the methods in the foregoing aspects.
  • FIG. 1 is a schematic diagram of a network system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of a network device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • FIG. 4A is a schematic diagram of the logical architecture of an Ethernet interface provided by an embodiment of the present application.
  • FIG. 4B is a flowchart of a method for improving the transmission rate provided by an embodiment of the present application.
  • FIG. 5A and FIG. 5B are schematic diagrams of encoding in two embodiments of the present application.
  • FIG. 5C is a schematic diagram of a method for improving a data transmission rate provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of various scenarios for inserting additional data pads provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a scenario where stuffing MAC frame(s) is added to the MAC layer according to an embodiment of the present application
  • FIG. 8A is a schematic diagram of a method for extending VL through 8 VL corresponding to AM0 to AM7 according to an embodiment of the present application;
  • FIG. 8B is a schematic diagram of a method for reusing 8 VLs by 24 VLs to expand a VL provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an apparatus for improving transmission rate provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a device for improving transmission rate provided by an embodiment of the present application.
  • FIG. 1 shows a network scenario of an embodiment of this application.
  • One or more user equipments 11, 12, 13, etc. access the network via multiple network devices 11, 12, reach the remote network device 31 via one or more intermediate network devices 20 in the network, and finally communicate with one or more remote user devices 41, 42, 43 via the network device 31.
  • the network in Fig. 1 may be a local area network or an operator’s network, and the network device in Fig. 1, such as a routing device or a switching device, can be used as a forwarding device or a gateway device in the network.
  • the network device can be a communication device or other electronic device.
  • the network device includes a line card (line card), a main processing unit (MPU), and a backplane.
  • the line card and the MPU are interconnected through the backplane.
  • the line card and MPU can be interconnected with the backplane through connectors.
  • Line cards are also called line processing units (LPUs), which are used to forward packets, and can be classified into 10G (gigabit), 20G, 40G, 50G, 100G, 120G, 240G, etc. according to their forwarding capabilities.
  • the MPU is responsible for the centralized control and management of network devices. For example, the MPU can perform routing calculations, device management and maintenance functions, data configuration functions, and data storage functions.
  • the network device may also include a physical interface card (PIC).
  • PIC physical interface card
  • The PIC can be inserted into the interface board of the line card, and is responsible for converting the optical/electrical signal into a data frame and checking the validity of the data frame.
  • the network device also includes a switch fabric (switch fabric), which is also called a switch fabric unit (SFU), and is responsible for data exchange between each LPU.
  • the switching network board can be interconnected with the main control board and the line card through the backplane.
  • The backplane includes multiple channels. The number of channels on a backplane differs according to its rate and specification, but the number of channels on a given backplane cannot be changed. Each channel on the backplane can be used to transmit data. As for any circuit board, the data transmission rate supported by its channels has a certain upper limit, so when the network equipment needs to be upgraded, the backplane of the existing network equipment may be unable to support the rate of the new serializer/deserializer (Serializer/Deserializer, SerDes) in a compatible processor.
  • The processor may be a network processor (NP) or a central processing unit (CPU), and the processor is used on a port chip or a switching chip.
  • The above-mentioned port chip or switching chip may be an application-specific integrated circuit (ASIC) or a clock and data recovery (clock & data recovery, CDR) circuit, and the SerDes may be a circuit in an ASIC or a CDR.
  • Upper limit of backplane capacity = number of channels × maximum data rate that a single channel can transmit.
  • The SerDes rate may follow the rates defined in the IEEE 802.3 standard.
  • SerDes can also support other baud rates that are integer multiples of 156.25 MHz, for example 112.5 Gbps 4-level pulse amplitude modulation (4-level PAM, PAM4), i.e., 56.25 Giga-baud (GBd).
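  • As a rough illustration of the relationship between bit rate, baud rate, and the 156.25 MHz reference mentioned above, the following minimal Python sketch computes the PAM4 baud rate for a given bit rate and expresses it as a multiple of 156.25 MHz; the values are illustrative only and are not taken from any standard table:

```python
# Hypothetical sketch: relating a PAM4 SerDes bit rate to its baud rate and to a
# 156.25 MHz reference clock, as discussed above. Values are illustrative only.

REF_CLOCK_HZ = 156.25e6          # assumed PLL reference / fundamental frequency
BITS_PER_SYMBOL_PAM4 = 2         # PAM4 carries 2 bits per symbol

def pam4_baud_rate(bit_rate_bps: float) -> float:
    """Baud rate of a PAM4 lane for a given bit rate."""
    return bit_rate_bps / BITS_PER_SYMBOL_PAM4

def multiple_of_reference(baud_rate_bd: float) -> float:
    """How many times the 156.25 MHz reference fits into the baud rate."""
    return baud_rate_bd / REF_CLOCK_HZ

if __name__ == "__main__":
    bit_rate = 112.5e9                      # 112.5 Gbps PAM4, as in the text
    baud = pam4_baud_rate(bit_rate)         # -> 56.25 GBd
    print(f"baud rate: {baud / 1e9} GBd")
    print(f"multiple of 156.25 MHz: {multiple_of_reference(baud)}")  # -> 360.0
```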
  • There is currently no standard for the next-generation Ethernet interface rate.
  • 800GbE with an 8× electrical interface is used as an example for discussion, but the scheme is not limited to this rate.
  • The key is to enable the current backplane to support the next-generation Ethernet rate, so that the capacity of the whole device is increased by increasing the port rate on the line cards.
  • The backplane is connected to the line cards through connectors. If the backplane and connectors of the 8×50G era are used to achieve an 8×100G rate, it is difficult to meet expectations in view of the performance indicators of the printed circuit board (PCB) and the connectors.
  • Otherwise, the overall capacity remains the same as the current one, with no improvement; moreover, future standards may not necessarily support the 50G single-channel rate standard. Therefore, other single-channel rates need to be considered.
  • Because the frequencies for which a SerDes is designed are often limited and the SerDes can only work within a certain range, it is necessary to confirm whether a rate can be supported by the SerDes. If there is a range not supported by the SerDes frequency (a frequency hole), it needs to be avoided.
  • In a SerDes, a phase-locked loop (PLL) is the core circuit that determines its operating frequency.
  • The working frequency of a PLL is usually not continuously adjustable, but is a multiple of a certain fundamental frequency.
  • Although a flexible PLL design can support fractional-multiple operation in addition to integer multiples, the frequencies at which it can operate are still not continuously adjustable. This determines in principle that a SerDes cannot support all frequencies, but only certain fixed frequency points. The frequency ranges not supported by the PLL are called "frequency holes".
  • Since the design of a SerDes is often optimized for the frequency points at which it needs to work, it may not support non-operating frequencies, or its performance at such frequencies may be poor. For example, 53.125 Gbps and 106.25 Gbps are commonly used SerDes rates, while 80 Gbps may be an uncommon rate, so the design is simplified to avoid the frequency band near 80 Gbps, which makes the frequency hole larger. For example, a certain SerDes may choose not to support rates between 75 Gbps and 85 Gbps in order to simplify the design and reduce cost.
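  • The following is a hypothetical sketch of the frequency-hole check described above; the supported ranges are invented for illustration and do not describe any real SerDes:

```python
# Hypothetical sketch of the "frequency hole" check: given the rate ranges a
# SerDes supports, test whether a candidate lane rate can be used or must be
# avoided. The supported ranges below are illustrative assumptions only.

SUPPORTED_RANGES_GBPS = [
    (25.0, 29.0),    # assumed range around 26.5625 Gbps
    (50.0, 58.0),    # assumed range around 53.125 Gbps
    (100.0, 115.0),  # assumed range around 106.25 Gbps
]

def in_frequency_hole(rate_gbps: float) -> bool:
    """Return True if the rate falls outside every supported range."""
    return not any(lo <= rate_gbps <= hi for lo, hi in SUPPORTED_RANGES_GBPS)

if __name__ == "__main__":
    for candidate in (53.125, 80.0, 106.25):
        status = "frequency hole - avoid" if in_frequency_hole(candidate) else "supported"
        print(f"{candidate} Gbps: {status}")
```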
  • the additional data may be a special and identifiable code block.
  • the embodiment of the present application also provides a method for expanding the number of PCS lanes to adapt to non-standard rate physical interfaces.
  • the backplane connects the main control board and the line card.
  • the main control board includes ASIC1, and the line card includes ASIC2.
  • the main control board also includes a clock and data recovery (clock&data recovery, CDR) circuit CDR1 that communicates with ASIC1.
  • the line card also includes a clock and data recovery circuit CDR2 that communicates with ASIC2.
  • the CDR circuit may appear on the main control board or the line card, but the CDR circuit may not be required if the ASIC capability is sufficient.
  • The embodiments of this application are related to some or all of ASIC1, CDR1, CDR2, and ASIC2.
  • the ASIC1 in the main control board can communicate with the backplane
  • the ASIC2 in the line card can communicate with the backplane.
  • the main control board and the line card can be respectively connected to the backplane through connectors, so as to communicate with the backplane.
  • FIG. 4A shows the logical layer architecture corresponding to the Ethernet interface on the network device in FIG. 1, FIG. 2, or FIG. 3.
  • PCS/FEC is the PCS layer and FEC sublayer functions defined by the IEEE 802.3 standard. This part of the function is usually integrated in the ASIC.
  • The function of the physical coding sublayer (PCS) is to perform encoding, transcoding, scrambling, AM insertion, FEC encoding, and other processing on the data from the MAC layer, and to distribute the processed data to multiple virtual lanes (VLs) or physical lanes (PLs).
  • The rule for distributing the processed data to multiple VLs or PLs is not limited in this application; for example, it can be determined based on the scenario or on data encoding requirements. Taking a 200GE/400GE Ethernet interface as an example, any two consecutive FEC symbols are from different codewords, that is, two consecutive FEC symbols of a codeword are distributed to different VLs or PLs. For a 100GE Ethernet interface, however, the FEC symbols are sent to each VL or PL in a round-robin manner.
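  • As an illustration of the round-robin distribution described for the 100GE case, the sketch below distributes dummy FEC symbols over a small number of lanes; the symbol representation and lane count are assumptions for illustration only:

```python
# Hypothetical sketch of round-robin distribution of FEC symbols to virtual
# lanes. The 200GE/400GE rule (two consecutive symbols of a codeword on
# different lanes) is only noted, not fully reproduced here.

from typing import List

def distribute_round_robin(symbols: List[bytes], num_lanes: int) -> List[List[bytes]]:
    """Send FEC symbols to lanes one by one, cycling through the lanes."""
    lanes: List[List[bytes]] = [[] for _ in range(num_lanes)]
    for i, sym in enumerate(symbols):
        lanes[i % num_lanes].append(sym)
    return lanes

if __name__ == "__main__":
    # 16 dummy symbol placeholders (each represented here as a single byte)
    codeword = [bytes([i]) for i in range(16)]
    lanes = distribute_round_robin(codeword, num_lanes=4)
    for idx, lane in enumerate(lanes):
        print(f"VL{idx}: {[s[0] for s in lane]}")
    # With round-robin and more than one lane, consecutive symbols of a codeword
    # always land on different lanes.
```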
  • the processed data is transmitted to the physical medium attachment sublayer (PMA) through n VLs, and the PMA transmits the data transmitted on multiple VLs to p channels.
  • bit-mux may also be performed before PMA transmission.
  • the data processed by the PCS/FEC is distributed to p channels on the attachment unit interface (AUI), and the p channels may be VLs.
  • the data on the p channels are mapped to the m physical channels PL on the backplane (backplane) after performing a remapping operation, where m and p are positive integers and m>p>0.
  • the data leaves the backplane through m physical channels, is regrouped and then reaches another PCS/FEC through p channels, and the p channels may be VLs.
  • another PCS/FEC may support a different number of VLs, which is not limited in the embodiment of the present application.
  • the data processed by the PCS layer is distributed to 16 VLs, and the equivalent bit rate on each VL is 26.5625Gbps.
  • The number of physical lanes (PLs) is determined by the specific application. For example, when the backplane is designed with single-channel 50G PAM4 technology (also commonly referred to as 56G PAM4 in the industry, with an actual rate of 53.125 Gbps), the number of PLs is 8. Assuming that the total number of channels in the backplane design here is M, the design capacity of the backplane is 50G × M.
  • The number of physical channels corresponding to an Ethernet interface of a certain rate standard is N1.
  • the number of physical channels P that a certain Ethernet interface rate can support depends on the number of virtual channels N.
  • The bit rate of data transmission on the electrical interface needs to be increased, but the amount of payload is fixed, so additional data needs to be inserted into the original data stream.
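  • A minimal sketch of the required proportion of additional data, assuming the rate is raised by the 17/16 factor used in the example later in this description:

```python
# Minimal sketch of the proportion of additional data needed when the payload is
# fixed but the line rate is raised from a first rate to a second rate. The
# 17/16 rate increase is illustrative and matches the example discussed below.

from fractions import Fraction

def extra_to_first_ratio(first_rate: Fraction, second_rate: Fraction) -> Fraction:
    """Proportion of additional data relative to the first data."""
    return second_rate / first_rate - 1

if __name__ == "__main__":
    first = Fraction(16)                    # arbitrary rate units
    second = first * Fraction(17, 16)       # second rate = 17/16 of the first rate
    r = extra_to_first_ratio(first, second)
    print(r)                                # 1/16: extra data relative to first data
    print(r / (1 + r))                      # 1/17: extra data relative to second data
```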
  • Increasing the data transmission rate on the electrical interface means that the insertion loss of the backplane channels will be greater and the crosstalk between signals will be greater, which reduces the performance of the link and may even make the bit error rate on the link too high, causing other problems.
  • the embodiment of the present application provides a method for improving the transmission rate.
  • The method increases the transmission rate by adding additional data in a certain proportion to the first data, thereby breaking the limitation that the backplane imposes on capacity expansion and upgrade of the device: it can not only avoid frequency holes but also adapt to future performance requirements. Referring to FIG. 4B, the method includes the following steps.
  • the first data may be FEC-encoded data or raw data.
  • the embodiment of the present application does not limit the type of the first data.
  • Taking the network device shown in FIG. 2 as an example, in the logical layer architecture corresponding to the Ethernet interface on the network device, the PCS/FEC performs encoding, transcoding, scrambling, AM insertion, FEC encoding, and other processing on the data from the MAC layer to obtain the first data.
  • The processed data is the first data.
  • The rate at which the first data is transmitted may be the first rate, and in the embodiments of the present application the first data may be acquired on the multiple VLs or PLs that carry the first data after the PCS/FEC.
  • the data after VL remapping and before entering the physical link may also be used as the first data obtained at the first rate.
  • a physical link may have multiple physical channels (Physical Lanes).
  • the first data can also be obtained on the physical link, or the original data can also be obtained before the ASIC performs FEC encoding, and the original data is the first data. Alternatively, the first data can also be obtained at the CDR communicating with the ASIC.
  • When additional data is added in a certain proportion to the first data, the additional data can be located in the first part of the second data. In this way, the additional data is added to the first data as a whole.
  • the specific location of the first part of the second data is not limited, and it can be determined based on the content of the first data or based on the scene.
  • the first part of the second data may be a position before or after the AM character.
  • The first part of the additional data is located in the first part of the second data, the second part of the additional data is located in the second part of the second data, and a portion of the first data is included between the first part and the second part of the additional data.
  • In this way, the additional data is inserted into the first data in segments.
  • the first data may be divided into multiple parts, and different parts of the additional data are added between different parts of the first data.
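  • A hypothetical sketch of such segmented insertion, with part sizes and pad contents chosen only for illustration:

```python
# Hypothetical sketch of segmented insertion: the first data is split into equal
# parts and a pad segment is placed after each part, so the additional data is
# spread through the second data rather than added as one block.

def insert_segmented(first_data: bytes, pad_segment: bytes, num_parts: int) -> bytes:
    """Interleave pad segments between parts of the first data."""
    part_len = len(first_data) // num_parts
    out = bytearray()
    for i in range(num_parts):
        start = i * part_len
        end = start + part_len if i < num_parts - 1 else len(first_data)
        out += first_data[start:end] + pad_segment
    return bytes(out)

if __name__ == "__main__":
    first = bytes(range(64))
    second = insert_segmented(first, pad_segment=b"\xFF" * 4, num_parts=4)
    print(len(first), len(second))   # 64 -> 80: a 1/4 proportion of additional data
```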
  • Manner 1: In an embodiment of the present application, adding additional data is implemented by integrating FEC using the extra overhead brought about by increasing the rate.
  • ASIC1 communicates with ASIC2 via the backplane.
  • ASIC1 includes MAC, PCS, and distribution modules.
  • ASIC1 may also include some circuits for adjusting positions.
  • CDR1 may also be included between ASIC1 and the backplane.
  • ASIC2 includes Alignment/Deskew (Alignment/Deskew) circuit, regroup circuit, distribution circuit, PCS and MAC.
  • rate adjustment may also be performed at the positions of the alignment/deskew (Alignment/Deskew) circuit 9, the demultiplexing circuit 10, and the recombination distribution circuit 11.
  • CDR2 may also be included between ASIC2 and the backplane.
  • the PCS includes the FEC sublayer, and the data reaches the distribution circuit after being processed by the FEC sublayer, and is distributed to N VLs.
  • The data from the N VLs arrives at the backplane after several possible position adjustment operations, shown in FIG. 5C as the operations of functional circuits 4, 5, and 6, where 4 is a coding function circuit, 5 is a bit multiplexing circuit, and 6 is a coding function circuit.
  • The data then reaches the Alignment/Deskew circuit of ASIC2 (i.e., 9); after being processed by the Alignment/Deskew circuit, it reaches the demultiplexing circuit 10 and the regroup and distribution circuits in 11.
  • the ratio of the extra data to the first data is 1/16, and the ratio of the extra data to the second data is 1/17.
  • For an FEC code type that matches the rate ratio, the ratio of the encoded data to the input data equals the rate ratio; here, the FEC code type matching the rate ratio is one whose ratio of encoded data to input data is 17/16, for example a BCH(340, 320) code (Bose-Chaudhuri-Hocquenghem forward error correction code, BCH code).
  • Alternatively, an FEC + pad method is used to increase the rate: for example, a Hamming(127, 120) code is used and a 50-bit pad is inserted after every 100 Hamming code blocks, with the 50-bit pad serving as additional data.
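  • The following minimal check verifies that both examples above correspond to a 17/16 ratio, assuming the ratio is measured against the payload before this secondary encoding (100 × 120 bits in the Hamming case):

```python
# Minimal check of the two 17/16 examples above, using exact fractions.
# BCH(340, 320): 340 encoded bits per 320 input bits.
# Hamming(127, 120) + pad: 100 blocks of 127 bits plus a 50-bit pad, carrying
# 100 x 120 input bits.

from fractions import Fraction

bch_ratio = Fraction(340, 320)
hamming_pad_ratio = Fraction(100 * 127 + 50, 100 * 120)

print(bch_ratio)          # 17/16
print(hamming_pad_ratio)  # 17/16
assert bch_ratio == hamming_pad_ratio == Fraction(17, 16)
```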
  • this solution can be implemented in multiple places. For example, based on the second rate, the FEC code is used to encode the first data to obtain the second data, including but not limited to the following methods:
  • the FEC sublayer distributes the data to multiple VLs, and then the VL data can directly pass to one or more second-level FEC encoders (encoders), and the number of VLs remains unchanged after encoding.
  • In this case, the first data is the data transmitted on the VLs after FEC sublayer distribution and encoded with the first FEC code type; the second FEC code type matching the rate ratio is used to perform secondary encoding on this data to obtain the second data, where the rate ratio is the ratio of the second rate to the first rate.
  • the VL has been bit-multiplexed to generate a corresponding number of physical channels.
  • the data streams on different physical channels can be subjected to the second-level FEC encoding in the ASIC.
  • In this case, the first data is the data after VL remapping and before entering the physical link that is encoded with the first FEC code type; the second FEC code type matching the rate ratio is used to perform secondary encoding on this data to obtain the second data.
  • Alternatively, the first data is data transmitted on a physical link and encoded with the first FEC code type, and the second FEC code type matching the rate ratio is used to perform secondary encoding on this data to obtain the second data.
  • The ASIC may also directly use a higher-cost single-stage or multi-stage FEC for encoding (reference numeral 2).
  • the ASIC directly encodes according to the new higher gain FEC.
  • When the first data is original data, the original data is encoded using a third FEC code type matching the second rate to obtain the second data, and the overhead of the third FEC code type is greater than that of the first FEC code type.
  • CDR1 reorganizes, decodes, and corrects data on the link, and then performs a new FEC encoding.
  • In this case, the first data is data encoded with the first FEC code type; the data encoded with the first FEC code type is decoded to obtain the original data, the original data is encoded with the third FEC code type matching the second rate to obtain the second data, and the overhead of the third FEC code type is greater than the overhead of the first FEC code type.
  • The overhead of an FEC code type is the data difference, that is, the difference between the encoded data and the original data, where the encoded data is the data obtained by encoding the original data with that FEC code type. For example, if the coded data obtained by encoding the original data with the first FEC code type is coded data 1, the overhead of the first FEC code type is the difference between coded data 1 and the original data; if the coded data obtained by encoding the original data with the third FEC code type is coded data 3, the overhead of the third FEC code type is the difference between coded data 3 and the original data.
  • The new FEC (that is, the third FEC code type) may be, for example, a Reed-Solomon FEC (RS-FEC) such as RS(576, 514). The new FEC may be a completely different type of FEC from the first FEC code type, but with a stronger error correction capability.
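  • A minimal sketch of the overhead comparison in the sense defined above (encoded data minus original data per codeword); RS(576, 514) is the example third FEC named in the text, while RS(544, 514) is assumed here as the first FEC code type because it is the Reed-Solomon FEC commonly used in IEEE 802.3:

```python
# Minimal sketch of the overhead comparison: overhead is the difference between
# encoded data and original data per codeword. RS(576, 514) is the example
# "third FEC" from the text; RS(544, 514) is an assumed first FEC code type.

def fec_overhead_symbols(n: int, k: int) -> int:
    """Parity symbols added per codeword by an RS(n, k) code."""
    return n - k

first_fec = (544, 514)   # assumed first FEC code type
third_fec = (576, 514)   # example third FEC code type from the text

oh_first = fec_overhead_symbols(*first_fec)   # 30 symbols
oh_third = fec_overhead_symbols(*third_fec)   # 62 symbols
print(oh_first, oh_third)
assert oh_third > oh_first   # the third FEC has greater overhead, as required
```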
  • Manner 2: In another embodiment of the present application, inserting additional data at the MAC layer is taken as an example. This method is suitable for scenarios where the link condition is relatively healthy and the SNR still meets the requirements after the rate is increased.
  • In this case, the first data is MAC layer data, and the first additional data is inserted into the MAC layer data at the first ratio to obtain the second data.
  • the first ratio may be determined according to the data volume of the second data and the first data, which is not limited in the embodiment of the present application.
  • Stuffing MAC frame(s) can be inserted between normal MAC frame(s), and the frames used for stuffing can be idle frames or other specially defined data frames that can be identified and discarded by the peer MAC layer.
  • The stuffing frame here is similar to the additional data pad described above. In this way, the MAC at the receiving end can identify the original data, and the additional data pad can be located through code blocks in the original data or characters in the message.
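  • A hypothetical sketch of this stuffing-frame approach; the frame contents and the one-in-sixteen insertion ratio are illustrative assumptions rather than values required by the text:

```python
# Hypothetical sketch of "Manner 2": inserting identifiable stuffing frames
# between normal MAC frames at a given ratio, so that the peer MAC layer can
# recognize and discard them.

from typing import List

STUFFING_FRAME = b"\x00" * 64          # assumed specially defined, identifiable frame

def insert_stuffing(frames: List[bytes], every_n: int = 16) -> List[bytes]:
    """Insert one stuffing frame after every `every_n` normal MAC frames."""
    out: List[bytes] = []
    for i, frame in enumerate(frames, start=1):
        out.append(frame)
        if i % every_n == 0:
            out.append(STUFFING_FRAME)
    return out

def strip_stuffing(frames: List[bytes]) -> List[bytes]:
    """Receiving side: identify and discard the stuffing frames."""
    return [f for f in frames if f != STUFFING_FRAME]

if __name__ == "__main__":
    normal = [bytes([i]) * 64 for i in range(1, 33)]   # 32 dummy MAC frames
    padded = insert_stuffing(normal)
    assert strip_stuffing(padded) == normal
    print(f"{len(normal)} normal frames -> {len(padded)} frames on the wire")
```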
  • Manner 3: In another embodiment of the present application, if the first data includes AM characters, the AM characters in the first data are used as boundaries, and additional data is inserted into the first data in a certain proportion.
  • the receiving end needs to be able to identify and delete the inserted extra data in order to recover the original data according to the processing flow of the PCS layer.
  • Since the AM character provides an existing marker for data recognition, additional data can be inserted using the AM character as a reference point, which facilitates subsequent identification of the inserted data.
  • These data can be inserted along with AM characters on VL at 4 in Fig. 5C, or can be implemented at 6 in Fig. 5C after bit multiplexing.
  • At position 4 in FIG. 5C, the first data is the data transmitted on the VLs after FEC sublayer distribution; with the AM characters in that data as boundaries, the second additional data is inserted at the second ratio to obtain the second data.
  • At position 6 in FIG. 5C, the first data is the data after VL remapping and before entering the physical link; with the AM characters as boundaries, the third additional data is inserted into this data at the third ratio to obtain the second data.
  • the second ratio and the third ratio may be determined based on the data volume of the first data and the data volume of the second data, which is not limited in this application.
  • The additional data pad can have different lengths depending on the implementation. There are also many ways to insert the additional data pad, such as inserting it before the AM character or after the AM character, as long as the ratio of the inserted additional data pads to the data (including the AM characters) meets the requirement. In an implementation, enough additional data pads may also be inserted at one time instead of being inserted in segments.
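  • A hypothetical sketch of pad insertion anchored on AM characters, with block sizes, the AM pattern, and the pad pattern chosen only for illustration:

```python
# Hypothetical sketch of "Manner 3": inserting a pad block right after each
# alignment marker (AM) so the receiver can use the AM as a reference point to
# find and remove the pad.

from typing import List

AM_BLOCK = b"\xAA" * 8     # assumed alignment-marker block
PAD_BLOCK = b"\x00" * 8    # assumed identifiable additional-data pad

def insert_pad_after_am(blocks: List[bytes]) -> List[bytes]:
    """Insert one pad block immediately after every AM block."""
    out: List[bytes] = []
    for blk in blocks:
        out.append(blk)
        if blk == AM_BLOCK:
            out.append(PAD_BLOCK)
    return out

def remove_pad_after_am(blocks: List[bytes]) -> List[bytes]:
    """Receiver side: drop the block that follows each AM block."""
    out: List[bytes] = []
    skip_next = False
    for blk in blocks:
        if skip_next:
            skip_next = False
            continue
        out.append(blk)
        if blk == AM_BLOCK:
            skip_next = True
    return out

if __name__ == "__main__":
    stream = [AM_BLOCK] + [bytes([i]) * 8 for i in range(1, 17)]
    padded = insert_pad_after_am(stream)
    assert remove_pad_after_am(padded) == stream
```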
  • extra data is inserted at 7 shown in Fig. 5C.
  • the fourth additional data is inserted into the data transmitted on the physical link at a fourth ratio to obtain the second data.
  • the fifth additional data is inserted into the original data at a fifth ratio to obtain the second data.
  • the fourth ratio and the fifth ratio can be determined based on the data volume of the first data and the data volume of the second data, which is not limited in this application.
  • The second rate may or may not be an integer multiple of the first rate.
  • When the second rate is not an integer multiple of the first rate, the rate of each virtual channel is higher after the rate increase; even if demultiplexing is performed at the minimum rate of the virtual channels, the result will exceed the rate that a physical channel can withstand. Therefore, the method provided in the embodiments of this application expands the virtual channels and uses bit multiplexing based on the expanded virtual channels to determine the rate of the physical channels at the second rate.
  • another embodiment of the present application provides a method for expanding the virtual channel VL.
  • N1: the number of virtual channels for transmitting the first data.
  • P1: the number of physical channels defined by the standard.
  • N2: the number of non-standard VLs that need to be extended, that is, the number of virtual channels after expansion.
  • P2: the number of physical channels corresponding to a single Ethernet port when data is transmitted at the B2 rate because the backplane cannot support the B1 rate (that is, the number of physical channels corresponding to the data transmission interface transmitting data at the second rate).
  • Let N2 be equal to the least common multiple of N1 and P2. In this way, N2 is divisible by P2, and the N2 VLs can simply reuse the N1 VL structures.
  • As shown in FIG. 8A, there are 8 VLs, corresponding to AM0 to AM7; as shown in FIG. 8B, this is expanded to 24 VLs by reusing the 8 VLs, repeated three times. Because this is a backplane connection, and the backplane is implemented internally by the manufacturer, it can be determined which PL corresponds to which interface; and since the correspondence between PLs and VLs is known, the relationship between the interfaces and the VLs is known once the backplane is connected, so there is no need to search for AM characters again and then use the AMs to distinguish the channels. Of course, different AM characters can also be selected, with AM0 to AM23 all different, which is another way to extend the VLs.
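  • A hypothetical sketch of this VL expansion rule; N1 = 8 and P2 = 3 are assumptions consistent with the 8-VL and 24-VL figures:

```python
# Hypothetical sketch of the VL expansion rule: choose the expanded VL count N2
# as the least common multiple of N1 (VLs carrying the first data) and P2
# (physical channels at the second rate), so N2 is divisible by P2 and the N1
# AM markers can simply be reused.

from math import gcd

def expanded_vl_count(n1: int, p2: int) -> int:
    """N2 = least common multiple of N1 and P2."""
    return n1 * p2 // gcd(n1, p2)

def reused_am_markers(n1: int, n2: int) -> list:
    """Reuse the N1 AM markers cyclically across the N2 expanded VLs."""
    return [f"AM{i % n1}" for i in range(n2)]

if __name__ == "__main__":
    n1, p2 = 8, 3                       # assumed values consistent with FIG. 8A/8B
    n2 = expanded_vl_count(n1, p2)      # -> 24
    print(n2, n2 % p2 == 0)             # 24, True: divisible by P2
    print(reused_am_markers(n1, n2))    # AM0..AM7 repeated three times
```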
  • sending the second data at the second rate includes: using the physical channel to send the second data at the second rate.
  • The data transmission rate of the physical channel is determined by bit multiplexing based on the expanded virtual channels, and the number of expanded virtual channels is determined based on the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate.
  • When the number of expanded virtual channels is determined based on the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate, it may be determined as the least common multiple of the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate.
  • The method provided by the embodiments of the present application increases the transmission rate by adding additional data in a certain proportion to the first data, so that when the device is expanded and upgraded, the limitation of the backplane on the expansion and upgrade of the device is broken; this not only avoids frequency holes but also adapts to future performance requirements.
  • the embodiment of the present application provides a device for improving the transmission rate.
  • the device includes:
  • the obtaining module 901 is configured to obtain first data at a first rate
  • the processing module 902 is configured to add additional data in a certain proportion to the first data to obtain the second data;
  • the sending module 903 is configured to send second data at a second rate, where the second rate is greater than the first rate.
  • the second rate is not an integer multiple of the first rate.
  • The sending module 903 is configured to use a physical channel to send the second data at the second rate, where the data transmission rate of the physical channel is determined by bit multiplexing based on the expanded virtual channels, and the number of expanded virtual channels is determined based on the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate.
  • the additional data is located in the first part of the second data.
  • The first part of the additional data is located in the first part of the second data, the second part of the additional data is located in the second part of the second data, and a portion of the first data is included between the first part and the second part of the additional data.
  • The first data includes an alignment marker (AM) character, and the processing module is configured to use the AM character in the first data as a boundary and insert additional data into the first data in a certain proportion.
  • The processing module 902 is configured to: when the first data is MAC layer data, insert first additional data into the MAC layer data at a first ratio to obtain the second data; or, when the first data is data transmitted on the virtual lanes (VLs) after FEC sublayer distribution, insert second additional data into the data transmitted on the VLs after FEC sublayer distribution at a second ratio to obtain the second data; or, when the first data is data after VL remapping and before entering the physical link, insert third additional data into the data after VL remapping and before entering the physical link at a third ratio to obtain the second data; or, when the first data is data transmitted on the physical link, insert fourth additional data into the data transmitted on the physical link at a fourth ratio to obtain the second data; or, when the first data is original data, insert fifth additional data into the original data at a fifth ratio to obtain the second data.
  • The processing module 902 is configured to encode the first data with an FEC code based on the second rate to obtain the second data.
  • The processing module 902 is configured to: when the first data is data transmitted on the VLs after FEC sublayer distribution and encoded with the first FEC code type, perform secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data, the rate ratio being the ratio of the second rate to the first rate; or, when the first data is data after VL remapping and before entering the physical link that is encoded with the first FEC code type, perform secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data transmitted on the physical link and encoded with the first FEC code type, perform secondary encoding on that data with the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data encoded with the first FEC code type, decode that data to obtain the original data and encode the original data with the third FEC code type matching the second rate to obtain the second data, the overhead of the third FEC code type being greater than that of the first FEC code type; or, when the first data is original data, encode the original data with the third FEC code type matching the second rate to obtain the second data, the overhead of the third FEC code type being greater than that of the first FEC code type.
  • the embodiment of the present application provides a processor, which can be used to execute any of the aforementioned methods for improving the transmission rate.
  • An embodiment of the present application provides a network device. As shown in FIG. 2 or FIG. 3, the network device includes the foregoing processor.
  • the network device includes a line card, and the line card includes the aforementioned processor.
  • the network device further includes a backplane.
  • the network device further includes a CDR located between the line card and the backplane, and the line card communicates with the backplane through the CDR.
  • the embodiments of the present application provide a network system, the network system includes one or more network devices, and the network devices are any of the foregoing network devices.
  • an embodiment of the present application further provides a device 1000 for improving transmission rate.
  • the transmission rate improving device 1000 shown in FIG. 10 is used to perform operations involved in the above method for improving transmission rate.
  • the device 1000 for improving the transmission rate includes a memory 1001, a processor 1002, and an interface 1003, and the memory 1001, the processor 1002, and the interface 1003 are connected by a bus 1004.
  • At least one instruction is stored in the memory 1001, and at least one instruction is loaded and executed by the processor 1002, so as to implement any one of the aforementioned methods for improving the transmission rate.
  • the interface 1003 is used to communicate with other devices in the network.
  • the interface 1003 may be implemented in a wireless or wired manner.
  • the interface 1003 may be a network card.
  • the device 1000 that improves the transmission rate can communicate with other network devices through the interface 1003.
  • FIG. 10 only shows a simplified design of the device 1000 for improving the transmission rate.
  • the device 1000 for improving the transmission rate may include any number of interfaces, processors or memories.
  • the aforementioned processor may be a central processing unit (CPU), other general-purpose processors, digital signal processing (DSP), application specific integrated circuit (ASIC), Field-programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or any conventional processor. It is worth noting that the processor may be a processor that supports an advanced reduced instruction set machine (advanced RISC machines, ARM) architecture.
  • the foregoing memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the memory may also include non-volatile random access memory.
  • the memory can also store device type information.
  • The memory may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory, where the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory.
  • The volatile memory may be a random access memory (RAM), for example a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), or a direct rambus random access memory (direct rambus RAM).
  • a computer-readable storage medium is also provided, and at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to realize the method for improving the transmission rate as described above.
  • This application provides a computer program.
  • the computer program When the computer program is executed by a computer, it can cause a processor or computer to execute various operations and/or procedures corresponding to the foregoing method embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk).
  • The storage medium may be any medium that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • The instructions cause a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway) to perform the methods described in the embodiments or in some parts of the embodiments of the present invention.
  • RS-FEC: Reed-Solomon FEC, Reed-Solomon forward error correction code
  • BCH code: Bose-Chaudhuri-Hocquenghem code, BCH forward error correction code
  • VL: Virtual Lane, virtual channel, equivalent to a PCS lane
  • SerDes: Serializer/Deserializer
  • PLL: Phase-Locked Loop
  • Gbps: Gigabit per second
  • GBd: GBaud, Giga-baud
  • PAM: Pulse Amplitude Modulation
  • PAM4: 4-level PAM, 4-level pulse amplitude modulation, also written as PAM-4
  • OSI model: Open Systems Interconnection model
  • PCB: Printed Circuit Board

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Communication Control (AREA)

Abstract

This application provides a method, apparatus, processor, network device, and system for improving the transmission rate. The method includes: obtaining first data at a first rate; adding additional data in a certain proportion to the first data to obtain second data; and sending the second data at a second rate, the second rate being greater than the first rate. By adding additional data in a certain proportion to the first data, the transmission rate is increased, so that when the device is expanded and upgraded, the limitation of the backplane on the expansion and upgrade of the device is broken; this not only avoids frequency holes but also adapts to future performance requirements.

Description

Method for improving transmission rate, processor, network device, and network system
This application claims priority to the Chinese patent application No. 201910685561.2, filed with the Chinese Patent Office on July 27, 2019 and entitled "Method for improving transmission rate, processor, network device, and network system", and to the Chinese patent application No. 201910731452.X, filed with the Chinese Patent Office on August 8, 2019 and entitled "Method, apparatus, processor, network device, and system for improving transmission rate", both of which are incorporated herein by reference in their entirety.
Technical Field
This application relates to the field of communication technologies, and in particular, to a method, apparatus, processor, network device, and system for improving the transmission rate.
Background
Communication equipment is costly, so a smooth-evolution approach is usually adopted when equipment is expanded and upgraded, that is, higher performance and newer features are first obtained by upgrading modules, line cards, and the like. As a result, the backplane has become one of the biggest bottlenecks limiting the upgrade of communication equipment, and the performance of the backplane often determines the upgrade prospects of the equipment and its life cycle. However, as hardware, the backplane is sometimes difficult to adapt to future performance requirements.
Summary
The embodiments of this application provide a method, apparatus, processor, network device, and system for improving the transmission rate.
In one aspect, a method for improving the transmission rate is provided, including: obtaining first data at a first rate; adding additional data in a certain proportion to the first data to obtain second data; and sending the second data at a second rate, the second rate being greater than the first rate. By adding additional data in a certain proportion to the first data, the transmission rate is increased, so that when the device is expanded and upgraded, the limitation of the backplane on the expansion and upgrade of the device is broken; this not only avoids frequency holes but also adapts to future performance requirements.
In an exemplary embodiment, the second rate is not an integer multiple of the first rate.
In an exemplary embodiment, sending the second data at the second rate includes: using a physical channel to send the second data at the second rate, where the data transmission rate of the physical channel is determined by bit multiplexing based on expanded virtual channels, and the number of expanded virtual channels is determined based on the number of virtual channels for transmitting the first data and the number of physical channels corresponding to the data transmission interface transmitting data at the second rate. For the case where the second rate is not an integer multiple of the first rate, the number of virtual channels is adjusted so that the number of physical channels at the second rate can be supported.
In an exemplary embodiment, the additional data is located in a first part of the second data. In this way, the additional data can be added to the first data as a whole; exemplarily, the first part of the second data may be a position before or after an alignment marker (AM) character.
In an exemplary embodiment, a first part of the additional data is located in the first part of the second data, a second part of the additional data is located in a second part of the second data, and a portion of the first data is included between the first part of the additional data and the second part of the additional data. In this way, the additional data is added to the first data in segments. Exemplarily, the first data may be divided into multiple parts, and segments of the additional data are added to different parts of the first data.
In an exemplary embodiment, the first data includes AM characters, and adding additional data in a certain proportion to the first data includes: using the AM characters in the first data as boundaries, inserting additional data into the first data in a certain proportion. Since the AM character provides an existing marker for data recognition, the AM character can be used as a reference point for inserting the additional data, which facilitates subsequent identification of the inserted data.
In an exemplary embodiment, adding additional data in a certain proportion to the first data to obtain the second data includes: when the first data is media access control (MAC) layer data, inserting first additional data into the MAC layer data at a first ratio to obtain the second data; or, when the first data is data transmitted on the virtual lanes (VLs) after distribution by the forward error correction (FEC) sublayer, inserting second additional data into the data transmitted on the VLs after FEC sublayer distribution at a second ratio to obtain the second data; or, when the first data is data after VL remapping and before entering the physical link, inserting third additional data into the data after VL remapping and before entering the physical link at a third ratio to obtain the second data; or, when the first data is data transmitted on the physical link, inserting fourth additional data into the data transmitted on the physical link at a fourth ratio to obtain the second data; or, when the first data is original data, inserting fifth additional data into the original data at a fifth ratio to obtain the second data. One physical link may have multiple physical lanes, and the additional data can be inserted at multiple positions, which makes the approach flexible.
In an exemplary embodiment, adding additional data in a certain proportion to the first data to obtain the second data includes: based on the second rate, encoding the first data with a forward error correction (FEC) code to obtain the second data. After the rate is increased, relative to the indicators for which the backplane was designed, the insertion loss caused by the backplane wiring and connectors increases and the crosstalk between signals also increases, which severely reduces the signal-to-noise ratio (SNR). Avoiding frequency holes requires increasing the link rate, which also brings a certain amount of usable overhead; this overhead can therefore be used to compensate for the SNR loss by adding additional forward error correction (FEC).
In an exemplary embodiment, encoding the first data with the forward error correction FEC code based on the second rate to obtain the second data includes: when the first data is data transmitted on the virtual lanes (VLs) after FEC sublayer distribution and encoded with a first FEC code type, performing secondary encoding on the data transmitted on the VLs after FEC sublayer distribution and encoded with the first FEC code type by using a second FEC code type matching the rate ratio to obtain the second data, where the rate ratio is the ratio of the second rate to the first rate; or, when the first data is data after VL remapping and before entering the physical link that is encoded with the first FEC code type, performing secondary encoding on the data after VL remapping and before entering the physical link that is encoded with the first FEC code type by using the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data transmitted on a physical link and encoded with the first FEC code type, performing secondary encoding on the data transmitted on the physical link and encoded with the first FEC code type by using the second FEC code type matching the rate ratio to obtain the second data; or, when the first data is data encoded with the first FEC code type, decoding the data encoded with the first FEC code type to obtain original data, and encoding the original data with a third FEC code type matching the second rate to obtain the second data, where the overhead of the third FEC code type is greater than the overhead of the first FEC code type; or, when the first data is original data, encoding the original data with the third FEC code type matching the second rate to obtain the second data, where the overhead of the third FEC code type is greater than the overhead of the first FEC code type.
一方面,提供了一种改善传输速率的装置,所述装置包括:获取模块,用于以第一速率获得第一数据;处理模块,用于在所述第一数据中以一定比例加入额外数据,得到第 二数据;发送模块,用于以第二速率发送所述第二数据,所述第二速率大于所述第一速率。
在一种示例性实施例中,所述第二速率不是所述第一速率的整数倍。
在一种示例性实施例中,所述发送模块,用于采用物理通道以第二速率发送所述第二数据,所述物理通道传输数据的速率基于扩充后的虚拟通道进行比特复用确定,所述扩充后的虚拟通道的数量基于传输所述第一数据的虚拟通道的数量以及数据传输接口采用所述第二速率传输数据时对应的物理通道的数量确定。
在一种示例性实施例中,所述额外数据位于所述第二数据的第一部分。
在一种示例性实施例中,所述额外数据中的第一部分位于所述第二数据的第一部分,所述额外数据的第二部分位于所述第二数据的第二部分,所述额外数据中的第一部分和所述额外数据中的第二部分之间包括第一数据的一部分。
在一种示例性实施例中,所述第一数据包括对齐标志AM字符,所述处理模块,用于以所述第一数据中的AM字符为边界,在所述第一数据中以一定比例插入额外数据。
在一种示例性实施例中,所述处理模块,用于当所述第一数据为媒体接入控制MAC层的数据时,以第一比例在所述MAC层的数据中插入第一额外数据,得到第二数据;或,当所述第一数据为前向纠错FEC子层分发后的虚拟通道VL上传输的数据时,以第二比例在所述FEC子层分发后的VL上传输的数据中插入第二额外数据,得到第二数据;或,当所述第一数据为经过VL重映射之后、进入物理链路之前的数据时,以第三比例在所述经过VL重映射之后、进入物理链路之前的数据中插入第三额外数据,得到第二数据;或,当所述第一数据为物理链路上传输的数据时,以第四比例在所述物理链路上传输的数据中插入第四额外数据,得到第二数据;或,当所述第一数据为原始数据时,以第五比例在所述原始数据中插入第五额外数据,得到第二数据。
在一种示例性实施例中,所述处理模块,用于基于所述第二速率,采用前向纠错FEC码对所述第一数据编码,得到第二数据。
在一种示例性实施例中,所述处理模块,用于当所述第一数据为FEC子层分发后的虚拟通道VL上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对所述FEC子层分发后的VL上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据,所述速率比为所述第二速率与所述第一速率的比;或,当所述第一数据为经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当所述第一数据为物理链路上传输且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述物理链路上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当所述第一数据为采用第一FEC码型编码的数据时,对所述采用第一FEC码型编码的数据进行解码,得到原始数据,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销;或,当所述第一数据为原始数据时,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销。
还提供了一种处理器,所述处理器可用于执行上述任一所述的方法。
还提供了一种网络设备,所述网络设备包括上述处理器。
在一种示例性实施例中,所述网络设备包括线卡,所述线卡包括上述的处理器。
在一种示例性实施例中,所述网络设备还包括背板。
在一种示例性实施例中,所述网络设备还包括位于线卡和背板之间的CDR电路,所述线卡通过所述CDR电路与所述背板通信。
还提供了一种网络系统,所述网络系统包括一个或多个网络设备,所述网络设备为上述任一所述的网络设备。
还提供一种改善传输速率的设备,所述设备包括:存储器及处理器,所述存储器中存储有至少一条指令或程序,所述至少一条指令或程序由所述处理器加载并执行,以实现上述任一所述的改善传输速率的方法。
还提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令或程序,所述指令或程序由处理器加载并执行以实现如上任一所述的改善传输速率的方法。
提供了另一种通信装置,该装置包括:收发器、存储器和处理器。其中,该收发器、该存储器和该处理器通过内部连接通路互相通信,该存储器用于存储指令或程序,该处理器用于执行该存储器存储的指令或程序,以控制收发器接收信号,并控制收发器发送信号,并且当该处理器执行该存储器存储的指令或程序时,使得该处理器执行上述任一种可能的实施方式中的方法。在一种实施例中,处理器和存储器、收发器之间可通过总线通信。
作为一种示例性实施例,所述处理器为一个或多个,所述存储器为一个或多个。
作为一种示例性实施例,所述存储器可以与所述处理器集成在一起,或者所述存储器与处理器分离设置。
在具体实现过程中,存储器可以为非瞬时性(non-transitory)存储器,例如只读存储器(read only memory,ROM),其可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请实施例对存储器的类型以及存储器与处理器的设置方式不做限定。
提供了一种计算机程序(产品),所述计算机程序(产品)包括:计算机程序代码,当所述计算机程序代码被计算机运行时,使得所述计算机执行上述各方面中的方法。
提供了一种芯片,包括处理器,用于从存储器中调用并运行所述存储器中存储的指令或程序,使得安装有所述芯片的通信设备执行上述各方面中的方法。
提供另一种芯片,包括:输入接口、输出接口、处理器和存储器,所述输入接口、输出接口、所述处理器以及所述存储器之间通过内部连接通路相连,所述处理器用于执行所述存储器中的代码,当所述代码被执行时,所述处理器用于执行上述各方面中的方法。
附图说明
图1是本申请实施例提供的一种网络系统的示意图;
图2是本申请实施例提供的网络设备的结构示意图;
图3是本申请实施例提供的网络设备的结构示意图;
图4A是本申请实施例提供的以太网接口的逻辑架构示意图;
图4B是本申请实施例提供的改善传输速率的方法流程图;
图5A~5B为两种本申请实施例的编码示意图;
图5C为本申请实施例提供的改善数据传输速率的方法示意图;
图6是本申请实施例提供的插入额外数据pad的多种场景示意图;
图7是本申请实施例提供的在MAC层加入stuffing MAC frame(s)的场景示意图;
图8A是本申请实施例提供的通过8条VL对应AM0~AM7扩展VL的方法的示意图;
图8B是本申请实施例提供的24个VL重用8个VL扩展VL的方法的示意图;
图9是本申请实施例提供的改善传输速率的装置结构示意图;
图10是本申请实施例提供的改善传输速率的设备结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面结合附图和实施方式对本申请实施例作进一步的详细说明。
如图1所示,为本申请实施例的一种网络场景。在该场景中,一个或多个用户设备11、12、13等经由多个网络设备11、12接入网络,经由网络中的一个或多个中间网络设备20到达远端网络设备31,最终经由网络设备31与远端的一个或多个用户设备41、42、43通信。图1的网络可以是本地局域网或运营商网络,图1的网络设备,比如路由设备或交换设备,可作为网络中的转发设备或网关设备。网络设备可以是通信设备或其他电子设备。
如图2所示,该网络设备包括线卡(line card)、主控板(main processing unit,MPU)和背板,线卡和MPU通过背板互联。如图3所示,线卡和MPU可通过连接器与背板互联。线卡也称为线路板(line processing unit,LPU),用于转发报文,按照转发能力可以分为10G(gigabit,吉比特)、20G、40G、50G、100G、120G、240G等。MPU负责网络设备的集中控制和管理,比如MPU可以执行路由计算、设备管理和维护功能、数据配置功能、保存数据等功能。网络设备也可以包括物理接口卡(physical interface card,PIC),PIC可以插在线卡的接口板上,负责把光电信号转换为数据帧并对数据帧进行“合法性”检查。在有些实施例中,所述网络设备也包括交换网板(switch fabric),交换网板也称为交换网板单元(switch fabric unit,SFU),负责各个LPU之间的数据交换。所述交换网板可以通过背板与主控板及线卡互联。
背板包括多个通道,基于不同速率和规格,每个背板上的通道数量不同,但是背板上的通道数量不可更改,背板上的每个通道可用于传输数据。由于对于任意一个电路板,其通道支持的数据传输速率都存在某个上限,因而当网络设备需要升级时,现有网络设备上的背板无法跨代兼容处理器中的新的串行/解串行(器)(Serializer/Deserializer,SerDes)的速率。处理器可以是网络处理器(network processor,NP)或中央处理器(central processing unit,CPU)。该处理器用于端口芯片或交换芯片。具体实现上端口芯片或交换芯片可以是专用集成电路(application-specific integrated circuit,ASIC)或时钟和数据恢复(clock&data recovery,CDR),而SerDes可以是ASIC或CDR中的电路。实际上,背板容量上限=通道数量×单通道可传输数据最大速率。
以当前400GbE端口速率为例,电接口每个物理通道上,根据IEEE 802.3标准规定的SerDes速率可以是:
-26.5625吉比特每秒(Giga-bit per second,Gbps)(16通道400GAUI-16),
-53.125Gbps(8通道400GAUI-8),
-106.25Gbps(4通道400GAUI-4,标准制定中)
除以上速率之外,通常SerDes还可以支持其他一些156.25MHz整数倍波特率的速率。例如112.5Gbps 4电平脉冲幅度调制(4level PAM,PAM4)(56.25吉波特(Giga-baud,GBd))。
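As a quick illustration of the rate arithmetic above (a sketch only, not part of any embodiment), the listed per-lane bit rates and the 56.25 GBd PAM4 example are all integer multiples of the 156.25 MHz reference mentioned in the text:

```python
# Illustrative arithmetic only: verify that the example rates are integer
# multiples of the 156.25 MHz reference, and that 56.25 GBd PAM4 (2 bits per
# symbol) corresponds to the 112.5 Gbps figure given in the text.

REF_GHZ = 0.15625  # 156.25 MHz expressed in GHz

def multiples_of_ref(rate_g: float) -> float:
    """Number of 156.25 MHz steps contained in the given rate (Gbps or GBd)."""
    return rate_g / REF_GHZ

for rate in (26.5625, 53.125, 106.25, 56.25):
    n = multiples_of_ref(rate)
    print(f"{rate:>8} = {n:.0f} x 156.25 MHz, integer multiple: {abs(n - round(n)) < 1e-9}")

print("PAM4 at 56.25 GBd ->", 2 * 56.25, "Gbps")  # 112.5 Gbps, as stated above
```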
下一代以太接口速率目前尚无标准,下面假设800GbE采用8×电接口为例讨论,但不限于该速率。要想实现平滑升级,关键在于当前背板支持下一代以太速率,通过提高线卡上端口速率来提高整机容量。如图3所示,背板通过连接器与线卡连接,如果要以8×50G时代的背板及连接器来实现8×100G速率,从印刷电路板(printed circuit board,PCB)和连接器的性能指标来看难以达到预期。如果背板继续使用50G单通道技术,则整体容量与当前一样,并没有提高,而且未来标准不一定还会支持50G单通道速率标准。因此,需要考虑其他单通道速率。但是因为SerDes设计的频率往往具有局限性,仅仅能工作在某些范围,所以需要确认该速率是否SerDes能够支持,如果存在SerDes频率不支持的范围(频率空洞),需要避开。单通道速率提高后,相对于背板设计时的指标,背板布线及连接器所带来的插损会增加,信号之间的串扰也会增加,从而导致信噪比(signal-to-noise ratio,SNR)严重降低。而躲避频率空洞需要对链路提速,这也带来一定的可用开销,因此可以利用这部分开销,通过增加额外前向纠错(forward error correction,FEC)来弥补SNR损失。
SerDes设计中,锁相环(phase-locked loop,PLL)是决定其可运行频率的核心电路。PLL工作的频率通常不是连续可调的,而是某基础频率的倍数。灵活的PLL设计虽然除了整数倍、也可以支持分数倍频运行,但它能运行的频率仍旧不是连续可调。这从原理上决定了SerDes不可能支持所有的频率,而是仅仅能够支持某些固定的频率点。我们称PLL不支持的频率范围为“频率空洞”。
并且,由于SerDes的设计上,往往针对需要工作的频率点进行优化设计,在非工作频率上可能选择不支持、或者性能较差。例如53.125Gbps和106.25Gbps是常用的SerDes速率,但是80Gbps附近可能是不常用速率,从而设计上简化避开了80Gbps这个频率附近的频段,从而使频率空洞变得更大,例如某SerDes可能选择不会支持75G到85Gbps之间的速率,以简化设计、降低成本。
在本申请实施例中,提供以下三种高链路速率的方式:
a)根据目标速率所能带来的开销空间,选择一种开销合适的FEC;
b)以对齐标志(alignment marker,AM)字符为边界,一次性或者分段插入额外数据,该数据格式不限;
c)通过媒介接入控制(medium access control,MAC)层插入额外数据,该额外数据可以是特殊的、可识别的码块。
本申请实施例也提供为适配非标准速率物理接口而扩展PCS lane数量的方法。
如图2所示，为本申请实施例的网络设备。其中，背板连接主控板与线卡。主控板包括ASIC1，线卡包括ASIC2。在一些实施例中，主控板也包括与ASIC1通信的时钟和数据恢复(clock&data recovery，CDR)电路CDR1。在一些实施例中，线卡也包括与ASIC2通信的时钟和数据恢复电路CDR2。在一些实施例中，CDR电路可能会出现在主控板或者线卡上，但在ASIC能力充足的情况下也可能不需要CDR电路。本申请实施例与ASIC1、CDR1、CDR2、ASIC2中部分或者全部相关。主控板中的ASIC1可以与背板互相通信，线卡中的ASIC2可以与背板互相通信。如图3所示，主控板和线卡可分别通过连接器与背板连接，从而与背板互相通信。
如图4A所示,为图1或图2或图3中网络设备上的以太网接口所对应的逻辑层架构。基于根据本申请实施例在系统中的不同实现位置,该图4A会略有不同。其中,PCS/FEC为IEEE 802.3标准定义的PCS层及FEC子层功能。该部分功能通常集成在ASIC中。物理编码子层(physical coding sublayer,PCS)功能在于将来自MAC层的数据进行编码、转码、扰码、插入AM、FEC编码等功能,并且将处理后的数据按照一定规律分发到多条虚拟通道(virtual lane,VL)或物理通道(physical lane,PL)上。其中,将处理后的数据分发到多条VL或PL的规律本申请不进行限定,示例性地,可以基于场景或者数据编码要求来定。例如,以200GE/400GE的以太网接口为例,任何两个连续的FEC符号(symbol)都是来自不同的码字(codeword),也即一个码字的两个连续的FEC symbol分发到不同的VL或PL上。但是对于100GE的以太网接口,循环(round-robin)发送FEC Symbol到各个VL或PL上。
如图4A所示,将处理后的数据通过n条VL传输至物理介质接入子层(physical medium attachment sublayer,PMA),由PMA将多条VL上传输的数据传输到p条通道。示例性地,PMA传输之前,还可以进行比特复用(bit-mux)。比如PCS/FEC处理后的数据被分布到附接单元接口(attachment unit interface,AUI)上的p个通道中,该p个通道可以是VL。p个通道上的数据被执行重映射(remapping)操作后映射到背板(backplane)m个物理通道PL中,m和p为正整数且m>p>0。经背板处理后,数据经由m个物理通道离开背板,被重组(regroup)后经p个通道到达另一PCS/FEC,该p个通道可以是VL。示例性地,另一PCS/FEC可以支持不同的VL数量,本申请实施例对此不加以限定。
以现有的400GbE标准为例，PCS层处理后的数据被分发到16条VL上，每条VL上的等效比特速率为26.5625Gbps。而物理通道(physical lane，PL)数量由具体应用决定，例如该背板设计时采用的是单通道50G PAM4技术(业界也通常称为56G PAM4，实际速率为53.125Gbps)，则PL数量为8。假设此处的背板设计时总的通道数量为M，该背板的设计容量则为50G×M。如果希望升级设备，更换支持更高电接口速率的单板，例如，支持单通道100G PAM4技术(业界也称为112G PAM4，实际速率106.25Gbps)的单板，可通过提高每个通道的速率，来提高整机的容量。然而，不管是PCB上的布线、还是单板与背板之间的连接器(connector)，其性能受到材料、设计等诸多方面的限制，难以应对新的更高速率。所以，需要找到一个比原有电接口速率B0高、但又比新电接口速率B1低的合适的电接口传输速率B2。其中，B0<B2<B1。
假设新电接口速率下,对应某种速率标准的以太网接口对应的物理通道数量为N1。该以太网接口速率等于N1×B1。(例如,对于400GbE接口,如果B1=100Gbps,则N1=4)B2对应的电接口通道数量为N2,则:B2×N2=B1×N1。由于B2<B1,所以N2>N1。
现有的以太网标准中,某一以太网接口速率可以支持的物理通道数量P,取决于虚拟通道数量N。例如16个虚拟通道,可以对应生成16或8或4等等数量的物理通道,通过简单的比特复用即可实现。如果电接口的速率提高一倍,则对应的物理通道数量可以减少为原来的1/2。但是,如果电接口速率提升倍数不是整数倍,则需要通过调整虚拟通道数量N,以便支持该速率B2下的物理通道数量P2。依旧以400GbE为例,从8×50G到4×100G 之间,可能存在5×80G这种搭配(B2=80G,N2=5)。对于未来800GbE,可能会存在8×100G组合,如果背板无法支持100G电接口,可能会存在10×80G或者12×66.67G等等速率。甚至,只需要可以保证N2×B2>=N1×B1,便可以有足够的能力将N1×B1的数据总速率通过N2条比标准速率低的电接口进行传输。
在以上实施例中,举了10×80G来支持800GbE的例子,如果SerDes恰好不支持该电接口速率,则较难以采用该配置进行。为了避开这个空洞,可以通过提高电接口上数据传输的速率,因为如果降低电接口速率,则需要更多条背板上的物理通道,而这个数量是限制死的,为了充分利用背板上的通道,N2往往是通过背板的传输能力计算获得的最大利用率的数值,如果降低电接口速率,N2数值需要增大,从而使得背板能够支持的以太网接口的总量变小。电接口上数据传输的比特率要提高,但是载荷数量是固定的,所以需要向原始数据流插入额外的数据。提高电接口上数据传输速率,意味着背板信道的插损会更大,信号之间的串扰也会更大,从而降低链路的性能,甚至使得链路上误码率过高而导致其他问题。
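To make the N2 × B2 ≥ N1 × B1 arithmetic above concrete, the following is a minimal sketch (the helper name and the assumed 85 Gbps per-lane ceiling are illustrative, not values taken from any standard) that enumerates candidate lane-count/lane-rate pairs for the 800GbE example with N1 = 8 and B1 = 100 Gbps; the 10 × 80G and 12 × 66.67G combinations mentioned above fall out of this kind of search:

```python
# Sketch: enumerate (N2, B2) pairs whose aggregate rate meets or exceeds the
# target N1 x B1, subject to an assumed per-lane ceiling of the backplane.

def candidate_configs(n1: int, b1_gbps: float, max_lane_gbps: float, max_lanes: int):
    target = n1 * b1_gbps                      # total data rate to be carried
    for n2 in range(n1 + 1, max_lanes + 1):    # more, slower lanes than the new standard
        b2 = target / n2                       # minimum per-lane rate for this lane count
        if b2 <= max_lane_gbps:
            yield n2, round(b2, 2)

if __name__ == "__main__":
    # 800GbE example from the text: N1 = 8 lanes at B1 = 100 Gbps each;
    # the 85 Gbps ceiling is an assumption about what the legacy backplane tolerates.
    for n2, b2 in candidate_configs(n1=8, b1_gbps=100.0, max_lane_gbps=85.0, max_lanes=16):
        print(f"{n2} lanes x {b2} Gbps >= 800 Gbps")
```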
对此,本申请实施例提供了一种改善传输速率的方法,该方法通过在第一数据中以一定比例加入额外数据的方式,提高传输速率,从而在设备扩容升级时,打破背板对设备扩容升级的限制,不仅能够避免频率空洞,还可以适应未来的性能要求。参见图4B,该方法包括:
401,以第一速率获得第一数据。
该第一数据可以是经过FEC编码的数据,也可以是原始数据,本申请实施例不对第一数据的类型进行限定。
例如,结合图2所示的网络设备为例,在网络设备上的以太网接口所对应的逻辑层架构中,PCS/FEC将来自MAC层的数据进行编码、转码、扰码、插入AM和FEC编码等处理,得到第一数据。之后,将处理后的数据即第一数据按照一定规律分发到多条VL或PL上。此时传输该第一数据的速率可以是第一速率,本申请实施例可以在PCS/FEC之后传输第一数据的多条VL或PL上获取第一数据。也可以将经过VL remapping之后、进入物理链路之前的数据作为以第一速率获得的第一数据。其中,一个物理链路(Physical Link)上可以有多个物理通道(Physical Lanes)。示例性地,还可以在物理链路上获取第一数据,或者,还可以在ASIC进行FEC编码前获取原始数据,该原始数据即作为第一数据。又或者,还可以在与ASIC通信的CDR处获得第一数据。
402,在第一数据中以一定比例加入额外数据,得到第二数据。
示例性地,在第一数据中以一定比例加入额外数据时,该额外数据可以位于第二数据的第一部分,该种方式下,可以将额外数据作为一个整体加入第一数据中,本实施例不对第二数据的第一部分的具体位置进行限定,可基于第一数据的内容来定,或者基于场景来定。示例性地,该第二数据的第一部分可以是AM字符之前或之后的位置。
或者,该额外数据中的第一部分位于第二数据的第一部分,额外数据的第二部分位于第二数据的第二部分,额外数据中的第一部分和额外数据中的第二部分之间包括第一数据的一部分。该种方式下,额外数据被分段加入第一数据中。示例性地,第一数据可以划分多个部分,额外数据的不同部分加入到第一数据的不同部分之间。
无论采用哪种插入方式,本申请实施例提供的方法可以在多个位置插入额外数据,方 式灵活。接下来,以如下三种加入额外数据的方式进行举例说明。
方式一:在本申请的一种实施例中,以通过利用提高速率带来的额外开销来集成FEC的方式实现加入额外数据为例。
该方式一中,可以基于第二速率,采用前向纠错FEC码对第一数据编码,得到第二数据。如图5C所示,ASIC1经由背板与ASIC2通信,ASIC1包括MAC、PCS、分发(distribution)模块,ASIC1也可能包括一些调整位置的电路。ASIC1与背板之间也可能包括CDR1。ASIC2包括对齐/去偏斜(Alignment/Deskew)电路、regroup电路、distribution电路及PCS和MAC。示例性地,在对齐/去偏斜(Alignment/Deskew)电路⑨、解复用电路⑩、重组分发电路11位置处还可以进行速率调整(Rate adjustment)。ASIC2与背板之间也可能包括CDR2。数据经ASIC1的MAC层处理后到达ASIC1的PCS,PCS包括FEC sublayer,数据经由FEC子层(sublayer)处理后到达distribution电路,分发到N个VL。N个VL来的数据经多个可能的调整位置的操作,如图5C的④⑤⑥功能电路的操作后到达背板。其中,④为编码功能电路,⑤为比特复用电路,⑥为编码功能电路。当然也可能经由⑦的CDR1处理后到达背板。经背板处理后,数据到达ASIC2的Alignment/Deskew电路(即⑨),经Alignment/Deskew电路处理后到达⑩和位于⑾的regroup电路及distribution电路,处理后到达位于⑿的标准处理电路,经处理后到达ASIC2的PCS和MAC层。
继续以单个物理通道上传输第一数据的第一速率为80Gbps和在该单个物理通道上传输第二数据的第二速率为85Gbps为例,第二速率与第一速率的速率比为85/80=17/16。那么,额外数据与第一数据的比例为1/16,额外数据与第二数据的比例为1/17。与速率比匹配的FEC码型的编码数据与比特数据的比例为速率比,例如,速率比为17/16时,与速率比匹配的FEC码型为编码数据与比特数据的比例是17/16的FEC。例如如图5A所示,RS(34,32),BCH(340,320)等,其中BCH(340,320)为BCH前向纠错编码(Bose–Chaudhuri–Hocquenghem code,BCH code)中的一种。或者如图5B,如果FEC的开销和速率提高的开销存在一定的比例误差情况下,采用FEC+pad的方式进行提速,例如采用Hamming(127,120),并且每100个hamming码块之后插入50比特pad,这50比特的pad用作额外数据。如图5C所示,该方案既可以选择在多处实现,例如,基于第二速率,采用FEC码对第一数据编码,得到第二数据,包括但不限于如下几种方式:
A.在FEC子层分发后的VL上实现(图标④处)
在A方式中,FEC子层将数据分发到多条VL上,继而VL数据可直接通往一个或者多个第二级FEC编码器(encoder),编码后保持VL数量不变。
例如,当第一数据为FEC子层分发后的VL上传输且采用第一FEC码型编码的数据时,在FEC子层分发后的VL上,采用与速率比匹配的第二FEC码型对FEC子层分发后的VL上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据。其中,速率比为第二速率与第一速率的比。
B.在VL remapping之后、进入物理链路之前实现(图标⑥处)
在B方式中,VL已经经过比特复用生成对应数量的物理通道,此时可以在ASIC内对不同物理通道上的数据流分别进行第二级FEC编码。
例如,第一数据为经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据,采用与速率比匹配的第二FEC码型对经过VL重映射之后、进入物理链路之 前且采用第一FEC码型编码的数据进行二级编码,得到第二数据。
C.在物理链路上获取数据流,再进行编码实现(图标⑦处)
在C方式中,物理链路上的数据经过CDR1时,进行第二级FEC编码。
例如,第一数据为物理链路上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对物理链路上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据。
D.ASIC直接采用开销更高的单级或者多级FEC进行编码(图标②处)
在D方式中,ASIC直接按照新的更高增益的FEC进行编码。
例如,第一数据为原始数据,采用与第二速率匹配的第三FEC码型对原始数据进行编码,得到第二数据,第三FEC码型的开销大于第一FEC码型的开销。
E.CDR1内终结原有的FEC,采用开销更高的单级或者多级FEC进行编码(图标⑦处)
在E方式中,CDR1对链路上的数据进行重组、解码、纠错,然后进行新的FEC编码。
例如,第一数据为采用第一FEC码型编码的数据,对采用第一FEC码型编码的数据进行解码,得到原始数据,采用与第二速率匹配的第三FEC码型对原始数据进行编码,得到第二数据,第三FEC码型的开销大于第一FEC码型的开销。
其中,上述方式D和方式E中,无论是第三FEC码型还是第一FEC码型,FEC码型的开销为数据差值,数据差值为编码数据与原始数据之差,编码数据为采用该FEC码型对原始数据进行编码得到的数据。例如,采用第一FEC码型对原始数据进行编码得到的编码数据为编码数据1,则第一FEC码型的开销为编码数据1与原始数据之差。例如,采用第三FEC码型对原始数据进行编码得到的编码数据为编码数据3,则第三FEC码型的开销为编码数据3与原始数据之差。新的FEC即第三FEC码型可以是和第一FEC码型FEC1同类型的但是开销更高的编码(例如FEC1采用RS(544,514),新FEC采用里德-所罗门前向纠错码(Reed-SolomonFEC,RS-FEC),比如RS(576,514)),也或者新的FEC是和第一FEC码型完全不同类型的FEC,但是纠错能力更强。
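As a sanity check on the overhead arithmetic in the 80 Gbps to 85 Gbps example above, the following sketch (illustrative only, not a normative encoder) compares the expansion ratio of each coding option with the 17/16 rate ratio, including the Hamming(127,120) plus 50-bit-pad-per-100-blocks combination:

```python
from fractions import Fraction

# Rate ratio from the example: raising a lane from 80 Gbps to 85 Gbps.
rate_ratio = Fraction(85, 80)                        # = 17/16

def code_expansion(n: int, k: int) -> Fraction:
    """Expansion ratio of an (n, k) block code: n line bits per k payload bits."""
    return Fraction(n, k)

def hamming_plus_pad(n=127, k=120, blocks_per_pad=100, pad_bits=50) -> Fraction:
    """Hamming(127,120) with a 50-bit pad inserted after every 100 code blocks."""
    return Fraction(n * blocks_per_pad + pad_bits, k * blocks_per_pad)

print("rate ratio            ", rate_ratio)                # 17/16
print("RS(34,32)             ", code_expansion(34, 32))    # 17/16, exact match
print("BCH(340,320)          ", code_expansion(340, 320))  # 17/16, exact match
print("Hamming(127,120) + pad", hamming_plus_pad())        # also reduces to 17/16
```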
方式二:在本申请的又一种实施例中,以在MAC层插入额外数据为例。该方法适用于链路状况比较健康、提高速率之后SNR仍旧满足要求的场景。示例性地,当第一数据为MAC层的数据时,以第一比例在MAC层的数据中插入第一额外数据,得到第二数据。其中,第一比例可以根据第二数据与第一数据的数据量来确定,本申请实施例在此不进行限定。
如图7所示,可以在正常MAC帧(normal MAC frame(s))之间填充帧(stuffing MAC frame(s)),用于填充的帧可以是idle帧,也可以是其他特殊定义的、可以在对端MAC层识别并丢弃的数据帧。此处的填充帧类似于上述的额外数据pad。该种方式下,MAC在接收端可以识别原始数据,通过原始数据中的码块或报文中的字符可查找额外数据pad。
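A minimal sketch of this idea follows (dummy byte strings stand in for MAC frames; a real implementation would build standard-compliant idle frames or otherwise identifiable stuffing frames and account for actual frame lengths): inserting one stuffing frame after every 16 normal frames expands the stream by roughly 17/16.

```python
# Sketch only: interleave identifiable stuffing frames among normal MAC frames
# so the stuffed stream is larger than the original by about (base + 1) / base.

def stuff_frames(frames, base=16, stuffing_frame=b"\x00" * 64):
    """Return the frames with one stuffing frame appended after every `base` frames."""
    out = []
    for i, frame in enumerate(frames, start=1):
        out.append(frame)
        if i % base == 0:
            out.append(stuffing_frame)
    return out

normal = [bytes([i % 256]) * 64 for i in range(64)]   # 64 dummy 64-byte frames
stuffed = stuff_frames(normal)
print(len(normal), "->", len(stuffed), "frames")      # 64 -> 68, ratio 17/16
```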
方式三:在本申请的又一个实施例中,第一数据包括AM字符,则以第一数据中的AM为边界,在第一数据中以一定比例插入额外数据。
例如,以AM字符为参考,在数据中一次性或者分段插入额外数据,以增加数据传输速率为例。由于FEC编码可以提高SNR,而在链路状况比较健康、提高速率之后SNR仍 旧满足要求的时候,可以无需采用FEC编码的方式来提高SNR,而可以采用插入无效数据的方式来达到增加数据传输速率的目的。当然,在链路状况比较健康、提高速率之后,即使SNR仍旧满足要求,也仍然可以采用FEC编码的方式,即插入的额外数据可以是FEC码。至于插入哪种额外数据,本申请实施例对此不加以限定。但是,由于此处原始数据是经过PCS层处理的、已经不具有报文的格式的数据流,接收端需要能识别并且删除插入的额外数据,才能按照PCS层的处理流程恢复出原始数据。由于AM字符为数据识别提供了已有的标记,可以按照AM字符为参考点,插入一些额外的数据,从而便于后续对插入的数据进行识别。这些数据可以在VL上随AM字符在图5C的④处被插入,也可以在比特复用之后在图5C的⑥处实现。示例性地,当第一数据为图5C的④处FEC子层分发后的VL上传输的数据时,以第二比例在FEC子层分发后的VL上传输的数据中,以AM字符为边界,插入第二额外数据,得到第二数据。示例性地,当第一数据为图5C的⑥处经过VL重映射之后、进入物理链路之前的数据时,以第三比例在经过VL重映射之后、进入物理链路之前的数据中,以AM字符为边界,插入第三额外数据,得到第二数据。需要说明的是,第二比例和第三比例可基于第一数据的数据量以及第二数据的数据量来定,本申请对此不加以限定。
无论是在哪个位置处插入额外数据,对于以AM字符为边界插入额外数据的方式包括但不限于有如图6所示的几种,图6中的每种插入方式如下:
(1)以两个AM字符为边界,中间的数据等分,在等分的数据之间插入pad。
(2)以两个AM字符为边界,中间的数据等分,在等分的数据之前插入pad。
(3)以两个AM字符为边界,中间的数据等分,在等分的数据之后插入pad。
(4)以两个AM字符为边界,中间的数据等分,在等分的数据之前和之后都插入pad。
(5)在AM字符后面一次性插入pad。
(6)在AM字符前面一次性插入pad。
额外数据(pad)的选择,建议使用PRBS31序列逐段选取,以便保证数据的随机性,避免产生频谱上的毛刺。额外数据pad可以根据实现方式选择不同的长度。额外数据pad的插入方式也可以有多种,比如在AM字符前面插入额外数据pad或者在AM字符后插入额外数据pad,只要保证插入的额外数据pad与数据(包含AM字符)的比例满足要求即可。实现中也可以选择一次性插入足够多的额外数据pad,而不是分段插入额外数据pad。
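A minimal sketch of generating pad bytes from a PRBS31 sequence (polynomial x^31 + x^28 + 1) and inserting a fixed-length pad immediately after each AM position, as in pattern (5) of Figure 6; the AM positions, pad length, and byte-level framing below are illustrative assumptions rather than values taken from the standard:

```python
# Sketch only: PRBS31 (x^31 + x^28 + 1) pad generation and insertion of a fixed
# number of pad bytes right after each assumed alignment-marker position.

def prbs31_bytes(n_bytes: int, state: int = 0x7FFFFFFF):
    """Generate n_bytes of PRBS31 output from a 31-bit LFSR; return (bytes, new state)."""
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            new_bit = ((state >> 30) ^ (state >> 27)) & 1  # taps at bits 31 and 28
            state = ((state << 1) | new_bit) & 0x7FFFFFFF
            byte = (byte << 1) | new_bit
        out.append(byte)
    return bytes(out), state

def insert_pad_after_am(stream: bytes, am_positions, pad_len: int) -> bytes:
    """Insert pad_len PRBS31 bytes immediately after each AM position (byte index)."""
    out = bytearray()
    prev = 0
    state = 0x7FFFFFFF
    for pos in am_positions:          # pos = index just past an AM block
        out += stream[prev:pos]
        pad, state = prbs31_bytes(pad_len, state)
        out += pad
        prev = pos
    out += stream[prev:]
    return bytes(out)

data = bytes(range(256)) * 4          # dummy PCS-layer byte stream
padded = insert_pad_after_am(data, am_positions=[256, 512, 768], pad_len=16)
print(len(data), "->", len(padded))   # 1024 -> 1072 bytes
```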
除上述三种方式外,还可以有其他插入数据的方式。示例性地,在图5C所示的⑦处插入额外数据。例如,当第一数据为图5C的⑦处物理链路上传输的数据时,以第四比例在物理链路上传输的数据中插入第四额外数据,得到第二数据。示例性地,当第一数据为原始数据时,以第五比例在原始数据中插入第五额外数据,得到第二数据。其中,第四比例和第五比例可基于第一数据的数据量以及第二数据的数据量来定,本申请对此不加以限定。
403,以第二速率发送第二数据,第二速率大于第一速率。
示例性地,第二速率可以是第一速率的整数倍,第二速率也可以不是第一速率的整数倍。针对第二速率不是第一速率的整数倍,如果提速后,虚拟通道的速率较大,即使按照虚拟通道中的最小速率去复用,也会超过物理通道所承受的速率的情况,本申请实施例 提供的方法通过扩充虚拟通道,采用比特复用来确定物理通道的第二速率。针对第二速率不是第一速率的整数倍的情况,本申请的又一种实施例提供了一种扩充虚拟通道VL的方法。
对于已有的标准N1(即传输第一数据的虚拟通道的数量)条VL，其可以被P1整除，其中P1是标准定义的物理通道数量；N2是需要扩展到的非标准的VL数量(即扩充后的虚拟通道的数量)，其可以被P2整除，其中P2是因为背板无法支持B1速率而采用B2速率传输数据时对应的单个以太网端口对应的物理通道数量(即数据传输接口采用第二速率传输数据时对应的物理通道的数量)。
在本申请的一种实施例中,令N2等于N1和P2的最小公倍数,这样一来,N2可以被P2整除,并且N2可以简单复用N1条VL结构。
例如,如果N1=8,P2=12,可以使N2=24,这样24条VL,可以通过简单的2:1比特复用实现12条PL,同时,由于N2=3*N1,可以在FEC分发数据时,将一次轮循周期从8改为24。为了在接收端可以成功识别这些VL,还可以借用已有的AM图案进行简单重复使用。
如图8A所示,是8条VL,对应AM0~AM7;如图8B所示,扩展到24个VL,重用8个VL,重复三遍。因为是背板连接,而背板在厂商内部实现,可确定出哪个PL对应哪个接口,又由于已知PL跟VL之间的对应关系,因而在背板连接的情况下,能得出接口跟VL的关系,也就不需要重新找AM字符,采用AM再去区分通道了。当然也可以选取不同的AM字符,AM0~23各不相同,这里是另一种扩展VL的方法。
在扩充了虚拟通道的情况下,以第二速率发送第二数据,包括:采用物理通道以第二速率发送第二数据,如上所述,物理通道传输数据的速率基于扩充后的虚拟通道进行比特复用确定,扩充后的虚拟通道的数量基于传输第一数据的虚拟通道的数量以及数据传输接口采用第二速率传输数据时对应的物理通道的数量确定。
示例性地,基于传输第一数据的虚拟通道的数量以及数据传输接口采用第二速率传输数据时对应的物理通道的数量确定扩充后的虚拟通道的数量时,可以基于传输第一数据的虚拟通道的数量以及数据传输接口采用第二速率传输数据时对应的物理通道的数量的最小公倍数确定。
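A sketch of the lane-count arithmetic described above: expand N1 = 8 virtual lanes to N2 = lcm(N1, P2) = 24 for P2 = 12 physical lanes, reuse the existing AM0 to AM7 patterns cyclically, and 2:1 multiplex the expanded VLs onto the physical lanes. The grouping of VLs onto physical lanes and the symbol-level round-robin multiplexing are simplifications for illustration, not the standard bit-mux procedure:

```python
from math import lcm  # requires Python 3.9+

def expanded_vl_count(n1: int, p2: int) -> int:
    """N2 = lcm(N1, P2): divisible by P2 and a simple multiple of the existing N1 VL structure."""
    return lcm(n1, p2)

def assign_am_patterns(n2: int, n1: int):
    """Reuse the existing AM0..AM(n1-1) patterns cyclically over the expanded VLs."""
    return [f"AM{i % n1}" for i in range(n2)]

def bit_mux(vl_streams, p2: int):
    """Multiplex groups of N2/P2 VLs onto each of P2 physical lanes,
    one symbol per VL per round (simplified round-robin)."""
    group = len(vl_streams) // p2
    lanes = []
    for lane in range(p2):
        members = vl_streams[lane * group:(lane + 1) * group]
        lanes.append([sym for rnd in zip(*members) for sym in rnd])
    return lanes

n1, p2 = 8, 12
n2 = expanded_vl_count(n1, p2)                     # 24 expanded virtual lanes
print("N2 =", n2, assign_am_patterns(n2, n1)[:9])  # AM0..AM7, then AM0 again
vls = [[f"v{i}_{k}" for k in range(4)] for i in range(n2)]
print(bit_mux(vls, p2)[0])                         # PL0 carries two VLs interleaved 2:1
```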
本申请实施例提供的方法,通过在第一数据中以一定比例加入额外数据的方式,提高传输速率,从而在设备扩容升级时,打破背板对设备扩容升级的限制,不仅能够避免频率空洞,还可以适应未来的性能要求。
此外,由于速率提升后,相对于背板设计时的指标,背板布线及连接器所带来的插损会增加,信号之间的串扰也会增加,从而导致SNR严重降低。而躲避频率空洞需要对链路提速,这也带来一定的可用开销,因而利用速率提升所带来的开销,通过增加额外FEC来弥补SNR损失。
本申请实施例提供了一种改善传输速率的装置,参见图9,该装置包括:
获取模块901,用于以第一速率获得第一数据;
处理模块902,用于在第一数据中以一定比例加入额外数据,得到第二数据;
发送模块903,用于以第二速率发送第二数据,第二速率大于第一速率。
在一种示例性实施例中,第二速率不是第一速率的整数倍。
在一种示例性实施例中,发送模块903,用于采用物理通道以第二速率发送第二数据,物理通道传输数据的速率基于扩充后的虚拟通道进行比特复用确定,扩充后的虚拟通道的数量基于传输第一数据的虚拟通道的数量以及数据传输接口采用第二速率传输数据时对应的物理通道的数量确定。
在一种示例性实施例中,额外数据位于第二数据的第一部分。
在一种示例性实施例中,额外数据中的第一部分位于第二数据的第一部分,额外数据的第二部分位于第二数据的第二部分,额外数据中的第一部分和额外数据中的第二部分之间包括第一数据的一部分。
在一种示例性实施例中,第一数据包括对齐标志AM字符,处理模块,用于以第一数据中的AM字符为边界,在第一数据中以一定比例插入额外数据。
在一种示例性实施例中,处理模块902,用于当第一数据为MAC层的数据时,以第一比例在MAC层的数据中插入第一额外数据,得到第二数据;或,当第一数据为FEC子层分发后的虚拟通道VL上传输的数据时,以第二比例在FEC子层分发后的VL上传输的数据中插入第二额外数据,得到第二数据;或,当第一数据为经过VL重映射之后、进入物理链路之前的数据时,以第三比例在经过VL重映射之后、进入物理链路之前的数据中插入第三额外数据,得到第二数据;或,当第一数据为物理链路上传输的数据时,以第四比例在物理链路上传输的数据中插入第四额外数据,得到第二数据;或,当第一数据为原始数据时,以第五比例在原始数据中插入第五额外数据,得到第二数据。
在一种示例性实施例中,处理模块902,用于基于第二速率的开销或第一速率的开销,采用FEC码对第一数据编码,得到第二数据。
在一种示例性实施例中,处理模块902,用于当第一数据为FEC子层分发后的虚拟通道VL上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对FEC子层分发后的VL上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据,速率比为第二速率与第一速率的比;或,当第一数据为经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当第一数据为物理链路上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对物理链路上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当第一数据为采用第一FEC码型编码的数据时,对采用第一FEC码型编码的数据进行解码,得到原始数据,采用与第二速率匹配的第三FEC码型对原始数据进行编码,得到第二数据,第三FEC码型的开销大于第一FEC码型的开销;或,当第一数据为原始数据时,采用与第二速率匹配的第三FEC码型对原始数据进行编码,得到第二数据,第三FEC码型的开销大于第一FEC码型的开销。
本申请实施例提供了一种处理器,该处理器可用于执行上述任一所述的改善传输速率的方法。
本申请实施例提供了一种网络设备,如图2或图3所示,该网络设备包括上述处理器。
在一种示例性实施例中,网络设备包括线卡,线卡包括上述处理器。
在一种示例性实施例中,网络设备还包括背板。
在一种示例性实施例中,网络设备还包括位于线卡和背板之间的CDR,线卡通过CDR 与背板通信。
本申请实施例提供了一种网络系统,该网络系统包括一个或多个网络设备,网络设备为上述任一种网络设备。
参见图10，本申请实施例还提供一种改善传输速率的设备1000，图10所示的改善传输速率的设备1000用于执行上述改善传输速率的方法所涉及的操作。该改善传输速率的设备1000包括：存储器1001、处理器1002及接口1003，存储器1001、处理器1002及接口1003之间通过总线1004连接。
其中,存储器1001中存储有至少一条指令,至少一条指令由处理器1002加载并执行,以实现上述任一所述的改善传输速率的方法。
接口1003用于与网络中的其他设备进行通信,该接口1003可以通过无线或有线的方式实现,示例性地,该接口1003可以是网卡。例如,改善传输速率的设备1000可通过该接口1003与其他网络设备进行通信。
应理解的是,图10仅仅示出了改善传输速率的设备1000的简化设计。在实际应用中,改善传输速率的设备1000可以包含任意数量的接口,处理器或者存储器。此外,上述处理器可以是中央处理器(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。值得说明的是,处理器可以是支持进阶精简指令集机器(advanced RISC machines,ARM)架构的处理器。
进一步地,在一种可选的实施例中,上述存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器还可以包括非易失性随机存取存储器。例如,存储器还可以存储设备类型的信息。
该存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者,其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用。例如,静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic random access memory,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data date SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
还提供了一种计算机可读存储介质,存储介质中存储有至少一条指令,指令由处理器加载并执行以实现如上任一所述的改善传输速率的方法。
本申请提供了一种计算机程序,当计算机程序被计算机执行时,可以使得处理器或计算机执行上述方法实施例中对应的各个操作和/或流程。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。 当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk)等。
通过以上的实施方式的描述可知,本领域的技术人员可以清楚地了解到上述实施例方法中的全部或部分可借助软件加通用硬件平台的方式来实现。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如只读存储器(Read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者诸如媒体网关等网络通信设备)执行本发明各个实施例或者实施例的某些部分所述的方法。
需要说明的是,本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于设备及系统实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的设备及系统实施例仅仅是示意性的,其中作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述仅是本申请的可选实施方式,并非用于限定本申请的保护范围。应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以作出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。
Explanation of terms used in this application
FEC: Forward Error Correction
RS-FEC: Reed-Solomon forward error correction code
BCH code: Bose–Chaudhuri–Hocquenghem forward error correction code
PCS: Physical Coding Sublayer
PMA: Physical Medium Attachment sublayer
PMD: Physical Media Dependent sublayer
PHY: physical layer
AM: Alignment Marker
VL: Virtual Lane, equivalent to PCS Lane
PL: Physical Lane
SerDes: Serializer/Deserializer
PLL: Phase-Locked Loop
CDR: Clock and Data Recovery
Gbps: gigabits per second
GBd: gigabaud
PAM: Pulse Amplitude Modulation
PAM4: 4-level PAM, also written as PAM-4
OSI model: Open Systems Interconnection model
PCB: Printed Circuit Board
A physical link (Physical Link) may carry multiple physical lanes (Physical Lanes).

Claims (24)

  1. 一种改善传输速率的方法,其特征在于,包括:
    以第一速率获得第一数据;
    在所述第一数据中以一定比例加入额外数据,得到第二数据;
    以第二速率发送所述第二数据,所述第二速率大于所述第一速率。
  2. 根据权利要求1所述的方法,其特征在于,所述第二速率不是所述第一速率的整数倍。
  3. 根据权利要求2所述的方法,其特征在于,所述以第二速率发送所述第二数据,包括:
    采用物理通道以所述第二速率发送所述第二数据,所述物理通道传输数据的速率基于扩充后的虚拟通道进行比特复用确定,所述扩充后的虚拟通道的数量基于传输所述第一数据的虚拟通道的数量以及数据传输接口采用所述第二速率传输数据时对应的物理通道的数量确定。
  4. 根据权利要求1-3中任一所述的方法,其特征在于,所述额外数据位于所述第二数据的第一部分。
  5. 根据权利要求1-3中任一所述的方法,其特征在于,所述额外数据中的第一部分位于所述第二数据的第一部分,所述额外数据的第二部分位于所述第二数据的第二部分,所述额外数据中的第一部分和所述额外数据中的第二部分之间包括第一数据的一部分。
  6. 根据权利要求1-5中任一所述的方法,其特征在于,所述第一数据包括对齐标志AM字符,所述在所述第一数据中以一定比例加入额外数据,包括:
    以所述第一数据中的AM字符为边界,在所述第一数据中以一定比例插入额外数据。
  7. 根据权利要求1-6中任一所述的方法,其特征在于,所述在所述第一数据中以一定比例加入额外数据,得到第二数据,包括:
    当所述第一数据为媒体接入控制MAC层的数据时,以第一比例在所述MAC层的数据中插入第一额外数据,得到第二数据;或,
    当所述第一数据为前向纠错FEC子层分发后的虚拟通道VL上传输的数据时,以第二比例在所述FEC子层分发后的VL上传输的数据中插入第二额外数据,得到第二数据;或,
    当所述第一数据为经过VL重映射之后、进入物理链路之前的数据时,以第三比例在所述经过VL重映射之后、进入物理链路之前的数据中插入第三额外数据,得到第二数据;或,
    当所述第一数据为物理链路上传输的数据时,以第四比例在所述物理链路上传输的数据中插入第四额外数据,得到第二数据;或,
    当所述第一数据为原始数据时,以第五比例在所述原始数据中插入第五额外数据,得到第二数据。
  8. 根据权利要求1-6中任一所述的方法,其特征在于,所述在所述第一数据中以一定比例加入额外数据,得到第二数据,包括:
    基于所述第二速率,采用前向纠错FEC码对所述第一数据编码,得到第二数据。
  9. 根据权利要求8所述的方法,其特征在于,所述基于所述第二速率,采用前向纠 错FEC码对所述第一数据编码,得到第二数据,包括:
    当所述第一数据为FEC子层分发后的虚拟通道VL上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对所述FEC子层分发后的VL上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据,所述速率比为所述第二速率与所述第一速率的比;或,
    当所述第一数据为经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,
    当所述第一数据为物理链路上传输且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述物理链路上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,
    当所述第一数据为采用第一FEC码型编码的数据时,对所述采用第一FEC码型编码的数据进行解码,得到原始数据,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销;或,
    当所述第一数据为原始数据时,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销。
  10. 一种改善传输速率的装置,其特征在于,所述装置包括:
    获取模块,用于以第一速率获得第一数据;
    处理模块,用于在所述第一数据中以一定比例加入额外数据,得到第二数据;
    发送模块,用于以第二速率发送所述第二数据,所述第二速率大于所述第一速率。
  11. 根据权利要求10所述的装置,其特征在于,所述第二速率不是所述第一速率的整数倍。
  12. 根据权利要求11所述的装置,其特征在于,所述发送模块,用于采用物理通道以第二速率发送所述第二数据,所述物理通道传输数据的速率基于扩充后的虚拟通道进行比特复用确定,所述扩充后的虚拟通道的数量基于传输所述第一数据的虚拟通道的数量以及数据传输接口采用所述第二速率传输数据时对应的物理通道的数量确定。
  13. 根据权利要求10-12中任一所述的装置,其特征在于,所述额外数据位于所述第二数据的第一部分。
  14. 根据权利要求10-12中任一所述的装置,其特征在于,所述额外数据的第一部分位于所述第二数据的第一部分,所述额外数据的第二部分位于所述第二数据的第二部分,所述额外数据中的第一部分和所述额外数据中的第二部分之间包括第一数据的一部分。
  15. 根据权利要求10-14中任一所述的装置,其特征在于,所述第一数据包括对齐标志AM字符,所述处理模块,用于以所述第一数据中的AM字符为边界,在所述第一数据中以一定比例插入额外数据。
  16. 根据权利要求10-15中任一所述的装置,其特征在于,所述处理模块,用于当所述第一数据为媒体接入控制MAC层的数据时,以第一比例在所述MAC层的数据中插入第一额外数据,得到第二数据;或,当所述第一数据为前向纠错FEC子层分发后的虚拟 通道VL上传输的数据时,以第二比例在所述FEC子层分发后的VL上传输的数据中插入第二额外数据,得到第二数据;或,当所述第一数据为经过VL重映射之后、进入物理链路之前的数据时,以第三比例在所述经过VL重映射之后、进入物理链路之前的数据中插入第三额外数据,得到第二数据;或,当所述第一数据为物理链路上传输的数据时,以第四比例在所述物理链路上传输的数据中插入第四额外数据,得到第二数据;或,当所述第一数据为原始数据时,以第五比例在所述原始数据中插入第五额外数据,得到第二数据。
  17. 根据权利要求10-15中任一所述的装置,其特征在于,所述处理模块,用于基于所述第二速率,采用前向纠错FEC码对所述第一数据编码,得到第二数据。
  18. 根据权利要求17所述的装置,其特征在于,所述处理模块,用于当所述第一数据为FEC子层分发后的虚拟通道VL上传输且采用第一FEC码型编码的数据时,采用与速率比匹配的第二FEC码型对所述FEC子层分发后的VL上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据,所述速率比为所述第二速率与所述第一速率的比;或,当所述第一数据为经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述经过VL重映射之后、进入物理链路之前且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当所述第一数据为物理链路上传输且采用第一FEC码型编码的数据时,采用与所述速率比匹配的第二FEC码型对所述物理链路上传输且采用第一FEC码型编码的数据进行二级编码,得到第二数据;或,当所述第一数据为采用第一FEC码型编码的数据时,对所述采用第一FEC码型编码的数据进行解码,得到原始数据,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销;或,当所述第一数据为原始数据时,采用与所述第二速率匹配的第三FEC码型对所述原始数据进行编码,得到第二数据,所述第三FEC码型的开销大于所述第一FEC码型的开销。
  19. 一种处理器,其特征在于,所述处理器可用于执行上述权利要求1-9中任一所述的方法。
  20. 一种网络设备,其特征在于,所述网络设备包括权利要求19所述的处理器。
  21. 根据权利要求20所述的网络设备,其特征在于,所述网络设备包括线卡,所述线卡包括权利要求19所述的处理器。
  22. 根据权利要求20或21所述的网络设备,其特征在于,所述网络设备还包括背板。
  23. 根据权利要求20-22中任一所述的网络设备,其特征在于,所述网络设备还包括位于线卡和背板之间的时钟和数据恢复CDR电路,所述线卡通过所述CDR电路与所述背板通信。
  24. 一种网络系统,其特征在于,所述网络系统包括一个或多个网络设备,所述网络设备为权利要求20-23中任一所述的网络设备。
PCT/CN2020/099226 2019-07-27 2020-06-30 改善传输速率的方法、处理器、网络设备和网络系统 WO2021017726A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20848134.1A EP3996330A4 (en) 2019-07-27 2020-06-30 METHOD FOR IMPROVING TRANSMISSION SPEED, PROCESSOR, NETWORK DEVICE, AND NETWORK SYSTEM
US17/584,911 US20220149988A1 (en) 2019-07-27 2022-01-26 Method for Adjusting Transmission Rate, Processor, Network Device, and Network System

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910685561.2 2019-07-27
CN201910685561 2019-07-27
CN201910731452.X 2019-08-08
CN201910731452.XA CN112291077A (zh) 2019-07-27 2019-08-08 改善传输速率的方法、装置、处理器、网络设备和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/584,911 Continuation US20220149988A1 (en) 2019-07-27 2022-01-26 Method for Adjusting Transmission Rate, Processor, Network Device, and Network System

Publications (1)

Publication Number Publication Date
WO2021017726A1 true WO2021017726A1 (zh) 2021-02-04

Family

ID=74229173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099226 WO2021017726A1 (zh) 2019-07-27 2020-06-30 改善传输速率的方法、处理器、网络设备和网络系统

Country Status (3)

Country Link
US (1) US20220149988A1 (zh)
EP (1) EP3996330A4 (zh)
WO (1) WO2021017726A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117675101A (zh) * 2022-09-08 2024-03-08 华为技术有限公司 数据传输方法、装置、系统及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1816969A (zh) * 2003-04-30 2006-08-09 马科尼通讯股份有限公司 前向纠错编码
CN101247200A (zh) * 2007-02-15 2008-08-20 华为技术有限公司 一种otu信号的复用/解复用系统及方法
CN103534971A (zh) * 2013-05-17 2014-01-22 华为技术有限公司 一种fec编解码的数据处理方法和相关装置
EP2701334A1 (en) * 2011-04-21 2014-02-26 Fujitsu Limited Data reception apparatus, marker information extraction method, and marker position detection method
CN106464427A (zh) * 2015-04-23 2017-02-22 华为技术有限公司 一种数据处理方法和数据发送端以及接收端

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8370704B2 (en) * 2009-03-09 2013-02-05 Intel Corporation Cable interconnection techniques
US8984380B2 (en) * 2011-07-01 2015-03-17 Altera Corporation Method and system for operating a communication circuit configurable to support one or more data rates
US9246617B2 (en) * 2013-09-09 2016-01-26 Applied Micro Circuits Corporation Reformating a plurality of signals to generate a combined signal comprising a higher data rate than a data rate associated with the plurality of signals
US9602401B2 (en) * 2014-09-22 2017-03-21 Intel Corporation Technologies for high-speed PCS supporting FEC block synchronization with alignment markers

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1816969A (zh) * 2003-04-30 2006-08-09 马科尼通讯股份有限公司 前向纠错编码
CN101247200A (zh) * 2007-02-15 2008-08-20 华为技术有限公司 一种otu信号的复用/解复用系统及方法
EP2701334A1 (en) * 2011-04-21 2014-02-26 Fujitsu Limited Data reception apparatus, marker information extraction method, and marker position detection method
CN103534971A (zh) * 2013-05-17 2014-01-22 华为技术有限公司 一种fec编解码的数据处理方法和相关装置
CN106464427A (zh) * 2015-04-23 2017-02-22 华为技术有限公司 一种数据处理方法和数据发送端以及接收端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3996330A4

Also Published As

Publication number Publication date
US20220149988A1 (en) 2022-05-12
EP3996330A1 (en) 2022-05-11
EP3996330A4 (en) 2022-10-12

Similar Documents

Publication Publication Date Title
US10374782B2 (en) Full duplex transmission method for high speed backplane system
US20220077875A1 (en) Data Transmission Method, Encoding Method, Decoding Method, Apparatus, Device, and Storage Medium
US8732375B1 (en) Multi-protocol configurable transceiver with independent channel-based PCS in an integrated circuit
US20240283565A1 (en) Interface, electronic device, and communication system
WO2021017726A1 (zh) 改善传输速率的方法、处理器、网络设备和网络系统
CN112291077A (zh) 改善传输速率的方法、装置、处理器、网络设备和系统
CN110830152B (zh) 接收码块流的方法、发送码块流的方法和通信装置
US20150106679A1 (en) Defect propagation of multiple signals of various rates when mapped into a combined signal
US20100316068A1 (en) Transport Over an Asynchronous XAUI-like Interface
CN116455516A (zh) 编码方法、解码方法、装置、设备、系统及可读存储介质
CN112543080B (zh) 误码率检测的方法和装置
CN117083820A (zh) 数据传输方法、通信设备及系统
JP7192195B2 (ja) コードブロックストリームの受信方法、コードブロックストリームの送信方法、および通信装置
WO2024148984A1 (zh) 传输数据的方法、装置、设备、系统及存储介质
WO2023131003A1 (zh) 编码方法、解码方法、装置、设备、系统及可读存储介质
WO2024001230A1 (zh) 承载方法、通信设备以及存储介质
TW202429847A (zh) 傳輸資料的方法、裝置、設備、系統及儲存介質
CN118041488A (zh) 数据传输方法、装置、系统及计算机可读存储介质
CN116455517A (zh) 编码方法、解码方法、装置、设备、系统及可读存储介质
CN117692114A (zh) 一种传输数据的方法和装置
TW202433863A (zh) 一種乙太網中發送資料的方法、設備和系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20848134

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020848134

Country of ref document: EP

Effective date: 20220202