WO2024061124A1 - Data processing method, device, electronic device and readable storage medium - Google Patents

Data processing method, device, electronic device and readable storage medium

Info

Publication number
WO2024061124A1
WO2024061124A1 PCT/CN2023/119095 CN2023119095W
Authority
WO
WIPO (PCT)
Prior art keywords
data
data packet
predetermined
frame
packet
Prior art date
Application number
PCT/CN2023/119095
Other languages
English (en)
French (fr)
Inventor
赵治鹏
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2024061124A1 publication Critical patent/WO2024061124A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received

Definitions

  • This application relates to but is not limited to the field of data processing technology.
  • GFP Generic Framing Procedure
  • OTN optical transport network
  • Related GFP technology requires data buffering when implementing flow control of customer information, adjusting the overhead positions of GFP frames, and adapting the rate of GFP frames to the OTN network. This buffering increases system power consumption and data latency.
  • This application provides a data processing method, device, electronic equipment and readable storage medium.
  • In a first aspect, this application provides a data processing method, which includes: preprocessing a first data packet in a data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; inserting at least one gap at a predetermined position of the first data packet in the data stream; inserting overhead data into the at least one gap to generate a first format frame; and scrambling and mapping the first format frame.
  • In a second aspect, this application provides a data processing device, including: a preprocessing unit configured to preprocess a first data packet in a data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; a gap insertion unit configured to insert at least one gap at a predetermined position of the first data packet in the data stream; an overhead data insertion unit configured to insert overhead data into the at least one gap to generate a first format frame; and a mapping unit configured to scramble and map the first format frame.
  • the present application provides an electronic device.
  • the electronic device includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor.
  • The program or instructions, when executed by the processor, cause the processor to implement the steps of the method described in the first aspect.
  • the present application provides a readable storage medium.
  • Programs or instructions are stored on the readable storage medium.
  • When executed by a processor, the program or instructions cause the processor to implement the steps of the method described in the first aspect.
  • Figure 1 is a schematic flow chart of the data processing method provided by this application.
  • Figure 2 is a schematic structural diagram of an electronic device that executes the data processing method provided by this application;
  • Figure 3 is a flow diagram of a comparative example of the present application.
  • FIG. 4 is another schematic flow chart of the data processing method provided by this application.
  • FIG. 5 is a schematic structural diagram of the data processing device provided by this application.
  • Figure 6 is a schematic structural diagram of the electronic device provided by this application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein; the objects distinguished by "first", "second", etc. are generally of one category, and the number of objects is not limited.
  • For example, the first object can be one or multiple.
  • In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
  • Figure 1 is a schematic flowchart of the data processing method provided by this application.
  • Figure 2 is a schematic structural diagram of an electronic device that executes the data processing method provided by this application.
  • the electronic device that executes the data processing method provided by this application includes: a flow control unit 10, a gap adjustment unit 20, a GFP encapsulation unit 30 and a mapping unit 50.
  • In Figure 2, sigema-delta represents a self-oscillating unit configured to generate uniform request-data indications, as illustrated by the sketch below.
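  • As a rough software illustration of such uniform request generation (a first-order accumulate-and-subtract rate generator), the following sketch is offered; the function name and the rate parameters are illustrative assumptions, not anything defined in the application:

```python
def request_pulses(rate_num: int, rate_den: int, cycles: int) -> list[int]:
    """Spread rate_num data-request indications over rate_den clock cycles
    as evenly as possible, in the style of a first-order sigma-delta
    (accumulate-and-subtract) rate generator."""
    acc = 0
    pulses = []
    for _ in range(cycles):
        acc += rate_num
        if acc >= rate_den:
            acc -= rate_den
            pulses.append(1)   # assert the request-data indication this cycle
        else:
            pulses.append(0)
    return pulses

# Example: three requests spread evenly across every eight cycles.
print(request_pulses(3, 8, 16))
```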
  • remain_reg represents a register for temporarily storing data; when the number of data bytes stored in remain_reg exceeds a certain value, fetching data from the flow control unit is paused (referred to as reg over-limit backpressure).
  • the method includes the following steps S102 to S108.
  • In step S102, the first data packet in the data stream is preprocessed to generate a data queue.
  • After preprocessing, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position.
  • Optionally, the predetermined frame can be an idle frame (idle), and the predetermined position can be the packet header or packet tail; that is, through preprocessing, the idle frame is placed at the packet header or packet tail of a data packet in the data queue (for example, the idle frame is placed after the tail of the first data packet).
  • This step can be performed by the flow control unit 10. Through this step, for data read from the flow-control rate-limiting FIFO (first in, first out) of the flow control unit 10, the preprocessing determines, according to functional requirements, whether a predetermined frame (for example, an idle frame) needs to be inserted at the packet header or packet tail of the data.
  • In one implementation, invalid bytes between the header position and the tail position of the first data packet in the data stream may be removed.
  • When the length of the first data packet meets a predetermined length condition, a data queue is generated from the first data packet.
  • The predetermined length condition can be that the spliced data fills one beat.
  • When the length of the first data packet does not meet the predetermined length condition, the predetermined frame is spliced after the tail of the first data packet, and the data queue is generated from the spliced first data packet.
  • For example, when the first data packet is less than one beat long, or remains less than one beat after a preconfigured waiting time, an idle frame can be spliced after the tail of the first data packet, and the data queue is generated from the spliced first data packet.
  • In one implementation, before the predetermined frame is spliced after the tail of the first data packet, the packet length of the first data packet may be written into the data queue.
  • When an integer number of predetermined frames cannot be spliced after the tail of the first data packet, the remaining portion of the integer number of predetermined frames can be temporarily stored.
  • This remaining portion is then spliced with the header of the second data packet.
  • Similarly, in another implementation, when the remaining portion is not spliced to the header of the second data packet, an indication signal for the remaining portion is output in alignment with the header of the second data packet; the indication signal indicates the number of remaining bytes of the predetermined frame.
  • For example, the number of remaining idle bytes (0, 1, 2, or 3 bytes) is output as an indication signal aligned with the packet header.
  • After receiving the data, the gap adjustment unit 20 may determine, according to the indication signal, the number of gaps to insert into the first data packet; a minimal sketch of this padding-and-indication logic follows.
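  • The sketch below makes the idea concrete under stated assumptions: a 4-byte bus beat, a 4-byte idle frame, and placeholder names and byte values that are not defined in the application.

```python
BEAT_BYTES = 4    # assumed bus beat width; yields the 0-3 byte indication above
IDLE_BYTE = 0x00  # placeholder value standing in for one byte of an idle frame

def pad_packet_to_beat(packet: bytes) -> tuple[bytes, int]:
    """Splice idle bytes after the packet tail until whole beats are filled,
    and report how many bytes of the partially used idle frame are still
    owed; that count plays the role of the indication signal that is output
    aligned with the next packet header."""
    idle_used = (-len(packet)) % BEAT_BYTES            # idle bytes spent on the last beat
    padded = packet + bytes([IDLE_BYTE]) * idle_used
    remainder = (BEAT_BYTES - idle_used) % BEAT_BYTES if idle_used else 0
    return padded, remainder

# A 6-byte packet on a 4-byte bus consumes 2 idle bytes and leaves 2 owed.
queue, indication = pad_packet_to_beat(b"\xaa" * 6)
assert len(queue) % BEAT_BYTES == 0 and indication == 2
```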
  • In step S104, at least one gap is inserted at a predetermined position of the first data packet in the data stream.
  • Through the gap adjustment unit 20, the data stream can be expanded at the packet header and tail to reserve the overhead positions of the GFP frame; if the idle byte indication corresponding to the packet header is not 0, positions for filling the remaining idle bytes are expanded at the same time.
  • In step S106, overhead data is inserted into the at least one gap to generate a first format frame.
  • Through the GFP encapsulation unit 30, the payload length is determined according to the packet length calculated by the flow control unit 10, and a cyclic redundancy check (CRC) 16 is computed to form the core header.
  • CRC Cyclic redundancy check
  • PFI Payload frame check sequence (FCS) indicator
  • FCS Frame Check Sequence
  • The payload type is determined according to the PFI, the user data type, and so on, and a CRC16 is computed to form the payload header.
  • If PFI is enabled, a CRC32 must also be computed over the GFP payload information to obtain the FCS; the overhead data, such as the core header and payload header, are then inserted in sequence into the gap positions before the packet header, and the FCS is inserted into the gap position after the packet tail to form a GFP-F frame; a rough sketch of this assembly is given below.
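  • The following sketch illustrates only this assembly order (core header from the payload length, payload header from a type field, optional CRC32 FCS). The field widths, CRC parameters and type value are simplifying assumptions, and steps such as core-header scrambling required by ITU-T G.7041 are omitted.

```python
import zlib

def crc16(data: bytes) -> int:
    """Bitwise CRC-16 with the CCITT polynomial 0x1021 and zero initial value;
    the exact parameters of the GFP cHEC/tHEC should be taken from G.7041."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_gfp_f_frame(payload: bytes, ptype: int = 0x0001, pfi_enabled: bool = True) -> bytes:
    pli = 4 + len(payload) + (4 if pfi_enabled else 0)                  # assumed payload-area length
    core = pli.to_bytes(2, "big")
    core_header = core + crc16(core).to_bytes(2, "big")                 # PLI + CRC16 -> core header
    type_field = ptype.to_bytes(2, "big")
    payload_header = type_field + crc16(type_field).to_bytes(2, "big")  # type + CRC16 -> payload header
    fcs = zlib.crc32(payload).to_bytes(4, "big") if pfi_enabled else b""  # CRC32 over the payload -> FCS
    # Overhead goes before the packet, the FCS after the packet tail, as in the text.
    return core_header + payload_header + payload + fcs
```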
  • In step S108, the first format frame is scrambled and mapped.
  • After being processed by the GFP encapsulation unit, the data is scrambled and sent to the mapping unit 50 to be mapped into an optical channel data unit (ODU) frame.
  • The mapping unit 50 generates the rate of the ODU frame through self-oscillation, requests data from the previous stage (the gap adjustment unit 20) at this rate, and assembles the data according to the ODU frame format.
  • To clearly illustrate the beneficial effects of the embodiments of this application, the following comparative example is presented. As shown in Figure 3, the data processing device of this comparative example includes: a flow control unit 10, a gap adjustment unit 20, a GFP encapsulation unit 30, a GFP speed regulating unit (hereinafter referred to as the speed regulating unit) 40, and a mapping unit 50.
  • The flow control unit 10 first performs simple processing on the customer service: deleting whole beats of IPG (inter-packet gap), calculating the packet length, and so on.
  • The processed data packets are written into a FIFO.
  • When the customer service rate is high compared with the OTN service rate, or when a rate-limiting function is required, the FIFO watermark may reach the preset threshold.
  • A flow control signal must then be sent to the customer service equipment to make it stop sending data.
  • The gap adjustment unit 20 takes data out of the flow control unit 10, deletes the invalid bytes between two packets, then splices the data into full beats and writes them into a FIFO. If the flow is briefly interrupted because of rate limiting or backpressure, the last packet output is very likely to contain less than one beat of data after splicing; in that case, the tail data of the "last packet" output by the flow control unit 10 must be forcibly written into the FIFO. After reading from the FIFO, the data stream is expanded at the packet header and tail to reserve the overhead positions of the GFP frame.
  • During expansion, data that cannot be output in time is temporarily stored and spliced with the next beat of data for output; when the temporarily stored data exceeds a preset value, the FIFO of the gap adjustment unit 20 is backpressured, and if that FIFO reaches its high watermark, the FIFO of the flow control unit 10 is backpressured in turn. To ensure the continuity of the GFP-F frame sequence in the data stream, if the "last packet" is found when reading the FIFO, the invalid data after the packet tail must be replaced with idle (idle frames). A toy model of this watermark backpressure is sketched below.
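  • The sketch below pictures the watermark-driven backpressure chain only; the depth and watermark values are arbitrary illustrations, not figures from the application.

```python
from collections import deque

class BackpressureFifo:
    """Toy FIFO whose backpressure flag is asserted toward the upstream
    stage once the fill level reaches the high watermark."""
    def __init__(self, depth: int = 32, high_watermark: int = 24):
        self.buf: deque = deque()
        self.depth = depth
        self.high_watermark = high_watermark

    def push(self, beat) -> bool:
        if len(self.buf) >= self.depth:
            return False            # full: writing now would lose data
        self.buf.append(beat)
        return True

    def pop(self):
        return self.buf.popleft() if self.buf else None

    @property
    def backpressure(self) -> bool:
        return len(self.buf) >= self.high_watermark
```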
  • The operation of the GFP encapsulation unit 30 is similar to that in the embodiments of the present application: it determines the payload length according to the packet length calculated by the flow control unit 10 and computes CRC16 to form the core header; it determines the payload type according to the PFI (payload FCS indicator), the customer data type, and so on, and likewise computes CRC16 to form the payload header. If PFI is enabled, CRC32 is also computed over the GFP payload information to obtain the FCS (frame check sequence); the core header and payload header are then inserted in sequence into the gap positions before the packet header, and the FCS is inserted into the gap position after the packet tail to form a GFP-F frame.
  • PFI payload FCS indicator
  • The GFP-encapsulated data is sent to the GFP speed regulating unit 40.
  • There, the data is first buffered into a FIFO. Because the customer service needs to be mapped into an ODU (optical channel data unit) frame, the mapping unit 50 reads the data of the speed regulating unit 40 at a relatively constant rate, so a buffer is needed here to absorb the jitter of the customer service (otherwise data would be lost).
  • During speed regulation, one beat of idle frame can be inserted every N GFP frames according to the configuration to achieve rate adjustment. If an alarm is received, a management frame is inserted into the data stream by backpressuring the speed-regulation FIFO.
  • the GFP-F frame after speed adjustment is scrambled and sent to the mapping unit 50 for mapping into an ODU frame.
  • the mapping unit 50 generates the rate of the ODU frame through self-oscillation, requests data from the front end (speed regulating unit 40) at this rate, and assembles it according to the ODU frame format.
  • Compared with this comparative example, the data processing method provided by the embodiments of the present application changes a 32-deep FIFO into a register that caches two beats of data, which greatly reduces hardware resources.
  • The GFP framing processing reduces the number of buffers used and the overall buffer depth, using fewer hardware resources to achieve low-power, low-latency data transmission.
  • Through step S102, the implementations of this application delete the gap of a data packet, such as a Medium Access Control (MAC) packet, and move it forward so that it becomes the gap before the encapsulated GFP-F frame.
  • The data buffering is implemented through registers and backpressure operations.
  • This saves the buffer FIFO of the gap adjustment unit 20 in Figure 3; a minimal software sketch of such a register-plus-backpressure buffer follows.
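  • The sketch below is only an interpretation of the carry-register idea; the beat width, the over-limit threshold and the class name are assumptions chosen to make the mechanism concrete.

```python
BEAT = 4  # assumed beat width in bytes

class RemainReg:
    """Small carry register standing in for the removed gap-adjustment FIFO:
    leftover bytes are spliced in front of the next data, and fetching from
    the upstream flow control unit pauses when too much is pending
    ('reg over-limit backpressure')."""
    def __init__(self, limit: int = BEAT):
        self.pending = b""
        self.limit = limit

    def cycle(self, data_in: bytes = b"") -> bytes:
        self.pending += data_in
        out, self.pending = self.pending[:BEAT], self.pending[BEAT:]
        return out                                  # at most one beat leaves per cycle

    @property
    def over_limit(self) -> bool:
        return len(self.pending) >= self.limit      # pause the upstream fetch
```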
  • The implementations of the present application can be applied to optical transport network (OTN) framers (FRAMER) and to OTN bearer-network processing chips.
  • the implementation methods of the present application can support long-distance optical transmission systems and can be used for optical transmission equipment in OTN product lines, including backbone, metropolitan area, aggregation access and data centers.
  • Figure 4 is another schematic flow chart of the data processing method provided by an embodiment of the present application. As shown in the figure, the method includes the following steps S202 to S210.
  • In step S202, the first data packet in the data stream is preprocessed to generate a data queue.
  • In step S204, when the size of the data stream in the register is greater than or equal to a predetermined value, rate-limiting processing is performed on the output of the data stream.
  • In this case, the flow control unit sets the token rate of the rate-limit configuration to be greater than the user service rate, for example, greater than the customer service rate by a few parts per million.
  • The (N+1)th data packet is blocked in the FIFO, and at least one predetermined frame, for example a certain number of idle frames, is sent after the Nth data packet, directly backpressuring the flow control FIFO; N is a positive integer greater than or equal to 1.
  • In the absence of bursts, reading of the flow control FIFO is not restricted by the token bucket.
  • Compared with the comparative example, when the user service is within the normal range, idle frames are no longer filled into the data stream, which improves bandwidth utilization and realizes adaptive rate adjustment; a minimal token-bucket sketch follows.
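  • The token-bucket check can be pictured with the host-side sketch below; the class name, the time base, and the rate and capacity values are illustrative assumptions, whereas the application realizes the equivalent check in hardware inside the flow control unit.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: tokens accrue at a configured rate and
    each packet consumes tokens equal to its length."""
    def __init__(self, rate_bytes_per_s: float, capacity_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True     # read the packet out of the flow-control FIFO
        return False        # hold the (N+1)th packet and emit idle frames instead
```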
  • This implementation removes the FIFO used for speed regulation and changes a 32-deep FIFO into a register that caches two beats of data, which greatly reduces hardware resources and achieves an even better bandwidth adjustment effect.
  • The implementations of this application move the read request of the mapping unit forward and use the FIFO of the flow control unit to absorb jitter; rate adjustment is realized through the token-bucket configuration of the flow control unit, saving the buffer FIFO of the speed regulating unit.
  • The gap adjustment FIFO (gap fifo) and the speed regulation FIFO are removed, while the specification of the flow control FIFO remains unchanged. Therefore, the embodiments of the present invention save the hardware buffers and the memory built-in self-test resources inside the buffers, and reduce the time consumed by data entering and leaving the FIFOs, saving hardware resources while reducing power consumption and transmission delay.
  • In step S206, at least one gap is inserted at a predetermined position of the first data packet in the data stream.
  • In step S208, overhead data is inserted into the at least one gap to generate a first format frame.
  • In step S210, the first format frame is scrambled and mapped.
  • For steps S202 and S206 to S210 of this embodiment, reference can be made to the description of the corresponding steps of the embodiment in Figure 1, which is not repeated here.
  • FIG. 5 shows a schematic structural diagram of a data processing device provided by this application.
  • the device 500 includes: a pre-processing unit 510, a gap insertion unit 520, an overhead data insertion unit 530 and a mapping unit 540.
  • The preprocessing unit 510 is configured to preprocess the first data packet in the data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; the gap insertion unit 520 is configured to insert at least one gap at a predetermined position of the first data packet in the data stream; the overhead data insertion unit 530 is configured to insert overhead data into the at least one gap to generate a first format frame; and the mapping unit 540 is configured to scramble and map the first format frame.
  • In a possible implementation, the preprocessing unit 510 removes invalid bytes between the header position and the tail position of the first data packet in the data stream; when the length of the first data packet meets a predetermined length condition, it generates a data queue from the first data packet; when the length of the first data packet does not meet the predetermined length condition, it splices the predetermined frame after the tail of the first data packet and generates the data queue from the spliced first data packet.
  • the preprocessing unit 510 writes the packet length of the first data packet into the data queue.
  • In a possible implementation, when an integer number of predetermined frames is not spliced after the tail of the first data packet, the preprocessing unit 510 temporarily stores the remaining portion of the integer number of predetermined frames and splices the remaining portion with the header of the second data packet.
  • In a possible implementation, when the remaining portion of the integer number of predetermined frames is not spliced to the header of the second data packet, the preprocessing unit 510 outputs an indication signal for the remaining portion in alignment with the header of the second data packet, where the indication signal indicates the number of remaining bytes of the predetermined frame.
  • the preprocessing unit 510 determines the number of gaps into which the first data packet is inserted according to the indication signal.
  • the preprocessing unit 510 when the size of the data stream in the register is greater than or equal to a predetermined value, performs current limiting processing on the output of the data stream.
  • In a possible implementation, the preprocessing unit 510 sets the token rate of the rate-limit configuration to be greater than the user service rate, blocks the (N+1)th data packet in the FIFO, and sends at least one predetermined frame after the Nth data packet.
  • the predetermined position includes a packet header or a packet tail position, and/or the predetermined frame is an idle frame.
  • the device 500 provided in the embodiment of the present application can execute each method described in the previous method implementation, and realize the functions and beneficial effects of each method described in the previous method implementation, which will not be described again here.
  • Figure 6 shows a schematic diagram of the hardware structure of an electronic device that performs data processing provided by the embodiment of the present application.
  • At the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory.
  • The memory may include internal memory, such as high-speed random access memory (Random-Access Memory, RAM), and may also include non-volatile memory (non-volatile memory), such as at least one disk storage.
  • RAM random access memory
  • non-volatile memory such as at least one disk memory.
  • the electronic equipment may also include other hardware required by the business.
  • The processor, the network interface, and the memory can be connected to each other through an internal bus, which can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • ISA Industry Standard Architecture
  • PCI Peripheral Component Interconnect
  • EISA Extended Industry Standard Architecture
  • the bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only a two-way arrow is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • Memory used to store programs.
  • a program may include program code including computer operating instructions.
  • Memory may include internal memory and non-volatile memory and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming a device for locating the target user at the logical level.
  • the processor executes the program stored in the memory, and can be used to execute: the data processing method described in the embodiment of FIG. 1 and/or FIG. 4 .
  • The above is the data processing method described in the embodiments of Figure 1 and/or Figure 4 of this application.
  • the methods disclosed in the illustrated embodiments may be applied in, or implemented by, a processor.
  • the processor may be an integrated circuit chip that has signal processing capabilities.
  • each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor.
  • The above processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • CPU central processing unit
  • NP Network Processor
  • DSP Digital Signal Processor
  • ASIC Application Specific Integrated Circuit
  • FPGA Field-Programmable Gate Array
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the electronic device can also execute each of the methods described in the foregoing method implementations, and realize the functions and beneficial effects of each of the methods described in the foregoing method implementations, which will not be described again here.
  • the electronic device of this application does not exclude other implementation methods, such as logic devices or a combination of software and hardware, etc. That is to say, the execution subject of the following processing flow is not limited to each logical unit. It can also be hardware or logic devices.
  • Embodiments of the present application also propose a computer-readable storage medium that stores one or more programs.
  • When the one or more programs are executed by an electronic device including multiple application programs, they cause the electronic device to perform the following operations: the data processing method described in the embodiment of FIG. 1 and/or FIG. 4.
  • the computer-readable storage medium includes read-only memory (ROM), random access memory (RAM), magnetic disk or optical disk, etc.
  • embodiments of the present application also provide a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium.
  • The computer program includes program instructions; when the program instructions are executed by a computer, the following flow is implemented: the data processing method described in the embodiment of Figure 1 and/or Figure 4.
  • In the embodiments of this application, a data queue is generated by preprocessing the first data packet in the data stream.
  • When a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; at least one gap is inserted at a predetermined position of the first data packet in the data stream; overhead data is inserted into the at least one gap to generate a first format frame; and the first format frame is scrambled and mapped, which can reduce system power consumption and data delay.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or A combination of any of these devices.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, in which information storage can be implemented by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media does not include transitory media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a data processing method, a device, an electronic device and a readable storage medium. The data processing method includes: preprocessing a first data packet in a data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; inserting at least one gap at a predetermined position of the first data packet in the data stream; inserting overhead data into the at least one gap to generate a first format frame; and scrambling and mapping the first format frame.

Description

Data processing method, device, electronic device and readable storage medium
Cross-Reference to Related Applications
This application claims priority to patent application No. 202211145806.0, filed with the China Patent Office on September 20, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to, but is not limited to, the field of data processing technology.
Background
With the gradual development of fifth-generation mobile communication (5G) technology, its advantages of large bandwidth, low latency and massive connectivity have been widely welcomed; at the same time, the high cost caused by its considerable power consumption has become a major bottleneck restricting the development and popularization of 5G technology.
GFP (Generic Framing Procedure), as a link-layer standard, can flexibly encapsulate customer services so that they can be transmitted over an OTN (optical transport network). Related GFP technology requires data buffering when implementing flow control of customer information, adjusting the overhead positions of GFP frames, and adapting the rate of GFP frames to the OTN network; this buffering increases system power consumption and data latency.
Summary
This application provides a data processing method, a device, an electronic device and a readable storage medium.
In a first aspect, this application provides a data processing method, including: preprocessing a first data packet in a data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; inserting at least one gap at a predetermined position of the first data packet in the data stream; inserting overhead data into the at least one gap to generate a first format frame; and scrambling and mapping the first format frame.
In a second aspect, this application provides a data processing device, including: a preprocessing unit configured to preprocess a first data packet in a data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; a gap insertion unit configured to insert at least one gap at a predetermined position of the first data packet in the data stream; an overhead data insertion unit configured to insert overhead data into the at least one gap to generate a first format frame; and a mapping unit configured to scramble and map the first format frame.
In a third aspect, this application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, cause the processor to implement the steps of the method described in the first aspect.
In a fourth aspect, this application provides a readable storage medium storing a program or instructions which, when executed by a processor, cause the processor to implement the steps of the method described in the first aspect.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of the data processing method provided by this application;
Figure 2 is a schematic structural diagram of an electronic device that executes the data processing method provided by this application;
Figure 3 is a schematic flowchart of a comparative example of this application;
Figure 4 is another schematic flowchart of the data processing method provided by this application;
Figure 5 is a schematic structural diagram of the data processing device provided by this application;
Figure 6 is a schematic structural diagram of the electronic device provided by this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly below with reference to the accompanying drawings of the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application fall within the protection scope of this application.
The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of this application can be practiced in sequences other than those illustrated or described herein; the objects distinguished by "first", "second", etc. are generally of one category, and the number of objects is not limited; for example, the first object may be one or multiple. In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
The data processing method provided by the embodiments of this application is described in detail below through exemplary embodiments and their application scenarios with reference to the accompanying drawings.
Figure 1 is a schematic flowchart of the data processing method provided by this application, and Figure 2 is a schematic structural diagram of an electronic device that executes the data processing method provided by this application. As shown in Figures 1 and 2, the electronic device that executes the data processing method provided by this application includes: a flow control unit 10, a gap adjustment unit 20, a GFP encapsulation unit 30 and a mapping unit 50. In Figure 2, sigema-delta represents a self-oscillating unit configured to generate uniform request-data indications, and remain_reg represents a register for temporarily storing data, where, when the number of data bytes stored in remain_reg exceeds a certain value, fetching data from the flow control unit is paused (referred to as reg over-limit backpressure). The method includes the following steps S102 to S108.
In step S102, the first data packet in the data stream is preprocessed to generate a data queue.
After preprocessing, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position. Optionally, the predetermined frame may be an idle frame (idle), and the predetermined position may be the packet header or packet tail; that is, through preprocessing, the idle frame is placed at the packet header or packet tail of a data packet in the data queue (for example, the idle frame is placed after the tail of the first data packet). This step may be performed by the flow control unit 10. Through this step, for data read from the flow-control rate-limiting FIFO (first in, first out) of the flow control unit 10, the preprocessing determines on its own, according to functional requirements, whether a predetermined frame (for example, an idle frame) needs to be inserted at the packet header or packet tail of the data.
In one implementation, invalid bytes between the header position and the tail position of the first data packet in the data stream may be removed. When the length of the first data packet meets a predetermined length condition, a data queue is generated from the first data packet, where the predetermined length condition may be that the spliced data fills one beat. When the length of the first data packet does not meet the predetermined length condition, the predetermined frame is spliced after the tail of the first data packet, and the data queue is generated from the spliced first data packet. For example, when the first data packet is less than one beat long, or remains less than one beat after a preconfigured waiting time, an idle frame may be spliced after the tail of the first data packet, and the data queue is generated from the spliced first data packet.
In one implementation, before the predetermined frame is spliced after the tail of the first data packet, the packet length of the first data packet may be written into the data queue.
In one implementation, when an integer number of predetermined frames is not spliced after the tail of the first data packet, the remaining portion of the integer number of predetermined frames may be temporarily stored, and this remaining portion is spliced with the header of a second data packet.
Similarly, in another implementation, when the remaining portion of the integer number of predetermined frames is not spliced to the header of the second data packet, an indication signal for the remaining portion is output in alignment with the header of the second data packet, where the indication signal indicates the number of remaining bytes of the predetermined frame. For example, the number of remaining idle bytes (0, 1, 2 or 3 bytes) is output as an indication signal aligned with the packet header. After receiving the data, the gap adjustment unit 20 may determine, according to the indication signal, the number of gaps to insert into the first data packet.
In step S104, at least one gap is inserted at a predetermined position of the first data packet in the data stream.
Through the gap adjustment unit 20, the data stream can be expanded at the packet header and packet tail to reserve the overhead positions of the GFP frame; if the idle byte indication corresponding to the packet header is not 0, positions for filling the remaining idle bytes are expanded at the same time.
In step S106, overhead data is inserted into the at least one gap to generate a first format frame.
Through the GFP encapsulation unit 30, the payload length is determined according to the packet length calculated by the flow control unit 10, and a cyclic redundancy check (CRC) 16 is computed to form the core header; the payload type is determined according to the PFI (payload frame check sequence (Frame Check Sequence, FCS) indicator), the user data type and so on, and a CRC16 is computed to form the payload header; if PFI is enabled, a CRC32 is also computed over the GFP payload information to obtain the FCS; the overhead data, such as the core header and payload header, are then inserted in sequence into the gap positions before the packet header, and the FCS is inserted into the gap position after the packet tail, forming a GFP-F frame.
In step S108, the first format frame is scrambled and mapped.
After being processed by the GFP encapsulation unit, the data is scrambled and sent to the mapping unit 50 to be mapped into an optical channel data unit (Optical channel Data Unit, ODU) frame. The mapping unit 50 generates the rate of the ODU frame through self-oscillation, requests data from the previous stage (the gap adjustment unit 20) at this rate, and assembles the data according to the ODU frame format.
To clearly illustrate the beneficial effects of the embodiments of this application, the following comparative example is presented here. As shown in Figure 3, the data processing device of the comparative example includes: a flow control unit 10, a gap adjustment unit 20, a GFP encapsulation unit 30, a GFP speed regulating unit (hereinafter referred to as the speed regulating unit) 40 and a mapping unit 50.
The flow control unit 10 first performs simple processing on the customer service: deleting whole beats of IPG (inter-packet gap), calculating the packet length, and so on. The processed data packets are written into a FIFO. When the customer service rate is high compared with the OTN service rate, or when a rate-limiting function needs to be implemented, the FIFO watermark may reach a preset threshold; a flow control signal must then be sent to the customer service equipment to make it stop sending data.
The gap adjustment unit 20 takes data out of the flow control unit 10, deletes the invalid bytes between two packets, splices the data into full beats and writes them into a FIFO. If the flow is briefly interrupted because of rate limiting or backpressure, the last packet output is very likely to contain less than one beat of data after splicing; in that case, the tail data of the "last packet" output by the flow control unit 10 must be forcibly written into the FIFO. After reading from the FIFO, the data stream is expanded at the packet header and tail to reserve the overhead positions of the GFP frame. During expansion, data that cannot be output in time is temporarily stored and spliced with the next beat of data for output; when the temporarily stored data exceeds a preset value, the FIFO of the gap adjustment unit 20 is backpressured, and if the FIFO of the gap adjustment unit 20 reaches its high watermark, the FIFO of the flow control unit 10 is backpressured. To ensure the continuity of the GFP-F frame sequence in the data stream, if the "last packet" is found when reading the FIFO, the invalid data after the packet tail must be replaced with idle (idle frames).
The operation of the GFP encapsulation unit 30 is similar to that in the embodiments of this application: it determines the payload length according to the packet length calculated by the flow control unit 10 and computes CRC16 to form the core header; it determines the payload type according to the PFI (payload FCS indicator), the customer data type and so on, and likewise computes CRC16 to form the payload header; if PFI is enabled, CRC32 is also computed over the GFP payload information to obtain the FCS (frame check sequence); the core header and payload header are then inserted in sequence into the gap positions before the packet header, and the FCS is inserted into the gap position after the packet tail, forming a GFP-F frame.
The GFP-encapsulated data is sent to the GFP speed regulating unit 40, where it is first buffered into a FIFO. Because the customer service needs to be mapped into an ODU (optical channel data unit) frame, the mapping unit 50 reads the data of the speed regulating unit 40 at a relatively constant rate, so a buffer is needed here to absorb the jitter of the customer service (otherwise data would be lost). During speed regulation, one beat of idle frame can be inserted every N GFP frames according to the configuration to achieve the purpose of rate adjustment. If an alarm is received, a management frame is inserted into the data stream by backpressuring the speed-regulation FIFO.
The speed-regulated GFP-F frame is scrambled and sent to the mapping unit 50 to be mapped into an ODU frame. The mapping unit 50 generates the rate of the ODU frame through self-oscillation, requests data from the previous stage (the speed regulating unit 40) at this rate, and assembles the data according to the ODU frame format.
Compared with this comparative example, the data processing method provided by the embodiments of this application changes a 32-deep FIFO into a register that caches two beats of data, which greatly reduces hardware resources. The GFP framing processing reduces the number of buffers used and the overall buffer depth, using fewer hardware resources to achieve low-power, low-latency data transmission.
Through step S102, the embodiments of this application delete the gap of a data packet, for example a Medium Access Control (MAC) packet, and move it forward so that it becomes the gap before the encapsulated GFP-F frame; the data buffering is implemented through registers and backpressure operations, which saves the buffer FIFO of the gap adjustment unit 20 in Figure 3.
The embodiments of this application can be applied to optical transport network (OTN) framers (FRAMER) and to OTN bearer-network processing chips. The embodiments of this application can support long-distance optical transmission systems and can be used in the optical transmission equipment of OTN product lines, including backbone, metropolitan area, aggregation access and data centers.
Figure 4 is a schematic flowchart of the data processing method provided by an embodiment of this application. As shown in the figure, the method includes the following steps S202 to S210.
In step S202, the first data packet in the data stream is preprocessed to generate a data queue.
In step S204, when the size of the data stream in the register is greater than or equal to a predetermined value, rate-limiting processing is performed on the output of the data stream.
When the size of the data stream in the register is greater than or equal to the predetermined value, the flow control unit sets the token rate of the rate-limit configuration to be greater than the user service rate, for example, greater than the customer service rate by a few parts per million. The (N+1)th data packet is blocked in the FIFO, and at least one predetermined frame, for example a certain number of idle frames (idle), is sent after the Nth data packet, directly backpressuring the flow control FIFO. N is a positive integer greater than or equal to 1. In the absence of bursts, reading of the flow control FIFO is not restricted by the token bucket. Compared with the comparative example, when the user service is within the normal range, idle frames are no longer filled into the data stream, which improves bandwidth utilization and realizes adaptive rate adjustment. This implementation removes the FIFO used for speed regulation and changes a 32-deep FIFO into a register that caches two beats of data, which greatly reduces hardware resources and achieves an even better bandwidth adjustment effect.
Comparing Figure 2 with Figure 3, it can be seen that the embodiments of this application have no speed regulating unit; the read request of the mapping unit is fed back to the gap adjustment unit. When the mapping request rate is constant and the customer rate is high, the gap adjustment unit (whose register exceeds its threshold) backpressures the flow control FIFO, thereby absorbing the jitter.
The embodiments of this application move the read request of the mapping unit forward and use the FIFO of the flow control unit to absorb jitter; rate adjustment is realized through the token-bucket configuration of the flow control unit, saving the buffer FIFO of the speed regulating unit. The gap adjustment FIFO (gap fifo) and the speed regulation FIFO are removed, while the specification of the flow control FIFO remains unchanged. Therefore, the embodiments of the present invention save the hardware buffers and the memory built-in self-test resources inside the buffers, and reduce the time consumed by data entering and leaving the FIFOs, saving hardware resources while reducing power consumption and transmission delay.
In step S206, at least one gap is inserted at a predetermined position of the first data packet in the data stream.
In step S208, overhead data is inserted into the at least one gap to generate a first format frame.
In step S210, the first format frame is scrambled and mapped.
For steps S202 and S206 to S210 of this embodiment, reference can be made to the description of the corresponding steps of the embodiment in Figure 1, which is not repeated here.
Figure 5 shows a schematic structural diagram of the data processing device provided by this application. The device 500 includes: a preprocessing unit 510, a gap insertion unit 520, an overhead data insertion unit 530 and a mapping unit 540.
The preprocessing unit 510 is configured to preprocess the first data packet in the data stream to generate a data queue, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; the gap insertion unit 520 is configured to insert at least one gap at a predetermined position of the first data packet in the data stream; the overhead data insertion unit 530 is configured to insert overhead data into the at least one gap to generate a first format frame; and the mapping unit 540 is configured to scramble and map the first format frame.
In a possible implementation, the preprocessing unit 510 removes invalid bytes between the header position and the tail position of the first data packet in the data stream; when the length of the first data packet meets a predetermined length condition, it generates a data queue from the first data packet; when the length of the first data packet does not meet the predetermined length condition, it splices the predetermined frame after the tail of the first data packet and generates the data queue from the spliced first data packet.
In a possible implementation, the preprocessing unit 510 writes the packet length of the first data packet into the data queue.
In a possible implementation, when an integer number of predetermined frames is not spliced after the tail of the first data packet, the preprocessing unit 510 temporarily stores the remaining portion of the integer number of predetermined frames and splices the remaining portion with the header of the second data packet.
In a possible implementation, when the remaining portion of the integer number of predetermined frames is not spliced to the header of the second data packet, the preprocessing unit 510 outputs an indication signal for the remaining portion in alignment with the header of the second data packet, where the indication signal indicates the number of remaining bytes of the predetermined frame.
In a possible implementation, the preprocessing unit 510 determines, according to the indication signal, the number of gaps to insert into the first data packet.
In a possible implementation, when the size of the data stream in the register is greater than or equal to a predetermined value, the preprocessing unit 510 performs rate-limiting processing on the output of the data stream.
In a possible implementation, the preprocessing unit 510 sets the token rate of the rate-limit configuration to be greater than the user service rate, blocks the (N+1)th data packet in the FIFO, and sends at least one predetermined frame after the Nth data packet.
In a possible implementation, the predetermined position includes a packet header or packet tail position, and/or the predetermined frame is an idle frame.
The device 500 provided by the embodiments of this application can execute the methods described in the foregoing method embodiments and realize the functions and beneficial effects of those methods, which are not repeated here.
Figure 6 shows a schematic diagram of the hardware structure of an electronic device that performs the data processing provided by the embodiments of this application. Referring to the figure, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface and a memory. The memory may include internal memory, such as high-speed random access memory (Random-Access Memory, RAM), and may also include non-volatile memory (non-volatile memory), such as at least one disk storage. Of course, the electronic device may also include other hardware required by the service.
The processor, the network interface and the memory can be connected to each other through the internal bus, which can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in the figure, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operating instructions. The memory may include internal memory and non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming, at the logical level, a device for locating the target user. The processor executes the program stored in the memory and can be used to execute the data processing method described in the embodiments of Figure 1 and/or Figure 4.
The data processing method described above in the embodiments of Figure 1 and/or Figure 4 of this application, as disclosed in the illustrated embodiments, can be applied in a processor or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method can be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The above processor can be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the method disclosed in conjunction with the embodiments of this application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module can be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, or another storage medium mature in the art. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also execute the methods described in the foregoing method embodiments and realize the functions and beneficial effects of those methods, which are not repeated here.
Of course, in addition to software implementations, the electronic device of this application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to individual logical units and can also be hardware or logic devices.
The embodiments of this application also provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device including multiple application programs, cause the electronic device to perform the following operations: the data processing method described in the embodiments of Figure 1 and/or Figure 4.
The computer-readable storage medium includes a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.
Further, the embodiments of this application also provide a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, implement the following flow: the data processing method described in the embodiments of Figure 1 and/or Figure 4.
In the embodiments of this application, a data queue is generated by preprocessing the first data packet in the data stream, where, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position; at least one gap is inserted at a predetermined position of the first data packet in the data stream; overhead data is inserted into the at least one gap to generate a first format frame; and the first format frame is scrambled and mapped, which can reduce system power consumption and data latency.
In summary, the above are merely exemplary embodiments of this application and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the protection scope of this application.
The systems, devices, modules or units described in the above embodiments can be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
Each embodiment in this specification is described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are basically similar to the method embodiments, their description is relatively simple; for relevant parts, reference may be made to the corresponding description of the method embodiments.

Claims (12)

  1. A data processing method, comprising:
    preprocessing a first data packet in a data stream to generate a data queue, wherein, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position;
    inserting at least one gap at a predetermined position of the first data packet in the data stream;
    inserting overhead data into the at least one gap to generate a first format frame; and
    scrambling and mapping the first format frame.
  2. The method according to claim 1, wherein the preprocessing of the first data packet in the data stream comprises:
    removing invalid bytes between a header position and a tail position of the first data packet in the data stream;
    when the length of the first data packet meets a predetermined length condition, generating a data queue from the first data packet; and
    when the length of the first data packet does not meet the predetermined length condition, splicing the predetermined frame after the tail of the first data packet, and generating the data queue from the spliced first data packet.
  3. The method according to claim 2, wherein, before the predetermined frame is spliced after the tail of the first data packet, the method further comprises:
    writing the packet length of the first data packet into the data queue.
  4. The method according to claim 2, wherein the splicing of the predetermined frame after the tail of the first data packet comprises:
    when an integer number of predetermined frames is not spliced after the tail of the first data packet, temporarily storing the remaining portion of the integer number of predetermined frames; and
    splicing the remaining portion with the header of a second data packet.
  5. The method according to claim 4, wherein the splicing of the remaining portion with the header of the second data packet comprises:
    when the remaining portion of the integer number of predetermined frames is not spliced to the header of the second data packet, outputting an indication signal for the remaining portion in alignment with the header of the second data packet, the indication signal being used to indicate the number of remaining bytes of the predetermined frame.
  6. The method according to claim 5, wherein the inserting of at least one gap at a predetermined position of the first data packet in the data stream comprises:
    determining, according to the indication signal, the number of gaps to be inserted into the first data packet.
  7. The method according to claim 1, further comprising:
    when the size of the data stream in a register is greater than or equal to a predetermined value, performing rate-limiting processing on the output of the data stream.
  8. The method according to claim 7, wherein the rate-limiting processing on the output of the data stream comprises:
    setting a token rate of a rate-limit configuration to be greater than a user service rate;
    blocking the (N+1)th data packet in a FIFO; and
    sending at least one predetermined frame after the Nth data packet.
  9. The method according to claim 1, wherein the predetermined position comprises a packet header or packet tail position, and/or the predetermined frame is an idle frame.
  10. A data processing device, comprising:
    a preprocessing unit configured to preprocess a first data packet in a data stream to generate a data queue, wherein, when a predetermined frame exists in the data queue, the predetermined frame is located at a predetermined position;
    a gap insertion unit configured to insert at least one gap at a predetermined position of the first data packet in the data stream;
    an overhead data insertion unit configured to insert overhead data into the at least one gap to generate a first format frame; and
    a mapping unit configured to scramble and map the first format frame.
  11. An electronic device, comprising a processor and a memory, wherein the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the data processing method according to any one of claims 1 to 9.
  12. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the steps of the data processing method according to any one of claims 1 to 9.
PCT/CN2023/119095 2022-09-20 2023-09-15 数据处理方法、装置、电子设备及可读存储介质 WO2024061124A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211145806.0A CN117792564A (zh) 2022-09-20 2022-09-20 一种数据处理方法、装置电子设备及可读存储介质
CN202211145806.0 2022-09-20

Publications (1)

Publication Number Publication Date
WO2024061124A1 true WO2024061124A1 (zh) 2024-03-28

Family

ID=90387533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/119095 WO2024061124A1 (zh) 2022-09-20 2023-09-15 数据处理方法、装置、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN117792564A (zh)
WO (1) WO2024061124A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004180215A (ja) * 2002-11-29 2004-06-24 Nippon Telegr & Teleph Corp <Ntt> フレーム信号処理方法及び中継装置
CN101160853A (zh) * 2005-11-28 2008-04-09 华为技术有限公司 一种数据包传输方法及数据网络系统、网络节点
JP2009105723A (ja) * 2007-10-24 2009-05-14 Nippon Telegr & Teleph Corp <Ntt> ディジタル伝送装置およびディジタル伝送プログラム
CN106571890A (zh) * 2015-10-12 2017-04-19 深圳市中兴微电子技术有限公司 一种速率适配方法和装置
CN109698728A (zh) * 2017-10-20 2019-04-30 深圳市中兴微电子技术有限公司 Interlaken接口与FlexE IMP的对接方法、对接设备及存储介质
CN110557217A (zh) * 2018-06-01 2019-12-10 华为技术有限公司 一种业务数据的处理方法及装置
CN112118073A (zh) * 2019-06-19 2020-12-22 华为技术有限公司 一种数据处理方法、光传输设备及数字处理芯片
CN113542934A (zh) * 2020-04-21 2021-10-22 中兴通讯股份有限公司 业务处理方法、装置、网络设备和存储介质
CN114520937A (zh) * 2020-11-20 2022-05-20 华为技术有限公司 Pon中的数据传输方法、装置和系统

Also Published As

Publication number Publication date
CN117792564A (zh) 2024-03-29

Similar Documents

Publication Publication Date Title
KR102499335B1 (ko) 신경망 데이터 처리 장치, 방법 및 전자 장비
US20160127072A1 (en) Method and apparatus for increasing and decreasing variable optical channel bandwidth
CN108023829B (zh) 报文处理方法及装置、存储介质、电子设备
CN112948295B (zh) 一种基于axi4总线的fpga与ddr高速数据包传输系统及方法
CN111107017A (zh) 一种交换机报文拥塞的处理方法、设备以及存储介质
WO2018176934A1 (zh) 一种数据流控方法及装置
WO2022205255A1 (zh) 一种数据传输的方法及装置
US20030012223A1 (en) System and method for processing bandwidth allocation messages
CN107888337B (zh) 一种fpga、fpga处理信息的方法、加速装置
US12010045B2 (en) Packet processing device and packet processing method
CN108614792B (zh) 1394事务层数据包存储管理方法及电路
WO2019084789A1 (zh) 直接存储器访问控制器、数据读取方法和数据写入方法
CN110012367B (zh) 用于gpon olt的omci组帧装置及组帧方法
CN114422617B (zh) 一种报文处理方法、系统及计算机可读存储介质
WO2022174444A1 (zh) 一种数据流传输方法、装置及网络设备
US10255226B2 (en) Format agnostic data transfer method
WO2024061124A1 (zh) 数据处理方法、装置、电子设备及可读存储介质
US20190286589A1 (en) Apparatus and method to improve performance in dma transfer of data
US7379467B1 (en) Scheduling store-forwarding of back-to-back multi-channel packet fragments
US20050243866A1 (en) Packet transmission apparatus, packet transmission system and packet transmission method
US8862783B2 (en) Methods and system to offload data processing tasks
WO2020001487A1 (zh) 开销传输方法、装置、设备及计算机可读存储介质
US7496109B1 (en) Method of maximizing bandwidth efficiency in a protocol processor
WO2021217520A1 (zh) 一种数据传输方法及装置
WO2024199298A1 (zh) 映射方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23867413

Country of ref document: EP

Kind code of ref document: A1