CN115941792B - Method and device for processing data blocks of flexible Ethernet and storage medium - Google Patents


Info

Publication number
CN115941792B
CN115941792B (granted publication of application CN202211518862.4A)
Authority
CN
China
Prior art keywords: data, channel, channels, matched, beat
Prior art date
Legal status
Active
Application number
CN202211518862.4A
Other languages
Chinese (zh)
Other versions
CN115941792A (en
Inventor
Name withheld at the inventor's request
Current Assignee
Suzhou Yige Technology Co ltd
Original Assignee
Suzhou Yige Technology Co ltd
Application filed by Suzhou Yige Technology Co., Ltd.
Priority to CN202211518862.4A
Priority to PCT/CN2022/143203
Publication of CN115941792A
Application granted
Publication of CN115941792B
Status: Active

Classifications

    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L47/70: Admission control; Resource allocation
    • H04L47/762: Dynamic resource allocation triggered by the network, e.g. in-call renegotiation in response to changing network conditions
    • H04L47/783: Distributed allocation of resources, e.g. bandwidth brokers
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Communication Control (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a method, an apparatus, and a storage medium for processing data blocks of a flexible Ethernet. The method comprises: receiving a data stream of the flexible Ethernet transmitted concurrently over multiple channels; when a data block needs to be inserted, inserting the data block into a target channel among the multiple channels and transmitting it in the current beat; buffering the data unit matched to the target channel and transmitting it in the next beat; and transmitting matched target data units in the current beat through the channels other than the target channel, where for each such channel the target data unit is the data unit with the earliest section number among the data unit matched in the buffer and the data unit matched in the current data section. The method solves the problem in the related art that blank blocks are inserted into the data stream to reserve positions for inserted data blocks, and that reserving overhead-block positions with blank blocks easily wastes the effective bandwidth of the data stream in multi-channel parallel-transmission scenarios.

Description

Method and device for processing data blocks of flexible Ethernet and storage medium
Technical Field
The present invention relates to the field of data communications, and in particular, to a method and apparatus for processing a data block of a flexible ethernet, and a storage medium.
Background
In the flexible Ethernet (FlexE) architecture, configuration information must be conveyed by inserting overhead blocks, so the data bandwidth after overhead insertion is larger than the effective data bandwidth. A 66-bit encoded stream entering the FlexE Shim (a logic layer between the MAC media access control layer and the physical coding sublayer of the PHY; the core of the FlexE architecture is realized through a slot-allocation, i.e. calendar, distribution mechanism) therefore requires rate adaptation: positions are reserved for the lower-layer logic to insert overhead blocks, and IDLE blocks are inserted or deleted accordingly. An IDLE block is a blank block specified in the 802.3 Ethernet protocol, in which the lowest bit of the synchronization header is 1 and all other bits are 0; its function is to adjust the rate, by inserting or deleting IDLE blocks, so as to coordinate the rates of the receiving end and the transmitting end.
Since the bandwidth of FlexE exceeds 100G, serial transmission would require a clock frequency above 1 GHz, which cannot be achieved at the physical level; FlexE therefore generally transmits data in parallel over multiple channels. At the same time, flexible Ethernet requires overhead blocks to be inserted into the data stream to convey configuration information, and because overhead blocks do not appear on all channels simultaneously, effective data bandwidth is wasted.
For the problem in the related art that blank blocks are inserted into the data stream to reserve positions for inserted data blocks, and that reserving overhead-block positions with blank blocks easily wastes the effective bandwidth of the data stream in multi-channel parallel-transmission scenarios, no effective solution has yet been proposed.
Disclosure of Invention
The main objective of the present application is to provide a method and an apparatus for processing data blocks of a flexible Ethernet, and a storage medium, so as to solve the problem in the related art that blank blocks are inserted into a data stream to reserve positions for inserted data blocks, and that in a multi-channel parallel-transmission scenario reserving overhead-block positions with blank blocks easily wastes the effective bandwidth of the data stream.
To achieve the above object, according to one aspect of the present application, a method for processing data blocks of a flexible Ethernet is provided, comprising: receiving a data stream transmitted concurrently over multiple channels of the flexible Ethernet, where the data stream comprises a plurality of data frames, each data frame is divided into a plurality of data sections, and each data section comprises a plurality of data units matched to the multiple channels respectively; when a data block needs to be inserted, inserting the data block into a target channel among the multiple channels and transmitting it in the current beat, where the target channel is determined by traversing the multiple channels in a predetermined order, and the data block carries configuration information of the flexible Ethernet; buffering the data unit matched to the target channel and transmitting it in the next beat, where the multiple channels transmit the inserted data blocks and/or the data units of the data sections beat by beat; and transmitting matched target data units in the current beat through the channels other than the target channel, where for each such channel the target data unit is the data unit with the earliest section number among the data unit matched in the buffer and the data unit matched in the current data section.
Optionally, transmitting the matched target data units in the current beat through the channels other than the target channel comprises: reading the untransmitted data units of the last buffered data section; distributing the buffered data units to their matched channels among the other channels for transmission in the current beat; and, on those of the other channels to which no buffered data unit is allocated, transmitting the matched data units of the currently received data section in the current beat.
Optionally, the method further comprises: when every channel of the multiple channels has completed one transmission of a data block, concurrently sending a backpressure signal for one beat over the multiple channels, where the backpressure signal reserves space for the multi-channel transmission of the next beat of data.
Optionally, the predetermined order sorts the multiple channels by channel number, either ascending or descending; a first channel among the other channels transmits the data unit allocated from the buffer, and a second channel among the other channels transmits the data unit of the received data section, where a first channel is a channel that has already transmitted a data block in the predetermined order and a second channel is a channel that has not.
Optionally, receiving the data stream transmitted concurrently over the multiple channels of the flexible Ethernet comprises: receiving a bit-coded data stream transmitted from a logic layer of the flexible Ethernet to a physical layer, where the bit-coded data stream is transmitted section by section.
Optionally, the method further comprises: when no data block needs to be inserted, reading the data units of the previously buffered data section from the buffer; distributing the buffered data units to their matched channels among the multiple channels for transmission in the current beat; and, on channels not matched to any buffered data unit, transmitting the matched data units of the received data section in the current beat.
Optionally, the method further comprises: when no data unit is held in the buffer, determining that the multiple channels are not transmitting data blocks; and directly acquiring the currently received data section of the data stream and transmitting it over the multiple channels in the current beat.
To achieve the above object, according to another aspect of the present application, an apparatus for processing data blocks of a flexible Ethernet is provided, comprising: a receiving module, configured to receive a data frame of a data stream transmitted in parallel over multiple channels of the flexible Ethernet, where the data frame is divided into a plurality of data sections and each data section comprises a plurality of data units matched to the multiple channels respectively; a first transmission module, configured to insert, when a data block needs to be inserted, the data block into a target channel among the multiple channels and transmit it in the current beat, where the target channel is determined by traversing the multiple channels in a predetermined order and the data block carries configuration information of the flexible Ethernet; a buffer module, configured to buffer the data unit matched to the target channel and transmit it in the next beat, where the multiple channels transmit the inserted data blocks and/or the data units of the data sections beat by beat; and a second transmission module, configured to transmit matched target data units in the current beat through the other channels among the multiple channels, where for each such channel the target data unit is the data unit with the earliest section number among the data unit matched in the buffer and the data unit matched in the current data section.
To achieve the above object, according to another aspect of the present application, there is provided a computer-readable storage medium for storing a program, wherein the program performs the method for processing a data block of a flexible ethernet according to any one of the above.
To achieve the above object, according to another aspect of the present application, there is provided an electronic device including one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for processing a data block of a flexible ethernet network as set forth in any one of the above.
After a data section of a data frame in a data stream to be transmitted in parallel over multiple channels is received, it is determined whether a data block needs to be inserted. If so, the data block is inserted into the matched target channel and transmitted in the current beat, while the data unit the target channel should have transmitted is buffered and transmitted in the next beat. The channels other than the target channel transmit their matched target data units: a channel whose matched data unit of the previous beat is held in the buffer outputs that unit in the current beat, whereas a channel with no matched data unit in the buffer takes its matched data unit directly from the received data section and transmits it in the current beat. In this way, inserting a data block consumes only the resource of a single data unit, and the data sections to be transmitted are staggered accordingly, which saves the effective data bandwidth of the data stream.
The method thus achieves data-block insertion at the cost of one data unit. Compared with the related art, which spends a whole beat transmitting an inserted data block, this reduces the effective data resources occupied by data-block insertion, and thereby solves the problem that inserting blank blocks into the data stream to reserve positions for data blocks easily wastes the effective bandwidth of the data stream in multi-channel parallel-transmission scenarios.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a method for processing a data block of a flexible Ethernet according to an embodiment of the application;
fig. 2 is a schematic diagram of a related art data block insertion data flow provided according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a data block insertion data stream provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of a device for processing data blocks of a flexible Ethernet provided according to an embodiment of the application;
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In order to make the present application solution better understood by those skilled in the art, the following description will be made in detail and with reference to the accompanying drawings in the embodiments of the present application, it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The present invention is described below in connection with preferred implementation steps, and fig. 1 is a flowchart of a method for processing a data block of a flexible ethernet according to an embodiment of the present application, as shown in fig. 1, where the method includes the following steps:
Step S101: receiving a data stream transmitted concurrently over multiple channels of a flexible Ethernet, where the data stream comprises a plurality of data frames, each data frame is divided into a plurality of data sections, and each data section comprises a plurality of data units matched to the multiple channels respectively;
Step S102: when a data block needs to be inserted, inserting the data block into a target channel among the multiple channels and transmitting it in the current beat, where the target channel is determined by traversing the multiple channels in a predetermined order and the data block carries configuration information of the flexible Ethernet;
Step S103: buffering the data unit matched to the target channel and transmitting it in the next beat, where the multiple channels transmit the inserted data blocks and/or the data units of the data sections beat by beat;
Step S104: transmitting matched target data units in the current beat through the channels other than the target channel, where for each such channel the target data unit is the data unit with the earliest section number among the data unit matched in the buffer and the data unit matched in the current data section.
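The per-beat behavior of steps S102 to S104 can be sketched in a few lines of Python. This is an illustrative model, not the patented hardware: `process_beat`, `lag_buffer`, and `blocks_sent` are hypothetical names, channels are numbered 0..N-1, and the predetermined order is assumed to be ascending channel number.

```python
NUM_CHANNELS = 4  # illustrative channel count

def process_beat(section, insert_block, blocks_sent, lag_buffer):
    """One beat of multi-channel output.

    section[ch]  - data unit matched to channel ch in the current beat
    insert_block - overhead data block to insert this beat, or None
    blocks_sent  - how many channels already carried a block this cycle
    lag_buffer   - per-channel register holding the unit deferred to this beat
    Returns (units sent on each channel, updated blocks_sent)."""
    out = [None] * NUM_CHANNELS
    for ch in range(NUM_CHANNELS):
        if insert_block is not None and ch == blocks_sent:
            out[ch] = insert_block        # target channel carries the block,
            lag_buffer[ch] = section[ch]  # its own unit waits one beat
        elif ch < blocks_sent:
            out[ch] = lag_buffer[ch]      # "first channel": one beat behind
            lag_buffer[ch] = section[ch]
        else:
            out[ch] = section[ch]         # "second channel": still in step
    return out, blocks_sent + (insert_block is not None)
```

Running two insertion beats shows the staggering: after inserting blocks on channels 0 and 1, those channels trail the current data section by one beat while the remaining channels stay in step.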
In the steps above, after a data section of a data frame in a data stream to be transmitted in parallel over multiple channels is received, it is determined whether a data block needs to be inserted; if so, the data block is inserted into the matched target channel and transmitted in the current beat, and the data unit the target channel should have transmitted is buffered for transmission in the next beat.
The channels other than the target channel transmit their matched target data units: a channel whose data unit of the previous beat is held in the buffer outputs that unit in the current beat, while a channel with no matched data unit in the buffer takes its matched data unit directly from the received data section and transmits it in the current beat. The purpose is that inserting a data block consumes only the resource of a single data unit, with the data sections staggered accordingly, thereby saving the effective data bandwidth of the data stream.
The method thus inserts a data block at the cost of one data unit. Compared with the related art, which spends a whole beat on an inserted data block, it reduces the effective data resources occupied by data-block insertion, and solves the problem that reserving positions for data blocks with blank blocks easily wastes the effective bandwidth in multi-channel parallel transmission.
The steps above may be executed by a flexible Ethernet device, which may be implemented on a processor, computing unit, or controller configured to perform the data-processing operations of the steps, for example steps S102 to S104.
The flexible Ethernet sits between the MAC layer and the PHY layer and converts signal data between them. On the path from the MAC layer to the PHY layer, the data stream is converted into physical pulse signals, also referred to as digital signals; the data units are first delivered to the physical layer and transmitted over its virtual channels.
To reduce the required clock frequency and improve transmission efficiency, the data stream is transmitted concurrently over multiple channels. Specifically, a data frame of the data stream is divided into a plurality of data sections: in fig. 2, columns of different gray levels represent different data sections, and sections 1 to N form a data frame. Each data section comprises a plurality of data units; in fig. 2, a column of one gray level corresponds to one square per channel. The data units belonging to a section of the data stream constitute the effective data.
Configuration information of the data stream is transmitted at the same time and may include the clock frequency, data size, ordering of the data sections, and so on. When data sections are transmitted concurrently over multiple channels, the configuration information must be inserted into the data sections in the form of data blocks; such a configuration-information data block may be an overhead block.
Transmitting an individual data block in a way that occupies the resources of an entire beat wastes effective bandwidth. To avoid this, in this embodiment a data block that needs to be inserted is transmitted through a target channel among the multiple channels, where the target channel is determined by traversing the multiple channels in a predetermined order and the data block carries configuration information of the flexible Ethernet. That is, within one data beat of the multiple channels, the inserted data block is sent through a single target channel only, reducing the effective bandwidth occupied by data-block insertion and transmission.
Since the data sections in the data stream are ordered, a target channel transmitting a data block in the current beat cannot simultaneously transmit the data unit of the data section it was supposed to carry. This could disorder the transmission of the data sections and impose considerable processing pressure on the receiving end, which would have to rearrange them.
For this purpose, this embodiment groups the insertion of a number of data blocks equal to the number of channels into one transmission cycle, in which each channel transmits a data block exactly once, so that the lag of the data units accumulates to exactly one whole data section. Within this cycle, the multiple channels are polled in a fixed order.
For example, when the first data block is transmitted, it is carried in some beat by the first channel in the order; when the second data block is transmitted, it is carried by the second channel; and so on. Once the last data block has been transmitted, every channel has carried a data block once, and the delay accumulated across the multi-channel transmission amounts to exactly one complete data section.
The data sections of a transmission cycle are thus processed in a regular way, so the receiving end can rearrange them by applying the inverse rule, improving its rearrangement efficiency.
Specifically, the target channel that transmits a data block delays the data unit it should have transmitted by one beat: the data unit matched to the target channel is buffered and transmitted in the next beat, and from then on each data unit the channel should transmit is likewise deferred to the following beat. As a result, every data unit the target channel sends after the insertion is delayed by exactly one beat.
The buffering can be done with a register: a single register per channel suffices, with no need for a large storage array, which saves area while preserving speed.
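The single-register buffering described above can be pictured as a one-slot swap per channel; the class name `BeatRegister` is illustrative, not from the patent.

```python
class BeatRegister:
    """One register per channel: holds exactly the single data unit
    that was displaced from the current beat to the next."""

    def __init__(self):
        self._unit = None  # empty until the channel first falls behind

    def swap(self, unit):
        """Store `unit` for the next beat and return the unit held so far."""
        held, self._unit = self._unit, unit
        return held
```

Because each channel lags by at most one beat, one such slot per channel is sufficient, which is why a large storage unit is unnecessary.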
The other channels are further divided into first channels and second channels according to whether they have already transmitted a data block: a first channel is one that has transmitted a data block in the predetermined order, and a second channel is one that has not.
Therefore only the transmitted data blocks need to be counted: a channel whose position in the predetermined order is at or below the current block count is a first channel, and a channel whose position is above the count is a second channel.
A first channel always transmits the matched data unit of the previous beat obtained from the buffer, while a second channel always transmits the matched data unit of the currently received data section. In this way the data blocks are inserted in order according to the rule while the data sections continue to be transmitted.
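The classification above reduces to a single comparison against the running block count. A minimal sketch, assuming 0-based channel numbers traversed in ascending order (`channel_kind` is an illustrative name):

```python
def channel_kind(ch, blocks_sent):
    """Classify channel `ch` (numbered 0..N-1, polled in ascending order)
    given how many data blocks were already inserted this cycle.
    A channel below the count has carried a block and lags one beat
    ("first"); the rest are still in step ("second")."""
    return "first" if ch < blocks_sent else "second"
```

Only the counter needs to be maintained in hardware; no per-channel state is required for the classification itself.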
As shown in fig. 3, this approach greatly saves the effective data bandwidth of the multiple channels. Compared with the related art, which inserts a data block occupying a whole beat, it reduces the effective data resources consumed by data-block insertion.
It thereby solves the problem that inserting blank blocks into the data stream to reserve positions for data blocks, i.e. reserving overhead blocks with blank blocks, easily wastes the effective bandwidth of the data stream in multi-channel parallel transmission.
Optionally, transmitting the matched target data units in the current beat through the channels other than the target channel comprises: reading the untransmitted data units of the last buffered data section; distributing the buffered data units to their matched channels among the other channels for transmission in the current beat; and, on those of the other channels to which no buffered data unit is allocated, transmitting the matched data units of the currently received data section in the current beat.
When the other channels transmit their matched target data units, the data units buffered in the previous beat can be read directly from the buffer; since each beat's data units have a fixed matching relationship with the multiple channels, the buffered data units are distributed directly to their corresponding channels.
A channel that receives a distributed data unit is a first channel and directly sends the unit allocated from the buffer; a channel that receives none is a second channel and directly transmits its matched data unit of the received data section.
As for the received data units of the data section that are not transmitted in the current beat, they are stored in the buffer for transmission in the next beat.
Optionally, the method further comprises: when every channel of the multiple channels has completed one transmission of a data block, concurrently sending a backpressure signal for one beat over the multiple channels, where the backpressure signal reserves space for the multi-channel transmission of the next beat of data.
When every channel has completed one transmission of a data block, the transmission cycle is finished, and one beat of backpressure signal sent concurrently over the multiple channels separates it from the next cycle, which also makes it convenient for the receiving end to restore the data sections.
Specifically, the backpressure signal lets the receiving end separate the data of one transmission cycle from the next: upon receiving it, the receiver can process the successfully received data of the completed cycle, including extracting the data blocks and rearranging and restoring the data sections.
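A hypothetical end-to-end sketch of why the backpressure beat lets the receiver restore the sections: with N channels, a full cycle is N insertion beats plus one flush beat, and a fixed index rule inverts the staggering. `transmit_cycle` and `realign` are illustrative names; the sender model assumes one block per beat and ascending channel order.

```python
N = 4  # illustrative channel count

def transmit_cycle(sections, blocks):
    """Model of a full sender cycle: at beat t channel t carries blocks[t],
    channels below t lag one beat, and beat N (the backpressure beat)
    flushes the final lagged units."""
    beats = []
    for t in range(N):
        beats.append([blocks[t] if ch == t
                      else sections[t - 1][ch] if ch < t
                      else sections[t][ch]
                      for ch in range(N)])
    beats.append(list(sections[N - 1]))  # flush beat
    return beats

def realign(received):
    """Receiver-side inverse: extract the blocks from the diagonal and
    restore each section s from beats s and s+1 with a fixed rule."""
    blocks = [received[t][t] for t in range(N)]
    sections = [[received[s][ch] if ch > s else received[s + 1][ch]
                 for ch in range(N)]
                for s in range(N)]
    return blocks, sections
```

A round trip through both functions recovers the original N sections and N blocks exactly, which is the rearrangement the backpressure signal delimits.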
Optionally, the predetermined order sorts the channel numbers of the multiple channels from small to large or from large to small; a first channel among the other channels transmits the data units distributed from the buffer, and a second channel among the other channels transmits the data units of the received data section; the first channel is a channel that has already transmitted a data block in the predetermined sequence, and the second channel is a channel that has not yet transmitted a data block in the predetermined sequence.
The predetermined sequence is generated by traversing the multiple channels according to a rule. The rule may order the channels by their channel identifiers, where the identifiers may be assigned to the channels explicitly or may reuse existing identifier parameters of the channels.
For example, this embodiment uses the channel numbers of the multiple channels as the sorting basis. As shown in fig. 2 and fig. 3, the channels are numbered 0, 1, 2, 3, … N, and may be traversed from small to large or from large to small; this is currently the simplest and most convenient sequence. Other traversal modes are not excluded, for example odd numbers from small to large followed by even numbers from small to large, and the user may even input a sequence as required.
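The traversal orders mentioned above can be sketched as small helper functions. This is a minimal illustration assuming channels are numbered 0 to N-1; the function names are hypothetical, not from the patent.

```python
def ascending(n):
    """Traverse channel numbers 0..n-1 from small to large."""
    return list(range(n))

def descending(n):
    """Traverse channel numbers from large to small."""
    return list(range(n - 1, -1, -1))

def odds_then_evens(n):
    """Odd channel numbers from small to large, then even ones."""
    return [i for i in range(n) if i % 2 == 1] + [i for i in range(n) if i % 2 == 0]
```

For four channels these yield `[0, 1, 2, 3]`, `[3, 2, 1, 0]`, and `[1, 3, 0, 2]` respectively; any such list could equally be supplied by the user as a custom sequence.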
Optionally, receiving the data stream concurrently transmitted over the multiple channels of the flexible Ethernet includes: receiving the bit-code data stream transmitted from the logical layer of the flexible Ethernet to the physical layer, wherein the bit-code data stream is transmitted section by section.
As described above, the flexible Ethernet sits between the MAC layer and the PHY layer and performs signal data conversion between them. The flexible Ethernet may be FlexE, and the logical layer may be the FlexE Shim layer. In this processing method, the flexible Ethernet inserts the data blocks into the bit-code data stream sent by the logical layer to the physical layer.
Optionally, the method further comprises: when no data block needs to be inserted, reading the data units of the data section buffered in the previous beat from the buffer; distributing the data units in the buffer to the matched channels among the multiple channels for transmission in the current beat; and, for channels among the multiple channels that match no data unit in the buffer, transmitting the matched data units of the received data section in the current beat.
Within the same transmission round, every beat of data after a data block has been transmitted is delayed by one beat on the data channel that carried the data block. Therefore, even in data-section transmission scenarios that do not require insertion of a data block, the channels among the multiple channels still need to be distinguished.
The distinguishing principle is the same as above: the first channels among the multiple channels transmit the data units distributed from the buffer, and the second channels transmit the data units of the received data section; the first channels are channels that have already transmitted a data block in the predetermined sequence, and the second channels are channels that have not.
In implementation, the data units buffered in the previous beat may also be read directly from the buffer and distributed directly to the corresponding data channels. The data channels that were allocated buffered units are the first channels and directly send the units distributed from the buffer; the data channels that were not allocated units are the second channels and directly transmit the matched data units of the received data section.
Optionally, the method further comprises: when no data unit is buffered in the buffer, determining that the multiple channels have transmitted no data block; directly acquiring the currently received data section of the data stream through the multiple channels and transmitting it in the current beat through the multiple channels.
When no data unit is buffered, the transmission round has, from its start up to the current beat, sent no data block on any data channel. The multiple channels therefore receive the data sections from the sending end of the data stream in the normal processing mode, match the currently received data section to the multiple channels, and transmit it in the current beat.
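The per-channel choice in a beat with no insertion can be sketched as follows. This is an assumed model, not the patent's implementation: `dispatch_beat` and the buffer representation (one optional unit per channel, `None` when empty) are illustrative.

```python
def dispatch_beat(channel_buffers, received_units):
    """One beat with no overhead insertion. For each channel, output the
    unit buffered in the previous beat if present (and buffer the fresh
    unit, keeping the one-beat stagger); otherwise pass the fresh unit
    straight through."""
    outputs = []
    for i, buffered in enumerate(channel_buffers):
        if buffered is not None:
            outputs.append(buffered)                 # first channel: drain buffer
            channel_buffers[i] = received_units[i]   # fresh unit waits one beat
        else:
            outputs.append(received_units[i])        # second channel: pass-through
    return outputs
```

When every entry of `channel_buffers` is `None` (no data block transmitted yet in this round), the function reduces to the normal processing mode: the currently received units are output unchanged.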
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as on a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps may be performed in an order different from that illustrated herein.
It should be noted that this application also provides an alternative embodiment, described in detail below.
This embodiment provides a method for inserting an overhead block in a flexible Ethernet under multi-channel transmission, so as to solve the technical problem described above.
According to the OIF FlexE protocol (OIF-FLEXE-01), overhead blocks must be inserted into the data stream at specified locations. Since overhead blocks only carry FlexE configuration information and are not valid data required by the user side, inserting them into a multi-channel valid data stream with the related-art approach either wastes valid bandwidth or changes the data transmission order.
Fig. 2 is a schematic diagram of inserting data blocks into a data stream in the related art according to an embodiment of the present application. As shown in fig. 2, the upper digits indicate the order in which data are transmitted, the lower digits 1 to N indicate the numbers of the inserted overhead blocks, the gray blocks indicate the valid data to be transmitted, and blocks of the same gray level belong to the same data section.
If no overhead block needed to be inserted, the first beat would transmit the first column of data on the right, the second beat the second column, and so on. However, because overhead configuration information is inserted, the second beat can only transmit overhead blocks. One beat could carry N data units, yet in fig. 2 only one channel transmits an overhead block while the other channels transmit no data, so one beat of signal must be back-pressured every time an overhead block is sent (i.e., the backpressure signal makes the user-side MAC layer stop sending for one beat), which seriously wastes bandwidth.
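To make the bandwidth cost concrete, here is a small assumed model (one overhead block per channel per round; function names are illustrative): the related-art scheme back-pressures one full beat per overhead block, while the buffered scheme described next back-pressures a single beat per round.

```python
def backpressured_beats_naive(n_channels: int) -> int:
    # related art: one back-pressured beat for every overhead block,
    # and each of the N channels inserts one block per round
    return n_channels

def backpressured_beats_buffered(n_channels: int) -> int:
    # buffered scheme: a single back-pressured beat per round,
    # issued once all N channels have inserted their overhead blocks
    return 1
```

For N = 8 channels this means 8 stalled beats of user data per round in the related art versus 1 in the buffered scheme, an 8x reduction in the back-pressured bandwidth under this model.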
The solution provided by this embodiment is as follows: a register buffers the input data by one beat. When a channel does not need to send an overhead block, the data input to this module (generally from the FlexE Shim layer) is output directly to the next module (generally the scrambling module). When the channel needs to send an overhead block, the overhead block is sent immediately, the input data of that beat is written into the register, and on the next beat the data in the register is output to the next module.
Fig. 3 is a schematic diagram of a data stream with inserted data blocks according to an embodiment of the present application. As shown in fig. 3, when Channel0 does not need to send an overhead block, the first beat directly outputs the first gray column of the input (counting from the right). On the second beat, Channel0 needs to send overhead block 1: the input data of that beat is buffered in the register, overhead block 1 is output directly, and the registered data is output on the next beat. That is, the second beat should output the data of the second gray column from the right; since Channel1 outputs no overhead block, the second beat outputs that column's data, and the third beat outputs the lighter-gray data of the third column from the right.
When all channels have inserted their overhead blocks, that is, when the data output by every channel is being taken from its register, a backpressure signal is sent and the user-side data is back-pressured for one beat to adjust the rate. In this way, inserting overhead blocks on N channels requires back-pressuring only one beat of user-side MAC-layer data, greatly reducing bandwidth waste.
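The per-channel register behavior described above can be sketched in software as follows. This is a minimal simulation under stated assumptions: the `Channel` class, the `"OH<n>"` encoding of an overhead block, and the use of `beat(None, False)` to model the back-pressured drain beat are all illustrative, not the hardware design itself.

```python
class Channel:
    """One transmit channel with a one-beat register buffer."""

    def __init__(self, number):
        self.number = number
        self.register = None        # holds at most one beat of buffered data
        self.sent_overhead = False  # set once this channel's overhead block is out

    def beat(self, data_unit, insert_overhead):
        """Process one beat; return the unit actually put on the wire."""
        if insert_overhead:
            # Send the overhead block now; park the incoming unit in the register.
            self.register = data_unit
            self.sent_overhead = True
            return f"OH{self.number}"
        if self.register is not None:
            # Drain the register first to preserve data order; the fresh
            # unit (None on a back-pressured beat) takes its place.
            out, self.register = self.register, data_unit
            return out
        return data_unit  # pass-through: no overhead pending, register empty
```

A single channel then behaves as in fig. 3: `beat("a0", True)` emits `OH0` while buffering `a0`; the following beats emit the data shifted one beat late; a back-pressured beat with no fresh input (`beat(None, False)`) drains the register, after which the channel returns to direct pass-through.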
This embodiment uses registers to buffer data and inserts overhead blocks into the data stream. The overhead-block insertion signal decides whether the output data is read from the register or taken directly from the input. The backpressure signal makes the user-side MAC layer pause sending data, so inserting overhead blocks does not disturb the normal transmission of the data stream.
A data stream concurrently transmitted over the multiple channels of the flexible Ethernet is received, where the data stream comprises a plurality of data frames and each data frame comprises a plurality of data units matched with the multiple channels respectively. One register is added per channel to hold one beat of data; the registered data is used to adjust the data order after the overhead blocks are inserted so that the data does not become disordered. A backpressure signal, which back-pressures one beat of the customer's data stream, reserves one beat of space for all channels at once, so there is no need to reserve insertion space separately for each channel. A per-channel signal, indicating whether that channel has inserted an overhead block and whether the reserved space has been used, controls each channel's read position: the register or the direct input.
This solves the problem that inserting overhead-block configuration information during multi-channel transmission easily wastes the valid bandwidth of the data stream. Because each channel inserts its overhead block independently, in the related art, when one channel transmits an overhead block, either the other channels transmit blank blocks without data rearrangement, wasting bandwidth, or they transmit data normally, disordering the data sequence so that the receiving end cannot extract the valid data correctly.
It also avoids the problems of the blank-block approach: reserving space for later overhead-block insertion by deleting blank blocks requires lossless rate adjustment of the data stream, can fail when there are not enough blank blocks to delete, and incurs the resource cost of reordering when the overhead blocks are later removed.
Overhead blocks can thus be inserted efficiently, without wasting valid data-transmission bandwidth and without a complex data-reordering design. The operation is completed with a single register per channel; no large storage unit is needed for holding data, saving area while maintaining speed. The backpressure signal pauses the user-side MAC layer, so no IDLE blocks in the data stream need to be deleted, avoiding transmission errors caused by insufficient IDLE in the data stream.
Fig. 4 is a schematic diagram of a device for processing data blocks of a flexible Ethernet according to an embodiment of the present application. As shown in fig. 4, the embodiment of the present application further provides such a device, and it should be noted that the device may be used to execute the method for processing data blocks of a flexible Ethernet provided by the embodiments of the present application. The device provided in an embodiment of the present application is described below. The device comprises: a receiving module 41, a first transmission module 42, a buffer module 43, and a second transmission module 44, described in detail below.
A receiving module 41, configured to receive a data section of a data stream concurrently transmitted over the multiple channels of the flexible Ethernet, where the data section includes a plurality of data units matched with the multiple channels respectively; a first transmission module 42, connected to the receiving module 41, configured to insert a data block into a target channel of the multiple channels and transmit it in the current beat when a data block needs to be inserted, where the target channel is determined by traversing the multiple channels in a predetermined sequence, and the data block is used to identify configuration information of the flexible Ethernet; a buffer module 43, connected to the first transmission module 42, configured to buffer the data units matched with the target channel for transmission in the next beat, where the multiple channels transmit the inserted data blocks and/or the data units of the data sections beat by beat; a second transmission module 44, connected to the buffer module 43, configured to transmit the matched target data unit in the current beat through the other channels of the multiple channels, where, for each of the other channels, the target data unit is the data unit with the earliest section number among the matched data unit in the buffer and the matched data unit in the data section.
With the above device, after a data section of a data frame in the data stream to be transmitted concurrently over multiple channels is received, it is judged whether a data block needs to be inserted. If so, the data block is inserted into the matched target channel and transmitted in the current beat, while the data unit that the target channel should have transmitted is buffered for transmission in the next beat.
The matched target data units are transmitted through the channels other than the target channel: a channel whose buffer holds a matched data unit from the previous beat outputs that unit in the current beat, while a channel with no matched unit in the buffer takes the matched data unit directly from the received data section and transmits it in the current beat. In this way, inserting a data block consumes only one data unit's worth of resources, and the data section to be transmitted is staggered accordingly, saving the valid data bandwidth of the data stream.
Inserting a data block thus costs the resources of a single data unit. Compared with the related art, the inserted data block delays transmission by only one beat, reducing the valid data resources occupied by data-block insertion and solving the problem in the related art that, in multi-channel concurrent transmission scenarios, reserving positions for data blocks by inserting blank blocks into the data stream wastes the valid bandwidth of the data stream.
The processing device of the data block of the flexible ethernet comprises a processor and a memory, wherein the receiving module 41, the first transmission module 42, the buffer module 43, the second transmission module 44 and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes one or more kernels, and a kernel fetches the corresponding program unit from the memory. By adjusting kernel parameters, the problem that inserting overhead-block configuration information during multi-channel transmission wastes the valid bandwidth of the data stream can be addressed.
The memory may include volatile memory, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM), among other forms in computer-readable media; the memory includes at least one memory chip.
An embodiment of the invention provides a computer-readable storage medium on which a program is stored, where the program, when executed by a processor, implements the above method for processing data blocks of a flexible Ethernet.
An embodiment of the invention provides a processor configured to run a program, where the program, when running, executes the above method for processing data blocks of a flexible Ethernet.
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the embodiment provides an electronic device 50 that includes a processor, a memory, and a program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the above method for processing data blocks of a flexible Ethernet are implemented.
The device herein may be a server, PC, PAD, cell phone, etc.
The present application also provides a computer program product which, when executed on a device for processing data blocks of a flexible Ethernet, is adapted to execute a program initialized with the steps of any of the above methods.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable flexible ethernet data block processing device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable flexible ethernet data block processing device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable flexible ethernet data block processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a processing device of a computer or other programmable flexible ethernet data block, such that a series of operational steps are performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (8)

1. A method for processing a data block of a flexible ethernet network, the method comprising:
receiving a data stream of multichannel concurrent transmission of a flexible Ethernet, wherein the data stream comprises a plurality of data frames, the data frames are divided into a plurality of data sections, and each data section comprises a plurality of data units which are matched with the multichannel respectively;
under the condition that a data block is required to be inserted, the data block is inserted into a target channel in the multiple channels and is transmitted in the current beat, wherein the target channel is determined by traversing the multiple channels according to a preset sequence, and the data block is used for identifying configuration information of the flexible Ethernet;
caching the data units matched with the target channel, and transmitting the data units in the next beat, wherein the multichannel is used for transmitting the data units of the inserted data blocks and/or data sections beat by beat;
transmitting the matched target data unit in the current beat through channels of the multiple channels other than the target channel, wherein, for each of the other channels, the target data unit is the data unit with the earliest corresponding section number among the matched data unit in the buffer and the matched data unit in the data section;
reading, from the buffer, the data units of the data section buffered in the previous beat when the data block does not need to be inserted;
distributing the data units in the cache to each matched channel in the multiple channels for transmission in the current beat;
transmitting the data units matched in the received data section in the current beat under the condition that the channels in the multi-channel are not matched with the data units in the cache;
the transmitting the matched target data unit in the current beat through other channels except the target channel in the multiple channels comprises the following steps:
reading the data units of the data section buffered in the previous beat that have not been transmitted;
distributing the data units in the cache to all the channels matched with the other channels for transmission in the current beat;
and transmitting the matched data units in the currently received data section in the current beat in channels which are not allocated to the cached data units in the other channels.
2. The method according to claim 1, wherein the method further comprises:
sending a backpressure signal once concurrently through the multiple channels when each channel of the multiple channels has completed one transmission of the data block;
wherein the backpressure signal is used to reserve space for the multi-channel transmission of the next beat of data.
3. The method of claim 2, wherein the predetermined order is ordered from small to large or from large to small by the size of the channel number of the multi-channel;
a first channel in the other channels transmits the data units distributed in the cache, and a second channel in the other channels transmits the received data units of the data section;
wherein the first channel is a channel in the predetermined sequence, in which the data block has been transmitted, and the second channel is a channel in the predetermined sequence, in which the data block has not been transmitted.
4. The method of claim 2, wherein receiving the data stream of the multi-channel concurrent transmission of the flexible ethernet comprises:
and receiving a bit code data stream transmitted from a logic layer of the flexible Ethernet to a physical layer, wherein the bit code data stream is transmitted by sections.
5. The method according to claim 1, wherein the method further comprises:
under the condition that no data unit is cached in the cache, determining that the multiple channels do not transmit the data blocks;
and directly acquiring the currently received data section of the data stream through the multiple channels, and transmitting it in the current beat through the multiple channels.
6. A device for processing data blocks of a flexible ethernet network, comprising:
the receiving module is used for receiving a data frame of a data stream which is transmitted by multiple channels of the flexible Ethernet in parallel, wherein the data frame is divided into a plurality of data sections, and each data section comprises a plurality of data units which are matched with the multiple channels respectively;
the first transmission module is used for inserting the data block into a target channel in the multiple channels and transmitting the data block in the current beat under the condition that the data block needs to be inserted, wherein the target channel is determined by traversing the multiple channels according to a preset sequence, and the data block is used for identifying configuration information of the flexible Ethernet;
the buffer module is used for buffering the data units matched with the target channel and transmitting the data units in the next beat, wherein the multichannel is used for transmitting the data units of the inserted data blocks and/or data sections beat by beat;
the second transmission module is used for transmitting the matched target data unit in the current beat through other channels in the multiple channels, wherein, for each of the other channels, the target data unit is the data unit with the earliest corresponding section number among the matched data unit in the buffer and the matched data unit in the data section;
The transmitting the matched target data unit in the current beat through other channels in the multiple channels comprises:
reading the data units of the data section buffered in the previous beat that have not been transmitted;
distributing the data units in the cache to all the channels matched with the other channels for transmission in the current beat;
transmitting the data units matched in the data section received currently in the current beat in channels which are not allocated to the cached data units in the other channels;
the first transmission module is further used for reading, from the buffer, the data units of the data section buffered in the previous beat when the data block does not need to be inserted; distributing the data units in the buffer to each matched channel among the multiple channels for transmission in the current beat; and transmitting the data units matched in the received data section in the current beat when channels among the multiple channels match no data unit in the buffer.
7. A computer-readable storage medium storing a program, wherein the program performs the method of processing a data block of a flexible ethernet according to any one of claims 1 to 5.
8. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of processing data blocks of a flexible ethernet network of any of claims 1-5.
CN202211518862.4A 2022-11-30 2022-11-30 Method and device for processing data blocks of flexible Ethernet and storage medium Active CN115941792B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211518862.4A CN115941792B (en) 2022-11-30 2022-11-30 Method and device for processing data blocks of flexible Ethernet and storage medium
PCT/CN2022/143203 WO2024113448A1 (en) 2022-11-30 2022-12-29 Method and apparatus for processing data block of flexible ethernet, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211518862.4A CN115941792B (en) 2022-11-30 2022-11-30 Method and device for processing data blocks of flexible Ethernet and storage medium

Publications (2)

Publication Number Publication Date
CN115941792A CN115941792A (en) 2023-04-07
CN115941792B true CN115941792B (en) 2024-02-02

Family

ID=86697138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211518862.4A Active CN115941792B (en) 2022-11-30 2022-11-30 Method and device for processing data blocks of flexible Ethernet and storage medium

Country Status (2)

Country Link
CN (1) CN115941792B (en)
WO (1) WO2024113448A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070851A1 (en) * 2015-10-27 2017-05-04 Zte Corporation Channelization for flexible ethernet
CN109660373A (en) * 2017-10-10 2019-04-19 深圳市中兴微电子技术有限公司 The method and communication equipment, computer readable storage medium of remote loopback detection
CN110798415A (en) * 2018-08-03 2020-02-14 中兴通讯股份有限公司 Service transmission method, equipment and computer storage medium
CN111416630A (en) * 2020-03-05 2020-07-14 烽火通信科技股份有限公司 Coding and decoding method and system
CN112929199A (en) * 2019-12-06 2021-06-08 华为技术有限公司 Method, apparatus and computer readable storage medium for managing flexible Ethernet group
CN113923002A (en) * 2021-09-29 2022-01-11 山石网科通信技术股份有限公司 Computer network intrusion prevention method and device, storage medium and processor
CN114124844A (en) * 2021-11-30 2022-03-01 新华三半导体技术有限公司 Data processing method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322367B (en) * 2017-01-16 2022-01-14 中兴通讯股份有限公司 Method, equipment and system for service delivery
CN109600188B (en) * 2017-09-30 2020-04-28 华为技术有限公司 Data transmission method, transmission equipment and transmission system
CN109962807B (en) * 2017-12-26 2022-04-26 华为技术有限公司 Fault indication method and device in flexible Ethernet
CN112243019B (en) * 2019-07-19 2021-09-07 烽火通信科技股份有限公司 Method and system for establishing flexible Ethernet management channel
CN116057865A (en) * 2020-07-31 2023-05-02 华为技术有限公司 Method and equipment for adjusting physical interface in flexible Ethernet group

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070851A1 (en) * 2015-10-27 2017-05-04 Zte Corporation Channelization for flexible ethernet
CN109660373A (en) * 2017-10-10 2019-04-19 深圳市中兴微电子技术有限公司 The method and communication equipment, computer readable storage medium of remote loopback detection
CN110798415A (en) * 2018-08-03 2020-02-14 中兴通讯股份有限公司 Service transmission method, equipment and computer storage medium
CN112929199A (en) * 2019-12-06 2021-06-08 华为技术有限公司 Method, apparatus and computer readable storage medium for managing flexible Ethernet group
WO2021109705A1 (en) * 2019-12-06 2021-06-10 华为技术有限公司 Flexible ethernet group management method and device, and computer-readable storage medium
CN111416630A (en) * 2020-03-05 2020-07-14 烽火通信科技股份有限公司 Coding and decoding method and system
CN113923002A (en) * 2021-09-29 2022-01-11 山石网科通信技术股份有限公司 Computer network intrusion prevention method and device, storage medium and processor
CN114124844A (en) * 2021-11-30 2022-03-01 新华三半导体技术有限公司 Data processing method and system

Also Published As

Publication number Publication date
CN115941792A (en) 2023-04-07
WO2024113448A1 (en) 2024-06-06

Similar Documents

Publication Publication Date Title
US6967951B2 (en) System for reordering sequenced based packets in a switching network
KR20210086420A (en) Neural network data processing apparatus, method and electronic device
US7373467B2 (en) Storage device flow control
US20030056073A1 (en) Queue management method and system for a shared memory switch
US9490848B2 (en) Data handling system comprising memory banks and data rearrangement
JP2003037572A (en) Scheduling system
KR102524566B1 (en) A packet memory system, method and device for preventing underrun
CN111181874B (en) Message processing method, device and storage medium
CN115941792B (en) Method and device for processing data blocks of flexible Ethernet and storage medium
US10061513B2 (en) Packet processing system, method and device utilizing memory sharing
US10031884B2 (en) Storage apparatus and method for processing plurality of pieces of client data
US20160085683A1 (en) Data receiving device and data receiving method
US10637780B2 (en) Multiple datastreams processing by fragment-based timeslicing
CN104052831A (en) Data transmission method and device based on queues and communication system
CN115865808B (en) Method, device and storage medium for processing data blocks of flexible Ethernet
WO2014006507A2 (en) Systems and methods for providing replicated data from memories to processing clients
CN112732188A (en) Optimization method and system based on ID distribution efficiency of distributed storage logical volume
US9747226B2 (en) Packet processing system, method and device to optimize packet buffer space
KR100950356B1 (en) Data transfer unit with support for multiple coherency granules
JP6667524B2 (en) Dynamic RAM sharing in software-defined TDD communication
US20030110305A1 (en) Systematic memory location selection in ethernet switches
US9612950B2 (en) Control path subsystem, method and device utilizing memory sharing
CN103731225A (en) Data transmission method and device
CN115865299B (en) Method, device and storage medium for processing time slot data of flexible Ethernet
CN109413122B (en) Data processing method, network processor and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant