CN114124844B - Data processing method and system - Google Patents

Info

Publication number: CN114124844B (granted publication of application CN202111442450.2A; earlier published as CN114124844A)
Authority: CN (China)
Prior art keywords: channel, data, scheduled, buffer, shared buffer
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Inventor: 成放
Current and original assignee: New H3C Semiconductor Technology Co Ltd
Other languages: Chinese (zh)
Events: application filed by New H3C Semiconductor Technology Co Ltd; priority to CN202111442450.2A; publication of CN114124844A; application granted; publication of CN114124844B
Classifications

    • H04L 47/50: Traffic control in data switching networks; queue scheduling
    • G06F 9/4806: Multiprogramming arrangements; task transfer initiation or dispatching
    • G06F 9/544: Interprogram communication; buffers, shared memory, pipes
    • H04L 49/90: Packet switching elements; buffering arrangements
    • H04L 49/9084: Buffering arrangements; reactions to storage capacity overflow
    • H04L 69/14: Multichannel or multilink protocols
    • H04L 69/22: Parsing or analysis of headers

Abstract

Embodiments of the present application provide a data processing method and system, wherein the method comprises the following steps: reading, in the current beat, the data of a channel to be scheduled from a shared buffer, wherein the shared buffer is used for buffering the data of a plurality of channels; processing the data of the channel to be scheduled according to the MAC layer protocol to obtain the processed data of the channel to be scheduled; caching the processed data of the channel to be scheduled into a cache queue corresponding to the channel to be scheduled; and, according to the correspondence between channel identifiers and time slots, reading and outputting the data of the target channel corresponding to the current time slot from the cache queue corresponding to that target channel. Applying the technical solution provided by the embodiments of the present application reduces the scale of the data processing logic and the resources occupied by the data processing logic.

Description

Data processing method and system
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a data processing method and system.
Background
FlexE (Flex Ethernet) provides mechanisms to support matching various MAC (Media Access Control) layers with PHY (Physical) layers, and is therefore widely used.
At present, when implementing data transmission of a plurality of Clients, FlexE inserts or deletes idle blocks after 66B encoding of each Client's data, and distributes the Client data to the Calendar through control logic to complete data transmission.
In the related art, each Client performs storage management and MAC layer protocol processing independently, that is, one Client has an independent buffer and an independent MAC layer protocol processing logic. When there are multiple clients to process, multiple independent buffers and multiple independent MAC layer protocol processing logic are required, which makes the data processing logic of the device large-scale and difficult to implement.
Disclosure of Invention
An object of the embodiments of the present application is to provide a data processing method and system, so as to reduce the scale of data processing logic and reduce the resources occupied by the data processing logic. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a data processing method, where the method includes:
reading, in the current beat, data of a channel to be scheduled from a shared buffer, wherein the shared buffer is used for buffering the data of a plurality of channels;
processing the data of the channel to be scheduled according to the MAC layer protocol to obtain the processed data of the channel to be scheduled;
Caching the processed data of the channel to be scheduled into a cache queue corresponding to the channel to be scheduled;
and reading and outputting the data of the target channel from the buffer queue corresponding to the target channel corresponding to the current time slot according to the corresponding relation between the channel identification and the time slot.
In a second aspect, embodiments of the present application provide a data processing system, the system comprising: a shared buffer, a shared buffer scheduling module, a MAC layer protocol processing module and an independent buffer, wherein the independent buffer comprises cache queues corresponding to a plurality of channels;
the shared buffer scheduling module is used for sending a scheduling signal aiming at a channel to be scheduled to the shared buffer;
the shared buffer is configured to read, based on the scheduling signal, data of a channel to be scheduled from data of a plurality of channels cached in the shared buffer, and send the data to the MAC layer protocol processing module;
the MAC layer protocol processing module is used for processing the data of the channel to be scheduled according to the MAC layer protocol to obtain the processed data of the channel to be scheduled, and caching the processed data of the channel to be scheduled into the cache queue corresponding to the channel to be scheduled;
and the independent buffer is used for reading and outputting, according to the correspondence between channel identifiers and time slots, the data of the target channel corresponding to the current time slot from the cache queue corresponding to that target channel.
The beneficial effects of the embodiment of the application are that:
according to the technical scheme provided by the embodiment of the application, the data of a plurality of channels are stored in the shared buffer, the data of one channel in the shared buffer is processed according to the MAC layer protocol, the confusion of the MAC layer protocol processing is avoided, the MAC layer protocol processing requirement of the data is met, and the data is stored in the corresponding buffer queue after the MAC layer protocol processing, so that the calendar data can be obtained by the follow-up processing of the data of the corresponding channel of each time slot. Therefore, in the embodiment of the application, under the condition that the processing requirement of the MAC layer protocol of the data is met, the data buses are multiplexed by a plurality of channels to be processed, namely, a set of MAC layer protocol processing logic is multiplexed by a plurality of channels, so that the scale of the data processing logic is reduced, and the resources occupied by the data processing logic are reduced.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art may obtain other drawings from them.
Fig. 1 is a first flowchart of a data processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a message processed by the MAC layer protocol according to an embodiment of the present application;
Fig. 3 is a second flowchart of a data processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a back-pressure controller according to an embodiment of the present application;
Fig. 5 is a third flowchart of a data processing method according to an embodiment of the present application;
Fig. 6 is a fourth flowchart of a data processing method according to an embodiment of the present application;
Fig. 7 is a first schematic structural diagram of a data processing system according to an embodiment of the present application;
Fig. 8 is a second schematic structural diagram of a data processing system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the protection scope of this disclosure.
For ease of understanding, the words appearing in the embodiments of the application are explained below.
FlexE: a layer of protocol is added between the Ethernet MAC layer and the PHY layer, so that various mechanisms for matching the MAC layer with the PHY layer are supported, and the flexible mapping relation between the Ethernet MAC layer and the PHY layer is realized. The rates of the MAC layer and PHY layer in conventional ethernet must match, whereas in FlexE protocol the MAC layer rate may be greater than or equal to the PHY layer rate. The FlexE protocol can implement PHY layer binding, and the MAC layer supporting the 200G rate is transmitted on 2 PHY layers with 100G rate; the FlexE protocol may also implement sub-rates for the PHY layer, with the MAC layer supporting 50G rates being transported over the PHY layer at 100G rates.
Client: an ethernet MAC layer may be referred to as a Client. Clients may also be referred to as channels. The channel may be used for transmitting data or receiving data, that is, the channel described in the embodiments of the present application may be used as a transmitting channel or a receiving channel.
FIFO (First In First Out) queue: a widely used storage structure, usable for data stream shaping, clock-domain crossing of data, and so on.
Overflow: meaning that there is still a write operation after the buffer is full, resulting in data loss or overwriting.
Data in transit: the buffer writes data for which the port is not back-flushed.
When clapping: referring to the current beat, a beat includes a plurality of time slots.
At present, when FlexE implements transmission of data of a plurality of Clients, idle blocks are inserted into or deleted from each Client's data after 66B encoding, and the Client data is then distributed to the Calendar through control logic to complete data transmission.
In the related art, each Client performs storage management and MAC layer protocol processing independently, that is, one Client has an independent buffer and an independent MAC layer protocol processing logic. When there are multiple clients to process, multiple independent buffers and multiple independent MAC layer protocol processing logic are required, which makes the data processing logic of the device large-scale and difficult to implement.
In order to solve the above problems, embodiments of the present application provide a data processing method. The data processing method can be applied to electronic devices with FlexE interfaces, such as routers, switches, etc.
In this data processing method, the data of a plurality of channels is stored in a shared buffer; in each beat, the data of only one channel in the shared buffer is processed according to the MAC layer protocol, which avoids confusion in MAC layer protocol processing while meeting the MAC layer protocol processing requirement of the data. After MAC layer protocol processing, the data is stored in corresponding cache queues, so that the data of the channel corresponding to each beat's time slot can subsequently be processed to obtain calendar data. Thus, in the embodiments of the present application, under the condition that the MAC layer protocol processing requirement of the data is met, a plurality of channels multiplex one data bus for processing, i.e., a plurality of channels multiplex one set of MAC layer protocol processing logic, which reduces the scale of the data processing logic and the resources occupied by it.
The data processing method provided by the embodiment of the application is described in detail below through a specific embodiment. For the sake of understanding, the following description uses the electronic device as an execution body, and is not intended to be limiting.
As shown in fig. 1, fig. 1 is a first flowchart of a data processing method according to an embodiment of the present application, where the data processing method includes the following steps.
Step S11, in the current beat, reading data of the channel to be scheduled from a shared buffer, wherein the shared buffer is used for buffering the data of a plurality of channels.
In this embodiment of the present application, the electronic device is configured with a shared buffer, where data of multiple channels is cached. Optionally, the correspondence between the channel identifier and the data is stored in the shared buffer, so that the data of the corresponding channel can be read based on the correspondence.
Optionally, the scheduling strength of a channel is proportional to its bandwidth; that is, the greater the bandwidth, the more often the channel is scheduled, so as to guarantee the channel bandwidth. The electronic device can determine each channel's scheduling strength based on its bandwidth, determine from the scheduling strengths which channel is to be scheduled in the current beat, and then read that channel's data for the current beat from the shared buffer. In one beat, only one channel is scheduled; that is, in one beat, the data processed by the MAC layer protocol belongs to only one channel, which effectively avoids confusion in data processing.
For example, the data of channel 1 and the data of channel 2 are stored in the shared buffer. The bandwidth of channel 1 is 50G, and the bandwidth of channel 2 is 100G. Since bandwidth of channel 1 : bandwidth of channel 2 = 1:2, the ratio of the scheduling strength of channel 1 to that of channel 2 is 1:2; that is, for each time channel 1's data is scheduled, channel 2's data is scheduled twice, so channel 1's data is read from the shared buffer once while channel 2's data is read twice.
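The bandwidth-proportional scheduling in this example can be sketched as follows. This is a minimal illustrative model with hypothetical channel names, not the patented scheduler:

```python
import math
from functools import reduce
from itertools import cycle, islice

def build_schedule(bandwidths):
    """bandwidths: {channel_id: bandwidth in Gbit/s}. Returns an endless
    beat-by-beat schedule in which each channel is picked with a frequency
    proportional to its bandwidth (one channel per beat)."""
    g = reduce(math.gcd, bandwidths.values())            # reduce rates to smallest weights
    order = [ch for ch, bw in bandwidths.items() for _ in range(bw // g)]
    return cycle(order)

# 50G vs 100G: channel 2 is scheduled twice for each scheduling of channel 1.
beats = list(islice(build_schedule({"ch1": 50, "ch2": 100}), 6))
assert beats.count("ch1") == 2 and beats.count("ch2") == 4
```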
In order to avoid the problem of overflow of the shared buffer, for each channel, such as the first channel, the electronic device may determine in real time whether the amount of stored data in the storage space allocated for the first channel in the shared buffer reaches a first preset threshold. The magnitude of the first preset threshold value can be set according to actual requirements. When the stored data amount in the storage space allocated for the first channel in the shared buffer reaches a first preset threshold, the fact that the remaining storage space allocated for the first channel in the shared buffer is insufficient is indicated, and back pressure is carried out on an upstream data source of the first channel, so that the problem of overflow is avoided. The upstream data source is a data source for writing data into the shared buffer. If the amount of stored data in the storage space allocated for the first channel in the shared buffer does not reach the first preset threshold, it is indicated that the remaining storage space allocated for the first channel in the shared buffer is sufficient, and no other processing may be performed.
For example, a ready line may be configured between the upstream data source and the shared buffer, where the upstream data source may write data into the shared buffer when the ready line signal is high; when the ready line signal is low, the upstream data source will not write data into the shared buffer. When the stored data amount in the storage space allocated for the first channel in the shared buffer reaches a first preset threshold, a ready line indicating to write the data of the first channel into the shared buffer can be pulled down, so that the data of the first channel is suspended from being written into the shared buffer by an upstream data source of the first channel, the purpose of back pressure on the upstream data source of the first channel is further realized, and the problem of overflow is effectively prevented.
In this embodiment of the present application, the first preset thresholds corresponding to different channels may be the same or different. The larger the first preset threshold value corresponding to the channel, the larger the storage space allocated for the channel in the shared buffer.
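A minimal software model of this per-channel ready-line back pressure might look like the following. Class name and threshold values are illustrative assumptions; real designs implement this in hardware:

```python
class SharedBufferModel:
    """Toy model: per-channel occupancy with a 'first preset threshold';
    the ready line for a channel goes low once its threshold is reached."""
    def __init__(self, thresholds):
        self.thresholds = thresholds                 # {channel: first preset threshold}
        self.stored = {ch: 0 for ch in thresholds}

    def ready(self, ch):
        # Ready line is high while the channel's stored amount is below threshold.
        return self.stored[ch] < self.thresholds[ch]

    def write(self, ch, amount=1):
        if not self.ready(ch):
            return False                             # upstream source is back-pressured
        self.stored[ch] += amount
        return True

buf = SharedBufferModel({"ch1": 2})
assert buf.write("ch1") and buf.write("ch1")         # fills up to the threshold
assert not buf.ready("ch1") and not buf.write("ch1") # ready pulled low; write refused
```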
And step S12, processing the data of the channel to be scheduled according to the MAC layer protocol to obtain the processed data of the channel to be scheduled.
In this embodiment of the present application, according to the MAC layer protocol, the electronic device inserts the fields specified by the MAC layer protocol into the data of the channel to be scheduled, thereby implementing MAC layer protocol processing of that data. For example, as shown in Fig. 2, a 7-byte Preamble and a 1-byte SFD (Start of Frame Delimiter) are inserted at the packet header, and a 4-byte CRC (Cyclic Redundancy Check) field is inserted at the packet tail.
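The framing step of Fig. 2 can be sketched as follows. Note that `zlib.crc32` is used purely for illustration of a 4-byte check field; real Ethernet computes the FCS over the whole frame with its own byte and bit ordering conventions:

```python
import zlib

PREAMBLE = b"\x55" * 7   # 7-byte preamble
SFD = b"\xD5"            # 1-byte start-of-frame delimiter

def mac_frame(payload: bytes) -> bytes:
    """Insert the MAC-specified fields: preamble + SFD at the header,
    a 4-byte CRC at the tail (illustrative CRC, not the exact Ethernet FCS)."""
    crc = zlib.crc32(payload).to_bytes(4, "little")
    return PREAMBLE + SFD + payload + crc

frame = mac_frame(b"\x00" * 46)
assert frame[:7] == PREAMBLE and frame[7:8] == SFD
assert len(frame) == 7 + 1 + 46 + 4
```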
And step S13, caching the processed data of the channel to be scheduled to a cache queue corresponding to the channel to be scheduled.
In this embodiment of the application, the electronic device is configured with a cache queue for each channel; that is, the electronic device records a one-to-one correspondence between channel identifiers and cache queues, so that channels correspond to cache queues one by one. A cache queue may be a FIFO queue, and a single independent buffer holds multiple FIFO queues. The input to the cache queues is the data bus multiplexed by the channels. After the electronic device obtains the processed data of the channel to be scheduled, it caches the processed data into the cache queue corresponding to the channel to be scheduled according to the identifier of that channel.
In an embodiment of the present application, for each channel, for example, the second channel, the electronic device may determine in real time whether the amount of data stored in the buffer queue corresponding to the second channel reaches a second preset threshold, where the size of the second preset threshold may be set according to an actual requirement. When the stored data amount in the buffer queue corresponding to the second channel reaches a second preset threshold, the fact that the remaining storage space of the buffer queue corresponding to the second channel is insufficient is indicated, and back pressure is carried out on the storage space allocated for the second channel in the shared buffer, so that the problem of overflow in the buffer queue is avoided. If the amount of the stored data in the buffer queue corresponding to the second channel does not reach the second preset threshold, the remaining storage space of the buffer queue corresponding to the second channel is sufficient, and other processing can be omitted.
For example, a ready line can be configured between the cache queue and the shared buffer, and when the signal of the ready line is at a high level, the shared buffer can write data into the cache queue; when the ready line signal is low, the shared buffer will not write data into the buffer queue. When the amount of stored data in the storage space of the buffer queue corresponding to the second channel reaches a second preset threshold, a ready line for indicating writing data into the buffer queue corresponding to the second channel can be pulled down, so that the shared buffer pauses writing the data of the second channel into the buffer queue, the purpose of back pressure on the storage space allocated for the second channel in the shared buffer is realized, and the problem of overflow is effectively prevented.
In this embodiment of the present application, the capacity of each cache queue is less than or equal to a specified capacity threshold, so that the depth of the cache queue is not too deep. This prevents too many resources from being occupied when there are many channels. The specified capacity threshold can be set according to actual requirements, which is not limited here.
In one example, when the processing delay between the shared buffer and the buffer queue corresponding to the second channel is n beats, the depth of the buffer queue corresponding to the second channel is at least 2n beats of data.
In this embodiment of the present application, a processing delay of n beats between the shared buffer and the cache queue corresponding to the second channel indicates that there is an n-beat delay on the entry path from the shared buffer to that cache queue, i.e., n beats of in-transit data. Because in-transit data cannot be stopped by back pressure, the cache queue must be able to absorb it, so the depth of the cache queue corresponding to the second channel must be at least the back-pressure threshold plus n beats of data. Meanwhile, to ensure that the cache queue does not run empty, the back-pressure threshold must be set to at least the amount of in-transit data, i.e., n beats of data. Therefore, in this embodiment, a cache queue depth of at least 2n beats of data effectively avoids both overflow and empty reads while greatly saving storage resources.
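The sizing rule above can be checked with a toy simulation. This is an illustrative model, not the patented design: with an n-beat delay line, a back-pressure threshold of n, and a queue depth of 2n, the queue neither overflows nor reads empty once primed:

```python
def simulate(n, steps=200):
    """Model: shared buffer issues one unit per beat unless back-pressured;
    units land in the queue after an n-beat delay; the output drains one
    unit per beat after a 2n-beat priming period."""
    depth, stop_at = 2 * n, n          # queue depth 2n, back-pressure threshold n
    queue = 0
    pipe = [0] * n                     # n-beat delay line (the in-transit data)
    overflow = underflow = False
    for t in range(steps):
        queue += pipe.pop(0)           # in-transit data lands in the queue
        if queue > depth:
            overflow = True
        if t >= 2 * n:                 # after priming, drain one unit per beat
            if queue == 0:
                underflow = True       # an empty read would have happened
            else:
                queue -= 1
        # shared buffer issues one unit unless the queue back-pressures it
        pipe.append(1 if queue < stop_at else 0)
    return overflow, underflow

assert simulate(4) == (False, False)
assert simulate(1) == (False, False)
```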
Step S14, according to the corresponding relation between the pre-stored channel identification and the time slot, reading and outputting the data of the target channel from the buffer queue corresponding to the target channel corresponding to the time slot.
In the embodiment of the application, the correspondence between the channel identifier and the time slot is prestored in the electronic device. Optionally, the correspondence between the channel identifier and the time slot is stored in a calendar table (calendar).
The electronic device determines the channel identifier corresponding to the current time slot according to the pre-stored correspondence between channel identifiers and time slots, thereby identifying the target channel and the cache queue from which data needs to be read, and then reads the target channel's data from the cache queue corresponding to the target channel.
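The calendar-driven output stage described above can be sketched as follows; the slot table and channel names are hypothetical:

```python
from collections import deque

# calendar: time slot -> channel identifier (illustrative contents)
calendar = {0: "ch1", 1: "ch2", 2: "ch2"}
# per-channel cache queues holding MAC-processed data
queues = {"ch1": deque(["a1"]), "ch2": deque(["b1", "b2"])}

def output_for_slot(slot):
    """Look up the target channel for this slot and pop its next unit of data."""
    ch = calendar[slot % len(calendar)]
    return queues[ch].popleft() if queues[ch] else None

assert [output_for_slot(s) for s in range(3)] == ["a1", "b1", "b2"]
```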
According to the technical solution provided by this embodiment, the data of a plurality of channels is stored in the shared buffer, and in each beat the data of only one channel in the shared buffer is processed according to the MAC layer protocol, which avoids confusion in MAC layer protocol processing while meeting the MAC layer protocol processing requirement of the data. After MAC layer protocol processing, the data is stored in the corresponding cache queue, so that the data of the channel corresponding to each time slot can subsequently be processed to obtain calendar data. Thus, in this embodiment, under the condition that the MAC layer protocol processing requirement of the data is met, a plurality of channels multiplex one data bus for processing, i.e., a plurality of channels multiplex one set of MAC layer protocol processing logic, which reduces the scale of the data processing logic and the resources occupied by it.
In order to avoid overflow and reduce the probability of data loss or coverage, the embodiment of the application further provides a data processing method, as shown in fig. 3. The method includes steps S31 to S36, wherein steps S31 to S32 are one implementation manner of step S11, and steps S33 to S35 are the same as steps S12 to S14, which are not described herein.
Step S31, judging whether the data quantity to be stored corresponding to the channel to be scheduled is smaller than or equal to the expected remaining storage capacity. If yes, go to step S32; if not, step S36 is performed.
In this embodiment of the present application, the processing delay between the shared buffer and the cache queue corresponding to the channel to be scheduled is n beats, i.e., there are n beats of in-transit data on the path from the shared buffer's exit to the entry of that cache queue. The electronic device takes the sum of the amount of the channel's data scheduled in the current beat and the n beats of in-transit data as the amount of data to be stored, and takes the sum of the current remaining capacity of the cache queue corresponding to the channel to be scheduled and the amount of data expected to be read out in the subsequent n beats as the expected remaining storage capacity.
The electronic device determines whether the amount of data to be stored is less than or equal to an expected remaining storage capacity. If the amount of data to be stored is less than or equal to the expected remaining storage capacity, the data indicating that the data to be scheduled by the beat can be stored in the buffer queue, step S32 is executed, and the data of the channel to be scheduled is read from the shared buffer.
If the amount of data to be stored is greater than the expected remaining storage capacity, it indicates that the remaining storage space of the buffer queue is insufficient, and when the data scheduled by the beat cannot be stored in the buffer queue, an overflow condition occurs, and the data scheduling is ended, and step S36 is executed.
Take the back-pressure control mechanism shown in Fig. 4 as an example. FF in Fig. 4 represents the data expected to be read in subsequent beats. RAM (Random Access Memory) in the shared buffer is used to store channel data. Assume that when the channel to be scheduled is Client_1, the amount of Client_1 data scheduled in the current beat is A, the amount of Client_1 in-transit data over n beats is B, the remaining storage space of the cache queue corresponding to Client_1 is C, and the amount of Client_1 data expected to be read in the subsequent n beats is D. The cache-queue side calculates C+D as the expected remaining storage capacity and sends it to the shared-buffer side. When Client_1's data is scheduled to be dequeued (i.e., read from the RAM), the shared-buffer side determines whether (A+B) > (C+D) holds. If it holds, the Client_1 data scheduled in this beat cannot be stored in the cache queue corresponding to Client_1 and an overflow condition would occur, so the shared-buffer side executes step S36; otherwise, step S32 is executed to read the data of the channel to be scheduled from the shared buffer.
The amount of data expected to be read from the cache queue corresponding to the channel to be scheduled in the subsequent n beats can be determined from the correspondence between time slots and channel identifiers recorded in the calendar. The size of n can be set according to actual requirements.
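The dequeue-admission check of Fig. 4 reduces to a single comparison, using the variable names from the example above:

```python
def may_dequeue(a, b, c, d):
    """True if the data to store (A: this beat's scheduled data, plus
    B: n beats of in-transit data) fits into the expected remaining capacity
    (C: free queue space now, plus D: data expected to be read out in the
    next n beats). False means back-pressure instead of reading."""
    return a + b <= c + d

assert may_dequeue(a=2, b=3, c=4, d=2)        # 5 <= 6: safe to read this beat
assert not may_dequeue(a=2, b=3, c=2, d=2)    # 5 > 4: would overflow, back-pressure
```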
Step S32, reading the data of the channel to be scheduled from the shared buffer.
Step S36, back pressure is carried out on the storage space allocated for the channel to be scheduled in the shared buffer.
A ready line can be configured between the shared buffer and the cache queue, and when a signal of the ready line is in a high potential, the electronic equipment can read data from the shared buffer and write the read data into the cache queue; when the signal of the ready line is at a low level, the electronic device will not read data from the shared buffer, and will not write the read data into the buffer queue.
If the amount of data to be stored is greater than the expected remaining storage space, the electronic device may pull down the signal of the ready line of the channel to be scheduled, and in this case, the electronic device may not read data from the shared buffer, and may not write the read data into the buffer queue, so as to implement back pressure on the storage space allocated for the channel to be scheduled in the shared buffer.
In the technical solution provided by the embodiment of the present application, when an overflow risk is determined, the storage space allocated for the channel to be scheduled in the shared buffer is back-pressured, which avoids the overflow problem.
In addition, when determining whether the storage space allocated for the channel to be scheduled in the shared buffer needs to be back-pressured, the data amount expected to be read in the subsequent n beats is taken into account, which effectively prevents the buffer queue from being read empty. Since data can be read out of the buffer queue while data enters it, ignoring the data amount to be read in the next n beats would make the calculated remaining capacity of the buffer queue smaller than the actual remaining capacity, causing excessive back pressure on the shared buffer.
In order to satisfy flow control frame sending requests, an embodiment of the present application further provides a data processing method, as shown in fig. 5. The method includes steps S51 to S57, where steps S51 to S54 are the same as steps S11 to S14 and are not described again here.
Step S55, when a request for sending the flow control frame of the channel to be scheduled is received, the flow control frame of the channel to be scheduled is obtained.
In this embodiment of the present application, the request (e.g., fc-req) for sending a flow control frame may be generated by a flow control module in an electronic device, or may be sent to the electronic device by other devices, which is not limited.
In this embodiment of the present application, the flow control frame (e.g., fc-pkt) may be stored in the shared buffer in advance. When a flow control frame sending request of a channel to be scheduled is received, the electronic equipment acquires a pre-stored flow control frame.
And step S56, according to the MAC layer protocol, processing the flow control frame to obtain a processed flow control frame corresponding to the channel to be scheduled.
In the embodiment of the present application, after receiving the flow control frame sending request, the electronic device no longer schedules the channel data stored in the shared buffer; instead, it obtains the flow control frame of the channel to be scheduled and processes the flow control frame in the same manner as channel data. For the processing of the flow control frame in step S56, reference may be made to the processing of channel data in step S12 above, which is not repeated here.
And step S57, caching the processed flow control frames of the channels to be scheduled into the cache queues corresponding to the channels to be scheduled.
In the technical solution provided by the embodiment of the present application, when a flow control frame sending request is received, the shared buffer side no longer outputs channel data but outputs the flow control frame instead, so that the flow control frame is processed and the flow control frame sending request is satisfied.
An embodiment of the present application further provides a data processing method, as shown in fig. 6. The method includes steps S61 to S67, where steps S61 to S64 are the same as steps S11 to S14 and are not described again here.
Step S65, determining the number of idle blocks to be inserted corresponding to the target channel according to the number of idle blocks which are preset and the number of idle blocks to be deleted, wherein the number of idle blocks to be deleted is determined according to the alignment mark.
In this embodiment of the present application, a calendar table (calendar) may be used to record the correspondence between channel identifiers and time slots. According to the pre-stored correspondence between channel identifiers and time slots, the electronic device can determine the target channel identifier corresponding to the time slot of the current beat, and further determine the target channel corresponding to that identifier.
A beat may include one or more time slots, so the target channel determined by the electronic device may be one or more channels.
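The calendar lookup can be sketched as a simple slot-to-channel map. The table contents and names below are hypothetical; a real FlexE calendar is fixed by configuration.

```python
# Hypothetical calendar: maps each time slot to a channel identifier.
calendar = {0: "ch_a", 1: "ch_b", 2: "ch_a", 3: "ch_c"}


def target_channels(calendar: dict, slots_in_beat: list) -> list:
    """Return the target channel(s) for the slots covered by the
    current beat, preserving slot order and removing duplicates
    (one beat may cover several slots of the same channel)."""
    seen, targets = set(), []
    for slot in slots_in_beat:
        ch = calendar[slot]
        if ch not in seen:
            seen.add(ch)
            targets.append(ch)
    return targets
```

A beat covering slots 0-2 of this table yields two target channels, ch_a and ch_b, since slot 2 repeats ch_a.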
In the embodiment of the present application, the electronic device preconfigures the number of idle blocks (e.g., cfg_idle_num) to achieve data and rate alignment between the receiving end and the transmitting end, and generates an alignment marker (e.g., an Align Marker); based on the Align Marker, the number of idle blocks to be deleted in the target channel can be determined.
The electronic device determines the number of idle blocks to be inserted according to the preconfigured number of idle blocks and the number of idle blocks to be deleted, i.e., the difference between the preconfigured number of idle blocks and the number of idle blocks to be deleted.
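The difference calculation can be sketched in one line. The clamp at zero is an assumption for the case where more idle blocks are deleted than are preconfigured, which the embodiment does not describe; the function name is illustrative.

```python
def idle_blocks_to_insert(cfg_idle_num: int, num_to_delete: int) -> int:
    """Number of idle blocks to insert at the message tail: the
    preconfigured count (e.g. cfg_idle_num) minus the count deleted
    due to alignment-marker insertion. Clamped at zero as a
    defensive assumption (the negative case is not specified)."""
    return max(cfg_idle_num - num_to_delete, 0)
```

For example, with 8 idle blocks preconfigured and 3 deleted for alignment markers, 5 idle blocks are inserted at the tail of the message.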
The embodiment of the present application does not limit the execution order of step S64 and step S65.
Step S66, when the tail of the message of the target channel is read, outputting the idle blocks with the number of idle blocks to be inserted corresponding to the target channel.
For a target channel, when the electronic device reads the tail of a message of the target channel, it stops reading data of the target channel from the buffer queue, and in the subsequent time slots corresponding to the target channel it outputs idle blocks until the number of idle blocks to be inserted has been output.
In this embodiment of the present application, the electronic device inserts or deletes idle blocks at the tail of the message to complete data transmission or processing. The number of inserted idle blocks affects the read enable of the cache queue: for example, if 8 bytes of idle blocks are to be inserted for a target channel, 8 fewer bytes of data are read from the buffer queue corresponding to the target channel.
In addition, in the embodiment of the present application, the electronic device outputs the data of the target channel according to the calendar table, and thus, the output data of the target channel may be referred to as calendar data.
According to the technical solution provided by the embodiment of the present application, idle blocks are inserted into the target channel when its data is read, based on the alignment marker and the preconfigured number of idle blocks. This aligns data and rate between the receiving end and the sending end, ensures enough idle blocks in the transmitted calendar data to carry OAM (Operation Administration and Maintenance) information and the like, and facilitates management of the transmitted calendar data.
Corresponding to the above data processing method, the embodiment of the present application further provides a data processing system, as shown in fig. 7, including: the system comprises a shared buffer 71, a shared buffer scheduling module 72, a MAC layer protocol processing module 73 and an independent buffer 74, wherein the independent buffer 74 comprises buffer queues corresponding to a plurality of channels.
A shared buffer scheduling module 72, configured to send a scheduling signal for a channel to be scheduled to the shared buffer 71;
the shared buffer 71 is configured to read data of a channel to be scheduled from data of a plurality of channels buffered in the shared buffer 71 based on a scheduling signal, and send the data to the MAC layer protocol processing module 73;
the MAC layer protocol processing module 73 is configured to process data of a channel to be scheduled according to a MAC layer protocol, obtain processed data of the channel to be scheduled, and cache the processed data of the channel to be scheduled to a cache queue corresponding to the channel to be scheduled;
and the independent buffer 74 is used for reading and outputting the data of the target channel from the buffer queue corresponding to the target channel corresponding to the time slot according to the corresponding relation between the channel identification and the time slot.
According to the technical solution provided by the embodiment of the present application, the data of a plurality of channels is stored in the shared buffer, and the data of one channel at a time is read from the shared buffer and processed according to the MAC layer protocol, which avoids confusion in MAC layer protocol processing and meets the MAC layer protocol processing requirement of the data. After MAC layer protocol processing, the data is stored in the corresponding buffer queue, so that calendar data can be obtained by subsequently reading the data of the channel corresponding to each time slot. Therefore, in the embodiment of the present application, under the condition that the MAC layer protocol processing requirement of the data is met, the channels to be processed multiplex one data bus, i.e., a plurality of channels multiplex one set of MAC layer protocol processing logic, which reduces the scale of the data processing logic and the resources it occupies.
Optionally, the processing delay between the shared buffer 71 and the buffer queue corresponding to the channel to be scheduled is n beats;
the shared buffer 71 may be specifically configured to read data of a channel to be scheduled from data of a plurality of channels buffered in the shared buffer 71 if an amount of data to be stored corresponding to the channel to be scheduled is less than or equal to an expected remaining storage capacity;
the shared buffer 71 may be further configured to back-pressure a storage space allocated for the channel to be scheduled in the shared buffer 71 if the amount of data to be stored is greater than the expected remaining storage capacity;
the data amount to be stored is the sum of the data amount of the channel to be scheduled at the beat of scheduling and the in-transit data amount of n beats, and the expected remaining storage capacity is the sum of the remaining storage capacity of the cache queue corresponding to the channel to be scheduled and the data amount expected to be read in the subsequent n beats.
Optionally, the shared buffer 71 may be further configured to back-pressure an upstream data source of any channel when the amount of stored data in the storage space allocated for any channel in the shared buffer 71 reaches a first preset threshold, where the upstream data source is a data source for writing data into the shared buffer 71.
Optionally, the shared buffer 71 may be further configured to acquire the flow control frame of the channel to be scheduled and send the flow control frame to the MAC layer protocol processing module 73 when receiving a flow control frame sending request of the channel to be scheduled;
The MAC layer protocol processing module 73 may be further configured to process the flow control frame according to a MAC layer protocol, obtain a processed flow control frame corresponding to the channel to be scheduled, and cache the processed flow control frame of the channel to be scheduled in a cache queue corresponding to the channel to be scheduled.
Optionally, the data processing system may also include a free block insertion calculation module 76 and a calendar construction module 75, as shown in FIG. 8.
A calendar construction module 75 for determining and storing the correspondence between the channel identifications and the time slots. In this case, the free block insertion calculation module 76 may obtain the correspondence between the channel identifier and the time slot from the calendar construction module 75, and instruct the independent buffer 74 to read the data from the buffer queue corresponding to the corresponding target channel.
The idle block insertion calculation module 76 is configured to determine, when the tail of the message of the target channel is read, the number of idle blocks to be inserted corresponding to the target channel according to the number of idle blocks configured in advance and the number of idle blocks to be deleted, where the number of idle blocks to be deleted is determined according to the alignment mark;
the independent buffer 74 may be further configured to output the number of idle blocks to be inserted corresponding to the target channel when the tail of the message of the target channel is read.
Optionally, the independent buffer 74 may be further configured to back-pressure the storage space allocated for any channel in the shared buffer 71 when the amount of data stored in the buffer queue corresponding to the channel reaches the second preset threshold.
Optionally, when the processing delay of the shared buffer 71 and the buffer queue corresponding to any channel is n beats, the depth of the buffer queue corresponding to any channel is at least 2n beats of data.
Optionally, the scheduling strength of each channel is proportional to the bandwidth of the channel.
In this embodiment of the present application, the shared buffer 71, the shared buffer scheduling module 72, the MAC layer protocol processing module 73, the independent buffer 74, the calendar construction module 75, and the idle block insertion calculation module 76 may be implemented in hardware, may be implemented in software, and are not limited thereto.
In this embodiment, a main memory (i.e., RAM) is configured in the shared buffer 71; the shared buffer 71 stores the input data of the channels, and a plurality of channels share one RAM, so that RAM resources are utilized to the maximum extent. Different channels can be distinguished by channel identifiers.
The shared buffer scheduling module 72 schedules the channel data stored in the shared buffer 71. When a channel issues a flow control frame sending request, the flow control frame is inserted, i.e., the flow control frame is output by the shared buffer 71, and the shared buffer scheduling module 72 stops scheduling the channel data in the shared buffer 71.
The input data of the independent buffer 74 arrives on a data bus multiplexed by the channels, and the channel data is stored into the corresponding buffer queue according to the channel identifier. In this embodiment, the write side of the independent buffer 74 is finely controlled to ensure that the buffer queues neither overflow nor are read empty, so that FlexE works normally; see the embodiment shown in fig. 3 and the related description of step S13.
The idle block insertion calculation module 76 generates the read enable of the buffer queue corresponding to the target channel of the current time slot according to the calendar information in the calendar construction module 75, and reads the data of the target channel of the current time slot. In addition, the idle block insertion calculation module 76 can calculate the number of idle blocks to be inserted according to the preconfigured number of idle blocks (e.g., cfg_idle_num) and the number of idle blocks to be deleted due to the insertion of alignment markers (e.g., Align Markers). The number of idle blocks to be inserted affects the cache queue read enable, and the timing of idle block insertion is determined according to the tail information of the message.
In the embodiment of the present application, the data output from the independent buffer 74 is FlexE calendar data.
In the embodiment of the present application, the channel data is first written into the shared buffer 71, and the shared buffer scheduling module 72 controls channel data dequeuing. When a channel has a flow control frame sending request, the flow control frame is inserted and the shared buffer scheduling module 72 does not schedule the data stored in the shared buffer 71. The MAC layer protocol processing module 73 adds a preamble, an SFD, an FCS (Frame Check Sequence) and the like to the dequeued channel data. The FCS may be calculated using CRC-32 according to the formula specified by the Ethernet protocol. Because of these added fields, the message data expands. The channel-multiplexed data bus processed by the MAC layer protocol is then demultiplexed, and the data is stored into the buffer queue corresponding to each channel.
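The field insertion and the resulting expansion can be sketched as follows. This is an illustrative model only: `payload` here stands for the frame body over which the FCS is computed (in real Ethernet the FCS does not cover the preamble and SFD), and `zlib.crc32` is used because it implements the same CRC-32 polynomial as the Ethernet FCS.

```python
import zlib

PREAMBLE = b"\x55" * 7   # 7 preamble octets
SFD = b"\xd5"            # start frame delimiter


def mac_frame(payload: bytes) -> bytes:
    """Sketch of the MAC-layer expansion described above: prepend
    preamble and SFD, append a 4-byte FCS computed with CRC-32
    (transmitted least-significant byte first)."""
    fcs = zlib.crc32(payload).to_bytes(4, "little")
    return PREAMBLE + SFD + payload + fcs


# The frame grows by 7 + 1 + 4 = 12 octets relative to the payload;
# this inflation is what the per-channel cache queue must absorb.
```

A 60-byte payload thus becomes a 72-byte stream, which illustrates why the write side of the independent buffer must account for data expansion.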
The calendar construction module 75 stores calendar information, i.e., the correspondence between channel identifiers and time slots, according to which the independent buffer 74 reads the channel data corresponding to the time slot of each beat from the buffer queue of the corresponding channel. Since the data stored in the cache queues of the independent buffer 74 contains no idle blocks, the data amount of the inserted idle blocks is subtracted from the data amount actually read from the buffer queue corresponding to the channel.
Through the cooperation of the above modules, the technical solution provided by the embodiment of the present application can achieve the following beneficial effects:
1. A shared buffer is used at the FlexE entry, and all channels share one block of main memory RAM. Compared with independent buffers at the entry, the shared buffer effectively reduces resources and area.
Taking a back-pressure delay of 20 beats for the upstream module as an example, the in-transit data is at most 20 beats. Each channel needs to support up to the full FlexE bandwidth (for example, a FlexE bandwidth of 400G, with each channel's rate up to 400G), and when the buffer queue of a channel reaches the threshold, 20 more beats of data still need to be absorbed, so the buffer depth of each channel is 20×2=40 beats of data, and the total buffer depth is 40N beats of data (N = number of channels). If the shared buffer is used, with an average of M beats of data allocated to each channel, the total buffer depth is M×N beats of data; considering that at least 2 addresses are allocated to each channel, the total buffer depth is at least 2N beats of data. 2N is much smaller than 40N.
Taking a typical application with N=128 as an example, the total buffer depth is 128×40=5120 beats of data when independent buffers are used, and only 128×2=256 beats of data when the shared buffer is used.
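The depth comparison above can be checked with a short calculation. The function names are illustrative; the formulas restate the arithmetic of the preceding two paragraphs.

```python
def independent_depth(n_channels: int, backpressure_delay: int = 20) -> int:
    """Total depth (in beats) with per-channel independent buffers:
    each channel must hold the data up to its threshold plus the
    in-transit data, i.e. 2 x the back-pressure delay per channel."""
    return n_channels * 2 * backpressure_delay


def shared_depth_lower_bound(n_channels: int) -> int:
    """Lower bound (in beats) with a shared buffer: at least
    2 addresses are allocated per channel."""
    return 2 * n_channels
```

For N=128 this reproduces the figures in the text: 5120 beats for independent buffers versus a 256-beat lower bound for the shared buffer, a 20x reduction.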
2. The flow control frame of a channel is inserted on the multiplexed data bus at the outlet of the shared buffer, multiplexing the shared buffer scheduling module and the data bus.
3. When channel data is processed by the MAC layer protocol, field insertion and CRC calculation are performed on the channel-multiplexed data bus, so that resources are effectively multiplexed. The resources amount to 1/N of those required for per-channel independent MAC layer protocol processing (N = number of channels).
4. The back pressure of the buffer queue considers the in-transit data on the write side and the expected read data amount on the read side, so the data source can be accurately back-pressured, a buffer queue of minimum depth can be realized, and the buffer queue neither overflows nor is read empty.
In the above embodiments, the solution may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the system, since it is substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (16)

1. A method of data processing, the method comprising:
reading, at the current beat, data of a channel to be scheduled from a shared buffer, wherein the shared buffer is used for buffering data of a plurality of channels;
processing the data of the channel to be scheduled according to a Media Access Control (MAC) layer protocol to obtain processed data of the channel to be scheduled;
caching the processed data of the channel to be scheduled to a cache queue corresponding to the channel to be scheduled;
and reading and outputting the data of the target channel from the buffer queue corresponding to the target channel corresponding to the current time slot according to the corresponding relation between the channel identification and the time slot.
2. The method of claim 1, wherein a processing delay between the shared buffer and a buffer queue corresponding to the channel to be scheduled is n beats;
the step of reading the data of the channel to be scheduled from the shared buffer comprises:
If the data quantity to be stored corresponding to the channel to be scheduled is smaller than or equal to the expected remaining storage capacity, reading the data of the channel to be scheduled from the shared buffer;
wherein the data amount to be stored is the sum of the data amount of the channel to be scheduled at the beat of scheduling and the in-transit data amount of n beats, and the expected remaining storage capacity is the sum of the remaining storage capacity of the cache queue corresponding to the channel to be scheduled and the data amount expected to be read in the subsequent n beats;
the method further comprises the steps of:
and if the data quantity to be stored is larger than the expected residual storage capacity, back-pressing the storage space allocated for the channel to be scheduled in the shared buffer.
3. The method according to claim 1, wherein the method further comprises:
and when the stored data amount in the storage space allocated for any channel in the shared buffer reaches a first preset threshold, back-pressure is carried out on an upstream data source of any channel, wherein the upstream data source is a data source for writing data into the shared buffer.
4. The method according to claim 1, wherein the method further comprises:
when a request for sending the flow control frame of the channel to be scheduled is received, acquiring the flow control frame of the channel to be scheduled;
Processing the flow control frame according to the MAC layer protocol to obtain a processed flow control frame corresponding to the channel to be scheduled;
and caching the processed flow control frame into a cache queue corresponding to the channel to be scheduled.
5. The method according to claim 1, characterized in that the method further comprises:
determining the number of idle blocks to be inserted corresponding to the target channel according to the number of idle blocks which are preset and the number of idle blocks to be deleted, wherein the number of idle blocks to be deleted is determined according to an alignment mark;
and outputting the idle blocks of the number of the idle blocks to be inserted corresponding to the target channel when the tail of the message of the target channel is read.
6. The method according to claim 1, wherein the method further comprises:
and when the stored data quantity in the cache queue corresponding to any channel reaches a second preset threshold value, back-pressure is carried out on the storage space allocated for any channel in the shared buffer.
7. The method of claim 6, wherein when the processing delay between the shared buffer and the buffer queue corresponding to any one of the channels is n beats, the depth of the buffer queue corresponding to any one of the channels is at least 2n beats of data.
8. The method of any of claims 1-7, wherein the scheduling strength of each channel is proportional to the bandwidth of the channel.
9. A data processing system, the system comprising: the system comprises a shared buffer, a shared buffer scheduling module, a Media Access Control (MAC) layer protocol processing module and an independent buffer, wherein the independent buffer comprises buffer queues corresponding to a plurality of channels;
the shared buffer scheduling module is used for sending a scheduling signal aiming at a channel to be scheduled to the shared buffer;
the shared buffer is configured to read, based on the scheduling signal, data of a channel to be scheduled from data of a plurality of channels cached in the shared buffer, and send the data to the MAC layer protocol processing module;
the MAC layer protocol processing module is used for processing the data of the channel to be scheduled according to an MAC layer protocol to obtain the processed data of the channel to be scheduled, and caching the processed data of the channel to be scheduled to a cache queue corresponding to the channel to be scheduled;
and the independent buffer is used for reading and outputting the data of the target channel from the buffer queue corresponding to the target channel corresponding to the time slot according to the corresponding relation between the channel identification and the time slot.
10. The system of claim 9, wherein a processing delay between the shared buffer and a buffer queue corresponding to the channel to be scheduled is n beats;
if the data quantity to be stored corresponding to the channel to be scheduled is smaller than or equal to the expected remaining storage capacity, reading the data of the channel to be scheduled from the data of a plurality of channels cached in the shared buffer;
if the data quantity to be stored is larger than the expected residual storage capacity, back-pressure is carried out on the storage space allocated for the channel to be scheduled in the shared buffer;
wherein the data amount to be stored is the sum of the data amount of the channel to be scheduled at the beat of scheduling and the in-transit data amount of n beats, and the expected remaining storage capacity is the sum of the remaining storage capacity of the cache queue corresponding to the channel to be scheduled and the data amount expected to be read in the subsequent n beats.
11. The system of claim 9, wherein the shared buffer is further configured to back-pressure an upstream data source of any channel when an amount of stored data in a storage space allocated for the any channel in the shared buffer reaches a first predetermined threshold, the upstream data source being a data source that writes data into the shared buffer.
12. The system of claim 9, wherein the shared buffer is further configured to acquire the flow control frame of the channel to be scheduled and send the flow control frame to the MAC layer protocol processing module when receiving the flow control frame sending request of the channel to be scheduled;
the MAC layer protocol processing module is further configured to process the flow control frame according to an MAC layer protocol, obtain a processed flow control frame corresponding to the channel to be scheduled, and buffer the processed flow control frame of the channel to be scheduled into a buffer queue corresponding to the channel to be scheduled.
13. The system of claim 9, further comprising a free block insertion calculation module and a calendar construction module;
the calendar construction module is used for storing the corresponding relation between the channel identification and the time slot;
the idle block insertion calculation module is used for determining the number of idle blocks to be inserted corresponding to the target channel according to the number of idle blocks to be deleted and the number of preset idle blocks when the tail of the message of the target channel is read, wherein the number of idle blocks to be deleted is determined according to the alignment mark;
and the independent buffer is further used for outputting the idle blocks with the number of the idle blocks to be inserted corresponding to the target channel when the tail of the message of the target channel is read.
14. The system of claim 9, wherein the independent buffer is further configured to back-pressure the storage space allocated for any channel in the shared buffer when the amount of data stored in the buffer queue corresponding to the any channel reaches a second preset threshold.
15. The system of claim 14, wherein when the processing delay of the shared buffer and the buffer queue corresponding to any one of the channels is n beats, the depth of the buffer queue corresponding to any one of the channels is at least 2n beats of data.
16. The system of any of claims 9-15, wherein the scheduling strength of each channel is proportional to the bandwidth of the channel.
CN202111442450.2A 2021-11-30 2021-11-30 Data processing method and system Active CN114124844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111442450.2A CN114124844B (en) 2021-11-30 2021-11-30 Data processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111442450.2A CN114124844B (en) 2021-11-30 2021-11-30 Data processing method and system

Publications (2)

Publication Number Publication Date
CN114124844A CN114124844A (en) 2022-03-01
CN114124844B true CN114124844B (en) 2024-02-23

Family

ID=80368247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111442450.2A Active CN114124844B (en) 2021-11-30 2021-11-30 Data processing method and system

Country Status (1)

Country Link
CN (1) CN114124844B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022243B (en) * 2022-06-28 2023-05-26 绿盟科技集团股份有限公司 Data flow control method, device and system, electronic equipment and storage medium
CN115941792B (en) * 2022-11-30 2024-02-02 苏州异格技术有限公司 Method and device for processing data blocks of flexible Ethernet and storage medium

Similar Documents

Publication Publication Date Title
CN114124844B (en) Data processing method and system
US6687225B1 (en) Bandwidth control apparatus
JP2615036B2 (en) Packet transmission equipment
US5761430A (en) Media access control for isochronous data packets in carrier sensing multiple access systems
EP2222005B1 (en) Dynamic bandwidth allocation circuit, dynamic bandwidth allocation method, dynamic bandwidth allocation program and recording medium
JP2970685B2 (en) Access control system for multi-channel transmission ring
US5140587A (en) Broadband ring communication system and access control method
US6967951B2 (en) System for reordering sequenced based packets in a switching network
EP2378721B1 (en) Bandwidth allocation method and routing apparatus
CN111095860B (en) Method and device for clock synchronization
EP2434775A2 (en) Method and apparatus for supporting differentiated performance for multiple categories of packets in a passive optical network
CN110086728B (en) Method for sending message, first network equipment and computer readable storage medium
US20030012223A1 (en) System and method for processing bandwidth allocation messages
WO2017063457A1 (en) Rate adaptation method and apparatus, and computer storage medium
JPH10224380A (en) Electric communication system and method to transfer cell having header with address and payload configuring streaming data such as audio and video data in asynchronous transfer mode
WO2002001785A1 (en) Media access control for isochronous data packets in carrier sensing multiple access systems
KR20210137204A (en) A method implemented by computer means of a communication entity in a packet-switched network, a computer program, a computer-readable non-transitory recording medium, and a communication entity in a packet-switched network
US10686897B2 (en) Method and system for transmission and low-latency real-time output and/or processing of an audio data stream
JP2003518874A (en) data communication
US6618374B1 (en) Method for inverse multiplexing of ATM using sample prepends
US6975643B2 (en) Queue management in packet switched networks
US20030123451A1 (en) Combined use timer system for data communication
JP2004048640A (en) Atm-pon slave device and transmission/reception method of the same
US6665298B1 (en) Reassembly unit and a method thereof
EP4002862A1 (en) An optical line terminal and an optical network unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant