CN109308180B - Processing method and processing device for cache congestion - Google Patents

Processing method and processing device for cache congestion

Info

Publication number
CN109308180B
CN109308180B (application CN201810937015.9A)
Authority
CN
China
Prior art keywords: data, memory, current, channel, discarding
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810937015.9A
Other languages
Chinese (zh)
Other versions
CN109308180A (en)
Inventor
耿磊
江源
师克龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Centec Communications Co Ltd
Original Assignee
Centec Networks Suzhou Co Ltd
Application filed by Centec Networks Suzhou Co Ltd filed Critical Centec Networks Suzhou Co Ltd
Priority to CN201810937015.9A
Publication of CN109308180A
Application granted
Publication of CN109308180B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00: Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06: Methods or arrangements for data conversion without changing the order or content of the data handled, for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)

Abstract

The invention provides a processing method and a processing device for cache congestion. The method comprises: creating a data state FIFO memory and a multi-channel data discard flag register corresponding to the data memory, where the data state FIFO memory stores, for each datum held in the data memory, its address, the number of the data channel it arrived on, and a forced end bit flag for that channel. In any clock cycle in which a datum has a read request and/or a write request, the method judges, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, whether the datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering. By introducing a data state FIFO memory whose depth is asymmetric to that of the data memory, the invention reduces the required depth of the data memory, saves memory resources, and improves the effective utilization of the data memory.

Description

Processing method and processing device for cache congestion
Technical Field
The present invention relates to the field of network communications, and in particular, to a method and an apparatus for processing cache congestion.
Background
Data transmission is a common function of digital circuits. When multi-channel data packets arrive intermittently on a high-bandwidth interface and must be forwarded to a low-bandwidth interface, and the high-bandwidth interface provides no flow control, a data memory is usually needed to buffer the incoming packets. If data keeps arriving, however, the data memory quickly fills and would overflow. Since overflow cannot be tolerated, once the data memory is full a forced end-of-data marker must be generated to terminate the packet currently being transmitted on the affected data channel; although part of that channel's packet has already been written into the data memory, all of its subsequent data must be discarded.
In the prior art, as shown in fig. 1, this problem is handled by the following buffering congestion architecture: if the minimum required depth of the data memory is N and the number of input data channels is M, the data memory is actually built with a depth of N + M, reserving one address per input channel (M reserved addresses in total). When the data memory has used up its N effective addresses, a full flag signal is asserted to indicate that no further input data can be accepted, and a forced end-of-data marker is written into the address reserved for the affected channel to indicate that the channel's data can no longer be written into the memory until the packet ends. Data memory resources are precious, however, and reserving an address for every channel wastes a large amount of address space when there are many channels.
Disclosure of Invention
To solve the above technical problem, an object of the present invention is to provide a method and an apparatus for processing cache congestion.
In order to achieve one of the above objects, an embodiment of the present invention provides a method for processing cache congestion, the method comprising: creating a data state FIFO memory and a multi-channel data discard flag register corresponding to the data memory;
the data memory is used for storing data input through the data channels and has a depth of N; the number of data channels is M; the data state FIFO memory is used for storing the address of each datum stored in the data memory, the channel number of its data channel, and the forced end bit flag of that channel, and has a minimum depth of N + M; the discard flag register is used for storing a discard flag for each data channel and has a minimum width of M; M and N are positive integers greater than 1;
in any clock cycle in which any datum has a read request and/or a write request, judging, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, whether the datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering.
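For illustration, the three structures and their minimum sizes described above can be modeled in a few lines of Python. This is a behavioral sketch only; the class name `CongestionBuffer` and its field names are hypothetical, not taken from the patent.

```python
from collections import deque

class CongestionBuffer:
    """Illustrative model of the three structures: a data memory of depth N,
    a data-state FIFO of minimum depth N + M, and an M-bit discard flag register."""

    def __init__(self, n_depth, m_channels):
        assert n_depth > 1 and m_channels > 1        # the patent requires M, N > 1
        self.N = n_depth
        self.M = m_channels
        self.free_addrs = deque(range(n_depth))      # unused data-memory addresses
        self.data_memory = [None] * n_depth          # depth-N payload storage
        self.state_fifo = deque()                    # entries: (addr, channel, force_end)
        self.state_fifo_depth = n_depth + m_channels # minimum depth is N + M
        self.discard_flags = [False] * m_channels    # one discard flag per channel

buf = CongestionBuffer(8, 4)
print(buf.state_fifo_depth)  # 12
```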
As a further improvement of an embodiment of the present invention, the step of judging, in any clock cycle in which any datum has a read request and/or a write request, whether the current datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, specifically includes:
in any clock cycle, if the current datum issues a write request, traversing the discard flag register and judging whether the discard flag corresponding to the data channel through which the current datum passes is set to enabled;
if yes, discarding the current datum, and in subsequent clock cycles discarding all data after it in its data packet; when a new data packet is then input, judging from the state of the data memory whether to update the discard flag in the discard flag register;
if not, judging from the state of the data memory whether to discard the current datum.
As a further improvement of an embodiment of the present invention, the step of judging, in any clock cycle in which any datum has a read request and/or a write request, whether the current datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, specifically includes:
in any clock cycle, if the current datum issues a write request, judging whether the data memory is full;
if yes, discarding the current datum; if the discard flag of the multi-channel data discard flag register corresponding to the current data channel is judged to be disabled, writing into the data state FIFO memory the channel number of the current data channel together with a forced end bit flag set to enabled, and at the same time setting the discard flag of the current channel to enabled; in subsequent clock cycles, discarding all data after the current datum in its data packet, and after the last datum of the packet has been discarded, setting the discard flag of the current channel back to disabled;
if not, writing the current datum into the data memory for buffering, and at the same time writing into the data state FIFO memory the address at which it was written, the channel number of its data channel, and a forced end bit flag set to disabled.
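The write-side decision tree above can be sketched as one function over a small state dictionary. Names such as `handle_write` and the dict keys are hypothetical; this is a behavioral model under the patent's description, not its circuit.

```python
from collections import deque

def handle_write(state, channel, data, is_last):
    """Write-path sketch: drop if the channel's discard flag is enabled;
    on a full data memory, log a forced-end FIFO entry and set the flag;
    otherwise buffer the datum and log a normal FIFO entry."""
    if state["discard_flags"][channel]:
        # Channel already in discard mode: drop silently; clear the flag
        # once the last datum of the interrupted packet has been dropped.
        if is_last:
            state["discard_flags"][channel] = False
        return "discarded"
    if not state["free_addrs"]:                            # data memory full
        state["state_fifo"].append((None, channel, True))  # forced-end entry
        if not is_last:                                    # more of this packet to come
            state["discard_flags"][channel] = True
        return "forced_end"
    addr = state["free_addrs"].popleft()
    state["data_memory"][addr] = data
    state["state_fifo"].append((addr, channel, False))     # normal entry
    return "stored"

state = {"free_addrs": deque([0]), "data_memory": [None],
         "state_fifo": deque(), "discard_flags": [False, False]}
print(handle_write(state, 0, "d0", False))  # stored
print(handle_write(state, 0, "d1", False))  # forced_end (memory now full)
print(handle_write(state, 0, "d2", False))  # discarded (flag set)
```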
As a further improvement of an embodiment of the present invention, "judging whether the current datum can be read out of the data memory for forwarding according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register" specifically includes:
in any clock cycle, if a data read request is received, querying the data state FIFO memory and judging whether the forced end bit flag corresponding to the channel number occupied by the current datum is set to enabled;
if yes, the data memory does not need to be queried;
if not, obtaining from the data state FIFO memory the data memory address and querying the data memory with it to obtain the current datum for output.
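The read side can be sketched the same way. `handle_read` and the dict keys are hypothetical names; a forced-end entry is turned into an error indication here, standing in for the error information instruction described below.

```python
from collections import deque

def handle_read(state):
    """Read-path sketch: pop the oldest data-state FIFO entry; a forced-end
    entry yields an error indication without touching the data memory,
    a normal entry yields the buffered datum and frees its address."""
    addr, channel, force_end = state["state_fifo"].popleft()
    if force_end:
        # Discarded packet: report an error downstream, skip the memory read.
        return ("error", channel, None)
    data = state["data_memory"][addr]
    state["data_memory"][addr] = None
    state["free_addrs"].append(addr)       # release the data-memory address
    return ("data", channel, data)

state = {"state_fifo": deque([(0, 3, False), (None, 3, True)]),
         "data_memory": ["payload"], "free_addrs": deque()}
print(handle_read(state))  # ('data', 3, 'payload')
print(handle_read(state))  # ('error', 3, None)
```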
As a further improvement of an embodiment of the present invention, after "if yes, the data memory does not need to be queried", the method further includes: generating an error information instruction and sending it to the lower-level module, so as to perform exception handling on the datum and the data packet it belongs to.
In order to achieve the above object, another embodiment of the present invention provides a processing apparatus for cache congestion, the apparatus comprising:
a memory module, the memory module comprising: a data memory, a data state FIFO memory and a multi-channel data discard flag register,
the data memory is used for storing data input through the data channels and has a depth of N; the number of data channels is M; the data state FIFO memory is used for storing the address of each datum stored in the data memory, the channel number of its data channel, and the forced end bit flag of that channel, and has a minimum depth of N + M; the discard flag register is used for storing a discard flag for each data channel and has a minimum width of M; M and N are positive integers greater than 1;
and a processing module, configured to, in any clock cycle in which any datum has a read request and/or a write request, judge, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, whether the datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering.
As a further improvement of an embodiment of the present invention, the processing module is specifically configured to, in any clock cycle, if the current datum issues a write request, traverse the discard flag register and judge whether the discard flag corresponding to the data channel through which the current datum passes is set to enabled;
if yes, discard the current datum, and in subsequent clock cycles discard all data after it in its data packet; when a new data packet is then input, judge from the state of the data memory whether to update the discard flag in the discard flag register;
if not, judge from the state of the data memory whether to discard the current datum.
As a further improvement of an embodiment of the present invention, the processing module is specifically configured to, in any clock cycle, if the current datum issues a write request, judge whether the data memory is full;
if yes, discard the current datum; if the discard flag of the multi-channel data discard flag register corresponding to the current data channel is judged to be disabled, write into the data state FIFO memory the channel number of the current data channel together with a forced end bit flag set to enabled, and at the same time set the discard flag of the current channel to enabled; in subsequent clock cycles, discard all data after the current datum in its data packet, and after the last datum of the packet has been discarded, set the discard flag of the current channel back to disabled;
if not, write the current datum into the data memory for buffering, and at the same time write into the data state FIFO memory the address at which it was written, the channel number of its data channel, and a forced end bit flag set to disabled.
As a further improvement of an embodiment of the present invention, the processing module is specifically configured to, in any clock cycle, if a data read request is received, query the data state FIFO memory and judge whether the forced end bit flag corresponding to the channel number occupied by the current datum is set to enabled;
if yes, the data memory does not need to be queried;
if not, obtain from the data state FIFO memory the data memory address and query the data memory with it to obtain the current datum for output.
As a further improvement of an embodiment of the present invention, the processing module is further configured to generate an error information instruction and send it to the lower-level module after determining that the data memory does not need to be queried, so as to perform exception handling on the datum and the data packet it belongs to.
Compared with the prior art, the method and apparatus for processing cache congestion of the present invention reduce the depth of the data memory, save memory resources, and improve the effective utilization of the data memory by introducing a data state FIFO memory whose depth is asymmetric to that of the data memory.
Drawings
Fig. 1 is a schematic structural diagram of the cache congestion handling architecture described in the background of the invention;
fig. 2 is a flowchart illustrating a method for handling cache congestion according to an embodiment of the present invention;
fig. 3, fig. 4, and fig. 5 are schematic diagrams respectively illustrating a specific implementation flow of one step in the method for processing cache congestion shown in fig. 2;
FIG. 6 is a block diagram of a cache congestion handling architecture according to an embodiment of the present invention;
fig. 7 is a block diagram of a processing apparatus for cache congestion according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
A message is a data unit exchanged and transmitted in a network, i.e. the block of data a station sends in one operation. A message contains the complete data to be sent; messages vary greatly in length, which is unlimited and variable. During transmission a message comprises a number of data packets, and each packet comprises a number of data. The application scenario of the invention is the transfer of multi-channel data packets, arriving intermittently, from a high-bandwidth interface to a low-bandwidth interface without flow control on the high-bandwidth side, under the constraint that data must not overflow.
As shown in fig. 2, in a first embodiment of the present invention, the method for processing cache congestion includes: a data state FIFO memory and a multi-channel data discard flag register are created corresponding to the data memory.
The data memory is used for storing data input through the data channels and has a depth of N; the number of data channels is M. The data state FIFO memory is a data state memory used for storing the address of each datum stored in the data memory, the channel number of its data channel, and the forced end bit flag of that channel; its minimum depth is N + M. The discard flag register is used for storing a discard flag for each data channel and has a minimum width of M. M and N are positive integers greater than 1.
In a specific embodiment of the present invention, the depth of the newly created data state FIFO memory is set according to the number of data channels and the depth of the data memory: its minimum depth equals their sum, so that even if the data memory is full while every data channel still has data to transmit, the data state FIFO memory still has a free address in which to write the state information of the current datum.
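The N + M minimum follows from a simple worst case: at most N normal entries (one per occupied data-memory address) plus at most M forced-end entries, since the per-channel discard flag prevents a channel from adding a second forced-end entry before its packet ends. A sketch of the arithmetic (the function name is illustrative, not from the patent):

```python
# Worst-case occupancy of the data-state FIFO: every data-memory address
# holds a datum (N normal entries) and every channel has hit the full
# condition exactly once (M forced-end entries).
def min_state_fifo_depth(n_data_depth, m_channels):
    normal_entries = n_data_depth      # one FIFO entry per buffered datum
    forced_end_entries = m_channels    # at most one pending per channel
    return normal_entries + forced_end_entries

print(min_state_fifo_depth(1024, 64))  # 1088
```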
In one embodiment of the present invention, the method includes: in any clock cycle in which any datum has a read request and/or a write request, judging, according to the current storage state of at least one of the data memory, the data state FIFO memory and the multi-channel data discard flag register, whether the datum can be read out of the data memory for forwarding and/or written into the data memory through its data channel for buffering.
In the embodiment of the invention, when data is written into or read out of the data memory, the corresponding flag bits of the data state FIFO memory and the multi-channel data discard flag register are changed according to the storage state of the data.
Correspondingly, as shown in fig. 3, during data writing, in any clock cycle, if the current datum issues a write request, the discard flag register is traversed to judge whether the discard flag corresponding to the data channel through which the current datum passes is set to enabled. If yes, the current datum is discarded, and in subsequent clock cycles all data after it in its data packet are discarded; when a new data packet is input, whether to update the discard flag in the discard flag register is judged from the state of the data memory. If not, whether to discard the current datum is judged from the state of the data memory.
Each storage space of the discard flag register corresponds to one data channel, and its state changes as data is written into the data memory. If the data memory becomes full during writing, the current datum cannot be written, so the bit of the discard flag register corresponding to the current datum's channel must be set to enabled; this state is held until the data memory releases a valid address and a new data packet is input on that channel. Correspondingly, while data is being written into the data memory normally, the bit corresponding to the current datum's channel stays disabled. In a preferred embodiment of the present invention, binary "0" and "1" indicate the enable state of each bit of the discard flag register: "1" means enabled and "0" means disabled. In the initial state the default value of every bit is "0", and as soon as data on the channel corresponding to some bit needs to be discarded, that bit is set to "1".
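The register behavior described here amounts to ordinary bit operations on an M-bit value. A minimal sketch follows; the helper names are hypothetical, and the "1"/"0" encoding matches the preferred embodiment above.

```python
# The discard flag register as an M-bit value: bit k mirrors data channel k
# ("1" = discard enabled, "0" = disabled), all bits defaulting to 0.
def set_discard(reg, channel):
    return reg | (1 << channel)

def clear_discard(reg, channel):
    return reg & ~(1 << channel)

def discard_enabled(reg, channel):
    return (reg >> channel) & 1 == 1

reg = 0                       # initial state: all channels disabled
reg = set_discard(reg, 2)     # channel 2 must now discard
print(discard_enabled(reg, 2), discard_enabled(reg, 0))  # True False
reg = clear_discard(reg, 2)   # packet fully dropped; re-enable channel 2
print(reg)  # 0
```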
Correspondingly, as shown in fig. 4, during data writing, in any clock cycle, if the current datum issues a write request, it is judged whether the data memory is full. If yes, the current datum is discarded; if the discard flag of the multi-channel data discard flag register corresponding to the current data channel is judged to be disabled, the channel number of the current data channel together with a forced end bit flag set to enabled are written into the data state FIFO memory, and at the same time the discard flag of the current channel is set to enabled; in subsequent clock cycles all data after the current datum in its data packet are discarded, and after the last datum of the packet has been discarded the discard flag of the current channel is set back to disabled. If not, the current datum is written into the data memory for buffering, and at the same time the address at which it was written, the channel number of its data channel and a forced end bit flag set to disabled are written into the data state FIFO memory.
It can be understood that a data packet comprises a plurality of data. When one datum of a packet is interrupted and discarded, the packet is left incomplete, so all data of the packet after that datum must also be discarded. Since one datum is normally transmitted per clock cycle, the remaining data of the packet are discarded one by one over several subsequent clock cycles. After the whole packet has been discarded, the discard flag of the corresponding data channel must be set back to disabled, so that when a new data packet arrives through that channel the flag is found disabled and the new packet is processed afresh according to the state of the data memory.
When a write request arrives at the data memory, the state of the data state FIFO memory is adjusted according to whether the datum can actually be written. If it can, the address at which it is written in the data memory, the channel number of its data channel, and a forced end bit flag set to disabled are written together into one storage address of the data state FIFO memory. If the data memory is full, the datum cannot be written; in that case the channel number of its data channel and a forced end bit flag set to enabled are written together into one storage address of the data state FIFO memory. In a preferred embodiment of the present invention, each storage address of the data state FIFO memory reserves one bit for the forced end bit flag, indicated by binary "0" and "1": "1" means enabled and "0" means disabled.
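One way to picture a data-state FIFO entry is as three fields packed into a single word, with the forced end bit in the lowest position. The bit widths below are illustrative assumptions, not values specified by the patent.

```python
# One data-state FIFO entry packs three fields into a single word:
# the data-memory address, the channel number, and a 1-bit forced-end flag.
ADDR_BITS, CHAN_BITS = 10, 6   # e.g. N <= 1024 addresses, M <= 64 channels

def pack_entry(addr, channel, force_end):
    return (addr << (CHAN_BITS + 1)) | (channel << 1) | int(force_end)

def unpack_entry(entry):
    return (entry >> (CHAN_BITS + 1),
            (entry >> 1) & ((1 << CHAN_BITS) - 1),
            entry & 1)

e = pack_entry(1023, 5, True)
print(unpack_entry(e))  # (1023, 5, 1)
```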
Correspondingly, as shown in fig. 5, during data reading, if a data read request is received in any clock cycle, the data state FIFO memory is queried to judge whether the forced end bit flag corresponding to the channel number occupied by the current datum is set to enabled. If yes, the data memory does not need to be queried; if not, the data memory address stored in the data state FIFO memory is obtained and used to query the data memory, so as to obtain the current datum for output.
In this embodiment, if the forced end bit flag of the data channel occupied by the datum being read is set to enabled, the datum was discarded and was never written into the data memory during the write phase; accordingly the data memory need not be queried, and an error information instruction is generated and sent to the lower-level module so that the datum and its data packet can be handled as an exception. If the forced end bit flag is disabled, the current datum can be read; it is then read out by the normal procedure, releasing the data memory space and data state FIFO memory space it occupied.
For ease of understanding, the present invention is described with reference to a specific example:
as shown in fig. 6, in this example, the writing process to the data memory when data is continuously input in the same data channel is taken as an example: suppose the data memory has N memory addresses, which are 0, 1, 2 … … N-1, respectively; the number of the data channels is set to be M, the data state FIFO memory has M + N storage positions which are respectively 0, 1 and 2 … … N-1 … … N + M-1, and the discarding mark register has M storage addresses which are respectively 0, 1 and 2 … … M-1; each storage address of the data memory is used for writing data, each storage address of the data state FIFO memory is written into the address, the channel number and the mandatory end bit identifier of the data memory, and each address of the discard flag register is used for storing the discard identifier corresponding to any data channel; note that the data lane to which data is currently input is denoted by data lane 0, and the data lane 0 receives data of data 0, data 1, and data 2 … …, data X, in a plurality of subsequent clock cycles.
Assume that when data 0 of data channel 0 is input in the first clock cycle, it is written to address N-1 of the data memory, i.e. the last free address. Address N-1, channel number 0 and a forced end bit flag set to 0 are simultaneously written into the data state FIFO memory, and the corresponding discard flag in the discard flag register is set to 0.
In the second clock cycle, when data 1 of data channel 0 is input, the data memory has released no address, i.e. it is full. Channel number 0 and a forced end bit flag set to 1 are written into the data state FIFO memory, and at the same time bit0 of the discard flag register is set to 1 to indicate that subsequently input data belonging to the same data packet as the current datum must be discarded.
In the third clock cycle, when data 2 is input on data channel 0, the discard flag of data channel 0 is first looked up in the discard flag register and found to be 1, so data 2 is discarded.
Further, in a number of subsequent clock cycles, all other data of channel 0 belonging to the same data packet as data 2 are processed in the same way as data 2 and cannot be written into the data memory, until a valid address in the data memory is released and a new data packet is input on data channel 0, at which point bit0 of the multi-channel data discard flag register is set back to 0.
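The three clock cycles above can be replayed in a short behavioral simulation. Sizes are small illustrative values, and names such as `write` are hypothetical; the model only mirrors the behavior the example describes.

```python
from collections import deque

N, M, CH = 4, 2, 0                        # small illustrative sizes, channel 0
free_addrs = deque([N - 1])               # only the last address N-1 is still free
data_memory = [None] * N
state_fifo = deque()
discard_flags = [False] * M

def write(data):
    if discard_flags[CH]:                 # channel already discarding
        return "discarded"
    if not free_addrs:                    # data memory full
        state_fifo.append((None, CH, 1))  # forced-end entry, flag bit = 1
        discard_flags[CH] = True          # bit0 of the discard register set
        return "forced_end"
    addr = free_addrs.popleft()
    data_memory[addr] = data
    state_fifo.append((addr, CH, 0))      # normal entry, flag bit = 0
    return "stored"

print(write("data0"))   # cycle 1: stored at address N-1
print(write("data1"))   # cycle 2: memory full -> forced_end, bit0 set
print(write("data2"))   # cycle 3: discard flag found set -> discarded
print(state_fifo[0], state_fifo[1], discard_flags[0])
```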
As shown in fig. 7, in the first embodiment of the present invention, the apparatus for processing cache congestion includes: a memory module 100 and a processing module 200; the memory module 100 includes: a data memory 101, a data state FIFO memory 103, and a multi-channel data discard flag register 105.
The data memory 101 is used for storing data input through the data channels and has a depth of N; the number of data channels is M. The data state FIFO memory 103 is a data state memory used for storing the address of each datum stored in the data memory 101, the channel number of its data channel, and the forced end bit flag of that channel; its minimum depth is N + M. The discard flag register is used for storing a discard flag for each data channel and has a minimum width of M. M and N are positive integers greater than 1.
In the embodiment of the present invention, the depth of the newly created data state FIFO memory 103 is set according to the number of data channels and the depth of the data memory 101: its minimum depth equals their sum, so that even if the data memory 101 is full while every data channel still has data to transmit, the data state FIFO memory 103 still has a free address in which to write the state information of the current datum.
The processing module 200 is configured to, in any clock cycle in which any datum has a read request and/or a write request, judge, according to the current storage state of at least one of the data memory 101, the data state FIFO memory 103 and the multi-channel data discard flag register 105, whether the datum can be read out of the data memory 101 for forwarding and/or written into the data memory 101 through its data channel for buffering.
In the embodiment of the present invention, when data is written into or read out of the data memory 101, the corresponding flag bits of the data state FIFO memory 103 and the multi-channel data discard flag register 105 are changed according to the storage state of the data.
Correspondingly, in the process of writing data, the processing module 200 is specifically configured to, in any clock cycle, upon receiving a write request for the current data, traverse the discard flag register 105 and determine whether the discard identifier corresponding to the data channel through which the current data passes is set to enabled. If it is, the current data is discarded, and in the subsequent clock cycles all data following the current data within the same data packet are discarded, until a new data packet is input; when the new data packet is input, whether to update the discard identifier in the discard flag register 105 is determined from the state of the corresponding data memory 101. If the discard identifier is not enabled, whether to discard the current data is determined according to the state of the current data memory 101.
Each storage space of the discard flag register corresponds to one data channel, and its state changes as data is written into the data memory 101. If the data memory 101 is full during a write, the current data cannot be written into it, so the discard identifier of the data channel through which the current data passes must be set to enabled; this state is maintained until the data memory 101 releases a valid address and a new data packet is input on the corresponding data channel. Correspondingly, while data is being written into the data memory 101 normally, the discard identifier of the data channel through which the current data passes remains disabled. In a preferred embodiment of the present invention, binary "0" and "1" indicate the enable state of any bit of the discard flag register: "1" indicates enabled and "0" indicates disabled. In the initial state, the default value of each storage space of the discard flag register is "0"; once the data in the data channel corresponding to a storage space needs to be discarded, its corresponding bit is set to "1".
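A minimal sketch of these register semantics, assuming the M per-channel discard identifiers are packed into an M-bit value (the helper names are hypothetical, not from the patent):

```python
M = 4            # example: four data channels -> a 4-bit discard flag register
discard_reg = 0  # initial state: every bit defaults to '0' (disabled)

def set_discard(reg: int, ch: int) -> int:
    """Set channel ch's bit to '1' (enabled): its packet is being discarded."""
    return reg | (1 << ch)

def clear_discard(reg: int, ch: int) -> int:
    """Set channel ch's bit back to '0' (disabled): normal writing may resume."""
    return reg & ~(1 << ch)

def is_discarding(reg: int, ch: int) -> bool:
    return bool((reg >> ch) & 1)

discard_reg = set_discard(discard_reg, 2)    # channel 2 hits a full memory
assert is_discarding(discard_reg, 2)
assert not is_discarding(discard_reg, 0)     # other channels are unaffected
discard_reg = clear_discard(discard_reg, 2)  # its packet is fully discarded
assert discard_reg == 0                      # register back to the idle state
```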
Correspondingly, in the process of writing data, the processing module 200 is specifically configured to, in any clock cycle, upon a write request for the current data, determine whether the data memory 101 is full. If it is, the current data is discarded, and, when the discard identifier of the current data channel in the multi-channel data discard flag register 105 is determined to be disabled, the channel number of the data channel where the current data is located and a mandatory end bit identifier adjusted to enabled are written into the data state FIFO memory 103; at the same time, the discard identifier of the current channel in the multi-channel data discard flag register 105 is set to enabled, all data following the current data within the same data packet are discarded in the subsequent clock cycles, and after the last data is discarded, the discard identifier of the current channel in the multi-channel data discard flag register 105 is set back to disabled. If the data memory 101 is not full, the current data is written into the data memory 101 for buffering, and the address at which the current data is written in the data memory 101, the channel number of the data channel where the current data is located, and a mandatory end bit identifier adjusted to disabled are simultaneously written into the data state FIFO memory 103.
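This write path can be sketched behaviorally as follows (a Python model for illustration only; `DataBuffer` and its fields are hypothetical names, not the patented circuit):

```python
class DataBuffer:
    def __init__(self, depth_n: int, channels_m: int):
        self.mem = {}                        # data memory: address -> datum
        self.free = list(range(depth_n))     # pool of free addresses (depth N)
        self.status_fifo = []                # entries: (addr, channel, forced_end)
        self.discard = [False] * channels_m  # per-channel discard identifiers

    def handle_write(self, ch: int, datum, is_last: bool) -> None:
        if self.discard[ch]:
            # Channel is already discarding the rest of a broken packet.
            if is_last:
                self.discard[ch] = False     # last datum dropped: re-arm channel
            return
        if not self.free:
            # Data memory full: drop the datum, log one forced-end status
            # entry (address field unused), and discard the packet's tail.
            self.status_fifo.append((None, ch, True))
            self.discard[ch] = not is_last
            return
        addr = self.free.pop(0)
        self.mem[addr] = datum
        self.status_fifo.append((addr, ch, False))

buf = DataBuffer(depth_n=1, channels_m=2)
buf.handle_write(0, "a0", is_last=False)  # fits: memory is now full
buf.handle_write(1, "b0", is_last=False)  # full: dropped, forced-end logged
buf.handle_write(1, "b1", is_last=True)   # tail discarded, channel re-armed
```

Note that only the first dropped datum of a packet produces a forced-end status entry; the rest of the packet is silently discarded while the channel's discard identifier stays enabled.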
It can be understood that a data packet includes a plurality of data. When one of them is interrupted and discarded, the current data packet is in an incomplete state, so all data of the current data packet following that data must also be discarded. Since typically one datum is transmitted per clock cycle, the remaining data of the current data packet are discarded in sequence over several subsequent clock cycles. After the entire current data packet has been discarded, the discard identifier of the corresponding data channel must be set back to disabled, so that when a new data packet is input through the current data channel, the discard identifier of that channel is disabled and the new data packet is processed afresh according to the state of the data memory 101.
Each storage space of the data state FIFO memory 103 stores the address of a datum in the data memory 101, the channel number of the corresponding data channel, and a mandatory end bit identifier. When a write request arrives at the data memory 101, the state of the data state FIFO memory 103 is adjusted according to whether the data can actually be written. If it can, the address at which the data is written in the data memory 101, the data channel through which the data passes, and a mandatory end bit identifier adjusted to disabled are written together into one storage address of the data state FIFO memory 103. If the data memory 101 is full, the current data cannot be written into it; in that case, the channel number of the data channel through which the data would have been written and a mandatory end bit identifier adjusted to enabled are written together into one storage address of the data state FIFO memory 103. In a preferred embodiment of the present invention, each storage address of the data state FIFO memory 103 reserves one bit for the mandatory end bit identifier, whose enable state is indicated by binary "0" and "1": "1" indicates enabled and "0" indicates disabled.
Correspondingly, in the process of reading data, the processing module 200 is specifically configured to, in any clock cycle, upon receiving a data read request, query the data state FIFO memory 103 and determine whether the mandatory end bit identifier corresponding to the channel number occupied by the current data is set to enabled. If it is, the data memory 101 does not need to be queried; if not, the data memory address stored in the data state FIFO memory 103 is obtained and used to query the data memory 101, so that the current data is obtained for output.
In this embodiment, if the mandatory end bit identifier of the data channel occupied by the read data is set to enabled, the data was discarded and never written into the data memory 101 during the write process, so the data memory 101 does not need to be queried. In this case, the processing module 200 is further configured to generate an error information instruction and send it to the next module, so that the data and the data packet containing it can be handled as an exception. If the corresponding mandatory end bit identifier is set to disabled, the current data can be read; the data is then read according to the normal procedure, freeing the data memory 101 space and the data state FIFO memory 103 space it occupies.
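The read path can be sketched in the same illustrative style (a hypothetical helper operating on the kind of structures assumed above; not the patented circuit):

```python
def handle_read(mem: dict, free: list, status_fifo: list):
    """Pop the oldest status entry and act on its forced-end identifier."""
    addr, ch, forced_end = status_fifo.pop(0)
    if forced_end:
        # The datum was never buffered: skip the data memory entirely and
        # emit an error indication for downstream exception handling.
        return ch, None, "error"
    datum = mem.pop(addr)  # normal read: fetch the datum and ...
    free.append(addr)      # ... release the data-memory address it held
    return ch, datum, "ok"

# Example state: one buffered datum on channel 2, plus one force-terminated
# entry on channel 3.
mem, free = {0: "payload"}, []
status_fifo = [(0, 2, False), (None, 3, True)]

assert handle_read(mem, free, status_fifo) == (2, "payload", "ok")
assert free == [0]  # the address is freed for reuse by later writes
assert handle_read(mem, free, status_fifo) == (3, None, "error")
```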
Compared with the prior art, the method and device for processing cache congestion of the present invention introduce a data state FIFO memory whose depth differs from that of the data memory to record the state of each data packet and the mandatory end identifier, so that the depth of the data memory can be reduced. When the buffer is congested, the message is forcibly discarded and the mandatory end identifier is recorded at the same time; this avoids buffer overflow on one hand, and on the other hand makes it possible to identify whether a message only partially written into the data memory is a normal message, effectively reducing the data memory resources required and improving the effective utilization of the data memory.
Those skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working process of the modules in the system described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here.
The system embodiments described above are merely illustrative; the modules described as separate parts may or may not be physically separate, and the parts shown as modules are logic modules, i.e., they may be located in one module of the chip logic or distributed across a plurality of data processing modules in the chip. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The application can be used in a variety of general-purpose or special-purpose communication chips, for example switch chips, router chips, server chips, and the like.
It should be understood that, although the present description proceeds by embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted for clarity only. Those skilled in the art should treat the description as a whole, and the technical solutions in the embodiments may also be combined appropriately to form other embodiments understandable to those skilled in the art.
The detailed description above is only a specific description of possible embodiments of the present invention and is not intended to limit the scope of the present invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall fall within the scope of the present invention.

Claims (6)

1. A method for handling cache congestion, the method comprising:
establishing, corresponding to a data memory, a data state FIFO memory and a multi-channel data discard flag register;
the data memory is used for storing data input through the data channels, and the depth of the data memory is N; the number of the data channels is M; the data state FIFO memory is used for storing addresses of data stored in the data memory, channel numbers corresponding to the data channels, and mandatory end bit identifiers corresponding to the data channels, and the minimum depth of the data state FIFO memory is N + M; the discard flag register is used for storing a discard identifier corresponding to any data channel, the minimum width of the discard flag register is M, and M and N are positive integers greater than 1;
in any clock cycle, when there is a read request for any data, judging whether the current data can be read out of the current data memory for forwarding according to the current storage states of the data memory and the data state FIFO memory;
and/or, when any data has a write request, judging whether the current data can enter the data memory for caching through the current data channel according to the current storage states of the data memory and the multi-channel data discard flag register;
wherein, in any clock cycle, when there is a read request for any data, the judging whether the current data can be read out of the current data memory for forwarding according to the current storage states of the data memory and the data state FIFO memory comprises:
in any clock cycle, if a data read request is received, querying the data state FIFO memory, and judging whether the mandatory end bit identifier corresponding to the channel number occupied by the current data is set to enabled,
if yes, the data memory does not need to be queried;
if not, acquiring a data memory address stored in the data state FIFO memory to query the data memory so as to acquire current data for output;
when any data has a write request, the judging whether the current data can enter the data memory for caching through the current data channel according to the current storage states of the data memory and the multi-channel data discard flag register comprises:
in any clock cycle, if the current data issues a write request, traversing the discard flag register, and judging whether the discard identifier corresponding in the discard flag register to the data channel through which the current data passes is set to enabled,
if yes, discarding the current data; in the subsequent clock cycles, discarding all data following the current data within the data packet where the current data is located, until a new data packet is input; when the new data packet is input, judging according to the state of the corresponding data memory whether to update the discard identifier of the discard flag register;
if not, judging whether to discard the current data according to the state of the current data memory.
2. The method for processing cache congestion according to claim 1, wherein the judging, if not, whether to discard the current data according to the state of the current data memory specifically comprises:
in any clock cycle, if the current data issues a write request, judging whether the current data memory is full,
if yes, discarding the current data; when the discard identifier of the current data channel in the multi-channel data discard flag register is judged to be disabled, writing the channel number of the data channel where the current data is located and a mandatory end bit identifier adjusted to enabled into the data state FIFO memory; meanwhile, setting the discard identifier of the current channel in the multi-channel data discard flag register to enabled; in subsequent clock cycles, discarding all data following the current data within the data packet where the current data is located, and after discarding the last data of the current data packet, setting the discard identifier of the current channel in the multi-channel data discard flag register to disabled;
if not, writing the current data into the data memory for caching, and simultaneously writing the address at which the current data is written in the data memory, the channel number of the data channel where the current data is located, and a mandatory end bit identifier adjusted to disabled into the data state FIFO memory.
3. The method of claim 1, wherein, if the data memory does not need to be queried, the method further comprises: generating an error information instruction and sending the error information instruction to a subordinate module, so as to perform exception handling on the data and the data packet where the data is located.
4. An apparatus for handling cache congestion, the apparatus comprising:
a memory module, the memory module comprising: the data memory, the data state FIFO memory and the multi-channel data discarding mark register;
the data memory is used for storing data input through the data channels, and the depth of the data memory is N; the number of the data channels is M; the data state FIFO memory is used for storing addresses of data stored in the data memory, channel numbers corresponding to the data channels, and mandatory end bit identifiers corresponding to the data channels, and the minimum depth of the data state FIFO memory is N + M; the discard flag register is used for storing a discard identifier corresponding to any data channel, the minimum width of the discard flag register is M, and M and N are positive integers greater than 1;
a processing module, configured to, in any clock cycle when there is a read request for any data, judge whether the current data can be read out of the current data memory for forwarding according to the current storage states of the data memory and the data state FIFO memory;
and/or, when any data has a write request, judge whether the current data can enter the data memory for caching through the current data channel according to the current storage states of the data memory and the multi-channel data discard flag register;
wherein, in any clock cycle, if the current data issues a write request, the processing module is specifically configured to traverse the discard flag register and judge whether the discard identifier corresponding in the discard flag register to the data channel through which the current data passes is set to enabled,
if yes, discarding the current data; in the subsequent clock cycles, discarding all data following the current data within the data packet where the current data is located, until a new data packet is input; when the new data packet is input, judging according to the state of the corresponding data memory whether to update the discard identifier of the discard flag register;
if not, judging whether to discard the current data according to the state of the current data memory;
in any clock cycle, if a data read request is received, the processing module is specifically configured to query the data state FIFO memory and judge whether the mandatory end bit identifier corresponding to the channel number occupied by the current data is set to enabled,
if yes, the data memory does not need to be queried;
if not, the data memory address stored in the data state FIFO memory is obtained to inquire the data memory so as to obtain the current data for output.
5. The apparatus according to claim 4, wherein the processing module is further configured to, in any clock cycle, if the current data issues a write request, determine whether the current data memory is full,
if yes, discarding the current data; when the discard identifier of the current data channel in the multi-channel data discard flag register is judged to be disabled, writing the channel number of the data channel where the current data is located and a mandatory end bit identifier adjusted to enabled into the data state FIFO memory; meanwhile, setting the discard identifier of the current channel in the multi-channel data discard flag register to enabled; in subsequent clock cycles, discarding all data following the current data within the data packet where the current data is located, and after discarding the last data of the current data packet, setting the discard identifier of the current channel in the multi-channel data discard flag register to disabled;
if not, writing the current data into the data memory for caching, and simultaneously writing the address at which the current data is written in the data memory, the channel number of the data channel where the current data is located, and a mandatory end bit identifier adjusted to disabled into the data state FIFO memory.
6. The apparatus according to claim 4, wherein the processing module is further configured to, when the data memory does not need to be queried, generate an error information instruction and send it to a subordinate module, so as to perform exception handling on the data and the data packet where the data is located.
CN201810937015.9A 2018-08-16 2018-08-16 Processing method and processing device for cache congestion Active CN109308180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810937015.9A CN109308180B (en) 2018-08-16 2018-08-16 Processing method and processing device for cache congestion


Publications (2)

Publication Number Publication Date
CN109308180A CN109308180A (en) 2019-02-05
CN109308180B true CN109308180B (en) 2021-01-26

Family

ID=65223725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810937015.9A Active CN109308180B (en) 2018-08-16 2018-08-16 Processing method and processing device for cache congestion

Country Status (1)

Country Link
CN (1) CN109308180B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230185694A1 (en) * 2021-12-10 2023-06-15 International Business Machines Corporation Debugging communication among units on processor simulator

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0803821A3 (en) * 1996-04-26 1998-01-28 Texas Instruments Incorporated DMA channel assignment in a data packet transfer device
CN1545031A (en) * 2003-11-17 2004-11-10 中兴通讯股份有限公司 Data handling method of FIFO memory device
CN101894005A (en) * 2010-05-26 2010-11-24 上海大学 Asynchronous FIFO transmission method from high-speed interfaces to low-speed interfaces
CN101957800A (en) * 2010-06-12 2011-01-26 福建星网锐捷网络有限公司 Multichannel cache distribution method and device
CN102915279A (en) * 2011-08-03 2013-02-06 澜起科技(上海)有限公司 Address assignment method for data registers of distributed cache chipset
US8392799B1 (en) * 2007-04-10 2013-03-05 Marvell International Ltd. Systems and methods for arbitrating use of processor memory
CN103076990A (en) * 2012-12-25 2013-05-01 北京航天测控技术有限公司 Data playback device based on FIFO (First In, First Out) caching structure
CN103338133A (en) * 2013-06-28 2013-10-02 盛科网络(苏州)有限公司 Method and device for dynamically monitoring jamming of message transmitting port
CN104407809A (en) * 2014-11-04 2015-03-11 盛科网络(苏州)有限公司 Multi-channel FIFO (First In First Out) buffer and control method thereof
CN204272167U (en) * 2014-12-09 2015-04-15 中国航空工业集团公司第六三一研究所 A kind of data through type repeat circuit based on memory
CN108111428A (en) * 2017-12-20 2018-06-01 盛科网络(苏州)有限公司 A kind of method and apparatus of congestion control


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a high-speed large-capacity FIFO buffer memory; Xia Qinxiang et al.; Microcomputer Information (Embedded Systems and SOC); Dec. 31, 2009; Vol. 25, No. 12-2; pp. 6-9 *

Also Published As

Publication number Publication date
CN109308180A (en) 2019-02-05

Similar Documents

Publication Publication Date Title
KR102317523B1 (en) Packet control method and network device
US11252111B2 (en) Data transmission
WO2021088466A1 (en) Method for improving message storage efficiency of network chip, device, and storage medium
US7464201B1 (en) Packet buffer management apparatus and method
CN109684269B (en) PCIE (peripheral component interface express) exchange chip core and working method
JP2013507022A (en) Method for processing data packets within a flow-aware network node
JPWO2004066571A1 (en) Network switch device and network switch method
CN113411270B (en) Message buffer management method for time-sensitive network
JPWO2004066570A1 (en) Network switch device and network switch method
CN104468401A (en) Message processing method and device
CN111107017A (en) Method, equipment and storage medium for processing switch message congestion
US20040218592A1 (en) Method and apparatus for fast contention-free, buffer management in a multi-lane communication system
CN109308180B (en) Processing method and processing device for cache congestion
CN116955247B (en) Cache descriptor management device and method, medium and chip thereof
EP1508225B1 (en) Method for data storage in external and on-chip memory in a packet switch
CN117499351A (en) Message forwarding device and method, communication chip and network equipment
CN115914130A (en) Data traffic processing method and device of intelligent network card
CN113347112B (en) Data packet forwarding method and device based on multi-level cache
JP2000022724A (en) Packet switch system, integrated circuit including it, packet switch control method and storage medium for packet switch control program
CN117749726A (en) Method and device for mixed scheduling of output port priority queues of TSN switch
CN116074767A (en) Method and application for supporting multicast replication of discrete editing
CN114785396A (en) Method, system and terminal for configuring, searching mapping and managing flow of logical port
CN116414343A (en) Message pointer management device, message pointer management method and chip
CN115268795A (en) Moving method supporting large-scale continuous data
CN115580586A (en) FC switch output queue construction method based on system on chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 215000 unit 13 / 16, 4th floor, building B, No.5 Xinghan street, Suzhou Industrial Park, Jiangsu Province

Patentee after: Suzhou Shengke Communication Co.,Ltd.

Address before: Xinghan Street Industrial Park of Suzhou city in Jiangsu province 215021 B No. 5 Building 4 floor 13/16 unit

Patentee before: CENTEC NETWORKS (SU ZHOU) Co.,Ltd.
