CN109302353B - Method and device for distributing message cache space


Info

Publication number: CN109302353B
Application number: CN201710607476.5A
Authority: CN (China)
Prior art keywords: message, uplink, downlink, cache
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109302353A
Inventor: 段雷
Current and original assignee: Sanechips Technology Co Ltd
Application filed by Sanechips Technology Co Ltd; priority to CN201710607476.5A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9005: Buffering arrangements using dynamic buffer space allocation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/30: Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes

Abstract

A method and device for allocating message cache space include: acquiring flow information of a network processor according to a preset period; and allocating the on-chip message cache space according to the acquired flow information. The flow information comprises uplink flow information of uplink messages and downlink flow information of downlink messages, and the on-chip message cache space is composed of two or more blocks. The embodiment of the invention improves the utilization rate of the on-chip message cache space; furthermore, address setting for the dynamically allocated on-chip cache space is realized according to the uplink identifier and the downlink identifier.

Description

Method and device for distributing message cache space
Technical Field
The present disclosure relates to, but not limited to, mobile communication technologies, and more particularly, to a method and apparatus for allocating message buffer space.
Background
To meet the needs of future network development and improve router performance, core routers in the Internet backbone have undergone successive technical changes. Especially in the high-end router market, network processors have become an irreplaceable part of the routing and forwarding engine thanks to their outstanding message processing performance and programmability.
In a network processor system, the Packet Buffer Unit (PBU) is an important component of the network processor, responsible for on-chip buffering of packets. Before the network processor processes a message, the message needs to be cached in the on-chip message cache space; after the network processor processes the message, the message is read from the on-chip message cache space and sent to the next stage. When the on-chip message cache space can hold too few messages, the network processor frequently back-pressures the preceding stage and frequently drops packets because the cached messages are not complete packets. Due to chip area limitations, the size of the on-chip message cache space cannot be increased without limit. For a given on-chip message cache space size, its utilization rate determines the message caching capacity and the performance of the chip. Therefore, a scheme for allocating the on-chip message cache space is needed to improve its utilization rate.
In the related art, a high-performance network processor divides messages into an uplink path and a downlink path for caching and processing. In one scheme, the uplink messages and the downlink messages each have an exclusive on-chip message cache space; compared with this scheme, letting the uplink path and the downlink path share the on-chip message cache space can further improve its utilization rate. However, the shared on-chip message cache space only supports static configuration: after the system is reset, the uplink and downlink message cache spaces may be configured before the network processor starts to receive messages. Once messages begin to be received, the cache space can no longer be reconfigured, and when the on-chip message cache space allocated to uplink or downlink messages is insufficient, the utilization rate of the on-chip message cache space decreases.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a method and a device for distributing message cache space, which can improve the utilization rate of the message cache space in a chip.
The embodiment of the invention provides a method for distributing message cache space, which comprises the following steps:
acquiring flow information of a network processor according to a preset period;
distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space is composed of two or more blocks.
Optionally, allocating the on-chip message cache space according to the acquired flow information includes:
in each preset period, calculating the ratio of the uplink flow information to the downlink flow information to obtain an uplink-downlink flow ratio, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; alternatively,
calculating the uplink-downlink flow ratio in the first preset period, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; for each preset period other than the first, when the uplink-downlink flow ratio in the current preset period fluctuates by more than a preset percentage relative to that in the previous preset period, reallocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the ratio in the current preset period; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
Optionally, the method further includes:
setting an uplink identifier for each block allocated to uplink messages, and setting a downlink identifier for each block allocated to downlink messages;
when caching the uplink message, setting an uplink cache address for caching the uplink message according to the uplink identifier; when caching the downlink message, setting a downlink cache address for caching the downlink message according to the downlink identifier;
and caching the uplink messages and the downlink messages according to the set uplink cache addresses and downlink cache addresses.
Optionally, the uplink cache address includes: an uplink identifier, a block address and a write address inside the block; the downlink cache address includes: a downlink identifier, a block address and a write address inside the block. Caching the uplink messages and the downlink messages includes:
when the cached message is an uplink message, writing each message fragment of the uplink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, writing each message fragment of the downlink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding downlink cache address.
Optionally, the method further includes:
when writing uplink messages according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the message fragments of each written uplink message, and reading the uplink cache addresses of the message fragments of each written uplink message from the established linked list before reading out the cached uplink messages;
when writing downlink messages according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the message fragments of each written downlink message, and reading the downlink cache addresses of the message fragments of each written downlink message from the established linked list before reading out the cached downlink messages;
when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifiers in the read uplink cache addresses and the downlink identifiers in the read downlink cache addresses;
splicing the distinguished message fragments of the uplink messages into uplink messages and sending them to the next stage; and splicing the distinguished message fragments of the downlink messages into downlink messages and sending them to the next stage.
Optionally, the method further includes:
when the number of uplink idle addresses not used for uplink message caching is smaller than a preset uplink flow control threshold, performing flow control on the uplink messages;
and when the number of downlink idle addresses not used for downlink message caching is smaller than a preset downlink flow control threshold, performing flow control on the downlink messages.
In another aspect, an embodiment of the present invention further provides a device for allocating message cache space, where the device includes: an acquisition unit and an allocation unit; wherein,
the acquisition unit is used for: acquiring flow information of a network processor according to a preset period;
the allocation unit is used for: distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space is composed of two or more blocks.
Optionally, the allocation unit is specifically configured to:
in each preset period, calculating the ratio of the uplink flow information to the downlink flow information to obtain an uplink-downlink flow ratio, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; alternatively,
calculating the uplink-downlink flow ratio in the first preset period, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; for each preset period other than the first, when the uplink-downlink flow ratio in the current preset period fluctuates by more than a preset percentage relative to that in the previous preset period, reallocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the ratio in the current preset period; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
Optionally, the apparatus further includes a setting unit and a cache unit; wherein,
the setting unit is used for: setting up uplink identifiers for all blocks allocated to an uplink message, and setting up downlink identifiers for all blocks allocated to a downlink message; when caching an uplink message and a downlink message, respectively setting an uplink cache address for caching the uplink message and a downlink cache address for caching the downlink message according to the uplink identifier and the downlink identifier;
the buffer unit is used for: and caching the uplink message and the downlink message according to the set uplink cache address and the set downlink cache address.
Optionally, the uplink cache address includes: an uplink identifier, a block address and a write address inside the block; the downlink cache address includes: a downlink identifier, a block address and a write address inside the block. The cache unit is specifically configured to:
when the cached message is an uplink message, writing each message fragment of the uplink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, writing each message fragment of the downlink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding downlink cache address.
Optionally, the apparatus further comprises a flow control unit configured to:
when the number of uplink idle addresses not used for uplink message caching is smaller than a preset uplink flow control threshold, performing flow control on the uplink messages;
and when the number of downlink idle addresses not used for downlink message caching is smaller than a preset downlink flow control threshold, performing flow control on the downlink messages.
Optionally, the apparatus further comprises a distinguishing unit and a splicing unit; wherein,
the distinguishing unit is used for: when writing uplink messages according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the message fragments of each written uplink message, and reading these addresses from the established linked list before reading out the cached uplink messages; when writing downlink messages according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the message fragments of each written downlink message, and reading these addresses from the established linked list before reading out the cached downlink messages; and when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifiers in the read uplink cache addresses and the downlink identifiers in the read downlink cache addresses. The splicing unit is used for: splicing the distinguished message fragments of the uplink messages into uplink messages and sending them to the next stage; and splicing the distinguished message fragments of the downlink messages into downlink messages and sending them to the next stage.
In another aspect, an embodiment of the present invention further provides a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and the computer-executable instructions are used to execute the method for allocating a message cache space.
In another aspect, an embodiment of the present invention further provides a terminal for allocating message cache space, where the terminal includes: a memory and a processor; wherein,
the processor is configured to execute program instructions in the memory;
the program instructions read on the processor to perform the following operations:
acquiring flow information of a network processor according to a preset period;
distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space is composed of two or more blocks.
Compared with the related art, the technical scheme of the application includes: acquiring flow information of a network processor according to a preset period; and allocating the on-chip message cache space according to the acquired flow information, where the flow information comprises uplink flow information of uplink messages and downlink flow information of downlink messages, and the on-chip message cache space is composed of two or more blocks. The embodiment of the invention improves the utilization rate of the on-chip message cache space. Furthermore, address setting for the dynamically allocated on-chip cache space is realized according to the uplink identifier and the downlink identifier.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, not to limit it.
Fig. 1 is a flowchart of a method for allocating a message buffer space according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a structure of an uplink cache address according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of address processing according to an embodiment of the present invention;
fig. 4 is a block diagram of a device for allocating a message buffer space according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Fig. 1 is a flowchart of a method for allocating a message buffer space according to an embodiment of the present invention, as shown in fig. 1,
step 100, acquiring flow information of a network processor according to a preset period;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message;
it should be noted that, in the embodiment of the present invention, the preset period may be determined by a person skilled in the art by analyzing the flow information of the message cache: when the flow information fluctuates strongly, the preset period can be set smaller; when the flow information fluctuates little, the preset period can be set larger. The preset period can be about 30 minutes and can be adjusted according to actual conditions.
Step 101, distributing the on-chip message cache space according to the acquired flow information;
here, the on-chip message buffer space is composed of two or more blocks.
It should be noted that the method for partitioning the on-chip message cache space may be an existing method in the related art, and is not described herein again.
Optionally, allocating the on-chip message cache space according to the acquired flow information includes:
in each preset period, calculating the ratio of the uplink flow information to the downlink flow information to obtain an uplink-downlink flow ratio, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; alternatively,
calculating the uplink-downlink flow ratio in the first preset period, and allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the calculated ratio; for each preset period other than the first, when the uplink-downlink flow ratio in the current preset period fluctuates by more than a preset percentage relative to that in the previous preset period, reallocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages respectively according to the ratio in the current preset period; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
It should be noted that the fluctuation by a preset percentage in the embodiment of the present invention is computed as follows: subtract the uplink-downlink flow ratio of the previous preset period from that of the current preset period, and divide the absolute value of the difference by the ratio of the previous preset period to obtain the fluctuation percentage. The preset percentage can be determined by analyzing the size of the on-chip message cache space and the rate of message caching: the larger the on-chip message cache space, the larger the preset percentage can be; the larger the rate of message caching, the smaller the preset percentage can be. The preset percentage can be set to a value of about 10%. In addition, the blocks allocated to uplink messages and downlink messages are complete blocks; when the computed number of blocks is not an integer, rounding can be used. For example, assume the total message cache space is 32G with a block granularity of 4G, giving 8 blocks in total. The number of blocks allocated to uplink messages is obtained by the formula uplink flow / (uplink flow + downlink flow) × 8, rounded to an integer; that is, the number of blocks used for uplink messages equals uplink flow / (uplink flow + downlink flow) × total number of blocks, rounded, and the remaining blocks are used for downlink messages.
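The block split and the fluctuation check above can be sketched in Python as follows; the function and variable names are illustrative, not from the patent:

```python
def allocate_blocks(up_traffic, down_traffic, total_blocks):
    """Split the on-chip cache blocks between uplink and downlink in
    proportion to the measured traffic, rounding to whole blocks."""
    up_blocks = round(up_traffic / (up_traffic + down_traffic) * total_blocks)
    return up_blocks, total_blocks - up_blocks

def ratio_fluctuated(prev_ratio, cur_ratio, preset_percentage=0.10):
    """Reallocate only when the uplink-downlink flow ratio moves by more
    than the preset percentage relative to the previous period."""
    return abs(cur_ratio - prev_ratio) / prev_ratio > preset_percentage

# Example from the description: 32G total space, 4G granularity -> 8 blocks,
# here with an assumed uplink:downlink traffic split of 3:1.
up, down = allocate_blocks(up_traffic=3, down_traffic=1, total_blocks=8)
```

With these figures the sketch assigns 6 blocks to the uplink and 2 to the downlink, matching the formula in the description.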
Optionally, the method in the embodiment of the present invention further includes:
setting an uplink identifier for each block allocated to uplink messages, and setting a downlink identifier for each block allocated to downlink messages;
when caching the uplink message, setting an uplink cache address for caching the uplink message according to the uplink identifier; when caching the downlink message, setting a downlink cache address for caching the downlink message according to the downlink identifier;
and caching the uplink message and the downlink message according to the set uplink cache address and the set downlink cache address.
Optionally, the uplink cache address in the embodiment of the present invention includes: an uplink identifier, a block address and a write address inside the block; the downlink cache address includes: a downlink identifier, a block address and a write address inside the block. Caching the uplink messages and the downlink messages includes:
when the cached message is an uplink message, writing each message fragment of the uplink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, writing each message fragment of the downlink message into the on-chip message cache space according to the block address and the intra-block write address in the corresponding downlink cache address.
It should be noted that the uplink identifier and the downlink identifier may be a one-bit flag carried in the address information; for example, 0 indicates uplink and 1 indicates downlink. When messages are cached in the same block, the block addresses in their uplink or downlink cache addresses are the same and the intra-block write addresses differ. The on-chip message cache space for caching uplink messages and that for caching downlink messages are independent. The bit width of the intra-block write address is determined by the depth of the block. Fig. 2 is a schematic diagram of the structure of an uplink cache address according to an embodiment of the present invention; as shown in Fig. 2, the uplink cache address includes an uplink identifier for determining that the block caches uplink messages, a block address for determining the location of the block, and an intra-block write address for writing the cached uplink message.
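The three-field address layout can be sketched as a bit-packing helper. The field widths below (3 block-address bits for 8 blocks, a 20-bit intra-block write address) are assumptions for illustration only, since the patent states just that the intra-block width follows from the block depth:

```python
# Hypothetical field widths, not specified by the patent.
BLOCK_BITS = 3
OFFSET_BITS = 20

def make_cache_addr(direction_flag, block, offset):
    """Pack a cache address: the 1-bit uplink/downlink identifier in the
    most significant bit, then the block address, then the intra-block
    write address."""
    return (direction_flag << (BLOCK_BITS + OFFSET_BITS)) | (block << OFFSET_BITS) | offset

def split_cache_addr(addr):
    """Recover (identifier, block address, intra-block write address)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    block = (addr >> OFFSET_BITS) & ((1 << BLOCK_BITS) - 1)
    flag = addr >> (BLOCK_BITS + OFFSET_BITS)
    return flag, block, offset
```

Packing and unpacking are exact inverses, which is what lets the read-out side distinguish uplink from downlink fragments by the top bit alone.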
Optionally, the method in the embodiment of the present invention further includes:
when writing uplink messages according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the message fragments of each written uplink message, and reading the uplink cache addresses of the message fragments of each written uplink message from the established linked list before reading out the cached uplink messages;
when writing downlink messages according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the message fragments of each written downlink message, and reading the downlink cache addresses of the message fragments of each written downlink message from the established linked list before reading out the cached downlink messages;
when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifiers in the read uplink cache addresses and the downlink identifiers in the read downlink cache addresses;
splicing the distinguished message fragments of the uplink messages into uplink messages and sending them to the next stage; and splicing the distinguished message fragments of the downlink messages into downlink messages and sending them to the next stage.
It should be noted that after a message fragment has been identified as belonging to an uplink or downlink message, the fragments may be sorted and spliced according to their message cache addresses; the sorting and splicing method may be implemented with existing analysis and processing methods in the related art, and is not described here again.
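A minimal sketch of the per-message linked list of fragment cache addresses, built at write time and walked at read-out to splice the fragments back together. The dict-based cache model is a stand-in for the on-chip cache, not the patent's implementation:

```python
class FragmentList:
    """Per-message singly linked list of fragment cache addresses,
    appended to as each fragment is written."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, cache_addr):
        node = {"addr": cache_addr, "next": None}
        if self.tail is None:
            self.head = node
        else:
            self.tail["next"] = node
        self.tail = node

    def addresses(self):
        """Walk the list in write order, collecting the cache addresses."""
        out, node = [], self.head
        while node is not None:
            out.append(node["addr"])
            node = node["next"]
        return out

def reassemble(frag_list, cache):
    """Read each fragment back by its cache address and splice the
    fragments into the original message."""
    return b"".join(cache[a] for a in frag_list.addresses())
```

Because the list preserves write order, walking it yields the fragments in the order needed for splicing, with no extra sorting step in the common case.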
Fig. 3 is a schematic diagram of address processing according to an embodiment of the present invention. As shown in Fig. 3, unused uplink idle addresses and downlink idle addresses are kept in the first-in first-out queues (FIFOs) of their respective partitions. At this point the idle addresses carry no uplink or downlink identifier; the identifier is attached only when an idle address is allocated. An uplink idle address marked with an uplink identifier forms an uplink cache address, and a downlink idle address marked with a downlink identifier forms a downlink cache address. For example, after a message fragment of an uplink message is received, an unused address (an uplink idle address) is taken from the FIFO of a block usable by uplink messages, and a 1-bit uplink identifier is added at the most significant bit, widening the address by one bit and producing an uplink cache address. When the message fragment is read out of the message cache space, the uplink cache address is recycled; because the addresses stored in the FIFO do not contain the uplink identifier, the identifier is stripped before the address is written back into the FIFO, restoring the original bit width, i.e. the uplink cache address is converted back into an uplink idle address.
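The FIFO handling just described, where the identifier bit is added on allocation and stripped on recycling, can be sketched as follows; the 23-bit bare-address width is an assumption for illustration:

```python
from collections import deque

ADDR_BITS = 23  # assumed width of a bare (identifier-less) address

class FreeAddressFifo:
    """FIFO of unused addresses for one partition. The identifier bit is
    attached only when an address is handed out, and stripped again
    before the address is recycled into the FIFO."""
    def __init__(self, addresses):
        self.fifo = deque(addresses)

    def allocate(self, direction_flag):
        """Pop an idle address and prepend the 1-bit uplink/downlink
        identifier at the most significant bit, widening it by one bit."""
        idle = self.fifo.popleft()
        return (direction_flag << ADDR_BITS) | idle

    def recycle(self, cache_addr):
        """Strip the identifier bit, restoring the original bit width,
        and push the idle address back into the FIFO."""
        self.fifo.append(cache_addr & ((1 << ADDR_BITS) - 1))
```

Keeping the FIFO entries identifier-free is what keeps the stored address width constant regardless of which direction a block is currently serving.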
Optionally, the method in the embodiment of the present invention further includes:
when the number of uplink idle addresses not yet used for caching uplink messages is smaller than a preset uplink flow control threshold, performing flow control on the uplink messages;
and when the number of downlink idle addresses not yet used for caching downlink messages is smaller than a preset downlink flow control threshold, performing flow control on the downlink messages. It should be noted that in the embodiment of the present invention, an uplink idle address is an address, within the on-chip cache space allocated to uplink messages, that is not yet caching a message; a downlink idle address is an address, within the on-chip cache space allocated to downlink messages, that is not yet caching a message. The uplink flow control threshold and the downlink flow control threshold may be equal, and may be determined by a person skilled in the art through analysis of the delay.
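A minimal sketch of the threshold check described above (the function name, argument names, and any concrete threshold values are assumptions; the patent does not prescribe specific numbers):

```python
def flow_control_state(free_uplink: int, free_downlink: int,
                       up_threshold: int, down_threshold: int) -> dict:
    """Flag a direction for flow control when its count of free (idle)
    addresses drops below the configured threshold for that direction."""
    return {
        "uplink": free_uplink < up_threshold,
        "downlink": free_downlink < down_threshold,
    }
```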
Compared with the related art, the technical scheme of the present application includes: acquiring flow information of a network processor according to a preset period; and allocating the on-chip message cache space according to the acquired flow information; wherein the flow information includes uplink flow information of uplink messages and downlink flow information of downlink messages, and the on-chip message cache space is composed of two or more blocks. The embodiment of the present invention improves the utilization rate of the on-chip message cache space; furthermore, address setting for the dynamically allocated on-chip cache space is realized according to the uplink identifier and the downlink identifier.
Fig. 4 is a block diagram of the structure of a device for allocating a message cache space according to an embodiment of the present invention; as shown in Fig. 4, the device includes an acquisition unit and an allocation unit; wherein:
the acquisition unit is used for: acquiring flow information of a network processor according to a preset period;
the allocation unit is used for: distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space is composed of two or more blocks.
Optionally, the allocation unit in the embodiment of the present invention is specifically configured to:
in each preset period, calculating the ratio of the uplink flow information to the downlink flow information to obtain the uplink-to-downlink flow ratio, and allocating each block contained in the on-chip message cache space to caching either uplink messages or downlink messages according to the calculated ratio; or alternatively,
calculating the uplink-to-downlink flow ratio in the first preset period, and allocating each block contained in the on-chip message cache space to caching either uplink messages or downlink messages according to the calculated ratio; for each preset period after the first, when the uplink-to-downlink flow ratio of the current preset period fluctuates by more than a preset percentage relative to that of the previous preset period, re-allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages according to the current period's ratio; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
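The periodic re-allocation rule can be sketched as follows (an illustrative model with invented names; the 10% default fluctuation threshold is only an example, not a value fixed by the patent):

```python
def split_blocks(total_blocks, up_down_ratio):
    """Divide the blocks between uplink and downlink in proportion to the
    uplink-to-downlink traffic ratio, keeping at least one block per direction."""
    up = round(total_blocks * up_down_ratio / (up_down_ratio + 1))
    up = min(max(up, 1), total_blocks - 1)
    return up, total_blocks - up

def maybe_reallocate(prev_ratio, cur_ratio, prev_split, total_blocks, pct=0.1):
    """Re-split only when the ratio moved by more than pct relative to the
    previous period; otherwise keep the previous allocation unchanged."""
    if prev_ratio is None or abs(cur_ratio - prev_ratio) / prev_ratio > pct:
        return split_blocks(total_blocks, cur_ratio)
    return prev_split
```

For the first preset period `prev_ratio` is `None`, so the blocks are always split; afterwards, small fluctuations leave the allocation untouched, avoiding needless block migration.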
Optionally, the apparatus in the embodiment of the present invention further includes a setting unit and a cache unit; wherein:
the setting unit is used for: setting up uplink identifiers for all blocks allocated to an uplink message, and setting up downlink identifiers for all blocks allocated to a downlink message; when caching an uplink message and a downlink message, respectively setting an uplink cache address for caching the uplink message and a downlink cache address for caching the downlink message according to the uplink identifier and the downlink identifier;
the buffer unit is used for: and caching the uplink message and the downlink message according to the set uplink cache address and the set downlink cache address.
Optionally, the uplink cache address in the embodiment of the present invention includes: an uplink identifier, a block address, and a write address inside the block; the downlink cache address includes: a downlink identifier, a block address, and a write address inside the block; and the cache unit is specifically configured to:
when the cached message is an uplink message, write each message fragment of the uplink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, write each message fragment of the downlink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding downlink cache address. Optionally, the apparatus according to the embodiment of the present invention further includes a flow control unit, configured to:
when the number of uplink idle addresses not yet used for caching uplink messages is smaller than a preset uplink flow control threshold, perform flow control on the uplink messages;
and when the number of downlink idle addresses not yet used for caching downlink messages is smaller than a preset downlink flow control threshold, perform flow control on the downlink messages.
Optionally, the apparatus in the embodiment of the present invention further includes a distinguishing unit and a splicing unit; wherein:
the distinguishing unit is used for: when uplink messages are written according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the written message fragments of each uplink message, and reading those uplink cache addresses from the established linked list before the cached uplink messages are read out; when downlink messages are written according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the written message fragments of each downlink message, and reading those downlink cache addresses from the established linked list before the cached downlink messages are read out; and, when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifier in each read uplink cache address and the downlink identifier in each read downlink cache address;
the splicing unit is used for: splicing the distinguished message fragments of each uplink message into the uplink message and sending it to the next stage; and splicing the distinguished message fragments of each downlink message into the downlink message and sending it to the next stage.
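A toy model of the write/read path described above (illustrative only — the names, the dict-backed memory, and the 16-bit address width are assumptions, not structures from the patent): fragments are written through a free-address FIFO, the per-message chain of cache addresses plays the role of the linked list, and on read-out the identifier bit distinguishes the direction before the fragments are spliced.

```python
from collections import deque

UPLINK_FLAG = 1 << 16  # assumed 1-bit direction identifier above a 16-bit address

def write_message(fragments, free_fifo, memory, uplink):
    """Write fragments into the cache; return the chain (linked list) of
    cache addresses, each tagged with the direction identifier."""
    chain = []
    for frag in fragments:
        idle_addr = free_fifo.popleft()
        memory[idle_addr] = frag
        chain.append(idle_addr | UPLINK_FLAG if uplink else idle_addr)
    return chain

def read_message(chain, memory, free_fifo):
    """Follow the chain, check the direction from the identifier bit,
    splice the fragments, and recycle the idle addresses."""
    directions = {bool(a & UPLINK_FLAG) for a in chain}
    assert len(directions) == 1  # one message is all-uplink or all-downlink
    data = b""
    for cache_addr in chain:
        idle_addr = cache_addr & (UPLINK_FLAG - 1)
        data += memory.pop(idle_addr)
        free_fifo.append(idle_addr)  # identifier stripped before recycling
    return data, directions.pop()
```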
On the other hand, an embodiment of the present invention further provides a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and the computer-executable instructions are used to execute the method for allocating a message cache space.
In another aspect, an embodiment of the present invention further provides a terminal for allocating a message cache space, the terminal including: a memory and a processor; wherein:
the processor is configured to execute program instructions in the memory;
the program instructions read on the processor to perform the following operations:
acquiring flow information of a network processor according to a preset period;
distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message buffer space is composed of two or more blocks.
The method of the embodiment of the present invention is described in detail below by using application examples, which are only used for illustrating the present invention and are not used for limiting the protection scope of the present invention.
Application example 1
In this application example, part of the fragments of an uplink message have been stored in block 1, the remaining fragments are still being received in sequence, and at this moment the Central Processing Unit (CPU) allocates block 1 to downlink messages. The processing procedure of this application example includes:
because the message is an uplink message, the message cache address it applies for is an uplink cache address, i.e., an address within a block usable by uplink messages; since block 1 has been allocated to downlink messages, addresses within block 1 are no longer granted to the uplink message applying for cache. Subsequent fragments of the uplink message can only apply for idle addresses of the other blocks usable by uplink messages.
When a later stage processes the message and a fragment of this message stored in block 1 is to be read, the uplink identifier in the address shows that the fragment belongs to an uplink message; the fragment is then read through the uplink cache address: the block caching each fragment of the uplink message is determined from that fragment's uplink cache address, and each fragment is read from the determined block according to the in-block write address, yielding the message fragments to be processed.
When the fragments of the uplink message are read out of the blocks in this application example, the uplink cache addresses are recycled; and while uplink cache addresses are being used for caching uplink messages, their usage can be statistically managed according to the uplink identifier.
Application example two
In this application example, one of the blocks is initially unusable by uplink messages; the uplink-to-downlink traffic ratio is determined, and more blocks need to be allocated to uplink messages. The processing procedure of this application example includes:
after the block is allocated to uplink messages for caching, the uplink identifier of an uplink cache address shows that the block is used for caching uplink messages. It should be noted that if part of the storage space in the block is still caching downlink messages, that part is not reclaimed immediately: it continues to hold the downlink messages, and only when those downlink messages are read out can the downlink cache addresses be recycled and re-issued as uplink cache addresses. Once all downlink messages in the block have been read out, the block is used entirely for caching uplink messages.
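The gradual handover of a block from downlink to uplink can be modeled as below (a hypothetical sketch; the class and method names are invented for illustration and do not come from the patent):

```python
from collections import deque

class Block:
    """A cache block whose ownership migrates between directions without
    disturbing data already stored in it."""
    def __init__(self, size):
        self.owner = "downlink"
        self.free = deque(range(size))
        self.pending_downlink = set()  # addresses still holding downlink data

    def write_downlink(self):
        addr = self.free.popleft()
        self.pending_downlink.add(addr)
        return addr

    def reassign_to_uplink(self):
        # New allocations now serve uplink; stored downlink data stays put.
        self.owner = "uplink"

    def read_out_downlink(self, addr):
        # Only on read-out is the address recycled for uplink caching.
        self.pending_downlink.discard(addr)
        self.free.append(addr)

    def fully_uplink(self):
        return self.owner == "uplink" and not self.pending_downlink
```

The design choice mirrors the text: reassignment changes only who receives newly granted addresses, so no stored downlink fragment ever has to be copied or dropped.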
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor) to perform the steps, and the program may be stored in a computer readable storage medium, such as a read only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in hardware, for example, by an integrated circuit to implement its corresponding function, or in software, for example, by a processor executing a program/instruction stored in a memory to implement its corresponding function. The present invention is not limited to any specific form of combination of hardware and software.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method for allocating message buffer space, comprising:
acquiring flow information of a network processor according to a preset period;
distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space consists of two or more blocks;
the allocating the on-chip message cache space according to the acquired flow information comprises:
calculating the uplink-to-downlink flow ratio in the first preset period, and allocating each block contained in the on-chip message cache space to caching either uplink messages or downlink messages according to the calculated ratio; for each preset period after the first, when the uplink-to-downlink flow ratio of the current preset period fluctuates by more than a preset percentage relative to that of the previous preset period, re-allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages according to the current period's ratio; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
2. The method of claim 1, further comprising:
setting up uplink identifiers for all blocks allocated to an uplink message, and setting up downlink identifiers for all blocks allocated to a downlink message;
when caching the uplink message, setting an uplink cache address for caching the uplink message according to the uplink identifier; when caching the downlink message, setting a downlink cache address for caching the downlink message according to the downlink identifier;
and caching the uplink message and the downlink message according to the set uplink cache address and the set downlink cache address.
3. The method of claim 2, wherein the uplink cache address comprises: an uplink identifier, a block address, and a write address inside the block; the downlink cache address comprises: a downlink identifier, a block address, and a write address inside the block; and the caching of the uplink messages and the downlink messages comprises:
when the cached message is an uplink message, writing each message fragment of the uplink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, writing each message fragment of the downlink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding downlink cache address.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
when uplink messages are written according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the written message fragments of each uplink message, and reading those uplink cache addresses from the established linked list before the cached uplink messages are read out;
when downlink messages are written according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the written message fragments of each downlink message, and reading those downlink cache addresses from the established linked list before the cached downlink messages are read out;
when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifier in each read uplink cache address and the downlink identifier in each read downlink cache address;
splicing the distinguished message fragments of each uplink message into the uplink message and sending it to the next stage; and splicing the distinguished message fragments of each downlink message into the downlink message and sending it to the next stage.
5. The method of claim 4, further comprising:
when the number of uplink idle addresses not yet used for caching uplink messages is smaller than a preset uplink flow control threshold, performing flow control on the uplink messages;
and when the number of downlink idle addresses not yet used for caching downlink messages is smaller than a preset downlink flow control threshold, performing flow control on the downlink messages.
6. An apparatus for allocating message cache space, comprising: an acquisition unit and an allocation unit; wherein:
the acquisition unit is used for: acquiring flow information of a network processor according to a preset period;
the allocation unit is used for: distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space consists of two or more blocks;
the allocation unit is specifically configured to: calculate the uplink-to-downlink flow ratio in the first preset period, and allocate each block contained in the on-chip message cache space to caching either uplink messages or downlink messages according to the calculated ratio; for each preset period after the first, when the uplink-to-downlink flow ratio of the current preset period fluctuates by more than a preset percentage relative to that of the previous preset period, re-allocate the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages according to the current period's ratio; and when the fluctuation is less than or equal to the preset percentage, keep the blocks allocated to uplink messages and downlink messages unchanged.
7. The apparatus according to claim 6, further comprising a setting unit and a cache unit; wherein:
the setting unit is used for: setting up uplink identifiers for all blocks allocated to an uplink message, and setting up downlink identifiers for all blocks allocated to a downlink message; when caching an uplink message and a downlink message, respectively setting an uplink cache address for caching the uplink message and a downlink cache address for caching the downlink message according to the uplink identifier and the downlink identifier;
the buffer unit is used for: and caching the uplink message and the downlink message according to the set uplink cache address and the set downlink cache address.
8. The apparatus of claim 7, wherein the uplink cache address comprises: an uplink identifier, a block address, and a write address inside the block; the downlink cache address comprises: a downlink identifier, a block address, and a write address inside the block; and the cache unit is specifically configured to:
when the cached message is an uplink message, write each message fragment of the uplink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding uplink cache address;
and when the cached message is a downlink message, write each message fragment of the downlink message into the on-chip message cache space according to the block address and the in-block write address in the corresponding downlink cache address.
9. The apparatus according to claim 7 or 8, further comprising a flow control unit, configured to:
when the number of uplink idle addresses not yet used for caching uplink messages is smaller than a preset uplink flow control threshold, perform flow control on the uplink messages;
and when the number of downlink idle addresses not yet used for caching downlink messages is smaller than a preset downlink flow control threshold, perform flow control on the downlink messages.
10. The apparatus of claim 9, further comprising a distinguishing unit and a splicing unit; wherein:
the distinguishing unit is used for: when uplink messages are written according to uplink cache addresses, establishing a linked list from the uplink cache addresses of the written message fragments of each uplink message, and reading those uplink cache addresses from the established linked list before the cached uplink messages are read out; when downlink messages are written according to downlink cache addresses, establishing a linked list from the downlink cache addresses of the written message fragments of each downlink message, and reading those downlink cache addresses from the established linked list before the cached downlink messages are read out; and, when reading out the cached messages, distinguishing the message fragments of uplink messages from those of downlink messages according to the uplink identifier in each read uplink cache address and the downlink identifier in each read downlink cache address;
the splicing unit is used for: splicing the distinguished message fragments of each uplink message into the uplink message and sending it to the next stage; and splicing the distinguished message fragments of each downlink message into the downlink message and sending it to the next stage.
11. A computer storage medium having computer-executable instructions stored thereon for performing the method of allocating message cache space according to any one of claims 1 to 5.
12. A terminal for allocating message cache space, comprising: a memory and a processor; wherein:
the processor is configured to execute program instructions in the memory;
the program instructions are read by the processor to perform the following operations:
acquiring flow information of a network processor according to a preset period;
distributing the on-chip message cache space according to the acquired flow information;
the flow information comprises uplink flow information of an uplink message and downlink flow information of a downlink message; the on-chip message cache space consists of two or more blocks;
the allocating the on-chip message cache space according to the acquired flow information comprises:
calculating the uplink-to-downlink flow ratio in the first preset period, and allocating each block contained in the on-chip message cache space to caching either uplink messages or downlink messages according to the calculated ratio; for each preset period after the first, when the uplink-to-downlink flow ratio of the current preset period fluctuates by more than a preset percentage relative to that of the previous preset period, re-allocating the blocks contained in the on-chip message cache space to caching uplink messages or downlink messages according to the current period's ratio; and when the fluctuation is less than or equal to the preset percentage, keeping the blocks allocated to uplink messages and downlink messages unchanged.
CN201710607476.5A 2017-07-24 2017-07-24 Method and device for distributing message cache space Active CN109302353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710607476.5A CN109302353B (en) 2017-07-24 2017-07-24 Method and device for distributing message cache space


Publications (2)

Publication Number Publication Date
CN109302353A CN109302353A (en) 2019-02-01
CN109302353B true CN109302353B (en) 2022-03-25

Family

ID=65167174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710607476.5A Active CN109302353B (en) 2017-07-24 2017-07-24 Method and device for distributing message cache space

Country Status (1)

Country Link
CN (1) CN109302353B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014308B (en) * 2021-02-23 2022-08-02 湖南斯北图科技有限公司 Satellite communication high-capacity channel parallel Internet of things data receiving method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1855881A (en) * 2005-04-28 2006-11-01 华为技术有限公司 Method for dynamically sharing space of memory
CN104572498A (en) * 2014-12-26 2015-04-29 曙光信息产业(北京)有限公司 Cache management method for message and device
WO2016086641A1 (en) * 2014-12-05 2016-06-09 中兴通讯股份有限公司 Cache configuration method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10862630B2 (en) * 2015-02-13 2020-12-08 Samsung Electronics Co., Ltd Method and system for contiguous HARQ memory management with memory splitting


Also Published As

Publication number Publication date
CN109302353A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
CN107665146B (en) Memory management device and method
US9584332B2 (en) Message processing method and device
KR102594657B1 (en) Method and apparatus for implementing out-of-order resource allocation
CN110209348B (en) Data storage method and device, electronic equipment and storage medium
CN105791254B (en) Network request processing method and device and terminal
KR101639797B1 (en) Network interface apparatus and method for processing virtual machine packets
CN110267276B (en) Network slice deployment method and device
CN110800328A (en) Buffer status reporting method, terminal and computer storage medium
CN116010109B (en) Cache resource allocation method and device, electronic equipment and storage medium
CN111124270A (en) Method, apparatus and computer program product for cache management
CN114595043A (en) IO (input/output) scheduling method and device
US10348651B2 (en) Apparatus and method for virtual switching
CN109302353B (en) Method and device for distributing message cache space
CN107896196B (en) Method and device for distributing messages
CN112306693B (en) Data packet processing method and device
JP2008516320A (en) Method and apparatus for determining the size of a memory frame
CN102055671A (en) Priority management method for multi-application packet sending
CN105988871B (en) Remote memory allocation method, device and system
CN114817090B (en) MCU communication management method and system with low RAM consumption
CN107911317B (en) Message scheduling method and device
CN110731109B (en) Resource indication method, equipment and computer storage medium
CN115914130A (en) Data traffic processing method and device of intelligent network card
CN117499351A (en) Message forwarding device and method, communication chip and network equipment
WO2017070869A1 (en) Memory configuration method, apparatus and system
CN105072047A (en) Message transmitting and processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant