CN109729014B - Message storage method and device

Message storage method and device

Info

Publication number
CN109729014B
Authority
CN
China
Prior art keywords
message
memory
built
peripheral memory
scheduled
Prior art date
Legal status
Active
Application number
CN201711046599.2A
Other languages
Chinese (zh)
Other versions
CN109729014A (en)
Inventor
于克东 (Yu Kedong)
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN201711046599.2A priority Critical patent/CN109729014B/en
Publication of CN109729014A publication Critical patent/CN109729014A/en
Application granted granted Critical
Publication of CN109729014B publication Critical patent/CN109729014B/en

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a message storage method and device in the field of data communication. The message storage method includes the following steps: when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order; and, during the move, discarding the messages being moved to the peripheral memory when the peripheral memory back-pressures the move path. The method effectively ensures that high-priority messages use the large bandwidth of the built-in memory while low-priority messages are moved to the peripheral memory during congestion, so the storage chain operates by priority and both the bandwidth of the built-in memory and the capacity of the peripheral memory are used effectively.

Description

Message storage method and device
Technical Field
The present application relates to the field of data communications, and in particular, to a method and apparatus for storing a message.
Background
With the continued development of networks, the bandwidth requirements of service providers and data centers are increasing at an unprecedented rate. Faced with new needs and challenges, new devices must be developed and brought to market quickly, and must offer high availability while reducing overall operating costs.
However, for the core backbone network, the bandwidth of current memory technology has hit a bottleneck: DDR (Double Data Rate synchronous dynamic random access memory), QDR (Quad Data Rate memory), and even the latest HBM (High Bandwidth Memory) and HMC (Hybrid Memory Cube) cannot meet the Tbps-level storage bandwidth required on the core backbone network.
To cope with the fact that current memory devices cannot meet the bandwidth requirement, internal/external collaborative caching is generally adopted to improve both bandwidth and storage capacity. When the external storage bandwidth is insufficient, most designs pre-allocate bandwidth, but this introduces a large delay, and once the buffer is exhausted even non-congested high-priority messages may be discarded.
Disclosure of Invention
The application provides a message storage method and a message storage device that improve the effective storage bandwidth by automatically discarding part of the messages.
To achieve the above object, the present application adopts the following technical solutions:
in a first aspect, the present application provides a method for storing a message, including:
when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order;
during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
Preferably, the method further comprises:
when a message is to be scheduled for output, determining the storage location of the message to be scheduled;
when the scheduled message is stored in the built-in memory, dequeuing the scheduled message from the built-in memory;
when the scheduled message is stored in the peripheral memory, dequeuing the scheduled message from the peripheral memory.
Preferably, the preset condition includes:
the amount of accumulated messages reaching a preset threshold value.
Preferably, discarding the messages being moved to the peripheral memory includes:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal until the back pressure exerted by the peripheral memory on the move path disappears.
In a second aspect, the present application further provides a message storage device, including:
a moving module, configured to move the messages cached in the built-in memory to the peripheral memory in chronological order when the amount of messages accumulated in the built-in memory meets a preset condition;
a discarding module, configured to discard the messages being moved to the peripheral memory when the peripheral memory back-pressures the move path during the move.
Preferably, the message storage device further includes:
a judging module, configured to determine the storage location of the message to be scheduled when a message is to be scheduled for output;
when the scheduled message is stored in the built-in memory, dequeuing the scheduled message from the built-in memory;
when the scheduled message is stored in the peripheral memory, dequeuing the scheduled message from the peripheral memory.
Preferably, the preset condition of the moving module includes:
the amount of accumulated messages reaching a preset threshold value.
Preferably, the discarding module discarding the messages being moved to the peripheral memory includes:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal until the back pressure exerted by the peripheral memory on the move path disappears.
In a third aspect, the present application further provides a message storage device, including: a memory and a processor;
the memory is used for storing executable instructions;
the processor is configured to execute the executable instructions stored in the memory, and perform the following operations:
when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order;
during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
Compared with the prior art, the application has the following beneficial effects:
according to the technical scheme, the congestion message or the low-priority message is moved to the peripheral memory; when the moving bandwidth is insufficient, part of messages are discarded to ensure the whole storage bandwidth, so that the use of the built-in memory with large bandwidth by using the high-priority messages can be effectively ensured, the messages with low priority are moved to the peripheral memory when the messages are congested, and the storage link operates according to the high-low priority, so that the bandwidth of the internal memory and the capacity of the peripheral memory are effectively utilized.
Meanwhile, if the moving encounters insufficient bandwidth, the moving message with low priority can be discarded, so that the message with high priority can be ensured to continuously use the internal memory.
Drawings
FIG. 1 is a flow chart of a message storage method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a message storage device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of message discarding according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments of the present application and the features within them may be combined with each other arbitrarily.
As shown in FIG. 1, an embodiment of the present application provides a message storage method, including:
S101, when the amount of messages accumulated in a built-in memory meets a preset condition, moving the messages cached in the built-in memory to a peripheral memory in chronological order;
S102, during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
In this embodiment of the application, when messages accumulate in the built-in memory and the accumulation meets the preset condition, the messages that entered the built-in memory first are moved to the low-speed peripheral memory in the order in which they entered, leaving the large bandwidth for non-congested messages.
Because the bandwidth of the peripheral memory is limited, it may hit a processing bottleneck during the move and back-pressure the move path due to insufficient bandwidth. The built-in memory would then be expected to stop the move, but suspending the move would cause the built-in memory to keep piling up and eventually affect even high-priority messages, so the messages that currently need to be moved are actively discarded instead.
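As an illustration only, the following C sketch shows the move-and-discard control flow of S101/S102. All identifiers (internal_count, external_backpressure, MOVE_THRESHOLD, and so on) are hypothetical helpers assumed for the sketch; they are not interfaces defined by the patent.

```c
#include <stdbool.h>
#include <stddef.h>

#define MOVE_THRESHOLD 4096u          /* hypothetical preset accumulation threshold */

typedef struct message message_t;     /* opaque message descriptor */

/* Hypothetical primitives of the two memories (assumed, not from the patent). */
size_t     internal_count(void);            /* messages currently accumulated on chip          */
message_t *internal_pop_oldest(void);       /* oldest message, i.e. first to have entered      */
bool       external_backpressure(void);     /* peripheral memory back-pressures the move path  */
void       external_store(message_t *msg);  /* write a message to the peripheral memory        */
void       discard(message_t *msg);         /* actively drop a message                         */

/* S101: once accumulation meets the preset condition, move the oldest messages
 * out in chronological order. S102: if the peripheral memory back-pressures the
 * move path, discard instead of stalling, so the built-in memory keeps draining. */
void move_when_accumulated(void)
{
    while (internal_count() > MOVE_THRESHOLD) {
        message_t *oldest = internal_pop_oldest();
        if (external_backpressure())
            discard(oldest);                /* S102: drop rather than suspend the move     */
        else
            external_store(oldest);         /* S101: normal move to the peripheral memory  */
    }
}
```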
The method further includes the following steps:
when a message is to be scheduled for output, determining the storage location of the message to be scheduled;
when the scheduled message is stored in the built-in memory, scheduling and outputting the message from the built-in memory;
when the scheduled message is stored in the peripheral memory, scheduling and outputting the message from the peripheral memory.
In this embodiment of the application, when a dequeue grant arrives and dequeuing is to be scheduled, it is first determined whether the message is stored in the built-in memory or the peripheral memory; if it is in the built-in memory, it is scheduled and output directly from the built-in memory, and if it is in the peripheral memory, it is scheduled and output from the peripheral memory.
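A minimal C sketch of that dequeue decision; head_location, internal_dequeue, and external_dequeue are assumed helper names, not interfaces defined by the patent.

```c
typedef struct message message_t;

/* Hypothetical per-queue location tag, recorded when a message is stored or moved. */
typedef enum { LOC_BUILT_IN, LOC_PERIPHERAL } msg_location_t;

msg_location_t head_location(int queue_id);       /* where the head-of-line message lives */
message_t     *internal_dequeue(int queue_id);    /* dequeue from the built-in memory     */
message_t     *external_dequeue(int queue_id);    /* dequeue from the peripheral memory   */

/* On a dequeue grant, first check where the message is stored, then dequeue
 * from that memory: the built-in memory directly, or the peripheral memory otherwise. */
message_t *schedule_output(int queue_id)
{
    if (head_location(queue_id) == LOC_BUILT_IN)
        return internal_dequeue(queue_id);
    return external_dequeue(queue_id);
}
```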
The preset condition includes:
the amount of accumulated messages reaching a preset threshold value.
In step S102, discarding the messages being moved to the peripheral memory includes:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal.
Specifically, discarding part of the messages being moved to the peripheral memory includes:
when the peripheral memory back-pressures the move path, discarding part of the messages being moved to the peripheral memory until the back pressure on the move path disappears.
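The trigger-driven discard can be sketched as below. Whether all or only part of the moved messages are dropped is treated here as a hypothetical configuration (discard_all_mode), since the patent only states that all or part may be discarded until the back pressure disappears; all other names are likewise assumptions.

```c
#include <stdbool.h>

typedef struct message message_t;

bool external_backpressure(void);      /* trigger: back pressure on the move path */
bool discard_all_mode(void);           /* hypothetical config: drop all vs. part  */
void external_store(message_t *msg);
void discard(message_t *msg);

static unsigned move_seq;              /* used to thin out partial discards */

/* Called for each message that is about to be moved to the peripheral memory. */
void move_or_discard(message_t *msg)
{
    if (!external_backpressure()) {
        external_store(msg);           /* back pressure has disappeared: resume moving */
        return;
    }
    /* Discard all moved messages, or only part of them (here: every other one). */
    if (discard_all_mode() || (move_seq++ & 1u))
        discard(msg);
    else
        external_store(msg);
}
```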
As shown in FIG. 2, an embodiment of the present application further provides a message storage device, including:
a moving module, configured to move the messages cached in the built-in memory to the peripheral memory in chronological order when the amount of messages accumulated in the built-in memory meets a preset condition;
a discarding module, configured to discard the messages being moved to the peripheral memory when the peripheral memory back-pressures the move path during the move.
Preferably, the message storage device further includes:
a judging module, configured to determine the storage location of the message to be scheduled when a message is to be scheduled for output;
when the scheduled message is stored in the built-in memory, dequeuing the scheduled message from the built-in memory;
when the scheduled message is stored in the peripheral memory, dequeuing the scheduled message from the peripheral memory.
Preferably, the preset condition of the moving module includes:
the amount of accumulated messages reaching a preset threshold value.
Preferably, the discarding module discarding the messages being moved to the peripheral memory includes:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal until the back pressure exerted by the peripheral memory on the move path disappears.
The embodiment of the application also provides a message storage device, which comprises: a memory and a processor;
the memory is used for storing executable instructions;
the processor is configured to execute the executable instructions stored in the memory, and perform the following operations:
when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order;
during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
Example 1
The built-in memory has large bandwidth but small capacity; the peripheral memory, correspondingly, has small bandwidth but large capacity. As shown in FIG. 3, in the present application all incoming messages are first stored in the built-in memory rather than written directly to the peripheral memory (data stream 1 in FIG. 3).
If the dequeue grant is issued in time, no messages accumulate in the built-in memory: a message stays in the built-in memory until it is scheduled for dequeuing and is then dequeued directly from the built-in memory (data stream 2 in FIG. 3), where the bandwidth is ample.
If the grant is not issued in time, messages accumulate in the built-in memory. When the accumulation reaches a certain threshold (the threshold can be configured freely), the messages that entered the built-in memory first are moved to the peripheral memory in the order in which they entered (data stream 3 in FIG. 3), so the large capacity of the peripheral memory serves the low-priority messages. When dequeuing is to be scheduled, it is first determined whether the message is stored in the built-in memory or the peripheral memory; if it is in the peripheral memory, it is dequeued from the peripheral memory (data stream 4 in FIG. 3).
When a move occurs, the messages moved are those that have been stored in the built-in memory the longest, which means they belong to a congested data stream or are low-priority messages that have not yet received a grant. They are therefore moved to the low-speed peripheral memory, and the large bandwidth is reserved for non-congested, high-priority messages. High-priority messages are not moved to the peripheral memory; they are dequeued from the built-in memory promptly once granted.
Because the bandwidth of the peripheral memory is limited, the move path may be back-pressured during the move due to insufficient bandwidth. If the move were suspended during the back pressure, the built-in memory would keep piling up and high-priority messages would eventually be affected, so part of the messages must be actively discarded to resolve the congestion; otherwise the enqueuing and dequeuing of high-priority messages would also suffer. To avoid affecting high-priority messages, the shortage of move bandwidth is relieved by discarding the low-priority messages being moved. As a result, the built-in memory no longer accumulates, and high-priority messages continue to enqueue and dequeue smoothly.
Whether the move-discard function is enabled can be made configurable, as in the sketch below.
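Tying Example 1 together, the following C sketch covers the four data streams of FIG. 3 and the configurable move-discard switch. All names and the software framing are assumptions for illustration; a real design of this kind would implement these primitives in hardware queue managers rather than in C.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct message message_t;
typedef enum { LOC_BUILT_IN, LOC_PERIPHERAL } msg_location_t;

/* Hypothetical queue/memory primitives (assumed, not defined by the patent). */
void            internal_enqueue(message_t *msg);    /* data stream 1: all ingress lands on chip      */
message_t      *internal_dequeue_head(void);          /* data stream 2: dequeue from built-in memory   */
message_t      *internal_pop_oldest(void);
void            external_store(message_t *msg);       /* data stream 3: move to peripheral memory      */
message_t      *external_dequeue_head(void);          /* data stream 4: dequeue from peripheral memory */
size_t          internal_count(void);
msg_location_t  head_location(void);
bool            external_backpressure(void);
void            discard(message_t *msg);

static size_t move_threshold       = 4096;  /* "the threshold can be configured freely" */
static bool   move_discard_enabled = true;  /* configurable move-discard switch         */

void on_ingress(message_t *msg)
{
    internal_enqueue(msg);                              /* data stream 1 */
    while (internal_count() > move_threshold) {         /* grants late: accumulation builds    */
        message_t *oldest = internal_pop_oldest();      /* oldest = congested or low priority  */
        if (external_backpressure() && move_discard_enabled)
            discard(oldest);          /* relieve the move path so high priority keeps flowing  */
        else
            external_store(oldest);                     /* data stream 3 */
    }
}

message_t *on_grant(void)
{
    return (head_location() == LOC_BUILT_IN) ? internal_dequeue_head()   /* data stream 2 */
                                             : external_dequeue_head();  /* data stream 4 */
}
```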
Although embodiments of the present application are described above, they are adopted only to aid understanding of the technical solution, and the present application is not limited to them. Any person skilled in the art may make modifications and variations in form and detail without departing from the core technical solution disclosed herein; however, the scope of protection of the present application remains that defined by the appended claims.

Claims (9)

1. A method for storing messages, comprising:
when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order, from earliest to latest;
during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
2. The message storage method of claim 1, wherein the method further comprises:
when a message is to be scheduled for output, determining the storage location of the message to be scheduled;
when the scheduled message is stored in the built-in memory, dequeuing the scheduled message from the built-in memory;
and when the scheduled message is stored in the peripheral memory, dequeuing the scheduled message from the peripheral memory.
3. The message storage method of claim 1, wherein the preset condition comprises:
the amount of accumulated messages reaching a preset threshold value.
4. The message storage method of claim 1, wherein discarding the messages being moved to the peripheral memory comprises:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal until the back pressure exerted by the peripheral memory on the move path disappears.
5. A message storage device, characterized in that it comprises:
a moving module, configured to move the messages cached in the built-in memory to the peripheral memory in chronological order, from earliest to latest, when the amount of messages accumulated in the built-in memory meets a preset condition;
a discarding module, configured to discard the messages being moved to the peripheral memory when the peripheral memory back-pressures the move path during the move.
6. The message storage device of claim 5, further comprising:
a judging module, configured to determine the storage location of the message to be scheduled when a message is to be scheduled for output;
when the scheduled message is stored in the built-in memory, dequeuing the scheduled message from the built-in memory;
and when the scheduled message is stored in the peripheral memory, dequeuing the scheduled message from the peripheral memory.
7. The message storage device of claim 5, wherein the preset condition of the moving module comprises:
the amount of accumulated messages reaching a preset threshold value.
8. The message storage device of claim 5, wherein the discarding module discarding the messages being moved to the peripheral memory comprises:
receiving a trigger signal, and discarding all or part of the messages being moved to the peripheral memory according to the trigger signal until the back pressure exerted by the peripheral memory on the move path disappears.
9. A message storage device, comprising: a memory and a processor;
the memory is used for storing executable instructions;
the processor is configured to execute the executable instructions stored in the memory, and perform the following operations:
when the amount of messages accumulated in the built-in memory meets a preset condition, moving the messages cached in the built-in memory to the peripheral memory in chronological order, from earliest to latest;
during the move, when the peripheral memory back-pressures the move path, discarding the messages being moved to the peripheral memory.
CN201711046599.2A 2017-10-31 2017-10-31 Message storage method and device Active CN109729014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711046599.2A CN109729014B (en) 2017-10-31 2017-10-31 Message storage method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711046599.2A CN109729014B (en) 2017-10-31 2017-10-31 Message storage method and device

Publications (2)

Publication Number Publication Date
CN109729014A CN109729014A (en) 2019-05-07
CN109729014B (en) 2023-09-12

Family

ID=66294368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711046599.2A Active CN109729014B (en) 2017-10-31 2017-10-31 Message storage method and device

Country Status (1)

Country Link
CN (1) CN109729014B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804156A (en) * 2019-11-13 2021-05-14 深圳市中兴微电子技术有限公司 Congestion avoidance method and device and computer readable storage medium
CN113835611A (en) * 2020-06-23 2021-12-24 深圳市中兴微电子技术有限公司 Storage scheduling method, device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101784082A (en) * 2009-12-22 2010-07-21 中兴通讯股份有限公司 Method and device for enhancing service quality in wireless local area network
CN102571535B (en) * 2010-12-22 2015-02-18 深圳市恒扬科技股份有限公司 Device and method for delaying data and communication system
CN102594691B (en) * 2012-02-23 2019-02-15 中兴通讯股份有限公司 A kind of method and device handling message
CN103888377A (en) * 2014-03-28 2014-06-25 华为技术有限公司 Message cache method and device
CN106470126A (en) * 2015-08-14 2017-03-01 中兴通讯股份有限公司 The monitoring method of port queue blocking and system
CN107222427A (en) * 2016-03-22 2017-09-29 华为技术有限公司 The method and relevant device of a kind of Message processing

Also Published As

Publication number Publication date
CN109729014A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
US8787169B2 (en) System and method for adjusting transport layer processing during flow control and suspension states
CN109120544B (en) Transmission control method based on host end flow scheduling in data center network
EP2702730B1 (en) Effective circuits in packet-switched networks
US20210359931A1 (en) Packet Scheduling Method, Scheduler, Network Device, and Network System
CN111355669A (en) Method, device and system for controlling network congestion
CN113973085B (en) Congestion control method and device
CN113064738B (en) Active queue management method based on summary data
CN113315720B (en) Data flow control method, system and equipment
CN110784415A (en) ECN quick response method and device
CN109729014B (en) Message storage method and device
CN108011845A (en) A kind of method and apparatus for reducing time delay
CN114189477B (en) Message congestion control method and device
CN117749726A (en) Method and device for mixed scheduling of output port priority queues of TSN switch
CN112055382A (en) Service access method based on refined differentiation
CN112804156A (en) Congestion avoidance method and device and computer readable storage medium
EP4436118A1 (en) Message processing method and apparatus, and network device and medium
CN114785744B (en) Data processing method, device, computer equipment and storage medium
CN118819824A (en) Method and device for intelligently adjusting TM (TM) resources
CN118540271A (en) TSN network self-adaptive gate control method
CN103188166A (en) Flow control method utilizing TCP (Transmission Control Protocol)-self-consciousness Random Early Detection (RED)
CN102387541B (en) Iub interface flow control protection method and base station
CN118631749A (en) Data transmission method, device, electronic equipment and storage medium
CN118631757A (en) Credit value-based rate control method, apparatus and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant