CN108874688B - Message data caching method and device


Info

Publication number
CN108874688B
Authority
CN
China
Prior art keywords
message
descriptor
data
message data
storage
Prior art date
Legal status
Active
Application number
CN201810693881.8A
Other languages
Chinese (zh)
Other versions
CN108874688A (en)
Inventor
谢成祥
袁结全
Current Assignee
Shenzhen Forward Industrial Co Ltd
Original Assignee
Shenzhen Forward Industrial Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Forward Industrial Co Ltd
Priority to CN201810693881.8A
Publication of CN108874688A
Application granted
Publication of CN108874688B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0615Address space extension
    • G06F12/0623Address space extension for memory modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0875Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack

Abstract

The invention provides a message data caching method and apparatus. When a burst write instruction is received, the message type of the received message data is identified, and whether the current use condition of the cache space corresponding to the message type meets a preset capacity is judged. If the current use condition of the cache space meets the preset capacity, a message descriptor matched with the message data is generated, the message descriptor comprising the cache physical initial address and the message length of the message data. The message descriptor is then stored in the internal storage block, and the message data is stored in the message storage area of the external storage block according to the cache physical initial address. The message data caching method and apparatus can improve the bandwidth utilization of the storage address allocation scheme and thereby increase the caching speed.

Description

Message data caching method and device
Technical Field
The invention relates to the technical field of data communication, in particular to a message data caching method and device.
Background
With the growing performance and resource count of FPGA (Field-Programmable Gate Array) boards, an FPGA board needs more and more storage space to hold data. Because of the board's size limits and the limited on-board storage capacity, the FPGA board has to extend its storage space with externally mounted storage devices, and at present the most commonly used external storage device is DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory). In practice, however, when burst requests store small amounts of message data, the read/write characteristics of DDR require a certain preparation time to allocate a storage address for every write of message data, which results in low bandwidth utilization of the storage address allocation and a slow storage speed.
Disclosure of Invention
In view of the above problems, the present invention provides a message data caching method and apparatus that can improve the bandwidth utilization of the storage address allocation scheme and thereby increase the caching speed.
To achieve this, the invention adopts the following technical scheme:
The first aspect of the present invention discloses a message data caching method, which includes:
when a burst write instruction is received, identifying the message type of the received message data, and judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity;
if the current use condition of the cache space meets the preset capacity, generating a message descriptor matched with the message data, wherein the message descriptor comprises the cache physical initial address and the message length of the message data;
and storing the message descriptor in an internal storage block, and storing the message data in a message storage area of an external storage block according to the cache physical initial address.
As an alternative implementation, in the first aspect of the present invention, the method further includes:
and if the current use condition of the cache space does not meet the preset capacity, discarding the message data and prompting that the storage is full.
As an optional implementation manner, in the first aspect of the present invention, after the message data is stored in the message storage area of the external storage block according to the cache physical initial address, the method further includes:
judging whether the length of the message exceeds a length threshold value;
and if the message length exceeds the length threshold, storing the message descriptor into a descriptor storage area of an external storage block.
As an optional implementation manner, in the first aspect of the present invention, storing the message descriptor in a descriptor storage area of an external storage block includes:
and converting the message descriptor into message format data with a preset length, and storing the message format data into a descriptor storage area of the external storage block.
As an optional implementation manner, in the first aspect of the present invention, after the message descriptor is stored in the descriptor storage area of the external storage block, the method further includes:
judging whether a burst read instruction is received, wherein the burst read instruction indicates the message type of the data to be read;
and if the burst read instruction is received, reading all target message descriptors corresponding to the message type to be read from the descriptor storage area and the internal storage block, and reading the target message data corresponding to each target message descriptor from the message storage area.
A second aspect of the present invention discloses a message data caching apparatus, including:
the identification and determination module is used for identifying the message type of the received message data when a burst write instruction is received, and judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity;
the descriptor generation module is configured to generate a message descriptor matched with the message data when it is determined that the current use condition of the cache space meets the preset capacity, wherein the message descriptor includes the cache physical initial address and the message length of the message data;
and the storage module is used for storing the message descriptor in an internal storage block and storing the message data in a message storage area of an external storage block according to the cache physical initial address.
As an optional implementation manner, in the second aspect of the present invention, the apparatus further includes:
and the prompting module is used for discarding the message data and prompting that the storage is full when the current use condition of the cache space is judged not to meet the preset capacity.
As an optional implementation manner, in the second aspect of the present invention, the identification and determination module is further configured to judge whether the message length exceeds a length threshold after the message data is stored in a message storage area of an external storage block according to the cache physical initial address;
the storage module is further configured to store the message descriptor in a descriptor storage area of the external storage block when it is determined that the message length exceeds the length threshold.
A third aspect of the present invention discloses a storage device, which includes a memory and a processor, where the memory is used to store a computer program, and the processor runs the computer program to make the storage device execute part or all of the message data caching method disclosed in the first aspect.
A fourth aspect of the present invention discloses a computer-readable storage medium storing the computer program used in the storage device of the third aspect.
According to the message data caching method and apparatus provided by the invention, when a burst write instruction is received, the message type of the received message data is first identified, and whether the current use condition of the cache space corresponding to the message type meets the preset capacity is judged. If the current use condition of the cache space meets the preset capacity, which indicates that the storage space allocated to the message type is not full, a message descriptor matched with the message data is generated, the message descriptor comprising the cache physical initial address and the message length of the message data. Finally, the message descriptor is stored in the internal storage block, and the message data is stored in the message storage area of the external storage block according to the cache physical initial address, which reduces the preparation time for activating the corresponding storage blocks. Meanwhile, data of different message types is stored in the data blocks of the storage blocks corresponding to those message types, so the bandwidth utilization of the storage address allocation scheme is effectively improved and the caching speed is further increased.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention.
Fig. 1 is a schematic flowchart of a message data caching method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a message data caching method according to a second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a message data caching apparatus according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a message data caching apparatus according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the problems in the prior art, the invention provides a message data caching method and apparatus. When a burst write instruction is received, the technique identifies the message type of the received message data and judges whether the current use condition of the cache space corresponding to the message type meets the preset capacity. If the current use condition of the cache space meets the preset capacity, which indicates that the storage space allocated to the message type is not full, a message descriptor matched with the message data is generated, the message descriptor comprising the cache physical initial address and the message length of the message data. Finally, the message descriptor is stored in the internal storage block, and the message data is stored in the message storage area of the external storage block according to the cache physical initial address, which reduces the preparation time for activating the corresponding storage blocks. Meanwhile, data of different message types is stored in the data blocks of the storage blocks corresponding to those message types, so the bandwidth utilization of the storage address allocation scheme is effectively improved and the caching speed is further increased. The technique may be implemented in associated software or hardware, as described in the examples below.
Example 1
Referring to fig. 1, fig. 1 is a schematic flowchart of a message data caching method according to an embodiment of the present invention. As shown in fig. 1, the message data caching method may include the following steps:
s101, when receiving a burst write command, identifying the message type of the received message data. The message data caching method provided by the invention is applied to data storage of the FPGA board. The external storage block is an external storage block of the FPGA board, and the internal storage block is a storage block inside the FPGA board.
In the embodiment of the invention, the external storage block can be further divided into a message storage area and a descriptor storage area. The message storage area can be further divided into a plurality of storage areas according to different message types, and each message type corresponds to one storage area. The message storage area is used for storing message data; the descriptor storage area is used for storing the physical initial address of the message data stored in the message storage area and the message length of the message data.
In the embodiment of the present invention, the external storage block may be a DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory). The step of further dividing the message storage area of the external storage block into a plurality of storage areas according to different message types includes:
determining the total number of message types of the message data that needs to be cached in the message storage area;
designating storage blocks 0-6 of the eight storage blocks (storage blocks 0-7) of the DDR SDRAM as the message storage area, and storage block 7 as the descriptor storage area;
further dividing each of storage blocks 0 to 6 to obtain a plurality of sub-storage blocks, where each sub-storage block stores the message data of one message type and the number of sub-storage blocks is equal to the total number of message types.
In the above embodiment, for example, if the FPGA board has N cache data ports and there are M message types of message data that need to be cached at each port, the total number of message types is N × M. Accordingly, each of storage blocks 0 to 6 is further divided into N × M sub-storage blocks.
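Purely for illustration, the partitioning described above can be modelled in software (the real implementation is FPGA logic). The block geometry, port count N and per-port type count M in this C sketch are assumed example values, not figures taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical DDR geometry used only for this sketch. */
#define NUM_BLOCKS      8            /* storage blocks 0-6: messages, block 7: descriptors */
#define BLOCK_SIZE      (64u << 20)  /* 64 MB per storage block (assumed)                  */
#define NUM_PORTS       4            /* N: cache data ports (assumed)                      */
#define TYPES_PER_PORT  8            /* M: message types per port (assumed)                */
#define TOTAL_TYPES     (NUM_PORTS * TYPES_PER_PORT)   /* N * M sub-storage blocks         */

/* Base address of the sub-storage block that caches messages of a given
 * type inside one of the message storage blocks (blocks 0-6). */
static uint64_t sub_block_base(unsigned block, unsigned type_id)
{
    uint64_t sub_size = BLOCK_SIZE / TOTAL_TYPES;       /* equal split per message type */
    return (uint64_t)block * BLOCK_SIZE + (uint64_t)type_id * sub_size;
}

int main(void)
{
    uint64_t descriptor_base = 7ull * BLOCK_SIZE;       /* block 7 holds descriptors */
    printf("type 5 in block 2 -> 0x%llx, descriptor area -> 0x%llx\n",
           (unsigned long long)sub_block_base(2, 5),
           (unsigned long long)descriptor_base);
    return 0;
}
```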
S102, judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity; if so, executing steps S103 to S104; if not, ending the flow.
In the embodiment of the present invention, the current use condition of the cache space may be the size of the data already cached for the current message type; the embodiment of the present invention is not limited in this respect. For example, if the storage capacity allocated to a certain message type is 10MB (i.e. 10240KB) and the maximum message length is 4KB, the preset capacity may be set as a capacity threshold of 10230KB. After the burst write instruction is received, when the current usage of the cache space corresponding to the message type is detected to be less than or equal to 10230KB, the current use condition of the cache space corresponding to the message type meets the preset capacity, and steps S103 to S104 are performed.
In the embodiment of the invention, the preset capacity is a capacity value set by the user according to the actual memory capacity and the actually required capacity. For example, when the actual memory capacity is 512MB and the capacity actually required for a certain message type is 64MB, after the burst write instruction is received and the size of the data already cached for that message type is detected to be smaller than 64MB, the current use condition of the cache space corresponding to the message type meets the preset capacity, and steps S103 to S104 are performed.
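A minimal sketch of this per-type admission check, assuming a software model in which the used size of each type's cache space is tracked in a counter array; the array names and example values are illustrative, not defined by the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define TOTAL_TYPES 32                        /* N * M, assumed example value */

static uint64_t used_bytes[TOTAL_TYPES];      /* data already cached per message type */
static uint64_t preset_capacity[TOTAL_TYPES]; /* user-configured limit per type       */

/* Returns true if the message may be cached, false if it must be discarded
 * and a "storage full" prompt raised. */
static bool capacity_check(unsigned type_id, uint32_t msg_len)
{
    return used_bytes[type_id] + msg_len <= preset_capacity[type_id];
}

int main(void)
{
    preset_capacity[3] = 64ull << 20;             /* e.g. 64 MB allowed for type 3 */
    used_bytes[3]      = 10ull << 20;             /* 10 MB already cached          */
    return capacity_check(3, 1500) ? 0 : 1;       /* admit a 1500-byte message     */
}
```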
S103, generating a message descriptor matched with the message data, wherein the message descriptor comprises a cache physical initial address and the message length of the message data.
In the embodiment of the invention, generating the message descriptor matched with the message data is the process of allocating a cache address for the message data to be stored; once the cache address allocation is completed, the message descriptor corresponding to the message data is obtained. The basic principle of address allocation provided by the invention is as follows: for the same message data, the storage block, the row address and the column address remain unchanged; for different message data, the storage blocks are rotated in sequence, and the write pointer of the row address is advanced by the row-address step.
In the embodiment of the present invention, the cache physical initial address includes a row address and a column address. Because the storage blocks of the DDR SDRAM are independent of one another, the message data caching method provided by the embodiment of the invention can, during burst write operations, reduce switching between storage blocks through rotating back-to-back storage across the storage blocks. This realizes pipelined storage of the message data to be cached across the storage blocks and avoids the extra preparation time that would be consumed if a single storage block were used. In addition, a corresponding message descriptor is generated for each piece of message data to be cached; the message descriptor includes information such as the cache physical initial address of the corresponding message data and the message length of the message data, and the present invention is not limited in this respect.
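The allocation rule described above (one message keeps its storage block, row and column; successive messages rotate across storage blocks while the row write pointer advances by a step) could look roughly like the following model. The descriptor layout, field widths and helper names are assumptions made for illustration, not a definitive implementation.

```c
#include <stdint.h>

#define MSG_BLOCKS 7                /* storage blocks 0-6 hold message data */

/* Assumed descriptor layout: cache physical initial address (block, row,
 * column) plus message length. */
typedef struct {
    uint32_t block;
    uint32_t row;
    uint32_t col;
    uint32_t length;                /* message length in bytes */
} msg_descriptor_t;

/* Per-type allocation state: the next storage block in the rotation and a
 * row write pointer for each block. */
typedef struct {
    uint32_t next_block;
    uint32_t row_ptr[MSG_BLOCKS];
} alloc_state_t;

/* Allocate a cache address for one message and build its descriptor.
 * row_step is the assumed number of rows reserved per message. */
static msg_descriptor_t alloc_descriptor(alloc_state_t *st,
                                         uint32_t msg_len,
                                         uint32_t row_step)
{
    msg_descriptor_t d;
    d.block  = st->next_block;                      /* same block for the whole message */
    d.row    = st->row_ptr[d.block];                /* row part of the initial address  */
    d.col    = 0;                                   /* column part (starts at 0 here)   */
    d.length = msg_len;

    st->row_ptr[d.block] += row_step;               /* advance this block's row pointer */
    st->next_block = (st->next_block + 1) % MSG_BLOCKS;  /* rotate to the next block    */
    return d;
}

int main(void)
{
    alloc_state_t st = {0};
    msg_descriptor_t d = alloc_descriptor(&st, 1500, 1);  /* e.g. a 1500-byte message */
    return (int)d.block;                                  /* first message uses block 0 */
}
```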
S104, storing the message descriptor in the internal storage block, and storing the message data in the message storage area of the external storage block according to the cache physical initial address.
In the embodiment of the present invention, after the message descriptor matching the message data is generated, the message descriptor may be temporarily stored in the internal storage block.
In the embodiment of the invention, burst write and burst read operations of message data can be performed, and ultra-long message data can also be cached. Specifically, the steps of reading ultra-long message data are as follows (a software sketch is given after this list):
receiving a read activation instruction, wherein the read activation instruction comprises the storage block address and the row address that need to be activated;
activating the storage block corresponding to the storage block address and the row address according to the read activation instruction;
waiting for a first preset time length, and receiving a reading instruction, wherein the reading instruction comprises a column address;
waiting for a second preset time length, and reading message data corresponding to the column address in the storage block;
and resetting the timer after the message data is read.
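The read sequence listed above can be sketched as a simple software model. The controller hooks and wait durations below stand in for DDR timing parameters (row-activation delay, CAS latency) and are assumptions, not values given in the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder DDR-controller hooks; on the FPGA these would be real
 * controller commands, here they only trace the sequence. */
static void ddr_activate(uint32_t block, uint32_t row)
{
    printf("ACT block=%u row=%u\n", (unsigned)block, (unsigned)row);
}
static void ddr_issue_read(uint32_t col) { printf("RD col=%u\n", (unsigned)col); }
static uint64_t ddr_capture_data(void)   { return 0xDEADBEEFull; /* dummy data beat */ }
static void wait_cycles(uint32_t n)      { (void)n; /* stands in for a timed wait   */ }
static void timer_reset(void)            { /* re-arm the burst timer                */ }

/* One read transaction, following the steps above: activate -> wait ->
 * read command -> wait -> capture data -> reset timer. */
static uint64_t ultra_long_read(uint32_t block, uint32_t row, uint32_t col,
                                uint32_t t_first, uint32_t t_second)
{
    ddr_activate(block, row);      /* read activation opens the target row          */
    wait_cycles(t_first);          /* first preset time length (row-activate delay) */
    ddr_issue_read(col);           /* read instruction carries the column address   */
    wait_cycles(t_second);         /* second preset time length (CAS latency)       */
    uint64_t data = ddr_capture_data();
    timer_reset();                 /* timer is reset after the data has been read   */
    return data;
}

int main(void)
{
    (void)ultra_long_read(2, 100, 0, 3, 5);   /* illustrative block/row/col/timings */
    return 0;
}
```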
Specifically, the steps of writing ultra-long message data are as follows (a corresponding sketch is given after this list):
receiving a write activation instruction, wherein the write activation instruction comprises a storage block address and a row address which need to be activated;
activating the storage block corresponding to the storage block address and the row address according to the write activation instruction;
waiting for a third preset duration, and receiving a write-in command, wherein the write-in command comprises a column address;
waiting for a fourth preset time length, and writing message data into the storage block according to the row address and the column address;
and waiting for a fifth preset time length after the message data has been written into the cache, and then resetting the timer.
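The write sequence can be modelled in the same style; the hooks and the three durations are again placeholders for DDR timing parameters rather than values from the patent.

```c
#include <stdint.h>

/* Placeholder hooks in the same spirit as the read sketch above; they are
 * assumptions, not a real DDR-controller API. */
static void ddr_activate(uint32_t block, uint32_t row) { (void)block; (void)row; }
static void ddr_write_cmd(uint32_t col)                { (void)col; }
static void ddr_drive_data(const uint64_t *beat)       { (void)beat; }
static void wait_cycles(uint32_t n)                    { (void)n; }
static void timer_reset(void)                          { }

/* One write transaction: activate -> wait -> write command -> wait ->
 * write data -> wait -> reset timer, mirroring the steps above. */
static void ultra_long_write(uint32_t block, uint32_t row, uint32_t col,
                             const uint64_t *beat,
                             uint32_t t_third, uint32_t t_fourth, uint32_t t_fifth)
{
    ddr_activate(block, row);   /* write activation opens the target row         */
    wait_cycles(t_third);       /* third preset time length                      */
    ddr_write_cmd(col);         /* write instruction carries the column address  */
    wait_cycles(t_fourth);      /* fourth preset time length                     */
    ddr_drive_data(beat);       /* message data written at the row and column    */
    wait_cycles(t_fifth);       /* fifth preset time length after the write      */
    timer_reset();              /* timer is reset once the write has settled     */
}

int main(void)
{
    uint64_t beat = 0x0123456789abcdefull;
    ultra_long_write(3, 200, 0, &beat, 3, 4, 6);   /* illustrative parameters */
    return 0;
}
```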
In the embodiment of the invention, the storage space of the external storage block is divided, and then the message data is classified and cached according to the message type, so that the interference among the message data of different message types is reduced.
In the embodiment of the invention, the storage state of the message can be reported to the user in real time for the user to schedule the reading and writing of the message.
It can be seen that, by implementing the message data caching method described in fig. 1, not only is the preparation time for activating the corresponding storage block reduced, but data of different message types is also stored in the data blocks of the storage blocks corresponding to those message types, so the bandwidth utilization of the storage address allocation scheme is effectively improved and the caching speed is further increased.
Example 2
Referring to fig. 2, fig. 2 is a schematic flowchart of a message data caching method according to a second embodiment of the present invention. As shown in fig. 2, the message data caching method may include the following steps:
s201, when receiving the burst write command, identifying the message type of the received message data.
S202, judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity; if so, executing steps S204 to S206; if not, executing step S203.
S203, discarding the message data and prompting that the storage is full.
In the embodiment of the invention, a user can set the preset capacity of the cache space corresponding to each message type, and when the current use condition of the cache space corresponding to the message type is judged not to meet the preset capacity, the message data is automatically discarded.
In the embodiment of the invention, the preset capacity is a capacity value set by the user according to the actual memory capacity and the actually required capacity. If, during the caching of the current message data, the cache space corresponding to its message type ceases to meet the preset capacity, the current message is still cached in full; the next message data that arrives is discarded and the user is prompted that the storage is full.
S204, generating a message descriptor matched with the message data, wherein the message descriptor comprises a cache physical initial address and the message length of the message data;
S205, storing the message descriptor in the internal storage block, and storing the message data in the message storage area of the external storage block according to the cache physical initial address.
S206, judging whether the length of the message exceeds a length threshold value, and if the length of the message exceeds the length threshold value, executing the step S207 to the step S208; if the length of the message does not exceed the length threshold, the process is ended.
S207, storing the message descriptor into a descriptor storage area of the external storage block.
In this embodiment of the present invention, the length threshold may be set to 512 bytes, for example; this is not limited in the embodiment of the present invention.
As an optional implementation manner, storing the message descriptor in the descriptor storage area of the external storage block includes the following steps:
and converting the message descriptor into message format data with a preset length, and storing the message format data into a descriptor storage area of the external storage block.
In the embodiment of the present invention, when the length threshold is set to 512 bytes and the message length of the message data is determined to exceed 512 bytes, the message descriptor temporarily stored in the internal storage block is stored in the descriptor storage area of the external storage block as 64-byte message format data.
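A sketch of converting a descriptor into fixed-length message format data before moving it to the descriptor storage area; the patent fixes only the 64-byte total length, so the field layout and zero padding below are assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Assumed in-FPGA descriptor layout (same as the allocation sketch above). */
typedef struct {
    uint32_t block;     /* storage block holding the message         */
    uint32_t row;       /* row part of the cache physical address    */
    uint32_t col;       /* column part of the cache physical address */
    uint32_t length;    /* message length in bytes                   */
} msg_descriptor_t;

#define DESC_RECORD_BYTES 64   /* fixed record size in the descriptor storage area */

/* Pack a descriptor into a 64-byte record: the known fields first, the
 * remainder zero-padded (this layout is an assumption for illustration). */
static void pack_descriptor(const msg_descriptor_t *d,
                            uint8_t out[DESC_RECORD_BYTES])
{
    memset(out, 0, DESC_RECORD_BYTES);
    memcpy(out, d, sizeof *d);          /* 16 bytes of fields, 48 bytes of padding */
}

int main(void)
{
    msg_descriptor_t d = { .block = 2, .row = 100, .col = 0, .length = 1500 };
    uint8_t record[DESC_RECORD_BYTES];
    pack_descriptor(&d, record);
    return record[0] == 2 ? 0 : 1;      /* little-endian spot check, illustrative */
}
```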
S208, judging whether a burst read instruction is received, wherein the burst read instruction indicates the message type of the data to be read; if the burst read instruction is received, executing step S209; if the burst read instruction is not received, ending the process.
In the embodiment of the present invention, the message data caching device may determine, according to the received instruction, whether to perform a burst read operation or a burst write operation of the message data.
S209, reading all target message descriptors corresponding to the message type to be read from the descriptor storage area and the internal storage block, and reading the target message data corresponding to each target message descriptor from the message storage area.
In the embodiment of the present invention, after the burst read instruction is received, if the data length of the message type to be read is greater than a preset read length threshold (e.g., 512 bytes), at least the preset read length threshold of complete message data is read each time; when the data length of the message type to be read is less than or equal to the preset read length threshold (e.g., 512 bytes), all of the data to be read is read at once.
In the embodiment of the invention, when a burst read operation of message data is performed, whether a corresponding message descriptor is stored in the descriptor storage area of the external storage block is first checked according to the message type to be read. If so, the message descriptor is read out first, and then the message data cached in the message storage area of the external storage block is read according to the message descriptor. If not, whether the corresponding message descriptor is stored in the internal storage block is checked: if it is, the message data cached in the message storage area of the external storage block is read according to the message descriptor; if it is not, a read-empty indication is returned to inform the user that there is no message of this type to read.
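The lookup order during a burst read (external descriptor storage area first, then the internal storage block, otherwise a read-empty indication) could be modelled as below; the lookup helpers are assumed placeholders rather than an interface defined by the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t block, row, col, length; } msg_descriptor_t;

/* Placeholder lookups into the two descriptor locations; in this stub model
 * both are empty, so every read ends in the read-empty indication. */
static bool find_desc_external(uint32_t type, msg_descriptor_t *d) { (void)type; (void)d; return false; }
static bool find_desc_internal(uint32_t type, msg_descriptor_t *d) { (void)type; (void)d; return false; }
static size_t read_message(const msg_descriptor_t *d, uint8_t *buf) { (void)buf; return d->length; }

/* Burst read for one message type: check the descriptor storage area of the
 * external storage block first, then the internal storage block; return 0
 * (read empty) if neither holds a matching descriptor. */
static size_t burst_read(uint32_t msg_type, uint8_t *buf)
{
    msg_descriptor_t d;
    if (find_desc_external(msg_type, &d) || find_desc_internal(msg_type, &d))
        return read_message(&d, buf);   /* fetch the data from the message storage area */
    return 0;                           /* read-empty indication                        */
}

int main(void)
{
    uint8_t buf[2048];
    return (int)burst_read(7, buf);     /* returns 0 here because both stubs are empty */
}
```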
It can be seen that, by implementing the message data caching method provided in fig. 2, pipelined storage across data blocks and the cooperation of multiple storage blocks achieve back-to-back effective reads and writes among the storage blocks and reduce the preparation time outside of effective reads and writes, thereby effectively improving the effective read/write bandwidth. Each message has a message descriptor that stores information such as the physical initial address at which the message is stored in the external storage block and the message length; the message descriptor is temporarily stored in the internal storage block.
Example 3
Referring to fig. 3, fig. 3 is a schematic structural diagram of a message data caching apparatus according to a third embodiment of the present invention. As shown in fig. 3, the message data caching apparatus includes:
the identification and determination module 301 is configured to, when a burst write instruction is received, identify a packet type of received packet data, and determine whether a current usage condition of a cache space corresponding to the packet type meets a preset capacity.
The descriptor generating module 302 is configured to generate a message descriptor matched with the message data when the identification and determination module 301 determines that the current usage condition of the cache space meets the preset capacity, where the message descriptor includes a cache physical head address and a message length of the message data.
The storage module 303 is configured to store the message descriptor in the internal storage block, and store the message data in the message storage area of the external storage block according to the cached physical first address.
It can be seen that, by implementing the message data caching apparatus described in fig. 3, not only is the preparation time for activating the corresponding storage block reduced, but data of different message types is also stored in the data blocks of the storage blocks corresponding to those message types, so the bandwidth utilization of the storage address allocation scheme is effectively improved and the caching speed is further increased.
Example 4
Referring to fig. 4, fig. 4 is a schematic structural diagram of a message data caching apparatus according to a fourth embodiment of the present invention. The message data caching apparatus shown in fig. 4 is obtained by optimizing the message data caching apparatus shown in fig. 3. As shown in fig. 4, the message data caching apparatus further includes:
and the prompting module 304 is configured to discard the message data and prompt that the storage is full when the identification and determination module 301 determines that the current usage condition of the cache space does not meet the preset capacity.
In this embodiment of the present invention, the identification and determination module 301 is further configured to judge whether the message length exceeds a length threshold after the message data is stored in the message storage area of the external storage block according to the cache physical initial address.
The storage module 303 is further configured to store the message descriptor in the descriptor storage area of the external storage block when it is determined that the message length exceeds the length threshold.
It can be seen that, by implementing the message data caching apparatus provided in fig. 4, pipelined storage across data blocks and the cooperation of multiple storage blocks achieve back-to-back effective reads and writes among the storage blocks and reduce the preparation time outside of effective reads and writes, thereby effectively improving the effective read/write bandwidth. Each message has a message descriptor that stores information such as the physical initial address at which the message is stored in the external storage block and the message length; the message descriptor is temporarily stored in the internal storage block.
In addition, the invention also provides a storage device. The storage device comprises a memory and a processor, wherein the memory can be used for storing a computer program, and the processor executes the computer program, so that the storage device executes the functions of the method or each module in the message data caching device.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The embodiment also provides a computer storage medium for storing a computer program used in the storage device.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A message data caching method is characterized by comprising the following steps:
when a burst write instruction is received, identifying the message type of the received message data, and judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity, wherein the total number of the message types is N × M, the number of the cache spaces corresponding to the message type is N × M, N is the number of cache data ports, and M is the number of the message types of the message data needing to be cached in each port;
if the current use condition of the cache space meets the preset capacity, generating a message descriptor matched with the message data, wherein the message descriptor comprises a cache physical initial address and the message length of the message data;
and storing the message descriptors into an internal storage block, and storing the message data into a message storage area of an external storage block according to the cache physical initial address, wherein the external storage block is divided into the message storage area and the descriptor storage area, and the message storage area comprises N × M cache spaces.
2. The message data caching method according to claim 1, further comprising:
and if the current use condition of the cache space does not meet the preset capacity, discarding the message data and prompting that the storage is full.
3. The message data caching method according to claim 1, wherein after the message data is stored in a message storage area of an external storage block according to the cache physical initial address, the method further comprises:
judging whether the length of the message exceeds a length threshold value;
and if the message length exceeds the length threshold, storing the message descriptor into a descriptor storage area of an external storage block.
4. The message data caching method according to claim 3, wherein the storing the message descriptor in a descriptor storage area of an external storage block comprises:
and converting the message descriptor into message format data with a preset length, and storing the message format data into a descriptor storage area of the external storage block.
5. The message data caching method according to claim 3, wherein after storing the message descriptor in a descriptor storage region of an external storage block, the method further comprises:
judging whether a burst read instruction is received, wherein the burst read instruction indicates the message type of the data to be read;
and if the burst read instruction is received, reading all target message descriptors corresponding to the message type to be read from the descriptor storage area and the internal storage block, and reading the target message data corresponding to each target message descriptor from the message storage area.
6. A message data caching apparatus, comprising:
the identification and determination module is used for identifying the message type of the received message data when a burst write instruction is received, and judging whether the current use condition of the cache space corresponding to the message type meets the preset capacity, wherein the total number of the message types is N × M, the number of the cache spaces corresponding to the message types is N × M, N is the number of cache data ports, and M is the number of message types of the message data that needs to be cached at each port;
a descriptor generation module, configured to generate a message descriptor matched with the message data when it is determined that the current use condition of the cache space meets the preset capacity, where the message descriptor includes a cache physical initial address and the message length of the message data;
and the storage module is used for storing the message descriptors into an internal storage block and storing the message data into a message storage area of an external storage block according to the cache physical initial address, wherein the external storage block is divided into the message storage area and the descriptor storage area, and the message storage area comprises N × M cache spaces.
7. The message data caching apparatus according to claim 6, further comprising:
and the prompting module is used for discarding the message data and prompting that the storage is full when the current use condition of the cache space is judged not to meet the preset capacity.
8. The message data caching device according to claim 6, wherein the identification and determination module is further configured to determine whether the message length exceeds a length threshold after the message data is stored in a message storage area of an external storage block according to the cache physical initial address;
the storage module is further configured to store the message descriptor in a descriptor storage area of an external storage block when it is determined that the message length exceeds the length threshold.
9. A storage device comprising a memory for storing a computer program and a processor for executing the computer program to cause the storage device to perform the message data caching method of any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that it stores the computer program used in the storage device of claim 9.
CN201810693881.8A 2018-06-29 2018-06-29 Message data caching method and device Active CN108874688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810693881.8A CN108874688B (en) 2018-06-29 2018-06-29 Message data caching method and device

Publications (2)

Publication Number Publication Date
CN108874688A CN108874688A (en) 2018-11-23
CN108874688B (en) 2021-03-16

Family

ID=64297080

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4072084A4 (en) * 2019-12-25 2022-12-28 Huawei Technologies Co., Ltd. Message buffering method, integrated circuit system, and storage medium
CN113259247B (en) * 2020-02-11 2022-11-25 华为技术有限公司 Cache device in network equipment and data management method in cache device
CN112631516B (en) * 2020-12-22 2022-09-30 上海宏力达信息技术股份有限公司 FLASH file management system with service life management function
CN113660180B (en) * 2021-07-30 2023-11-28 鹏城实验室 Data storage method, device, terminal and storage medium
CN114024923A (en) * 2021-10-30 2022-02-08 江苏信而泰智能装备有限公司 Multithreading message capturing method, electronic equipment and computer storage medium
CN117499351A (en) * 2022-07-26 2024-02-02 华为技术有限公司 Message forwarding device and method, communication chip and network equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant