CN114968102B - Data caching method, device, system, computer equipment and storage medium


Info

Publication number
CN114968102B
CN114968102B
Authority
CN
China
Legal status
Active
Application number
CN202210585432.8A
Other languages
Chinese (zh)
Other versions
CN114968102A
Inventor
范鑫
胡胜发
Current Assignee
Guangzhou Ankai Microelectronics Co ltd
Original Assignee
Guangzhou Ankai Microelectronics Co ltd
Application filed by Guangzhou Ankai Microelectronics Co ltd
Priority to CN202210585432.8A
Publication of CN114968102A
Application granted
Publication of CN114968102B


Classifications

    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0622 Securing storage systems in relation to access
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application belongs to the technical field of data storage and discloses a data caching method, device, system, computer equipment and storage medium. The method comprises the following steps: receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks of a preset size; storing the data blocks sequentially; during the storage of each data block, judging whether the current remaining space of the first internal storage unit is sufficient to store the data block; if so, storing the data block in the first internal storage unit; if not, storing the data block in the second internal storage unit; recording the storage destination information of each data block; and, when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to its storage destination information and sending the data blocks to the data output unit in sequence. The application makes better use of internal storage and avoids the delay that external storage would otherwise impose on the system.

Description

Data caching method, device, system, computer equipment and storage medium
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a data caching method, device, system, computer device, and storage medium.
Background
In image data processing systems, it is often necessary to buffer a received data stream before distributing it to the required processing modules at specified times. Because an image data stream generally carries a large amount of traffic, systems that need a large cache capacity use both an on-chip storage unit and an off-chip storage unit.
The two kinds of storage unit each have advantages and disadvantages: the on-chip storage unit offers a high read/write speed, but it is expensive and difficult to produce in relatively large capacities; the off-chip storage unit is inexpensive and well suited to large-capacity storage, but its latency in a digital system is higher than that of the on-chip storage unit, and using it increases the burden on system bandwidth. In the prior art, however, it is difficult to make full use of the storage space of the on-chip storage unit, and the off-chip storage unit may introduce delay into the system.
Disclosure of Invention
The application provides a data caching method, device, system, computer equipment and storage medium, which can fully utilize internal storage and avoid the delay that external storage would cause to the system.
In a first aspect, an embodiment of the present application provides a data caching method, where the method includes:
a data receiving step of receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
a data storage step of sequentially storing a plurality of data blocks with preset sizes; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit configured to transmit the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
a storage destination recording step of recording storage destination information of each data block;
a data transmission step of acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block when the data transmission condition is satisfied, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
In one embodiment, dividing the data stream into a plurality of data blocks of a preset size includes:
acquiring a preset first threshold value, and dividing the data stream in time order into a plurality of data blocks of the same size according to the first threshold value, wherein the size of each data block is equal to the first threshold value.
In one embodiment, determining whether the current remaining space of the first internal memory unit is sufficient to store the data block includes:
detecting a current remaining storage capacity value of the first internal storage unit;
and judging whether the residual space of the first internal storage unit is enough to store the data block according to the current residual storage capacity value of the first internal storage unit and the first threshold value.
Preferably, the first threshold is greater than or equal to the second threshold.
In one embodiment, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block includes:
obtaining the recorded storage destination information of the data blocks, wherein the storage destination information of each data block is either a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit;
When the storage destination information of the data block is the first identifier, acquiring the data block from a first internal storage unit;
and when the storage destination information of the data block is the second identifier, acquiring the data block from the third internal storage unit.
In one embodiment, the external memory unit includes a first external memory unit for receiving output data of the second internal memory unit and a second external memory unit for outputting data to the third internal memory unit; the method further comprises the steps of:
and an iterative storage step, namely determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, taking the first external storage unit as the data input unit of the next stage and the second external storage unit as the data output unit of the next stage, and iteratively performing this configuration together with the data receiving step, the data storage step, the storage destination recording step and the data sending step in sequence, until the number of iterations reaches a preset number of iterations.
Preferably, the preset number of iterations is 2.
In one embodiment, the first internal memory unit, the second internal memory unit and the third internal memory unit are on-chip memories, and the external memory unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
In one embodiment, the data stream is an image data stream.
In a second aspect, an embodiment of the present application provides a data caching apparatus, including:
the data receiving module is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
the data storage module is used for sequentially storing a plurality of data blocks with preset sizes; judging whether the residual space of the first internal storage unit is enough to store the data blocks in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit configured to transmit the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
the storage destination recording module is used for recording the storage destination information of each data block;
the data transmission module is used for acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block when the data transmission condition is met, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
In a third aspect, an embodiment of the present application provides a data caching system, where the system includes a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit, and a data sending unit;
the data receiving unit is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes, and the size of each data block is equal to a first threshold value; sequentially storing a plurality of data blocks with preset sizes; in the storage process of each data block, acquiring the current residual storage capacity value of the first internal storage unit, and judging whether the current residual space of the first internal storage unit is enough to store the data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit; storing the storage destination information of each data block into an internal marking unit;
the first internal storage unit is used for receiving and storing the data sent by the data receiving unit and sending the current residual storage capacity value of the first internal storage unit to the data receiving unit;
A second internal storage unit for caching data to be transmitted to the external storage unit, the second internal storage unit being configured to transmit the data stored therein to the external storage unit when the amount of the data stored therein reaches a second threshold;
a third internal storage unit for caching data to be read from the external storage unit, the third internal storage unit being configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold value;
the internal marking unit is used for caching the storage destination information of each data block;
the data transmission unit is used for acquiring the storage destination information of each data block from the internal marking unit when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit;
an external storage unit for caching the data that needs to be stored off chip;
and the control unit is used for configuring a first threshold value, a second threshold value, a third threshold value and a data transmission condition.
In one embodiment, the external memory unit of the system includes a first external memory unit for receiving output data of the second internal memory unit and a second external memory unit for outputting data to the third internal memory unit;
The system also comprises an iteration module; the iteration module comprises a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data transmitting unit at the next stage;
the first external storage unit in the system is connected with the data receiving unit of the next stage contained in the iteration module to serve as the data input unit of the next stage in the iteration module, and the second external storage unit is connected with the data transmitting unit of the next stage contained in the iteration module to serve as the data output unit of the next stage in the iteration module;
the connection relation between the units of the next stage in the iteration module is the same as the connection relation between the units in the system.
In a fourth aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the computer program when executed by the processor causes the processor to perform the steps of the data caching method according to any one of the embodiments.
In a fifth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a data caching method of any of the embodiments described above.
In summary, compared with the prior art, the technical scheme provided by the application has the beneficial effects that at least:
the application provides a data caching method, a device, a system, computer equipment and a storage medium, wherein the method comprises the following steps: receiving a data stream input by a data input unit, and dividing the data stream into a plurality of data blocks with preset sizes; sequentially storing a plurality of data blocks with preset sizes; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into a first internal storage unit so as to fully utilize internal storage; if not, storing the data block into a second internal storage unit configured to transmit the stored data to an external storage unit when the amount of the stored data reaches a second threshold; recording the storage destination information of each data block; when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value. According to the method, the data flow can be segmented, so that the received data can be stored in the first internal storage unit continuously as long as the residual space of the first internal storage unit is enough to store one data block with a preset size, the first internal storage unit can be fully utilized, and the data quantity to be cached in the external storage unit can be reduced; meanwhile, the first internal storage unit which is the internal storage is directly connected with the data input unit and the data output unit for data transmission, and the external storage unit is not required to be directly connected with the data input unit and the data output unit, but the second internal storage unit and the third internal storage unit which are the internal storage are used for realizing data access with the data processing system, so that the system is ensured to only exchange data with the internal storage, delay and system bandwidth burden caused by the external storage can be avoided, and the data caching efficiency is improved.
Drawings
Fig. 1 is a flowchart of a data caching method according to an exemplary embodiment of the present application.
FIG. 2 is a flowchart of the data storage step provided in an exemplary embodiment of the present application.
Fig. 3 is a flowchart illustrating data transmission steps according to an exemplary embodiment of the present application.
Fig. 4 is a flowchart of a data caching method according to still another exemplary embodiment of the present application.
Fig. 5 is a block diagram of a data caching apparatus according to an exemplary embodiment of the present application.
Fig. 6 is a block diagram of a data buffering apparatus according to still another exemplary embodiment of the present application.
Fig. 7 is a block diagram of a data caching system according to an exemplary embodiment of the present application.
Fig. 8 is a block diagram of a data caching system according to still another exemplary embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, an embodiment of the present application provides a data caching method applied to the caching process of a data processing system; for illustration, the cache system is taken as the execution subject. The method specifically includes the following steps:
step S1, a data receiving step: and receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes.
The data input unit is the data source that inputs the data stream; it may be an audio acquisition device, an image acquisition device, or any other hardware device that needs to acquire a large amount of data in real time, and the data stream may be of various kinds, such as an audio or image data stream, for example an image data stream. Specifically, the cache system may receive the data stream input by the data input unit in real time and divide it into a plurality of data blocks of a preset size according to a data block threshold preconfigured in the cache system, so that the data blocks have the same size and do not exceed the data block threshold; the size of a data block may be equal to or smaller than the data block threshold. For example, when the data block threshold is two lines of image data, the data stream may be divided into data blocks each holding two lines of image data, or into data blocks each holding less than two lines of image data.
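Purely as an illustration of the splitting just described (the C types, the block size and the callback below are assumptions made for this sketch, not part of the described method), a stream can be accumulated and emitted in fixed-size blocks as follows:

    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE 4096u  /* assumed "first threshold": bytes per data block */

    /* Hypothetical callback invoked once per complete block, in arrival order. */
    typedef void (*block_handler)(const unsigned char *block, size_t len, void *ctx);

    /* Accumulates an incoming byte stream and emits BLOCK_SIZE-sized blocks. */
    typedef struct {
        unsigned char buf[BLOCK_SIZE];
        size_t fill;              /* bytes accumulated so far */
    } splitter;

    static void splitter_feed(splitter *s, const unsigned char *data, size_t len,
                              block_handler emit, void *ctx)
    {
        while (len > 0) {
            size_t take = BLOCK_SIZE - s->fill;
            if (take > len)
                take = len;
            memcpy(s->buf + s->fill, data, take);
            s->fill += take;
            data += take;
            len -= take;
            if (s->fill == BLOCK_SIZE) {  /* a complete block: hand it on */
                emit(s->buf, BLOCK_SIZE, ctx);
                s->fill = 0;
            }
        }
    }

Splitting at a fixed block size is what later allows the storage decision to be made once per block rather than once per byte.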
Step S2, a data storage step: sequentially storing a plurality of data blocks with preset sizes; in the storing process of each data block, the data block is stored in the first internal storage unit or the second internal storage unit according to whether the current residual space of the first internal storage unit is enough to store the data block, and the second internal storage unit is configured to send the stored data to the external storage unit when the stored data amount reaches a second threshold value.
The first internal storage unit and the second internal storage unit are both internally stored, and the second internal storage unit is used for caching data which needs to be sent to the external storage unit; the second threshold may be a threshold set in advance by the user, for example, half-line image data.
Step S3, a storage destination recording step: the storage destination information of each data block is recorded.
Wherein, the storage destination information of each data block can be the identification information or address information of the storage unit; specifically, in the case where one data block is stored in the first internal storage unit, the storage destination information of the data block may be identification information of the first internal storage unit, for example, tag1; when a data block is sent to the second internal storage unit and stored in the external storage unit by the second internal storage unit, the storage destination information of the data block may be identification information of the external storage unit, for example tag2.
Specifically, referring to fig. 2, the data storing step may include the following steps:
step S21, the data stream input by the data input unit is received, and the data stream is divided into a plurality of data blocks with preset sizes.
Step S22, judging whether the current residual space of the first internal storage unit is enough to store the data block.
Step S23, if yes, the data block is stored in the first internal storage unit, and the storage destination information of the data block is recorded.
Step S24, if not, the data block is stored in the second internal storage unit, and the storage destination information of the data block is recorded.
Step S25, judging whether the data amount stored in the second internal storage unit reaches a second threshold value.
And step S26, if yes, the data stored in the second internal storage unit is sent to the external storage unit.
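A minimal sketch of the decision made in steps S21 to S26 above; the capacities, thresholds, C arrays and helper names are all assumptions for illustration, whereas in the described system the units would typically be hardware FIFOs:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE       4096u                 /* assumed first threshold  */
    #define SECOND_THRESHOLD 2048u                 /* assumed second threshold */
    #define UNIT1_CAPACITY   (16u * BLOCK_SIZE)    /* first internal unit      */
    #define UNIT2_CAPACITY   (2u * BLOCK_SIZE)     /* second internal unit     */

    typedef struct { unsigned char data[UNIT1_CAPACITY]; size_t used; } unit1_t;
    typedef struct { unsigned char data[UNIT2_CAPACITY]; size_t used; } unit2_t;

    /* Stand-in for the write path to the external (off-chip) storage unit. */
    static void external_write(const unsigned char *data, size_t len)
    {
        (void)data; (void)len;
    }

    /* Stand-in for the marking unit: true = first internal unit, false = external. */
    static void record_destination(size_t block_index, bool in_first_unit)
    {
        (void)block_index; (void)in_first_unit;
    }

    static void store_block(unit1_t *u1, unit2_t *u2,
                            const unsigned char *block, size_t block_index)
    {
        if (UNIT1_CAPACITY - u1->used >= BLOCK_SIZE) {
            /* Step S23: enough remaining space, keep the block in the first unit. */
            memcpy(u1->data + u1->used, block, BLOCK_SIZE);
            u1->used += BLOCK_SIZE;
            record_destination(block_index, true);
        } else {
            /* Step S24: route the block through the second unit instead. */
            memcpy(u2->data + u2->used, block, BLOCK_SIZE);
            u2->used += BLOCK_SIZE;
            record_destination(block_index, false);
            /* Steps S25 and S26: flush to external storage at the second threshold. */
            if (u2->used >= SECOND_THRESHOLD) {
                external_write(u2->data, u2->used);
                u2->used = 0;
            }
        }
    }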
Step S4, a data transmission step: when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
The data sending condition may be an internal trigger condition or an external trigger condition. The internal trigger condition may be a data sending condition preconfigured in the cache system; for example, if timed sending is configured, the condition is satisfied when a timer set in the cache system reaches the preset time, and the system then starts fetching data for sending. The external trigger condition may be the receipt of a data acquisition request sent by an external data processing module, for example when a downstream image processing module of the cache system in an image data processing system has become idle.
Specifically, when the buffer system satisfies the data transmission condition, each data block is acquired from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and each data block is sequentially transmitted to the data output unit in time sequence.
In a specific implementation, the first internal storage unit, the second internal storage unit and the third internal storage unit may all use FIFO (First In First Out) memories, so that a continuous data stream can be buffered, data loss is prevented, and the data transmission speed is improved.
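For reference, a generic software ring buffer with the same first-in-first-out behaviour might look as follows; this is only an assumed illustration of FIFO ordering, not the hardware FIFO memories mentioned above:

    #include <stdbool.h>
    #include <stddef.h>

    #define FIFO_CAPACITY 8192u  /* assumed depth */

    typedef struct {
        unsigned char buf[FIFO_CAPACITY];
        size_t head;   /* next byte to read  */
        size_t tail;   /* next slot to write */
        size_t count;  /* bytes currently stored */
    } fifo_t;

    static bool fifo_push(fifo_t *f, unsigned char b)
    {
        if (f->count == FIFO_CAPACITY)
            return false;                      /* full: caller must wait or spill */
        f->buf[f->tail] = b;
        f->tail = (f->tail + 1) % FIFO_CAPACITY;
        f->count++;
        return true;
    }

    static bool fifo_pop(fifo_t *f, unsigned char *b)
    {
        if (f->count == 0)
            return false;                      /* empty */
        *b = f->buf[f->head];
        f->head = (f->head + 1) % FIFO_CAPACITY;
        f->count--;
        return true;
    }

Because reads always leave in the order writes arrived, the original time order of the data blocks is preserved through each unit.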
In step S4, when the data transmission condition is satisfied and data is to be output, the data blocks cached in the first internal storage unit may be read preferentially. When the amount of data to be cached is small enough for the first internal storage unit alone, data can be obtained from the first internal storage unit only and sent to the data output unit, which improves data caching efficiency. When the first internal storage unit is not sufficient, the external storage unit stores the data that could not be placed in the first internal storage unit; in that case the data blocks cached in the first internal storage unit are obtained from it and output, after which the data blocks cached in the third internal storage unit are obtained and output. The third internal storage unit automatically fetches data from the external storage unit whenever its remaining storage space is greater than or equal to the third threshold, rather than waiting until the data transmission condition is satisfied; as a result, part of the data to be output has usually already been moved from the external storage unit into the third internal storage unit before it is needed, and there is no need to wait for the third internal storage unit to fetch data from the external storage unit at output time. Because the data sending step interfaces with the data output unit directly through internal storage, the delay and bandwidth burden caused by external storage are avoided and caching efficiency is improved.
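The background prefetch behaviour of the third internal storage unit described above can be sketched as follows; the capacity, the third threshold and the external-read helper are assumptions, and the real unit would be a FIFO rather than a flat buffer:

    #include <stddef.h>

    #define UNIT3_CAPACITY  8192u   /* assumed size of the third internal unit */
    #define THIRD_THRESHOLD 2048u   /* assumed third threshold                 */

    typedef struct { unsigned char data[UNIT3_CAPACITY]; size_t used; } unit3_t;

    /* Stand-in for the off-chip read path: copies up to max_len pending bytes
     * from the external storage unit and returns how many were available. */
    static size_t external_read(unsigned char *dst, size_t max_len)
    {
        (void)dst; (void)max_len;
        return 0;
    }

    /* Called whenever the unit drains (not only when the send condition is met),
     * so data is usually already on chip by the time the send step needs it. */
    static void unit3_prefetch(unit3_t *u3)
    {
        size_t free_space = UNIT3_CAPACITY - u3->used;
        if (free_space >= THIRD_THRESHOLD)
            u3->used += external_read(u3->data + u3->used, free_space);
    }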
The data caching method provided in the above embodiment comprises: receiving a data stream input by a data input unit and dividing it into a plurality of data blocks of a preset size; storing the data blocks sequentially; during the storage of each data block, judging whether the current remaining space of the first internal storage unit is sufficient to store the data block; if so, storing the data block in the first internal storage unit so that internal storage is fully utilized; if not, storing the data block in the second internal storage unit, which is configured to send its stored data to the external storage unit when the amount of stored data reaches a second threshold; recording the storage destination information of each data block; and, when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to its storage destination information and sending the data blocks to the data output unit in sequence, the third internal storage unit being configured to fetch data from the external storage unit when its remaining storage space is greater than or equal to a third threshold. Because the method splits the data stream into blocks, received data keeps being stored in the first internal storage unit as long as its remaining space can hold one block of the preset size, so the first internal storage unit is fully utilized and the amount of data that must be cached in the external storage unit is reduced. At the same time, the first internal storage unit, which is internal storage, exchanges data directly with the data input unit and the data output unit; the external storage unit never connects to them directly, and its data access to the data processing system is instead handled by the second and third internal storage units, which are also internal storage. The system therefore only ever exchanges data with internal storage, the delay and system bandwidth burden that external storage would cause are avoided, and data caching efficiency is improved.
In some embodiments, step S1 specifically includes the steps of:
acquiring a preset first threshold value, and dividing the data stream in time order into a plurality of data blocks of the same size according to the first threshold value, wherein the size of each data block is equal to the first threshold value. The first threshold is the threshold used to segment the data stream and may be preset by the user, for example as one line of image data.
In the above embodiment, the data stream is split according to the preset first threshold so that the size of each data block equals the first threshold, and the user may configure the first threshold according to actual needs, which makes the size of the data blocks easy to configure and adjust.
In some embodiments, step S2 specifically includes the steps of:
detecting a current remaining storage capacity value of the first internal storage unit; and judging whether the residual space of the first internal storage unit is enough to store the data block according to the current residual storage capacity value of the first internal storage unit and the first threshold value.
Specifically, the cache system may monitor the remaining storage capacity value of the first internal storage unit in real time, and compare the current remaining storage capacity value of the first internal storage unit with a first threshold value; if the current residual storage capacity value of the first internal storage unit is greater than or equal to a first threshold value, judging that the current residual space of the first internal storage unit is enough to store the data block; otherwise, it is determined that the current remaining space of the first internal memory unit is insufficient to store the data block.
In this embodiment, by acquiring the current remaining storage capacity value of the first internal storage unit and comparing the current remaining storage capacity value of the first internal storage unit with the first threshold value to determine whether the current remaining space of the first internal storage unit is sufficient to store the data block, it is possible to simply and quickly determine where to store the data block, and further improve efficiency.
In the above embodiments, the first threshold is greater than or equal to the second threshold.
In the above embodiment, the first threshold may be set greater than or equal to the second threshold, so that the size of each data block reaches or exceeds the level at which the second internal storage unit sends its data to external storage. In other words, as soon as one data block is stored in the second internal storage unit, that unit sends the block to the external storage unit for storage, freeing its own storage space to receive the next data block; this further reduces delay and improves data caching efficiency.
In some embodiments, referring to fig. 3, step S4 specifically includes the following steps:
step S41, when the data transmission condition is satisfied, the storage destination information of each recorded data block is acquired.
The storage destination information of each data block is a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit. In a specific implementation, the first identifier and the second identifier may be simple numeric identifiers, which saves the space needed to store the destination information and speeds up its recognition. For example, the first identifier may be set to 1 when a data block is stored in the first internal storage unit, and the second identifier may be set to 0 when a data block is stored in the external storage unit.
Step S42, judging the storage destination information of the data block as a first identifier or a second identifier.
Step S43, when the storage destination information of the data block is the first identification, acquiring the data block from the first internal storage unit; and when the storage destination information of the data block is the second identifier, acquiring the data block from the third internal storage unit.
Wherein the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value. Specifically, whether the remaining storage space of the third internal storage unit is greater than or equal to a third threshold value is determined, and if yes, data is acquired from the external storage unit and stored in the third internal storage unit. The third threshold may be a threshold set in advance by the user, for example, half-line image data.
Step S44, the acquired data blocks are sequentially sent to the data output unit.
Specifically, the buffer system sequentially sends the acquired data blocks to the data output unit according to the time sequence.
In the above embodiment, the first identifier or the second identifier is used as the storage destination information of each data block, so that the storage position of each data block can be determined more efficiently and the data can be acquired from the first internal storage unit or the third internal storage unit and output.
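A hedged sketch of the dispatch in steps S41 to S44; the 1/0 tag values follow the example given above, while the helper names, types and the scratch buffer are assumptions:

    #include <stdbool.h>
    #include <stddef.h>

    enum { TAG_FIRST_UNIT = 1, TAG_EXTERNAL = 0 };  /* first / second identifier */

    /* Stand-ins for popping one cached block from the first or third internal
     * unit and for handing a block to the data output unit. */
    static bool unit1_pop_block(unsigned char *dst, size_t n) { (void)dst; (void)n; return true; }
    static bool unit3_pop_block(unsigned char *dst, size_t n) { (void)dst; (void)n; return true; }
    static void output_block(const unsigned char *b, size_t n) { (void)b; (void)n; }

    /* Replays the recorded tags in arrival order and emits the blocks accordingly. */
    static void send_all(const unsigned char *tags, size_t block_count,
                         unsigned char *scratch, size_t block_size)
    {
        for (size_t i = 0; i < block_count; i++) {
            bool ok = (tags[i] == TAG_FIRST_UNIT)
                          ? unit1_pop_block(scratch, block_size)   /* step S43 */
                          : unit3_pop_block(scratch, block_size);  /* step S43 */
            if (ok)
                output_block(scratch, block_size);                 /* step S44 */
        }
    }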
In some embodiments, the external memory unit includes a first external memory unit for receiving output data of the second internal memory unit and a second external memory unit for outputting data to the third internal memory unit; referring to fig. 4, the method may further include the steps of:
step S5, iterative storage: determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, taking the first external storage unit as a data input unit of the next stage, and taking the second external storage unit as a data output unit of the next stage, and sequentially performing iterative configuration steps, data receiving steps, data storage steps, storage destination recording steps and data sending steps in an iterative manner until the iterative number reaches a preset iterative number.
In a specific implementation process, the preset iteration number may be 1 or 2.
In the above embodiment, data storage can be implemented iteratively: on top of the original cache system, the combination of internal and external storage is flexibly adjusted by configuring a next-stage internal/external storage combination for each iteration, so that a user can configure or expand the cache structure according to actual requirements, for example building a combined internal/external storage architecture with lower total cost or smaller delay by adjusting the ratio of internal to external storage.
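Structurally, the iteration amounts to chaining identical stages, where each stage's two external units play the role of the next stage's data input and output units. The following type is an assumed sketch of that topology, not an implementation of the patent:

    /* One caching stage: three internal (on-chip) units plus the two halves of
     * the external (off-chip) unit, the write side and the read side. */
    typedef struct cache_stage {
        void *first_internal;      /* sits directly in the input/output path   */
        void *second_internal;     /* staging buffer toward external storage   */
        void *third_internal;      /* prefetch buffer from external storage    */
        void *first_external;      /* receives the second unit's output        */
        void *second_external;     /* feeds the third unit                     */
        struct cache_stage *next;  /* next-stage combination, if configured    */
    } cache_stage;

    /* Wiring rule of the iterative step: stage N's first external unit acts as
     * stage N+1's data input unit, and stage N's second external unit acts as
     * stage N+1's data output unit. */
    static void chain_stages(cache_stage *stage, cache_stage *next_stage)
    {
        stage->next = next_stage;
    }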
In some embodiments, the first internal memory unit, the second internal memory unit, and the third internal memory unit are on-chip memories, and the external memory unit is an off-chip memory.
The on-chip memory is also referred to as internal memory. In a single-chip microcomputer, the on-chip memory comprises an on-chip read-only memory (ROM) for storing program code and an on-chip random access memory (RAM) for storing data; the off-chip memory comprises an off-chip ROM for storing program code and an off-chip RAM for storing the user's rewritable data.
Preferably, the on-chip Memory may be a Static Random-Access Memory (SRAM); the off-chip memory may be dynamic random access memory (Dynamic Random Access Memory, DRAM).
SRAM offers a high read/write speed but is expensive and limited in capacity; compared with SRAM, DRAM has a higher integration density, lower power consumption and lower cost, and is suitable for large-capacity storage.
In the above embodiment, the first internal memory unit, the second internal memory unit and the third internal memory unit are on-chip memories (for example, SRAM), and the external memory unit is an off-chip memory (for example, DRAM); through this optimized combination of on-chip and off-chip memories, cost is reduced and delay is avoided.
Referring to fig. 5, an embodiment of the present application provides a data caching apparatus, which includes:
a data receiving module 11, configured to receive a data stream input by a data input unit, and divide the data stream into a plurality of data blocks with preset sizes;
a data storage module 12, configured to sequentially store a plurality of data blocks of a preset size; judging whether the residual space of the first internal storage unit is enough to store the data blocks in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit configured to transmit the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
A storage destination recording module 13 for recording storage destination information of each data block;
a data transmission module 14, configured to acquire each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block when the data transmission condition is satisfied, and sequentially transmit each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
In some embodiments, referring to fig. 6, the external memory unit includes a first external memory unit for receiving output data of the second internal memory unit and a second external memory unit for outputting data to the third internal memory unit; the apparatus further comprises:
the iteration storage module 15 is configured to determine a first internal storage unit, an external storage unit, a second internal storage unit, and a third internal storage unit of a next stage, and sequentially execute an iteration configuration step, a data receiving step, a data storage step, a storage destination recording step, and a data sending step in an iterative manner by using the first external storage unit as a data input unit of the next stage and the second external storage unit as a data output unit of the next stage until the iteration number reaches a preset iteration number.
In a specific implementation process, the preset iteration number may be 1 or 2.
In some embodiments, the data receiving module 11 is specifically configured to:
acquiring a preset first threshold value, and dividing the data stream in time order into a plurality of data blocks of the same size according to the first threshold value, wherein the size of each data block is equal to the first threshold value.
In some embodiments, the data storage module 12 is specifically configured to:
detecting a current remaining storage capacity value of the first internal storage unit; and judging whether the residual space of the first internal storage unit is enough to store the data block according to the current residual storage capacity value of the first internal storage unit and the first threshold value.
In some embodiments, the first threshold is greater than or equal to the second threshold.
In some embodiments, the data transmission module 14 is specifically configured to:
obtaining the recorded storage destination information of the data blocks, wherein the storage destination information of each data block is either a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit; when the storage destination information of a data block is the first identifier, acquiring the data block from the first internal storage unit; and when the storage destination information of a data block is the second identifier, acquiring the data block from the third internal storage unit.
In some embodiments, the first internal memory unit, the second internal memory unit, and the third internal memory unit are on-chip memories, and the external memory unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
In some embodiments, the data stream is an image data stream.
The specific limitation of the data buffering device provided in this embodiment can be referred to the above embodiments of the data buffering method, and will not be repeated here. The modules in the data caching apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
Referring to fig. 7, an embodiment of the present application provides a data caching system, which includes a control unit, a data receiving unit 101A, a first internal storage unit 102A, a second internal storage unit 103A, a third internal storage unit 104A, an external storage unit 105A, an internal marking unit 106A, and a data transmitting unit 107A;
A data receiving unit 101A, configured to receive the data stream input by the data input unit 108A, and divide the data stream into a plurality of data blocks with preset sizes, where the size of each data block is equal to a first threshold; sequentially storing a plurality of data blocks with preset sizes; in the storage process of each data block, acquiring a current residual storage capacity value of the first internal storage unit 102A, and judging whether the current residual space of the first internal storage unit 102A is enough to store the data block; if yes, storing the data block into the first internal storage unit 102A; if not, storing the data block into the second internal storage unit 103A; storing the storage destination information of each data block in the internal marking unit 106A;
a first internal storage unit 102A for receiving and storing the data transmitted by the data receiving unit 101A, and transmitting the current remaining storage capacity value of the first internal storage unit 102A to the data receiving unit 101A;
a second internal storage unit 103A for caching data to be transmitted to the external storage unit 105A, the second internal storage unit 103A being configured to transmit the data stored therein to the external storage unit 105A when the amount of the data stored therein reaches a second threshold;
A third internal storage unit 104A for caching data to be read from the external storage unit 105A, the third internal storage unit 104A being configured to acquire data from the external storage unit 105A when a remaining storage space thereof is greater than or equal to a third threshold value;
an internal marking unit 106A, configured to cache the storage destination information of each data block;
a data transmission unit 107A for acquiring the storage destination information of each data block from the internal flag unit 106A when the data transmission condition is satisfied, acquiring each data block from the first internal storage unit 102A or the third internal storage unit 104A according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit 109A;
an external storage unit 105A for caching the data that needs to be stored off chip;
and the control unit is used for configuring a first threshold value, a second threshold value, a third threshold value and a data transmission condition.
The data input unit 108A is a data source, and the data output unit 109A is a data output destination, such as an external data processing system. The data receiving unit 101A is connected to the data input unit 108A, the first internal storage unit 102A, the second internal storage unit 103A, and the internal marking unit 106A, respectively; the data transmitting unit 107A is connected to the data output unit 109A, the first internal storage unit 102A, the third internal storage unit 104A, and the internal marking unit 106A, respectively; the external storage unit 105A is connected to the second internal storage unit 103A and the third internal storage unit 104A, respectively; the control unit is connected to the data receiving unit 101A, the data transmitting unit 107A, the second internal storage unit 103A, and the third internal storage unit 104A, respectively.
In the above system, what the internal marking unit 106A exchanges with the data receiving unit 101A and the data sending unit 107A is a stream of markers.
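The configuration role of the control unit can be summarised by a small record of the parameters it is described as setting; the field names and the consistency check (which reflects the preferred relation between the first and second thresholds) are assumptions for illustration:

    #include <stdbool.h>
    #include <stddef.h>

    /* Parameters the control unit configures for the caching system. */
    typedef struct {
        size_t first_threshold;   /* block size used when splitting the stream */
        size_t second_threshold;  /* flush level of the second internal unit   */
        size_t third_threshold;   /* free space that triggers prefetching      */
        bool   send_on_timer;     /* example internal trigger (assumed)        */
        bool   send_on_request;   /* example external trigger (assumed)        */
    } cache_config;

    /* Sanity check matching the preferred embodiment: a whole block placed in
     * the second unit should immediately reach that unit's flush threshold. */
    static bool cache_config_valid(const cache_config *cfg)
    {
        return cfg->first_threshold >= cfg->second_threshold
            && cfg->first_threshold > 0
            && cfg->third_threshold > 0;
    }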
In some embodiments, referring to fig. 8, the external storage unit 105A of the system includes a first external storage unit 1051A for receiving the second internal storage unit output data and a second external storage unit 1052A for outputting data to the third internal storage unit;
the system also includes an iteration module; the iteration module comprises a control unit, a data receiving unit 101B, a first internal storage unit 102B, a second internal storage unit 103B, a third internal storage unit 104B, an external storage unit 105B, an internal marking unit 106B and a data sending unit 107B at the next stage;
the first external storage unit 1051A in the system is connected to the data receiving unit 101B of the next stage included in the iteration module as the data input unit 108B of the next stage in the iteration module, and the second external storage unit 1052A is connected to the data transmitting unit 107B of the next stage included in the iteration module as the data output unit 109B of the next stage in the iteration module;
the connection relation between the units of the next stage in the iteration module is the same as the connection relation between the units in the system.
In some embodiments, the first threshold is greater than or equal to the second threshold.
In some embodiments, the first internal memory unit, the second internal memory unit, and the third internal memory unit are on-chip memories, and the external memory unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
The specific limitation of the data buffering system provided in this embodiment may be referred to the above embodiments of the data buffering method, and will not be described herein. The various elements of the data caching system described above may be implemented, in whole or in part, in software, hardware, and combinations thereof. The units can be embedded in hardware or independent of a processor in the computer equipment, and can also be stored in a memory in the computer equipment in a software mode, so that the processor can call and execute the operations corresponding to the units.
An embodiment of the present application provides a computer device that may include a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, causes the processor to perform the steps of the data caching method of any one of the embodiments described above.
The working process, working details and technical effects of the computer device provided in this embodiment may be referred to the above embodiments of the data caching method, which are not described herein.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a data caching method of any of the embodiments described above. The computer readable storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash Memory, and/or a Memory Stick (Memory Stick), etc., where the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
The working process, working details and technical effects of the computer readable storage medium provided in this embodiment can be referred to the above embodiments of the data caching method, and are not repeated here.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (27)

1. A method of caching data, the method comprising:
a data receiving step of receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
a data storage step of sequentially storing the plurality of data blocks with preset sizes; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into the first internal storage unit; if not, storing the data block in a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the stored data amount reaches a second threshold;
A storage destination recording step of recording storage destination information of each of the data blocks;
a data transmission step of acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block when a data transmission condition is satisfied, and sequentially transmitting each data block to a data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
2. The method of claim 1, wherein dividing the data stream into a plurality of data blocks of a preset size comprises:
acquiring a preset first threshold value, and dividing the data stream into a plurality of data blocks with the same size according to the first threshold value in time sequence, wherein the size of each data block is equal to the first threshold value.
3. The method of claim 2, wherein determining whether the current remaining space of the first internal memory unit is sufficient to store the data block comprises:
detecting a current remaining storage capacity value of the first internal storage unit;
and determining, based on the current remaining storage capacity value of the first internal storage unit and the first threshold, whether the remaining space of the first internal storage unit is sufficient to store the data block.
4. A method according to claim 2 or 3, wherein the first threshold value is greater than or equal to the second threshold value.
5. A method according to any one of claims 1 to 3, wherein acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block comprises:
acquiring the recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit;
when the storage destination information of the data block is a first identifier, acquiring the data block from the first internal storage unit;
and when the storage destination information of the data block is the second identifier, acquiring the data block from the third internal storage unit.
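Illustrative note: one possible reading of the retrieval in claim 5, sketched in C. The helpers `read_from_first_unit`, `read_from_third_unit`, and `send_to_output_unit` and the flag array are assumed names continuing the earlier sketch; the second identifier simply selects the third internal unit, which stages data prefetched from external memory.

```c
#include <stdbool.h>
#include <stddef.h>

extern bool dest_in_first_unit[];                            /* storage destination record (first/second identifier) */
extern const unsigned char *read_from_first_unit(size_t i);  /* assumed helper: read block i from the first internal unit */
extern const unsigned char *read_from_third_unit(size_t i);  /* assumed helper: read block i staged in the third internal unit */
extern void send_to_output_unit(const unsigned char *block); /* assumed helper: hand the block to the data output unit */

/* Data transmission step: the recorded identifier decides where each block is fetched from,
 * so the output order matches the original input order regardless of where a block was kept. */
void transmit_blocks(size_t block_count)
{
    for (size_t i = 0; i < block_count; ++i) {
        const unsigned char *block = dest_in_first_unit[i]
                                         ? read_from_first_unit(i)
                                         : read_from_third_unit(i);
        send_to_output_unit(block);
    }
}
```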
6. The method of claim 1, wherein the external storage unit comprises a first external storage unit for receiving data output by the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit; the method further comprises:
an iterative storage step of sequentially and iteratively executing, until the number of iterations reaches a preset iteration number, a step of determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, with the first external storage unit serving as the data input unit of the next stage and the second external storage unit serving as the data output unit of the next stage, followed by the data receiving step, the data storage step, the storage destination recording step and the data transmission step.
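Illustrative note: the iterative storage step of claim 6 can be pictured as a cascade of caching stages, where the first external storage unit of one stage becomes the data input of the next stage and the second external storage unit becomes its data output. The structure and helper below are assumptions for illustration only.

```c
typedef struct stage {
    void *data_input;       /* where this stage receives its data stream from */
    void *data_output;      /* where this stage finally sends its data blocks to */
    void *first_external;   /* receives overflow written out by this stage's second internal unit */
    void *second_external;  /* feeds data back into this stage's third internal unit */
    /* ... first, second and third internal storage units of this stage ... */
} stage_t;

/* Assumed helper: runs one pass of the data receiving, data storage,
 * storage destination recording and data transmission steps for one stage. */
extern void run_stage(stage_t *s);

/* Iterative storage step: after each stage runs, its external units are rewired
 * as the next stage's input and output, until the preset iteration number is reached. */
void run_cascade(stage_t *stages, int preset_iterations)
{
    for (int i = 0; i < preset_iterations; ++i) {
        run_stage(&stages[i]);
        if (i + 1 < preset_iterations) {
            stages[i + 1].data_input  = stages[i].first_external;
            stages[i + 1].data_output = stages[i].second_external;
        }
    }
}
```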
7. The method of claim 6, wherein the preset iteration number is 2.
8. The method of claim 1, wherein the first internal memory unit, the second internal memory unit, and the third internal memory unit are each on-chip memory, and the external memory unit is off-chip memory.
9. The method of claim 8, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
10. The method of claim 1, wherein the data stream is an image data stream.
11. A data caching apparatus, the apparatus comprising:
the data receiving module is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
the data storage module is used for sequentially storing the plurality of data blocks of the preset size, wherein, during the storage of each data block, it determines whether the remaining space of the first internal storage unit is sufficient to store the data block; if so, it stores the data block in the first internal storage unit; if not, it stores the data block in a second internal storage unit, the second internal storage unit being configured to send the stored data to an external storage unit when the amount of stored data reaches a second threshold;
the storage destination recording module is used for recording the storage destination information of each data block;
the data transmission module is used for acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block when the data transmission condition is met, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when its remaining storage space is greater than or equal to a third threshold value.
12. The apparatus of claim 11, wherein the data receiving module is specifically configured to:
acquiring a preset first threshold, and dividing the data stream into a plurality of data blocks of the same size in chronological order according to the first threshold, wherein the size of each data block is equal to the first threshold.
13. The apparatus of claim 12, wherein the data storage module is specifically configured to:
detecting a current remaining storage capacity value of the first internal storage unit;
and determining, based on the current remaining storage capacity value of the first internal storage unit and the first threshold, whether the remaining space of the first internal storage unit is sufficient to store the data block.
14. The apparatus of claim 12 or 13, wherein the first threshold is greater than or equal to the second threshold.
15. The apparatus according to any one of claims 11 to 13, wherein the data transmission module is specifically configured to:
acquiring the recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit;
when the storage destination information of the data block is the first identifier, acquiring the data block from the first internal storage unit;
and when the storage destination information of the data block is the second identifier, acquiring the data block from the third internal storage unit.
16. The apparatus of claim 11, wherein the external storage unit comprises a first external storage unit for receiving output data of the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit; the apparatus further comprises:
the iterative storage module is used for sequentially and iteratively executing the following operations until the number of iterations reaches a preset iteration number: determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, with the first external storage unit serving as the data input unit of the next stage and the second external storage unit serving as the data output unit of the next stage; receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks of the preset size; sequentially storing the plurality of data blocks, and, during the storage of each data block, determining whether the remaining space of the first internal storage unit is sufficient to store the data block; if so, storing the data block in the first internal storage unit; if not, storing the data block in the second internal storage unit, which sends the stored data to the external storage unit when the amount of stored data reaches the second threshold; recording the storage destination information of each data block; and, when the data transmission condition is satisfied, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block and sequentially transmitting each data block to the data output unit.
17. The apparatus of claim 16, wherein the preset iteration number is 2.
18. The apparatus of claim 11, wherein the first internal memory unit, the second internal memory unit, and the third internal memory unit are each on-chip memory, and the external memory unit is off-chip memory.
19. The apparatus of claim 18, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
20. The apparatus of claim 11, wherein the data stream is an image data stream.
21. A data caching system, comprising a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data transmitting unit;
the data receiving unit is used for receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks of a preset size, the size of each data block being equal to a first threshold; sequentially storing the plurality of data blocks; during the storage of each data block, acquiring the current remaining storage capacity value of the first internal storage unit and determining whether the current remaining space of the first internal storage unit is sufficient to store the data block; if so, storing the data block in the first internal storage unit; if not, storing the data block in the second internal storage unit; and storing the storage destination information of each data block in the internal marking unit;
the first internal storage unit is used for receiving and storing the data sent by the data receiving unit and for sending its current remaining storage capacity value to the data receiving unit;
the second internal storage unit is used for caching data which needs to be sent to the external storage unit, and is configured to send the stored data to the external storage unit when the stored data amount reaches a second threshold;
the third internal storage unit is used for caching data to be read from the external storage unit, and is configured to acquire the data from the external storage unit when its remaining storage space is greater than or equal to a third threshold;
the internal marking unit is used for caching the storage destination information of each data block;
the data transmitting unit is used for acquiring the storage destination information of each data block from the internal marking unit when the data transmitting condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit;
the external storage unit is used for storing the data sent by the second internal storage unit and providing the data to be read by the third internal storage unit;
and the control unit is used for configuring the first threshold value, the second threshold value, the third threshold value and the data transmission condition.
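Illustrative note: the system of claim 21 can be summarised as a control unit that programs three thresholds and a transmission condition, plus two transfer buffers that shuttle data to and from off-chip memory. The C sketch below is a hypothetical rendering of that bookkeeping; the structure names and the DMA placeholders are assumptions, not the claimed hardware.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t first_threshold;   /* preset data block size */
    size_t second_threshold;  /* amount buffered in the second internal unit before writing out */
    size_t third_threshold;   /* free space required in the third internal unit before reading in */
    bool   tx_condition_met;  /* data transmission condition, configured/asserted by the control unit */
} control_unit_t;

typedef struct {
    size_t capacity;
    size_t used;
} storage_unit_t;

/* Background maintenance of the two transfer buffers:
 * the second internal unit drains to the external storage unit once it holds enough data,
 * and the third internal unit refills from the external storage unit once it has enough free space. */
void service_transfer_units(const control_unit_t *ctl,
                            storage_unit_t *second_unit,
                            storage_unit_t *third_unit)
{
    if (second_unit->used >= ctl->second_threshold) {
        /* placeholder for the write (e.g. DMA) of second_unit contents to the external storage unit */
        second_unit->used = 0;
    }
    size_t free_space = third_unit->capacity - third_unit->used;
    if (free_space >= ctl->third_threshold) {
        /* placeholder for the read (e.g. DMA) of pending data from the external storage unit */
        third_unit->used += free_space;   /* illustrative: assume a full refill */
    }
}
```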
22. The system of claim 21, wherein the external storage unit of the system comprises a first external storage unit for receiving data output by the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit;
the system further includes an iteration module; the iteration module comprises a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data transmitting unit of the next stage;
the first external storage unit in the system is connected with the data receiving unit of the next stage contained in the iteration module to serve as the data input unit of the next stage in the iteration module, and the second external storage unit is connected with the data transmitting unit of the next stage contained in the iteration module to serve as the data output unit of the next stage in the iteration module;
The connection relation between the units of the next stage in the iteration module is the same as the connection relation between the units in the system.
23. The system of claim 21, wherein the first threshold is greater than or equal to the second threshold.
24. The system of claim 21, wherein the first internal memory unit, the second internal memory unit, and the third internal memory unit are each on-chip memory, and the external memory unit is off-chip memory.
25. The system of claim 24, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
26. A computer device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 10.
27. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202210585432.8A 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium Active CN114968102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585432.8A CN114968102B (en) 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114968102A (en) 2022-08-30
CN114968102B (en) 2023-10-13

Family

ID=82956202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585432.8A Active CN114968102B (en) 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114968102B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115658328B (en) * 2022-12-07 2023-10-03 摩尔线程智能科技(北京)有限责任公司 Device and method for managing storage space, computing device and chip
CN116339622B (en) * 2023-02-20 2023-11-14 深圳市数存科技有限公司 Data compression system and method based on block level

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106055274A (en) * 2016-05-23 2016-10-26 联想(北京)有限公司 Data storage method, data reading method and electronic device
CN106095552A (en) * 2016-06-07 2016-11-09 华中科技大学 Multi-task graph processing method and system based on I/O deduplication
CN107422994A (en) * 2017-08-02 2017-12-01 郑州云海信息技术有限公司 Method for improving data read/write performance
CN108415855A (en) * 2018-03-06 2018-08-17 珠海全志科技股份有限公司 Video recording file storage method and device, computer device and storage medium
CN108427539A (en) * 2018-03-15 2018-08-21 深信服科技股份有限公司 Offline deduplication and compression method and device for cache device data, and readable storage medium
CN110225399A (en) * 2019-06-19 2019-09-10 深圳市共进电子股份有限公司 Streaming media processing method, device, computer equipment and storage medium
CN113254392A (en) * 2021-07-12 2021-08-13 深圳比特微电子科技有限公司 Data storage method for system on chip and device based on system on chip
WO2022062537A1 (en) * 2020-09-27 2022-03-31 苏州浪潮智能科技有限公司 Data compression method and apparatus, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015101827A1 (en) * 2013-12-31 2015-07-09 Mosys, Inc. Integrated main memory and coprocessor with low latency

Also Published As

Publication number Publication date
CN114968102A (en) 2022-08-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant