CN114968102A - Data caching method, device and system, computer equipment and storage medium - Google Patents


Info

Publication number
CN114968102A
Authority
CN
China
Prior art keywords
data
storage unit
internal storage
data block
unit
Legal status
Granted
Application number
CN202210585432.8A
Other languages
Chinese (zh)
Other versions
CN114968102B (en)
Inventor
范鑫
胡胜发
Current Assignee
Guangzhou Ankai Microelectronics Co., Ltd.
Original Assignee
Guangzhou Ankai Microelectronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangzhou Ankai Microelectronics Co ltd filed Critical Guangzhou Ankai Microelectronics Co ltd
Priority to CN202210585432.8A
Publication of CN114968102A
Application granted
Publication of CN114968102B
Legal status: Active

Classifications

    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0622 Securing storage systems in relation to access
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application belongs to the technical field of data storage and discloses a data caching method, device, system, computer equipment and storage medium. The method includes: receiving a data stream input by a data input unit and dividing it into a plurality of data blocks of a preset size; storing the data blocks in sequence, and during the storage of each data block judging whether the current remaining space of a first internal storage unit is sufficient to store it; if so, storing the data block in the first internal storage unit; if not, storing it in a second internal storage unit; recording the storage destination information of each data block; and, when a data transmission condition is met, obtaining each data block from the first internal storage unit or a third internal storage unit according to its storage destination information and transmitting the blocks to a data output unit in sequence. The method can make better use of internal storage and avoid the delay that external storage introduces into the system.

Description

Data caching method, device and system, computer equipment and storage medium
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a data caching method, apparatus, system, computer device, and storage medium.
Background
In an image data processing system, a received data stream usually needs to be buffered and then distributed to the processing modules that require it at predetermined times. Because an image data stream generally carries a high data rate, a system with large buffering requirements may use on-chip and off-chip storage units simultaneously.
On-chip and off-chip storage units each have advantages and disadvantages. An on-chip storage unit offers high read-write speed, but it is expensive and large capacities are difficult to realize. An off-chip storage unit is inexpensive and suitable for large-capacity storage, but its latency in a digital system is higher than that of an on-chip unit, and using it adds to the burden on system bandwidth. In the prior art, however, the storage space of the on-chip storage unit is difficult to utilize fully, and the off-chip storage unit introduces delay into the system.
Disclosure of Invention
The application provides a data caching method, a data caching device, a data caching system, computer equipment and a storage medium that can fully utilize internal storage and avoid the delay that external storage imposes on the system.
In a first aspect, an embodiment of the present application provides a data caching method, where the method includes:
a data receiving step of receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
a data storage step, in which a plurality of data blocks with preset sizes are stored in sequence; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
a storage destination recording step, recording the storage destination information of each data block;
a data sending step, when the data sending condition is met, obtaining each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sending each data block to the data output unit in sequence; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold.
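The four steps above can be modeled end to end in software. The following is a minimal, purely illustrative Python sketch: the patent describes hardware storage units, so the function name, the byte-counting capacity model, and the simplified one-shot prefetch into the third unit are all assumptions of this sketch, not the patented implementation (it also assumes all staged data has been flushed to external storage before sending).

```python
def cache_pipeline(stream, first_threshold, first_capacity, second_threshold):
    """Toy model of the claimed method over plain Python lists.
    All names and the byte-based capacity model are illustrative."""
    # Data receiving step: divide the stream into blocks of first_threshold bytes.
    blocks = [stream[i:i + first_threshold]
              for i in range(0, len(stream), first_threshold)]
    first, second, third, external, tags = [], [], [], [], []
    # Data storage + storage destination recording steps.
    for b in blocks:
        if first_capacity - sum(len(x) for x in first) >= len(b):
            first.append(b)        # fits: keep in the first internal unit
            tags.append(1)
        else:
            second.append(b)       # overflow: stage via the second internal unit
            tags.append(2)
            if sum(len(x) for x in second) >= second_threshold:
                external.extend(second)   # flush staged data off chip
                second.clear()
    # Data sending step: the third unit prefetches from external storage
    # (capacity and threshold checks omitted in this sketch), then blocks
    # are drained in arrival order according to their recorded destination.
    third.extend(external)
    external.clear()
    out = [first.pop(0) if t == 1 else third.pop(0) for t in tags]
    return b"".join(out)
```

Because the recorded tags are replayed in arrival order, the output stream is reassembled in the original sequence regardless of which unit each block passed through.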
In one embodiment, dividing the data stream into a plurality of data blocks of a preset size includes:
acquiring a preset first threshold, and sequentially dividing the data stream into a plurality of data blocks with the same size according to the first threshold in time sequence, wherein the size of each data block is equal to the first threshold.
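As a hedged illustration, the time-ordered division by the first threshold might look like the following Python sketch (the function name and the byte-stream model are assumptions; note that in this simplified byte model the final block can be shorter when the stream length is not an exact multiple of the threshold):

```python
def split_into_blocks(stream: bytes, first_threshold: int) -> list[bytes]:
    """Divide the incoming stream into consecutive blocks of
    first_threshold bytes, preserving arrival order."""
    return [stream[i:i + first_threshold]
            for i in range(0, len(stream), first_threshold)]
```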
In one embodiment, determining whether the current remaining space of the first internal storage unit is sufficient to store the data block comprises:
detecting a current remaining storage capacity value of the first internal storage unit;
and judging whether the residual space of the first internal storage unit is enough to store the data block or not according to the current residual storage capacity value of the first internal storage unit and a first threshold value.
Preferably, the first threshold is greater than or equal to the second threshold.
In one embodiment, retrieving each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block includes:
acquiring recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier for marking that the data block is stored in a first internal storage unit or a second identifier for marking that the data block is stored in an external storage unit;
when the storage destination information of the data block is a first identifier, acquiring the data block from a first internal storage unit;
and when the storage destination information of the data block is the second identifier, the data block is acquired from the third internal storage unit.
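The dispatch on the recorded identifier can be sketched as follows (the tag values and the list-based units are illustrative, not the patent's actual marking scheme):

```python
TAG_INTERNAL = 1  # first identifier: block stayed in the first internal unit
TAG_EXTERNAL = 2  # second identifier: block was routed to external storage

def fetch_block(tag, first_unit, third_unit):
    """Fetch the next block from the unit named by its destination tag.
    Blocks tagged external are read from the third internal unit, which
    is assumed to have already prefetched them from external storage."""
    if tag == TAG_INTERNAL:
        return first_unit.pop(0)
    return third_unit.pop(0)
```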
In one embodiment, the external storage unit includes a first external storage unit for receiving data output from the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit; the method further comprises the following steps:
an iterative storage step: determining the first internal storage unit, external storage unit, second internal storage unit and third internal storage unit of the next stage; taking the first external storage unit as the data input unit of the next stage and the second external storage unit as the data output unit of the next stage; and sequentially and iteratively executing this configuration together with the data receiving step, the data storage step, the storage destination recording step and the data sending step until the number of iterations reaches a preset number of iterations.
Preferably, the preset number of iterations is 2.
In one embodiment, the first internal storage unit, the second internal storage unit and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
In one embodiment, the data stream is an image data stream.
In a second aspect, an embodiment of the present application provides a data caching apparatus, including:
the data receiving module is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
the data storage module is used for sequentially storing a plurality of data blocks with preset sizes; judging whether the residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
the storage destination recording module is used for recording the storage destination information of each data block;
the data transmission module is used for acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block and sequentially transmitting each data block to the data output unit when the data transmission condition is met; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold.
In a third aspect, an embodiment of the present application provides a data caching system, where the system includes a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit, and a data sending unit;
the data receiving unit is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes, and the size of each data block is equal to a first threshold value; sequentially storing a plurality of data blocks with preset sizes; in the storage process of each data block, acquiring a current remaining storage capacity value of a first internal storage unit, and judging whether the current remaining space of the first internal storage unit is enough to store the data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit; storing the storage destination information of each data block into an internal marking unit;
the first internal storage unit is used for receiving and storing the data sent by the data receiving unit and sending the current remaining storage capacity value of the first internal storage unit to the data receiving unit;
the second internal storage unit is used for caching data required to be sent to the external storage unit and is configured to send the stored data to the external storage unit when the amount of the data stored in the second internal storage unit reaches a second threshold value;
the third internal storage unit is used for caching data required to be read from the external storage unit and is configured to acquire the data from the external storage unit when the remaining storage space of the third internal storage unit is larger than or equal to a third threshold value;
the internal marking unit is used for caching the storage destination information of each data block;
the data transmission unit is used for acquiring the storage destination information of each data block from the internal marking unit when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit;
the external storage unit is used for caching data needing to be stored in the external storage unit;
and the control unit is used for configuring a first threshold, a second threshold, a third threshold and a data sending condition.
In one embodiment, the external storage units of the system comprise a first external storage unit for receiving data output by the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit;
the system also comprises an iteration module; the iteration module comprises a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data sending unit at the next stage;
a first external storage unit in the system is connected with a next-level data receiving unit contained in the iteration module to serve as a next-level data input unit in the iteration module, and a second external storage unit is connected with a next-level data sending unit contained in the iteration module to serve as a next-level data output unit in the iteration module;
the connection relation between each unit of the next stage in the iteration module is the same as the connection relation between each unit in the system.
In a fourth aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to perform the steps of the data caching method according to any one of the above embodiments.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the data caching method according to any one of the above embodiments.
In summary, compared with the prior art, the beneficial effects brought by the technical scheme provided by the application at least include:
the application provides a data caching method, a data caching device, a data caching system, computer equipment and a storage medium, wherein the method comprises the following steps: receiving a data stream input by a data input unit, and dividing the data stream into a plurality of data blocks with preset sizes; sequentially storing a plurality of data blocks with preset sizes; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if so, storing the data block into a first internal storage unit so as to fully utilize internal storage; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold; recording storage destination information of each data block; when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold. 
By partitioning the data stream, the method allows received data to continue to be stored in the first internal storage unit as long as its remaining space can hold one data block of the preset size, so the first internal storage unit is fully utilized and the amount of data that must be cached in the external storage unit is reduced. Meanwhile, the data input unit and the data output unit interface directly with the internally stored first internal storage unit for data transfer, so the external storage unit need not interface directly with the data input and output units; data exchange with the external storage unit is handled through the internally stored second and third internal storage units. This ensures that the system exchanges data only with internal storage, avoids the delay and system bandwidth burden that external storage would cause, and improves data caching efficiency.
Drawings
Fig. 1 is a flowchart of a data caching method according to an exemplary embodiment of the present application.
FIG. 2 is a flowchart of data logging steps provided in an exemplary embodiment of the present application.
Fig. 3 is a flowchart of data transmission steps provided in an exemplary embodiment of the present application.
Fig. 4 is a flowchart of a data caching method according to another exemplary embodiment of the present application.
Fig. 5 is a block diagram of a data caching apparatus according to an exemplary embodiment of the present application.
Fig. 6 is a block diagram of a data caching apparatus according to another exemplary embodiment of the present application.
Fig. 7 is a block diagram of a data caching system according to an exemplary embodiment of the present application.
Fig. 8 is a block diagram of a data cache system according to another exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, an embodiment of the present application provides a data caching method, which is applied to a caching process of a data processing system, and is described by taking an example that an execution subject is a caching system, where the method specifically includes the following steps:
step S1, data receiving step: the method includes receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks of a preset size.
The data input unit is a data source that inputs the data stream; it may be an audio acquisition device, an image acquisition device, or other hardware that must acquire a large amount of data in real time, and the data stream may be of various types, such as an audio or image data stream. Specifically, the cache system may receive the data stream input by the data input unit in real time and divide it into a plurality of data blocks of a preset size according to a data blocking threshold preset in the cache system, so that all data blocks are the same size and none exceeds the threshold. For example, when the data blocking threshold is two lines of image data, the data stream may be divided into data blocks of two lines of image data each.
Step S2, data storage step: storing the plurality of data blocks of preset size in sequence; during the storage of each data block, storing it in the first internal storage unit or the second internal storage unit according to whether the current remaining space of the first internal storage unit is sufficient to store it, the second internal storage unit being configured to transmit its stored data to the external storage unit when the amount of that data reaches a second threshold.
The first internal storage unit and the second internal storage unit are both used for internal storage, and the second internal storage unit is used for caching data needing to be sent to the external storage unit; the second threshold may be a threshold set in advance by the user, for example, half line image data.
Step S3, storage destination recording step: recording the storage destination information of each data block.
Wherein, the storage destination information of each data block may be identification information or address information of the storage unit; specifically, in the case where a data block is stored in the first internal storage unit, the storage destination information of the data block may be identification information of the first internal storage unit, such as tag 1; in the case where a data block is sent to the second internal storage unit and stored in the external storage unit by the second internal storage unit, the storage destination information of the data block may be identification information of the external storage unit, such as tag 2.
Specifically, referring to fig. 2, the data storing step may include the following steps:
in step S21, the data stream input by the data input unit is received and divided into a plurality of data blocks with preset size.
In step S22, it is determined whether the current remaining space of the first internal storage unit is sufficient for storing the data block.
And step S23, if yes, storing the data block into the first internal storage unit, and recording the storage destination information of the data block.
And step S24, if not, storing the data block into the second internal storage unit, and recording the storage destination information of the data block.
In step S25, it is determined whether the amount of data stored in the second internal storage unit reaches a second threshold.
In step S26, if yes, the data stored in the second internal storage unit is sent to the external storage unit.
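Steps S21 to S26 can be condensed into a single routing routine. The following Python sketch is illustrative only: the function name, the byte-counting capacity model, and the list-based units are assumptions, not the patented hardware design.

```python
def store_block(block, first_unit, first_capacity, second_unit,
                external_unit, second_threshold, tags):
    """One pass of steps S21-S26: route a block to the first internal
    unit if it fits (S22/S23), otherwise stage it in the second unit
    (S24), flushing the second unit to external storage once it holds
    at least second_threshold bytes (S25/S26). Returns the recorded tag."""
    used = sum(len(b) for b in first_unit)
    if first_capacity - used >= len(block):      # S22: enough room on chip?
        first_unit.append(block)                 # S23: keep in first unit
        tags.append(1)
    else:
        second_unit.append(block)                # S24: stage for export
        tags.append(2)
        staged = sum(len(b) for b in second_unit)
        if staged >= second_threshold:           # S25: threshold reached?
            external_unit.extend(second_unit)    # S26: flush off chip
            second_unit.clear()
    return tags[-1]
```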
Step S4, data transmission step: when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold value.
The data transmission condition may be an internal trigger condition or an external trigger condition. An internal trigger condition is one configured in advance in the cache system; for example, if timed transmission has been configured, the condition is met when a timer in the cache system reaches a preset time, at which point data acquisition and transmission begin. An external trigger condition may be a data acquisition request sent by an external data processing module; for example, in an image data processing system, a downstream image processing module of the cache system becoming idle.
Specifically, when the data transmission condition is satisfied, the cache system acquires each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmits each data block to the data output unit in time sequence.
In a specific implementation, the first internal storage unit, the second internal storage unit, and the third internal storage unit may all use FIFO (First In, First Out) memories, which can buffer continuous data streams, preventing data loss and improving transmission speed.
In step S4, when the data transmission condition is satisfied and data is to be output, cached data blocks are acquired preferentially from the first internal storage unit. When the amount of cached data is small and the first internal storage unit suffices, data can be obtained from it and sent to the data output unit directly, improving data caching efficiency. When there is more cached data than the first internal storage unit can hold, the external storage unit stores the overflow; cached data blocks are then output first from the first internal storage unit and afterwards from the third internal storage unit. The third internal storage unit automatically fetches data from the external storage unit whenever its remaining storage space is greater than or equal to the third threshold, rather than waiting until the data transmission condition is met; as a result, part of the data to be output has usually already been moved from the external storage unit into the third internal storage unit, so no wait on the external storage unit is needed at output time. Because the data sending step connects the data output unit to the data sending unit directly through internal storage, the delay and bandwidth burden of external storage are avoided and caching efficiency is improved.
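The third unit's autonomous prefetch can be sketched as follows (an illustrative software model with assumed names; the patent's hardware unit would perform this continuously rather than on a function call):

```python
def prefetch_into_third(third_unit, third_capacity, external_unit,
                        third_threshold):
    """Pull blocks from external storage into the third internal unit
    whenever its free space is at least third_threshold bytes. Running
    this independently of the send trigger means output rarely has to
    wait on the external memory."""
    free = third_capacity - sum(len(b) for b in third_unit)
    while external_unit and free >= third_threshold:
        block = external_unit.pop(0)   # oldest external block first
        third_unit.append(block)
        free -= len(block)
```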
The data caching method provided in the above embodiment includes: receiving a data stream input by a data input unit, and dividing the data stream into a plurality of data blocks with preset sizes; sequentially storing a plurality of data blocks with preset sizes; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if so, storing the data block into a first internal storage unit so as to fully utilize internal storage; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold; recording storage destination information of each data block; when the data transmission condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially transmitting each data block to the data output unit; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold. 
By partitioning the data stream, the method allows received data to continue to be stored in the first internal storage unit as long as its remaining space can hold one data block of the preset size, so the first internal storage unit is fully utilized and the amount of data that must be cached in the external storage unit is reduced. Meanwhile, the data input unit and the data output unit interface directly with the internally stored first internal storage unit for data transfer, so the external storage unit need not interface directly with the data input and output units; data exchange with the external storage unit is handled through the internally stored second and third internal storage units. This ensures that the system exchanges data only with internal storage, avoids the delay and system bandwidth burden that external storage would cause, and improves data caching efficiency.
In some embodiments, step S1 specifically includes the following steps:
acquiring a preset first threshold, and sequentially dividing the data stream into a plurality of data blocks with the same size according to the first threshold in time sequence, wherein the size of each data block is equal to the first threshold. The first threshold is a threshold used for data stream blocking, and may be a threshold preset by a user, for example, a line of image data.
In the above embodiment, the data stream may be partitioned according to a preset first threshold, so that the size of each data block is equal to the first threshold, and a user may configure the first threshold according to actual needs, so as to configure and adjust the size of the data partition.
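The blocking step can be sketched as follows; the function name is illustrative, and the tail block may be shorter when the stream length is not a multiple of the first threshold, a case the embodiment does not address.

```python
def split_stream(stream: bytes, first_threshold: int) -> list:
    """Divide a data stream into consecutive blocks in time (byte) order,
    each block's size equal to the preset first threshold."""
    return [stream[i:i + first_threshold]
            for i in range(0, len(stream), first_threshold)]

blocks = split_stream(b"ABCDEF", first_threshold=2)
```

For an image data stream, the first threshold might correspond to one line of image data, so each block is one line.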
In some embodiments, step S2 specifically includes the following steps:
detecting a current remaining storage capacity value of the first internal storage unit; and judging whether the residual space of the first internal storage unit is enough to store the data block or not according to the current residual storage capacity value of the first internal storage unit and a first threshold value.
Specifically, the cache system may monitor the remaining storage capacity value of the first internal storage unit in real time and compare the current value with the first threshold: if the current remaining storage capacity value of the first internal storage unit is greater than or equal to the first threshold, it is determined that the current remaining space of the first internal storage unit is sufficient to store the data block; otherwise, it is determined that the current remaining space is insufficient to store the data block.
In this embodiment, the current remaining storage capacity value of the first internal storage unit is obtained and compared with the first threshold to determine whether the current remaining space is sufficient to store the data block, so the destination of each data block can be decided simply and quickly, further improving efficiency.
In the above embodiments, the first threshold value is greater than or equal to the second threshold value.
In this embodiment, the first threshold is set greater than or equal to the second threshold, so the size of each data block is at least the second threshold at which the second internal storage unit sends data to the external storage unit. As a result, once a data block is stored in the second internal storage unit, that unit immediately sends it to the external storage unit for storage, freeing its own storage space to receive the next data block; this further reduces latency and improves data caching efficiency.
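The flush-on-threshold behaviour of the second internal storage unit can be sketched as below. The class name and the list standing in for off-chip memory are assumptions for illustration only.

```python
class SecondInternalUnit:
    """Sketch of the second internal storage unit's flush rule.

    Data accumulates in a buffer; once the buffered amount reaches the
    second threshold, everything is forwarded to external storage. When
    the first threshold (block size) is >= the second threshold, a single
    block always triggers an immediate flush, as the embodiment describes.
    """

    def __init__(self, second_threshold, external):
        self.second_threshold = second_threshold
        self.external = external   # stands in for the off-chip memory
        self.buffer = []

    def store(self, block):
        self.buffer.append(block)
        if sum(len(b) for b in self.buffer) >= self.second_threshold:
            self.external.extend(self.buffer)   # flush to external storage
            self.buffer.clear()
```

With both thresholds equal to 4 bytes, storing one 4-byte block flushes it at once, so the unit is immediately free for the next block.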
In some embodiments, referring to fig. 3, step S4 specifically includes the following steps:
in step S41, when the data transmission condition is satisfied, the storage destination information of each recorded data block is acquired.
The storage destination information of each data block is either a first identifier indicating that the data block is stored in the first internal storage unit or a second identifier indicating that the data block is stored in the external storage unit. In a specific implementation, the first and second identifiers can be simple numeric identifiers, saving the storage space required for the destination information and speeding up its recognition. For example, the first identifier may be set to 1 when a data block's destination is the first internal storage unit, and the second identifier may be set to 0 when its destination is the external storage unit.
In step S42, it is determined whether the storage destination information of the data block is the first identifier or the second identifier.
Step S43, when the storage destination information of the data block is the first identifier, acquiring the data block from the first internal storage unit; and when the storage destination information of the data block is the second identification, the data block is acquired from the third internal storage unit.
Wherein the third internal storage unit is configured to retrieve data from the external storage unit when its remaining storage space is greater than or equal to a third threshold. Specifically, whether the remaining storage space of the third internal storage unit is greater than or equal to a third threshold value or not is judged, and if yes, data are obtained from the external storage unit and stored in the third internal storage unit. The third threshold may be a threshold preset by the user, for example, half line of image data.
In step S44, the acquired data blocks are sequentially transmitted to the data output unit.
Specifically, the cache system sequentially sends the acquired data blocks to the data output unit according to a time sequence.
In the above embodiment, using the first or second identifier as a data block's storage destination information makes it possible to determine each data block's storage location efficiently and thus decide whether the data should be fetched from the first internal storage unit or the third internal storage unit for output.
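The third internal storage unit's refill rule from step S43 can be sketched as below; the class name, byte capacities, and list-based external storage are illustrative assumptions.

```python
class ThirdInternalUnit:
    """Sketch of the third internal storage unit: it pulls data from
    external storage whenever its free space is at least the third
    threshold, keeping data staged for the data sending step."""

    def __init__(self, capacity, third_threshold, external):
        self.capacity = capacity
        self.third_threshold = third_threshold
        self.external = external   # shared list standing in for off-chip memory
        self.blocks = []

    def free_space(self):
        return self.capacity - sum(len(b) for b in self.blocks)

    def refill(self):
        # Keep fetching while remaining space >= the third threshold.
        while self.external and self.free_space() >= self.third_threshold:
            self.blocks.append(self.external.pop(0))
```

For image data, the third threshold might correspond to half a line, so a refill is triggered as soon as half a line of space frees up.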
In some embodiments, the external storage unit includes a first external storage unit for receiving the data output by the second internal storage unit and a second external storage unit for outputting the data to the third internal storage unit; referring to fig. 4, the method may further include the following steps:
Step S5, iterative storage step: determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, taking the first external storage unit as the data input unit of the next stage and the second external storage unit as the data output unit of the next stage, and iteratively executing the iterative configuration step, the data receiving step, the data storage step, the storage destination recording step and the data sending step in sequence until the number of iterations reaches a preset number of iterations.
In a specific implementation process, the preset number of iterations may be 1 or 2.
In the above embodiments, data storage may be implemented iteratively: on the basis of the original cache system, the internal and external storage configuration is flexibly adjusted by configuring the next-stage combination of internal and external storage for each iteration. A user can thus configure or expand the cache structure according to actual requirements, for example achieving a combined internal/external storage architecture with lower total cost or smaller delay by adjusting the ratio of internal to external storage.
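The iterative chaining can be sketched as repeated application of one caching stage, where each stage's external storage plays the role of the next stage's data input/output units. All names and the list-based modelling are illustrative assumptions.

```python
def one_stage(blocks, first_capacity):
    # Minimal order-preserving model of one caching stage: blocks that fit
    # in the first internal unit stay there, the rest take the external path.
    kept, overflow, dests = [], [], []
    used = 0
    for b in blocks:
        if used + len(b) <= first_capacity:
            kept.append(b)
            used += len(b)
            dests.append(True)
        else:
            overflow.append(b)
            dests.append(False)
    return [kept.pop(0) if d else overflow.pop(0) for d in dests]

def iterate_stages(blocks, stage_capacities):
    """Chain caching stages; len(stage_capacities) plays the role of the
    preset number of iterations (e.g. 1 or 2 in the embodiment)."""
    for capacity in stage_capacities:
        blocks = one_stage(blocks, capacity)
    return blocks

out = iterate_stages([b"xx", b"yy", b"zz"], [2, 2])
```

Each stage preserves the block order regardless of which path a block takes, so chaining stages is transparent to the data consumer.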
In some embodiments, the first internal storage unit, the second internal storage unit and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
On-chip memory is also known as internal memory. The on-chip memory of a single-chip microcomputer includes on-chip Read-Only Memory (ROM), used to store program code, and on-chip Random Access Memory (RAM), used to store data; the off-chip memory likewise includes an off-chip ROM for storing program code and an off-chip RAM for storing the user's rewritable data.
Preferably, the on-chip Memory may be a Static Random-Access Memory (SRAM); the off-chip Memory may be a Dynamic Random Access Memory (DRAM).
SRAM offers high read/write speed but is expensive and small in capacity; compared with SRAM, DRAM has higher integration density, lower power consumption and lower cost, making it suitable for large-capacity storage.
The first internal storage unit, the second internal storage unit and the third internal storage unit in the above embodiments are all on-chip memories (e.g. SRAM), while the external storage unit is an off-chip memory (e.g. DRAM); through this optimized combination of on-chip and off-chip memories, cost is reduced and delay is avoided.
Referring to fig. 5, an embodiment of the present application provides a data caching apparatus, including:
a data receiving module 11, configured to receive a data stream input by the data input unit, and divide the data stream into a plurality of data blocks of a preset size;
the data storage module 12 is used for sequentially storing a plurality of data blocks with preset sizes; judging whether the residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into a first internal storage unit; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
a storage destination recording module 13, configured to record storage destination information of each data block;
a data sending module 14, configured to, when a data sending condition is met, obtain each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and send each data block to the data output unit in sequence; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold.
In some embodiments, referring to fig. 6, the external storage unit includes a first external storage unit for receiving the data output from the second internal storage unit and a second external storage unit for outputting the data to the third internal storage unit; the device also includes:
and the iterative storage module 15 is configured to determine a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of the next stage, use the first external storage unit as the data input unit of the next stage and the second external storage unit as the data output unit of the next stage, and iteratively perform the iterative configuration, data receiving, data storage, storage destination recording and data sending in sequence until the number of iterations reaches a preset number of iterations.
In a specific implementation process, the preset number of iterations may be 1 or 2.
In some embodiments, the data receiving module 11 is specifically configured to:
acquiring a preset first threshold, and sequentially dividing the data stream into a plurality of data blocks with the same size according to the first threshold in time sequence, wherein the size of each data block is equal to the first threshold.
In some embodiments, the data storage module 12 is specifically configured to:
detecting a current remaining storage capacity value of the first internal storage unit; and judging whether the residual space of the first internal storage unit is enough to store the data block or not according to the current residual storage capacity value of the first internal storage unit and a first threshold value.
In some embodiments, the first threshold is greater than or equal to the second threshold.
In some embodiments, the data sending module 14 is specifically configured to:
acquiring recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier for marking that the data block is stored in a first internal storage unit or a second identifier for marking that the data block is stored in an external storage unit; when the storage destination information of the data block is a first identifier, acquiring the data block from a first internal storage unit; and when the storage destination information of the data block is the second identification, the data block is acquired from the third internal storage unit.
In some embodiments, the first internal storage unit, the second internal storage unit and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
In some embodiments, the data stream is an image data stream.
For specific limitations of the data caching apparatus provided in this embodiment, reference may be made to the above embodiments of the data caching method, which is not described herein again. All or part of the modules in the data caching device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Referring to fig. 7, an embodiment of the present application provides a data caching system, which includes a control unit, a data receiving unit 101A, a first internal storage unit 102A, a second internal storage unit 103A, a third internal storage unit 104A, an external storage unit 105A, an internal flag unit 106A, and a data sending unit 107A;
a data receiving unit 101A, configured to receive the data stream input by the data input unit 108A, and divide the data stream into a plurality of data blocks of a preset size, where the size of each data block is equal to a first threshold; sequentially storing a plurality of data blocks with preset sizes; in the storage process of each data block, acquiring a current remaining storage capacity value of the first internal storage unit 102A, and judging whether the current remaining space of the first internal storage unit 102A is enough to store the data block; if yes, the data block is stored in the first internal storage unit 102A; if not, storing the data block into a second internal storage unit 103A; storing the storage destination information of each data block into the internal marking unit 106A;
a first internal storage unit 102A for receiving and storing the data transmitted by the data receiving unit 101A, and transmitting the current remaining storage capacity value of the first internal storage unit 102A to the data receiving unit 101A;
a second internal storage unit 103A for buffering data to be transmitted to the external storage unit 105A, the second internal storage unit 103A being configured to transmit the data stored therein to the external storage unit 105A when the amount of the data stored therein reaches a second threshold;
a third internal storage unit 104A for caching data to be read from the external storage unit 105A, the third internal storage unit 104A being configured to obtain the data from the external storage unit 105A when a remaining storage space thereof is greater than or equal to a third threshold;
an internal marking unit 106A, configured to cache storage destination information of each data block;
a data transmitting unit 107A configured to, when a data transmission condition is satisfied, acquire storage destination information of each data block from the internal marking unit 106A, acquire each data block from the first internal storage unit 102A or the third internal storage unit 104A according to the storage destination information of each data block, and sequentially transmit each data block to the data output unit 109A;
an external storage unit 105A for storing the data sent by the second internal storage unit 103A and supplying data to the third internal storage unit 104A;
and the control unit is used for configuring a first threshold, a second threshold, a third threshold and a data transmission condition.
The data input unit 108A is a data source, and the data output unit 109A is a destination of data output, such as an external data processing system. The data receiving unit 101A is connected to the data input unit 108A, the first internal storage unit 102A, the second internal storage unit 103A, and the internal flag unit 106A, respectively; the data transmitting unit 107A is connected to the data output unit 109A, the first internal storage unit 102A, the third internal storage unit 104A, and the internal flag unit 106A, respectively; the external storage unit 105A is connected to the second internal storage unit 103A and the third internal storage unit 104A, respectively; the control unit is connected to the data receiving unit 101A, the data transmitting unit 107A, the second internal storage unit 103A, and the third internal storage unit 104A, respectively.
In the above system, the data exchanged between the internal marking unit 106A and the data receiving unit 101A and the data sending unit 107A takes the form of a flag stream.
In some embodiments, referring to fig. 8, the external storage unit 105A of the system includes a first external storage unit 1051A for receiving the output data of the second internal storage unit and a second external storage unit 1052A for outputting data to the third internal storage unit;
the system also includes an iteration module; the iteration module comprises a control unit, a data receiving unit 101B, a first internal storage unit 102B, a second internal storage unit 103B, a third internal storage unit 104B, an external storage unit 105B, an internal marking unit 106B and a data sending unit 107B of the next stage;
a first external storage unit 1051A in the system is connected with a data receiving unit 101B of a next stage included in the iteration module to serve as a data input unit 108B of the next stage in the iteration module, and a second external storage unit 1052A is connected with a data transmitting unit 107B of the next stage included in the iteration module to serve as a data output unit 109B of the next stage in the iteration module;
the connection relationship between the units of the next stage in the iteration module is the same as the connection relationship between the units in the system.
In some embodiments, the first threshold is greater than or equal to the second threshold.
In some embodiments, the first internal storage unit, the second internal storage unit and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
Preferably, the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
For specific limitations of the data caching system provided in this embodiment, reference may be made to the above embodiments of the data caching method, which are not described herein again. The various elements of the data caching system described above may be implemented in whole or in part by software, hardware, and combinations thereof. The units can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the units.
An embodiment of the present application provides a computer device that may include a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by a processor, the computer program causes the processor to perform the steps of the data caching method of any one of the embodiments described above.
For the working process, working details, and technical effects of the computer device provided in this embodiment, reference may be made to the above embodiments related to the data caching method, which are not described herein again.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the data caching method according to any one of the above embodiments. The computer-readable storage medium refers to a carrier for storing data, and may include, but is not limited to, floppy disks, optical disks, hard disks, flash memories, flash disks and/or Memory sticks (Memory sticks), etc., and the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
For the working process, working details and technical effects of the computer-readable storage medium provided in this embodiment, reference may be made to the above embodiments of the data caching method, which are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM) and direct Rambus DRAM (DRDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (27)

1. A method for caching data, the method comprising:
a data receiving step of receiving a data stream input by a data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
a data storage step, in which the data blocks with the preset sizes are stored in sequence; judging whether the current residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into the first internal storage unit; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
a storage destination recording step of recording storage destination information of each data block;
a data sending step of acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block and sending each data block to a data output unit in sequence when a data sending condition is met; the third internal storage unit is configured to acquire data from the external storage unit when a remaining storage space thereof is greater than or equal to a third threshold.
2. The method of claim 1, wherein the dividing the data stream into a plurality of data blocks of a preset size comprises:
acquiring a preset first threshold, and sequentially dividing the data stream into a plurality of data blocks with the same size according to the first threshold in time sequence, wherein the size of each data block is equal to the first threshold.
3. The method of claim 2, wherein determining whether the current remaining space of the first internal storage unit is sufficient to store the data block comprises:
detecting a current remaining storage capacity value of the first internal storage unit;
and judging whether the residual space of the first internal storage unit is enough to store the data block or not according to the current residual storage capacity value of the first internal storage unit and the first threshold value.
4. A method according to claim 2 or 3, characterized in that the first threshold value is greater than or equal to the second threshold value.
5. The method according to any one of claims 1 to 3, wherein the obtaining each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block comprises:
acquiring recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier for indicating that the data block is stored in the first internal storage unit or a second identifier for indicating that the data block is stored in the external storage unit;
when the storage destination information of the data block is a first identifier, acquiring the data block from the first internal storage unit;
and when the storage destination information of the data block is a second identifier, acquiring the data block from the third internal storage unit.
6. The method of claim 1, wherein the external storage unit comprises a first external storage unit for receiving the second internal storage unit output data and a second external storage unit for outputting data to the third internal storage unit; the method further comprises the following steps:
an iterative storage step, namely determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of a next stage, taking the first external storage unit as a data input unit of the next stage, and taking the second external storage unit as a data output unit of the next stage; and sequentially and iteratively executing the iterative configuration step, the data receiving step, the data storage step, the storage destination record step and the data sending step until the iteration number reaches a preset iteration number.
7. The method of claim 6, wherein the predetermined number of iterations is 2.
8. The method of claim 1, wherein the first internal storage unit, the second internal storage unit, and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
9. The method of claim 8, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
10. The method of claim 1, wherein the data stream is an image data stream.
11. A data caching apparatus, the apparatus comprising:
the data receiving module is used for receiving a data stream input by the data input unit and dividing the data stream into a plurality of data blocks with preset sizes;
the data storage module is used for sequentially storing the data blocks with the preset sizes; judging whether the residual space of the first internal storage unit is enough to store the data block or not in the storage process of each data block; if yes, storing the data block into the first internal storage unit; if not, storing the data block into a second internal storage unit, wherein the second internal storage unit is configured to send the stored data to an external storage unit when the amount of the stored data reaches a second threshold;
the storage destination recording module is used for recording the storage destination information of each data block;
the data transmission module is used for acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block and sequentially transmitting each data block to the data output unit when the data transmission condition is met; the third internal storage unit is configured to retrieve data from the external storage unit when its remaining storage space is greater than or equal to a third threshold.
12. The apparatus of claim 11, wherein the data receiving module is specifically configured to:
acquiring a preset first threshold, and sequentially dividing the data stream into a plurality of data blocks with the same size according to the first threshold in time sequence, wherein the size of each data block is equal to the first threshold.
13. The apparatus of claim 12, wherein the data storage module is specifically configured to:
detecting a current remaining storage capacity value of the first internal storage unit;
and judging whether the residual space of the first internal storage unit is enough to store the data block or not according to the current residual storage capacity value of the first internal storage unit and the first threshold value.
14. The apparatus of claim 12 or 13, wherein the first threshold is greater than or equal to the second threshold.
15. The apparatus according to any one of claims 11 to 13, wherein the data sending module is specifically configured to:
acquiring recorded storage destination information of each data block, wherein the storage destination information of each data block is a first identifier for indicating that the data block is stored in the first internal storage unit or a second identifier for indicating that the data block is stored in the external storage unit;
when the storage destination information of the data block is a first identifier, acquiring the data block from the first internal storage unit;
and when the storage destination information of the data block is a second identifier, acquiring the data block from the third internal storage unit.
16. The apparatus of claim 11, wherein the external storage unit comprises a first external storage unit for receiving the data output by the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit; the device further comprises:
the iteration storage module is used for determining a first internal storage unit, an external storage unit, a second internal storage unit and a third internal storage unit of a next stage, taking the first external storage unit as a data input unit of the next stage, and taking the second external storage unit as a data output unit of the next stage; and sequentially and iteratively executing the iterative configuration step, the data receiving step, the data storage step, the storage destination record step and the data sending step until the iteration number reaches a preset iteration number.
17. The apparatus of claim 16, wherein the preset number of iterations is 2.
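The staged iteration of claims 16 and 17, in which the data spilled to external storage at one stage becomes the input stream of the next stage for a preset number of iterations (two in claim 17), can be illustrated with a hypothetical sketch. The capacity model, function names, and use of list lengths as byte counts are assumptions for illustration only.

```python
def cache_stage(blocks, internal_capacity):
    """Route each block: keep it on-chip while capacity remains, else spill it."""
    kept, spilled = [], []
    used = 0
    for block in blocks:
        if used + len(block) <= internal_capacity:
            kept.append(block)
            used += len(block)
        else:
            spilled.append(block)
    return kept, spilled

def run_pipeline(blocks, internal_capacity, iterations=2):
    """Chain stages: stage k's spill buffer feeds stage k+1 (claim 17: 2 iterations)."""
    all_kept = []
    stream = blocks
    for _ in range(iterations):
        kept, stream = cache_stage(stream, internal_capacity)
        all_kept.append(kept)
    return all_kept, stream   # per-stage on-chip blocks, plus the final residue
```

Each additional stage absorbs another internal-capacity's worth of the stream on-chip, which is why a small fixed iteration count (such as 2) can already cover a stream much larger than any single internal unit.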
18. The apparatus of claim 11, wherein the first internal storage unit, the second internal storage unit, and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
19. The apparatus of claim 18, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
20. The apparatus of claim 11, wherein the data stream is an image data stream.
21. A data caching system, characterized by comprising a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data sending unit;
the data receiving unit is used for receiving the data stream input by the data input unit and dividing the data stream into a plurality of data blocks of a preset size, the size of each data block being equal to a first threshold; sequentially storing the data blocks of the preset size; in the storage process of each data block, acquiring the current remaining storage capacity value of the first internal storage unit and determining whether the current remaining space of the first internal storage unit is sufficient to store the data block; if so, storing the data block into the first internal storage unit; if not, storing the data block into the second internal storage unit; and storing the storage destination information of each data block into the internal marking unit;
the first internal storage unit is used for receiving and storing the data sent by the data receiving unit and sending the current remaining storage capacity value of the first internal storage unit to the data receiving unit;
the second internal storage unit is used for caching data required to be sent to the external storage unit, and the second internal storage unit is configured to send the data stored in the second internal storage unit to the external storage unit when the amount of the data stored in the second internal storage unit reaches a second threshold;
the third internal storage unit is used for caching data needing to be read from the external storage unit, and the third internal storage unit is configured to acquire the data from the external storage unit when the remaining storage space of the third internal storage unit is larger than or equal to a third threshold;
the internal marking unit is used for caching the storage destination information of each data block;
the data sending unit is used for acquiring the storage destination information of each data block from the internal marking unit when a data sending condition is met, acquiring each data block from the first internal storage unit or the third internal storage unit according to the storage destination information of each data block, and sequentially sending each data block to the data output unit;
the external storage unit is used for caching data needing to be stored in the external storage unit;
a control unit, configured to configure the first threshold, the second threshold, the third threshold, and the data transmission condition.
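The receive-and-route flow of claim 21 can be condensed into a short Python sketch: the stream is cut into blocks whose size equals the first threshold, each block goes to the first internal unit while space remains, and otherwise it is staged in the second internal unit and flushed to external storage once the staged amount reaches the second threshold. All names, the string-based stream, and the flag values are illustrative assumptions.

```python
def receive_stream(stream, first_threshold, first_capacity, second_threshold):
    internal, staging, external, dest_flags = [], [], [], []
    used = 0
    # Cut the stream into fixed-size blocks (block size == first threshold).
    blocks = [stream[i:i + first_threshold]
              for i in range(0, len(stream), first_threshold)]
    for block in blocks:
        if used + len(block) <= first_capacity:        # enough remaining space?
            internal.append(block)
            used += len(block)
            dest_flags.append("first")                 # first identifier
        else:
            staging.append(block)                      # second internal unit
            dest_flags.append("second")                # second identifier
            # Flush once the staged data reaches the second threshold.
            if sum(len(b) for b in staging) >= second_threshold:
                external.extend(staging)
                staging.clear()
    return internal, staging, external, dest_flags
```

The batched flush is the point of the second internal unit: external-memory writes happen in chunks of at least the second threshold rather than per block, which amortizes off-chip access cost.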
22. The system of claim 21, wherein the external storage units of the system comprise a first external storage unit for receiving data output by the second internal storage unit and a second external storage unit for outputting data to the third internal storage unit;
the system further comprises an iteration module; the iteration module comprises a control unit, a data receiving unit, a first internal storage unit, a second internal storage unit, a third internal storage unit, an external storage unit, an internal marking unit and a data sending unit at the next stage;
a first external storage unit in the system is connected with a next-stage data receiving unit contained in the iteration module to serve as a next-stage data input unit in the iteration module, and a second external storage unit is connected with a next-stage data sending unit contained in the iteration module to serve as a next-stage data output unit in the iteration module;
and the connection relation between all units of the next level in the iteration module is the same as the connection relation between all units in the system.
23. The system of claim 21, wherein the first threshold is greater than or equal to the second threshold.
24. The system of claim 21, wherein the first internal storage unit, the second internal storage unit, and the third internal storage unit are all on-chip memories, and the external storage unit is an off-chip memory.
25. The system of claim 24, wherein the on-chip memory is a static random access memory; the off-chip memory is a dynamic random access memory.
26. A computer device comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 10.
27. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
CN202210585432.8A 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium Active CN114968102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585432.8A CN114968102B (en) 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114968102A true CN114968102A (en) 2022-08-30
CN114968102B CN114968102B (en) 2023-10-13

Family

ID=82956202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585432.8A Active CN114968102B (en) 2022-05-27 2022-05-27 Data caching method, device, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114968102B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160188481A1 (en) * 2013-12-31 2016-06-30 Mosys, Inc. Integrated Main Memory And Coprocessor With Low Latency
CN106055274A (en) * 2016-05-23 2016-10-26 联想(北京)有限公司 Data storage method, data reading method and electronic device
CN106095552A * 2016-06-07 2016-11-09 华中科技大学 Multi-task graph processing method and system based on I/O deduplication
CN107422994A * 2017-08-02 2017-12-01 郑州云海信息技术有限公司 Method for improving data read/write performance
CN108415855A * 2018-03-06 2018-08-17 珠海全志科技股份有限公司 Video recording file storage method and device, computer device and storage medium
CN108427539A * 2018-03-15 2018-08-21 深信服科技股份有限公司 Offline deduplication and compression method and device for cache device data, and readable storage medium
CN110225399A * 2019-06-19 2019-09-10 深圳市共进电子股份有限公司 Streaming media processing method and device, computer device and storage medium
CN113254392A (en) * 2021-07-12 2021-08-13 深圳比特微电子科技有限公司 Data storage method for system on chip and device based on system on chip
WO2022062537A1 (en) * 2020-09-27 2022-03-31 苏州浪潮智能科技有限公司 Data compression method and apparatus, and computer-readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115658328A (en) * 2022-12-07 2023-01-31 摩尔线程智能科技(北京)有限责任公司 Device and method for managing storage space, computing equipment and chip
CN115658328B (en) * 2022-12-07 2023-10-03 摩尔线程智能科技(北京)有限责任公司 Device and method for managing storage space, computing device and chip
CN116339622A (en) * 2023-02-20 2023-06-27 深圳市数存科技有限公司 Data compression system and method based on block level
CN116339622B (en) * 2023-02-20 2023-11-14 深圳市数存科技有限公司 Data compression system and method based on block level

Also Published As

Publication number Publication date
CN114968102B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN114968102A (en) Data caching method, device and system, computer equipment and storage medium
US9253093B2 (en) Method for processing data packets in flow-aware network nodes
JP2021111315A (en) Neural network data processing device, method, and electronic apparatus
CN113419824A (en) Data processing method, device, system and computer storage medium
US20090031092A1 (en) Data reception system
CN113391973B (en) Internet of things cloud container log collection method and device
CN107545050A (en) Data query method and device, electronic equipment
JP2008234059A (en) Data transfer device and information processing system
CN111208941A (en) File management method and device, computer equipment and computer readable storage medium
CN104486442A (en) Method and device for transmitting data of distributed storage system
US20030097418A1 (en) Portable information communication terminal
EP1780976A1 (en) Methods and system to offload data processing tasks
CN109889456B (en) Data transmission method, device, equipment, system and storage medium
CN111694806A (en) Transaction log caching method, device, equipment and storage medium
US20100115387A1 (en) Data receiving apparatus, data receiving method, and computer-readable recording medium
CN112486874B (en) Order-preserving management method and device for I/O (input/output) instructions in wide-port scene
CN210804421U (en) Server system
CN105912477B Method, apparatus and system for directory reading
CN112084163B (en) Data writing method and device and computer equipment
CN112292660B (en) Method for scheduling data in memory, data scheduling equipment and system
CN113438274A (en) Data transmission method and device, computer equipment and readable storage medium
CN106919514A (en) Semiconductor device, data handling system and semiconductor device control method
CN103631726B File processing method and device for serially-connected streaming computing nodes
US6728861B1 (en) Queuing fibre channel receive frames
CN110764707A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant