WO2021244067A1 - Method, device and medium for diluting cache space - Google Patents
Method, device and medium for diluting cache space
- Publication number
- WO2021244067A1 (PCT/CN2021/076932)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- queue
- buffer space
- data deletion
- queues
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/061—Improving I/O performance
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
- G06F3/0641—De-duplication techniques
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
Definitions
- The present invention relates to the field of caching, and in particular to a method, device, and storage medium for diluting cache space.
- An embodiment of the present invention proposes a method for diluting cache space, which includes the following steps:
- Deleting data from the longest queue in the cache space at a preset initial speed further includes:
- performing data deletion on the longest queue among the plurality of queues;
- triggering the dilution of the cache space.
- Performing data deletion on the longest queue in the cache space at the preset initial speed, or performing data deletion on each queue for which data deletion has been triggered at the apportioned speed, further includes:
- An embodiment of the present invention also provides a computer device, including:
- at least one processor; and
- a memory storing a computer program that can run on the processor, wherein the processor performs the following steps when executing the program:
- deleting data from the longest queue in the cache space at a preset initial speed, which further includes performing data deletion on the longest queue among the plurality of queues;
- triggering the dilution of the cache space;
- performing data deletion on the longest queue in the cache space at the preset initial speed, or performing data deletion on each queue for which data deletion has been triggered at the apportioned speed.
- An embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the above methods for diluting cache space.
- The present invention has at least one of the following beneficial technical effects: the proposed solution apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
- Fig. 1 is a schematic flowchart of a method for diluting cache space provided by an embodiment of the present invention;
- Fig. 2 is a schematic structural diagram of a computer device provided by an embodiment of the present invention;
- Fig. 3 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present invention.
- When data needs to be cached, it is cached through the cache space. Each batch (for example, defined by a time range, such as the data arriving within 5 s) is sent to the cache space as a unit, and different batches of data are placed into different cache queues. Since the size of a queue is determined by the amount of data it buffers, and the amount of data cached per batch differs, the queues differ in size.
- An embodiment of the present invention proposes a method for diluting cache space, as shown in Fig. 1, which may include the following steps:
- S1: in response to a trigger to dilute the cache space, delete data from the longest queue in the cache space at a preset initial speed;
- S2: in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of one or more other queues in the cache space, trigger data deletion for those queues;
- S3: calculate an apportioned speed from the number of all queues for which data deletion has been triggered and the preset initial speed;
- S4: perform data deletion on each queue for which data deletion has been triggered at the apportioned speed;
- S5: in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of further queues in the cache space, trigger data deletion for those further queues and return to the step of calculating the apportioned speed;
- S6: in response to a trigger to stop diluting the cache space, suspend the data-deletion process of all queues for which data deletion was triggered.
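The flow of steps S1 to S6 can be sketched as a small simulation. This is a minimal illustration under our own assumptions (queues reduced to bare lengths, "speed" measured in items removed per tick, deletion stopping at a safety threshold), not the patent's implementation:

```python
# Minimal sketch of the dilution procedure (S1-S6); all names and parameters
# are illustrative, not taken from the patent.

def dilute(queue_lengths, initial_speed=6.0, safety_total=10.0, tick=1.0):
    """Non-linearly shrink queues: the longest queue starts deleting at
    initial_speed; each time another queue's length is reached, that queue
    joins, and the initial speed is apportioned among all active queues."""
    lengths = sorted(queue_lengths, reverse=True)
    active = 1  # S1: only the longest queue deletes at first
    while sum(lengths) > safety_total:          # S6: stop at a safety threshold
        speed = initial_speed / active          # S3: apportioned speed
        for i in range(active):                 # S4: delete from active queues
            lengths[i] = max(0.0, lengths[i] - speed * tick)
        # S2/S5: queues whose length has been reached join the deletion
        while active < len(lengths) and lengths[active] >= lengths[0]:
            active += 1
        if not any(lengths):                    # everything deleted; stop
            break
    return lengths

print(dilute([30.0, 20.0, 12.0]))
```

With these example lengths, the longest queue shrinks alone at first, then the deletion speed is shared as the shorter queues join, so no single queue is invalidated all at once.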
- The solution proposed by the present invention apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
- In some embodiments, step S1, deleting data from the longest queue in the cache space at a preset initial speed in response to a trigger to dilute the cache space, further includes: determining, according to a cached-data expiration strategy, a plurality of queues in the cache space from which data is to be deleted, and performing data deletion on the longest queue among the plurality of queues.
- The cache expiration strategy may be FIFO (First In First Out), LFU (Least Frequently Used), or LRU (Least Recently Used). FIFO means that the data that entered the cache first is cleared first when the cache space is insufficient (when the maximum element limit is exceeded). LFU means that the least frequently used elements are cleaned up: each cached element carries a hit attribute, and when cache space is insufficient, the element with the smallest hit value is cleared from the cache. LRU means least recently used: each cached element carries a timestamp, and when the cache is full and room must be made for new elements, the element whose timestamp is furthest from the current time is cleared from the cache.
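For illustration, the eviction rules just described can be sketched as follows. The class and function names are our own, and this is only a minimal model of LRU (timestamp/recency order) and LFU (smallest hit value), not the patent's cache:

```python
# Illustrative sketches of LRU and LFU eviction; names are assumptions.
from collections import OrderedDict

class LRUCache:
    """Evicts the entry whose last access is oldest (the 'timestamp' rule)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)      # refresh recency on every hit
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:  # capacity full: evict least recent
            self.data.popitem(last=False)

def lfu_evict(hits):
    """LFU: given {key: hit_count}, return the key to clear from the cache."""
    return min(hits, key=hits.get)

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                # "a" becomes most recently used
cache.put("c", 3)             # evicts "b", the least recently used
print(sorted(cache.data))     # ['a', 'c']
print(lfu_evict({"x": 5, "y": 1, "z": 3}))  # 'y' has the smallest hit value
```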
- In some embodiments, step S1 further includes: detecting whether the amount of data cached in the cache space has reached a threshold, or determining whether a user-issued instruction to dilute the cache space has been received, and in response to either condition, triggering the dilution of the cache space.
- The cleaning of the cache space can be triggered in various ways, including but not limited to setting a threshold on the cache space: when the amount of data cached in the cache space reaches the threshold, dilution cleaning of the cache space is triggered. Alternatively, when the user wants to trigger cleaning manually, an instruction to clean the cache space can be issued directly; when the cache space receives the corresponding instruction, dilution cleaning is triggered.
- In some embodiments, in step S2, when the length of the longest queue has, through data deletion, shrunk to equal the length of one or more other queues in the cache space, data deletion is triggered for that one or more queues.
- In some embodiments, in step S3, the apportioned speed is calculated from the number of all queues for which data deletion has been triggered and the preset initial speed. Specifically, the sum of the deletion speeds of all queues performing data deletion equals the initial speed, so once other queues trigger data deletion, the deletion speed of the longest queue is shared among the other queues and its deletion slows down. The apportioned speed can be obtained by dividing the preset initial speed by the number of queues for which data deletion has been triggered.
- In some embodiments, in step S5, when the lengths of all queues undergoing data deletion become equal to the lengths of one or more further queues in the cache space that have not yet undergone deletion, data deletion is triggered for those further queues, and the method returns to the step of calculating the apportioned speed.
- For example, after dilution cleaning of the cache space is triggered, the longest queue Q1 in the cache space begins deleting data at the preset initial speed. After the length of Q1 equals the length of queue Q2, data deletion is triggered for Q2, and Q1 and Q2 each delete data at initial speed/2. After the lengths of Q1 and Q2 equal the length of queue Q3, data deletion is triggered for Q3, and Q1, Q2, and Q3 each delete data at initial speed/3, and so on. The lengths of the deleting queues may also become equal to the lengths of several undeleted queues at once: after Q1 begins deleting at the preset initial speed, its length may become equal to the lengths of both Q2 and Q3, in which case Q1, Q2, and Q3 each delete data at initial speed/3. In this way Q1's dilution speed is shared by the other queues and slows down, achieving a non-linear removal of data.
- In some embodiments, in step S6, in response to a trigger to stop diluting the cache space, the data-deletion process of all triggered queues is suspended. Specifically, when, after deletion, the amount of data cached in the cache space falls below a safety threshold, the dilution cleaning is triggered to stop; alternatively, when the user wants to stop cleaning manually, an instruction to stop cleaning the cache space can be issued directly, and when the cache space receives the corresponding instruction, dilution cleaning stops.
- In some embodiments, deleting data from the longest queue at the preset initial speed, or from each triggered queue at the apportioned speed, further includes: randomly marking the data to be deleted in each queue for which data cleaning has been triggered (the marking speed being the preset initial speed or the apportioned speed), then merging and deleting the marked data, releasing cache space and changing the queue length.
- Alternatively, the deletion order can be determined from attributes of the data in the queue (for example, the hit attribute); the data to be deleted is then marked in that order (again at the preset initial speed or the apportioned speed), and the marked data is merged and deleted, releasing cache space and changing the queue length.
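The two marking approaches and the merge-delete step can be sketched as follows. The queue representation (a list of (key, hits) records) and all names are illustrative assumptions, not the patent's data structures:

```python
# Sketch of random marking, priority marking, and merge-delete; illustrative.
import random

def mark_random(queue, count, rng=random):
    """Randomly mark `count` positions for deletion (marking speed would be
    the preset initial speed or the apportioned speed)."""
    return set(rng.sample(range(len(queue)), min(count, len(queue))))

def mark_by_priority(queue, count):
    """Mark the `count` entries with the smallest hit attribute first."""
    order = sorted(range(len(queue)), key=lambda i: queue[i][1])
    return set(order[:count])

def merge_delete(queue, marked):
    """Merge-delete: rebuild the queue once, dropping all marked entries,
    which releases their space and changes the queue length in one pass."""
    return [item for i, item in enumerate(queue) if i not in marked]

q = [("k1", 4), ("k2", 1), ("k3", 7), ("k4", 2)]
shrunk = merge_delete(q, mark_by_priority(q, 2))
print(shrunk)   # [('k1', 4), ('k3', 7)] - the two entries with the most hits survive
```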
- The solution proposed by the present invention apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
- An embodiment of the present invention also provides a computer device 201, including:
- at least one processor 220; and
- a memory 210 storing a computer program 211 that can run on the processor, the processor 220 performing the steps of any of the above methods for diluting cache space when executing the program.
- An embodiment of the present invention also provides a computer-readable storage medium 301 storing computer program instructions 310 which, when executed by a processor, perform the steps of any of the above methods for diluting cache space.
- The apparatuses and devices disclosed in the embodiments of the present invention may be various electronic terminal devices, such as mobile phones, personal digital assistants (PDAs), tablet computers (PADs), and smart TVs, or large-scale terminal devices such as servers; therefore, the scope of protection disclosed in the embodiments of the present invention should not be limited to a specific type of apparatus or device.
- the client disclosed in the embodiment of the present invention may be applied to any of the foregoing electronic terminal devices in the form of electronic hardware, computer software, or a combination of both.
- The method disclosed according to the embodiments of the present invention may also be implemented as a computer program executed by a CPU (central processing unit), and the computer program may be stored in a computer-readable storage medium. When executed by the CPU, the computer program performs the above-described functions defined in the method disclosed in the embodiments of the present invention.
- The above method steps and system units may also be implemented using a controller and a computer-readable storage medium storing a computer program that enables the controller to implement the above steps or unit functions.
- Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM), which can act as external cache memory.
- RAM is available in many forms, such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (RDRAM).
- the storage devices of the disclosed aspects are intended to include, but are not limited to, these and other suitable types of memory.
- The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, designed to perform the functions described herein.
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- the processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in combination with a DSP, and/or any other such configuration.
- the steps of the method or algorithm described in combination with the disclosure herein may be directly included in hardware, a software module executed by a processor, or a combination of the two.
- Software modules may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM (Compact Disc Read-Only Memory), or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from or write information to the storage medium.
- the storage medium may be integrated with the processor.
- the processor and the storage medium may reside in the ASIC.
- the ASIC can reside in the user terminal.
- the processor and the storage medium may reside as discrete components in the user terminal.
- In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium or transmitted through the computer-readable medium.
- Computer-readable media include computer storage media and communication media, including any media that facilitates the transfer of a computer program from one location to another location.
- a storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
- The computer-readable medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium.
- For example, if software is sent from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
- As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Abstract
A method, device, and medium for diluting cache space, comprising: in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed (S1); in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of one or more other queues in the cache space, triggering data deletion for those queues (S2); calculating an apportioned speed from the number of all queues for which data deletion has been triggered and the preset initial speed (S3); performing data deletion on each triggered queue at the apportioned speed (S4); in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of further queues in the cache space, triggering data deletion for those further queues and returning to the step of calculating the apportioned speed (S5); and, in response to a trigger to stop diluting the cache space, suspending the data-deletion process (S6).
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 5, 2020, with application number 202010505017.8 and invention title "Method, device and medium for diluting cache space", the entire contents of which are incorporated herein by reference.
The present invention relates to the field of caching, and in particular to a method, device, and storage medium for diluting cache space.
In the era of big data, production places ever stricter demands on the speed of data processing. Besides improvements to processing engines, caching technology has undoubtedly received more and more attention; its development has greatly improved the speed of interaction between data. However, a cache ultimately cannot serve as persistent disk storage, so cached data must go through an expiration process. Existing data-expiration cleaning strategies all invalidate data linearly, which can easily cause a cache avalanche.
Summary of the Invention
In view of this, to overcome at least one aspect of the above problems, an embodiment of the present invention proposes a method for diluting cache space, comprising the following steps:
in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed;
in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of one or more other queues in the cache space, triggering data deletion for those queues;
calculating an apportioned speed from the number of all queues for which data deletion has been triggered and the preset initial speed;
performing data deletion on each queue for which data deletion has been triggered at the apportioned speed;
in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of further queues in the cache space, triggering data deletion for those further queues, and returning to the step of calculating the apportioned speed;
in response to a trigger to stop diluting the cache space, suspending the data-deletion process of all queues for which data deletion was triggered.
In some embodiments, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
determining, according to a cached-data expiration strategy, a plurality of queues in the cache space from which data is to be deleted;
performing data deletion on the longest queue among the plurality of queues.
In some embodiments, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
detecting whether the amount of data cached in the cache space has reached a threshold, or determining whether a user-issued instruction to dilute the cache space has been received;
in response to the amount of cached data reaching the threshold, or to receipt of the user-issued instruction to dilute the cache space, triggering the dilution of the cache space.
In some embodiments, deleting data from the longest queue in the cache space at the preset initial speed, or performing data deletion on each triggered queue at the apportioned speed, further comprises:
randomly marking the data in the queues for which data deletion has been triggered;
merging and deleting the randomly marked data.
In some embodiments, deleting data from the longest queue in the cache space at the preset initial speed, or performing data deletion on each triggered queue at the apportioned speed, further comprises:
determining a deletion priority for the data in the queues for which data deletion has been triggered, so as to mark the data according to the deletion priority;
merging and deleting the marked data.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention also provides a computer device, comprising:
at least one processor; and
a memory storing a computer program runnable on the processor, wherein the processor performs the following steps when executing the program:
in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed;
in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of one or more other queues in the cache space, triggering data deletion for those queues;
calculating an apportioned speed from the number of all queues for which data deletion has been triggered and the preset initial speed;
performing data deletion on each queue for which data deletion has been triggered at the apportioned speed;
in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of further queues in the cache space, triggering data deletion for those further queues, and returning to the step of calculating the apportioned speed;
in response to a trigger to stop diluting the cache space, suspending the data-deletion process of all queues for which data deletion was triggered.
In some embodiments, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
determining, according to a cached-data expiration strategy, a plurality of queues in the cache space from which data is to be deleted;
performing data deletion on the longest queue among the plurality of queues.
In some embodiments, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
detecting whether the amount of data cached in the cache space has reached a threshold, or determining whether a user-issued instruction to dilute the cache space has been received;
in response to the amount of cached data reaching the threshold, or to receipt of the user-issued instruction to dilute the cache space, triggering the dilution of the cache space.
In some embodiments, deleting data from the longest queue in the cache space at the preset initial speed, or performing data deletion on each triggered queue at the apportioned speed, further comprises:
randomly marking the data in the queues for which data deletion has been triggered;
merging and deleting the randomly marked data.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the above methods for diluting cache space.
The present invention has at least one of the following beneficial technical effects: the proposed solution apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
To explain the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other embodiments can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for diluting cache space provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a computer device provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present invention.
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all uses of "first" and "second" in the embodiments of the present invention are intended to distinguish two non-identical entities or parameters that share the same name; "first" and "second" are used only for convenience of expression and should not be understood as limiting the embodiments of the present invention, and subsequent embodiments will not explain this again.
In an embodiment of the present invention, when data needs to be cached, it is cached through the cache space. Each batch (for example, defined by a time range, such as the data arriving within 5 s) is sent to the cache space as a unit, and in the cache space different batches of data are placed into different cache queues. Since the size of a queue is determined by the amount of data it buffers, and the amount of data cached per batch differs, the queues differ in size.
According to one aspect of the present invention, an embodiment of the present invention proposes a method for diluting cache space, as shown in Fig. 1, which may comprise the steps: S1, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed; S2, in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of one or more other queues in the cache space, triggering data deletion for those queues; S3, calculating an apportioned speed from the number of all queues for which data deletion has been triggered and the preset initial speed; S4, performing data deletion on each triggered queue at the apportioned speed; S5, in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of further queues in the cache space, triggering data deletion for those further queues and returning to the step of calculating the apportioned speed; S6, in response to a trigger to stop diluting the cache space, suspending the data-deletion process of all queues for which data deletion was triggered.
The solution proposed by the present invention apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
In some embodiments, in step S1, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
determining, according to a cached-data expiration strategy, a plurality of queues in the cache space from which data is to be deleted;
performing data deletion on the longest queue among the plurality of queues.
Specifically, the cache expiration strategy may be FIFO (First In First Out), LFU (Least Frequently Used), or LRU (Least Recently Used). FIFO means that the data that entered the cache first is cleared first when cache space is insufficient (when the maximum element limit is exceeded); LFU means that the least frequently used elements are cleaned up: cached elements carry a hit attribute, and when cache space is insufficient the element with the smallest hit value is cleared from the cache; LRU means least recently used: cached elements carry a timestamp, and when the cache is full and room must be made for new elements, the element whose timestamp is furthest from the current time is cleared from the cache.
In this way, the expiration strategy determines the multiple queues to be cleaned first, and the longest queue among them is selected for data deletion.
It should be noted that the longest queue may also be selected from all queues in the cache space; that is, instead of first determining the queues to clean via the expiration strategy, all queues may be cleaned directly.
In some embodiments, in step S1, in response to a trigger to dilute the cache space, deleting data from the longest queue in the cache space at a preset initial speed further comprises:
detecting whether the amount of data cached in the cache space has reached a threshold, or determining whether a user-issued instruction to dilute the cache space has been received;
in response to the amount of cached data reaching the threshold, or to receipt of the user-issued instruction to dilute the cache space, triggering the dilution of the cache space.
Specifically, cleaning of the cache space can be triggered in various ways, including but not limited to setting a threshold on the cache space: when the amount of cached data reaches the threshold, dilution cleaning of the cache space is triggered. Alternatively, when the user wants to trigger cleaning manually, an instruction to clean the cache space can be issued directly; when the cache space receives the corresponding instruction, dilution cleaning is triggered.
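The two trigger paths described above (threshold reached, or an explicit user instruction) can be sketched as simple predicates; the byte-based thresholds here are illustrative assumptions:

```python
# Illustrative start/stop trigger checks for cache-space dilution.
def should_dilute(cached_bytes, threshold_bytes, user_requested=False):
    """Return True when dilution cleaning of the cache space should start."""
    return user_requested or cached_bytes >= threshold_bytes

def should_stop(cached_bytes, safety_bytes, user_stop=False):
    """Return True when dilution should stop (safety threshold or user stop)."""
    return user_stop or cached_bytes < safety_bytes

print(should_dilute(900, 800))                        # True: threshold reached
print(should_dilute(100, 800, user_requested=True))   # True: manual instruction
print(should_stop(100, 200))                          # True: below safety threshold
```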
In some embodiments, in step S2, in response to the length of the longest queue after data deletion being equal to the lengths of one or more queues in the cache space, data deletion is triggered for those queues. Specifically, when the length of the longest queue has, through deletion, shrunk to equal the length of one or more other queues in the cache space, data deletion is triggered for that one or more queues.
In some embodiments, in step S3, the apportioned speed is calculated from the number of all queues for which data deletion has been triggered and the preset initial speed. Specifically, the sum of the deletion speeds of all queues performing data deletion equals the initial speed; thus, once other queues trigger data deletion, the deletion speed of the longest queue is shared by the other queues and its deletion slows down.
It should be noted that the apportioned speed can be obtained by dividing the preset initial speed by the number of all queues for which data deletion has been triggered.
In some embodiments, in step S5, in response to the lengths of all queues after data deletion being equal to the lengths of one or more other queues in the cache space, data deletion is triggered for those other queues, and the method returns to the step of calculating the apportioned speed. Specifically, after a new queue has triggered data deletion in step S3, when the lengths of all queues undergoing data deletion become equal to the lengths of one or more other queues in the cache space that have not yet undergone deletion, data deletion is triggered for those queues.
For example, after dilution cleaning of the cache space is triggered, the longest queue Q1 in the cache space begins deleting data at the preset initial speed. After the length of Q1 equals the length of queue Q2, data deletion is triggered for Q2, and Q1 and Q2 each delete data at initial speed/2. After the lengths of Q1 and Q2 equal the length of queue Q3, data deletion is triggered for Q3, and Q1, Q2, and Q3 each delete data at initial speed/3, and so on. Of course, the lengths of the deleting queues may also become equal to the lengths of several undeleted queues at once; that is, after Q1 begins deleting at the preset initial speed, its length may become equal to the lengths of both Q2 and Q3, in which case Q1, Q2, and Q3 each delete data at initial speed/3. In this way Q1's dilution speed is shared by the other queues and slows down, achieving a non-linear removal of data.
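The apportionment in this example can be checked numerically. This small sketch (with an assumed initial speed of 6) shows the per-queue speed falling as queues join, while the speeds of all active queues always sum to the initial speed, which is the invariant stated for step S3:

```python
# Numeric check of the apportioned speed; the initial speed is an assumption.
def apportioned_speed(initial_speed, active_queues):
    return initial_speed / active_queues

v = 6.0  # illustrative preset initial speed
for k in (1, 2, 3):
    # the invariant from step S3: the active queues' speeds sum to v
    assert apportioned_speed(v, k) * k == v
    print(f"{k} active queue(s): each deletes at {apportioned_speed(v, k):.1f}")
# 1 active queue(s): each deletes at 6.0
# 2 active queue(s): each deletes at 3.0
# 3 active queue(s): each deletes at 2.0
```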
In some embodiments, in step S6, in response to a trigger to stop diluting the cache space, the data-deletion process of all triggered queues is suspended. Specifically, when, after deletion, the amount of data cached in the cache space falls below a safety threshold, the dilution cleaning is triggered to stop; alternatively, when the user wants to stop cleaning manually, an instruction to stop cleaning the cache space can be issued directly, and when the cache space receives the corresponding instruction, dilution cleaning stops.
In some embodiments, deleting data from the longest queue in the cache space at the preset initial speed, or performing data deletion on each triggered queue at the apportioned speed, further comprises:
randomly marking the data in the queues for which data deletion has been triggered;
merging and deleting the randomly marked data.
Specifically, each queue for which data cleaning has been triggered can randomly mark the data to be deleted (the marking speed being the preset initial speed or the apportioned speed), and then merge and delete the marked data, releasing cache space and changing the queue length.
In some embodiments, deleting data from the longest queue in the cache space at the preset initial speed, or performing data deletion on each triggered queue at the apportioned speed, further comprises:
determining a deletion priority for the data in the queues for which data deletion has been triggered, so as to mark the data according to the deletion priority;
merging and deleting the marked data.
Specifically, for each queue for which data cleaning has been triggered, the deletion order can be determined from attributes of the data in the queue (for example, the hit attribute); the data to be deleted is then marked in that order (the marking speed being the preset initial speed or the apportioned speed), and the marked data is merged and deleted, releasing cache space and changing the queue length.
The solution proposed by the present invention apportions the data to be deleted across every eligible queue, so that the original cached data is preserved to the greatest extent and cache invalidation is spread out rather than concentrated on one queue or one portion of the data; this minimizes the occurrence of cache avalanches and effectively avoids downtime events in production environments.
Based on the same inventive concept, according to another aspect of the present invention, as shown in Fig. 2, an embodiment of the present invention also provides a computer device 201, comprising:
at least one processor 220; and
a memory 210, the memory 210 storing a computer program 211 runnable on the processor, the processor 220 performing the steps of any of the above methods for diluting cache space when executing the program.
Based on the same inventive concept, according to another aspect of the present invention, as shown in Fig. 3, an embodiment of the present invention also provides a computer-readable storage medium 301 storing computer program instructions 310 which, when executed by a processor, perform the steps of any of the above methods for diluting cache space.
Finally, it should be noted that those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the above computer program can achieve effects the same as or similar to those of any of the corresponding foregoing method embodiments.
此外,典型地,本发明实施例公开的装置、设备等可为各种电子终端设备,例如手机、个人数字助理(PDA,Personal Digital Assistant)、平板电脑(PAD,Portable android device)、智能电视等,也可以是大型终端设备,如服务器等,因此本发明实施例公开的保护范围不应限定为某种特定类型的装置、设备。本发明实施例公开的客户端可以是以电子硬件、计算机软件或两者的组合形式应用于上述任意一种电子终端设备中。
此外,根据本发明实施例公开的方法还可以被实现为由CPU(central processing unit,中央处理器)执行的计算机程序,该计算机程序可以存储在计算机可读存储介质中。在该计算机程序被CPU执行时,执行本发明实施例公开的方法中限定的上述功能。
此外,上述方法步骤以及系统单元也可以利用控制器以及用于存储使得控制器实现上述步骤或单元功能的计算机程序的计算机可读存储介质实现。
此外,应该明白的是,本文的计算机可读存储介质(例如,存储器)可以是易失性存储器或非易失性存储器,或者可以包括易失性存储器和非 易失性存储器两者。作为例子而非限制性的,非易失性存储器可以包括只读存储器(ROM)、可编程ROM(PROM,Programmable Read-Only Memory)、电可编程ROM(EPROM,Erasable Programmable Read-Only Memory)、电可擦写可编程ROM(EEPROM,Electrically Erasable Programmable read only memory)或快闪存储器。易失性存储器可以包括随机存取存储器(RAM),该RAM可以充当外部高速缓存存储器。作为例子而非限制性的,RAM可以以多种形式获得,比如同步RAM(SRAM,Static Random Access Memory)、动态RAM(DRAM,Dynamic Random Access Memory)、同步DRAM(SDRAM,Synchronous Dynamic Random Access Memory)、双数据速率SDRAM(DDR SDRAM,Double Data Rate Synchronous Dynamic Random Access Memory)、增强SDRAM(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步链路DRAM(SLDRAM,Sync Link Dynamic Random Access Memory)、以及直接Rambus RAM(RDRAM,Rambus Direct RAM)。所公开的方面的存储设备意在包括但不限于这些和其它合适类型的存储器。
Those skilled in the art will also appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or as hardware depends upon the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure of the embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The above are exemplary embodiments of the present disclosure, but it should be noted that various changes and modifications may be made without departing from the scope of the disclosure of the embodiments of the present invention as defined by the claims. The functions, steps, and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements disclosed in the embodiments of the present invention may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular form "a" is intended to include the plural forms as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The serial numbers of the above disclosed embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
A person of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the present invention (including the claims) is limited to these examples. Under the concept of the embodiments of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, and many other variations of the different aspects of the embodiments of the present invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, etc. made within the spirit and principles of the embodiments of the present invention shall be included within the protection scope of the embodiments of the present invention.
Claims (10)
- A method for diluting cache space, characterized by comprising the following steps: in response to triggering dilution of a cache space, deleting data from the longest queue in the cache space at a preset initial speed; in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of several queues in the cache space, triggering data deletion for the several queues; calculating an amortized speed using the number of all queues that have triggered data deletion and the preset initial speed; deleting data from each queue that has triggered data deletion at the amortized speed; in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of several other queues in the cache space, triggering data deletion for the other several queues and returning to the step of calculating the amortized speed; and in response to triggering a stop to the dilution of the cache space, suspending the data deletion process of all queues that have triggered data deletion.
- The method according to claim 1, characterized in that deleting data from the longest queue in the cache space at the preset initial speed in response to triggering dilution of the cache space further comprises: determining, according to a cached-data expiration policy, a plurality of queues in the cache space from which data is to be deleted; and deleting data from the longest queue among the plurality of queues.
- The method according to claim 1, characterized in that deleting data from the longest queue in the cache space at the preset initial speed in response to triggering dilution of the cache space further comprises: detecting whether the amount of data already cached in the cache space reaches a threshold, or determining whether an instruction actively issued by a user to dilute the cache space has been received; and in response to the amount of data already cached in the cache space reaching the threshold, or the instruction actively issued by the user to dilute the cache space being received, triggering dilution of the cache space.
- The method according to claim 1, characterized in that deleting data from the longest queue in the cache space at the preset initial speed, or deleting data from each queue that has triggered data deletion at the amortized speed, further comprises: randomly marking the data in the queues that have triggered data deletion; and merge-deleting the randomly marked data.
- The method according to claim 1, characterized in that deleting data from the longest queue in the cache space at the preset initial speed, or deleting data from each queue that has triggered data deletion at the amortized speed, further comprises: determining a deletion priority for the data in the queues that have triggered data deletion, and marking the data according to the deletion priority; and merge-deleting the marked data.
- A computer device, comprising: at least one processor; and a memory storing a computer program executable on the processor, characterized in that the processor, when executing the program, performs the following steps: in response to triggering dilution of a cache space, deleting data from the longest queue in the cache space at a preset initial speed; in response to the length of the longest queue undergoing data deletion becoming equal to the lengths of several queues in the cache space, triggering data deletion for the several queues; calculating an amortized speed using the number of all queues that have triggered data deletion and the preset initial speed; deleting data from each queue that has triggered data deletion at the amortized speed; in response to the lengths of all queues undergoing data deletion becoming equal to the lengths of several other queues in the cache space, triggering data deletion for the other several queues and returning to the step of calculating the amortized speed; and in response to triggering a stop to the dilution of the cache space, suspending the data deletion process of all queues that have triggered data deletion.
- The device according to claim 6, characterized in that deleting data from the longest queue in the cache space at the preset initial speed in response to triggering dilution of the cache space further comprises: determining, according to a cached-data expiration policy, a plurality of queues in the cache space from which data is to be deleted; and deleting data from the longest queue among the plurality of queues.
- The device according to claim 6, characterized in that deleting data from the longest queue in the cache space at the preset initial speed in response to triggering dilution of the cache space further comprises: detecting whether the amount of data already cached in the cache space reaches a threshold, or determining whether an instruction actively issued by a user to dilute the cache space has been received; and in response to the amount of data already cached in the cache space reaching the threshold, or the instruction actively issued by the user to dilute the cache space being received, triggering dilution of the cache space.
- The device according to claim 6, characterized in that deleting data from the longest queue in the cache space at the preset initial speed, or deleting data from each queue that has triggered data deletion at the amortized speed, further comprises: randomly marking the data in the queues that have triggered data deletion; and merge-deleting the randomly marked data.
- A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/928,150 US11687271B1 (en) | 2020-06-05 | 2021-02-19 | Method for diluting cache space, and device and medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010505017.8 | 2020-06-05 | ||
CN202010505017.8A CN111736769B (zh) | 2020-06-05 | 2020-06-05 | Method for diluting cache space, and device and medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021244067A1 true WO2021244067A1 (zh) | 2021-12-09 |
Family
ID=72648276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/076932 WO2021244067A1 (zh) | 2020-06-05 | 2021-02-19 | 一种稀释缓存空间的方法、设备以及介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11687271B1 (zh) |
CN (1) | CN111736769B (zh) |
WO (1) | WO2021244067A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111736769B (zh) | 2020-06-05 | 2022-07-26 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method for diluting cache space, and device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106021445A (zh) * | 2016-05-16 | 2016-10-12 | Nubia Technology Co., Ltd. | Method and device for loading cached data |
US20160342359A1 (en) * | 2007-04-19 | 2016-11-24 | International Business Machines Corporation | Method for selectively performing a secure data erase to ensure timely erasure |
CN109491928A (zh) * | 2018-11-05 | 2019-03-19 | Shenzhen Lexin Software Technology Co., Ltd. | Cache control method and device, terminal, and storage medium |
CN110119487A (zh) * | 2019-04-15 | 2019-08-13 | South China University of Technology | Cache update method suitable for divergent data |
CN111736769A (zh) * | 2020-06-05 | 2020-10-02 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method for diluting cache space, and device and medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5757771A (en) * | 1995-11-14 | 1998-05-26 | Yurie Systems, Inc. | Queue management to serve variable and constant bit rate traffic at multiple quality of service levels in a ATM switch |
US9122766B2 (en) * | 2012-09-06 | 2015-09-01 | Microsoft Technology Licensing, Llc | Replacement time based caching for providing server-hosted content |
KR101692055B1 (ko) * | 2016-02-24 | 2017-01-18 | TmaxData Co., Ltd. | Method and apparatus for managing shared memory in a database server, and computer program stored in a computer-readable storage medium |
CN106899516B (zh) * | 2017-02-28 | 2020-07-28 | Huawei Technologies Co., Ltd. | Queue emptying method and related device |
US10157140B2 (en) * | 2017-03-28 | 2018-12-18 | Bank Of America Corporation | Self-learning cache repair tool |
US10331562B2 (en) * | 2017-03-28 | 2019-06-25 | Bank Of America Corporation | Real-time cache repair tool |
CN110232049A (zh) * | 2019-06-12 | 2019-09-13 | Tencent Technology (Shenzhen) Co., Ltd. | Metadata cache management method and apparatus |
CN110995616B (zh) * | 2019-12-06 | 2022-05-31 | Suzhou Inspur Intelligent Technology Co., Ltd. | Management method and device for high-traffic server, and readable medium |
CN111209106B (zh) * | 2019-12-25 | 2023-10-27 | Hangzhou Innovation Institute of Beihang University | Streaming graph partitioning method and system based on a caching mechanism |
JP2022094705A (ja) * | 2020-12-15 | 2022-06-27 | Kioxia Corporation | Memory system and control method |
2020
- 2020-06-05 CN CN202010505017.8A patent/CN111736769B/zh active Active

2021
- 2021-02-19 US US17/928,150 patent/US11687271B1/en active Active
- 2021-02-19 WO PCT/CN2021/076932 patent/WO2021244067A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160342359A1 (en) * | 2007-04-19 | 2016-11-24 | International Business Machines Corporation | Method for selectively performing a secure data erase to ensure timely erasure |
CN106021445A (zh) * | 2016-05-16 | 2016-10-12 | Nubia Technology Co., Ltd. | Method and device for loading cached data |
CN109491928A (zh) * | 2018-11-05 | 2019-03-19 | Shenzhen Lexin Software Technology Co., Ltd. | Cache control method and device, terminal, and storage medium |
CN110119487A (zh) * | 2019-04-15 | 2019-08-13 | South China University of Technology | Cache update method suitable for divergent data |
CN111736769A (zh) * | 2020-06-05 | 2020-10-02 | Suzhou Inspur Intelligent Technology Co., Ltd. | Method for diluting cache space, and device and medium |
Also Published As
Publication number | Publication date |
---|---|
US20230195352A1 (en) | 2023-06-22 |
US11687271B1 (en) | 2023-06-27 |
CN111736769A (zh) | 2020-10-02 |
CN111736769B (zh) | 2022-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10133679B2 (en) | Read cache management method and apparatus based on solid state drive | |
US10067883B2 (en) | Using an access increment number to control a duration during which tracks remain in cache | |
US8949544B2 (en) | Bypassing a cache when handling memory requests | |
US20130205092A1 (en) | Multicore computer system with cache use based adaptive scheduling | |
TWI688859B (zh) | 記憶體控制器與記憶體頁面管理方法 | |
US11099997B2 (en) | Data prefetching method and apparatus, and storage device | |
US9804971B2 (en) | Cache management of track removal in a cache for storage | |
US10031948B1 (en) | Idempotence service | |
WO2017117734A1 (zh) | Cache management method, cache controller, and computer system | |
EP3115904B1 (en) | Method for managing a distributed cache | |
CN113094392A (zh) | Data caching method and apparatus | |
WO2017091984A1 (zh) | Data caching method, storage control device, and storage device | |
WO2018161272A1 (zh) | Cache replacement method, apparatus, and system | |
CN111124270A (zh) | Method, device, and computer program product for cache management | |
WO2021244067A1 (zh) | Method for diluting cache space, and device and medium | |
CN108897495A (zh) | Cache update method and apparatus, cache device, and storage medium | |
US20130067168A1 (en) | Caching for a file system | |
US20190114082A1 (en) | Coordination Of Compaction In A Distributed Storage System | |
WO2023165543A1 (zh) | Shared cache management method, apparatus, and storage medium | |
US11269784B1 (en) | System and methods for efficient caching in a distributed environment | |
WO2023138306A1 (zh) | Caching method and apparatus applied to all-flash storage, device, and medium | |
CN111859225A (zh) | Program file access method, apparatus, computing device, and medium | |
CN110941595A (zh) | File system access method and device | |
US11237975B2 (en) | Caching assets in a multiple cache system | |
CN115840663A (zh) | Method for flushing metadata, electronic device, and computer program product | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21816719 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21816719 Country of ref document: EP Kind code of ref document: A1 |