CN117472799A - Elastic allocation method, device, terminal and storage medium for cache resources - Google Patents

Elastic allocation method, device, terminal and storage medium for cache resources

Info

Publication number
CN117472799A
Authority
CN
China
Prior art keywords
storage
cache
storage pool
information
data
Prior art date
Legal status
Granted
Application number
CN202311831594.6A
Other languages
Chinese (zh)
Other versions
CN117472799B (en)
Inventor
许宇峰
余锡斌
贾彬浩
Current Assignee
Baike Data Technology Shenzhen Co ltd
Original Assignee
Baike Data Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Baike Data Technology Shenzhen Co ltd
Priority to CN202311831594.6A
Publication of CN117472799A
Application granted
Publication of CN117472799B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F 12/0871 Allocation or management of cache space
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an elastic allocation method, device, terminal and storage medium for cache resources, wherein the method comprises the following steps: acquiring usage scenario information of each storage pool, and determining usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool; determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool, and determining a cache demand corresponding to each storage pool based on the target storage space; and allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool. The invention slices the cache resources and allocates the cache fragments flexibly and elastically based on the usage requirement information of each storage pool, thereby improving cache utilization while meeting cache requirements and avoiding waste of storage resources.

Description

Elastic allocation method, device, terminal and storage medium for cache resources
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for elastically allocating cache resources.
Background
For users, the aspects of storage that matter most are performance and capacity. Although a cache can improve performance (for an operating system, for example, it can improve smoothness of operation), cache capacity is expensive. In addition, the prior art does not allocate the cache reasonably, so cache utilization is low.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The invention aims to solve the technical problem that cache utilization is low because the cache is not allocated reasonably in the prior art.
To solve the above technical problem, the technical solution adopted by the invention is as follows:
in a first aspect, the present invention provides a method for elastically allocating cache resources, where the method includes:
acquiring usage scenario information of each storage pool, and determining usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool;
determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool, and determining a cache demand corresponding to each storage pool based on the target storage space;
and allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool.
In one implementation, before the obtaining the usage scenario information of each storage pool, the method further includes:
and receiving a preset instruction, and slicing a preset cache based on the preset instruction to obtain a plurality of cache fragments, wherein the number of cache fragments is the same as the number of storage pools.
In one implementation, the obtaining usage scenario information of each storage pool and determining usage requirement information of each storage pool based on the usage scenario information includes:
acquiring call information of each storage pool, and determining data to be stored corresponding to each storage pool based on the call information;
determining usage scenario information corresponding to the data to be stored based on the data to be stored;
and acquiring historical usage data corresponding to the usage scenario information, and predicting the usage requirement information of each storage pool based on the historical usage data, wherein the historical usage data reflects the data storage events of each storage pool under the usage scenario information.
In one implementation, the predicting usage requirement information of each storage pool based on the historical usage data includes:
determining, based on the historical usage data, the number of times each storage pool performs data storage events under the usage scenario information, and determining the amount of data stored by each storage pool in each event;
determining the data storage event with the largest data volume in each storage pool, and determining the historical demand information corresponding to that data storage event;
and taking the historical demand information corresponding to the data storage event with the largest data volume in each storage pool as the usage requirement information of that storage pool.
In one implementation manner, the determining, based on the target storage space, a cache demand corresponding to each storage pool includes:
acquiring the limit storage space of each storage pool, and comparing the target storage space with the limit storage space;
if the target storage space is larger than the limit storage space, determining a difference space between the target storage space and the limit storage space;
and determining the cache demand based on the difference space.
In one implementation manner, the allocating the corresponding cache fragments for each storage pool based on the cache demand corresponding to each storage pool includes:
obtaining the standard storage amount of each cache fragment, and arranging all the cache fragments from small to large according to the standard storage amount to obtain a first arrangement order;
arranging all the storage pools from small to large according to the cache demand to obtain a second arrangement order;
and pairing the standard storage amounts in the first arrangement order with the cache demands in the second arrangement order from small to large to realize the allocation of the cache fragments, wherein the standard storage amount of the cache fragment allocated to each storage pool is larger than the cache demand of that storage pool.
In one implementation, the method further comprises:
monitoring the execution state of the data storage event of the storage pool in real time;
if the execution state is stopped, acquiring the actual storage amount of the storage pool;
and if the actual storage amount is smaller than the limit storage space of the storage pool, releasing the cache fragments of the storage pool.
In a second aspect, an embodiment of the present invention further provides an elastic allocation device for cache resources, where the device includes:
a usage requirement determining module, used for acquiring usage scenario information of each storage pool and determining usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool;
a cache demand determining module, used for determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool and determining the cache demand corresponding to each storage pool based on the target storage space;
and a cache fragment allocation module, used for allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool.
In a third aspect, an embodiment of the present invention further provides a terminal. The terminal includes a memory, a processor, and an elastic allocation program of cache resources that is stored in the memory and executable on the processor; when the processor executes the elastic allocation program of cache resources, the steps of the elastic allocation method of cache resources in any one of the foregoing schemes are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing an elastic allocation program of cache resources; when the elastic allocation program of cache resources is executed by a processor, the steps of the elastic allocation method of cache resources according to any one of the foregoing schemes are implemented.
The beneficial effects are as follows: compared with the prior art, the invention provides an elastic allocation method of cache resources. The method first obtains usage scenario information of each storage pool and determines usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool. Then, based on the usage requirement information of each storage pool, a target storage space corresponding to each storage pool is determined, and based on the target storage space, the cache demand corresponding to each storage pool is determined. Finally, based on the cache demand corresponding to each storage pool, corresponding cache fragments are allocated to each storage pool. The invention slices the cache resources and allocates the cache fragments flexibly and elastically based on the usage requirement information of each storage pool, improving cache utilization while meeting cache requirements and avoiding waste of storage resources.
Drawings
Fig. 1 is a flowchart of a specific implementation manner of a method for elastically allocating cache resources according to an embodiment of the present invention.
Fig. 2 is a functional schematic diagram of an elastic allocation device for cache resources according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
This embodiment provides an elastic allocation method for cache resources, which can realize flexible allocation of cache resources and improve their utilization. In specific application, the embodiment first obtains the usage scenario information of each storage pool and determines the usage requirement information of each storage pool based on the usage scenario information, where the usage requirement information reflects the usage purpose of each storage pool. Then, based on the usage requirement information of each storage pool, a target storage space corresponding to each storage pool is determined, and based on the target storage space, the cache demand corresponding to each storage pool is determined. Finally, based on the cache demand corresponding to each storage pool, corresponding cache fragments are allocated to each storage pool. Therefore, this embodiment slices the cache resources and allocates the cache fragments flexibly and elastically based on the usage requirement information of each storage pool, improving cache utilization while meeting cache requirements and avoiding waste of storage resources.
The elastic allocation method of the cache resources can be applied to terminals, and the terminals can be intelligent product terminals such as computers and mobile phones. Specifically, as shown in fig. 1, the elastic allocation method of the cache resource of the present embodiment includes the following steps:
and step S100, acquiring the use scene information of each storage pool, and determining the use requirement information of each storage pool based on the use scene information, wherein the use requirement information is used for reflecting the use purpose of each storage pool.
The usage scenario information of each storage pool is different, so that the usage requirement information of each storage pool is different, and the requirement for buffering is also different. The usage scenario information of the present embodiment reflects a scenario in which each storage pool is used, and the usage requirement information is used to reflect a purpose of use of each storage pool. In order to reasonably allocate the cache resources, the embodiment needs to analyze the usage scenario information of each storage pool and the corresponding usage requirement information thereof, so as to obtain the actual situation of each storage pool, so that the cache resources are reasonably allocated in the subsequent steps.
In one implementation, when analyzing the usage requirement information of each storage pool, the embodiment includes the following steps:
step S101, call information of each storage pool is obtained, and data to be stored corresponding to each storage pool is determined based on the call information;
step S102, determining the use scene information corresponding to the data to be stored based on the data to be stored;
step S103, acquiring historical usage data corresponding to the usage scenario information, and predicting the usage requirement information of each storage pool based on the historical usage data, wherein the historical usage data reflects the data storage event of each storage pool under the usage scenario information.
In this embodiment, the cache is preset. Because the capacity of a single cache is relatively large, when a plurality of storage pools all need to store data with the help of the cache, a single undivided cache can hardly meet the demands of the plurality of storage pools and also easily causes waste of storage resources. For this reason, the embodiment may first determine whether to introduce an elastic management mechanism into the cache; when it is determined to introduce the elastic management mechanism, the cache is sliced in advance based on a preset instruction and divided into a plurality of cache fragments, where the number of cache fragments is the same as the number of storage pools. Thus, when a plurality of storage pools all need the cache, a cache fragment can be allocated to each storage pool to meet its data storage requirement. In specific application, the cache can be sliced evenly to obtain a plurality of cache fragments with the same storage amount. Alternatively, the cache may be sliced according to different storage amounts to obtain a plurality of cache fragments with different storage amounts, which makes it convenient to meet the storage requirements of different storage pools.
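As an illustration only (the patent itself specifies no code), the slicing step might be sketched in Python as follows; the function name slice_cache, the modeling of the cache as a total byte budget, and the optional weights parameter are assumptions of this sketch rather than features of the patent:

```python
def slice_cache(total_cache: int, pool_count: int,
                weights: list[float] | None = None) -> list[int]:
    """Split one preset cache into pool_count cache fragments.

    With no weights the cache is sliced evenly; with weights the
    fragments get different standard storage amounts, matching the
    two slicing options described above. Rounding remainders are
    ignored in this sketch.
    """
    weights = weights or [1.0] * pool_count
    if len(weights) != pool_count:
        raise ValueError("one weight per storage pool is required")
    total_weight = sum(weights)
    return [int(total_cache * w / total_weight) for w in weights]

print(slice_cache(64 * 2**30, 4))             # four equal fragments
print(slice_cache(64 * 2**30, 3, [1, 2, 5]))  # three unequal fragments
```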
When allocating cache fragments, the embodiment first obtains call information of each storage pool, where the call information reflects the number of times each storage pool is called and the reason for each call. Based on the call information, the data to be stored in each storage pool when it is called can be determined. The terminal can then determine the corresponding usage scenario information according to the data to be stored. For example, when the data to be stored is data associated with timestamps, the storage pool is required to store data by timestamp node, so the usage scenario information corresponding to the storage pool can be determined to be a data monitoring scenario. For another example, when the data to be stored is related to the user information of a certain application program, the usage scenario information corresponding to the storage pool can be determined to be a private data encryption scenario; when the storage pool is called by the terminal in this scenario, private data such as the user information generated in the application program is encrypted and stored. In one implementation, when the usage scenario information is analyzed based on the data to be stored, a preset mapping table is used for matching, so that the usage scenario information corresponding to the data to be stored is determined efficiently.
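The preset mapping table mentioned above can be pictured as a simple lookup; in this minimal sketch, the feature keys and scenario names are illustrative assumptions, not values defined by the patent:

```python
# Hypothetical mapping from a feature of the data to be stored to the
# usage scenario it implies.
SCENARIO_TABLE = {
    "timestamped": "data monitoring scenario",
    "user_info":   "private data encryption scenario",
}

def match_scenario(data_feature: str) -> str:
    """Match the data to be stored to its usage scenario via the
    preset mapping table described above."""
    return SCENARIO_TABLE.get(data_feature, "default scenario")

print(match_scenario("timestamped"))  # data monitoring scenario
```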
After determining the usage scenario information of each storage pool, the terminal may further obtain historical usage data, where the historical usage data reflects the data storage events performed by each storage pool under the usage scenario information. Based on the historical usage data combined with the determined usage scenario information, the usage requirement information of each storage pool may be predicted. Specifically, the embodiment may first determine, based on the historical usage data, the number of times each storage pool performs data storage events under the usage scenario information, and determine the amount of data stored by each storage pool in each event. That is, the embodiment analyzes, from the historical usage data, the amount of data stored in each storage pool for each executed data storage event, and then screens out, for each storage pool, the data storage event with the largest data volume. This event is the one the storage pool is most likely to perform under the usage scenario information, so the terminal can take the historical demand information corresponding to the data storage event with the largest data volume in each storage pool as the usage requirement information of that storage pool. Of course, in other implementations, the embodiment may instead analyze, from the historical usage data of each storage pool, the data storage event executed most frequently; that event is likewise one the storage pool is most likely to perform under the usage scenario information, so the terminal may take the historical demand information corresponding to the most frequently executed data storage event as the usage requirement information of each storage pool. It can be seen that this embodiment can accurately analyze the usage requirement information of each storage pool based on its usage scenario information, so as to determine in the subsequent steps the cache resources that need to be allocated to each storage pool.
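A minimal sketch of this prediction step, assuming a hypothetical record format of (storage pool, usage scenario, bytes stored) per historical data storage event; all names below are illustrative, not from the patent:

```python
def usage_demand(history: list[tuple[str, str, int]],
                 pool: str, scenario: str) -> int:
    """Take the largest historical data storage event of the pool
    under the given usage scenario as its usage demand, as described
    above. (The alternative implementation would instead key on the
    most frequently occurring event size.)"""
    sizes = [size for p, s, size in history
             if p == pool and s == scenario]
    return max(sizes, default=0)

history = [
    ("pool-a", "monitoring", 1_200), ("pool-a", "monitoring", 4_800),
    ("pool-a", "encryption", 300),   ("pool-b", "monitoring", 2_000),
]
print(usage_demand(history, "pool-a", "monitoring"))  # 4800
```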
Step S200, determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool, and determining the cache demand corresponding to each storage pool based on the target storage space.
After determining the usage requirement information of each storage pool, the embodiment may further analyze the target storage space of each storage pool. The target storage space corresponds to the usage requirement information; different usage requirement information corresponds to different target storage spaces. After determining the target storage space, the embodiment may further determine the cache demand corresponding to each storage pool. In this embodiment, the cache demand corresponds to the target storage space: it is the amount of cache each storage pool needs in order to satisfy the target storage space, on the premise that the storage pool meets its usage requirement information.
In one implementation, when determining the cache demand, the present embodiment includes the following steps:
step S201, obtaining the limit storage space of each storage pool, and comparing the target storage space with the limit storage space;
step S202, if the target storage space is larger than the limit storage space, determining a difference space between the target storage space and the limit storage space;
step S203, determining the cache demand based on the difference space.
Specifically, after determining the usage requirement information, the embodiment may match the target storage space corresponding to the usage requirement information based on a preset mapping relation. The mapping relation can be constructed by taking, for each storage pool, the maximum storage amount corresponding to the usage requirement information in its historical data as the target storage space of that storage pool. Based on this mapping relation, the embodiment can determine the corresponding target storage space accurately and efficiently. The terminal may then obtain the limit storage space of each storage pool, which reflects the maximum amount of data each storage pool can store. For each storage pool, comparing the target storage space with the limit storage space determines whether the storage pool can satisfy the target storage space by itself; if it cannot, the storage pool needs the cache to meet the usage requirement information and realize data storage.
In one implementation, for each storage pool, the embodiment compares the target storage space determined from the usage requirement information with the limit storage space of the storage pool. If the target storage space is larger than the limit storage space, the storage pool cannot meet the usage requirement information by itself; in this case, the difference space between the target storage space and the limit storage space is calculated, which is the space to be compensated by means of the cache, so the embodiment takes the difference space as the cache demand of the storage pool. If the target storage space is smaller than the limit storage space, the limit storage space of the storage pool can meet the usage requirement information without compensation from the cache, so the cache demand is determined to be 0; that is, no cache resources are called, avoiding waste of cache resources.
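The comparison logic reduces to a difference computation; a minimal sketch, with storage sizes modeled as plain integers (an assumption of this illustration):

```python
def cache_demand(target_space: int, limit_space: int) -> int:
    """Cache demand of one storage pool: the difference space when the
    target storage space exceeds the pool's limit storage space, and 0
    otherwise (no cache is called, so cache resources are not wasted)."""
    return max(0, target_space - limit_space)

print(cache_demand(120, 100))  # 20: compensated via a cache fragment
print(cache_demand(80, 100))   # 0: the pool's own space suffices
```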
Step S300, allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool.
After determining the cache demand of each storage pool, the terminal can allocate a cache fragment to each storage pool, so that the storage pool can smoothly complete data storage through the cache fragment.
In one implementation, when allocating cache slices, the embodiment includes the following steps:
step S301, obtaining standard storage amounts of all cache fragments, and arranging all the cache fragments from small to large according to the standard storage amounts to obtain a first arrangement sequence;
step S302, arranging all storage pools from small to large according to the cache demand to obtain a second arrangement sequence;
and step S303, respectively pairing the standard storage amount in the first arrangement sequence and the cache demand amount in the second arrangement sequence from small to large to realize the allocation of the cache fragments, wherein the standard storage amount of the cache fragments allocated to each storage pool is larger than the cache demand amount of the storage pool.
Specifically, the embodiment obtains the standard storage amount of each cache fragment, where the standard storage amount reflects the amount of data each cache fragment can cache. All the cache fragments are then arranged from small to large according to the standard storage amount to obtain a first arrangement order, which shows the amount of data each cache fragment can cache and the gaps between the fragments. The terminal then arranges all the storage pools from small to large according to the cache demand to obtain a second arrangement order, which represents the size of the cache required by each storage pool. Since the number of cache fragments is the same as the number of storage pools, the embodiment can pair the standard storage amounts in the first arrangement order with the cache demands in the second arrangement order from small to large, so that each storage pool is allocated one cache fragment and the standard storage amount of the allocated fragment is larger than the cache demand of the storage pool; in this way, each storage pool can meet its usage requirement information.
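A sketch of this smallest-to-smallest pairing, assuming fragment sizes and pool demands are plain integers (illustrative names); the explicit check encodes the constraint that each allocated fragment must be at least as large as its pool's cache demand:

```python
def allocate_fragments(fragment_sizes: list[int],
                       pool_demands: dict[str, int]) -> dict[str, int]:
    """Pair cache fragments and storage pools smallest-to-smallest.

    fragment_sizes: standard storage amount of each cache fragment.
    pool_demands:   cache demand of each storage pool.
    Returns {pool: allocated fragment size}.
    """
    first_order = sorted(fragment_sizes)                               # ascending
    second_order = sorted(pool_demands.items(), key=lambda kv: kv[1])  # ascending
    allocation = {}
    for size, (pool, demand) in zip(first_order, second_order):
        if size < demand:
            raise ValueError(f"fragment {size} too small for {pool} ({demand})")
        allocation[pool] = size
    return allocation

print(allocate_fragments([8, 32, 16], {"a": 30, "b": 5, "c": 12}))
# {'b': 8, 'c': 16, 'a': 32}
```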
In another implementation, the cache demands of the respective storage pools may differ, and the cache demand of some storage pools may be 0; such storage pools do not need a cache fragment at all. In this case, for each storage pool that does need a cache fragment, the embodiment selects from the first arrangement order the cache fragment whose standard storage amount is slightly greater than or equal to the pool's cache demand, and allocates it to that pool. In this way, each storage pool that needs a cache fragment is allocated exactly one, the standard storage amount of the allocated fragment is only slightly greater than or equal to the pool's cache demand, and the cache resources are utilized to the greatest extent, avoiding waste of cache resources.
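This best-fit variant can be sketched with a binary search over the first arrangement order; again the names and the integer model are illustrative assumptions (a fuller version would also remove each fragment from the order once allocated):

```python
import bisect

def best_fit(fragment_sizes: list[int], demand: int) -> int | None:
    """Pick from the first arrangement order the fragment whose
    standard storage amount is closest to, but not less than, the
    cache demand; pools with zero demand get no fragment."""
    if demand == 0:
        return None
    sizes = sorted(fragment_sizes)
    i = bisect.bisect_left(sizes, demand)
    return sizes[i] if i < len(sizes) else None

print(best_fit([8, 16, 32], 12))  # 16: smallest fragment that fits
print(best_fit([8, 16, 32], 0))   # None: no fragment needed
```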
In addition, in other implementations, after the allocation of the cache fragments is completed, the embodiment may further monitor the execution state of the data storage events in each storage pool in real time. If the execution state is stopped, the storage pool has stopped data storage, and the terminal can obtain the actual storage amount of the storage pool. If the actual storage amount is smaller than the limit storage space of the storage pool, the allocated cache fragment is not used by the storage pool at all, so the cache fragment can be released for other storage pools to call, avoiding waste of cache resources. Of course, if the actual storage amount is larger than the limit storage space of the storage pool but smaller than the sum of the limit storage space and the standard storage amount of the cache fragment, part of the cache space of the fragment is unused, and that unused cache space can be released for other storage pools to call, likewise avoiding waste of cache resources.
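Both release cases can be sketched as follows, assuming a simple per-pool status record; the PoolStatus fields are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PoolStatus:
    actual_storage: int   # data actually stored when the event stopped
    limit_space: int      # the pool's own limit storage space
    fragment_size: int    # standard storage amount of its cache fragment

def releasable_cache(p: PoolStatus) -> int:
    """Cache space that can be released when a pool's data storage
    event stops: the whole fragment if the actual storage fits within
    the pool's own limit space, otherwise only the unused tail of the
    fragment."""
    if p.actual_storage < p.limit_space:
        return p.fragment_size                 # fragment wholly unused
    overflow = p.actual_storage - p.limit_space
    if overflow < p.fragment_size:
        return p.fragment_size - overflow      # release the unused part
    return 0                                   # fragment fully used

print(releasable_cache(PoolStatus(40, 50, 20)))  # 20: release whole fragment
print(releasable_cache(PoolStatus(58, 50, 20)))  # 12: release unused tail
```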
In summary, by slicing the cache resources and allocating the cache fragments flexibly and elastically based on the usage requirement information of each storage pool, this embodiment improves cache utilization while meeting cache requirements and avoids waste of storage resources.
Based on the foregoing embodiment, the present invention further provides an elastic allocation device for cache resources, as shown in fig. 2, where the device includes: a usage requirement determining module 10, a cache demand determining module 20, and a cache fragment allocation module 30. Specifically, the usage requirement determining module 10 is configured to obtain usage scenario information of each storage pool, and determine usage requirement information of each storage pool based on the usage scenario information, where the usage requirement information reflects the usage purpose of each storage pool. The cache demand determining module 20 is configured to determine, based on the usage requirement information of each storage pool, a target storage space corresponding to each storage pool, and determine, based on the target storage space, the cache demand corresponding to each storage pool. The cache fragment allocation module 30 is configured to allocate a corresponding cache fragment to each storage pool based on the cache demand corresponding to each storage pool.
In one implementation, the apparatus further comprises:
and the cache slicing module is used for receiving a preset instruction, slicing the preset cache based on the preset instruction to obtain a plurality of cache slices, wherein the number of the cache slices is the same as that of the storage pools.
In one implementation, the usage requirement determining module 10 includes:
the to-be-stored data determining unit is used for acquiring calling information of each storage pool and determining to-be-stored data corresponding to each storage pool based on the calling information;
the usage scenario information determining unit is used for determining usage scenario information corresponding to the data to be stored based on the data to be stored;
and the usage requirement information determining unit is used for acquiring historical usage data corresponding to the usage scenario information and predicting the usage requirement information of each storage pool based on the historical usage data, wherein the historical usage data reflects the data storage events of each storage pool under the usage scenario information.
In one implementation, the usage requirement information determining unit includes:
a historical data acquisition subunit, configured to determine, based on the historical usage data, the number of times each storage pool performs data storage events under the usage scenario information, and to determine the amount of data stored by each storage pool in each event;
a historical data analysis subunit, configured to determine the data storage event with the largest data volume in each storage pool, and to determine the historical demand information corresponding to that event;
and a usage requirement determining subunit, configured to take the historical demand information corresponding to the data storage event with the largest data volume in each storage pool as the usage requirement information of that storage pool.
In one implementation, the cache demand determining module 20 includes:
the storage space comparison unit is used for acquiring the limit storage space of each storage pool and comparing the target storage space with the limit storage space;
a difference space determining unit configured to determine a difference space between the target storage space and the limit storage space if the target storage space is larger than the limit storage space;
and the cache demand determining unit is used for determining the cache demand based on the difference space.
In one implementation, the cache fragment allocation module 30 includes:
the first arrangement order determining unit is used for obtaining standard storage amounts of all cache fragments, and arranging all the cache fragments from small to large according to the standard storage amounts to obtain a first arrangement order;
the second arrangement order determining unit is used for arranging all the storage pools from small to large according to the cache demand to obtain a second arrangement order;
and a cache fragment pairing unit, configured to pair the standard storage amounts in the first arrangement order with the cache demands in the second arrangement order from small to large to realize the allocation of the cache fragments, wherein the standard storage amount of the cache fragment allocated to each storage pool is larger than the cache demand of that storage pool.
In one implementation, the apparatus further comprises:
the execution state monitoring unit is used for monitoring the execution state of the data storage event of the storage pool in real time;
an actual storage amount determining unit, configured to obtain an actual storage amount of the storage pool if the execution state is stopped;
and the cache fragment releasing unit is used for releasing the cache fragments of the storage pool if the actual storage amount is smaller than the limit storage space of the storage pool.
The working principle of each module in the elastic allocation device of cache resources in this embodiment is the same as that of the corresponding steps in the above method embodiment, and is not described here again.
Based on the above embodiment, the present invention also provides a terminal, and a schematic block diagram of the terminal may be shown in fig. 3. The terminal may include one or more processors 100 (only one is shown in fig. 3), a memory 101, and a computer program 102 stored in the memory 101 and executable on the one or more processors 100, for example, an elastic allocation program of cache resources. When executing the computer program 102, the one or more processors 100 may implement the steps of the embodiments of the elastic allocation method of cache resources. Alternatively, when executing the computer program 102, the one or more processors 100 may implement the functions of the modules/units in the embodiment of the elastic allocation device of cache resources; this is not limited herein.
In one embodiment, the processor 100 may be a central processing unit (CPU), but may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In one embodiment, the memory 101 may be an internal storage unit of the terminal, such as a hard disk or an internal memory of the terminal. The memory 101 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal. The memory 101 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be appreciated by those skilled in the art that the block diagram shown in fig. 3 is merely a block diagram of some of the structures associated with the solution of the present invention and does not limit the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program, which may be stored on a non-transitory computer-readable storage medium and which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An elastic allocation method for cache resources, which is characterized by comprising the following steps:
acquiring usage scenario information of each storage pool, and determining usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool;
determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool, and determining a cache demand corresponding to each storage pool based on the target storage space;
and allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool.
2. The elastic allocation method for cache resources according to claim 1, wherein before the obtaining of the usage scenario information of each storage pool, the method further comprises:
and receiving a preset instruction, and slicing a preset cache based on the preset instruction to obtain a plurality of cache fragments, wherein the number of cache fragments is the same as the number of storage pools.
3. The elastic allocation method for cache resources according to claim 1, wherein the obtaining usage scenario information of each storage pool and determining usage requirement information of each storage pool based on the usage scenario information comprises:
acquiring call information of each storage pool, and determining data to be stored corresponding to each storage pool based on the call information;
determining usage scenario information corresponding to the data to be stored based on the data to be stored;
and acquiring historical usage data corresponding to the usage scenario information, and predicting the usage requirement information of each storage pool based on the historical usage data, wherein the historical usage data reflects the data storage events of each storage pool under the usage scenario information.
4. The elastic allocation method for cache resources according to claim 3, wherein the predicting the usage requirement information of each storage pool based on the historical usage data comprises:
determining, based on the historical usage data, the number of times each storage pool performs data storage events under the usage scenario information, and determining the amount of data stored by each storage pool in each event;
determining the data storage event with the largest data volume in each storage pool, and determining the historical demand information corresponding to that data storage event;
and taking the historical demand information corresponding to the data storage event with the largest data volume in each storage pool as the usage requirement information of that storage pool.
5. The elastic allocation method for cache resources according to claim 1, wherein the determining, based on the target storage space, the cache demand corresponding to each storage pool comprises:
acquiring the limit storage space of each storage pool, and comparing the target storage space with the limit storage space;
if the target storage space is larger than the limit storage space, determining a difference space between the target storage space and the limit storage space;
and determining the cache demand based on the difference space.
6. The elastic allocation method for cache resources according to claim 1, wherein the allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool comprises:
obtaining the standard storage amount of each cache fragment, and arranging all the cache fragments from small to large according to the standard storage amount to obtain a first arrangement order;
arranging all the storage pools from small to large according to the cache demand to obtain a second arrangement order;
and pairing the standard storage amounts in the first arrangement order with the cache demands in the second arrangement order from small to large to realize the allocation of the cache fragments, wherein the standard storage amount of the cache fragment allocated to each storage pool is larger than the cache demand of that storage pool.
7. The elastic allocation method for cache resources according to claim 1, further comprising:
monitoring the execution state of the data storage event of the storage pool in real time;
if the execution state is stopped, acquiring the actual storage amount of the storage pool;
and if the actual storage amount is smaller than the limit storage space of the storage pool, releasing the cache fragments of the storage pool.
8. An elastic allocation device for cache resources, the device comprising:
a usage requirement determining module, used for acquiring usage scenario information of each storage pool and determining usage requirement information of each storage pool based on the usage scenario information, wherein the usage requirement information reflects the usage purpose of each storage pool;
a cache demand determining module, used for determining a target storage space corresponding to each storage pool based on the usage requirement information of each storage pool and determining the cache demand corresponding to each storage pool based on the target storage space;
and a cache fragment allocation module, used for allocating corresponding cache fragments to each storage pool based on the cache demand corresponding to each storage pool.
9. A terminal, characterized in that the terminal comprises a memory, a processor, and an elastic allocation program of cache resources stored in the memory and executable on the processor, wherein the processor, when executing the elastic allocation program of cache resources, implements the steps of the elastic allocation method of cache resources according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that an elastic allocation program of cache resources is stored on the computer-readable storage medium, and the elastic allocation program of cache resources, when executed by a processor, implements the steps of the elastic allocation method of cache resources according to any one of claims 1-7.
Application CN202311831594.6A (priority date 2023-12-28, filed 2023-12-28): Elastic allocation method, device, terminal and storage medium for cache resources. Status: Active; granted as CN117472799B.

Priority Applications (1)

Application Number: CN202311831594.6A (granted as CN117472799B)
Priority Date / Filing Date: 2023-12-28 / 2023-12-28
Title: Elastic allocation method, device, terminal and storage medium for cache resources

Publications (2)

CN117472799A, published 2024-01-30
CN117472799B (granted), published 2024-04-02

Family

ID=89627881

Country Status (1)

CN: CN117472799B (granted)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388589A (en) * 2018-10-08 2019-02-26 郑州云海信息技术有限公司 A kind of method, equipment and storage medium adjusting cache partitions ratio
CN109739440A (en) * 2018-12-28 2019-05-10 武汉市烽视威科技有限公司 Distributed sharing storage method, storage medium, electronic equipment and system
CN112799584A (en) * 2019-11-13 2021-05-14 杭州海康威视数字技术股份有限公司 Data storage method and device
CN116319839A (en) * 2023-02-13 2023-06-23 上海霄云信息科技有限公司 Bucket cross-pool access method and equipment for distributed storage system


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant