CN109582460B - Redis memory data elimination method and device - Google Patents

Redis memory data elimination method and device

Info

Publication number
CN109582460B
CN109582460B (application number CN201710911281.XA)
Authority
CN
China
Prior art keywords
memory
memory data
preset
maximum
deleting
Prior art date
Legal status
Active
Application number
CN201710911281.XA
Other languages
Chinese (zh)
Other versions
CN109582460A (en)
Inventor
鲁振华
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201710911281.XA
Publication of CN109582460A
Application granted
Publication of CN109582460B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4812 Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F9/4831 Task transfer initiation or dispatching by interrupt, e.g. masked with variable priority
    • G06F9/4837 Task transfer initiation or dispatching by interrupt, e.g. masked with variable priority time dependent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022 Mechanisms to release resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a method and a device for eliminating Redis memory data. In the method, after receiving a request for increasing the memory data, the Redis main thread, if it judges that the current memory usage reaches or exceeds a preset maximum memory value and a preset condition is met, deletes memory data according to a first preset strategy and then processes the request for increasing the memory data; the Redis main thread also periodically processes a time event for eliminating memory data, which comprises: deleting memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value. The first preset strategy and the second preset strategy are elimination strategies for the memory data. At least one embodiment of the present application can avoid bursty delays without increasing complexity.

Description

Redis memory data elimination method and device
Technical Field
The invention relates to the field of data processing, in particular to a method and a device for eliminating Redis memory data.
Background
Redis is an open-source, high-performance Key-Value pair (Key-Value) cache database. Redis allows the maximum amount of memory the server may use to be set by configuring the maximum memory value server.maxmemory of the Redis instance; a Redis instance refers to an entity of a Redis process or Redis service.
When the Redis main thread receives a command that adds memory data, it checks whether the amount of memory currently used by Redis (hereinafter referred to as the current memory usage) exceeds server.maxmemory. If it does, Redis performs a real-time memory elimination process, namely: it selects part of the data to delete according to the configured elimination strategy for memory data (i.e. a strategy specifying which data to delete preferentially) until the current memory usage is less than server.maxmemory.
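This existing real-time elimination can be pictured with a minimal sketch; it is not the actual Redis source, and all identifiers (used_memory, maxmemory, evict_one_key) are hypothetical stand-ins introduced for illustration:

```c
#include <stddef.h>

static size_t used_memory = 0;   /* current memory usage of the instance */
static size_t maxmemory   = 0;   /* configured server.maxmemory (0 = no limit) */

/* Evict one key according to the configured elimination strategy and return
 * the number of bytes freed; 0 means nothing is evictable. Stubbed here. */
static size_t evict_one_key(void) { return 0; }

/* Returns 0 if the write may proceed, -1 if enough memory could not be freed. */
static int check_memory_before_write(void)
{
    if (maxmemory == 0)
        return 0;
    while (used_memory >= maxmemory) {
        size_t freed = evict_one_key();
        if (freed == 0)
            return -1;           /* nothing left to evict: reject the write */
        used_memory -= freed;
    }
    return 0;
}
```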
In some cases, if the current memory usage exceeds server.maxmemory by a large amount (for example, when there is a large Key), the memory elimination process takes a long time to execute, so the Redis service does not respond for a long time; alternatively, the Redis main thread may be blocked by deleting a Key during memory elimination, and user requests cannot be answered for a long time.
Redis 4.0 adopts non-blocking deletion in a background thread, that is, the process of deleting the part of the data exceeding the maximum memory value is handed to a background thread for execution; however, introducing multiple threads increases program complexity and reduces program stability. When the background thread has too much work, it may be too busy to delete the designated Keys in time, which can cause memory to back up across the entire Redis service and exceed the maximum memory by a large amount.
Disclosure of Invention
The application provides a method and a device for eliminating Redis memory data, which avoid sudden delays without increasing complexity.
The technical scheme is as follows.
A method for eliminating Redis memory data comprises the following steps:
after receiving a request for increasing the memory data, the Redis main thread, if it judges that the current memory usage reaches or exceeds a preset maximum memory value and a preset condition is met, deletes memory data according to a first preset strategy and then processes the request for increasing the memory data;
the Redis main thread periodically processes a time event for eliminating memory data; the time event for eliminating memory data comprises: deleting memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
Wherein, the preset condition may refer to:
the current memory usage amount reaches or exceeds N times of the maximum memory value, and N is larger than 1.
Wherein 2 > N > 1.
After deleting the memory data according to the first preset policy, processing the request for adding the memory data may include:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processing the request for increasing the memory data.
The time event for eliminating the memory data may further include:
and when the processing time limit of this time event is reached or the memory usage is lower than the maximum memory value, stopping the deletion of memory data.
Wherein, the elimination method can also comprise the following steps:
and adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value.
Wherein, the elimination method can also comprise the following steps:
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
An elimination device for Redis memory data, comprising: a processor and a memory;
the memory is used for storing a program for eliminating memory data; when the program for performing the memory data elimination is read and executed by the processor, the following operations are performed:
after receiving a request for increasing the memory data, if the current memory usage is judged to reach or exceed a preset maximum memory value and meet a preset condition, deleting the memory data according to a first preset strategy, and then processing the request for increasing the memory data;
periodically processing time events for eliminating memory data; the time event for obsoleting the memory data comprises the following steps: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
After deleting the memory data according to the first preset policy, processing the request for adding the memory data may include:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processing the request for increasing the memory data.
Wherein the memory is further configured to store a program for performing an adjustment, which when read and executed by the processor, may perform one or more of the following:
adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value;
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
An elimination device for Redis memory data, comprising:
the first processing module is used for, after a request for increasing the memory data is received and it is judged that the current memory usage reaches or exceeds a preset maximum memory value and meets a preset condition, deleting memory data according to a first preset strategy and then processing the request for increasing the memory data;
the second processing module is used for periodically processing time events for eliminating the memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
In at least one embodiment of the application, at least part of the Redis memory elimination process can be run as a background task through cooperation between instant elimination and time events, so that sudden delays caused by memory elimination are avoided and the influence of memory elimination on the Redis service response time is reduced.
In an implementation manner of the embodiment of the application, the background memory elimination strength can be flexibly adjusted according to the system load, the elimination range can be flexibly adjusted and configured, and gradual memory elimination is realized.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
Fig. 1 is a flowchart of a method for eliminating Redis memory data according to a first embodiment;
FIG. 2 is a flowchart illustrating the processing of a file event in an example according to the first embodiment;
FIG. 3 is a flowchart of a time event process performed in an example according to the first embodiment;
fig. 4 is a schematic diagram of a device for eliminating Redis memory data according to the second embodiment.
Detailed Description
The technical solution of the present application will be described in more detail with reference to the accompanying drawings and embodiments.
It should be noted that, if not conflicting, different features in the embodiments and implementations of the present application may be combined with each other and are within the scope of protection of the present application. Additionally, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
In one configuration, a computing device performing Redis memory data eviction may include one or more processors (CPUs), input/output interfaces, network interfaces, and memory (memories).
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. The memory may include one or more modules.
Computer-readable media include permanent and non-permanent, removable and non-removable storage media that can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
In a first embodiment, a method for eliminating Redis memory data, as shown in FIG. 1, includes steps S110 to S120.
S110, after receiving a request for increasing the memory data, if the current memory usage is judged to reach or exceed a preset maximum memory value and meet a preset condition, deleting the memory data according to a first preset strategy, and then processing the request for increasing the memory data;
S120, the Redis main thread periodically processes a time event for eliminating memory data; the time event for eliminating memory data comprises: deleting memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
In this embodiment, steps S110 and S120 are independent and are executed alternately; there is no necessary ordering between S110 and S120.
In this embodiment, step S110 may be regarded as performing real-time elimination on the memory data under a specific condition (the current memory usage amount reaches or exceeds the maximum memory value and satisfies a preset condition), and step S120 may be regarded as eliminating the memory data under the specific condition (the current memory usage amount reaches or exceeds the maximum memory value) through a time event.
In this embodiment, at least part of the Redis memory elimination process may be run as a background task through cooperation between immediate elimination and the time event. If the memory usage only temporarily exceeds the maximum memory value but does not satisfy the preset condition, memory data may first be eliminated using the time event, so the sudden delay caused by memory elimination can be avoided and the influence of memory elimination on the Redis service response time is reduced. On the other hand, when the memory usage reaches or exceeds the maximum memory value and the preset condition is met, elimination can be performed immediately, which avoids the situation where memory data cannot be eliminated in time when only a time event is used.
Compared with a background-thread deletion scheme, this embodiment uses a time event to participate in deleting memory data, so the memory situation is more controllable; in addition, this embodiment does not need an extra background thread, and keeping the single-thread model helps reduce system complexity and avoids synchronization overhead.
In this embodiment, the preset condition is added to prevent the time event from eliminating memory data too late. For example, when the current memory usage exceeds the maximum memory value by a large margin, or the memory usage has exceeded the maximum memory value many times or continuously over a recent period, or a large volume of data has been written recently, part of the memory data may be deleted immediately first and the rest deleted in cooperation with the time event. If the preset condition is not met, then even if the current memory usage reaches or exceeds the maximum memory value it can be considered an occasional overshoot, immediate elimination is not performed for the moment, and elimination is temporarily done only through the time event.
Compared with a scheme that eliminates memory immediately whenever the memory usage happens to exceed the maximum memory value, this embodiment can eliminate the excess memory data in the form of a time event when the CPU is idle, avoiding a sudden large delay to the service; the smoothness of the system's response speed is therefore improved, and request delay in burst situations is reduced.
Generally, the Redis main thread processes file events and time events in turn. A file event can be regarded as providing or processing a service; file events can include write events and read events, for example, a write event can be the receipt of a write request. A time event may be considered a background task.
Time events can be divided into periodic time events and timed time events; a timed time event needs to be processed only once at a given time, while a periodic time event needs to be processed at intervals.
There may be one or more time events. A processing time limit, such as 200 ms, may be set for each time event; that is, the time event is processed for at most 200 ms, after which processing switches to other time events or file events.
In this embodiment, the time event for eliminating memory data may be established by adding a new time event that judges whether the current memory usage reaches or exceeds the maximum memory value and, if so, deletes memory data according to the second preset strategy.
In this embodiment, the time event for eliminating memory data may also be created by adding, to the existing time event that periodically deletes expired Keys, a judgment statement for judging whether the current memory usage reaches or exceeds the maximum memory value; on the basis of deleting expired Keys, other policies may be added to the strategy for eliminating memory data.
In general, when the time to process a time event arrives but the Redis main thread is processing a file event, the time event waits and is processed after the current file event finishes (when the CPU is idle). For example, a time event may be set to be processed every 100 milliseconds (ms); when its turn comes, if the CPU is busy, it waits until the CPU is idle before being executed.
In this embodiment, when the Redis main thread needs to process a file event, for example, when a write request is received, step S110 may be executed; when the Redis main thread is to process a time event, step S120 is performed. As can be seen, steps S110 and S120 would be performed in turn by the Redis main thread; during the operation of the Redis main thread, the steps S110 and S120 may not be executed or may be executed one or more times according to the memory usage.
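A sketch of how the two steps could look in this single-threaded setting is given below; it is a sketch under stated assumptions rather than the patented implementation, the identifiers (used_memory, maxmemory, N, evict_one_key, apply_write) are hypothetical, and the registration of the time event into the Redis event loop is omitted:

```c
#include <stddef.h>

static size_t used_memory, maxmemory;   /* current usage and server.maxmemory */
static double N = 1.5;                  /* preset condition: usage >= N * maxmemory */

/* Evict one key by the given elimination strategy; returns bytes freed (0 = none). */
static size_t evict_one_key(void) { return 0; /* stub */ }
static void   apply_write(size_t bytes) { used_memory += bytes; }

/* S110: file-event path, run when a request that adds memory data arrives. */
static void on_write_request(size_t bytes)
{
    size_t hard_limit = (size_t)(N * (double)maxmemory);
    /* reaching hard_limit implies reaching maxmemory, since N > 1 */
    if (maxmemory != 0 && used_memory >= hard_limit) {
        /* preset condition met: evict immediately by the first preset strategy
         * until usage is no more than N times the maximum memory value */
        while (used_memory > hard_limit) {
            size_t freed = evict_one_key();
            if (freed == 0) break;
            used_memory -= freed;
        }
    }
    apply_write(bytes);                 /* then process the request normally */
}

/* S120: time event, registered to run periodically with a processing budget. */
static void eviction_time_event(long budget_ms, long (*now_ms)(void))
{
    long deadline = now_ms() + budget_ms;
    while (maxmemory != 0 && used_memory >= maxmemory && now_ms() < deadline) {
        size_t freed = evict_one_key();   /* second preset strategy */
        if (freed == 0) break;
        used_memory -= freed;
    }
}
```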
In this embodiment, the request for increasing the memory data may be, but is not limited to, a write request.
In this embodiment, the maximum memory value may be, but is not limited to, server.maxmemory.
In this embodiment, the maximum memory value may be any of the following parameters:
a threshold for eliminating memory data, such that when the memory usage reaches or exceeds this value the memory data is considered to need elimination (in this embodiment, once memory data is considered to need elimination, it can be eliminated in the time event for eliminating memory data, and it can be eliminated immediately once the preset condition is reached);
memory values defined in the product specification;
memory values allocated for database instances.
Considering that the memory usage may increase suddenly in a burst situation, the memory usage may be allowed to temporarily exceed the maximum memory value. For example, in this embodiment, the memory usage is temporarily higher than the maximum memory value when it already exceeds the maximum memory value but the time event for eliminating memory data has not yet been processed, or when it still exceeds the maximum memory value after the time event for eliminating memory data has been processed.
In practical applications, an upper limit may be set on how far the memory usage may temporarily exceed the maximum memory value. For example, in one implementation, suppose the product specification is 1G of memory and the maximum memory usage is 1.5G; in this case the maximum memory value is 1G, and memory data is considered to need elimination when the memory usage reaches or exceeds 1G; the maximum memory usage means that the memory usage may be allowed to temporarily exceed the maximum memory value (1G), but by at most 0.5G (for example, if the memory usage is greater than 1.5G, memory data needs to be eliminated immediately).
In this embodiment, the current memory usage may refer to the size of the memory used by the Redis instance corresponding to the Redis main thread. For example, suppose the memory usage is 900M when the Redis main thread starts to process a file event; if the Redis main thread receives a write request at that point, the current memory usage is 900M. Suppose that write request writes 300M of data; if the Redis main thread then receives another write request, the current memory usage is 900M + 300M = 1200M.
In this embodiment, the first preset policy and the second preset policy may be the same or different.
The first preset policy and the second preset policy are elimination policies of the memory data, and may mean that the first preset policy and the second preset policy provide criteria for deleting the memory data, such as in what order to delete the memory data, in what priority to delete the memory data, in what condition to select the memory data to be deleted, and the like.
In one implementation, the first preset policy or the second preset policy may adopt any one or more of the following policies (a sketch of how such policies might rank candidate Keys follows this list):
deleting Key randomly;
deleting only the Key with the expiration time set;
deleting from near to far according to the expiration time of the Key, namely: deleting already-expired Keys first, then deleting Keys that will expire sooner; for example, if Key1 expires in 24 hours and Key2 expires in 30 hours, Key1 is deleted before Key2;
deleting from small to large according to the priority of the Key;
deleting from small to large according to the access frequency of the Key;
and deleting from far to near according to the last access time of the Key.
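As a rough illustration of how several of these policies could rank candidate Keys, the sketch below compares two Keys under a simplified metadata record; the struct fields and policy names are assumptions made for the example, not definitions taken from the patent:

```c
typedef struct {
    const char *name;
    long long   expire_at;     /* expiration time in ms, -1 if no expiration set */
    int         priority;      /* lower value = lower priority */
    long long   access_freq;   /* approximate access frequency */
    long long   last_access;   /* last access time in ms */
} key_meta;

typedef enum { POLICY_TTL, POLICY_PRIORITY, POLICY_LFU, POLICY_LRU } policy_t;

/* Returns nonzero if key a should be evicted before key b under policy p. */
static int evict_before(const key_meta *a, const key_meta *b, policy_t p)
{
    switch (p) {
    case POLICY_TTL:       /* nearer expiration time is deleted first */
        return a->expire_at >= 0 &&
               (b->expire_at < 0 || a->expire_at < b->expire_at);
    case POLICY_PRIORITY:  /* lower priority is deleted first */
        return a->priority < b->priority;
    case POLICY_LFU:       /* lower access frequency is deleted first */
        return a->access_freq < b->access_freq;
    case POLICY_LRU:       /* older last-access time is deleted first */
        return a->last_access < b->last_access;
    }
    return 0;
}
```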
In one implementation, the preset condition may refer to:
the current memory usage reaches or exceeds N times the maximum memory value, where N is greater than 1; that is, the current memory usage reaches or exceeds (100 + X)% of the maximum memory value, with N = 1 + X/100. The extra X% of the maximum memory value can be regarded as memory additionally provided for the user, to be used when the memory usage suddenly becomes large.
Where 2 > N > 1.
In this implementation, N may be, but is not limited to being, equal to 1.5.
In the implementation mode, N can be flexibly adjusted automatically or manually according to business conditions, requirements or experimental results and the like, and can be set and adjusted through configuration items.
This implementation can be called progressive memory elimination: whether instant memory elimination is performed, or only timed memory elimination, is selected according to the current memory usage (when the memory usage reaches or exceeds the maximum memory value but does not exceed N times the maximum memory value, effectively only timed memory elimination is performed).
In this implementation, after deleting the memory data according to the first preset policy, processing the request for adding the memory data may include:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processing the request for increasing the memory data.
In other implementations, the preset condition may be or include other conditions, such as one or more of the following (a sketch of how such conditions might be checked follows this list):
in the last period of time, the times that the memory usage exceeds the maximum memory value exceed the preset times;
in the last period of time, the memory usage always exceeds the maximum memory value;
in the last period of time, the written data volume exceeds a preset data volume threshold;
the amount of data deleted in the time event during the last period of time is below a predetermined deletion amount threshold, etc.
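A rough sketch of evaluating these alternative conditions over a recent time window follows; the window statistics, threshold values and field names are all illustrative assumptions rather than values given in the text:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned over_limit_count;   /* times usage exceeded the maximum memory value */
    bool     always_over_limit;  /* usage stayed above the maximum the whole window */
    size_t   bytes_written;      /* data volume written in the recent window */
    size_t   bytes_evicted;      /* data volume deleted by time events in the window */
} recent_window_stats;

static bool preset_condition_met(const recent_window_stats *w)
{
    const unsigned max_over_count  = 10;           /* preset number of times */
    const size_t   write_threshold = 256u << 20;   /* preset data volume threshold */
    const size_t   evict_threshold =  16u << 20;   /* preset deletion amount threshold */

    return w->over_limit_count > max_over_count ||
           w->always_over_limit ||
           w->bytes_written > write_threshold ||
           w->bytes_evicted < evict_threshold;
}
```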
Accordingly, in these other implementations, after deleting the memory data according to the first preset policy, processing the request for adding the memory data may include:
and processing a request for increasing the memory data after deleting the memory data of the preset quantity according to the first preset strategy.
In other implementation manners, after deleting the memory data according to the first preset policy, processing the request for adding the memory data may also include: and deleting the memory data according to a first preset strategy until the memory usage does not meet a preset condition, and processing a request for increasing the memory data.
In one implementation manner, the processing frequency and the processing time limit of the time event for eliminating the memory data can be flexibly adjusted through configuration items, and automatic adjustment or manual adjustment can be performed according to the service condition in an actual scene.
In this implementation, the intensity of memory elimination can be adjusted.
In other implementation modes, the strategy for eliminating the memory data, the range of keys and the like can be automatically or manually adjusted according to the business condition.
In one implementation, the time event for eliminating memory data may further include:
and when the processing time limit of this time event is reached or the memory usage is lower than the maximum memory value, stopping the deletion of memory data.
In this implementation, the elimination method may further include:
and adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value.
In this implementation, the more the memory usage exceeds the maximum memory value, the longer the processing time limit of the time event for eliminating memory data is set.
In this implementation, the correspondence between the amplitude by which the memory usage exceeds the maximum memory value and the processing time limit of the time event for eliminating memory data may be preset, and the processing time limit may be adjusted according to that correspondence; for example, when the memory usage exceeds the maximum memory value by 1%, the processing time limit is 10 ms, and when it exceeds by 20%, the processing time limit is 200 ms. The correspondence may be determined according to business conditions, requirements, experiments, and so on.
In this implementation, the processing time limit may also be adjusted according to the change of the memory usage amount exceeding the maximum memory value, for example, if the amplitude is changed from 10% to 20%, the processing time limit is increased by a certain proportion or a certain length.
In this implementation, an upper limit and a lower limit may be set for a processing time limit of a time event for eliminating memory data, and the upper limit and the lower limit should not be exceeded during adjustment.
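One way to realise the correspondence just described is a small lookup that maps the overshoot percentage to a processing time limit and clamps the result. In this sketch, the 1% to 10 ms and 20% to 200 ms points follow the example above, while the intermediate rows and the clamping bounds are assumptions made for illustration:

```c
#include <stddef.h>

static long time_limit_ms_for_overshoot(double overshoot_pct)
{
    static const struct { double pct; long limit_ms; } table[] = {
        {  1.0,  10 },
        {  5.0,  50 },
        { 10.0, 100 },
        { 20.0, 200 },
    };
    size_t i;
    long limit = 10;                                /* lower bound */
    for (i = 0; i < sizeof table / sizeof table[0]; i++)
        if (overshoot_pct >= table[i].pct)
            limit = table[i].limit_ms;
    if (limit > 200) limit = 200;                   /* upper bound */
    return limit;
}
```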
In one implementation, the elimination method may further include:
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
In this implementation, the hardware resources may include a CPU, a memory, a disk, and the like. The usage data may include, but is not limited to, usage rate, etc.
In this implementation, the higher the utilization of the hardware resources, the lower the processing frequency of the time event for eliminating memory data is set. For example, when the usage of one or more hardware resources exceeds a usage threshold, the processing frequency of the time event for eliminating memory data is reduced.
In this implementation, the correspondence between the hardware resource usage data and the processing frequency of the time event for eliminating the memory data may be preset, and the processing frequency may be adjusted according to the correspondence; the correspondence may be determined according to business conditions, requirements, or experiments, etc. The hardware resource usage data may be usage data of one or more hardware resources, which is calculated as a quantized value according to a predetermined manner (such as, but not limited to, a weighted summation manner), and the quantized value is used for corresponding to the processing frequency.
In this implementation, the processing frequency may also be adjusted according to changes in the hardware resource usage data; for example, if the usage rate drops by 20%, the processing frequency is increased by one execution per second.
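A sketch of this adjustment, assuming the weighted-sum quantization mentioned above, could look as follows; the weights, the linear mapping and the frequency bounds are assumptions made purely for illustration:

```c
static double quantize_load(double cpu_pct, double mem_pct, double disk_pct)
{
    return 0.5 * cpu_pct + 0.3 * mem_pct + 0.2 * disk_pct;   /* weighted sum */
}

static double events_per_second_for_load(double load_pct)
{
    double freq = 10.0 - load_pct / 10.0;   /* busier system -> lower frequency */
    if (freq < 1.0)  freq = 1.0;            /* lower bound */
    if (freq > 10.0) freq = 10.0;           /* upper bound */
    return freq;
}
```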
In this implementation, an upper limit and a lower limit may be set for the processing frequency of the time event for eliminating memory data, and the adjustment should not go beyond these limits.
In one implementation, in step S110, if the current memory usage does not exceed the maximum memory value, the request for increasing the memory data may be processed directly; if the current memory usage reaches or exceeds the maximum memory value but the preset condition is not met, the request for increasing the memory data may also be processed directly.
In one implementation, in step S120, the time event may further include: if the current memory usage does not reach the maximum memory value, no memory data is deleted.
The present embodiment is described below by way of an example.
In this example, the request to add memory data includes a write request; the predetermined condition is that the current amount of memory usage reaches or exceeds (100 + X)% of the maximum memory value.
In this example, the process of processing a file event by a Redis main thread is shown in FIG. 2, and includes steps 201 to 205:
201. when receiving a new write request, the Redis main thread executes step 202;
202. the Redis main thread judges whether the current memory usage exceeds the maximum memory value server.maxmemory of the configured Redis instance; if not, go to step 203; if so, go to step 204;
203. the Redis main thread executes the new write request;
204. the Redis main thread judges whether the current memory usage exceeds server.maxmemory × (100 + X)%, where X% is a buffer memory size additionally provided for the user; if so, go to step 205; if not, go to step 203;
205. the Redis main thread eliminates Keys immediately according to the first preset strategy until the current memory usage does not exceed server.maxmemory × (100 + X)%.
The size of X may be set and adjusted according to the situation, and may be set to 50 in this example.
In this example, the process of processing the time event by the Redis main thread is shown in FIG. 3, and includes steps 301 to 302 performed periodically:
301. judging whether the current memory usage exceeds the maximum memory value server.maxmemory of the configured Redis instance; if so, go to step 302; if not, end;
302. eliminating Keys (namely deleting Keys) according to the second preset strategy until the time limit for processing the time event is reached or the memory usage does not exceed the maximum memory value server.maxmemory.
During the running of the Redis main thread, the two flows shown in FIG. 2 and FIG. 3 are executed in turn.
Besides the time events corresponding to steps 301 to 302, the Redis main thread can also process other time events.
The length of time spent eliminating Keys in a single time event (namely the processing time limit of the time event corresponding to steps 301 to 302) and the range of Keys selected during elimination can be adjusted according to how far the current memory usage exceeds server.maxmemory. For example, when the current memory usage exceeds server.maxmemory by a larger margin, such as by 50%, the processing time limit can be extended and the range of Keys selected for elimination can be enlarged; when the excess is smaller, such as 5% of server.maxmemory, the processing time limit can be shortened and the range of Keys selected can be narrowed.
The execution frequency of the time event can be flexibly adjusted according to the conditions of system load (including CPU, disk and memory), for example, when the CPU utilization rate is low, the execution frequency of the time event can be increased; when the CPU utilization is high, the execution frequency of the time event can be reduced.
Embodiment two: a device for eliminating Redis memory data includes: a processor and a memory;
the memory is used for storing a program for eliminating memory data; when the program for performing memory data elimination is read and executed by the processor, the following operations are performed:
after a request for increasing the memory data is received, if the current memory usage is judged to reach or exceed a preset maximum memory value and meet a preset condition, deleting the memory data according to a first preset strategy, and then processing the request for increasing the memory data;
periodically processing time events for eliminating memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
In this embodiment, the program for performing the memory data elimination may be a part of the Redis main thread when being read and executed.
In one implementation, the preset condition may refer to:
the current memory usage amount reaches or exceeds N times of the maximum memory value, and N is larger than 1.
In this implementation, there may be, but is not limited to: 2 > N > 1.
In this implementation, after deleting the memory data according to the first preset policy, processing the request for adding the memory data may include:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and processing a request for increasing the memory data.
In one implementation, the time event for obsoleting the memory data may further include:
and when the processing time limit of this time event is reached or the memory usage is lower than the maximum memory value, stopping the deletion of memory data.
In one implementation, the memory may be further configured to store a program for performing an adjustment, and the program for performing an adjustment, when read and executed by the processor, may perform the following operations:
and adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value.
In this implementation, the program for performing adjustment may be a part of the Redis main thread, may be another thread in the Redis process, or may be another process when being read and executed.
In one implementation, the memory may be further configured to store a program for performing an adjustment, and the program for performing an adjustment, when read and executed by the processor, may perform the following operations:
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
In this embodiment, when the program for performing memory data elimination is read and executed, the operations performed may correspond to S110 and S120 in the first embodiment, and when the program for performing memory data elimination is read and executed, other implementation details of the operations performed may refer to the first embodiment; other implementation details of the operation performed by the program for performing the adjustment when the program is read and executed may refer to a corresponding implementation manner in the first embodiment.
In a third embodiment, as shown in fig. 4, an elimination apparatus for Redis memory data includes:
a first processing module 41, configured to, after receiving a request for adding memory data, if it is determined that a current memory usage amount reaches or exceeds a preset maximum memory value and meets a preset condition, delete memory data according to a first preset policy, and then process the request for adding memory data;
a second processing module 42, configured to periodically process a time event for eliminating the memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
In one implementation, the preset condition may refer to:
the current memory usage amount reaches or exceeds N times of the maximum memory value, and N is larger than 1.
In this implementation, there may be, but is not limited to: 2 > N > 1.
In this implementation, after the first processing module deletes the memory data according to the first preset policy, processing the request for increasing the memory data may include:
the first processing module deletes the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processes the request for increasing the memory data.
In one implementation, the time event for eliminating the memory data may further include:
and when the processing time limit of this time event is reached or the memory usage is lower than the maximum memory value, stopping the deletion of memory data.
In one implementation, the elimination apparatus may further include:
and the adjusting module is used for adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value.
In one implementation, the elimination apparatus may further include:
and the adjusting module is used for adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
In this embodiment, operations performed by the first processing module and the second processing module may respectively correspond to S110 and S120 of the first embodiment, and other implementation details of the first processing module and the second processing module may refer to the first embodiment; other implementation details of the adjusting module can refer to the corresponding implementation manner in the first embodiment.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present application is not limited to any specific form of hardware or software combination.
There are, of course, many other embodiments of the invention that can be devised without departing from the spirit and scope thereof, and it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the invention.

Claims (9)

1. A method for eliminating Redis memory data comprises the following steps:
after receiving a request for increasing the memory data, the Redis main thread processes the request for increasing the memory data after deleting the memory data according to a first preset strategy if judging that the current memory usage reaches or exceeds a preset maximum memory value and meets a preset condition; the preset conditions are as follows: the current memory usage amount reaches or exceeds N times of the maximum memory value, 2 > N > 1;
the Redis main thread periodically processes time events for eliminating memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value; the time event for eliminating the memory data is established by adding a time event or adding a judgment statement for judging whether the current memory usage reaches or exceeds the maximum memory value in the time event for deleting the expired Key at regular time;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
2. The elimination method of claim 1, wherein processing the request for adding the memory data after deleting the memory data according to the first preset policy comprises:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processing the request for increasing the memory data.
3. The eviction method of claim 1, wherein the temporal event for evicting memory data further comprises:
and when the processing time limit of the time event is reached or the memory usage is lower than the maximum memory value, stopping deleting the memory data.
4. The elimination method of claim 3, further comprising:
and adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value.
5. The elimination method of claim 1, further comprising:
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
6. An elimination device for Redis memory data, comprising: a processor and a memory;
the method is characterized in that:
the memory is used for storing a program for eliminating memory data; when the program for performing memory data elimination is read and executed by the processor, the following operations are performed:
after receiving a request for increasing the memory data, if the current memory usage is judged to reach or exceed a preset maximum memory value and meet a preset condition, deleting the memory data according to a first preset strategy, and then processing the request for increasing the memory data; the preset conditions are as follows: the current memory usage amount reaches or exceeds N times of the maximum memory value, 2 > N > 1;
periodically processing time events for eliminating memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value; the time event for eliminating the memory data is established by adding a time event or adding a judgment statement for judging whether the current memory usage reaches or exceeds the maximum memory value in the time event for deleting the expired Key at regular time;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
7. The elimination apparatus of claim 6, wherein processing the request to add memory data after deleting memory data according to the first predetermined policy comprises:
and deleting the memory data according to a first preset strategy until the memory usage is less than or equal to N times of the maximum memory value, and then processing the request for increasing the memory data.
8. The elimination apparatus of claim 6, wherein the memory is further configured to store a program for adjustment that, when read and executed by the processor, performs one or more of:
adjusting the processing time limit of the time event for eliminating the memory data according to the amplitude that the current memory usage exceeds the maximum memory value;
and adjusting the processing frequency of the time event for eliminating the memory data according to the hardware resource use data of the Redis service.
9. A device for eliminating Redis memory data is characterized by comprising:
the first processing module is used for, after a request for increasing the memory data is received and it is judged that the current memory usage reaches or exceeds a preset maximum memory value and meets a preset condition, deleting memory data according to a first preset strategy and then processing the request for increasing the memory data; the preset conditions are as follows: the current memory usage amount reaches or exceeds N times of the maximum memory value, 2 > N > 1;
the second processing module is used for periodically processing time events for eliminating the memory data; the time event for obsoleting the memory data comprises: deleting the memory data according to a second preset strategy when the current memory usage reaches or exceeds the maximum memory value; the time event for eliminating the memory data is established by adding a time event or adding a judgment statement for judging whether the current memory usage reaches or exceeds the maximum memory value in the time event for deleting the expired Key at regular time;
the first preset strategy and the second preset strategy are elimination strategies of the memory data.
CN201710911281.XA 2017-09-29 2017-09-29 Redis memory data elimination method and device Active CN109582460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710911281.XA CN109582460B (en) 2017-09-29 2017-09-29 Redis memory data elimination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710911281.XA CN109582460B (en) 2017-09-29 2017-09-29 Redis memory data elimination method and device

Publications (2)

Publication Number Publication Date
CN109582460A CN109582460A (en) 2019-04-05
CN109582460B true CN109582460B (en) 2023-03-21

Family

ID=65919045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710911281.XA Active CN109582460B (en) 2017-09-29 2017-09-29 Redis memory data elimination method and device

Country Status (1)

Country Link
CN (1) CN109582460B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502327A (en) * 2019-08-28 2019-11-26 四川长虹电器股份有限公司 Method based on the processing of Redis high concurrent delayed tasks
CN112347134B (en) * 2020-11-05 2023-05-30 平安科技(深圳)有限公司 Redis cache management method, device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1558691A (en) * 2004-01-14 2004-12-29 中兴通讯股份有限公司 Method for timed monitoring of memory database in mobile communication equipment
CN102479249A (en) * 2010-11-26 2012-05-30 中国科学院声学研究所 Method for eliminating cache data of memory of embedded browser
CN106897141A (en) * 2015-12-21 2017-06-27 北京奇虎科技有限公司 The processing method and processing device of information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8032719B2 (en) * 2005-04-14 2011-10-04 Tektronix International Sales Gmbh Method and apparatus for improved memory management in data analysis
US8566521B2 (en) * 2010-09-01 2013-10-22 International Business Machines Corporation Implementing cache offloading

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1558691A (en) * 2004-01-14 2004-12-29 中兴通讯股份有限公司 Method for timed monitoring of memory database in mobile communication equipment
CN102479249A (en) * 2010-11-26 2012-05-30 中国科学院声学研究所 Method for eliminating cache data of memory of embedded browser
CN106897141A (en) * 2015-12-21 2017-06-27 北京奇虎科技有限公司 The processing method and processing device of information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Redis expiration and eviction strategy (Redis的过期淘汰策略); 壹页书; blog.itpub.net/29254281/viewspace-2106910/; 2016-05-25; pages 1-2 *

Also Published As

Publication number Publication date
CN109582460A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
US10185592B2 (en) Network storage device using dynamic weights based on resource utilization
US8818989B2 (en) Memory usage query governor
US9477618B2 (en) Information processing device, information processing system, storage medium storing program for controlling information processing device, and method for controlling information processing device
US8583608B2 (en) Maximum allowable runtime query governor
US10884667B2 (en) Storage controller and IO request processing method
US10089266B2 (en) Power saving feature for storage subsystems
CN105824691B (en) The method and device of dynamic regulation thread
CN111522636A (en) Application container adjusting method, application container adjusting system, computer readable medium and terminal device
US20220195434A1 (en) Oversubscription scheduling
CN111258967A (en) Data reading method and device in file system and computer readable storage medium
CN111753065A (en) Request response method, system, computer system and readable storage medium
US11936568B2 (en) Stream allocation using stream credits
US9135064B2 (en) Fine grained adaptive throttling of background processes
CN109582460B (en) Redis memory data elimination method and device
CN110781244A (en) Method and device for controlling concurrent operation of database
US9996470B2 (en) Workload management in a global recycle queue infrastructure
US11765099B2 (en) Resource allocation using distributed segment processing credits
CN109726007B (en) Container arrangement quota management method and device and container arrangement system
CN113687781A (en) Method, device, equipment and medium for pulling up thermal data
CN110674064B (en) Data transmission method, device, equipment and computer readable storage medium
US11005776B2 (en) Resource allocation using restore credits
CN107872480B (en) Big data cluster data balancing method and device
CN106484314B (en) Cache data control method and device
CN111176848B (en) Cluster task processing method, device, equipment and storage medium
CN110955502A (en) Task scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant