CN112364100A - Server and server cache persistence method - Google Patents

Server and server cache persistence method

Info

Publication number
CN112364100A
Authority
CN
China
Prior art keywords
redis
persistence
server
controller
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011231585.XA
Other languages
Chinese (zh)
Inventor
石开元
张宏波
吴连朋
王宝云
夏章抓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Media Network Technology Co Ltd
Juhaokan Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202011231585.XA
Publication of CN112364100A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/275Synchronous replication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to the field of data synchronization technologies, and in particular, to a server and a method for server cache persistence. It can, to a certain extent, solve the problems of multiple Redis instances on a server persisting at the same time, occupying excessive memory, and wasting CPU computing resources. The server includes a first controller configured to: when a first Redis reaches a persistence condition, detect whether there is a second Redis that is performing persistence; if the second Redis is persisting, add the first Redis to a persistence waiting queue, where the Redis instances in the persistence waiting queue perform persistence one by one in queue order; otherwise, control the first Redis to perform persistence.

Description

Server and server cache persistence method
Technical Field
The present application relates to the field of data synchronization technologies, and in particular, to a server and a method for server cache persistence.
Background
Redis (Remote Dictionary Server) is a high-performance open-source database with very high read-write performance. It can persist the key-value data stored in memory to a hard disk, and a user can configure different persistence modes according to the actual usage scenario; the common Redis persistence modes are RDB and AOF.
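As an illustrative sketch (using the third-party redis-py client; the local host, port, and instance here are assumptions, not part of this application), the snippet below reads back the two settings that distinguish the RDB and AOF modes:

    import redis  # third-party redis-py client

    r = redis.Redis(host="localhost", port=6379)  # assumed local instance

    # RDB: "save <seconds> <changes>" rules trigger point-in-time snapshots.
    print(r.config_get("save"))        # e.g. {'save': '3600 1 300 100 60 10000'}

    # AOF: when "appendonly" is "yes", every write is logged and replayed on restart.
    print(r.config_get("appendonly"))  # e.g. {'appendonly': 'no'}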
In some implementations of server cache persistence, multiple Redis instances are deployed on one server and each supports RDB persistence. Each RDB persistence can consume, at most, as much memory as the Redis instance itself occupies, and takes up one CPU core.
However, during long-running operation, multiple Redis instances can end up persisting at the same point in time, so the server must reserve memory equal to the sum of the maximum memory required by all the Redis instances and set aside as many CPU cores as there are instances, which wastes server resources.
Disclosure of Invention
To solve the problems of multiple Redis instances persisting simultaneously, occupying excessive memory, and wasting the server's CPU computing resources, the present application provides a server and a method for server cache persistence.
The embodiment of the application is realized as follows:
a first aspect of embodiments of the present application provides a server, including: a first controller configured to: when the first Redis reaches a persistence condition, detecting whether a second Redis which is performing persistence exists; if the second Redis is persisting, adding the first Redis into a persistence waiting queue, and sequentially performing persistence by the first Redis in the persistence waiting queue according to the sequence; otherwise, controlling the first Redis to carry out persistence.
A second aspect of the embodiments of the present application provides a method for server cache persistence, the method including: when a first Redis reaches a persistence condition, detecting whether there is a second Redis that is performing persistence; if the second Redis is persisting, adding the first Redis to a persistence waiting queue, where the Redis instances in the persistence waiting queue perform persistence one by one in queue order; otherwise, controlling the first Redis to perform persistence.
The embodiments of the present application have the following beneficial effects: by detecting, when a Redis reaches the persistence condition, whether another Redis is already persisting, the situation where different Redis instances persist at the same time can be avoided; further, adding the Redis to be persisted to a waiting queue lets persistence proceed in an orderly fashion and realizes centralized Redis management on the server, so that the server has at most one Redis persisting at any point in time and only needs to reserve memory equal to the largest Redis instance and a single CPU core.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; it is obvious that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic diagram of a system 100 including a server that can implement cache persistence, according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an exemplary computing device 200 shown in accordance with some embodiments of the present application;
FIG. 3 is a schematic diagram illustrating Redis deployment on a server according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a Redis application architecture of a server according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating implementation of cache persistence by a server according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a process of determining that Redis meets a persistence condition by a server according to an embodiment of the present application;
FIG. 7 is a logic diagram illustrating a server implementing server cache persistence according to an embodiment of the present application.
Detailed Description
Certain exemplary embodiments will now be described to provide an overall understanding of the principles of the structure, function, manufacture, and use of the devices and methods disclosed herein. One or more examples of these embodiments are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand that the devices and methods specifically described herein and illustrated in the accompanying drawings are non-limiting exemplary embodiments and that the scope of the various embodiments of the present invention is defined solely by the claims. Features illustrated or described in connection with one exemplary embodiment may be combined with features of other embodiments. Such modifications and variations are intended to be included within the scope of the present invention.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" or the like throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present invention.
Flowcharts are used herein to illustrate operations performed by systems according to some embodiments of the present application. It should be expressly understood that the operations of the flowcharts need not be performed precisely in the order shown. Rather, the operations may be performed in reverse order or simultaneously. Also, one or more other operations may be added to, or removed from, the flowcharts.
FIG. 1 is a schematic diagram of a system 100 including a server that can implement cache persistence, according to some embodiments of the present application.
The system 100 is a system that includes a server capable of providing cache persistence services. The system 100 may include a server 110, at least one storage device 120, and at least one network 130, and the server 110 may include a controller 112.
In some embodiments, the server 110 may be a single server or a group of servers. The server group can be centralized or distributed (e.g., server 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. For example, server 110 may access data stored in storage device 120 via network 130. Server 110 may also be directly connected to storage device 120 to access the stored data. In some embodiments, the server 110 may be implemented on a cloud platform. The cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, multiple clouds, the like, or any combination of the above. In some embodiments, server 110 may be implemented on a computing device as illustrated in FIG. 2, including one or more components of computing device 200.
In some embodiments, the server 110 may include a controller 112. The controller 112 may process information and/or data related to the service request to perform one or more of the functions described herein. For example, the controller 112 may send data to the storage device 120 over the network 130 to update the data stored there. In some embodiments, the controller 112 may include one or more processors. The controller 112 may include one or more hardware processors, such as a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination of the above examples.
Storage device 120 may store data and/or instructions. In some embodiments, storage device 120 may store data and/or instructions that server 110 may execute or use to implement the example methods described in this application. In some embodiments, storage device 120 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), the like, or any combination of the above. In some embodiments, storage device 120 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, multiple clouds, the like, or any combination of the above.
In some embodiments, the storage device 120 may be connected to the network 130 to enable communication with one or more components of the system 100. One or more components of the system 100 may access data or instructions stored in the storage device 120 via the network 130. In some embodiments, the storage device 120 may be directly connected to, or in communication with, one or more components of the system 100. In some embodiments, the storage device 120 may be part of the server 110.
The network 130 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the system 100 may send information and/or data to other components of the system 100 over the network 130. For example, the server 110 may obtain a request via the network 130. In some embodiments, the network 130 may be a wired network, a wireless network, or a combination thereof. In some embodiments, the network 130 may include one or more network access points. For example, the network 130 may include wired or wireless network access points, such as base stations and/or Internet exchange points 130-1, 130-2, and so on. Through an access point, one or more components of the system 100 may connect to the network 130 to exchange data and/or information.
FIG. 2 is a schematic diagram of an exemplary computing device 200 according to some embodiments of the present application. Server 110 and storage device 120 may be implemented on computing device 200. For example, the controller 112 may be implemented on the computing device 200 and configured to implement the functionality disclosed herein.
Computing device 200 may include any components used to implement the systems described herein. For example, the controller 112 may be implemented on the computing device 200 by hardware, software, firmware, or a combination thereof. For convenience, only one computer is depicted in the figures, but the computing functions described herein for the system 100 may be implemented in a distributed manner by a set of similar platforms to spread the processing load of the system.
Computing device 200 may include a communication port 250 for connecting to a network to enable data communication. Computing device 200 may include a processor 220, which may execute program instructions, in the form of one or more processors. An exemplary computer platform may include an internal bus 210 and various forms of program memory and data storage, including, for example, a hard disk 270, read-only memory (ROM) 230, and random access memory (RAM) 240, for storing various data files processed and/or transmitted by the computer. An exemplary computing device may include program instructions, stored in read-only memory 230, random access memory 240, and/or another type of non-transitory storage medium, that are executed by processor 220. The methods and/or processes of the present application may be embodied in the form of program instructions. Computing device 200 also includes input/output components 260 for supporting input/output between the computer and other components. Computing device 200 may also receive the programs and data of the present disclosure via network communication.
For ease of understanding, only one processor is exemplarily depicted in fig. 2. However, it should be noted that the computing device 200 in the present application may include multiple processors, and thus the operations and/or methods described in the present application that are implemented by one processor may also be implemented by multiple processors, collectively or independently. For example, if in the present application a processor of computing device 200 performs steps 1 and 2, it should be understood that steps 1 and 2 may also be performed by two different processors of computing device 200, either collectively or independently.
Fig. 3 shows a schematic diagram of Redis deployment on a server according to an embodiment of the present application.
In some embodiments, the server provided by the present application runs a Redis persistence centralized-management agent, which is responsible for managing and configuring the persistence of all Redis instances deployed on the server and for monitoring their persistence processes, so that at most one Redis instance on the server is persisting at any point in time. The Redis deployment structure of the server is as shown in the figure.
For example, server 1 deploys two Redis instances and a Redis persistence centralized-management agent, and server 2 deploys three Redis instances and such an agent, where server 1 and server 2 may be located in the same machine room or in different machine rooms.
In some embodiments, several Redis instances, for example three instances whose memory sizes are 5 GB, 10 GB, and 20 GB respectively, are deployed on the server. Without centralized management, the server's free memory must cover the sum of the maximum memory of all the instances, that is, 35 GB, plus three CPU cores of computing resources.
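As a back-of-the-envelope sketch of the figures above (the instance sizes are taken from this example; everything else is plain arithmetic), compare the reservation with and without serialized persistence:

    # Reservation without centralized management: every instance may fork at once.
    instance_gb = [5, 10, 20]              # memory of the three deployed instances
    naive_mem_gb = sum(instance_gb)        # 35 GB must stay free
    naive_cpu_cores = len(instance_gb)     # 3 cores may be busy simultaneously

    # Reservation when the agent serializes persistence: at most one fork at a time.
    managed_mem_gb = max(instance_gb)      # 20 GB covers the worst single fork
    managed_cpu_cores = 1                  # one BGSAVE child at any point in time

    print(naive_mem_gb, naive_cpu_cores)      # 35 3
    print(managed_mem_gb, managed_cpu_cores)  # 20 1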
Fig. 4 shows a schematic diagram of a server Redis application architecture according to an embodiment of the present application.
In some embodiments, machine room one and machine room two are two data centers that can bear services simultaneously. In some embodiments, machine room one can bear 60%-70% of the services and machine room two can bear 30%-40%. Redis data is synchronized one-way from machine room one to machine room two, so that when machine room one goes down, its services can be switched to machine room two in time, avoiding an interruption of the services that were running in machine room one.
In some embodiments, when machine room one goes down, a large burden falls on machine room two. To reduce the chance of machine room one going down, machine room one is generally provided with more than one Redis node. For example, machine room one is provided with two Redis nodes, a first node and a second node, where R denotes reading data and W denotes writing data. The two Redis nodes work in master-slave mode: one serves as the master node and the other as the slave node, and the master node is set as the source Redis. The source Redis supports master-slave switching, that is, the slave node can be promoted to master, thereby becoming the new source Redis.
For another example, the first node may be the Redis-M, i.e., the master node, and the second node may be a slave node of the first node, i.e., the Redis-S; alternatively, the second node may act as the master node and the first node as its slave. The services of machine room one read from and write to the Redis nodes of machine room one. When data is written to the Redis-M of machine room one, the Redis-M sends the written data to the Redis-S so that the Redis-S stays synchronized with the Redis-M.
In some embodiments, the first node may be set as the source Redis by default, bearing the services. When the first node goes down, a master-slave switch is performed so that the second node becomes the source Redis and bears the services, and when the first node comes back online it serves as the slave node; when the second node goes down, the master-slave switch is performed again so that the first node becomes the source Redis, and the second node serves as the slave node after it comes back online. Machine room two may also be provided with two Redis nodes, one serving as the Redis-M of machine room two and the other as its Redis-S. The services of machine room two may write to the Redis of machine room one, or machine room two may itself perform no write operations and only read its own Redis; in that case a synchronization unit is required to synchronize the Redis data of machine room one to the Redis nodes of machine room two in real time.
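The master/slave relationship described above can be observed at runtime from the INFO replication section; the sketch below (a minimal sketch with the redis-py client; the node address is an assumption for illustration) distinguishes a Redis-M from a Redis-S:

    import redis

    node = redis.Redis(host="localhost", port=6379)  # assumed first-node address
    repl = node.info("replication")

    if repl["role"] == "master":
        print("Redis-M, connected replicas:", repl["connected_slaves"])
    else:
        # On a replica, master_link_status shows whether sync with Redis-M is up.
        print("Redis-S, link to master:", repl["master_link_status"])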
Fig. 5 is a flowchart illustrating a server implementing cache persistence according to an embodiment of the present application.
The application provides a method for server cache persistence, which specifically comprises the following steps.
In step 501, when the first Redis reaches a persistence condition, it is detected whether there is a second Redis that is undergoing persistence.
The present application also provides a server that can implement the server cache persistence method provided herein. The server includes a first controller configured to detect, when the first Redis reaches a persistence condition, whether there is a second Redis that is performing persistence.
In some embodiments, the first controller's detection of whether there is a second Redis performing persistence when the first Redis reaches the persistence condition may be implemented as: the first controller detects whether there is a second Redis performing persistence when the first Redis is located at the head position of the persistence waiting queue.
For example, the first controller checks whether any Redis is waiting in the persistence waiting queue. If the first Redis is in the queue and at its head position, the first controller checks whether another Redis on the server, namely a second Redis, is performing persistence at that moment. If a second Redis is persisting, the first controller does not process the first Redis; if no second Redis is persisting, the first controller controls the first Redis to persist, that is, it persists the first Redis at the head of the persistence waiting queue and removes it from the queue.
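A minimal sketch of this head-of-queue check follows (the queue and the is_persisting() and trigger_persistence() callables are hypothetical names standing in for the first controller's internals):

    from collections import deque

    wait_queue = deque()  # persistence waiting queue of Redis instances

    def process_queue_head(is_persisting, trigger_persistence):
        """Persist the head of the queue only when no other Redis is persisting."""
        if not wait_queue:
            return
        first = wait_queue[0]           # first Redis, at the head position
        if is_persisting():             # a second Redis is still persisting
            return                      # leave the first Redis untouched for now
        trigger_persistence(first)      # control the first Redis to persist
        wait_queue.popleft()            # remove it from the waiting queue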
In some embodiments, the detection may instead be implemented as: when the first controller determines that a specified number of key values of the first Redis have changed within a specified time, it detects whether there is a second Redis that is performing persistence.
The Redis persistence policy is configured such that if at least the specified number of key values change within the specified length of time, the first Redis is due to persist. For example, if the server needs to perform persistence once an hour, the first controller may configure the policy so that when at least one key value of the first Redis changes within 3600 seconds after the end of the previous persistence, the first controller controls the first Redis to perform persistence.
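Under the once-an-hour example, such a policy could be held agent-side as in the sketch below (a minimal sketch with redis-py; the PersistencePolicy holder is an illustrative name, and disabling Redis's built-in save rules so that only the agent schedules snapshots is an assumption consistent with the centralized management described later):

    import redis
    from dataclasses import dataclass

    @dataclass
    class PersistencePolicy:      # illustrative agent-side policy holder
        seconds: int              # the specified length of time
        min_changes: int          # the specified number of key-value changes

    hourly = PersistencePolicy(seconds=3600, min_changes=1)

    r = redis.Redis(port=6379)    # assumed first Redis on its default port
    r.config_set("save", "")      # assumption: turn off Redis's own automatic
                                  # RDB saves so the agent alone triggers BGSAVE
    policies = {6379: hourly}     # the agent keeps one policy per instance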
Fig. 6 shows a flowchart illustrating a process when the server determines that Redis reaches the persistent condition according to an embodiment of the present application.
In step 601, the Redis instances deployed on the server and their persistence policies are obtained in real time, and memory space equal to the largest Redis instance plus one CPU core of computing resources is reserved.
In some embodiments, the first controller's determination that a specified number of key values of the first Redis have changed within a specified time includes: the first controller obtains, in real time, the Redis instances deployed on the server and their persistence policies, and reserves memory space equal to the largest Redis instance and one CPU core of computing resources.
For example, the first controller may obtain, in real time, the Redis instances deployed on the server and their configured persistence policies; when a new Redis is deployed and started on the server, the agent controlled by the first controller automatically detects the newly deployed Redis and its persistence policy right away.
In some embodiments, the first controller obtaining the deployed Redis instances and their persistence policies in real time specifically includes: determining whether the first Redis has a configured persistence policy, and if not, configuring a default persistence policy for the first Redis.
For example, a user may configure a persistence policy for a newly deployed first Redis through the server's agent; if, for whatever reason, the user has not done so, the first controller configures a default persistence policy for the first Redis. The agent does not need to be restarted at any point while the default policy is configured: it automatically discovers the newly deployed Redis and configures the default persistence policy for it.
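A sketch of that default-policy fallback follows (the port list stands in for whatever discovery mechanism the agent uses, and DEFAULT_POLICY is an illustrative value, not one prescribed by this application):

    import redis

    DEFAULT_POLICY = {"seconds": 3600, "min_changes": 1}   # assumed default
    policies = {}                                          # port -> policy

    def register(port, user_policy=None):
        """Adopt a newly discovered Redis without restarting the agent."""
        redis.Redis(port=port).ping()   # raises if the new instance is unreachable
        # If the user configured nothing through the agent, fall back to default.
        policies[port] = user_policy or dict(DEFAULT_POLICY)

    register(6379)                                          # newly deployed Redis
    register(6380, {"seconds": 600, "min_changes": 100})    # user-configured Redis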
In step 602, whether a specified number of key values of the first Redis have changed within a specified time is determined from the first Redis's last persistence time point, its key-value change count, and its persistence policy.
For example, the first controller obtains, in real time, the last persistence time point of the first Redis, that is, the point in time at which persistence was last performed, together with the number of key values changed between that point and the current time, to determine whether the first Redis reaches the persistence condition.
In some embodiments, when the first Redis does not reach the persistence condition, the first controller determines, from a third Redis's last persistence time point, key-value change count, and persistence policy, whether a specified number of key values of the third Redis have changed within a specified time, so as to decide whether the third Redis reaches the persistence condition.
In other words, if the first Redis has not reached the persistence condition because fewer than the specified number of key values changed within the specified time, the first controller moves on to another Redis deployed on the server, that is, the third Redis described in this application.
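The check in step 602 can be sketched from two values Redis already exposes, LASTSAVE and the rdb_changes_since_last_save counter of INFO persistence (the loop over further ports mirrors the third-Redis fallback above; the ports and policy values are assumptions):

    import time
    import redis

    def reaches_condition(r, seconds, min_changes):
        """True if at least min_changes keys changed and the window has elapsed."""
        last_save = r.lastsave().timestamp()   # last persistence time point
        changed = r.info("persistence")["rdb_changes_since_last_save"]
        return time.time() - last_save >= seconds and changed >= min_changes

    # First Redis not due yet? Keep scanning the other deployed instances.
    for port in (6379, 6380, 6381):            # assumed deployment
        r = redis.Redis(port=port)
        if reaches_condition(r, seconds=3600, min_changes=1):
            print(f"Redis on port {port} reaches the persistence condition")
            break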
Continuing with fig. 5, in step 502, if there is a second Redis that is persisting, the first Redis is added to the persistence waiting queue, where the Redis instances in the persistence waiting queue perform persistence one by one in queue order; otherwise, the first Redis is controlled to perform persistence.
It should be noted that a Redis persistence centralized-management agent is deployed on each server. Redis no longer configures its persistence policy itself; persistence policies are configured by the agent under the control of the first controller.
For example, when the first Redis reaches the persistence condition while another Redis is persisting, the first Redis is added to the persistence waiting queue, and only after the second Redis finishes persisting is the first Redis controlled to persist. The server therefore has at most one Redis persisting at any point in time, which saves CPU computing resources.
In some embodiments, Redis implements persistence in RDB form, which maximizes Redis performance.
For example, the parent process forks a child process to perform the persistence while the parent continues normal business processing; the RDB file is compact, smaller than an AOF file, and faster than AOF to load for large data sets, so RDB is usually chosen for persistence.
For another example, Redis calls fork() to create a child process, so a parent process and a child process exist at the same time; the child process writes the data set to a temporary RDB file; when the child process finishes writing the new RDB file, Redis replaces the original RDB file with the new one and deletes the old file.
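That fork-write-rename sequence is what the BGSAVE command performs; a sketch of triggering it and waiting for the child to finish (the instance address and the one-second polling interval are arbitrary assumptions):

    import time
    import redis

    r = redis.Redis(port=6379)   # assumed instance

    r.bgsave()                   # parent forks a child that writes a temp RDB file
    while r.info("persistence")["rdb_bgsave_in_progress"]:
        time.sleep(1)            # child still writing; parent keeps serving traffic

    # Once the flag drops, the child has replaced the old RDB file with the new one.
    print("RDB snapshot finished at", r.lastsave())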
In some embodiments, when Redis performs RDB persistence it forks a new process that occupies one CPU core, and the operating system's copy-on-write strategy is used: if the parent process's memory is not modified, the child and parent share the same pages; if a page of the parent process is modified, the pre-modification content of that page is copied for the child process.
When the system is extremely busy, if all of the parent's pages are modified while the child is writing the RDB file, twice the memory is needed. Therefore, memory equal to the Redis instance's own size must be reserved; for example, if a Redis instance's data occupies 5 GB of memory, the server must keep 5 GB of memory free.
FIG. 7 is a logic diagram illustrating a server implementing server cache persistence according to an embodiment of the present application.
In some embodiments, the first controller obtains the head element of the persistence waiting queue in real time;
if the first controller can fetch data from the persistence waiting queue, that is, a first Redis is waiting at the head position of the queue, the first controller checks whether another Redis on the server, namely a second Redis, is performing persistence at that moment;
if the first controller cannot fetch data from the persistence waiting queue, that is, no Redis is waiting there, the first controller obtains, in real time, all the Redis instances deployed on the server and their configured persistence policies;
the first controller also obtains each Redis's state in real time, for example the last persistence time point of the first Redis, that is, the point at which persistence was last performed, and the number of key values changed between that point and the current time;
the first controller determines whether the first Redis has a configured persistence policy; if not, it configures a default persistence policy for the first Redis; if a policy is configured, it determines whether the first Redis reaches the persistence condition;
if the first Redis does not reach the persistence condition, the first controller determines whether another Redis, i.e., a third Redis, reaches the persistence condition;
if the first Redis reaches the persistence condition, the first controller checks whether another Redis on the server, namely a second Redis, is performing persistence at that moment;
if a second Redis is persisting at that moment and the first Redis is not yet in the persistence waiting queue, the first controller adds the first Redis to the queue, where the Redis instances in the queue perform persistence one by one in queue order;
if no second Redis is persisting at that moment, the first controller controls the first Redis to persist;
and if the first Redis was previously in the persistence waiting queue, it is removed from the queue.
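Putting the branches of FIG. 7 together, the agent's main loop might look like the sketch below (everything here, from the port list to the helper names, is an illustrative reconstruction of the described logic under assumed defaults, not code from this application):

    import time
    from collections import deque
    import redis

    PORTS = (6379, 6380, 6381)                  # assumed deployed Redis instances
    POLICY = {"seconds": 3600, "min_changes": 1}
    wait_queue = deque()                        # persistence waiting queue (ports)

    def persisting_somewhere():
        """True if any deployed Redis (a 'second Redis') is persisting now."""
        return any(
            redis.Redis(port=p).info("persistence")["rdb_bgsave_in_progress"]
            for p in PORTS
        )

    def reaches_condition(r):
        last = r.lastsave().timestamp()         # last persistence time point
        changed = r.info("persistence")["rdb_changes_since_last_save"]
        return (time.time() - last >= POLICY["seconds"]
                and changed >= POLICY["min_changes"])

    while True:
        if wait_queue:                          # data at the head of the queue
            if not persisting_somewhere():      # no second Redis persisting
                redis.Redis(port=wait_queue.popleft()).bgsave()
        else:                                   # queue empty: scan all instances
            for port in PORTS:
                if not reaches_condition(redis.Redis(port=port)):
                    continue                    # try the next (third) Redis
                if persisting_somewhere():
                    if port not in wait_queue:  # enqueue once, persist in order
                        wait_queue.append(port)
                else:
                    redis.Redis(port=port).bgsave()
                break
        time.sleep(1)                           # poll interval is arbitrary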
By detecting, when a Redis reaches the persistence condition, whether another Redis is already persisting, the method prevents different Redis instances from persisting at the same time; further, adding the Redis to be persisted to the waiting queue lets persistence proceed in an orderly fashion and realizes centralized Redis management on the server, so that the server has at most one Redis persisting at any point in time and only needs to reserve memory equal to the largest Redis instance and a single CPU core.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data blocks," modules, "" engines, "" units, "" components, "or" systems. Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in the claims. Indeed, an embodiment may be characterized as having less than all of the features of a single embodiment disclosed above.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for any application history documents that are inconsistent with or conflict with the contents of this application, and any documents that would limit the broadest scope of the claims of this application (as currently or later appended). It is noted that if the descriptions, definitions, and/or use of terms in the material accompanying this application are inconsistent with or contrary to those stated in this application, the descriptions, definitions, and/or use of terms in this application shall control.

Claims (10)

1. A server, comprising:
a first controller configured to:
when a first Redis reaches a persistence condition, detecting whether there is a second Redis that is performing persistence;
if the second Redis is persisting, adding the first Redis to a persistence waiting queue, where the Redis instances in the persistence waiting queue perform persistence one by one in queue order; otherwise, controlling the first Redis to perform persistence.
2. The server according to claim 1, wherein the first controller detecting, when the first Redis reaches a persistence condition, whether there is a second Redis that is performing persistence specifically comprises the first controller:
detecting whether there is a second Redis that is performing persistence when the first Redis is located at the head position of the persistence waiting queue; or
detecting whether there is a second Redis that is performing persistence when it is determined that a specified number of key values of the first Redis have changed within a specified time.
3. The server according to claim 2, wherein the first controller determining that a specified number of key values of the first Redis have changed within a specified time specifically comprises the first controller:
obtaining, in real time, the Redis instances deployed on the server and their persistence policies, and reserving memory space equal to the largest Redis instance and one CPU (central processing unit) core of computing resources;
determining, from the last persistence time point of the first Redis, its key-value change count, and its persistence policy, whether a specified number of key values of the first Redis have changed within the specified time.
4. The server of claim 3, wherein the first controller is further configured to:
when the first Redis does not reach the persistence condition, determine, from the last persistence time point of a third Redis, its key-value change count, and its persistence policy, whether a specified number of key values of the third Redis have changed within a specified time, so as to decide whether the third Redis reaches the persistence condition.
5. The server according to claim 3, wherein the first controller obtaining, in real time, the Redis instances deployed on the server and their persistence policies specifically comprises the first controller:
determining whether the first Redis has a configured persistence policy;
and if not, configuring a default persistence policy for the first Redis.
6. A method of server cache persistence, the method comprising:
when a first Redis reaches a persistence condition, detecting whether there is a second Redis that is performing persistence;
if the second Redis is persisting, adding the first Redis to a persistence waiting queue, where the Redis instances in the persistence waiting queue perform persistence one by one in queue order; otherwise, controlling the first Redis to perform persistence.
7. The method for server cache persistence according to claim 6, wherein detecting, when the first Redis reaches a persistence condition, whether there is a second Redis that is performing persistence specifically comprises:
detecting whether there is a second Redis that is performing persistence when the first Redis is located at the head position of the persistence waiting queue; or
detecting whether there is a second Redis that is performing persistence when it is determined that a specified number of key values of the first Redis have changed within a specified time.
8. The method for server cache persistence according to claim 7, wherein determining that a specified number of key values of the first Redis have changed within a specified time specifically comprises:
obtaining, in real time, the deployed Redis instances and their persistence policies, and reserving memory space equal to the largest Redis instance and one CPU (central processing unit) core of computing resources;
determining, from the last persistence time point of the first Redis, its key-value change count, and its persistence policy, whether a specified number of key values of the first Redis have changed within the specified time.
9. The method of server cache persistence according to claim 8, further comprising:
when the first Redis does not reach the persistence condition, determining, from the last persistence time point of a third Redis, its key-value change count, and its persistence policy, whether a specified number of key values of the third Redis have changed within a specified time, so as to decide whether the third Redis reaches the persistence condition.
10. The method for server cache persistence according to claim 8, wherein obtaining, in real time, the Redis instances deployed on the server and their persistence policies specifically comprises:
determining whether the first Redis has a configured persistence policy;
and if not, configuring a default persistence policy for the first Redis.
CN202011231585.XA 2020-11-06 2020-11-06 Server and server cache persistence method Pending CN112364100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011231585.XA CN112364100A (en) 2020-11-06 2020-11-06 Server and server cache persistence method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011231585.XA CN112364100A (en) 2020-11-06 2020-11-06 Server and server cache persistence method

Publications (1)

Publication Number Publication Date
CN112364100A 2021-02-12

Family

ID=74508971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011231585.XA Pending CN112364100A (en) 2020-11-06 2020-11-06 Server and server cache persistence method

Country Status (1)

Country Link
CN (1) CN112364100A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428072A (en) * 2012-05-23 2013-12-04 北京大学 Persistent message publishing method and system
CN106502589A (en) * 2016-10-21 2017-03-15 普元信息技术股份有限公司 The loading of caching or the system and method for persistence is realized based on cloud computing
CN109358805A (en) * 2018-09-03 2019-02-19 中新网络信息安全股份有限公司 A kind of data cache method
CN110365752A (en) * 2019-06-27 2019-10-22 北京大米科技有限公司 Processing method, device, electronic equipment and the storage medium of business datum
CN111625332A (en) * 2020-05-21 2020-09-04 杭州安恒信息技术股份有限公司 Java thread pool rejection policy execution method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210212)