CN109918359B - Database service persistence method and system based on swarm - Google Patents


Info

Publication number
CN109918359B
Authority
CN
China
Prior art keywords
database, container, service, block, name
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910049793.9A
Other languages
Chinese (zh)
Other versions
CN109918359A (en)
Inventor
李东 (Li Dong)
洪少佳 (Hong Shaojia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910049793.9A priority Critical patent/CN109918359B/en
Publication of CN109918359A publication Critical patent/CN109918359A/en
Application granted granted Critical
Publication of CN109918359B publication Critical patent/CN109918359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a swarm-based database service persistence method covering both a single database and a database cluster. For a single database the method comprises: 1. building the requested database image; 2. parsing the database-service creation parameters; 3. creating a ceph block device; 4. creating the database containers; 5. creating an ingress access container. For a database cluster the method comprises: 1. parsing the database-service creation parameters; 2. creating the ceph block devices; 3. registering the portal access service; 4. creating the access container; 5. registering the database service; 6. creating the database containers. The invention also discloses a swarm-based database service persistence system comprising an analysis module, a control module, a network module, a registration module, and further modules. The invention provides a general persistence method for database services that improves the availability of the database service and the security of its data.

Description

Database service persistence method and system based on swarm
Technical Field
The invention relates to the technical field of containers, and in particular to a swarm-based database service persistence method and system.
Background
Docker is currently one of the most popular open-source projects in the container field; a Docker container lets developers package an application together with its dependencies into a portable unit. Swarm is one of the most popular container cluster management tools for docker and is docker's native orchestration tool: it abstracts multiple docker hosts into a single whole and manages the docker resources on those hosts uniformly through one entry point.
Ceph is one of the most mainstream open-source storage projects: a reliable distributed storage system with automatic rebalancing and automatic recovery. It addresses data with the CRUSH algorithm, which is more efficient than the addressing schemes of other storage systems, and it offers rich storage features, providing three storage interfaces (object storage, block device storage and file storage). With multiple replicas, no central node, no single point of failure and good scalability, it is gradually becoming the replacement for traditional storage in cloud computing environments.
If the data produced while a container service runs is not persisted, it is lost and cannot be recovered when the container instance stops, is deleted or exits abnormally. Moreover, the swarm container cluster management platform has no general database service persistence method or system covering both single databases and database clusters.
Disclosure of Invention
The invention aims to provide a database service persistence method based on swarm (docker's native cluster management tool), overcoming the defect that existing swarm container clusters lack a general database service persistence method covering both a single database and a database cluster, while also improving recovery from database service downtime, shortening the downtime recovery period, and increasing the availability of database services on a swarm cluster. The invention further discloses a swarm-based database service persistence system.
A swarm-based database service persistence method comprises a persistence method for a single database and one for a database cluster, wherein the persistence method for the single database comprises the following steps:
S1, constructing a database image based on keepalived, high-availability software built on the virtual router redundancy protocol;
S2, creating and parsing the single-database parameters, the created parameters comprising: a stateful-service-set flag, network name, network driver, database name, database service port, number of database replicas, database start command, database stop command, block device size, block read/write permission, block device name, vip address, portal access service name and portal access service port;
S3, creating a distributed storage system (ceph) block device according to the parsed parameters;
S4, creating the database containers according to the parsed parameters and mounting the distributed storage system block device;
S5, creating a proxy server (nginx) portal according to the parsed parameters to access the database containers.
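Steps S3 and S4 reduce to a ceph rbd call plus a docker container creation that mounts the block device through the rexray volume driver. A minimal sketch of how the parsed parameters could be turned into the corresponding CLI command lines; the parameter names, mount path and image name are illustrative, not taken from the patent:

```python
# Sketch: build the CLI commands implied by steps S3-S4 of the
# single-database flow. Parameter names and values are illustrative.

def build_commands(params):
    """Turn parsed service-creation parameters into rbd/docker commands."""
    # S3: create one ceph block device (rbd image) in the given pool.
    rbd_cmd = (
        f"rbd create {params['pool']}/{params['block_name']} "
        f"--size {params['block_size_mb']}"
    )
    # S4: create the database container, mounting the block device
    # through the rexray volume driver.
    docker_cmd = (
        f"docker run -d --name {params['db_name']} "
        f"--network {params['network_name']} "
        f"-p {params['db_port']}:{params['db_port']} "
        f"--volume-driver rexray "
        f"-v {params['block_name']}:/var/lib/data "
        f"{params['image']}"
    )
    return rbd_cmd, docker_cmd

rbd_cmd, docker_cmd = build_commands({
    "pool": "rbd", "block_name": "mysql-blk", "block_size_mb": 1024,
    "db_name": "mysql-primary", "network_name": "db-net",
    "db_port": 3306, "image": "mysql-keepalived:latest",
})
```

The sketch only assembles command strings; a real control module would invoke the ceph and docker APIs directly rather than shelling out.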
The persistence method for the database cluster comprises the following steps:
L1, creating and parsing the database cluster parameters, the created parameters comprising: a stateful-service-set flag, network driver, network name, database service port, number of database replicas, database start command, database stop command, block device size, block read/write permission, block device name, vip address, portal access service name and portal access service port;
L2, creating the distributed storage system block devices according to the parsed parameters;
L3, registering the database cluster portal access service;
L4, creating a proxy server portal to access the database cluster containers according to the parsed parameters;
L5, registering the database service;
L6, creating the database containers according to the parsed parameters and mounting the distributed storage system block devices.
Preferably, the keepalived image pre-built in step S1 is used when step S4 creates the database containers, which automatically generate a keepalived.conf configuration file plus start-database and stop-database scripts from the request parameters created in step S2; those parameters include the database service name, database service port, database start command, database stop command, network name, network driver and vip address.
Preferably, step S3 calls the ceph api (application programming interface) to create a distributed storage system block device according to the parameters parsed in step S2; the creation parameters include the block size, block name, the pool the block belongs to and the block read/write permission, and only one block device is created for the database container mount of step S4.
Preferably, the database containers created in step S4 automatically generate the keepalived.conf configuration file and the start-database and stop-database scripts from the parameters created in step S2; the parameters used in step S4 include the database service name, database service port, database start command, database stop command, network name, network driver and vip address, and the containers in step S4 mount the distributed storage system block device using rexray (a data volume plugin).
Preferably, creating the database containers according to the parsed parameters and mounting the ceph block device in step S4 comprises the following steps:
S4-1, creating a primary database container that mounts the ceph block device;
S4-2, creating a standby database container that mounts the same ceph block device.
The keepalived services of the primary and standby database containers check both containers' weights: the container with the higher weight starts the database service, the container with the lower weight stops it, so only one of the two runs the database service at any time.
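The weight rule can be modeled in a few lines: whichever container currently holds the higher keepalived weight runs the database, so exactly one instance serves at a time. A toy sketch with illustrative names and weights:

```python
# Toy model of the keepalived weight rule: at any moment, only the
# container with the higher weight runs the database service.

def elect_active(weights):
    """Return the name of the container that should run the database."""
    return max(weights, key=weights.get)

weights = {"db-primary": 100, "db-standby": 90}
assert elect_active(weights) == "db-primary"

# Primary goes down: keepalived lowers its weight, so the standby's
# weight now wins and its database service is started instead.
weights["db-primary"] = 0
assert elect_active(weights) == "db-standby"
```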
Preferably, step S5 creates a proxy server portal access container according to the parsed parameters; it automatically generates an nginx.conf configuration file and provides proxy forwarding for the primary and standby databases. The portal access container forwards all requests to the primary database; only when the primary database is down does it forward requests to the standby database.
Preferably, step L2 calls the ceph api (application programming interface) to create the distributed storage system block devices according to the parsed parameters; the creation parameters include the block size, block name, the pool the blocks belong to, the number of blocks and the block read/write permission. The number of block devices created equals the number of database containers created in step L6; each ceph block device is mounted into the database container of the same name, and both database containers and ceph block devices are named in the format: <database-name>-{1,2,3,...}.
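This one-block-per-container mapping rests entirely on the shared naming convention, so generating matched container and block device names is straightforward (the helper name is illustrative):

```python
def matched_names(db_name, replicas):
    """Containers and ceph block devices share the name <db>-<i>,
    so container i mounts the block device with its own name."""
    names = [f"{db_name}-{i}" for i in range(1, replicas + 1)]
    # Identical lists: the i-th container mounts the i-th block device.
    return names, list(names)

containers, blocks = matched_names("mysql", 3)
```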
Preferably, the step of registering portal access service of step L3 includes the steps of:
l3-1, register the portal access service name with the key-value pair data storage system (etcd);
l3-2, register the portal access service port with etcd.
Preferably, step L4 creates a proxy server portal access container according to the parsed parameters, comprising a confd (unified configuration management tool) service and a proxy server.
The confd service periodically fetches from etcd the information including the portal service name, portal service port, database names and database ports, and checks whether the stored key-value pairs have changed; if so, it regenerates a new nginx.conf (the configuration the proxy server starts with) from the proxy server's configuration template and then notifies the proxy server to reload the new nginx.conf file.
The proxy server provides load balancing and portal access for the database cluster; if a proxy forward fails, it retries that backend only after 10 s.
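The confd behaviour just described, polling the key-value store, regenerating nginx.conf only on change, then triggering a reload, can be sketched as a small reconciliation loop. Here the etcd store is mocked with a plain dict and all names are illustrative:

```python
# Sketch of the confd-style reconciliation: regenerate the proxy
# configuration only when the registered backends actually change.

def render_conf(backends):
    """Render a minimal upstream block from backend addresses."""
    lines = [f"    server {host};" for host in sorted(backends)]
    return "upstream db {\n" + "\n".join(lines) + "\n}"

class Watcher:
    def __init__(self):
        self.last = None    # last key-value snapshot seen
        self.reloads = 0    # how many times nginx was told to reload

    def poll(self, store):
        snapshot = dict(store)
        if snapshot != self.last:        # a key-value pair changed
            self.conf = render_conf(snapshot.values())
            self.reloads += 1            # notify nginx to reload
            self.last = snapshot

w = Watcher()
store = {"mysql-1": "10.0.0.1:3306"}
w.poll(store)                 # first poll: generate config, reload
w.poll(store)                 # unchanged: no reload
store["mysql-2"] = "10.0.0.2:3306"
w.poll(store)                 # new registration: regenerate, reload
```

In the real system the poll runs on a timer against etcd and the reload is an nginx signal; the dict stands in for both.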
Preferably, the step L5 registering the database service comprises the steps of:
l5-1, register database container name with etcd;
l5-2, register database container port with etcd.
As long as a registered database container shares the name path of the portal access service, it is added to the proxy server's load-balancing list; so to scale out the database cluster it suffices to register the new containers under the same portal access service name.
Step L6 creates the database containers according to the parsed parameters and mounts the distributed storage system block devices; each block device is mounted into the database container of the same name, both named in the format <database-name>-{1,2,3,...}, a database cluster therefore owning a corresponding number of distributed storage system block devices.
In the above system for swarm-based database service persistence, the system includes:
a control module: the system is used for processing user requests and is responsible for communicating with other modules, and the docker api is called to create corresponding database containers according to the user requests.
A registration module: the method is used for registration and deletion in the database cluster, and the registration parameters comprise an entrance access service name, an entrance access service port, a database service name and a database service port.
A network module: used to create the container (docker) network and obtain the network id.
An analysis module: used to parse the parameters of the database service creation file submitted by the user.
A resource management module: the distributed storage system block device pool is used for providing block devices for container mounting, the module is used for managing the pool, the pool corresponding to the block comprises the allocated block devices and unallocated storage space, and the management mode comprises creation, deletion and query operations.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention realizes the database service persistence method and system based on the sweep, can ensure that the data of the database service is persisted into the ceph block device, and improves the data security of the database; simultaneously, two types of a single database and a database cluster are respectively provided, so that different requirements of users are met; meanwhile, the downtime recovery time of a single database service is shortened, the availability and the recovery capability of the database are improved, the system can facilitate a user to create the database service, and the operation and maintenance difficulty of operation and maintenance personnel is reduced.
Drawings
Fig. 1 is a block diagram of a swarm-based database service persistence system architecture according to the present invention.
Fig. 2 is a general flowchart of the swarm-based database service persistence system of the present invention.
FIG. 3 is a flow chart of a single database in swarm-based database service persistence in an embodiment of the present invention.
Fig. 4 is a flowchart of a database cluster in swarm-based database service persistence in the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples and accompanying drawings, which are provided for illustration only and are not intended to limit the invention.
Example (b):
as shown in fig. 1, a swarm-based database service persistence system is shown, which mainly includes: the system comprises a control module, an analysis module, a registration module, a network module and a resource management module.
In this embodiment the control module receives a user's database creation request, forwards it to the analysis module for parameter parsing, then passes the network driver, network name and related parameters to the network module to create the network, passes the block device size, block read/write permission, block device name, block device pool name and related parameters to the resource management module to request the block device, and finally calls the docker api to create the containers and returns the result to the user.
In this embodiment the analysis module handles the request passed on by the control module. It decides from the value of the stateful-service-set flag whether to create a single database or a database cluster: if the flag is true, a database cluster is created; if it is false, a single database is created. The parsed parameters are then returned to the control module. For a single database the parameters comprise the network driver, network name, database service port, database start command, database stop command, block device size, block read/write permission, block device name, vip address, portal access service name and portal access service port; for a database cluster they comprise the network driver, network name, database service port, number of database replicas, block device size, block read/write permission, block device name, portal access service name and portal access service port.
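The flag-based dispatch performed by the analysis module can be sketched as follows; the field names and the required-parameter subsets are illustrative, not the patent's exact schema:

```python
def parse_request(request):
    """Decide single database vs database cluster from the
    stateful-service-set flag and check the parameters each mode
    requires (an illustrative subset of the patent's parameter list)."""
    common = ["network_driver", "network_name", "db_port",
              "block_size", "block_name",
              "entry_service_name", "entry_service_port"]
    if request.get("stateful_set"):   # true: create a database cluster
        required = common + ["db_replicas"]
        mode = "cluster"
    else:                             # false: create a single database
        required = common + ["vip_address", "db_start_cmd", "db_stop_cmd"]
        mode = "single"
    missing = [k for k in required if k not in request]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return mode

single_req = {
    "stateful_set": False, "network_driver": "overlay",
    "network_name": "db-net", "db_port": 3306, "block_size": 1024,
    "block_name": "mysql-blk", "entry_service_name": "mysql-entry",
    "entry_service_port": 3306, "vip_address": "10.0.0.100",
    "db_start_cmd": "mysqld start", "db_stop_cmd": "mysqld stop",
}
mode = parse_request(single_req)
```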
In this embodiment the registration module handles requests from the control module for creating a database cluster (no registration information is needed when creating a single database service); it provides registration and deletion of the registration information, which includes the portal access service name and port and the database service name and port.
The network module is responsible for a request from the control module for creating a network service in this embodiment, and the module calls a docker api (application programming interface of a container) to check whether a network exists, and if the network exists, the docker api directly returns an overlay network id to the control module, and if the network does not exist, the docker api is called to create an overlay network (overlay network), and then the overlay network id is returned to the control module.
The resource management module is responsible for a request from the control module to create a block device in this embodiment, and the module calls the api (application programming interface) of ceph to create a specified block device, and if the block device already exists, the block device does not need to be created, and the block device is directly returned. The following describes the steps of the swarm-based database service persistence method provided by the embodiment of the present invention in detail.
For a single database, the processing steps of swarm-based database service persistence are as follows:
step 1: when a user constructs a self-defined database mirror image, the user only needs to install the self-defined database on a dockerfile (a text document of a command for combining images) added with a keepalived (high-availability software based on a virtual routing redundancy protocol) base mirror image provided by a system, and then covers an entry entrypoint (a command executed when a container is started) of the mirror image.
Step 2: the user sends a single-database creation request to the control module; the request parameters comprise the stateful-service-set flag, network name, network driver, database name, database service port, number of database replicas, database start command, database stop command, block device size, block read/write permission, block device name, vip address, portal access service name and portal access service port. The control module forwards the request to the analysis module, which parses the request parameters;
Step 3: the resource management module calls the ceph api to create the block device according to the request forwarded by the control module; the request parameters include the pool the block belongs to, the block read/write permission, block size and block name, and if the block device to be created already exists it is returned directly;
Step 4: the control module creates the primary and standby database containers according to the request parameters, which comprise the database service name, database service port, database start command, database stop command, network name, network driver and vip address; both containers call the rexray (container volume plugin) service to mount the same block device;
Step 5: the control module calls the docker api to create the nginx portal access container according to the request parameters, which comprise the portal access container's name and port, network driver and network name; the image of the nginx portal access container is pre-built by the system and load-forwards to the primary and standby database containers;
In step 2 the analysis module parses the parameters to determine whether the request creates a single database; the request parameters must include the database service name, database service port, database start command, database stop command and vip address, among others.
In step 4 the control module first asks the network module to create an overlay network with the specified network driver and name and return its id, then calls the docker api to create the primary and standby database containers, generating a keepalived.conf configuration file at the same time. The two containers communicate via heartbeat and decide by weight which one starts the database service. At container start the primary container's initial weight is higher than the standby's, so the primary's database is started and the standby's is stopped. When the primary container goes down or its database service stops, its weight drops and its database stops, while the standby's weight rises and its database is started.
In step 5 the image of the nginx portal access container is pre-built by the system and mainly contains the nginx service, used for proxy tcp (transmission control protocol) forwarding. When the control module creates the nginx access container, a corresponding nginx.conf configuration file is generated that proxies to the two containers: the primary database container acts as the main forwarding target, and nginx forwards to the standby server, i.e. the standby database container, only after the main server goes down.
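nginx supports this primary/standby pattern natively through the `backup` marker on an upstream server in the stream module, so a generated nginx.conf for TCP forwarding could look like the sketch below (addresses and ports are illustrative):

```python
def render_failover_conf(primary, standby, listen_port):
    """Render an nginx stream config that proxies to the primary and
    falls back to the standby only when the primary is down."""
    return (
        "stream {\n"
        "    upstream db {\n"
        f"        server {primary};\n"
        f"        server {standby} backup;\n"  # used only on failover
        "    }\n"
        "    server {\n"
        f"        listen {listen_port};\n"
        "        proxy_pass db;\n"
        "    }\n"
        "}\n"
    )

conf = render_failover_conf("10.0.0.1:3306", "10.0.0.2:3306", 3306)
```

The `backup` parameter is standard nginx upstream syntax; the template-rendering helper itself is illustrative.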
For the database cluster, the processing steps of swarm-based database service persistence are as follows:
step 1: a user sends a request for creating a database cluster to a control module, wherein the request comprises a state service set mark, a network drive, a network name, a database service port, the number of database copies, a database starting command, a database stopping command, a block device size, a block read-write permission, a block device name, a vip address, an entrance access service name and an entrance access service port parameter, and the control module forwards the request to an analysis module for parameter analysis;
Step 2: the resource management module calls the ceph api to create the block devices according to the request forwarded by the control module; the request parameters include the pool the blocks belong to, block size, block name, number of blocks and block read/write permission. The number of blocks created equals the number of database containers; if a corresponding block device already exists it is returned directly, and block devices are named in the format <database-name>-{1,2,3,...};
Step 3: the control module registers the database cluster's portal access service with etcd, including the portal access service name and port;
Step 4: the control module asks the network module to create the specified network and return the network id, then calls the docker api to create the portal access service container, which contains nginx and confd; the image of the access container is pre-built by the system, and the container's service name and exposed port match the portal access service name and port registered in step 3. The confd service checks every 1 s whether the etcd key-value pairs have changed; if a key-value pair was modified, added or deleted it regenerates the nginx.conf configuration file, otherwise it leaves the configuration untouched;
Step 5: the control module registers with etcd the name and port of the database containers to be created in step 6, under the same path as the portal access service name registered in step 3.
Step 6: the control module requests the network id from the network module, then calls the docker api to create the database containers using the rexray plugin; each database container mounts its corresponding block device, the mapping being container <database-name>-{1,2,3,...} to ceph block device <database-name>-{1,2,3,...}.
In step 1 the analysis module parses the parameters to determine whether the request creates a database cluster; the request parameters must include the portal access service name, portal access service port, database service name and database service port, among others.
In step 4 the image of the nginx access container is pre-built by the system and mainly contains the nginx service and the confd service, nginx handling proxy tcp forwarding and confd watching the etcd key-values. If a key-value pair in etcd changes, confd generates a new nginx.conf from the template and asks nginx to check whether the configuration file is correct, i.e. whether it satisfies the nginx configuration syntax: if correct, nginx reloads the newly generated file; if not, nginx keeps the original one. nginx forwards to the database containers by polling, i.e. requests are distributed to the containers one by one in time order; if a forward to some database fails once, nginx does not retry that database within the next 10 s and forwards to it again only after the 10 s have passed.
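The polling-with-cooldown behaviour, round-robin across the database containers but skipping a backend for 10 s after a failed forward, can be modeled with a simulated clock (all names are illustrative):

```python
# Toy model of round-robin forwarding with a 10-second cooldown
# after a failed forward, as the embodiment describes for nginx.

class RoundRobin:
    COOLDOWN = 10  # seconds a failed backend is skipped

    def __init__(self, backends):
        self.backends = list(backends)
        self.failed_at = {}   # backend -> simulated time of last failure
        self.i = 0

    def pick(self, now):
        """Return the next healthy backend at simulated time `now`."""
        for _ in range(len(self.backends)):
            b = self.backends[self.i % len(self.backends)]
            self.i += 1
            if now - self.failed_at.get(b, -self.COOLDOWN) >= self.COOLDOWN:
                return b
        return None           # every backend is cooling down

    def report_failure(self, backend, now):
        self.failed_at[backend] = now

rr = RoundRobin(["mysql-1", "mysql-2"])
```

In real nginx this corresponds to upstream health handling; the class just makes the 10 s skip window explicit.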
In step 6, if the database cluster needs to be scaled out, a request can be sent to the control module; the control module registers the new database container's key-value pair with etcd under the same name as its portal access service, confd detects the added key-value pair, regenerates the nginx.conf configuration file and notifies nginx to reload it. The control module then calls the docker api to create the new database container and calls rexray to mount a new block device.
If the cluster needs to be scaled in, a request can likewise be sent to the control module; the control module calls the docker api to delete the database container and then forwards to the registration module, which asks etcd to delete the container's name and port. When the etcd key-values change, confd detects the deletion, regenerates the nginx.conf configuration file and notifies nginx to reload it.
This embodiment completes persistence of database services covering both a single database and a database cluster, meets different user requirements, improves the security of database data, improves the availability of running database services, and shortens the downtime recovery period of a single database service by 76 percent compared with restarting the container. Moreover, a user can create a database service simply by sending a request to the control module, reducing the operation and maintenance burden.
The above is only a preferred embodiment of the present invention, it should be noted that for those skilled in the art, without departing from the structure of the present invention, several changes and modifications can be made, which will not affect the effect of the implementation of the present invention and the practicability of the patent, and the equivalent replacement or change according to the technical scheme of the present invention and the inventive concept thereof are all within the protection scope of the present invention.

Claims (4)

1. A database service persistence method based on swarm, comprising a persistence method for a single database and one for a database cluster, characterized in that the persistence method for the single database comprises the following steps:
S1, constructing a database image based on keepalived, high-availability software built on the virtual router redundancy protocol;
S2, creating and parsing the single-database parameters, the created parameters comprising: a stateful-service-set flag, network name, network driver, database name, database service port, number of database replicas, database start command, database stop command, block device size, block read/write permission, block device name, vip address, portal access service name and portal access service port;
s3, creating a distributed storage system ceph block device according to the analyzed parameters: calling ceph api to create distributed storage system block equipment according to the parameters analyzed by the S2, wherein the created parameters comprise block size, block name, pool corresponding to the block and block read-write permission, and only one block equipment is created for mounting of the database container in the step S4;
s4, creating a database container according to the analysis parameters and mounting distributed storage system block equipment, wherein the created database container can automatically generate a keepalive. conf configuration file, a start database script and a stop database script according to the parameters created in the step S2, the parameters used in the step S4 comprise a database service name, a database service port, a database start command, a database stop command, a network name, a network driver and a vip address, and the container used in the step S4 mounts the distributed storage system block equipment by using a data volume plug-in rexray; the method for creating the database container and mounting the distributed storage system block equipment according to the analysis parameters comprises the following steps:
S4-1, creating a main database container that mounts the distributed storage system block device;
S4-2, creating a standby database container that mounts the same distributed storage system block device as the main database container;
the keepalived services of the main and standby database containers compare the two containers' weights: the container with the higher weight starts the database service and the container with the lower weight stops it, so that at any moment only one of the main and standby database containers runs the database service;
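A minimal sketch of the kind of keepalived.conf the containers could auto-generate from the S2 parameters; the interface name, priorities, vip, and script paths are illustrative assumptions, not values from the patent:

```conf
# Illustrative keepalived.conf for the main database container.
# The standby container would be generated with state BACKUP and a lower priority.
vrrp_instance DB_HA {
    state MASTER              # standby container: BACKUP
    interface eth0            # assumed container network interface
    virtual_router_id 51
    priority 100              # standby container: e.g. 90 (the "weight")
    advert_int 1
    virtual_ipaddress {
        10.0.0.100            # the parsed vip address
    }
    notify_master "/scripts/start-db.sh"   # auto-generated start-database script
    notify_backup "/scripts/stop-db.sh"    # auto-generated stop-database script
}
```

The notify hooks are one way to realize "the container with the higher weight starts the database service and the other stops it" when VRRP elects a new master.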
S5, creating a proxy server nginx access container for the database according to the parsed parameters; the proxy server access container is created from the parsed parameters and automatically generates an nginx.conf configuration file, providing proxy forwarding for both the main and standby databases; the proxy server access container forwards all requests to the main database, and only forwards requests to the standby database when the main database is down;
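One way such a generated nginx.conf could express "all traffic to the main database unless it is down" is nginx's `backup` upstream parameter; the host names and ports below are illustrative assumptions:

```conf
# Illustrative nginx.conf stream block for TCP proxying to the database pair.
stream {
    upstream db_ha {
        server db-main:3306;             # main database container
        server db-standby:3306 backup;   # used only when the main is down
    }
    server {
        listen 3306;                     # entrance access service port
        proxy_pass db_ha;
    }
}
```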
the persistence method for the database cluster comprises the following steps:
L1, creating and parsing the database cluster parameters, wherein the created parameters comprise: a stateful service set flag, a network driver, a network name, a database service port, the number of database copies, a database start command, a database stop command, a block device size, a block read/write permission, a block device name, a vip address, an entrance access service name, and an entrance access service port;
L2, creating distributed storage system block devices according to the parsed parameters;
L3, registering the database cluster entrance access service;
L4, creating a proxy server entrance access container for the database cluster according to the parsed parameters; the proxy server access container is created from the parsed parameters and comprises a confd service (a unified configuration management tool) and a proxy server;
the confd service periodically fetches information from etcd, comprising the entrance service name, the entrance service port, the database service name, and the database port, and checks whether the key-value pairs stored in etcd have changed; if they have changed, it regenerates a new nginx.conf startup configuration for the proxy server from the proxy server's configuration template, and then notifies the proxy server to reload the new nginx.conf configuration file;
the proxy server provides load balancing and access to the database cluster, and if a proxy forwarding attempt fails, it retries once after 10 s;
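A sketch of the confd resource and template pair that could keep nginx in sync with etcd; the key paths, file locations, and upstream name are assumptions for illustration:

```conf
# Illustrative confd resource, e.g. /etc/confd/conf.d/db.toml
[template]
src        = "nginx.conf.tmpl"
dest       = "/etc/nginx/nginx.conf"
keys       = ["/services"]
reload_cmd = "nginx -s reload"    # notify the proxy server to reload

# Illustrative fragment of nginx.conf.tmpl: one upstream entry per
# database container registered under the access service name.
# upstream db_cluster {
# {{range getvs "/services/db/*"}}    server {{.}};
# {{end}}}
```

confd watches (or polls) the listed key prefix, regenerates `dest` from `src` whenever the stored key-value pairs change, and then runs `reload_cmd`, matching the behavior described above.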
L5, registering the database service; registering the database service comprises the steps of:
L5-1, registering the database container name with the key-value data storage system;
L5-2, registering the database container port with the key-value data storage system;
as long as a registered database container uses the same access service name, it is added to the proxy server's load-balancing list; therefore, when the database cluster is scaled out, the new containers only need to register under the same access service name;
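The registration in L5-1/L5-2 amounts to writing a name key and a port key into etcd under the access service; the following sketch only shows a possible key layout (the key paths are assumptions, not specified by the patent):

```python
# Sketch: build the etcd key/value pairs that register one database
# container under an entrance access service. Key paths are illustrative;
# any container registered under the same access-service prefix joins the
# proxy server's load-balancing list.
def registration_entries(access_service, container_name, container_port):
    prefix = f"/services/{access_service}/{container_name}"
    return {
        f"{prefix}/name": container_name,        # step L5-1
        f"{prefix}/port": str(container_port),   # step L5-2
    }
```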
L6, creating database containers according to the parsed parameters and mounting the distributed storage system block devices, wherein each block device is mounted to the database container of the same name, and the naming format of both the database containers and the ceph block devices is: database name-{1,2,3,...}; a database cluster has a plurality of distributed storage system block devices, and the containers in step L6 mount them using the rexray data volume plug-in.
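The naming scheme database name-{1,2,3,...} pairs each container with its block device by name; a minimal sketch of generating the matching names:

```python
# Sketch: generate the matching container and ceph block-device names for a
# cluster of `copies` database replicas, following the claims' naming format
# database name-{1,2,3,...}. Each container mounts the block device that
# shares its name.
def replica_names(database_name, copies):
    return [f"{database_name}-{i}" for i in range(1, copies + 1)]
```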
2. The swarm-based database service persistence method according to claim 1, wherein step L2 calls the ceph api to create the distributed storage system block devices according to the parsed parameters, the creation parameters comprising the block size, the block name, the pool corresponding to the block, the number of blocks, and the block read/write permission; the number of block devices created equals the number of database containers created in step L6, each ceph block device is mounted to the database container of the same name, and the naming format of both the database containers and the ceph block devices is: database name-{1,2,3,...}.
3. The swarm-based database service persistence method according to claim 1, wherein step L3, registering the entrance access service, comprises the steps of:
L3-1, registering the entrance access service name with the key-value data storage system etcd;
L3-2, registering the entrance access service port with the key-value data storage system.
4. A swarm-based database service persistence system implementing the method of any one of claims 1 to 3, comprising the following modules:
a control module: for processing user requests, communicating with the other modules, and calling the docker api to create the corresponding database containers according to the user requests;
a registration module: for registration and deletion within the database cluster, the registration parameters comprising the entrance access service name, the entrance access service port, the database service name, and the database service port;
a network module: for creating the docker network for the containers and obtaining the network id;
an analysis module: for parsing the parameters of the database service creation file submitted by the user;
a resource management module: for managing the distributed storage system block device pool that provides block devices for container mounting; the pool corresponding to a block comprises the allocated block devices and the unallocated storage space, and the management operations comprise creation, deletion, and query.
CN201910049793.9A 2019-01-18 2019-01-18 Database service persistence method and system based on swarm Active CN109918359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910049793.9A CN109918359B (en) 2019-01-18 2019-01-18 Database service persistence method and system based on swarm

Publications (2)

Publication Number Publication Date
CN109918359A CN109918359A (en) 2019-06-21
CN109918359B true CN109918359B (en) 2022-03-29

Family

ID=66960469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910049793.9A Active CN109918359B (en) 2019-01-18 2019-01-18 Database service persistence method and system based on swarm

Country Status (1)

Country Link
CN (1) CN109918359B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110806907B (en) * 2019-10-31 2023-04-21 浪潮云信息技术股份公司 Containerized configurable database script execution management system and method
CN110990458B (en) * 2019-12-03 2023-04-18 电子科技大学 Distributed database system, interface communication middleware
CN111966717B (en) * 2020-09-04 2022-06-14 苏州浪潮智能科技有限公司 Data access method and system for reducing database crash probability
CN112256399B (en) * 2020-10-28 2022-08-19 四川长虹电器股份有限公司 Docker-based Jupitter Lab multi-user remote development method and system
CN112486564A (en) * 2020-12-09 2021-03-12 浪潮云信息技术股份公司 Confd dynamic update configuration-based method and system
CN112667747B (en) * 2020-12-31 2021-09-21 北京赛思信安技术股份有限公司 Dynamic configuration multi-database distributed persistence method supporting user-defined plug-in
CN113190627A (en) * 2021-06-02 2021-07-30 南京恩瑞特实业有限公司 Nginx and Mycat based information system architecture and configuration method thereof
CN114500573B (en) * 2021-12-24 2024-04-26 天翼云科技有限公司 Storage volume mounting method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103118073A (en) * 2013-01-08 2013-05-22 华中科技大学 Virtual machine data persistence storage system and method in cloud environment
WO2015176636A1 (en) * 2014-05-23 2015-11-26 中国银联股份有限公司 Distributed database service management system
CN107967124A (en) * 2017-12-14 2018-04-27 南京云创大数据科技股份有限公司 A kind of distribution persistence memory storage system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Workload-aware resource management for energy; Kang D K; 2016 IEEE Region 10 Conference (TENCON); 2016-12-31; 2428-2431 *
Scheduling strategy optimization based on Docker Swarm clusters; Lu Shenglin et al.; Information Technology; 2016-07 (No. 7); 147-151, 155 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant