CN114443225A - Method for realizing CephFS file system docking by Openstack virtual machine - Google Patents


Info

Publication number
CN114443225A
CN114443225A (application number CN202210063524.XA)
Authority
CN
China
Prior art keywords
virtual machine
cephfs
file system
module
virtio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210063524.XA
Other languages
Chinese (zh)
Inventor
蒋方文
高传集
孙思清
王新雨
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202210063524.XA
Publication of CN114443225A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation
    • G06F2009/45587Isolation or security of virtual machine instances
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for enabling an Openstack virtual machine to access a CephFS file system, and relates to the technical field of cloud computing. The method comprises two stages. First, the host mounts CephFS locally in kernel mode: the nova-compute component is started, a CephFS automatic mount module automatically mounts CephFS on the compute nodes configured by the user, and a specific trait is added to each compute node on which CephFS is mounted. Second, the virtual machine accesses CephFS through Virtio-fs: a user sends a request to create a virtual machine that uses CephFS; a request response module responds to and filters the request; a virtual machine definition module modifies the Libvirt definition file generation method, adds a Virtio-fs type file system device to the virtual machine and, as required by Virtio-fs, changes the memory-related definitions of the virtual machine to shared memory; the virtual machine is then started, the virtiofsd process communicates with QEMU, and the specified CephFS directory on the host is mapped to a mount tag inside the virtual machine. Throughout the life cycle of the virtual machine, a database module persists the relevant data. The invention improves cluster information security and shortens the network path.

Description

Method for realizing CephFS file system docking by Openstack virtual machine
Technical Field
The invention relates to the technical field of cloud computing, and in particular to a method for enabling an Openstack virtual machine to access a CephFS file system.
Background
With the rapid development of cloud computing technology, virtual machines are gradually replacing physical machines for running services in many fields. More and more government agencies and enterprises migrate their business systems from traditional computing centers to cloud-hosted virtual machines. Openstack, as an open-source cloud computing management platform, is increasingly widely used, and large numbers of Openstack deployments in cloud computing centers provide IaaS-layer virtual machine management functions to users.
Alongside the Openstack cloud resource management platform, the Ceph distributed storage that serves as its storage base has also developed rapidly. Ceph has become a de facto standard for distributed storage and provides friendly, high-performance storage in object, block, and file forms. CephFS is a file system that supports the POSIX interface and uses a Ceph storage cluster to store its data. A client can conveniently mount the CephFS file system locally.
At present, when an Openstack virtual machine uses CephFS, the file system is either mounted directly inside the virtual machine, or exported as an NFS (network file system) share through tools such as NFS-Ganesha and then mounted locally inside the virtual machine by an NFS client. Either way, information about the Ceph cluster or the Ganesha cluster is exposed to the virtual machine, which affects the security of the cluster to a certain extent.
In addition, exporting the CephFS directory through NFS-Ganesha may create a single-point bottleneck. As the number of clients and the load grow, the normal service of the clients is affected to some extent.
Disclosure of Invention
Aiming at the needs and shortcomings of the prior art, the invention provides a method for enabling an Openstack virtual machine to access a CephFS file system.
To solve the above technical problems, the method disclosed by the invention adopts the following technical scheme:
an implementation method for an Openstack virtual machine to access a CephFS file system, based on a CephFS automatic mount module, a request response module, a node scheduling module, a virtual machine definition module, and a database module, and comprising the following two stages:
(I) the host mounts the CephFS file system locally in kernel mode:
the nova-compute component of Openstack is started, the CephFS automatic mount module automatically mounts the CephFS file system on the compute nodes configured by the user, and a specific trait is added to each compute node on which the CephFS file system is mounted, for use in subsequent scheduling;
(II) the virtual machine accesses the locally mounted CephFS file system through Virtio-fs:
a user issues a request to create a virtual machine that uses the CephFS file system,
the request response module responds to and filters the request,
the virtual machine definition module modifies the virtual machine Libvirt definition file generation method, adds a Virtio-fs type file system device to the virtual machine and, since Virtio-fs requires the virtual machine to use shared memory, changes the memory-related definitions of the virtual machine to shared memory,
the virtual machine is started, the virtiofsd process communicates with QEMU, and the specified directory of the CephFS file system on the host is mapped to a mount tag inside the virtual machine;
while the above steps are executed, the database module persists the data of the whole life cycle of the virtual machine, including persistence of the request, persistence of the virtual machine's devices, and persistence of the virtual machine's state changes.
Optionally, when the CephFS automatic mount module adds a specific trait to a compute node on which CephFS is mounted, the following items need to be added to the configuration of the nova-compute component: an indication of whether the Virtio-fs type file system is supported; a configuration item for the path at which CephFS is mounted in the local file system; the Ceph configuration file path, the user name, the key, and the CephFS file system name to be used; and a trait indicating that the current compute node supports Virtio-fs.
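As a concrete illustration, the configuration items described above could take the following form in `nova.conf` on a compute node. The section and option names below are hypothetical sketches, since the patent does not fix exact names:

```ini
# Hypothetical [cephfs] section in /etc/nova/nova.conf on a compute node.
[cephfs]
# Whether this node should support Virtio-fs type file systems.
virtiofs_enabled = true
# Local path at which the CephFS file system is mounted.
mount_point = /var/lib/nova/cephfs
# Ceph configuration file, client user name, and key used for the mount.
ceph_conf = /etc/ceph/ceph.conf
ceph_user = nova
ceph_keyring = /etc/ceph/ceph.client.nova.keyring
# Name of the CephFS file system to mount.
cephfs_name = cephfs
```

On startup, the CephFS automatic mount module would read these items, perform the mount, and report the Virtio-fs trait for the node to the scheduler.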
Further optionally, the CephFS automatic mount module automatically mounts CephFS on the compute nodes configured by the user, and performs the following operations when a compute node starts:
(1) verify whether the current compute node supports a Virtio-fs based CephFS shared file system, and verify whether the configuration item is enabled;
(2) verify whether the current QEMU and Libvirt versions meet the requirements, and add the corresponding traits to the current compute node;
(3) mount the corresponding CephFS file system at the CephFS mount path given in the configuration, using the Ceph configuration file path, user name, key, and CephFS file system name from the configuration of the nova-compute component.
Preferably, the kernel version of the virtual machine is not lower than Linux 5.4, the Libvirt version is not lower than 6.2, and the QEMU version is not lower than 5.0.
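The version gate above can be checked programmatically when the compute node starts. A minimal sketch, assuming dotted version strings; the function names are illustrative, not the patent's actual code:

```python
# Minimal sketch of the version gate described above: the node qualifies for
# Virtio-fs only if QEMU >= 5.0, Libvirt >= 6.2, and the guest kernel >= 5.4.
MIN_QEMU = (5, 0)
MIN_LIBVIRT = (6, 2)
MIN_GUEST_KERNEL = (5, 4)

def parse_version(text):
    """Turn a dotted version string such as '6.2.0' into a comparable tuple."""
    return tuple(int(part) for part in text.split(".")[:2])

def supports_virtiofs(qemu_version, libvirt_version, guest_kernel_version):
    """Return True only if all three components meet the minimum versions."""
    return (parse_version(qemu_version) >= MIN_QEMU
            and parse_version(libvirt_version) >= MIN_LIBVIRT
            and parse_version(guest_kernel_version) >= MIN_GUEST_KERNEL)
```

For example, a node with QEMU 6.2, Libvirt 8.0, and guest kernel 5.15 passes, while a node still running QEMU 4.2 is rejected and would not receive the Virtio-fs trait.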
Optionally, while the request response module responds to the user request:
the virtual machine creation interface needs to be modified to add identification and verification of Virtio-fs type file system devices;
when a virtual machine is created, the information of the Virtio-fs file system to be mounted is added and verified, including (a) the file system path to be actually mounted, which is a path inside the CephFS file system, and (b) the mount tag used when mounting inside the virtual machine.
Further optionally, when creating the virtual machine, if a compute node is specified, it must be verified whether the specified compute node supports a Virtio-fs type file system.
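The request-side checks described above can be sketched as follows. The field names (`vfs_path`, `vfs_tag`) and the trait name `COMPUTE_VIRTIOFS` are hypothetical placeholders, not names fixed by the patent:

```python
# Sketch of the request response module's validation of a Virtio-fs file
# system request: it must carry a CephFS-internal path and a mount tag,
# and any explicitly requested compute node must expose the Virtio-fs trait.
def validate_virtiofs_request(request, node_traits=None):
    """Return (ok, error_message) for a virtual machine creation request."""
    path = request.get("vfs_path", "")
    tag = request.get("vfs_tag", "")
    if not path.startswith("/"):
        return False, "vfs_path must be an absolute path inside CephFS"
    if not tag or len(tag) > 36:            # virtio-fs tags are short strings
        return False, "vfs_tag must be a non-empty short string"
    node = request.get("compute_node")
    if node is not None:                    # a compute node was specified
        traits = (node_traits or {}).get(node, set())
        if "COMPUTE_VIRTIOFS" not in traits:
            return False, f"node {node} does not support Virtio-fs"
    return True, ""
```

A request without a specified node is checked only for the path and tag; a request that pins a node is additionally checked against the traits stored for that node.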
Optionally, the virtual machine definition module uses shared memory for the virtual machine as required by Virtio-fs and changes the memory-related definitions of the virtual machine to shared memory; in this process, the following verification steps are required:
(1) verify whether the requested virtual machine needs to mount a Virtio-fs type file system; if so, verify the current QEMU and Libvirt versions; execute the subsequent steps after the verification passes, and report an error and return to the scheduler to reselect a node if the verification fails;
(2) verify whether the virtual machine uses hugepages; if so, proceed to the next step, and if not, change the memory definition of the virtual machine to the shared memory form;
(3) obtain the mount path and mount tag carried by the request, and add a Virtio-fs type file system device to the virtual machine;
(4) after the definition of the virtual machine is completed, start the virtual machine; the virtiofsd process runs alongside the virtual machine and is responsible for handling file system requests from the guest.
Optionally, the database module persists data over the whole life cycle of the virtual machine; in this process:
the CephFS automatic mount module stores the traits added to the compute nodes in the database module for subsequent scheduling;
the request response module stores the verified virtual machine creation request in the database module for use by subsequent processes;
the node scheduling module obtains the resource data and trait information of each compute node from the database module for comparison, to complete the screening of compute nodes;
the virtual machine definition module updates the virtual machine device data and resource usage data, maintains the virtual machine state in time, and stores it in the database module.
Compared with the prior art, the method for enabling an Openstack virtual machine to access a CephFS file system has the following beneficial effects:
(1) on the one hand, the invention uses Virtio-fs to export a directory of the CephFS file system and mount it inside the virtual machine without leaking cluster information; the only information used when mounting inside the virtual machine is the mount tag specified for Virtio-fs in the virtual machine definition file, so the CephFS file system can be used more safely; on the other hand, the CephFS file system is mounted on the compute node where each virtual machine is located, which eliminates the single-point bottleneck of exporting the CephFS directory through NFS-Ganesha, shortens the network path, and improves the read-write performance of the file system;
(2) the invention divides the connection between the virtual machine and the CephFS file system into two stages, namely the connection between the virtual machine and the host, and the connection between the host and the CephFS file system; the configuration information of the Ceph cluster is shielded at the host layer, so the Ceph configuration is invisible to tenants and the security risk of the Ceph cluster is reduced;
(3) the host mounts the CephFS file system in kernel mode, which eliminates the single-point bottleneck of exporting CephFS through NFS-Ganesha, shortens the network path, reduces the IO cost of protocol conversion, and improves the read-write performance of the CephFS file system;
(4) the invention uses shared memory for data access and interaction, which can further improve the read-write performance of the CephFS file system.
Drawings
FIG. 1 is a connection diagram of an implementation module of the present invention;
figure 2 is a physical architecture diagram of the present invention.
Detailed Description
In order to make the technical scheme, the technical problems to be solved, and the technical effects of the present invention clearer, the technical scheme of the present invention is described below in combination with specific embodiments.
Embodiment 1:
With reference to Figs. 1 and 2, this embodiment provides an implementation method for an Openstack virtual machine to access a CephFS file system. The implementation method is based on a CephFS automatic mount module, a request response module, a node scheduling module, a virtual machine definition module, and a database module, and comprises the following two stages:
the CephFS file system is mounted to the local by a host machine based on a kernel mode:
and starting a nova-computer component of Openstack, automatically mounting the CephFS file system on a computing node configured by a user by a CephFS automatic mounting module, and adding a specific feature to the computing node mounting the CephFS file system for use in subsequent scheduling.
When the automatic CephFS mounting module adds a specific feature to a computing node mounting the CephFS, the method needs to add the following features to the configuration items of the nova-computer component: whether the indication of the Virtio-fs type file system is supported, a path configuration item of the CephFS mounted in the local file system, a Ceph configuration file path, a used user name, a used key and a used CephFS file system name are added, and a characteristic that the current computing node supports the Virtio-fs is added.
The automatic CephFS mounting module automatically mounts CephFS on a computing node configured by a user, and when the computing node is started, the following operations are carried out:
(1) verifying whether the current computing node supports a shared file system of a CephFS based on Virtio-fs and verifying whether a configuration item is started;
(2) verifying whether the versions of the current QEMU and Libvirt meet requirements or not, and adding features to the current computing node according to the requirements;
(3) and mounting the corresponding CephFS file system to the CephFS mounting path in the configuration item through the CephProfile path, the used user name, the used key and the CephFS file system name in the configuration item of the nova-computer component.
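Step (3) amounts to a kernel-mode `mount -t ceph` invocation assembled from the configuration items. The sketch below only builds the argument list (the monitor addresses, paths, and option names are illustrative); a real nova-compute would execute it with root privileges through its privileged helper:

```python
# Build the kernel-mode CephFS mount command from the configuration items
# described in step (3).  This only constructs the argument list; actually
# running it requires root privileges on the compute node.
def build_cephfs_mount_cmd(monitors, mount_point, user, secret_file, fs_name):
    """Return the mount(8) argument list for a kernel CephFS mount."""
    options = ",".join([
        f"name={user}",                 # Ceph client user, e.g. 'nova'
        f"secretfile={secret_file}",    # secret derived from the keyring
        f"mds_namespace={fs_name}",     # which CephFS file system to mount
    ])
    source = ",".join(monitors) + ":/"  # mount the root of the file system
    return ["mount", "-t", "ceph", source, mount_point, "-o", options]
```

For example, `build_cephfs_mount_cmd(["10.0.0.1:6789"], "/var/lib/nova/cephfs", "nova", "/etc/ceph/nova.secret", "cephfs")` yields the familiar `mount -t ceph 10.0.0.1:6789:/ /var/lib/nova/cephfs -o name=nova,...` command line.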
In this embodiment, the kernel version of the virtual machine is not lower than Linux 5.4, the Libvirt version is not lower than 6.2, and the QEMU version is not lower than 5.0.
(II) The virtual machine accesses the locally mounted CephFS file system through Virtio-fs:
(I) A user issues a request to create a virtual machine that uses the CephFS file system.
(II) The request response module responds to and filters the request. In this process:
the virtual machine creation interface needs to be modified to add identification and verification of Virtio-fs type file system devices;
when a virtual machine is created, the information of the Virtio-fs file system to be mounted is added and verified, including (a) the file system path to be actually mounted, which is a path inside the CephFS file system, and (b) the mount tag used when mounting inside the virtual machine.
When creating the virtual machine, if a compute node is specified, it must be verified whether the specified compute node supports a Virtio-fs type file system.
(III) The virtual machine definition module modifies the virtual machine Libvirt definition file generation method, adds a Virtio-fs type file system device to the virtual machine and, as required by Virtio-fs, changes the memory-related definitions of the virtual machine to shared memory. In this process, the following verification steps are required:
(1) verify whether the requested virtual machine needs to mount a Virtio-fs type file system; if so, verify the current QEMU and Libvirt versions; execute the subsequent steps after the verification passes, and report an error and return to the scheduler to reselect a node if the verification fails;
(2) verify whether the virtual machine uses hugepages; if so, proceed to the next step, and if not, change the memory definition of the virtual machine to the shared memory form;
(3) obtain the mount path and mount tag carried by the request, and add a Virtio-fs type file system device to the virtual machine;
(4) after the definition of the virtual machine is completed, start the virtual machine; the virtiofsd process runs alongside the virtual machine and is responsible for handling file system requests from the guest.
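The modifications in steps (2) and (3) correspond to fragments like the following in the generated Libvirt domain XML. The directory, tag, and memfd backing shown here are an illustrative sketch, not the patent's literal output:

```xml
<!-- Step (2): memory backed by a shared memfd, as Virtio-fs requires. -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- Step (3): Virtio-fs device exporting a CephFS subdirectory. -->
<devices>
  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <source dir='/var/lib/nova/cephfs/share-0001'/>
    <target dir='sharedfs'/>   <!-- the mount tag seen inside the guest -->
  </filesystem>
</devices>
```

Inside the guest, the share is then mounted purely by tag, e.g. `mount -t virtiofs sharedfs /mnt/cephfs`, with no Ceph monitor addresses, user names, or keys visible to the tenant.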
(IV) The virtual machine is started, the virtiofsd process communicates with QEMU, and the specified directory of the CephFS file system on the host is mapped to the mount tag inside the virtual machine.
While the above steps are executed,
(i) the CephFS automatic mount module stores the traits added to the compute nodes in the database module for subsequent scheduling;
(ii) the request response module stores the verified virtual machine creation request in the database module for use by subsequent processes;
(iii) the node scheduling module obtains the resource data and trait information of each compute node from the database module for comparison, to complete the screening of compute nodes;
(iv) the virtual machine definition module updates the virtual machine device data and resource usage data, maintains the virtual machine state in time, and stores it in the database module.
In short, the database module persists the data of the whole life cycle of the virtual machine, including persistence of the request, persistence of the virtual machine's devices, and persistence of the virtual machine's state changes.
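Steps (i)–(iii) can be pictured as a trait-and-resource filter over the node records stored in the database module. The record layout and trait name below are a hypothetical sketch:

```python
# Sketch of the node scheduling module's screening: compare each compute
# node's stored traits and free resources against the request's needs.
def schedule_nodes(nodes, need_virtiofs, vcpus, ram_mb):
    """Return the names of compute nodes that satisfy the request."""
    selected = []
    for node in nodes:
        if need_virtiofs and "COMPUTE_VIRTIOFS" not in node["traits"]:
            continue                      # node never mounted CephFS
        if node["free_vcpus"] < vcpus or node["free_ram_mb"] < ram_mb:
            continue                      # not enough free resources
        selected.append(node["name"])
    return selected
```

A request that needs Virtio-fs is thus scheduled only onto nodes that recorded the trait in stage (I), while ordinary requests still see every node with sufficient resources.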
In summary, the method for enabling an Openstack virtual machine to access a CephFS file system avoids the leakage of cluster information, eliminates the single-point bottleneck of exporting the CephFS directory through NFS-Ganesha, shortens the network path, and improves the read-write performance of the file system.
The principles and embodiments of the present invention have been described using specific examples, which are provided only to help understand the core technical content of the invention. Those skilled in the art may make improvements and modifications to the invention without departing from its principle, and all such improvements and modifications fall within the scope of protection of the invention.

Claims (8)

1. A method for enabling an Openstack virtual machine to access a CephFS file system, characterized in that the method is based on a CephFS automatic mount module, a request response module, a node scheduling module, a virtual machine definition module, and a database module, and comprises the following two stages:
(I) the host mounts the CephFS file system locally in kernel mode:
the nova-compute component of Openstack is started, the CephFS automatic mount module automatically mounts the CephFS file system on the compute nodes configured by the user, and a specific trait is added to each compute node on which the CephFS file system is mounted, for use in subsequent scheduling;
(II) the virtual machine accesses the locally mounted CephFS file system through Virtio-fs:
a user issues a request to create a virtual machine that uses the CephFS file system,
the request response module responds to and filters the request,
the virtual machine definition module modifies the virtual machine Libvirt definition file generation method, adds a Virtio-fs type file system device to the virtual machine and, as required by Virtio-fs, changes the memory-related definitions of the virtual machine to shared memory,
the virtual machine is started, the virtiofsd process communicates with QEMU, and the specified directory of the CephFS file system on the host is mapped to a mount tag inside the virtual machine;
while the above steps are executed, the database module persists the data of the whole life cycle of the virtual machine, including persistence of the request, persistence of the virtual machine's devices, and persistence of the virtual machine's state changes.
2. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 1, wherein when the CephFS automatic mount module adds a specific trait to a compute node on which CephFS is mounted, the following items need to be added to the configuration of the nova-compute component: an indication of whether the Virtio-fs type file system is supported; a configuration item for the path at which CephFS is mounted in the local file system; the Ceph configuration file path, the user name, the key, and the CephFS file system name to be used; and a trait indicating that the current compute node supports Virtio-fs.
3. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 2, wherein the CephFS automatic mount module automatically mounts CephFS on the compute nodes configured by the user, and performs the following operations when a compute node starts:
(1) verify whether the current compute node supports a Virtio-fs based CephFS shared file system, and verify whether the configuration item is enabled;
(2) verify whether the current QEMU and Libvirt versions meet the requirements, and add the corresponding traits to the current compute node;
(3) mount the corresponding CephFS file system at the CephFS mount path given in the configuration, using the Ceph configuration file path, user name, key, and CephFS file system name from the configuration of the nova-compute component.
4. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 3, wherein the kernel version of the virtual machine is not lower than Linux 5.4, the Libvirt version is not lower than 6.2, and the QEMU version is not lower than 5.0.
5. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 1, wherein while the request response module responds to the user request:
the virtual machine creation interface needs to be modified to add identification and verification of Virtio-fs type file system devices;
when a virtual machine is created, the information of the Virtio-fs file system to be mounted is added and verified, including (a) the file system path to be actually mounted, which is a path inside the CephFS file system, and (b) the mount tag used when mounting inside the virtual machine.
6. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 5, wherein when creating a virtual machine, if a compute node is specified, it must be verified whether the specified compute node supports a Virtio-fs type file system.
7. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 1, wherein the virtual machine definition module uses shared memory for the virtual machine as required by Virtio-fs and changes the memory-related definitions of the virtual machine to shared memory; in this process, the following verification steps are required:
(1) verify whether the requested virtual machine needs to mount a Virtio-fs type file system; if so, verify the current QEMU and Libvirt versions; execute the subsequent steps after the verification passes, and report an error and return to the scheduler to reselect a node if the verification fails;
(2) verify whether the virtual machine uses hugepages; if so, proceed to the next step, and if not, change the memory definition of the virtual machine to the shared memory form;
(3) obtain the mount path and mount tag carried by the request, and add a Virtio-fs type file system device to the virtual machine;
(4) after the definition of the virtual machine is completed, start the virtual machine; the virtiofsd process runs alongside the virtual machine and is responsible for handling file system requests from the guest.
8. The method for enabling an Openstack virtual machine to access a CephFS file system according to claim 1, wherein the database module persists data over the whole life cycle of the virtual machine; in this process:
the CephFS automatic mount module stores the traits added to the compute nodes in the database module for subsequent scheduling;
the request response module stores the verified virtual machine creation request in the database module for use by subsequent processes;
the node scheduling module obtains the resource data and trait information of each compute node from the database module for comparison, to complete the screening of compute nodes;
the virtual machine definition module updates the virtual machine device data and resource usage data, maintains the virtual machine state in time, and stores it in the database module.
CN202210063524.XA 2022-01-20 2022-01-20 Method for realizing CephFS file system docking by Openstack virtual machine Pending CN114443225A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210063524.XA CN114443225A (en) 2022-01-20 2022-01-20 Method for realizing CephFS file system docking by Openstack virtual machine


Publications (1)

Publication Number Publication Date
CN114443225A 2022-05-06

Family

ID=81367063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210063524.XA Pending CN114443225A (en) 2022-01-20 2022-01-20 Method for realizing CephFS file system docking by Openstack virtual machine

Country Status (1)

Country Link
CN (1) CN114443225A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination