CN116540943A - ISCSI service generation method, system and medium based on distributed file system - Google Patents

ISCSI service generation method, system and medium based on distributed file system

Info

Publication number
CN116540943A
Authority
CN
China
Prior art keywords
linux
file system
logical volume
distributed file
iscsi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310572709.8A
Other languages
Chinese (zh)
Inventor
倪新江
吴佳欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hexin Digital Technology Co ltd
Hexin Technology Co ltd
Original Assignee
Beijing Hexin Digital Technology Co ltd
Hexin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hexin Digital Technology Co ltd, Hexin Technology Co ltd filed Critical Beijing Hexin Digital Technology Co ltd
Priority to CN202310572709.8A priority Critical patent/CN116540943A/en
Publication of CN116540943A publication Critical patent/CN116540943A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0607Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094Redundant storage or storage space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/11File system administration, e.g. details of archiving or snapshots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/1858Parallel file systems, i.e. file systems supporting multiple processors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0617Improving the reliability of storage systems in relation to availability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0643Management of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662Virtualisation aspects
    • G06F3/0665Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0654Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0663Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides an ISCSI service generation method, system and medium based on a distributed file system constructed by a plurality of Linux servers. The method includes: constructing the distributed file system based on the plurality of Linux servers; constructing an image file or a disk partition file on the distributed file system, so as to obtain a block device corresponding to the image file or disk partition file according to a creation instruction of the operating system; creating a dynamically extensible logical volume based on the logical volume manager of the operating system; and building an image on the operating system to deploy a container through that image, thereby generating an ISCSI service that uses the logical volume of the operating system as the container's disk device. The invention builds and runs the GPFS distributed file system on servers of the POWER architecture and implements the ISCSI service on those servers, solving the problem that the updated GPFS distributed file system can no longer readily provide ISCSI storage; it achieves high availability (HA) and cross-node dynamic expansion, supports dynamic expansion of the storage space, and supports quick deployment at the container level.

Description

ISCSI service generation method, system and medium based on distributed file system
Technical Field
The present invention relates to the technical field of storage services, and in particular, to a method, a system, and a medium for generating ISCSI services based on a distributed file system constructed by a plurality of Linux servers.
Background
GPFS (General Parallel File System) is a parallel disk file system whose strong scalability is achieved through a shared-disk architecture. The GPFS file system architecture diagram is shown in FIG. 1: a GPFS system is made up of a number of cluster nodes (File System Nodes) on which the GPFS file system and applications run. The cluster nodes are connected to shared disks (Shared Disks) through a switch fabric network (Switch Fabric); all cluster nodes have the same access rights to all disks, and files are striped across all disks in the file system. Such striped storage not only keeps the individual disks load-balanced, but also lets the system achieve maximum throughput.
The switch fabric (Switch Fabric) that connects the file system nodes and shared disks typically comprises a SAN (Storage Area Network), using, for example, the FC or ISCSI storage network protocols. However, GPFS dropped native support for the ISCSI storage network protocol after upgrading to Version 5, while significant demand for ISCSI built on the GPFS file system remains in production environments.
In addition, although ISCSI storage services at the FILEIO level are easy to implement, dynamic space expansion is difficult to achieve with them, while container-based ISCSI services lack easy-to-use management tools. Therefore, there is a need in the art for an ISCSI service implementation that supports dynamic expansion of the storage space and quick deployment at the container level.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide an ISCSI service generation method, system and medium based on a distributed file system constructed by a plurality of Linux servers, to solve the following technical problems: the prior art cannot meet the large ISCSI demand built on the GPFS file system, cannot dynamically expand the storage space, and does not support quick deployment at the container level.
To achieve the above and other related objects, a first aspect of the present application provides an ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers, including: constructing a distributed file system based on a plurality of Linux servers; constructing an image file or a disk partition file according to the storage object type on the distributed file system, so as to obtain a corresponding block device according to a creation instruction of the Linux system; creating a dynamically extensible logical volume corresponding to the block device based on the logical volume manager of the Linux system; and building an image on the Linux system and deploying a container through that image, thereby generating an ISCSI service that uses the logical volume of the Linux system as the container's disk device.
In some embodiments of the first aspect of the present application, the process of constructing a distributed file system based on a plurality of Linux servers includes: selecting a plurality of servers based on the POWER architecture, each running a Linux system, to form a POWER-based Linux server cluster, and constructing a GPFS distributed file system on that cluster.
In some embodiments of the first aspect of the present application, the process of obtaining the block device corresponding to the image file or the disk partition file according to the creation instruction of the operating system includes: obtaining a LOOP block device file based on a block device file creation instruction in the Linux system; and obtaining the LOOP block device corresponding to the LOOP block device file based on a block device creation instruction in the Linux system.
In some embodiments of the first aspect of the present application, the creating, based on the logical volume manager of the Linux system, of a dynamically extensible logical volume corresponding to the block device includes: obtaining the physical volume corresponding to the LOOP block device based on a physical volume creation instruction in the Linux system; obtaining the volume group corresponding to the physical volume based on a volume group creation instruction in the Linux system; obtaining the logical volume corresponding to the volume group based on a logical volume creation instruction in the Linux system; and dynamically adjusting the space size of the logical volume, without interrupting application access to the logical volume, based on a logical volume expansion instruction in the Linux system.
In some embodiments of the first aspect of the present application, the ISCSI service construction process includes: constructing a DOCKER image on the Linux server, starting a DOCKER container through the DOCKER image, and constructing, according to the DOCKER image, an ISCSI service that uses the logical volume of the Linux server as the disk device of the DOCKER container.
In some embodiments of the first aspect of the present application, in response to execution of the ISCSI service construction procedure, the method further comprises: acquiring the host IP address of the Linux server serving as the host, virtualizing it into a virtual IP address used for active-standby switching, and mounting the virtual IP address onto the running host through the KEEPALIVE service.
In some embodiments of the first aspect of the present application, the method further comprises: constructing corresponding KEEPALIVE management scripts to perform the high-availability active-standby configuration of block device creation and the high-availability active-standby configuration of logical volume creation, and dynamically adjusting the space size of the logical volume by adding or removing physical disks at the storage nodes of the distributed file system.
In some embodiments of the first aspect of the present application, in response to execution of the ISCSI service construction procedure, the method further comprises: based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host, the ISCSI port of the DOCKER container is managed through a port management tool of the host.
To achieve the above and other related objects, a second aspect of the present application provides an ISCSI service generation system based on a distributed file system constructed by a plurality of Linux servers, comprising: a file system construction module for constructing a distributed file system based on a plurality of Linux servers; a block device construction module for constructing an image file or a disk partition file according to the storage object type on the distributed file system, so as to obtain a corresponding block device according to a creation instruction of the Linux system; a logical volume construction module for creating a dynamically extensible logical volume corresponding to the block device based on the logical volume manager of the Linux system; and a container deployment module for building an image on the Linux system and deploying a container through that image, thereby generating an ISCSI service that uses the logical volume of the Linux system as the container's disk device.
In some embodiments of the second aspect of the present application, the system further comprises a high availability module for performing the following operations: acquiring the host IP address of the Linux server serving as the host, virtualizing it into a virtual IP address used for active-standby switching, and mounting the virtual IP address onto the running host through the KEEPALIVE service.
In some embodiments of the second aspect of the present application, the high availability module is further configured to: construct corresponding KEEPALIVE management scripts to perform the high-availability active-standby configuration of block device creation and the high-availability active-standby configuration of logical volume creation, and dynamically adjust the space size of the logical volume by adding or removing physical disks at the storage nodes of the distributed file system.
In some embodiments of the second aspect of the present application, the system further comprises: and the port management module is used for managing the ISCSI port of the DOCKER container through a port management tool of the host based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers.
As described above, the ISCSI service generation method, system and medium based on the distributed file system constructed by a plurality of Linux servers of the present application have the following beneficial effects:
(1) The invention builds and runs the GPFS distributed file system on servers of the POWER architecture and implements the ISCSI service on those servers, thereby solving the inconvenience caused by the GPFS distributed file system dropping the ISCSI storage network protocol after its upgrade.
(2) The invention combines the characteristics of the GPFS distributed file system with the KEEPALIVE service to achieve high availability (HA) and cross-node dynamic expansion, and can therefore support dynamic expansion of the storage space.
(3) The invention can support rapid deployment at the container level.
Drawings
FIG. 1 is a schematic diagram of a GPFS distributed file system according to an embodiment of the present application.
Fig. 2 is a flowchart of an ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a block device creation process according to an embodiment of the present application.
FIG. 4 is a flow chart illustrating a logical volume creation process according to one embodiment of the present application.
FIG. 5 is a schematic diagram showing the relationship between a DOCKER image and a DOCKER container in one embodiment of the present application.
FIG. 6 is a flow chart of a KEEPALIVE-based service according to an embodiment of the present application.
Fig. 7 is a flowchart of an ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an ISCSI service generation system based on a distributed file system constructed by a plurality of Linux servers according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which describe several embodiments of the present application. It is to be understood that other embodiments may be utilized, and that mechanical, structural, electrical and operational changes may be made, without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," "held," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
In order to solve the above-mentioned problems in the background art, the present invention provides a method, a system and a medium for ISCSI service generation based on a distributed file system constructed by a plurality of Linux servers. In order to make the objects, technical solutions and advantages of the present invention more apparent, further detailed description of the technical solutions in the embodiments of the present invention will be given by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Before explaining the present invention in further detail, terms and terminology involved in the embodiments of the present invention will be explained, and the terms and terminology involved in the embodiments of the present invention are applicable to the following explanation:
<1> GPFS (General Parallel File System) is a parallel disk file system that ensures that all nodes within a resource group can access the entire file system. GPFS allows clients to share files, which may be distributed on different hard disks of different nodes. GPFS provides a number of standard UNIX file system interfaces that allow applications to run on it without modification or recompiling.
<2> ISCSI (Internet Small Computer System Interface), an Internet small computer system interface, also known as IP-SAN, is a storage technology based on Internet and SCSI-3 protocols.
<3> POWER (Performance Optimization With Enhanced RISC) architecture is an architecture based on a RISC instruction set developed by IBM. Compared with processors of the X86 architecture, processors adopting the POWER architecture have the characteristics of simple structure and high efficiency. Processors of the POWER architecture are widely used in various fields, from supercomputers and UNIX servers down to cellular phones and in-vehicle devices.
<4> DOCKER container: DOCKER is an open-source application container engine that lets developers package their applications and dependencies into a portable container in a unified manner, and then deploy it to any server with the DOCKER engine installed (including Linux or Windows machines); it also supports virtualization.
<5> HA (High Availability) is a cluster architecture for improving the availability of a system; it is an effective solution for guaranteeing service continuity, and generally has two or more nodes, divided into active nodes and standby nodes.
<6> KEEPALIVE is a lightweight high-availability solution under Linux; it mainly achieves high availability through route redundancy, and its configuration is simple, requiring only one configuration file.
<7> Image: an image is a file system that provides the files and parameter configuration a container needs at run time. It is analogous to the installation package downloaded when using a piece of software, or to the ISO file used when installing an operating system.
<8> Host: a host is a dedicated physical server deployed with a virtualized environment; by having exclusive use of the resources of the entire physical server, the user is physically isolated from the servers of other tenants.
Embodiments of the present invention provide an ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers, a corresponding system, and a storage medium storing an executable program for implementing the method. With respect to the implementation of the method, an exemplary implementation scenario of ISCSI service generation based on such a distributed file system is described below.
Referring to fig. 2, a flowchart of an ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers in an embodiment of the present invention is shown. The ISCSI service generation method based on the distributed file system constructed by a plurality of Linux servers in this embodiment mainly includes the following steps:
Step S1: and constructing a distributed file system based on a plurality of Linux servers.
Preferably, the embodiment of the invention selects a plurality of Linux servers based on the POWER (Performance Optimization With Enhanced RISC) architecture to form a POWER-based Linux server cluster, and constructs a GPFS (General Parallel File System) distributed file system on that cluster. A POWER-architecture Linux server has the advantages of minimizing complexity, improving efficiency and being easy to expand. The GPFS distributed file system allows users to share files, which may be distributed on different hard disks of different nodes; it also provides a number of standard UNIX file system interfaces that allow applications to run on it without modification or recompiling.
Step S2: and constructing an image file or a disk partition file according to the storage object type based on the distributed file system, so as to obtain corresponding block equipment according to the creation instruction of the Linux system.
It should be noted that both image files (FILEIO files) and disk partition files (BLOCK files) may be used when creating the block device. The BLOCK type supports block-type storage objects backed by local block devices and logical devices, while the FILEIO type supports file-type storage objects backed by regular files (such as image files or sparse files) stored on a local disk.
There are various ways to construct the FILEIO file or the BLOCK file, and the embodiment of the present invention is not particularly limited in this respect. Taking FILEIO files such as raw-format and qcow2-format files as an illustration, a raw-format file can be constructed with the truncate instruction in the Linux system, and a qcow2-format file can be created with the qemu-img instruction in the Linux system.
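By way of a non-limiting sketch (the file paths and the 10 GiB size below are illustrative assumptions, not values from the invention), the two kinds of FILEIO files might be created as follows:

    # Create a 10 GiB sparse raw-format file with the truncate instruction
    truncate -s 10G /gpfs/fs1/iscsi_backing.raw

    # Alternatively, create a qcow2-format file with the qemu-img instruction
    qemu-img create -f qcow2 /gpfs/fs1/iscsi_backing.qcow2 10G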
Illustratively, the above step S2 may be deconstructed into various sub-steps as shown in fig. 3:
step S21: and obtaining the LOOP block device file based on the block device file creation instruction in the Linux system.
For example: the LOOP block device file may be constructed using the mknod instruction in the Linux system. It should be noted that, before the Linux system communicates with a device, a device file stored under the /dev directory generally needs to exist; by default, many device files are already created, and additional ones may be created manually through the mknod instruction. The syntax format of the mknod instruction is mknod [options] name type [major minor]; common options and type arguments are shown in table 1 below:
table 1: common parameters of a mknod instruction
    Option / type    Meaning
    -Z               set the SELinux security context
    -m               set the permission mode (default: read-write)
    b                create a block device
    c                create a character device
    --help           display help information
    --version        display version information
It should be understood that a LOOP block device is a kind of block device in the Linux system. The Linux system can use a file as a disk device; a block device is a device (such as a disk, an optical disc, or a flash drive) on which data is stored and accessed in units of "blocks" in the Linux system.
A LOOP block device is a pseudo-device: it emulates a block device using an ordinary file rather than a block special file, so that after the file is emulated as a block device it can be used as a disk or an optical disc. Before use, a LOOP block device must be associated with a file; the combination then provides an interface that can substitute for a block special file. For example, in an ext4 file system there may be a 1000 KiB file named loop_dev.img, which can nevertheless be mounted as a separate block device under some directory of the system, such as a mount point directory.
Step S22: and obtaining LOOP block equipment corresponding to the LOOP block equipment file based on a block equipment creation instruction in the Linux system.
For example: the LOOP block device can be built using a lostup instruction in the Linux system. It should be noted that the syntax format of the localup instruction is as follows:
losetup [ -d ] [ e < encryption scheme > ] [ -o < translation number > ] [ cycle device code ] [ file ]; wherein a parameter "-d" indicates the removal device, a parameter "-e < encryption mode >" indicates the start-up encryption encoding, and a parameter "-o < number of translations >" indicates the number of set data translations.
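As an illustrative sketch of steps S21-S22 (the device node and file path are assumptions; in Linux, loop devices use major number 7):

    # Create a device node for the loop driver (block device, major 7, minor 8)
    mknod -m 0660 /dev/loop8 b 7 8

    # Associate the backing file on the GPFS file system with the loop device
    losetup /dev/loop8 /gpfs/fs1/iscsi_backing.raw

    # Verify the association
    losetup -l /dev/loop8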
Step S3: and creating a dynamically expandable logical volume corresponding to the block device based on a logical volume manager of the Linux system.
It should be noted that the logical volume manager, abbreviated LVM (Logical Volume Manager), is a mechanism for managing disk partitions in a Linux environment; it is a logical layer built on top of hard disks and partitions, so as to improve the flexibility of disk partition management.
Illustratively, the above step S3 may be deconstructed into the various sub-steps shown in FIG. 4:
step S31: and obtaining the physical volume corresponding to the LOOP block device based on a physical volume creation instruction in the Linux system.
For example: the pvcreate instruction in the Linux system may be used to create a physical volume corresponding to the LOOP block device. Note that, the physical volume is simply PV (Physical Volume), which refers to a physical disk partition; the logical volume manager LVM (Logic Volume Manager) may initialize a physical hard disk partition to a physical volume through a pvcreate instruction; if logical volume manager LVM is used to manage this partition, fdisk may be used to change its ID to a value that logical volume manager LVM (Logic Volume Manager) can recognize, i.e., 8e.
The syntax format of the pvcreate instruction is as follows:
pvcreate [options] [parameters]; the parameters of the pvcreate instruction are the device file names corresponding to the physical volumes to be created; the options of the pvcreate instruction are shown in table 2 below.
Table 2: options in the pvc eate instruction
Options Meaning of
-f Forced creation of physical volumes without user confirmation
-u UUID of designated device
-y All questions answer "yes"
-z Whether to utilize the first 4 sectors
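A minimal sketch of step S31, assuming the loop device created above:

    # Initialize the loop device as an LVM physical volume
    pvcreate /dev/loop8

    # Inspect the result
    pvs /dev/loop8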
Step S32: and obtaining the volume group corresponding to the physical volume based on the volume group creation instruction in the Linux system.
For example: the volume group corresponding to the physical volume can be created by using a vgcreate instruction in the Linux system. Note that, the volume group is simply called VG (Volume Group), which refers to that physical volumes are formed into volume groups; the logical volume manager LVM (Logic Volume Manager) may group physical volumes PV (Physical Volume) into volume groups VG (Volume Group) via a vgcreate instruction.
The syntax format of the vgcreate instruction is as follows:
vgcreate [options] [parameters]; the parameters of the vgcreate instruction comprise the volume group name and the list of physical volumes; the options of the vgcreate instruction are shown in table 3 below.
Table 3: options in the vgcreate instruction
Step S33: and obtaining the logical volume corresponding to the volume group based on a logical volume creation instruction in the Linux system.
For example: the lvcreate instruction in the linux system may be used to create a logical volume corresponding to the volume group. Note that, the logical volume is simply LV (Logic Volume), which is a logical disk drawn in the volume group VG; logical volume LV (Logic Volume) is created above volume group VG (Volume Group), and the device file corresponding to logical volume LV (Logic Volume) is stored under the volume group directory, for example, a logical volume "LVo10" is created on volume group "VG1000", and the device file corresponding to this logical volume is "/dev/VG1000/LVo10".
The syntax format of the lvcreate instruction is as follows:
lvcreate [parameters] [logical volume]; the parameters of the lvcreate instruction are shown in table 4 below.
Table 4: parameters in the lvcreate instruction
    Parameter    Meaning
    -L           specify the size of the logical volume in bytes, with units "kKmMgGtT"
    -l           specify the size of the logical volume as a number of logical extents (LEs)
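A minimal sketch of step S33, continuing the assumed names above (size and name are assumptions):

    # Carve a 5 GiB logical volume named lv_iscsi out of the volume group
    lvcreate -L 5G -n lv_iscsi vg_iscsi
    # The corresponding device file is /dev/vg_iscsi/lv_iscsi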
Step S34: and dynamically adjusting the space size of the logical volume under the mode of not interrupting the access of the application program to the logical volume based on a logical volume expansion instruction in the Linux system. For example: the lvextend instruction in the Linux system may be used to extend the space size of a logical volume without interrupting access to the logical volume by an application.
The syntax format of the lvextend instruction is as follows:
lvextend [parameters] [logical volume]; the parameters of the lvextend instruction are shown in table 5 below.
Table 5: parameters in the lvextend instruction
    Parameter    Meaning
    -L           specify the size of the logical volume in bytes, with units "kKmMgGtT"
    -l           specify the size of the logical volume as a number of logical extents (LEs)
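A minimal sketch of step S34; the sizes are illustrative assumptions:

    # Grow the logical volume by 2 GiB online, without interrupting application access
    lvextend -L +2G /dev/vg_iscsi/lv_iscsi

    # Or grow it to an absolute size
    lvextend -L 10G /dev/vg_iscsi/lv_iscsi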
Step S4: and constructing a mirror image on the Linux system, and deploying the container through the mirror image so as to generate ISCSI service of the disk device taking the logical volume of the Linux system as the container.
Specifically, the construction process of the ISCSI service includes: constructing a DOCKER image on the Linux server, starting the DOCKER container through the DOCKER image, and constructing, according to the DOCKER image, an ISCSI (Internet Small Computer System Interface) service that uses the logical volume of the Linux server as the disk device of the DOCKER container.
It should be noted that the DOCKER image constructed on the Linux server in the embodiment of the invention is an image built independently for the application scenario, and is different from the standard ISCSI DOCKER image and the standard centos image.
In addition, a container on the host Linux is a set of processes separated from the rest of the system; it runs from a separate image, which provides all the files required to support those processes. The image provided to the container contains all of the application's dependencies, and is therefore portable and consistent throughout the process from development to testing to production. The Linux server serves as the host of the DOCKER container; once the DOCKER engine is installed, containers can be managed and deployed: the application programs to be run and their dependencies are packaged into DOCKER images, and the containers are then run on the host, thereby realizing deployment and management of the applications.
For ease of understanding, the relationship between a DOCKER image and a DOCKER container is described in detail below in conjunction with FIG. 5:
A DOCKER container can be stopped, and a new DOCKER image can be created from it; in this context, a DOCKER image may be understood as a build-time construct, while a DOCKER container may be understood as a run-time construct. From this structure it can be seen that one or more DOCKER containers are typically started from a given DOCKER image using the docker container run and docker service create instructions. Once a DOCKER container has been started from a DOCKER image, the two become interdependent, and the DOCKER image cannot be deleted until all DOCKER containers started from it have been completely stopped. Attempting to delete a DOCKER image without stopping or destroying the DOCKER containers using it results in an error.
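The lifecycle described above can be sketched with standard DOCKER instructions (the image and container names are assumptions; this is not the invention's specific image):

    # Build a custom image from a Dockerfile in the current directory
    docker build -t iscsi-target:latest .

    # Start a container from the image (build-time vs. run-time construct)
    docker run -d --name tgt1 iscsi-target:latest

    # A stopped container can be committed to create a new image
    docker stop tgt1
    docker commit tgt1 iscsi-target:v2

    # Deleting an image still referenced by a container fails until the container is removed
    docker rm tgt1
    docker rmi iscsi-target:latest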
ISCSI (Internet Small Computer System Interface) combines the existing SCSI interface with Ethernet technology and connects the ISCSI server (Target) and the client (Initiator) based on the TCP/IP protocol, so that encapsulated SCSI packets can be transmitted over the Internet; after connection authentication, the Target finally maps a storage space (disk) and provides it to the client.
In some examples, in response to execution of the ISCSI service building program, the ISCSI service generation method based on a distributed file system built by a plurality of Linux servers further performs the steps as shown in fig. 6:
step S61: and acquiring a host IP address of a Linux server serving as a host, virtualizing the host IP address into a virtual IP address for master-slave switching, and mounting the virtual IP address to the running host through a KEEPALIVE service.
Specifically, the virtual IP address, also called VIP (Virtual IP Address), is mainly used for switching between different hosts, chiefly for the active-standby switching of servers. A virtual IP address is different from the real IP address of a proxy server: the proxy server assigns a range of virtual IP addresses according to the number of internal machines, and distributes one virtual IP address to each client according to certain rules, so that an indirect connection between the client and the Internet can be realized. It is noteworthy that virtual IP addresses are mainly used for network address translation, network fault tolerance and mobility; their use cases include high-availability (HA) applications, so that after the active host (Master) providing the service goes down, the service switches to the standby host (Slave) and continues to be provided externally.
The KEEPALIVE service is a lightweight high-availability solution under Linux. High availability (HA) refers to redundancy and takeover of hosts. The KEEPALIVE service mainly achieves high availability through route redundancy; its configuration is simple and requires only one configuration file. The KEEPALIVE service was designed for LVS (Linux Virtual Server) and is specifically used for monitoring the status of each service node in a cluster system. Following the layer-3, layer-4 and layer-5 switching mechanisms of the TCP/IP reference model, it detects the state of each service node; if a server node behaves abnormally or its service fails, KEEPALIVE detects the faulty node and removes it from the cluster system. All of this is completed automatically, without manual intervention; the only manual work required is to repair the faulty service node.
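As a hedged illustration, a minimal KEEPALIVE (keepalived) configuration that mounts a virtual IP address onto the active host might look as follows; the interface name, router ID, priorities and addresses are all assumptions:

    # Write a minimal keepalived configuration on the active host
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER               # use BACKUP on the standby server
        interface eth0             # network interface carrying the VIP
        virtual_router_id 51
        priority 100               # give the standby server a lower priority
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24       # virtual IP address exposed to ISCSI clients
        }
    }
    EOF
    systemctl restart keepalived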
Step S62: constructing a corresponding KEEPALIVE management script to perform high-availability master-slave configuration of block equipment creation; and constructing a corresponding KEEPALIVE management script to perform high-availability active-standby configuration of logical volume creation, and dynamically adjusting the space size of the logical volume by increasing or decreasing physical disks through storage nodes of the distributed file system.
In some examples, the KEEPALIVE service is responsible for detecting whether the logical volume of the logical volume manager on the server acting as the active server is operating properly, i.e., whether the block device creation state to be reached in steps S21-S22 above has been attained. The KEEPALIVE management scripts at least comprise a stop script for the standby machine, a state-checking script for the host, a configuration script, a start script and the like, and are responsible for switching between the active and standby servers.
Further, the KEEPALIVE service checks whether the logical volume (LVM LV) of the logical volume manager on the active server is operating normally. Combined with the distributed nature of the GPFS (General Parallel File System) distributed file system, which has multiple background storage nodes, physical disks can be added or removed at any storage node, i.e., the space can be dynamically shrunk or expanded, so that the logical volume creation state to be reached in steps S31-S34 above can be attained. Thus, by combining the GPFS distributed file system with the KEEPALIVE service, this property can be conducted into the DOCKER container, so that the ISCSI (Internet Small Computer System Interface) service provided by the DOCKER container also has the property of dynamic expansion or contraction.
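One possible expansion chain, sketched under the assumption that the block device is the loop device built in steps S21-S22 (the patent itself adds or removes physical disks at GPFS storage nodes; the commands below only illustrate how new capacity could be propagated up the LVM stack without interrupting access):

    # Grow the backing file stored on the GPFS file system
    truncate -s +10G /gpfs/fs1/iscsi_backing.raw

    # Tell the loop driver to re-read the capacity of the backing file
    losetup -c /dev/loop8

    # Propagate the new capacity up the LVM stack online
    pvresize /dev/loop8
    lvextend -l +100%FREE /dev/vg_iscsi/lv_iscsi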
In some examples, in response to execution of the ISCSI service building program, the ISCSI service generation method based on a distributed file system built by a plurality of Linux servers further performs the following: based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host, the ISCSI port of the DOCKER container is managed through a port management tool of the host.
Specifically, the ISCSI port of the DOCKER container may be managed using the ISCSI Target management tool of the host; a Server-Client model is formed by the receiving end (Target), which provides the storage resource, and the initiating end (Initiator), which uses the remote storage resource. Therefore, by using the host's ISCSI Target management tool in combination with the DOCKER container, SCSI service can be provided remotely, outside the Linux kernel.
The receiving end (Target) is a disk array or another host with disks; its disk space is mapped onto the network through an ISCSI Target tool, and the initiating end (Initiator) can then discover and use those disks. Illustratively, on a Linux system, the Initiator may use the iscsiadm tool, while the Target may use management tools such as targetcli, tgtadm and ietadm.
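An illustrative sketch of the Target/Initiator model with the tools named above (the IQNs and addresses are assumptions):

    # Target side: export the logical volume as an ISCSI LUN with targetcli
    targetcli /backstores/block create name=lv_iscsi dev=/dev/vg_iscsi/lv_iscsi
    targetcli /iscsi create iqn.2023-05.com.example:target1
    targetcli /iscsi/iqn.2023-05.com.example:target1/tpg1/luns create /backstores/block/lv_iscsi

    # Initiator side: discover the target through the virtual IP and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
    iscsiadm -m node -T iqn.2023-05.com.example:target1 -p 192.168.1.100:3260 --login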
The whole flow of the ISCSI service generation method based on the distributed file system built by multiple Linux servers according to the present invention is described below with reference to FIG. 7, where only POWER server 1 and POWER server 2 are taken as examples; it should be understood that the invention does not limit the number of POWER servers.
Step S701: servers based on POWER (Performance Optimization With Enhanced RISC) architecture, namely server 1 and server 2 are selected. And each server runs a Linux operating system, and the Linux operating system runs a container service.
Step S702: a GPFS (General Parallel File System) cluster is built based on server 1 and server 2.
Step S703: a distributed file system over GPFS (General Parallel File System) clusters is built.
Step S704: an image file (FILEIO file) or a disk partition file (block file) is built based on the distributed file system.
Step S705: the LOOP BLOCK device file (LOOP BLOCK FILES) is obtained by means of a BLOCK device file creation instruction of the Linux system.
Illustratively, a LOOP BLOCK device file (LOOP BLOCK FILES) based on a FILEIO file is constructed using the native tool mknod instruction of the Linux system.
Step S706: LOOP devices based on the LOOP block device files are built using the losetup instruction of the Linux system.
Step S707: the physical volume PV (Physical Volume) of the LOOP device is built by the logical volume manager LVM (Logic Volume Manager).
The pvcreate instruction in the Linux system may be used to create the physical volume PV (Physical Volume) corresponding to the LOOP block device.
Step S708: the volume group VG (Volume Group) of the physical volumes PV (Physical Volume) is constructed by the logical volume manager LVM (Logic Volume Manager).
For example: the volume group VG (Volume Group) corresponding to the physical volume may be created using the vgcreate instruction in the Linux system.
Step S709: logical volumes LV (Logic Volume) of volume group VG (Volume Group) are constructed by logical volume manager LVM (Logic Volume Manager).
For example: the lvcreate instruction in the linux system may be used to create a logical volume LV (Logic Volume) corresponding to the volume group.
Step S710: dynamically extensible file systems LVM LVs are created based on logical volumes LV (Logic Volume) (Logic Volume Manager Logic Volumes).
For example: the lvextend instruction in the Linux system may be used to extend the space size of a logical volume without interrupting access to the logical volume by an application.
Step S711: a ISCSI (Internet Small Computer System Interface) service-supporting DOCKER mirror based on the POWER (Performance Optimization With Enhanced RISC) platform was constructed.
Step S712: and starting the DOCKER container through the DOCKER mirror image, taking a Linux operating system as a host to obtain the container 1 and the container 2, and constructing ISCSI (Internet Small Computer System Interface) service of the disk device taking the logical volume LV (Logic Volume) of the host as the DOCKER container.
Step S713: virtual IP services are implemented on hosts using a lightweight high availability service (KEEPALIVE).
Step S714: and constructing a KEEPALIVE management script by using the lightweight high-availability service (KEEPALIVE), and realizing the high-availability active/standby configuration (active/standby high-availability service configuration) in the steps S705-S706.
Step S715: the storage nodes of the GPFS distributed file system can be used for increasing or reducing the characteristics of physical disks, and based on light-weight high-availability service (KEEPALIVE), KEEPALIVE management scripts are constructed, so that the management of the steps S707-S710 is realized, and high-availability HA and cross-node dynamic expansion are realized.
Cross-node dynamic expansion here means that the KEEPALIVE service detects whether the logical volume of the logical volume manager on the active server is operating normally; combined with the distributed nature of the GPFS (General Parallel File System) distributed file system, with its multiple background storage nodes, physical disks can be added or removed at any storage node, i.e., the space can be dynamically shrunk or expanded, so that the logical volume creation state to be reached in the foregoing steps S31-S34 can be attained. Thus, by combining the GPFS distributed file system with the KEEPALIVE service, this property can be conducted into the DOCKER container, so that the ISCSI (Internet Small Computer System Interface) service provided by the DOCKER container also has the property of dynamic expansion or contraction.
Step S716: the ISCSI port of the DOCKER container is managed by the TARGET management tool of the host based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host.
Step S717: and the destination of SCSI far away from the Linux kernel is realized by utilizing a TARGET management tool of the host and combining with a DOCKER container.
It should be understood that the above examples are provided for illustrative purposes and should not be construed as limiting. Also, the method may additionally or alternatively include other features or include fewer features without departing from the scope of the application.
Referring to FIG. 8, a schematic diagram of an ISCSI service generation system based on a distributed file system built by a plurality of Linux servers in an embodiment of the present invention is shown. The system 800 in the embodiment of the present invention includes: a file system construction module 801, a block device construction module 802, a logical volume construction module 803, and a container deployment module 804.
The file system construction module 801 is configured to construct a distributed file system based on a plurality of Linux servers.
The block device building module 802 is configured to build an image file or a disk partition file according to a storage object type based on the distributed file system, so as to obtain a corresponding block device according to a creation instruction of the Linux system.
The logical volume construction module 803 is configured to create a dynamically extensible logical volume corresponding to the block device based on a logical volume manager of the Linux system.
The container deployment module 804 is configured to build an image on the Linux system, so as to deploy a container through the image, thereby generating the ISCSI service with the logical volume of the Linux system serving as the disk device of the container.
In some examples, the system 800 further includes a high availability module 805. The high availability module 805 is configured to perform the following operations: acquiring the host IP address of the Linux server serving as the host, virtualizing the host IP address into a virtual IP address for active-standby switching, and mounting the virtual IP address to the running host through the KEEPALIVE service.
In some examples, the high availability module 805 is further configured to perform the following operations: constructing a corresponding KEEPALIVE management script to perform the high-availability active-standby configuration of block device creation; and constructing a corresponding KEEPALIVE management script to perform the high-availability active-standby configuration of logical volume creation, the space size of the logical volume being dynamically adjusted by adding or removing physical disks at the storage nodes of the distributed file system.
In some examples, the system 800 further includes a port management module 806 for managing ISCSI ports of the DOCKER container by a port management tool of the host based on a mapping relationship between ISCSI service ports of the DOCKER container and ISCSI service ports of the host.
It should be noted that: the ISCSI service generation device based on the distributed file system constructed by a plurality of Linux servers provided in the above embodiment is described, when generating the ISCSI service, only by way of the division into the above program modules; in practical applications, the processing may be allocated to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above. In addition, the ISCSI service generation device and the ISCSI service generation method based on the distributed file system constructed by a plurality of Linux servers provided in the above embodiments belong to the same concept; the detailed implementation process of the device is given in the method embodiments and is not repeated here.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by computer program related hardware. The aforementioned computer program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
In the embodiments provided herein, the computer-readable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, USB flash disk, removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the present application provides an ISCSI service generation method, system and medium based on a distributed file system constructed by a plurality of Linux servers. The invention runs and constructs a GPFS distributed file system on servers of the POWER architecture and implements the ISCSI service on it, so as to solve the problem of inconvenient use caused by the ISCSI storage network protocol being discarded after the GPFS distributed file system is upgraded; the invention combines the characteristics of the GPFS distributed file system and the KEEPALIVE service to realize high availability (HA) and cross-node dynamic expansion, so the invention can support dynamic expansion of the storage space; and the invention can support rapid deployment at the container level. Therefore, the method effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles of the present application and their effects, and are not intended to limit the application. Modifications and variations may be made to the above-described embodiments by those of ordinary skill in the art without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations accomplished by persons skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of this application.

Claims (11)

1. An ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers, comprising:
constructing a distributed file system based on a plurality of Linux servers;
constructing an image file or a disk partition file according to the storage object type based on the distributed file system, so as to obtain a corresponding block device according to a creation instruction of the Linux system;
creating a dynamically expandable logical volume corresponding to the block device based on a logical volume manager of the Linux system;
and constructing an image on the Linux system, and deploying a container through the image, so as to generate the ISCSI service with the logical volume of the Linux system serving as the disk device of the container.
2. The ISCSI service generation method based on the distributed file system constructed by a plurality of Linux servers according to claim 1, wherein the process of constructing the distributed file system based on the plurality of Linux servers comprises: selecting a plurality of servers based on the POWER architecture, running a Linux system on each of them to form a POWER-architecture Linux server cluster, and constructing a GPFS distributed file system based on the Linux server cluster.
3. The ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers according to claim 1 or 2, wherein the process of obtaining the block device corresponding to the image file or the disk partition file according to the creation instruction of the operating system includes:
obtaining a LOOP block device file based on a block device file creation instruction in the Linux system;
and obtaining LOOP block equipment corresponding to the LOOP block equipment file based on the block equipment creation instruction in the Linux system.
4. The ISCSI service generation method based on a distributed file system constructed by a plurality of Linux servers according to claim 3, wherein the creating the dynamically extensible logical volume corresponding to the block device by the logical volume manager based on the Linux system comprises:
obtaining a physical volume corresponding to the LOOP block device based on a physical volume creation instruction in a Linux system;
obtaining a volume group corresponding to the physical volume based on a volume group creation instruction in a Linux system;
obtaining a logical volume corresponding to the volume group based on a logical volume creation instruction in the Linux system;
and dynamically adjusting the space size of the logical volume under the mode of not interrupting the access of the application program to the logical volume based on a logical volume expansion instruction in the Linux system.
5. The ISCSI service generation method based on the distributed file system built by a plurality of Linux servers according to claim 1 or 2, wherein the ISCSI service building process comprises: constructing a DOCKER image on the Linux server, starting the DOCKER container through the DOCKER image, and constructing, according to the DOCKER image, the ISCSI service with the logical volume of the Linux server serving as the disk device of the DOCKER container.
6. The ISCSI service generation method based on a distributed file system built by a plurality of Linux servers according to claim 1, wherein in response to execution of the ISCSI service building program, the method further comprises:
acquiring a host IP address of a Linux server serving as a host, virtualizing the host IP address into a virtual IP address for active-standby switching, and mounting the virtual IP address to the running host through a KEEPALIVE service;
constructing a corresponding KEEPALIVE management script to perform high-availability active-standby configuration of block device creation; and constructing a corresponding KEEPALIVE management script to perform high-availability active-standby configuration of logical volume creation, and dynamically adjusting the space size of the logical volume by adding or removing physical disks through the storage nodes of the distributed file system.
7. The ISCSI service generation method based on a distributed file system built by a plurality of Linux servers according to claim 1, wherein in response to execution of the ISCSI service building program, the method further comprises: based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host, the ISCSI port of the DOCKER container is managed through a port management tool of the host.
8. An ISCSI service generation system based on a distributed file system constructed by a plurality of Linux servers, comprising:
the file system construction module is used for constructing a distributed file system based on a plurality of Linux servers;
the block device construction module is used for constructing an image file or a disk partition file according to the storage object type based on the distributed file system, so as to obtain corresponding block devices according to the creation instruction of the Linux system;
the logical volume construction module is used for creating a dynamically expandable logical volume corresponding to the block device based on a logical volume manager of the Linux system;
and the container deployment module is used for constructing an image on the Linux system, so as to deploy a container through the image and generate the ISCSI service with the logical volume of the Linux system serving as the disk device of the container.
9. The ISCSI service generation system based on a distributed file system built from a plurality of Linux servers of claim 8, further comprising a high availability module for performing the operations of:
acquiring a host IP address of a Linux server serving as a host, virtualizing the host IP address into a virtual IP address for active-standby switching, and mounting the virtual IP address to the running host through a KEEPALIVE service;
constructing a corresponding KEEPALIVE management script to perform high-availability active-standby configuration of block device creation; and constructing a corresponding KEEPALIVE management script to perform high-availability active-standby configuration of logical volume creation, and dynamically adjusting the space size of the logical volume by adding or removing physical disks through the storage nodes of the distributed file system.
10. The ISCSI service generation system based on a distributed file system built from a plurality of Linux servers of claim 8, wherein the system further comprises: and the port management module is used for managing the ISCSI port of the DOCKER container through a port management tool of the host based on the mapping relation between the ISCSI service port of the DOCKER container and the ISCSI service port of the host.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the ISCSI service generation method according to any one of claims 1 to 7 based on a distributed file system constructed by a plurality of Linux servers.
CN202310572709.8A 2023-05-19 2023-05-19 ISCSI service generation method, system and medium based on distributed file system Pending CN116540943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310572709.8A CN116540943A (en) 2023-05-19 2023-05-19 ISCSI service generation method, system and medium based on distributed file system

Publications (1)

Publication Number Publication Date
CN116540943A true CN116540943A (en) 2023-08-04

Family

ID=87455909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310572709.8A Pending CN116540943A (en) 2023-05-19 2023-05-19 ISCSI service generation method, system and medium based on distributed file system

Country Status (1)

Country Link
CN (1) CN116540943A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019890A (en) * 2012-12-24 2013-04-03 清华大学 Block-level disk data protection system and method thereof
CN104750433A (en) * 2015-03-26 2015-07-01 浪潮集团有限公司 Cache design method based on SCST
CN106775924A (en) * 2016-11-07 2017-05-31 北京百度网讯科技有限公司 Virtual machine starts method and apparatus
CN108255643A (en) * 2017-12-25 2018-07-06 南京壹进制信息技术股份有限公司 A kind of continuous data protection method of no agency
CN114840148A (en) * 2022-06-30 2022-08-02 江苏博云科技股份有限公司 Method for realizing disk acceleration based on linux kernel bcache technology in Kubernets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination