CN112000295B - Management method, system and device for back-end storage - Google Patents

Management method, system and device for back-end storage

Info

Publication number
CN112000295B
CN112000295B
Authority
CN
China
Prior art keywords
data
end storage
written
storage devices
logic unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010878653.5A
Other languages
Chinese (zh)
Other versions
CN112000295A (en)
Inventor
白战豪
胡永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010878653.5A
Publication of CN112000295A
Application granted
Publication of CN112000295B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device
    • G06F 3/0676 Magnetic disk device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a management method, system and device for back-end storage. A correspondence between back-end storage devices and logical units is constructed in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit. After a data write request for data to be written, issued by a client to a target logical unit, is received, the data to be written is distributed to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so that the data to be written is written asynchronously into the plurality of back-end storage devices. After the data to be written has been written asynchronously into the plurality of back-end storage devices, a request completion message is returned to the client. The application therefore adopts a "one-to-many" logical unit mapping to manage a plurality of back-end storage devices in a unified way and to process data write requests in a dispersed way; that is, the plurality of back-end storage devices share the data write pressure together, which improves storage performance and benefits services with high storage performance requirements, such as virtualization.

Description

Management method, system and device for back-end storage
Technical Field
The present invention relates to the field of storage, and in particular, to a method, a system, and an apparatus for managing back-end storage.
Background
Currently, in an iSCSI target architecture, one iSCSI target service (an application that provides back-end storage devices, such as storage logical volumes or disks, to clients) may create multiple targets (applications that map the back-end storage devices), and one target contains multiple LUNs (Logical Unit Numbers, i.e. logical units), each mapped one-to-one from a single back-end storage device. The client can scan the iSCSI target service, determine the targets it hosts and the LUNs each target contains, and select one LUN to which it issues a data write request; the iSCSI target service then writes the data into the back-end storage device based on the mapping relationship between the LUN and that device. However, for each data write request of the client, only one back-end storage device bears the data write pressure at a time, which results in poor storage performance and is not conducive to services with high storage performance requirements, such as virtualization.
Therefore, how to solve the above technical problem is an urgent concern for those skilled in the art.
Disclosure of Invention
The invention aims to provide a management method, system and device for back-end storage that adopt a "one-to-many" logical unit mapping to manage a plurality of back-end storage devices in a unified way and to process data write requests in a dispersed way; that is, the plurality of back-end storage devices share the data write pressure together, thereby improving storage performance and benefiting services with high storage performance requirements, such as virtualization.
In order to solve the above technical problem, the present invention provides a method for managing back-end storage, including:
constructing a correspondence between back-end storage devices and logical units in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit;
after receiving a data write request for data to be written, issued by a client to a target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so as to write the data to be written asynchronously into the plurality of back-end storage devices, wherein the target logical unit is any one of the logical units;
and after the data to be written has been written asynchronously into the plurality of back-end storage devices, returning a request completion message to the client.
Preferably, the process of, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy so as to write the data to be written asynchronously into the plurality of back-end storage devices, includes:
creating in advance, for each logical unit, a corresponding thread pool comprising a plurality of threads and a back-end storage queue;
after receiving the data write request for the data to be written issued by the client to the target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy, and obtaining in sequence a sub-data write request for each piece of allocation data;
writing the corresponding sub-data write requests into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data;
and dispatching the sub-data write requests in the back-end storage queue of the target thread pool in sequence to the plurality of threads of the target thread pool, so that the sub-data write requests are processed asynchronously by the plurality of threads and each piece of allocation data is written into the corresponding back-end storage device.
Preferably, the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy includes:
evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy.
Preferably, the capacities of the plurality of back-end storage devices corresponding to the target logical unit are the same, and the data write request comprises the data to be written, a request offset and the total length of the data to be written;
correspondingly, the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data includes:
numbering the plurality of back-end storage devices corresponding to the target logical unit from 0 in advance;
taking out data of length m from the data to be written, distributing the data of length m to the [(off/m) % n]-th back-end storage device, and obtaining the write position of the data of length m in the corresponding back-end storage device according to [off/(n × m)] × m + [off - (off/m) × m], wherein off is the request offset, n is the total number of back-end storage devices corresponding to the target logical unit, / denotes integer division and % denotes the modulo operation;
judging whether the length of the remaining data to be written is not less than m; if so, taking out the next data of length m from the remaining data to be written, distributing it to the [((off/m) % n + 1) % n]-th back-end storage device, obtaining its write position in the corresponding back-end storage device according to [(off + m)/(n × m)] × m + [(off + m) - ((off + m)/m) × m], and so on until the data to be written has been completely distributed;
obtaining the sub-data write request for each piece of allocation data according to the distribution result of the data to be written, wherein each sub-data write request comprises allocation data of length m and the position at which it is to be written in the correspondingly allocated back-end storage device.
Preferably, m = 1 MB.
Preferably, before the sub-data write request corresponding to each piece of allocation data is written into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data, the method further includes:
assembling an asynchronous callback pointer into the sub-data write request of each piece of allocation data, and writing the assembled sub-data write requests into the corresponding back-end storage queue;
correspondingly, the process of processing the sub-data write requests asynchronously with the plurality of threads so as to write each piece of allocation data into the corresponding back-end storage device includes:
processing the sub-data write requests asynchronously with the plurality of threads, and, after a target thread has written its allocation data into the back-end storage device, performing an asynchronous callback operation based on the asynchronous callback pointer so as to notify the system that the target thread has finished processing its sub-data write request, wherein the target thread is any one of the threads.
In order to solve the above technical problem, the present invention further provides a management system for back-end storage, including:
the relationship construction module is used for constructing a correspondence between back-end storage devices and logical units in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit;
the data distribution module is used for, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so as to write the data to be written asynchronously into the plurality of back-end storage devices, wherein the target logical unit is any one of the logical units;
and the message returning module is used for returning a request completion message to the client after the data to be written has been written asynchronously into the plurality of back-end storage devices.
Preferably, the data distribution module includes:
the thread pool creation submodule is used for creating in advance, for each logical unit, a corresponding thread pool comprising a plurality of threads and a back-end storage queue;
the request acquisition submodule is used for, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data;
the queue writing submodule is used for writing the sub-data write request corresponding to each piece of allocation data into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data;
and the request processing submodule is used for dispatching the sub-data write requests in the back-end storage queue of the target thread pool in sequence to the plurality of threads of the target thread pool, so that the sub-data write requests are processed asynchronously by the plurality of threads and each piece of allocation data is written into the corresponding back-end storage device.
Preferably, the request acquisition submodule is specifically used for, after receiving a data write request for data to be written issued by a client to a target logical unit, evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data.
In order to solve the above technical problem, the present invention further provides a management device for back-end storage, including:
a memory for storing a computer program;
a processor for implementing the steps of any of the above back-end storage management methods when executing the computer program.
The invention provides a method for managing back-end storage. A correspondence between back-end storage devices and logical units is constructed in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit. After a data write request for data to be written, issued by a client to a target logical unit, is received, the data to be written is distributed to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so that the data to be written is written asynchronously into the plurality of back-end storage devices. After the data to be written has been written asynchronously into the plurality of back-end storage devices, a request completion message is returned to the client. The application therefore adopts a "one-to-many" logical unit mapping to manage a plurality of back-end storage devices in a unified way and to process data write requests in a dispersed way; that is, the plurality of back-end storage devices share the data write pressure together, which improves storage performance and benefits services with high storage performance requirements, such as virtualization.
The invention further provides a management system and a management device for back-end storage, which have the same beneficial effects as the management method described above.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the prior art and the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for managing back-end storage according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a management system for back-end storage according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a management method, system and device for back-end storage that adopt a "one-to-many" logical unit mapping to manage a plurality of back-end storage devices in a unified way and to process data write requests in a dispersed way; that is, the plurality of back-end storage devices share the data write pressure together, thereby improving storage performance and benefiting services with high storage performance requirements, such as virtualization.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for managing back-end storage according to an embodiment of the present invention.
The management method of the back-end storage comprises the following steps:
step S1: the method comprises the steps of constructing a corresponding relation between a back-end storage device and a logic unit in advance; the plurality of back-end storage devices are mapped to obtain a logic unit.
Specifically, the correspondence between back-end storage devices and logical units is constructed in advance by setting one logical unit to correspond to a plurality of back-end storage devices; that is, a plurality of back-end storage devices are mapped to obtain one logical unit. Thus, one target contains a plurality of logical units, and each logical unit is mapped to a plurality of back-end storage devices in a one-to-many manner, as the sketch below illustrates.
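As a minimal C sketch of this one-to-many correspondence (all type and field names here are hypothetical, chosen for illustration rather than taken from the patent):

```c
#include <stddef.h>

#define MAX_BACKENDS 16

/* One back-end storage device (e.g. a storage logical volume or disk). */
struct backend_dev {
    int       fd;        /* open descriptor of the device               */
    long long capacity;  /* capacity in bytes; equal across one LUN     */
};

/* One logical unit (LUN) mapped "one-to-many" onto n back-end devices,
 * numbered 0..n-1; the LUN capacity is the sum of the device capacities. */
struct logical_unit {
    int                id;                      /* LUN number            */
    int                n_backends;              /* n devices in this LUN */
    struct backend_dev backends[MAX_BACKENDS];  /* numbered from 0       */
};
```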
Step S2: after a data writing request of data to be written sent to the target logic unit by the client is received, the data to be written is distributed to a plurality of back-end storage devices corresponding to the target logic unit according to a preset data distribution strategy, so that the data to be written is asynchronously written into the plurality of back-end storage devices.
It should be noted that the client can scan the iSCSI target service, determine the targets it hosts and the logical units each target contains, and select one logical unit to which it issues the data write request; the selected logical unit is referred to as the target logical unit. It should also be noted that "preset" in the present application means set in advance: the setting only needs to be made once and does not need to be reset unless modification is required by the actual situation.
Specifically, since one logical unit corresponds to a plurality of back-end storage devices, the data to be written can be distributed to the plurality of back-end storage devices for storage simultaneously. Based on this, a data distribution strategy that guides how the data to be written is distributed to the plurality of back-end storage devices is set in advance, so that, after a data write request for data to be written issued by the client to the target logical unit is received, the data to be written can be distributed to the plurality of back-end storage devices corresponding to the target logical unit according to the set data distribution strategy, and the data to be written can be written asynchronously into the plurality of back-end storage devices according to the distribution result.
Step S3: after the data to be written has been written asynchronously into the plurality of back-end storage devices, returning a request completion message to the client.
Specifically, after the data to be written has been written asynchronously into the plurality of back-end storage devices, a request completion message can be returned to the client to inform it that the data write is complete.
The invention provides a method for managing back-end storage. A correspondence between back-end storage devices and logical units is constructed in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit. After a data write request for data to be written, issued by a client to a target logical unit, is received, the data to be written is distributed to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so that the data to be written is written asynchronously into the plurality of back-end storage devices. After the data to be written has been written asynchronously into the plurality of back-end storage devices, a request completion message is returned to the client. The application therefore adopts a "one-to-many" logical unit mapping to manage a plurality of back-end storage devices in a unified way and to process data write requests in a dispersed way; that is, the plurality of back-end storage devices share the data write pressure together, which improves storage performance and benefits services with high storage performance requirements, such as virtualization.
On the basis of the above-described embodiment:
as an optional embodiment, after receiving a data write request of data to be written issued by a client to a target logic unit, allocating the data to be written to a plurality of backend storage devices corresponding to the target logic unit according to a preset data allocation policy, so as to asynchronously write the data to be written to the plurality of backend storage devices, the process includes:
correspondingly creating a thread pool comprising a plurality of threads and a rear-end storage queue for each logic unit in advance;
after receiving a data writing request of data to be written issued by a client to a target logic unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logic unit according to a preset data distribution strategy, and sequentially obtaining sub-data writing requests of all distributed data;
writing the corresponding subdata write request into a rear-end storage queue of a target thread pool corresponding to the target logic unit according to the distribution sequence of each distribution data;
and sequentially distributing the subdata write-in requests in the rear-end storage queue of the target thread pool to a plurality of threads of the target thread pool, and asynchronously processing each subdata write-in request by using the plurality of threads to correspondingly write each distribution data into a plurality of rear-end storage devices.
Specifically, a thread pool including a plurality of threads and a back-end storage queue is correspondingly created for each logic unit in advance, and the thread pool is used for being responsible for dispatching the back-end storage task corresponding to the logic unit. Based on this, after receiving a data write request of data to be written issued by a client to a target logic unit, the application allocates the data to be written to a plurality of back-end storage devices corresponding to the target logic unit according to a preset data allocation policy, and sequentially obtains sub-data write requests of each allocation data (each allocation data is recombined to be written, and the sub-data write requests of the allocation data indicate the back-end storage devices allocated corresponding to the allocation data and the positions to be written into the back-end storage devices allocated corresponding to the allocation data), that is, the data write requests are subjected to decentralized processing. When the data to be written is distributed into the distribution data, the corresponding subdata write requests are written into the rear-end storage queue of the target thread pool corresponding to the target logic unit according to the distribution sequence of the distribution data, and the subdata write requests in the rear-end storage queue of the target thread pool are sequentially distributed to the multiple threads of the target thread pool according to the first-in first-out principle of the rear-end storage queue. After each thread receives a subdata write-in request of the distribution data, the received subdata write-in request is processed immediately, namely the distribution data is written into the corresponding distributed rear-end storage equipment, so that the subdata write-in requests are processed asynchronously by utilizing a plurality of threads, and the distribution data are correspondingly written into the plurality of rear-end storage equipment.
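A hedged C sketch of this per-LUN thread pool and FIFO back-end storage queue, reusing the logical_unit structure from the earlier sketch; the queue layout, locking discipline and all names are illustrative assumptions, not the patent's implementation:

```c
#include <pthread.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* One sub-data write request: a piece of allocation data plus the
 * back-end device it was allocated to and the position to write at. */
struct sub_request {
    int                 dev_index;   /* allocated back-end device       */
    off_t               dev_offset;  /* write position in that device   */
    size_t              len;         /* length of the allocation data   */
    char               *buf;         /* the allocation data itself      */
    struct sub_request *next;
};

/* Per-LUN thread pool with a FIFO back-end storage queue. */
struct lun_pool {
    pthread_mutex_t      lock;
    pthread_cond_t       nonempty;
    struct sub_request  *head, *tail;   /* back-end storage queue */
    struct logical_unit *lu;
};

/* Enqueue sub-requests in allocation order (first in, first out). */
static void enqueue(struct lun_pool *p, struct sub_request *r)
{
    pthread_mutex_lock(&p->lock);
    r->next = NULL;
    if (p->tail) p->tail->next = r; else p->head = r;
    p->tail = r;
    pthread_cond_signal(&p->nonempty);
    pthread_mutex_unlock(&p->lock);
}

/* Worker thread: pop a sub-request and write its allocation data into
 * the allocated back-end device. Several workers run concurrently, so
 * the sub-data write requests are processed asynchronously. */
static void *worker(void *arg)
{
    struct lun_pool *p = arg;
    for (;;) {
        pthread_mutex_lock(&p->lock);
        while (p->head == NULL)
            pthread_cond_wait(&p->nonempty, &p->lock);
        struct sub_request *r = p->head;
        p->head = r->next;
        if (p->head == NULL) p->tail = NULL;
        pthread_mutex_unlock(&p->lock);

        pwrite(p->lu->backends[r->dev_index].fd, r->buf, r->len,
               r->dev_offset);  /* error handling omitted in this sketch */
        free(r);                /* the async callback would fire here    */
    }
    return NULL;
}
```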
As an optional embodiment, the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy includes:
evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy.
Specifically, when the data to be written is distributed, it may be distributed evenly across the plurality of back-end storage devices corresponding to the target logical unit.
As an optional embodiment, the capacities of the plurality of back-end storage devices corresponding to the target logical unit are the same, and the data write request comprises the data to be written, a request offset and the total length of the data to be written;
correspondingly, the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data includes:
numbering the plurality of back-end storage devices corresponding to the target logical unit from 0 in advance;
taking out data of length m from the data to be written, distributing the data of length m to the [(off/m) % n]-th back-end storage device, and obtaining the write position of the data of length m in the corresponding back-end storage device according to [off/(n × m)] × m + [off - (off/m) × m], wherein off is the request offset and n is the total number of back-end storage devices corresponding to the target logical unit;
judging whether the length of the remaining data to be written is not less than m; if so, taking out the next data of length m from the remaining data to be written, distributing it to the [((off/m) % n + 1) % n]-th back-end storage device, obtaining its write position in the corresponding back-end storage device according to [(off + m)/(n × m)] × m + [(off + m) - ((off + m)/m) × m], and so on until the data to be written has been completely distributed;
obtaining the sub-data write request for each piece of allocation data according to the distribution result of the data to be written, wherein each sub-data write request comprises allocation data of length m and the position at which it is to be written in the correspondingly allocated back-end storage device.
Specifically, the capacities of the plurality of back-end storage devices corresponding to the target logical unit are the same, and the capacity of the target logical unit equals the sum of the capacities of those devices. The data write request comprises the data to be written, a request offset off and a total length len of the data to be written. On this basis, the data distribution strategy is as follows. The plurality of back-end storage devices corresponding to the target logical unit are numbered from 0 in advance. Data of length m is taken from the head of the data to be written and distributed to the [(off/m) % n]-th back-end storage device (where / denotes integer division and % denotes the modulo operation, i.e. taking the remainder), and the write position of this data in the corresponding back-end storage device (i.e. where in the device the write starts) is obtained according to [off/(n × m)] × m + [off - (off/m) × m]. If the length of the remaining data to be written is not less than m, the next data of length m is taken from the head of the remaining data, distributed to the [((off/m) % n + 1) % n]-th back-end storage device, and its write position is obtained according to [(off + m)/(n × m)] × m + [(off + m) - ((off + m)/m) × m]; the block after that goes to the [((off/m) % n + 2) % n]-th device with write position [(off + 2m)/(n × m)] × m + [(off + 2m) - ((off + 2m)/m) × m], and so on until the data to be written has been completely distributed. Once the data to be written has been split into pieces of allocation data, the sub-data write request of each piece is obtained; the sub-data write request indicates the back-end storage device to which the piece is allocated and the position at which it is to be written in that device. The sketch below works through this arithmetic.
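As a hedged sketch of the allocation arithmetic just described (the function and symbol names are illustrative; / is integer division and % is the modulo operation, as above):

```c
#include <stdio.h>

#define M (1024L * 1024L)  /* block size m = 1 MB, as in the preferred embodiment */

/* Split a write of `len` bytes at request offset `off` across n equally
 * sized back-end devices numbered 0..n-1, printing each sub-request. */
static void split_write(long off, long len, int n)
{
    long o = off;              /* current logical offset */
    long remaining = len;
    while (remaining > 0) {
        long in_blk = o % M;                    /* off - (off/m)*m        */
        long chunk  = M - in_blk;               /* stay inside one block  */
        if (chunk > remaining)
            chunk = remaining;
        int  dev = (int)((o / M) % n);          /* [(off/m) % n]          */
        long pos = (o / (n * M)) * M + in_blk;  /* [off/(n*m)]*m + in_blk */
        printf("write %ld bytes to device %d at offset %ld\n",
               chunk, dev, pos);
        o += chunk;
        remaining -= chunk;
    }
}

int main(void)
{
    /* Example: with n = 4 devices, a 3 MB write at offset 5 MB lands on
     * devices 1, 2 and 3, each at in-device offset 1 MB. */
    split_write(5 * M, 3 * M, 4);
    return 0;
}
```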
As an optional embodiment, m = 1 MB.
Specifically, the application computes and distributes the data to be written in units of a 1 MB block size.
As an optional embodiment, before the sub-data write request corresponding to each piece of allocation data is written into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data, the management method further includes:
assembling an asynchronous callback pointer into the sub-data write request of each piece of allocation data, and writing the assembled sub-data write requests into the corresponding back-end storage queue;
correspondingly, the process of processing the sub-data write requests asynchronously with the plurality of threads so as to write each piece of allocation data into the corresponding back-end storage device includes:
processing the sub-data write requests asynchronously with the plurality of threads, and, after a target thread has written its allocation data into the back-end storage device, performing an asynchronous callback operation based on the asynchronous callback pointer so as to notify the system that the target thread has finished processing its sub-data write request, wherein the target thread is any one of the threads.
Furthermore, before the sub-data write requests of the pieces of allocation data are written into the corresponding back-end storage queue, an asynchronous callback pointer is assembled into each sub-data write request. The purpose is that, after a thread has written its allocation data into the back-end storage device, it performs an asynchronous callback operation based on that pointer so as to notify the system that the thread has completed the sub-data write request of that piece of allocation data, as in the sketch below.
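One way such a callback assembly might look in C, building on the sub_request sketch above; the countdown-latch design and all names are assumptions for illustration, not the patent's:

```c
#include <stdatomic.h>

typedef void (*async_cb_t)(void *ctx, int status);

/* A sub-data write request with the asynchronous callback pointer
 * assembled into it before it enters the back-end storage queue. */
struct cb_sub_request {
    struct sub_request req;  /* allocation data + device index + offset */
    async_cb_t         cb;   /* asynchronous callback pointer           */
    void              *ctx;  /* shared context of the parent request    */
};

/* Shared context: counts sub-requests still in flight; when the last
 * one completes, the request completion message is returned (step S3). */
struct write_ctx {
    atomic_int remaining;
    void (*reply_to_client)(void);
};

/* A worker thread invokes this via req->cb after its write returns. */
static void on_sub_write_done(void *ctx, int status)
{
    struct write_ctx *w = ctx;
    (void)status;  /* error handling omitted in this sketch */
    if (atomic_fetch_sub(&w->remaining, 1) == 1)
        w->reply_to_client();  /* last sub-request has finished */
}
```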
Referring to fig. 2, fig. 2 is a schematic structural diagram of a management system for back-end storage according to an embodiment of the present invention.
The management system of the back-end storage comprises:
the relationship construction module 1 is used for constructing a correspondence between back-end storage devices and logical units in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit;
the data distribution module 2 is used for, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so as to write the data to be written asynchronously into the plurality of back-end storage devices, wherein the target logical unit is any one of the logical units;
and the message returning module 3 is used for returning a request completion message to the client after the data to be written has been written asynchronously into the plurality of back-end storage devices.
As an alternative embodiment, the data distribution module 2 includes:
the thread pool creation submodule is used for creating in advance, for each logical unit, a corresponding thread pool comprising a plurality of threads and a back-end storage queue;
the request acquisition submodule is used for, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data;
the queue writing submodule is used for writing the sub-data write request corresponding to each piece of allocation data into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data;
and the request processing submodule is used for dispatching the sub-data write requests in the back-end storage queue of the target thread pool in sequence to the plurality of threads of the target thread pool, so that the sub-data write requests are processed asynchronously by the plurality of threads and each piece of allocation data is written into the corresponding back-end storage device.
As an optional embodiment, the request acquisition submodule is specifically used for, after receiving a data write request for data to be written issued by a client to a target logical unit, evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data.
For an introduction to the management system provided in the present application, please refer to the above embodiments of the management method; it is not repeated here.
The present application further provides a management device for backend storage, including:
a memory for storing a computer program;
a processor for implementing the steps of any of the above back-end storage management methods when executing the computer program.
For an introduction to the management apparatus provided in the present application, please refer to the above embodiments of the management method; it is not repeated here.
It is further noted that, in the present specification, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method for managing back-end storage, comprising:
constructing a correspondence between back-end storage devices and logical units in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit;
after receiving a data write request for data to be written, issued by a client to a target logical unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so as to write the data to be written asynchronously into the plurality of back-end storage devices, wherein the target logical unit is any one of the logical units;
and after the data to be written has been written asynchronously into the plurality of back-end storage devices, returning a request completion message to the client;
wherein the process of, after receiving the data write request for the data to be written issued by the client to the target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy so as to write the data to be written asynchronously into the plurality of back-end storage devices, includes:
creating in advance, for each logical unit, a corresponding thread pool comprising a plurality of threads and a back-end storage queue;
after receiving the data write request for the data to be written issued by the client to the target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy, and obtaining in sequence a sub-data write request for each piece of allocation data;
writing the corresponding sub-data write requests into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data;
and dispatching the sub-data write requests in the back-end storage queue of the target thread pool in sequence to the plurality of threads of the target thread pool, so that the sub-data write requests are processed asynchronously by the plurality of threads and each piece of allocation data is written into the corresponding back-end storage device;
and wherein the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy includes:
evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy.
2. The method for managing back-end storage according to claim 1, wherein the capacities of the plurality of back-end storage devices corresponding to the target logical unit are the same, and the data write request comprises the data to be written, a request offset and the total length of the data to be written;
correspondingly, the process of distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data includes:
numbering the plurality of back-end storage devices corresponding to the target logical unit from 0 in advance;
taking out data of length m from the data to be written, distributing the data of length m to the [(off/m) % n]-th back-end storage device, and obtaining the write position of the data of length m in the corresponding back-end storage device according to [off/(n × m)] × m + [off - (off/m) × m], wherein off is the request offset and n is the total number of back-end storage devices corresponding to the target logical unit;
judging whether the length of the remaining data to be written is not less than m; if so, taking out the next data of length m from the remaining data to be written, distributing it to the [((off/m) % n + 1) % n]-th back-end storage device, obtaining its write position in the corresponding back-end storage device according to [(off + m)/(n × m)] × m + [(off + m) - ((off + m)/m) × m], and so on until the data to be written has been completely distributed;
and obtaining the sub-data write request for each piece of allocation data according to the distribution result of the data to be written, wherein each sub-data write request comprises allocation data of length m and the position at which it is to be written in the correspondingly allocated back-end storage device;
wherein / denotes integer division and % denotes the modulo operation.
3. The method for managing back-end storage according to claim 2, wherein m = 1 MB.
4. The method for managing back-end storage according to claim 1, wherein, before the sub-data write request corresponding to each piece of allocation data is written into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data, the method further comprises:
assembling an asynchronous callback pointer into the sub-data write request of each piece of allocation data, so that the assembled sub-data write requests are written into the corresponding back-end storage queue;
correspondingly, the process of processing the sub-data write requests asynchronously with the plurality of threads so as to write each piece of allocation data into the corresponding back-end storage device includes:
processing the sub-data write requests asynchronously with the plurality of threads, and, after a target thread has written its allocation data into the back-end storage device, performing an asynchronous callback operation based on the asynchronous callback pointer so as to notify the system that the target thread has finished processing its sub-data write request, wherein the target thread is any one of the threads.
5. A system for managing back-end storage, comprising:
the relationship construction module is used for constructing a correspondence between back-end storage devices and logical units in advance, wherein a plurality of back-end storage devices are mapped to obtain one logical unit;
the data distribution module is used for, after receiving a data write request for data to be written issued by a client to a target logical unit, distributing the data to be written to a plurality of back-end storage devices corresponding to the target logical unit according to a preset data distribution strategy, so as to write the data to be written asynchronously into the plurality of back-end storage devices, wherein the target logical unit is any one of the logical units;
and the message returning module is used for returning a request completion message to the client after the data to be written has been written asynchronously into the plurality of back-end storage devices;
wherein the data distribution module comprises:
the thread pool creation submodule, used for creating in advance, for each logical unit, a corresponding thread pool comprising a plurality of threads and a back-end storage queue;
the request acquisition submodule, used for, after receiving the data write request for the data to be written issued by the client to the target logical unit, distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy and obtaining in sequence a sub-data write request for each piece of allocation data;
the queue writing submodule, used for writing the sub-data write request corresponding to each piece of allocation data into the back-end storage queue of the target thread pool corresponding to the target logical unit in the allocation order of the pieces of allocation data;
and the request processing submodule, used for dispatching the sub-data write requests in the back-end storage queue of the target thread pool in sequence to the plurality of threads of the target thread pool, so that the sub-data write requests are processed asynchronously by the plurality of threads and each piece of allocation data is written into the corresponding back-end storage device;
wherein the request acquisition submodule is specifically used for, after receiving the data write request for the data to be written issued by the client to the target logical unit, evenly distributing the data to be written to the plurality of back-end storage devices corresponding to the target logical unit according to the preset data distribution strategy and obtaining in sequence the sub-data write request for each piece of allocation data.
6. A management apparatus for back-end storage, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method of managing back-end storage according to any of claims 1 to 4 when executing said computer program.
CN202010878653.5A 2020-08-27 2020-08-27 Management method, system and device for back-end storage Active CN112000295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010878653.5A CN112000295B (en) 2020-08-27 2020-08-27 Management method, system and device for back-end storage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010878653.5A CN112000295B (en) 2020-08-27 2020-08-27 Management method, system and device for back-end storage

Publications (2)

Publication Number Publication Date
CN112000295A CN112000295A (en) 2020-11-27
CN112000295B true CN112000295B (en) 2023-01-10

Family

ID=73471601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010878653.5A Active CN112000295B (en) 2020-08-27 2020-08-27 Management method, system and device for back-end storage

Country Status (1)

Country Link
CN (1) CN112000295B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6718447B2 (en) * 2001-06-28 2004-04-06 Hewlett-Packard Development Company, L.P. Method and system for providing logically consistent logical unit backup snapshots within one or more data storage devices

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168656A (en) * 2017-06-09 2017-09-15 郑州云海信息技术有限公司 A kind of volume duplicate collecting system and its implementation method based on multipath disk drive

Also Published As

Publication number Publication date
CN112000295A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN104636080B (en) Storage system and the method for it
KR970000910B1 (en) Multi-media signal processor computer system
US5463776A (en) Storage management system for concurrent generation and fair allocation of disk space among competing requests
US8880837B2 (en) Preemptively allocating extents to a data set
CN103838859B (en) A method of data copy between multi-process under reduction linux
CN106503020B (en) Log data processing method and device
CN102255962A (en) Distributive storage method, device and system
CN110209493B (en) Memory management method, device, electronic equipment and storage medium
CN104317742A (en) Thin provisioning method for optimizing space management
US9223373B2 (en) Power arbitration for storage devices
US9135262B2 (en) Systems and methods for parallel batch processing of write transactions
CN109976907B (en) Task allocation method and system, electronic device and computer readable medium
US6219772B1 (en) Method for efficient memory allocation of small data blocks
CN111930713B (en) Distribution method, device, server and storage medium of CEPH placement group
US7330956B1 (en) Bucket based memory allocation
US20140325177A1 (en) Heap management using dynamic memory allocation
CN111464331B (en) Control method and system for thread creation and terminal equipment
CN115525417A (en) Data communication method, communication system, and computer-readable storage medium
CN111190537B (en) Method and system for managing sequential storage disk in additional writing scene
US11385814B2 (en) Method and device for allocating resource of hard disk in distributed storage system
CN112000295B (en) Management method, system and device for back-end storage
CN107391253B (en) Method for reducing system memory allocation release conflict
CN116955219B (en) Data mirroring method, device, host and storage medium
CN117632516A (en) Resource allocation method and device and computer equipment
CN105183375B (en) A kind of control method and device of the service quality of hot spot data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant