CN106302607B - Block storage system and method applied to cloud computing - Google Patents


Info

Publication number: CN106302607B
Application number: CN201510308773.0A
Authority: CN (China)
Prior art keywords: distributed storage, storage node, management server, cluster management, storage unit
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN106302607A (Chinese)
Inventor: 王银虎
Current assignees: Tencent Technology Shenzhen Co Ltd; Tencent Cloud Computing Beijing Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Tencent Technology Shenzhen Co Ltd
Events: application CN201510308773.0A filed by Tencent Technology Shenzhen Co Ltd; publication of CN106302607A; application granted; publication of CN106302607B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]


Abstract

The present invention discloses a data processing method applied to cloud computing block storage, comprising: a client sends a request instruction for obtaining a distributed storage node group list to a cluster management server; the cluster management server returns the distributed storage node group list to the client according to the request instruction; and the client directly performs data interaction with the distributed storage nodes according to the distributed storage node group list. The invention also discloses a block storage system applied to cloud computing. The present invention improves the efficiency of data access in cloud computing block storage.

Description

Block storage system and method applied to cloud computing
Technical field
The present invention relates to the technical field of network storage, and more particularly to a block storage system and method applied to cloud computing.
Background technique
With the rapid development of network technology and the popularization of cloud computing, cloud-based data storage provides a good application platform for people's demands for mass data storage and for safe, convenient data access. Existing cloud computing block storage technology mainly adopts the iSCSI (Internet Small Computer System Interface) access mode: on a client machine, the user accesses the block devices registered by an iSCSI initiator, which connect to the storage-machine front-end proxy servers (iSCSI Targets) that map storage space to the different block devices; the request then passes through a storage-cluster access machine before it reaches a particular storage node server.
In the above access mode, the client machine must pass through three hops (the storage-machine front-end proxy server, the storage-cluster access machine, and the storage node server) before it is served by the storage node server providing the actual storage service. The access path is long, and every hop brings considerable latency; the storage-machine front-end proxy server and the storage-cluster access machine are both IO-access bottlenecks; and because many servers are involved between the client machine issuing a request and a storage node server serving it, the resource cost is high.
Summary of the invention
The main purpose of the embodiments of the present invention is to provide a block storage system and method applied to cloud computing, intended to improve the efficiency of data access in cloud computing block storage.
To achieve the above object, an embodiment of the present invention provides a block storage system applied to cloud computing, comprising a client, a cluster management server, and a distributed storage node cluster, wherein:
the cluster management server stores a distributed storage node group list;
the distributed storage node cluster comprises multiple distributed storage nodes and provides mass storage space;
the client obtains the stored distributed storage node group list from the cluster management server and directly accesses any distributed storage node according to the group list information.
In addition, to achieve the above object, an embodiment of the present invention also provides a data processing method applied to cloud computing block storage, comprising the following steps:
the client sends a request instruction for obtaining the distributed storage node group list to the cluster management server;
the cluster management server returns the distributed storage node group list to the client according to the request instruction;
the client directly performs data interaction with the distributed storage nodes according to the distributed storage node group list.
In the embodiments of the present invention, when the client accesses stored data, it can directly access the distributed storage node storing the data using only the distributed storage node group list obtained from the cluster management server. Since the client does not need to hop through the cluster management server to reach a distributed storage node on every access of stored data, the latency caused by the hops is reduced and IO-access anomalies occur less often, thereby improving the efficiency of data access.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the block storage system applied to cloud computing of the present invention;
Fig. 2 is a flow diagram of an embodiment of the block-storage data processing method applied to cloud computing of the present invention;
Fig. 3 is a detailed flow diagram of BSN cluster initialization and startup in the block-storage data processing method applied to cloud computing of the present invention;
Fig. 4 is a detailed flow diagram of creating an SBS disk in the method;
Fig. 5 is a detailed flow diagram of mounting an SBS disk in the method;
Fig. 6 is a detailed flow diagram of reclaiming an SBS disk in the method;
Fig. 7 is a detailed flow diagram of disk data migration in the method;
Fig. 8 is a detailed flow diagram of Box splitting in the method;
Fig. 9 is a detailed flow diagram of Box merging in the method;
Fig. 10 is a detailed flow diagram of cluster expansion in the method;
Fig. 11 is a detailed flow diagram of cluster shrinking in the method;
Fig. 12 is a detailed flow diagram of the post-processing after a BSN discovers a faulty disk in the method;
Fig. 13 is a detailed flow diagram of the post-processing after a BSN discovers a lost disk in the method;
Fig. 14 is a detailed flow diagram of the post-processing after replacing a faulty or lost disk in the method;
Fig. 15 is a detailed flow diagram of the handling of a downtime situation in the method;
Fig. 16 is a detailed flow diagram of the post-processing after downtime recovery in the method.
The realization of the objects, the functions, and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
The technical solution of the present invention is further illustrated below with reference to the accompanying drawings and the specific embodiments of the specification. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
The present invention proposes a cloud computing block storage scheme. In a cloud computing block storage application, the client obtains the group list information of all distributed storage nodes (BSN, Block Storage Node) in a cluster from a cluster management server (MGR, Group Manager) that stores the BSN list (Block Storage Node list, i.e. the distributed storage node group list), and directly performs data interaction with the distributed storage nodes according to the obtained group list information. Because the data interaction does not need to hop through the cluster management server, the efficiency of data access in cloud computing block storage is improved.
As shown in Fig. 1, which shows a system structure in which the block storage system applied to cloud computing of the present invention is implemented, the cloud computing block storage system may comprise a client 100, a cluster management server 200, and distributed storage nodes 300 existing in the form of a cluster. Wherein,
the client 100 is software running on a user terminal; it may be a kernel driver or application software. A user may log in to the client 100 with an account and password and then communicate with the cluster management server 200 or with each distributed storage node 300 over the access network. The user terminal may be a virtual server or a physical server. The client 100 is configured with the access IP of the cluster management server 200 and, when needed, communicates with the cluster management server 200 according to the access IP. When a user logs in to the client 100 through a terminal device, the client 100 communicates with the cluster management server 200 over the network, sends a login authentication request carrying the account and password to the cluster management server 200 for authentication, and, after the authentication succeeds, sends a request instruction for obtaining the BSN list to the cluster management server 200.
The cluster management server 200 may be a server corresponding to this application, an independent server, or a server cluster composed of multiple servers. The cluster management server 200 stores the information of all distributed storage nodes 300 and accesses and manages all distributed storage nodes 300 over the network. The cluster management server 200 receives the login authentication request sent by the client 100 and confirms, according to the account and password in the request, whether the user is a legitimate user. When the user is confirmed to be legitimate, the cluster management server 200 returns a login-success message to the client 100. After receiving the login-success message, the client 100 sends the request instruction for obtaining the BSN list to the cluster management server 200. After receiving the request instruction, the cluster management server 200 returns the BSN list stored on it to the client 100.
A distributed storage node 300 may be a server corresponding to this application or an independent server. Multiple distributed storage nodes 300 form a distributed storage node cluster. The distributed storage nodes 300 in the cluster may run in single-machine mode or in active/standby dual-machine mode. The embodiment of the present invention uses the active/standby dual-machine mode, in which the two distributed storage nodes 300 store identical data and keep the data synchronized, ensuring data safety. The client 100 directly accesses the distributed storage nodes 300 over the network according to the BSN list returned by the cluster management server 200 in order to perform data interaction.
With such a system structure, data interaction between the client 100 and a distributed storage node 300 requires only the single hop to the distributed storage node 300. Compared with the iSCSI access mode adopted in the prior art, this saves server equipment resources, reduces the number of IO accesses in one data interaction, and also reduces the latency caused by multiple hops.
The above system may implement the following functions:
(1) Cluster initialization and startup
The cluster management server 200 is also used to:
pre-configure and store the BSN list information, the BoxDiskMap information (the mapping relations between logic storage units and distributed storage node groups and disks), the BoxRange information (the hash range of each Box), and the created TargetMap information (the mapping relations between iSCSI Targets and block storage units), and push all of the configured information to all distributed storage nodes 300.
The distributed storage node 300 is also used to:
mount the disks on all other distributed storage nodes 300 locally, and, if the TargetMap information is non-empty, create and start the iSCSI Targets.
The BSN list, BoxDiskMap, and BoxRange stored on the cluster management server 200 are pre-configured by the administrator of the cloud computing block storage system according to the actual situation. The TargetMap stored on the cluster management server 200 is allocated by the cluster management server 200 after it receives a create-SBS-disk (SBS: Simple Block Storage) request instruction sent by the client 100. If a distributed storage node 300 has restarted, the TargetMap needs to be re-issued to all distributed storage nodes 300 after the restart in order to guarantee data consistency.
The iSCSI Target and iSCSI initiator suites are installed on each distributed storage node 300, and the iSCSI Target in the suite manages the local disks. boxd (the kernel software of the distributed storage node 300, which provides the iSCSI Target storage space) divides each local disk space into Boxes (logic storage units) of a preset size and numbers each Box. According to the TargetMap, BoxDiskMap, and BoxRange information obtained from the cluster management server 200, the lba (logical block address) of a received iSCSI Target read-write/control request is converted into a blk (block storage unit); the SBS-hash algorithm proposed by the present invention (BoxId = SBS-hash(blkId) = BoxRange[Md5(blkId) % total number of Boxes]) maps the blkId (block storage unit number) to a BoxId (Box number); the corresponding GroupId (group number of distributed storage nodes 300) is found from the BoxId through the BoxDiskMap, and the disk on the corresponding distributed storage node 300 is then found through the BSN list information.
Each distributed storage node 300 in the cluster mounts the disks on all other distributed storage nodes 300, so that when the data the client 100 wants to access is not on the local disks of the distributed storage node 300 it is accessing, the data on the required disk can still be reached through that distributed storage node 300. Any distributed storage node 300 in the cluster is therefore able to provide the data access the client 100 needs.
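The addressing chain above (blkId to BoxId via SBS-hash, then BoxId to group and disk via the BoxDiskMap and the BSN list) can be sketched as follows. This is a minimal illustration under assumed data shapes; the names `box_range`, `box_disk_map`, and `bsn_list` and their layouts are assumptions, not the patent's actual structures.

```python
import hashlib

def sbs_hash(blk_id: int, box_range: list) -> int:
    """BoxId = BoxRange[Md5(blkId) % (total number of Boxes)]."""
    digest = hashlib.md5(str(blk_id).encode()).hexdigest()
    bucket = int(digest, 16) % len(box_range)
    return box_range[bucket]

def locate_blk(blk_id, box_range, box_disk_map, bsn_list):
    """Resolve blkId -> BoxId -> (GroupId, disk) -> storage nodes."""
    box_id = sbs_hash(blk_id, box_range)
    group_id, disk = box_disk_map[box_id]   # physical placement of the Box
    nodes = bsn_list[group_id]              # active/standby pair in the group
    return box_id, disk, nodes

# Example with 4 Boxes spread over 2 node groups:
box_range = [0, 1, 2, 3]
box_disk_map = {0: (0, "disk-a"), 1: (0, "disk-b"),
                2: (1, "disk-a"), 3: (1, "disk-b")}
bsn_list = {0: ["bsn-0", "bsn-1"], 1: ["bsn-2", "bsn-3"]}
box_id, disk, nodes = locate_blk(42, box_range, box_disk_map, bsn_list)
```

The point of the indirection is that only `box_disk_map` has to change when data moves between disks; the hash from blkId to BoxId stays fixed.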
(2) Creating an SBS disk
The cluster management server 200 is also used to:
allocate a new TargetMap according to the create-SBS-disk request instruction and push all allocated TargetMaps to all distributed storage nodes 300; after receiving the confirmation messages returned by the distributed storage nodes 300, return the TargetId (iSCSI Target number) corresponding to the newly allocated TargetMap.
The distributed storage node 300 is also used to:
create the iSCSI Target according to the TargetMap pushed by the cluster management server 200, and return a confirmation message.
The create-SBS-disk request instruction is sent by the administrator through the peripheral OSS (Operation Support System) or through the system command line. The cluster management server 200 allocates a TargetMap for it according to the create-SBS-disk request instruction. After receiving the updated TargetMap information sent by the cluster management server 200, a distributed storage node 300 can create the iSCSI Target corresponding to the newly allocated TargetMap by matching the updated TargetMap information against the originally stored TargetMap information. In another embodiment, the distributed storage node 300 may instead create the iSCSI Target according to a creation instruction, containing the newly allocated TargetMap information, sent by the cluster management server 200. After the iSCSI Target is created, a confirmation message is returned. After receiving the confirmation messages sent by the distributed storage nodes 300, the cluster management server 200 returns the TargetId corresponding to the newly allocated TargetMap.
Whenever the TargetMap information is updated, the cluster management server 200 re-issues the updated TargetMap information to all distributed storage nodes 300, and all distributed storage nodes 300 update their iSCSI Targets according to the newest TargetMap information, ensuring the consistency of the iSCSI Target information and the accuracy of the data on every distributed storage node 300.
(3) Mounting an SBS disk
The client 100 is also used to:
mount, according to an SBS-disk mount instruction, the SBS disk corresponding to the instruction on any distributed storage node 300 in the BSN list.
After an SBS disk has been created, the administrator sends a mount-SBS-disk instruction to the client 100 through the peripheral OSS or the system command line. According to the mount instruction, the client 100 selects any distributed storage node 300 from the BSN list obtained from the cluster management server 200 and mounts the SBS disk corresponding to the instruction on it. The distributed storage node 300 may be selected in a load-balancing manner or at random. If the selected distributed storage node 300 cannot be accessed, the client 100 reselects until the selected distributed storage node 300 can be accessed normally. After discovering a distributed storage node 300 that cannot be accessed, the client 100 may send a downtime-detection request to the cluster management server 200, triggering the cluster management server 200 to perform downtime detection on the inaccessible node.
When selecting a distributed storage node 300 for mounting an SBS disk, the client 100 does not need to determine which distributed storage node's disks hold the Boxes storing the data of the SBS disk to be mounted, because any selected distributed storage node 300 has mounted all the disks in the cluster; all data stored in the cluster can be accessed through any distributed storage node 300.
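The mount-time node selection described above (pick any node, fall back when it is unreachable) can be sketched as follows. This is a hypothetical illustration using random selection; the function names and the `is_reachable` probe are assumptions.

```python
import random

def pick_mount_node(bsn_list, is_reachable):
    """Return the first reachable node from a shuffled copy of the BSN list.

    Any node works, since every node has mounted all disks in the cluster.
    """
    candidates = list(bsn_list)
    random.shuffle(candidates)
    for node in candidates:
        if is_reachable(node):
            return node
    raise RuntimeError("no reachable distributed storage node")

# Example: bsn-1 is down, so selection falls back to another node.
down = {"bsn-1"}
node = pick_mount_node(["bsn-0", "bsn-1", "bsn-2"],
                       lambda n: n not in down)
```

A load-balancing selector would only change the ordering of `candidates`; the fallback loop stays the same.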
(4) Reclaiming an SBS disk
The cluster management server 200 is also used to:
receive a reclaim-SBS-disk request instruction, delete the TargetMap corresponding to the SBS disk to be reclaimed according to the instruction, and push all remaining TargetMap information to all distributed storage nodes 300.
The distributed storage node 300 is also used to:
delete, according to the received TargetMap information, the iSCSI Target corresponding to the SBS disk to be reclaimed, and update the TargetMap information stored on the distributed storage node 300.
The reclaim-SBS-disk request instruction is sent to the cluster management server 200 by the administrator through the peripheral OSS or the system command line, and contains the iSCSI Target information of the SBS disk to be reclaimed. The cluster management server 200 finds and deletes the matching TargetMap according to the iSCSI Target information in the reclaim instruction, and then pushes all remaining TargetMap information to all distributed storage nodes 300 so that they can update their information. After receiving the updated TargetMap information, the distributed storage nodes 300 match it against the originally stored TargetMap information and delete the iSCSI Targets whose TargetMaps no longer exist. In another embodiment, the distributed storage nodes 300 may instead delete the iSCSI Target corresponding to the TargetMap to be reclaimed according to a reclaim instruction, containing the TargetMap information to be deleted, sent by the cluster management server 200. After the iSCSI Targets are deleted, all distributed storage nodes 300 return reclaim-success messages. After receiving the reclaim-success messages from all distributed storage nodes 300, the cluster management server 200 returns a reclaim-success confirmation message.
During the reclaiming of an SBS disk, the cluster management server 200 is responsible for deleting the TargetMap to be reclaimed and then pushing the updated TargetMap information to all distributed storage nodes 300; all distributed storage nodes 300 delete the iSCSI Targets corresponding to the reclaimed TargetMaps according to this unified TargetMap information, which ensures that the iSCSI Targets remaining on all distributed storage nodes 300 stay consistent.
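The reconciliation step described above can be sketched as a set difference: each node diffs the pushed TargetMap set against its stored one and deletes the iSCSI Targets that no longer appear. The dict shapes (keyed by TargetId) are an assumption for illustration.

```python
def reconcile_targets(stored: dict, pushed: dict):
    """Compare stored vs pushed TargetMaps, both keyed by TargetId.

    Returns (TargetIds to delete locally, TargetIds to create locally).
    """
    to_delete = sorted(set(stored) - set(pushed))
    to_create = sorted(set(pushed) - set(stored))
    return to_delete, to_create

stored = {1: "target-1", 2: "target-2", 3: "target-3"}
pushed = {1: "target-1", 3: "target-3"}   # target 2 was reclaimed upstream
to_delete, to_create = reconcile_targets(stored, pushed)
```

Driving every node from the one pushed map, rather than sending per-node delete commands, is what keeps the surviving iSCSI Targets identical across the cluster.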
(5) Disk data migration
The cluster management server 200 is also used to:
after receiving a disk-data migration instruction, store, according to the instruction, the source distributed storage node 300 and source disk information and the destination distributed storage node 300 and destination disk information, set the BoxStatus corresponding to the source disk to the migrating state, and push that status to all distributed storage nodes 300; send the disk-data migration instruction to the source distributed storage node 300; after the data migration of the source distributed storage node 300 is completed, set the BoxStatus corresponding to the source disk to the normal state and push the normal state to all distributed storage nodes 300; and, after the data migration of the source distributed storage node 300 is completed, update the BoxDiskMap and push it to all distributed storage nodes 300.
The distributed storage node 300 is also used to:
upon receiving the disk-data migration instruction sent by the cluster management server 200, copy the data on the disk of the source distributed storage node 300 to the corresponding disk of the destination distributed storage node 300 according to the instruction.
Disk data migration refers to moving the data of BoxId X through BoxId Y from Disk M of distributed storage node 300A to Disk N of distributed storage node 300B. The data is migrated in units of disks, and during migration the state of the Boxes whose data is being migrated is the readable-but-not-writable migrating state.
The disk-data migration instruction received by the cluster management server 200 may be sent by the administrator through the peripheral OSS or through the system command line.
During disk data migration, the logical mapping between the migrated data's Boxes and blks does not change; what changes is the physical mapping between those Boxes and the Disks and distributed storage nodes 300. This migration method makes background data changes simpler and transparent.
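The bookkeeping above can be sketched as follows: the Box-to-blk logical mapping is untouched, while the Box-to-(node, disk) physical entry in the BoxDiskMap is rewritten once the copy finishes. The data shapes and status strings are assumptions for illustration.

```python
def migrate_boxes(box_disk_map, box_status, boxes, dst):
    """Move the given BoxIds to the destination (node, disk) pair."""
    for b in boxes:
        box_status[b] = "MIGRATING"      # readable but not writable
    # ... the source disk's data is copied to the destination disk here ...
    for b in boxes:
        box_disk_map[b] = dst            # only the physical mapping changes
        box_status[b] = "NORMAL"

box_disk_map = {7: ("bsn-A", "disk-M"), 8: ("bsn-A", "disk-M")}
box_status = {7: "NORMAL", 8: "NORMAL"}
migrate_boxes(box_disk_map, box_status, [7, 8], ("bsn-B", "disk-N"))
```

Because clients address data by blkId and BoxId, pushing the rewritten BoxDiskMap is all that is needed for readers to follow the data to its new disk.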
(6) Box splitting
The cluster management server 200 is also used to:
when starting the Box splitting operation, split, for each Box to be split, the Box into two new Boxes according to the newly pre-configured BoxRange, so that the two new Boxes correspond in numerical order to the first half and the second half of the original Box's hash range, and send data replication instructions to all distributed storage nodes 300; after receiving the data-replication-complete messages returned by the distributed storage nodes 300, push the newly pre-configured BoxRange to all distributed storage nodes 300.
The distributed storage node 300 is also used to:
receive the data replication instruction sent by the cluster management server 200, copy the data in the second half of the hash range of the Box to be split to the new Box whose numbering corresponds to the second half of the original Box's hash range, and return a data-replication-complete message to the cluster management server 200; the new Box whose numbering corresponds to the first half of the original Box's hash range already contains the data of the first half of the original Box's hash range at the time of splitting.
The Box splitting is a splitting operation carried out on all Boxes in the cluster. The newly pre-configured BoxRange is a modification of the originally configured BoxRange made by the administrator through the peripheral OSS or the command line.
Box splitting splits the original Box M, which corresponds to BoxRange [x, y), into Box M', corresponding to BoxRange [x, (x+y)/2), and Box N', corresponding to BoxRange [(x+y)/2, y), and copies the data on the original Box M that corresponds to BoxRange [(x+y)/2, y) onto Box N'.
This Box splitting method halves the BoxRange of each original Box; it is simple and easy to implement, and the whole cluster's Box splitting operation takes little time.
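The range arithmetic above can be sketched in a few lines: Box M covering [x, y) splits into M' covering [x, (x+y)/2) and N' covering [(x+y)/2, y), and only the second half of the data needs to be copied. The function name is an assumption for illustration.

```python
def split_box(x: int, y: int):
    """Split the half-open hash range [x, y) into its two halves.

    Returns the ranges of M' (keeps its data in place) and N' (receives
    a copy of the second half's data).
    """
    mid = (x + y) // 2
    return (x, mid), (mid, y)

m_range, n_range = split_box(0, 1024)
```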
(7) Box merging
The cluster management server 200 is also used to:
when starting the Box merging operation, merge, for each pair of numerically adjacent Boxes to be merged, the two Boxes into one new Box according to the newly pre-configured BoxRange, and send data replication instructions to all distributed storage nodes 300; after receiving the data-replication-complete messages returned by the distributed storage nodes 300, push the newly pre-configured BoxRange to all distributed storage nodes 300.
The distributed storage node 300 is also used to:
receive the data replication instruction sent by the cluster management server 200, copy the data of the higher-numbered Box to be merged to the new Box, and return a data-replication-complete message to the cluster management server 200; the new Box already contains the data of the lower-numbered Box to be merged at the time of merging.
The Box merging is a merging operation carried out on all pairs of Boxes in the cluster that are adjacent in the originally configured BoxRange. The newly pre-configured BoxRange is a modification of the originally configured BoxRange made by the administrator through the peripheral OSS or the command line.
Box merging merges the original Box M, corresponding to BoxRange [x, y), and Box N, corresponding to BoxRange [y, z), into Box M', corresponding to BoxRange [x, z), and copies the data on the original Box N that corresponds to BoxRange [y, z) onto Box M'.
This Box merging method is simple and easy to implement, and the whole cluster's Box merging operation also takes little time.
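The merge arithmetic mirrors the split: adjacent Boxes covering [x, y) and [y, z) merge into one Box covering [x, z), and only the higher-numbered Box's data is copied. A minimal sketch, with the function name as an assumption:

```python
def merge_boxes(m_range, n_range):
    """Merge two adjacent half-open hash ranges into one.

    The lower range's data stays in place; the higher range's data is
    copied into the merged Box.
    """
    (x, y1), (y2, z) = m_range, n_range
    assert y1 == y2, "Boxes must be adjacent in the BoxRange"
    return (x, z)

merged = merge_boxes((0, 512), (512, 1024))
```

Note that merging is the exact inverse of the split sketched for function (6), which is why expansion (built on splitting) and shrinking (built on merging) pair up.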
(8) Cluster expansion
The cluster management server 200 is also used to:
receive an expansion instruction and, when newly added distributed storage nodes 300 exist, store in the cluster management server 200 the preconfigured group list information of the newly added distributed storage nodes 300 and the mapping relation information between Boxes and the groups and disks of the newly added distributed storage nodes 300; push the group list information of all distributed storage nodes 300, all BoxDiskMap information, and all TargetMap information to all distributed storage nodes 300; after receiving the confirmation messages returned by the distributed storage nodes 300, start the Box splitting operation, and after the Box splitting is completed, push the BSN list to the client 100;
The distributed storage node 300 is also used to:
when the distributed storage node 300 is an original node, mount the disks on the newly added distributed storage nodes 300 locally; when the distributed storage node 300 is a newly added node, mount the disks on all other distributed storage nodes 300 locally, and create and start iSCSI Targets according to the TargetMap information.
In the cluster of this embodiment of the present invention, the distributed storage nodes 300 use a dual-host backup mode, so the number of distributed storage nodes 300 added in each expansion must be even, and the number of servers after expansion is twice the number of servers in the existing cluster.
The expansion instruction may be sent by an administrator through the peripheral OSS, or by an administrator through the system command line.
The configuration and software suite of a distributed storage node 300 newly added to the cluster are consistent with those of the original distributed storage nodes 300 in the cluster; that is, a newly added distributed storage node 300 is provided in advance with the iSCSI initiator and iSCSI Target, and can also start the boxd application.
Cluster expansion is implemented on the basis of the Box splitting operation; the method is simple and easy to implement.
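The even-count and doubling constraints above can be sketched as a small validation helper. This is a hypothetical illustration, not code from the patent:

```python
def validate_expansion(existing_nodes: int, new_nodes: int) -> int:
    """Check the expansion constraints implied by the dual-host backup mode:
    the number of nodes added must be even, and the cluster size doubles."""
    if new_nodes % 2 != 0:
        raise ValueError("nodes added per expansion must be even")
    if new_nodes != existing_nodes:
        raise ValueError("cluster size after expansion must be 2x the existing size")
    return existing_nodes + new_nodes
```

For example, expanding a 4-node cluster requires adding exactly 4 nodes, yielding 8.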
(9) Cluster capacity reduction
The cluster management server 200 is also used to:
receive a capacity reduction instruction and start the Box merge operation, wherein the capacity reduction instruction includes the information of the distributed storage nodes 300 to be removed; after the merge operation is completed, modify the BSN list, push the modified BSN list and the BoxDiskMap modified during the merge operation to all distributed storage nodes 300, and push the modified BSN list to the client 100;
The distributed storage node 300 is also used to:
unmount, according to the received BSN list, the local iSCSI Targets mapped from the distributed storage nodes 300 to be removed;
The client 100 is also used to:
judge according to the received BSN list: if the distributed storage node 300 that the client 100 is mounted to is to be removed, unmount it and reselect another distributed storage node 300 for mounting.
Before a cluster performs capacity reduction, the administrator needs to confirm whether the reduced cluster of distributed storage nodes 300 can still satisfy the capacity requirement of the stored data, and also needs to confirm the distributed storage nodes 300 to be removed and their corresponding Boxes.
The capacity reduction instruction received by the cluster management server 200 may be sent by an administrator through the peripheral OSS, or through the system command line.
Cluster capacity reduction is implemented on the basis of the Box merge operation; the method is simple and easy to implement.
(10) Handling of faulty disks
The distributed storage node 300 is also used to:
when a disk on the distributed storage node 300 corresponding to the Box to be accessed becomes abnormal, isolate the abnormal disk and notify the cluster management server 200 of the disk failure;
The cluster management server 200 is also used to:
set the BoxStatus of each Box on the abnormal disk to the degraded state, issue a disk replacement notice when it is determined that this is not a downtime situation, and push the BoxStatus to all distributed storage nodes 300.
A disk abnormality covers all cases in which the disk cannot be accessed normally.
Isolating the faulty disk means that the distributed storage node 300 marks the faulty disk as abnormal and suspends access operations to it.
The degraded state means that the BoxStatus is readable but not writable.
After receiving the disk failure notice, the cluster management server 200 performs downtime detection on the related disk. After determining that the failed disk is not a downtime situation, the cluster management server 200 notifies the administrator through the peripheral OSS platform to replace the disk; after determining that it is a downtime situation, it switches to the downtime handling flow.
By setting the BoxStatus of all affected Boxes, the data on the faulty disk can be quickly protected from modification as soon as the disk abnormality is discovered, thereby avoiding data inconsistency.
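The read-only protection conveyed by the degraded state can be sketched as a small state model. The enum values and helper names are illustrative assumptions; the patent only specifies that degraded (and, later, migrating) Boxes are readable but not writable:

```python
from enum import Enum

class BoxStatus(Enum):
    NORMAL = "normal"        # readable and writable
    DEGRADED = "degraded"    # readable, not writable (faulty or lost disk)
    MIGRATING = "migrating"  # readable, not writable (data being moved)

def can_write(status: BoxStatus) -> bool:
    # Only Boxes in the normal state accept writes
    return status is BoxStatus.NORMAL

def can_read(status: BoxStatus) -> bool:
    # All three states remain readable
    return True
```

A node would consult this check before applying any write from an iSCSI request, so degrading a Box's status immediately blocks modification of its data.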
(11) Handling of lost disks
The distributed storage node 300 is also used to:
when a disk on a distributed storage node 300 that needs to be accessed cannot be accessed, notify the cluster management server 200 of the disk loss;
The cluster management server 200 is also used to:
set the BoxStatus of each Box on the lost disk to the degraded state, issue a disk replacement notice when it is determined that this is not a downtime situation, and push the BoxStatus to all distributed storage nodes 300.
A disk that needs to be accessed but cannot be accessed refers to the case in which a distributed storage node 300 that has mounted an SBS disk still fails to communicate, after multiple connection attempts, with a locally mounted disk on another distributed storage node 300 that corresponds to the data to be accessed.
By setting the BoxStatus of all affected Boxes, the data on the lost disk can be quickly protected from modification as soon as the disk abnormality is discovered, thereby avoiding data inconsistency.
(12) Handling after disk replacement in the faulty-disk/lost-disk cases
The cluster management server 200 is also used to:
after receiving a disk replacement completion message, send a disk data migration instruction to the distributed storage node 300; after the disk data migration is completed, set the BoxStatus to the normal state and push the BoxStatus to all distributed storage nodes 300;
The distributed storage node 300 is also used to:
receive the disk data migration instruction sent by the cluster management server 200, copy the data on the peer disk of the replaced disk onto the new disk, and return a completion message to the cluster management server 200.
The disk replacement completion message received by the cluster management server 200 is the message sent to it through the peripheral OSS by the administrator after the disk has been replaced in the background.
After the disk data migration is completed, the disk data can be quickly restored to the readable and writable state simply by modifying the BoxStatus value; the method is simple and easy to implement.
(13) Handling of downtime
The client 100 is also used to:
after judging that the network of a distributed storage node 300 is unreachable, or that a disk on a distributed storage node 300 is unreachable, send a downtime detection request instruction to the cluster management server 200;
The distributed storage node 300 is also used to:
when detecting that the distributed storage node 300 corresponding to the Box to be accessed, or a disk on that node, is unreachable, send a downtime detection request instruction to the cluster management server 200;
The cluster management server 200 is also used to:
receive the downtime detection request instruction sent by the client 100 or a distributed storage node 300, and perform heartbeat detection on the involved abnormal distributed storage node 300; or periodically perform heartbeat detection on all distributed storage nodes 300; when an abnormality is detected, set all BoxStatus values corresponding to the abnormal distributed storage node 300 to the degraded state, and push the status information to all distributed storage nodes 300; update the BSN list, push the BSN list to all distributed storage nodes 300 and clients 100, and issue a downtime notice.
When performing heartbeat detection, the cluster management server 200 uses ping/pong exchanges with the involved distributed storage node 300 to judge whether that node is down.
The failed distributed storage node 300 can be isolated simply by modifying its BoxStatus values and the BSN list; the method is simple and easy to implement.
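The heartbeat detection and isolation steps above can be sketched as follows. The function names, the string status values, and the shape of `send_ping` are assumptions for illustration; the patent only specifies a ping/pong exchange followed by degrading the node's Boxes and updating the BSN list:

```python
def heartbeat_check(node, send_ping, timeout=1.0):
    """Ping/pong heartbeat: a node that fails to answer within the
    timeout (or answers incorrectly) is judged to be down."""
    try:
        reply = send_ping(node, timeout=timeout)
    except TimeoutError:
        return False
    return reply == "pong"

def handle_downtime(node, boxes_of, box_status, bsn_list):
    """Isolate a failed node: degrade all of its Boxes and remove it
    from the BSN list so clients stop selecting it for mounting."""
    for box_id in boxes_of(node):
        box_status[box_id] = "DEGRADED"
    bsn_list.remove(node)
```

In the patent's flow, `handle_downtime` would run on the cluster management server 200 after a failed `heartbeat_check`, and the updated status and BSN list would then be pushed to all nodes and clients.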
(14) Handling after downtime recovery
The distributed storage node 300 is also used to:
after a distributed storage node 300 recovered from downtime has been initialized, mount the disks on all other distributed storage nodes 300 that have not yet been mounted locally, and create iSCSI Targets according to the TargetMap; all other distributed storage nodes 300 re-mount the disks on the recovered distributed storage node 300;
The cluster management server 200 is also used to:
after receiving a downtime recovery message, send a disk data migration instruction to the recovered distributed storage node 300; after confirming that the data migration is completed, notify all distributed storage nodes 300 of the downtime recovery, and push the updated BSN list, BoxDiskMap, and TargetMap to all distributed storage nodes 300; after receiving the disk mount completion messages from all distributed storage nodes 300, set the BoxStatus to the normal state, push the status information to all distributed storage nodes 300, and push the updated BSN list to all clients 100.
The recovered distributed storage node 300 may be the failed distributed storage node 300 itself after repair, or a new distributed storage node 300 replacing the failed one. The software suite configured on a new distributed storage node 300 replacing a failed node is completely consistent with that of the failed distributed storage node 300.
The downtime recovery message received by the cluster management server 200 is the message sent through the peripheral OSS or the command line by the administrator after repairing or replacing the failed distributed storage node 300.
After receiving the disk data migration instruction sent by the cluster management server 200, the recovered distributed storage node 300 migrates the data on its preconfigured peer onto itself.
After receiving the downtime recovery message, the recovered distributed storage node 300 mounts the disks on all other distributed storage nodes 300 in the cluster, and all other distributed storage nodes 300 re-mount the disks on the recovered distributed storage node 300.
The processing steps after downtime recovery are simple and easy to implement, and maintain the consistency of the data in the cluster well.
Further, as shown in Fig. 2, an embodiment of the block storage system data processing method applied to cloud computing of the present invention is shown. This embodiment describes in detail the process in which the client 100 performs data interaction with the distributed storage nodes 300 through the cluster management server 200. That is, the above process may comprise the following steps:
Step S101, the client 100 sends a request instruction for obtaining the BSN list to the cluster management server 200;
The client 100 is software running on a user terminal; this software may be a kernel driver or application software. A user can log in to the client 100 with an account and password, and then communicate with the cluster management server 200 or each distributed storage node 300 through the access network. The user terminal may be a virtual server or a physical server. The client 100 is configured with the access IP of the cluster management server 200, according to which the client 100 can communicate with the cluster management server 200 when needed. When a user logs in to the client 100 through a terminal device, the client 100 communicates with the cluster management server 200 through the network, sends a login authentication request carrying the account and password to the cluster management server 200 for login authentication, and after the login authentication succeeds, sends the request instruction for obtaining the BSN list to the cluster management server 200.
Step S102, the cluster management server 200 receives the BSN list acquisition request instruction;
The BSN list contains the information of all distributed storage nodes 300 in the cluster and their grouping information. The cluster management server 200 may be a server corresponding to this application, an independent server, or a server cluster composed of multiple servers. The cluster management server 200 stores the information of all distributed storage nodes 300, and accesses and manages all distributed storage nodes 300 through the network. The cluster management server 200 receives the login authentication request sent by the client 100 and confirms whether the user is a legitimate user according to the account and password in the request. When confirming that the user is legitimate, the cluster management server 200 returns a login authentication success message to the client 100. After receiving the login authentication success message, the client 100 sends the request instruction for obtaining the BSN list to the cluster management server 200. After receiving the request instruction, the cluster management server 200 returns the BSN list stored on it to the client 100.
Step S103, the cluster management server 200 returns the BSN list to the client 100 according to the request instruction;
Step S104, the client 100 receives the BSN list;
Step S105, the client 100 performs data interaction directly with the distributed storage nodes 300 according to the BSN list;
A distributed storage node 300 may be a server corresponding to this application or an independent server. Multiple distributed storage nodes 300 form a distributed storage node cluster. The distributed storage nodes 300 in the cluster may operate in single-machine mode or in active/standby dual-machine mode. This embodiment of the present invention uses the active/standby dual-machine mode, in which the two distributed storage nodes 300 store identical, synchronized data, ensuring data safety. The client 100 directly accesses the distributed storage nodes 300 over the network according to the BSN list returned by the cluster management server 200, in order to perform data interaction.
The client 100 obtains the access information of the distributed storage nodes 300 from the BSN list so that it can communicate with them directly, improving access efficiency.
As shown in Fig. 3, an embodiment of cluster initialization and startup in the block storage system data processing method applied to cloud computing of the present invention is shown. The cluster initialization and startup process specifically includes the following steps:
Step S201, the distributed storage node 300 adds the local disks to the iSCSI Target according to the configured iSCSI Target, starts boxd, and waits to receive instructions sent by the cluster management server 200;
The iSCSI Target and iSCSI initiator suites are installed on the distributed storage node 300, and the local disks are managed by the iSCSI Target in this suite. The boxd on the distributed storage node 300 divides each local disk space into Boxes of a preset size and numbers each Box. According to the TargetMap information, BoxDiskMap information, and BoxRange information obtained from the cluster management server 200, it converts the lba of each received iSCSI Target read/write/control request into a blk; the hash algorithm proposed by the present invention (BoxId = SBS-hash(blkId) = BoxRange[Md5(blkId) % the total number of Boxes]) maps the blkId to a BoxId; the BoxId is mapped to the corresponding GroupId through the BoxDiskMap, and the disk on the corresponding distributed storage node 300 is then found through the BSN list information.
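The blk-to-disk lookup chain above can be sketched as follows. Interpreting the MD5 digest as a big integer, and the container shapes chosen for BoxRange, BoxDiskMap, and the BSN list, are assumptions for illustration; only the formula BoxId = BoxRange[Md5(blkId) % number of Boxes] comes from the patent:

```python
import hashlib

def sbs_hash(blk_id: str, box_range: list) -> int:
    """BoxId = SBS-hash(blkId) = BoxRange[md5(blkId) % number of Boxes]."""
    digest = int(hashlib.md5(blk_id.encode()).hexdigest(), 16)
    return box_range[digest % len(box_range)]

def locate_disk(blk_id: str, box_range: list, box_disk_map: dict, bsn_list: dict):
    """blkId -> BoxId via the hash, BoxId -> (GroupId, disk) via BoxDiskMap,
    then GroupId -> node via the BSN list."""
    box_id = sbs_hash(blk_id, box_range)
    group_id, disk = box_disk_map[box_id]
    node = bsn_list[group_id]
    return node, disk
```

Because the mapping is deterministic, any node holding the same BoxRange, BoxDiskMap, and BSN list resolves a given blkId to the same disk, which is what lets every node in the cluster serve every request.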
Step S202, the distributed storage node 300 determines whether this is a non-initial local startup of the application, i.e. whether this is a restart;
Step S203, if the distributed storage node 300 has been restarted, the cluster management server 200 pushes the preconfigured and locally stored BSN list information, BoxDiskMap information, BoxRange information, and TargetMap information to all distributed storage nodes 300;
The BSN list, BoxDiskMap, and BoxRange stored on the cluster management server 200 are preconfigured by the cloud computing block storage system administrator according to the actual situation. The TargetMap stored on the cluster management server 200 is allocated by the cluster management server 200 after receiving a create SBS (Simple Block Storage) disk request instruction sent by the client 100. If a distributed storage node 300 has been restarted, in order to guarantee data consistency, the TargetMap needs to be re-issued to all distributed storage nodes 300 after the restart.
Step S204, if the distributed storage node 300 has not been restarted, the cluster management server 200 pushes the preconfigured and locally stored BSN list information, BoxDiskMap information, and BoxRange information to all distributed storage nodes 300;
Step S205, all distributed storage nodes 300 receive the pushed information and, according to the BSN list information, mount the disks on all other distributed storage nodes 300 in the cluster locally;
In step S205, each distributed storage node 300 mounts locally, according to the BSN list information, the disks on all other distributed storage nodes 300 that have not yet been mounted locally.
Step S206, each distributed storage node 300 judges whether each TargetMap in the received TargetMap information is empty;
Step S207, for the received TargetMap information, if a TargetMap is empty, the distributed storage nodes 300 do not execute the create iSCSI Target operation;
Step S208, for the received TargetMap information, if a TargetMap is non-empty, the distributed storage nodes 300 create and start the iSCSI Target.
Each distributed storage node 300 in the cluster mounts the disks on all other distributed storage nodes 300, so that when the data the client 100 wants to access is not on the local disk of a distributed storage node 300, the data on the required disk can still be accessed through that distributed storage node 300. Thus any distributed storage node 300 in the cluster is able to provide the data access the client 100 needs.
As shown in Fig. 4, an embodiment of creating an SBS disk in the block storage system data processing method applied to cloud computing of the present invention is shown. The process of creating an SBS disk specifically includes the following steps:
Step S301, the cluster management server 200 receives a create SBS disk request instruction sent by an administrator through the peripheral OSS or the system command line, and allocates a new TargetMap for the client 100 according to the creation request instruction;
Step S302, the cluster management server 200 pushes all allocated TargetMap information to all distributed storage nodes 300;
All allocated TargetMaps include the newly allocated TargetMap and the originally allocated TargetMaps.
Step S303, the distributed storage node 300 receives the pushed TargetMap information; if a TargetMap is non-empty, it creates the iSCSI Target and returns a confirmation message to the cluster management server 200;
After receiving the updated TargetMap information sent by the cluster management server 200, the distributed storage node 300 can match the updated TargetMap information against the originally stored TargetMap information to create the iSCSI Target corresponding to the newly allocated TargetMap. In another embodiment, the distributed storage node 300 may also create the iSCSI Target according to a creation instruction containing the newly allocated TargetMap information sent by the cluster management server 200. After creating the iSCSI Target, it returns a confirmation message.
Step S304, after receiving the confirmation message, the cluster management server 200 returns the TargetId corresponding to the newly allocated TargetMap.
Whenever the TargetMap information is updated, the cluster management server 200 re-issues the updated TargetMap information to all distributed storage nodes 300, and all distributed storage nodes 300 update the corresponding iSCSI Targets according to the latest TargetMap information, thereby ensuring the consistency of the iSCSI Target information on every distributed storage node 300 and the accuracy of the data.
As shown in Fig. 5, an embodiment of mounting an SBS disk in the block storage system data processing method applied to cloud computing of the present invention is shown. The process of mounting an SBS disk specifically includes the following steps:
Step S401, after the client 100 receives a mount SBS disk instruction, it sends a request instruction for obtaining the BSN list to the cluster management server 200 according to the locally configured access IP of the cluster management server 200;
After the SBS disk has been created, the administrator sends the mount SBS disk instruction to the client 100 through the peripheral OSS or the system command line.
Step S402, after receiving the request instruction, the cluster management server 200 returns the BSN list to the client 100;
Step S403, the client 100 selects a distributed storage node 300 from the BSN list to mount the created SBS disk;
The client 100 selects any distributed storage node 300 to mount the SBS disk according to the BSN list obtained from the cluster management server 200. The distributed storage node 300 may be selected in a load-balanced manner or at random.
Step S404, judge whether the selected distributed storage node 300 for mounting the SBS disk is in the active state;
If the selected distributed storage node 300 for mounting the SBS disk is reachable over the network, it is judged to be in the active state; if it is unreachable over the network, it is judged to be in the inactive state;
Step S405, if the selected distributed storage node 300 is in the active state, mount the SBS disk to that distributed storage node 300;
Step S406, if the selected distributed storage node 300 is in the inactive state, or the corresponding iSCSI Target cannot be found, the client 100 selects another distributed storage node 300.
If the selected distributed storage node 300 cannot be accessed, the client 100 reselects distributed storage nodes 300 until the selected node can be accessed normally. After discovering a distributed storage node 300 that cannot be accessed, the client 100 can send a downtime detection request instruction to the cluster management server 200, triggering the cluster management server 200 to perform downtime detection on the inaccessible distributed storage node 300.
When selecting the distributed storage node 300 for mounting the SBS disk, the client 100 does not need to judge on which distributed storage node 300's disk Boxes the data of the SBS disk is stored, because any selected distributed storage node 300 has mounted all the disks in the cluster; that is, all data stored in the cluster can be accessed through any distributed storage node 300.
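The select-and-retry behavior above can be sketched as follows, using the random selection policy; the function names, the `is_reachable` probe, and the `report_down` callback (standing in for the downtime detection request to the cluster management server 200) are assumptions for illustration:

```python
import random

def pick_node(bsn_list, is_reachable, report_down):
    """Select any node for mounting the SBS disk (random policy); on an
    unreachable node, report it for downtime detection and try another."""
    candidates = list(bsn_list)
    random.shuffle(candidates)
    for node in candidates:
        if is_reachable(node):
            return node
        report_down(node)  # trigger downtime detection on the cluster manager
    raise RuntimeError("no reachable distributed storage node")
```

A load-balanced policy would replace the shuffle with, e.g., ordering candidates by current load, without changing the retry logic.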
As shown in Fig. 6, an embodiment of recycling an SBS disk in the block storage system data processing method applied to cloud computing of the present invention is shown. The process of recycling an SBS disk specifically includes the following steps:
Step S501, the cluster management server 200 receives a recycle SBS disk request instruction and deletes the TargetMap corresponding to the SBS disk;
The recycle SBS disk request instruction is sent to the cluster management server 200 by the administrator through the peripheral OSS or the system command line. The instruction contains the iSCSI Target information corresponding to the SBS disk to be recycled.
Step S502, the cluster management server 200 pushes all the remaining TargetMap information to all distributed storage nodes 300;
The cluster management server 200 finds the matching TargetMap through the iSCSI Target information in the recycle instruction and deletes it, then pushes all remaining TargetMap information to all distributed storage nodes 300 again to update their information.
Step S503, all distributed storage nodes 300 receive the TargetMap information, delete the iSCSI Targets corresponding to TargetMaps that no longer exist, and return confirmation messages to the cluster management server 200;
After receiving the updated TargetMap information, each distributed storage node 300 matches the updated TargetMap information against the originally stored TargetMap information, so as to delete the iSCSI Targets on the distributed storage node 300 whose TargetMaps no longer exist. In another embodiment, all distributed storage nodes 300 may also delete the iSCSI Targets corresponding to the TargetMaps to be recycled according to a recycle instruction, containing the TargetMap information to be deleted, sent by the cluster management server 200. After deleting the iSCSI Targets, all distributed storage nodes 300 return recycle success messages.
Step S504, after receiving the confirmation messages, the cluster management server 200 returns a recycle success confirmation message.
In the process of recycling an SBS disk, the cluster management server 200 is responsible for deleting the TargetMap to be recycled and then pushing the updated TargetMap information to all distributed storage nodes 300; all distributed storage nodes 300 delete the iSCSI Targets corresponding to the recycled TargetMaps according to this unified TargetMap information, thereby guaranteeing that the iSCSI Targets that still exist on all distributed storage nodes 300 remain consistent.
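The per-node reconciliation step — keeping only the iSCSI Targets whose TargetMap still exists in the pushed set — can be sketched as a pure function over target identifiers (a hypothetical representation; the patent does not define the TargetMap data structure):

```python
def reconcile_targets(local_targets, pushed_target_maps):
    """Given the node's current iSCSI Targets and the full TargetMap set
    pushed by the cluster manager, return (targets to keep, targets to delete)."""
    wanted = set(pushed_target_maps)
    remaining = [t for t in local_targets if t in wanted]
    to_delete = [t for t in local_targets if t not in wanted]
    return remaining, to_delete
```

The same function also covers creation-time matching in the reverse direction: anything in the pushed set but not yet local is a newly allocated TargetMap to create.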
As shown in Fig. 7, an embodiment of disk data migration in the block storage system data processing method applied to cloud computing of the present invention is shown. The disk data migration process specifically includes the following steps:
Step S601, the cluster management server 200 receives a disk data migration instruction and determines, according to the instruction, the corresponding source distributed storage node 300 and source disk, and the destination distributed storage node 300 and destination disk;
The disk data migration instruction received by the cluster management server 200 may be sent by an administrator through the peripheral OSS, or through the system command line.
Step S602, the cluster management server 200 sets all BoxStatus values corresponding to the disk whose data is to be migrated to the migrating state, and pushes the status information to all distributed storage nodes 300;
The migrating state is a readable but not writable state; in this state the corresponding data can only be read and cannot be modified.
Step S603, the cluster management server 200 issues the disk data migration instruction to the source distributed storage node 300;
Step S604, the source distributed storage node 300 receives the disk data migration instruction and, according to the instruction, copies the data on the disk to be migrated, with the disk as the unit, onto the corresponding disk of the destination distributed storage node 300, and returns a migration completion message to the cluster management server 200;
Disk data migration refers to moving the data from BoxId X to BoxId Y from Disk M of distributed storage node 300A to Disk N of distributed storage node 300B. Data is migrated with the disk as the unit, and during migration the state of the Boxes corresponding to the migrated data is the readable but not writable migrating state.
Step S605, after receiving the migration completion message, the cluster management server 200 modifies the BoxDiskMap information corresponding to the migrated data to the information of the destination distributed storage node 300's disk, and pushes the updated mapping relation information to all distributed storage nodes 300;
Step S606, the cluster management server 200 modifies all BoxStatus values corresponding to the migrated data to the normal state, and pushes the status information to all distributed storage nodes 300.
During disk data migration, the logical mapping between the migrated data's Boxes and blks does not change; what changes is the physical mapping between the migrated data's Boxes and the Disks and distributed storage nodes 300. With this data migration method, the change of data in the background is simpler and transparent.
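The four-phase flow above — mark the Boxes read-only, copy with the disk as the unit, remap, restore the status — can be sketched as a single function; the dictionary shapes and the `copy_fn` callback are assumptions for illustration:

```python
def migrate_disk(box_ids, box_status, box_disk_map, dst_disk, copy_fn):
    """Sketch of the disk data migration flow run by the cluster manager.

    box_ids: Boxes on the source disk; box_status / box_disk_map: the
    cluster-wide status and Box->disk mapping tables; copy_fn: performs
    the actual disk-unit copy onto the destination disk."""
    for b in box_ids:
        box_status[b] = "MIGRATING"      # readable, not writable
    copy_fn(box_ids, dst_disk)           # copy with the disk as the unit
    for b in box_ids:
        box_disk_map[b] = dst_disk       # physical mapping changes; the
        box_status[b] = "NORMAL"         # Box->blk logical mapping does not
```

Note that only the physical Box-to-disk mapping is rewritten; clients keep resolving blkIds through the unchanged hash, which is why the migration is transparent to them.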
As shown in Fig. 8, an embodiment of Box splitting in the block storage system data processing method applied to cloud computing of the present invention is shown. The Box splitting process specifically includes the following steps:
Step S701, the cluster management server 200 starts the Box splitting operation and sends a data copy instruction to all distributed storage nodes 300;
Box splitting is a splitting operation performed on all Boxes in the cluster. The newly preconfigured BoxRange is a modification of the originally configured BoxRange made by the administrator through the peripheral OSS or the command line. The principle of Box splitting is: an original Box M corresponding to BoxRange [x, y) is split into a Box M' corresponding to BoxRange [x, (x+y)/2) and a Box N' corresponding to BoxRange [(x+y)/2, y), and the data corresponding to BoxRange [(x+y)/2, y) on the original Box M is copied onto Box N'.
In the Box splitting operation, for each Box to be split, the cluster management server 200 splits it into two new Boxes according to the newly preconfigured BoxRange, making the two new Boxes correspond, in numerical order, to the first half and the second half of the hash range of the original Box, and then sends the data copy instruction to all distributed storage nodes 300.
Step S702, all distributed storage nodes 300 receive the data copy instruction and return data copy completion messages after execution is completed;
After receiving the data copy instruction sent by the cluster management server 200, all distributed storage nodes 300 copy the data in the second half of the hash range of each Box to be split to the new Box whose number corresponds to the second half of the original Box's hash range, and then return data copy completion messages to the cluster management server 200; the new Box whose number corresponds to the first half of the original Box's hash range already contains, at the time of the split, the data in the first half of the original Box's hash range.
Step S703, after receiving the data copy completion messages, the cluster management server 200 pushes the newly preconfigured BoxRange information to all distributed storage nodes 300.
The newly preconfigured BoxRange is synchronized to the cluster management server 200 by the administrator, after modifying the configuration values, before step S701.
This Box splitting method halves the BoxRange corresponding to each original Box; the method is simple and easy to implement, and the time taken by a full cluster-wide Box splitting operation is also short.
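The range-halving principle can be sketched directly from the formula [x, y) → [x, (x+y)/2), [(x+y)/2, y); using the integer midpoint for odd-width ranges is an assumption, since the patent does not specify rounding:

```python
def split_box_range(x: int, y: int):
    """Split BoxRange [x, y) into [x, mid) and [mid, y), mid = (x + y) // 2."""
    mid = (x + y) // 2
    return (x, mid), (mid, y)

def split_all(ranges):
    """Apply the split to every Box in the cluster, preserving numerical order."""
    out = []
    for x, y in ranges:
        first, second = split_box_range(x, y)
        out.extend([first, second])
    return out
```

Applying `split_all` to the ranges of an n-node configuration yields the 2n ranges used after a cluster expansion, which is why expansion is built on the split operation.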
As shown in Fig. 9, an embodiment of Box merging in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The Box merging process specifically includes the following steps:
Step S801: the cluster management server 200 starts the Box merging operation and sends a data replication instruction to all distributed storage nodes 300;
The Box merging operation is performed on every pair of Boxes in the cluster that are adjacent under the originally configured BoxRange. The pre-configured new BoxRange is obtained by the administrator modifying the originally configured BoxRange through the peripheral OSS or through the command line. The merging principle is: if the original Box M corresponds to BoxRange [x, y) and Box N corresponds to BoxRange [y, z), they are merged into Box M' corresponding to BoxRange [x, z), and the data corresponding to BoxRange [y, z) on the original Box N is copied onto Box M'.
In the Box merging operation, for each pair of two Boxes to be merged that are adjacent in numerical order, the cluster management server 200 merges the two Boxes into one new Box according to the pre-configured new BoxRange, and then sends a data replication instruction to all distributed storage nodes 300.
Step S802: all distributed storage nodes 300 receive the data replication instruction, and return a data replication completion message after execution is completed;
After a distributed storage node 300 receives the data replication instruction, it copies the data in the Box to be merged with the higher sequence number to the new Box, and returns a data replication completion message to the cluster management server 200. The new Box already contains, at the time of merging, the data in the Box to be merged with the lower sequence number.
Step S803: after the cluster management server 200 receives the data replication completion messages, it pushes the pre-configured new BoxRange to all distributed storage nodes 300.
This Box merging method is simple and easy to implement, and the time taken by a complete Box merging operation over the whole cluster is also short.
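The merging principle ([x, y) + [y, z) → [x, z)) can be sketched as follows; this is an illustrative sketch under assumed names, not the patented implementation.

```python
# Illustrative sketch of Box merging: adjacent Boxes M with BoxRange [x, y)
# and N with BoxRange [y, z) are merged into M' with BoxRange [x, z).
# Only the data of N, the higher-numbered Box, needs to be replicated.

def merge_adjacent_boxes(box_ranges):
    """box_ranges: list of (lo, hi) tuples sorted by Box number.

    Pairs of adjacent ranges are merged; returns the new BoxRange list.
    """
    assert len(box_ranges) % 2 == 0, "merging works on pairs of adjacent Boxes"
    merged = []
    for i in range(0, len(box_ranges), 2):
        (x, y), (y2, z) = box_ranges[i], box_ranges[i + 1]
        assert y == y2, "Boxes to merge must be adjacent in hash range"
        merged.append((x, z))  # the data of [y, z) is copied into the new Box
    return merged
```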
As shown in Fig. 10, an embodiment of cluster expansion in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The cluster expansion process specifically includes the following steps:
Step S901: the newly added distributed storage node 300 is initialized and waits to receive instructions sent by the cluster management server 200;
The software suite configured on the distributed storage node 300 newly added to the cluster is consistent with that on the original distributed storage nodes 300 in the cluster, i.e., the newly added distributed storage node 300 is provided in advance with an iSCSI initiator and an iSCSI Target, and can also start the boxd application.
Step S902: the cluster management server 200 updates the information of the newly added distributed storage node 300 into the BSN list and the BoxDiskMap;
Step S903: the cluster management server 200 pushes the updated BSN list, BoxDiskMap, and TargetMap to all distributed storage nodes 300;
Step S904: the distributed storage nodes 300 receive all the pushed information;
Step S905: the newly added distributed storage node 300 mounts locally, according to the updated BSN list information, the disks on all other distributed storage nodes 300 in the cluster;
Step S906: all the other distributed storage nodes 300 mount locally, according to the updated BSN list information, the disks on all newly added distributed storage nodes 300, and return a confirmation message to the cluster management server 200;
Step S907: the newly added distributed storage node 300 creates and starts an iSCSI Target according to the TargetMap information, and returns a confirmation message to the cluster management server 200;
Step S908: after the cluster management server 200 receives the confirmation messages returned by all distributed storage nodes 300, it starts the Box splitting operation;
Step S909: after the splitting operation is completed, the cluster management server 200 pushes the BSN list to all clients 100.
The distributed storage nodes 300 in the cluster use a dual-machine backup mode, so the number of distributed storage nodes 300 added in each expansion must be even, and the number of servers after expansion is twice the existing number of cluster servers.
Cluster expansion is implemented on the basis of the Box splitting operation; the method is simple and easy to implement.
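The expansion constraint stated above (even node count added, server count doubled) can be captured in a small validation sketch. This is an illustrative helper under assumed names, not part of the patented system.

```python
# Illustrative sketch of the cluster expansion constraint: nodes use a
# dual-machine backup mode, so the number of added nodes must be even,
# and after expansion the cluster holds twice its previous server count.

def validate_expansion(current_nodes, added_nodes):
    """Return the new cluster size if the expansion is legal, else raise."""
    if added_nodes % 2 != 0:
        raise ValueError("nodes are added in mirrored pairs: count must be even")
    if current_nodes + added_nodes != 2 * current_nodes:
        raise ValueError("expansion must double the existing server count")
    return current_nodes + added_nodes
```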
As shown in Fig. 11, an embodiment of cluster capacity reduction in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The cluster capacity reduction process specifically includes the following steps:
Step S1001: the cluster management server 200 receives a capacity reduction instruction and starts the Box merging operation;
Before the cluster is reduced, the administrator needs to confirm whether the reduced cluster of distributed storage nodes 300 can still satisfy the capacity requirement of the stored data, and also needs to confirm the distributed storage nodes 300 to be removed and their corresponding Boxes.
The capacity reduction instruction received by the cluster management server 200 may be sent by the administrator through the peripheral OSS, or through the system command line, and contains the information of the distributed storage nodes 300 that need to be removed.
The started Box merging operation is as described in steps S801 to S803 of Fig. 9.
Step S1002: after the merging operation is completed, the cluster management server 200 modifies the BSN list, pushes the modified BSN list together with the BoxDiskMap modified during the merging operation to all distributed storage nodes 300, and pushes the modified BSN list to the clients 100;
Step S1003: all distributed storage nodes 300 unmount, according to the received BSN list, the distributed storage nodes 300 to be removed that are mapped to local iSCSI Targets;
Step S1004: the client 100 makes a judgment according to the received BSN list; if the distributed storage node 300 to which the client 100 is mounted needs to be removed, the client unmounts it and reselects another distributed storage node 300 for mounting.
Cluster capacity reduction is implemented on the basis of the Box merging operation; the method is simple and easy to implement.
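The client-side decision in step S1004 can be sketched as follows; the function and node names are illustrative assumptions only.

```python
# Illustrative sketch of step S1004: after a capacity reduction, a client
# checks whether the node it is mounted to is still in the pushed BSN list.
# If not, it unmounts and reselects another surviving node.

def remount_if_removed(mounted_node, bsn_list):
    """bsn_list: node ids still in the cluster after the reduction."""
    if mounted_node in bsn_list:
        return mounted_node        # current mount is still valid
    if not bsn_list:
        raise RuntimeError("no distributed storage node left to mount")
    return bsn_list[0]             # unmount and reselect another node
```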
As shown in Fig. 12, an embodiment of a distributed storage node 300 discovering a faulty disk in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The process by which a distributed storage node 300 discovers a faulty disk specifically includes the following steps:
Step S1101: the distributed storage node 300 finds that the disk of the distributed storage node 300 corresponding to the Box to be accessed is abnormal, and isolates the faulty disk;
The disk abnormality covers all cases in which the disk cannot be accessed normally.
Isolating the faulty disk means that the distributed storage node 300 marks the faulty disk as abnormal and suspends access operations to it.
Step S1102: the distributed storage node 300 sends a faulty-disk message to the cluster management server 200;
Step S1103: after the cluster management server 200 receives the faulty-disk message, it sets the BoxStatus of all Boxes on the faulty disk to the degraded state;
The degraded state means that the BoxStatus is readable but not writable.
Step S1104: after determining that it is not a downtime situation, the cluster management server 200 issues a disk-replacement message;
After receiving the disk fault notification, the cluster management server 200 performs downtime detection on the related disk. After determining that the faulty disk is not a downtime situation, the cluster management server 200 notifies the administrator through the peripheral OSS platform to replace the disk; if a downtime situation is determined, the process turns to the downtime handling steps shown in Fig. 15.
Step S1105: the cluster management server 200 pushes all BoxStatus information on the faulty disk to all distributed storage nodes 300.
By setting the BoxStatus of all Boxes on the faulty disk, the data on that disk can be quickly protected from modification as soon as a disk abnormality is found, thereby avoiding data inconsistency.
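Steps S1103 and S1105 amount to a single status sweep over the faulty disk's Boxes. The sketch below is illustrative: the dictionary shapes (`box_status`, `box_disk_map`) are assumptions standing in for the system's real structures.

```python
# Illustrative sketch of steps S1103/S1105: on a faulty-disk message, the
# cluster management server marks every Box on that disk as degraded
# (readable but not writable) and collects the delta to push to all nodes.

NORMAL, DEGRADED = "normal", "degraded"   # degraded = read-only

def degrade_boxes_on_disk(box_status, box_disk_map, faulty_disk):
    """box_status: {box_id: state}; box_disk_map: {box_id: disk_id}."""
    changed = {}
    for box_id, disk_id in box_disk_map.items():
        if disk_id == faulty_disk:
            box_status[box_id] = DEGRADED
            changed[box_id] = DEGRADED
    return changed  # the status delta pushed to all distributed storage nodes
```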
As shown in Fig. 13, an embodiment of a distributed storage node 300 discovering a lost disk in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The process by which a distributed storage node 300 discovers a lost disk specifically includes the following steps:
Step S1201: when the distributed storage node 300 finds that the disk of the distributed storage node 300 corresponding to the Box to be accessed is unreachable, it sends a disk-loss message to the cluster management server 200;
That the disk on the distributed storage node 300 to be accessed is unreachable refers to the case in which the distributed storage node 300 that mounts the SBS disk still fails, after multiple connection attempts, to communicate with the disk on the locally mounted distributed storage node 300 corresponding to the data to be accessed.
Step S1202: after the cluster management server 200 receives the disk-loss message, it sets the BoxStatus of all Boxes on the faulty disk to the degraded state;
Step S1203: after determining that it is not a downtime situation, the cluster management server 200 issues a disk-replacement message;
If a downtime situation is determined, the process turns to the downtime handling steps shown in Fig. 15.
Step S1204: the cluster management server 200 pushes all BoxStatus information on the faulty disk to all distributed storage nodes 300.
By setting the BoxStatus of all Boxes on the faulty disk, the data on that disk can be quickly protected from modification as soon as a disk abnormality is found, thereby avoiding data inconsistency.
As shown in Fig. 14, an embodiment of post-replacement processing in the faulty-disk and lost-disk cases in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The post-replacement processing in the faulty-disk and lost-disk cases specifically includes the following steps:
Step S1301: after the cluster management server 200 receives a disk-replacement completion message, it sends a disk data migration instruction to the distributed storage node 300 whose disk was replaced;
The disk-replacement completion message received by the cluster management server 200 is a message sent to the cluster management server 200 through the peripheral OSS by the administrator after the disk has been replaced in the background.
Step S1302: the distributed storage node 300 whose disk was replaced receives the disk data migration instruction and starts the disk data migration operation, copying the data on the peer disk of its new disk to the new disk;
The disk data migration operation refers to all the steps shown in Fig. 7.
Step S1303: after the data migration is completed, the distributed storage node 300 returns a data migration completion message to the cluster management server 200;
Step S1304: after the cluster management server 200 receives the data migration completion message, it sets all BoxStatus on the new disk to the normal state and pushes the status information to all distributed storage nodes 300.
After the disk data migration is completed, the data on the disk can be quickly restored to the readable and writable state simply by modifying the BoxStatus values; the method is simple and easy to implement.
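Steps S1302 to S1304 can be sketched together: copy from the peer disk, then flip the Boxes back to normal. The sketch is illustrative only; plain dicts stand in for real disks, and the function name is an assumption.

```python
# Illustrative sketch of steps S1302-S1304: after a disk is replaced, the
# node copies all data from the new disk's peer (its mirror under the
# dual-machine backup scheme) onto the new disk, and the Boxes on the new
# disk are then set back to the normal (readable and writable) state.

def rebuild_new_disk(peer_disk, box_status, boxes_on_disk):
    new_disk = dict(peer_disk)           # full copy from the peer disk
    for box_id in boxes_on_disk:
        box_status[box_id] = "normal"    # readable and writable again
    return new_disk
```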
As shown in Fig. 15, an embodiment of downtime handling in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The downtime handling process specifically includes the following steps:
Step S1401: after the client 100 finds during access that a distributed storage node 300 is unreachable, it sends a downtime detection request instruction to the cluster management server 200, and at the same time the client reselects a distributed storage node 300 for access according to the BSN list;
Step S1402: after a distributed storage node 300 finds that the disk to be accessed is abnormal, it sends a disk abnormality notification to the cluster management server 200 and requests downtime detection on the disk involved;
Step S1403: the cluster management server 200 receives the downtime detection request instruction;
Step S1404: according to the downtime detection request instruction, the cluster management server 200 performs heartbeat detection on the involved distributed storage node 300;
When performing heartbeat detection, the cluster management server 200 pings the involved distributed storage node 300 and judges from the pong response whether that node is down. It can be understood that the cluster management server 200 may also start a periodic detection function; once the periodic detection function is started, the cluster management server 200 performs heartbeat detection on all distributed storage nodes 300.
Step S1405: after the heartbeat detection fails, the cluster management server 200 sets all BoxStatus on the downed distributed storage node 300 to the degraded state, and pushes the status information to all distributed storage nodes 300;
Step S1406: the cluster management server 200 sets the information of the downed server in the BSN list to NULL, and pushes the updated BSN list to all distributed storage nodes 300 and clients 100;
Step S1407: the cluster management server 200 sends a downtime notification message.
By modifying the BoxStatus values of the downed distributed storage node 300 and the BSN list, the downed distributed storage node 300 can be isolated; the method is simple and easy to implement.
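Steps S1404 to S1406 can be sketched as one handler. This is an illustrative sketch: `probe` is an assumed ping/pong callable injected so the sketch stays self-contained, and the data structures are stand-ins.

```python
# Illustrative sketch of steps S1404-S1406: the cluster management server
# probes a suspect node; on heartbeat failure it degrades that node's
# Boxes and sets the node's BSN-list entry to NULL (None here).

def handle_downtime(node, probe, box_status, boxes_on_node, bsn_list):
    """probe(node) -> True if the node answers the ping with a pong."""
    if probe(node):
        return False                      # node answered: not down
    for box_id in boxes_on_node:
        box_status[box_id] = "degraded"   # step S1405
    bsn_list[node] = None                 # step S1406: NULL out the entry
    return True                           # caller sends the downtime notice
```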
As shown in Fig. 16, an embodiment of post-recovery processing after downtime in a data processing method of a block storage system applied to cloud computing according to the present invention is illustrated. The post-recovery processing specifically includes the following steps:
Step S1501: the recovered distributed storage node 300 is initialized and waits to receive instructions sent by the cluster management server 200;
The recovered distributed storage node 300 may be the downed distributed storage node 300 itself after repair, or a new distributed storage node 300 that replaces the downed distributed storage node 300. The software suite configured on the new distributed storage node 300 used to replace the downed distributed storage node 300 is completely consistent with that of the downed distributed storage node 300.
Step S1502: after the cluster management server 200 receives a downtime recovery message, it sends a disk data migration instruction to the recovered distributed storage node 300;
The downtime recovery message received by the cluster management server 200 is a message sent through the peripheral OSS or through the command line by the administrator after repairing or replacing the downed distributed storage node 300.
Step S1503: the recovered distributed storage node 300 receives the disk data migration instruction and starts the disk data migration operation;
The disk data migration operation refers to the steps shown in Fig. 7. After the disk data migration instruction sent by the cluster management server 200 is received, data is migrated from the pre-configured peer of the recovered distributed storage node 300 to the recovered distributed storage node 300.
Step S1504: the recovered distributed storage node 300 returns a data migration completion message to the cluster management server 200;
Step S1505: after the cluster management server 200 receives the data migration completion message, it sends a downtime recovery message and pushes the BSN list, BoxDiskMap, and TargetMap to all distributed storage nodes 300;
Step S1506: after all distributed storage nodes 300 receive the downtime recovery message, the recovered distributed storage node 300 mounts the disks on all other distributed storage nodes 300, and all other distributed storage nodes 300 mount the disks on the recovered distributed storage node 300;
Step S1507: the recovered distributed storage node 300 creates an iSCSI Target according to the received TargetMap information;
Step S1508: after the creation is completed, the recovered distributed storage node 300 returns a completion message to the cluster management server 200;
Step S1509: after the cluster management server 200 receives the completion messages returned by all distributed storage nodes 300, it sets all BoxStatus on the recovered distributed storage node 300 to the normal state;
Step S1510: the cluster management server 200 pushes the set BoxStatus information to all distributed storage nodes 300;
Step S1511: the cluster management server 200 updates the BSN list and pushes the updated group list to all clients 100.
The processing steps after downtime recovery are simple and easy to implement, and maintain the consistency of the data in the cluster well.
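The fault-handling embodiments of Figs. 12 to 16 all revolve around the same BoxStatus transitions, which can be summarized as a small state machine. The transition table below is an illustrative assumption distilled from the steps above, not an exhaustive specification.

```python
# Illustrative summary of the BoxStatus transitions used across Figs. 12-16:
# normal -> degraded on a disk fault, disk loss, or node downtime;
# degraded -> normal once replacement/recovery data migration completes;
# normal -> migrating -> normal during an ordinary disk data migration.

ALLOWED = {
    ("normal", "degraded"),    # faulty disk, lost disk, node downtime
    ("degraded", "normal"),    # disk replaced / node recovered, data migrated
    ("normal", "migrating"),   # disk data migration starts
    ("migrating", "normal"),   # disk data migration completes
}

def transition(state, new_state):
    """Apply one BoxStatus change, rejecting transitions not in the table."""
    if (state, new_state) not in ALLOWED:
        raise ValueError(f"illegal BoxStatus change: {state} -> {new_state}")
    return new_state
```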
The above description is only a preferred embodiment of the present invention and is not intended to limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (19)

1. A cloud computing block storage system, characterized in that the system comprises a client, a cluster management server, and a distributed storage node cluster; wherein,
the cluster management server stores a distributed storage node group list;
the distributed storage node cluster comprises multiple distributed storage nodes for providing massive storage space;
the client is configured to obtain the stored distributed storage node group list from the cluster management server, and to directly access any distributed storage node according to the group list;
wherein the cluster management server is further configured to: pre-configure the distributed storage node group list information, the mapping relation information between logic storage units and the grouping of distributed storage nodes and disks, the hash range information of the logic storage units, and the mapping relation information between iSCSI Targets and block storage units, and push all the configured information to all distributed storage nodes.
2. the system as claimed in claim 1, which is characterized in that be mounted with disk on the distributed storage node, the disk Multiple logic storage units including fixed size, each logic storage unit include multiple pieces of storage units of fixed size;
The distributed storage node is also used to:
Disk on every other distributed storage node is mounted to local;If iSCSI Target is reflected with block storage unit Relation information non-empty is penetrated, then create and starts iSCSI Target.
3. The system according to claim 2, characterized in that the cluster management server is further configured to:
according to a create-block-storage-disk request instruction, allocate a new mapping relation between an iSCSI Target and a block storage unit, and push the mapping relations of all allocated iSCSI Targets and block storage units to all distributed storage nodes; and, after receiving the confirmation messages returned by the distributed storage nodes, return the iSCSI Target number corresponding to the newly allocated mapping relation between the iSCSI Target and the block storage unit;
the distributed storage node is further configured to:
create an iSCSI Target according to the mapping relations between iSCSI Targets and block storage units pushed by the cluster management server, and return a confirmation message.
4. The system according to claim 2, characterized in that the client is further configured to:
according to a block storage disk mounting instruction, mount the block storage disk corresponding to the block storage disk mounting instruction on any distributed storage node in the distributed storage node group list.
5. The system according to claim 2, characterized in that the cluster management server is further configured to:
after receiving a disk data migration instruction, determine, according to the disk data migration instruction, the source distributed storage node and source disk information as well as the destination distributed storage node and destination disk information, set the state of the logic storage units corresponding to the source disk to the migrating state, and push that state to all distributed storage nodes; send the disk data migration instruction to the source distributed storage node; after the data migration of the source distributed storage node is completed, set the state of the logic storage units corresponding to the source disk to the normal state and push the normal state to all distributed storage nodes; and, after the data migration of the source distributed storage node is completed, update the mapping relation between logic storage units and the grouping of distributed storage nodes and disks and push it to all distributed storage nodes;
the distributed storage node is further configured to:
when receiving the disk data migration instruction sent by the cluster management server, copy the data on the disk of the source distributed storage node to the corresponding disk of the destination distributed storage node according to the disk data migration instruction.
6. The system according to claim 2, characterized in that the cluster management server is further configured to:
when starting a logic storage unit splitting operation, for each logic storage unit to be split, split the logic storage unit to be split into two new logic storage units according to the pre-configured new hash range of the logic storage units, so that, in numerical order, the two new logic storage units correspond respectively to the first half and the second half of the hash range of the original logic storage unit, and send a data replication instruction to all distributed storage nodes; and, after receiving the data replication completion messages returned by the distributed storage nodes, push the pre-configured new hash range of the logic storage units to all distributed storage nodes;
the distributed storage node is further configured to:
receive the data replication instruction sent by the cluster management server, copy the data in the second half of the hash range of the logic storage unit to be split to the new logic storage unit whose number corresponds to the second half of the hash range of the original logic storage unit, and return a data replication completion message to the cluster management server; wherein the new logic storage unit whose number corresponds to the first half of the hash range of the original logic storage unit already contains, at the time of splitting, the data in the first half of the hash range of the original logic storage unit.
7. The system according to claim 2, characterized in that the cluster management server is further configured to:
when starting a logic storage unit merging operation, for each pair of two logic storage units to be merged that are adjacent in numerical order, merge the two logic storage units to be merged into one new logic storage unit according to the pre-configured new hash range of the logic storage units, and send a data replication instruction to all distributed storage nodes; and, after receiving the data replication completion messages returned by the distributed storage nodes, push the pre-configured new hash range of the logic storage units to all distributed storage nodes;
the distributed storage node is further configured to:
receive the data replication instruction sent by the cluster management server, copy the data in the logic storage unit to be merged with the higher sequence number to the new logic storage unit, and return a data replication completion message to the cluster management server; wherein the new logic storage unit already contains, at the time of merging, the data in the logic storage unit to be merged with the lower sequence number.
8. The system according to claim 6, characterized in that the cluster management server is further configured to:
after receiving an expansion instruction, when there is a newly added distributed storage node, store in the cluster management server the pre-configured group list information of the newly added distributed storage node and the mapping relation information between logic storage units and the grouping of the newly added distributed storage node and disks; push the group list information of all distributed storage nodes, the mapping relation information between all logic storage units and the grouping of distributed storage nodes and disks, and the mapping relation information between all iSCSI Targets and block storage units to all distributed storage nodes; after receiving the confirmation messages returned by the distributed storage nodes, start the logic storage unit splitting operation; and, after the logic storage unit splitting is completed, push the distributed storage node group list to the client;
the distributed storage node is further configured to:
when the distributed storage node is an original distributed storage node, mount the disks on the newly added distributed storage node locally; and, when the distributed storage node is a newly added distributed storage node, mount the disks on all other distributed storage nodes locally, and create and start an iSCSI Target according to the mapping relation information between iSCSI Targets and block storage units.
9. The system according to claim 7, characterized in that the cluster management server is further configured to:
receive a capacity reduction instruction and start the logic storage unit merging operation, wherein the capacity reduction instruction includes the information of the distributed storage nodes to be removed; after the merging operation is completed, modify the distributed storage node group list, push the modified distributed storage node group list and the mapping relation between logic storage units and the grouping of distributed storage nodes and disks modified during the merging operation to all distributed storage nodes, and push the modified distributed storage node group list to the client;
the distributed storage node is further configured to:
unmount, according to the received distributed storage node group list information, the distributed storage nodes to be removed that are mapped to local iSCSI Targets;
the client is further configured to:
make a judgment according to the received distributed storage node group list information; if the distributed storage node to which the client is mounted needs to be removed, unmount it and reselect another distributed storage node for mounting.
10. A data processing method of a cloud computing block storage system, characterized in that the data processing method comprises the following steps:
a client sends a request instruction for obtaining a distributed storage node group list to a cluster management server;
the cluster management server returns the distributed storage node group list to the client according to the request instruction for obtaining the distributed storage node group list;
the client directly performs data interaction with a distributed storage node according to the distributed storage node group list;
wherein the cluster management server stores the pre-configured distributed storage node group list information, the mapping relation information between logic storage units and the grouping of distributed storage nodes and disks, and the hash range information of the logic storage units, and, in the case that the distributed storage nodes have not been restarted, pushes all the information to all distributed storage nodes;
in the case that a distributed storage node has been restarted, the cluster management server pushes the stored distributed storage node group list information, the mapping relation information between logic storage units and the grouping of distributed storage nodes and disks, the hash range information of the logic storage units, and the mapping relation information between iSCSI Targets and block storage units to all distributed storage nodes.
11. The method according to claim 10, characterized in that the data processing method further comprises the following steps:
the distributed storage node maps its local disks to iSCSI Targets according to the configured iSCSI Targets;
the distributed storage node receives the pushed information, and mounts locally, according to the distributed storage node group list information, the disks on all other distributed storage nodes in the cluster;
for the received mapping relation information between iSCSI Targets and block storage units, if the mapping relation between an iSCSI Target and a block storage unit is non-empty, the distributed storage node creates and starts the iSCSI Target.
12. The method according to claim 10, characterized in that the data processing method further comprises the following steps:
the cluster management server receives a create-block-storage-disk request instruction;
according to the create-block-storage-disk request instruction, the cluster management server allocates for the client a new mapping relation between an iSCSI Target and a block storage unit, and then pushes the mapping relation information of all allocated iSCSI Targets and block storage units to all distributed storage nodes;
the distributed storage node receives the pushed mapping relation information between iSCSI Targets and block storage units; if the mapping relation between an iSCSI Target and a block storage unit is non-empty, it creates the iSCSI Target and returns a confirmation message to the cluster management server;
after the cluster management server receives the confirmation message, it returns the iSCSI Target number corresponding to the newly allocated mapping relation between the iSCSI Target and the block storage unit.
13. The method according to claim 10, characterized in that the data processing method further comprises the following steps:
the client sends the request instruction for obtaining the distributed storage node group list to the cluster management server according to the cluster management server access address configured locally;
after receiving the request instruction, the cluster management server returns the distributed storage node group list to the client;
the client selects one distributed storage node from the distributed storage node group list on which to mount the created block storage disk;
if the selected distributed storage node is inactive or the corresponding iSCSI Target cannot be found, the client selects another distributed storage node.
14. The method as claimed in claim 10, wherein the data processing method further comprises the following steps:
The cluster management server receives a block storage disk recycling request instruction;
The cluster management server deletes the mapping relation between the iSCSI Target corresponding to the block storage disk and its block storage unit, and pushes the mapping relation information of all other existing iSCSI Targets and block storage units to all distributed storage nodes;
The distributed storage node receives the mapping relation information of the iSCSI Targets and block storage units, deletes the iSCSI Targets whose mapping relations to block storage units no longer exist, and returns a confirmation message to the cluster management server;
After receiving the confirmation message, the cluster management server returns a confirmation message indicating that the recycling succeeded.
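The node-side step of claim 14 is a reconciliation: compare the locally created iSCSI Targets against the freshly pushed mapping table and drop whatever is no longer mapped. A sketch under that reading, with illustrative names:

```python
# Hypothetical sketch of the recycling step in claim 14: after the cluster
# management server removes a disk's mapping, each node reconciles its local
# iSCSI Targets against the pushed table and deletes any target whose
# mapping no longer exists.

def reconcile_targets(local_targets, pushed_mappings):
    """Return (kept, deleted) target-number sets given the pushed table."""
    valid = set(pushed_mappings)          # target numbers that still have a mapping
    deleted = local_targets - valid       # targets to tear down locally
    kept = local_targets & valid
    return kept, deleted
```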
15. The method as claimed in claim 10, wherein the data processing method further comprises the following steps:
The cluster management server receives a disk data migration instruction and determines, according to the instruction, the corresponding source distributed storage node and source disk, and the destination distributed storage node and destination disk;
The cluster management server sets the state of all logical storage units corresponding to the disk whose data is to be migrated to a migrating state, and pushes the migrating-state information to all distributed storage nodes;
The cluster management server issues the disk data migration instruction to the source distributed storage node;
The source distributed storage node receives the disk data migration instruction, copies the data on the disk to be migrated, as a whole disk, to the corresponding disk on the destination distributed storage node according to the instruction, and returns a migration completion message to the cluster management server;
After receiving the migration completion message, the cluster management server modifies the mapping relation information between the logical storage units corresponding to the migrated data and the distributed storage node group and disk to point to the destination distributed storage node's disk, and pushes the updated mapping relation information to all distributed storage nodes;
The cluster management server sets the state of all logical storage units corresponding to the migrated data back to the normal state, and pushes the normal-state information to all distributed storage nodes.
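The migration flow of claim 15 is effectively a small state machine over the affected logical storage units: mark migrating, copy the whole disk, repoint the unit-to-disk mapping, restore the normal state. A minimal sketch under that reading, with all names (`migrate_disk`, the dict layouts) chosen for illustration:

```python
# Hypothetical sketch of the migration flow in claim 15: units on the source
# disk are marked 'migrating', the disk's data is copied as a whole to the
# destination disk, the unit -> disk mapping is repointed at the destination,
# and the units are returned to the 'normal' state.

def migrate_disk(unit_states, unit_to_disk, disks, src, dst):
    units = [u for u, d in unit_to_disk.items() if d == src]
    for u in units:                       # 1. mark affected units as migrating
        unit_states[u] = "migrating"
    disks[dst].extend(disks[src])         # 2. copy data as a whole disk
    disks[src].clear()
    for u in units:                       # 3. repoint mapping, restore state
        unit_to_disk[u] = dst
        unit_states[u] = "normal"
```

In the claimed system each of these steps is accompanied by a push to all distributed storage nodes, which the sketch omits for brevity.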
16. The method as claimed in claim 10, wherein the data processing method further comprises the following steps:
The cluster management server splits each logical storage unit to be split into two new logical storage units according to the pre-configured new hash range of each logical storage unit, such that the two new logical storage units correspond, in numerical order, to the first half and the second half of the hash range of the original logical storage unit, and sends a data replication instruction to all distributed storage nodes;
The distributed storage node receives the data replication instruction sent by the cluster management server, copies the data in the second half of the hash range of the logical storage unit to be split to the new logical storage unit whose number corresponds to the second half of the hash range of the original logical storage unit, and returns a data replication completion message to the cluster management server;
After receiving the data replication completion messages returned by the distributed storage nodes, the cluster management server pushes the pre-configured new hash ranges of the logical storage units to all distributed storage nodes.
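The split in claim 16 halves a unit's hash range at the midpoint. A minimal sketch, assuming half-open integer ranges `[lo, hi)` and a `{hash: payload}` representation of the unit's data (both are modeling choices, not from the patent):

```python
# Hypothetical sketch of the split in claim 16: a logical storage unit
# covering hash range [lo, hi) becomes two units covering the first and
# second half; data whose hash falls in the second half belongs to the
# new (later-numbered) unit.

def split_unit(lo, hi, data):
    """data: {hash_value: payload}. Returns two (lo, hi, data) units."""
    mid = (lo + hi) // 2
    first = {h: v for h, v in data.items() if lo <= h < mid}
    second = {h: v for h, v in data.items() if mid <= h < hi}
    return (lo, mid, first), (mid, hi, second)
```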
17. The method as claimed in claim 10, wherein the data processing method further comprises the following steps:
The cluster management server merges, for each pair of adjacent logical storage units to be merged in numerical order, the two logical storage units into one new logical storage unit according to the pre-configured new hash ranges of the logical storage units, and sends a data replication instruction to all distributed storage nodes;
The distributed storage node receives the data replication instruction sent by the cluster management server, copies the data of the later-numbered logical storage unit of the pair to be merged to the new logical storage unit, and returns a data replication completion message to the cluster management server;
After receiving the data replication completion messages returned by the distributed storage nodes, the cluster management server pushes the pre-configured new hash ranges of the logical storage units to all distributed storage nodes.
18. The method as claimed in claim 16, wherein the data processing method further comprises the following flow:
A newly added distributed storage node is initialized and waits to receive instructions sent from the cluster management server;
After receiving a capacity expansion instruction, the cluster management server updates the information of the newly added distributed storage nodes into the distributed storage node group list and into the mapping relations between logical storage units and distributed storage node groups and disks;
The cluster management server pushes the updated distributed storage node group list, the mapping relations between logical storage units and distributed storage node groups and disks, and the mapping relations between iSCSI Targets and block storage units to all distributed storage nodes;
The distributed storage nodes receive all of the pushed information;
The newly added distributed storage node mounts the disks on every other distributed storage node in the cluster locally according to the updated distributed storage node group list information, and every other distributed storage node mounts the disks on all newly added distributed storage nodes locally according to the updated distributed storage node group list information; after completing disk mounting, all distributed storage nodes return a confirmation message to the cluster management server;
The newly added distributed storage node creates and starts iSCSI Targets according to the mapping relation information between iSCSI Targets and block storage units, and returns a confirmation message to the cluster management server;
After the cluster management server receives the confirmation messages returned by all distributed storage nodes, it starts the logical storage unit splitting operation;
After the splitting operation is completed, the cluster management server pushes the distributed storage node group list to all clients.
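The cross-mounting step of claim 18 requires every node to end up with every other node's disks mounted locally. Given the old and new node sets, the mounts each side must add can be computed as below; the function name and set-based representation are illustrative, not from the patent:

```python
# Hypothetical sketch of the cross-mount step in claim 18: after a capacity
# expansion, a new node must mount the disks of every other node, and every
# existing node must mount the disks of the newcomers. This computes the
# mount plan from the old and new node sets.

def mounts_to_add(old_nodes, new_nodes):
    """Return {node: set of peers whose disks it must newly mount}."""
    all_nodes = old_nodes | new_nodes
    plan = {}
    for n in new_nodes:                  # new node mounts everyone else
        plan[n] = all_nodes - {n}
    for n in old_nodes:                  # existing nodes mount the newcomers
        plan[n] = new_nodes - {n}
    return plan
```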
19. The method as claimed in claim 17, wherein the data processing method further comprises the following steps:
The cluster management server receives a capacity reduction instruction and determines, according to the instruction, the logical storage units corresponding to the distributed storage nodes to be removed;
The cluster management server starts the logical storage unit merging operation;
After the merging operation is completed, the cluster management server modifies the distributed storage node group list, pushes the modified distributed storage node group list together with the mapping relations between logical storage units and distributed storage node groups and disks modified during the merging operation to all distributed storage nodes, and pushes the modified distributed storage node group list to the clients;
The distributed storage node unmounts, according to the received distributed storage node group list information, the iSCSI Targets mapped to the distributed storage nodes to be removed;
The client judges, according to the received distributed storage node group list information, whether the distributed storage node it is mounted to is to be removed; if so, it unmounts and reselects another distributed storage node to mount.
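The client-side check in claim 19 can be reduced to a single decision: is the mounted node still in the pushed group list, and if not, which surviving node should be remounted? A sketch, with the deterministic sorted-order fallback being an illustrative policy choice rather than anything the patent specifies:

```python
# Hypothetical sketch of the client-side check in claim 19: when the pushed
# node group list no longer contains the node a client is mounted to, the
# client unmounts and reselects a surviving node (here: the first node in
# sorted order, purely for determinism).

def remount_if_removed(mounted, surviving_nodes):
    if mounted in surviving_nodes:
        return mounted                   # current mount is still valid
    # Current node was removed by the capacity reduction: pick another.
    return next(iter(sorted(surviving_nodes)), None)
```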
CN201510308773.0A 2015-06-05 2015-06-05 Block storage system and method applied to cloud computing Active CN106302607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510308773.0A CN106302607B (en) 2015-06-05 2015-06-05 Block storage system and method applied to cloud computing

Publications (2)

Publication Number Publication Date
CN106302607A CN106302607A (en) 2017-01-04
CN106302607B true CN106302607B (en) 2019-08-16

Family

ID=57659477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510308773.0A Active CN106302607B (en) 2015-06-05 2015-06-05 Block storage system and method applied to cloud computing

Country Status (1)

Country Link
CN (1) CN106302607B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107181757A (en) * 2017-06-27 2017-09-19 新浪网技术(中国)有限公司 Support Memcache Proxy Methods, the apparatus and system of certification and protocol conversion
CN109407964A (en) * 2017-08-18 2019-03-01 阿里巴巴集团控股有限公司 A kind of moving method of data, device and equipment
CN107391050A (en) * 2017-09-14 2017-11-24 郑州云海信息技术有限公司 A kind of data migration method, system, device and computer-readable recording medium
CN108170381B (en) * 2017-12-28 2021-01-01 湖南国科微电子股份有限公司 Method for migrating data from SLC Block to XLC Block
CN108199896A (en) * 2018-01-16 2018-06-22 中电福富信息科技有限公司 Distributed message delivery system based on RabbitMQ
CN110198269B (en) * 2018-04-03 2021-10-08 腾讯科技(深圳)有限公司 Route synchronization system, method and related device for distributed cluster
CN108804711B (en) * 2018-06-27 2022-12-06 郑州云海信息技术有限公司 Data processing method and device and computer readable storage medium
CN111666035B (en) * 2019-03-05 2023-06-20 阿里巴巴集团控股有限公司 Management method and device of distributed storage system
CN112448985B (en) * 2019-09-02 2022-07-15 阿里巴巴集团控股有限公司 Distributed system, network processing method and device and electronic equipment
CN111125011B (en) * 2019-12-20 2024-02-23 深信服科技股份有限公司 File processing method, system and related equipment
CN111026720B (en) * 2019-12-20 2023-05-12 深信服科技股份有限公司 File processing method, system and related equipment
CN111857603B (en) * 2020-07-31 2022-12-02 重庆紫光华山智安科技有限公司 Data processing method and related device
CN112463364A (en) * 2020-11-09 2021-03-09 苏州浪潮智能科技有限公司 Packet-based distributed storage SCSI target service management method and system
CN113986522A (en) * 2021-08-29 2022-01-28 中盾创新数字科技(北京)有限公司 Load balancing-based distributed storage server capacity expansion system
CN116301561A (en) * 2021-12-14 2023-06-23 中兴通讯股份有限公司 Data processing method, system, node and storage medium of distributed system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753617A (en) * 2009-12-11 2010-06-23 中兴通讯股份有限公司 Cloud storage system and method
CN102546823A (en) * 2012-02-18 2012-07-04 南京云创存储科技有限公司 File storage management system of cloud storage system
CN102594852A (en) * 2011-01-04 2012-07-18 中国移动通信集团公司 Data access method, node and system
CN104572344A (en) * 2013-10-29 2015-04-29 杭州海康威视系统技术有限公司 Backup Method and system for multi-cloud data
CN104679665A (en) * 2013-12-02 2015-06-03 中兴通讯股份有限公司 Method and system for achieving block storage of distributed file system
CN105025053A (en) * 2014-04-24 2015-11-04 苏宁云商集团股份有限公司 Distributed file upload method based on cloud storage technology and system

Also Published As

Publication number Publication date
CN106302607A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN106302607B (en) Block storage system and method applied to cloud computing
CN112099918B (en) Live migration of clusters in a containerized environment
US11445019B2 (en) Methods, systems, and media for providing distributed database access during a network split
EP3433741B1 (en) Hybrid garbage collection in a distrubuted storage system
US10713134B2 (en) Distributed storage and replication system and method
US20230308507A1 (en) Commissioning and decommissioning metadata nodes in a running distributed data storage system
US9639437B2 (en) Techniques to manage non-disruptive SAN availability in a partitioned cluster
CN103354923B (en) A kind of data re-establishing method, device and system
US9733989B1 (en) Non-disruptive load-balancing of virtual machines between data centers
RU2653292C2 (en) Service migration across cluster boundaries
CN111078121A (en) Data migration method, system and related components of distributed storage system
US11314459B2 (en) Distributed metadata management in a distributed storage system
KR101551706B1 (en) System and method for configuring virtual machines having high availability in cloud environment, recording medium recording the program thereof
CN113010496B (en) Data migration method, device, equipment and storage medium
US9535629B1 (en) Storage provisioning in a data storage environment
JP2014044553A (en) Program, information processing device, and information processing system
CN104517067B (en) Access the method, apparatus and system of data
CN110535947A (en) A kind of memory device set group configuration node switching method, device and equipment
CN112261097B (en) Object positioning method for distributed storage system and electronic equipment
US20240134526A1 (en) Virtual container storage interface controller
US20230409215A1 (en) Graph-based storage management
JP5713412B2 (en) Management device, management system, and management method
CN108153484A (en) Shared storage system and its management method under a kind of virtualized environment
US20210064596A1 (en) Entry transaction consistent database system
CN115168318A (en) Control method and device for data migration, storage medium and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210924

Address after: 35th Floor, Tencent Building, High-tech Zone, Nanshan District, Shenzhen 518000, Guangdong Province

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: Room 403 East, Block 2, SEG Science and Technology Park, Zhenxing Road, Futian District, Shenzhen 518000, Guangdong Province

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
