CN108509150B - Data processing method and device - Google Patents
- Publication number
- CN108509150B CN108509150B CN201810192378.4A CN201810192378A CN108509150B CN 108509150 B CN108509150 B CN 108509150B CN 201810192378 A CN201810192378 A CN 201810192378A CN 108509150 B CN108509150 B CN 108509150B
- Authority
- CN
- China
- Prior art keywords
- storage
- data
- unit
- storage unit
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of the present application disclose a data processing method and a data processing apparatus. The method comprises the following steps: acquiring first data stored in a first storage unit and storing the first data in a second storage unit, wherein the first storage unit is a storage unit in a target server and the second storage unit is a storage unit in a capacity expansion server; after the first data is successfully stored in the second storage unit, acquiring the current data volume of the second storage unit and the current data volume of the first storage unit; confirming that the current data volume of the second storage unit is smaller than the current data volume of the first storage unit; and acquiring second data stored in the first storage unit and storing the second data in the second storage unit, wherein the second data is data that the client requested to store in the second storage unit while the capacity expansion server was storing the first data. With the embodiments of the present application, data written during the migration process can be prevented from being overwritten by the data being migrated, improving data reliability.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data processing method and apparatus.
Background
A distributed storage system stores data across a plurality of servers in a distributed manner. Taking the schematic structural diagram of the distributed storage system shown in fig. 1 as an example, the system includes servers 1-3, each containing three storage disks (e.g., storage disk 0, storage disk 1, and storage disk 2). Before data can be stored in the distributed storage system, partitioning must be performed; for example, storage disk 0 in server 1, storage disk 1 in server 2, and storage disk 2 in server 3 constitute partition 1, while storage disk 1 in server 1, storage disk 0 in server 2, and storage disk 0 in server 3 constitute partition 2. When the distributed storage system receives a data storage request, a partition may be selected and the data stored to each storage disk contained in the selected partition.
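The partitioned replication just described can be sketched as follows. This is a hypothetical Python model, not an implementation from the patent; the partition table, server and disk names, and the `store` function are all illustrative.

```python
# Hypothetical model of the partition layout in fig. 1: each partition
# groups one storage disk from each of several servers, and a write is
# replicated to every disk in the selected partition.

partitions = {
    "partition1": [("server1", "disk0"), ("server2", "disk1"), ("server3", "disk2")],
    "partition2": [("server1", "disk1"), ("server2", "disk0"), ("server3", "disk0")],
}

def store(data, partition_id):
    """Replicate data to every storage disk in the selected partition
    (the tuple list stands in for the actual network writes)."""
    return [(server, disk, data) for server, disk in partitions[partition_id]]
```

A request routed to partition 1 thus produces one replica per member disk, which is why a single disk being replaced during expansion forces a background migration of its replica.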
When the storage capacity of the distributed storage system is insufficient, capacity expansion is required. During capacity expansion, data must be distributed evenly across the servers to achieve load balancing, so data migration is performed in the background. Taking fig. 1 as an example, if server 4 is added for capacity expansion, the load balancing calculation replaces storage disk 2 of server 3 in partition 1 with storage disk 1 of server 4; that is, the data stored in storage disk 2 of server 3 is migrated to storage disk 1 of server 4. If, during this migration, a client has data to write to storage disk 1 of server 4, and the data the client requests to store and the data being migrated are directed to the same object, how to prevent the data written during migration from being overwritten by the data being migrated is a pressing technical problem.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a data processing method and apparatus that can prevent data written during a migration process from being overwritten by the data being migrated, improving data reliability.
In a first aspect, an embodiment of the present application provides a data processing method in which a capacity expansion server may obtain first data stored in a first storage unit and store the first data in a second storage unit. After the first data is successfully stored in the second storage unit, the capacity expansion server obtains the current data volume of the second storage unit and the current data volume of the first storage unit. When the capacity expansion server confirms that the current data volume of the second storage unit is smaller than the current data volume of the first storage unit, it obtains the second data stored in the first storage unit and stores the second data in the second storage unit.
The capacity expansion server may include at least one storage disk, and each storage disk may include at least one storage unit. The first storage unit is a storage unit in a first storage disk of the target server. The second storage unit is a storage unit in a second storage disk of the capacity expansion server. The target server needs to migrate data to the capacity expansion server.
The second data is data that the client requested to store in the second storage unit while the capacity expansion server was storing the first data.
In this technical scheme, when a data storage request from a client is received during data migration, the capacity expansion server can record, for the first storage unit, the sum of the data volume of the first data to be migrated from the first storage unit to the second storage unit and the data volume of the second data the client requests to store. After the migration of the first data completes, the capacity expansion server can obtain from the first storage unit the second data the client requested to store and store it in the second storage unit. In this way, the data written during the migration process is not overwritten by the data being migrated, improving data reliability.
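The first-aspect flow can be sketched with a minimal append-only model. This is a hypothetical sketch, not the patented implementation: each storage unit is modeled as a list of appended chunks, where the source unit already contains both the migrated first data and any second data the client wrote during migration.

```python
def catch_up(first_unit, second_unit):
    """Hypothetical sketch of the first-aspect flow: first_unit models the
    source storage unit (appended chunks, including second data written by
    the client during migration); second_unit holds the chunks already
    migrated. If the destination's current data volume is smaller than the
    source's, append only the missing tail -- so the migrated data can
    never overwrite the second data written during migration."""
    if len(second_unit) < len(first_unit):
        second_unit.extend(first_unit[len(second_unit):])
    return second_unit
```

Because both units are append-only, the volume comparison alone is enough to identify exactly which chunks (the tail) still need to be copied.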
Optionally, before obtaining the first data stored in the first storage unit, the capacity expansion server may send a unit identifier obtaining request to the main server and receive a unit identifier set from the main server. The capacity expansion server may then obtain the first data stored in the first storage unit corresponding to a first unit identifier and store the first data in the second storage unit corresponding to that first unit identifier.
The unit identifier obtaining request carries the partition identifier of the partition to which the second storage disk belongs. The main server contains the main storage disk of the partition to which the first storage disk belongs. The unit identifier set comprises the unit identifiers of all storage units contained in the main storage disk of the partition corresponding to the partition identifier. The unit identifiers of the storage units contained in the first storage disk match those of the main storage disk one to one, as do the unit identifiers of the storage units contained in the second storage disk. For example, if the main storage disk contains ten storage units with unit identifiers plog1-plog10, then the first storage disk also contains ten storage units identified plog1-plog10, and the second storage disk likewise contains ten storage units identified plog1-plog10; the capacity expansion server may obtain the first data stored in plog1 of the first storage disk and store it in plog1 of the second storage disk.
Optionally, after the capacity expansion server obtains the first data stored in the first storage unit corresponding to the first unit identifier and stores the first data in the second storage unit corresponding to the first unit identifier, the capacity expansion server may further delete the first unit identifier from the unit identifier set.
Optionally, before obtaining the current data volume of the second storage unit and the current data volume of the first storage unit, the capacity expansion server may receive a first data storage request from the client, where the first data storage request carries the first unit identifier. The capacity expansion server determines that the first unit identifier exists in the unit identifier set, obtains the data volume of the first storage unit before data migration, and adds that pre-migration data volume to the data volume of the second data to obtain the current data volume of the first storage unit.
The first data storage request indicates that the client requests to store the second data in the second storage unit corresponding to the first unit identifier.
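The bookkeeping in this optional step is simple accumulation; a minimal sketch, assuming data volumes are tracked as plain byte counts (the function name and signature are hypothetical):

```python
def current_source_volume(pre_migration_volume, second_data_volumes):
    """Hypothetical accounting sketch: the current data volume of the
    first storage unit equals its data volume before migration plus the
    volumes of all second data stored by clients during migration."""
    return pre_migration_volume + sum(second_data_volumes)
```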
Optionally, when the second data is not successfully stored in any storage disk contained in the partition to which the second storage disk belongs, the capacity expansion server may send a storage failure message to the service layer. The capacity expansion server then receives a second data storage request through the persistence layer, where the second data storage request carries a target partition identifier and a second unit identifier, and stores the second data in a third storage unit corresponding to the second unit identifier.
The partition corresponding to the target partition identifier is different from the partition to which the second storage disk belongs, and each storage disk contained in the partition corresponding to the target partition identifier can store data. The third storage unit is a storage unit in a third storage disk of the capacity expansion server, and the partition identifier of the partition to which the third storage disk belongs is a target partition identifier.
In a second aspect, an embodiment of the present application provides a data processing method in which a capacity expansion server may obtain first data stored in a first storage unit and store the first data in a second storage unit. While storing the first data, the capacity expansion server receives a first data storage request from the client, where the first data storage request carries the first unit identifier of the second storage unit. The capacity expansion server sends an error message to the service layer and, in response to the error message, allocates storage disk space of the second storage disk to a fourth storage unit created by the service layer. The capacity expansion server then receives a third data storage request through the persistence layer, where the third data storage request carries a third unit identifier of the fourth storage unit, and stores the second data in the fourth storage unit corresponding to the third unit identifier.
The capacity expansion server may include at least one storage disk, and each storage disk may include at least one storage unit. The first storage unit is a storage unit in a first storage disk of the target server. The second storage unit is a storage unit in a second storage disk of the capacity expansion server. The target server needs to migrate data to the capacity expansion server.
In this technical scheme, when a data storage request from a client is received during data migration, the capacity expansion server can store the second data carried by the request in the newly created fourth storage unit. This prevents the data written during migration from being overwritten by the data being migrated, achieves mutual exclusion between the written data and the migrated data, and improves data reliability.
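The second-aspect redirection can be sketched as follows. This is a hypothetical model: unit identifiers are strings, storage is a dictionary, and the way the new unit identifier is generated is purely illustrative.

```python
def handle_write_during_migration(migrating_unit_ids, unit_id, second_data, storage):
    """Hypothetical sketch of the second-aspect flow: if the client's write
    targets a unit that is still being migrated, report an error to the
    service layer, which creates a fresh (fourth) storage unit, and store
    the second data there instead. Otherwise write to the requested unit."""
    if unit_id in migrating_unit_ids:
        # Error path: the service layer creates a fourth storage unit and
        # the persistence layer stores the second data under its new id.
        new_id = f"{unit_id}_new{len(storage)}"  # stand-in for the third unit identifier
        storage[new_id] = [second_data]
        return new_id
    storage.setdefault(unit_id, []).append(second_data)
    return unit_id
```

Mutual exclusion falls out naturally: the migrating unit and the newly created unit have different identifiers, so the two write streams can never touch the same unit.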
Optionally, before obtaining the first data stored in the first storage unit, the capacity expansion server may send a unit identifier obtaining request to the main server and receive a unit identifier set from the main server. The capacity expansion server may then obtain the first data stored in the first storage unit corresponding to a first unit identifier and store it in the second storage unit corresponding to that identifier.
The unit identifier obtaining request carries the partition identifier of the partition to which the second storage disk belongs, and the main server contains the main storage disk of the partition to which the first storage disk belongs. The unit identifier set comprises the unit identifiers of all storage units contained in the main storage disk of the partition corresponding to the partition identifier; the unit identifiers of the storage units contained in the first storage disk match those of the main storage disk one to one, as do the unit identifiers of the storage units contained in the second storage disk.
Optionally, after the capacity expansion server obtains the first data stored in the first storage unit corresponding to the first unit identifier and stores the first data in the second storage unit corresponding to the first unit identifier, the capacity expansion server may further delete the first unit identifier from the unit identifier set.
Optionally, when the second data is not successfully stored in any storage disk contained in the partition to which the second storage disk belongs, the capacity expansion server may send a storage failure message to the service layer, receive a second data storage request through the persistence layer, and store the second data in the third storage unit corresponding to the second unit identifier.
The second data storage request carries a target partition identifier and a second unit identifier, the partition corresponding to the target partition identifier is different from the partition to which the second storage disk belongs, and each storage disk contained in the partition corresponding to the target partition identifier can store data. The third storage unit is a storage unit in a third storage disk of the capacity expansion server, and the partition identifier of the partition to which the third storage disk belongs is a target partition identifier.
In a third aspect, an embodiment of the present application provides a non-volatile computer storage medium, where the computer storage medium stores a program, and when the program is executed by a server, the server executes the method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a non-volatile computer storage medium, where the computer storage medium stores a program, and when the program is executed by a server, the server executes the method according to any one of the second aspects.
In a fifth aspect, an embodiment of the present application provides an apparatus having the function of implementing the capacity expansion server behavior in the method examples of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more units or modules corresponding to the function.
In one design, the apparatus may structurally include a receiving module and a processing module, where the processing module is configured to support the capacity expansion server to perform corresponding functions in any one of the methods of the first aspect. The receiving module is used for supporting communication between the capacity expansion server and other servers.
In a sixth aspect, an embodiment of the present application provides an apparatus having the function of implementing the capacity expansion server behavior in the method examples of the second aspect. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more units or modules corresponding to the function.
In one design, the apparatus may structurally include a receiving module, a processing module and a sending module, where the processing module is configured to support the capacity expansion server to execute corresponding functions in any one of the methods in the second aspect. The receiving module and the sending module are used for supporting communication between the capacity expansion server and other servers.
In a seventh aspect, an embodiment of the present application provides a computer program product containing instructions, which when run on a server, cause the server to perform the method of any one of the first aspect.
In an eighth aspect, embodiments of the present application provide a computer program product comprising instructions that, when executed on a server, cause the server to perform the method of any of the second aspects.
In a ninth aspect, embodiments of the present application provide a server, where the server includes a receiver, a processor, a memory device, and at least one storage disk, and each storage disk includes at least one storage unit, which is used to implement the functions referred to in the first aspect, for example, to generate or process data and/or information referred to in the method described above.
In a tenth aspect, embodiments of the present application provide a server, where the server includes a receiver, a transmitter, a processor, a memory device, and at least one storage disk, where each storage disk includes at least one storage unit, and is used to implement the functions referred to in the second aspect, for example, to generate or process data and/or information referred to in the foregoing method.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the background art, the drawings required by the embodiments or the background art are briefly described below.
Fig. 1 is a schematic structural diagram of a distributed storage system disclosed in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a server disclosed in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a distributed storage system disclosed in another embodiment of the present application;
Fig. 4 is a schematic flow chart of a data processing method disclosed in an embodiment of the present application;
Fig. 5 is a schematic illustration of data migration disclosed in an embodiment of the present application;
Fig. 6 is a schematic flow chart of a data processing method disclosed in another embodiment of the present application;
Fig. 7 is a schematic structural diagram of a data processing apparatus disclosed in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a server disclosed in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a data processing apparatus disclosed in another embodiment of the present application;
Fig. 10 is a schematic structural diagram of a server disclosed in another embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
In order to better understand a data processing method and apparatus disclosed in the embodiments of the present application, a server implemented in the present application is first described below. Referring to fig. 2, fig. 2 is a schematic structural diagram of a server disclosed in an embodiment of the present application, where the server may include a Service Layer (Service Layer) and a Persistence Layer (Persistence Layer).
The service layer interacts with the client, that is, it provides foreground services externally. A foreground service may be a service through which the client requests to read or write data, such as an S3 Get/Put service; S3 is Amazon's object storage service, and an object created in Amazon's object storage is referred to as an S3 object.
The persistence layer provides persistent data services to the service layer, such as storing S3 objects. The interface between the service layer and the persistence layer communicates at the granularity of a plog (i.e., a storage unit): after the persistence layer obtains data from the service layer, it stores the data in a particular plog. A plog is a large append-only storage unit; it supports only create, write, append, and delete operations, and data cannot be modified in place. For example, when the current data volume of a plog is 10 MB and 1 MB of data is requested to be stored in it, the requested data is written after the tail of the data already stored in the plog.
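The append-only contract just described can be captured in a few lines. A minimal sketch, assuming a plog is modeled as an in-memory byte buffer; the class and method names are hypothetical, not from the patent:

```python
class Plog:
    """Minimal append-only storage-unit model (hypothetical sketch):
    supports create, tail-append writes, and reads by offset -- never
    in-place modification of existing data."""

    def __init__(self):
        self._data = bytearray()

    def append(self, chunk: bytes) -> int:
        """Write at the tail and return the offset where the chunk starts."""
        offset = len(self._data)
        self._data.extend(chunk)
        return offset

    def size(self) -> int:
        """Current data volume of the unit."""
        return len(self._data)

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._data[offset:offset + length])
```

Because every write lands strictly after the existing tail, a writer can never clobber previously stored bytes, which is the property the migration schemes in this application rely on.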
Illustratively, the data capacity of a plog is 1 GB. The service layer may store multiple foreground business objects, for example multiple S3 objects, in one plog of the persistence layer; this is only an example and is not limited to S3 objects, as one plog may store multiple objects created in any object storage system. When multiple objects are written into a plog, in order to locate a specific object, the object ID and the object's offset and length within the plog are recorded in the object metadata (for example, a redo log); the object can then be found precisely from its ID, offset, and length.
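The (object ID, offset, length) indexing described above can be sketched as follows. This is a hypothetical illustration: the log is a plain byte buffer and the metadata index is a dictionary, standing in for the redo-log record.

```python
def put_object(log, index, object_id, payload):
    """Append one object to a shared append-only log and record its
    (offset, length) under its ID in the object metadata (hypothetical)."""
    offset = len(log)
    log.extend(payload)
    index[object_id] = (offset, len(payload))

def get_object(log, index, object_id):
    """Locate an object exactly by its recorded offset and length."""
    offset, length = index[object_id]
    return bytes(log[offset:offset + length])
```

Packing many small objects into one large plog this way amortizes per-unit overhead while the metadata keeps each object individually addressable.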
For a foreground-service write operation, the service layer can use the append-plog interface provided by the persistence layer to write data into a particular plog. If no plog is available, the service layer may call the persistence layer's interface function to create one. The service layer may select a partition to allocate disk space for the created plog. Taking fig. 1 as an example, the service layer may create 15 plogs and allocate plogs 1-5 to storage disk 0 of server 1, plogs 6-10 to storage disk 1 of server 2, and plogs 11-15 to storage disk 1 of server 4.
When a storage disk in the partition fails, the persistence layer can determine that the append-plog operation failed to write to the failed disk and return a storage failure message to the service layer's append-plog write operation; the service layer then selects a new partition and rewrites the plog to each storage disk contained in it. The service layer calls the persistence layer's Create plog interface, and the persistence layer creates the plog on a partition whose storage disks are all normal, so that the data is guaranteed to be successfully stored in every storage disk contained in the partition, improving data reliability. A write failure of the append-plog operation can be determined when a storage disk in the partition fails or when the server to which that storage disk belongs fails.
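The failover step above amounts to picking a partition whose disks are all healthy before re-creating the plog. A hypothetical sketch (partition table, disk names, and function name are illustrative):

```python
def choose_partition_for_new_plog(partitions, healthy_disks):
    """Hypothetical sketch of the failover step: select a partition all of
    whose storage disks are healthy, so the newly created plog can be
    stored successfully on every disk the partition contains."""
    for partition_id, disks in partitions.items():
        if all(disk in healthy_disks for disk in disks):
            return partition_id
    raise RuntimeError("no partition with all storage disks healthy")
```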
Building on the schematic structural diagram of the server shown in fig. 2, refer to fig. 3, which is a schematic structural diagram of a distributed storage system disclosed in an embodiment of the present application. As shown in fig. 3, the distributed storage system may include at least a capacity expansion server 301, a main server 302, and a target server 303, and any two of these servers may establish data communication with each other to implement distributed storage.
The capacity expansion server 301 may correspond to server 4 in fig. 1, the main server 302 to server 1, and the target server 303 to server 3. Storage disk 0 in the main server 302 and storage disk 2 in the target server 303 constitute a partition, whose main storage disk is storage disk 0 in the main server 302. The main server 302 may create a plurality of storage units in its storage disk 0, and the target server 303 may create a plurality of storage units in its storage disk 2; the unit identifiers of the storage units in storage disk 0 of the main server 302 are the same as those in storage disk 2 of the target server 303. If the capacity expansion server 301 is added for capacity expansion and the load balancing calculation determines that the data stored in storage disk 2 of the target server 303 must be migrated to storage disk 1 of the capacity expansion server 301, the capacity expansion server 301 may create a plurality of storage units in its storage disk 1, with unit identifiers the same as those of the storage units in storage disk 0 of the main server 302 and, hence, the same as those in storage disk 2 of the target server 303.
During data migration, the capacity expansion server 301 may obtain the first data from the first storage unit in storage disk 2 of the target server 303 and store it in the second storage unit in storage disk 1 of the capacity expansion server 301. When the first data has been successfully stored in the second storage unit, the capacity expansion server 301 may obtain the current data volume of the second storage unit and the current data volume of the first storage unit. If the capacity expansion server 301 determines that the current data volume of the second storage unit is smaller than that of the first storage unit, it may obtain the second data from the first storage unit and store it in the second storage unit. In this way, when a data storage request from a client is received during migration, the sum of the first storage unit's pre-migration data volume and the data volume of the second data the client requests to store can be recorded; after the migration completes, the second data requested by the client is obtained from the first storage unit and stored in the second storage unit, so that data written during migration is not overwritten by the data being migrated, improving data reliability.
It should be noted that the distributed storage system in the embodiments of the present application includes, but is not limited to, three servers, and each server includes, but is not limited to, three storage disks; the embodiments of the present application are not specifically limited in this respect. The storage disk in the embodiments of the present application may be a solid-state drive (SSD), a hard disk drive (HDD), or another type of storage disk.
It should be noted that each partition contains a primary storage disk, which may be selected by a predetermined mechanism or determined randomly. The server containing the primary storage disk may serve as the main server. The main server may store a unit identifier set of the partition to which the primary storage disk belongs, where the unit identifier set includes the unit identifier of each storage unit in the primary storage disk, and the unit identifiers of the storage units of each storage disk in the same partition are the same. For example, the storage disk 0 of the main server 202 includes 3 storage units whose unit identifiers are storage unit 1, storage unit 2, and storage unit 3; the storage disk 2 of the target server 203 also includes 3 storage units with unit identifiers storage unit 1, storage unit 2, and storage unit 3; and the storage disk 1 of the capacity expansion server 201 likewise includes 3 storage units with unit identifiers storage unit 1, storage unit 2, and storage unit 3.
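The relationship just described, in which every storage disk of a partition exposes the same set of storage unit (plog) identifiers and the main server holds the unit identifier set of its primary storage disk, can be sketched minimally as follows. All class and variable names here are illustrative, not taken from the patent's actual implementation.

```python
# Minimal sketch: every disk in a partition shares one set of storage-unit
# (plog) identifiers; the main server's unit identifier set is taken from
# the primary storage disk. Names are illustrative only.

class StorageDisk:
    def __init__(self, disk_id, unit_ids):
        self.disk_id = disk_id
        # one storage unit (plog) per identifier, initially empty
        self.units = {uid: bytearray() for uid in unit_ids}

class Partition:
    def __init__(self, partition_id, primary_disk, other_disks):
        self.partition_id = partition_id
        self.primary_disk = primary_disk
        self.disks = [primary_disk] + other_disks

    def unit_id_set(self):
        # the unit identifier set stored on the main server
        return set(self.primary_disk.units)

unit_ids = ["storage unit 1", "storage unit 2", "storage unit 3"]
primary = StorageDisk("main/disk0", unit_ids)
target = StorageDisk("target/disk2", unit_ids)
expansion = StorageDisk("expansion/disk1", unit_ids)
partition = Partition("partition 1", primary, [target, expansion])

# every disk in the partition exposes the same unit identifiers
assert all(set(d.units) == partition.unit_id_set() for d in partition.disks)
```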
Referring to fig. 4, fig. 4 is a data processing method provided in the embodiment of the present application, which includes, but is not limited to, the following steps:
step S401: the capacity expansion server acquires the first data stored in the first storage unit and stores the first data in the second storage unit.
For example, the primary storage disk of the main server and the first storage disk of the target server form a partition, in which the primary storage disk is the one in the main server. When the storage capacity of the distributed storage system is insufficient, a capacity expansion server can be added to expand capacity. The capacity expansion server may include at least one storage disk, and each storage disk may include at least one storage unit. If the data in the first storage disk of the target server needs to be migrated to the second storage disk of the capacity expansion server, the capacity expansion server may obtain the first data stored in the first storage unit of the first storage disk of the target server and store it in the second storage unit of the second storage disk of the capacity expansion server, thereby implementing data migration. After the data migration is completed, the storage disks included in the partition may be the primary storage disk in the main server and the second storage disk in the capacity expansion server. The storage unit may be equivalent to the plog in the above-described embodiment.
In a possible embodiment, before obtaining the first data stored in the first storage unit, the capacity expansion server may send a unit identifier obtaining request to the main server, where the unit identifier obtaining request carries the partition identifier of the partition to which the second storage disk belongs. The capacity expansion server receives a unit identifier set from the main server, where the unit identifier set includes the unit identifier of each storage unit contained in the primary storage disk of the partition corresponding to the partition identifier. The capacity expansion server may then obtain the first data stored in the first storage unit corresponding to the first unit identifier and store it in the second storage unit corresponding to the first unit identifier.
In a possible embodiment, after the capacity expansion server obtains the first data from the first storage unit corresponding to the first unit identifier and stores the first data in the second storage unit corresponding to the first unit identifier, the capacity expansion server may delete the first unit identifier from the unit identifier set.
Specifically, if data in the first storage disk of the target server currently needs to be migrated to the second storage disk of the capacity expansion server, the capacity expansion server may perform data migration on a per-partition basis. Taking fig. 1 as an example, the storage disk 0 of the server 1, the storage disk 1 of the server 2, and the storage disk 2 of the server 3 constitute a partition 1, and the primary storage disk in the partition 1 is the storage disk 0 of the server 1. If the data of the storage disk 2 of the server 3 needs to be migrated to the storage disk 1 of the server 4, the server 4 may determine that the storage disk 2 of the server 3 belongs to the partition 1 and that the primary storage disk in the partition 1 is the storage disk 0 of the server 1. The server 4 may then send a unit identifier obtaining request carrying the partition identifier of the partition 1 to the server 1, and the server 1 may send the plog list of the partition 1 to the server 4. The plog list may include the unit identifiers of all plogs in the primary storage disk of the partition 1; that is, before the data migration, the plog list includes the unit identifiers of all the plogs in the storage disk 0 of the server 1. During the data migration process, the server 4 can use the plog list sent by the server 1 as the to-be-recovered plog list (i.e. the unit identifier set). The server 4 can acquire data from the plog1 in the storage disk 0 of the server 1 and store the data into the plog1 in the storage disk 1 of the server 4, and then delete the plog identifier of the plog1 from the to-be-recovered plog list; that is, after this migration step, the to-be-recovered plog list includes the unit identifiers of the plogs in the storage disk 0 of the server 1 other than plog1.
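The per-partition migration loop above can be sketched as follows: the capacity expansion server copies the plog list received from the main server, migrates one plog at a time, and deletes each identifier from its local copy once that plog's data has been stored. Function and variable names are hypothetical.

```python
# Sketch of migration driven by the to-be-recovered plog list; the main
# server's own plog list is never modified. Names are hypothetical.

def migrate_by_plog_list(source_units, dest_units, plog_list):
    # source_units / dest_units map plog identifiers to byte buffers;
    # plog_list is the list received from the main server.
    to_recover = list(plog_list)  # local copy; the main server's list stays unchanged
    for plog_id in list(to_recover):
        dest_units[plog_id] = bytearray(source_units[plog_id])
        to_recover.remove(plog_id)  # delete after this plog is migrated
    return to_recover

src = {"plog1": bytearray(b"aaa"), "plog2": bytearray(b"bbbb")}
dst = {}
remaining = migrate_by_plog_list(src, dst, ["plog1", "plog2"])
assert remaining == [] and dst["plog1"] == b"aaa" and dst["plog2"] == b"bbbb"
```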
It should be noted that the plog list stored in the main server remains unchanged; after migrating the data of one plog, the capacity expansion server deletes that plog's unit identifier from the to-be-recovered plog list stored in the capacity expansion server.
Step S402: after the first data is successfully stored in the second storage unit, the capacity expansion server obtains the current data volume of the second storage unit and the current data volume of the first storage unit.
Specifically, in the data migration process, when receiving a write request from the client, the capacity expansion server may count the current data size of the first storage unit. When the capacity expansion server detects that the first data is successfully stored in the second storage unit, the capacity expansion server may obtain the counted current data volume of the first storage unit.
In a feasible embodiment, before the capacity expansion server obtains the current data volume of the first storage unit, it may receive a first data storage request through the service layer, where the first data storage request carries the first unit identifier and indicates that the client requests to store second data in the second storage unit corresponding to the first unit identifier. The capacity expansion server determines that the first unit identifier exists in the unit identifier set, obtains the data volume of the first storage unit before data migration, and adds it to the data volume of the second data to obtain the current data volume of the first storage unit.
For example, when the persistent layer receives a write request from the service layer through the application plog interface, the capacity expansion server may determine whether the to-be-recovered plog list includes the plog identifier carried by the write request. If it does, the capacity expansion server may determine that data migration is currently in progress and may record, in the redo log, the reference data volume of the plog corresponding to the plog identifier (that is, the current data volume of the first storage unit). The reference data volume is obtained by adding the data volume of the data to be stored by the write request to the data volume of the first storage unit before data migration, so that the capacity expansion server can write the data carried by the write request after completing the data migration. If no redo log corresponding to the plog exists, the capacity expansion server can add a redo log to record the reference data volume of the plog; if a redo log corresponding to the plog already exists, the capacity expansion server can update the reference data volume of the plog recorded in the redo log after receiving the write request.
Taking the schematic diagram of data migration shown in fig. 5 as an example, suppose the data volume of the plog1 of the storage disk 2 of the server 3 is 10 MB and the server 4 needs to migrate the data in that plog1 to the plog1 of the storage disk 1 of the server 4; the data volume of the plog1 of the storage disk 2 of the server 3 before data migration is thus 10 MB. When the server 4 has stored 8 MB of that data into the plog1 of the storage disk 1 of the server 4 and determines that a client requests to store 4 MB of data in that plog1, the server 4 can add 10 MB and 4 MB to obtain a current data volume of 14 MB for the plog1 of the storage disk 2 of the server 3; that is, the reference data volume recorded by the server 4 for the plog1 of its storage disk 1 is 14 MB.
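The redo-log bookkeeping in the 10 MB + 4 MB example can be sketched as follows: during migration, a write aimed at a plog still in the to-be-recovered list only updates the recorded reference data volume (the pre-migration volume plus all writes received so far). Names are hypothetical.

```python
# Sketch of reference-data-volume recording in the redo log during
# migration. Names are hypothetical.

def record_reference_volume(redo_log, plog_id, pre_migration_mb, write_mb, to_recover):
    if plog_id not in to_recover:
        return None  # migration finished for this plog: no redo entry needed
    if plog_id not in redo_log:
        # first write during migration: pre-migration volume plus this write
        redo_log[plog_id] = pre_migration_mb + write_mb
    else:
        # subsequent writes keep accumulating onto the recorded volume
        redo_log[plog_id] += write_mb
    return redo_log[plog_id]

redo_log = {}
# matches the fig. 5 example: 10 MB before migration + 4 MB written -> 14 MB
assert record_reference_volume(redo_log, "plog1", 10, 4, {"plog1"}) == 14
assert record_reference_volume(redo_log, "plog1", 10, 2, {"plog1"}) == 16
```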
Step S403: and the capacity expansion server confirms that the current data volume of the second storage unit is smaller than the current data volume of the first storage unit.
After the capacity expansion server obtains the current data volume of the first storage unit and the current data volume of the second storage unit, it can judge whether the current data volume of the first storage unit is larger than the current data volume of the second storage unit. When the current data volume of the first storage unit is larger, the capacity expansion server can obtain the second data stored in the first storage unit; when the two current data volumes are equal, the capacity expansion server may end the process.
For example, when the persistent layer receives a write request from the service layer through the application plog interface, the data carried by the write request may be stored in the specified plog of each storage disk included in the partition. For example, if the storage disks included in the partition are the primary storage disk of the main server, the first storage disk of the target server, and the second storage disk of the capacity expansion server, the main server may store the data carried by the write request in a target storage unit of the primary storage disk, where the target storage unit is the plog corresponding to the plog identifier carried by the write request. The target server may store the data in the first storage unit of the first storage disk, where the first storage unit is likewise the plog corresponding to that plog identifier. If the persistent layer in the capacity expansion server receives the write request from the service layer while the capacity expansion server is performing data migration, the capacity expansion server may record the reference data volume of the plog in the second storage disk corresponding to the carried plog identifier, and obtain the data carried by the write request from the first storage unit after the data migration is completed.
Step S404: the capacity expansion server acquires second data stored in the first storage unit and stores the second data in the second storage unit.
After the capacity expansion server obtains the second data stored in the first storage unit, the second data may be stored in the second storage unit.
In a possible embodiment, when the second data is not successfully stored in every storage disk included in the partition to which the second storage disk belongs, the capacity expansion server may send a storage failure message to the service layer and receive, through the persistent layer, a second data storage request. The second data storage request carries a target partition identifier and a second unit identifier, where the partition corresponding to the target partition identifier is different from the partition to which the first storage disk belongs, and each storage disk included in the partition corresponding to the target partition identifier is capable of storing data. The capacity expansion server then stores the second data in a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server and the partition identifier of the partition to which the third storage disk belongs is the target partition identifier.
Taking fig. 1 as an example, the storage disk 0 of the server 1, the storage disk 1 of the server 2, and the storage disk 1 of the server 4 constitute a partition 1, and the storage disk 1 of the server 1, the storage disk 0 of the server 2, and the storage disk 2 of the server 4 constitute a partition 2. If the storage disk 1 of the server 2 fails, so that the second data is not successfully stored in all the storage disks included in the partition 1, the capacity expansion server may send a storage failure message to the service layer. The service layer may determine to store the second data in each storage disk included in the partition 2 and then send a second data storage request carrying the partition identifier of the partition 2 and the second unit identifier to the persistent layer. After receiving the second data storage request, the capacity expansion server may store the second data in the third storage unit of its storage disk 2 corresponding to the second unit identifier.
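The fallback described above can be sketched as follows: if the write cannot be completed on every disk of the original partition (for example, one disk failed), the service layer re-issues the request against another partition whose disks are all able to store data. All names are hypothetical.

```python
# Sketch of falling back to a different partition when a write fails on
# the original partition. Names are hypothetical.

def store_with_fallback(partitions, failed_pid, unit_id, data):
    for pid, disks in partitions.items():
        if pid == failed_pid:
            continue  # skip the partition where the write failed
        if all(d["healthy"] for d in disks):
            # every disk of the fallback partition stores the data in the
            # unit corresponding to the carried unit identifier
            for d in disks:
                d["units"].setdefault(unit_id, bytearray()).extend(data)
            return pid  # target partition identifier actually used
    return None

partitions = {
    "partition 1": [{"healthy": True, "units": {}}, {"healthy": False, "units": {}}],
    "partition 2": [{"healthy": True, "units": {}}, {"healthy": True, "units": {}}],
}
used = store_with_fallback(partitions, "partition 1", "plog9", b"data")
assert used == "partition 2"
assert all(d["units"]["plog9"] == b"data" for d in partitions["partition 2"])
```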
Taking the schematic diagram of data migration shown in fig. 5 as an example, when all of the original data in the plog1 of the storage disk 2 of the server 3 has been successfully stored in the plog1 of the storage disk 1 of the server 4, the current data volume of the plog1 of the storage disk 1 of the server 4 is 10 MB. The server 4 can determine that the current data volume of the plog1 of the storage disk 2 of the server 3 (14 MB) is greater than the current data volume of the plog1 of the storage disk 1 of the server 4 (10 MB), obtain the newly written 4 MB of data from the plog1 of the storage disk 2 of the server 3, and append the 4 MB of data to the tail of the plog1 of the storage disk 1 of the server 4.
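The catch-up comparison of steps S402 to S404 can be sketched as follows: the reference data volume of the source plog is compared with the destination plog's current volume, and only the newly written tail bytes are appended to the destination, so already-migrated data is never overwritten. Buffer names are illustrative (bytes stand in for the example's megabytes).

```python
# Sketch of the post-migration catch-up: append the bytes written to the
# source during migration to the tail of the destination unit.

def catch_up(source_unit, dest_unit, reference_volume):
    if len(dest_unit) < reference_volume:
        # second data: bytes written to the source during migration
        newly_written = source_unit[len(dest_unit):reference_volume]
        dest_unit.extend(newly_written)  # append to the tail, never overwrite
    return len(dest_unit)

# 10 bytes stand in for the 10 MB migrated, 4 more for the 4 MB written
src = bytearray(b"0123456789WXYZ")
dst = bytearray(src[:10])
assert catch_up(src, dst, 14) == 14 and dst == src
```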
In the method described in fig. 4, the first data stored in the first storage unit is acquired and stored in the second storage unit; when the first data is successfully stored in the second storage unit, the current data volume of the first storage unit and the current data volume of the second storage unit are acquired; and when the current data volume of the first storage unit is greater than the current data volume of the second storage unit, the second data is acquired from the first storage unit and stored in the second storage unit. In this way, the data written during the migration process can be prevented from being overwritten by the data being migrated, and the reliability of the data is improved.
Referring to fig. 6, fig. 6 is a data processing method according to another embodiment of the present application, which includes, but is not limited to, the following steps:
step S601: the capacity expansion server acquires the first data stored in the first storage unit and stores the first data in the second storage unit. The first storage unit is a storage unit in a first storage disk of the target server, the second storage unit is a storage unit in the first storage disk of the capacity expansion server, and the target server needs to migrate data to the capacity expansion server.
It should be noted that step S601 in the embodiment of the present application may specifically refer to the description of step S401 in fig. 4, and details are not described herein again.
Step S602: and the capacity expansion server receives a first data storage request from the service layer in the process of storing the first data, wherein the first data storage request carries the first unit identifier of the second storage unit.
For example, when the persistent layer in the capacity expansion server receives a write request from the service layer through the application plog interface, it may be determined whether the to-be-recovered plog list includes the plog identifier carried by the write request. When the to-be-recovered plog list includes that plog identifier, the capacity expansion server may determine that data migration is currently in progress; that is, the capacity expansion server may determine that the first data storage request has been received from the service layer in the process of storing the first data.
Step S603: and the capacity expansion server sends an error message to the service layer.
If the capacity expansion server receives the first data storage request from the service layer while storing the first data, it may refuse to respond to the first data storage request and send an error message to the service layer.
Step S604: and the capacity expansion server allocates the storage disk space of the second storage disk to a fourth storage unit created by the service layer in response to the error message, wherein the fourth storage unit is a storage unit in the second storage disk.
After receiving the error message, the service layer may determine that the capacity expansion server refuses to store the data requested by the first data storage request in the second storage unit. The service layer may then create a fourth storage unit, where the storage disk space allocated to the fourth storage unit may come from each storage disk included in the partition to which the second storage disk belongs.
Step S605: and the capacity expansion server receives a third data storage request through the persistent layer, wherein the third data storage request carries a third unit identifier of the fourth storage unit.
Step S606: and the capacity expansion server stores the second data into a fourth storage unit corresponding to the third unit identifier.
After receiving the third data storage request, the capacity expansion server may store the second data in the fourth storage unit corresponding to the third unit identifier. That is, when a data storage request from a client is received, data is not appended to a plog whose data migration is incomplete; instead, a new plog is used to store the data requested by the client, so that the written data and the data being migrated are mutually exclusive, and the reliability of the data is improved.
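The fig. 6 behaviour can be sketched as follows: a write aimed at a plog that is still in the to-be-recovered list is rejected, after which the data is stored in a freshly created plog (the fourth storage unit), so new writes and migrating data never share a unit. All names are hypothetical.

```python
# Sketch of the fig. 6 method: reject writes to a migrating plog and store
# the data in a newly created plog instead. Names are hypothetical.

def handle_write(to_recover, disks, plog_id, data, new_plog_id):
    if plog_id in to_recover:
        # the persistent layer returns an error; the service layer creates a
        # fourth storage unit, and the data is stored there instead
        for d in disks:
            d[new_plog_id] = bytearray(data)
        return ("rejected_then_stored", new_plog_id)
    # plog not being migrated: append normally
    for d in disks:
        d.setdefault(plog_id, bytearray()).extend(data)
    return ("stored", plog_id)

disks = [{}, {}]
status, unit = handle_write({"plog1"}, disks, "plog1", b"new data", "plog4")
assert status == "rejected_then_stored" and unit == "plog4"
assert all(d["plog4"] == b"new data" for d in disks)
```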
In the method described in fig. 6, the first data stored in the first storage unit is acquired and stored in the second storage unit; during the storing of the first data, a first data storage request carrying the first unit identifier of the second storage unit is received from the service layer; an error message is sent to the service layer; storage disk space of the second storage disk is allocated to a fourth storage unit created by the service layer in response to the error message; a third data storage request carrying the third unit identifier of the fourth storage unit is received through the persistent layer; and the second data is stored in the fourth storage unit corresponding to the third unit identifier. In this way, mutual exclusion between the written data and the data being migrated can be achieved, and the reliability of the data is improved.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, configured to implement the functions of the capacity expansion server in fig. 4 and fig. 6, where the functional blocks of the data processing apparatus may be implemented by hardware, software, or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks described in fig. 7 may be combined or separated into sub-blocks to implement the application scheme. Thus, the above description in this application may support any possible combination, separation, or further definition of the functional blocks described below.
As shown in fig. 7, the data processing apparatus is applied to a capacity expansion server, where the capacity expansion server includes at least one storage disk, each storage disk includes at least one storage unit, and the data processing apparatus may include: a receiving module 701 and a processing module 702, wherein the detailed description of each module is as follows.
A receiving module 701, configured to obtain first data stored in a first storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, and the target server needs to migrate the data to the capacity expansion server;
a processing module 702, configured to store the first data in a second storage unit, where the second storage unit is a storage unit in a second storage disk of the capacity expansion server;
the processing module 702 is further configured to, when the first data is successfully stored in the second storage unit, obtain a current data size of the second storage unit and a current data size of the first storage unit;
the processing module 702 is further configured to confirm that the current data size of the second storage unit is smaller than the current data size of the first storage unit;
the receiving module 701 is further configured to obtain second data stored in the first storage unit;
the processing module 702 is further configured to store the second data in the second storage unit.
Optionally, the data processing apparatus in this embodiment of the application may further include a sending module 703, where the sending module 703 is configured to send a unit identifier obtaining request to a main server before the receiving module 701 obtains the first data stored in the first storage unit, where the unit identifier obtaining request carries a partition identifier of a partition to which the second storage disk belongs, and the main server includes a main storage disk in the partition to which the first storage disk belongs;
the receiving module 701 is further configured to receive a unit identifier set from the primary server, where the unit identifier set includes unit identifiers of storage units included in a primary storage disk in a partition corresponding to the partition identifier, the unit identifiers of the storage units included in the first storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk, and the unit identifiers of the storage units included in the second storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk;
the receiving module 701 obtains the first data stored in the first storage unit, and includes:
acquiring first data stored in a first storage unit corresponding to the first unit identifier;
the processing module 702 stores the first data in the second storage unit, including:
and storing the first data into a second storage unit corresponding to the first unit identification.
Optionally, the processing module 702 is further configured to delete the first unit identifier from the unit identifier set after the first data is stored in the second storage unit corresponding to the first unit identifier.
Optionally, the receiving module 701 is further configured to receive a first data storage request from the client before the processing module 702 obtains the current data size of the second storage unit and the current data size of the first storage unit, where the first data storage request carries the first unit identifier, and the first data storage request is used to instruct the client to request to store the second data in the second storage unit corresponding to the first unit identifier;
the processing module 702 is further configured to determine that the first unit identifier exists in the unit identifier set;
the processing module 702 is further configured to obtain a data amount of the first storage unit before data migration;
the processing module 702 is further configured to add the data size of the first storage unit before data migration to the data size of the second data, so as to obtain the current data size of the first storage unit.
Optionally, the data processing apparatus in this embodiment of the application may further include:
a sending module 703, configured to send a storage failure message to the service layer when the second data is not successfully stored in any storage disk included in the partition to which the second storage disk belongs;
the receiving module 701 is further configured to receive a second data storage request through a persistent layer, where the second data storage request carries a target partition identifier and a second unit identifier, a partition corresponding to the target partition identifier is different from a partition to which the second storage disk belongs, and each storage disk included in the partition corresponding to the target partition identifier can store data;
the processing module 702 is further configured to store the second data in a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server, and a partition identifier of a partition to which the third storage disk belongs is the target partition identifier.
It should be noted that the implementation of each module may also correspond to the corresponding description of the embodiments shown in fig. 4 and 6.
It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation. Each functional module in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a server disclosed in an embodiment of the present application, where the server may be the capacity expansion server in the foregoing embodiments. As shown in fig. 8, the server may include: at least one processor 801, a bus 802, a receiver 803, a memory device 804, and at least one storage disk 805, each storage disk including at least one plog. The receiver 803, the memory device 804, the storage disk 805, and the processor 801 are connected to each other through the bus 802. The bus 802 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus. The processor 801 may be a Central Processing Unit (CPU), a Network Processor (NP), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Where the processor 801 is a CPU, the CPU may be a single-core CPU or a multi-core CPU. The memory device 804 can store the program code, the redo log, and the to-be-recovered plog list. The server may also include a transmitter 806, wherein:
the receiver 803 obtains first data stored in a first storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, and the target server needs to migrate the data to the capacity expansion server;
the processor 801 stores the first data in a second storage unit, where the second storage unit is a storage unit in a second storage disk of the capacity expansion server;
when the first data is successfully stored in the second storage unit, the processor 801 acquires the current data volume of the second storage unit and the current data volume of the first storage unit;
the processor 801 confirms that the current data size of the second storage unit is smaller than the current data size of the first storage unit;
the receiver 803 acquires the second data stored in the first storage unit;
the processor 801 stores the second data in the second storage unit.
Optionally, the transmitter 806 may send a unit identifier obtaining request to a primary server before the receiver 803 obtains the first data stored in the first storage unit, where the unit identifier obtaining request carries a partition identifier of a partition to which the second storage disk belongs, and the primary server includes a primary storage disk in the partition to which the first storage disk belongs;
the receiver 803 receives a unit identifier set from the primary server, where the unit identifier set includes unit identifiers of storage units included in a primary storage disk in a partition corresponding to the partition identifier, the unit identifiers of the storage units included in the first storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk, and the unit identifiers of the storage units included in the second storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk;
the receiver 803 obtains the first data stored in the first storage unit, including:
acquiring first data stored in a first storage unit corresponding to the first unit identifier;
the processor 801 stores the first data in the second storage unit, including:
and storing the first data into a second storage unit corresponding to the first unit identification.
Optionally, after the processor 801 stores the first data in the second storage unit corresponding to the first unit identifier, the first unit identifier may be further deleted from the unit identifier set.
Optionally, before the processor 801 obtains the current data size of the second storage unit and the current data size of the first storage unit, the receiver 803 may receive a first data storage request from the client, where the first data storage request carries the first unit identifier, and the first data storage request is used to indicate that the client requests to store the second data in the second storage unit corresponding to the first unit identifier;
the processor 801 determines that the first unit identity exists in the set of unit identities;
the processor 801 acquires the data volume of the first storage unit before data migration;
the processor 801 adds the data size of the first storage unit before data migration to the data size of the second data to obtain the current data size of the first storage unit.
Optionally, the transmitter 806 may send a storage failure message to the service layer when the second data is not successfully stored in any storage disk included in the partition to which the second storage disk belongs;
the receiver 803 receives a second data storage request through the persistent layer, where the second data storage request carries a target partition identifier and a second unit identifier, a partition corresponding to the target partition identifier is different from a partition to which the second storage disk belongs, and each storage disk included in the partition corresponding to the target partition identifier can store data;
the processor 801 stores the second data in a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server, and a partition identifier of a partition to which the third storage disk belongs is the target partition identifier.
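The failure fallback just described — no disk in the home partition can take the write, so a storage failure message goes to the service layer and the persistent layer retries against a partition whose every disk can store data — might look like the following sketch. The partition/disk model and all names here are assumptions for illustration only.

```python
class NoWritablePartition(Exception):
    pass

def store_with_partition_fallback(data, unit_id, partitions, home_pid, notify):
    """partitions: pid -> list of disks; each disk is {'writable': bool,
    'units': dict}. Returns the (partition id, unit id) actually used."""
    # Try the replica disks of the home partition first.
    for disk in partitions[home_pid]:
        if disk["writable"]:
            disk["units"][unit_id] = data
            return home_pid, unit_id
    notify(home_pid)  # storage failure message to the service layer
    # Retry in a target partition in which every disk can store data.
    for pid, disks in partitions.items():
        if pid != home_pid and all(d["writable"] for d in disks):
            disks[0]["units"][unit_id] = data   # the third storage unit
            return pid, unit_id
    raise NoWritablePartition("no partition can store the data")
```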
It should be understood that the server is merely an example provided by the embodiments of the present application, and that the server may have more or fewer components than shown, may combine two or more components, or may have a different arrangement of components.
Specifically, the server described in this embodiment of the present application may be used to implement part or all of the processes in the method embodiments described in this application in conjunction with fig. 4 and 6.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application, where the data processing apparatus is applied to a capacity expansion server, where the capacity expansion server includes at least one storage disk, each of the storage disks includes at least one storage unit, and is used to implement the functions of the capacity expansion server in the embodiments of fig. 4 and 6, and functional blocks of the data processing apparatus may be implemented by hardware, software, or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks described in FIG. 9 may be combined or separated into sub-blocks to implement the application scheme. Thus, the above description in this application may support any possible combination or separation or further definition of the functional blocks described below.
As shown in fig. 9, the data processing apparatus may include: a receiving module 901, a processing module 902 and a sending module 903, wherein the detailed description of each module is as follows.
A receiving module 901, configured to obtain first data stored in a first storage unit;
a processing module 902, configured to store the first data in a second storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, the second storage unit is a storage unit in a second storage disk of the capacity expansion server, and the target server needs to migrate data to the capacity expansion server;
the receiving module 901 is further configured to receive a first data storage request from a client during the process of storing the first data, where the first data storage request carries a first unit identifier of the second storage unit;
a sending module 903, configured to send an error message to the service layer;
the processing module 902 is further configured to allocate a disk space of the second storage disk to a fourth storage unit created by the service layer in response to the error message;
the receiving module 901 is further configured to receive a second data storage request through a persistent layer, where the second data storage request carries a third unit identifier of the fourth storage unit;
the processing module 902 is further configured to store the second data in a fourth storage unit corresponding to the third unit identifier.
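Put together, the module interplay above boils down to: a client write that targets a unit still under migration is refused toward the service layer, the service layer creates a fresh (fourth) unit on the same disk, and the persistent layer replays the write against that unit. A hedged sketch under assumptions (the callback and dict model are not from the patent):

```python
def handle_write_during_migration(unit_id, data, migrating_ids,
                                  create_unit, disk_units):
    """Store the client's data; if its target unit is being migrated,
    divert the write to a newly created (fourth) storage unit instead."""
    if unit_id in migrating_ids:
        new_id = create_unit()        # service layer creates the fourth unit
        disk_units[new_id] = data     # persistent layer replays the request
        return new_id
    disk_units[unit_id] = data        # normal path: store directly
    return unit_id
```

This avoids the write landing in a unit whose contents are about to be overwritten by migrated data, which is the conflict the error-message round trip exists to prevent.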
Optionally, the sending module 903 may send a unit identifier obtaining request to a main server before the receiving module 901 obtains the first data stored in the first storage unit, where the unit identifier obtaining request carries a partition identifier of a partition to which the second storage disk belongs, and the main server includes a main storage disk in the partition to which the first storage disk belongs;
the receiving module 901 receives a unit identifier set from the primary server, where the unit identifier set includes unit identifiers of storage units included in a primary storage disk in a partition corresponding to the partition identifier, the unit identifiers of the storage units included in the first storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk, and the unit identifiers of the storage units included in the second storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk;
the receiving module 901 obtains first data stored in a first storage unit, including:
acquiring first data stored in a first storage unit corresponding to the first unit identifier;
the processing module 902 stores the first data in a second storage unit, including:
and storing the first data into the second storage unit corresponding to the first unit identifier.
Optionally, after the processing module 902 stores the first data in the second storage unit corresponding to the first unit identifier, the first unit identifier is deleted from the unit identifier set.
Optionally, when the second data is not successfully stored in any storage disk included in the partition to which the second storage disk belongs, the sending module 903 may further send a storage failure message to the service layer;
the receiving module 901 receives a second data storage request through a persistent layer, where the second data storage request carries a target partition identifier and a second unit identifier, a partition corresponding to the target partition identifier is different from a partition to which the second storage disk belongs, and each storage disk included in the partition corresponding to the target partition identifier can store data;
the processing module 902 stores the second data in a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server, and a partition identifier of a partition to which the third storage disk belongs is the target partition identifier.
It should be noted that the implementation of each module may also correspond to the corresponding description of the embodiments shown in fig. 4 and 6.
It should be noted that, in the embodiments of the present application, the division into modules is schematic and is merely a division by logical function; other division manners are possible in actual implementation. The functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a server according to another embodiment of the present application. As shown in fig. 10, the server may include: at least one processor 1001, a bus 1002, a receiver 1003, a transmitter 1004, a memory device 1005, and at least one storage disk 1006, each storage disk including at least one plog. The receiver 1003, the transmitter 1004, the memory device 1005, the storage disk 1006, and the processor 1001 are connected to each other through the bus 1002; the bus 1002 may be a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus. The processor 1001 may be a CPU, an NP, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The memory device 1005 can store program code and a list of to-be-recovered plogs. In the case where the processor 1001 is a CPU, the CPU may be a single-core CPU or a multi-core CPU, where:
the receiver 1003 acquires the first data stored in the first storage unit;
the processor 1001 stores the first data in a second storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, the second storage unit is a storage unit in a second storage disk of the capacity expansion server, and the target server needs to migrate data to the capacity expansion server;
the receiver 1003 receives a first data storage request from a client in a process of storing the first data, where the first data storage request carries a first unit identifier of the second storage unit;
the transmitter 1004 sends an error message to the service layer;
the processor 1001 allocates the disk space of the second storage disk to the fourth storage unit created by the service layer in response to the error message;
the receiver 1003 receives a second data storage request through the persistent layer, where the second data storage request carries a third unit identifier of the fourth storage unit;
the processor 1001 stores the second data in a fourth storage unit corresponding to the third unit identifier.
Optionally, the transmitter 1004 may send a unit identifier obtaining request to a main server before the receiver 1003 obtains the first data stored in the first storage unit, where the unit identifier obtaining request carries a partition identifier of a partition to which the second storage disk belongs, and the main server includes a main storage disk in the partition to which the first storage disk belongs;
the receiver 1003 receives a unit identifier set from the primary server, where the unit identifier set includes unit identifiers of storage units included in a primary storage disk in a partition corresponding to the partition identifier, the unit identifiers of the storage units included in the first storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk, and the unit identifiers of the storage units included in the second storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk;
the receiver 1003 obtains the first data stored in the first storage unit, including:
acquiring first data stored in a first storage unit corresponding to the first unit identifier;
the processor 1001 stores the first data in a second storage unit, including:
and storing the first data into the second storage unit corresponding to the first unit identifier.
Optionally, after the processor 1001 stores the first data in the second storage unit corresponding to the first unit identifier, the first unit identifier is deleted from the unit identifier set.
Optionally, when the second data is not successfully stored in any storage disk included in the partition to which the second storage disk belongs, the transmitter 1004 may further send a storage failure message to the service layer;
the receiver 1003 receives a second data storage request through a persistent layer, where the second data storage request carries a target partition identifier and a second unit identifier, a partition corresponding to the target partition identifier is different from a partition to which the second storage disk belongs, and each storage disk included in the partition corresponding to the target partition identifier can store data;
the processor 1001 stores the second data in a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server, and a partition identifier of a partition to which the third storage disk belongs is the target partition identifier.
It should be understood that the server is merely an example provided by the embodiments of the present application, and that the server may have more or fewer components than shown, may combine two or more components, or may have a different arrangement of components.
Specifically, the server described in this embodiment of the present application may be used to implement part or all of the processes in the method embodiments described in this application in conjunction with fig. 4 and 6.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
Claims (7)
1. A data processing method is applied to a capacity expansion server, wherein the capacity expansion server comprises at least one storage disk, each storage disk comprises at least one storage unit, and the method comprises the following steps:
acquiring first data stored in a first storage unit, and storing the first data into a second storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, the second storage unit is a storage unit in a second storage disk of the capacity expansion server, and the target server needs to migrate data to the capacity expansion server;
receiving a first data storage request from a client in the process of storing the first data, wherein the first data storage request carries a first unit identifier of the second storage unit;
sending an error message to a service layer;
allocating storage disk space of the second storage disk to a fourth storage unit created by the service layer in response to the error message;
receiving a third data storage request through a persistent layer, wherein the third data storage request carries a third unit identifier of the fourth storage unit;
and storing second data into a fourth storage unit corresponding to the third unit identifier, wherein the second data is data requested by the client to be stored into the second storage unit in the process of storing the first data by the capacity expansion server.
2. The method of claim 1, wherein prior to obtaining the first data stored in the first storage unit, further comprising:
sending a unit identifier acquisition request to a main server, wherein the unit identifier acquisition request carries a partition identifier of a partition to which the second storage disk belongs, and the main server comprises a main storage disk in the partition to which the first storage disk belongs;
receiving a unit identifier set from the primary server, where the unit identifier set includes unit identifiers of storage units included in a primary storage disk in a partition corresponding to the partition identifier, the unit identifiers of the storage units included in the first storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk, and the unit identifiers of the storage units included in the second storage disk are respectively the same as the unit identifiers of the storage units included in the primary storage disk;
the obtaining first data stored in a first storage unit and storing the first data in a second storage unit includes:
the method comprises the steps of obtaining first data stored in a first storage unit corresponding to a first unit identifier, and storing the first data in a second storage unit corresponding to the first unit identifier.
3. The method of claim 2, wherein after obtaining the first data stored in the first storage location corresponding to the first location identification and storing the first data in the second storage location corresponding to the first location identification, further comprising:
deleting the first unit identity from the set of unit identities.
4. The method of claim 1, wherein the method further comprises:
when the second data is not successfully stored in any storage disk contained in the partition to which the second storage disk belongs, sending a storage failure message to a service layer;
receiving a second data storage request through a persistent layer, wherein the second data storage request carries a target partition identifier and a second unit identifier, a partition corresponding to the target partition identifier is different from a partition to which the second storage disk belongs, and each storage disk contained in the partition corresponding to the target partition identifier can store data;
and storing the second data into a third storage unit corresponding to the second unit identifier, where the third storage unit is a storage unit in a third storage disk of the capacity expansion server, and a partition identifier of a partition to which the third storage disk belongs is the target partition identifier.
5. A data processing apparatus, wherein the apparatus is applied to a capacity expansion server, the capacity expansion server includes at least one storage disk, each storage disk includes at least one storage unit, and the apparatus includes:
the system comprises a receiving module, a capacity expansion server and a storage module, wherein the receiving module is used for acquiring first data stored in a first storage unit, the first storage unit is a storage unit in a first storage disk of a target server, and the target server needs to migrate the data to the capacity expansion server;
a processing module, configured to store the first data in a second storage unit, where the second storage unit is a storage unit in a second storage disk of the capacity expansion server;
the receiving module is further configured to receive a first data storage request from a client during the process of storing the first data, where the first data storage request carries a first unit identifier of the second storage unit;
a sending module, configured to send an error message to a service layer;
the processing module is further configured to allocate the disk space of the second storage disk to a fourth storage unit created by the service layer in response to the error message;
the receiving module is further configured to receive a third data storage request through a persistent layer, where the third data storage request carries a third unit identifier of the fourth storage unit;
the processing module is further configured to store the second data in a fourth storage unit corresponding to the third unit identifier, where the second data is data requested by the client to be stored in the second storage unit in the process of storing the first data by the capacity expansion server.
6. A capacity expansion server, comprising a receiver, a transmitter, a processor, a memory device, and at least one storage disk, each storage disk comprising at least one storage unit, wherein:
the receiver is configured to acquire first data stored in a first storage unit, where the first storage unit is a storage unit in a first storage disk of a target server, and the target server needs to migrate the data to the capacity expansion server;
the processor is configured to store the first data in a second storage unit, where the second storage unit is a storage unit in a second storage disk of the capacity expansion server;
the receiver is further configured to receive a first data storage request from a client during the process of storing the first data, where the first data storage request carries a first unit identifier of the second storage unit;
the transmitter is used for transmitting an error message to the service layer;
the processor is further configured to allocate the disk space of the second storage disk to a fourth storage unit created by the service layer in response to the error message;
the receiver is further configured to receive a third data storage request through a persistent layer, where the third data storage request carries a third unit identifier of the fourth storage unit;
the processor is further configured to store the second data in a fourth storage unit corresponding to the third unit identifier, where the second data is data requested by the client to be stored in the second storage unit in the process of storing the first data by the capacity expansion server.
7. A computer storage medium, characterized in that the computer storage medium stores a program which, when executed, performs the data processing method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810192378.4A CN108509150B (en) | 2018-03-08 | 2018-03-08 | Data processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108509150A CN108509150A (en) | 2018-09-07 |
CN108509150B true CN108509150B (en) | 2021-08-20 |
Family
ID=63377305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810192378.4A Active CN108509150B (en) | 2018-03-08 | 2018-03-08 | Data processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108509150B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110896406A (en) | 2018-09-13 | 2020-03-20 | 华为技术有限公司 | Data storage method and device and server |
CN110083095A (en) * | 2019-04-29 | 2019-08-02 | 贵州贵谷农业股份有限公司 | A kind of greenhouse control system of local update model |
CN113832651B (en) * | 2020-06-24 | 2023-11-03 | 云米互联科技(广东)有限公司 | Sterilization method and system for clothes care device, device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006053601A (en) * | 2004-08-09 | 2006-02-23 | Hitachi Ltd | Storage device |
CN105468473B (en) * | 2014-07-16 | 2019-03-01 | 北京奇虎科技有限公司 | Data migration method and data migration device |
CN106294471A (en) * | 2015-06-03 | 2017-01-04 | 中兴通讯股份有限公司 | Data Migration processing method and processing device |
CN107357883A (en) * | 2017-06-30 | 2017-11-17 | 北京奇虎科技有限公司 | Data migration method and device |
- 2018-03-08: CN CN201810192378.4A patent/CN108509150B/en (Active)
Also Published As
Publication number | Publication date |
---|---|
CN108509150A (en) | 2018-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110096220B (en) | Distributed storage system, data processing method and storage node | |
CN108509150B (en) | Data processing method and device | |
CN111654519B (en) | Method and device for transmitting data processing requests | |
EP2830284A1 (en) | Caching method for distributed storage system, node and computer readable medium | |
CN103607428B (en) | A kind of method and apparatus for accessing shared drive | |
CN109582213B (en) | Data reconstruction method and device and data storage system | |
CN113900972A (en) | Data transmission method, chip and equipment | |
CN114741234A (en) | Data backup storage method, equipment and system | |
CN113961139A (en) | Method for processing data by using intermediate device, computer system and intermediate device | |
EP4379543A1 (en) | Cloud desktop data migration method, service node, management node, server, electronic device, and computer-readable storage medium | |
CN112839076A (en) | Data storage method, data reading method, gateway, electronic equipment and storage medium | |
US20220107752A1 (en) | Data access method and apparatus | |
CN113411363A (en) | Uploading method of image file, related equipment and computer storage medium | |
CN111104057B (en) | Node capacity expansion method in storage system and storage system | |
CN107169126B (en) | Log processing method and related equipment | |
CN109144403B (en) | Method and equipment for switching cloud disk modes | |
CN110896408B (en) | Data processing method and server cluster | |
WO2020083106A1 (en) | Node expansion method in storage system and storage system | |
CN103685359B (en) | Data processing method and device | |
CN112783698A (en) | Method and device for managing metadata in storage system | |
CN115390754A (en) | Hard disk management method and device | |
US20230121646A1 (en) | Storage Operation Processing During Data Migration Using Selective Migration Notification | |
KR101793963B1 (en) | Remote Memory Data Management Method and System for Data Processing Based on Mass Memory | |
CN110019031B (en) | File creation method and file management device | |
CN114327248A (en) | Storage node, storage device and network chip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20220215 Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province Patentee after: Huawei Cloud Computing Technologies Co.,Ltd. Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd. |
TR01 | Transfer of patent right |