CN113127438A - Method, apparatus, server and medium for storing data


Info

Publication number
CN113127438A
Authority
CN
China
Prior art keywords
data
migration
metadata
stored
storage
Prior art date
Legal status
Granted
Application number
CN201911395206.8A
Other languages
Chinese (zh)
Other versions
CN113127438B (en)
Inventor
施黄骏
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911395206.8A
Publication of CN113127438A
Application granted
Publication of CN113127438B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 - Design, administration or maintenance of databases
    • G06F16/214 - Database migration support
    • G06F16/22 - Indexing; Data structures therefor; Storage structures
    • G06F16/2282 - Tablespace storage structures; Management thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)
  • Retry When Errors Occur (AREA)

Abstract

Embodiments of the present disclosure disclose a method, an apparatus, a server and a medium for storing data, relating to the field of cloud computing. A specific embodiment of the present disclosure includes: in response to a capacity expansion policy being triggered, determining a migration location for the expanded user data, where the expanded user data includes historical data already stored in an original location and newly added data to be stored; starting storage of the newly added data into the original location and storage of the migration source data into the migration location, where the migration source data is the data stored in the original location; and, in response to the storage progress of the migration source data becoming the same as the storage progress of the newly added data, updating the storage location of the not-yet-stored newly added data to the migration location. Because the storage of the newly added data into the original location and the writing of the migration source data into the migration location are started simultaneously, the user can operate on the data in the original location normally while the data is being stored, so dynamic capacity expansion is achieved without affecting the user's operations.

Description

Method, apparatus, server and medium for storing data
Technical Field
Embodiments of the present disclosure relate to the field of cloud computing technologies, and in particular, to a method, an apparatus, a server, and a medium for storing data.
Background
In the field of cloud data storage, database sharding (splitting databases and tables) or federation is generally adopted to handle the storage and processing of massive data. However, neither scheme accounts for how data changes over time, so when user data does not grow as expected, the related art has the following disadvantages. On the one hand, when the size of user data cannot be predicted, a fixed hash, sub-table or sub-database scheme causes data skew and oversized tables, which degrades server performance. On the other hand, when a user's data volume becomes large and turns into a hot spot, the capacity cannot be expanded without loss: writes must be stopped during expansion, the write-stop time is directly related to the user's data volume, and the larger the volume, the longer the user must stop writing, which also wastes resources.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for storing data.
In a first aspect, an embodiment of the present disclosure provides a method for storing data, the method including: responding to the triggered capacity expansion strategy, and determining the distribution position of the expanded user data as a migration position, wherein the expanded user data comprises historical data stored in an original position and newly added data to be stored; starting storage of the newly added data to an original position and storage of the migration source end data to a migration position, wherein the migration source end data is data stored in the original position; and updating the storage position of the newly added data which is not stored into the migration position in response to the fact that the storage progress of the data of the migration source end is the same as the storage progress of the newly added data.
In some embodiments, the method further comprises: responding to the triggered capacity expansion strategy, and determining that metadata corresponding to the expanded user data is dynamic migration metadata, wherein the dynamic migration metadata comprises previous information metadata which is used for representing an original position; and after determining that the distribution position of the user data after capacity expansion is the migration position, executing the following operations on the dynamic migration metadata: setting the type of the type metadata as migration, and representing that the user data is in a migration state; and adding extended information metadata for representing the migration position.
In some embodiments, prior to determining the data in the origin location as migration source data, the method further comprises: based on the extended information metadata, the history data is updated such that the migration location is included in the history data.
In some embodiments, the method further comprises: in response to all of the newly added data having been stored into the migration location, performing the following operations: setting the type of the type metadata to normal to represent that the user data is in a normal state; replacing the content in the previous information metadata with the content in the extended information metadata; and deleting the extended information metadata.
In some embodiments, the method further comprises: responding to the triggering of the capacity reduction strategy, determining that the distribution positions of the plurality of user data after capacity reduction are migration positions, wherein the data of the plurality of users after capacity reduction comprise the data of the plurality of users stored in the original positions; and starting storage of the migration source data to the migration position, wherein the migration source data is the data stored in the original position.
In a second aspect, an embodiment of the present disclosure provides an apparatus for storing data, the apparatus including: the triggering unit is configured to respond to the capacity expansion strategy and determine the distribution position of the expanded user data as a migration position, wherein the expanded user data comprises historical data stored in an original position and newly added data to be stored; the storage unit is configured to start storage of the newly added data to an original position and storage of the migration source data to a migration position, wherein the migration source data is data stored in the original position; and a location updating unit configured to update the storage location of the newly added data that is not stored to the migration location in response to the storing progress of the migration source-side data being the same as the storing progress of the newly added data.
In some embodiments, the apparatus further comprises a metadata processing unit configured to: responding to the triggered capacity expansion strategy, and determining that metadata corresponding to the expanded user data is dynamic migration metadata, wherein the dynamic migration metadata comprises previous information metadata which is used for representing an original position; and after determining that the distribution position of the user data after capacity expansion is the migration position, executing the following operations on the dynamic migration metadata: setting the type of the type metadata as migration, and representing that the user data is in a migration state; and adding extended information metadata for representing the migration position.
In some embodiments, prior to determining the data in the origin location as migration source data, the storage unit is further configured to: based on the extended information metadata, the history data is updated such that the migration location is included in the history data.
In some embodiments, the metadata processing unit is further configured to: in response to all of the newly added data having been stored into the migration location, perform the following operations: setting the type of the type metadata to normal to represent that the user data is in a normal state; replacing the content in the previous information metadata with the content in the extended information metadata; and deleting the extended information metadata.
In some embodiments, the trigger unit is further configured to: responding to the triggering of the capacity reduction strategy, determining that the distribution positions of the plurality of user data after capacity reduction are migration positions, wherein the data of the plurality of users after capacity reduction comprise the data of the plurality of users stored in the original positions; the storage unit is further configured to: and starting storage of the migration source data to the migration position, wherein the migration source data is the data stored in the original position.
According to the method and the device for storing data, the storage of the newly added data to the original position and the storage of the migration source end data to the migration position are started simultaneously, and in the process of storing the data, a user can normally operate the data in the original position, so that dynamic capacity expansion is achieved on the premise that the operation of the user is not affected.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for storing data, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for storing data in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for storing data according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for storing data according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a method for storing data or an apparatus for storing data to which embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user can use the terminal devices 101, 102, 103 to communicate with the server 105 through the network 104 to receive or transmit data or the like. The terminal devices 101, 102, 103 may have client applications of various cloud storage databases installed thereon.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices capable of exchanging data with a server, including but not limited to a smart phone, a tablet computer, an e-book reader, a laptop portable computer, a desktop computer, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or a single software module; this is not specifically limited herein.
The server 105 may be a server that provides a data storage service, such as a data server that stores data uploaded by the terminal devices 101, 102, 103. The data server can store the received user data and carry out operations such as modification, addition and deletion and the like on the user data according to the received instruction.
It should be noted that the method for storing data provided by the embodiments of the present disclosure may be executed by the server 105. Accordingly, the apparatus for storing data may be provided in the server 105. The server may be hardware, implemented either as a distributed cluster composed of a plurality of servers or as a single server, on which various database software, such as MySQL, NoSQL or NewSQL databases, may run.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for storing data in accordance with the present disclosure is shown. The method for storing data comprises the following steps:
step 201, in response to the triggering of the capacity expansion policy, determining the distribution position of the user data after capacity expansion as a migration position. In this embodiment, the expanded user data includes history data stored in the original location and new data to be stored.
In the present embodiment, the execution subject of the method for storing data (e.g., the server shown in fig. 1) may exchange data with the terminal used by the user through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra-Wideband) connection, and other wireless connections now known or developed in the future.
Generally, a user uploads data to a server through a terminal, and the database software running on the server allocates a certain amount of storage space, such as a certain number of databases or tables, to the user's data according to a specific allocation rule. Since the capacity of each database or table is limited, database performance suffers once the amount of data stored by the user grows beyond a certain point. To avoid this, in this embodiment the capacity expansion policy may be set to trigger when the remaining storage space of a single table falls below a threshold.
For example, suppose the server allocates 4 tables for a user's data and the storage space of each table is 1 GB (gigabyte). If the remaining storage space of a single table is less than 0.1 GB, a performance inflection point occurs and the server's read-write performance drops, so the threshold may be set to 0.1 GB; that is, the capacity expansion policy is that the remaining storage space of a single table is less than 0.1 GB. When the stored user data grows to the point where the remaining storage space of some table is less than 0.1 GB, the capacity expansion policy is triggered, and the execution subject of this embodiment (e.g., the server 105 shown in fig. 1) performs the capacity expansion operation: it determines a storage location with more storage space (for example, 6 tables may be allocated for the expanded user data), namely the migration location, for storing the expanded user data. Accordingly, the expanded user data includes the historical data already stored in the original location (the 4 tables in this example) and the newly added data to be stored.
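As a minimal sketch of this trigger condition (not part of the original disclosure; the table layout and the 0.1 GB threshold follow the example above, and the function and variable names are assumptions), the check might look like the following Python:

    # Minimal sketch of the capacity expansion trigger described above.
    TABLE_CAPACITY_GB = 1.0
    EXPANSION_THRESHOLD_GB = 0.1

    def should_expand(used_gb_per_table):
        # Trigger when any table's remaining space falls below the threshold.
        return any(TABLE_CAPACITY_GB - used < EXPANSION_THRESHOLD_GB for used in used_gb_per_table)

    # Example: 4 tables allocated to the user, the third one almost full.
    if should_expand([0.4, 0.6, 0.95, 0.2]):
        # Determine a migration location with more space, e.g. 6 tables.
        migration_location = [f"user_table_{i}" for i in range(6)]  # hypothetical table names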
Step 202, starting the storage of the new data to the original position and the storage of the migration source end data to the migration position.
In this embodiment, the migration source data is the data in the original location; that is, it includes the historical data stored before the capacity expansion policy was triggered and the newly added data stored after the policy was triggered. Because the migration source data is updated in real time as the newly added data is stored, any operations the user performs on the data in the original location during this process are synchronized to the data in the migration location, which keeps the data in the original location and in the migration location consistent. This ensures that the user's normal operations on the data, such as modification, addition, deletion and range queries, are not affected while the user data is being migrated during capacity expansion.
As an example, the storage of the migration source data into the migration location may be implemented by a CDC (Change Data Capture) asynchronous module. Because newly added data is still being stored into the original location while the migration source data is being stored into the migration location, the migration source data is updated in real time. The CDC asynchronous module captures the newly added data and stores the migration source data into the migration location in the order in which the data was originally stored, so the newly added data reaches the migration location only after the historical data, which avoids data disorder caused by the migration.
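A minimal sketch of this CDC-style ordered replay, assuming a generic in-process change queue rather than any specific CDC product; all names here are illustrative:

    import queue

    # Illustrative CDC-style replay: changes to the original location are captured
    # in storage-time order and replayed into the migration location by a worker.
    # `write_to_migration_location` is a hypothetical callback, not a real API.
    change_log = queue.Queue()

    def capture(change):
        # Called for every write to the original location; historical data is
        # enqueued first, so the queue preserves the original storage order.
        change_log.put(change)

    def replay_worker(write_to_migration_location):
        while True:
            change = change_log.get()
            if change is None:      # sentinel meaning the migration has finished
                break
            write_to_migration_location(change)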
Step 203, in response to that the storage progress of the data of the migration source end is the same as the storage progress of the newly added data, updating the storage position of the newly added data which is not stored into the migration position.
In this embodiment, the storage space of the migration location is larger than that of the original location, and data read-write speed is positively correlated with the size of the storage space, so the storage speed of the migration source data into the migration location is higher than the storage speed of the newly added data into the original location. The order in which the migration source data is stored follows the order in which the data was originally stored: the later a piece of data was stored, the later it is stored into the migration location. For example, when the execution subject stores the migration source data into the migration location, the historical data is stored first, and only after all the historical data has been stored into the migration location is the newly added data in the original location stored into the migration location. When all of the newly added data already stored in the original location has also been stored into the migration location, the storage progress of the migration source data is the same as the storage progress of the newly added data. At that point, subsequent newly added data can be stored directly into the migration location, completing the dynamic capacity expansion.
In some optional implementations of this embodiment, to further ensure that the data after capacity expansion is consistent with the data before capacity expansion, when it is detected that the storage progress of the migration source data is the same as the storage progress of the newly added data, a bidirectional verification is performed between the data in the migration location and the data in the original location, for example by comparing the data lists in the two locations; only after the verification results are consistent is the storage location of the newly added data updated to the migration location.
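A minimal sketch of the catch-up check, optional bidirectional verification and cutover, assuming the two locations expose comparable listings; `latest_sequence_number`, `list_keys` and `set_write_location` are hypothetical interfaces:

    def storage_progress_equal(original, migration):
        # True when everything written to the original location has also reached the migration location.
        return original.latest_sequence_number() == migration.latest_sequence_number()

    def bidirectional_verify(original, migration):
        # Optional check from the text above: compare the data lists of both locations.
        return sorted(original.list_keys()) == sorted(migration.list_keys())

    def maybe_cut_over(original, migration, router):
        # Once progress matches and the verification passes, newly added data is
        # routed directly to the migration location.
        if storage_progress_equal(original, migration) and bidirectional_verify(original, migration):
            router.set_write_location(migration)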
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for storing data according to this embodiment. In the application scenario of fig. 3, a user uploads data to the server 302 through the terminal 301, and the server 302 stores the data. After the user data has grown to a certain extent, the capacity expansion policy is triggered when the user stores newly added data, and the server 302 then executes the method for storing data of this embodiment to determine, for the expected storage volume of the user data, a migration location with a larger storage space than the original location. The data in the original location is migrated while the newly added data continues to be written. When the storage progress of the migration source data catches up with the storage progress of the newly added data, the storage location of the not-yet-stored newly added data is updated so that it is stored directly into the migration location, thereby achieving dynamic capacity expansion.
According to the method and apparatus for storing data provided by the above embodiments of the present disclosure, the storage of newly added data into the original location and the writing of the migration source data into the migration location are started simultaneously: the already-stored data is migrated to the expanded storage location while newly added data continues to be written, so the capacity can be expanded dynamically according to the size of the user data, and the user can still operate on the data normally during the expansion, achieving automatic capacity expansion that is imperceptible to the user.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for storing data is shown. The process 400 of the method for storing data includes the steps of:
step 401, in response to the triggering of the capacity expansion policy, determining the distribution position of the user data after capacity expansion as a migration position. In this embodiment, the expanded user data includes history data stored in the original location and new data to be stored.
This step is similar to the step 201 and will not be described herein.
At step 402, the live migration metadata is determined. Metadata is data about data: it carries various information characterizing the data, such as the data's storage location, its size and its modification records, and each piece of data has its own corresponding metadata. In particular, the previous information metadata in this embodiment represents the original location of the user data and includes the distribution information of the databases or tables that stored the user data before the capacity expansion policy was triggered; the execution subject can locate the user data in the original location according to the previous information metadata and store the newly added user data into the original location.
In this embodiment, in order to ensure that the user can operate on the data normally during capacity expansion and that the data remains consistent and accurate during storage and migration, the related metadata needs to be maintained. For example, the live migration metadata in this embodiment may include previous information metadata, type metadata and extended information metadata. The type metadata represents whether the user data is in a migration state or a normal state, and the extended information metadata represents the migration location of the user data; specifically, in this embodiment, the extended information metadata includes the distribution information of the databases or tables that store the expanded user data. When the user data is in a normal state there is no migration location, so the set of metadata corresponding to the user data does not include the extended information metadata.
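A minimal sketch of what such a live migration metadata record might look like (the field names are assumptions, not the patent's wording):

    from dataclasses import dataclass
    from typing import List, Optional

    # Illustrative layout of the live migration metadata described above.
    @dataclass
    class LiveMigrationMetadata:
        previous_info: List[str]                   # databases/tables of the original location
        type: str = "normal"                       # "normal" or "migrating"
        extended_info: Optional[List[str]] = None  # databases/tables of the migration location;
                                                   # present only while the user data is migrating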
Step 403, setting the type of the type metadata to be in migration, and using the type metadata to represent that the user data is in a migration state at the moment.
Step 404: extended information metadata is added. When the user data is in a migration state, the data is being migrated from the original location to the migration location, so metadata representing the migration location needs to be added. In this embodiment, the extended information metadata characterizes the migration location of the user data and includes the distribution information of the databases or tables that store the expanded user data.
Step 405: the historical data is updated based on the extended information metadata. Since the historical data was stored in the database before capacity expansion, the migration location information needs to be written into the historical data. As an example, the migration location information may be obtained from the extended information metadata by a CAS (compare and replace) asynchronous module and written into the historical data. In addition, in some optional implementations of this embodiment, the CAS asynchronous module may compare the historical data against the metadata to ensure the consistency and accuracy of the user data, avoiding inconsistencies that could otherwise arise during migration when the user modifies the data.
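A minimal sketch of such a compare-and-replace update of a record's migration location (the patent does not specify an API; `record_store` and its methods are hypothetical):

    def cas_add_migration_location(record_store, key, migration_location):
        # Write the migration location into a historical record with compare-and-replace,
        # retrying if the user modified the record concurrently. `get` and
        # `compare_and_replace` are hypothetical interfaces, not a real library.
        while True:
            current = record_store.get(key)
            updated = dict(current, migration_location=migration_location)
            # Succeeds only if the stored record is still exactly `current`; otherwise retry.
            if record_store.compare_and_replace(key, current, updated):
                return updated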
Step 406, the storage of the new data to the original location and the storage of the migration source data to the migration location are initiated. This step is similar to the step 202, and will not be described herein. It should be noted that, because the extended information metadata representing the migration position information is added to the metadata in step 404, the newly added data after capacity expansion already includes the migration position information when being stored, so that the newly added data is consistent with the migration position information in the history data after updating in step 405.
Step 407, in response to that the storage progress of the data at the migration source end is the same as the storage progress of the newly added data, updating the storage position of the newly added data which is not stored to the migration position. This step is similar to the step 203, and will not be described herein.
Step 408: in response to all of the newly added data having been stored into the migration location, the type of the type metadata is set to normal. In this embodiment the state of the user data is characterized by the type metadata, and all of the newly added data having been stored into the migration location indicates that the processing of the user data in the capacity expansion operation is complete, so the type of the type metadata can be changed to normal.
In step 409, the content in the previous information metadata is replaced with the content in the extended information metadata. After the operation for the user data is completed, the storage location of the user data has been changed to the migration location, and therefore, it is necessary to update the content in the previous information metadata.
Step 410: the extended information metadata is deleted. User data in the normal state only needs the previous information metadata to represent its storage location, so the extended information metadata is no longer required and is deleted.
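A minimal sketch of this finalization, reusing the illustrative LiveMigrationMetadata record sketched earlier:

    def finalize_expansion(meta):
        # Steps 408-410 in miniature, on the illustrative LiveMigrationMetadata record.
        meta.type = "normal"                     # step 408: user data is no longer migrating
        meta.previous_info = meta.extended_info  # step 409: migration location becomes the current location
        meta.extended_info = None                # step 410: extended information metadata is removed
        return meta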
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for storing data in this embodiment highlights the handling of the live migration metadata during capacity expansion. By means of this metadata, the scheme described in this embodiment can monitor the capacity expansion process and keep the data up to date, thereby improving the reliability of capacity expansion.
In some optional implementations of the foregoing embodiments, the method may further include: in response to a capacity reduction policy being triggered, determining the distribution location of a plurality of users' data after capacity reduction as a migration location, where the plurality of users' data after capacity reduction includes the plurality of users' data stored in the original locations; and starting storage of the migration source data into the migration location, where the migration source data is the data stored in the original locations. These steps implement lossless capacity reduction (scale-down) of the database: the data of multiple users that has a small storage volume but occupies many storage locations is redistributed so that the storage locations are allocated more reasonably and resource waste is avoided. As an application example, when the size of user data cannot be known in advance, the server allocates a fixed number of databases or tables to each user, so some users with a small storage volume occupy a large storage space, which wastes resources. For example, the data of 4 users may occupy 6 tables although their actual storage needs only 2 tables, wasting storage space. Accordingly, the capacity reduction policy may be set to trigger when the number of such users exceeds a certain number and the sum of their data storage volume is below a certain threshold. When the server periodically scans the data and the capacity reduction policy is triggered, the above steps are executed: the data of the 4 users is redistributed to the migration location, which contains the distribution information of 2 tables, so that resource waste is avoided without changing the users or their data.
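A minimal sketch of such a capacity reduction trigger (the user-count and size thresholds are assumptions for illustration, not values from the disclosure):

    # Illustrative capacity reduction (scale-down) trigger: many small users whose
    # combined data is far below the capacity of the tables they occupy.
    MIN_USERS = 4            # assumed threshold on the number of users
    MAX_TOTAL_USED_GB = 2.0  # assumed threshold on their combined storage volume

    def should_shrink(user_sizes_gb):
        return len(user_sizes_gb) >= MIN_USERS and sum(user_sizes_gb) <= MAX_TOTAL_USED_GB

    # Example: 4 users whose data fits in 2 tables instead of the 6 originally allocated.
    if should_shrink([0.3, 0.5, 0.4, 0.6]):
        migration_location = ["shared_table_0", "shared_table_1"]  # hypothetical 2-table layout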
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for storing data, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for storing data of the present embodiment includes: a triggering unit 501, configured to determine, in response to a capacity expansion policy being triggered, a distribution position of expanded user data as a migration position, where the expanded user data includes history data stored in an original position and new data to be stored; a storage unit 502 configured to initiate storage of the new data into the original location and storage of the migration source data into the migration location, wherein the migration source data is data stored in the original location; a location updating unit 503 configured to update the storage location of the newly added data that is not stored to the migration location in response to the storing progress of the migration source side data and the storing progress of the newly added data being the same.
In this embodiment, the apparatus further comprises a metadata processing unit configured to: responding to the triggered capacity expansion strategy, and determining that metadata corresponding to the expanded user data is dynamic migration metadata, wherein the dynamic migration metadata comprises previous information metadata which is used for representing an original position; and after determining that the distribution position of the user data after capacity expansion is the migration position, executing the following operations on the dynamic migration metadata: setting the type of the type metadata as migration, and representing that the user data is in a migration state; and adding extended information metadata for representing the migration position.
In this embodiment, before determining the data in the original location as the migration source data, the storage unit 502 is further configured to: based on the extended information metadata, the history data is updated such that the migration location is included in the history data.
In this embodiment, the metadata processing unit is further configured to: in response to all of the newly added data having been stored into the migration location, perform the following operations: setting the type of the type metadata to normal to represent that the user data is in a normal state; replacing the content in the previous information metadata with the content in the extended information metadata; and deleting the extended information metadata.
In this embodiment, the triggering unit 501 is further configured to: responding to the triggering of the capacity reduction strategy, determining that the distribution positions of the plurality of user data after capacity reduction are migration positions, wherein the data of the plurality of users after capacity reduction comprise the data of the plurality of users stored in the original positions; the storage unit 502 is further configured to: and starting storage of the migration source data to the migration position, wherein the migration source data is the data stored in the original position.
Referring now to FIG. 6, a schematic diagram of an electronic device (e.g., the server of FIG. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the server; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a migration position of the expanded user data in response to the triggering of the expansion strategy, wherein the expanded user data comprises historical data stored in an original position and newly added data to be stored; and starting the storage of the new data to the original position and the storage of the migration source end data to the migration position, wherein the migration source end data is the data stored in the original position.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a trigger unit, a storage unit, and a location update unit. The names of these units do not form a limitation on the unit itself in some cases, for example, the triggering unit may also be described as a "unit that determines the migration location of the user data after capacity expansion in response to the capacity expansion policy being triggered".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept as defined above. For example, the above features and (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are mutually replaced to form the technical solution.

Claims (12)

1. A method for storing data, comprising:
responding to the triggered capacity expansion strategy, and determining the distribution position of the expanded user data as a migration position, wherein the expanded user data comprises historical data stored in an original position and newly added data to be stored;
starting storage of the newly added data to the original position and storage of migration source end data to the migration position, wherein the migration source end data is data stored in the original position;
and updating the storage position of the newly added data which is not stored into the migration position in response to the fact that the storage progress of the data of the migration source end is the same as the storage progress of the newly added data.
2. The method of claim 1, further comprising:
determining that metadata corresponding to the expanded user data is dynamic migration metadata in response to a triggered expansion strategy, wherein the dynamic migration metadata comprises previous information metadata, and the previous information metadata is used for representing the original position;
and after determining that the distribution position of the expanded user data is a migration position, executing the following operations on the dynamic migration metadata:
setting the type of the type metadata as migration, and representing that the user data is in a migration state;
and adding extended information metadata for representing the migration position.
3. The method of claim 2, wherein prior to determining the data in the origin location as migration source data, the method further comprises:
updating the historical data based on the extended information metadata such that the migration location is included in the historical data.
4. The method of claim 3, further comprising:
responding to the fact that all the newly added data are stored in the migration position, and executing the following operations:
updating the type metadata type to be normal so as to represent that the user data is in a normal state;
replacing contents in the previous information metadata with contents in the extended information metadata;
deleting the extended information metadata.
5. The method of one of claims 1 to 4, further comprising:
responding to the triggering of the capacity reduction strategy, determining that the distribution positions of the plurality of user data after capacity reduction are migration positions, wherein the data of the plurality of users after capacity reduction comprise the data of the plurality of users stored in the original positions;
and starting storage of migration source end data to the migration position, wherein the migration source end data is the data stored in the original position.
6. An apparatus for storing data, comprising:
the triggering unit is configured to respond to the capacity expansion strategy and determine the distribution position of the expanded user data as a migration position, wherein the expanded user data comprises historical data stored in an original position and newly added data to be stored;
a storage unit configured to initiate storage of the new data into the original location and storage of migration source data into the migration location, wherein the migration source data is data stored in the original location;
a location updating unit configured to update a storage location of the newly added data that is not stored to the migration location in response to a storing progress of the migration source data being the same as a storing progress of the newly added data.
7. The apparatus of claim 6, further comprising a metadata processing unit configured to:
determining that metadata corresponding to the expanded user data is dynamic migration metadata in response to a triggered expansion strategy, wherein the dynamic migration metadata comprises previous information metadata, and the previous information metadata is used for representing the original position;
and after determining that the distribution position of the expanded user data is a migration position, executing the following operations on the dynamic migration metadata:
setting the type of the type metadata as migration, and representing that the user data is in a migration state;
and adding extended information metadata for representing the migration position.
8. The apparatus of claim 7, prior to determining the data in the origin location as migration source data, the storage unit further configured to:
updating the historical data based on the extended information metadata such that the migration location is included in the historical data.
9. The apparatus of claim 8, wherein the metadata processing unit is further configured to:
responding to the fact that all the newly added data are stored in the migration position, and executing the following operations:
updating the type metadata type to be normal so as to represent that the user data is in a normal state;
replacing contents in the previous information metadata with contents in the extended information metadata;
deleting the extended information metadata.
10. The apparatus according to one of claims 6 to 9,
the trigger unit is further configured to: responding to the triggering of the capacity reduction strategy, determining that the distribution positions of the plurality of user data after capacity reduction are migration positions, wherein the data of the plurality of users after capacity reduction comprise the data of the plurality of users stored in the original positions;
the storage unit is further configured to: and starting storage of migration source end data to the migration position, wherein the migration source end data is the data stored in the original position.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN201911395206.8A 2019-12-30 2019-12-30 Method, apparatus, server and medium for storing data Active CN113127438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911395206.8A CN113127438B (en) 2019-12-30 2019-12-30 Method, apparatus, server and medium for storing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911395206.8A CN113127438B (en) 2019-12-30 2019-12-30 Method, apparatus, server and medium for storing data

Publications (2)

Publication Number Publication Date
CN113127438A true CN113127438A (en) 2021-07-16
CN113127438B CN113127438B (en) 2023-07-28

Family

ID=76767684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911395206.8A Active CN113127438B (en) 2019-12-30 2019-12-30 Method, apparatus, server and medium for storing data

Country Status (1)

Country Link
CN (1) CN113127438B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113867643A (en) * 2021-09-29 2021-12-31 北京金山云网络技术有限公司 Data storage method, device, equipment and storage medium
CN116627349A (en) * 2023-06-14 2023-08-22 富士胶片(中国)投资有限公司 Medical image data processing method, medical image data processing device, medical image data processing equipment, medical image data processing medium and computer program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050278492A1 (en) * 2004-06-10 2005-12-15 Stakutis Christopher J Method, system, and program for migrating source data to target data
CN102200892A (en) * 2011-04-29 2011-09-28 华中科技大学 Capacity expansion method based on dynamic redundant array of independent disks (RAID) system
US20150149605A1 (en) * 2013-11-25 2015-05-28 Violin Memory Inc. Method and apparatus for data migration
CN106843755A (en) * 2017-01-04 2017-06-13 北京百度网讯科技有限公司 For the data balancing method and device of server cluster
CN109857723A (en) * 2019-01-31 2019-06-07 深圳市迷你玩科技有限公司 Dynamic date migration method and relevant device based on extendible capacity data-base cluster
WO2019134649A1 (en) * 2018-01-03 2019-07-11 中兴通讯股份有限公司 Implementation method and apparatus for control-plane resource migration, and network functional entity

Also Published As

Publication number Publication date
CN113127438B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US12019652B2 (en) Method and device for synchronizing node data
US11146502B2 (en) Method and apparatus for allocating resource
US10951702B2 (en) Synchronized content library
US9747013B2 (en) Predictive caching and fetch priority
CN111309732B (en) Data processing method, device, medium and computing equipment
CN109447635B (en) Information storage method and device for block chain
US9892000B2 (en) Undo changes on a client device
US11057465B2 (en) Time-based data placement in a distributed storage system
CN110046175B (en) Cache updating and data returning method and device
CN109804359A (en) For the system and method by write back data to storage equipment
CN113127438B (en) Method, apparatus, server and medium for storing data
CN112965945A (en) Data storage method and device, electronic equipment and computer readable medium
US10574751B2 (en) Identifying data for deduplication in a network storage environment
CN110990356B (en) Real-time automatic capacity expansion method and system for logical mirror image
CN109918381B (en) Method and apparatus for storing data
US9619336B2 (en) Managing production data
US9588884B2 (en) Systems and methods for in-place reorganization of device storage
CN106156038B (en) Date storage method and device
CN105808451B (en) Data caching method and related device
CN108205559B (en) Data management method and equipment thereof
JP2016515258A (en) File aggregation for optimized file operation
CN113032349A (en) Data storage method and device, electronic equipment and computer readable medium
US11366613B2 (en) Method and apparatus for writing data
US9880904B2 (en) Supporting multiple backup applications using a single change tracker
US20220342599A1 (en) Memory Management System and Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant