CN113835637A - Data writing method, device and equipment - Google Patents

Data writing method, device and equipment

Info

Publication number
CN113835637A
CN113835637A
Authority
CN
China
Prior art keywords
storage
data
written
writing
local server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111124319.1A
Other languages
Chinese (zh)
Inventor
阳振坤
杨苏立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Oceanbase Technology Co Ltd
Original Assignee
Beijing Oceanbase Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Oceanbase Technology Co Ltd
Priority to CN202111124319.1A
Publication of CN113835637A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G06F 3/0643 Management of files

Abstract

The application discloses a data writing method, which comprises the following steps: if the number m of data blocks of a file to be read is less than or equal to the number n of storage spaces of a storage stripe, writing the data of at least two data blocks of the file to be read to a local server; if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, writing the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read, wherein the storage stripes are located on different servers. In the embodiments of the application, more of the data blocks needed by the local server are written to the local server itself, which improves the server's performance when reading data.

Description

Data writing method, device and equipment
The application is a divisional application of Chinese patent application CN111399780A. The original application was filed on March 19, 2020 under application number 202010198337.3 and is entitled: data writing method, device and equipment.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for writing data.
Background
A storage stripe is a collection of storage used to store data. When data is stored in such a collection, continuous data can be written across multiple servers, and the data belonging to the same storage stripe is associated. If a server fails and its data is lost, the lost data can be recovered from the other servers in the same storage stripe, which makes the written data safer and more reliable. For example, if a piece of continuous data is written into a storage stripe distributed over 6 servers and 2 of those servers fail, losing their data, the lost data can still be recovered from the other servers of the storage stripe.
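As a minimal illustration of this recovery property, the following sketch (in Python, and not part of the patent) simplifies the stripe to a single XOR check block, which tolerates one server failure rather than the two failures of the 4+2 example above; the function and variable names are illustrative assumptions:

# Minimal sketch: single-parity stripe recovery (illustrative only).
# A real 4+2 stripe would use an erasure code such as Reed-Solomon,
# which tolerates two simultaneous failures; XOR parity tolerates one.

def make_parity(blocks: list[bytes]) -> bytes:
    """XOR the given blocks of a stripe into one check block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover_block(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing block from the survivors plus the check block."""
    return make_parity(surviving_blocks + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]       # 4 data blocks of one stripe
parity = make_parity(stripe)
lost = stripe[2]                                    # pretend the third server failed
rebuilt = recover_block(stripe[:2] + stripe[3:], parity)
assert rebuilt == lost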
When the existing method of writing data into storage stripes is applied, continuous data is written to different servers, and server performance is poor when the data is read.
Disclosure of Invention
In view of this, embodiments of the present application provide a data writing method, apparatus and device, which are used to solve the problem of poor data reading performance of a server in the prior art.
The embodiment of the application adopts the following technical scheme:
the embodiment of the application provides a data writing method, which comprises the following steps:
if the number m of data blocks of a file to be read is less than or equal to the number n of storage spaces of a storage stripe, writing the data of at least two data blocks of the file to be read to a local server;
if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, writing the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read, wherein the storage stripes are located on different servers.
An embodiment of the present application further provides a data writing device, where the device includes:
a writing unit, configured to write the data of at least two data blocks of a file to be read to a local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of a storage stripe; and to write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, wherein the storage stripes are located on different servers.
Embodiments of the present application further provide a data writing device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to perform the functions of the following unit:
a writing unit, configured to write the data of at least two data blocks of a file to be read to a local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of a storage stripe; and to write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, wherein the storage stripes are located on different servers.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
according to the embodiment of the application, more data blocks of data required by the local server are written into the local server, so that the performance of the server in data reading is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flowchart of a data writing method provided in the first embodiment of this specification;
FIG. 2 is a schematic flowchart of a data writing method provided in the second embodiment of this specification;
FIG. 3 is a schematic diagram of a prior-art method of writing data to a storage stripe, described in the second embodiment of this specification;
FIG. 4 is a schematic diagram of the data writing method of the present application for a storage stripe, provided in the second embodiment of this specification;
FIG. 5 is a schematic structural diagram of a data writing device provided in the third embodiment of this specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a data writing method provided in the first embodiment of this specification. The flow includes the following step:
Step S101: if the number m of data blocks of a file to be read is less than or equal to the number n of storage spaces of a storage stripe, write the data of at least two data blocks of the file to be read to a local server; if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read, wherein the storage stripes are located on different servers.
Corresponding to the above embodiments, fig. 2 is a schematic flow chart of a data writing method provided in a second embodiment of this specification, where the schematic flow chart includes:
step S201, generating a storage stripe corresponding to attribute information according to service requirements, and dividing the storage stripe into a plurality of storage blocks.
In step S201 of the embodiment of this specification, the attribute information includes the type of the storage stripe or the storage space of the storage stripe. If the service requirement is a storage stripe of a specific type, the storage stripe of the attribute information may be a storage stripe of that type; if the service requirement is a storage stripe of a specific storage space, the storage stripe of the attribute information may be a storage stripe with that storage space. For example, the service requirement may be to produce a storage stripe with a storage space of 3 MB, or to produce a storage stripe of type 4+2.
In step S201 of the embodiment of this specification, the storage blocks are independent storage units, and each storage block is used to store data. The system is initialized according to the service requirements, and storage stripes of one or more kinds of attribute information can be produced. Taking erasure codes as an example, stripes of different types such as 4+2 or 8+3 can be generated in the system according to different data reliability requirements, and storage stripes with a storage space of 3 MB or another size can be generated according to different storage space requirements. For example, a 4+2 storage stripe is distributed over 6 servers and divided into 6 storage blocks; if 2 of those servers fail, the storage stripe can be recovered from the storage blocks on the remaining 4 servers and rewritten to the failed servers. If the storage space of the storage stripe is 3 MB and the stripe is of the 4+2 type, the storage space of each storage block can be determined to be 500 KB, the division positions of the stripe are determined accordingly, and the stripe is divided into 6 storage blocks.
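A minimal sketch of this stripe-generation step, assuming a simple in-memory model and following the 4+2 / 3 MB example above (the class, function and field names are illustrative, not taken from the patent):

# Sketch: generate a storage stripe from attribute information and divide it
# into storage blocks (4 data blocks + 2 check blocks of 500 KB each).
from dataclasses import dataclass

@dataclass
class StorageBlock:
    stripe_id: int
    index: int     # position within the stripe
    size: int      # bytes
    kind: str      # "data" or "check"

def generate_stripe(stripe_id: int, ec_type: str, stripe_space: int) -> list[StorageBlock]:
    """ec_type like "4+2": 4 data blocks plus 2 check blocks per stripe."""
    data_cnt, check_cnt = (int(x) for x in ec_type.split("+"))
    total = data_cnt + check_cnt
    block_size = stripe_space // total             # e.g. 3_000_000 // 6 = 500_000
    return [
        StorageBlock(stripe_id, i, block_size,
                     "data" if i < data_cnt else "check")
        for i in range(total)
    ]

blocks = generate_stripe(stripe_id=1, ec_type="4+2", stripe_space=3_000_000)
assert len(blocks) == 6 and blocks[0].size == 500_000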
Step S202: if the number m of data blocks of a file to be read is less than or equal to the number n of storage spaces of a storage stripe, write the data of at least two data blocks of the file to be read to the local server; if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read, wherein the storage stripes are located on different servers.
In step S202 of the embodiment of this specification, the storage blocks of a storage stripe need to be placed on different servers, so that when a server fails the data can be recovered from the other storage blocks of the same storage stripe.
With respect to step S202, it is preferable that all data of the file to be read is written to the local server.
It should be noted that, according to task requirements, the server that is to read a file may be designated as the local server for that file; for example, if server A needs to read file a, server A may be designated as the local server for reading file a.
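The write decision of step S202 can be sketched as follows; the sketch assumes that a "storage space" is one block-sized slot of a stripe available on the local server, follows the preferred case in which every block that fits is placed locally, and uses illustrative names that are not taken from the patent:

# Sketch of step S202: place the data blocks of a file on the server that
# will later read the file, spilling to other servers only when necessary.

def write_file(file_blocks: list[bytes], local_slots: int,
               write_local, write_remote) -> None:
    """
    file_blocks  : the m data blocks of the file to be read
    local_slots  : n, the number of stripe storage spaces on the local server
    write_local / write_remote : callbacks that persist one block
    """
    m, n = len(file_blocks), local_slots
    if m <= n:
        # The whole file fits locally: write all of its blocks (at least two)
        # to the local server.
        for block in file_blocks:
            write_local(block)
    else:
        # Only the first n blocks can be kept local; the remaining blocks
        # are written to adjacent servers.
        for block in file_blocks[:n]:
            write_local(block)
        for block in file_blocks[n:]:
            write_remote(block)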
In the prior art, when data is written into storage stripes, one storage stripe is filled before the next one is written. Because a storage stripe is placed across different servers, the local server ends up holding only part of the data it needs. Since the local server has to read that data, reads require calls across servers, which is very inconvenient for the local server and may cause problems such as network delay. This is explained in detail below:
in this embodiment, a 4+2 type of storage stripe is taken as an example, and a technical solution in the prior art is described, where the storage stripe is to distribute the storage stripe over 4 servers in order to make the reliability of data better, and even if any 2 servers fail, the data is not lost, and can be recovered by the remaining 4 servers. However, in the prior art, when writing continuous data into a storage stripe, integral writing is adopted, that is, when writing the continuous data into one storage stripe, the continuous data is written into other storage stripes until the storage space of the storage stripe is full; alternatively, when the storage space of a storage stripe is large enough, all the consecutive data is written into one storage stripe.
However, because the prior art writes the continuous data to different servers, when the local server later needs to read data that is distributed on other servers, it must go through the network or another transmission means. This reduces the performance of the whole system, increases its latency, and also consumes network and CPU resources. It should be noted that the continuous file read by the local server in this embodiment is a file required by that server.
In this embodiment, the prior-art solution is further explained with reference to FIG. 3. The four segments of continuous data are: data segment one: S1d1, S2d1, S3d1, S4d1; data segment two: S1d2, S2d2, S3d2, S4d2; data segment three: S1d3, S2d3, S3d3, S4d3; and data segment four: S1d4, S2d4, S3d4, S4d4. The six servers are Client1, Client2, Client3, Client4, Client5 and Client6. The four storage stripes are Stripe1, Stripe2, Stripe3 and Stripe4. When the four segments of continuous data are written to the servers, S1d1, S2d1, S3d1 and S4d1 of data segment one are written to Stripe1; S1d2, S2d2, S3d2 and S4d2 of data segment two are written to Stripe2; S1d3, S2d3, S3d3 and S4d3 of data segment three are written to Stripe3; and S1d4, S2d4, S3d4 and S4d4 of data segment four are written to Stripe4. It can therefore be seen that, in the prior art, continuous data segments are distributed to different servers in order to keep the data reliable and safe. If Client1 needs to read data segment one, then after reading S1d1 locally it still needs to read S2d1, S3d1 and S4d1, which are distributed on other servers; reading them requires the network or another transmission means, which reduces the performance of the whole system, increases its latency, and consumes network and CPU resources.
In this embodiment, a 4+2 storage stripe is again taken as an example to describe the technical solution of the present application. To improve data reliability, the storage stripe may be distributed over 6 servers; even if any 2 servers fail, the data is not lost and can be recovered from the remaining 4 servers. The storage stripe is divided into a plurality of storage blocks for storing data, and the data is written into different storage stripes located on the local server. Specifically, the data is written to the local server, and if the storage space of the local server cannot hold all of the data, the remaining data can be written to an adjacent server. Because the continuous data is written to the local server, the local server does not need the network or another transmission means when it reads that continuous data, which improves the performance of the system. It should be noted that the continuous file read by the local server in this embodiment is data required by that server.
In this embodiment, the technical solution of the present application is further explained with reference to FIG. 4. The four segments of continuous data are: data segment one: S1d1, S2d1, S3d1, S4d1; data segment two: S1d2, S2d2, S3d2, S4d2; data segment three: S1d3, S2d3, S3d3, S4d3; and data segment four: S1d4, S2d4, S3d4, S4d4. The six servers are Client1, Client2, Client3, Client4, Client5 and Client6. The four storage stripes are Stripe1, Stripe2, Stripe3 and Stripe4. When the four segments of continuous data are written to the servers, S1d1, S2d1, S3d1 and S4d1 of data segment one are written to Client1; S1d2, S2d2, S3d2 and S4d2 of data segment two are written to Client2; S1d3, S2d3, S3d3 and S4d3 of data segment three are written to Client3; and S1d4, S2d4, S3d4 and S4d4 of data segment four are written to Client4. Data segment one is the data required by Client1, data segment two is the data required by Client2, data segment three is the data required by Client3, and data segment four is the data required by Client4. Therefore, when the continuous data S1d1, S2d1, S3d1 and S4d1 is written, it is written to Client1, which will later need to read it. At the same time, because S1d1, S2d1, S3d1 and S4d1 belong to four different storage stripes, namely Stripe1, Stripe2, Stripe3 and Stripe4, the data can still be recovered from other servers if Client1 fails. Client1 can then read S2d1, S3d1 and S4d1 locally as well, without the network or another transmission means, which improves the performance of the system.
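A compact sketch of the two placements compared above (FIG. 3 versus FIG. 4); the sets below are illustrative and only model where the data blocks of data segment one land, leaving out the check blocks held by Client5 and Client6:

# Sketch: how many of the blocks Client1 needs are local under each placement.

segment_one = ["S1d1", "S2d1", "S3d1", "S4d1"]   # the blocks Client1 will read

# FIG. 3 (prior art): segment one is written to a single stripe whose blocks
# are spread over Client1..Client4, so Client1 keeps only one of them.
prior_art_client1 = {"S1d1"}

# FIG. 4 (this application): all four blocks of segment one are written to
# Client1, while each block still belongs to a different stripe
# (Stripe1..Stripe4), so cross-server redundancy is preserved.
proposed_client1 = set(segment_one)

def remote_reads(local_blocks: set, wanted: list) -> list:
    """Blocks the client must fetch over the network."""
    return [b for b in wanted if b not in local_blocks]

print(remote_reads(prior_art_client1, segment_one))  # ['S2d1', 'S3d1', 'S4d1']
print(remote_reads(proposed_client1, segment_one))   # []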
In addition, the prior art also includes disk array (RAID) technology, in which continuous data is stored on a single server; this makes it easier for the local server to read the continuous data, but when that server fails, the data cannot be recovered from another server. The prior art also includes multi-copy technology, which avoids the problem that data cannot be recovered when a single server fails and also lets the local server read the continuous data it needs; however, the biggest drawback of multiple copies is cost. For example, keeping two copies requires setting up two identical servers whose stored data is synchronized in real time, so that if one server fails the other can take over; keeping three copies requires three identical servers. Compared with the storage stripes of the present application, multi-copy technology therefore occupies more storage space and has a high construction cost.
It should be noted that, in the present application, it is only necessary to write more of the data that would otherwise be distributed across multiple servers to the local server, so as to reduce the number of remote calls for the data needed by the local server.
Further, before the step of writing the data into the storage blocks of the corresponding servers according to the service requirement is executed, the method includes: marking the storage blocks. A storage block may be a data block that stores data or a check block that stores a check code, where the check code is used to check the data stored in the storage stripe. Marking the storage blocks specifically includes: marking the data blocks as allocatable and readable-writable, and marking the check blocks as allocatable. For example, referring to FIG. 4, according to the marks, S1p1, S2p1, S3p1 and S4p1 are check blocks distributed on Client5, and S1p2, S2p2, S3p2 and S4p2 are check blocks distributed on Client6; S1d1, S2d1, S3d1, S4d1 and the like in the figure are data blocks, and S1p1, S2p1, S3p1, S4p1 and the like are check blocks.
Further, after the step of dividing the storage stripe into a plurality of storage blocks is performed, the method further includes:
marking the storage blocks in the storage stripe with their corresponding functions according to the predefined types of the storage blocks. The types of storage blocks include data blocks for storing data and check blocks for storing check codes, where a check code is used to check the data in the storage stripe.
Marking the storage blocks in the storage stripe with their corresponding functions specifically includes:
marking the data blocks as allocatable and readable-writable;
marking the check blocks as allocatable.
Further, after the step of writing the data of at least two data blocks of the file to be read to the local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of the storage stripe, or of writing the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, the method further comprises:
and updating the check code in the check block according to all written data. For example, referring to fig. 4, after the data block S1d1 is written into the Client1, check codes are generated in S1p1 of the Client5 and S1p2 of the Client6, when S1d2 is written into the Client2, since S1d1 and S1d2 are in the same storage stripe, the check codes in S1p1 of the Client5 and S1p2 of the Client6 are recalculated, and the calculation result is updated to the check codes in S1p1 of the Client5 and S1p2 of the Client 6.
Further, before the step of writing the data of at least two data blocks of the file to be read to the local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of the storage stripe, or of writing the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, the method further comprises:
judging whether a pre-input instruction indicates that the data of the file to be read should be convenient for the local server to read;
if the pre-input instruction indicates that the data of the file to be read should be convenient for the local server to read, executing the step of writing the data of at least two data blocks of the file to be read to the local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of the storage stripe, or of writing the data of the first n data blocks of the file to be read to the local server if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe;
if the pre-input instruction does not indicate that the data of the file to be read should be convenient for the local server to read, continuing to write the data of the file to be read to the different servers according to the positions of the storage stripes on the servers.
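A short sketch of this pre-input-instruction check; the instruction is modelled as a plain boolean flag, and the callback names are illustrative assumptions rather than interfaces defined by the patent:

# Sketch: honour a pre-input instruction that says whether the file to be read
# should be convenient for the local server to read; otherwise fall back to
# writing along the storage-stripe positions.

def handle_write(file_blocks: list, local_slots: int, prefer_local_read: bool,
                 write_local, write_remote, write_striped) -> None:
    if prefer_local_read:
        # Keep as many of the file's blocks as possible on the local server,
        # spilling the remainder to adjacent servers.
        for block in file_blocks[:local_slots]:
            write_local(block)
        for block in file_blocks[local_slots:]:
            write_remote(block)
    else:
        # Write the blocks to the servers where their storage stripes are
        # located, as in the conventional layout.
        write_striped(file_blocks)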
It should be noted that the above scheme may be applied to a distributed storage system, a distributed memory system, or another storage system requiring highly reliable data. The storage blocks in the present application may be stored on a hard disk, in memory, or on another storage device.
In the embodiments of the present application, more of the data blocks needed by the local server are written to the local server itself, which improves the server's performance when reading data.
Corresponding to the above embodiments, FIG. 5 is a schematic structural diagram of a data writing device provided in the third embodiment of this specification. The device comprises a writing unit 1, a generating unit 2, a marking unit 3, an updating unit 4, a judging unit 5 and an executing unit 6.
The writing unit 1 is configured to write the data of at least two data blocks of a file to be read to a local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of a storage stripe, and to write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, wherein the storage stripes are located on different servers.
The generating unit 2 is configured to generate a storage stripe corresponding to attribute information according to a service requirement, and divide the storage stripe into a plurality of storage blocks, where the attribute information includes a type of the storage stripe and a storage space of the storage stripe.
The marking unit 3 is configured to mark a corresponding function to a storage block in the storage stripe according to a predefined type of the storage block.
Further, the type of the storage block includes a data block storing data and a check block storing a check code, where the check code is used to check the data in the storage stripe.
Further, the marking unit 3 is specifically configured to:
marking the data blocks as allocatable and readable-writable;
marking the check blocks as allocatable.
The updating unit 4 is configured to update the check code in the check block according to all written data.
The judging unit 5 is configured to judge whether a pre-input instruction indicates that the data of the file to be read should be convenient for the local server to read.
The executing unit 6 is configured to: if the pre-input instruction indicates that the data of the file to be read should be convenient for the local server to read, execute the writing of the data of at least two data blocks of the file to be read to the local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of the storage stripe, or the writing of the data of the first n data blocks of the file to be read to the local server if the number m of data blocks is greater than the number n of storage spaces; and, if the pre-input instruction does not indicate that the data of the file to be read should be convenient for the local server to read, continue to write the data of the file to be read to the different servers according to the positions of the storage stripes on the servers.
Embodiments of the present application further provide a data writing device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to perform the functions of the following unit:
a writing unit, configured to write the data of at least two data blocks of a file to be read to a local server if the number m of data blocks of the file to be read is less than or equal to the number n of storage spaces of a storage stripe; and to write the data of at least two data blocks to the local server while writing the data of the first n data blocks of the file to be read if the number m of data blocks of the file to be read is greater than the number n of storage spaces of the storage stripe, wherein the storage stripes are located on different servers.
In the embodiments of the present application, more of the data blocks needed by the local server are written to the local server itself, which improves the server's performance when reading data.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, many of today's improvements to method flows can already be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a physical hardware module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller or an embedded microcontroller; examples of such controllers include, but are not limited to, the ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320 microcontrollers. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component. Or, the means for performing the functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory on a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A method of writing data, the method comprising:
generating storage stripes, wherein each storage stripe is located in a different server;
writing data needing to be written into the server;
wherein writing the data to be written to the server comprises: writing the data to be written into a storage space of a storage stripe on a local server; and if the storage space of the storage stripes on the local server cannot store all of the data to be written, writing the remaining data to be written to an adjacent server after the storage space of the storage stripes on the local server is full.
2. The method of claim 1, wherein each of the storage stripes has storage space at the local server.
3. The method of claim 1, writing data to be written to a storage space of a storage stripe on a local server comprises:
and writing the data to be written into the storage space of different storage stripes on the local server.
4. The method of claim 3, writing the data to be written to a storage space of a different storage stripe on the local server comprises:
and writing the continuous data needing to be written into the storage space of different storage stripes on the local server.
5. The method of claim 1, wherein the data to be written is a data block of the file to be read; writing the data to be written into the storage space of the storage stripe on the local server comprises the following steps:
if the number m of the data blocks of the file to be read is less than or equal to the number n of the storage spaces of the storage stripes, writing the data of at least two data blocks in the file to be read into the storage spaces of the storage stripes on the local server;
if the number m of the data blocks of the file to be read is larger than the number n of the storage spaces of the storage stripes, when the data of the first n data blocks in the file to be read is written, the data of at least two data blocks are written into the storage spaces of the storage stripes on the local server.
6. The method of claim 5, before writing the data to be written into the storage space of the storage stripe on the local server, the method further comprising:
judging whether a pre-input instruction indicates that the file to be read should be convenient for the local server to read;
writing the data to be written into the storage space of the storage stripe on the local server comprises the following steps:
and if the pre-input instruction indicates that the file to be read should be convenient for the local server to read, writing the data to be written into the storage space of the storage stripe on the local server.
7. The method of claim 1, wherein, thereafter, the method further comprises:
dividing each of the storage stripes into a plurality of storage blocks;
and marking the storage blocks in the storage stripe with corresponding functions according to the predefined types of the storage blocks.
8. The method of claim 7, wherein the types of the storage blocks include data blocks storing data and check blocks storing check codes; wherein the check code is used for checking the data in the storage stripe.
9. The method of claim 8, wherein marking the storage blocks in the storage stripe with corresponding functions comprises:
marking the data blocks as allocatable and readable-writable;
marking the check blocks as allocatable.
10. The method of claim 8, further comprising:
and updating the check code in the check block according to all written data.
11. The method of claim 1, generating a memory stripe comprising: generating a storage stripe corresponding to attribute information, wherein the attribute information comprises the type of the storage stripe and the storage space of the storage stripe.
12. The method of claim 11, generating a memory stripe of corresponding attribute information comprising: generating a storage strip corresponding to the attribute information according to the service requirement; if the service requirement is a storage stripe of a specific type, the attribute information is the specific type; and if the service requirement is a storage strip of a specific storage space, the attribute information is the specific storage space.
13. An apparatus for writing data, the apparatus comprising:
the generating unit is used for generating storage stripes, and each storage stripe is positioned in a different server;
the writing unit is used for writing the data needing to be written into the server;
wherein writing the data to be written to the server comprises: writing the data to be written into a storage space of a storage stripe on a local server; and if the storage space of the storage stripes on the local server cannot store all of the data to be written, writing the remaining data to be written to an adjacent server after the storage space of the storage stripes on the local server is full.
14. A writing apparatus for data, the apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions; wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method of any of claims 1 to 12.
CN202111124319.1A 2020-03-19 2020-03-19 Data writing method, device and equipment Pending CN113835637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111124319.1A CN113835637A (en) 2020-03-19 2020-03-19 Data writing method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010198337.3A CN111399780B (en) 2020-03-19 2020-03-19 Data writing method, device and equipment
CN202111124319.1A CN113835637A (en) 2020-03-19 2020-03-19 Data writing method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010198337.3A Division CN111399780B (en) 2020-03-19 2020-03-19 Data writing method, device and equipment

Publications (1)

Publication Number Publication Date
CN113835637A true CN113835637A (en) 2021-12-24

Family

ID=71432684

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010198337.3A Active CN111399780B (en) 2020-03-19 2020-03-19 Data writing method, device and equipment
CN202111124319.1A Pending CN113835637A (en) 2020-03-19 2020-03-19 Data writing method, device and equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010198337.3A Active CN111399780B (en) 2020-03-19 2020-03-19 Data writing method, device and equipment

Country Status (2)

Country Link
CN (2) CN111399780B (en)
WO (1) WO2021184901A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111399780B (en) * 2020-03-19 2021-08-24 蚂蚁金服(杭州)网络技术有限公司 Data writing method, device and equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193350A1 (en) * 2012-07-27 2015-07-09 Tencent Technology (Shezhen) Comany Limited Data storage space processing method and processing system, and data storage server
CN106030501A (en) * 2014-09-30 2016-10-12 株式会社日立制作所 Distributed storage system
CN108008909A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of date storage method, apparatus and system
CN109445687A (en) * 2015-09-30 2019-03-08 华为技术有限公司 A kind of date storage method and protocol server
WO2019090756A1 (en) * 2017-11-13 2019-05-16 清华大学 Raid mechanism-based data storage system for sharing resources globally
US20190220356A1 (en) * 2016-09-30 2019-07-18 Huawei Technologies Co., Ltd. Data Processing Method, System, and Apparatus
CN110058961A (en) * 2018-01-18 2019-07-26 伊姆西Ip控股有限责任公司 Method and apparatus for managing storage system
JP2019159416A (en) * 2018-03-07 2019-09-19 Necソリューションイノベータ株式会社 Data management device, file system, data management method, and program
CN110651246A (en) * 2017-10-25 2020-01-03 华为技术有限公司 Data reading and writing method and device and storage server

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI254854B (en) * 2004-11-19 2006-05-11 Via Tech Inc Method and related apparatus for data migration of disk arrays
GB0610335D0 (en) * 2006-05-24 2006-07-05 Oxford Semiconductor Ltd Redundant storage of data on an array of storage devices
US8117388B2 (en) * 2009-04-30 2012-02-14 Netapp, Inc. Data distribution through capacity leveling in a striped file system
JP2013196276A (en) * 2012-03-19 2013-09-30 Fujitsu Ltd Information processor, program and data arrangement method
CN104881242A (en) * 2014-02-28 2015-09-02 中兴通讯股份有限公司 Data writing method and data writing device
JP2018116526A (en) * 2017-01-19 2018-07-26 東芝メモリ株式会社 Storage control apparatus, storage control method and program
CN109814807B (en) * 2018-12-28 2022-05-06 曙光信息产业(北京)有限公司 Data storage method and device
CN110347340A (en) * 2019-07-05 2019-10-18 北京谷数科技有限公司 A kind of method and apparatus improving storage system RAID performance
CN111399780B (en) * 2020-03-19 2021-08-24 蚂蚁金服(杭州)网络技术有限公司 Data writing method, device and equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193350A1 (en) * 2012-07-27 2015-07-09 Tencent Technology (Shezhen) Comany Limited Data storage space processing method and processing system, and data storage server
CN106030501A (en) * 2014-09-30 2016-10-12 株式会社日立制作所 Distributed storage system
CN109445687A (en) * 2015-09-30 2019-03-08 华为技术有限公司 A kind of date storage method and protocol server
US20190220356A1 (en) * 2016-09-30 2019-07-18 Huawei Technologies Co., Ltd. Data Processing Method, System, and Apparatus
CN108008909A (en) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 A kind of date storage method, apparatus and system
CN110651246A (en) * 2017-10-25 2020-01-03 华为技术有限公司 Data reading and writing method and device and storage server
WO2019090756A1 (en) * 2017-11-13 2019-05-16 清华大学 Raid mechanism-based data storage system for sharing resources globally
CN111095217A (en) * 2017-11-13 2020-05-01 清华大学 Data storage system based on RAID mechanism with global resource sharing
CN110058961A (en) * 2018-01-18 2019-07-26 伊姆西Ip控股有限责任公司 Method and apparatus for managing storage system
JP2019159416A (en) * 2018-03-07 2019-09-19 Necソリューションイノベータ株式会社 Data management device, file system, data management method, and program

Also Published As

Publication number Publication date
CN111399780A (en) 2020-07-10
WO2021184901A1 (en) 2021-09-23
CN111399780B (en) 2021-08-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination