CN112148206A - Data reading and writing method and device, electronic equipment and medium - Google Patents

Data reading and writing method and device, electronic equipment and medium

Info

Publication number
CN112148206A
Authority
CN
China
Prior art keywords: data, read, written, hard disk, virtual
Prior art date
Legal status
Pending
Application number
CN201910580273.0A
Other languages
Chinese (zh)
Inventor
杨稼晟
姚国涛
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Beijing Kingsoft Cloud Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Beijing Kingsoft Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd, Beijing Kingsoft Cloud Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN201910580273.0A
Publication of CN112148206A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646: Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647: Migration mechanisms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention provide a data reading and writing method, a data reading and writing apparatus, an electronic device, and a medium, relating to the field of computer technology. They shorten the path of a read/write operation and reduce the time it consumes, thereby reducing read/write failures caused by the operation taking too long. The method comprises: determining, according to a storage location in a virtual disk for storing the data to be read and written, the fragment of the virtual disk used to store that data, where the virtual disk comprises a plurality of fragments and each fragment stores data of a specified size; then determining the virtual node to which the fragment storing the data maps, where the virtual node corresponds to a preset number of servers in the storage cluster; and then sending an input/output (I/O) request to a server corresponding to the virtual node, the I/O request requesting to read and write the data at that storage location.

Description

Data reading and writing method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data reading and writing method and apparatus, an electronic device, and a medium.
Background
In the related art, data is stored in one of two forms: local storage and distributed storage. With local storage, read and write operations never traverse a network and are performed only on a local disk. Consequently, capacity is limited to that of the local disk, and if the server using local storage goes down, read and write operations can no longer be executed. Distributed storage emerged to solve these problems.
In distributed storage, the storage system comprises metadata nodes and storage nodes. The metadata nodes store the metadata of files or data blocks. Every read or write operation must first pass through a metadata node to obtain the device identifier, disk identifier, offset address, and so on of the stored data block or file; the read/write request is then sent to the data storage server matching the obtained device identifier, which performs the corresponding operation.
However, because every read or write operation first passes through the metadata node, the execution path of the operation is long and time-consuming, and the operation may even fail. For example, if the network jitters, the read/write requests of every terminal in the network are queued, and the queuing delay adds to the client-to-server transmission latency, which easily causes read/write requests to fail.
Disclosure of Invention
Embodiments of the present invention provide a data read/write method, an apparatus, an electronic device, and a medium, so as to shorten a read/write operation path and reduce time consumed by the read/write operation, thereby reducing occurrence of a failure of the read/write operation due to an excessively long time consumed by the read/write operation. The specific technical scheme is as follows:
in a first aspect, a data reading and writing method is provided, which is applied to a terminal, and the method includes:
determining a fragment used for storing data to be read and written in a virtual disk according to a storage position used for storing the data to be read and written in the virtual disk, wherein the virtual disk comprises a plurality of fragments, and each fragment is used for storing data with a specified size;
determining virtual nodes for storing the fragment mapping of the data to be read and written, wherein the virtual nodes correspond to a preset number of servers in a storage cluster;
and sending an input/output (I/O) request to a server corresponding to the virtual node, wherein the I/O request is used for requesting to read and write the data to be read and written from the storage position.
Optionally, before the fragment in the virtual disk used for storing the data to be read and written is determined according to the storage location in the virtual disk used for storing that data, the method further includes:
acquiring a virtual node list, wherein the virtual node list comprises the hard disk identifier corresponding to each virtual node and the Internet Protocol (IP) address of the server to which each hard disk belongs.
Optionally, the obtaining the virtual node list includes:
sending an acquisition request to a designated server, wherein the acquisition request is used for requesting to acquire the virtual node list;
and receiving the virtual node list, and establishing long connection with the server sending the virtual node list.
Optionally, the determining, according to the storage location in the virtual disk for storing the data to be read and written, of the fragment in the virtual disk for storing that data includes:
determining a starting fragment corresponding to the data to be read and written according to a starting storage position for storing the data to be read and written in the virtual disk and the size of each fragment in the virtual disk;
determining a trailing fragment corresponding to the data to be read and written according to the sizes of the starting fragment and the data to be read and written, and determining the starting fragment, the trailing fragment and a fragment between the starting fragment and the trailing fragment as fragments for storing the data to be read and written.
Optionally, after determining the virtual node for storing the fragment mapping of the data to be read and written, the method further includes:
and determining a hard disk set for storing the data to be read and written according to the virtual node list, wherein the hard disk set comprises a hard disk corresponding to the virtual node for storing the fragment mapping of the data to be read and written.
Optionally, the I/O request is a write request, and the sending of the input/output I/O request to the server corresponding to the virtual node includes:
determining the last hard disk of the hard disk set, and sending the write request to the server to which the last hard disk belongs, so that after the last hard disk writes the data to be written, the data written to the last hard disk is synchronized to the other hard disks in the hard disk set according to the hard disk identifiers in the hard disk set, wherein the write request comprises: the data to be written and the identifier of the virtual node; or,
sending the write request respectively to the server to which each hard disk in the hard disk set belongs.
Optionally, the I/O request is a read request, and the sending of the input/output I/O request to the server corresponding to the virtual node includes:
determining the last hard disk of the hard disk set, and sending the reading request to a server to which the last hard disk belongs, wherein the reading request comprises: and the identifier of the data to be read and the identifier of the virtual node.
In a second aspect, a data reading and writing apparatus is provided, which is applied to a terminal, and the apparatus includes:
the determining module is used for determining a fragment used for storing data to be read and written in a virtual disk according to a storage position used for storing the data to be read and written in the virtual disk, wherein the virtual disk comprises a plurality of fragments, and each fragment is used for storing data with a specified size;
the determining module is further configured to determine virtual nodes for storing the fragment mapping of the data to be read and written, where the virtual nodes correspond to a preset number of servers in a storage cluster;
a sending module, configured to send an input/output I/O request to the server corresponding to the virtual node determined by the determining module, where the I/O request is used to request to read and write the data to be read and written from the storage location.
Optionally, the apparatus further comprises: an acquisition module;
the obtaining module is configured to obtain a virtual node list before determining, according to a storage location in the virtual disk, that is used for storing data to be read and written, and before determining that the virtual disk is used for storing a fragment of the data to be read and written, where the virtual node list includes a hard disk identifier corresponding to each virtual node and a server internet protocol IP address to which the hard disk belongs.
Optionally, the obtaining module is specifically configured to:
sending an acquisition request to a designated server, wherein the acquisition request is used for requesting to acquire the virtual node list;
and receiving the virtual node list, and establishing long connection with the server sending the virtual node list.
Optionally, the determining module is specifically configured to:
determining a starting fragment corresponding to the data to be read and written according to a starting storage position for storing the data to be read and written in the virtual disk and the size of each fragment in the virtual disk;
determining a trailing fragment corresponding to the data to be read and written according to the sizes of the starting fragment and the data to be read and written, and determining the starting fragment, the trailing fragment and a fragment between the starting fragment and the trailing fragment as fragments for storing the data to be read and written.
Optionally, the determining module is further configured to determine, after the virtual node for storing the fragment mapping of the data to be read and written is determined, a hard disk set for storing the data to be read and written according to the virtual node list, where the hard disk set includes a hard disk corresponding to the virtual node for storing the fragment mapping of the data to be read and written.
Optionally, the I/O request is a write request, and the sending module is specifically configured to:
determining the last hard disk of the hard disk set, and sending the write request to the server to which the last hard disk belongs, so that after the last hard disk writes the data to be written, the data written to the last hard disk is synchronized to the other hard disks in the hard disk set according to the hard disk identifiers in the hard disk set, wherein the write request comprises: the data to be written and the identifier of the virtual node; or,
sending the write request respectively to the server to which each hard disk in the hard disk set belongs.
Optionally, the I/O request is a read request, and the sending module is specifically configured to:
determining the last hard disk of the hard disk set, and sending the reading request to a server to which the last hard disk belongs, wherein the reading request comprises: and the identifier of the data to be read and the identifier of the virtual node.
In a third aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the steps of the data reading and writing method when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method implements any of the above-mentioned steps of the data reading and writing method.
In a fifth aspect, an embodiment of the present invention further provides a computer program product including instructions, which, when run on a computer, causes the computer to perform any of the steps of the data reading and writing method described above.
The data reading and writing method, the data reading and writing device, the electronic device and the medium provided by the embodiment of the invention can determine the fragments for storing the data to be read and written in the virtual disk according to the storage position for storing the data to be read and written in the virtual disk. And then determining virtual nodes for storing the fragment mapping of the data to be read and written, wherein the virtual nodes correspond to a preset number of servers in the storage cluster. And then sending the I/O request to a server corresponding to the virtual node. Because the terminal can directly perform data interaction with the server in the storage cluster without passing through a metadata node, the embodiment of the invention can shorten the path of the read-write operation and reduce the time consumed by the read-write operation, thereby reducing the occurrence of failure of the read-write operation caused by overlong time consumed by the read-write operation.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a data read/write system according to an embodiment of the present invention;
fig. 2 is a flowchart of a data reading and writing method according to an embodiment of the present invention;
fig. 3 is an exemplary schematic diagram of a data reading and writing system according to an embodiment of the present invention;
FIG. 4 is an exemplary diagram of another data reading and writing system according to an embodiment of the present invention;
FIG. 5 is an exemplary diagram of another data reading and writing system according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a data reading/writing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a data reading and writing system shown in fig. 1 includes: a terminal and a storage cluster.
The terminal may be a mobile phone, a tablet computer, a computer, or another electronic device on which client software can be installed. The terminal may read data from the servers of the storage cluster or write data to them.
The storage cluster comprises servers, and each server comprises a plurality of hard disks used to store the data written by terminals.
One virtual node corresponds to one or more hard disks in the storage cluster, the hard disks corresponding to each virtual node can be different, and the number of the hard disks corresponding to each virtual node can also be different.
It should be noted that fig. 1 exemplarily shows one terminal, and two servers capable of communicating with the terminal, where the two servers are located in a storage cluster, and the number of servers included in the storage cluster in the embodiment of the present application is not limited thereto.
Fig. 2 is a flowchart of a data reading and writing method according to an embodiment of the present invention, where the data reading and writing method is applied to a terminal of the data reading and writing system shown in fig. 1, and referring to fig. 2, the method includes the following steps:
step 201, determining the fragments for storing the data to be read and written in the virtual disk according to the storage location for storing the data to be read and written in the virtual disk.
The virtual disk comprises a plurality of fragments, and each fragment is used for storing data of a specified size, for example 4 megabytes (MB).
In one embodiment, the fragments for storing the data to be read and written are determined in the following two steps:
step one, determining a starting fragment corresponding to data to be read and written according to a starting storage position for storing the data to be read and written in a virtual disk and the size of each fragment in the virtual disk.
Optionally, before obtaining the starting storage location for the data to be read and written, the terminal may request the metadata information of the virtual disk from the storage cluster. Metadata information is information describing data attributes; from it the terminal can determine the starting storage location in the virtual disk for the data to be read and written. If no created virtual disk exists in the storage cluster, the storage cluster creates the virtual disk and then sends its metadata information to the terminal; if a created virtual disk already exists, the storage cluster sends its metadata information directly.
Specifically, the storage cluster of the data read-write system shown in fig. 1 determines whether a header file (for example, an AAA-header file) is stored in the storage cluster, and if the header file is stored in the storage cluster, it indicates that the created virtual disk exists in the storage cluster.
As an example, suppose the starting storage location acquired by the terminal for the data to be read and written is the 300 MB position of the virtual disk, and the size of each fragment in the virtual disk is 4 MB. Since 300 MB / 4 MB = 75, the data at the 300 MB position of the virtual disk is stored in the 75th fragment, so the starting fragment is the 75th fragment of the virtual disk.
It can be understood that the terminal may determine the starting storage location for the data to be written according to the size of the data already stored in the virtual disk. For example, if 200 MB of data is stored in the virtual disk, the data to be written starts at the 201st MB of the virtual disk.
The terminal can determine an initial storage position for storing the data to be read in the virtual disk according to the identifier of the data to be read.
And step two, determining a tail fragment corresponding to the data to be read and written according to the sizes of the start fragment and the data to be read and written, and determining the start fragment, the tail fragment and the fragments between the start fragment and the tail fragment as the fragments for storing the data to be read and written.
For example, if the size of the data to be read and written is 12 MB and the starting fragment is the 2nd fragment of the virtual disk, the trailing fragment is the 4th fragment of the virtual disk: the 2nd fragment stores the first 4 MB of the data, the 3rd fragment stores the next 4 MB (the 5th to 8th MB), and the 4th fragment stores the last 4 MB (the 9th to 12th MB). The 2nd, 3rd, and 4th fragments of the virtual disk are therefore determined as the fragments for storing the data to be read and written.
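As a minimal illustration of the two steps above, the following Python sketch computes the starting and trailing fragments from an offset and a data size. The function name and the zero-based fragment indices are assumptions for illustration; the patent's prose numbers fragments from 1.

```python
def fragments_for_request(start_offset_mb: int, size_mb: int, fragment_mb: int = 4):
    """Return the zero-based indices of every virtual-disk fragment touched
    by a request starting at start_offset_mb and spanning size_mb."""
    first = start_offset_mb // fragment_mb                 # starting fragment
    last = (start_offset_mb + size_mb - 1) // fragment_mb  # trailing fragment
    return list(range(first, last + 1))

# A 12 MB request starting at the 4 MB offset touches fragments 1, 2 and 3,
# i.e. the 2nd, 3rd and 4th fragments in the patent's 1-based numbering.
print(fragments_for_request(4, 12))  # [1, 2, 3]
```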
Step 202, determining a virtual node for storing the fragment mapping of the data to be read and written.
The virtual node corresponds to a preset number of servers in the storage cluster. It can be understood that the servers corresponding to a virtual node are the servers to which the hard disks corresponding to that virtual node belong.
Optionally, the virtual node mapped to the fragment for storing the data to be read and written may be determined according to the identifier of the fragment for storing the data to be read and written and the total number of virtual nodes.
For example, if the identifier of the fragment for storing the data to be read and written is 3 and the total number of virtual nodes is 10, the identifier of the virtual node to which that fragment maps is 3 % 10 = 3, where % denotes the remainder (modulo) operation.
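A one-line Python sketch of this mapping (the names are illustrative only):

```python
def vnode_for_fragment(fragment_id: int, total_vnodes: int) -> int:
    """Map a fragment to its virtual node by taking the remainder."""
    return fragment_id % total_vnodes

print(vnode_for_fragment(3, 10))  # 3, as in the example above
```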
After determining the virtual node for storing the fragment mapping of the data to be read and written, the hard disk set for storing the data to be read and written may be determined according to the virtual node list. The hard disk set comprises hard disks corresponding to the virtual nodes for storing the fragmentation mapping of the data to be read and written.
It can be understood that one virtual node corresponds to a plurality of hard disks in the storage cluster, and in order to achieve data redundancy, each data to be written is stored as a plurality of copies when written into the storage cluster, and one copy is stored in each hard disk corresponding to the virtual node to which the data to be written is mapped.
If the number of servers in the storage cluster of the data reading and writing system shown in fig. 1 is greater than or equal to the number of hard disks corresponding to the virtual nodes, the hard disks corresponding to the virtual nodes are respectively located in different servers; if the number of the servers of the storage cluster is less than the number of the hard disks corresponding to the virtual node, the hard disks corresponding to the virtual node comprise: hard disks in different servers and different hard disks of the same server.
For example, if the number of servers in the storage cluster is 2, which are server 1 and server 2, respectively, and the number of hard disks corresponding to the virtual node is 2, the hard disks corresponding to the virtual node include: hard disk a in server 1 and hard disk B in server 2.
For another example, the number of servers in the storage cluster is 2, which are respectively: server 1 and server 2. The number of the hard disks corresponding to the virtual node is 3, and the hard disks corresponding to the virtual node include: one hard disk in the server 1 and two hard disks in the server 2, or two hard disks in the server 1 and one hard disk in the server 2.
Because the hard disks corresponding to the virtual nodes are positioned in different servers, if one of the servers corresponding to the virtual nodes is down, other servers corresponding to the virtual nodes can respond to the I/O request sent by the terminal, and the success rate of reading or writing data by the terminal is improved.
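The patent does not prescribe a placement algorithm; the following Python sketch shows one hypothetical round-robin policy consistent with the two examples above (the helper names and data layout are assumptions):

```python
def place_vnode_disks(servers, replica_count):
    """Assign replica_count hard disks to one virtual node, spreading them
    across servers; a server holds two of them only when the replicas
    outnumber the servers."""
    placement = []
    for i in range(replica_count):
        server = servers[i % len(servers)]         # wrap when replicas > servers
        disk = server["disks"][i // len(servers)]  # next unused disk on that server
        placement.append((server["name"], disk))
    return placement

servers = [{"name": "server1", "disks": ["A", "C"]},
           {"name": "server2", "disks": ["B", "D"]}]
print(place_vnode_disks(servers, 3))
# [('server1', 'A'), ('server2', 'B'), ('server1', 'C')]
```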
Step 203, an Input/Output (I/O) request is sent to the server corresponding to the virtual node.
The I/O request is used for requesting to read and write the data to be read and written from the storage position used for storing the data to be read and written in the virtual disk.
It can be understood that the reading and writing data from the storage location in the virtual disk by the terminal may be the reading and writing data from the hard disk of the server corresponding to the virtual node mapped by the fragment of the virtual disk by the terminal.
In one embodiment, the terminal sends an I/O request to a server to which a hard disk corresponding to the virtual node belongs, and after receiving the I/O request, the server writes data to be written into the hard disk corresponding to the virtual node, or reads the data to be read from the hard disk corresponding to the virtual node.
The data reading and writing method provided by the embodiment of the invention can determine the fragments for storing the data to be read and written in the virtual disk according to the storage position for storing the data to be read and written in the virtual disk. And then determining virtual nodes for storing the fragment mapping of the data to be read and written, wherein the virtual nodes correspond to a preset number of servers in the storage cluster. And then sending the I/O request to a server corresponding to the virtual node. Because the terminal can directly perform data interaction with the server in the storage cluster without passing through a metadata node, the embodiment of the invention can shorten the path of the read-write operation and reduce the time consumed by the read-write operation, thereby reducing the occurrence of failure of the read-write operation caused by overlong time consumed by the read-write operation.
In this embodiment of the present invention, before determining the fragments for storing the data to be read and written in step 201, the terminal further needs to obtain a virtual node list. The virtual node list includes the hard disk identifier corresponding to each virtual node and the Internet Protocol (IP) address of the server to which each hard disk belongs. The virtual node list is acquired in the following two steps:
step one, sending an acquisition request to a specified server.
The acquisition request is used to request the virtual node list. The designated server is the server corresponding to one of the IP addresses in a configuration file acquired in advance; the configuration file stores the IP addresses of a preset number of servers in the storage cluster. To guard against the terminal failing to connect to the server at one IP address, or that server being unstable, the configuration file may contain multiple IP addresses: if the terminal cannot connect to the server at one IP address, it connects to the server at another. Including multiple IP addresses in the configuration file thus reduces the chance that the terminal cannot obtain the virtual node list.
Optionally, the terminal may call a Software Development Kit (SDK) to connect to the storage cluster in the data read-write system shown in fig. 1: it establishes a long connection with the designated server and, once the connection succeeds, sends the acquisition request.
And step two, receiving the virtual node list and establishing long connection with the server sending the virtual node list.
If the designated server stores the virtual node list, the terminal may receive the virtual node list sent by the designated server.
If the designated server does not store the virtual node list, the designated server queries whether the virtual node list is stored in another server in the storage cluster of the data read-write system shown in fig. 1. If one server in the storage cluster stores the virtual node list, the server establishes long connection with the terminal and sends the virtual node list to the terminal.
Optionally, the designated server may query the other servers in the storage cluster of the data read-write system shown in fig. 1 in ascending order of IP address until it finds the server storing the virtual node list.
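The acquisition flow might look like the sketch below. The port, the wire format, and the JSON layout of the reply are invented for illustration; only the fallback over the configured IP addresses and the long connection come from the text above.

```python
import json
import socket

def fetch_vnode_list(configured_ips, port=9000, timeout=3.0):
    """Try each IP from the configuration file in turn; return the virtual
    node list plus the long connection kept open for later I/O requests."""
    for ip in configured_ips:
        try:
            conn = socket.create_connection((ip, port), timeout=timeout)
            conn.sendall(b"GET_VNODE_LIST\n")
            reply = conn.makefile("r").readline()
            # e.g. {"1": [["disk_x", "10.0.0.1"], ["disk_y", "10.0.0.2"]]}
            return conn, json.loads(reply)
        except (OSError, ValueError):
            continue  # unreachable or unstable: fall back to the next IP
    raise RuntimeError("no server listed in the configuration file was reachable")
```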
Optionally, the manner in which the terminal sends the I/O request to the server corresponding to the virtual node in step 203 includes two cases:
in the first case, when the I/O request is a write request, the following two ways are used for the terminal to send the I/O request to the server corresponding to the virtual node:
optionally, the first way of sending the write request by the terminal may be applicable to a scenario where a larger number of I/O requests sent by the terminal may cause a bottleneck in the network card usage of the terminal, but is not limited thereto.
In the first mode, the terminal determines the last hard disk of the hard disk set and sends the write request to the server to which the last hard disk belongs.
In the embodiment of the present invention, with reference to fig. 3, a terminal sends a write request to a server to which a last hard disk (e.g., hard disk 3 in fig. 3) in a hard disk set belongs, and after the last hard disk writes data to be read and written, the server to which the last hard disk belongs may synchronize the data to be written, which is written in the last hard disk, to other hard disks (e.g., hard disk 2 and hard disk 1 in fig. 3) in the hard disk set according to hard disk identifiers in the hard disk set.
Wherein the write request includes: the data to be written, the virtual node identifiers and the hard disk identifiers in the hard disk set.
After writing the data to be written, the server can also send a confirmation message to the terminal, wherein the confirmation message is used for indicating that the data to be written is successfully written into the hard disk.
Optionally, the last hard disk of the hard disk set may be determined according to the descending order of the hard disk identifiers.
It can be understood that each server of the storage cluster includes a plurality of hard disks, so a server needs to determine, from the identifier of the virtual node, which hard disk the data should be written to. The hard disks to which the data needs to be written differ from one another, and after the server to which the last hard disk belongs writes the data, it needs to synchronize the data to the other hard disks of the set according to the hard disk identifiers in the hard disk set.
It can be understood that when the data to be written has not yet been written to some of the hard disks in the hard disk set, the server to which such a hard disk belongs may receive a read request for that data; the server cannot read the data, and the terminal's read fails.
To reduce such situations, the terminal may send the write request to the server to which the last hard disk belongs, so that this server writes the data first and then synchronizes it to the other hard disks in the set. When reading, the terminal reads from the last hard disk. Since the last hard disk is written first, it is the replica from which the terminal is most likely to read the newly written data successfully.
Optionally, in order to distinguish data written by different users, the write request may further include a user identifier. The user identifier may be an account identifier for logging in the terminal. It can be understood that the virtual disks corresponding to different accounts may be different, and different users may be distinguished according to different accounts logged in by the terminal.
The write request may also include a virtual disk identification. It can be understood that the same account may also correspond to multiple virtual disks, and different virtual disks of the same account may be distinguished through virtual disk identifiers. For example, the virtual disk identification is: 11A, represents the virtual disk A of the user 11.
Therefore, after the terminal sends the write request to the server to which the last hard disk of the hard disk set belongs, the data is propagated to every hard disk in the set. Compared with sending a separate write request to the server of each hard disk, this reduces write-request traffic, and thus avoids the situation where the terminal cannot send write requests, or sends them slowly, because the heavy write-request traffic exceeds the processing capacity of the terminal's network card.
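A Python sketch of this first write mode; send_request, the message format, and the per-disk dictionaries are assumptions, not part of the patent:

```python
def write_via_last_disk(vnode_list, vnode_id, payload, send_request):
    """Mode one: one write request to the server owning the last disk of the
    set; that server then syncs the other replicas itself."""
    disks = sorted(vnode_list[vnode_id], key=lambda d: d["disk_id"])
    last = disks[-1]  # treat the largest identifier as the "last" disk (one reading of the ordering above)
    return send_request(last["server_ip"], {
        "type": "WRITE",
        "vnode": vnode_id,
        "disk_ids": [d["disk_id"] for d in disks],  # lets the server fan out the sync
        "data": payload,
    })
```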
Optionally, the second way of sending the write request by the terminal may be applicable to a scenario where the number of I/O requests sent by the terminal is small and a shorter time-consuming I/O operation is required, but is not limited to this.
In the second method, with reference to fig. 4, the terminal sends a write request to a server to which each hard disk (e.g., hard disk 1, hard disk 2, and hard disk 3 in fig. 4) in the hard disk set belongs. After writing the data to be written, the server can also send a confirmation message to the terminal, wherein the confirmation message is used for indicating that the data to be written is successfully written into the hard disk.
Wherein the write request includes: data to be written and the identification of the virtual node. Optionally, in order to distinguish data written by different users, the write request may further include a user identifier.
Therefore, compared with the first mode, sending the write request separately to the server of each hard disk in the hard disk set writes the data to all hard disks in the set simultaneously, which reduces the time taken to write the data.
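And a matching sketch of the second mode, fanning the same request out to the server of every disk in the set (same invented helpers):

```python
def write_to_all_disks(vnode_list, vnode_id, payload, send_request):
    """Mode two: send the write request to every disk's server so all
    replicas are written at the same time."""
    return [send_request(disk["server_ip"],
                         {"type": "WRITE", "vnode": vnode_id, "data": payload})
            for disk in vnode_list[vnode_id]]
```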
In the second case, when the I/O request is a read request, the terminal sends the read request to the server corresponding to the virtual node as follows: determining the last hard disk of the hard disk set, and sending the read request to the server to which the last hard disk belongs.
Wherein the read request comprises: the identity of the data to be read and the identity of the virtual node. Optionally, in order to distinguish data read by different users, the read request may further include a user identifier.
It can be understood that, after receiving the read request, the server to which the last hard disk belongs sends the data to the terminal if the data to be read is stored locally. If not, it reads the data from the other hard disks corresponding to the virtual node in descending order of hard disk identifier, and sends the data to the terminal once it is read.
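A sketch of this server-side fallback; only the descending disk-identifier order comes from the text, and the helpers are hypothetical:

```python
def serve_read(local_store, vnode_disks, data_id, read_from_disk):
    """Server-side handling on the last disk's server: answer locally when
    possible, otherwise try the remaining replicas in descending
    disk-identifier order."""
    if data_id in local_store:
        return local_store[data_id]
    for disk in sorted(vnode_disks, key=lambda d: d["disk_id"], reverse=True):
        data = read_from_disk(disk, data_id)  # hypothetical replica read
        if data is not None:
            return data
    return None  # the caller reports the read as failed
```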
In order to more clearly explain the data reading and writing method provided by the embodiment of the present invention, the embodiment of the present invention further provides another exemplary schematic diagram of a data reading and writing system, and the system is shown in fig. 5.
With reference to the system shown in fig. 5, the data reading and writing method provided in the embodiment of the present invention includes the following steps:
step one, a terminal sends an acquisition request to a designated server corresponding to an IP address in a configuration file and receives a virtual node list sent by the designated server.
For example, in conjunction with fig. 5, the virtual node list sent by the designated server includes: virtual node 1, virtual node 2, and virtual node 3, where virtual node 1 corresponds to hard disk X of server 1 and hard disk Y of server 2.
And step two, the terminal determines the fragments for storing the data to be read and written according to the position of the data to be read and written in the virtual disk and the size of the data to be read and written, and determines the virtual nodes corresponding to the fragments for storing the data to be read and written.
For example, referring to fig. 5, the terminal determines that the virtual node corresponding to the slice for storing the data to be read and written is virtual node 1.
And step three, the terminal determines the hard disk corresponding to the virtual node determined in the step two according to the virtual node list.
For example, referring to fig. 5, the terminal determines that the hard disks corresponding to the virtual node 1 are a hard disk X and a hard disk Y.
And step four, the terminal writes the data to be written into each hard disk determined in the step three, or reads the data to be read from the hard disk determined in the step three.
For example, in connection with fig. 5, the terminal sends write requests to the server 1 and the server 2, respectively. After receiving the write request, the server 1 writes the data to be written into the hard disk X, and sends a confirmation message to the terminal. After receiving the write request, the server 2 writes the data to be written into the hard disk Y, and sends a confirmation message to the terminal.
Alternatively, the terminal sends a read request to server 1 or server 2. After receiving the read request, server 1 reads the data to be read from hard disk X and sends it to the terminal; or, after receiving the read request, server 2 reads the data to be read from hard disk Y and sends it to the terminal.
The embodiment of the invention also has the following beneficial effects: because the storage cluster of the embodiment of the invention does not comprise the metadata node, compared with the storage cluster comprising the metadata node, the embodiment of the invention can reduce the storage components of the storage cluster, reduce the total number of devices of the storage cluster and save resources.
Moreover, if a metadata node in a storage cluster including the metadata node fails, the terminal cannot acquire a hard disk identifier corresponding to data to be read and written, cannot read or write data in a hard disk corresponding to the data to be read and written, and cannot process an I/O request by a server for storing data in the storage cluster. However, in the embodiment of the present invention, the terminal does not need to obtain the hard disk corresponding to the data to be read and written from the metadata node, and the terminal can directly perform data interaction with the server for storing the data, so that the embodiment of the present invention can avoid a single point of failure.
In addition, in the embodiment of the invention, the terminal can directly send the write-in request to the server, and the server can also send the confirmation message for indicating the successful write-in of the data to be written to the terminal, so that the consistency of the data written by the terminal and the data stored in the hard disk is ensured, and the quality and the integrity of the stored data can be improved.
In the prior art, if one hard disk in a storage cluster fails, the data stored on a large number of the cluster's hard disks needs to be migrated.
This is because, in the prior art, the hard disk corresponding to a virtual node is computed by taking the virtual node's identifier modulo the total number of hard disks in the storage cluster. When one hard disk fails, the total number of available hard disks changes, and with it the hard disk corresponding to every virtual node: each mapping must be re-determined, and the data stored on each virtual node's original hard disk must be migrated to its newly assigned hard disk. The amount of data migrated is huge and the migration takes far too long to be practical.
In the embodiment of the present invention, the hard disk corresponding to each virtual node is predetermined. If one hard disk of the storage cluster fails, only the data stored in the failed hard disk needs to be transferred to the other hard disk, and the hard disk corresponding to the virtual node is updated in the virtual node list. Therefore, the embodiment of the invention can also reduce the data volume needing to be migrated when the hard disk of the storage cluster fails.
For example, the virtual node list includes: virtual node 1 corresponds to hard disk A and hard disk B, and virtual node 2 corresponds to hard disk C. If hard disk A fails, the data stored on hard disk A is migrated to hard disk C, and the virtual node list is updated so that virtual node 1 corresponds to hard disk C and hard disk B, while virtual node 2 still corresponds to hard disk C.
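A sketch of that repair path under the same assumptions (copy_disk is a hypothetical migration helper); it reproduces the example above:

```python
def handle_disk_failure(vnode_list, failed, spare, copy_disk):
    """Migrate only the failed disk's data, then patch every virtual node
    that referenced it; all other mappings stay untouched."""
    copy_disk(failed, spare)  # exactly one disk's data moves
    for vnode_id, disks in vnode_list.items():
        vnode_list[vnode_id] = [spare if d == failed else d for d in disks]

vnodes = {1: ["A", "B"], 2: ["C"]}
handle_disk_failure(vnodes, "A", "C", lambda src, dst: None)
print(vnodes)  # {1: ['C', 'B'], 2: ['C']}
```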
Corresponding to the above method embodiment, as shown in fig. 6, an embodiment of the present invention provides a data reading and writing apparatus, which is applied to a terminal of the data reading and writing system shown in fig. 1, and the apparatus includes: a determination module 601 and a sending module 602.
A determining module 601, configured to determine, according to a storage location in the virtual disk, where the storage location is used to store data to be read and written, a fragment in the virtual disk, where the fragment is used to store data to be read and written, where the virtual disk includes multiple fragments, and each fragment is used to store data of an assigned size;
the determining module 601 is further configured to determine virtual nodes for storing fragment mappings of data to be read and written, where the virtual nodes correspond to a preset number of servers in a storage cluster;
a sending module 602, configured to send an input/output I/O request to a server corresponding to the virtual node determined by the determining module, where the I/O request is used to request to read and write data to be read and written from a storage location.
Optionally, the apparatus may further include: an acquisition module;
the acquisition module is used for acquiring a virtual node list before fragmentation of the data to be read and written is stored in the virtual disk according to the storage position of the data to be read and written in the virtual disk, wherein the virtual node list comprises hard disk identifications corresponding to the virtual nodes and server Internet Protocol (IP) addresses to which the hard disks belong.
Optionally, the obtaining module may be specifically configured to:
sending an acquisition request to a designated server, wherein the acquisition request is used for requesting to acquire a virtual node list;
and receiving the virtual node list, and establishing long connection with the server sending the virtual node list.
Optionally, the determining module 601 may be specifically configured to:
determining an initial fragment corresponding to the data to be read and written according to an initial storage position for storing the data to be read and written in the virtual disk and the size of each fragment in the virtual disk;
and determining a tail fragment corresponding to the data to be read and written according to the sizes of the start fragment and the data to be read and written, and determining the start fragment, the tail fragment and the fragments between the start fragment and the tail fragment as fragments for storing the data to be read and written.
Optionally, the determining module 601 may be further configured to determine, after determining the virtual node for storing the sharded mapping of the data to be read and written, a hard disk set for storing the data to be read and written according to the virtual node list, where the hard disk set includes a hard disk corresponding to the virtual node for storing the sharded mapping of the data to be read and written.
Optionally, the I/O request is a write request, and the sending module 602 may be specifically configured to:
determining the last hard disk of the hard disk set, and sending the write request to the server to which the last hard disk belongs, so that after the last hard disk writes the data to be written, the data written to the last hard disk is synchronized to the other hard disks in the hard disk set according to the hard disk identifiers in the hard disk set, wherein the write request comprises: the data to be written and the identifier of the virtual node; or,
sending the write request respectively to the server to which each hard disk in the hard disk set belongs.
Optionally, the I/O request is a read request, and the sending module 602 may be specifically configured to:
determining the last hard disk of the hard disk set, and sending a reading request to a server to which the last hard disk belongs, wherein the reading request comprises: the identity of the data to be read and the identity of the virtual node.
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 complete mutual communication through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to implement the steps executed by the terminal in the foregoing method embodiment when executing the program stored in the memory 703.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above data reading and writing methods.
In another embodiment of the present invention, a computer program product containing instructions is provided, which when run on a computer, causes the computer to execute any one of the above data reading and writing methods.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. A data read-write method is applied to a terminal, and the method comprises the following steps:
determining a fragment used for storing data to be read and written in a virtual disk according to a storage position used for storing the data to be read and written in the virtual disk, wherein the virtual disk comprises a plurality of fragments, and each fragment is used for storing data with a specified size;
determining virtual nodes for storing the fragment mapping of the data to be read and written, wherein the virtual nodes correspond to a preset number of servers in a storage cluster;
and sending an input/output (I/O) request to a server corresponding to the virtual node, wherein the I/O request is used for requesting to read and write the data to be read and written from the storage position.
2. The method according to claim 1, wherein before the fragment in the virtual disk for storing the data to be read and written is determined according to the storage location in the virtual disk for storing the data to be read and written, the method further comprises:
and acquiring a virtual node list, wherein the virtual node list comprises hard disk identifications corresponding to the virtual nodes and server Internet Protocol (IP) addresses to which the hard disks belong.
3. The method of claim 2, wherein acquiring the virtual node list comprises:
sending an acquisition request to a designated server, wherein the acquisition request is used for requesting the virtual node list; and
receiving the virtual node list, and establishing a long connection with the server that sent the virtual node list.
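As a rough illustration (the patent specifies no wire protocol), the acquisition step and the long connection could look as follows; the JSON line framing, the "get_virtual_node_list" operation name, and the shape of the reply are all invented for the example.

    import json
    import socket

    def acquire_virtual_node_list(server_addr: tuple) -> tuple:
        """Fetch the virtual node list and keep the connection open for later I/O requests."""
        sock = socket.create_connection(server_addr)
        sock.sendall(json.dumps({"op": "get_virtual_node_list"}).encode() + b"\n")
        reply = sock.makefile().readline()   # assumed framing: one JSON object per line
        node_list = json.loads(reply)        # e.g. [{"disks": [...], "ips": [...]}, ...]
        return sock, node_list               # the open socket is the "long connection"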
4. The method according to claim 3, wherein the determining, according to the storage location in the virtual disk for storing the data to be read and written, the fragment in the virtual disk for storing the data to be read and written comprises:
determining a starting fragment corresponding to the data to be read and written according to the starting storage location of the data to be read and written in the virtual disk and the size of each fragment in the virtual disk; and
determining a trailing fragment corresponding to the data to be read and written according to the starting fragment and the size of the data to be read and written, and determining the starting fragment, the trailing fragment, and the fragments between the starting fragment and the trailing fragment as the fragments for storing the data to be read and written.
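With a fixed fragment size, the starting and trailing fragments of claim 4 reduce to two integer divisions. The helper below is a sketch under that assumption, with a worked example:

    def fragments_for_request(offset: int, length: int, fragment_size: int) -> range:
        """Return the contiguous run of fragment indices covering [offset, offset + length)."""
        start_fragment = offset // fragment_size
        trailing_fragment = (offset + length - 1) // fragment_size
        return range(start_fragment, trailing_fragment + 1)

    # Example: a 6 MiB request at offset 10 MiB with 4 MiB fragments touches
    # fragments 2 and 3 (10 MiB falls in fragment 2; the last byte falls in fragment 3).
    assert list(fragments_for_request(10 * 2**20, 6 * 2**20, 4 * 2**20)) == [2, 3]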
5. The method of claim 3, wherein after determining the virtual node to which the fragment for storing the data to be read and written is mapped, the method further comprises:
determining, according to the virtual node list, a hard disk set for storing the data to be read and written, wherein the hard disk set comprises the hard disks corresponding to the virtual node to which the fragment for storing the data to be read and written is mapped.
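Continuing the sketch, and assuming the virtual node list has the shape used in the acquisition example above (indexed by virtual node, with invented "disks" and "ips" fields), the hard disk set is a plain lookup:

    def hard_disk_set(node_list: list, vnode: int) -> list:
        """Return the hard disks backing a virtual node, paired with their servers' IPs."""
        entry = node_list[vnode]   # assumes the list is indexed by virtual node id
        return [{"disk_id": d, "ip": ip} for d, ip in zip(entry["disks"], entry["ips"])]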
6. The method of claim 5, wherein the I/O request is a write request, and sending the input/output (I/O) request to the server corresponding to the virtual node comprises:
determining the last hard disk in the hard disk set, and sending the write request to the server to which the last hard disk belongs, so that after the last hard disk writes the data to be written, the last hard disk synchronizes the written data to the other hard disks in the hard disk set according to the hard disk identifiers in the hard disk set, wherein the write request comprises: the data to be written and the identifier of the virtual node; or,
sending the write request, respectively, to the server to which each hard disk in the hard disk set belongs.
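The two alternative write paths of claim 6 might be rendered on the client side as follows; send_write stands in for whatever RPC the system actually uses, and the request fields are assumptions rather than language from the patent.

    def send_write(ip: str, payload: dict) -> None:
        """Placeholder for the real RPC that delivers a write request to a storage server."""
        ...

    def write_chained(disks: list, vnode: int, data: bytes) -> None:
        # Mode 1: contact only the server owning the last hard disk; that server writes
        # the data and then synchronizes it to the remaining disks by their identifiers.
        last = disks[-1]
        send_write(last["ip"], {"vnode": vnode, "data": data,
                                "sync_to": [d["disk_id"] for d in disks[:-1]]})

    def write_fanout(disks: list, vnode: int, data: bytes) -> None:
        # Mode 2: send the write request to the server of every disk in the set.
        for d in disks:
            send_write(d["ip"], {"vnode": vnode, "data": data})

The chained mode trades client-side fan-out bandwidth for one extra hop, since a single server distributes the replicas; the fan-out mode keeps replication under the client's control.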
7. The method of claim 5, wherein the I/O request is a read request, and sending the input/output (I/O) request to the server corresponding to the virtual node comprises:
determining the last hard disk in the hard disk set, and sending the read request to the server to which the last hard disk belongs, wherein the read request comprises: the identifier of the data to be read and the identifier of the virtual node.
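The read path mirrors the chained write mode in that only the last hard disk's server is contacted, presumably because that disk is the first to hold a complete copy under claim 6. Again a sketch, with an assumed send_read helper:

    def send_read(ip: str, payload: dict) -> bytes:
        """Placeholder for the real RPC that fetches data from a storage server."""
        ...

    def read_data(disks: list, vnode: int, data_id: str) -> bytes:
        # Reads target the server owning the last hard disk in the set.
        return send_read(disks[-1]["ip"], {"vnode": vnode, "data_id": data_id})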
8. A data read-write apparatus, applied to a terminal, the apparatus comprising:
a determining module, configured to determine, according to a storage location in a virtual disk for storing data to be read and written, a fragment in the virtual disk for storing the data to be read and written, wherein the virtual disk comprises a plurality of fragments, and each fragment is used for storing data of a specified size;
the determining module being further configured to determine a virtual node to which the fragment for storing the data to be read and written is mapped, wherein the virtual node corresponds to a preset number of servers in a storage cluster; and
a sending module, configured to send an input/output (I/O) request to the server corresponding to the virtual node determined by the determining module, wherein the I/O request is used for requesting to read or write the data to be read and written at the storage location.
9. The apparatus of claim 8, further comprising: an acquisition module;
the acquisition module is configured to acquire a virtual node list before the fragment in the virtual disk for storing the data to be read and written is determined according to the storage location in the virtual disk for storing the data to be read and written, wherein the virtual node list comprises the hard disk identifiers corresponding to each virtual node and the Internet Protocol (IP) address of the server to which each hard disk belongs.
10. The apparatus of claim 9, wherein the acquisition module is specifically configured to:
send an acquisition request to a designated server, wherein the acquisition request is used for requesting the virtual node list; and
receive the virtual node list, and establish a long connection with the server that sent the virtual node list.
11. The apparatus of claim 10, wherein the determining module is specifically configured to:
determine a starting fragment corresponding to the data to be read and written according to the starting storage location of the data to be read and written in the virtual disk and the size of each fragment in the virtual disk; and
determine a trailing fragment corresponding to the data to be read and written according to the starting fragment and the size of the data to be read and written, and determine the starting fragment, the trailing fragment, and the fragments between the starting fragment and the trailing fragment as the fragments for storing the data to be read and written.
12. The apparatus of claim 10, wherein
the determining module is further configured to determine, after the virtual node to which the fragment for storing the data to be read and written is mapped has been determined, a hard disk set for storing the data to be read and written according to the virtual node list, wherein the hard disk set comprises the hard disks corresponding to the virtual node to which the fragment for storing the data to be read and written is mapped.
13. The apparatus of claim 12, wherein the I/O request is a write request, and the sending module is specifically configured to:
determine the last hard disk in the hard disk set, and send the write request to the server to which the last hard disk belongs, so that after the last hard disk writes the data to be written, the last hard disk synchronizes the written data to the other hard disks in the hard disk set according to the hard disk identifiers in the hard disk set, wherein the write request comprises: the data to be written and the identifier of the virtual node; or,
send the write request, respectively, to the server to which each hard disk in the hard disk set belongs.
14. The apparatus of claim 12, wherein the I/O request is a read request, and the sending module is specifically configured to:
determine the last hard disk in the hard disk set, and send the read request to the server to which the last hard disk belongs, wherein the read request comprises: the identifier of the data to be read and the identifier of the virtual node.
15. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1 to 7 when executing the program stored in the memory.
16. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 7.
CN201910580273.0A 2019-06-28 2019-06-28 Data reading and writing method and device, electronic equipment and medium Pending CN112148206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910580273.0A CN112148206A (en) 2019-06-28 2019-06-28 Data reading and writing method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910580273.0A CN112148206A (en) 2019-06-28 2019-06-28 Data reading and writing method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN112148206A true CN112148206A (en) 2020-12-29

Family

ID=73891499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910580273.0A Pending CN112148206A (en) 2019-06-28 2019-06-28 Data reading and writing method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112148206A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568570A (en) * 2021-06-22 2021-10-29 Alibaba Singapore Holding Pte. Ltd. Data processing method and device
CN113641467A (en) * 2021-10-19 2021-11-12 Hangzhou Youyun Technology Co., Ltd. Distributed block storage implementation method of virtual machine
CN116360711A (en) * 2023-06-02 2023-06-30 Hangzhou Woqu Technology Co., Ltd. Distributed storage processing method, device, equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103124299A (en) * 2013-03-21 2013-05-29 Hangzhou Dianzi University Distributed block-level storage system in heterogeneous environment
WO2014202016A1 (en) * 2013-06-20 2014-12-24 Institute of Acoustics, Chinese Academy of Sciences Classification-based virtual network mapping method and system
CN104298541A (en) * 2014-10-22 2015-01-21 Inspur (Beijing) Electronic Information Industry Co., Ltd. Data distribution algorithm and data distribution device for cloud storage system
US9032165B1 (en) * 2013-04-30 2015-05-12 Amazon Technologies, Inc. Systems and methods for scheduling write requests for a solid state storage device
CN105511801A (en) * 2015-11-12 2016-04-20 Changchun University of Science and Technology Data storage method and apparatus
CN106873919A (en) * 2017-03-20 2017-06-20 Zhengzhou Yunhai Information Technology Co., Ltd. Data storage method and device based on a cloud storage system
CN107632788A (en) * 2017-09-26 2018-01-26 Zhengzhou Yunhai Information Technology Co., Ltd. I/O scheduling method for a multi-controller storage system and multi-controller storage system
US20180121366A1 (en) * 2016-11-01 2018-05-03 Alibaba Group Holding Limited Read/write request processing method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103124299A (en) * 2013-03-21 2013-05-29 Hangzhou Dianzi University Distributed block-level storage system in heterogeneous environment
US9032165B1 (en) * 2013-04-30 2015-05-12 Amazon Technologies, Inc. Systems and methods for scheduling write requests for a solid state storage device
WO2014202016A1 (en) * 2013-06-20 2014-12-24 Institute of Acoustics, Chinese Academy of Sciences Classification-based virtual network mapping method and system
CN104298541A (en) * 2014-10-22 2015-01-21 Inspur (Beijing) Electronic Information Industry Co., Ltd. Data distribution algorithm and data distribution device for cloud storage system
CN105511801A (en) * 2015-11-12 2016-04-20 Changchun University of Science and Technology Data storage method and apparatus
US20180121366A1 (en) * 2016-11-01 2018-05-03 Alibaba Group Holding Limited Read/write request processing method and apparatus
CN106873919A (en) * 2017-03-20 2017-06-20 Zhengzhou Yunhai Information Technology Co., Ltd. Data storage method and device based on a cloud storage system
CN107632788A (en) * 2017-09-26 2018-01-26 Zhengzhou Yunhai Information Technology Co., Ltd. I/O scheduling method for a multi-controller storage system and multi-controller storage system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568570A (en) * 2021-06-22 2021-10-29 Alibaba Singapore Holding Pte. Ltd. Data processing method and device
CN113568570B (en) * 2021-06-22 2024-04-12 Alibaba Innovation Private Limited Data processing method and device
CN113641467A (en) * 2021-10-19 2021-11-12 Hangzhou Youyun Technology Co., Ltd. Distributed block storage implementation method of virtual machine
CN113641467B (en) * 2021-10-19 2022-02-11 Hangzhou Youyun Technology Co., Ltd. Distributed block storage implementation method of virtual machine
CN116360711A (en) * 2023-06-02 2023-06-30 Hangzhou Woqu Technology Co., Ltd. Distributed storage processing method, device, equipment and medium
CN116360711B (en) * 2023-06-02 2023-08-11 Hangzhou Woqu Technology Co., Ltd. Distributed storage processing method, device, equipment and medium

Similar Documents

Publication Publication Date Title
US11550819B2 (en) Synchronization cache seeding
CN108712457B (en) Method and device for adjusting dynamic load of back-end server based on Nginx reverse proxy
US10382380B1 (en) Workload management service for first-in first-out queues for network-accessible queuing and messaging services
CN112131237B (en) Data synchronization method, device, equipment and computer readable medium
US20200272452A1 (en) Automated transparent distribution of updates to server computer systems in a fleet
CN112148206A (en) Data reading and writing method and device, electronic equipment and medium
CN108228102B (en) Method and device for data migration between nodes, computing equipment and computer storage medium
WO2017088572A1 (en) Data processing method, device, and system
CN110008041B (en) Message processing method and device
CN110119304B (en) Interrupt processing method and device and server
CN111049928A (en) Data synchronization method, system, electronic device and computer readable storage medium
US20110282917A1 (en) System and method for efficient resource management
JP2022550401A (en) Data upload method, system, device and electronic device
US12001450B2 (en) Distributed table storage processing method, device and system
CN110633046A (en) Storage method and device of distributed system, storage equipment and storage medium
CN114461593B (en) Log writing method and device, electronic device and storage medium
CN113794764A (en) Request processing method and medium for server cluster and electronic device
CN111444278A (en) Data synchronization method and device and transfer server
CN111225003B (en) NFS node configuration method and device
CN111385255A (en) Asynchronous call implementation method and device, server and server cluster
CN111125168B (en) Data processing method and device, electronic equipment and storage medium
CN110798358B (en) Distributed service identification method and device, computer readable medium and electronic equipment
CN110502187B (en) Snapshot rollback method and device
CN116594551A (en) Data storage method and device
WO2023116438A1 (en) Data access method and apparatus, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination