CN112698935A - Data access method, data server and data storage system - Google Patents

Data access method, data server and data storage system

Info

Publication number
CN112698935A
Authority
CN
China
Prior art keywords
data
server
cache
solid-state disk
Prior art date
Legal status
Pending
Application number
CN201911008409.7A
Other languages
Chinese (zh)
Inventor
徐佳宏
刘瑞顺
朱吕亮
Current Assignee
Shenzhen Ipanel TV Inc
Original Assignee
Shenzhen Ipanel TV Inc
Priority date
Filing date
Publication date
Application filed by Shenzhen Ipanel TV Inc
Priority to CN201911008409.7A
Publication of CN112698935A
Legal status: Pending

Classifications

    • G06F 9/505: Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network, for accessing one among a plurality of replicated servers
    • H04L 67/568: Provisioning of proxy services; storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data access method, a data server and a data storage system. When a data access request sent by a client is received, the data server selects a target cache server from a main cache server and secondary cache servers, and sends a cache location query request containing the data number of the data accessed by the data access request to the target cache server. If data cache location information corresponding to the data number exists in the target cache server, the data server receives the data cache location information returned by the target cache server. The data server then reads the data, via the solid-state disk server, from the cache location specified by the data cache location information and returns the read data to the client. By modifying the configuration and the data access flow of the data server, a plurality of cache servers can simultaneously provide the cache location query service for the data server, which solves the problem of unbalanced load among the plurality of cache servers.

Description

Data access method, data server and data storage system
Technical Field
The invention relates to the technical field of computer storage, in particular to a data access method, a data server and a data storage system.
Background
Although an existing high-availability storage system allows a plurality of cache servers to be configured in the system, only one server provides service to the outside at any given time, and the other servers act as standby machines. When the load exceeds the processing capacity of a single server, the servers in the standby state cannot provide service externally; only the hardware performance of the single serving server can be improved, horizontal scaling cannot be achieved by adding standby servers, and the load of the plurality of cache servers is therefore unbalanced.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data access method, a data server, and a data storage system, so as to achieve the purpose of balancing loads of multiple cache servers.
To achieve the above object, an aspect of the present invention provides a data access method, including:
a data server receives a data access request sent by a client;
the data server selects a target cache server from a main cache server and secondary cache servers, and sends a cache location query request to the target cache server; the cache location query request comprises the data number of the data accessed by the data access request;
if the data cache position information corresponding to the data number exists in the target cache server, the data server receives the data cache position information returned by the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the data server reads data from the cache position specified by the data cache position information through a solid-state disk server based on the data cache position information;
and the data server returns the read data to the client.
Optionally, the data cache location information includes a solid-state disk number and at least one disk block number;
the step in which the data server reads data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information comprises:
the data server sends a data reading request to the solid-state disk server based on the data cache position information; the data reading request comprises a mapping relation, wherein the mapping relation comprises the correspondence between the data number and data block number and the data cache position information;
if the mapping relation recorded in the solid-state disk server is consistent with the mapping relation in the data reading request, the solid-state disk server reads data from a disk block of the solid-state disk specified by the data caching position information in the mapping relation;
and the data server receives the data read by the solid-state disk server.
Optionally, the method further comprises:
if the solid-state disk server does not read data, the data server receives a reading error message returned by the solid-state disk server;
the data server sends a storage position query request to an index server;
the data server receives data storage position information which is sent by the index server and corresponds to the data number;
the data server reads data from a storage position specified by the data storage position information through a mechanical disk server based on the data storage position information;
and the data server returns the read data to the client.
Optionally, the method further comprises:
the method comprises the steps that a main cache server determines the mapping relation of data to be stored, wherein the mapping relation of the data to be stored comprises the corresponding relation between the data number and the data block number of the data to be stored and the data cache position information of the data to be stored;
the main cache server generates a cache updating instruction based on the mapping relation of the data to be stored, and sends the cache updating instruction to the solid-state disk server;
the solid-state disk server updates the mapping relation of the data stored in the solid-state disk server and updates the stored data of the solid-state disk in the solid-state disk server according to the cache updating instruction;
and the secondary cache servers synchronize based on the mapping relation of the data stored in the solid-state disk server.
Optionally, the method further comprises:
if the target cache server does not have the data cache position information corresponding to the data number, the data server receives query failure information returned by the target cache server;
the data server sends a storage position query request to an index server;
the data server receives data storage position information which is sent by the index server and corresponds to the data number;
the data server reads data from a storage position specified by the data storage position information through a mechanical disk server based on the data storage position information;
and the data server returns the read data to the client.
Another aspect of the present invention provides a data server, including: a receiving unit, a transmitting unit and a reading unit;
the receiving unit is used for receiving a data access request sent by a client;
the sending unit is used for selecting a target cache server from a main cache server and secondary cache servers and sending a cache location query request to the target cache server; the cache location query request comprises the data number of the data accessed by the data access request;
the receiving unit is further configured to receive the data cache location information returned by the target cache server if the data cache location information corresponding to the data number exists in the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the reading unit is used for reading data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information, and sending the read data to the client through the sending unit.
Optionally, the reading unit is specifically configured to send the data reading request to the solid-state disk server based on the data cache location information; if the mapping relation recorded in the solid-state disk server is consistent with the mapping relation in the data reading request, receiving data read by the solid-state disk server from a disk block of the solid-state disk specified by data cache position information in the mapping relation; the data cache position information comprises a solid-state disk number and at least one disk block number, the data reading request comprises a mapping relation, and the mapping relation comprises the data number and the corresponding relation between the data block number and the data cache position information.
Optionally, the receiving unit is further configured to receive a read error message returned by the solid-state disk server if the solid-state disk server does not read data; and/or the receiving unit is further configured to receive query failure information returned by the target cache server if the target cache server does not have the data cache location information corresponding to the data number;
the sending unit is further configured to send a storage location query request to the index server;
the receiving unit is further configured to receive data storage location information corresponding to the data number and sent by the index server;
the reading unit is further configured to read data from a storage location specified by the data storage location information through a mechanical disk server based on the data storage location information, and send the read data to the client through the sending unit.
Another aspect of the present invention provides a data storage system, comprising: the system comprises a data server, a main cache server, at least one secondary cache server and a solid-state disk server;
the data server is used for receiving a data access request sent by a client, selecting a target cache server from the main cache server and the at least one secondary cache server, and sending a cache location query request to the target cache server; the cache location query request comprises the data number of the data accessed by the data access request;
the target cache server, selected from among the main cache server and the at least one secondary cache server, is used for returning the data cache position information to the data server if the data cache position information corresponding to the data number exists in the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the data server is further configured to send the data cache location information to the solid-state disk server;
and the solid-state disk server is used for reading data from the cache position specified by the data cache position information and sending the read data to the client through the data server.
Optionally, the data server is specifically configured to send a data reading request to the solid-state disk server, where the data reading request includes a mapping relationship, and the mapping relationship includes a correspondence between the data number and the data block number and the data cache location information;
the solid-state disk server is further configured to detect a mapping relationship recorded in the solid-state disk server and a mapping relationship in the data reading request; if the mapping relations are not consistent, sending a reading error message to the data server;
the main cache server is further configured to determine a mapping relationship of the data to be stored, where the mapping relationship of the data to be stored includes the correspondence between the data number and data block number of the data to be stored and the data cache position information of the data to be stored; the main cache server is further configured to generate a cache updating instruction based on the mapping relationship of the data to be stored and to send the cache updating instruction to the solid-state disk server;
the solid-state disk server is further configured to update the mapping relationship of the data stored in the solid-state disk server and update the stored data of the solid-state disk in the solid-state disk server according to the cache update instruction;
and the secondary cache server is also used for carrying out synchronization based on the mapping relation of the data stored by the solid-state disk server.
According to the scheme, when a data access request sent by a client is received, the data server selects a target cache server from the main cache server and the secondary cache servers, and sends a cache location query request to the target cache server; the cache location query request comprises the data number of the data accessed by the data access request. If the data cache position information corresponding to the data number exists in the target cache server, the data server receives the data cache position information returned by the target cache server, reads data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information, and returns the read data to the client. Different from the prior art, in which only one cache server provides service to the outside at the same time, the configuration and the data access flow of the data server are modified so that the data server can be configured with parameters (such as IP addresses) of a plurality of cache servers (such as a main cache server and at least one secondary cache server), and the plurality of cache servers can simultaneously provide the cache location query service for the data server, thereby solving the problem of unbalanced load among the plurality of cache servers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a data access method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data access method according to another embodiment of the present invention;
FIG. 3 is a flow chart of a data access method according to another embodiment of the present invention;
fig. 4 is a signaling diagram of the data access process according to the above embodiment of the present invention (in a case where no cache location is found);
fig. 5 is a signaling diagram of the data access process in the above embodiment of the present invention (in a case where the cache location is found and the mapping relations are consistent);
fig. 6 is a signaling diagram of the data access process in the above embodiment of the present invention (in a case where the cache location is found but the mapping relations are inconsistent);
FIG. 7 is a schematic structural diagram of a data server according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of a data storage system according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The prior art generally realizes high availability of cache servers based on the open-source Keepalived. The two cache servers in the cache system each have an actual IP address, assumed here to be address A and address B respectively. Keepalived sets a public external virtual IP address; the virtual IP address points to only one of address A or address B at any time, and only the cache server pointed to by the virtual IP address can provide the cache query service for the data server. When the cache query load exceeds the processing capacity of a single cache server, the cache server in the standby state cannot share part of the cache query service, so the load of the plurality of cache servers is unbalanced.
On the other hand, when the cache server connected with the data server becomes abnormal, Keepalived switches the virtual IP address to point to the IP address of the other cache server. This switchover of the virtual IP address typically takes 3 to 5 seconds. During this period, the query service of the cache servers is interrupted, the data server considers the query to have failed, and all data access requests are sent to the mechanical disk server, which wastes solid-state disk resources and reduces the efficiency of data access, since accessing data via a solid-state disk server is faster than accessing data via a mechanical disk server. This 3-to-5-second interruption of the caching service is not tolerable for data servers running at high load.
The present invention addresses the problems in the existing high-availability scheme that the loads of a plurality of cache servers are unbalanced and that the cache servers are not switched in time. By modifying the configuration and the data access flow of the data server, the scheme of the invention enables the data server to be configured with the parameters of a plurality of cache servers, and the plurality of cache servers can simultaneously provide the cache location query service for the data server, thereby solving the problem of unbalanced load among the plurality of cache servers. If the network is abnormal or the cache server currently providing service fails, so that the data server fails to connect to it, the data server sends the query request to another cache server according to the polling sequence, thereby solving the problem that the cache server is not switched in time.
Please refer to fig. 1, which shows a flowchart of a data access method according to an embodiment of the present invention, including:
s101, the data server receives a data access request sent by a client.
The data server in this step may be configured with parameters, such as IP addresses, of multiple cache servers. The plurality of cache servers comprise a main cache server and at least one secondary cache server; all cache servers are equivalent with respect to the function of querying data cache position information, so the distinction between the main cache server and the secondary cache servers is not reflected in the cache-related configuration items of the data server. For example, the configuration items of the data server may be described as follows:
cache-server = <IP address of main cache server A>, <IP address of secondary cache server B>, <IP address of secondary cache server C>;
or: cache-server = <IP address of secondary cache server B>, <IP address of main cache server A>, <IP address of secondary cache server C>.
The IP addresses of the main cache server and the secondary cache servers are not restricted to any order, and the IP addresses of different cache servers are separated by commas.
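Purely for illustration, such a multi-address configuration item could be parsed as in the following sketch; the configuration key name, the example IP addresses and the helper name are assumptions and are not prescribed by this embodiment:

```python
# Hypothetical sketch: parsing a multi-address "cache-server" configuration item.
# The key name, example addresses and helper name are assumptions for illustration.

def parse_cache_servers(config_line: str) -> list[str]:
    """Parse 'cache-server = ip1, ip2, ip3' into an ordered list of IP addresses."""
    _, _, value = config_line.partition("=")
    # IP addresses of different cache servers are separated by commas;
    # the main and secondary cache servers may appear in any order.
    return [ip.strip() for ip in value.split(",") if ip.strip()]

cache_servers = parse_cache_servers(
    "cache-server = 192.168.1.10, 192.168.1.11, 192.168.1.12"
)
print(cache_servers)  # ['192.168.1.10', '192.168.1.11', '192.168.1.12']
```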
The data access request in this step includes a data identifier of the data accessed by the client, such as a data name, which indicates what data the client intends to access. For example, the data name may be the file name of the file to be accessed.
S102, the data server selects a target cache server from the main cache server and the cache servers, and sends a cache position query request to the target cache server.
In this step, the data server may randomly select a target cache server from the main cache server and the secondary cache servers; it may select the target cache server from the main cache server and the secondary cache servers in turn according to a fixed order; or it may use any other polling mode that achieves balanced access to the main cache server and the secondary cache servers to determine the target cache server.
For example, suppose the main cache server and the secondary cache servers are numbered 1 to 3; the data server may then select cache servers No. 1 to No. 3 as the target cache server in a sequential polling manner: the data server selects cache server No. 1 as the target cache server the first time and sends a cache location query request to cache server No. 1; it selects cache server No. 2 as the target cache server the second time and sends a cache location query request to cache server No. 2; it selects cache server No. 3 as the target cache server the third time and sends a cache location query request to cache server No. 3; it selects cache server No. 1 again as the target cache server the fourth time and sends a cache location query request to cache server No. 1, and so on.
If the network is abnormal or the connection to the current target cache server fails, the data server selects the next cache server as the target cache server according to the polling sequence and sends the cache location query request to that server; selecting the next cache server in this way is faster than sending the data access request to the mechanical disk server, so the problem of the cache server not being switched in time is solved.
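The sequential polling and the fall-through to the next cache server on a failed connection can be sketched as follows; the class and method names, and the use of a ConnectionError to signal a failed connection, are illustrative assumptions only:

```python
# Illustrative sketch of sequential polling over the configured cache servers,
# falling through to the next server when a connection attempt fails.

class CacheServerSelector:
    def __init__(self, cache_servers: list[str]):
        self.cache_servers = cache_servers  # main and secondary servers, any order
        self.next_index = 0

    def select_target(self) -> str:
        """Return the next cache server in polling order (e.g. No. 1, 2, 3, 1, ...)."""
        target = self.cache_servers[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.cache_servers)
        return target

    def query_cache_location(self, send_query, fid, offset, length):
        """Try each cache server in polling order until one can be reached."""
        for _ in range(len(self.cache_servers)):
            target = self.select_target()
            try:
                # send_query is a placeholder for the actual cache location query.
                return send_query(target, fid, offset, length)
            except ConnectionError:
                # Network abnormality or target failure: move on to the next
                # cache server in the polling sequence.
                continue
        return None  # no cache server reachable; caller falls back to the index server
```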
Before sending the cache location query request, the data number (fid), the data offset location (offset), and the data length (length) are determined according to the data identifier in the data access request. In this step, the cache location query request includes the data number, the data offset location, and the data length of the data accessed by the data access request.
S103, judging whether the target cache server has data cache position information corresponding to the data number, if so, executing the steps S104-S105, and if not, executing the steps S106-S109.
The data cache location information is used for indicating the storage location of the data block in the solid state disk, and includes a solid state disk number (did) and at least one disk block number (disk block id). A solid-state disk server comprises a plurality of sockets, and a plurality of solid-state disks can be inserted into the sockets. A solid state disk, which may be considered a device that continuously stores a byte stream, includes a plurality of disk blocks. The data block size of the target cache server is the same as the disk block size of the solid state disk, so that one data block is stored by one disk block in the solid state disk.
In practical applications, the data to be accessed by the data server may be regarded as a byte stream, and the size of one data block may be predetermined, for example, 1M, 2M, 4M, or another size. For data of the same length, the larger a data block is, the smaller the total number of data blocks of the accessed data. Different data correspond to different data block numbers, so in this step the target cache server obtains the data block number (file_block_id) of the accessed data based on the data number, the data offset position and the data length in the data access request.
The data block number and the data cache position information have a corresponding relation, and the corresponding data cache position information is looked up through the data block number. It should be noted that the correspondence between the data block number and the data cache position information may be recorded in the target cache server in the form of a mapping relation; for example, the mapping relation records the correspondence between (fid, file_block_id) and the data cache position information.
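A minimal sketch of the lookup performed by the target cache server is given below, assuming a 1M data block size and an in-memory dictionary holding the mapping relation; these choices are illustrative and not mandated by the embodiment:

```python
# Illustrative sketch of the target cache server's lookup: derive the data block
# numbers covered by (fid, offset, length), then map each (fid, file_block_id)
# to its data cache position information (did, disk_block_id).

DATA_BLOCK_SIZE = 1 * 1024 * 1024  # assumed 1M data blocks; 2M, 4M etc. are also possible

# Assumed in-memory form of the mapping relation:
# (fid, file_block_id) -> (did, disk_block_id)
cache_map: dict[tuple[int, int], tuple[int, int]] = {}

def block_numbers(offset: int, length: int) -> range:
    """Data block numbers covered by the byte range [offset, offset + length)."""
    first = offset // DATA_BLOCK_SIZE
    last = (offset + length - 1) // DATA_BLOCK_SIZE
    return range(first, last + 1)

def lookup_cache_location(fid: int, offset: int, length: int):
    """Return the mapping entries for the request, or None on a cache miss."""
    entries = []
    for file_block_id in block_numbers(offset, length):
        location = cache_map.get((fid, file_block_id))
        if location is None:
            return None  # query failure: the data is not cached
        entries.append(((fid, file_block_id), location))
    return entries
```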
And S104, the data server receives the data cache position information returned by the target cache server.
And S105, the data server reads data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information and returns the read data to the client.
In this step, the data server sends a data reading request to the solid-state disk server based on the data cache location information, and the solid-state disk server reads data at the corresponding location according to the data cache location information in the data reading request; for example, it reads the data in the disk block of the corresponding solid-state disk based on the solid-state disk number (did) and at least one disk block number (disk_block_id) in the data cache location information, and sends the read data back to the data server. The data server receives the data read by the solid-state disk server and returns the read data to the client.
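Since a solid-state disk is treated above as a device that continuously stores a byte stream, and one data block occupies exactly one disk block, the read on the solid-state disk server side can be pictured as a simple seek-and-read; the device path scheme and block size below are assumptions made only for illustration:

```python
# Illustrative sketch of the solid-state disk server reading one disk block.
# The solid-state disk is treated as a continuous byte stream, so disk_block_id
# maps directly to a byte offset. The device path scheme is a pure assumption.

DISK_BLOCK_SIZE = 1 * 1024 * 1024  # same size as the cache server's data block

def read_disk_block(did: int, disk_block_id: int) -> bytes:
    """Read the disk block identified by (did, disk_block_id)."""
    device_path = f"/dev/ssd{did}"  # hypothetical naming of solid-state disk 'did'
    with open(device_path, "rb") as device:
        device.seek(disk_block_id * DISK_BLOCK_SIZE)
        return device.read(DISK_BLOCK_SIZE)
```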
And S106, the data server receives the query failure information returned by the target cache server.
S107, the data server sends a storage position query request to the index server.
The storage location query request includes the data number, the data offset location, and the data length of the data accessed by the data access request. The index server obtains the index data block number (file_block_id2) of the accessed data according to the data number, the data offset position and the data length; the sizes of the index data block and the data block in the cache server are not necessarily the same.
And S108, the data server receives the data storage position information corresponding to the data number sent by the index server.
The index server records the storage locations of all data blocks on the mechanical disks (i.e., the data storage position information). The data storage position information corresponding to the index data block number is obtained according to the index data block number. The data storage position information includes a mechanical disk number (did2), a disk block number on the mechanical disk (disk_block_id2), and a mechanical disk service instance location (ip, port). The index data block is the same size as the mechanical disk block. The mechanical disk may also be regarded as a device that continuously stores a byte stream.
And S109, the data server reads the data through the mechanical disk server based on the data storage position information and returns the data to the client.
In this step, the data server sends a data reading request to the mechanical disk server, and the mechanical disk server reads the data at the corresponding position according to the data storage position information in the data reading request and sends the data to the data server. The data server receives the data read by the mechanical disk server and returns the read data to the client.
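The fallback path of steps S106 to S109 can be summarized by the following sketch; the server interfaces and field names are placeholders rather than a definitive implementation:

```python
# Illustrative sketch of the fallback path (steps S106-S109): when the cache
# lookup fails, query the index server for the data storage position
# information and read the data through the mechanical disk server.
# All object interfaces and field names are placeholders.

def read_via_mechanical_disk(index_server, disk_server, fid, offset, length) -> bytes:
    # The index server derives the index data block number (file_block_id2)
    # from (fid, offset, length) and returns the data storage position information.
    location = index_server.query_storage_location(fid, offset, length)
    did2 = location["did2"]                      # mechanical disk number
    disk_block_id2 = location["disk_block_id2"]  # disk block number on that disk
    ip, port = location["ip"], location["port"]  # mechanical disk service instance

    # The mechanical disk server reads the data at the specified position and
    # returns it to the data server, which then returns it to the client.
    return disk_server.read(ip, port, did2, disk_block_id2)
```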
According to the scheme of this embodiment, the configuration and the data access flow of the data server are modified so that the data server can be configured with the IP addresses of a plurality of cache servers, and the plurality of cache servers can simultaneously provide the cache location query service for the data server, thereby solving the problem of unbalanced load among the plurality of cache servers. If the data server fails to connect to the cache server because the network is abnormal or the cache server currently providing service fails, the data server can switch to another cache server as the target cache server and send the query request to it; compared with the existing scheme, in which requests are sent to the mechanical disk server during the switchover, this is faster, so the problem that the cache server is not switched in time is solved.
In practical applications, the cached data is updated frequently according to how the data in the storage system is used, and the mapping relations of the plurality of cache servers need to be synchronized. The existing synchronization method performs synchronization by converting the data to be cached into a snapshot. However, converting the data to be cached into a snapshot, transmitting the snapshot between the cache servers, and reconstructing the data to be cached from the snapshot are all time-consuming.
To solve the problem of time-consuming synchronization, please refer to fig. 2, which shows a flowchart of a data access method according to another embodiment of the present invention. Compared with fig. 1, steps S110-S113 are added, and the caching process of the main and secondary cache servers includes:
s110, the main cache server determines the mapping relation of the data to be stored.
The mapping relationship of the data to be stored includes the correspondence between the data number and data block number of the data to be stored and the data cache position information of the data to be stored, that is, the correspondence between (fid, file_block_id) of the data to be stored and (did, disk_block_id) of the data to be stored. The data to be stored can be determined according to statistically identified hot data blocks, where hot data blocks are, for example but not limited to, data blocks that are frequently accessed. For example, the main cache server may periodically count the access requests of the data server and treat the data blocks whose access frequency exceeds a preset frequency as hot data blocks, so that the data in the hot data blocks becomes the data to be stored.
And S111, the main cache server generates a cache updating instruction based on the mapping relation of the data to be stored, and sends the cache updating instruction to the solid-state disk server.
The cache updating instruction includes, for example, an instruction to add to the cache, delete from the cache, or modify the cache.
And S112, the solid-state disk server updates the mapping relation of the data stored in the solid-state disk server and updates the stored data of the solid-state disk in the solid-state disk server according to the cache updating instruction.
Specifically, the solid-state disk server may add a new mapping relationship, delete a stored mapping relationship, or modify a stored mapping relationship according to the cache update instruction. Meanwhile, the solid state disk in the solid state disk server adds, deletes or modifies the stored data based on the change of the mapping relation.
And S113, synchronizing the cache servers based on the mapping relation of the data stored in the solid-state disk server.
In this step, the secondary cache server and the main cache server are independent of each other. Because the solid-state disk is a natural snapshot, having the secondary cache server synchronize the mapping relation directly from the solid-state disk server is more efficient and time-saving than the existing synchronization method. For example, the secondary cache server may periodically synchronize the mapping relation of the stored data from the solid-state disk server.
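The cache update and synchronization flow of steps S110 to S113 might look like the following sketch; the instruction format and method names are assumptions for illustration only:

```python
# Illustrative sketch of the cache update and synchronization flow (S110-S113).
# The instruction format and method names are assumptions for illustration.

def build_cache_update(fid, file_block_id, did, disk_block_id, action="add"):
    """Main cache server: build a cache update instruction for one hot data block."""
    return {
        "action": action,                  # "add", "delete" or "modify"
        "key": (fid, file_block_id),       # data number and data block number
        "location": (did, disk_block_id),  # data cache position information
    }

class SolidStateDiskServer:
    def __init__(self):
        # Recorded mapping relation: (fid, file_block_id) -> (did, disk_block_id)
        self.mapping = {}

    def apply_cache_update(self, instruction, block_data=None):
        """Update the recorded mapping relation and the stored data accordingly."""
        key, location = instruction["key"], instruction["location"]
        if instruction["action"] == "delete":
            self.mapping.pop(key, None)
        else:  # "add" or "modify"
            self.mapping[key] = location
            # ... here block_data would be written to the disk block at 'location' ...

    def export_mapping(self):
        """The solid-state disk acts as a natural snapshot of the mapping relation."""
        return dict(self.mapping)

# A secondary cache server periodically synchronizes the mapping relation, e.g.:
# secondary_cache_map = ssd_server.export_mapping()
```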
It should be noted that the caching process of the master-slave cache servers in steps S110 to S113 is not necessarily performed after step S109, but fig. 2 only illustrates one embodiment of the caching process, and the timing of performing steps S110 to S113 is not specifically limited herein.
According to the scheme of this embodiment, the main cache server and the secondary cache server are independent, and the secondary cache server synchronizes based on the mapping relation of the data stored in the solid-state disk server, so the problem of time-consuming synchronization between the main and secondary cache servers is solved and caching efficiency is improved.
The storage mode in which the main cache server instructs the solid-state disk server to update the mapping relation and the secondary cache server then synchronizes the updated mapping relation has weak consistency, because the mapping relation of the whole cache system is actually updated once the main cache server issues the cache updating instruction to the solid-state disk server. However, since the secondary cache server and the main cache server are independent, they do not communicate in real time. This means that, during the short period before the secondary cache server synchronizes the mapping relation from the solid-state disk server, the mapping relation on the secondary cache server is the old, un-updated mapping relation, which differs from the updated one; that is, the mapping relations on the main cache server and the secondary cache server are inconsistent.
To solve the problem of weak consistency of the master and slave cache servers, the present invention provides a data access method in another embodiment, please refer to fig. 3, which shows a flowchart of the method, including:
s301, the data server receives a data access request sent by the client.
S302, the data server selects a target cache server from the main cache server and the secondary cache servers, and sends a cache location query request to the target cache server.
S303, determining whether the target cache server has the data cache location information corresponding to the data number, if yes, performing steps S304-S307, and if no, performing steps S308-S311.
The steps S301 to S303 are similar to the steps S101 to S103, and for detailed description, refer to the description of the steps S101 to S103 in the above embodiment, which is not repeated herein.
S304, the data server receives the data cache position information returned by the target cache server.
The data cache location information may be returned by the target cache server in the form of a mapping relation, where the mapping relation is the mapping relation of the data accessed by the data access request and includes the correspondence between the data number and data block number and the data cache position information, that is, the correspondence between (fid, file_block_id) and (did, disk_block_id), so that the solid-state disk server knows the storage position of the data to be read based on the data cache position information in the mapping relation. For an explanation of the data cache position information and terms such as the solid-state disk server, please refer to step S104, which is not repeated here.
S305, the data server sends a data reading request to the solid-state disk server.
Because the mapping relation enables the solid-state disk server to know the storage position of the data to be read, the data reading request may be sent to the solid-state disk server based on the mapping relation, where the data reading request comprises the mapping relation returned by the target cache server.
S306, the data server reads data from the cache position specified by the data cache position information through the solid-state disk server and returns the read data to the client.
In this step, the mapping relation recorded in the solid-state disk server is consistent with the mapping relation in the data reading request; the target cache server may be the main cache server or a secondary cache server that has synchronized in time, so the returned mapping relation is the latest mapping relation. Because the mapping relation on the solid-state disk server is consistent with the mapping relation in the main cache server, the data cache position information in the mapping relation is accurate, and the data at the cache position is indeed the data to be accessed by the client. For an explanation of the data reading process of the solid-state disk server, please refer to step S105, which is not repeated here.
S307, if the solid-state disk server does not read the data, the data server receives a reading error message returned by the solid-state disk server.
In this step, one reason the solid-state disk server fails to read the data may be that the mapping relation recorded in the solid-state disk server is inconsistent with the mapping relation in the data reading request. This indicates that the target cache server may be a secondary cache server that has not yet synchronized the latest mapping relation, so the data cache position information in the returned mapping relation is inaccurate and the data at that cache position is not the data the client wants to access; the solid-state disk server therefore returns a reading error message. The failure to read data may also be caused by an abnormality of the solid-state disk server and/or the solid-state disk.
After receiving the read error message, the data server executes steps S309-S311.
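The mapping-relation check of steps S306 and S307 on the solid-state disk server side can be pictured as follows; the layout of the read request is an illustrative assumption, and read_disk_block refers to the hypothetical helper sketched earlier:

```python
# Illustrative sketch of the mapping-relation check of steps S306/S307: the
# solid-state disk server serves the read only when the mapping relation carried
# in the read request matches the mapping relation it has recorded.

def handle_read_request(recorded_mapping: dict, request: dict):
    """Return the cached bytes, or None to signal a read error message."""
    key = request["key"]            # (fid, file_block_id)
    location = request["location"]  # (did, disk_block_id) from the target cache server

    if recorded_mapping.get(key) != location:
        # The target cache server has not yet synchronized the latest mapping
        # relation, so the cache position information cannot be trusted.
        return None  # the data server then falls back to the index server

    did, disk_block_id = location
    return read_disk_block(did, disk_block_id)  # hypothetical helper sketched earlier
```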
S308, the data server receives the query failure information returned by the target cache server.
S309, the data server sends a storage location query request to the index server.
And S310, the data server receives the data storage position information corresponding to the data number sent by the index server.
And S311, the data server reads the data through the mechanical disk server based on the data storage position information and returns the data to the client.
Steps S308 to S311 are similar to steps S106 to S109, and for detailed description of the steps, reference is made to the description of steps S106 to S109 in the above embodiment, which is not repeated herein.
According to the scheme of the embodiment, the query flow is modified, namely the mapping relation queried by the target cache server is carried in the data reading request sent by the data server to the solid-state disk server, and the solid-state disk server checks the mapping relation, so that the accuracy of the queried data is ensured, and the problem of weak consistency of the master cache server and the slave cache server is solved.
The following describes the procedure of data access in the above embodiment with reference to a signaling diagram, and the procedure is as follows:
referring to fig. 4, a signaling diagram is shown when an uncached client in a target cache server wants to access data.
The client sends a data access request to the data server; the data access request contains the name of the data to be accessed by the client.
The data server receives the data access request, and determines a data number fid, a data offset position offset and a data length according to the data name; the data server sends a cache location query request to the target cache server, wherein the request includes (fid, offset, length).
And the target cache server obtains (fid, file_block_id) according to (fid, offset, length) in the cache position query request, finds no data cache position information (did, disk_block_id) corresponding to (fid, file_block_id), and returns a query failure message to the data server.
After receiving the query failure message, the data server sends a storage location query request to the index server, where the request includes (fid, offset, length).
The index server obtains (fid, file_block_id2) according to (fid, offset, length) in the storage location query request, finds the corresponding data storage position information (did2, disk_block_id2, ip_port) according to (fid, file_block_id2), and returns the data storage position information to the data server.
The data server sends a data read request to the mechanical disk server, wherein the data read request carries (did2, disk_block_id2, ip_port).
And the mechanical disk server reads the data at the corresponding position according to (did2, disk_block_id2, ip_port) and returns the data to the data server.
And the data server receives the read data and returns the data to the client.
Referring to fig. 5, it shows a signaling diagram for the case where the data the client wants to access is cached in the target cache server and the mapping relation recorded by the solid-state disk server is consistent with the mapping relation in the data reading request.
The client sends a data access request to the data server; the data access request contains the name of the data to be accessed by the client.
The data server receives the data access request, and determines a data number fid, a data offset position offset and a data length according to the data name; the data server sends a cache location query request to the target cache server, wherein the request includes (fid, offset, length).
And the target cache server obtains (fid, file_block_id) according to (fid, offset, length) in the cache position query request, obtains the data cache position information (did, disk_block_id) corresponding to (fid, file_block_id), and sends the data cache position information (did, disk_block_id) to the data server in the form of a mapping relation. The mapping relation is the correspondence between (fid, file_block_id) and (did, disk_block_id).
The data server sends a data reading request to the solid-state disk server, wherein the data reading request carries the mapping relation (fid, file_block_id) → (did, disk_block_id).
And the solid-state disk server checks that the mapping relation in the data reading request is consistent with the recorded mapping relation, reads the data at the corresponding position according to (did, disk_block_id) in the mapping relation, and returns the data to the data server.
And the data server receives the read data and returns the data to the client.
Referring to fig. 6, it shows a signaling diagram for the case where the data the client wants to access is cached in the target cache server but the mapping relation recorded by the solid-state disk server is inconsistent with the mapping relation in the data reading request.
The client sends a data access request to the data server; the data access request contains the name of the data to be accessed by the client.
The data server receives the data access request, and determines a data number fid, a data offset position offset and a data length according to the data name; the data server sends a cache location query request to the target cache server, wherein the request includes (fid, offset, length).
And the target cache server obtains (fid, file_block_id) according to (fid, offset, length) in the cache position query request, obtains the data cache position information (did, disk_block_id) corresponding to (fid, file_block_id), and returns the mapping relation to the data server. The mapping relation is the correspondence between (fid, file_block_id) and (did, disk_block_id).
The data server sends a data reading request to the solid-state disk server, wherein the data reading request carries the mapping relation (fid, file_block_id) → (did, disk_block_id).
And the solid-state disk server checks that the mapping relation in the data reading request is inconsistent with the recorded mapping relation, and returns a reading failure message to the data server.
After receiving the read failure message, the data server sends a storage location query request to the index server, where the request includes (fid, offset, length).
The index server obtains (fid, file_block_id2) according to (fid, offset, length) in the storage location query request, finds the corresponding data storage position information (did2, disk_block_id2, ip_port) according to (fid, file_block_id2), and returns the data storage position information to the data server.
The data server sends a data read request to the mechanical disk server, wherein the data read request carries (did2, disk_block_id2, ip_port).
And the mechanical disk server reads the data at the corresponding position according to (did2, disk_block_id2, ip_port) and returns the data to the data server.
And the data server receives the read data and returns the data to the client.
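Taken together, the three signaling flows above amount to the following overall read path on the data server; this condensed sketch reuses the same hypothetical server interfaces as the earlier snippets and is not a definitive implementation:

```python
# Condensed sketch of the data server's overall read path, combining the three
# signaling flows above. All server interfaces are hypothetical placeholders.

def access_data(fid, offset, length, selector, ssd_server, index_server, disk_server):
    """Cache query, solid-state disk read, and fallback to the mechanical disk."""
    mapping = selector.query_cache_location(fid, offset, length)  # poll the cache servers
    if mapping is not None:
        data = ssd_server.read(mapping)  # the read request carries the mapping relation
        if data is not None:             # mapping relations consistent: cache hit (fig. 5)
            return data
        # read error message: mapping relations inconsistent or disk abnormal (fig. 6)

    # Query failure or read error: fall back to the index server and the
    # mechanical disk server (fig. 4 and fig. 6).
    location = index_server.query_storage_location(fid, offset, length)
    return disk_server.read(location)
```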
Referring to fig. 7, a schematic structural diagram of a data server according to another embodiment of the present invention is shown. The data server of this embodiment may be configured with parameters, such as IP addresses, of multiple cache servers. The plurality of cache servers comprise a main cache server and at least one secondary cache server; all cache servers are equivalent with respect to the function of querying data cache position information, so the distinction between the main cache server and the secondary cache servers is not reflected in the cache-related configuration items of the data server. The data server includes: a receiving unit 701, a sending unit 702, and a reading unit 703. The functions of the respective units are as follows:
a receiving unit 701, configured to receive a data access request sent by a client. The data access request includes data identification of the data accessed by the client, such as a data name, which indicates what the data to be accessed by the client is. For example, the data name may be the file name of the file to be accessed.
A sending unit 702, configured to select a target cache server from a master cache server and a slave cache server, and send a cache location query request to the target cache server; the cache position query request comprises a data number of data accessed by the data access request. For an explanation of the selection manner of the target cache server, refer to step S102 in the above embodiment, which is not described herein again.
The receiving unit 701 is further configured to receive data cache location information returned by the target cache server if the data cache location information corresponding to the data number exists in the target cache server; and the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number.
In one implementation, the data cache location information received by the receiving unit 701 is returned by the target cache server in the form of a mapping relationship, where the mapping relationship includes a data number and a corresponding relationship between a data block number and the data cache location information. For a conceptual explanation of the data block numbering and mapping relationship, refer to steps S103 and S304 of the above-described embodiment. Please refer to steps S104 and S304 of the above embodiments for the operation process description of different implementations of the receiving unit 701.
And a reading unit 703, configured to read, by the solid-state disk server, data from the cache location specified by the data cache location information based on the data cache location information.
An implementation manner of the reading unit 703 is specifically configured to send a data reading request to the solid-state disk server based on the data cache location information; and receiving data read by the solid-state disk server from the disk blocks of the solid-state disk specified by the data caching position information. The data reading request comprises data cache position information, and the data cache position information is used for indicating the storage position of the data block in the solid state disk and comprises a solid state disk number and at least one disk block number.
Another implementation manner of the reading unit 703 is specifically configured to send a data reading request to the solid-state disk server based on the data cache position information, where the data reading request includes a mapping relationship, and the reading unit 703 sends the data cache position information to the solid-state disk server in the form of the mapping relationship; if the mapping relationship recorded in the solid-state disk server is consistent with the mapping relationship in the data reading request, the reading unit 703 receives data read by the solid-state disk server from the disk block of the solid-state disk specified by the data cache position information in the mapping relationship.
The receiving unit 701 is further configured to receive a read error message returned by the solid-state disk server if the solid-state disk server does not read data; and/or it is further configured to receive query failure information returned by the target cache server if the target cache server does not have data cache location information corresponding to the data number. For a description of this function, refer to steps S307 and S308 of the above embodiment.
The sending unit 702 is further configured to send a storage location query request to the index server.
The receiving unit 701 is further configured to receive data storage location information corresponding to the data number sent by the index server.
The reading unit 703 is further configured to read data from the storage location specified by the data storage location information through the mechanical disk server based on the data storage location information.
For functional descriptions of the sending unit 702, the receiving unit 701, and the reading unit 703 when reading data through the mechanical disk server, please refer to steps S106 to S109 in the above embodiment, which are not repeated here.
The sending unit 702 is further configured to send the read data to the client. The read data may be the cached data read by the solid-state disk server or the stored data read by the mechanical disk server.
The data server of this embodiment, by modifying its configuration and data access flow, can be configured with the IP addresses of a plurality of cache servers, and the plurality of cache servers can provide the cache location query service for the data server at the same time, thereby solving the problem of unbalanced load among the plurality of cache servers. If the data server fails to connect to the cache server because the network is abnormal or the cache server currently providing service fails, the data server can switch to another cache server as the target cache server and send the query request to it; compared with the existing scheme, in which requests are sent to the mechanical disk server during the switchover, this is faster, so the problem that the cache server is not switched in time is solved.
Fig. 8 is a schematic structural diagram of a data storage system according to another embodiment of the present invention, which includes a data server, a primary cache server, at least one secondary cache server, and a solid-state disk server.
The data server is used for receiving a data access request sent by the client, selecting a target cache server from the main cache server and the at least one secondary cache server, and sending a cache location query request to the target cache server; the cache location query request comprises the data number of the data accessed by the data access request.
The target cache server, selected from among the main cache server and the at least one secondary cache server, is used for returning the data cache position information to the data server if the data cache position information corresponding to the data number exists in the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number. For the functional description, refer to steps S103, S104 and S106 of the above embodiments, which are not repeated here.
And the data server is also used for sending data cache position information to the solid-state disk server. For example, the method is specifically used for sending a data reading request to a solid-state disk server, where the data reading request includes data cache location information; or the data reading request comprises a mapping relation, and the mapping relation comprises a data number and a corresponding relation between a data block number and data cache position information.
The solid-state disk server is configured to read data from the cache location specified by the data cache location information, and send the read data to the client through the data server, and this functional description refers to step S112 in the foregoing embodiment and is not described herein again.
When the data reading request sent by the data server includes a mapping relationship, the solid-state disk server is further configured to compare the mapping relationship recorded in the solid-state disk server with the mapping relationship in the data reading request; if the two are consistent, it reads data from the cache location specified by the data cache location information and sends the read data to the client through the data server; if they are inconsistent, it sends a read error message to the data server. For this functional description, please refer to steps S306 and S307 of the above embodiments, which are not described herein again.
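A minimal sketch of this consistency check on the solid-state disk server side is given below. The MappingRelation record, the in-memory recorded_mappings table, and the block-read helper are assumptions made for the example; the patent only requires that the mapping carried in the data reading request match the mapping recorded on the solid-state disk server before the cached blocks are read.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MappingRelation:
    """Correspondence among data number, data block number and cache location."""
    data_number: str
    data_block_number: int
    ssd_number: int
    disk_block_numbers: tuple


def handle_read_request(request_mapping, recorded_mappings, read_blocks):
    """Serve a data reading request that carries a mapping relationship.

    recorded_mappings: dict mapping data_number -> MappingRelation recorded on this server.
    read_blocks: callable that reads the given disk blocks from the given solid-state disk.
    """
    recorded = recorded_mappings.get(request_mapping.data_number)
    if recorded != request_mapping:
        # Inconsistent mapping (e.g. the cache was updated after the query):
        # return a read error so the data server can fall back to the mechanical disk.
        return {"status": "read_error"}
    # Consistent mapping: read from the cache location it specifies.
    data = read_blocks(recorded.ssd_number, recorded.disk_block_numbers)
    return {"status": "ok", "data": data}
```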
In addition, the primary cache server is further configured to determine the mapping relationship of the data to be stored, where this mapping relationship includes the correspondence among the data number, the data block number, and the data cache location information of the data to be stored. For this functional description, refer to step S110 in the foregoing embodiment, which is not repeated herein. The primary cache server is further configured to generate a cache update instruction based on the mapping relationship of the data to be stored and to send the cache update instruction to the solid-state disk server. For this functional description, refer to step S111 in the above embodiment, which is not repeated herein.
The solid-state disk server is further configured to, according to the cache update instruction, update the mapping relationship of the data stored in the solid-state disk server and update the data stored on the solid-state disk in the solid-state disk server. For this functional description, refer to step S112 in the above embodiment, which is not repeated herein.
The secondary cache server is further configured to synchronize based on the mapping relationship of the data stored by the solid-state disk server. For this functional description, refer to step S113 in the above embodiment, which is not repeated herein.
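The write and update side described in the last three paragraphs can be pictured with the sketch below: the primary cache server builds the mapping relationship and issues a cache update instruction, the solid-state disk server applies it, and the secondary cache server later synchronizes from the mappings recorded on the solid-state disk server rather than from the primary cache server. The dictionaries and method names are illustrative assumptions, not the actual protocol.

```python
def build_cache_update_instruction(data_number, data_block_number, ssd_number, disk_blocks, payload):
    """Primary cache server: determine the mapping relationship for data to be stored."""
    return {
        "mapping": {
            "data_number": data_number,
            "data_block_number": data_block_number,
            "cache_location": {"ssd_number": ssd_number, "disk_blocks": tuple(disk_blocks)},
        },
        "payload": payload,
    }


class SolidStateDiskServer:
    def __init__(self):
        self.recorded_mappings = {}   # data_number -> mapping relationship
        self.blocks = {}              # (ssd_number, disk_block) -> bytes

    def apply_cache_update(self, instruction):
        """Update both the recorded mapping relationship and the cached data blocks."""
        mapping = instruction["mapping"]
        self.recorded_mappings[mapping["data_number"]] = mapping
        location = mapping["cache_location"]
        for block in location["disk_blocks"]:
            self.blocks[(location["ssd_number"], block)] = instruction["payload"]


class SecondaryCacheServer:
    def __init__(self):
        self.mappings = {}

    def synchronize(self, ssd_server):
        """Synchronize from the mappings recorded on the solid-state disk server,
        instead of synchronizing directly with the primary cache server."""
        self.mappings.update(ssd_server.recorded_mappings)
```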
According to the scheme of this embodiment, the primary cache server and the secondary cache server are independent of each other, and the secondary cache server synchronizes based on the mapping relationship of the data stored by the solid-state disk server, which avoids the time-consuming synchronization between the primary and secondary cache servers and improves cache efficiency. Meanwhile, the scheme of this embodiment modifies the query flow: the mapping relationship queried from the target cache server is carried in the data reading request that the data server sends to the solid-state disk server, and the solid-state disk server checks this mapping relationship, which ensures the accuracy of the queried data and solves the problem of weak consistency between the primary and secondary cache servers.
In addition, the data storage system further comprises an index server and a mechanical disk server.
The index server is configured to receive the storage location query request sent by the data server and to return data storage location information according to the storage location query request. The storage location query request may be sent when the solid-state disk server does not read the data, for example when the data server receives a read error message. The mechanical disk server is configured to receive the data reading request sent by the data server, read the data according to the data reading request, and send the data to the data server, so that when the solid-state disk server does not read the data, the data can still be provided to the data server through the index server and the mechanical disk server.
Please refer to steps S108 and S109 in the above embodiments for descriptions of the operation processes of the index server and the mechanical disk server, which are not described herein again.
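The following sketch illustrates the index server and mechanical disk server side of this fallback, complementing the data-server-side flow sketched earlier. The index table layout, the disk/path/offset fields, and the read helper are purely illustrative assumptions, not the format used by the patent.

```python
import os


class IndexServer:
    """Maps a data number to data storage location information on the mechanical disks."""

    def __init__(self, index_table):
        # index_table: dict data_number -> {"disk": str, "path": str, "offset": int, "length": int}
        self.index_table = index_table

    def query_storage_location(self, data_number):
        return self.index_table.get(data_number)


class MechanicalDiskServer:
    """Reads stored data from the storage location supplied by the index server."""

    def read(self, storage_location):
        path = os.path.join(storage_location["disk"], storage_location["path"])
        with open(path, "rb") as f:
            f.seek(storage_location["offset"])
            return f.read(storage_location["length"])
```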
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of data access, comprising:
a data server receives a data access request sent by a client;
the data server selects a target cache server from a main cache server and at least one secondary cache server, and sends a cache position query request to the target cache server; the cache position query request comprises a data number of data accessed by the data access request;
if the data cache position information corresponding to the data number exists in the target cache server, the data server receives the data cache position information returned by the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the data server reads data from the cache position specified by the data cache position information through a solid-state disk server based on the data cache position information;
and the data server returns the read data to the client.
2. The method of claim 1, wherein the data cache location information comprises a solid state disk number and at least one disk block number;
wherein the data server reading data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information comprises:
the data server sends the data reading request to the solid-state disk server based on the data caching position information; the data reading request comprises a mapping relation, wherein the mapping relation comprises a corresponding relation between the data number and the data block number and the data cache position information;
if the mapping relation recorded in the solid-state disk server is consistent with the mapping relation in the data reading request, the solid-state disk server reads data from a disk block of the solid-state disk specified by the data caching position information in the mapping relation;
and the data server receives the data read by the solid-state disk server.
3. The method of claim 1, further comprising:
if the solid-state disk server does not read data, the data server receives a reading error message returned by the solid-state disk server;
the data server sends a storage position query request to an index server;
the data server receives data storage position information which is sent by the index server and corresponds to the data number;
the data server reads data from a storage position specified by the data storage position information through a mechanical disk server based on the data storage position information;
and the data server returns the read data to the client.
4. The method according to any one of claims 1-3, further comprising:
the method comprises the steps that a main cache server determines the mapping relation of data to be stored, wherein the mapping relation of the data to be stored comprises the corresponding relation between the data number and the data block number of the data to be stored and the data cache position information of the data to be stored;
the main cache server generates a cache updating instruction based on the mapping relation of the data to be stored, and sends the cache updating instruction to the solid-state disk server;
the solid-state disk server updates the mapping relation of the data stored in the solid-state disk server and updates the stored data of the solid-state disk in the solid-state disk server according to the cache updating instruction;
and the secondary cache server performs synchronization based on the mapping relation of the data stored by the solid-state disk server.
5. The method of claim 1, further comprising:
if the target cache server does not have the data cache position information corresponding to the data number, the data server receives query failure information returned by the target cache server;
the data server sends a storage position query request to an index server;
the data server receives data storage position information which is sent by the index server and corresponds to the data number;
the data server reads data from a storage position specified by the data storage position information through a mechanical disk server based on the data storage position information;
and the data server returns the read data to the client.
6. A data server, comprising: a receiving unit, a transmitting unit and a reading unit;
the receiving unit is used for receiving a data access request sent by a client;
the sending unit is used for selecting a target cache server from a main cache server and at least one secondary cache server, and sending a cache position query request to the target cache server; the cache position query request comprises a data number of data accessed by the data access request;
the receiving unit is further configured to receive the data cache location information returned by the target cache server if the data cache location information corresponding to the data number exists in the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the reading unit is used for reading data from the cache position specified by the data cache position information through the solid-state disk server based on the data cache position information, and sending the read data to the client through the sending unit.
7. The data server according to claim 6, wherein the reading unit is specifically configured to send the data reading request to the solid-state disk server based on the data cache location information; if the mapping relation recorded in the solid-state disk server is consistent with the mapping relation in the data reading request, receiving data read by the solid-state disk server from a disk block of the solid-state disk specified by data cache position information in the mapping relation; the data cache position information comprises a solid-state disk number and at least one disk block number, the data reading request comprises a mapping relation, and the mapping relation comprises the data number and the corresponding relation between the data block number and the data cache position information.
8. The data server of claim 6 or 7,
the receiving unit is further configured to receive a read error message returned by the solid-state disk server if the solid-state disk server does not read data; and/or is further configured to receive query failure information returned by the target cache server if the target cache server does not have the data cache position information corresponding to the data number;
the sending unit is further configured to send a storage location query request to the index server;
the receiving unit is further configured to receive data storage location information corresponding to the data number and sent by the index server;
the reading unit is further configured to read data from a storage location specified by the data storage location information through a mechanical disk server based on the data storage location information, and send the read data to the client through the sending unit.
9. A data storage system, comprising: the system comprises a data server, a main cache server, at least one secondary cache server and a solid-state disk server;
the data server is used for receiving a data access request sent by a client, selecting a target cache server from the main cache server and the at least one secondary cache server, and sending a cache position query request to the target cache server; the cache position query request comprises a data number of data accessed by the data access request;
the target cache server, among the main cache server and the at least one secondary cache server, is used for returning the data cache position information to the data server if the data cache position information corresponding to the data number exists in the target cache server; the data cache position information is obtained by the target cache server based on the data block number corresponding to the data number;
the data server is further configured to send the data cache location information to the solid-state disk server;
and the solid-state disk server is used for reading data from the cache position specified by the data cache position information and sending the read data to the client through the data server.
10. The data storage system of claim 9,
the data server is specifically configured to send a data reading request to the solid-state disk server, where the data reading request includes a mapping relationship, and the mapping relationship includes a correspondence between the data number and the data block number and the data cache location information;
the solid-state disk server is further configured to compare the mapping relation recorded in the solid-state disk server with the mapping relation in the data reading request, and if the two mapping relations are inconsistent, send a reading error message to the data server;
the main cache server is further configured to determine a mapping relation of data to be stored, wherein the mapping relation of the data to be stored comprises the data number and the corresponding relation between the data block number of the data to be stored and the data cache position information of the data to be stored; and is further configured to generate a cache updating instruction based on the mapping relation of the data to be stored and send the cache updating instruction to the solid-state disk server;
the solid-state disk server is further configured to update the mapping relationship of the data stored in the solid-state disk server and update the stored data of the solid-state disk in the solid-state disk server according to the cache update instruction;
and the secondary cache server is also used for carrying out synchronization based on the mapping relation of the data stored by the solid-state disk server.
CN201911008409.7A 2019-10-22 2019-10-22 Data access method, data server and data storage system Pending CN112698935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911008409.7A CN112698935A (en) 2019-10-22 2019-10-22 Data access method, data server and data storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911008409.7A CN112698935A (en) 2019-10-22 2019-10-22 Data access method, data server and data storage system

Publications (1)

Publication Number Publication Date
CN112698935A true CN112698935A (en) 2021-04-23

Family

ID=75504927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911008409.7A Pending CN112698935A (en) 2019-10-22 2019-10-22 Data access method, data server and data storage system

Country Status (1)

Country Link
CN (1) CN112698935A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination