CN112235405A - Distributed storage system and data delivery method - Google Patents


Info

Publication number
CN112235405A
CN112235405A CN202011101781.5A CN202011101781A CN112235405A CN 112235405 A CN112235405 A CN 112235405A CN 202011101781 A CN202011101781 A CN 202011101781A CN 112235405 A CN112235405 A CN 112235405A
Authority
CN
China
Prior art keywords
redis
server
instance
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011101781.5A
Other languages
Chinese (zh)
Inventor
罗绍华
史业政
付光增
郑文琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202011101781.5A
Publication of CN112235405A
Legal status: Pending

Classifications

    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06Q30/0241 Advertisements
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed storage system, comprising a Redis proxy server, a Redis server, and a server meeting a preset deployment condition, wherein the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and the Redis instances are used for storing target data; the Redis proxy server is used for sending the target data in the Redis instances to a publish-subscribe message system; the instances in the server are used for consuming the target data in the publish-subscribe message system, so that the Redis server and the server store the same target data; the Redis proxy server is further used for receiving a data reading request and reading target data in a Redis instance or target data in an instance based on an identifier in the data reading request. The invention also discloses a data delivery method. By arranging the Redis server and the server in an active-standby manner, compared with the existing approach in which both the host and the standby use identically configured Redis servers with high deployment cost, the invention achieves low cost and high availability.

Description

Distributed storage system and data delivery method
Technical Field
The invention relates to the technical field of distributed storage, in particular to a distributed storage system and a data delivery method.
Background
With the development of Internet technology, the advertising business needs to put massive amounts of data online accurately and quickly while keeping response latency low. A distributed system is therefore needed to achieve accurate and fast online delivery. In the prior art, whether a Redis scheme or a codis scheme is adopted, the deployed distributed storage system suffers from high deployment cost in order to achieve high availability.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a distributed storage system and a data delivery method, and aims to solve the technical problem that the existing distributed storage system is high in deployment cost for realizing high availability.
To achieve the above object, the present invention provides a distributed storage system, including a server cluster and a Redis proxy server; wherein:
the server cluster comprises a Redis server and a server meeting a preset deployment condition, wherein the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and the Redis instances are used for storing target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instance and/or the instance;
the Redis proxy server is used for sending the target data in the Redis instance to a publish-subscribe message system; the instance in the server is used for consuming target data in the publish-subscribe message system, so that the Redis server and the server store the same target data;
the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the instance based on an identifier in the data reading request.
Optionally, the server is an LMDB server, and the instance is an LMDB instance;
the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the distributed storage system further comprises a coordination server, wherein a mapping relation table is stored in the coordination server, and the mapping relation table comprises mapping relations between the plurality of logic slots and the plurality of virtual groups, and mapping relations between the plurality of virtual groups and the Redis instance and/or the LMDB instance.
Optionally, the Redis proxy server is further configured to receive a data reading request, obtain a corresponding target logical slot based on a value corresponding to the identifier in the data reading request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and read target data in a Redis instance having a mapping relationship with the target virtual group from the Redis server or read target data in an LMDB instance having a mapping relationship with the target virtual group from the LMDB server.
Optionally, the Redis proxy server is further configured to receive a data write request, obtain a target logical slot based on an identifier in the data write request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and store target data in the data write request into a Redis instance having a mapping relationship with the target virtual group.
Optionally, the Redis proxy server is further configured to authenticate user information corresponding to the data reading request when the data reading request is received, and read target data in the Redis instance or target data in the LMDB instance based on an identifier in the data reading request after the authentication is passed.
Optionally, the Redis proxy server is further configured to, when receiving the data reading request, determine whether the request amount per unit time reaches a threshold, and if so, perform flow control.
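The per-unit-time threshold check above can be sketched as a simple fixed-window counter. This is an illustrative assumption about one way such a check could work; the class and parameter names are not from the patent.

```python
import time

class FixedWindowLimiter:
    """Illustrative fixed-window limiter: count requests per unit-time
    window and trigger flow control once the threshold is reached."""

    def __init__(self, threshold: int, window_seconds: float = 1.0):
        self.threshold = threshold
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new unit-time window begins; reset the request counter.
            self.window_start = now
            self.count = 0
        self.count += 1
        # When the request amount reaches the threshold, refuse (flow control).
        return self.count <= self.threshold
```

In a real proxy the refused requests would be queued, rejected, or degraded according to the flow-control policy; a sliding-window or token-bucket variant would smooth out bursts at window boundaries.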
Optionally, the Redis proxy server establishes a long link with the server cluster.
Optionally, the distributed storage system includes a plurality of Redis proxy servers, each of the Redis proxy servers establishing a long link with the server cluster.
Optionally, the distributed storage system further includes a monitoring server, where the monitoring server includes a monitoring component and a display component; wherein:
the monitoring component is used for receiving data storage information sent by the Redis proxy server and/or receiving instance state information sent by the server cluster;
the display component is used for displaying the data storage information and/or the instance state information.
In addition, in order to achieve the above object, the present invention further provides a data delivery method, where the data delivery method is applied to a Redis proxy server, the Redis proxy server is communicatively connected to a server cluster, the server cluster includes a Redis server and a server meeting a predetermined deployment condition, the Redis server includes multiple Redis instances, the server includes multiple instances, and the Redis instances store target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instance and/or the instance;
the data delivery method comprises the following steps:
sending the target data in the Redis instance to a publish-subscribe message system so that the instance consumes the target data in the publish-subscribe message system;
receiving a data reading request;
reading target data in the Redis instance or target data in the instance based on the identification in the data reading request;
and delivering the target data.
Optionally, the server is an LMDB server, and the instance is an LMDB instance; the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the step of reading the target data in the Redis instance or the LMDB instance based on the identification in the data read request comprises:
obtaining a corresponding target logical slot based on a value corresponding to the identifier in the data reading request;
obtaining a target virtual group with a mapping relation with the target logical slot;
reading target data in the Redis instance in the mapping relation with the target virtual group from the Redis server or reading target data in the LMDB instance in the mapping relation with the target virtual group from the LMDB server.
Optionally, before the step of sending the target data in the Redis instance to a publish-subscribe message system, the method further includes:
receiving a data write request;
obtaining a target logical slot based on the identification in the data write request;
obtaining a target virtual group with a mapping relation with the target logical slot;
and storing target data in the data writing request into the Redis instance with a mapping relation with the target virtual group.
The invention can achieve the following beneficial effects.
The embodiment of the invention provides a distributed storage system and a data delivery method, wherein the distributed storage system comprises a server cluster and a Redis proxy server; the server cluster comprises a Redis server and a server meeting a preset deployment condition, wherein the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and the Redis instances are used for storing target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instances and/or the instances; the Redis proxy server is used for sending the target data in the Redis instances to a publish-subscribe message system; the instances in the server are used for consuming target data in the publish-subscribe message system, so that the Redis server and the server store the same target data; the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the instance based on an identifier in the data reading request.
Therefore, in this distributed storage system, a Redis server is adopted as the host to store target data in a distributed manner under the Redis scheme. On the one hand, to achieve high availability, a server meeting a preset deployment condition is adopted as the standby, and a publish-subscribe message system synchronizes data between the Redis server and that server. Compared with the existing approach in which both master and standby use identically configured, expensively deployed Redis servers, a lower-cost database server can be adopted, achieving low cost and high availability. On the other hand, the mapping relationship between the logical slots configured in the Redis proxy server and each instance avoids sharding at the database bottom layer and realizes a low-coupling design between the Redis proxy server and the cluster, so the proxy can work with various versions of Redis software in the cluster. The distributed system of this embodiment therefore not only achieves low cost and high availability, but its underlying Redis servers are also compatible with any native Redis version.
Drawings
FIG. 1 is a schematic structural diagram of a distributed storage system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a fragment mapping process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an authentication process according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating flow control according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a data monitoring warning process according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of a data delivery method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the descriptions relating to "first", "second", etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
The main solution of the embodiment of the invention is as follows: a distributed storage system is adopted, comprising a server cluster and a Redis proxy server; the server cluster comprises a Redis server and a server meeting a preset deployment condition, wherein the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and the Redis instances are used for storing target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instances and/or the instances; the Redis proxy server is used for sending the target data in the Redis instances to a publish-subscribe message system; the instances in the server are used for consuming target data in the publish-subscribe message system, so that the Redis server and the server store the same target data; the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the instance based on an identifier in the data reading request.
In the prior art, to realize efficient management of data, a distributed system is generally adopted, for example Redis-Cluster, codis, and various cloud-hosted Redis solutions. To achieve high availability, both the Redis scheme and the codis scheme, constrained by the difficulty of software configuration and deployment, adopt a fully in-memory master-standby scheme with identically configured machines to realize the master-standby structure, so the deployed distributed storage system has high cost. In addition, as shown by the codis source code of the current version, codis does not adopt logical sharding but shards at the data bottom layer, and the current codis version is low and cannot be compatible with high-version Redis databases.
The invention provides a solution: in the distributed storage system, to realize the Redis scheme, a Redis server is adopted as the host to store target data in a distributed manner; to achieve high availability, a server meeting a preset deployment condition is adopted as the standby, and a publish-subscribe message system synchronizes data between the Redis server and that server. In addition, the mapping relationship between the logical slots configured in the Redis proxy server and each instance avoids sharding at the database bottom layer and solves the compatibility problem between the proxy server and the underlying database. Therefore, by arranging the Redis server and a server meeting the preset deployment condition as host and standby, and solving the compatibility problem through the logical slots, a lower-cost server can be adopted compared with the existing approach in which both master and slave nodes use identically configured, expensively deployed Redis servers, achieving low cost and high availability.
Referring to fig. 1, an embodiment of the present invention provides a distributed storage system, in this embodiment, a server meeting a predetermined deployment condition may be an LMDB server with lower cost;
the distributed storage system includes: server clusters and Redis proxy servers; wherein the content of the first and second substances,
the server cluster comprises a Redis server and an LMDB server, the Redis server comprises a plurality of Redis instances, the LMDB server comprises a plurality of LMDB instances, and the plurality of Redis instances are used for storing target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instance and/or the instance;
the Redis proxy server is used for sending the target data in the Redis instance to a publish-subscribe message system; the LMDB instance in the LMDB server is used for consuming target data in the publish-subscribe message system, so that the Redis server and the LMDB server store the same target data;
the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the LMDB instance based on an identifier in the data reading request.
It should be noted that the full name of Redis is Remote Dictionary Server: an open-source, network-capable, log-type key-value database written in ANSI C that runs in memory and supports persistence, providing APIs for multiple languages. In particular, Redis supports a relatively rich set of value types, including string, list, set, and zset (sorted set). In this embodiment, the Redis server refers to a server with Redis software installed and configured for storing data.
LMDB stands for Lightning Memory-Mapped Database. LMDB accesses files via memory mapping, so addressing overhead within a file is very small and file addressing can be done with pointer operations; the single database file also reduces the overhead of copying/transmitting the data set. In this embodiment, the LMDB server is a server with LMDB software installed and configured for storing data, serving as the backup of the Redis server.
It can be seen that, since both Redis and LMDB are memory-based data structure stores, they implement database functionality through memory. Under otherwise identical conditions, however, LMDB in this embodiment requires relatively less memory space than Redis, so its hardware configuration requirements, and hence the cost of the deployed hardware, are correspondingly lower.
In addition, the Redis proxy server is a server which is provided with a Redis proxy program, and the Redis proxy program is a proxy program in a Redis scheme; redis proxy supports multiple business statement access modes, the command mode is seamlessly compatible with Redis syntax commands, and the vector mode provides binary data/natural statement input.
In a specific implementation process, in the distributed storage system in this embodiment, the server cluster is in communication connection with the Redis proxy server to implement reading and writing of data. In order to realize high availability while expanding data storage capacity, the server cluster may include a plurality of Redis groups, each Redis group including a Redis server and an LMDB server, the Redis server serving as a host and the LMDB server serving as a standby.
In order to avoid the single point problem of the proxy point, a plurality of Redis proxy servers may be provided and respectively connected to a plurality of Redis groups in a communication manner.
Referring to fig. 1, the distributed storage system includes: 3 Redis proxy servers, 3 Redis groups, each Redis group comprising one Redis server and one LMDB server. Each Redis proxy server establishes long links with each Redis group respectively, and long links can reduce time consumption caused by frequent links.
In an embodiment, a long link pool mechanism may be established between the Redis proxy server and the Redis group, and specifically, a long chain takeover program may be added to the Redis proxy server to perform the following steps:
when a cache data read-write request is monitored, acquiring an idle link in a link pool;
sending a data reading/writing request to a Redis server through the idle link so that the Redis server can execute target data reading and writing operations according to the data reading/writing request;
if the read-write operation is detected to fail, judging the idle link as an invalid link;
and if the idle link is detected to be an invalid link, sending an invalid notification to a link pool manager so that the link pool manager starts a link recovery mechanism for the idle link.
Compared with the prior art that invalid links in a link pool are periodically recovered, the link management method in the embodiment immediately sends an invalid notification to the link pool manager when the idle links are detected to be invalid links, so that the link pool manager can timely start a link recovery mechanism for the idle links, the technical defect that the invalid links cannot be timely recovered in the prior art is overcome, and the invalid links are prevented from occupying link pool resources.
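The long-link pool steps above can be sketched in miniature. This is an illustrative sketch, not the patent's implementation: the class, method names, and dictionary-based "links" are assumptions standing in for real connections.

```python
import queue

class LinkPool:
    """Minimal sketch of a long-link pool with immediate recovery of
    invalid links, as described in the steps above."""

    def __init__(self, links):
        self._idle = queue.Queue()
        for link in links:
            self._idle.put(link)

    def acquire(self):
        # When a cache read/write request arrives, fetch an idle link.
        return self._idle.get()

    def release(self, link, failed=False):
        if failed:
            # Read/write failed: the link is judged invalid, and an
            # "invalid notification" triggers recovery immediately,
            # rather than waiting for a periodic sweep.
            self._recover(link)
        else:
            self._idle.put(link)

    def _recover(self, link):
        # Link-recovery mechanism: reconnect (simulated) and return
        # the link to the pool so it no longer occupies a dead slot.
        link["alive"] = True
        self._idle.put(link)
```

The design point is the `failed=True` path: recovery starts at the moment of failure detection, which is what distinguishes this scheme from periodic invalid-link collection.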
In addition, the 3 Redis proxy servers may also be connected to a user side (also referred to as a service side in this embodiment). In one scenario, the distributed storage system is used to put user-profile (portrait) data online, so the user side may develop a set of software to facilitate importing the portrait data and outputting the data read from the server cluster to advertisement delivery software, a DSP (Demand Side Platform), for delivery. It should be noted that a long link may also be established between the Redis proxy server and the user side.
Further, the Redis server may include a plurality of Redis instances, and the LMDB server may include a plurality of LMDB instances, the plurality of Redis instances being used to store the target data. The storage mode may be a cache; unlike disk storage, a cache can be read and written quickly, so data delivery is faster.
The Redis proxy server is used for sending the target data in the Redis instance to a publish-subscribe message system; the LMDB instance in the LMDB server is used for consuming target data in the publish-subscribe message system, so that the Redis server and the LMDB server store the same target data. In this case, the Redis server serves as a host, stores the target data, and is used by the user end to read the target data through the Redis proxy server; and the LMDB server serves as a standby machine, and when the Redis instance in the host computer is offline, the user end reads the target data through the Redis proxy server.
Specifically, the publish-subscribe message system may be Kafka. The Kafka server, with Kafka software installed, is communicatively connected to the Redis proxy server and the LMDB server. When the Redis instance is normal, the user side reads and writes data in the host Redis instance through the Redis proxy server, and the Redis proxy server, acting as a producer, produces the write-command data to Kafka's lmdbtopic topic; the LMDB proxy server, acting as a consumer, consumes the data in Kafka in near real time, guaranteeing eventual consistency of data between host and standby and thus high availability of the distributed system at lower deployment cost.
In addition, when a Redis instance goes offline, the Redis proxy server checks the failed instance and starts the standby scheme: read commands for the offline instance are switched to the standby LMDB proxy server, while write-command data for the offline instance is additionally produced to Kafka's failtopic topic. After the instance comes back online, failtopic can be consumed, so writes made during the offline period are cached and loss or inconsistency between host and standby is avoided.
When the Redis instance recovers, its recovery is sensed through the calling component, and the instance, acting as a consumer, starts consuming the failtopic data and writes it directly into the Redis instance. Once the data in failtopic has been fully consumed, read-write requests are switched from the standby back to the host, providing normal high-performance service.
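The host-standby synchronization and failover flow above can be sketched in miniature. The topic names lmdbtopic and failtopic come from this embodiment; the in-memory topic lists, store dictionaries, and function names are illustrative stand-ins for Kafka and the real servers.

```python
# In-memory stand-ins for the Kafka topics and the two stores.
topics = {"lmdbtopic": [], "failtopic": []}
redis_store = {}   # host Redis instance (stub)
lmdb_store = {}    # standby LMDB instance (stub)

def proxy_write(key, value, redis_online=True):
    """Proxy-side write: to the host when online, else buffered in failtopic."""
    if redis_online:
        redis_store[key] = value
        topics["lmdbtopic"].append((key, value))  # produce for the standby
    else:
        topics["failtopic"].append((key, value))  # cache writes while offline

def lmdb_consume():
    """Standby side consumes lmdbtopic in near real time, so the LMDB
    store becomes eventually consistent with the host."""
    while topics["lmdbtopic"]:
        key, value = topics["lmdbtopic"].pop(0)
        lmdb_store[key] = value

def redis_recover():
    """On recovery, the host drains failtopic before taking traffic back."""
    while topics["failtopic"]:
        key, value = topics["failtopic"].pop(0)
        redis_store[key] = value
        topics["lmdbtopic"].append((key, value))  # keep the standby in sync
```

In the patent, this buffering is what prevents writes made during an instance's offline period from being lost or diverging between host and standby; a real deployment would use Kafka consumer groups and offsets rather than list pops.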
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and those skilled in the art can set the technical solution based on the needs in practical application, and the technical solution is not limited herein.
As can be easily found from the above description, in the distributed storage system provided in this embodiment, in order to implement a Redis scheme, a Redis server is used as a host to store target data in a distributed manner, and in order to implement high availability, an LMDB server is used as a standby machine, and a publish-subscribe message system is used to synchronize data of the Redis server and the LMDB server.
In another embodiment of the present application, on the basis of the first embodiment, the server is an LMDB server, and the instance is an LMDB instance; the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the distributed storage system further comprises a coordination server, wherein a mapping relation table is stored in the coordination server, and the mapping relation table comprises mapping relations between the plurality of logic slots and the plurality of virtual groups, and mapping relations between the plurality of virtual groups and the Redis instance and/or the LMDB instance.
In a specific implementation, the coordination server may be a ZooKeeper server. Referring to FIG. 2, the slots in this embodiment are obtained by logical partitioning and are therefore called logical slots: a Cyclic Redundancy Check (CRC) algorithm performs logical sharding according to the identifier key in the data, yielding 1024 logical slots, namely slot0, slot1, ..., slot1023.
A table-name rule is set at the service end, namely: AppName.TableName maps to an instance; a number of logical slots are statically configured (app.table -> slot); different slots in turn map to corresponding virtual groups (slot -> group); each virtual group corresponds to a one-master-multiple-slave set of instances; and these relationships are stored in the zookeeper server;
when data is read from or written to the Redis server, all slot relations are first obtained via AppName.TableName;
the slotid the key falls into is computed as crc32(key) % 1024;
and the virtual group corresponding to the slotid is looked up, yielding the instance information corresponding to that group.
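The routing steps above (crc32(key) % 1024 → slotid → virtual group → instance) can be sketched as follows; the mapping tables here are hypothetical stand-ins for what would actually be read from the zookeeper server:

```python
import binascii

NUM_SLOTS = 1024

def slot_of(key: str) -> int:
    # slotid = crc32(key) % 1024, as described above
    return binascii.crc32(key.encode()) % NUM_SLOTS

# Hypothetical mapping tables (slot -> group, group -> instance IP/Port),
# standing in for the mapping relation table stored in zookeeper:
slot_to_group = {s: s % 4 for s in range(NUM_SLOTS)}          # 4 virtual groups
group_to_instance = {0: ("10.0.0.1", 6379), 1: ("10.0.0.2", 6379),
                     2: ("10.0.0.3", 6379), 3: ("10.0.0.4", 6379)}

def route(key: str):
    slot = slot_of(key)
    group = slot_to_group[slot]
    return group_to_instance[group]       # (IP, Port) of the target instance

ip, port = route("devicemd5:1101")
```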
As an optional implementation manner, the Redis proxy server is further configured to receive a data write request, obtain a target logical slot based on an identifier in the data write request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and store target data in the data write request into a Redis instance having a mapping relationship with the target virtual group.
For example, when the service end issues a data write request for image data devicemd5:1101 (key:value format), the Redis proxy server, after receiving the request, applies the CRC32 algorithm to the key and takes the result modulo 1024, obtaining the slotid, i.e. the ID of the corresponding logical slot; it then finds the corresponding virtual group from the slotid, obtains the Redis instance information (IP and Port) from the virtual group's mapping relation table, and writes the data devicemd5:1101 into that Redis instance.
As another optional implementation manner, the Redis proxy server is further configured to receive a data reading request, obtain a corresponding target logical slot based on a value corresponding to an identifier in the data reading request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and read target data in a Redis instance having a mapping relationship with the target virtual group from the Redis server or read target data in an LMDB instance having a mapping relationship with the target virtual group from the LMDB server.
It should be noted that the value corresponding to the identifier here is a value corresponding to the identifier key in the key-value data format.
For example, when the service end is about to place an advertisement and the DSP triggers a data reading request for image data, the value corresponding to the data devicemd5 (key) is obtained; the corresponding virtual group is found from the logical slot corresponding to that value, the Redis instance information (IP and Port) is obtained from its mapping relation table, and the data devicemd5:1101 is read from the Redis instance.
Specifically, the complete image data delivery process is as follows:
firstly, an image data writing process:
the image data factory assembles the raw data into the interaction protocol through the service end and sends it to the Redis proxy server in Pipeline mode;
a receiving thread of the Redis proxy server receives a request from a data factory;
a processing thread of the Redis proxy server splits the request according to the Redis instance, constructs the Redis instance request and sends the Redis instance request to the Redis instance;
a connection pool of the Redis proxy server manages the long-lived connections between the Redis proxy server and the server cluster;
and the sending thread of the Redis proxy server replies the processing result to the service end.
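The proxy's request-splitting step (the processing thread dividing a pipelined batch by target Redis instance) can be illustrated with a minimal sketch — the slot layout and instance names are hypothetical, and the real proxy would forward each sub-batch over a pooled long-lived connection:

```python
from collections import defaultdict
import binascii

def split_by_instance(batch, slot_to_instance, num_slots=1024):
    """Group a pipelined batch of (key, value) writes by target Redis
    instance, as the proxy's processing thread does before forwarding."""
    per_instance = defaultdict(list)
    for key, value in batch:
        slot = binascii.crc32(key.encode()) % num_slots   # crc32(key) % 1024
        per_instance[slot_to_instance[slot]].append((key, value))
    return dict(per_instance)

# Hypothetical two-instance layout: even slots on inst-a, odd slots on inst-b.
layout = {s: ("inst-a" if s % 2 == 0 else "inst-b") for s in range(1024)}
batch = [("devicemd5:1101", "1"), ("devicemd5:1102", "0"), ("devicemd5:1103", "1")]
split = split_by_instance(batch, layout)
```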
Next is the image data reading process:
an advertiser sends an advertisement placement demand to the ad placement platform (DSP);
the ad placement platform DSP assembles a read command via the service end based on didmd5 and sends it to the Redis proxy server in Pipeline (batch) mode; after receiving the Redis proxy server's response, it returns the bidding result to the advertiser according to the bidding logic, completing delivery of the image data;
a receiving thread of the Redis proxy server receives a request from a service end;
a processing thread of the Redis proxy server splits the request according to the Redis instance, constructs the Redis instance request and sends the Redis instance request to the Redis instance;
a connection pool of the Redis proxy server manages the long-lived connections between the Redis proxy server and the server cluster;
and the sending thread of the Redis proxy server replies the processing result to the service end.
The above is a process of implementing image data delivery by using the distributed storage system of the embodiment.
It should be understood that the above is only an example and does not limit the technical solution of the present invention in any way; those skilled in the art may configure it as needed in practical applications, and no limitation is imposed here.
As can be easily found from the above description, the distributed storage system of this embodiment performs logical sharding in the Redis proxy server and stores the mapping relation table in the zookeeper server, achieving a low-coupling design with the cluster, i.e. Redis software of any version in the cluster can be used. By contrast, in the existing codis scheme (as the code of its current version shows), sharding is not logical but is performed at the data bottom layer, and the current codis version is too old to be compatible with high-version Redis databases. The distributed system of this embodiment therefore not only achieves low cost and high availability, but its underlying Redis server is also compatible with any native Redis version and supports Redis version upgrades.
In another optional embodiment of the present application, on the basis of the foregoing embodiment, the Redis proxy server is further configured to authenticate user information corresponding to the data reading request when the data reading request is received, and read target data in the Redis instance or target data in the LMDB instance based on an identifier in the data reading request after the authentication is passed.
In a specific implementation process, referring to fig. 3, a user may first register an account and password through a Web page at the service end; the registration information is stored in a DataBase and synchronized to the zookeeper server. As shown in fig. 3, when the Redis proxy server receives a data reading request, the account Id and password carried on the interface provided by the Redis proxy server are verified; if verification passes, access to the target data of the Redis instance in the Redis server is allowed, otherwise the request is rejected.
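The authentication check can be sketched as follows, with an in-memory account registry standing in for the registration information synchronized from zookeeper; all names and the error value are hypothetical illustrations:

```python
# Hypothetical account registry, synced from the zookeeper server in the
# real system (account Id -> password).
accounts = {"ad_service": "pw123"}

def authenticate(account_id: str, password: str) -> bool:
    """ACL check: only registered accounts with the right password pass."""
    return accounts.get(account_id) == password

def handle_read(account_id, password, key, redis_store):
    """Verify the credentials carried with the read request, then serve it."""
    if not authenticate(account_id, password):
        return "AUTH_FAILED"              # hypothetical rejection marker
    return redis_store.get(key)
```

Because the registry is hot-updated via zookeeper in the described system, adding or removing an entry in `accounts` models adding or deleting a service account without restarting the proxy.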
Through the above description, it is not difficult to find that the proxy layer RedisProxy in the distributed storage system of this embodiment provides ACL authentication capability, ensuring that only authorized service accounts can access data and preventing the Redis data from being dumped and leaked by unauthorized parties.
In addition, it can be understood that, because registered accounts are synchronized into zookeeper, zookeeper's configuration hot-update function can be used to add and delete service accounts without restarting the RedisProxy service in the Redis proxy server, reducing the impact on normal service.
In another optional embodiment of the present application, the Redis proxy server is further configured to, when receiving the data reading request, determine whether the request amount per unit time has reached a threshold, and if so, perform flow control.
In a specific implementation process, referring to fig. 4, a user may first set a per-unit-time traffic threshold through a Web page at the service end according to the traffic and machine performance of the service scenario; the threshold is persisted into a DataBase, synchronized to the zookeeper server, and loaded into memory by the RedisProxy agent in the Redis proxy server. As shown in fig. 4, when the Redis proxy server receives a data reading request, RedisProxy calculates whether the request amount per unit time has reached the threshold; if it has, flow control is required and a flow-control error code is returned to the service end; if not, no flow control is needed, the request reaches the Redis instance, and the expected result is returned.
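The per-unit-time threshold check can be sketched as a fixed-window counter; the class and the error value are hypothetical illustrations under the assumption that "unit time" is one second, not the patent's actual implementation:

```python
class FlowControl:
    """Fixed-window counter: limit the request amount per unit time."""

    def __init__(self, threshold: int):
        self.threshold = threshold   # loaded from zookeeper in the real system
        self.window = None           # current unit-time window (e.g. second)
        self.count = 0

    def allow(self, now_sec: int) -> bool:
        """Return True if the request may proceed, False if flow control
        applies (the proxy would return a flow-control error code)."""
        if now_sec != self.window:   # a new unit-time window starts
            self.window, self.count = now_sec, 0
        self.count += 1
        return self.count <= self.threshold
```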
Through the above description, it is not difficult to find that the proxy layer RedisProxy of the distributed storage system of this embodiment supports flow control and can perform traffic monitoring and rate limiting per service, preventing traffic peaks from overloading the server cluster.
In another optional embodiment of the present application, the distributed storage system further includes a monitoring server, where the monitoring server includes a monitoring component and a display component; wherein:
the monitoring component is used for receiving data storage information sent by the Redis proxy server and/or receiving instance state information sent by the server cluster;
the display component is used for displaying the data storage information and/or the instance state information.
In a specific implementation, the monitoring component may be a Prometheus component and the display component a Grafana component. Referring to fig. 5, in this embodiment the Redis proxy server is connected to the open-source component Prometheus as the data collection and storage engine, with Grafana used for data display and alarm configuration. For Redis instances, cluster state data such as connection count, used memory, commands per second, and hit rate is automatically reported to Prometheus via the redis-exporter tool and displayed with Grafana; the service end can implement a threshold alarm function, and on an anomaly project staff are notified via enterprise WeChat. For the RedisProxy agent, service data is reported in real time through the agent interface, for example request volume, response volume, average latency, timeouts, and client access IPs; the data is displayed with Grafana, threshold alarms can be configured during service development, and on an anomaly project staff are notified via enterprise WeChat.
Through the above description, it is easy to find that the entire architecture of the distributed storage system of this embodiment supports 7×24 monitoring and maintenance of the Redis proxy server and the server cluster, alarms in real time, and ensures service availability of 99.95%.
Referring to an advertisement service scenario of a certain company, taking a storage requirement of 1280 GB of memory as an example, with reads and writes at 200,000 TPS (20w/tps), P99 latency within 10 ms, and high availability, the deployment costs are as follows.
[Table: deployment-cost comparison across schemes — original images BDA0002724420270000131 and BDA0002724420270000141 not reproduced]
As can be seen from the data in the table, for the same memory requirement the distributed storage system of this embodiment has the lowest deployment cost while still meeting the fast-delivery requirement of 200,000 TPS (20w/tps) with P99 latency within 10 ms, and achieving high availability.
Referring to fig. 6, based on the same inventive concept as the foregoing embodiment, an embodiment of the present application further provides a data delivery method, where the data delivery method is applied to a Redis proxy server, where the Redis proxy server is communicatively connected with a server cluster, the server cluster includes a Redis server and a server meeting a predetermined deployment condition, the Redis server includes multiple Redis instances, the server includes multiple instances, and the Redis instances store target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instance and/or the instance;
the data delivery method comprises the following steps:
s10, sending the target data in the Redis instance to a publish-subscribe message system so that the instance consumes the target data in the publish-subscribe message system;
s20, receiving a data reading request;
s30, reading target data in the Redis instance or target data in the instance of the server based on the identification in the data reading request;
and S40, delivering the target data.
As an alternative embodiment, the server is an LMDB server, the instance is an LMDB instance; the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the step of reading the target data in the Redis instance or the LMDB instance based on the identification in the data read request comprises:
obtaining a corresponding target logical slot based on a value corresponding to the identifier in the data reading request;
obtaining a target virtual group with a mapping relation with the target logical slot;
reading target data in the Redis instance in the mapping relation with the target virtual group from the Redis server or reading target data in the LMDB instance in the mapping relation with the target virtual group from the LMDB server.
As an optional embodiment, before the step of sending the target data in the Redis instance to a publish-subscribe messaging system, the method further comprises:
receiving a data write request;
obtaining a target logical slot based on the identification in the data write request;
obtaining a target virtual group with a mapping relation with the target logical slot;
and storing target data in the data writing request into the Redis instance with a mapping relation with the target virtual group.
It should be noted that the delivery method in this embodiment corresponds one-to-one with the functions of the distributed storage system in the foregoing embodiment; for its various implementations, reference may therefore be made to the foregoing embodiment, and details are not repeated here.
On the basis of the distributed storage system of the foregoing embodiment, an embodiment of the present application further provides a storage capacity adjustment method, where the storage capacity adjustment method includes:
generating a migration schedule about a target logical slot in the proxy server based on the acquired capacity adjustment requirement, so that the proxy server sets a hit instance in the migration schedule as a migration state after acquiring the migration schedule; when the hit instance is in a migration state, the proxy server transfers a read-write access path of data in the hit instance to the standby storage server;
when the hit instance is in a migration state, acquiring an identifier of target data corresponding to the target logical slot in the original instance;
migrating the target data from the original instance to the target instance and migrating the mapping relation between the target logical slot and the original instance to the target instance based on the identification of the target data, so that the target logical slot and the target instance have the mapping relation; the target instance is an instance which is newly added during capacity expansion or an instance which is left during capacity reduction of the distributed storage system and corresponds to the capacity adjustment requirement.
It should be noted that, in this embodiment, the storage capacity adjustment refers to performing capacity expansion or capacity reduction on the distributed storage system in the foregoing embodiment, that is, adding or subtracting instances to a server cluster, which may be embodied as adding or subtracting a primary storage server from a hardware perspective.
In particular, the method of this embodiment may be implemented by a program process installed on a host different from the proxy server and communicatively connected with the server cluster. The execution of the method is described below with the program process as the executing subject.
First, the capacity adjustment requirement is acquired.
In a specific implementation process, the storage capacity adjustment refers to performing capacity expansion or capacity reduction on the distributed storage system in the above embodiment, and the capacity adjustment requirement may be obtained from the operation and maintenance system. The operation and maintenance system has the functions of: when business data grows/decreases, a Redis instance expansion/contraction instruction is triggered through the tool/platform.
Next, generating a migration schedule about a target logical slot in the proxy server based on the acquired capacity adjustment requirement, so that the proxy server sets a hit instance in the migration schedule as a migration state after acquiring the migration schedule;
and when the hit instance is in a migration state, the proxy server transfers the read-write access path of the data in the hit instance to the standby storage server.
In a specific implementation process, the migration schedule refers to a migration schedule related to a logical slot, and the target logical slot refers to a logical slot corresponding to target data migration in the capacity adjustment requirement. It should be noted that, in this embodiment, a plurality of logical slots are configured in the proxy server, and a mapping relationship exists between the plurality of logical slots and the plurality of instances. Therefore, for increasing or decreasing instances, re-fragmentation is required, and the logical slot is migrated to reestablish the mapping relationship, thereby implementing data reading and writing.
In addition, the migration schedule includes hit instances, namely the original instance mapped to the target logical slot before migration and the target instance mapped to it after migration. To allow reading and writing during data migration, after the proxy server acquires the migration schedule it may set the hit instances in the schedule to a migration state; while a hit instance is in the migration state, the proxy server transfers the read-write access path of the data in that instance to the standby storage server. Unlike the prior art, data distribution here is realized through the logical slots in the proxy server rather than by sharding at the database bottom layer. Therefore, during an expansion/contraction operation on the database bottom layer of the distributed storage system, read-write access can be temporarily transferred to the standby storage server without being affected, solving the prior-art problem that, with data distribution realized through physical slots, a standby scheme cannot be implemented and data cannot be written during expansion/contraction. After the expansion/contraction operation completes, the data temporarily read and written by the standby storage server can be synchronized back to the cluster, ensuring data consistency of the distributed storage system.
Therefore, before data migration, a migration schedule needs to be generated for the target logical slot.
As an alternative embodiment, the step of generating a migration schedule for a target logical slot in the proxy server based on the capacity adjustment requirement includes:
and generating a migration schedule about the target logic slot in the proxy server by adopting a balance algorithm based on the capacity adjustment requirement.
In the specific implementation process, the allocation of the migrated logic slot can be more balanced by adopting a balancing algorithm, so that the read-write performance of the expanded/reduced distributed storage system is more stable.
As an optional embodiment, when the capacity adjustment requirement is a capacity expansion requirement, the target instance is an instance in a newly added primary storage server; the step of generating a migration schedule about a target logical slot in the proxy server by using a balancing algorithm based on the capacity adjustment requirement includes:
based on the capacity expansion requirement, obtaining the average number of the logic slots distributed to the capacity expanded instance;
based on the number of the logic slots of each instance before capacity expansion, performing descending order arrangement on each instance before capacity expansion;
traversing each instance after descending order arrangement before capacity expansion, distributing the logical slots exceeding the average number in each instance before capacity expansion to the target instance until the number of the logical slots in the instance before capacity expansion is equal to the average number, and generating a migration schedule of the target logical slots in the proxy server.
In a specific implementation process, when the capacity adjustment requirement is a capacity expansion requirement and the target instance is an instance in a newly added primary storage server, part of the existing logical slots must be allocated to the newly added instance, so the logical slots must be re-allocated. Specifically, the sorted instances are processed in order according to their logical-slot counts and the average number, ensuring that the allocated logical slots are balanced.
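The expansion balancing steps above can be sketched as follows, assuming a slot assignment table keyed by instance; the function and instance names are hypothetical:

```python
def rebalance_expand(assignment, new_instance):
    """assignment: {instance: [slot, ...]}. Move excess slots from the
    existing instances (largest first) to the newly added instance until
    each existing instance retains only the average number of slots.
    Returns the migration schedule {slot: (source, target)}."""
    total = sum(len(slots) for slots in assignment.values())
    avg = total // (len(assignment) + 1)        # average after expansion
    assignment[new_instance] = []
    schedule = {}
    # Traverse existing instances in descending order of slot count.
    for inst in sorted((i for i in assignment if i != new_instance),
                       key=lambda i: len(assignment[i]), reverse=True):
        while len(assignment[inst]) > avg:      # give away slots above average
            slot = assignment[inst].pop()
            assignment[new_instance].append(slot)
            schedule[slot] = (inst, new_instance)
    return schedule

# Example: 4 instances of 256 slots each, expanded with a 5th instance.
assignment = {f"i{k}": list(range(k * 256, (k + 1) * 256)) for k in range(4)}
schedule = rebalance_expand(assignment, "i4")
```

With 1024 slots and a new average of 1024 // 5 = 204, each original instance keeps 204 slots and the new instance receives the remaining 208, so the distribution stays balanced.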
As another optional embodiment, when the capacity adjustment requirement is a capacity reduction requirement, the target instance is an instance in an original primary storage server; the step of generating a migration schedule about a target logical slot in the proxy server by using a balancing algorithm based on the capacity adjustment requirement includes:
obtaining an average number of logical slots allocated to the scaled instances based on the scaling requirements;
based on the number of logic slots of each instance before capacity reduction, carrying out ascending order arrangement on the instances reserved for capacity reduction;
and traversing the instances of the capacity reduction reservation after the ascending arrangement, distributing the logic slots in the reduced instances into the target instances until the number of the logic slots in the instances of the capacity reduction reservation is equal to the average number, and generating a migration schedule of the target logic slots in the proxy server.
In a specific implementation process, when the capacity adjustment requirement is a capacity reduction requirement and the target instance is an instance in an original primary storage server, the logical slots in the removed instance must be mapped into the target instances, so the logical slots must be re-allocated. Specifically, the sorted instances are processed in order according to their logical-slot counts and the average number, ensuring that the allocated logical slots are balanced.
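The capacity-reduction counterpart can be sketched similarly — the removed instance's slots are redistributed over the remaining instances, smallest first, up to the new average; the function and instance names are hypothetical:

```python
def rebalance_shrink(assignment, removed):
    """Redistribute the slots of the removed instance over the remaining
    instances, ascending by current slot count, until each reaches the
    new average; leftovers go to the smallest instance."""
    freed = assignment.pop(removed)
    avg = (sum(len(v) for v in assignment.values()) + len(freed)) // len(assignment)
    schedule = {}
    # Traverse the retained instances in ascending order of slot count.
    for inst in sorted(assignment, key=lambda i: len(assignment[i])):
        while freed and len(assignment[inst]) < avg:
            slot = freed.pop()
            assignment[inst].append(slot)
            schedule[slot] = (removed, inst)
    while freed:   # remainder when avg does not divide evenly
        slot = freed.pop()
        inst = min(assignment, key=lambda i: len(assignment[i]))
        assignment[inst].append(slot)
        schedule[slot] = (removed, inst)
    return schedule

# Example: 4 instances of 256 slots each, shrunk by removing one instance.
assignment = {f"i{k}": list(range(k * 256, (k + 1) * 256)) for k in range(4)}
schedule = rebalance_shrink(assignment, "i3")
```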
Then, while the hit instance is in the migration state, the identifier of the target data corresponding to the target logical slot in the original instance is acquired.
In a specific implementation process, in order to subsequently generate the migration command, the instance must first be scanned to obtain the identifiers of the target data before the data is migrated. During this process the hit instance is in the migration state, so reads and writes remain available.
Then, based on the identification of the target data, migrating the target data from the original instance to the target instance, and migrating the mapping relation between the target logical slot and the original instance to the target instance, so that the target logical slot and the target instance have a mapping relation; the target instance is an instance which is newly added during capacity expansion or an instance which is left during capacity reduction of the distributed storage system and corresponds to the capacity adjustment requirement.
In a specific implementation process, a migration command needs to be generated first based on the identification of the target data. As an optional embodiment, the migrating the target data from the original instance to the target instance and the mapping relationship between the target logical slot and the original instance to the target instance based on the identifier of the target data includes:
generating a native migration command based on the identification of the target data;
migrating the target data from the original instance to the target instance and migrating a mapping relationship of the target logical slot to the original instance to the target instance based on the native migration command.
The native migration command is a native Migrate command. In a specific implementation process, the native Migrate command can realize batch migration, improve data migration efficiency, and further improve capacity expansion/reduction efficiency.
In a specific implementation process, the target data is then migrated. As an optional embodiment, the migration command includes migration plans for the target data corresponding to a plurality of target logical slots; the step of migrating the target data from the original instance to the target instance based on the migration command includes:
creating a task queue based on the migration command, wherein the task queue comprises migration plans of target data corresponding to the target logical slots;
creating a plurality of threads with the same number as the target logic slots on the basis of the task queue;
and the multiple threads respectively obtain the migration plans in the task queue so as to migrate the target data corresponding to the multiple target logic slots from the original examples to the target examples.
It can be understood that, in a specific implementation process, when a migration command includes migration plans for target data corresponding to multiple target logical slots, a shared-memory task queue can be created, with the migration plan of each target logical slot placed in the queue as one task. Making full use of the CPU cores, multiple threads are created; each thread takes tasks from the queue and migrates the target data corresponding to its target logical slots from the original instance to the target instance, achieving concurrent migration and increasing migration speed.
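The shared task queue with one worker thread per target logical slot can be sketched as follows; plain dictionaries stand in for the Redis instances, and the lock models the per-key migration step (the real system would issue native Migrate commands instead):

```python
import queue
import threading

def migrate_concurrently(plans, source, target):
    """plans: one migration plan (a list of keys) per target logical slot.
    A shared task queue feeds one worker thread per plan, as described."""
    tasks = queue.Queue()
    for plan in plans:               # each plan is one task in the queue
        tasks.put(plan)
    lock = threading.Lock()

    def worker():
        while True:
            try:
                plan = tasks.get_nowait()
            except queue.Empty:
                return               # queue drained: this thread is done
            for key in plan:         # move each key's data to the target
                with lock:
                    target[key] = source.pop(key)

    threads = [threading.Thread(target=worker) for _ in plans]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Example: six keys split across three slot-level migration plans.
source = {f"k{i}": i for i in range(6)}
target = {}
migrate_concurrently([["k0", "k1"], ["k2", "k3"], ["k4", "k5"]], source, target)
```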
Based on the distributed storage system in the foregoing embodiment, an embodiment of the present application further provides a data synchronization method for a distributed storage system. In this implementation, the distributed storage system further includes a monitoring server, a coordination server, and a publish-subscribe message system based on the foregoing embodiment.
Specifically, the monitoring server is connected with the main storage server on one hand to monitor the state of the instance in the main storage server; on the other hand, the system is connected with a publish-subscribe message system to consume the data in the publish-subscribe message system; and meanwhile, connecting with the coordination server to store the state information of the instance. The proxy server is connected with the coordination server to read the instance state information in the proxy server; the publish-subscribe message system is also connected with the proxy server and the standby storage server to receive data writing of the proxy server and enable the standby storage server to consume data in the data writing.
The data synchronization method of the distributed storage system of the embodiment comprises the following steps:
monitoring a status of an instance in the primary storage server;
if the target instance in the main storage server is monitored to be in an offline state, generating offline state information of the target instance, so that the proxy server switches a read-write access path of the target instance to the standby storage server after reading the offline state information; during the period that the read-write access path of the target instance is switched to the standby storage server, the proxy server respectively writes target data into the standby storage server and a publish-subscribe message system;
and after the target instance is monitored to be restored from the offline state to the online state, consuming the target data in the publish-subscribe message system, and writing the target data in the publish-subscribe message system into the target instance in the online state, so that the data in the main storage server and the standby storage server are synchronized.
It should be noted that an instance has an online state and an offline state. When online, the instance is in a normal state and data can be read and written; when offline, data cannot be read or written, so the data in the instance falls out of sync. The data synchronization method of this embodiment synchronizes the data in each instance against the desynchronization caused by instance state changes.
In addition, the method execution subject in this implementation may be a monitoring server.
First, the status of the instances in the primary storage server is monitored.
In the specific implementation process, the state of the instance refers to whether each instance in the main storage server is in an online state or an offline state. Since data synchronization is due to a change in the state of an instance, to perform data synchronization, the state of the instance in the primary storage server is first monitored.
In one embodiment, the step of monitoring the status of instances in the primary storage server comprises:
periodically detecting the network connection condition of the instance in the main storage server according to a preset time interval;
and if the instance fails the network connection check at least three consecutive times, it is determined to be in an offline state.
In the specific implementation process, the network connection check may use the ping command. ping is a command available under Windows, Unix, and Linux systems; it relies on ICMP, part of the TCP/IP protocol suite, and the "ping" command can check whether the network is reachable, which helps analyze and locate network faults. A ping connection can therefore detect whether an instance is online: if the instance cannot be reached over the network, it is offline.
Specifically, since a ping connection failure may be a temporary ping connection failure caused by an accidental network drop, in this embodiment, if the example fails in ping connection at least three times continuously, it is determined that the example is in an offline state.
Further, after the step of determining that the instance is in the offline state if it fails the ping at least three consecutive times, the method further includes:
when the instance is in the offline state, if at least one successful ping to the instance is detected, determining that the instance has recovered from the offline state to the online state.
In a specific implementation, a successful ping indicates network connectivity, in which case the instance must be online.
Next, if the target instance in the primary storage server is monitored to be in the offline state, offline state information for the target instance is generated, so that the proxy server, after reading the offline state information, switches the read-write access path of the target instance to the standby storage server; while the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data to the standby storage server and to a publish-subscribe message system respectively.
It should be noted that the target instance is an instance in the primary storage server. Under normal conditions all instances are online; when the monitoring server detects that the target instance in the primary storage server is offline, it generates the offline state information of the target instance.
Specifically, the distributed storage system may further include a coordination server. In that case the offline state information may be stored in the coordination server; the proxy server reads the offline state information from the coordination server, determines that the target instance is offline, and switches the read-write access path of the target instance to the standby storage server. While the read-write access path of the target instance is switched to the standby storage server, the proxy server writes target data to the standby storage server and to the publish-subscribe message system respectively.
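The interaction between the coordination server and the proxy server during failover can be sketched with in-memory stand-ins. All class and method names below are illustrative assumptions (dicts replace the real storage servers and a list replaces the message system); the point is the routing rule of the embodiment: while the target instance is marked offline, writes go to the standby storage server and are also published to the message system.

```python
class CoordinationServer:
    """Toy stand-in for the coordination server holding instance state."""
    def __init__(self):
        self._offline = set()

    def mark_offline(self, instance_id):   # written by the monitoring server
        self._offline.add(instance_id)

    def mark_online(self, instance_id):
        self._offline.discard(instance_id)

    def is_offline(self, instance_id):     # read by the proxy server
        return instance_id in self._offline

class ProxyServer:
    def __init__(self, coord, primary, standby, message_system):
        self.coord = coord
        self.primary = primary                # dict: instance_id -> {key: value}
        self.standby = standby                # dict: key -> value
        self.message_system = message_system  # list used as a publish log

    def write(self, instance_id, key, value):
        if self.coord.is_offline(instance_id):
            # Target instance offline: write to the standby server AND
            # publish to the message system so the data can be replayed
            # into the instance after it comes back online.
            self.standby[key] = value
            self.message_system.append((instance_id, key, value))
        else:
            # Normal path: write into the target instance directly.
            self.primary.setdefault(instance_id, {})[key] = value
```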
Writing the target data to the standby storage server ensures that, while the target instance is offline, the standby storage server serves as a backup and data can still be written normally; writing the target data to the publish-subscribe message system allows the target data to be replayed into the target instance once it comes back online, keeping the data in the primary and standby storage servers synchronized.
Therefore, after the target instance is monitored to have recovered from the offline state to the online state, the target data in the publish-subscribe message system is consumed and written into the now-online target instance, so that the data in the primary and standby storage servers is synchronized.
In a specific implementation, after detecting that the target instance has gone offline, the monitoring server keeps monitoring its state; once it detects that the target instance has recovered from the offline state to the online state, it consumes the target data in the publish-subscribe message system and writes that data into the now-online target instance, synchronizing the data in the primary and standby storage servers.
Specifically, because the target data in the publish-subscribe message system has also been stored synchronously in the standby storage server, writing that target data into the online target instance synchronizes the data in the primary and standby storage servers.
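The recovery step — consuming the pending target data from the publish-subscribe message system and writing it into the instance that has come back online — can be sketched as follows. The tuple-list message log, the `instances` dict, and the function name are illustrative assumptions standing in for a real message queue and real storage instances.

```python
def replay_on_recovery(instance_id, message_system, instances):
    """After the target instance returns online, consume its pending
    target data from the message log and write it into the instance,
    bringing it back in sync with the standby storage server.

    message_system: list of (instance_id, key, value) tuples (toy log).
    instances: dict mapping instance_id -> {key: value} store.
    """
    remaining = []
    store = instances.setdefault(instance_id, {})
    for dest, key, value in message_system:
        if dest == instance_id:
            store[key] = value  # replay into the recovered instance
        else:
            remaining.append((dest, key, value))
    # Consumed messages are removed, as with a consumed pub-sub topic.
    message_system[:] = remaining
```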
As an embodiment, after the step of monitoring the instance status in the primary storage server, the method further comprises:
if the target instance in the primary storage server is monitored to be in the online state, generating online state information of the target instance, so that the proxy server, after reading the online state information, sends the target data in the target instance to the publish-subscribe message system;
the standby storage server has a subscription relationship with the publish-subscribe message system, so the standby storage server consumes the target data in the publish-subscribe message system, synchronizing the data in the primary and standby storage servers.
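The subscription relationship in the online path can be sketched with a minimal in-memory publish-subscribe stand-in; the class, the callback shape, and the sample key are illustrative assumptions, not details fixed by this embodiment.

```python
class PublishSubscribeSystem:
    """Minimal publish-subscribe stand-in: every subscriber receives
    every published message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, key, value):
        for callback in self.subscribers:
            callback(key, value)

# The standby storage server subscribes, so any target data the proxy
# publishes for an online instance is also applied to the standby copy.
standby_store = {}
bus = PublishSubscribeSystem()

def apply_to_standby(key, value):
    # Consuming a message means writing it into the standby store,
    # keeping it in sync with the primary storage server.
    standby_store[key] = value

bus.subscribe(apply_to_standby)
bus.publish("user:1", "alice")
```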
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware; in many cases, however, the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) that includes instructions for enabling a multimedia terminal (e.g. mobile phone, computer, television receiver, or network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (12)

1. A distributed storage system, comprising: a server cluster and a remote dictionary service Redis proxy server; wherein:
the server cluster comprises a Redis server and a server meeting a preset deployment condition, wherein the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and the Redis instances are used for storing target data; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have mapping relation with the Redis instance and/or the instance;
the Redis proxy server is used for sending the target data in the Redis instance to a publish-subscribe message system; the instance in the server is used for consuming target data in the publish-subscribe message system, so that the Redis server and the server store the same target data;
the Redis proxy server is further configured to receive a data reading request, and read target data in the Redis instance or target data in the instance based on an identifier in the data reading request.
2. The distributed storage system of claim 1, wherein the server is an LMDB server and the instance is an LMDB instance;
the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the distributed storage system further comprises a coordination server, wherein a mapping relation table is stored in the coordination server, and the mapping relation table comprises mapping relations between the plurality of logic slots and the plurality of virtual groups, and mapping relations between the plurality of virtual groups and the Redis instance and/or the LMDB instance.
3. The distributed storage system according to claim 2, wherein the Redis proxy server is further configured to receive a data read request, obtain a corresponding target logical slot based on a value corresponding to the identifier in the data read request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and read target data in a Redis instance having a mapping relationship with the target virtual group from the Redis server or read target data in an LMDB instance having a mapping relationship with the target virtual group from the LMDB server.
4. The distributed storage system according to claim 2, wherein the Redis proxy server is further configured to receive a data write request, obtain a target logical slot based on the identifier in the data write request, obtain a target virtual group having a mapping relationship with the target logical slot based on the mapping relationship table, and store target data in the data write request in a Redis instance having a mapping relationship with the target virtual group.
5. The distributed storage system according to claim 2, wherein the Redis proxy server is further configured to authenticate user information corresponding to the data read request when the data read request is received, and read target data in the Redis instance or target data in the LMDB instance based on the identifier in the data read request after the authentication is passed.
6. The distributed storage system according to claim 2, wherein the Redis proxy server is further configured to, when receiving the data reading request, determine whether a request amount per unit time reaches a threshold, and if so, perform traffic monitoring.
7. The distributed storage system of claim 1, wherein the Redis proxy server establishes a long link with the cluster of servers.
8. The distributed storage system according to claim 7, wherein the distributed storage system comprises a plurality of Redis proxy servers, each of the Redis proxy servers establishing a long link with the cluster of servers.
9. The distributed storage system according to claim 1, further comprising a monitoring server, the monitoring server comprising a monitoring component and a display component; wherein:
the monitoring component is used for receiving data storage information sent by the Redis proxy server and/or receiving instance state information sent by the server cluster;
the display component is used for displaying the data storage information and/or the instance state information.
10. A data delivery method is applied to a Redis proxy server, the Redis proxy server is in communication connection with a server cluster, the server cluster comprises a Redis server and a server meeting a preset deployment condition, the Redis server comprises a plurality of Redis instances, the server comprises a plurality of instances, and target data are stored in the Redis instances; the Redis proxy server is configured with a plurality of logical slots, and the logical slots have a mapping relation with the Redis instance and/or the instance;
the data delivery method comprises the following steps:
sending the target data in the Redis instance to a publish-subscribe message system so that the instance consumes the target data in the publish-subscribe message system;
receiving a data reading request;
reading target data in the Redis instance or target data in the instance based on the identification in the data reading request;
and delivering the target data.
11. The data delivery method of claim 10, wherein said server is an LMDB server and said instance is an LMDB instance; the Redis proxy server is configured with a plurality of virtual groups and a plurality of logical slots, the plurality of logical slots and the plurality of virtual groups have mapping relations, and the plurality of virtual groups and the Redis instance and/or the LMDB instance have mapping relations;
the step of reading the target data in the Redis instance or the LMDB instance based on the identification in the data read request comprises:
obtaining a corresponding target logical slot based on a value corresponding to the identifier in the data reading request;
obtaining a target virtual group with a mapping relation with the target logical slot;
reading target data in the Redis instance in the mapping relation with the target virtual group from the Redis server or reading target data in the LMDB instance in the mapping relation with the target virtual group from the LMDB server.
12. The data delivery method of claim 11, wherein prior to the step of sending the target data in the Redis instance into a publish-subscribe message system, the method further comprises:
receiving a data write request;
obtaining a target logical slot based on the identification in the data write request;
obtaining a target virtual group with a mapping relation with the target logical slot;
and storing target data in the data writing request into the Redis instance with a mapping relation with the target virtual group.
CN202011101781.5A 2020-10-14 2020-10-14 Distributed storage system and data delivery method Pending CN112235405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011101781.5A CN112235405A (en) 2020-10-14 2020-10-14 Distributed storage system and data delivery method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011101781.5A CN112235405A (en) 2020-10-14 2020-10-14 Distributed storage system and data delivery method

Publications (1)

Publication Number Publication Date
CN112235405A true CN112235405A (en) 2021-01-15

Family

ID=74113089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011101781.5A Pending CN112235405A (en) 2020-10-14 2020-10-14 Distributed storage system and data delivery method

Country Status (1)

Country Link
CN (1) CN112235405A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948666A (en) * 2021-01-28 2021-06-11 浪潮云信息技术股份公司 Method for realizing database table data release as API service
CN113010622A (en) * 2021-03-08 2021-06-22 智道网联科技(北京)有限公司 Real-time traffic data processing method and device and electronic equipment
CN114143196A (en) * 2021-11-25 2022-03-04 北京百度网讯科技有限公司 Instance configuration update method, device, apparatus, storage medium, and program product
CN114143196B (en) * 2021-11-25 2023-07-28 北京百度网讯科技有限公司 Instance configuration updating method, device, equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
US11249815B2 (en) Maintaining two-site configuration for workload availability between sites at unlimited distances for products and services
US9965203B1 (en) Systems and methods for implementing an enterprise-class converged compute-network-storage appliance
CN102594849B (en) Data backup and recovery method and device, virtual machine snapshot deleting and rollback method and device
US10084858B2 (en) Managing continuous priority workload availability and general workload availability between sites at unlimited distances for products and services
US9015164B2 (en) High availability for cloud servers
CN112235405A (en) Distributed storage system and data delivery method
US10922303B1 (en) Early detection of corrupt data partition exports
CN102955845B (en) Data access method, device and distributed data base system
US9367261B2 (en) Computer system, data management method and data management program
CN110247984B (en) Service processing method, device and storage medium
CN105069152B (en) data processing method and device
JP5686034B2 (en) Cluster system, synchronization control method, server device, and synchronization control program
CN105095317A (en) Distributive database service management system
CN112230853A (en) Storage capacity adjusting method, device, equipment and storage medium
CN111158949A (en) Configuration method, switching method and device of disaster recovery architecture, equipment and storage medium
CN112243030A (en) Data synchronization method, device, equipment and medium of distributed storage system
CN109992447B (en) Data copying method, device and storage medium
WO2022227719A1 (en) Data backup method and system, and related device
US11704289B2 (en) Role reversal of primary and secondary sites with minimal replication delay
JP2012053795A (en) Information processing system
US11010351B1 (en) File system replication between software defined network attached storage processes using file system snapshots
CN109976944B (en) Data processing method and system, storage medium and electronic device
Li et al. A hybrid disaster-tolerant model with DDF technology for MooseFS open-source distributed file system
CN112306746A (en) Method, apparatus and computer program product for managing snapshots in an application environment
CN115098259A (en) Resource management method and device, cloud platform, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination