CN112698926A - Data processing method, device, equipment, storage medium and system

Publication number: CN112698926A
Authority: CN (China)
Prior art keywords: instance, request, target, hash slot, hash
Legal status: Granted
Application number: CN202110317270.5A
Other languages: Chinese (zh)
Other versions: CN112698926B
Inventors: 赵永亮, 杨易, 高斌, 张清林
Assignee (original and current): Chengdu New Hope Finance Information Co Ltd


Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45587 Isolation or security of virtual machine instances
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances


Abstract

The application provides a data processing method, apparatus, device, storage medium, and system, relating to the field of data processing. The method comprises: receiving a request and a hash slot sent by a proxy node, the hash slot being calculated by the proxy node according to the hash identifier in the request; determining the instance in which the hash slot is located as the first target instance corresponding to the request; and updating the data stored in the hash slot in the first target instance according to the request. Compared with the prior art, this avoids the problem of low storage-system reliability.

Description

Data processing method, device, equipment, storage medium and system
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method, apparatus, device, storage medium, and system.
Background
With the development of modern internet technology, many industries turn to mobile channels (applications, WeChat mini-programs, traffic-diversion channels, and the like) to acquire users online, which lowers the cost of acquiring users. However, this can lead to rapid growth in the number of users, which places high demands on the response speed of the back-end servers; Redis is widely adopted here because of its high performance and simple deployment.
In the prior art, data access is generally implemented with the Redis Cluster scheme. Redis Cluster scales out through sharding and adopts a fully decentralized, multi-master multi-slave mode: every node maintains the information of all nodes in the cluster, and nodes exchange and update data through pairwise communication at a fixed, timed frequency. A client contacts only master nodes when requesting data, while slave nodes act as standbys; when a node fails, disaster recovery is completed through a voting mechanism. Nodes communicate with a lightweight protocol, which reduces bandwidth usage and improves cluster performance.
However, in the Redis Cluster scheme the client must cache hash slot information and keep it synchronized in near real time, which places certain requirements on the reliability of the client; if support for the cluster client is incomplete, the reliability of the whole storage system suffers.
Disclosure of Invention
An object of the present application is to provide a data processing method, device, apparatus, storage medium, and system to solve the problem of low reliability of the storage system in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
In a first aspect, an embodiment of the present application provides a data processing method applied to any storage node in a sharded storage system, where the storage node comprises a plurality of instances, and the method comprises the following steps:
receiving a request and a hash slot sent by a proxy node, the hash slot being calculated by the proxy node according to the hash identifier in the request;
determining the instance in which the hash slot is located as a first target instance corresponding to the request according to the mapping relation between hash slot ranges and instances, where each storage node stores the hash slot range corresponding to the storage node and the mapping relation between the hash slot ranges and its instances;
updating the data stored by the hash slot in the first target instance according to the request.
Optionally, the method further comprises:
receiving an instance adjustment command sent by a management device, wherein the instance adjustment command comprises an identifier of a second target instance and a migration instruction;
determining a second target instance on the storage node according to the identifier of the second target instance;
and migrating the data in the second target instance according to the migration instruction.
Optionally, the migration instruction includes: an instance to be added;
the migrating the data in the second target instance according to the migration instruction includes:
adding the instance to be added;
and migrating the data within a partial hash slot range in the second target instance to the instance to be added, and deleting the successfully migrated data from the second target instance.
Optionally, the migration instruction further includes: a target hash slot range; the partial hash slot range is the hash slot range indicated by the target hash slot range.
Optionally, the migration instruction includes: an identifier of a capacity reduction instance; the capacity reduction instance is an instance on the storage node other than the second target instance;
the migrating the data in the second target instance according to the migration instruction includes:
determining the capacity reduction example on the storage node according to the identification of the capacity reduction example;
and migrating all data in the second target instance to the capacity reduction instance, and deleting the data which is successfully migrated in the second target instance.
Optionally, the updating, according to the request, the data stored in the hash slot in the first target instance includes:
if the request is received by the first target instance in the migration process, re-determining the instance in which the hash slot is located as a third target instance according to the corresponding relation between the adjusted instance in the instance adjustment command and the hash slot range;
updating the data stored by the hash slot in the third target instance according to the request.
Optionally, the method further comprises:
if a fourth target instance is monitored to have failed or become faulty, sending fault reporting information to the proxy node and the management device, wherein the fault reporting information comprises: an identifier of the fourth target instance.
Optionally, the method further comprises:
receiving a fault instance update command sent by the management device, wherein the fault instance update command comprises: an identification of a target standby instance of the fourth target instance;
and configuring the hash slot range corresponding to the target standby instance as the hash slot range corresponding to the fourth target instance.
In a second aspect, another embodiment of the present application provides a data processing method, where the method is applied to any proxy node in a sharded storage system, and the method includes:
calculating a hash slot corresponding to a request according to a hash identifier in the request sent by a client;
determining the storage node where the hash slot is located as a target storage node corresponding to the request;
and sending the request and the hash slot to the target storage node, so that the target storage node determines the instance of the hash slot as a first target instance corresponding to the request according to the mapping relation between the hash slot range and each instance, and updates the data stored in the hash slot in the first target instance according to the request.
Optionally, the calculating a hash slot corresponding to the request according to the hash identifier in the request sent by the client includes:
and applying a cyclic redundancy check algorithm to the hash identifier, and performing a remainder operation on the calculation result to obtain the hash slot.
Optionally, the method further comprises:
receiving a configuration update request sent by a management device, wherein the configuration update request comprises: the corresponding relation between the hash slot range and the partition;
and updating the correspondence between the hash slot range and the partition according to the configuration update request.
In a third aspect, another embodiment of the present application provides a data processing apparatus, which is applied to any storage node in a sharded storage system, where the storage node includes multiple instances, and the apparatus includes: a receiving module, a determining module and an updating module, wherein:
the receiving module is used for receiving the request and the hash slot sent by the proxy node; the hash slot is obtained by the proxy node through calculation according to the hash identification in the request;
the determining module is configured to determine that the instance in which the hash slot is located is a first target instance corresponding to the request;
the updating module is used for updating the data stored in the hash slot in the first target instance according to the request.
Optionally, the apparatus further comprises a migration module, wherein:
the determining module is specifically configured to determine a second target instance on the storage node according to the identifier of the second target instance;
and the migration module is used for migrating the data in the second target instance according to the migration instruction.
Optionally, the migration instruction includes: an instance to be added; the apparatus further comprises: an adding module, used for adding the instance to be added;
the migration module is specifically configured to migrate a part of data in the hash slot range in the second target instance to the to-be-added instance, and delete the successfully-migrated data in the second target instance.
Optionally, the migration instruction further includes: a target hash slot range; the partial hash slot range is the hash slot range indicated by the target hash slot range.
Optionally, the migration instruction includes: an identifier of a capacity reduction instance; the capacity reduction instance is an instance on the storage node other than the second target instance;
the determining module is specifically configured to determine the capacity reduction instance on the storage node according to the identifier of the capacity reduction instance;
the migration module is specifically configured to migrate all data in the second target instance to the capacity reduction instance, and delete the successfully migrated data in the second target instance.
Optionally, the apparatus further comprises: an update module, wherein:
the determining module is specifically configured to, if the request is received during the migration process of the second target instance, re-determine, according to a correspondence between the adjusted instance in the instance adjustment command and the hash slot range, that the instance in which the hash slot is located is the third target instance;
the updating module is used for updating the data stored in the hash slot in the third target instance according to the request.
Optionally, the apparatus further comprises: a sending module, configured to send, if it is monitored that the fourth target instance fails or becomes faulty, fault reporting information to the proxy node and the management device, where the fault reporting information includes: an identification of the fourth target instance.
Optionally, the apparatus further comprises: a configuration module, wherein:
the receiving module is configured to receive a fault instance update command sent by the management device, where the fault instance update command includes: an identification of a target standby instance of the fourth target instance;
the configuration module is configured to configure the hash slot range corresponding to the target standby instance as the hash slot range corresponding to the fourth target instance.
In a fourth aspect, another embodiment of the present application provides a data processing apparatus, which is applied to any proxy node in a sharded storage system, and the apparatus includes: a calculation module, a determination module and a sending module, wherein:
the computing module is used for computing a hash slot corresponding to the request according to the hash identification in the request sent by the client;
the determining module is configured to determine that the storage node where the hash slot is located is a target storage node corresponding to the request;
the sending module is configured to send the request and the hash slot to the target storage node, so that the target storage node determines that the instance in which the hash slot is located is the first target instance corresponding to the request, and updates the data stored in the hash slot in the first target instance according to the request.
Optionally, the calculation module is specifically configured to calculate the hash identifier by using a cyclic redundancy check algorithm, and perform a remainder operation on a calculation result to obtain the hash slot.
Optionally, the apparatus further comprises: a receiving module and an updating module, wherein:
the receiving module is configured to receive a configuration update request sent by a management device, where the configuration update request includes: the corresponding relation between the hash slot range and the partition;
and the updating module is used for updating the correspondence between the hash slot range and the partition according to the configuration update request.
In a fifth aspect, another embodiment of the present application provides a data processing apparatus, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the data processing apparatus is running, the processor executing the machine-readable instructions to perform the steps of the method according to any one of the first or second aspects.
In a sixth aspect, another embodiment of the present application provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the method according to any one of the first or second aspects.
In a seventh aspect, another embodiment of the present application provides a distributed data processing system, including: a plurality of storage nodes and a plurality of proxy nodes, wherein each proxy node is connected to the plurality of storage nodes, wherein each storage node is configured to perform the method according to any one of the above first aspects, and each proxy node is configured to perform the method according to any one of the above second aspects.
The beneficial effect of this application is: by adopting the data processing method provided by the application, after the request and the hash slot sent by the proxy node are received, the instance where the hash slot is located can be determined to be the first target instance corresponding to the request according to the mapping relation between the hash slot range and each instance stored in the storage node, and then the data stored in the hash slot in the first target instance is updated according to the request. In this way the hash slot routing information is maintained by the proxy nodes and storage nodes rather than cached by the client, which avoids the problem of low storage-system reliability in the prior art.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a block diagram of a data processing system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 4 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 5 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application;
fig. 10 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application;
fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application;
fig. 12 is a schematic structural diagram of a data processing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Additionally, the flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
For the purpose of facilitating an understanding of the embodiments of the present application, the following partial terms are used in the present application:
remote Dictionary service (Remote Dictionary Server, Redis): the system is an open source log-type and Key-Value database which is written by using ANSI C language, supports network, can be based on memory and can also be persistent, and provides Application Programming Interfaces (API) of multiple languages.
The embodiments provided in this application are applied to a data processing system. Fig. 1 is a schematic structural diagram of a data processing system provided in an embodiment of this application; as shown in fig. 1, the system includes a plurality of proxy nodes, a plurality of storage nodes, and a management device, wherein:
the proxy nodes together form a proxy server; the proxy server is connected to clients, and a client initiates requests to operate on data. The proxy server exposes Redis-protocol connections to the outside and supports calculating a hash slot from a hash identifier; in essence it is a cluster of proxy nodes. The proxy nodes are the executors of the proxy service: each proxy node is a single stateless server, and each proxy node implements the Redis communication protocol.
In the scheme provided by this application, the storage space of the whole distributed storage system, such as a Redis cluster, is divided into 1024 hash slots. Each storage node has a corresponding hash slot range and comprises a plurality of instances together with metadata describing the mapping between hash slot ranges and instances. After a proxy node receives a request sent by a client, it determines the hash slot corresponding to the request, determines among the storage nodes the one in which that hash slot is located as the target storage node for the request, and sends the request and the hash slot to the target storage node; the target storage node then determines, according to its stored mapping between hash slot ranges and instances, the instance in which the hash slot is located as the first target instance corresponding to the request. Each instance on a storage node is a single execution node for requests; the multiple instances in each storage node may include multiple master instances or multiple slave instances, which is not limited here.
The management device is mainly responsible for adjusting and configuring data resources, including but not limited to capacity expansion, capacity reduction, data backup, and master-slave switching, and is also responsible for synchronizing configuration data to each proxy node over long-lived connections so as to keep the configuration data consistent across the proxy cluster. Synchronization may, for example, run once per preset interval, or be triggered whenever the configuration data is updated; the specific way of triggering configuration-data synchronization can be adjusted flexibly according to user needs and is not limited to the embodiments described above.
After a client sends a request, any one of the proxy nodes receives it, calculates the hash slot corresponding to the request from the hash identifier in the request, and determines, from preset configuration data and the hash slot, the storage node where the hash slot is located as the target storage node. The target storage node determines the first target instance corresponding to the request according to the mapping between hash slot ranges and instances, and updates the data stored in the hash slot in the first target instance according to the request.
In this system design, the proxy server is added as an intermediate layer between the client and the Redis cluster: to the outside, the proxy server exposes the Redis protocol and acts as the data server, while toward the Redis cluster it plays the role of a client accessing data. With this design, external traffic can be observed and intercepted as it passes through the proxy server, so hot data in the cluster can be identified, targeted node expansion can be performed later based on the observed data, and circuit breaking can be triggered during access peaks or abnormal traffic attacks, thereby protecting the data security of the whole back-end storage system. Because the proxy server in this application is a cluster composed of multiple proxy nodes, it can provide service to the outside while avoiding the system crash that would be caused by the failure of a single proxy node if the proxy server consisted of only one node.
In addition, the method provided by this application divides the entire Redis cluster into 1024 hash slots across a plurality of storage nodes, each with its own hash slot range, so the Redis cluster is partitioned. After partitioning, each storage node further comprises multiple instances and metadata for the mapping between hash slot ranges and instances, so each partition is subdivided again. After a request sent by a client is received, the hash slot is calculated from the hash identifier in the request, the target storage node is determined from the hash slot, the first target instance is then determined from the metadata mapping hash slot ranges to instances, and the data stored in the hash slot in the first target instance is updated according to the request.
A data processing method provided in the embodiments of the present application is explained below with reference to several specific application examples. Fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the present application, applied to any storage node in a sharded storage system, where the storage node includes multiple instances. As shown in fig. 2, the method includes:
s101: and receiving the request and the hash groove sent by the proxy node.
In an embodiment of the present application, for example, a crc16 operation is performed on the hash identifier and the result is taken modulo 1024; the remainder obtained is the hash slot corresponding to the hash identifier. It should be understood that this algorithm is only an example: the specific algorithm for calculating the hash slot may be adjusted flexibly according to user needs, and crc16 is chosen in this application to reduce computation time.
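As a concrete illustration of this step, the following is a minimal sketch of the slot computation, not the patent's implementation: it assumes the CRC-16/XMODEM variant (exposed in Python as binascii.crc_hqx), since the patent only says "crc16", so the value produced for a given key (e.g. "007") may differ from the 369 used in the worked examples later.

```python
import binascii

NUM_SLOTS = 1024  # the whole keyspace is divided into 1024 hash slots

def hash_slot(key: str) -> int:
    """Map a key to a hash slot.

    The hash identifier is the substring inside '{...}' when present
    (e.g. 'user_phone_{007}' -> '007'); otherwise the whole key is used.
    """
    start = key.find("{")
    end = key.find("}", start + 1)
    hash_id = key[start + 1:end] if start != -1 and end > start + 1 else key
    # crc_hqx implements CRC-16/XMODEM; the patent does not specify which
    # crc16 polynomial is used, so this variant is an assumption.
    return binascii.crc_hqx(hash_id.encode(), 0) % NUM_SLOTS
```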
S102: and determining the instance of the hash slot as the first target instance corresponding to the request according to the mapping relation between the hash slot range and the instances.
The storage system may include a plurality of storage nodes, and each storage node may be understood as a partition. Each storage node stores its corresponding hash slot range and the mapping relationship between hash slot ranges and its instances; that is, each partition stores a mapping from hash slot ranges to instances.
For example, in an embodiment of the present application the sharded storage system may include two partitions. Partition 1 contains three instances: instance 1 manages the hash slots in the range 0-200, instance 2 manages the hash slots in the range 201-400, and instance 3 manages the hash slots in the range 401-512. Partition 2 also contains three instances, with instance 4 managing the hash slots from 513 onward and the remaining slot ranges divided between instances 5 and 6. It should be understood that this is only an example: the specific partitions, the instances contained in each partition, and the hash slot range managed by each instance can all be adjusted flexibly according to user needs and are not limited to the foregoing.
For example, in an embodiment of the present application a whole cluster may be divided into 1024 hash slots, and after the hash slot sent by a proxy node is received, the first target instance is determined from that hash slot. Because the 1024 hash slots are preset for the whole cluster, the hash slot calculated for the same key value is always the same. In addition, the partition stores the mapping relationship between hash slot ranges and instances: every hash slot maps to an instance, and which instance a specific hash slot falls on is determined by the partition according to the hash slot sent by the proxy node and the stored mapping between hash slot ranges and instances.
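The per-partition lookup described above can be sketched as follows. This is an illustrative data structure, not the patent's code: the class and field names (StorageNode, slot_ranges, instances) are assumptions, and plain dictionaries stand in for Redis instances.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class StorageNode:
    """One partition: inclusive hash-slot ranges mapped to instance ids."""
    slot_ranges: Dict[Tuple[int, int], str]  # e.g. (0, 200) -> "instance-1"
    instances: Dict[str, Dict[str, str]] = field(default_factory=dict)  # id -> key/value store

    def resolve_instance(self, slot: int) -> Optional[str]:
        """Return the first target instance whose slot range contains `slot`."""
        for (low, high), instance_id in self.slot_ranges.items():
            if low <= slot <= high:
                return instance_id
        return None  # the slot does not belong to this partition

    def handle_request(self, slot: int, key: str, value: str) -> None:
        """Update the data stored under `key` in the first target instance."""
        instance_id = self.resolve_instance(slot)
        if instance_id is None:
            raise KeyError(f"slot {slot} is not owned by this partition")
        self.instances.setdefault(instance_id, {})[key] = value
```

With the mapping from the earlier example (instance 1: slots 0-200, instance 2: 201-400, instance 3: 401-512), a request whose hash slot is 369 resolves to instance 2, which matches the worked example below.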
S103: and updating the data stored in the hash slot in the first target instance according to the request.
In a possible implementation manner, the data stored in the hash slot in the first target instance may be updated according to the data operation corresponding to the request.
For example, in an embodiment of the application, a client sends the request "set user_phone_{007} 18900000000", that is, the value of the key "user_phone_{007}" is to be set to "18900000000". The proxy server looks up its information and finds that data with hash slot 369 must fall into partition 1, so it sends the request to partition 1. After partition 1 obtains the request with hash slot 369, it looks up the logical partition configuration according to its stored mapping between hash slot ranges and instances, finds that the request with hash slot 369 falls on instance 2, and instance 2 then updates the data stored in the hash slot in instance 2 according to the request, completing the operation.
By adopting the data processing method provided by the application, after the request and the hash slot sent by the proxy node are received, the instance where the hash slot is located can be determined to be the first target instance corresponding to the request according to the mapping relation between the hash slot range and each instance stored in the storage node, and then the data stored in the hash slot in the first target instance is updated according to the request, so that the client no longer needs to cache and synchronize hash slot information itself, avoiding the low-reliability problem of the prior art.
Optionally, on the basis of the foregoing embodiments, the embodiments of the present application may further provide a data processing method, and an implementation process of the foregoing method is described as follows with reference to the accompanying drawings. Fig. 3 is a schematic flow chart of a data processing method according to another embodiment of the present application, and as shown in fig. 3, the method may further include:
s104: and receiving an instance adjustment command sent by the management device.
The instance adjustment command includes an identifier of a second target instance and a migration instruction.
In one embodiment of the present application, for example, the management device may include a daemon process, which receives instance adjustment commands and executes them.
For example, in some possible embodiments the management device may send an instance adjustment command when the amount of data stored in or accessed on a certain instance is large and the instance needs to be expanded, or when an instance is taken offline or otherwise needs to be scaled down to reduce maintenance cost.
S105: a second target instance is determined on the storage node based on the identification of the second target instance.
The identifier of the second target instance uniquely indicates the second target instance on the storage node; the second target instance is the instance whose data is to be migrated. Each instance on the storage node has a corresponding identifier, and the identifier of each instance indicates that instance.
S106: and migrating the data in the second target instance according to the migration instruction.
In some possible embodiments, the instance (other than the second target instance) that receives the data migrated from the second target instance may be specified explicitly, or it may be chosen at random from the instances on the storage node other than the second target instance; the specific way of choosing the receiving instance can be adjusted flexibly according to user requirements and is not limited to the manner provided in the foregoing embodiments.
In some possible embodiments the migration instruction may include an instance to be added, i.e. the corresponding scenario is an expansion of the second target instance. In that case, migrating the data in the second target instance may include adding the instance to be added, migrating the data within part of the hash slot range of the second target instance to the newly added instance, and deleting the successfully migrated data from the second target instance.
The migration instruction may be issued by operation-and-maintenance staff, or by the system itself after monitoring each instance and determining that the second target instance meets the expansion condition; the specific way of triggering the migration instruction can be adjusted according to user requirements and is not limited to that provided in the foregoing embodiment.
For example, when the migration instruction includes an instance to be added, it may further include a target hash slot range; the partial hash slot range is then the range indicated by the target hash slot range, that is, the second target instance migrates the data within the target hash slot range to the instance to be added according to the target hash slot range indicated in the migration instruction. In other embodiments, the part of the hash slot range whose data is migrated to the instance to be added may instead be determined at random; the specific way of determining the partial hash slot range is not limited to that provided in the above embodiments and can be adjusted flexibly according to user needs.
For example, after the storage node receives a migration instruction initiated by the management device, the migration instruction may include: second target instance: instance 1; instance to be added: instance 9; target hash slot range: 100-200. Instance 9 is then newly created on the storage node corresponding to instance 1, the hash slots in the range 100-200 of instance 1 are assigned to instance 9 according to the migration instruction, and after the migration instance 1 is responsible for managing the hash slots in the range 0-100, which improves the load capacity of the cluster.
The specific steps for expanding instance 1 may be, for example: after receiving the migration instruction, the daemon process brings up the newly created instance 9 and checks its current state with a test read and write. After confirming that instance 9 can respond normally, it changes the state of instance 1 to "migrating", changes the state of partition 1 to "expanding", and pushes the change information, namely "instance 1 manages the hash slots 0-100 and instance 9 manages the hash slots 100-200", to partition 1 for temporary storage, to be used for judging slot positions during the expansion.
The daemon process then scans the keys of all the data in instance 1, calculates the hash slot of each piece of data, treats the data whose hash slots fall in 100-200 as the data to be migrated, and writes the data to be migrated into instance 9, thereby achieving the data migration. After a piece of data to be migrated is confirmed to have been written successfully, the successfully migrated data is deleted from instance 1; only one piece of data is migrated at a time. Once all qualifying data has been migrated, the state of instance 1 changes from "migrating" back to normal operation, the state of partition 1 changes from "expanding" back to normal operation, and the mapping between the hash slot ranges and instances of partition 1 is updated. For example, the rule "instance 1 manages keys with hash slots 0-100 and instance 9 manages keys with hash slots 100-200" is used as the hash-slot-position judgment rule for partition 1 in the long-term stable state.
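The scan/migrate/delete loop just described can be sketched as below. The function and parameter names are illustrative assumptions; plain dictionaries stand in for the source instance and the newly added instance, and hash_slot is the slot function sketched earlier.

```python
def expand_instance(source, new_instance, target_range, hash_slot):
    """Migrate every key whose hash slot falls within `target_range`
    (inclusive, e.g. (100, 200)) from `source` to `new_instance`, one key
    at a time, deleting a key from the source only after it has been
    written to the new instance."""
    low, high = target_range
    for key in list(source.keys()):        # scan all keys in the source instance
        if low <= hash_slot(key) <= high:  # key belongs to the migrated slot range
            new_instance[key] = source[key]
            del source[key]                # delete only after the write succeeds
```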
When a second target instance comes under high load during use, it needs to be expanded to improve the load capacity of the cluster. In an embodiment of the present application the expansion scheme is as follows: first the state of the second target instance is set to "migrating"; then, according to the partition configuration, the hash slot data within the target hash slot range of the second target instance is migrated to the instance to be added, for example in Least Recently Used (LRU) order. If a request arrives, it is processed preferentially: the data in the request is taken out and migrated to the new node, and the expansion then continues until all qualifying data to be migrated has been migrated. The post-migration mapping is written into the metadata of the partition corresponding to the second target instance, pushed to the daemon process over the long-lived connection, and synchronized back to the proxy server by the daemon process.
In other possible embodiments, the migration instruction may instead include an identifier of a capacity reduction instance, the capacity reduction instance being an instance on the storage node other than the second target instance. In that case, the capacity reduction instance is determined on the storage node according to its identifier, all the data in the second target instance is migrated to the capacity reduction instance, and the data that has been successfully migrated is deleted from the second target instance.
For example, after the storage node receives a migration instruction initiated by the management device, the migration instruction may include: second target instance: instance 1; identifier of the capacity reduction instance: instance 2. This indicates that instance 1 is to be scaled down, that is, instance 1 is removed from partition 1 and the keys of hash slots 0-200 in instance 1 are assigned to instance 2 for management, so as to reduce maintenance cost.
After receiving the command, the daemon process checks the state of instance 2 with a test read and write. After confirming that instance 2 can respond to read-write operations normally, it changes the state of instance 1 to "migrating", changes the state of partition 1 to "expanding", and pushes the change information "instance 2 manages the keys with hash slots 0-200" to partition 1 for temporary storage, to be used for judging hash slot positions during the scale-down. The daemon process then scans the keys of all the data in instance 1 and begins the data migration, writing the data in instance 1 into instance 2; after each successful write it deletes the successfully written data from instance 1, migrating only one piece of data at a time, until all the data in instance 1 has been migrated. The state of instance 1 then changes from "migrating" to "offline", the state of partition 1 changes from "expanding" back to normal operation, the mapping relation in partition 1 is updated, and the rule "instance 2 manages the keys with hash slots 0-200" is used as the hash-slot-position judgment rule in the long-term stable state.
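A corresponding sketch of the scale-down migration, again with illustrative names and dictionaries standing in for the instances: every key of the instance being removed is moved to the capacity reduction instance and deleted from the source only after the write succeeds.

```python
def scale_down_instance(source, target):
    """Move all data from the instance being removed (`source`) to the
    capacity reduction instance (`target`); afterwards the source instance
    can be marked 'offline'."""
    for key in list(source.keys()):
        target[key] = source[key]
        del source[key]  # delete each key only after it has been written to the target
```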
With this processing method, even if an instance is in the data-migration state the instance node remains available, so a request may still arrive during migration; when such a request reaches the partition, the partition can determine the instance corresponding to the request according to the updated correspondence and thus decide which instance handles the request.
Optionally, on the basis of the foregoing embodiment, an embodiment of the present application may further provide a data processing method, and an implementation process of updating data stored in the hash slot in the first target instance in the foregoing method is described as follows with reference to the accompanying drawings. Fig. 4 is a flowchart illustrating a data processing method according to another embodiment of the present application, and as shown in fig. 4, S103 may include:
s107: and if the request is received by the first target instance in the migration process, re-determining the instance in which the hash slot is located as the third target instance according to the corresponding relation between the adjusted instance and the hash slot range in the instance adjustment command.
Taking expansion as the migration process: if a data write occurs while the first target instance is migrating, for example the proxy server receives the request "set user_phone_{666} 18900000000" from the client (i.e. the value of the key "user_phone_{666}" is to be set to "18900000000"), partition 1 receives the request, looks up the mapping between hash slot ranges and instances, and finds that the data with hash slot 133 should fall on instance 1. However, because instance 1 is in the "migrating" state, the partition further checks whether instance change information exists in partition 1, finds "instance 1 manages keys with hash slots 0-100 and instance 9 manages keys with hash slots 100-200", and confirms that instance 9 is the third target instance.
Taking capacity reduction as the migration process: if a data write occurs while the first target instance is migrating, for example the proxy server receives the request "set user_phone_{777} 18900000000" from the client (i.e. the value of the key "user_phone_{777}" is to be set to "18900000000"), partition 1 receives the request, looks up the mapping between hash slot ranges and instances, and finds that hash slot 108 should fall on instance 1. However, because instance 1 is in the "migrating" state, the partition further checks whether logical-partition-configuration change information exists in partition 1, finds "instance 2 manages keys with hash slots 0-200", and determines that instance 2 is the third target instance.
S108: and updating the data stored in the hash slot in the third target instance according to the request.
According to the determination result, the data corresponding to the request is written into the third target instance, and after the write is confirmed to have succeeded, the data corresponding to the request is deleted from the first target instance. It should be understood that during the migration of the first target instance, regardless of whether a data write occurs, the daemon process scans all key values in instance 1 and migrates all qualifying data in instance 1, one piece at a time, until the migration is complete.
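The re-resolution step of S107 can be sketched as follows. The attribute names (states, change_info) are assumptions layered on the StorageNode sketch above; change_info holds the temporarily stored mapping pushed with the instance adjustment command.

```python
def resolve_during_migration(partition, slot):
    """Pick the instance that should serve `slot`, taking migration into account.

    `partition.slot_ranges` is the stable mapping, `partition.states` records
    per-instance states such as "migrating", and `partition.change_info` is the
    temporarily stored adjusted mapping, e.g.
    {(0, 100): "instance-1", (100, 200): "instance-9"}.
    """
    instance_id = partition.resolve_instance(slot)       # stable mapping first
    if partition.states.get(instance_id) == "migrating":
        for (low, high), new_id in partition.change_info.items():
            if low <= slot <= high:
                return new_id                            # the third target instance
    return instance_id
```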
Optionally, on the basis of the foregoing embodiments, the embodiments of the present application may further provide a data processing method, and an implementation process of the foregoing method is described as follows with reference to the accompanying drawings. Fig. 5 is a schematic flowchart of a data processing method according to another embodiment of the present application, and as shown in fig. 5, the method further includes:
s109: and receiving a fault instance updating command sent by the management equipment.
The fault instance update command includes: an identifier of a target standby instance of the fourth target instance.
If the fourth target instance is monitored to have failed or become faulty, fault reporting information is sent to the proxy node and the management device, the fault reporting information comprising the identifier of the fourth target instance; the fault instance update command is what the management device sends in response to this fault report.
S110: and configuring the hash slot range corresponding to the target standby instance as the hash slot range corresponding to the fourth target instance.
Illustratively, in some possible embodiments, when the failed instance has to be confirmed manually, once it is confirmed that the data consistency of the target standby instance, the node performance, network synchronization, and the other conditions are all satisfied, the fourth target instance is replaced with the target standby instance by the daemon process and production is then resumed.
In an embodiment of the present application the specific processing flow may be, for example: a client sends the request "get user_phone_{007}". The proxy server receives it, first parses the hash identifier inside the "{}" of the request, obtains the string "007", performs a crc16 operation on it, and takes the remainder, obtaining 369 as the hash slot corresponding to the request. After partition 1 obtains the request with hash slot 369, it looks up the corresponding logical partition configuration, finds that slot 369 falls on instance 2, and requests the data from instance 2. At that moment, however, instance 2 has suffered a node failure, so the node failure is reported to the proxy server and the daemon process, and the state of instance 2 is set to "failed" so that the node performs no read or write operations, protecting the data. Once the disaster situation has been confirmed manually and the data consistency, node performance, network synchronization, and other conditions of the standby instance are all satisfied, the target standby instance replaces instance 2 via the daemon process, and production then resumes.
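The standby takeover described in S109-S110 can be sketched as follows, again with illustrative field names continuing the earlier sketches: the target standby instance simply inherits the hash slot range of the failed (fourth target) instance.

```python
def apply_failover(partition, failed_id, standby_id):
    """Handle a fault instance update command: give the target standby
    instance the hash slot range of the failed instance and stop serving
    reads/writes from the failed instance."""
    for slot_range, owner in list(partition.slot_ranges.items()):
        if owner == failed_id:
            partition.slot_ranges[slot_range] = standby_id  # standby takes over the range
    partition.states[failed_id] = "failed"                  # no further read/write operations
    partition.states[standby_id] = "online"
```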
The data processing method provided by this application supports Pipeline operations and some Multi-Key operations. In addition, because the design partially decouples cluster management from the storage service while keeping the overall architecture stable, it is easier for developers to understand and maintain. The method supports dynamic expansion or reduction of each node, so the data migration scheme is simple and effective and service availability is preserved to the greatest extent. The method also keeps the service highly available: as long as the multiple instances within a partition are not all down, the partition can continue to respond to requests normally, maintaining the stability of the service. Meanwhile, in disaster recovery scenarios such as instance downtime, expansion, and reduction, strong data consistency can still be maintained thanks to the dynamic expansion and reduction provided by this application and the mechanism of determining a target standby instance and migrating data in the fault state.
Optionally, on the basis of the foregoing embodiments, the embodiments of the present application may further provide a data processing method, and an implementation process of the foregoing method is described as follows with reference to the accompanying drawings. Fig. 6 is a schematic flowchart of a data processing method according to another embodiment of the present application, applied to any proxy node in a sharded storage system; as shown in fig. 6, the method may include:
s201: and calculating a hash slot corresponding to the request according to the hash identification in the request sent by the client.
In some possible embodiments, for example, a cyclic redundancy check algorithm may be applied to the hash identifier, and a remainder operation is performed on the result to obtain the hash slot.
S202: and determining the storage node where the hash slot is located as a target storage node corresponding to the request.
S203: a request is sent to the target storage node, and the hash slot.
The target storage node then determines the instance in which the hash slot is located as the first target instance corresponding to the request according to the mapping between hash slot ranges and instances, and updates the data stored in the hash slot in the first target instance according to the request.
Since the method provided in fig. 6 is different from the methods provided in fig. 2 to fig. 5 only in terms of execution subject, but the beneficial effects are the same, the beneficial effects brought by the method provided in fig. 6 are not described herein again.
Optionally, on the basis of the foregoing embodiments, the embodiments of the present application may further provide a data processing method, and the implementation process of the foregoing method in fig. 6 is described as follows with reference to the accompanying drawings. Fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application, and as shown in fig. 7, the method may further include:
s204: and receiving a configuration updating request sent by the management equipment.
The configuration update request includes: the correspondence between hash slot ranges and partitions.
S205: and updating the corresponding relation between the hash groove range and the partition according to the configuration updating request.
The following explains a data processing apparatus provided in the present application with reference to the drawings, where the data processing apparatus can execute any one of the data processing methods in fig. 2 to 5, and specific implementation and beneficial effects of the data processing apparatus refer to the above descriptions, and are not described again below.
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The apparatus is applied to any storage node in a sharded storage system, where the storage node comprises a plurality of instances. As shown in fig. 8, the apparatus includes: a receiving module 301, a determining module 302, and an updating module 303, wherein:
a receiving module 301, configured to receive a request and a hash slot sent by a proxy node, the hash slot being calculated by the proxy node according to the hash identifier in the request.
A determining module 302, configured to determine that the instance where the hash slot is located is the first target instance corresponding to the request.
And an updating module 303, configured to update the data stored in the hash slot in the first target instance according to the request.
Optionally, on the basis of the above-mentioned embodiment of fig. 8, an embodiment of the present application may further provide a data processing apparatus, and the following describes an example of the structure of the above-mentioned apparatus with reference to the drawings. Fig. 9 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application, and as shown in fig. 9, the apparatus further includes a migration module 304, where:
the determining module 302 is specifically configured to determine the second target instance on the storage node according to the identifier of the second target instance.
The migration module 304 is configured to migrate the data in the second target instance according to the migration instruction.
The migration instruction includes: an instance to be added; as shown in fig. 9, the apparatus further includes: an adding module 305, configured to add an instance to be added;
the migration module 304 is specifically configured to migrate the data in the range of the partial hash slot in the second target instance to the to-be-added instance, and delete the data that is successfully migrated in the second target instance.
Optionally, the migration instruction further comprises: a target hash slot range; the partial hash slot range is the hash slot range indicated by the target hash slot range.
Optionally, the migration instruction comprises: an identifier of a capacity reduction instance; the capacity reduction instance is an instance on the storage node other than the second target instance.
The determining module 302 is specifically configured to determine a capacity reduction instance on a storage node according to the identifier of the capacity reduction instance.
The migration module 304 is specifically configured to migrate all data in the second target instance to the capacity reduction instance, and delete the data that is successfully migrated in the second target instance.
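The scale-in path can be sketched in the same assumed model: all data of the second target instance is moved into the capacity reduction instance and deleted from the source once migrated, after which the source instance can be retired.

```python
from typing import Dict

Instance = Dict[int, Dict[str, str]]  # assumed model: {slot: {key: value}}


def migrate_all_to_reduction_instance(source: Instance, reduction: Instance) -> None:
    # Move every slot of the second target instance into the capacity
    # reduction instance, deleting data that has migrated successfully.
    for slot in list(source):
        reduction.setdefault(slot, {}).update(source[slot])
        del source[slot]
    # The now-empty second target instance can be removed by the storage node.
```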
As shown in fig. 9, the apparatus further includes: an update module 306, wherein:
the determining module 302 is specifically configured to, if the request is received during the migration process of the second target instance, re-determine that the instance in which the hash slot is located is the third target instance according to the corresponding relationship between the adjusted instance and the hash slot range in the instance adjustment command.
And an updating module 306, configured to update the data stored in the hash slot in the third target instance according to the request.
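A sketch, under assumed data shapes, of how a request arriving during migration might be re-routed using the adjusted slot-range-to-instance mapping carried in the instance adjustment command.

```python
from typing import Dict, Tuple

Instance = Dict[int, Dict[str, str]]
SlotMap = Dict[Tuple[int, int], Instance]  # adjusted slot-range -> instance


def route_during_migration(adjusted_map: SlotMap, slot: int) -> Instance:
    # The third target instance is whichever instance the adjusted mapping in
    # the instance adjustment command now assigns to the request's hash slot.
    for (start, end), instance in adjusted_map.items():
        if start <= slot <= end:
            return instance
    raise KeyError(f"slot {slot} not present in the adjusted mapping")
```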
As shown in fig. 9, the apparatus further includes: a sending module 307, configured to send fault reporting information to the proxy node and the management device if it is monitored that a fourth target instance is invalid or faulty, where the fault reporting information includes: an identification of the fourth target instance.
As shown in fig. 9, the apparatus further includes: a configuration module 308, wherein:
The receiving module 301 is further configured to receive a fault instance update command sent by the management device, where the fault instance update command includes: an identification of a target standby instance of the fourth target instance.
The configuring module 308 is configured to configure the hash slot range corresponding to the target standby instance as the hash slot range corresponding to the fourth target instance.
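The failure-handling flow of the sending module 307, the receiving module 301, and the configuring module 308 might look roughly as follows; the callback signatures and the identifier-to-slot-range dictionary are illustrative assumptions.

```python
from typing import Callable, Dict, Tuple


def report_failure(failed_instance_id: str,
                   notify_proxy: Callable[[dict], None],
                   notify_manager: Callable[[dict], None]) -> None:
    # Fault reporting information carries the identifier of the failed instance
    # and is sent to both the proxy node and the management device.
    message = {"type": "instance_failure", "instance_id": failed_instance_id}
    notify_proxy(message)
    notify_manager(message)


def apply_failover(slot_ranges: Dict[str, Tuple[int, int]],
                   failed_instance_id: str,
                   standby_instance_id: str) -> None:
    # Configure the target standby instance's hash slot range to be the range
    # previously served by the failed (fourth target) instance.
    slot_ranges[standby_instance_id] = slot_ranges.pop(failed_instance_id)
```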
The following describes, with reference to the drawings, another data processing apparatus provided in the present application. The apparatus can execute any of the data processing methods in fig. 6 to fig. 7; for its specific implementation and beneficial effects, refer to the above descriptions, which are not repeated below.
Fig. 10 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application. The apparatus is applied to any proxy node in a sharded system. As shown in fig. 10, the apparatus includes: a calculating module 401, a determining module 402, and a sending module 403, wherein:
the calculating module 401 is configured to calculate a hash slot corresponding to the request according to the hash identifier in the request sent by the client.
A determining module 402, configured to determine that the storage node where the hash slot is located is a target storage node corresponding to the request.
The sending module 403 is configured to send the request and the hash slot to the target storage node, so that the target storage node determines that the instance where the hash slot is located is the first target instance corresponding to the request, and updates the data stored in the hash slot in the first target instance according to the request.
Optionally, the calculating module 401 is specifically configured to perform a cyclic redundancy check calculation on the hash identifier, and perform a remainder operation on the calculation result to obtain the hash slot.
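As a hedged illustration of this calculation, the sketch below uses CRC-16/XMODEM and 16384 slots, mirroring the common Redis Cluster convention; the patent only requires a cyclic redundancy check followed by a remainder operation, so both the polynomial and the slot count are assumptions.

```python
import binascii
from typing import Dict, Tuple

TOTAL_SLOTS = 16384  # assumed total number of hash slots


def hash_slot(hash_identifier: str) -> int:
    # Cyclic redundancy check over the hash identifier, then a remainder
    # operation against the total slot count.
    crc = binascii.crc_hqx(hash_identifier.encode("utf-8"), 0)  # CRC-16/XMODEM
    return crc % TOTAL_SLOTS


def target_storage_node(hash_identifier: str,
                        node_ranges: Dict[Tuple[int, int], str]) -> Tuple[int, str]:
    """Return the computed slot and the storage node whose range contains it."""
    slot = hash_slot(hash_identifier)
    for (start, end), node_id in node_ranges.items():
        if start <= slot <= end:
            return slot, node_id
    raise KeyError(f"slot {slot} is not assigned to any storage node")
```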
Optionally, on the basis of the above-mentioned embodiment of fig. 10, an embodiment of the present application may further provide a data processing apparatus; the structure of the apparatus is described below with reference to the drawings. Fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application. As shown in fig. 11, the apparatus further includes: a receiving module 404 and an updating module 405, wherein:
a receiving module 404, configured to receive a configuration update request sent by a management device, where the configuration update request includes: hash slot range and partition.
And an updating module 405, configured to update the correspondence between the hash slot range and the partition according to the configuration updating request.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be implemented as one or more integrated circuits configured to perform the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 12 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, and as shown in fig. 12, the data processing apparatus includes: a processor 501, a storage medium 502, and a bus 503.
The storage medium 502 stores a program, and the processor 501 calls the program stored in the storage medium 502 to execute the method embodiments corresponding to fig. 2 to fig. 5 when the data processing apparatus serves as a storage node, and to execute the method embodiments corresponding to fig. 6 to fig. 7 when the data processing apparatus serves as a proxy node. The specific implementation and technical effects are similar and are not described herein again.
Optionally, the present application further provides a program product, such as a storage medium, on which a computer program is stored; when the program is executed by a processor, it performs the embodiments corresponding to the above-described methods.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (16)

1. A data processing method, applied to any storage node in a sharded storage system, wherein the storage node comprises a plurality of instances, and the method comprises the following steps:
receiving a request and a hash slot sent by a proxy node, wherein the hash slot is calculated by the proxy node according to the hash identifier in the request;
determining, according to a mapping relation between hash slot ranges and instances, that the instance in which the hash slot is located is a first target instance corresponding to the request, wherein each storage node stores the mapping relation between the hash slot ranges corresponding to the storage node and the respective instances;
updating the data stored by the hash slot in the first target instance according to the request.
2. The method of claim 1, wherein the method further comprises:
receiving an instance adjustment command sent by a management device; wherein, the example adjusting command comprises: identification of a second target instance and a migration instruction;
determining a second target instance on the storage node according to the identifier of the second target instance;
and migrating the data in the second target instance according to the migration instruction.
3. The method of claim 2, wherein the migration instruction comprises: an instance to be added;
the migrating the data in the second target instance according to the migration instruction includes:
adding the instance to be added;
and migrating the data in a partial hash slot range in the second target instance to the instance to be added, and deleting the successfully migrated data from the second target instance.
4. The method of claim 3, wherein the migration instruction further comprises: a target hash slot range; the partial hash slot range is the hash slot range indicated by the target hash slot range.
5. The method of claim 2, wherein the migration instruction comprises: an identification of a capacity reduction instance; the capacity reduction instance is an instance on the storage node other than the second target instance;
the migrating the data in the second target instance according to the migration instruction includes:
determining the capacity reduction example on the storage node according to the identification of the capacity reduction example;
and migrating all data in the second target instance to the capacity reduction instance, and deleting the data which is successfully migrated in the second target instance.
6. The method of claim 2, wherein said updating the data stored by the hash slot in the first target instance in accordance with the request comprises:
if the request is received while the first target instance is in the migration process, re-determining, according to the correspondence between the adjusted instances and hash slot ranges in the instance adjustment command, that the instance in which the hash slot is located is a third target instance;
updating the data stored by the hash slot in the third target instance according to the request.
7. The method of claim 2, wherein the method further comprises:
if the fourth target instance is monitored to be invalid or faulty, sending fault reporting information to the agent node and the management device, wherein the fault reporting information comprises: an identification of the fourth target instance.
8. The method of claim 7, wherein the method further comprises:
receiving a fault instance update command sent by the management device, wherein the fault instance update command comprises: an identification of a target standby instance of the fourth target instance;
and configuring the hash slot range corresponding to the target standby instance as the hash slot range corresponding to the fourth target instance.
9. A data processing method, applied to any proxy node in a sharded system, wherein the method comprises the following steps:
calculating a hash slot corresponding to a request according to a hash identifier in the request sent by a client;
determining the storage node where the hash slot is located as a target storage node corresponding to the request;
and sending the request and the hash slot to the target storage node, so that the target storage node determines the instance of the hash slot as a first target instance corresponding to the request according to the mapping relation between the hash slot range and each instance, and updates the data stored in the hash slot in the first target instance according to the request.
10. The method of claim 9, wherein the calculating the hash slot corresponding to the request according to the hash identifier in the request sent by the client comprises:
performing a cyclic redundancy check calculation on the hash identifier, and performing a remainder operation on the calculation result to obtain the hash slot.
11. The method of claim 9, wherein the method further comprises:
receiving a configuration update request sent by a management device, wherein the configuration update request comprises: the corresponding relation between the hash slot range and the partition;
and updating the correspondence between the hash slot range and the partition according to the configuration update request.
12. A data processing apparatus, applied to any storage node in a sharded storage system, wherein the storage node comprises a plurality of instances, and the apparatus comprises: a receiving module, a determining module, and an updating module, wherein:
the receiving module is used for receiving the request and the hash slot sent by the proxy node; the hash slot is obtained by the proxy node through calculation according to the hash identification in the request;
the determining module is configured to determine that the instance in which the hash slot is located is a first target instance corresponding to the request;
the updating module is used for updating the data stored in the hash slot in the first target instance according to the request.
13. A data processing apparatus, applied to any proxy node in a sharded system, wherein the apparatus comprises: a computing module, a determining module, and a sending module, wherein:
the computing module is used for computing a hash slot corresponding to the request according to the hash identification in the request sent by the client;
the determining module is configured to determine that the storage node where the hash slot is located is a target storage node corresponding to the request;
the sending module is configured to send the request and the hash slot to the target storage node, so that the target storage node determines that the instance in which the hash slot is located is the first target instance corresponding to the request, and updates the data stored in the hash slot in the first target instance according to the request.
14. A data processing apparatus, characterized in that the apparatus comprises: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor; when the data processing apparatus runs, the processor communicates with the storage medium via the bus, and the processor executes the machine-readable instructions to perform the method of any one of claims 1-11.
15. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, performs the method of any of the preceding claims 1-11.
16. A data processing system, wherein the system is a distributed data processing system, comprising: a plurality of storage nodes and a plurality of proxy nodes, wherein each proxy node is connected to the plurality of storage nodes, wherein each storage node is configured to perform the method of any of claims 1-8, and each proxy node is configured to perform the method of any of claims 9-11.
CN202110317270.5A 2021-03-25 2021-03-25 Data processing method, device, equipment, storage medium and system Active CN112698926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110317270.5A CN112698926B (en) 2021-03-25 2021-03-25 Data processing method, device, equipment, storage medium and system

Publications (2)

Publication Number Publication Date
CN112698926A true CN112698926A (en) 2021-04-23
CN112698926B CN112698926B (en) 2021-07-02

Family

ID=75515740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110317270.5A Active CN112698926B (en) 2021-03-25 2021-03-25 Data processing method, device, equipment, storage medium and system

Country Status (1)

Country Link
CN (1) CN112698926B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619391B2 (en) * 2015-05-28 2017-04-11 International Business Machines Corporation In-memory caching with on-demand migration
US20190364060A1 (en) * 2015-08-31 2019-11-28 Splunk Inc. Annotation of event data to include access interface identifiers for use by downstream entities in a distributed data processing system
CN105933408A (en) * 2016-04-20 2016-09-07 中国银联股份有限公司 Implementation method and device of Redis universal middleware
CN109995813A (en) * 2017-12-29 2019-07-09 杭州华为数字技术有限公司 A kind of partition extension method, date storage method and device
CN110874384A (en) * 2018-09-03 2020-03-10 阿里巴巴集团控股有限公司 Database cluster capacity expansion method, device and system
CN111104058A (en) * 2018-10-26 2020-05-05 慧与发展有限责任合伙企业 Key-value storage on persistent memory
CN109683826A (en) * 2018-12-26 2019-04-26 北京百度网讯科技有限公司 Expansion method and device for distributed memory system
CN109769028A (en) * 2019-01-25 2019-05-17 深圳前海微众银行股份有限公司 Redis cluster management method, device, equipment and readable storage medium storing program for executing
CN111274288A (en) * 2020-01-17 2020-06-12 腾讯云计算(北京)有限责任公司 Distributed retrieval method, device, system, computer equipment and storage medium
CN111290834A (en) * 2020-01-21 2020-06-16 苏州浪潮智能科技有限公司 Method, device and equipment for realizing high availability of service based on cloud management platform
CN111522811A (en) * 2020-03-18 2020-08-11 大箴(杭州)科技有限公司 Database processing method and device, storage medium and terminal
CN111414422A (en) * 2020-03-19 2020-07-14 上海达梦数据库有限公司 Data distribution method, device, equipment and storage medium
CN111782633A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Data processing method and device and electronic equipment
CN111913977A (en) * 2020-08-19 2020-11-10 上海莉莉丝网络科技有限公司 Data processing method, device and medium
CN112199427A (en) * 2020-09-24 2021-01-08 中国建设银行股份有限公司 Data processing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU, Xudong: "Research on Load Balancing and Performance Optimization Based on Redis Distributed Storage", China Master's Theses Full-text Database, Information Science and Technology *
程序员历小冰 (blog): "The Data Sharding Mechanism of Redis Cluster", HTTPS://WWW.CNBLOGS.COM/REMCARPEDIEM/P/12078328.HTML *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113791729A (en) * 2021-08-11 2021-12-14 合肥先进产业研究院 Dynamic capacity expansion method for monitoring data storage of internet of things equipment
CN113656144A (en) * 2021-08-17 2021-11-16 百度在线网络技术(北京)有限公司 Data publishing system, method and device, electronic equipment and storage medium
CN113656144B (en) * 2021-08-17 2023-08-11 百度在线网络技术(北京)有限公司 Data release system, method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112698926B (en) 2021-07-02

Similar Documents

Publication Publication Date Title
US11550675B2 (en) Remote data replication method and system
CN109683826B (en) Capacity expansion method and device for distributed storage system
CN108509153B (en) OSD selection method, data writing and reading method, monitor and server cluster
US10642694B2 (en) Monitoring containers in a distributed computing system
US10838829B2 (en) Method and apparatus for loading data from a mirror server and a non-transitory computer readable storage medium
CN111078667B (en) Data migration method and related device
CN112698926B (en) Data processing method, device, equipment, storage medium and system
US20230305936A1 (en) Methods and systems for a non-disruptive automatic unplanned failover from a primary copy of data at a primary storage system to a mirror copy of the data at a cross-site secondary storage system
CN106062717A (en) Distributed storage replication system and method
CN112468601B (en) Data synchronization method, access method and system of distributed storage system
US11892982B2 (en) Facilitating immediate performance of volume resynchronization with the use of passive cache entries
KR20070061088A (en) File management method in file system and metadata server for the same
CN105069152B (en) data processing method and device
US20120278429A1 (en) Cluster system, synchronization controlling method, server, and synchronization controlling program
CN112632029B (en) Data management method, device and equipment of distributed storage system
CN110545203B (en) Method for establishing initial resource backup pool and self-healing repair of cloud platform by cloud platform
CN112905556A (en) Directory lease management method, device, equipment and storage medium for distributed system
CN112235405A (en) Distributed storage system and data delivery method
CN113190619B (en) Data read-write method, system, equipment and medium for distributed KV database
CN105323271B (en) Cloud computing system and processing method and device thereof
CN111404737B (en) Disaster recovery processing method and related device
CN112000850A (en) Method, device, system and equipment for data processing
CN112749172A (en) Data synchronization method and system between cache and database
JP6376626B2 (en) Data storage method, data storage device, and storage device
CN111752892A (en) Distributed file system, method for implementing the same, management system, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant