CN116467086A - Data processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116467086A
CN116467086A (application number CN202310511950.XA)
Authority
CN
China
Prior art keywords
data
node
range
target
ranges
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310511950.XA
Other languages
Chinese (zh)
Inventor
沈泰宁
刘奇
黄东旭
崔秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pingkai Star Beijing Technology Co ltd
Original Assignee
Pingkai Star Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pingkai Star Beijing Technology Co ltd filed Critical Pingkai Star Beijing Technology Co ltd
Priority to CN202310511950.XA
Publication of CN116467086A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 - Design, administration or maintenance of databases
    • G06F 16/217 - Database tuning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 - Relational databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/5022 - Workload threshold
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/508 - Monitor
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present application provides a data processing method and apparatus, an electronic device, and a storage medium, and relates to the technical field of databases. The method comprises the following steps: acquiring data ranges of to-be-processed data corresponding to at least two nodes respectively, and the data traffic of the to-be-processed data corresponding to each of the at least two data ranges within a preset time period; selecting at least one target range from the data ranges according to the data traffic in each data range; selecting at least one target node from the nodes according to the load level corresponding to each node; and updating the at least one target range and assigning the resulting at least one update range to the at least one target node, so that the at least one target node processes the to-be-processed data corresponding to the at least one update range. According to the embodiments of the present application, the processing pressure on each node is reduced and data processing efficiency is improved.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of database technologies, and in particular, to a data processing method, a data processing device, an electronic device, and a storage medium.
Background
Data synchronization means keeping the data in a downstream system consistent with the data in an upstream system at all times: when data in the upstream system changes, the downstream system senses the change and synchronizes the changed data.
When the upstream system is a relational database, its data may be stored in the form of data tables, and data synchronization reduces to synchronizing the data tables of the upstream system to the downstream system. Existing data synchronization methods generally synchronize data between the upstream and downstream systems through a single node; this single-node processing mechanism easily overloads the node and results in low processing efficiency.
Disclosure of Invention
The embodiments of the present application provide a data processing method and apparatus, an electronic device, and a storage medium, which can solve the prior-art problems of excessive load on a single node and low processing efficiency.
The technical solutions are as follows:
according to an aspect of the embodiments of the present application, there is provided a data processing method, including:
acquiring data ranges of to-be-processed data corresponding to at least two nodes respectively, and data traffic of the to-be-processed data corresponding to each of the at least two data ranges within a preset time period;
selecting at least one target range from the data ranges according to the data traffic in each data range;
selecting at least one target node from the nodes according to the load level corresponding to each node;
and updating the at least one target range, and assigning the resulting at least one update range to the at least one target node, so that the at least one target node processes the to-be-processed data corresponding to the at least one update range.
Optionally, the acquiring of the data ranges of the to-be-processed data corresponding to the nodes, and of the data traffic of the to-be-processed data corresponding to each data range within the preset time period, is triggered based on at least one of the following conditions:
receiving an update instruction;
each elapse of a preset time length;
detecting an anomaly in the data traffic corresponding to any one of the at least two data ranges.
Optionally, the selecting of at least one target range from the data ranges according to the data traffic in the data ranges includes:
taking a data range meeting a first preset condition as a target range; the first preset condition includes that the traffic in the data range is greater than a first threshold within a first preset time period;
the updating of the at least one target range and the assigning of the resulting at least one update range to the at least one target node include:
splitting the target range into a preset number of contiguous update ranges, and correspondingly assigning the preset number of update ranges to a preset number of target nodes.
Optionally, the selecting of at least one target range from the data ranges according to the data traffic in the data ranges includes:
taking at least two data ranges meeting a second preset condition as at least two target ranges;
the second preset condition includes that the at least two data ranges are adjacent to one another, and that the traffic in each of the at least two data ranges is less than a second threshold within a second preset time period;
the updating of the at least one target range and the assigning of the resulting at least one update range to the at least one target node include:
merging the at least two target ranges into one update range and assigning the update range to the at least one target node.
Optionally, the selecting of at least one target node from the nodes according to the load levels corresponding to the nodes includes:
determining, for each node, the load level of the node based on the real-time operating state of the node;
taking at least one node meeting a third preset condition as at least one target node; the third preset condition includes that the load level of the node is higher than a third threshold.
Optionally, the method further comprises:
determining at least one original node corresponding to the at least one target range;
acquiring the current processing progress of each original node with respect to the data to be processed;
determining a start processing position based on the current processing progress corresponding to each original node;
and sending the start processing position to the at least one target node, so that the at least one target node processes the data corresponding to the at least one update range based on the start processing position.
Optionally, for each target node, the processing of the data corresponding to the at least one update range based on the start processing position includes:
pulling, from the start processing position of a first data table in the first system, the initial data of the update range corresponding to the target node, processing the initial data into target data suitable for a second system, and writing the target data into the second system.
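The pull-process-write step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the in-memory row list, the `start_pos` cursor, and the `transform` callback are assumptions standing in for the first data table, the start processing position, and the conversion into data suitable for the second system.

```python
def process_update_range(source_rows, start_pos, lo, hi, transform):
    """Pull rows of the update range [lo, hi) beginning at the start
    processing position, transform each row for the second system, and
    return the rows that would be written to it."""
    written = []
    for key, value in source_rows:            # rows of the first data table
        if key >= start_pos and lo <= key < hi:
            written.append(transform(key, value))
    return written
```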
Optionally, the method further comprises:
receiving a processing-completion notification sent by the at least one target node after the current data processing is completed;
sending a processing-termination notification to the at least one original node, so that the at least one original node stops writing the data of its corresponding at least one target range into the second system and sends a write-stop notification to a scheduling node;
and receiving the write-stop notification returned by the at least one original node, and correspondingly sending a write-start notification to the at least one target node, so that the at least one target node starts writing the data of its corresponding at least one update range into the second system.
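The notification sequence described above (processing-completion, processing-termination, write-stop, write-start) can be sketched as a small message loop. All names are illustrative assumptions; the patent does not prescribe a message format.

```python
from collections import deque

def run_handover():
    """Replay the handover between the scheduling node, the original node,
    and the target node as a queue of messages, returning the trace."""
    trace = []
    inbox = deque(["processing_complete"])          # sent by the target node
    while inbox:
        msg = inbox.popleft()
        if msg == "processing_complete":            # scheduling node reacts
            trace.append("scheduler: send processing-termination to original node")
            inbox.append("terminate")
        elif msg == "terminate":                    # original node reacts
            trace.append("original: stop writing old target range, send write-stop")
            inbox.append("write_stopped")
        elif msg == "write_stopped":                # scheduling node reacts
            trace.append("scheduler: send write-start to target node")
            inbox.append("start_writing")
        elif msg == "start_writing":                # target node reacts
            trace.append("target: start writing update range to second system")
    return trace
```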
According to another aspect of an embodiment of the present application, there is provided a data processing apparatus, the apparatus including:
an acquisition module, configured to acquire the data ranges of to-be-processed data corresponding to at least two nodes and the data traffic of the to-be-processed data corresponding to each of the at least two data ranges within a preset time period;
a target range determining module, configured to select at least one target range from the data ranges according to the data traffic in the data ranges;
a target node determining module, configured to select at least one target node from the nodes according to the load levels corresponding to the nodes;
and an updating module, configured to update the at least one target range and assign the resulting at least one update range to the at least one target node, so that the at least one target node processes the to-be-processed data corresponding to the at least one update range.
According to another aspect of the embodiments of the present application, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any one of the data processing methods described above when executing the program.
According to a further aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the data processing methods described above.
The technical solutions provided by the embodiments of the present application bring the following beneficial effects:
Data processing is performed by a plurality of nodes, which avoids the performance hot spots caused by a single-point processing mechanism, reduces the processing pressure on each node, and improves data processing efficiency. Even if one node fails, the remaining nodes can continue processing, so data processing is not interrupted and the availability of the method is improved.
Meanwhile, the data range corresponding to each node is dynamically adjusted according to the data traffic in each data range and the load level corresponding to each node, so that dynamic load balancing across the nodes is achieved, idle resources in the system are fully utilized, the overall resource utilization of the system is improved, and data processing efficiency is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below.
FIG. 1 is a system architecture diagram of a data processing system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a data processing method according to another embodiment of the present application;
FIG. 4 is a schematic flowchart of a data processing method according to yet another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and "comprising," when used in this application, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof, all of which may be included in the present application. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates that at least one of the items defined by the term, e.g., "a and/or B" may be implemented as "a", or as "B", or as "a and B".
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The technical solutions of the embodiments of the present application and technical effects produced by the technical solutions of the present application are described below by describing several exemplary embodiments. It should be noted that the following embodiments may be referred to, or combined with each other, and the description will not be repeated for the same terms, similar features, similar implementation steps, and the like in different embodiments.
FIG. 1 is a system architecture diagram of a data processing system according to an embodiment of the present application. As shown in FIG. 1, the system includes a first system, a second system, and a node set, where the node set includes at least one node and a scheduling node, and the nodes may interact with one another by RPC (Remote Procedure Call).
The first system may be an upstream system and the second system a downstream system, where upstream and downstream denote the ordering of links in a given business process. The first system may be a relational database, such as MySQL or TiDB (a distributed relational database), and the second system may be a relational database or a message queue, such as a Kafka message queue.
In the application scenario shown in this system architecture, a node may be configured to acquire corresponding update data from the first system and synchronize the update data to the second system, thereby achieving data synchronization between the first system and the second system. Those skilled in the art will appreciate that in other application scenarios a node may perform other data processing tasks; the embodiments of the present application do not limit the specific functions of the node.
Fig. 2 is a flow chart of a data processing method according to an embodiment of the present application, as shown in fig. 2, where the method includes:
step S101, obtaining data ranges of the data to be processed corresponding to at least two nodes respectively, and data flow of the data to be processed corresponding to each of the at least two data ranges in a preset time period.
Specifically, the data processing method provided by the embodiment of the application can be applied to a scheduling node, and the scheduling node can be used for scheduling at least two nodes in a node set. The node may be a unit for executing a program, and the node may be a process or a thread, or may be a server or a client.
Each node may correspond to a data range and may be configured to process the to-be-processed data within that range. The data ranges of the nodes may belong to the same data table or to different data tables, and the data ranges corresponding to the nodes do not overlap one another.
Within a preset time period, the data range corresponding to a node has a data traffic of the to-be-processed data in that range. The data traffic reflects how the to-be-processed data in the range is operated on during the preset time period: the larger the traffic, the more frequent the operations on the data in the range, that is, the more likely the range is a performance hot-spot range.
Each node may send its corresponding data range, together with the data traffic of the to-be-processed data in that range within the preset time period, to the scheduling node. The scheduling node may thus collect the data ranges corresponding to the at least two nodes and the data traffic in each data range within the preset time period.
Step S102, selecting at least one target range from the data ranges according to the data flow rate in the data ranges.
Specifically, after the scheduling node collects the data traffic in the data range corresponding to each node, it may examine the data processing situation in each data range according to the data traffic and select at least one target range from the data ranges. A target range is a data range that needs to be updated; for example, it may be a data range in which an anomaly has occurred.
And step S103, selecting at least one target node from the nodes according to the load levels corresponding to the nodes.
Specifically, each node may also report its own load level to the scheduling node. The load level of a node reflects how many data processing tasks the node can still carry: in this application, a higher load level indicates that the node has more unused resources and can carry more data processing tasks.
After collecting the load levels of the nodes, the scheduling node may screen the nodes according to their load levels and select at least one target node. A target node may be configured to process the data in an updated data range; for example, a target node may be an idle node.
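Under the load-level definition above (a higher level means more unused resources), the selection of target nodes can be sketched as follows. The threshold value, the dictionary shape, and the most-idle-first ordering are assumptions for illustration.

```python
THIRD_THRESHOLD = 0.5  # assumed value of the "third threshold"

def select_target_nodes(load_levels):
    """load_levels: {node_id: load_level}, where a higher level means more
    idle resources.  Return the ids of nodes above the third threshold,
    most idle first."""
    candidates = [(level, node) for node, level in load_levels.items()
                  if level > THIRD_THRESHOLD]
    return [node for level, node in sorted(candidates, reverse=True)]
```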
Step S104, updating at least one target range, and distributing the obtained at least one updating range to at least one target node so as to enable the at least one target node to process the data to be processed corresponding to the at least one updating range.
Specifically, after determining the target range and the target node, the scheduling node may update the at least one target range according to its corresponding data traffic to obtain at least one update range.
How a target range is updated may be determined by its data traffic: when the traffic corresponding to a target range is large, the range may be split into a plurality of sub-ranges; when the traffic corresponding to several target ranges is small, those ranges may be merged into one overall data range. In this way the traffic in the data ranges does not differ excessively, the intensity of the data processing tasks in the ranges stays relatively balanced, and load balancing across the nodes can subsequently be achieved.
The scheduling node may then assign the at least one update range to the at least one target node, and each target node processes the to-be-processed data of its assigned update range. Since a target node is a node with low resource utilization, that is, a node with a high load level, assigning an update range to it makes full use of the idle resources in the system and improves the overall resource utilization.
Alternatively, step S101 may be triggered based on at least one of the following:
receiving an update instruction;
each elapse of a preset time length;
detecting an anomaly in the data traffic corresponding to any one of the data ranges.
Specifically, when the scheduling node receives an update instruction, when a preset time length has elapsed, or when it detects an anomaly in the data traffic corresponding to any one of the at least two data ranges, it may execute steps S101 to S104 to update the data ranges corresponding to the nodes and thereby achieve dynamic load balancing across the nodes.
A data traffic anomaly for a data range may mean that the traffic in the range is greater than a maximum traffic threshold or less than a minimum traffic threshold; both thresholds may be set according to the actual application scenario. The update instruction may be actively triggered by a user.
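The three trigger conditions can be checked as in the sketch below; the threshold values, the interval length, and the argument names are assumptions for illustration.

```python
MIN_TRAFFIC, MAX_TRAFFIC = 10, 10_000   # assumed min/max traffic thresholds
REBALANCE_INTERVAL = 60.0               # assumed preset time length, in seconds

def should_rebalance(update_instruction, last_run, now, range_traffic):
    """Return True when any of the three trigger conditions holds."""
    if update_instruction:                          # 1. update instruction received
        return True
    if now - last_run >= REBALANCE_INTERVAL:        # 2. preset time length elapsed
        return True
    return any(t > MAX_TRAFFIC or t < MIN_TRAFFIC   # 3. traffic anomaly in a range
               for t in range_traffic.values())
```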
According to the data processing method provided by the embodiment of the application, the plurality of nodes are adopted for data processing, so that the problem of performance hot spots caused by a single-point processing mechanism is avoided, the processing pressure of each node is reduced, and the data processing efficiency is improved. Even if one node fails, the other nodes can continue to process the data, so that the data processing is not stopped, and the availability of the method is improved.
Meanwhile, the data range corresponding to each node is dynamically adjusted according to the data flow in each data range and the load level corresponding to each node, so that the dynamic balance load of each node is realized, idle resources in the system are fully utilized, the utilization rate of the whole system resources is improved, and the data processing efficiency is further improved.
As an alternative embodiment, the method further comprises:
and determining initial data ranges corresponding to the preset number of nodes based on the preset number of synchronous partitions of the first data table in the first system.
Specifically, when the first system is a relational database, its data may be stored in a first data table, the first data table may be divided into a preset number of synchronization partitions, and a preset number of nodes may be set in one-to-one correspondence with the synchronization partitions. A synchronization partition may contain the rows whose primary keys fall within the data range of that partition, which ensures that rows sharing the same primary key are located in the same synchronization partition.
For each node, the data range of the synchronization partition corresponding to the node may be taken as the node's initial data range.
Alternatively, the partition field may be determined based on the primary key of the first data table, thereby determining the range of the partition field. Wherein the partition field may be a field for dividing the data table, and the partition field may include information of a primary key of the first data table.
After the range of the partition field is determined, it may be divided into a preset number of range intervals, where the intervals are contiguous and do not overlap one another. For any row of the first data table, the range interval to which the row belongs can be determined from the primary key of that row, and the rows belonging to one range interval form one synchronization partition; the first data table is thereby divided into the preset number of synchronization partitions.
Alternatively, the partition field may be determined based on the type of the primary key. If the primary key is of a numeric type, the primary key can be used directly as the partition field. If the primary key is of a character-string type, the primary key can be mapped to a numeric value, and the mapped value used as the partition field. After the partition field is determined, its range may be determined from its data type.
For example, when the primary key is of the int (integer) type, the primary key is used as the partition field and the value range of the int type, [-2147483648, 2147483647], is used as the range of the partition field. When this range is divided into four range intervals, it can be divided into:
[-2147483648, -1073741823), [-1073741823, 0), [0, 1073741823), [1073741823, 2147483647].
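The division into contiguous, non-overlapping intervals can be sketched as below. The boundary arithmetic is illustrative (the patent does not fix where the cut points fall), and `interval_of` shows how a row's partition-field value is mapped to its synchronization partition.

```python
import bisect

def split_range(lo, hi, n):
    """Divide the partition-field range [lo, hi] into n contiguous,
    non-overlapping intervals, returned as n + 1 boundary points;
    interval i covers [bounds[i], bounds[i + 1])."""
    step = (hi - lo) // n
    bounds = [lo + i * step for i in range(n)]
    bounds.append(hi)                # the last interval absorbs any remainder
    return bounds

def interval_of(bounds, key):
    """Index of the interval a partition-field value falls into, so rows
    with equal primary keys always land in the same synchronization partition."""
    return min(bisect.bisect_right(bounds, key) - 1, len(bounds) - 2)
```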
As another example, when the primary key is of the VARCHAR(10) type, the primary key may be hashed to a value of type uint32, and the value range of uint32 may be used as the range of the partition field.
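Mapping a primary key to a numeric partition field could look like the sketch below. CRC-32 is chosen here only as an example of a hash whose output range is uint32; the patent does not name a specific hash function.

```python
import zlib

def partition_field(primary_key):
    """Numeric primary keys are used directly as the partition field;
    string keys are hashed to a uint32 value (here via CRC-32, one of
    many possible choices)."""
    if isinstance(primary_key, int):
        return primary_key
    return zlib.crc32(str(primary_key).encode("utf-8"))  # in [0, 2**32 - 1]
```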
Alternatively, for any row of the first data table, the primary key of the row and the row itself may be taken as a data pair; for example, if the primary key is denoted key and the row is denoted value, the data pair (key, value) is obtained.
The data pair is used as the partition field, the data pair is serialized, and the byte range of the resulting binary string is used as the range of the partition field. For example, if the range of the first two bytes of the serialized binary string is [0x0000, 0x00ff], dividing this range into two range intervals gives: [0x0000, 0x007f) and [0x007f, 0x00ff].
As an alternative embodiment, step S102 includes:
taking a data range meeting a first preset condition as a target range; the first preset condition includes that the traffic in the data range is greater than a first threshold within a first preset time period;
on this basis, step S104 includes:
splitting the target range into a continuous preset number of update ranges, and correspondingly distributing the preset number of update ranges to the preset number of target nodes.
Specifically, after obtaining the data traffic in the data range corresponding to each node, the scheduling node may examine the traffic in each data range and take any data range whose traffic is greater than the first threshold within the first preset time period as a target range. The first threshold and the first preset time period may be set according to the actual application scenario. It should be noted that a data range whose traffic is less than or equal to the first threshold within the first preset time period need not undergo the splitting update operation.
In this case the selected target range carries heavy traffic and the original node responsible for it is under heavy processing pressure. The target range can be split into a preset number of update ranges, which are correspondingly assigned to a preset number of target nodes, so that each target node is responsible for the data of one update range. The processing pressure of the original node is thereby spread over a plurality of target nodes, which reduces the pressure on any single node, balances the load across the nodes, and improves data processing efficiency.
For example, the target range is [1, 100), which the scheduling node splits into update range A: [1, 50) and update range B: [50, 100); it can then assign update range A to target node A and update range B to target node B.
According to the method provided by the embodiment of the application, the data range, in which the flow in the data range is larger than the first threshold value in the first preset time period, is used as the target range, the target range is split into the continuous preset number of update ranges, the preset number of update ranges are correspondingly distributed to the preset number of target nodes, the data processing tasks of the nodes with higher data processing pressure are distributed to the plurality of target nodes, the processing pressure of a single node is reduced, the balanced load of each node is realized, and the data processing efficiency is improved.
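The split of a hot target range into a preset number of contiguous update ranges can be sketched as follows (an illustrative sketch, not the embodiment's exact implementation):

```python
def split_range(start: int, end: int, parts: int):
    """Split the half-open range [start, end) into `parts` contiguous update ranges."""
    bounds = [start + (end - start) * i // parts for i in range(parts + 1)]
    return [(bounds[i], bounds[i + 1]) for i in range(parts)]

# target range [1, 100) split for two target nodes
print(split_range(1, 100, 2))  # [(1, 50), (50, 100)]
```

Because each boundary is derived from the same formula, adjacent update ranges share an endpoint and together cover the original target range exactly.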
As an alternative embodiment, step S102 includes:
taking at least two data ranges meeting a second preset condition as at least two target ranges;
the second preset condition comprises that at least two data ranges are adjacent to each other, and the flow rates in the at least two data ranges are smaller than a second threshold value in a second preset time period;
on this basis, step S104 includes:
At least two target ranges are merged into an update range and the update range is assigned to at least one target node.
Specifically, the scheduling node may further use at least two adjacent data ranges whose data traffic is smaller than the second threshold within the second preset time period as the target ranges. The second threshold and the second preset time period may be set according to the actual application scenario. It should be noted that, for at least two adjacent data ranges, when the data traffic of at least one of them is greater than or equal to the second threshold within the second preset period, the update operation of merging those data ranges may be skipped.
At this time, the data traffic in the at least two selected target ranges is smaller, the data processing pressure of the corresponding original node is smaller, and in order to avoid wasting resources of each node, the data in the at least two target ranges can be distributed to one node for processing. At least two target ranges are adjacent to each other, so that at least two target ranges can be merged into one update range and the merged update range is allocated to one target node. The target node may be any one of at least two original nodes corresponding to at least two target ranges, or may be other nodes except for at least two original nodes, which is not limited in the embodiment of the present application.
For example, the two target ranges are [1, 50) and [50, 100); the scheduling node merges the two target ranges into one update range [1, 100) and may allocate the update range to the target node.
According to the method provided by the embodiment of the application, the data ranges are adjacent to each other, at least two data ranges with the flow in the data ranges smaller than the second threshold value in the second preset time period are used as at least two target ranges, the at least two target ranges are combined into the update range, and the update range is distributed to the target nodes, so that waste of resources in the system is avoided, and the resource utilization rate of the whole system is improved.
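The merge condition above — adjacency plus below-threshold traffic in every range of the run — can be sketched as follows (names and data shapes are illustrative assumptions):

```python
def merge_cold_ranges(ranges, traffic, threshold):
    """Merge maximal runs of adjacent half-open ranges whose traffic in the
    observation window is below `threshold`; hot ranges are left untouched."""
    merged = []  # entries: [start, end, is_cold_run]
    for (start, end), t in zip(ranges, traffic):
        cold = t < threshold
        if merged and merged[-1][2] and cold and merged[-1][1] == start:
            merged[-1][1] = end  # extend the current cold run
        else:
            merged.append([start, end, cold])
    return [(s, e) for s, e, _ in merged]

# [1, 50) and [50, 100) are both cold -> merged into [1, 100); [100, 200) is hot and stays
print(merge_cold_ranges([(1, 50), (50, 100), (100, 200)], [3, 4, 500], 10))
```

A range is only absorbed into a run when its start equals the previous run's end, so non-adjacent cold ranges are never merged.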
As an alternative embodiment, step S103 includes:
determining a load level of the node based on a real-time running state of the node for each node;
taking at least one node meeting a third preset condition as at least one target node; the third preset condition includes the load level of the node being higher than a third threshold.
Specifically, to screen out the target node, each node may report the corresponding real-time running state to the scheduling node, and the scheduling node may determine, according to the real-time running state of each node, a load level corresponding to each node, and select, according to the load level corresponding to each node, a node with a load level higher than a third threshold value from each node as the target node.
The real-time running state of a node may include parameters reflecting the node's real-time operating condition, such as at least one of the node's CPU (Central Processing Unit) usage, memory usage, and disk usage. When the real-time running state reflects that the node's resource usage is low, the node can take on more data processing tasks; that is, the load level of the node is higher.
For a node whose load level is lower than or equal to the third threshold, fewer resources are available and its capacity to take on additional work is lower, so such a node is not selected as a target node.
The number of target nodes may be determined according to the update mode. When the update mode requires one target node, the node with the highest load level among the nodes may be used as the target node; when the update mode requires a plurality of target nodes, the nodes may be sorted by load level in descending order, and the specified number of top-ranked nodes used as the target nodes.
According to the method provided by the embodiment of the application, the load level of the node is determined based on the real-time running state of the node, at least one node with the load level higher than the third threshold value is used as at least one target node, idle resources in the system are fully utilized, and the utilization rate of the resources of the whole system is improved; meanwhile, the load level of the target node is higher, the corresponding processing capacity is stronger, the target node is used for processing the data to be processed corresponding to the update range, and the data processing efficiency is further improved.
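Target-node selection can be sketched as below. Note the assumption: the embodiment does not fix a formula for the load level, so this sketch models it as spare capacity (lower resource usage yields a higher level), following the convention stated above.

```python
def pick_target_nodes(states, threshold, count):
    """states: {node: {"cpu": u, "mem": u, "disk": u}} with usage ratios in [0, 1].
    Load level is modeled as spare capacity (lower usage -> higher level);
    the exact formula is an assumption, not part of the embodiment."""
    levels = {n: 1.0 - max(s.values()) for n, s in states.items()}
    eligible = [n for n, lvl in levels.items() if lvl > threshold]
    return sorted(eligible, key=lambda n: levels[n], reverse=True)[:count]

states = {
    "node-a": {"cpu": 0.9, "mem": 0.5, "disk": 0.4},  # busy: low spare capacity
    "node-b": {"cpu": 0.2, "mem": 0.3, "disk": 0.1},  # mostly idle
    "node-c": {"cpu": 0.4, "mem": 0.4, "disk": 0.2},
}
print(pick_target_nodes(states, threshold=0.5, count=2))  # ['node-b', 'node-c']
```

Sorting in descending order and truncating to `count` matches the "top-ranked nodes" selection when the update mode requires several target nodes.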
As an alternative embodiment, the method further comprises:
determining at least one original node corresponding to at least one target range;
acquiring the current processing progress of each original node aiming at the data to be processed;
determining an initial processing position based on the current processing progress corresponding to each original node;
and transmitting the initial processing position to at least one target node so that the at least one target node can process the data corresponding to the at least one updating range based on the initial processing position.
Specifically, the triggering time of the scheduling node for updating the data range corresponding to each node may occur in the process of data processing, so as to ensure that the updating of the data range corresponding to each node does not affect the process of data processing, the scheduling node may further determine at least one original node corresponding to at least one target range, where the original node may be the node corresponding to the target range before the updating.
After determining at least one original node, each original node may send the current processing progress of the original node for the data to be processed to a scheduling node, and the scheduling node may determine the initial processing position for the target node according to the current processing progress corresponding to each original node.
The current processing progress of an original node indicates how far that node has advanced through the data to be processed; it may be expressed, for example, as the line number up to which the original node has currently processed.
The initial processing position is for the target nodes, the initial processing position can be understood as a starting point of processing by the target nodes, and for each target node, the target nodes can process the data to be processed in the corresponding update range by taking the initial processing position as the starting point.
When one target range is split into a plurality of update ranges and the update ranges are correspondingly distributed to a plurality of target nodes, the current processing progress of the original node corresponding to the target range can be directly used as the initial processing position of the plurality of target nodes, namely, each target node starts to continuously process data from the current processing position of the original node, so that the continuity of the data processing process is ensured.
When multiple target ranges are merged into one update range, the current processing progress of the multiple original nodes corresponding to those target ranges may differ. The current processing progress of each original node may be sorted, and the earliest progress, that is, the slowest, used as the initial processing position of the target node. Taking the slowest current processing progress as the initial processing position prevents data from being missed due to inconsistent progress among the original nodes.
For example, the target range R1 is [1, 50) and the target range R2 is [50, 100); the scheduling node merges the two target ranges into one update range R3: [1, 100). The current processing progress of the original node corresponding to R1 is line 50, and that of the original node corresponding to R2 is line 60. Since line 50 precedes line 60, line 50 is taken as the initial processing position of the target node corresponding to R3, preventing the data between lines 50 and 60 from being missed.
According to the method provided by the embodiment of the application, the current processing progress of each original node for the data to be processed is obtained, the initial processing position is determined based on the current processing progress corresponding to each original node, and the initial processing position is sent to at least one target node, so that the influence of updating of the data range corresponding to each node on the data processing progress is avoided, the continuity of the data processing process is ensured, and the problem of data omission caused by inconsistent progress among the original nodes is prevented.
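The rule for choosing the initial processing position in a merge reduces to taking the minimum of the original nodes' progresses, as in this small sketch:

```python
def merge_start_position(progresses):
    """When merging ranges, start the target node from the slowest original
    node's progress so no rows between the progresses are skipped; rows
    between the slowest and fastest progress are simply re-processed."""
    return min(progresses)

# R1's original node is at line 50, R2's at line 60: the merged range's
# target node restarts from line 50, so lines 50-60 are not missed.
print(merge_start_position([50, 60]))  # 50
```

The trade-off is deliberate: a little duplicate processing (lines 50-60 here) is accepted in exchange for a guarantee of no omitted data.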
As an alternative embodiment, for each target node, processing the data corresponding to at least one update range based on the start processing location includes:
And pulling initial data of an update range corresponding to the target node from the initial processing position of the first data table in the first system, processing the initial data into target data suitable for the second system, and writing the target data into the second system.
Specifically, when the target node is configured to perform a data synchronization task of synchronizing data in the first system to the second system, the target node may pull initial data in a corresponding update range from a start processing position of the first data table in the first system, convert the initial data into target data applicable to the second system, determine a writing order of the target data, and sequentially write the target data into the second system according to the writing order, so as to achieve data synchronization between the first system and the second system. The initial data in the update range may be data that changes in the update range.
Alternatively, for different links of the data processing flow in the target node, one link may be allocated to one node for processing separately. The target node may include a pulling node, a processing node and a writing node, where the pulling node is responsible for acquiring initial data in a corresponding update range from a first data table of the first system, and the processing node is responsible for performing format conversion on the initial data, converting the initial data into a data format suitable for the second system, and taking the data obtained after format conversion as target data; meanwhile, the processing node may also be responsible for determining the write order of the plurality of line data included in the target data. The writing node is responsible for sequentially writing the target data into the second system according to the writing order.
By distributing the data processing process of the target node to a plurality of nodes, the processing pressure of a single node is further reduced, and the processing efficiency is improved.
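The pull -> process -> write split can be sketched as a three-stage pipeline; the stage names and data shapes here are illustrative stand-ins, and each stage could equally run on a separate pulling, processing, or writing node:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class SyncPipeline:
    """Minimal sketch of the pull -> process -> write data synchronization flow."""
    pull: Callable[[int], Iterable[dict]]    # read changed rows from the start position
    convert: Callable[[dict], dict]          # reformat a row for the second system
    write: Callable[[Iterable[dict]], None]  # write rows, in order, to the second system

    def run(self, start_pos: int) -> None:
        self.write(self.convert(row) for row in self.pull(start_pos))

# wire the stages to in-memory stand-ins for the two systems
source = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
sink: list = []
pipe = SyncPipeline(
    pull=lambda pos: source[pos:],
    convert=lambda row: {"key": row["id"], "value": row["v"].upper()},
    write=sink.extend,
)
pipe.run(1)
print(sink)  # [{'key': 2, 'value': 'B'}, {'key': 3, 'value': 'C'}]
```

Because each stage is just a callable, replacing the in-memory stand-ins with RPC clients for the pulling, processing, and writing nodes would not change the pipeline's structure.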
As an alternative embodiment, the method further comprises:
receiving a processing completion notification sent by at least one target node after the current data processing is completed;
sending a processing termination notification to the at least one original node, so that the at least one original node stops writing data of at least one target range corresponding to the at least one original node into the second system, and sending a writing stop notification to the scheduling node;
and receiving a write-stop notification returned by the at least one original node, and correspondingly sending a write-start notification to the at least one target node so that the at least one target node starts to write the data of the at least one update range corresponding to the at least one target node into the second system.
Specifically, the data synchronization process may include three links of data pulling, data processing and data writing, and the original node may also perform data synchronization simultaneously in the process of performing data synchronization by the target node.
For each target node, when the target node performs two links of data pulling and data processing on the current data, a processing completion notification can be sent to the scheduling node. The current data may be incremental data that changes in the first system between the current processing progress of the corresponding original node and the current time. At this time, the target node does not start the link of data writing. Moreover, the original node can write data when the target node performs data pulling and data processing.
After receiving the processing completion notification of the target node, the scheduling node may send a processing termination notification to the original node, and after receiving the processing termination notification sent by the scheduling node, the original node may stop writing data in the corresponding target range into the second system, and after confirming to stop writing, the original node sends a writing stop notification to the scheduling node. After receiving the write-stop notification sent by the original node, the scheduling node may send a write-start notification to the target node, so that the target node writes the data in the corresponding update range into the second system.
In the embodiment of the application, after the target node finishes the initial data processing, the original node is instructed to stop data writing, and the target node is instructed to start data writing, so that only one node can write data in one data range at the same time, and the problem of disorder caused by writing data in one data range by a plurality of nodes at the same time is avoided; and the time interval between the writing stop of the original node and the writing start of the target node is reduced as far as possible, so that a user of the second system perceives that data is always written in the process of updating the data range of the node, and the user experience is improved.
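The notification ordering above can be sketched as a handover routine driven by the scheduling node; the classes and method names below are hypothetical stand-ins for the RPC endpoints, used only to show the ordering that keeps at most one writer per range at any time:

```python
class FakeNode:
    """Records the notification order; stand-in for a real node endpoint."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def wait_processing_done(self):
        self.log.append(f"{self.name}:processing-done")
    def stop_writing(self):
        self.log.append(f"{self.name}:stop-writing")
    def confirm_stopped(self):
        self.log.append(f"{self.name}:write-stopped")
        return 50  # progress reported alongside the write-stop notification
    def start_writing(self, pos):
        self.log.append(f"{self.name}:start-writing@{pos}")

def handover(original, targets):
    """Write handover: targets finish pull+process, the original stops
    writing, and only then do the targets start writing."""
    for t in targets:
        t.wait_processing_done()   # processing-completion notifications
    original.stop_writing()        # processing-termination notification
    pos = original.confirm_stopped()
    for t in targets:
        t.start_writing(pos)       # write-start notifications

log: list = []
handover(FakeNode("orig", log), [FakeNode("t1", log), FakeNode("t2", log)])
print(log)
```

The strict sequencing — every target done processing, then the original confirmed stopped, then write-start — is what rules out two nodes writing the same range concurrently.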
Fig. 3 is a flow chart of a data processing method provided in an embodiment of the present application. As shown in Fig. 3, the scheduling node may split the target range corresponding to the original node into two update ranges and allocate the two update ranges to target node 1 and target node 2 respectively.
The method comprises the following steps:
S201, the scheduling node requests, at the current time t1, the current processing progress C_t1 of the original node corresponding to the target range;
S202, the original node returns the current processing progress to the scheduling node;
S203, the scheduling node distributes the two update ranges obtained by splitting the target range to target node 1 and target node 2 respectively, and sends the current processing progress C_t1 to target node 1 and target node 2 as the initial processing position;
S204, target node 1 and target node 2 perform data pulling and data processing according to the initial processing position and their corresponding update ranges, but do not perform data writing;
S205, after completing the data processing link, target node 1 and target node 2 send a processing completion notification to the scheduling node;
S206, after receiving the processing completion notifications of target node 1 and target node 2, the scheduling node sends a processing termination notification so that the original node stops writing data;
S207, after stopping data writing, the original node sends a write-stop notification to the scheduling node, together with its current processing progress C_t2 at the current time t2 of stopping writing;
S208, the scheduling node notifies target node 1 and target node 2 to start data writing, and sends the received current processing progress C_t2 to target node 1 and target node 2 as the updated initial processing position;
S209, after receiving the notification, target node 1 and target node 2 start data writing from the initial processing position;
S210, target node 1 and target node 2 notify the scheduling node after completing the data writing.
Fig. 4 is a flow chart of a data processing method provided in an embodiment of the present application. As shown in Fig. 4, the scheduling node may merge the target ranges corresponding to original node 1 and original node 2 into one update range and allocate the update range to the target node.
The method comprises the following steps:
S301, the scheduling node requests, at the current time t1, the current processing progress of original node 1 and original node 2, which correspond to the two target ranges respectively;
S302, original node 1 and original node 2 each return their current processing progress to the scheduling node;
S303, the scheduling node takes the slower of the two obtained current processing progresses as the initial processing position, and sends the initial processing position and the update range obtained by merging the two target ranges to the target node;
S304, the target node starts data pulling and data processing according to the initial processing position and the corresponding update range, but does not perform data writing;
S305, after completing the data processing link, the target node sends a processing completion notification to the scheduling node;
S306, after receiving the processing completion notification of the target node, the scheduling node sends a processing termination notification so that original node 1 and original node 2 stop data writing;
S307, after stopping data writing, original node 1 and original node 2 send a write-stop notification to the scheduling node, together with their respective current processing progresses at the current time t2 of stopping writing;
S308, the scheduling node takes the slower of the two received current processing progresses as the updated initial processing position, sends it to the target node, and notifies the target node to start writing;
S309, after receiving the notification, the target node starts data writing from the received initial processing position;
S310, the target node notifies the scheduling node after completing the data writing.
Fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, and as shown in fig. 5, the apparatus of this embodiment may include:
the acquiring module 201 is configured to acquire data ranges of data to be processed corresponding to at least two nodes respectively, and data traffic of the data to be processed corresponding to each of the at least two data ranges within a preset time period;
a target range determining module 202, configured to select at least one target range from each data range according to the data traffic in each data range;
a target node determining module 203, configured to select at least one target node from each node according to a load level corresponding to each node;
the updating module 204 is configured to update the at least one target range, and allocate the obtained at least one update range to the at least one target node, so that the at least one target node processes the data to be processed corresponding to the at least one update range.
As an alternative embodiment, the device further comprises a triggering module for performing at least one of the following steps:
Receiving an update instruction;
presetting a time length at each interval;
detecting that the corresponding data flow in any data range in at least two data ranges is abnormal.
As an alternative embodiment, the target range determining module is specifically configured to:
taking the data range meeting the first preset condition as a target range; the first preset condition comprises that the flow rate in the data range is larger than a first threshold value in a first preset time period;
the updating module is specifically used for:
splitting the target range into a continuous preset number of update ranges, and correspondingly distributing the preset number of update ranges to the preset number of target nodes.
As an alternative embodiment, the target range determining module is specifically configured to:
taking at least two data ranges meeting a second preset condition as at least two target ranges;
the second preset condition comprises that at least two data ranges are adjacent to each other, and the flow rates in the at least two data ranges are smaller than a second threshold value in a second preset time period;
the updating module is specifically used for:
at least two target ranges are merged into an update range and the update range is assigned to at least one target node.
As an alternative embodiment, the target node determining module is specifically configured to:
Determining a load level of the node based on a real-time running state of the node for each node;
taking at least one node meeting a third preset condition as at least one target node; the third preset condition includes the load level of the node being higher than a third threshold.
As an alternative embodiment, the apparatus further comprises a start processing position determining module for:
determining at least one original node corresponding to at least one target range;
acquiring the current processing progress of each original node aiming at the data to be processed;
determining an initial processing position based on the current processing progress corresponding to each original node;
and transmitting the initial processing position to at least one target node so that the at least one target node can process the data corresponding to the at least one updating range based on the initial processing position.
As an alternative embodiment, for each target node, processing the data corresponding to at least one update range based on the start processing location includes:
and pulling initial data of an update range corresponding to the target node from the initial processing position of the first data table in the first system, processing the initial data into target data suitable for the second system, and writing the target data into the second system.
As an alternative embodiment, the apparatus further comprises an interaction module for:
receiving a processing completion notification sent by at least one target node after the current data processing is completed;
sending a processing termination notification to the at least one original node, so that the at least one original node stops writing data of at least one target range corresponding to the at least one original node into the second system, and sending a writing stop notification to the scheduling node;
and receiving a write-stop notification returned by the at least one original node, and correspondingly sending a write-start notification to the at least one target node so that the at least one target node starts to write the data of the at least one update range corresponding to the at least one target node into the second system.
The apparatus of the embodiments of the present application may perform the method provided by the embodiments of the present application, and implementation principles of the method are similar, and actions performed by each module in the apparatus of each embodiment of the present application correspond to steps in the method of each embodiment of the present application, and detailed functional descriptions of each module of the apparatus may be referred to in the corresponding method shown in the foregoing, which is not repeated herein.
An embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory; the processor executes the computer program to implement the steps of the data processing method. Compared with the related art, the following can be achieved: data processing is performed by a plurality of nodes, which avoids the performance-hotspot problem caused by a single-point processing mechanism, reduces the processing pressure on each node, and improves data processing efficiency. Even if one node fails, the other nodes can continue to process data, so that data processing does not stop, improving the availability of the method.
Meanwhile, the data range corresponding to each node is dynamically adjusted according to the data flow in each data range and the load level corresponding to each node, so that the dynamic balance load of each node is realized, idle resources in the system are fully utilized, the utilization rate of the whole system resources is improved, and the data processing efficiency is further improved.
In an alternative embodiment, an electronic device is provided. As shown in fig. 6, the electronic device 4000 includes a processor 4001 and a memory 4003, where the processor 4001 is coupled to the memory 4003, for example via a bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, which may be used for data interaction between this electronic device and other electronic devices, such as sending and/or receiving data. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 4002 may include a path for transferring information between the aforementioned components. Bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean there is only one bus or only one type of bus.
Memory 4003 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage, optical disk storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media, other magnetic storage devices, or any other medium that can be used to carry or store a computer program and that can be read by a computer.
The memory 4003 is used for storing a computer program that executes an embodiment of the present application, and is controlled to be executed by the processor 4001. The processor 4001 is configured to execute a computer program stored in the memory 4003 to realize the steps shown in the foregoing method embodiment.
Embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, may implement the steps and corresponding content of the foregoing method embodiments.
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the present application described herein may be implemented in other sequences than those illustrated or otherwise described.
It should be understood that, although the flowcharts of the embodiments of the present application indicate the respective operation steps by arrows, the order of implementation of these steps is not limited to the order indicated by the arrows. In some implementations of embodiments of the present application, the implementation steps in the flowcharts may be performed in other orders as desired, unless explicitly stated herein. Furthermore, some or all of the steps in the flowcharts may include multiple sub-steps or multiple stages based on the actual implementation scenario. Some or all of these sub-steps or phases may be performed at the same time, or each of these sub-steps or phases may be performed at different times, respectively. In the case of different execution time, the execution sequence of the sub-steps or stages may be flexibly configured according to the requirement, which is not limited in the embodiment of the present application.
The foregoing is merely an optional implementation manner of the implementation scenario of the application, and it should be noted that, for those skilled in the art, other similar implementation manners based on the technical ideas of the application are adopted without departing from the technical ideas of the application, and also belong to the protection scope of the embodiments of the application.

Claims (11)

1. A method of data processing, comprising:
acquiring data ranges of data to be processed that correspond respectively to at least two nodes, and the data traffic of the data to be processed corresponding to each of the at least two data ranges within a preset time period;
selecting at least one target range from the data ranges according to the data traffic in each data range;
selecting at least one target node from the nodes according to the load level corresponding to each node; and
updating the at least one target range, and assigning the resulting at least one update range to the at least one target node, so that the at least one target node processes the data to be processed corresponding to the at least one update range.
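Purely as an illustration (the claim fixes no data structures or assignment policy), the four steps of claim 1 might be sketched in Python as follows; `DataRange`, `rebalance`, the thresholds, and the round-robin assignment are all hypothetical names and choices, not part of the claim:

```python
from dataclasses import dataclass

@dataclass
class DataRange:
    start: int      # inclusive start key of the range
    end: int        # exclusive end key of the range
    traffic: float  # data traffic observed in the preset time period

def rebalance(ranges, node_loads, traffic_threshold, load_threshold):
    """Sketch of claim 1: pick target ranges and target nodes, then assign.

    ranges: list of DataRange (step 1 output); node_loads: node id -> load level.
    Returns node id -> list of DataRange for the node to process (step 4).
    """
    # Step 2: select target ranges by their data traffic.
    targets = [r for r in ranges if r.traffic > traffic_threshold]
    # Step 3: select target nodes by their load level (claim 5 recites a load
    # level higher than a threshold; the scoring itself is not fixed here).
    nodes = sorted(n for n, load in node_loads.items() if load > load_threshold)
    # Step 4: distribute the target ranges over the target nodes round-robin.
    assignment = {n: [] for n in nodes}
    for i, r in enumerate(targets):
        assignment[nodes[i % len(nodes)]].append(r)
    return assignment
```

A caller would feed this the per-range traffic statistics and per-node load levels gathered in step 1 and apply the returned assignment.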
2. The data processing method according to claim 1, wherein acquiring the data ranges of the data to be processed corresponding to the nodes, and the data traffic of the data to be processed corresponding to each data range within the preset time period, is triggered by at least one of the following conditions:
receiving an update instruction;
the elapse of each preset time interval; and
detecting that the data traffic corresponding to any one of the at least two data ranges is abnormal.
3. The data processing method according to claim 1, wherein selecting at least one target range from the data ranges according to the data traffic in each data range comprises:
taking a data range that satisfies a first preset condition as a target range, the first preset condition comprising that the traffic in the data range is greater than a first threshold within a first preset time period;
and wherein updating the at least one target range and assigning the resulting at least one update range to the at least one target node comprises:
splitting the target range into a preset number of contiguous update ranges, and assigning the preset number of update ranges to a corresponding preset number of target nodes.
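A minimal, hypothetical sketch of the split step of claim 3 — cutting one hot range into a preset number of contiguous, non-overlapping sub-ranges. The even-width policy below is an assumption; the claim only requires contiguity:

```python
def split_range(start, end, parts):
    """Split the half-open key range [start, end) into `parts` contiguous,
    non-overlapping sub-ranges that together cover the original range."""
    if parts <= 0 or end <= start:
        raise ValueError("need a positive part count and a non-empty range")
    step, remainder = divmod(end - start, parts)
    ranges, cursor = [], start
    for i in range(parts):
        # Spread the remainder over the first sub-ranges, so widths differ by at most 1.
        width = step + (1 if i < remainder else 0)
        ranges.append((cursor, cursor + width))
        cursor += width
    return ranges
```

Each resulting sub-range would then be assigned to one of the preset number of target nodes.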
4. The data processing method according to claim 1, wherein selecting at least one target range from the data ranges according to the data traffic in each data range comprises:
taking at least two data ranges that satisfy a second preset condition as at least two target ranges, the second preset condition comprising that the at least two data ranges are adjacent to each other and that the traffic in each of them is smaller than a second threshold within a second preset time period;
and wherein updating the at least one target range and assigning the resulting at least one update range to the at least one target node comprises:
merging the at least two target ranges into one update range and assigning that update range to the at least one target node.
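The merge step of claim 4 can be sketched as follows; the tuple representation and the greedy left-to-right merging of cold runs are illustrative assumptions, not recited in the claim:

```python
def merge_adjacent_cold(ranges, second_threshold):
    """Merge runs of adjacent data ranges whose traffic is below the threshold.

    ranges: list of (start, end, traffic) tuples sorted by start; two ranges
    are adjacent when the first one's end equals the second one's start.
    Returns the resulting list of (start, end) ranges.
    """
    merged = []  # entries: (start, end, is_cold_run)
    for start, end, traffic in ranges:
        cold = traffic < second_threshold
        if (merged and cold and merged[-1][2]      # previous run is also cold
                and merged[-1][1] == start):       # and immediately adjacent
            prev_start, _, _ = merged[-1]
            merged[-1] = (prev_start, end, True)   # extend the cold run
        else:
            merged.append((start, end, cold))
    return [(s, e) for s, e, _ in merged]
```

The merged update range would then be assigned to a single target node.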
5. The method according to any one of claims 1 to 4, wherein selecting at least one target node from the nodes according to the load level corresponding to each node comprises:
for each node, determining the load level of the node based on the real-time operational state of the node; and
taking at least one node that satisfies a third preset condition as the at least one target node, the third preset condition comprising that the load level of the node is higher than a third threshold.
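As a sketch of claim 5: derive a load level from each node's real-time operational state, then keep the nodes whose level exceeds the third threshold, as the claim recites. The scoring formula is a hypothetical assumption; the claim does not fix one:

```python
def load_level(state):
    """Hypothetical scoring of a node's real-time operational state:
    the mean of CPU and memory utilisation, each in [0, 1]."""
    return (state["cpu"] + state["mem"]) / 2

def select_target_nodes(node_states, third_threshold):
    """Claim 5 sketch: select nodes whose load level is higher than the
    third threshold. node_states: node id -> {"cpu": ..., "mem": ...}."""
    return sorted(node for node, state in node_states.items()
                  if load_level(state) > third_threshold)
```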
6. The data processing method according to any one of claims 1 to 4, further comprising:
determining at least one original node corresponding to the at least one target range;
acquiring each original node's current processing progress for the data to be processed;
determining a start processing position based on the current processing progress of each original node; and
sending the start processing position to the at least one target node, so that the at least one target node processes the data corresponding to the at least one update range starting from that position.
7. The method according to claim 6, wherein, for each target node, processing the data corresponding to the at least one update range based on the start processing position comprises:
pulling the initial data of the update range corresponding to the target node from the start processing position of a first data table in a first system, converting the initial data into target data suitable for a second system, and writing the target data into the second system.
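Claims 6 and 7 together can be sketched as follows. Taking the minimum progress as the start position is an assumption the claims do not fix (it guarantees no record is skipped, at the cost of reprocessing some); the table-as-list model is likewise purely illustrative:

```python
def determine_start_position(progresses):
    """Claim 6 sketch: derive one start processing position from the original
    nodes' current progress; here, the slowest node's position."""
    return min(progresses)

def process_from(first_table, start, transform):
    """Claim 7 sketch: pull the data of the update range from the first data
    table beginning at the start position, convert each record into the form
    the second system expects, and return the converted records (a real
    implementation would write them into the second system instead)."""
    return [transform(record) for record in first_table[start:]]
```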
8. The data processing method according to claim 6, further comprising:
receiving a processing-completion notification sent by the at least one target node after it finishes the current data processing;
sending a processing-termination notification to the at least one original node, so that the at least one original node stops writing the data of its corresponding at least one target range into a second system and sends a write-stop notification to a scheduling node; and
receiving the write-stop notification returned by the at least one original node, and correspondingly sending a write-start notification to the at least one target node, so that the at least one target node starts writing the data of its corresponding at least one update range into the second system.
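The claim-8 handover (stop the original nodes' writes before the target nodes start writing, so writes to the second system never overlap) might be modelled like this; all class and method names are hypothetical, and the notifications are reduced to in-process calls:

```python
class Node:
    """Hypothetical original/target node; only notification state is modelled."""
    def __init__(self, name):
        self.name = name
        self.writing = False        # whether the node writes to the second system
        self.write_stopped = False  # set once the node confirms it stopped writing

    def stop_writing(self):
        self.writing = False
        self.write_stopped = True   # stands in for the "write-stop notification"

    def start_writing(self):
        self.writing = True         # reaction to the "write-start notification"

class SchedulingNode:
    """Claim 8 sketch: coordinate the handover between original and target nodes."""
    def __init__(self, original_nodes, target_nodes):
        self.original_nodes = original_nodes
        self.target_nodes = target_nodes

    def on_processing_complete(self):
        # A target node reported its data processing complete: send the
        # processing-termination notification to every original node.
        for node in self.original_nodes:
            node.stop_writing()
        # Only after every original node has returned a write-stop
        # notification are the write-start notifications sent out.
        if all(node.write_stopped for node in self.original_nodes):
            for node in self.target_nodes:
                node.start_writing()
```

The ordering matters: the write-start notification is withheld until every original node has confirmed it stopped, which is what prevents concurrent writes of the same ranges.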
9. A data processing apparatus, comprising:
an acquisition module configured to acquire data ranges of data to be processed that correspond respectively to at least two nodes, and the data traffic of the data to be processed corresponding to each of the at least two data ranges within a preset time period;
a target-range determining module configured to select at least one target range from the data ranges according to the data traffic in each data range;
a target-node determining module configured to select at least one target node from the nodes according to the load level corresponding to each node; and
an updating module configured to update the at least one target range and assign the resulting at least one update range to the at least one target node, so that the at least one target node processes the data to be processed corresponding to the at least one update range.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the method of any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
CN202310511950.XA 2023-05-08 2023-05-08 Data processing method, device, electronic equipment and storage medium Pending CN116467086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310511950.XA CN116467086A (en) 2023-05-08 2023-05-08 Data processing method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116467086A true CN116467086A (en) 2023-07-21

Family

ID=87178947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310511950.XA Pending CN116467086A (en) 2023-05-08 2023-05-08 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116467086A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination