CN110928489B - Data writing method and device and storage node - Google Patents

Data writing method and device and storage node

Info

Publication number
CN110928489B
CN110928489B (granted publication); application number CN201911031856.4A
Authority
CN
China
Prior art keywords
preset time
write requests
time period
time
aggregation queue
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN201911031856.4A
Other languages
Chinese (zh)
Other versions
CN110928489A (en)
Inventor
谭春华
黄勇辉
Current Assignee
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd
Priority to CN201911031856.4A
Publication of CN110928489A (application publication)
Application granted
Publication of CN110928489B (granted publication)
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G06F 3/0611: Improving I/O performance in relation to response time
    • G06F 3/0628: Interfaces making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0668: Interfaces adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

A data writing method, a data writing apparatus, and a storage node, in the field of storage technology, which address the problem of high aggregation latency when the number of write requests is small. The data writing method is applied to a storage node in a distributed storage system. The storage node determines a first concurrency number and a second concurrency number, where the first concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a second preset time period. If the difference between the first concurrency number and the second concurrency number is less than or equal to a preset value, the storage node calculates a target concurrency number from the first and second concurrency numbers, aggregates the data of the write requests in the aggregation queue according to the target concurrency number, and writes the aggregated data into a distributed cache resource pool.

Description

Data writing method and device and storage node
Technical Field
The present application relates to the field of storage technologies, and in particular, to a data writing method and apparatus, and a storage node.
Background
In a distributed storage system, to reduce write latency, a storage node generally returns a response message to the host as soon as data has been written to its cache. Later, when the data held in the cache reaches a certain volume, the storage node writes the cached data to disk, i.e., it completes the flush-to-disk (destage) operation.
In practice, a storage node may receive a large number of write requests within a short time. To further reduce write latency, the storage node writes the data of each received write request into an aggregation queue. When the amount of data in the aggregation queue reaches a preset threshold, or data has stayed in the queue longer than a preset duration, the storage node aggregates the queued data and writes the aggregated data into the cache. When only a few write requests arrive, however, this approach introduces an aggregation delay, which in turn increases the write latency.
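The baseline threshold-or-timeout policy described in the background can be sketched as follows. This is a minimal illustration in Python; the class and parameter names are hypothetical and not taken from the patent.

```python
import time

class AggregationQueue:
    """Sketch of the baseline policy: flush when the queued data reaches a
    size threshold OR the oldest entry has waited longer than max_wait."""

    def __init__(self, size_threshold_bytes, max_wait_seconds):
        self.size_threshold = size_threshold_bytes
        self.max_wait = max_wait_seconds
        self.entries = []          # list of (enqueue_time, data)
        self.total_bytes = 0

    def enqueue(self, data):
        # Record the enqueue time alongside the payload.
        self.entries.append((time.monotonic(), data))
        self.total_bytes += len(data)

    def should_flush(self, now=None):
        if not self.entries:
            return False
        now = time.monotonic() if now is None else now
        oldest_wait = now - self.entries[0][0]
        return (self.total_bytes >= self.size_threshold
                or oldest_wait >= self.max_wait)

    def flush(self):
        # Aggregate all queued payloads into one write and reset the queue.
        aggregated = b"".join(data for _, data in self.entries)
        self.entries.clear()
        self.total_bytes = 0
        return aggregated
```

Under this policy, a lone small write may sit in the queue until the timeout fires, which is exactly the aggregation delay the patent sets out to reduce.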
Disclosure of Invention
The present application provides a data writing method, a data writing apparatus, and a storage node, which address the problem of high aggregation latency when the number of write requests is small.
To this end, the following technical solutions are adopted:
In a first aspect, a data writing method is provided, applied to a storage node in a distributed storage system. Specifically, the storage node determines a first concurrency number and a second concurrency number. The first concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a second preset time period. The start times of both preset time periods lie before the current time, and the time difference between each start time and the current time is less than or equal to a first preset time length. Then, if the difference between the first concurrency number and the second concurrency number is less than or equal to a preset value, the storage node calculates a target concurrency number from the first and second concurrency numbers, aggregates the data of the write requests in the aggregation queue according to the target concurrency number, and writes the aggregated data into the distributed cache resource pool. The target concurrency number represents the number of concurrent write requests in the aggregation queue per unit time of a first duration, where the first duration comprises the first preset time period and the second preset time period.
The storage node determines the aggregation number of write requests at the current time (i.e., the target concurrency number) by analyzing how many write requests entered the aggregation queue during the first and second preset time periods, and aggregates the write requests in the aggregation queue accordingly. The storage node can therefore apply different aggregation numbers under different conditions, according to actual service demand, which effectively reduces the aggregation latency.
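The core decision of the first aspect can be sketched in a few lines of Python. All names here are hypothetical, and the averaging formula is an assumption: the patent only states that the target is calculated from the two concurrency numbers, without giving the formula.

```python
def target_concurrency(first, second, preset_delta):
    """Return a target concurrency number if the two sampled windows are
    stable (their difference is within preset_delta); otherwise return
    None to signal that a further window must be sampled.  Averaging the
    two values is an assumption, not the patent's stated formula."""
    if abs(first - second) <= preset_delta:
        return (first + second) / 2
    return None
```

For example, with concurrency numbers 4 and 5 and a preset value of 2, the windows are stable and a target of 4.5 would be derived; with 4 and 10 they are not, and another window would be sampled.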
Optionally, in a possible implementation manner of the present application, the storage node further receives a first write request, and determines whether a size of data in the first write request is smaller than a size of a stripe in the distributed cache resource pool (the size of the stripe is preset). And if the size of the data in the first write request is smaller than the size of the stripe, the storage node writes the first write request into the aggregation queue. And if the size of the data in the first write request is larger than or equal to the size of the stripe, the storage node writes the first write request into the distributed cache resource pool.
The distributed cache resource pool comprises a plurality of sub-storage spaces, and each sub-storage space can be represented by a stripe. The storage node determines whether to write the first write request into the aggregation queue according to the stripe size and the size of the data in the first write request.
Optionally, in another possible implementation of the present application, the storage node determines the first and second concurrency numbers as follows: the storage node determines the number of write requests and the number of non-concurrent write requests in the aggregation queue within the first preset time period, and likewise within the second preset time period; it then calculates the first concurrency number from the counts of the first preset time period, and the second concurrency number from the counts of the second preset time period.
Optionally, in another possible implementation of the present application, the storage node determines the number of non-concurrent write requests in the aggregation queue within the first and second preset time periods as follows. The storage node judges whether the time difference between the enqueue time of the i-th write request (1 ≤ i ≤ m, where m is the number of write requests entering the aggregation queue within the first preset time period) and the enqueue time of the (i-1)-th write request is less than a second preset time length; the enqueue time is the time at which a request enters the aggregation queue. If this time difference is greater than the second preset time length, the storage node increases the count of non-concurrent write requests in the aggregation queue within the first preset time period by one. Similarly, the storage node judges whether the time difference between the enqueue time of the j-th write request (1 ≤ j ≤ n, where n is the number of write requests entering the aggregation queue within the second preset time period) and the enqueue time of the (j-1)-th write request is less than the second preset time length; if it is greater, the count of non-concurrent write requests within the second preset time period is increased by one. When i = 1, the number of non-concurrent write requests in the aggregation queue within the first preset time period is one; likewise, when j = 1, the count within the second preset time period is one.
It is easy to understand that if the time difference between the enqueue times of two adjacent write requests entering the aggregation queue is greater than the second preset time length, the two requests can be considered non-concurrent.
In a second aspect, a data writing device is provided, which is capable of implementing the functions of the first aspect and any one of its possible implementations. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In a possible implementation of this application, the data writing device may include a determining unit, a judging unit, a calculating unit, an aggregating unit, and a writing unit, which perform the corresponding functions of the data writing method of the first aspect and any of its possible implementations. For example: the determining unit is configured to determine a first concurrency number and a second concurrency number, where the first concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a first preset time period and the second concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a second preset time period; the start times of both preset time periods lie before the current time, and the time difference between each start time and the current time is less than or equal to a first preset time length. The judging unit is configured to judge whether the difference between the first and second concurrency numbers determined by the determining unit is less than or equal to a preset value.
And the calculating unit is used for calculating a target concurrency number according to the first concurrency number and the second concurrency number if the judging unit determines that the difference value between the first concurrency number and the second concurrency number is smaller than or equal to a preset value, wherein the target concurrency number is used for representing the number of concurrent write requests in the aggregation queue within unit time of the first time length, and the first time length comprises a first preset time period and a second preset time period. And the aggregation unit is used for aggregating the data of the write requests in the aggregation queue according to the target concurrency number calculated by the calculation unit. And the writing unit is used for writing the data aggregated by the aggregation unit into the distributed cache resource pool.
In a third aspect, a storage node is provided, which is applied to a distributed storage system. The storage node includes: a processor, and a cache. The cache is coupled with the processor, and the cache stores program codes; the processor calls the program code in the cache to execute the data writing method of the first aspect and its various possible implementations.
Optionally, the storage node further includes a transceiver, and the transceiver may be configured to perform the step of transceiving data, signaling, or information in the data writing method according to the first aspect and any one of the possible implementation manners of the first aspect, for example, obtain the first write request.
In a fourth aspect, a computer-readable storage medium having computer instructions stored therein is also provided; when the computer instructions are run on a computer, the computer performs the data writing method as described above in the first aspect and its various possible implementations.
In a fifth aspect, there is also provided a computer program product, which includes computer instructions, when the computer instructions are run on a computer, cause the computer to execute the data writing method according to the first aspect and its various possible implementations.
It should be noted that all or part of the computer instructions may be stored in a computer storage medium; the computer storage medium may be packaged together with the processor or packaged separately from it, which is not limited in this application.
For the descriptions of the second aspect, the third aspect, the fourth aspect, the fifth aspect and various implementations thereof in this application, reference may be made to the detailed description of the first aspect and various implementations thereof; moreover, the beneficial effects of the second aspect, the third aspect, the fourth aspect, the fifth aspect and various implementation manners thereof may refer to the beneficial effect analysis of the first aspect and various implementation manners thereof, and are not described herein again.
In the present application, the names of the above data writing devices do not limit the devices or functional modules themselves; in actual implementations, these devices or functional modules may appear under other names. As long as the functions of the respective devices or functional modules are similar to those described in this application, they fall within the scope of the claims of this application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic structural diagram of a distributed storage system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a data writing method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a data writing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer program product according to an embodiment of the present invention.
Detailed Description
In a distributed storage system, a storage node usually adopts a high-performance storage medium, such as a Solid State Drive (SSD), as a cache medium. In order to reduce the write latency, the storage node generally feeds back a response message to the host after writing data to the cache. And then, when the data stored in the cache reaches a certain data volume, the storage node writes the data in the cache into a disk.
In order to further reduce the write latency, after receiving each write request, the storage node writes the data in the write request into the aggregation queue. And when the quantity of the data in the aggregation queue reaches a preset threshold value or the time of the data in the aggregation queue exceeds a preset duration, the storage node aggregates the data in the aggregation queue and writes the aggregated data into the cache. However, in the case of a small number of write requests, this method introduces an aggregation delay, which in turn increases the write delay.
To solve the foregoing problems, an embodiment of the present invention provides a data writing method applied to a distributed storage system. A storage node analyzes, in real time, the write requests entering its aggregation queue, determines from the analysis an aggregation number of write requests (corresponding to the target concurrency number in the embodiment of the present invention), and aggregates the queued write requests according to that number. The storage node can thus apply different aggregation numbers under different conditions, according to actual service demand, effectively reducing the aggregation latency.
The data writing method provided by the embodiment of the invention is suitable for a distributed storage system. Fig. 1 shows the architecture of the distributed storage system. As shown in fig. 1, the distributed storage system provided by the embodiment of the present invention includes at least one service node 10 and at least two storage nodes (e.g., a storage node a and a storage node B). For each storage node, the storage node may communicate with each service node 10.
The service node 10 may determine which storage node is to store given data and send a write request or a read request to the determined storage node; it may also communicate with a client, for example receiving a write request or a read request sent by the client.
The storage nodes are used for storing data. As shown in fig. 1, the storage node a may include a controller 110 and a hard disk 111.
The service node 10 may send a write request to the controller 110, and after receiving the write request, the controller 110 writes data carried in the write request into the hard disk 111. The service node 10 may also send a read request to the controller 110, and after receiving the read request, the controller 110 obtains data to be read and sends the data to be read to the service node 10.
The hard disks of at least two storage nodes are constructed into a shared distributed storage resource pool through a distributed technology. Any storage node can acquire data of the distributed storage resource pool.
As shown in fig. 1, controller 110 includes at least a processor 1101 and a cache 1102.
Processor 1101 is a Central Processing Unit (CPU). In an embodiment of the present invention, processor 1101 may be configured to receive read requests and write requests from service node 10, and process the read requests and the write requests.
The cache 1102 is used to temporarily store data received from the service node 10 or data read from the hard disk 111. When receiving multiple write requests sent by service node 10, controller 110 may temporarily store data in the multiple write requests in cache 1102. When the capacity of the cache 1102 reaches a certain threshold, the data stored in the cache 1102 is stored in the distributed storage resource pool. Cache 1102 includes volatile memory, non-volatile memory, or a combination thereof. Volatile memory is, for example, random-access memory (RAM). Non-volatile memory such as floppy disks, hard disks, SSDs, optical disks, and various other machine readable and writable media on which program code may be stored.
The caches of the at least two storage nodes are likewise built, through distributed technology, into a shared distributed cache resource pool, which is used jointly by all services.
In this embodiment of the present invention, the service node 10 and the storage node may be a physical machine (e.g., a server), a virtual machine, or any other device for providing a storage service, which is not specifically limited in this embodiment of the present invention.
The following describes a data writing method provided by an embodiment of the present invention with reference to the distributed storage system shown in fig. 1. Since the processing procedure of each storage node is the same, the embodiment of the present invention is described by taking the processing procedure of the storage node a as an example.
The data writing method provided by the embodiment of the invention can be applied to the controller 110 shown in fig. 1, and the following steps are executed by the processor 1101 unless otherwise specified. As shown in fig. 2, a data writing method provided in an embodiment of the present invention includes:
s200, the processor 1101 receives a plurality of write requests.
In practical applications, the multiple write requests received by the processor 1101 may be split from a single data-write request or from several data-write requests, and they may come from the same service node 10 or from different service nodes 10; the embodiment of the present invention does not limit this.
S201, for each received write request, the processor 1101 determines whether the size of data in the write request is smaller than the size of a stripe in the distributed cache resource pool.
The distributed cache resource pool comprises a plurality of sub-storage spaces, and each sub-storage space can be represented by a stripe. The stripe size in the distributed cache resource pool is preset, for example 32 kbit.
If the size of the data in a write request (taking the first write request as an example) is equal to the size of the stripe, the processor 1101 may write the data in the first write request in one stripe. If the size of the data in the first write request is greater than the size of the stripe, the processor 1101 may write the data in the first write request in a plurality of stripes. That is, if the size of the data in the first write request is greater than or equal to the size of the stripe, the processor 1101 continues to execute S202 described below. If the size of the data in the first write request is smaller than the size of the stripe, the processor 1101 may write the first write request into the aggregation queue, that is, the processor 1101 continues to execute S203 described below.
S202, if the size of the data in the write request is greater than or equal to the size of the stripe, the processor 1101 writes the write request into the distributed cache resource pool.
Specifically, if the size of the data in the write request is greater than or equal to the size of the stripe, the processor 1101 writes the write request into the stripe of the distributed cache resource pool.
S203, if the size of the data in the write request is smaller than the size of the stripe, the processor 1101 writes the write request into the aggregation queue.
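Steps S201 to S203 amount to a simple size-based routing decision. A hypothetical sketch (function and destination names are assumptions for illustration):

```python
def route_write_request(data: bytes, stripe_size: int) -> str:
    """Route a write request by comparing its payload size to the preset
    stripe size of the distributed cache resource pool (S201).  Smaller
    writes go to the aggregation queue (S203); stripe-sized or larger
    writes go straight to the cache pool (S202)."""
    if len(data) < stripe_size:
        return "aggregation_queue"
    return "distributed_cache_pool"
```

A write exactly one stripe long fills a stripe on its own, and a larger write spans several stripes, so neither gains anything from aggregation; only sub-stripe writes are queued.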
For a write request written into the aggregation queue, the processor 1101 performs the processes described in S204 to S208 below.
S204, the processor 1101 determines the number of write requests and the number of non-concurrent write requests in the aggregation queue in a first preset time period, and determines the number of write requests and the number of non-concurrent write requests in the aggregation queue in a second preset time period.
The starting time of the first preset time period and the starting time of the second preset time period are both before the current time. The time difference between the starting time and the current time of the first preset time period is less than or equal to a first preset time length. The time difference between the starting time and the current time of the second preset time period is less than or equal to the first preset time length.
The duration of the first preset time period and the duration of the second preset time period are self-defined by the system side or the user side, which is not limited in the embodiment of the present invention. In addition, there may be an overlapping duration between the first preset time period and the second preset time period, or two mutually independent time periods, which is not limited in the embodiment of the present invention.
For each write request entering the aggregation queue, processor 1101 also records the time at which the request entered the queue, i.e., its enqueue time.
The processor 1101 may start a counter for recording the number of write requests entering the aggregation queue during a first preset time period and for recording the number of write requests entering the aggregation queue during a second preset time period.
The processor 1101 may determine the number of non-concurrent write requests in the aggregation queue within the first preset time period as follows: it judges whether the time difference between the enqueue time of the i-th write request (1 ≤ i ≤ m, where m is the number of write requests entering the aggregation queue within the first preset time period) and the enqueue time of the (i-1)-th write request is less than (or less than or equal to) a second preset time length (e.g., 100 microseconds). If the time difference is greater than (or greater than or equal to) the second preset time length, the processor 1101 increases the count of non-concurrent write requests within the first preset time period by one. When i = 1, the number of non-concurrent write requests in the aggregation queue within the first preset time period is one.
Similarly, the processor 1101 may determine the number of non-concurrent write requests in the aggregation queue within the second preset time period as follows: it judges whether the time difference between the enqueue time of the j-th write request (1 ≤ j ≤ n, where n is the number of write requests entering the aggregation queue within the second preset time period) and the enqueue time of the (j-1)-th write request is less than (or less than or equal to) the second preset time length. If this time difference is greater than (or greater than or equal to) the second preset time length, the processor 1101 increases the count of non-concurrent write requests within the second preset time period by one. When j = 1, the number of non-concurrent write requests in the aggregation queue within the second preset time period is one.
Illustratively, the second preset time length is 100 microseconds. Within a certain time period, the first write request entering the aggregation queue is write request 1 and the second is write request 2; the processor 1101 records time 1 as the moment write request 1 enters the queue and time 2 as the moment write request 2 enters. If the difference between time 2 and time 1 is greater than 100 microseconds, the processor 1101 sets the non-concurrency count for the period to the initial value plus one, that is, 1 + 1 = 2. The processor 1101 also determines the number of write requests in the period to be 2.
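The gap-based counting just illustrated can be written compactly. A sketch (function name hypothetical): a request counts as non-concurrent when its enqueue gap from the previous request exceeds the second preset time length.

```python
def count_non_concurrent(enqueue_times, gap_threshold):
    """Count non-concurrent write requests in one window.  The first
    request always counts as one (the i == 1 case); each later request
    whose gap from its predecessor exceeds gap_threshold adds one."""
    if not enqueue_times:
        return 0
    count = 1
    for prev, cur in zip(enqueue_times, enqueue_times[1:]):
        if cur - prev > gap_threshold:
            count += 1
    return count
```

With the worked example above (two requests more than 100 microseconds apart and a 100-microsecond threshold), the count is 2; a burst of requests all within the threshold counts as 1.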
S205, the processor 1101 calculates a first concurrency number according to the number of write requests and the number of non-concurrent write requests in the aggregation queue in the first preset time period, and calculates a second concurrency number according to the number of write requests and the number of non-concurrent write requests in the aggregation queue in the second preset time period.
The first concurrency number is the number of concurrent write requests in the aggregation queue within the unit time of the first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue within the unit time of the second preset time period.
In one implementation, the processor 1101 determines the ratio of the number of write requests to the number of non-concurrent write requests in the aggregation queue within the first preset time period as the first concurrency number, and the ratio of the number of write requests to the number of non-concurrent write requests in the aggregation queue within the second preset time period as the second concurrency number.
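The ratio computed in S205 can be expressed as a one-line helper; the function name and the zero-guard are illustrative assumptions, not part of the patent.

```python
def concurrency_number(num_writes: int, num_non_concurrent: int) -> float:
    """Concurrency number of a window: write requests per non-concurrent group
    (the ratio described in S205). Returns 0.0 for an empty window."""
    if num_non_concurrent == 0:
        return 0.0
    return num_writes / num_non_concurrent
```

With the later worked example, `concurrency_number(15, 3)` yields 5.0 and `concurrency_number(8, 2)` yields 4.0.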
S206, the processor 1101 determines whether a difference between the first concurrency number and the second concurrency number is less than or equal to a preset value.
If the difference between the first concurrency number and the second concurrency number is less than or equal to the preset value, the number of write requests entering the aggregation queue in the first preset time period and the second preset time period is stable, and the processor 1101 can determine the pattern of write requests entering the aggregation queue. In this case, the processor 1101 may determine the target concurrency number (see the description below), i.e., execute S207.
If the difference between the first concurrency number and the second concurrency number is greater than the preset value, the number of write requests entering the aggregation queue in the first preset time period and the second preset time period is unstable, and the processor 1101 cannot accurately determine the pattern of write requests entering the aggregation queue within the first duration (which includes the first preset time period and the second preset time period). In this case, the processor 1101 further needs to determine the number of concurrent write requests per unit time in additional preset time periods (for example, a third preset time period), and re-execute S204 until the pattern of write requests entering the aggregation queue is determined.
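The stability check of S206 and the "extend to further windows" retry of S204 can be sketched as a scan over successive window statistics. This is a hypothetical simplification: the helper name, the `(writes, non_concurrent)` tuple input, and returning the first stable pair are all assumptions.

```python
from typing import List, Optional, Tuple

def find_stable_concurrency(
    window_stats: List[Tuple[int, int]], preset_diff: float = 1.0
) -> Optional[Tuple[float, float]]:
    """Scan per-window (num_writes, num_non_concurrent) statistics and return
    the first pair of adjacent concurrency numbers whose difference is within
    preset_diff (the stable case of S206); None if stability is never reached."""
    prev: Optional[float] = None
    for writes, non_concurrent in window_stats:
        cur = writes / non_concurrent
        if prev is not None and abs(cur - prev) <= preset_diff:
            return prev, cur
        prev = cur  # unstable so far: extend to the next window and retry
    return None
```

With the later worked example, `find_stable_concurrency([(5, 1), (8, 2)])` returns `(5.0, 4.0)` because the difference is exactly the preset value 1.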
S207, the processor 1101 determines a target concurrency number according to the first concurrency number and the second concurrency number.
The target concurrency number is used for representing the number of concurrent write requests in the aggregation queue within a unit time of a first duration, and the first duration comprises a first preset time period and a second preset time period.
Alternatively, if the first concurrency number and the second concurrency number are equal, the processor 1101 determines that the target concurrency number is the first concurrency number or the second concurrency number.
For example, suppose the second preset time period is 100 microseconds. The number of write requests entering the aggregation queue within 1-100 microseconds is 5. After an interval of 200 microseconds, the number of write requests entering the aggregation queue within 300-400 microseconds is also 5. After another interval of 200 microseconds, the number of write requests entering the aggregation queue within 600-700 microseconds is also 5. The first preset time period is 1-300 microseconds, and the second preset time period is 1-700 microseconds. The number of write requests entering the aggregation queue in the first preset time period is 5 and the number of non-concurrent write requests is 1, so the first concurrency number is 5/1 = 5. The number of write requests entering the aggregation queue in the second preset time period is 5 + 5 + 5 = 15 and the number of non-concurrent write requests is 3, so the second concurrency number is 15/3 = 5. The first concurrency number equals the second concurrency number, and thus the processor 1101 determines that the target concurrency number is 5.
Optionally, if the first concurrency number and the second concurrency number are not equal, the processor 1101 may determine the target concurrency number by using any one of the following formulas:

target concurrency number = ⌊min(first concurrency number, second concurrency number)⌋ (1)

target concurrency number = ⌈min(first concurrency number, second concurrency number)⌉ (2)

target concurrency number = ⌊max(first concurrency number, second concurrency number)⌋ (3)

target concurrency number = ⌈max(first concurrency number, second concurrency number)⌉ (4)

where min(first concurrency number, second concurrency number) represents the minimum of the first concurrency number and the second concurrency number, max(first concurrency number, second concurrency number) represents the maximum of the two, ⌊·⌋ represents rounding down, and ⌈·⌉ represents rounding up.
For example, suppose the preset value is 1 and the second preset time period is 100 microseconds. The number of write requests entering the aggregation queue within 1-100 microseconds is 5. After an interval of 200 microseconds, the number of write requests entering the aggregation queue within 300-400 microseconds is 3. After another interval of 200 microseconds, the number of write requests entering the aggregation queue within 600-700 microseconds is 5. The first preset time period is 1-400 microseconds, and the second preset time period is 1-700 microseconds. The number of write requests entering the aggregation queue in the first preset time period is 5 + 3 = 8 and the number of non-concurrent write requests is 2, so the first concurrency number is 8/2 = 4. The number of write requests entering the aggregation queue in the second preset time period is 5 + 3 + 5 = 13 and the number of non-concurrent write requests is 3, so the second concurrency number is 13/3. The first concurrency number and the second concurrency number are not equal, but the difference between the first concurrency number 4 and the second concurrency number 13/3 is less than the preset value 1, so the processor 1101 can determine the target concurrency number from the first concurrency number 4 and the second concurrency number 13/3. If the processor 1101 determines the target concurrency number using the above equation (1), the target concurrency number is 4.
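The candidate formulas can be sketched as follows. Note that the formula images in the published text are not legible; the four floor/ceiling-of-min/max variants are reconstructed from the surrounding description, and the helper name and `variant` parameter are assumptions.

```python
import math

def target_concurrency(c1: float, c2: float, variant: int = 1) -> int:
    """Compute a target concurrency number from two unequal concurrency numbers,
    assuming the four variants are floor/ceil of min/max of the two values."""
    if variant == 1:
        return math.floor(min(c1, c2))   # assumed equation (1)
    if variant == 2:
        return math.ceil(min(c1, c2))    # assumed equation (2)
    if variant == 3:
        return math.floor(max(c1, c2))   # assumed equation (3)
    return math.ceil(max(c1, c2))        # assumed equation (4)
```

Under this reading, `target_concurrency(4, 13/3, variant=1)` reproduces the worked example's result of 4.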
S208, the processor 1101 aggregates the data of the write requests in the aggregation queue according to the target concurrency number.
S209, the processor 1101 writes the aggregated data into the distributed cache resource pool.
In the data writing method provided in the embodiment of the present invention, the processor 1101 may periodically execute S204 to S209. In this way, the processor 1101 may determine different target concurrency numbers under different situations according to actual requirements.
After S202 and S209, the data in the distributed cache resource pool may be written into the distributed storage resource pool according to actual needs or pre-configuration.
The storage node analyzes a plurality of write requests entering an aggregation queue of the storage node in real time, determines an aggregation number (namely, the target concurrency number) of the write requests according to an analysis result, and aggregates the write requests in the aggregation queue according to the determined aggregation number. Therefore, the storage node in the embodiment of the invention can aggregate the data in the aggregation queue by adopting different aggregation numbers under different conditions according to actual service requirements, thereby effectively reducing the aggregation time delay.
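The aggregation step of S208-S209 can be sketched as batching queued payloads by the target concurrency number. This is a hedged sketch under assumed names; the real storage node writes each aggregated buffer into the distributed cache resource pool rather than returning it.

```python
from collections import deque
from typing import Deque, List

def aggregate_and_flush(queue: Deque[bytes], target: int) -> List[bytes]:
    """Pop up to `target` queued write payloads at a time, concatenate each
    batch into one aggregated buffer, and return the list of buffers that
    would be written to the cache pool."""
    flushed: List[bytes] = []
    while queue:
        batch = [queue.popleft() for _ in range(min(target, len(queue)))]
        flushed.append(b"".join(batch))
    return flushed
```

For example, with a target concurrency number of 2, three queued payloads are flushed as two writes instead of three.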
The scheme provided by the embodiment of the present invention has mainly been introduced from the perspective of the method. To implement the above functions, the storage node includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiment of the present invention, the data writing device may be divided into function modules according to the method example above; for example, each function module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software function module. It should be noted that the division of modules in the embodiment of the present invention is schematic and is only a logical function division; there may be other division manners in actual implementation.
Fig. 3 is a schematic structural diagram of a data writing device 30 according to an embodiment of the present invention. The data writing device 30 is used to execute the data writing method shown in fig. 2. The data writing device 30 may include a determining unit 301, a judging unit 302, a calculating unit 303, an aggregating unit 304, and a writing unit 305.
The determining unit 301 is configured to determine a first concurrency number and a second concurrency number; the first concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue per unit time of a second preset time period; the starting time of the first preset time period and the starting time of the second preset time period are both before the current time; the time difference between the starting time of the first preset time period and the current time is less than or equal to a first preset time length; the time difference between the starting time of the second preset time period and the current time is less than or equal to the first preset time length. For example, in conjunction with fig. 2, the determining unit 301 may be configured to perform S204, S205, and S207.
A judging unit 302, configured to judge whether the difference between the first concurrency number and the second concurrency number determined by the determining unit 301 is less than or equal to a preset value. For example, in conjunction with fig. 2, the judging unit 302 may be configured to perform S206.
A calculating unit 303, configured to calculate a target concurrency number according to the first concurrency number and the second concurrency number if the judging unit 302 determines that the difference between the first concurrency number and the second concurrency number is less than or equal to the preset value, where the target concurrency number indicates the number of concurrent write requests in the aggregation queue per unit time of a first duration, and the first duration includes the first preset time period and the second preset time period. For example, in conjunction with fig. 2, the calculating unit 303 may be configured to perform S207.
An aggregating unit 304, configured to aggregate the data of the write requests in the aggregation queue according to the target concurrency number calculated by the calculating unit 303. For example, in conjunction with fig. 2, the aggregating unit 304 may be configured to perform S208.
A writing unit 305, configured to write the data aggregated by the aggregating unit 304 into the distributed cache resource pool. For example, in conjunction with fig. 2, the writing unit 305 may be configured to perform S209.
Optionally, the data writing device 30 further includes a receiving unit 306. The receiving unit 306 is configured to receive a first write request. For example, in conjunction with fig. 2, the receiving unit 306 may be configured to perform S200. The judging unit 302 is further configured to judge whether the size of the data in the first write request received by the receiving unit 306 is less than the size of a stripe in the distributed cache resource pool, where the size of the stripe is preset. For example, in conjunction with fig. 2, the judging unit 302 may be configured to perform S201. The writing unit 305 is further configured to write the first write request into the aggregation queue if the judging unit 302 determines that the size of the data in the first write request is less than the size of the stripe, and to write the first write request into the distributed cache resource pool if the judging unit 302 determines that the size of the data in the first write request is greater than or equal to the size of the stripe. In conjunction with fig. 2, the writing unit 305 may be configured to perform S202 and S203.
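The size-based routing performed by the judging and writing units (S201-S203) can be sketched as follows. The stripe size, function name, and list-based queue/pool stand-ins are illustrative assumptions, not the patent's API.

```python
STRIPE_SIZE = 64 * 1024  # illustrative stripe size; the real value is preconfigured

def route_write(data: bytes, aggregation_queue: list, cache_pool: list) -> str:
    """Route an incoming write request: data smaller than one stripe enters
    the aggregation queue for later aggregation; data of stripe size or
    larger is written directly to the distributed cache resource pool."""
    if len(data) < STRIPE_SIZE:
        aggregation_queue.append(data)
        return "queued"
    cache_pool.append(data)
    return "cached"
```

A small write is thus queued for aggregation, while a full-stripe write bypasses the queue entirely.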
Optionally, the determining unit 301 is specifically configured to: determining the number of write requests and the number of non-concurrent write requests in a convergence queue in a first preset time period, and determining the number of write requests and the number of non-concurrent write requests in the convergence queue in a second preset time period; calculating to obtain a first concurrent number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue within a first preset time period; and calculating to obtain a second concurrency number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue in a second preset time period.
Optionally, the judging unit 302 is further configured to judge whether the time difference between the enqueue time of the i-th write request and the enqueue time of the (i-1)-th write request is less than a second preset time period, where the enqueue time is the time of entering the aggregation queue, 1 ≤ i ≤ m, and m represents the number of write requests entering the aggregation queue within the first preset time period. Correspondingly, the determining unit 301 is specifically configured to determine that the number of non-concurrent write requests in the aggregation queue within the first preset time period is increased by one if the judging unit 302 determines that the time difference between the enqueue time of the i-th write request and the enqueue time of the (i-1)-th write request is greater than the second preset time period. The judging unit 302 is further configured to judge whether the time difference between the enqueue time of the j-th write request and the enqueue time of the (j-1)-th write request is less than the second preset time period, where 1 ≤ j ≤ n and n represents the number of write requests entering the aggregation queue within the second preset time period. The determining unit 301 is specifically configured to determine that the number of non-concurrent write requests in the aggregation queue within the second preset time period is increased by one if the judging unit 302 determines that the time difference between the enqueue time of the j-th write request and the enqueue time of the (j-1)-th write request is greater than the second preset time period. When i is 1, the number of non-concurrent write requests in the aggregation queue within the first preset time period is one; when j is 1, the number of non-concurrent write requests in the aggregation queue within the second preset time period is one.
Of course, the data writing device 30 provided by the embodiment of the present invention includes, but is not limited to, the above modules, for example, the data writing device 30 may further include the storage unit 307. The storage unit 307 may be used to store the program code of the data writing device 30, and may also be used to store data generated by the data writing device 30 during operation, such as data in a write request.
In actual implementation, the determining unit 301, the judging unit 302, the calculating unit 303, the aggregating unit 304, and the writing unit 305 may be implemented by the processor 1101 shown in fig. 1 calling the program code in the cache 1102. For a specific implementation process, reference may be made to the description of the data writing method portion shown in fig. 2, which is not described herein again.
Another embodiment of the present invention further provides a storage node, which includes the above data writing device. The structure of the storage node can refer to the structure of the storage node in fig. 1, and is not described herein again.
Another embodiment of the present invention further provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the method shown in the above method embodiment.
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles of manufacture.
Fig. 4 schematically illustrates a conceptual partial view of a computer program product comprising a computer program for executing a computer process on a computing device provided by an embodiment of the invention.
In one embodiment, the computer program product is provided using a signal bearing medium 410. The signal bearing medium 410 may include one or more program instructions that, when executed by one or more processors, may provide the functions or portions of the functions described above with respect to fig. 2. Thus, for example, referring to the embodiment shown in FIG. 2, one or more features of S200-S209 may be undertaken by one or more instructions associated with the signal bearing medium 410. The program instructions depicted in FIG. 4 are likewise examples.
In some examples, signal bearing medium 410 may include a computer readable medium 411, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disc (DVD), a digital tape, a memory, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
In some implementations, the signal bearing medium 410 may comprise a computer recordable medium 412 such as, but not limited to, a memory, a read/write (R/W) CD, a R/W DVD, and the like.
In some implementations, the signal bearing medium 410 may include a communication medium 413, such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
The signal bearing medium 410 may be conveyed by a wireless form of communication medium 413, such as a wireless communication medium compliant with the IEEE 802.11 standard or another transport protocol. The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
In some examples, a data writing apparatus, such as that described with respect to fig. 2, may be configured to provide various operations, functions, or actions in response to one or more program instructions through computer-readable medium 411, computer-recordable medium 412, and/or communication medium 413.
It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and that some elements may be omitted altogether depending upon the desired results. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented using a software program, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. The processes or functions according to the embodiments of the present invention occur, in whole or in part, when the computer instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
The foregoing is only illustrative of the present invention. Those skilled in the art can conceive of changes or substitutions based on the specific embodiments provided by the present invention, and all such changes or substitutions are intended to be included within the scope of the present invention.

Claims (10)

1. A data writing method is applied to storage nodes in a distributed storage system, and comprises the following steps:
determining a first concurrency number and a second concurrency number; the first concurrency number is the number of concurrent write requests in the aggregation queue within unit time of a first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue within unit time of a second preset time period; the starting time of the first preset time period and the starting time of the second preset time period are both before the current time; the time difference between the starting time of the first preset time period and the current time is less than or equal to a first preset time length; the time difference between the starting time of the second preset time period and the current time is less than or equal to the first preset time length;
if the difference between the first concurrency number and the second concurrency number is smaller than or equal to a preset numerical value, calculating to obtain a target concurrency number according to the first concurrency number and the second concurrency number, wherein the target concurrency number is used for representing the number of concurrent write requests in the aggregation queue within unit time of a first time length, and the first time length comprises the first preset time period and the second preset time period;
aggregating the data of the write requests in the aggregation queue according to the target concurrency number;
and writing the aggregated data into a distributed cache resource pool.
2. The data writing method of claim 1, further comprising:
receiving a first write request;
judging whether the size of the data in the first write request is smaller than the size of the stripe in the distributed cache resource pool, wherein the size of the stripe is preset;
if the size of the data in the first write request is smaller than the size of the stripe, writing the first write request into the aggregation queue;
and if the size of the data in the first write request is larger than or equal to the size of the stripe, writing the first write request into the distributed cache resource pool.
3. The method for writing data according to claim 1 or 2, wherein the determining the first concurrency number and the second concurrency number comprises:
determining the number of write requests and the number of non-concurrent write requests in the aggregation queue in the first preset time period, and determining the number of write requests and the number of non-concurrent write requests in the aggregation queue in the second preset time period;
calculating to obtain the first concurrent number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue within the first preset time period;
and calculating to obtain the second concurrency number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue in the second preset time period.
4. The method for writing data according to claim 3, wherein the determining the number of non-concurrent write requests in the aggregation queue in the first preset time period and the number of non-concurrent write requests in the aggregation queue in the second preset time period comprises:
judging whether the time difference between the enqueuing time of the ith write request and the enqueuing time of the (i-1) th write request is smaller than a second preset time length or not, wherein the enqueuing time refers to the time for entering the aggregation queue, i is greater than or equal to 1 and is less than or equal to m, and m represents the number of the write requests entering the aggregation queue within the first preset time;
if the time difference between the enqueuing time of the ith write request and the enqueuing time of the (i-1) th write request is greater than the second preset time, determining that the number of the non-concurrent write requests in the aggregation queue within the first preset time period is increased by one;
judging whether the time difference between the enqueue time of the jth write request and the enqueue time of the jth-1 write request is smaller than a second preset time length, wherein j is more than or equal to 1 and is less than or equal to n, and n represents the number of write requests entering the aggregation queue within the second preset time;
if the time difference between the enqueue time of the jth write request and the enqueue time of the jth-1 write request is greater than the second preset time, determining that the number of the non-concurrent write requests in the aggregation queue is increased by one within the second preset time period;
when i is 1, the number of non-concurrent write requests in the aggregation queue in the first preset time period is one; when j is 1, the number of non-concurrent write requests in the aggregation queue in the second preset time period is one.
5. A data writing apparatus, comprising:
a determining unit configured to determine a first concurrency number and a second concurrency number; the first concurrency number is the number of concurrent write requests in the aggregation queue within unit time of a first preset time period, and the second concurrency number is the number of concurrent write requests in the aggregation queue within unit time of a second preset time period; the starting time of the first preset time period and the starting time of the second preset time period are both before the current time; the time difference between the starting time of the first preset time period and the current time is less than or equal to a first preset time length; the time difference between the starting time of the second preset time period and the current time is less than or equal to the first preset time length;
a determining unit, configured to determine whether a difference between the first concurrent number and the second concurrent number determined by the determining unit is smaller than or equal to a preset value;
a calculating unit, configured to calculate a target concurrency number according to the first concurrency number and the second concurrency number if the determining unit determines that the difference between the first concurrency number and the second concurrency number is smaller than or equal to a preset value, where the target concurrency number is used to indicate the number of concurrent write requests in the aggregation queue within a unit time of a first time duration, and the first time duration includes the first preset time period and the second preset time period;
the aggregation unit is used for aggregating the data of the write requests in the aggregation queue according to the target concurrency number calculated by the calculation unit;
and the writing unit is used for writing the data aggregated by the aggregation unit into the distributed cache resource pool.
6. The data writing device of claim 5, further comprising a receiving unit;
the receiving unit is used for receiving a first write request;
the judging unit is further configured to judge whether the size of the data in the first write request received by the receiving unit is smaller than the size of a stripe in the distributed cache resource pool, where the size of the stripe is preset;
the writing unit is further configured to write the first write request into the aggregation queue if the determining unit determines that the size of the data in the first write request is smaller than the size of the stripe, and write the first write request into the distributed cache resource pool if the determining unit determines that the size of the data in the first write request is greater than or equal to the size of the stripe.
7. The data writing device according to claim 5 or 6, wherein the determining unit is specifically configured to:
determining the number of write requests and the number of non-concurrent write requests in the aggregation queue in the first preset time period, and determining the number of write requests and the number of non-concurrent write requests in the aggregation queue in the second preset time period;
calculating to obtain the first concurrent number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue within the first preset time period;
and calculating to obtain the second concurrency number according to the number of the write requests and the number of the non-concurrent write requests in the aggregation queue in the second preset time period.
8. The data writing device according to claim 7, wherein
the judging unit is further configured to judge whether a time difference between an enqueue time of an ith write request and an enqueue time of an (i-1)th write request is smaller than a second preset time, the enqueue time being the time at which a write request enters the aggregation queue, wherein i is greater than 1 and less than or equal to m, and m represents the number of write requests entering the aggregation queue within the first preset time period;
the determining unit is specifically configured to increment the number of non-concurrent write requests in the aggregation queue within the first preset time period by one if the judging unit determines that the time difference between the enqueue time of the ith write request and the enqueue time of the (i-1)th write request is greater than the second preset time;
the judging unit is further configured to judge whether a time difference between an enqueue time of a jth write request and an enqueue time of a (j-1)th write request is smaller than the second preset time, wherein j is greater than 1 and less than or equal to n, and n represents the number of write requests entering the aggregation queue within the second preset time period;
the determining unit is specifically configured to increment the number of non-concurrent write requests in the aggregation queue within the second preset time period by one if the judging unit determines that the time difference between the enqueue time of the jth write request and the enqueue time of the (j-1)th write request is greater than the second preset time;
wherein when i is 1, the number of non-concurrent write requests in the aggregation queue within the first preset time period is one; and when j is 1, the number of non-concurrent write requests in the aggregation queue within the second preset time period is one.
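The counting rule of claim 8 (the first request in a window counts as one non-concurrent request, and each later request whose enqueue-time gap from its predecessor exceeds the second preset time adds one more) can be sketched as follows; the function name and the use of plain floats as timestamps are assumptions for illustration:

```python
def count_non_concurrent(enqueue_times: list, gap_threshold: float) -> int:
    """Count non-concurrent write requests in one preset time period.
    enqueue_times: enqueue timestamps of the requests, in arrival order.
    gap_threshold: the 'second preset time' of claim 8."""
    if not enqueue_times:
        return 0
    count = 1  # the first request (i == 1) always counts as one
    for prev, cur in zip(enqueue_times, enqueue_times[1:]):
        if cur - prev > gap_threshold:
            # gap exceeds the second preset time: a new non-concurrent burst
            count += 1
    return count
```

Requests arriving closer together than the threshold are treated as one concurrent burst, so bursty workloads yield a low non-concurrent count and hence a high concurrency number.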
9. A storage node applied to a distributed storage system, the storage node comprising: a processor and a cache, wherein the cache is coupled to the processor and stores program code;
the processor invokes the program code in the cache to implement the data writing method according to any one of claims 1 to 4.
10. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the data writing method according to any one of claims 1 to 4.
CN201911031856.4A 2019-10-28 2019-10-28 Data writing method and device and storage node Active CN110928489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911031856.4A CN110928489B (en) 2019-10-28 2019-10-28 Data writing method and device and storage node


Publications (2)

Publication Number Publication Date
CN110928489A CN110928489A (en) 2020-03-27
CN110928489B CN110928489B (en) 2022-09-09

Family

ID=69849643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911031856.4A Active CN110928489B (en) 2019-10-28 2019-10-28 Data writing method and device and storage node

Country Status (1)

Country Link
CN (1) CN110928489B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11762559B2 (en) 2020-05-15 2023-09-19 International Business Machines Corporation Write sort management in a multiple storage controller data storage system
US11580022B2 (en) 2020-05-15 2023-02-14 International Business Machines Corporation Write sort management in a multiple storage controller data storage system
CN112905358A (en) * 2021-02-05 2021-06-04 中国工商银行股份有限公司 Software distribution method, device and system of distributed system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104679442A (en) * 2013-12-02 2015-06-03 中兴通讯股份有限公司 Method and device for improving performance of disk array
CN108427537A (en) * 2018-01-12 2018-08-21 上海凯翔信息科技有限公司 Distributed memory system and its file write-in optimization method, client process method
CN109032530A (en) * 2018-08-21 2018-12-18 成都华为技术有限公司 A kind of data flow processing method and equipment

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20090276654A1 (en) * 2008-05-02 2009-11-05 International Business Machines Corporation Systems and methods for implementing fault tolerant data processing services
US9436634B2 (en) * 2013-03-14 2016-09-06 Seagate Technology Llc Enhanced queue management
US9507740B2 (en) * 2014-06-10 2016-11-29 Oracle International Corporation Aggregation of interrupts using event queues
US9875024B2 (en) * 2014-11-25 2018-01-23 Sap Se Efficient block-level space allocation for multi-version concurrency control data

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN104679442A (en) * 2013-12-02 2015-06-03 中兴通讯股份有限公司 Method and device for improving performance of disk array
CN108427537A (en) * 2018-01-12 2018-08-21 上海凯翔信息科技有限公司 Distributed memory system and its file write-in optimization method, client process method
CN109032530A (en) * 2018-08-21 2018-12-18 成都华为技术有限公司 A kind of data flow processing method and equipment

Also Published As

Publication number Publication date
CN110928489A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
US11394625B2 (en) Service level agreement based storage access
CN110928489B (en) Data writing method and device and storage node
US8984085B2 (en) Apparatus and method for controlling distributed memory cluster
US8601213B2 (en) System, method, and computer-readable medium for spool cache management
US11050550B2 (en) Methods and systems for reading data based on plurality of blockchain networks
CN108874324B (en) Access request processing method, device, equipment and readable storage medium
EP3812994A1 (en) Data evidence preservation method and system based on multiple blockchain networks
CN112100293A (en) Data processing method, data access method, data processing device, data access device and computer equipment
CN106649145A (en) Self-adaptive cache strategy updating method and system
US11249987B2 (en) Data storage in blockchain-type ledger
EP3812998A1 (en) Data storage and attestation method and system based on multiple blockchain networks
CN111459948B (en) Transaction integrity verification method based on centralized block chain type account book
CN111737212A (en) Method and equipment for improving performance of distributed file system
WO2020244243A1 (en) Method and device for dividing a plurality of storage devices into device groups
CN110502187B (en) Snapshot rollback method and device
CN108205559B (en) Data management method and equipment thereof
KR101810180B1 (en) Method and apparatus for distributed processing of big data based on user equipment
CN107229424B (en) Data writing method for distributed storage system and distributed storage system
US20210382644A1 (en) Method and device for dividing storage devices into device groups
US11086849B2 (en) Methods and systems for reading data based on plurality of blockchain networks
CN110019372A (en) Data monitoring method, device, server and storage medium
CN110187987B (en) Method and apparatus for processing requests
CN115499513A (en) Data request processing method and device, computer equipment and storage medium
CN109992217B (en) Service quality control method and device, electronic equipment and storage medium
CN108628551B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant