CN113126917A - Request processing method, system, device and medium in distributed storage - Google Patents
Info
- Publication number
- CN113126917A (application number CN202110353993.0A)
- Authority
- CN
- China
- Prior art keywords
- queue
- processing
- request
- time sequence
- read
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Abstract
The invention discloses a request processing method in distributed storage, comprising the following steps: creating a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and determining a correspondence between each main queue and a placement group; in response to receiving an operation request for a storage object, determining the placement group corresponding to the storage object; according to the type of the operation request, placing the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group; and independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads. The invention also discloses a system, a computer device and a readable storage medium. By designing separate read and write queues, the scheme provided by the embodiments of the invention can process read and write requests in parallel and reduces the interference between read and write requests in a queue.
Description
Technical Field
The present invention relates to the field of distributed storage, and in particular, to a method, system, device, and storage medium for processing a request in distributed storage.
Background
Distributed storage is generally designed so that external performance grows quasi-linearly as nodes are added, especially under pure-read or pure-write workloads. Pure writes are generally handled asynchronously, sequential reads can benefit from read-ahead, and under such workloads the lack of read-write separation has little impact. In the HPC field, however, there are many irregular mixed read-write applications: random reads have a low cache hit probability and mostly need to be served from HDD disks, so if many write operations sit ahead of a read request, queue waiting causes high read latency. For long-running HPC jobs this results in low overall job efficiency.
Disclosure of Invention
In view of the above, in order to overcome at least one aspect of the above problems, an embodiment of the present invention provides a method for processing requests in distributed storage, comprising:
creating a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and determining a correspondence between each main queue and a placement group;
in response to receiving an operation request for a storage object, determining the placement group corresponding to the storage object;
according to the type of the operation request, placing the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group;
and independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads.
In some embodiments, determining the placement group corresponding to the storage object further comprises:
creating a time sequence table corresponding to the storage object;
generating a corresponding time sequence number for each operation request in order and appending it to the operation request;
and adding the time sequence number to the corresponding time sequence table in order.
In some embodiments, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
in response to an operation request being processed, judging, according to the time sequence number carried by the operation request, whether an earlier time sequence number exists in the corresponding time sequence table;
in response to one existing, registering a callback and suspending processing of the operation request;
and in response to the operation request corresponding to the earlier time sequence number completing, triggering the callback to continue processing the suspended operation request.
In some embodiments, continuing processing of the suspended operation request further comprises:
judging whether an idle thread exists for the queue in which the suspended operation request is located;
in response to none existing, raising the priority of the suspended operation request so that it is processed preferentially;
and in response to one existing, directly continuing to process the suspended operation request.
In some embodiments, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
creating a thread pool;
and allocating a plurality of threads in the thread pool to each main queue.
In some embodiments, determining a correspondence between each main queue and a placement group further comprises:
determining the correspondence between each main queue and a placement group according to a hash algorithm.
In some embodiments, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
in response to a plurality of operation requests in the read request processing queue or the write request processing queue corresponding to the same storage object, merging the plurality of operation requests.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a system for processing requests in distributed storage, comprising:
a creation module configured to create a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and to determine a correspondence between each main queue and a placement group;
a receiving module configured to, in response to receiving an operation request for a storage object, determine the placement group corresponding to the storage object;
a grouping module configured to place the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group according to the type of the operation request;
and a processing module configured to independently process the operation requests in the read request processing queue and the write request processing queue with a plurality of threads.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor executes the program to perform the steps of any one of the distributed storage request processing methods as described above.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any one of the request processing methods in distributed storage as described above.
The invention has the following beneficial technical effect: by designing separate read and write queues, the scheme provided by the embodiments of the invention can process read and write requests in parallel and reduces the interference between read and write requests in a queue.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a request processing method in distributed storage according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a request processing system in a distributed storage according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a computer device provided in an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention; this will not be restated in subsequent embodiments.
According to an aspect of the present invention, an embodiment of the present invention provides a method for processing requests in distributed storage, as shown in fig. 1, which may comprise the following steps:
S1, creating a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and determining a correspondence between each main queue and a placement group;
S2, in response to receiving an operation request for a storage object, determining the placement group corresponding to the storage object;
S3, placing the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group according to the type of the operation request;
and S4, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads.
By designing separate read and write queues, the scheme provided by the embodiment of the invention can process read and write requests in parallel and reduces the interference between read and write requests in a queue.
In some embodiments, in step S1, determining the correspondence between each main queue and a placement group further comprises:
determining the correspondence between each main queue and a placement group according to a hash algorithm.
Specifically, to manage data distribution, the system first creates a storage pool, and the storage pool is then divided into multiple PGs (Placement Groups, virtual logical units for data distribution), where each PG comprises multiple OSDs (Object Storage Devices). Each PG has a primary OSD: the front-end write service first sends data to the primary OSD, which performs the erasure-coding split calculation and sends the fragments to each member OSD of the PG. When a storage object on an OSD needs to be operated on (for example, read or written), the primary OSD receives the operation request, the object is mapped to an OSD vector according to a consistent hashing algorithm so that the PG corresponding to the storage object is determined, and finally the main queue corresponding to each PG is determined according to a hash algorithm. Different PGs thus belong to their respective main queues, and the PGs are isolated from each other at object granularity.
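The object to PG to main-queue mapping described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the hash choice, pool sizes, and function names are all assumptions.

```python
from hashlib import sha1

NUM_PGS = 128        # placement groups in the pool (assumed)
NUM_MAIN_QUEUES = 8  # main queues, each with a read and a write sub-queue (assumed)

def object_to_pg(object_name: str, num_pgs: int = NUM_PGS) -> int:
    """Map a storage object to a placement group with a stable hash."""
    digest = sha1(object_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_pgs

def pg_to_main_queue(pg_id: int, num_queues: int = NUM_MAIN_QUEUES) -> int:
    """Map a placement group to one of the main queues."""
    return pg_id % num_queues
```

Because both mappings are pure functions of their inputs, every request for the same object always lands in the same main queue, which is what keeps the PGs isolated from one another.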
In some embodiments, in step S2, determining the placement group corresponding to the storage object further comprises:
creating a time sequence table corresponding to the storage object;
generating a corresponding time sequence number for each operation request in order and appending it to the operation request;
and adding the time sequence number to the corresponding time sequence table in order.
Specifically, when read or write operations are performed on an object, some requests in the read request processing queue and some requests in the write request processing queue of each main queue have a time-order relationship. To ensure that the operation result of the upper-layer service's time-ordered IO does not change, that is, that the system's operation result before the optimization is consistent with its result after the optimization, requests are processed in the order in which the operations were issued, so a time sequence number must be generated for each operation request. For example, when an operation request for a storage object is delivered to the primary OSD, a time sequence number seq is generated on the primary OSD for the PG to which the storage object belongs and appended to the operation request, and the operation request then falls into a queue.
In some embodiments, the smaller the time sequence number, the earlier the request was issued, and each storage object corresponds to one time sequence table recording the time sequence numbers of the operation requests on that storage object. Since time sequence numbers are generated per PG, and a PG contains a plurality of storage objects, the time sequence numbers in a storage object's table may be discontinuous. When an operation request has been processed, the corresponding time sequence number is deleted from the table.
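The per-PG sequence numbering and per-object time sequence tables described above might be sketched as follows; the class and field names are hypothetical.

```python
import itertools
from collections import defaultdict

class PlacementGroup:
    """Issues monotonically increasing sequence numbers per PG and records
    them in a per-object time sequence table (sketch of the scheme above)."""

    def __init__(self):
        self._seq = itertools.count(1)
        # object name -> ordered list of pending sequence numbers
        self.timing_tables = defaultdict(list)

    def tag_request(self, object_name: str, op: str) -> dict:
        """Assign the next seq to a request and record it in the table."""
        seq = next(self._seq)
        self.timing_tables[object_name].append(seq)
        return {"object": object_name, "op": op, "seq": seq}

    def complete(self, request: dict) -> None:
        """Once processed, the request's seq is deleted from its table."""
        self.timing_tables[request["object"]].remove(request["seq"])
```

Note that because the counter is shared by all objects in the PG, one object's table naturally ends up with gaps (for example seq 1 and 3) whenever another object took a number in between, matching the discontinuity described above.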
In some embodiments, in step S4, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
in response to an operation request being processed, judging, according to the time sequence number carried by the operation request, whether an earlier time sequence number exists in the corresponding time sequence table;
in response to one existing, registering a callback and suspending processing of the operation request;
and in response to the operation request corresponding to the earlier time sequence number completing, triggering the callback to continue processing the suspended operation request.
Specifically, because the read request queue and the write request queue in a main queue process operation requests independently of each other, before a thread processes an operation request from the read request processing queue or the write request processing queue, it judges, using the time sequence table of the storage object to which the request belongs, whether an earlier operation request on that object is still pending. For example, when the read request queue processes an operation request with time sequence number seq131415, if seq131413 still exists in the time sequence table of the storage object, the operation request corresponding to seq131413 has not yet been processed; the request with seq131415 is therefore suspended and a callback is registered. When the operation request with seq131413 has been processed, the callback is triggered to continue processing the request with seq 131415.
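The suspend-and-callback check could look roughly like the following minimal sketch; the `OrderKeeper` class and its method names are assumptions for illustration, and a real implementation would need locking around the shared tables.

```python
class OrderKeeper:
    """Suspends a request if an earlier seq for the same object is still
    pending, and fires the registered callback when that seq completes."""

    def __init__(self, timing_tables):
        self.timing_tables = timing_tables  # object -> pending seq numbers
        self.callbacks = {}                 # (object, seq) -> resume callback

    def try_process(self, request, process, on_resume):
        obj, seq = request["object"], request["seq"]
        pending = self.timing_tables[obj]
        if any(s < seq for s in pending):
            # An earlier request is unfinished: register callback, suspend.
            self.callbacks[(obj, seq)] = on_resume
            return "suspended"
        process(request)
        pending.remove(seq)
        # Completion may unblock the next-smallest seq for this object.
        if pending:
            cb = self.callbacks.pop((obj, min(pending)), None)
            if cb:
                cb()
        return "processed"
```

In this sketch the read and write queues can share one `OrderKeeper` per PG, which is how cross-queue ordering on the same object is enforced even though the two queues run on independent threads.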
In some embodiments, continuing processing of the suspended operation request further comprises:
judging whether an idle thread exists for the queue in which the suspended operation request is located;
in response to none existing, raising the priority of the suspended operation request so that it is processed preferentially;
and in response to one existing, directly continuing to process the suspended operation request.
Specifically, after the suspended operation request is triggered to continue, the system judges whether an idle thread exists for the queue in which the suspended request is located. If so, the suspended request is processed directly; if not, the priority of the suspended request is raised so that it is processed first once a thread becomes idle.
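The resume logic (run immediately on an idle thread, otherwise boost priority) can be illustrated with this sketch. The `RequestQueue` stub and its fields are hypothetical stand-ins for the queue state a real scheduler would hold.

```python
from dataclasses import dataclass, field

@dataclass
class RequestQueue:
    idle_threads: int = 0
    waiting: list = field(default_factory=list)  # requests awaiting a thread
    ran: list = field(default_factory=list)      # requests run immediately

    def run_now(self, request):
        self.ran.append(request)

def resume_suspended(request: dict, queue: RequestQueue) -> str:
    """Continue a suspended request: run it at once if a thread is idle,
    otherwise raise its priority above everything already waiting."""
    if queue.idle_threads > 0:
        queue.run_now(request)
        return "ran"
    top = max((r.get("priority", 0) for r in queue.waiting), default=0)
    request["priority"] = top + 1  # processed first when a thread frees up
    queue.waiting.append(request)
    return "boosted"
```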
In some embodiments, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
creating a thread pool;
and allocating a plurality of threads in the thread pool to each main queue.
Specifically, a thread pool may be created and a plurality of threads allocated to each main queue, with the exact number tuned to the actual service requirements. In some embodiments, the threads allocated to the read request processing queue and the write request processing queue of each main queue may be divided into two types: one type processes only high-priority requests, that is, operation requests whose processing was previously suspended, while the other type processes requests normally.
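Allocating a dedicated slice of worker threads to each main queue might be sketched with Python's standard thread pools. The per-queue pool structure is a simplifying assumption, and the split between normal and high-priority workers is omitted.

```python
from concurrent.futures import ThreadPoolExecutor

def build_queue_pools(num_main_queues: int, threads_per_queue: int):
    """Create one worker pool per main queue so that requests in different
    main queues never contend for the same threads (sketch)."""
    return [ThreadPoolExecutor(max_workers=threads_per_queue)
            for _ in range(num_main_queues)]
```

Giving each main queue its own executor means a burst of writes in one PG's queue cannot starve reads in another, which is the isolation property the design is after.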
In some embodiments, independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads further comprises:
in response to a plurality of operation requests in the read request processing queue or the write request processing queue corresponding to the same storage object, merging the plurality of operation requests.
Specifically, when processing the operation requests in the read request processing queue or the write request processing queue, subsequent operations are actively inspected, and if they belong to the same object, they are actively merged.
It should be noted that operations can only be merged within the same queue, that is, within the same read request processing queue or the same write request processing queue, and only when there is no gap between them in the processing order, so that order-preserving processing is not affected.
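The merge rule above, restricted to consecutive entries of one queue so that ordering between objects is preserved, could be sketched as follows (the request representation is an assumption).

```python
def merge_same_object(queue: list) -> list:
    """Merge adjacent requests in one processing queue that target the same
    storage object; only consecutive entries merge, so the relative order
    of requests on different objects is untouched (sketch)."""
    merged = []
    for req in queue:
        if merged and merged[-1]["object"] == req["object"]:
            # Same object, no gap in processing order: fold into predecessor.
            merged[-1]["seqs"].extend(req["seqs"])
        else:
            merged.append({"object": req["object"], "seqs": list(req["seqs"])})
    return merged
```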
With the separate read-write queue design provided by the embodiment of the invention, as long as operations do not touch the same object, read and write requests can be processed independently and in parallel, reducing the interference between read and write requests in a queue while still preserving order and guaranteeing data consistency; that is, the operation result of the upper-layer service's time-ordered IO does not change, and the system's operation results before and after the optimization are consistent.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a request processing system 400 in distributed storage, as shown in fig. 2, comprising:
a creating module 401 configured to create a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and to determine a correspondence between each main queue and a placement group;
a receiving module 402 configured to, in response to receiving an operation request for a storage object, determine the placement group corresponding to the storage object;
a grouping module 403 configured to place the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group according to the type of the operation request;
and a processing module 404 configured to independently process the operation requests in the read request processing queue and the write request processing queue with a plurality of threads.
With the separate read-write queue design provided by the embodiment of the invention, as long as operations do not touch the same object, read and write requests can be processed independently and in parallel, reducing the interference between read and write requests in a queue while still preserving order and guaranteeing data consistency; that is, the operation result of the upper-layer service's time-ordered IO does not change, and the system's operation results before and after the optimization are consistent.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 3, an embodiment of the present invention further provides a computer apparatus 501, comprising:
at least one processor 520; and
a memory storing a computer program operable on the at least one processor, wherein the processor, when executing the program, performs the steps of any one of the request processing methods in distributed storage described above.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 4, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610, and the computer program instructions 610, when executed by a processor, perform the steps of the request processing method in any one of the above distributed storages.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples; within the idea of the embodiments of the invention, technical features in the above embodiments or in different embodiments may also be combined, and many other variations of different aspects of the embodiments exist as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.
Claims (10)
1. A method for processing requests in distributed storage, characterized by comprising the following steps:
creating a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and determining a correspondence between each main queue and a placement group;
in response to receiving an operation request for a storage object, determining the placement group corresponding to the storage object;
according to the type of the operation request, placing the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group;
and independently processing the operation requests in the read request processing queue and the write request processing queue with a plurality of threads.
2. The method of claim 1, wherein determining the placement group corresponding to the storage object further comprises:
creating a time sequence table corresponding to the storage object;
generating a corresponding time sequence number for each operation request in order and appending it to the operation request;
and adding the time sequence number to the corresponding time sequence table in order.
3. The method of claim 2, wherein independently processing the operation requests in the read request processing queue and the write request processing queue using a plurality of threads further comprises:
in response to processing an operation request, judging, according to the time sequence number carried by the operation request, whether an earlier time sequence number exists in the corresponding time sequence table;
in response to the earlier number existing, registering a callback and suspending processing of the operation request;
and in response to completion of processing of the operation request corresponding to the earlier time sequence number, triggering the callback to resume processing of the suspended operation request.
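The ordering mechanism of claims 2-3 can be sketched as a per-object sequence table. This is an assumed single-threaded model for illustration only: `SequenceTable`, `issue`, and `try_process` are invented names, and a real multi-threaded version would need locking around the table.

```python
class SequenceTable:
    """Per-object time sequence table: a request may run only when it holds
    the earliest outstanding sequence number for its object."""
    def __init__(self):
        self.pending = []    # outstanding sequence numbers, in issue order
        self.callbacks = {}  # seq -> callback registered while suspended

    def issue(self):
        # Generate the next time sequence number and record it in order.
        seq = (self.pending[-1] + 1) if self.pending else 0
        self.pending.append(seq)
        return seq

    def try_process(self, seq, work):
        if self.pending and self.pending[0] != seq:
            # An earlier sequence number is still unfinished: register a
            # callback and suspend processing of this request.
            self.callbacks[seq] = work
            return False
        work()
        self.pending.pop(0)
        # Completion triggers the callback of the next suspended request.
        if self.pending and self.pending[0] in self.callbacks:
            self.try_process(self.pending[0], self.callbacks.pop(self.pending[0]))
        return True

# Usage: the later request arrives first, suspends, and is resumed by the
# completion of the earlier one, so both run in sequence-number order.
order = []
table = SequenceTable()
s0, s1 = table.issue(), table.issue()
table.try_process(s1, lambda: order.append(s1))  # suspended: s0 still pending
table.try_process(s0, lambda: order.append(s0))  # runs, then triggers s1's callback
```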
4. The method of claim 3, wherein resuming processing of the suspended operation request further comprises:
judging whether an idle thread exists for the queue in which the suspended operation request is located;
in response to no idle thread existing, raising the priority of the suspended operation request so that it is processed preferentially;
and in response to an idle thread existing, directly continuing to process the suspended operation request.
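A minimal sketch of the resume decision in claim 4. Re-queuing at the front of a plain list is an assumed stand-in for "increasing the priority"; `resume_suspended` is an invented name and a real scheduler would likely use a priority queue.

```python
def resume_suspended(ready_queue, idle_threads, request):
    # With an idle thread available, the suspended request is processed
    # directly; otherwise it is placed at the head of its queue so that it
    # is picked up before any other waiting request.
    if idle_threads > 0:
        return "processed"
    ready_queue.insert(0, request)  # front of the queue = highest priority
    return "deferred"

# Usage: no idle thread, so the resumed request jumps the queue.
q = [{"id": 1}]
first = resume_suspended(q, 0, {"id": 2})
second = resume_suspended(q, 1, {"id": 3})
```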
5. The method of claim 1, wherein independently processing the operation requests in the read request processing queue and the write request processing queue using a plurality of threads further comprises:
creating a thread pool;
and allocating a plurality of threads in the thread pool to each main queue, respectively.
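One way to picture claim 5's thread-pool allocation, assuming a shared pool from which each main queue's read and write sub-queues get their own worker. `start_workers` and `drain` are invented helper names, and the dict-based main queue is an assumption of this sketch.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def drain(sub_queue, handle):
    # Worker body: process every request currently in one sub-queue.
    # Safe here because exactly one worker owns each sub-queue.
    done = 0
    while not sub_queue.empty():
        handle(sub_queue.get())
        done += 1
    return done

def start_workers(pool, main_queues, handle):
    # Allocate one pool thread per sub-queue, so each main queue's read and
    # write requests are drained independently of each other.
    futures = []
    for mq in main_queues:
        for sub in (mq["read"], mq["write"]):
            futures.append(pool.submit(drain, sub, handle))
    return futures

# Usage: two main queues, three queued requests, four pool threads.
mqs = [{"read": Queue(), "write": Queue()} for _ in range(2)]
mqs[0]["read"].put("r1")
mqs[1]["write"].put("w1")
mqs[1]["write"].put("w2")
with ThreadPoolExecutor(max_workers=4) as pool:
    futs = start_workers(pool, mqs, lambda req: None)
    total = sum(f.result() for f in futs)
```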
6. The method of claim 1, wherein determining the correspondence between each main queue and a placement group further comprises:
determining the correspondence between each main queue and the placement group according to a hash algorithm.
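The claim only requires "a hash algorithm"; as an illustration, a stable hash such as CRC32 keeps the mapping deterministic across processes. `queue_for_pg` is an invented name and CRC32 is an assumed choice.

```python
import zlib

def queue_for_pg(pg_id, num_queues):
    # Map a placement-group id to a main queue index. zlib.crc32 is stable
    # across runs (unlike Python's salted hash()), so every node computes
    # the same correspondence.
    return zlib.crc32(str(pg_id).encode("utf-8")) % num_queues
```

Determinism matters here: requests for one placement group must always land in the same main queue, or the per-queue ordering of claims 2-4 would break.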
7. The method of claim 1, wherein independently processing the operation requests in the read request processing queue and the write request processing queue using a plurality of threads further comprises:
in response to a plurality of operation requests in the read request processing queue or the write request processing queue corresponding to the same storage object, merging the plurality of operation requests.
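A minimal sketch of claim 7's merging step, assuming each queued request is a dict with an `object` key and an `op` payload (both invented for this example): requests targeting the same storage object are folded into one entry, preserving arrival order.

```python
def merge_by_object(requests):
    # Coalesce queued requests that target the same storage object, so one
    # dequeue handles all of them. Insertion order of first occurrence is
    # preserved (Python 3.7+ dicts keep insertion order).
    merged = {}
    for req in requests:
        key = req["object"]
        if key in merged:
            merged[key]["ops"].append(req["op"])  # fold into existing entry
        else:
            merged[key] = {"object": key, "ops": [req["op"]]}
    return list(merged.values())

# Usage: three requests over two objects collapse to two merged requests.
batch = merge_by_object([
    {"object": "a", "op": "w1"},
    {"object": "b", "op": "w2"},
    {"object": "a", "op": "w3"},
])
```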
8. A system for processing requests in distributed storage, comprising:
a creation module configured to create a plurality of main queues, each comprising a read request processing queue and a write request processing queue, and to determine a correspondence between each main queue and a placement group;
a receiving module configured to, in response to receiving an operation request for a storage object, determine the placement group corresponding to the storage object;
a grouping module configured to place the operation request into the read request processing queue or the write request processing queue of the main queue corresponding to that placement group according to the type of the operation request;
and a processing module configured to independently process the operation requests in the read request processing queue and the write request processing queue using a plurality of threads, respectively.
9. A computer device, comprising:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor, when executing the program, performs the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110353993.0A CN113126917A (en) | 2021-04-01 | 2021-04-01 | Request processing method, system, device and medium in distributed storage |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113126917A true CN113126917A (en) | 2021-07-16 |
Family
ID=76774516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110353993.0A Pending CN113126917A (en) | 2021-04-01 | 2021-04-01 | Request processing method, system, device and medium in distributed storage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113126917A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502935A (en) * | 2016-11-04 | 2017-03-15 | 郑州云海信息技术有限公司 | FPGA isomery acceleration systems, data transmission method and FPGA |
CN106681661A (en) * | 2016-12-23 | 2017-05-17 | 郑州云海信息技术有限公司 | Read-write scheduling method and device in solid state disk |
CN109426434A (en) * | 2017-08-23 | 2019-03-05 | 北京易华录信息技术股份有限公司 | A kind of data of optical disk reading/writing method |
CN110058816A (en) * | 2019-04-10 | 2019-07-26 | 中国人民解放军陆军工程大学 | DDR-based high-speed multi-user queue manager and method |
CN110058926A (en) * | 2018-01-18 | 2019-07-26 | 伊姆西Ip控股有限责任公司 | For handling the method, equipment and computer-readable medium of GPU task |
CN112256204A (en) * | 2020-10-28 | 2021-01-22 | 重庆紫光华山智安科技有限公司 | Storage resource allocation method and device, storage node and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110287044B (en) | Lock-free shared memory processing method and device, electronic equipment and readable storage medium | |
US9230002B2 (en) | High performant information sharing and replication for single-publisher and multiple-subscriber configuration | |
US10540246B2 (en) | Transfer track format information for tracks in cache at a first processor node to a second process node to which the first processor node is failing over | |
US9305392B2 (en) | Fine-grained parallel traversal for ray tracing | |
CN108009008A (en) | Data processing method and system, electronic equipment | |
US20190050340A1 (en) | Invalidating track format information for tracks in cache | |
US20190034355A1 (en) | Saving track metadata format information for tracks demoted from cache for use when the demoted track is later staged into cache | |
US11036641B2 (en) | Invalidating track format information for tracks demoted from cache | |
US11036635B2 (en) | Selecting resources to make available in local queues for processors to use | |
JP4667092B2 (en) | Information processing apparatus and data control method in information processing apparatus | |
KR102505036B1 (en) | Embedded reference counter and special data pattern auto-detect | |
WO2023040399A1 (en) | Service persistence method and apparatus | |
CN114924999B (en) | Cache management method, device, system, equipment and medium | |
CN111737212A (en) | Method and equipment for improving performance of distributed file system | |
CN112346647A (en) | Data storage method, device, equipment and medium | |
CN108733585B (en) | Cache system and related method | |
US6954825B2 (en) | Disk subsystem | |
CN104407990B (en) | A kind of disk access method and device | |
CN112039999A (en) | Method and system for accessing distributed block storage system in kernel mode | |
US10180901B2 (en) | Apparatus, system and method for managing space in a storage device | |
US10838949B2 (en) | Shared resource update apparatus and shared resource update method | |
US20190235915A1 (en) | Techniques for ordering atomic operations | |
CN113535087A (en) | Data processing method, server and storage system in data migration process | |
US11249914B2 (en) | System and methods of an efficient cache algorithm in a hierarchical storage system | |
CN113126917A (en) | Request processing method, system, device and medium in distributed storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210716 |